It would be great if the notification webhooks received two additional improvements:
Acknowledge the webhook via a 2xx return code.
This could be implemented as a `notifier.acknowledge` option (`true` or `false`) in `workspace.yml` that controls whether webhooks remain fire-and-forget or wait for a 2xx response.
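A minimal sketch of how this could look in `workspace.yml` (the `acknowledge` option is only the proposal from this issue and does not exist yet; `webhookUrl` is the existing notifier setting):

```yaml
# workspace.yml — sketch only
notifier:
  webhookUrl: 'https://example.com/hooks/moon'
  # Proposed option (does not exist yet):
  #   true  -> wait for a 2xx response and treat anything else as a delivery failure
  #   false -> keep the current fire-and-forget behaviour (default)
  acknowledge: true
```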
Define a global trace ID via an environment variable.
Currently, every pipeline run calculates its own UUID. When jobs are split across different machines via `moon ci --job --jobTotal`, it would be great if the user could compute their own "CI ID" before executing the moon commands. That way the data can be analysed across machines.
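As an illustration only: assuming GitHub Actions as the CI provider and `MOON_TRACE_ID` as a hypothetical variable name (the issue does not prescribe one), the pipeline could compute a single ID once and expose it to every split job, so all webhook events share the same trace ID:

```yaml
# .github/workflows/ci.yml — sketch only; the variable name and CI provider are assumptions
on: push

jobs:
  ci:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        job: [0, 1]          # two machines, each running a slice of the pipeline
    env:
      # One ID for the whole workflow run, shared by every machine
      MOON_TRACE_ID: ${{ github.run_id }}
    steps:
      - uses: actions/checkout@v4
      # ...install moon with your usual setup step...
      - run: moon ci --job ${{ matrix.job }} --jobTotal 2
```

moon would then read `MOON_TRACE_ID` (or whatever variable is agreed on) instead of generating a fresh UUID per run, and every webhook payload from both jobs would carry the same ID.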
@milesj: If you agree that this feature makes sense, I can provide the merge request.