Probably because of the switch, the trigger for bumping timeouts now fires far more often than it should. Originally, the intent was to set the start of the timeout at the latest build start, in order to account for large runbot queues: a staging can be created, and then, if the runbot is under heavy load, the build is only picked up half an hour later, which should not count against the mergebot's timeout.
So the intent was that every time we receive a "pending" status (which generally marks the start of a runbot build), we bump the timeout, to account for the possibility of ci/runbot or ci/l10n getting a late start.
However, with the statuses refactoring, the timeout bump now occurs any time `_compute_state` results in a `pending` state; this means not just new pending statuses, but also every success status except for the very last one.
This is obviously not what was intended.
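A minimal sketch of the broken behaviour (all names here are illustrative, not the actual mergebot code): because the staging's combined state recomputes to `pending` as long as *any* required status is still missing, every early success also produces a "pending" result and wrongly bumps the timeout.

```python
# Illustrative sketch of the buggy bump logic; `REQUIRED`, `compute_state`
# and `buggy_bump` are hypothetical names, not the real mergebot code.

REQUIRED = ["ci/runbot", "ci/l10n"]

def compute_state(statuses):
    """Combine required statuses: failure > pending > success."""
    states = [statuses.get(ci, "pending") for ci in REQUIRED]
    if "failure" in states:
        return "failure"
    if "pending" in states:
        return "pending"
    return "success"

def buggy_bump(statuses, timeout, now, delta=60):
    # Bug: bump whenever the *combined* state is pending, which also
    # happens after every success status except the last one.
    if compute_state(statuses) == "pending":
        return now + delta
    return timeout

timeout = buggy_bump({}, 0, now=100)  # initial pending: bumped to 160
# a success arrives, but one status is still missing, so the combined
# state is still "pending" and the timeout is bumped again (to 260):
timeout = buggy_bump({"ci/runbot": "success"}, timeout, now=200)
```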
By updating the staging timeout every time we run `_compute_state` and
still have a `pending` status, we'd actually bump the timeout *on
every success CI* except for the last one, which was never the
intention and can add an hour or two to the mergebot-side timeout.
Instead, add an `updated_at` attribute to statuses (name taken from
the webhook payload even though we don't use that), read it when we
see `pending` required statuses, and update the staging timeout based
on that if necessary.
That way as long as we don't get *new* pending statuses, the timeout
doesn't get updated.
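The fix described above could be sketched as follows (a hypothetical illustration; `REQUIRED` and `bump_if_new_pending` are made-up names, not the actual mergebot code). Each status carries an `updated_at` timestamp, and the timeout is bumped only when a pending required status is *newer* than the one last seen:

```python
REQUIRED = ["ci/runbot", "ci/l10n"]

def bump_if_new_pending(statuses, last_seen, timeout, delta=60):
    """statuses maps ci -> (state, updated_at); last_seen maps
    ci -> updated_at of the last pending status we acted on.
    Returns the (possibly bumped) timeout and the updated last_seen."""
    for ci in REQUIRED:
        state, updated_at = statuses.get(ci, ("pending", None))
        if state != "pending" or updated_at is None:
            continue
        if last_seen.get(ci) is None or updated_at > last_seen[ci]:
            # A genuinely new pending status: a build just (re)started,
            # so restart the countdown from its start time.
            timeout = max(timeout, updated_at + delta)
            last_seen = {**last_seen, ci: updated_at}
    return timeout, last_seen

# A new pending status bumps the timeout (here to 160)...
t, seen = bump_if_new_pending({"ci/runbot": ("pending", 100)}, {}, 0)
# ...but a later recompute where the same pending is merely re-seen
# (e.g. triggered by a success on another CI) does not bump it again.
t, seen = bump_if_new_pending(
    {"ci/runbot": ("pending", 100), "ci/l10n": ("success", 150)}, seen, t)
```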
Fixes #952