Currently BitbucketCloudApiClient retries rate-limited requests endlessly (as long as each retry again results in a rate-limit response). This is troublesome for a few reasons (and I'm sure there are more):
- If you have lots of events (many projects, repos, etc.) triggering requests that run into rate limiting, the client is unlikely to ever recover in a timely fashion (especially since there is no incremental back-off between retries yet either), and there is basically no visibility into this behavior: repos, branches, PRs, and changes will simply not be discovered and will not trigger builds.
- There is no way (not even manually) to abort these queued-up requests, so they just sit there even though they may already be outdated anyway.
- The current retry strategy (a fixed delay of 5 seconds) leads to an incredibly large number of retry requests in certain configuration scenarios (i.e. the same credentials used by many jobs for the same project/repos with many branches or a lot of activity) as soon as the rate-limit threshold is breached.
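To make the volume concrete, a back-of-the-envelope calculation under the current fixed 5-second delay (the numbers here are illustrative assumptions, not measurements): if 100 in-flight requests share one rate-limited set of credentials, they keep hammering the already exhausted quota at a steady rate.

```java
// Illustrative arithmetic only: sustained retry rate under a fixed retry delay.
public class RetryStorm {
    // With N rate-limited requests each retrying every `fixedDelaySeconds`,
    // the client emits N / fixedDelaySeconds retries per second.
    static double retriesPerSecond(int concurrentRequests, double fixedDelaySeconds) {
        return concurrentRequests / fixedDelaySeconds;
    }
}
```

So 100 stuck requests with a 5-second delay produce 20 retries per second, every one of which is itself answered with a rate-limit response.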
By comparison, BitbucketServerAPIClient actually stops retrying once a maximum total wait time (currently 30 minutes) has accumulated.
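A minimal sketch of what a bounded strategy could look like, combining the server client's total-wait budget with the incremental back-off the issue notes is missing. The class and method names, the doubling schedule, and the per-attempt cap are all assumptions for illustration, not the plugin's actual code:

```java
import java.time.Duration;

// Hypothetical sketch: exponential back-off capped by a total wait budget,
// mirroring the 30-minute limit BitbucketServerAPIClient already applies.
public class RateLimitRetry {
    static final Duration MAX_TOTAL_WAIT = Duration.ofMinutes(30); // assumed budget
    static final long BASE_DELAY_MS = 5_000;                       // today's fixed delay

    // Delay before attempt n (0-based): doubles each time, capped at 5 minutes.
    static long delayForAttempt(int attempt) {
        long delay = BASE_DELAY_MS << Math.min(attempt, 6); // bound the shift to avoid overflow
        return Math.min(delay, Duration.ofMinutes(5).toMillis());
    }

    // Another retry is allowed only while it still fits in the overall budget.
    static boolean shouldRetry(long accumulatedWaitMs, int attempt) {
        return accumulatedWaitMs + delayForAttempt(attempt) <= MAX_TOTAL_WAIT.toMillis();
    }
}
```

The point of the budget check is that a request eventually fails loudly instead of sitting in the queue forever, which also gives the surrounding scan logic a chance to surface the rate-limit problem in the logs.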
Note: Additionally, when the request happens to be one that should be cached, this can lead to a deadlock due to the use of a synchronized method in the Cache; that is a separate issue, though, and should not be tackled as part of this one.
Related resources for API request limits of bitbucket cloud:
Are you interested in contributing this feature?

Yes, the overall implementation wouldn't be that difficult; the main issue is to agree on a proper approach.
A max duration would already have been introduced with #414, but there was a comment that the behavior should be kept as is, without any further rationale (and that PR seems to be abandoned at this point anyway).
In general I see a few different approaches that might be viable (order does not reflect my preference):
- Align the current implementation for the Bitbucket Cloud client with the one for Bitbucket Server.
- Add a separate maxRetryCount or maxRetryDuration with sensible default values.
- Allow configurable behavior, for example by adding maxRetryCount or maxRetryDuration as configuration options per Bitbucket endpoint (similar to manageHooks).
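For the third option, a rough sketch of what per-endpoint settings might look like. The class name, defaults, and method signatures are all hypothetical; only the idea of mirroring manageHooks as an endpoint-level option comes from the issue:

```java
// Hypothetical per-endpoint retry settings, analogous to the existing
// manageHooks option. Names and defaults are assumptions, not the plugin's API.
public class BitbucketEndpointRetrySettings {
    private int maxRetryCount = 10;            // assumed default
    private int maxRetryDurationMinutes = 30;  // assumed default, matching the server client

    public int getMaxRetryCount() { return maxRetryCount; }

    public void setMaxRetryCount(int count) {
        this.maxRetryCount = Math.max(0, count); // 0 disables retrying entirely
    }

    public int getMaxRetryDurationMinutes() { return maxRetryDurationMinutes; }

    public void setMaxRetryDurationMinutes(int minutes) {
        this.maxRetryDurationMinutes = Math.max(0, minutes);
    }

    // A request gives up as soon as either limit is exceeded.
    public boolean giveUp(int attemptsSoFar, long accumulatedWaitMinutes) {
        return attemptsSoFar >= maxRetryCount
            || accumulatedWaitMinutes >= maxRetryDurationMinutes;
    }
}
```

Enforcing both limits means a short maxRetryCount protects bursty setups while maxRetryDuration still bounds the worst case when individual waits are long.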