Improvement: Add Backoff for Rate Limits #53

Open
s4ke opened this issue Feb 27, 2024 · 9 comments

s4ke commented Feb 27, 2024

It can currently happen that, for whatever reason, requests to the Hetzner Cloud API spike and we run into 429 errors.

Once the rate limit is hit, we keep hammering the API and never get out of the rate-limited state.

s4ke changed the title from "Backoff for Rate Limits" to "Bug: No Backoff for Rate Limits" on Feb 27, 2024

s4ke commented Feb 27, 2024

To me it is still a bit unclear whether we want to just communicate the error back to the Docker engine directly, or whether we want to wrap the calling code in https://github.com/cenkalti/backoff to do the backoff automatically.
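For illustration, a minimal sketch of the cenkalti/backoff variant (assuming hcloud-go v2 import paths; the wrapped call and the retry cap of 5 are placeholders, not the plugin's actual code):

```go
package driver

import (
	"github.com/cenkalti/backoff/v4"
	"github.com/hetznercloud/hcloud-go/v2/hcloud"
)

// retryOnRateLimit wraps a call (e.g. a hypothetical volume-create helper)
// so that 429s are retried with exponential backoff, while all other
// errors fail fast.
func retryOnRateLimit(call func() error) error {
	op := func() error {
		err := call()
		if hcloud.IsError(err, hcloud.ErrorCodeRateLimitExceeded) {
			return err // transient: retry with backoff
		}
		if err != nil {
			return backoff.Permanent(err) // non-429: give up immediately
		}
		return nil
	}
	// Cap the retries so we don't loop forever on a persistent rate limit.
	return backoff.Retry(op, backoff.WithMaxRetries(backoff.NewExponentialBackOff(), 5))
}
```

With backoff.Permanent, only rate-limit errors are retried; everything else still propagates to the Docker engine immediately.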

s4ke changed the title from "Bug: No Backoff for Rate Limits" to "Improvement: Add Backoff for Rate Limits" on Feb 27, 2024

s4ke commented Feb 27, 2024

Prototype here: master...neuroforgede:docker-volume-hetzner:master

Currently evaluating.

s4ke commented Mar 17, 2024

@costela What are your thoughts on this?

s4ke commented Apr 22, 2024

@costela ping

costela commented Apr 23, 2024

sorry, I'm a bit swamped, as always 😓

I understand the motivation, but I'm not sure about the particular solution. Why not something a bit more general like WithBackoffFunc?
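For context, this is roughly what wiring up WithBackoffFunc on the hcloud-go client looks like; the multiplier and base delay below are illustrative values, not defaults from the plugin:

```go
package driver

import (
	"os"
	"time"

	"github.com/hetznercloud/hcloud-go/v2/hcloud"
)

func newClient() *hcloud.Client {
	return hcloud.NewClient(
		hcloud.WithToken(os.Getenv("HCLOUD_TOKEN")),
		// hcloud-go retries rate-limited requests itself and asks the
		// BackoffFunc how long to sleep between attempts;
		// ExponentialBackoff ships with the library.
		hcloud.WithBackoffFunc(hcloud.ExponentialBackoff(2, 500*time.Millisecond)),
	)
}
```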

s4ke commented Apr 23, 2024

No worries.

If I read the docs for WithBackoffFunc correctly, it is used for automatic retrying of requests. We don't want automatic retries though, right? I'd rather have the volume creation fail.
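To make the fail-fast alternative concrete, a sketch assuming the plugin's go-plugins-helpers based Create handler, and assuming the 429 actually surfaces to the caller (i.e. the client is not retrying internally); the struct, the fixed size, and the error wording are illustrative:

```go
package driver

import (
	"context"
	"fmt"

	"github.com/docker/go-plugins-helpers/volume"
	"github.com/hetznercloud/hcloud-go/v2/hcloud"
)

type Driver struct {
	client *hcloud.Client
}

// Create surfaces a 429 to the Docker engine instead of retrying, so the
// caller decides whether and when to try again.
func (d *Driver) Create(req *volume.CreateRequest) error {
	_, _, err := d.client.Volume.Create(context.Background(), hcloud.VolumeCreateOpts{
		Name: req.Name,
		Size: 10, // illustrative size in GB
	})
	if hcloud.IsError(err, hcloud.ErrorCodeRateLimitExceeded) {
		return fmt.Errorf("hetzner cloud API rate limit exceeded, not retrying: %w", err)
	}
	return err
}
```

Docker would then report the error to whoever ran the volume creation, matching the "let the volume creation fail" behaviour described above.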

costela commented Apr 24, 2024

ah, you mean the fact that the retries are endless? Yes, that's a bit of a problem. But I'd suggest making that improvement on the hcloud-go side, i.e. adding a way to stop retrying.

That feels like the better place for this?

s4ke commented Sep 19, 2024

Hmm, this looks new: hetznercloud/hcloud-go#470. But depending on the call patterns, this might not be enough.

s4ke commented Sep 19, 2024
