--max-retries #914

Closed

eburcat opened this issue Aug 15, 2017 · 4 comments


eburcat commented Aug 15, 2017

Currently the retries are hard-coded to 5.
It would help if it were possible to control the number of retries, especially when we're throttling our traffic and trying to upload big files.
Being able to set the value anywhere in the range of 1 to 30 would be reasonable in some cases.
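
(For illustration only: a minimal sketch of the kind of configurable retry loop being requested, assuming a hypothetical `upload_once` callable and a `max_retries` knob; this is not s3cmd's actual code.)

```python
import time

def upload_with_retries(upload_once, max_retries=5):
    """Run upload_once(), retrying on failure up to max_retries times.

    max_retries is the knob this issue asks for: today it is effectively
    hard-coded to 5, but a value anywhere from 1 to 30 could make sense
    on a throttled link moving big files.
    """
    for attempt in range(1, max_retries + 1):
        try:
            return upload_once()
        except IOError:
            if attempt == max_retries:
                raise
            time.sleep(3 * attempt)  # linear backoff between attempts
```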


fviard commented Aug 19, 2017

I don't think many people would use this option, but it looks OK to me to add it to the config options.

fviard added this to the Future - easy fixes milestone Aug 19, 2017

lfdnas commented Mar 16, 2022

It would be great to have this feature for handling transfer failures!


sebastiandanconia commented May 8, 2022

A --max-retries option could be part of a solution to my problem.

I use an S3-compatible private cloud that I access over a less-than-reliable connection, one that is extremely slow relative to the size of the data I'm sending (> 1 TiB at 1.25 MB/s, which takes weeks or more). Meanwhile, the server's network connection goes down almost weekly.

Currently, s3cmd increases the timeout duration linearly with the retry number. My preferred backoff intervals would be a few seconds at first, growing to a maximum of 10-20 minutes or so between retries, and finally giving up after 3 days or so. The --max-retries option is reasonable, but I'd also advocate tuning the backoff/retry timeout function so it supports a really long retry window before s3cmd gives up.
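
(A rough sketch of the backoff shape described above: capped exponential growth with an overall deadline. The parameter values mirror this comment's numbers, and everything here is illustrative, not s3cmd's implementation.)

```python
import time

def backoff_delays(initial=5.0, factor=2.0, cap=15 * 60, deadline=3 * 24 * 3600):
    """Yield retry delays in seconds: a few seconds at first, doubling up
    to `cap` (15 minutes here), and stopping once the total wait would
    pass `deadline` (3 days here)."""
    delay, waited = initial, 0.0
    while waited + delay <= deadline:
        yield delay
        waited += delay
        delay = min(delay * factor, cap)

def run_with_backoff(operation):
    """Retry `operation` on IOError until it succeeds or the schedule ends."""
    for delay in backoff_delays():
        try:
            return operation()
        except IOError:
            time.sleep(delay)
    return operation()  # last attempt; let any failure propagate
```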

I'm willing to try coding up a solution once I have clarity on what kind of Pull Request you'd accept. I was thinking of adding a concept of "retry profiles" in the config file: retry_profile = normal, retry_profile = aggressive, etc., and tuning the client's behavior accordingly.
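
(A sketch of what such profiles might encode; the profile names come from the suggestion above, while the parameters and structure are hypothetical.)

```python
# Hypothetical mapping from a retry_profile config value to backoff
# parameters (seconds): first delay, per-retry growth cap, overall deadline.
RETRY_PROFILES = {
    "normal":     {"initial": 3.0, "cap": 60,      "deadline": 30 * 60},
    "aggressive": {"initial": 5.0, "cap": 20 * 60, "deadline": 3 * 24 * 3600},
}
```

A line like `retry_profile = aggressive` in the config file could then select the second set of parameters.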

fviard closed this as completed in 3d69fce Oct 5, 2022
fviard modified the milestones: Future - easy fixes, 2.4.0 Oct 5, 2022

fviard commented Oct 5, 2022

A "--max-retries" option was added in MASTER.
(But still anything like retry_profile for the moment).
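
(Example usage, assuming the new flag takes a numeric count: something like `s3cmd --max-retries 10 put bigfile s3://bucket/path/`. The exact invocation is a guess here, not taken from the documentation.)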
