We're limiting the bandwidth of our uploads to 500 KB/s when uploading files of about 1 GB.
For some reason, even though we're inside AWS's network, we need several retries.
The current throttling behavior is that on each retry the bandwidth drops further, which increases the chance that we'll need yet another retry.
So once we start retrying, the chance that our upload will succeed becomes slim.
By patching our copy of S3.py not to reduce the bandwidth with each retry, we've considerably increased the chance of a successful upload.
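To put numbers on it, here is a rough Python sketch; this is not s3cmd's actual S3.py code, and the halving factor is only an assumption used to show how a cap that shrinks on every retry compounds against a 1 GB upload:

```python
FILE_KB = 1024 * 1024  # 1 GB expressed in KB

def attempts_with_shrinking_cap(limit_kbps, attempts):
    """Hypothetical policy: the bandwidth cap is cut in half on every retry."""
    for attempt in range(attempts):
        cap = limit_kbps / (2 ** attempt)
        hours = FILE_KB / cap / 3600
        print(f"attempt {attempt + 1}: {cap:6.1f} KB/s -> ~{hours:.1f} h for 1 GB")

def attempts_with_constant_cap(limit_kbps, attempts):
    """Policy after our patch: every retry keeps the configured cap."""
    for attempt in range(attempts):
        hours = FILE_KB / limit_kbps / 3600
        print(f"attempt {attempt + 1}: {limit_kbps:6.1f} KB/s -> ~{hours:.1f} h for 1 GB")

if __name__ == "__main__":
    attempts_with_shrinking_cap(limit_kbps=500, attempts=4)
    print()
    attempts_with_constant_cap(limit_kbps=500, attempts=4)
```

The longer each attempt takes, the more likely it is to hit another timeout, which is why the chances of success shrink with every retry.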
Thank you for your investigation.
Looking at the code, I can indeed imagine that this could trigger some server timeouts when the connection is very fast, the file is big, and the limit is low.
Did you change the default config values for:
send_chunk = 64 * 1024
recv_chunk = 64 * 1024
Can you show me the retry error that indicates the reason for the failure?
And, if possible, a run with the '-d' flag to see what is happening around the error (such as an error response from the server) would be interesting.
FYI, after diving into the code today, I noticed something strange.
Then I remembered this mysterious issue, and things became clear.
The throttling algorithm in s3cmd is not right:
Each time a transfer goes too fast, a sleep is issued to slow down the transfer.
How much to sleep is calculated based on the difference between the expected duration and the real duration.
The bad thing is that once such a throttle delay is set, it is never cleared; it can only grow.
So if you get a sudden spike of speed, all of your following chunks will still be slowed down, even if the network is no longer fast.
And indeed, as you noticed, it was even worse in the case of a retry.
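To make the difference concrete, here is a minimal Python sketch of the pattern described above; it is not s3cmd's actual implementation, and the function and parameter names are made up for illustration:

```python
import time

def send_throttled_ratcheting(chunks, send, chunk_size, limit_bps):
    """Throttle that is never cleared: one fast chunk penalizes all later chunks."""
    expected = chunk_size / float(limit_bps)   # how long a chunk should take at the cap
    throttle = 0.0                             # sleep applied after every chunk
    for chunk in chunks:
        start = time.time()
        send(chunk)
        real = time.time() - start
        if real < expected:
            # Ratchet: the delay can only grow and is never reset.
            throttle = max(throttle, expected - real)
        if throttle:
            time.sleep(throttle)

def send_throttled_per_chunk(chunks, send, chunk_size, limit_bps):
    """Throttle recomputed for each chunk: a past speed spike has no lasting effect."""
    expected = chunk_size / float(limit_bps)
    for chunk in chunks:
        start = time.time()
        send(chunk)
        real = time.time() - start
        if real < expected:
            time.sleep(expected - real)
```

With the ratcheting variant, a single chunk that happens to complete quickly fixes a large sleep for the rest of the transfer, so the effective rate can only go down; combined with a retry that also lowers the configured limit, the upload gets slower and slower.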
fviard added a commit to fviard/s3cmd that referenced this issue on Mar 2, 2018
@eburcat It took a very long time, but I think that your issue should be fixed in the latest master.
If you ever have the possibility to give it a try, that would be highly appreciated.