
Cover #3


Closed

Conversation

dbelaventsev-reef

Closes Backblaze/B2_Command_Line_Tool#554

Description

The original intent of these changes was to fix the bug mentioned above. When using the B2 CLI, users were hitting an error in the B2 SDK, e.g. `b2.exception.TruncatedOutput: only 40719702112 of 79957501184 bytes read`. As it was discovered, the most probable cause was the failed download of some chunks: a big file is split into many chunks, and if some of them cannot be fetched, we end up with less data than required. The original issue was reported against a B2 SDK version whose retry count was 1 (meaning no retries).
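To illustrate the failure mode, here is a minimal sketch of per-chunk retrying. This is not the SDK's actual code; `download_fn`, the backoff scheme, and the exception class are all simplified stand-ins for illustration.

```python
import time


class TruncatedOutput(Exception):
    """Simplified stand-in for b2.exception.TruncatedOutput."""


def download_chunk_with_retries(download_fn, offset, length,
                                max_retries=5, backoff=1.0):
    """Fetch one chunk, retrying up to max_retries times.

    `download_fn` is a hypothetical callable returning the chunk's bytes.
    With max_retries=1 (the old SDK default) any single failure is fatal,
    which is how the truncated output in the reported issue arose.
    """
    for attempt in range(1, max_retries + 1):
        try:
            data = download_fn(offset, length)
            if len(data) != length:
                raise TruncatedOutput(
                    'only %d of %d bytes read' % (len(data), length))
            return data
        except Exception:
            if attempt == max_retries:
                raise  # retries exhausted; propagate the failure
            time.sleep(backoff * attempt)  # simple linear backoff
```

With 5 retries, a chunk that fails a few times in a row still completes, and the assembled file is no longer short of bytes.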

As a first step, the latest state of master in the original repo was merged into the working branch of my fork (https://github.com/dbelaventsev-reef/b2-sdk-python), and the rest of the changes were added on top of that.

Tests

The automated test suite is passing. Additional tests were added: downloads with retries and downloads with failing retries. The failing-retries test case fails 4 times in a row (with 5 retry attempts configured in the code). The raw simulator was extended to support this.
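A minimal sketch of how such a simulator extension could look (the class name and interface here are hypothetical, not the actual raw-simulator API): it fails the first `fail_count` requests and then serves the chunk normally, so a 5-retry downloader survives 4 consecutive failures.

```python
class FlakyChunkSimulator:
    """Hypothetical simulator: fail the first `fail_count` requests
    for a chunk, then serve the data normally."""

    def __init__(self, data, fail_count):
        self.data = data
        self.fail_count = fail_count
        self.attempts = 0

    def get_chunk(self, offset, length):
        self.attempts += 1
        if self.attempts <= self.fail_count:
            raise ConnectionError('simulated mid-stream failure')
        return self.data[offset:offset + length]
```

With `fail_count=4` and 5 retry attempts the download succeeds on the last attempt; with `fail_count=5` the retries are exhausted and the failure propagates, which is exactly what the failing-retries test case exercises.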

Fix

This issue doesn't require any changes in the code itself, since the latest B2 SDK was extended to allow up to 5 retries, which is more than enough to handle rare failures. The test cases were extended to fail 4 times in a row (with 5 retry attempts in the code), so we should be fully covered.

Possible Improvement

The only good candidate for extending this changeset would be to stop the download of a big file (in the parallel downloader) as soon as the first chunk fails permanently. That would save credits for the user. Let me explain.

If a big file is split into 100 parts and we can't get 10 of them, the user ends up with a corrupted file at the end AND loses the credits charged for downloading that "useless set of bytes".

So after the first permanent failure (i.e. after 5 retries on a chunk), we could simply abandon the download altogether.
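A rough sketch of that early-abort idea, using `concurrent.futures` rather than the SDK's actual parallel downloader (the function and `fetch_chunk` callable are assumptions for illustration): the first permanently failed chunk cancels all not-yet-started chunks instead of letting them burn bandwidth and credits.

```python
from concurrent.futures import ThreadPoolExecutor, FIRST_EXCEPTION, wait


def download_parallel_abort_early(chunks, fetch_chunk, max_workers=4):
    """Download all chunks in parallel, but abandon the whole transfer
    as soon as one chunk fails permanently (fetch_chunk is assumed to
    do its own per-chunk retries before raising)."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(fetch_chunk, c): c for c in chunks}
        # Return as soon as any chunk raises, or when all are done.
        done, not_done = wait(futures, return_when=FIRST_EXCEPTION)
        failed = [f for f in done if f.exception() is not None]
        if failed:
            for f in not_done:
                f.cancel()  # skip chunks that have not started yet
            raise failed[0].exception()
        # All chunks succeeded; return results in original chunk order.
        return [f.result() for f in sorted(done, key=lambda f: futures[f])]
```

Chunks already running when the failure occurs still finish (threads can't be interrupted mid-request), but everything queued behind them is dropped, which is where the credit savings come from on 100-part files.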

ppolewicz and others added 21 commits April 4, 2019 18:01
Minor tweaks to README.release.md
Python 2.6 has been EOL since 29-10-2013
…_empty

Fix transferer crashing on empty file download attempt
@pawelpolewicz

This doesn't actually fix the issue; I fixed it for real a few days later.

Development

Successfully merging this pull request may close these issues.

TruncatedOutput: only 1495314753 of 2121389892 bytes read
5 participants