
503 Error when uploading #285

Open
chripede opened this issue Aug 23, 2021 · 6 comments
Comments

@chripede
I would normally contact Backblaze support for this, but every time it happens they can see nothing wrong and say that everything should work, so I am trying here.

Since Saturday I have been getting upload errors across all my buckets, both when using the CLI and when using this library directly:

Pausing thread for 1 seconds because that is what the default exponential backoff is

How can I debug what is wrong?

@ppolewicz
Collaborator

This sounds like a 429, which would indicate that the server is throttling you. Are you making too many requests?

@chripede
Author

We are actually getting 503 too_busy.

Can we really make too many requests? We usually upload millions of files every day without problems.

@ppolewicz
Collaborator

503 too busy (generally, not just in case of B2 storage) indicates that the server you are trying to upload the data to cannot serve the request because it's out of capacity. This code confirms it. I've used B2 for a few years and I've seen a 503 now and then, but usually those issues are temporary and you can resume uploading almost immediately.

The retry policy in b2sdk is not as good as it could be; we've known that for years but haven't figured out a good way to deal with it.

Can you say something more about the way you use B2? That could help us figure out a more appropriate retry policy.
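For context, the behavior hinted at by the "Pausing thread for 1 seconds" log line can be sketched as a generic exponential-backoff-with-jitter loop (an illustration of the technique, not b2sdk's actual retry code; `fn`, `retryable`, and the delay parameters are all assumptions):

```python
import random
import time

def call_with_backoff(fn, max_attempts=5, base_delay=1.0, max_delay=64.0,
                      retryable=(429, 503), sleep=time.sleep):
    """Call fn() until it returns a non-retryable status or attempts
    run out. fn returns (status, body); a retryable status triggers a
    randomized wait of up to base_delay * 2**attempt seconds."""
    for attempt in range(max_attempts):
        status, body = fn()
        if status not in retryable:
            return status, body
        if attempt == max_attempts - 1:
            return status, body  # retries exhausted, surface the error
        delay = min(max_delay, base_delay * (2 ** attempt))
        # Full jitter spreads retries out so many clients do not
        # hammer a busy server in lockstep.
        sleep(random.uniform(0, delay))
```

Note that when the server is out of capacity for days, no client-side backoff schedule can fix it; the loop only smooths over transient 503s.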

@chripede
Author

We upload to several buckets. However, one bucket is receiving ~6 million small files every day. When this error happens, all our buckets are affected; even after stopping all uploads for a day, we still get 503 when trying to upload just one file.

So I do not believe the problem is within this SDK; I believe the problem is with B2. I am, however, unable to get any help fixing it, so I was hoping to reach some Backblaze developers through here who could escalate the problem. This time we were unable to upload for 4 days.

@ppolewicz
Collaborator

This is very odd, but as you said, your usage of B2 is pretty extreme! I don't know what happened there, but note that you are the only client here reporting a problem, and B2 support says their servers are generally fine.

It almost looks as if there were a per-account limit on how many small files can safely be uploaded per day: if you sustain such a load for a long time, some re-indexing of the bucket (when the index size crosses some boundary) may fall behind, and you end up writing to a temporary index. For a normal user this would be fine; however, if that re-indexing doesn't finish before another re-indexing is required, you get a 503 (instead of the 429 that would normally be expected in such a case, but which may be impractical to implement for such a rare scenario). I'm purely speculating: my familiarity with the B2 infrastructure comes from its public documentation and from observing the server's behavior over the last few years of development. You may be the only client ever to have hit this issue since the inception of B2 storage; someone would have come here and complained if it had happened to them, but nobody ever did.

How small are the files you upload? B2 storage is generally optimized for larger files; it supports objects up to 10 TB in size. Would it be possible to change your usage pattern to reduce the number of uploads and the number of files? If you stacked a bunch of those small files together and then served individual files back via a web server, you could greatly reduce the number of file-name index inserts, which could reduce the load on the affected component and eliminate the problem you are hitting.
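The stacking idea above can be sketched as follows (a minimal illustration with hypothetical helpers, not part of b2sdk: many small files are concatenated into one uploadable blob, and an offset index lets a web server read any one of them back with a byte-range request):

```python
import io

def pack_files(files):
    """Concatenate many small files into one blob plus an offset index.
    `files` maps name -> bytes; returns (blob, index) where index maps
    name -> (offset, length)."""
    blob = io.BytesIO()
    index = {}
    for name, data in files.items():
        index[name] = (blob.tell(), len(data))
        blob.write(data)
    return blob.getvalue(), index

def read_packed(blob, index, name):
    """Fetch one packed file back via a byte-range slice of the bundle,
    the way an HTTP Range request would read it from object storage."""
    offset, length = index[name]
    return blob[offset:offset + length]
```

With ~150 KB files, packing a few thousand per bundle would turn millions of daily uploads (and index inserts) into a few thousand, at the cost of keeping the index somewhere queryable.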

@chripede
Author

I could understand the re-indexing theory if it were only that one bucket that was affected.

Our files are around 150 KB and we cannot combine them. Since this library retries 5? times, with no way that I can find of changing that, we have switched to using the Backblaze S3 gateway, as it fails early and lets us continue our flow.

If I hear anything from Backblaze support, I will update this report.

@ppolewicz ppolewicz changed the title Error when uploading 503 Error when uploading Sep 24, 2021