
TS client: Limit number of parallel part uploads #143

Open
zachmullen opened this issue Dec 5, 2020 · 2 comments

@zachmullen (Contributor)

We've run into problems when we perform an unbounded number of concurrent XHRs that upload data. We should adopt a solution similar to what GWC did that allows for a pool of configurable size for sending parts in parallel. Ideally the pool size could be configured via the client class constructor.
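A configurable pool like the one described could be sketched as below. This is only an illustration: `uploadPart` is a stand-in for the real per-part XHR, and the function name and signature are assumptions, not this library's actual API.

```typescript
// Upload all parts using at most `poolSize` concurrent requests.
// `uploadPart` is a hypothetical stand-in for the real XHR-based part upload.
async function uploadPartsInPool<T, R>(
  parts: T[],
  uploadPart: (part: T) => Promise<R>,
  poolSize: number,
): Promise<R[]> {
  const results: R[] = new Array(parts.length);
  let next = 0;

  // Each worker repeatedly claims the next unclaimed part index until
  // none remain. Claiming (`next++`) happens synchronously between awaits,
  // so two workers can never claim the same part.
  async function worker(): Promise<void> {
    while (next < parts.length) {
      const i = next++;
      results[i] = await uploadPart(parts[i]);
    }
  }

  const workers = Array.from(
    { length: Math.min(poolSize, parts.length) },
    () => worker(),
  );
  await Promise.all(workers);
  return results;
}
```

The pool size would then be a constructor option on the client class, defaulting to some small number.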

@brianhelba (Contributor)

Following some discussion with @danlamanna, I'm mostly convinced that there's no benefit to having any parallelization of part uploads.

For small HTTP requests (script files, API endpoints, etc.), fetching multiple resources in parallel shortens the total turnaround time, since the limiting factor is the latency of each request-response cycle.

In this case, the bottleneck ought to be in the raw bandwidth between an end user and S3. Sending multiple parts in parallel should just result in more contention for the finite total bandwidth. For any significant-sized upload, latency involved in the initial TCP handshake, CORS preflight, etc. should be de minimis compared to the total transfer time.

Perhaps it'd be better to resolve this issue by simply uploading all of the parts strictly in series.
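The strictly serial alternative is even simpler; a minimal sketch, again with `uploadPart` as a hypothetical stand-in for the real per-part request:

```typescript
// Upload parts one at a time, in order. Awaiting each part before starting
// the next keeps exactly one request in flight, so parts never contend
// with each other for the finite upload bandwidth to S3.
async function uploadPartsSerially<T, R>(
  parts: T[],
  uploadPart: (part: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = [];
  for (const part of parts) {
    results.push(await uploadPart(part));
  }
  return results;
}
```

Serial execution also makes progress reporting and retry logic simpler, since there is only ever one outstanding request to track.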

What do you think? @danlamanna @zachmullen @dchiquit

@zachmullen (Contributor, Author)

I'm fine with this decision. My use case is complicated by the fact that I am often uploading multiple files at once. In that case I can parallelize at the file level, and knowing that this library will not parallelize internally makes my decision as a downstream consumer easier.
