
filetransfer: don't allow the queue size to be 0 #327

Merged
cole-h merged 2 commits into main from
sep-cole/nix-268-3152s-limit-the-number-of-active-curl-handles-breaks-http
Jan 28, 2026

Conversation

@cole-h
Member

@cole-h cole-h commented Jan 27, 2026

Motivation

This inadvertently caused file transfers to hang when used in conjunction with `http-connections = 0`: the worker checks whether the in-progress items plus the items we've added to the incoming queue exceed the max queue size, which was previously defined to be 5x the value of `http-connections`. Well, as you can probably tell, 5 * 0 = 0, and unless the `.size()` of any of these containers became negative, we would always wait another 100ms instead of processing the incoming queue.

Because `http-connections = 0` is a special value that roughly signals "as many as possible" (the exact behavior mostly depends on how cURL handles it, but this is approximately right), we default to `std::thread::hardware_concurrency()` (as is done in a couple of other places; specifically, this is what the `ThreadPool` constructor does for a value of `0`).

There may be a better fix, but this works for now, and is approximately right.

Context

Summary by CodeRabbit

  • Bug Fixes
    • Improved file transfer queue size calculation with safer defaults for systems without explicit configuration, reducing the risk of potential hangs during file transfer operations.


A queue size of 0 will cause hanging in the file transfer process.
@coderabbitai

coderabbitai bot commented Jan 27, 2026

📝 Walkthrough

Walkthrough

The pull request modifies the maxQueueSize calculation in the curlFileTransfer class to implement a fallback mechanism. When httpConnections is not explicitly configured, the queue size now defaults to either 1 or the system's hardware concurrency count, multiplied by 5. An assertion is added to ensure the calculated queue size never becomes zero.

Changes

  • File transfer queue initialization — src/libstore/filetransfer.cc: changed the maxQueueSize field initialization from a direct multiplication to conditional fallback logic: use the configured httpConnections if set, otherwise fall back to std::max(1U, std::thread::hardware_concurrency()), then multiply by 5. Added an assertion in workerThreadMain to enforce maxQueueSize > 0.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes


Poem

🐰 A queue that learns to count itself,
No longer lost on barren shelf,
Defaults bloom when none were said,
Hardware whispers thread counts spread.

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning — docstring coverage is 0.00%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)
  • Description Check ✅ Passed — check skipped: CodeRabbit’s high-level summary is enabled.
  • Title Check ✅ Passed — the title accurately describes the main change: preventing the queue size from being 0, which directly addresses the bug described in the PR objectives where file transfers would hang when the queue size was 0.



@cole-h cole-h force-pushed the sep-cole/nix-268-3152s-limit-the-number-of-active-curl-handles-breaks-http branch from 5b4a357 to 116b10a on January 27, 2026 23:04
@github-actions

github-actions bot commented Jan 27, 2026

@github-actions github-actions bot temporarily deployed to the pull request January 27, 2026 23:08 (now inactive)
@cole-h cole-h added this pull request to the merge queue Jan 28, 2026
Merged via the queue into main with commit 02fcf7b Jan 28, 2026
29 checks passed
@cole-h cole-h deleted the sep-cole/nix-268-3152s-limit-the-number-of-active-curl-handles-breaks-http branch January 28, 2026 21:03
