Limit the number of active curl handles #14993
Conversation
Previously, calling queryValidPaths() with a large number (e.g. 100K) of store paths failed because Nix immediately creates a `TransferItem` for each .narinfo, which is then registered as a handle with curl. However, curl appears to scale poorly internally: even though only a few downloads are actually started (up to the connection/stream limits), it spends a lot of CPU time dealing with the inactive handles. So the curl thread sits at 100% CPU, the active downloads stall and time out, and everything grinds to a halt. So now we limit the number of curl handles to http-connections * 5. With this, fetching 100K .narinfo files from localhost succeeds in ~15 seconds.
There can be a long time between the creation of `TransferItem` and the start of the curl download, which can lead to misleading download durations and progress bar status. So now we create the `Activity` and update `startTime` when curl actually starts the download.
xokdvium
left a comment
I wonder if we could repurpose https://curl.se/libcurl/c/CURLMOPT_MAX_CONCURRENT_STREAMS.html. That does seem like the maximum number of concurrent http 2 streams. So the maximum would be max connections times CURLMOPT_MAX_CONCURRENT_STREAMS (which is 100 by default, but we could also make it configurable).
Hm, but there would have to be a way to limit the per-connection transfers.

Yeah, I thought about using

Oh there's also https://curl.se/libcurl/c/CURLMOPT_MAX_HOST_CONNECTIONS.html. It's unlimited by default and it doesn't seem like we set it. Could it be an issue too?

I don't think that would help. Curl does observe the connection limit, e.g. it prints debug messages to that effect. It just spends a lot of time on inactive handles. Probably it has a for loop iterating over the handles somewhere.
xokdvium
left a comment
Seems good to me. Left one minor comment.
I see. I can try poking at the code and maybe file an issue for that. Maybe it's something that's pretty easy to fix upstream.

Oh, it occurs to me that we do a wakeupMulti on each enqueueItem. That might be one of the culprits (but I don't see how we could avoid that either).
Context
Add 👍 to pull requests you find important.
The Nix maintainer team uses a GitHub project board to schedule and track reviews.