
daemon: fix deadlock when SSH client disconnects during remote builds #14865

Merged
Mic92 merged 1 commit into NixOS:master from Mic92:daemon-deadlock
Dec 26, 2025

Conversation

Mic92 (Member) commented Dec 25, 2025

When a remote SSH client disconnects during a long-running operation like addToStore(), the nix-daemon can deadlock in a circular wait:

  • Process A (SSH daemon): blocked reading from downstream store socket, waiting for response from local daemon
  • Process B (local daemon): blocked reading from upstream socket, waiting for more NAR data from SSH daemon

The existing interrupt mechanism (ReceiveInterrupts + MonitorFdHup) correctly detects the SSH disconnect and sets _isInterrupted, but the daemon remains blocked in read() on the downstream store connection. Even though SIGUSR1 causes read() to return EINTR, the circular dependency prevents forward progress.

Fix this by adding shutdownConnections() to RemoteStore that calls shutdown(fd, SHUT_RDWR) on all tracked connection file descriptors. Register an interrupt callback in processConnection() that invokes this method when the store is a RemoteStore. This causes any blocking read() to return 0 (EOF), breaking the circular wait and allowing both processes to exit cleanly.

The fix tracks connection FDs in a synchronized set, populated when connections are created by the Pool factory. On interrupt, all FDs are shut down regardless of whether they're idle or in-use.

I have had this deployed for a while in my two CI setups (one of them under higher load, which triggered this issue more reliably) and haven't had any lockup for a couple of weeks. I wish the solution were a bit cleaner, and maybe someone can think of a better way, but I couldn't get the interrupt code to actually leave the daemon protocol loop without this change.
The key to reproducing this deadlock seems to be building the same store path concurrently from different nix build instances.


@Mic92 Mic92 requested a review from Ericson2314 as a code owner December 25, 2025 06:32
@github-actions github-actions bot added the "store" label (Issues and pull requests concerning the Nix store) Dec 25, 2025
xokdvium (Contributor) left a comment
Overall makes sense to me. I've also seen clients lock up on a blocking write to a local daemon socket in FdSink, which does a blocking flush in its destructor. Will this also fix that?

Mic92 (Member, Author) commented Dec 26, 2025

> Overall makes sense to me. I've also seen clients lock-up on a blocking write to a local daemon socket in FdSink in the destructor (it does a blocking flush in the destructor). Will this also fix it?

Only if MonitorFdHup is triggered. You might need to check if this thread is still active in your case.

xokdvium (Contributor) replied

I see, then that's a separate fix we should do.

@Mic92 Mic92 added this pull request to the merge queue Dec 26, 2025
Merged via the queue into NixOS:master with commit f6ca5dc Dec 26, 2025
17 checks passed
@Mic92 Mic92 deleted the daemon-deadlock branch December 26, 2025 15:15
