fix: discovery AsyncFilter deadlock on shutdown #32572
Merged
cskiraly merged 1 commit into ethereum:master on Oct 2, 2025
Conversation
cskiraly approved these changes on Oct 2, 2025
cskiraly (Contributor) left a comment
> The root cause of the hang is the extra slot, which was designed to make sure the goroutines in `AsyncFilter(...)` can finish. This PR fixes it by making sure the extra slot can always be resumed when the node shuts down.
I don't see how the extra slot could cause the hang here. I think it should be either the check or the internal iterator. Still, the change is good and it seems to solve the issue.
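For context on what "making the extra slot resumable on shutdown" means in channel terms, here is a small hedged sketch. The names `acquireSlot`, `slots`, and `closed` are placeholders for illustration, not the actual identifiers in iter.go:

```go
package main

import (
	"fmt"
	"time"
)

// Sketch only: slot acquisition that can be interrupted on shutdown.
// `slots` and `closed` are placeholder names, not the real iter.go fields.
func acquireSlot(slots, closed <-chan struct{}) bool {
	select {
	case <-slots:
		return true // got a slot; the caller must return it when done
	case <-closed:
		return false // shutting down: stop waiting instead of blocking forever
	}
}

func main() {
	slots := make(chan struct{}, 1) // empty: every slot is currently held elsewhere
	closed := make(chan struct{})
	go func() {
		time.Sleep(10 * time.Millisecond)
		close(closed) // simulate node shutdown while the slot is still held
	}()
	fmt.Println("acquired:", acquireSlot(slots, closed)) // prints "acquired: false"
}
```

With an escape path like this, a goroutine parked on `<-f.slots` can give up once shutdown starts instead of blocking forever.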
Sahil-4555 pushed a commit to Sahil-4555/go-ethereum that referenced this pull request on Oct 12, 2025
atkinsonholly pushed a commit to atkinsonholly/ephemery-geth that referenced this pull request on Nov 24, 2025
prestoalvarez pushed a commit to prestoalvarez/go-ethereum that referenced this pull request on Nov 27, 2025
Description
Hi, we found an occasional node hang issue on BSC; we think Geth may also be affected, so we are porting the fix patch here.
The fix on the BSC repo: bnb-chain/bsc#3347
When the hang occurs, two goroutines are stuck.

Routine 1: AsyncFilter(...)
On node start, it runs part of the DiscoveryV4 protocol, which can take considerable time. Here is its hang callstack:

```
goroutine 9711 [chan receive]: // this routine was stuck reading from the channel `<-f.slots`
github.com/ethereum/go-ethereum/p2p/enode.AsyncFilter.func1()
        github.com/ethereum/go-ethereum/p2p/enode/iter.go:206 +0x125
created by github.com/ethereum/go-ethereum/p2p/enode.AsyncFilter in goroutine 1
        github.com/ethereum/go-ethereum/p2p/enode/iter.go:192 +0x205
```

Routine 2: Node Stop
This is the main routine shutting down the process. It gets stuck while shutting down the discovery components: it tries to drain the `<-f.slots` channel, but the extra slot never gets a chance to be returned.

```
goroutine 11796 [chan receive]:
github.com/ethereum/go-ethereum/p2p/enode.(*asyncFilterIter).Close.func1()
        github.com/ethereum/go-ethereum/p2p/enode/iter.go:248 +0x5c
sync.(*Once).doSlow(0xc032a97cb8?, 0xc032a97d18?)
        sync/once.go:78 +0xab
sync.(*Once).Do(...)
        sync/once.go:69
github.com/ethereum/go-ethereum/p2p/enode.(*asyncFilterIter).Close(0xc092ff8d00?)
        github.com/ethereum/go-ethereum/p2p/enode/iter.go:244 +0x36
github.com/ethereum/go-ethereum/p2p/enode.(*bufferIter).Close.func1()
        github.com/ethereum/go-ethereum/p2p/enode/iter.go:299 +0x24
sync.(*Once).doSlow(0x11a175f?, 0x2bfe63e?)
        sync/once.go:78 +0xab
sync.(*Once).Do(...)
        sync/once.go:69
github.com/ethereum/go-ethereum/p2p/enode.(*bufferIter).Close(0x30?)
        github.com/ethereum/go-ethereum/p2p/enode/iter.go:298 +0x36
github.com/ethereum/go-ethereum/p2p/enode.(*FairMix).Close(0xc0004bfea0)
        github.com/ethereum/go-ethereum/p2p/enode/iter.go:379 +0xb7
github.com/ethereum/go-ethereum/eth.(*Ethereum).Stop(0xc000997b00)
        github.com/ethereum/go-ethereum/eth/backend.go:960 +0x4a
github.com/ethereum/go-ethereum/node.(*Node).stopServices(0xc0001362a0, {0xc012e16330, 0x1, 0xc000111410?})
        github.com/ethereum/go-ethereum/node/node.go:333 +0xb3
github.com/ethereum/go-ethereum/node.(*Node).Close(0xc0001362a0)
        github.com/ethereum/go-ethereum/node/node.go:263 +0x167
created by github.com/ethereum/go-ethereum/cmd/utils.StartNode.func1.1 in goroutine 9729
        github.com/ethereum/go-ethereum/cmd/utils/cmd.go:101 +0x78
```

The root cause of the hang is the extra slot, which was designed to make sure the goroutines in `AsyncFilter(...)` can finish. This PR fixes it by making sure that extra slot can always be resumed when the node shuts down.
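To make the failure shape easier to follow, here is a minimal, self-contained Go sketch of the slot pattern described above. It is not the actual p2p/enode/iter.go code: the type `filterIter`, its fields `slots` and `closed`, and the `acquire`/`release`/`Close` helpers are all illustrative names, and the shutdown-aware select is only an assumption about how such a drain can be made safe.

```go
// Minimal sketch of the slot/drain pattern described above. All names are
// illustrative; this is not the actual p2p/enode/iter.go implementation.
package main

import "fmt"

type filterIter struct {
	slots  chan struct{} // one token per in-flight check, plus one extra
	closed chan struct{} // closed on shutdown so blocked acquirers can bail out
}

func newFilterIter(workers int) *filterIter {
	f := &filterIter{
		slots:  make(chan struct{}, workers+1),
		closed: make(chan struct{}),
	}
	for i := 0; i < cap(f.slots); i++ {
		f.slots <- struct{}{}
	}
	return f
}

// acquire takes a slot but gives up once the iterator is closed. Without the
// closed case, a goroutine stuck here keeps waiting after shutdown, which is
// the stuck state shown in the callstacks above.
func (f *filterIter) acquire() bool {
	select {
	case <-f.slots:
		return true
	case <-f.closed:
		return false
	}
}

func (f *filterIter) release() { f.slots <- struct{}{} }

// Close signals shutdown, then waits for every slot to come back so that no
// check is still running when the iterator is torn down.
func (f *filterIter) Close() {
	close(f.closed)
	for i := 0; i < cap(f.slots); i++ {
		<-f.slots
	}
}

func main() {
	f := newFilterIter(1) // two slots total: one worker slot plus the extra one

	// Simulate checks that are already in flight and hold every slot.
	f.acquire()
	f.acquire()

	done := make(chan struct{})
	go func() {
		// Before a shutdown escape existed, this goroutine could block on
		// <-f.slots indefinitely.
		if f.acquire() {
			f.release()
		}
		close(done)
	}()

	go func() {
		// The in-flight checks finish and hand their slots back.
		f.release()
		f.release()
	}()

	f.Close() // drains both slots; the blocked goroutine exits via f.closed
	<-done
	fmt.Println("shutdown completed without a deadlock")
}
```

Run as a standalone program, this prints "shutdown completed without a deadlock". Without the `closed` case in `acquire`, the spawned goroutine could stay parked on `<-f.slots` after shutdown begins, which is the kind of stuck goroutine captured in the dumps above; the actual PR may differ in detail, but the essential point is that the blocked receive on the slot channel needs a way out so the drain in `Close` can complete.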