Prevent split-brain active node writes when using Consul #23013
Conversation
Yay, the new test failed as expected:
In this case client 0, which is connected to the old leader that is partitioned, managed to write some data to it. Those writes were acknowledged by that leader, but then "overwritten" by a subsequent update from the new leader that didn't know about them, so they end up missing from the final set of results in the Key. If you look at the state at intermediate points you can see the opposite happen: client 1 has written lots more new entries to the new leader during the partition, and they are then lost when the old leader writes back over them knowing only about the writes from before the partition. But because we wait for the old leader to notice the partition resolve and step down, the last write in the test is pretty much always going to be from the new leader, so we'll detect the failure as missing the last few writes that the old leader (client 0) wrote, rather than as a gap in client 1's set of writes.
CI passed the new test the first time now that we have the fix in place. The fix uses Consul's
I've run the new test scenario locally on my Mac 180+ times in a loop without a single failure, so I'm relatively confident it is a correct fix and a relatively reliable test despite its inherent non-determinism.
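As a hedged sketch of one way split-brain writes can be fenced with Consul transactions, consider the example below. The check-session verb, the lockKey/sessionID plumbing, and the fencedPut helper are illustrative assumptions rather than a copy of this PR's exact implementation.

```go
// Illustrative sketch only: fence a KV write so it fails unless the given
// session still holds the leadership lock key. This is an assumption about
// the general technique, not the PR's actual code.
package consulfencing

import (
	"fmt"

	"github.com/hashicorp/consul/api"
)

// fencedPut writes key/value only if sessionID still holds lockKey.
func fencedPut(client *api.Client, lockKey, sessionID, key string, value []byte) error {
	ops := api.KVTxnOps{
		// Abort the whole transaction if the lock key is no longer held
		// by our session (i.e. we are no longer the active node).
		&api.KVTxnOp{
			Verb:    api.KVCheckSession,
			Key:     lockKey,
			Session: sessionID,
		},
		// The actual write, applied only if the check above passes.
		&api.KVTxnOp{
			Verb:  api.KVSet,
			Key:   key,
			Value: value,
		},
	}

	ok, resp, _, err := client.KV().Txn(ops, nil)
	if err != nil {
		return err
	}
	if !ok {
		// The transaction was rolled back: the session check failed, so
		// this node must no longer be the active node.
		return fmt.Errorf("write fenced: lost leadership lock: %v", resp.Errors)
	}
	return nil
}
```

With this shape, a stale active node's writes are rejected atomically by Consul rather than racing with the new leader's writes.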
I've also performed some performance testing with this change to verify my assumption that the additional I was unable to measure a significant difference in throughput or latency between this branch and |
This PR will add a test and then a fix for a correctness bug when using Consul as an HA backend.
This issue, while possible to hit in our Community Edition as the added test shows, is much less likely to cause noticeable problems in CE. At worst it could cause a major failure, say if the mount table were simultaneously modified by two active nodes in the small window of time in which they both think they are active. Chances are that even if you do manage to hit this, you will only lose a handful of updates, and even then only if multiple clients are writing to the same keys at the same time.
The issue is much more pronounced in Vault Enterprise, where the active node is responsible for managing replicated state and does so using an index that must remain perfectly consistent with the underlying state.
Test Scenario
This test is specially constructed to maximise the chance of detecting bad behaviour in the case that multiple nodes consider themselves to be active for a while.
We use a KVv2 mount and patch updates. Two separate client goroutines connect to two different servers (one starts as the active node) and then write updates to the same Key but with unique sub-keys. If Vault is correct, then no matter what happens to the leader nodes we should always end up with a single consistent record containing the full set of keys written by both. If we allow multiple leaders to overlap and still write (as we do before this fix), each active node is likely to overwrite updates from the other, resulting in gaps in one or both clients' sets of keys.
We start writing, partition the leader from the rest of the Vault/Consul nodes, wait for more writes to complete on a new leader, then un-partition the old leader again while it still has a client writing directly to it. This currently results in that leader completing at least one write that conflicts with the newer writes from the new leader and "loses" some data.
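A minimal sketch of the write/verify pattern described above follows, assuming a KVv2 mount named "kv", a shared secret at "shared-key" created with an initial Put, and the standard github.com/hashicorp/vault/api client; the real test in consul_fencing_test.go differs in detail.

```go
// Illustrative sketch of the concurrent write / final verify pattern, not
// the actual test code.
package consulfencing

import (
	"context"
	"fmt"

	vaultapi "github.com/hashicorp/vault/api"
)

// writeLoop patches unique sub-keys (e.g. "client0-17") into the same secret
// until ctx is cancelled, and returns the sub-keys it believes were
// acknowledged by the server it is connected to.
func writeLoop(ctx context.Context, client *vaultapi.Client, clientID string) []string {
	var written []string
	for i := 0; ctx.Err() == nil; i++ {
		subKey := fmt.Sprintf("%s-%d", clientID, i)
		// Patch merges this sub-key into the existing secret data; the
		// secret is assumed to already exist from an initial Put.
		_, err := client.KVv2("kv").Patch(ctx, "shared-key", map[string]interface{}{
			subKey: i,
		})
		if err == nil {
			written = append(written, subKey)
		}
	}
	return written
}

// verify reads the final version of the secret and checks that every
// acknowledged sub-key from both clients is present; any gap indicates
// that overlapping active nodes overwrote each other's patches.
func verify(ctx context.Context, client *vaultapi.Client, expected []string) error {
	secret, err := client.KVv2("kv").Get(ctx, "shared-key")
	if err != nil {
		return err
	}
	for _, k := range expected {
		if _, ok := secret.Data[k]; !ok {
			return fmt.Errorf("missing acknowledged write %q", k)
		}
	}
	return nil
}
```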
Initially this PR will contain just the new test, to ensure that it fails, and fails for the right reasons, in CI. Once I've seen that I'll push the fix too.