Vtorc: Recheck primary health before attempting a failure mitigation #18234

deepthi merged 9 commits into vitessio:main
Conversation
Review Checklist

Hello reviewers! 👋 Please follow this checklist when reviewing this Pull Request.

- General
- Tests
- Documentation
- New flags
- If a workflow is added or modified
- Backward compatibility
force-pushed from a687948 to 54ff72a
```go
	return err
}

// We lock the shard here and then refresh the tablets information
```
@bantyK/everyone: hypothetically, what would happen if there was a reparent between recheckPrimaryHealth and the .LockShard? I suspect this is ok, but it would be good to understand.
Thanks for the review @timvaillancourt!
This should be ok because VTOrc performs a primary health check in the forceRefreshAllTabletsInShard method after the locking step, as it did before this change.
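For illustration, a minimal sketch of the post-lock path described above, assuming simplified signatures (stubs like lockShard and primaryIsHealthy are hypothetical; only the ordering mirrors the discussion):

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

// Illustrative stubs: names echo the PR discussion, but these signatures
// are assumptions made for the sake of a runnable sketch.
func lockShard(ctx context.Context) (func(), error)     { return func() {}, nil }
func forceRefreshAllTabletsInShard(ctx context.Context) {}
func primaryIsHealthy(ctx context.Context) bool         { return true }

var errDeferToPrimaryRecovery = errors.New("primary failure found under lock; abort this mitigation")

// mitigateUnderLock shows why a reparent racing between the pre-lock
// health check and LockShard is safe: the tablet refresh done while
// holding the lock re-evaluates the primary's health, so a stale
// pre-lock view cannot lead to a wrong fix.
func mitigateUnderLock(ctx context.Context) error {
	unlock, err := lockShard(ctx)
	if err != nil {
		return err
	}
	defer unlock()

	// Refresh tablet records under the lock; any reparent that raced
	// with the pre-lock check is visible after this point.
	forceRefreshAllTabletsInShard(ctx)
	if !primaryIsHealthy(ctx) {
		return errDeferToPrimaryRecovery
	}

	fmt.Println("proceeding with the original (non-primary) mitigation")
	return nil
}

func main() {
	_ = mitigateUnderLock(context.Background())
}
```

In other words, the pre-lock check only saves time; correctness still comes from the refresh performed under the lock.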
@bantyK a thought: the issue this PR aims to solve could apply to other detections in the future, by which I mean X detection is more urgent than Y. Would it be possible for the logic to instead work on a sorted, numeric priority, where a primary failover is always the highest priority? I think if detections were priority-sorted we could still keep the logic under the …
Codecov Report

Attention: Patch coverage is
Additional details and impacted files

```diff
@@            Coverage Diff             @@
##             main   #18234      +/-   ##
==========================================
- Coverage   67.46%   67.45%   -0.01%
==========================================
  Files        1601     1602       +1
  Lines      262153   262256     +103
==========================================
+ Hits       176851   176914      +63
- Misses      85302    85342      +40
```

☔ View full report in Codecov by Sentry.
The issue here was primarily lock contention between the operations, not priorities between them. Even if we introduce priorities for different operations, as long as they share the same lock in the topo server it probably won't help much. There is no capability for one operation to force-stop another operation to release its lock, and I don't know if that would be a safe thing to do. Hence, as part of this PR, I am not considering this and only want to focus on giving primary failover the highest priority, which was already the intention in VTOrc's implementation.
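To illustrate why priorities alone would not help here, a generic Go sketch (a plain sync.Mutex standing in for the topo shard lock; none of this is VTOrc code): a blocking lock cannot be preempted, so a higher-priority recovery that arrives second still waits out the current holder.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	var shardLock sync.Mutex
	done := make(chan struct{})

	// A low-priority mitigation (e.g. for ReplicationStopped) wins the
	// race for the lock and holds it while it works.
	shardLock.Lock()
	go func() {
		defer shardLock.Unlock()
		time.Sleep(2 * time.Second) // slow work while holding the lock
		close(done)
	}()

	// A higher-priority recovery (e.g. for DeadPrimary) arrives later.
	// Its priority cannot force the holder to release: it simply blocks.
	start := time.Now()
	shardLock.Lock()
	fmt.Printf("high-priority recovery waited %v for the shard lock\n",
		time.Since(start).Round(time.Second))
	shardLock.Unlock()
	<-done
}
```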
GuptaManan100 left a comment:
Thank you for your first contribution! The idea is great - just a few changes needed in the code to get it ready.
force-pushed from 121c1ed to e59dad7
GuptaManan100 left a comment:
Looks good to me now 👍. Thank you for your first contribution to Vitess @bantyK!
Just to answer the question @timvaillancourt asked before (apologies, I missed it): we do have recoveries ordered. This happens in the GetReplicationAnalysis function, wherein the recoveries we check for first in the long if-condition block are the ones prioritised first. For example, if the primary is dead and a replica is broken because of it, we will first find the DeadPrimary analysis and then set the hasClusterwideAction field, which prevents any further recoveries from being found for this shard.
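As a minimal sketch of that ordering (the types and fields below are simplified stand-ins; only the if-chain idea and the hasClusterwideAction name follow the explanation above, and the real logic lives in GetReplicationAnalysis):

```go
package main

import "fmt"

// Simplified stand-ins for the analysis state; these field names are
// assumptions for illustration, not Vitess's actual structures.
type shardState struct {
	primaryDead        bool
	replicationStopped bool
}

type analysis struct {
	code                 string
	hasClusterwideAction bool
}

// analyze checks for the most severe problem first. Once a cluster-wide
// action like DeadPrimary is found, lower-priority detections for the
// same shard are suppressed.
func analyze(s shardState) analysis {
	if s.primaryDead {
		return analysis{code: "DeadPrimary", hasClusterwideAction: true}
	}
	if s.replicationStopped {
		return analysis{code: "ReplicationStopped"}
	}
	return analysis{code: "NoProblem"}
}

func main() {
	// A dead primary also breaks replication on the replicas, but the
	// if-chain ordering means DeadPrimary is reported first.
	fmt.Println(analyze(shardState{primaryDead: true, replicationStopped: true}).code)
}
```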
The fix this PR provides is for the situation where the primary tablet died after the VTOrc instance in question had reloaded its information, so it only sees that the replica is broken. Only after reloading the primary tablet's information will it realise that the tablet is dead.
We would have done this after acquiring the shard lock anyway, but that's the point: if we do it behind a shard lock, then other VTOrc instances that have already seen the failure can't proceed with the fix because this instance is holding the lock. And the reload of primary information will take the RemoteOperationTimeout (default 15s) amount of time (because the primary is dead), delaying the recovery by that much. I have added a comment in the code explaining that this step is purely an optimisation and not needed for correctness, explaining ☝️.
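To make the delay concrete, a small sketch where the 15s value mirrors the RemoteOperationTimeout default mentioned above (the refresh function itself is purely illustrative):

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// Mirrors the RemoteOperationTimeout default mentioned above.
const remoteOperationTimeout = 15 * time.Second

// refreshDeadPrimary stands in for reloading a dead primary's tablet
// record: the tablet never answers, so the call only returns once the
// context deadline expires.
func refreshDeadPrimary(ctx context.Context) error {
	<-ctx.Done()
	return ctx.Err()
}

func main() {
	start := time.Now()
	ctx, cancel := context.WithTimeout(context.Background(), remoteOperationTimeout)
	defer cancel()

	// If this refresh happens while holding the shard lock, every other
	// VTOrc instance waiting to run the recovery is stalled for the
	// full timeout as well.
	err := refreshDeadPrimary(ctx)
	fmt.Printf("refresh gave up after %v: %v\n",
		time.Since(start).Round(time.Second), err)
}
```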
@deepthi @timvaillancourt It would be great if you could also review the PR.
Done
Test failure is probably flakiness. This PR makes no changes that affect the failing test.
Description
This PR aims to fix the lock contention issue described in #18207.
The proposed fix prioritizes primary failure mitigation over any other failure in the cluster. Currently, VTOrc acquires a shard lock to perform a mitigation, but it may prioritise a secondary issue caused by a dead primary, for example ReplicationStopped instead of DeadPrimary. This is described in the linked issue.
VTOrc can spend a considerable amount of time (we have observed more than 10 seconds) realizing that the actual issue is a dead primary and performing an ERS. To fix this, this PR adds a check before the shard-locking step to verify that the underlying issue is not a primary failure, as sketched below.
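A minimal sketch of the idea, assuming a pre-lock health probe along the lines described (the names recheckPrimaryHealth and tryMitigation and their signatures are illustrative, not the PR's exact code):

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

var errPrimaryFailure = errors.New("underlying issue is a primary failure; run DeadPrimary recovery instead")

// recheckPrimaryHealth stands in for a cheap pre-lock probe of the
// primary's health; the name follows the PR title but the signature
// is assumed for this sketch.
func recheckPrimaryHealth(ctx context.Context) error {
	primaryAlive := false // pretend the probe found a dead primary
	if !primaryAlive {
		return errPrimaryFailure
	}
	return nil
}

// tryMitigation aborts a secondary mitigation (e.g. ReplicationStopped)
// before the shard lock is ever taken when the real problem is the
// primary, leaving the lock free for the ERS path.
func tryMitigation(ctx context.Context) error {
	if err := recheckPrimaryHealth(ctx); err != nil {
		return err // bail out without touching the shard lock
	}
	// ... acquire the shard lock and run the original mitigation here ...
	return nil
}

func main() {
	if err := tryMitigation(context.Background()); err != nil {
		fmt.Println("skipping mitigation:", err)
	}
}
```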
We have tested this patch in our internal deployment and observed that VTOrc can consistently perform an ERS within 15 seconds. Before this patch, observed downtimes were more than 60 seconds in many cases. The VTOrc deployment used for this testing was 4 instances distributed across 2 cells. Please let us know if any further evidence is needed, and in what format.
Related Issue(s)
Fixes #18207
Checklist
Deployment Notes