Vtorc: Recheck primary health before attempting a failure mitigation #18234

Merged
deepthi merged 9 commits into vitessio:main from bantyK:vtorc-lock-contention
Jun 2, 2025

Conversation

bantyK (Contributor) commented May 2, 2025

Description

This PR aims to fix the lock contention issue described in #18207.
The proposed fix prioritizes primary failure mitigation over any other failure in the cluster. Currently, vtorc acquires a shard lock to perform a mitigation, but it may prioritize a secondary issue caused by a dead primary, for example ReplicationStopped instead of DeadPrimary. This is described in the linked issue.

Vtorc can spend a considerable amount of time (we have observed more than 10 seconds) realizing that the actual issue is a dead primary before it performs an ERS. To fix this, this PR adds a check before the shard-locking step to verify that the underlying issue is not a primary failure.

We have tested this patch in our internal deployment and observed that vtorc consistently performs the ERS within 15 seconds. Before this patch, observed downtimes exceeded 60 seconds in many cases. The vtorc deployment used for this testing consists of 4 vtorc instances distributed across 2 cells. Please let us know if further evidence is needed, and in what format.
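
To make the shape of the change concrete, here is a minimal Go sketch; every identifier in it (checkAndRecover, primaryIsDead, the trimmed-down ReplicationAnalysis struct) is a hypothetical stand-in, not the actual Vitess code:

```go
package main

import (
	"fmt"
	"sync"
)

// Illustrative stand-ins; these are not the actual Vitess types or names.
type AnalysisCode string

const (
	DeadPrimary        AnalysisCode = "DeadPrimary"
	ReplicationStopped AnalysisCode = "ReplicationStopped"
)

type ReplicationAnalysis struct {
	Keyspace, Shard string
	Code            AnalysisCode
}

var shardLock sync.Mutex // stands in for the topo server's shard lock

// primaryIsDead pretends to re-probe the primary tablet's health; the
// real flow would re-read the primary's state before deciding.
func primaryIsDead(a *ReplicationAnalysis) bool { return true }

func checkAndRecover(a *ReplicationAnalysis) {
	// New step (this PR): before taking the shard lock, re-verify the
	// primary. If it is dead, escalate to DeadPrimary instead of fixing
	// a secondary symptom such as ReplicationStopped under the lock.
	if a.Code != DeadPrimary && primaryIsDead(a) {
		a.Code = DeadPrimary
	}

	// Existing flow: lock the shard, refresh tablet records, recover.
	// The post-lock refresh remains the correctness check; the recheck
	// above is purely a latency optimisation.
	shardLock.Lock()
	defer shardLock.Unlock()
	fmt.Printf("recovering %s/%s for %s\n", a.Keyspace, a.Shard, a.Code)
}

func main() {
	checkAndRecover(&ReplicationAnalysis{Keyspace: "ks", Shard: "0", Code: ReplicationStopped})
}
```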

Related Issue(s)

Fixes #18207

Checklist

  • "Backport to:" labels have been added if this change should be back-ported to release branches
  • If this change is to be back-ported to previous releases, a justification is included in the PR description
  • Tests were added or are not required
  • Did the new or modified tests pass consistently locally and on CI?
  • Documentation was added or is not required

Deployment Notes

vitess-bot (bot) commented May 2, 2025

Review Checklist

Hello reviewers! 👋 Please follow this checklist when reviewing this Pull Request.

General

  • Ensure that the Pull Request has a descriptive title.
  • Ensure there is a link to an issue (except for internal cleanup and flaky test fixes). New features should have an RFC that documents use cases and test cases.

Tests

  • Bug fixes should have at least one unit or end-to-end test; enhancements and new features should have a sufficient number of tests.

Documentation

  • Apply the release notes (needs details) label if users need to know about this change.
  • New features should be documented.
  • There should be some code comments as to why things are implemented the way they are.
  • There should be a comment at the top of each new or modified test to explain what the test does.

New flags

  • Is this flag really necessary?
  • Flag names must be clear and intuitive, use dashes (-), and have a clear help text.

If a workflow is added or modified:

  • Each item in Jobs should be named in order to mark it as required.
  • If the workflow needs to be marked as required, the maintainer team must be notified.

Backward compatibility

  • Protobuf changes should be wire-compatible.
  • Changes to _vt tables and RPCs need to be backward compatible.
  • RPC changes should be compatible with vitess-operator.
  • If a flag is removed, then it should also be removed from vitess-operator and arewefastyet, if used there.
  • vtctl command output order should be stable and awk-able.

vitess-bot bot added the NeedsBackportReason, NeedsDescriptionUpdate, NeedsIssue, and NeedsWebsiteDocsUpdate labels on May 2, 2025
github-actions bot added this to the v23.0.0 milestone on May 2, 2025
bantyK changed the title from "Vtorc lock contention" to "Vtorc: Recheck primary health before attempting a failure mitigation" on May 2, 2025
bantyK added 2 commits May 2, 2025 14:31
Signed-off-by: Banty Kumar <bantyp92@gmail.com>
Signed-off-by: Banty Kumar <bantyp92@gmail.com>
bantyK force-pushed the vtorc-lock-contention branch from a687948 to 54ff72a on May 2, 2025 09:02
bantyK marked this pull request as ready for review on May 2, 2025 09:04
timvaillancourt added the Type: Enhancement and Component: VTOrc labels and removed the NeedsDescriptionUpdate, NeedsWebsiteDocsUpdate, NeedsIssue, and NeedsBackportReason labels on May 2, 2025
return err
}

// We lock the shard here and then refresh the tablets information
timvaillancourt commented on this diff:

@bantyK/everyone: hypothetically, what would happen if there was a reparent between recheckPrimaryHealth and the .LockShard? I suspect this is ok, but it would be good to understand

bantyK (author) replied May 5, 2025:

Thanks for the review @timvaillancourt. This should be ok because vtorc performs a primary health check in the forceRefreshAllTabletsInShard method after the locking step, as it did before.

timvaillancourt (Contributor) commented May 2, 2025

@bantyK a thought: the issue this PR aims to solve could apply to other detections in the future, by which I mean X detection is more urgent than Y.

Would it be possible for the logic to instead work on a sorted, numeric priority, where a primary failover is always the highest priority? I think if detections were priority-sorted we could still keep the logic under the .LockShard area. cc @GuptaManan100 for thoughts
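
As an illustration of this suggestion, a minimal Go sketch of priority-sorted detections, with hypothetical names and priority values (this is not how vtorc currently works):

```go
package main

import (
	"fmt"
	"sort"
)

// Each detection gets a numeric priority; higher is more urgent. The
// detection names and values here are hypothetical stand-ins.
type detection struct {
	name     string
	priority int
}

func main() {
	detections := []detection{
		{"ReplicationStopped", 10},
		{"DeadPrimary", 100}, // primary failover always ranks highest
		{"LockedSemiSyncPrimary", 50},
	}
	// Sort so the most urgent detection is handled first.
	sort.Slice(detections, func(i, j int) bool {
		return detections[i].priority > detections[j].priority
	})
	for _, d := range detections {
		fmt.Println(d.name) // DeadPrimary would be mitigated first
	}
}
```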

codecov bot commented May 2, 2025

Codecov Report

Attention: Patch coverage is 0% with 24 lines in your changes missing coverage. Please review.

Project coverage is 67.45%. Comparing base (b3d80b2) to head (e59dad7).
Report is 15 commits behind head on main.

Files with missing lines                  Patch %   Lines
go/vt/vtorc/logic/topology_recovery.go    0.00%     24 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main   #18234      +/-   ##
==========================================
- Coverage   67.46%   67.45%   -0.01%     
==========================================
  Files        1601     1602       +1     
  Lines      262153   262256     +103     
==========================================
+ Hits       176851   176914      +63     
- Misses      85302    85342      +40     


Signed-off-by: Banty Kumar <bantyp92@gmail.com>
bantyK (author) commented May 5, 2025

@bantyK a thought: the issue this PR aims to solve could apply to other detections in the future, by which I mean X detection is more urgent than Y.

Would it be possible for the logic to instead work on a sorted, numeric priority, where a primary failover is always the highest priority? I think if detections were priority-sorted we could still keep the logic under the .LockShard area. cc @GuptaManan100 for thoughts

The issue here was primarily lock contention between the operations, not their relative priorities.

Even if we introduce priorities for different operations, as long as the operations share the same lock in the topo server, it probably won't help much. There is no capability for one operation to force another operation to stop and release its lock, and I don't know whether that would be safe to do. Hence, as part of this PR, I am not addressing that and only want to focus on giving primary failover the highest priority, which was already the intention in VTOrc's implementation.
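
To illustrate the contention being described, a toy Go sketch with hypothetical names and a shortened stand-in for the 15s remote-operation timeout:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// Instance A takes the shard lock to fix a secondary symptom and only
// then discovers the primary is dead, paying a remote-operation timeout
// while holding the lock. Instance B, which already diagnosed
// DeadPrimary, cannot proceed until A releases the lock.
func main() {
	var shardLock sync.Mutex
	remoteOperationTimeout := 200 * time.Millisecond // stand-in for the 15s default
	var wg sync.WaitGroup
	wg.Add(2)

	go func() { // instance A: has only seen ReplicationStopped
		defer wg.Done()
		shardLock.Lock()
		fmt.Println("A: locked shard to fix ReplicationStopped")
		time.Sleep(remoteOperationTimeout) // refreshing the dead primary times out
		fmt.Println("A: primary is actually dead; releasing the lock")
		shardLock.Unlock()
	}()

	go func() { // instance B: has already diagnosed DeadPrimary
		defer wg.Done()
		time.Sleep(10 * time.Millisecond) // let A win the lock race
		shardLock.Lock()
		fmt.Println("B: got the lock only now; running ERS")
		shardLock.Unlock()
	}()

	wg.Wait()
}
```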

GuptaManan100 (Contributor) left a comment

Thank you for your first contribution! The idea is great - just a few changes needed in the code to get it ready.

bantyK added 2 commits May 12, 2025 10:37
Signed-off-by: Banty Kumar <bantyp92@gmail.com>
Signed-off-by: Banty Kumar <bantyp92@gmail.com>
bantyK force-pushed the vtorc-lock-contention branch from 121c1ed to e59dad7 on May 12, 2025 05:08
bantyK and others added 3 commits May 26, 2025 10:19
Signed-off-by: Banty Kumar <bantyp92@gmail.com>
Signed-off-by: Manan Gupta <manan@planetscale.com>
Signed-off-by: Manan Gupta <manan@planetscale.com>
GuptaManan100 (Contributor) left a comment

Looks good to me now 👍. Thank you for your first contribution to Vitess @bantyK!

Just to answer the question @timvaillancourt asked earlier (apologies, I missed it): we do have recoveries ordered. This happens in the GetReplicationAnalysis function, wherein the recoveries we check for first in the long if-condition block are the ones prioritised first. For example, if the primary is dead and a replica is broken because of it, we will first find the DeadPrimary analysis and then set the hasClusterwideAction field, which prevents any further recoveries from being found for this shard.

The fix this PR provides is for the situation where the primary tablet died after the VTOrc instance in question last reloaded its information, so it only sees that the replica is broken. Only after reloading the primary tablet information will it realise that the tablet is dead.
We would have done this after acquiring the shard lock anyway, but that's the point: if we do it behind a shard lock, then other VTOrc instances that have already seen the failure can't proceed with the fix because this instance is holding the lock. And the reload of the primary's information will take the RemoteOperationTimeout (default 15s) amount of time (because the primary is dead), delaying the recovery by that much. I have added a comment in the code to explain that this step is purely an optimisation and not needed for correctness.
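
A minimal Go sketch of the ordering described above, with hypothetical names (analyze, shardState); the real logic lives in the much larger if-condition block of GetReplicationAnalysis:

```go
package main

import "fmt"

// Recoveries checked earlier in the if-chain win, and a cluster-wide
// action suppresses any further per-replica analyses for the shard.
type shardState struct {
	primaryDead, replicaBroken bool
}

func analyze(s shardState) []string {
	var analyses []string
	hasClusterwideAction := false

	if s.primaryDead { // checked first, so prioritised first
		analyses = append(analyses, "DeadPrimary")
		hasClusterwideAction = true
	}
	if s.replicaBroken && !hasClusterwideAction {
		// suppressed when a cluster-wide recovery is already pending
		analyses = append(analyses, "ReplicationStopped")
	}
	return analyses
}

func main() {
	fmt.Println(analyze(shardState{primaryDead: true, replicaBroken: true}))
	// prints [DeadPrimary]: the replica's symptom is not surfaced
}
```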

GuptaManan100 (Contributor) commented

@deepthi @timvaillancourt It would be great if you could also review the PR.

deepthi (Collaborator) left a comment

@bantyK can you push an empty commit to trigger CI? It seems to have got stuck.

Signed-off-by: Banty Kumar <bantyp92@gmail.com>
bantyK (author) commented Jun 2, 2025

@bantyK can you push an empty commit to trigger CI? It seems to have got stuck.

Done

deepthi (Collaborator) commented Jun 2, 2025

Test failure is probably flakiness. This PR makes no changes that affect the failing test.

deepthi merged commit 91acd72 into vitessio:main on Jun 2, 2025
100 of 109 checks passed
bantyK deleted the vtorc-lock-contention branch on June 3, 2025 03:26
timvaillancourt pushed a commit to slackhq/vitess that referenced this pull request Jun 3, 2025
…itessio#18234)

Signed-off-by: Banty Kumar <bantyp92@gmail.com>
Signed-off-by: Manan Gupta <manan@planetscale.com>
Co-authored-by: Manan Gupta <manan@planetscale.com>
timvaillancourt pushed a commit to slackhq/vitess that referenced this pull request Jun 3, 2025
…itessio#18234)

Signed-off-by: Banty Kumar <bantyp92@gmail.com>
Signed-off-by: Manan Gupta <manan@planetscale.com>
Co-authored-by: Manan Gupta <manan@planetscale.com>
timvaillancourt added a commit to slackhq/vitess that referenced this pull request Jun 4, 2025
…itessio#18234) (#659)

Signed-off-by: Banty Kumar <bantyp92@gmail.com>
Signed-off-by: Manan Gupta <manan@planetscale.com>
Co-authored-by: Banty Kumar <bantyp92@gmail.com>
Co-authored-by: Manan Gupta <manan@planetscale.com>
tanjinx pushed a commit to slackhq/vitess that referenced this pull request Nov 4, 2025
…itessio#18234)

Signed-off-by: Banty Kumar <bantyp92@gmail.com>
Signed-off-by: Manan Gupta <manan@planetscale.com>
Co-authored-by: Manan Gupta <manan@planetscale.com>
tanjinx added a commit to slackhq/vitess that referenced this pull request Nov 6, 2025
…itessio#18234) (#739)

Signed-off-by: Banty Kumar <bantyp92@gmail.com>
Signed-off-by: Manan Gupta <manan@planetscale.com>
Co-authored-by: Banty Kumar <bantyp92@gmail.com>
Co-authored-by: Manan Gupta <manan@planetscale.com>
tanjinx added a commit to slackhq/vitess that referenced this pull request Nov 10, 2025
…itessio#18234) (#739)

Signed-off-by: Banty Kumar <bantyp92@gmail.com>
Signed-off-by: Manan Gupta <manan@planetscale.com>
Co-authored-by: Banty Kumar <bantyp92@gmail.com>
Co-authored-by: Manan Gupta <manan@planetscale.com>
sbaker617 pushed a commit to slackhq/vitess that referenced this pull request Feb 5, 2026
…itessio#18234) (#739)

Signed-off-by: Banty Kumar <bantyp92@gmail.com>
Signed-off-by: Manan Gupta <manan@planetscale.com>
Co-authored-by: Banty Kumar <bantyp92@gmail.com>
Co-authored-by: Manan Gupta <manan@planetscale.com>

Labels

  • Component: VTOrc (Vitess Orchestrator integration)
  • Type: Enhancement (Logical improvement, somewhere between a bug and feature)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Bug Report: Potential lock contention in vtorc during ungraceful leader election.

4 participants