Fix ERS to work when the primary candidate's replication is stopped (#9512)
Merged
GuptaManan100 merged 7 commits into vitessio:main on Jan 19, 2022
Conversation
Signed-off-by: Manan Gupta <manan@planetscale.com>
…rimary-elect has replication stopped Signed-off-by: Manan Gupta <manan@planetscale.com>
…ng for the replica Signed-off-by: Manan Gupta <manan@planetscale.com>
… tablets Signed-off-by: Manan Gupta <manan@planetscale.com>
Signed-off-by: Manan Gupta <manan@planetscale.com>
…ents and add logging for vtctl commands Signed-off-by: Manan Gupta <manan@planetscale.com>
…tion-candidate Signed-off-by: Manan Gupta <manan@planetscale.com>
deepthi approved these changes on Jan 19, 2022
This was referenced Apr 15, 2022
mattlord added a commit that referenced this pull request on Apr 18, 2022
After #9512 we always attempted to start the replication SQL_Thread(s) when waiting for a given position. The problem with this, however, is that if the SQL_Thread is running but the IO_Thread is not, then the tablet repair does not try to start replication on a replica tablet. So in certain states, such as when initializing a shard, replication may end up in a non-healthy state and never be repaired.

This changes the behavior so that:
1. We only attempt to start the SQL_Thread(s) if it's not already running.
2. If we explicitly start the SQL_Thread(s), then we also explicitly reset it to what it was (stopped) as we exit the call.

Because the caller should be/have a TabletManager, which has a mutex, this should ensure that the replication manager calls are serialized. And because we are resetting the replication state after mutating it, everything should work as it did before #9512, with the exception that when waiting we ensure the replica at least has the possibility of catching up.

Signed-off-by: Matt Lord <mattalord@gmail.com>
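The two-step behavior in that commit message can be sketched as below. This is a simplified illustration, not the actual Vitess code: `replicaState`, `startSQLThread`, `stopSQLThread`, and `waitForPosition` are hypothetical stand-ins for the real tablet-manager and mysqlctl calls.

```go
package main

import "fmt"

// replicaState is a hypothetical stand-in for a replica tablet's MySQL
// replication state; real Vitess toggles the SQL thread via mysqlctl RPCs.
type replicaState struct {
	sqlThreadRunning bool
}

func (r *replicaState) startSQLThread() { r.sqlThreadRunning = true }
func (r *replicaState) stopSQLThread()  { r.sqlThreadRunning = false }

// waitForPosition sketches the refined behavior: start the SQL thread only
// if it is not already running, and restore its previous (stopped) state on
// exit, so the replication manager never observes a state it did not set.
func waitForPosition(r *replicaState, wait func()) {
	if !r.sqlThreadRunning {
		r.startSQLThread()
		defer r.stopSQLThread() // reset to what it was (stopped) as we exit
	}
	wait() // stand-in for waiting until the target position is reached
}

func main() {
	stopped := &replicaState{sqlThreadRunning: false}
	waitForPosition(stopped, func() {
		fmt.Println("waiting with SQL thread running:", stopped.sqlThreadRunning)
	})
	fmt.Println("after wait, SQL thread running:", stopped.sqlThreadRunning)

	running := &replicaState{sqlThreadRunning: true}
	waitForPosition(running, func() {})
	fmt.Println("already-running thread untouched:", running.sqlThreadRunning)
}
```

The `defer` is the key point of the sketch: the stopped state is restored on every exit path from the wait, which is what lets the pre-#9512 expectations of the replication manager keep holding.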
mattlord added a commit to planetscale/vitess that referenced this pull request on Apr 21, 2022
…sio#10104) Same commit message as the Apr 18 commit above, with vitessio#-prefixed issue references. Signed-off-by: Matt Lord <mattalord@gmail.com>
mattlord added a commit that referenced this pull request on Apr 24, 2022
…ded (#10123)
* Only start SQL thread temporarily to WaitForPosition if needed (#10104): same commit message as the Apr 18 commit above.
* Use older replication status interface, as release-13.0 does not have this: #9853.
Signed-off-by: Matt Lord <mattalord@gmail.com>
notfelineit pushed a commit to planetscale/vitess that referenced this pull request on May 3, 2022
…sio#561)
* Only start SQL thread temporarily to WaitForPosition if needed (vitessio#10104): same commit message as the Apr 18 commit above.
* Use older replication status interface, as vitess-private does not have this: vitessio#9853.
Signed-off-by: Matt Lord <mattalord@gmail.com>
Description
While investigating flakiness in one of the runs of cluster 14, it was noticed that ERS failed when replication on the most advanced tablet was stopped. The failure occurred when we waited for that tablet to apply its relay log and timed out. This PR fixes the issue and adds an end-to-end test for it.
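The failure mode and the fix described in this PR can be sketched as below. This is a minimal illustration, not the actual Vitess reparent code: `replicaState`, `startSQLThread`, `waitForRelayLogApply`, and `promoteCandidate` are hypothetical stand-ins for the real ERS and tablet-manager calls.

```go
package main

import "fmt"

// replicaState is a hypothetical stand-in for a candidate tablet's MySQL
// replication state during an EmergencyReparentShard (ERS).
type replicaState struct {
	sqlThreadRunning bool
	relayLogApplied  bool
}

func (r *replicaState) startSQLThread() { r.sqlThreadRunning = true }

// waitForRelayLogApply models waiting for the candidate to catch up: it can
// only succeed if the SQL thread is actually applying the relay log;
// otherwise the wait times out, which is the failure this PR fixes.
func (r *replicaState) waitForRelayLogApply() error {
	if !r.sqlThreadRunning {
		return fmt.Errorf("timed out: SQL thread stopped, relay log never applied")
	}
	r.relayLogApplied = true
	return nil
}

// promoteCandidate sketches the fix: start the SQL thread before waiting, so
// a candidate whose replication was stopped can still drain its relay log.
func promoteCandidate(r *replicaState) error {
	r.startSQLThread()
	return r.waitForRelayLogApply()
}

func main() {
	r := &replicaState{} // replication stopped, as in the reported failure
	if err := promoteCandidate(r); err != nil {
		fmt.Println("ERS failed:", err)
		return
	}
	fmt.Println("relay log applied:", r.relayLogApplied)
}
```

Calling `waitForRelayLogApply` directly on a stopped replica reproduces the timeout from the flaky run, while `promoteCandidate` succeeds because the SQL thread is started first.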
The proposed solution is to start the SQL_Thread of the MySQL server when we wait for it to apply the relay logs.

Related Issue(s)
Checklist
Deployment Notes