
Conversation

@ashking94 (Member) commented Aug 6, 2025

Description

After the bug that caused an infinite loop of segment replication was fixed in PR #18636, the FullRollingRestartIT test became flaky, as seen in #18490. On deeper analysis, I found that this happens due to a race condition during primary shard relocation. On primary shard relocation, the new primary has a bumped-up segment infos generation and version, which is broadcast to all of its replicas via the checkpoint publisher. This happens around the same time that the shard_started action is sent to the active cluster manager to report that the primary handover completed successfully. In certain conditions, the replica received the latest checkpoint from the new primary, but the updated cluster state had not yet been applied by the cluster applier service on the replica node. This led the replica to reach out to the old primary to fetch the segment infos. The issue has a small probability of occurring for indexes that receive no ingestion during relocation after the permits have been acquired on the old primary.

With this PR, the following changes prevent the issue:

  1. The fix applies to segrep local (node-to-node) indexes only. The reason it is not applied to remote-store-backed indexes is explained in point 4 below.
  2. If the checkpoint received during the get-checkpoint-info transport action is behind the checkpoint that was received from the primary during the checkpoint publish action, we fail the replication event with an appropriate reason (a minimal illustrative sketch of this check follows this list).
  3. The current segrep flow already has built-in retries for recoverable failure modes, so the failed event is retried.
  4. This PR fails the segrep event if the checkpoint received during the get-segment-infos call is staler than the original checkpoint received during the checkpoint publish call. This would be problematic for remote store, because the new primary is not granted upload rights until the cluster applier service runs on the primary node after the shard_started action; that delay is intentional, to prevent multiple writers during primary relocation.
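
To make the staleness check in point 2 concrete, here is a minimal, hypothetical Java sketch of what such a guard on the replica side could look like. This is not the merged PR diff: the class, method, and variable names are invented for illustration, and while ReplicationCheckpoint#isAheadOf and ReplicationFailedException exist in OpenSearch, wiring them together this way is an assumption.

// Hypothetical sketch only -- not the code merged in this PR.
import org.opensearch.indices.replication.checkpoint.ReplicationCheckpoint;
import org.opensearch.indices.replication.common.ReplicationFailedException;

class CheckpointStalenessGuard {
    /**
     * publishedCheckpoint - the checkpoint the replica received via the publish-checkpoint action,
     *                       i.e. the one that triggered this replication event.
     * sourceCheckpoint    - the checkpoint returned by the node the replica asked for checkpoint info,
     *                       which may still be the old primary if the replica's cluster state is stale.
     */
    static void ensureSourceNotStale(ReplicationCheckpoint publishedCheckpoint, ReplicationCheckpoint sourceCheckpoint) {
        // If the responding node reports an older checkpoint than the one that was published,
        // the replica is most likely still talking to the old primary during relocation.
        // Fail the event so the built-in segrep retry (point 3) can re-resolve the primary
        // from the by-then-applied cluster state and try again.
        if (publishedCheckpoint.isAheadOf(sourceCheckpoint)) {
            throw new ReplicationFailedException(
                "Source checkpoint " + sourceCheckpoint + " is stale compared to published checkpoint "
                    + publishedCheckpoint + "; failing so the event can be retried against the current primary"
            );
        }
    }
}

Whether the actual change surfaces this as ReplicationFailedException or another retryable failure is an implementation detail; the key idea is that the stale fetch fails fast and is picked up by the existing retry path instead of copying segments from the old primary.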

Related Issues

Resolves #18490

Check List

  • Functionality includes testing.
  • [ ] API changes companion pull request created, if applicable.
  • [ ] Public documentation issue/PR created, if applicable.

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.

@ashking94 ashking94 requested a review from a team as a code owner August 6, 2025 17:33
@github-actions github-actions bot added the >test-failure, autocut, and flaky-test labels Aug 6, 2025
@ashking94 (Member Author) commented:

@mch2 @andrross @getsaurabh02 @sachinpkale @Bukhtawar - This one is a small PR for fixing a flaky test before 3.2 release. Can you help with the review?

github-actions bot commented Aug 6, 2025

❌ Gradle check result for 2d76208: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

mch2 (Member) commented Aug 6, 2025

@ashking94 thanks for fixing this. Not for now but with node-node segrep I'm thinking we could also change the replication source to fetch segments from the publisher of the cp vs today where we rely on the cluster state lookup of the active primary. This would allow us to replicate from non primary nodes.

github-actions bot commented Aug 6, 2025

❌ Gradle check result for 2d76208: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

github-actions bot commented Aug 6, 2025

❌ Gradle check result for 2d76208: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

github-actions bot commented Aug 7, 2025

❌ Gradle check result for 2d76208: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

github-actions bot commented Aug 7, 2025

❌ Gradle check result for 2d76208: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

@ashking94 (Member Author) commented:

@ashking94 thanks for fixing this. Not for now but with node-node segrep I'm thinking we could also change the replication source to fetch segments from the publisher of the cp vs today where we rely on the cluster state lookup of the active primary. This would allow us to replicate from non primary nodes.

Sure, Marc. This does make sense.

github-actions bot commented Aug 7, 2025

✅ Gradle check result for 3c475c1: SUCCESS

codecov bot commented Aug 7, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 72.85%. Comparing base (c01ff89) to head (3c475c1).
⚠️ Report is 4 commits behind head on main.

Additional details and impacted files
@@             Coverage Diff              @@
##               main   #18944      +/-   ##
============================================
- Coverage     72.89%   72.85%   -0.04%     
- Complexity    69318    69340      +22     
============================================
  Files          5642     5642              
  Lines        318636   318640       +4     
  Branches      46107    46108       +1     
============================================
- Hits         232254   232138     -116     
- Misses        67540    67752     +212     
+ Partials      18842    18750      -92     

☔ View full report in Codecov by Sentry.


@ashking94 ashking94 merged commit 251cc36 into opensearch-project:main Aug 7, 2025
31 checks passed
opensearch-trigger-bot commented:

The backport to 2.19 failed:

The process '/usr/bin/git' failed with exit code 128

To backport manually, run these commands in your terminal:

# Navigate to the root of your repository
cd $(git rev-parse --show-toplevel)
# Fetch latest updates from GitHub
git fetch
# Create a new working tree
git worktree add ../.worktrees/OpenSearch/backport-2.19 2.19
# Navigate to the new working tree
pushd ../.worktrees/OpenSearch/backport-2.19
# Create a new branch
git switch --create backport/backport-18944-to-2.19
# Cherry-pick the merged commit of this pull request and resolve the conflicts
git cherry-pick -x --mainline 1 251cc3603e73405cc047e192663e8b5dbaa1c61d
# Push it to GitHub
git push --set-upstream origin backport/backport-18944-to-2.19
# Go back to the original working tree
popd
# Delete the working tree
git worktree remove ../.worktrees/OpenSearch/backport-2.19

Then, create a pull request where the base branch is 2.19 and the compare/head branch is backport/backport-18944-to-2.19.

@ashking94 ashking94 added the backport 3.2 Backport to 3.2 branch label Aug 7, 2025
opensearch-trigger-bot bot pushed a commit that referenced this pull request Aug 7, 2025
* Fix segment replication bug during primary relocation

Signed-off-by: Ashish Singh <[email protected]>

* Fix applicable for segrep local indexes only

Signed-off-by: Ashish Singh <[email protected]>

---------

Signed-off-by: Ashish Singh <[email protected]>
(cherry picked from commit 251cc36)
Signed-off-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
ashking94 added a commit to ashking94/OpenSearch that referenced this pull request Aug 7, 2025
…ject#18944)

* Fix segment replication bug during primary relocation

Signed-off-by: Ashish Singh <[email protected]>

* Fix applicable for segrep local indexes only

Signed-off-by: Ashish Singh <[email protected]>

---------

Signed-off-by: Ashish Singh <[email protected]>
@ashking94 (Member Author) commented:

Raised manual backport for 2.19 - #18958

ashking94 pushed a commit that referenced this pull request Aug 7, 2025
* Fix segment replication bug during primary relocation



* Fix applicable for segrep local indexes only



---------


(cherry picked from commit 251cc36)

Signed-off-by: Ashish Singh <[email protected]>
Signed-off-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
mch2 pushed a commit that referenced this pull request Aug 7, 2025
* Fix segment replication bug during primary relocation



* Fix applicable for segrep local indexes only



---------

Signed-off-by: Ashish Singh <[email protected]>
RajatGupta02 pushed a commit to RajatGupta02/OpenSearch that referenced this pull request Aug 18, 2025
…ject#18944)

* Fix segment replication bug during primary relocation

Signed-off-by: Ashish Singh <[email protected]>

* Fix applicable for segrep local indexes only

Signed-off-by: Ashish Singh <[email protected]>

---------

Signed-off-by: Ashish Singh <[email protected]>
kh3ra pushed a commit to kh3ra/OpenSearch that referenced this pull request Sep 5, 2025
…ject#18944)

* Fix segment replication bug during primary relocation

Signed-off-by: Ashish Singh <[email protected]>

* Fix applicable for segrep local indexes only

Signed-off-by: Ashish Singh <[email protected]>

---------

Signed-off-by: Ashish Singh <[email protected]>
vinaykpud pushed a commit to vinaykpud/OpenSearch that referenced this pull request Sep 26, 2025
…ject#18944)

* Fix segment replication bug during primary relocation

Signed-off-by: Ashish Singh <[email protected]>

* Fix applicable for segrep local indexes only

Signed-off-by: Ashish Singh <[email protected]>

---------

Signed-off-by: Ashish Singh <[email protected]>

Labels

autocut · backport 2.19 · backport 3.2 (Backport to 3.2 branch) · backport-failed · flaky-test (Random test failure that succeeds on second run) · skip-changelog · >test-failure (Test failure from CI, local build, etc.)

Development

Successfully merging this pull request may close these issues.

[AUTOCUT] Gradle Check Flaky Test Report for FullRollingRestartIT

4 participants