Filter shards for sliced search at coordinator #16771
base: main
Conversation
Codecov Report
Attention: Patch coverage is

Additional details and impacted files

@@             Coverage Diff              @@
##               main   #16771      +/-   ##
============================================
- Coverage     72.32%   72.12%    -0.20%
+ Complexity    65310    65193      -117
============================================
  Files          5299     5299
  Lines        303534   303561       +27
  Branches      43941    43954       +13
============================================
- Hits         219527   218948      -579
- Misses        66021    66629      +608
+ Partials      17986    17984        -2

View full report in Codecov by Sentry.
server/src/main/java/org/opensearch/cluster/routing/OperationRouting.java
        }
        List<ShardIterator> allShardIterators = new ArrayList<>();
        for (List<ShardIterator> indexIterators : shardIterators.values()) {
            if (slice != null) {
I think you don't need to check (slice != null) on every iteration; maybe something along these lines:
if (slice != null) {
    for (List<ShardIterator> indexIterators : shardIterators.values()) {
        // Filter the returned shards for the given slice
        CollectionUtil.timSort(indexIterators);
        // We use the ordinal of the iterator in the group (after sorting) rather than the shard id, because
        // computeTargetedShards may return a subset of shards for an index, if a routing parameter was
        // specified. In that case, the set of routable shards is considered the full universe of available
        // shards for each index, when mapping shards to slices. If no routing parameter was specified,
        // then ordinals and shard IDs are the same. This mimics the logic in
        // org.opensearch.search.slice.SliceBuilder.toFilter.
        for (int i = 0; i < indexIterators.size(); i++) {
            if (slice.shardMatches(i, indexIterators.size())) {
                allShardIterators.add(indexIterators.get(i));
            }
        }
    }
} else {
    shardIterators.values().forEach(allShardIterators::addAll);
}
With this change, we avoid creating unnecessary long-lived search contexts on every shard, but you're complaining about unnecessary (extremely cheap) null checks in a tight loop?!
This is what makes you awesome, @reta! I love how you say, "Yeah, this is cool, but couldn't it be better?"
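As a side note on what the slice.shardMatches(i, indexIterators.size()) call above is deciding, here is a minimal standalone sketch of a slice-to-shard-ordinal check. The modulo-based mapping is an illustrative assumption that mirrors the ordinal-based assignment described in the comment; the actual logic lives in org.opensearch.search.slice.SliceBuilder and may differ in detail.

// Illustrative sketch only: assumes a simple modulo-based assignment of
// slices to shard ordinals. Not the exact SliceBuilder implementation.
static boolean sliceMatchesOrdinal(int sliceId, int maxSlices, int shardOrdinal, int numShards) {
    if (maxSlices <= numShards) {
        // Fewer slices than shards: each slice owns the shards whose
        // ordinal maps onto it, so no per-document filter is needed there.
        return shardOrdinal % maxSlices == sliceId;
    } else {
        // More slices than shards: several slices share one shard and are
        // further split by a per-document filter within that shard.
        return sliceId % numShards == shardOrdinal;
    }
}

Either way, the point of the coordinator-side check is that a slice never opens a search context on a shard it cannot match.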
Description
Prior to this commit, a sliced search would fan out to every shard, then apply a MatchNoDocsQuery filter on shards that don't correspond to the current slice. This still creates a (useless) search context on each shard for every slice, though. For a long-running sliced scroll, this can quickly exhaust the number of available scroll contexts.
This change avoids fanning out to all the shards by checking at the coordinator if a shard is matched by the current slice. This should reduce the number of open scroll contexts to max(numShards, numSlices) instead of numShards * numSlices.
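For context on those numbers, here is a sketch of how a client typically drives a sliced scroll. The request-building calls (SliceBuilder, SearchSourceBuilder.slice, SearchRequest.scroll) are standard OpenSearch APIs; the index name, page size, and surrounding loop are illustrative assumptions.

import org.opensearch.action.search.SearchRequest;
import org.opensearch.search.builder.SearchSourceBuilder;
import org.opensearch.search.slice.SliceBuilder;

public class SlicedScrollSketch {
    public static void main(String[] args) {
        // Sketch: each of numSlices workers opens its own scroll over "my-index"
        // (a hypothetical index). Before this change, every slice opened a search
        // context on every shard (numShards * numSlices contexts in total); with
        // coordinator-side filtering, a slice only opens contexts on the shards it
        // actually targets, i.e. at most max(numShards, numSlices) overall.
        int numSlices = 4;
        for (int sliceId = 0; sliceId < numSlices; sliceId++) {
            SearchSourceBuilder source = new SearchSourceBuilder()
                .slice(new SliceBuilder(sliceId, numSlices))
                .size(1000);
            SearchRequest request = new SearchRequest("my-index")
                .source(source)
                .scroll("1m");
            // ... submit the request with a client and page through the scroll ...
        }
    }
}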
Related Issues
Related to #16289
Check List
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.