
Add ability to schedule splits based on Task load, not Node load.#26030

Merged
spershin merged 1 commit into prestodb:master from spershin:IndroduceTaskBasedSplitScheduling on Sep 18, 2025

Conversation

@spershin
Contributor

@spershin spershin commented Sep 12, 2025

Description

Adds a system configuration property and a session property to enable split scheduling based on task load rather than node load.

Motivation and Context

See #25906

Impact

This is particularly useful for the native worker, as it runs splits for tasks differently than the Java worker.
Reduces query execution time across the board (see the issue for details).

It might be worth counting only the queued splits for each task, rather than running + queued. The running splits are already executing, and whether they are slow or fast is reflected in how many splits remain queued behind them. The running count only matters in the rare window when K splits have completed but queued splits have not yet replaced them at the moment stats are reported to the coordinator, and races like that are unavoidable in any case.
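The queued-only vs. queued+running trade-off can be sketched with a toy model (the record and method names below are illustrative, not Presto's actual API):

```java
// Toy illustration of the two load metrics discussed above; illustrative
// names, not Presto's TaskStatus API.
public class TaskLoadSketch
{
    public record TaskLoad(long queuedWeight, long runningWeight) {}

    // Current approach: rank tasks by queued + running split weight.
    public static long queuedPlusRunning(TaskLoad load)
    {
        return load.queuedWeight() + load.runningWeight();
    }

    // Alternative raised above: consider only queued weight, since running
    // splits mostly influence load through the queue they leave behind.
    public static long queuedOnly(TaskLoad load)
    {
        return load.queuedWeight();
    }

    public static void main(String[] args)
    {
        TaskLoad a = new TaskLoad(10, 4);  // more queued, fewer running
        TaskLoad b = new TaskLoad(6, 12);  // fewer queued, more running
        // The two metrics can disagree on which task is "less busy":
        System.out.println(queuedPlusRunning(a) < queuedPlusRunning(b)); // true: 14 < 18
        System.out.println(queuedOnly(a) < queuedOnly(b));               // false: 10 > 6
    }
}
```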

Test Plan

No dedicated tests yet.
So far this has been run in the dev cluster with a shadowed real-life workload.

== NO RELEASE NOTE ==

@spershin spershin requested a review from a team as a code owner September 12, 2025 23:12
@prestodb-ci prestodb-ci added the from:Meta PR from Meta label Sep 12, 2025
@sourcery-ai
Contributor

sourcery-ai bot commented Sep 12, 2025

Reviewer's Guide

This PR introduces an optional mode to schedule splits based on per-task load instead of per-node load by adding a new configuration and session property, propagating them through NodeScheduler into SimpleNodeSelector, and implementing a new weighted selection algorithm in SimpleNodeSelector.

Sequence diagram for split assignment decision based on task load vs node load

```mermaid
sequenceDiagram
    participant "SimpleNodeSelector"
    participant "RemoteTask(s)"
    participant "InternalNode(s)"
    participant "NodeAssignmentStats"
    participant "NodeSelectionStats"
    participant "Session"
    "SimpleNodeSelector"->>"Session": Check scheduleSplitsBasedOnTaskLoad property
    alt scheduleSplitsBasedOnTaskLoad == true and tasks == nodes
        "SimpleNodeSelector"->>"RemoteTask(s)": Get task status and split weights
        "SimpleNodeSelector"->>"InternalNode(s)": Map tasks to nodes
        "SimpleNodeSelector"->>"NodeAssignmentStats": Get queued splits weight
        "SimpleNodeSelector"->>"NodeSelectionStats": Update stats
        "SimpleNodeSelector"->>"InternalNode(s)": Assign split to least busy node (by task load)
    else
        "SimpleNodeSelector"->>"InternalNode(s)": Get node split weights
        "SimpleNodeSelector"->>"NodeAssignmentStats": Get total splits weight
        "SimpleNodeSelector"->>"NodeSelectionStats": Update stats
        "SimpleNodeSelector"->>"InternalNode(s)": Assign split to least busy node (by node load)
    end
```
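The "least busy by task load" branch shown above can be sketched as follows (a simplified, self-contained model: a node-id to current-task-weight map stands in for the real RemoteTask/NodeAssignmentStats plumbing, and the cap check stands in for canAssignSplitBasedOnWeight):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Optional;

// Simplified sketch of choosing the least busy node by per-task load.
// Illustrative names; the real logic lives in SimpleNodeSelector.
public class LeastBusyByTaskLoad
{
    public static Optional<String> chooseLeastBusyNode(Map<String, Long> taskWeightByNodeId, long maxSplitsWeightPerTask, long splitWeight)
    {
        String chosenNode = null;
        long minWeight = Long.MAX_VALUE;
        for (Map.Entry<String, Long> entry : taskWeightByNodeId.entrySet()) {
            long currentWeight = entry.getValue();
            // Skip nodes whose task would exceed the per-task weight cap.
            if (currentWeight + splitWeight > maxSplitsWeightPerTask) {
                continue;
            }
            if (currentWeight < minWeight) {
                minWeight = currentWeight;
                chosenNode = entry.getKey();
            }
        }
        return Optional.ofNullable(chosenNode);
    }

    public static void main(String[] args)
    {
        Map<String, Long> weights = new LinkedHashMap<>();
        weights.put("node-1", 40L);
        weights.put("node-2", 15L);
        weights.put("node-3", 90L);
        // node-2 carries the lightest task load, so the split goes there.
        System.out.println(chooseLeastBusyNode(weights, 100, 10)); // Optional[node-2]
    }
}
```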

Class diagram for updated NodeSchedulerConfig and related properties

```mermaid
classDiagram
    class NodeSchedulerConfig {
        int minCandidates
        boolean includeCoordinator
        int maxSplitsPerNode
        int maxSplitsPerTask
        boolean scheduleSplitsBasedOnTaskLoad
        int maxPendingSplitsPerTask
        int maxUnacknowledgedSplitsPerTask
        String networkTopology
        int getMaxSplitsPerTask()
        NodeSchedulerConfig setMaxSplitsPerTask(int)
        boolean isScheduleSplitsBasedOnTaskLoad()
        NodeSchedulerConfig setScheduleSplitsBasedOnTaskLoad(boolean)
    }
    class SystemSessionProperties {
        +SCHEDULE_SPLITS_BASED_ON_TASK_LOAD : String
        +isScheduleSplitsBasedOnTaskLoad(Session) : Boolean
    }
    NodeSchedulerConfig <.. SystemSessionProperties : uses
```

Class diagram for updated NodeScheduler and SimpleNodeSelector

```mermaid
classDiagram
    class NodeScheduler {
        int minCandidates
        boolean includeCoordinator
        long maxSplitsWeightPerNode
        long maxSplitsWeightPerTask
        long maxPendingSplitsWeightPerTask
        NodeTaskMap nodeTaskMap
        NodeSelector createNodeSelector(Session, ConnectorId, Supplier<NodeMap>)
    }
    class SimpleNodeSelector {
        NodeSelectionStats nodeSelectionStats
        NodeTaskMap nodeTaskMap
        boolean includeCoordinator
        boolean scheduleSplitsBasedOnTaskLoad
        AtomicReference<Supplier<NodeMap>> nodeMap
        int minCandidates
        long maxSplitsWeightPerNode
        long maxSplitsWeightPerTask
        long maxPendingSplitsWeightPerTask
        int maxUnacknowledgedSplitsPerTask
        int maxTasksPerStage
        Optional<InternalNodeInfo> chooseLeastBusyNodeBasedOnTaskLoad(...)
        Optional<InternalNodeInfo> chooseLeastBusyNode(...)
    }
    NodeScheduler o-- SimpleNodeSelector : creates
```

File-Level Changes

Change Details Files
Introduce task-load scheduling configuration and session property
  • Add maxSplitsPerTask and scheduleSplitsBasedOnTaskLoad fields with getters and @Config setters in NodeSchedulerConfig
  • Expose schedule_splits_based_on_task_load as a session property in SystemSessionProperties
NodeSchedulerConfig.java
SystemSessionProperties.java
Propagate new scheduling flags through NodeScheduler
  • Read scheduleSplitsBasedOnTaskLoad and maxSplitsPerTask from config and session
  • Pass new parameters into SimpleNodeSelector constructor
NodeScheduler.java
Extend SimpleNodeSelector with task-load based split assignment
  • Add flag and maxSplitsWeightPerTask field to constructor
  • Branch computeAssignments to call chooseLeastBusyNodeBasedOnTaskLoad when enabled
  • Implement chooseLeastBusyNodeBasedOnTaskLoad method that computes and compares per-task weights
SimpleNodeSelector.java
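The config-plus-session-property wiring described above can be sketched with a toy model (the real code registers the property in SystemSessionProperties via Presto's PropertyMetadata machinery, not a raw map; only the property name mirrors the PR):

```java
import java.util.Map;

// Toy stand-in for a boolean session property with a config-backed default.
// The helper types are invented; only the property name mirrors the PR.
public class SessionPropertySketch
{
    public static final String SCHEDULE_SPLITS_BASED_ON_TASK_LOAD = "schedule_splits_based_on_task_load";

    // A session-level value overrides the node-scheduler config default.
    public static boolean isScheduleSplitsBasedOnTaskLoad(Map<String, String> sessionProperties, boolean configDefault)
    {
        String value = sessionProperties.get(SCHEDULE_SPLITS_BASED_ON_TASK_LOAD);
        return value == null ? configDefault : Boolean.parseBoolean(value);
    }

    public static void main(String[] args)
    {
        // No session override: config default wins.
        System.out.println(isScheduleSplitsBasedOnTaskLoad(Map.of(), true)); // true
        // Session override wins over the config default.
        System.out.println(isScheduleSplitsBasedOnTaskLoad(Map.of(SCHEDULE_SPLITS_BASED_ON_TASK_LOAD, "false"), true)); // false
    }
}
```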


Contributor

@sourcery-ai sourcery-ai bot left a comment

Hey there - I've reviewed your changes and they look great!

Prompt for AI Agents
Please address the comments from this code review:
## Individual Comments

### Comment 1
<location> `presto-main-base/src/main/java/com/facebook/presto/execution/scheduler/nodeSelection/SimpleNodeSelector.java:242` </location>
<code_context>
+    protected Optional<InternalNodeInfo> chooseLeastBusyNodeBasedOnTaskLoad(SplitWeight splitWeight, List<RemoteTask> existingTasks, OptionalInt preferredNodeCount, long maxSplitsWeight, NodeAssignmentStats assignmentStats)
</code_context>

<issue_to_address>
Consider handling the case where node is null more explicitly.

If node is unexpectedly null, consider logging a warning or throwing an exception to improve debugging and visibility.

Suggested implementation:

```java
            InternalNode node = nodeMap.getActiveNodesByNodeId().get(remoteTask.getNodeId());
            if (node == null) {
                // Log a warning for unexpected null node
                log.warn("Node with ID %s is not found in active nodes. This should not happen.", remoteTask.getNodeId());
                // Optionally, throw an exception to fail fast and aid debugging
                // throw new IllegalStateException("Node with ID " + remoteTask.getNodeId() + " is not found in active nodes.");
                continue;
            }

```

1. Ensure that a suitable logger (e.g., `private static final Logger log = Logger.get(SimpleNodeSelector.class);`) is present at the top of the class. If not, add it.
2. Decide whether to log only, throw an exception, or both, depending on your desired failure mode. The code above logs a warning and continues, but you can uncomment the exception if you want to fail fast.
</issue_to_address>

### Comment 2
<location> `presto-main-base/src/main/java/com/facebook/presto/execution/scheduler/nodeSelection/SimpleNodeSelector.java:268` </location>
<code_context>
+                continue;
+            }
+            TaskStatus taskStatus = remoteTask.getTaskStatus();
+            long currentWeight = taskStatus.getQueuedPartitionedSplitsWeight() + taskStatus.getRunningPartitionedSplitsWeight() + assignmentStats.getQueuedSplitsWeightForStage(node);
+            boolean canAssignToNode = canAssignSplitBasedOnWeight(currentWeight, maxSplitsWeight, splitWeight);
+
</code_context>

<issue_to_address>
The calculation of currentWeight may double-count queued splits.

Ensure that the queued splits weight from assignmentStats and taskStatus are not overlapping to avoid double-counting.
</issue_to_address>
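The double-counting concern can be illustrated with a toy model (the numbers and parameter names are invented, not taken from Presto's actual accounting): if a newly assigned split is reflected both in the task's reported queued weight and in the coordinator-side pending stats, the sum overstates the load.

```java
// Toy illustration of the double-counting risk flagged above;
// invented names and numbers, not Presto's real accounting.
public class DoubleCountSketch
{
    public static long combinedWeight(long reportedQueued, long reportedRunning, long coordinatorPendingForStage)
    {
        return reportedQueued + reportedRunning + coordinatorPendingForStage;
    }

    public static void main(String[] args)
    {
        // Suppose 3 units of split weight were just assigned: the coordinator
        // still tracks them as pending (3) while the worker's status already
        // reports them as queued (5 + 3).
        long weight = combinedWeight(5 + 3, 2, 3);
        System.out.println(weight); // 13, though only 10 units are actually outstanding
    }
}
```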

### Comment 3
<location> `presto-main-base/src/main/java/com/facebook/presto/execution/scheduler/NodeSchedulerConfig.java:137` </location>
<code_context>
+    }
+
+    @Config("node-scheduler.schedule-splits-based-on-task-load")
+    public NodeSchedulerConfig setScheduleSplitsBasedOnTaskLoad(boolean scheduleSplitsBasedOnTaskLoad)
+    {
+        this.scheduleSplitsBasedOnTaskLoad = scheduleSplitsBasedOnTaskLoad;
</code_context>

<issue_to_address>
Missing @ConfigDescription for scheduleSplitsBasedOnTaskLoad.

Please add the annotation to enhance documentation and make the configuration option easier to find.
</issue_to_address>
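A minimal sketch of the setter with the suggested annotation added (stub annotation types stand in for airlift's @Config and @ConfigDescription so the snippet compiles on its own; the description text is a suggestion, not the PR's wording):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Sketch of the config getter/setter pair with @ConfigDescription added.
// Stub annotations stand in for com.facebook.airlift.configuration.Config
// and ConfigDescription so this compiles standalone.
public class NodeSchedulerConfigSketch
{
    @Retention(RetentionPolicy.RUNTIME) @interface Config { String value(); }
    @Retention(RetentionPolicy.RUNTIME) @interface ConfigDescription { String value(); }

    private boolean scheduleSplitsBasedOnTaskLoad;

    public boolean isScheduleSplitsBasedOnTaskLoad()
    {
        return scheduleSplitsBasedOnTaskLoad;
    }

    @Config("node-scheduler.schedule-splits-based-on-task-load")
    @ConfigDescription("Schedule splits based on the load of individual tasks rather than total node load")
    public NodeSchedulerConfigSketch setScheduleSplitsBasedOnTaskLoad(boolean scheduleSplitsBasedOnTaskLoad)
    {
        this.scheduleSplitsBasedOnTaskLoad = scheduleSplitsBasedOnTaskLoad;
        return this;
    }

    public static void main(String[] args)
    {
        System.out.println(new NodeSchedulerConfigSketch().setScheduleSplitsBasedOnTaskLoad(true).isScheduleSplitsBasedOnTaskLoad()); // true
    }
}
```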


Comment on lines 242 to 251

```java
protected Optional<InternalNodeInfo> chooseLeastBusyNodeBasedOnTaskLoad(SplitWeight splitWeight, List<RemoteTask> existingTasks, OptionalInt preferredNodeCount, long maxSplitsWeight, NodeAssignmentStats assignmentStats)
{
    long minWeight = Long.MAX_VALUE;
    InternalNode chosenNode = null;
    NodeMap nodeMap = this.nodeMap.get().get();
    for (int i = 0; i < existingTasks.size(); i++) {
        RemoteTask remoteTask = existingTasks.get(i);

        InternalNode node = nodeMap.getActiveNodesByNodeId().get(remoteTask.getNodeId());
        // TODO(spershin): This, ideally, should not happen. Should we throw instead?
        if (node == null) {
            continue;
        }
        TaskStatus taskStatus = remoteTask.getTaskStatus();
        long currentWeight = taskStatus.getQueuedPartitionedSplitsWeight() + taskStatus.getRunningPartitionedSplitsWeight() + assignmentStats.getQueuedSplitsWeightForStage(node);
```

Contributor (sourcery-ai bot):

suggestion: Consider handling the case where node is null more explicitly (same as Comment 1 above).

question (bug_risk): The calculation of currentWeight may double-count queued splits (same as Comment 2 above).

@spershin spershin force-pushed the IndroduceTaskBasedSplitScheduling branch from 60b528c to e2c80ce Compare September 12, 2025 23:20
Contributor

@steveburnett steveburnett left a comment

LGTM! (docs)

Pull branch, local doc build, looks good. Thanks!

@steveburnett
Contributor

I assume that a corresponding session property does not exist?

@spershin
Contributor Author

> I assume that a corresponding session property does not exist?

@steveburnett
Do you mean session property added in this PR?

@steveburnett
Copy link
Contributor

> I assume that a corresponding session property does not exist?
>
> @steveburnett Do you mean session property added in this PR?

I suppose I do. If it existed I'd ask for it to be added to the doc, and cross-linked like in this example screenshot of a different configuration property, but if there is no session property no worries. Thanks!

[Screenshot, 2025-09-16: example of a configuration property doc entry cross-linked to its session property]

@spershin
Copy link
Contributor Author

@steveburnett
This PR does add a new session property.
Here: https://github.com/prestodb/presto/pull/26030/files#diff-868975b0683b79f53b8cfe89e409ef97376e6aeeca7a8116839db1aaa80e3fdf

I looked up a few session properties nearby and didn't see them in any documentation.
Was I looking in the wrong spot?

@steveburnett
Contributor

> @steveburnett This PR does add a new session property. Here: https://github.com/prestodb/presto/pull/26030/files#diff-868975b0683b79f53b8cfe89e409ef97376e6aeeca7a8116839db1aaa80e3fdf

Thanks!

> I looked up few session props around and didn't see them in any documentation. Was I looking at the wrong spot?

Session property documentation is in
https://github.com/prestodb/presto/blob/master/presto-docs/src/main/sphinx/admin/properties-session.rst

Look at line 687 in https://github.com/prestodb/presto/blob/master/presto-docs/src/main/sphinx/admin/properties.rst for a cross-reference entry linking to properties-session.rst:

```rst
The corresponding session property is :ref:`admin/properties-session:\`\`task_writer_count\`\``.
```

The session property doc entry should also have a cross-reference link to the configuration property.
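Applied to the property added in this PR, that back-link might look roughly like this (illustrative wording; assuming properties.rst exposes section labels under the same :ref: convention as properties-session.rst):

```rst
The corresponding configuration property is :ref:`admin/properties:\`\`node-scheduler.schedule-splits-based-on-task-load\`\``.
```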

Let me know if that answered your questions!

@spershin
Contributor Author

spershin commented Sep 17, 2025

@steveburnett

Yes, that helps.
I was checking whether the "resource_aware_scheduling_strategy" session property is in the documentation, and it is not!
So I skipped adding the new one.
Will add it.

Actually, it seems like a lot of session properties are NOT in properties-session.rst. Is that normal?
Is that normal?

@spershin spershin force-pushed the IndroduceTaskBasedSplitScheduling branch from e2c80ce to bb236ab Compare September 17, 2025 18:37
@steveburnett
Contributor

> @steveburnett
>
> Yes, that helps. I was looking if "resource_aware_scheduling_strategy" session prop is in the documentation and it is not! So I skipped adding the new one. Will add it.

Thanks, that's appreciated!

> Actually, seems like a lot of session props are NOT in properties-session.rst Is that normal?

Great point! Yes, it is normal - but it is also not the desired state of the documentation. Identifying the missing ones and documenting them, as well as any missing configuration properties, is something I want to address when possible. Until then I ask questions like these in new PRs to try to keep the doc gap from getting larger :). Thanks for your help!

steveburnett previously approved these changes Sep 17, 2025

Contributor

@steveburnett steveburnett left a comment

LGTM! (docs)

Pull updated branch, new local doc build, looks good. Thanks!

Contributor

@rschlussel rschlussel left a comment

can you add tests?

@spershin spershin requested a review from rschlussel September 17, 2025 22:00
@spershin spershin force-pushed the IndroduceTaskBasedSplitScheduling branch 2 times, most recently from 99cde6f to 9651173 Compare September 17, 2025 23:51
@spershin spershin force-pushed the IndroduceTaskBasedSplitScheduling branch 2 times, most recently from 0484ca7 to 44f7001 Compare September 18, 2025 20:02
Contributor

@rschlussel rschlussel left a comment

small comment for the test. Otherwise looks good. Thanks!

@spershin spershin force-pushed the IndroduceTaskBasedSplitScheduling branch from 44f7001 to a86be1a Compare September 18, 2025 20:46
@spershin spershin requested a review from rschlussel September 18, 2025 20:46
@spershin spershin merged commit 48bd9f8 into prestodb:master Sep 18, 2025
74 checks passed

Labels

from:Meta PR from Meta

4 participants