[Doc] Adding docs for Kuberay KAI scheduler integration #54857
@@ -0,0 +1,300 @@

(kuberay-kai-scheduler)=
# Gang Scheduling, Queue Priority, and GPU Sharing for RayClusters using KAI Scheduler

This guide demonstrates how to use KAI Scheduler to set up hierarchical queues with quotas, gang scheduling, and GPU sharing for RayClusters.

## KAI Scheduler

[KAI Scheduler](https://github.com/NVIDIA/KAI-Scheduler) is a high-performance, scalable Kubernetes scheduler built for AI/ML workloads. Designed to orchestrate GPU clusters at massive scale, KAI optimizes GPU allocation and supports the full AI lifecycle, from interactive development to large distributed training and inference. Some of the key features are:

- **Bin-packing & Spread Scheduling**: Optimize node usage either by minimizing fragmentation (bin-packing) or increasing resiliency and load balancing (spread scheduling).
- **GPU Sharing**: Allow multiple Ray workloads from across teams to be packed on the same GPU, letting your organization fit more work onto your existing hardware and reducing idle GPU time.

Suggested change:
-  - **GPU Sharing**: Allow multiple Ray workloads from across teams to be packed on the same GPU, letting your organization fit more work onto your existing hardware and reducing idle GPU time.
+  - **GPU sharing**: Allow Ray to pack multiple workloads from across teams on the same GPU, letting your organization fit more work onto your existing hardware and reducing idle GPU time.
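
As a rough illustration of what GPU sharing looks like on the workload side, KAI Scheduler lets a pod request a fraction of a GPU through an annotation. A minimal sketch, assuming the `gpu-fraction` annotation key from KAI Scheduler's GPU-sharing examples (verify the key against the version you install):

```yaml
# Fragment of a Ray pod template (head or worker group):
# request half a GPU instead of a whole device.
metadata:
  annotations:
    gpu-fraction: "0.5"
```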
Suggested change:
-  - Quota: The baseline amount of resources guaranteed to the queue. Quotas are allocated first to ensure fairness.
+  - Quota: The baseline amount of resources guaranteed to the queue. Queues allocate quotas first to ensure fairness.
Suggested change:
-  - Queue Priority: Determines the order in which queues receive resources beyond their quota. Higher-priority queues are served first.
+  - Queue priority: Determines the order in which queues receive resources beyond their quota. The scheduler serves the higher-priority queues first.
Suggested change:
-  - Over-Quota Weight: Controls how surplus resources are shared among queues within the same priority level. Queues with higher weights receive a larger share of the extra resources.
+  - Over-quota weight: Controls how the scheduler shares surplus resources among queues within the same priority level. Queues with higher weights receive a larger share of the extra resources.
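
To see quota, priority, and over-quota weight together, here is a sketch of a parent/child queue pair. It follows the Queue CRD shape from the KAI Scheduler quickstart; the `scheduling.run.ai/v2` API group, the `priority` field, and the queue names are assumptions to verify against your installed version:

```yaml
apiVersion: scheduling.run.ai/v2
kind: Queue
metadata:
  name: department-a          # parent queue in the hierarchy
spec:
  resources:
    gpu:
      quota: 4                # baseline GPUs guaranteed to this queue
      limit: -1               # -1 means no upper bound
      overQuotaWeight: 1      # share of surplus relative to sibling queues
---
apiVersion: scheduling.run.ai/v2
kind: Queue
metadata:
  name: team-a                # child queue under department-a
spec:
  parentQueue: department-a
  priority: 100               # served before lower-priority sibling queues
  resources:
    gpu:
      quota: 2
      limit: -1
      overQuotaWeight: 2      # larger share of extra GPUs than weight-1 siblings
```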
Suggested change:
- Install KAI Scheduler with gpu-sharing enabled:
+ Install KAI Scheduler with gpuSharing enabled:
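
For reference, the install command with GPU sharing enabled looks roughly like the following, based on the Helm instructions in the KAI Scheduler README at the time of writing; pin `<VERSION>` from the project's releases page:

```sh
# Install (or upgrade) KAI Scheduler from its OCI Helm chart
# with the GPU-sharing feature flag turned on.
helm upgrade -i kai-scheduler oci://ghcr.io/nvidia/kai-scheduler/kai-scheduler \
  -n kai-scheduler --create-namespace \
  --version <VERSION> \
  --set "global.gpuSharing=true"
```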
Would you mind putting a link to the KAI Scheduler releases page in this section (Install KAI Scheduler) to help the user find the version easily?
It seems you need to install the GPU Operator first, as mentioned in the KAI Scheduler prerequisites, even when not using GPUs. Otherwise, the kai-operator reports:
no matches for kind "ClusterPolicy" in version "nvidia.com/v1"
By the way, I guess this might result from a recent change. Previously, installing the GPU Operator didn't seem necessary.
Suggested change:
- The key pattern is to simply add the queue label to your RayCluster. [Here's a basic example](https://github.com/ray-project/kuberay/tree/master/ray-operator/config/samples/ray-cluster.kai-scheduler.yaml) from the KubeRay repository:
+ The key pattern adds the queue label to your RayCluster. See the [basic example](https://github.com/ray-project/kuberay/tree/master/ray-operator/config/samples/ray-cluster.kai-scheduler.yaml) from the KubeRay repository.
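
To make the pattern concrete, a minimal sketch of a RayCluster carrying the queue label. The `kai.scheduler/queue` label key matches KAI Scheduler's workload examples, but the queue name and image tag are illustrative; the linked sample shows the exact form:

```yaml
apiVersion: ray.io/v1
kind: RayCluster
metadata:
  name: raycluster-kai
  labels:
    kai.scheduler/queue: team-a   # routes the whole cluster to a KAI queue
spec:
  headGroupSpec:
    rayStartParams: {}
    template:
      spec:
        containers:
          - name: ray-head
            image: rayproject/ray:2.41.0   # illustrative tag
  workerGroupSpecs:
    - groupName: workers
      replicas: 2
      rayStartParams: {}
      template:
        spec:
          containers:
            - name: ray-worker
              image: rayproject/ray:2.41.0
```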
It would be better to include the output of the command to help the reader verify the expected output easily.
Suggested change:
- priorityClassName: build # Here you can specify the priority class (optional)
+ priorityClassName: build # Specify the priority class (optional)
Bug: RayCluster Priority Class Misplacement
The priorityClassName field is incorrectly placed in metadata.labels. For RayClusters, priorityClassName belongs in the pod template spec (e.g., spec.headGroupSpec.template.spec and spec.workerGroupSpecs[].template.spec), not as a label. This placement means the priority class won't be applied to the pods.
Hi @EkinKarabulut, could you make priorityClassName spec-level?
Others look good to me.
Hi @rueian, KAI Scheduler reads priority classes from workload labels (metadata.labels.priorityClassName) rather than pod specs, which allows it to assign priority to entire workloads. This is consistent with KAI Scheduler's official documentation and examples.
Oh nice! I'll add a comment saying that it shouldn't be the priorityClassName in the pod spec.
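
To illustrate the resolved placement, a sketch of the workload-level label that KAI Scheduler reads; the queue and priority class names are illustrative:

```yaml
apiVersion: ray.io/v1
kind: RayCluster
metadata:
  name: raycluster-build
  labels:
    kai.scheduler/queue: team-a
    # KAI Scheduler reads the priority class from this workload label,
    # not from the priorityClassName field in the pod spec.
    priorityClassName: build
```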