[SPARK-38135][K8S] Introduce job scheduling related configurations #35436
Conversation
Four review threads on resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala were marked outdated and resolved.
```scala
.version("3.3.0")
.intConf
.checkValue(value => value > 0, "The minimum number should be a positive integer")
.createWithDefault(1)
```
Does this mean that the driver and the executor(s) will be on the same pod?
No, it's for the driver. In the spot-instance case, we allow the user to create only the driver pod first, so the default is equivalent to no limit.
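For context, the fragment quoted above would normally sit inside a full ConfigBuilder entry in Config.scala. Below is a minimal sketch, assuming the entry is the proposed spark.kubernetes.job.minMember; the val name and doc string are inferred from the PR description, not copied from the diff.

```scala
import org.apache.spark.internal.config.ConfigBuilder

// Hypothetical entry: the name and doc text are assumptions based on
// the PR description rather than the actual patch.
val KUBERNETES_JOB_MIN_MEMBER =
  ConfigBuilder("spark.kubernetes.job.minMember")
    .doc("The minimum number of pods running in a job.")
    .version("3.3.0")
    .intConf
    .checkValue(value => value > 0, "The minimum number should be a positive integer")
    .createWithDefault(1)
```

The default of 1 matches the reply above: only the driver pod is required, which behaves like no limit.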
dongjoon-hyun left a comment:
Sorry, but -1. Apache Spark doesn't add configurations first like this.
It seems that you are confused. Please provide a complete working PR with the test coverage.
What changes were proposed in this pull request?
This patch introduces five scheduling-related configurations:
- spark.kubernetes.job.queue: the name of the queue to which the job is submitted.
- spark.kubernetes.job.minCPU: the minimum CPU required to run the job.
- spark.kubernetes.job.minMemory: the minimum memory required to run the job, in MiB unless otherwise specified.
- spark.kubernetes.job.minMember: the minimum number of pods running in a job.
- spark.kubernetes.job.priorityClassName: the priority of the running job.

These values are stored in the Spark configuration and passed to the specified feature step; for example, they would be consumed by the YuniKorn or Volcano feature steps. A usage sketch follows below.
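To make the proposed surface concrete, here is a minimal usage sketch in Scala. The queue name, resource amounts, and priority class are illustrative assumptions; only the configuration keys come from this PR.

```scala
import org.apache.spark.SparkConf

object SchedulingConfExample {
  def main(args: Array[String]): Unit = {
    // Illustrative values only; the keys are the ones proposed in this PR.
    val conf = new SparkConf()
      .set("spark.kubernetes.job.queue", "spark-queue")              // target scheduler queue
      .set("spark.kubernetes.job.minCPU", "4")                       // minimum CPU to start the job
      .set("spark.kubernetes.job.minMemory", "8192")                 // minimum memory, in MiB
      .set("spark.kubernetes.job.minMember", "1")                    // default: driver pod only
      .set("spark.kubernetes.job.priorityClassName", "high-priority")

    // Print the resulting configuration for inspection.
    conf.getAll.sorted.foreach { case (k, v) => println(s"$k=$v") }
  }
}
```

In a Volcano-style feature step, values like these would typically map onto PodGroup fields such as minMember and minResources.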
Why are the changes needed?
This PR helps users integrate Spark with the Volcano scheduler to enable minResource/queue/priority support.
See also: SPARK-36057
Does this PR introduce any user-facing change?
Yes, configurations are added.
How was this patch tested?
CI passed