On Kubernetes, set pod anti-affinity at the host level for pods of type 'ray' #4131
Conversation
Test FAILed.
ericl left a comment:
How about using the soft form instead? `preferredDuringSchedulingIgnoredDuringExecution`
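For illustration, a minimal sketch of what the soft form could look like; the `type: ray` label selector and the hostname topology key are assumed from the PR description rather than taken from its actual diff:

```yaml
# Sketch only: soft (preferred) pod anti-affinity keyed off a 'type: ray' label.
# The scheduler tries to spread 'ray' pods across hosts but may still
# co-locate them if no other host is available.
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
              - key: type
                operator: In
                values:
                  - ray
          topologyKey: kubernetes.io/hostname
```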
Seems like during execution and scheduling you would always want node anti-affinity, not just during execution.
It looks like the current PR already ignores during execution. I think it's better to use a soft instead of a hard constraint as the default, is all.
I definitely prefer the harder form (`requiredDuringSchedulingIgnoredDuringExecution`) over the softer one (`preferredDuringSchedulingIgnoredDuringExecution`). I am looking at cluster stability and want a hard rule about scheduling of ray pods. If others want it as a cluster scheduling suggestion (which is the softer form), then I am OK with that; I will just use the hard version on our cluster.
ericl left a comment:
Ok, I think either is fine if you think there's a strong benefit, so this LGTM.
Can one of the admins verify this patch?
I just tried this out and it works for me. One issue I ran into when running this out of the box (pre-existing I think, and probably unrelated to this PR) is that
Thanks @virtualluke!
What do these changes do?
Added a `type: ray` label to the deployments' metadata and keyed pod anti-affinity at the host level off that label. This prevents Kubernetes from scheduling more than one pod of type 'ray' onto the same host.
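As a rough sketch of the approach described above (not the PR's actual manifests: the API version, deployment name, replica count, and container image are illustrative placeholders), a deployment with host-level anti-affinity on `type: ray` could look like this:

```yaml
apiVersion: apps/v1            # illustrative; the PR's manifests may use a different API version
kind: Deployment
metadata:
  name: ray-worker             # hypothetical name for illustration
spec:
  replicas: 3
  selector:
    matchLabels:
      type: ray
  template:
    metadata:
      labels:
        type: ray              # the label the anti-affinity rule keys off
    spec:
      affinity:
        podAntiAffinity:
          # Hard constraint: never schedule two 'type: ray' pods onto the same host.
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: type
                    operator: In
                    values:
                      - ray
              topologyKey: kubernetes.io/hostname
      containers:
        - name: ray-node       # placeholder container spec
          image: rayproject/ray
```

With `topologyKey: kubernetes.io/hostname`, the anti-affinity term operates at the host level; swapping `requiredDuringSchedulingIgnoredDuringExecution` for the weighted `preferredDuringSchedulingIgnoredDuringExecution` form (as sketched earlier in the thread) turns the same rule into a scheduling preference instead of a hard constraint.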