Private/onprem clusters always need explicit ssh_private_key in docker #10838
If the key is available on the host at
This should really be documented...

@DmitriGekhtman, what do you suggest we do here?

@AmeerHajAli @ijrsvt why is this an issue for on-prem clusters but not for cloud clusters? In both cases, the head needs ssh access to workers.
Oh I see, it's a documentation problem. I think we could add an example-docker.yaml or something with this info to the local cluster examples. Or modify example-full to use docker -- my main hesitation there is that variable-size clusters don't really work right now with docker; also, we're currently not careful enough to clean up docker state when we're done using Ray on an on-prem node.
For cloud clusters we auto-insert this field!
GCP:
Right, we try to auto-configure a key for cloud providers. But for cloud users who supply a key manually, the situation should be the same as for on-prem users who supply a key?
When starting private clusters without docker, it is not necessary to provide ssh_private_key, because the user already has authorized ssh access to the nodes. But once docker is added, the head node process running inside the container does not have the privileges to ssh to the other nodes, so ssh_private_key must be provided explicitly.
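To make the point above concrete, here is a minimal sketch of what such a cluster config could look like. The field names follow the Ray autoscaler YAML schema; the IPs, image, user, and key path are placeholders, not values from this issue:

```yaml
# Sketch of a local (on-prem) Ray cluster config with docker enabled.
# All IPs, the image, ssh_user, and the key path below are placeholders.
cluster_name: onprem-docker

provider:
    type: local
    head_ip: 192.168.1.10
    worker_ips: [192.168.1.11, 192.168.1.12]

docker:
    image: rayproject/ray:latest
    container_name: ray_container

auth:
    ssh_user: ubuntu
    # Required when docker is enabled: the head process inside the
    # container needs this key to ssh into the worker nodes, even if
    # the host user already has passwordless access to them.
    ssh_private_key: ~/.ssh/id_rsa
```

Without the `ssh_private_key` line, starting the same cluster without docker would still work, since the ssh connection is then made from the host rather than from inside the container.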
I think it is fine to leave it as is but just wanted to bring it up.