[Resolved] How can docker-machine add an existing docker host? #3212
Does docker-machine support adding an existing docker host?
As I was searching the docs for the last 5 hours: in fact, there is an undocumented "--driver=none" option. See #2270.
Related issue: I have a droplet on DigitalOcean created with docker-machine. How can I manage that instance from another laptop? A colleague recreated the certs from his laptop using
I've found a workaround: #2270
So I copied all the files from my colleague's folder and replaced all the paths with those of my machine. In my case the final path is
@dweomer Isn't adding an existing docker machine from another computer a very common and basic use case?
@atemerev: No, I do not think that it is. As I understand it, Docker Machine exists to create/provision hosts that are Docker-enabled. That being said, there exists the somewhat less-than-obvious
@dweomer 1. Docker daemon process running on the remote host. 2. Configured the ssh connection without a password. 3. Added the remote docker host to the local docker-machine; the debug output included lines like CENTOS_MANTISBT_PROJECT="CentOS-7", "Couldn't set key CPE_NAME, no corresponding struct field found", and "(linodevps) Calling .GetMachineName". 4. On the remote docker host's command line: the docker daemon process was restarted, but there were two extra docker version processes; when I then tried executing the docker version command manually, it hung. It's so difficult to add an existing docker host to the docker-machine command line...
If the remote host has no docker engine on it, it's easy to create a docker host (install the docker engine) from the docker-machine command line; but given the problem above, docker-machine is limited.
@dweomer This is true, but if I provisioned docker-machine or docker-swarm, how do I, say, enable another developer to deploy containers there? How do I transfer configuration/env variables between machines? So far, I can only use provisioned docker-machine hosts myself, and only from a single machine (what if it breaks? How do I restore the configuration on another one?). For me, docker-machine/docker-swarm are very far from being production ready, and should be marked beta at least. Or I'd like to hear from anyone actually using them in production...
FWIW, if you just drop ~/.docker from that workstation host onto the new one (presuming the home directory is the same path), it will work. If the home directory differs, you will need to edit the .json files in several places (keys and certificates, etc.) to change e.g.
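The edit described here can be scripted. A minimal sketch, assuming the usual layout of absolute paths inside each machine's config.json; the two home directories and the machine name are fabricated for the demo, and a tiny config.json is created in a temp directory so the script is self-contained:

```shell
#!/bin/sh
# Demo of the path rewrite needed after copying ~/.docker between homes.
# OLD_HOME/NEW_HOME stand in for the two real home directories; the
# config.json is fabricated in a temp dir so this runs anywhere.
set -e
OLD_HOME="/Users/alice"   # colleague's home (assumed)
NEW_HOME="/Users/bob"     # your home (assumed)

TOP=$(mktemp -d)
mkdir -p "$TOP/machines/somevps"
printf '{"CaCertPath": "%s/.docker/machine/certs/ca.pem"}\n' "$OLD_HOME" \
  > "$TOP/machines/somevps/config.json"

# Rewrite every occurrence of the old home prefix in each config.json
# (key, cert, and store paths are all absolute in these files).
find "$TOP" -name config.json | while read -r f; do
  sed -i.bak "s|$OLD_HOME|$NEW_HOME|g" "$f"
done

grep "$NEW_HOME" "$TOP/machines/somevps/config.json"
```

On a real system you would point the find at ~/.docker/machine (and keep the .bak files until you have verified docker-machine ls works).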
Seems this question has been thoroughly addressed and/or covers ground already established in other issues. Thanks all.
@nathanleclaire Just to be clear, is the official approach for adding existing docker hosts (whether created by docker-machine or otherwise) to copy the ~/.docker folder between the client machines? If so, is there any particular subset of the files/a file we need to copy? If the official approach is instead to use the generic driver, it would be useful to address the issue @tdy218 outlined in the comment here.
I don't think that this question has been addressed at all. I'm trying to push code to a new machine and the best answer that anyone can come up with is to find the guy that made the machine and copy his files to mine. I'm not sure if anyone has tried this out here in the real world, but it sucks.
@TheSeanBrady Let's keep discussion on issues civil. If you have proposals for solutions you would like to see, please share those. Let's stay focused on solutions over problems.
I'm just being honest. You try sending emails asking for files, because so far it's not working for me. This was closed without addressing the issue.
You don't feel that saying something "sucks" and implying that the rest of us don't live "in the real world" is unnecessarily harsh and non-constructive? We try to foster a community where collaboration and positivity are encouraged. I ask that if you want to participate, you follow these principles as well. Due to the use of stored API credentials, SSH keys, and certificates, sharing machines is not straightforward. If you'd like to submit a proposal for dealing with this, feel free. If you'd like to also make a proposal backed up by code in a pull request, I also encourage you to do that. But at any rate, please focus the discussion on solutions and stay positive.
Not really when it's following up...
That's pretty much what I call a summary dismissal. And by "real world", I mean the environment where we use this stuff, not where you can pass one of the developers in the hall. This would have been a more appropriate answer...
...even though those technologies have been invented and most of them are open source. However, since you asked, what about a
That could solve the problem ... for dev environments at least
Another option is to use the docker socket on a host available via SSH, which is vastly preferable to me as it uses the existing SSH authentication mechanism (which in my case is backed by HSMs) and doesn't require adjusting ports/firewalls to support TLS. I have hacked together my own solution for this using socat, but it would be very nice if docker-machine could support it. It's not hard to ssh to a host, install socat if it's not already installed, and use socat to turn the ssh session into a local socket for communicating with the remote /var/run/docker.sock. It also means docker-machine wouldn't have to know anything about authentication or certificates or TLS in this driver mode, though it does depend on socat locally to create the local socket... It would be nice to see a driver mode that does this sort of local-socket setup the way the existing ssh driver sets up all the TLS stuff. Many orgs already have SSH key distribution/port-forwarding solved, and expecting them to now distribute/manage a PKI for the TLS (and open another port) is burdensome.
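A minimal sketch of the socat relay described above, assuming a hypothetical ssh host alias "dockerhost" and socat installed on both ends. Since it needs a live remote host, the script only prints the commands it would run:

```shell
#!/bin/sh
# Sketch of the socat-over-ssh relay. "dockerhost" is a hypothetical ssh
# host alias; socat must be present on both ends. Commands are printed
# rather than executed, since they need a reachable host.
LOCAL_SOCK="$HOME/.docker-remote.sock"
REMOTE_SOCK="/var/run/docker.sock"   # the daemon's conventional socket path

# Each connection accepted on LOCAL_SOCK is piped over an ssh session into
# a remote socat, which relays it to the docker daemon's unix socket.
CMD="socat UNIX-LISTEN:$LOCAL_SOCK,fork EXEC:'ssh dockerhost socat STDIO UNIX-CONNECT:$REMOTE_SOCK'"
echo "$CMD &"
echo "export DOCKER_HOST=unix://$LOCAL_SOCK"
echo "docker ps"
```

With the relay running, any docker client on the laptop talks to the remote engine through the existing ssh session, and no TLS certificates are involved at all.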
So, no "docker-machine add ..."?
+1 for this. We want to have the beauty of
I spun up a droplet with the DigitalOcean driver and got a site up and running using docker-compose from my workstation. Then I had to do work on the site from a completely different location, far away from the workstation where I ran the original docker-machine commands. Isn't this a common enough situation?
It is a very common situation. Especially if you work in a thing called a "team". But they have avoided this topic for about 18 months now. All they do is close issues like this and then say something like "it's super complicated to implement such a feature" and that you can do it yourself and propose a pull request at any time.
Thanks for your reply, although there is no easy way to do this yet.
Keep the credentials and whatnot in the cloud. Give me instructions on how to use my Dropbox or Google Drive to store them. Let me query the info from DigitalOcean, since that's the driver I was using. This really can't be that hard.
Yeah, that sounds secure.
I tried to re-provision an existing docker host (
Can you reopen this issue? I don't think this is resolved yet.
Stumbled upon this by accident. Well, @nathanleclaire made the point clear: this is NOT about adding a feature to docker-machine, but rather about sharing keys that are NOT supposed to be stored anywhere other than the original client machine. This is exactly how this is supposed to be done, and some people here are getting angry without spending sufficient thought on the subject. If you really want to share the keys with a team, just go on with any source control private repo and accept the risks you are taking. To export the needed keys there is a script someone already made: https://gist.github.com/schickling/2c48da462a7def0a577e "There is no reason we can't be civil" - Leonidas
I still think docker-machine should behave more like scp. What prevents adding support for multiple keys?
One workaround to this, which I have just found.
(Don't copy the ssh commands verbatim, lookup how to do it properly - https://encrypted.google.com/search?hl=en&q=ssh%20copy%20public%20key)
+1 for the export/import command. Not being able to move management workstations is simply unacceptable and creates a massive single point of failure, e.g. lost computer, theft, fire, hardware failure, software failure, team access issues, admin issues, etc.
Docker-machine was not conceived with that in mind - it is a fast-food tool for single-developer provisioning. This is it, plain and simple, and it works - and it is free! Some features mentioned here are implemented in the Docker EE offering (like teams and RBAC). If people can't be thankful, they should at least try to be reasonable.
I don't believe machine-share has been mentioned in this thread yet:
Hey @andrevtg, sorry if I seemed ungrateful. I really love docker-machine and thought this would make it even better. If my schedule frees up I might take a stab at learning Go and try to add this feature.
Hey @dmitrym0, I think this ticket was more focused on the lack of ability to connect to existing remote docker hosts.
@Jared-Harrington-Gibbs machine-share specifically solves the problem of adding existing docker-machine hosts. We deploy various apps via docker-compose, and it works well for us. We export the set of certificates with
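For reference, the machine-share workflow looks roughly like this. machine-share is a third-party npm tool, not part of docker-machine; the command names below follow its README as I understand it, so verify against the project before relying on them. The commands are echoed rather than executed:

```shell
#!/bin/sh
# machine-share workflow sketch (third-party npm tool; command names per
# its README, verify before use). "somevps" is a placeholder machine name.
# The run() wrapper echoes commands instead of executing them.
run() { echo "+ $*"; }

run npm install -g machine-share   # on both workstations
run machine-export somevps         # writes somevps.zip (certs + config)
# ...transfer somevps.zip to the other workstation; it contains private
# keys, so treat it like a credential...
run machine-import somevps.zip     # registers the machine locally
run docker-machine ls              # somevps should now be listed
```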
I tried using the generic driver to add an existing docker host; it added successfully after several retries.
@sneak This solution doesn't seem to work for me, as the paths in the config files are absolute. This presents a problem for me, because I may be executing from an environment where I might not know the exact path the files are stored in. Maybe there could be an option for a relative path? If the problem is manpower, I could spare a few days to try to provide a solution for the community. I would prefer to have some direction from the maintainers though. Does
This is a feature that is NOT INTENDED to be implemented at all. It makes no sense to expect the remote machine to store the private keys, which are not supposed to exist anywhere except on the original developer's machine. Even Docker EE makes sure that a different "client bundle" is generated each time it is requested, and does not store them on the host. People here are asking for a feature that is simple to implement, but that makes no sense at all to implement due to a reasonable security constraint.
@andrevtg The request is not for the VM itself to store the keys (which would make the keys entirely useless), but for the docker-machine client (which is made of code; it's an actual application) to provide a way for users to voluntarily transmit keys from one client to another. One simple way to do this might be to provide a
Exactly. Ideally, there should be a
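A minimal sketch of what such an export/import pair might do under the hood, assuming docker-machine's current on-disk layout (a per-machine directory under ~/.docker/machine/machines with absolute paths in config.json). Everything below runs against fabricated temp directories; the machine name "demovps" is a placeholder, and a real tool would also bundle the referenced certs and keys:

```shell
#!/bin/sh
# Demo of a possible export/import flow using fabricated temp "homes", so
# the script is self-contained. A real implementation would operate on
# ~/.docker/machine/machines/<name> and also bundle the referenced certs.
set -e
SRC_HOME=$(mktemp -d)   # stands in for the exporting user's home
DST_HOME=$(mktemp -d)   # stands in for the importing user's home
TARBALL="$SRC_HOME/demovps.tar"

mkdir -p "$SRC_HOME/.docker/machine/machines/demovps"
printf '{"CaCertPath": "%s/.docker/machine/certs/ca.pem"}\n' "$SRC_HOME" \
  > "$SRC_HOME/.docker/machine/machines/demovps/config.json"

# "export": bundle the machine's directory into a tarball
tar -C "$SRC_HOME/.docker/machine/machines" -cf "$TARBALL" demovps

# "import": unpack on the other workstation, then rewrite the absolute
# paths - the step that trips people up when copying ~/.docker by hand
mkdir -p "$DST_HOME/.docker/machine/machines"
tar -C "$DST_HOME/.docker/machine/machines" -xf "$TARBALL"
sed -i.bak "s|$SRC_HOME|$DST_HOME|g" \
  "$DST_HOME/.docker/machine/machines/demovps/config.json"

grep "$DST_HOME" "$DST_HOME/.docker/machine/machines/demovps/config.json"
```

The tarball carries private key material, so any real export command would need to warn about that, much as the thread's security objections point out.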
@dhrp I got a docker server running on an rpi. I would like to access it from my localhost with docker-machine. So all I would have to do is:
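Picking up the rpi example above, the generic driver route would look roughly like this. The IP address, user, and key path are placeholders; the --generic-* flags belong to docker-machine's generic driver. The commands are echoed rather than executed, since they need a reachable host:

```shell
#!/bin/sh
# Sketch: attach an already-running docker host (the rpi) via the generic
# driver. IP, user, and key path are placeholders. run() echoes commands
# instead of executing them, since they need a reachable host.
run() { echo "+ $*"; }

# 1. Authorize your ssh key on the pi (see the link earlier in the thread
#    for doing this properly).
run ssh-copy-id pi@192.168.1.50

# 2. Let docker-machine take over the host: it sshs in, (re)provisions the
#    engine, and generates fresh TLS certs for this client.
run docker-machine create --driver generic \
  --generic-ip-address 192.168.1.50 \
  --generic-ssh-user pi \
  --generic-ssh-key "$HOME/.ssh/id_rsa" rpi

# 3. Point the local client at it.
run eval "\$(docker-machine env rpi)"
```

Note the caveat reported later in this thread: provisioning via the generic driver restarts the engine and has been observed to drop running containers.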
So @360disrupt, did it work? I'm struggling with the same use case.
No, not yet.
Hi everyone. Has this not been solved yet? We use docker for every app in the company I'm working for. Our biggest problem is related to sharing the machines. I tried to use the generic driver multiple times, but the problem with that approach is that when you create a machine with the generic driver and connect it to an existing server, the creation drops the containers that are currently running on the server.
The only way that we've found was to build a simple script in Python that just imports and exports the machine that the developer wants to access. This script is responsible for copying all the configuration files and the certs from the machine owner to the new source. It works well and we're not having any problems with it; it's just that I cannot believe that there isn't an official way to share machines without having to share our private certs.
I'm working on a project and I will publish it on GitHub to solve this problem in a different way. Basically, I'm building a PaaS that will concentrate all the certs of all the machines that we have here in our company. And when someone wants to deploy or do something like that, they just need to connect to the PaaS, and not to the server. It's like a tunnel. Soon I will release the first version of this PaaS.
+1 for
this solved my problem: machine-share 💻 🐇
I know this looks to be a settled issue, but I just wanted to state that I have to agree with @atemerev: as a development team we need to connect to other provisioned machines on an almost weekly basis.
I can confirm that this npm package does work.
Thanks a lot @dmitrym0. The machine-import package did the job for me, moving a docker machine from one laptop to another so that I can redeploy from the new machine.
It's an old problem, but I can't find a useful answer.
I have the following env:
Local Host (My Laptop) Name: Chris-Laptop
docker-machine is already installed, version 0.6.0, build e27fb87, Mac OS X 10.11
Remote Host (My VPS) Name: li845-130 (139.162.3.130)
docker-engine is already installed, CentOS 7.0
1. Docker daemon process on the remote host
[root@li845-130 ~]# ps -ef|grep docker | grep -v grep
root 12093 1 0 02:09 ? 00:00:00 /usr/bin/docker daemon -H tcp://0.0.0.0:2376
2. Configure the ssh connection without a password
[tdy218@Chris-Laptop .ssh]$ ssh [email protected]
Last failed login: Mon Mar 21 02:54:06 UTC 2016 from 125.88.177.95 on ssh:notty
There were 54 failed login attempts since the last successful login.
Last login: Mon Mar 21 02:25:25 2016 from 114.248.235.223
[root@li845-130 ~]#
3. Add the remote docker host to the local docker-machine
[tdy218@Chris-Laptop .ssh]$ docker-machine create --driver none -url=tcp://139.162.3.130:2376 linodevps
Running pre-create checks...
Creating machine...
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env linodevps
[tdy218@Chris-Laptop .ssh]$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default * virtualbox Running tcp://192.168.99.100:2376 v1.10.3
linodevps - none Running tcp://139.162.3.130:2376 Unknown Unable to query docker version: Unable to read TLS config: open /Users/tdy218/.docker/machine/machines/linodevps/server.pem: no such file or directory
[tdy218@Chris-Laptop .ssh]$
[tdy218@Chris-Laptop .ssh]$ docker-machine -D regenerate-certs linodevps
Docker Machine Version: 0.6.0, build e27fb87
Regenerate TLS machine certs? Warning: this is irreversible. (y/n): y
Regenerating TLS certificates
Found binary path at /usr/local/bin/docker-machine
Launching plugin server for driver none
Plugin server listening at address 127.0.0.1:54648
() Calling .GetVersion
Using API Version 1
() Calling .SetConfigRaw
() Calling .GetMachineName
command=configureAuth machine=linodevps
Waiting for SSH to be available...
Getting to WaitForSSH function...
(linodevps) Calling .GetSSHHostname
(linodevps) Calling .GetSSHPort
(linodevps) Calling .GetSSHKeyPath
(linodevps) Calling .GetSSHUsername
Using SSH client type: external
{[-o BatchMode=yes -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o LogLevel=quiet -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none @ -p 0] /usr/bin/ssh}
About to run SSH command:
exit 0
SSH cmd err, output: exit status 255: usage: ssh [-1246AaCfGgKkMNnqsTtVvXxYy] [-b
..................
Error getting ssh command 'exit 0' : Something went wrong running an SSH command!
command : exit 0
err : exit status 255
output : usage: ssh [-1246AaCfGgKkMNnqsTtVvXxYy] [-b bind_address] [-c cipher_spec]
.....................
It reports "Too many retries waiting for SSH to be available. Last error: Maximum number of retries (60) exceeded..." at last.
Why? I have configured the ssh connection between the local host and the remote docker host without a password.
How can I export the docker host to the local docker-machine command line?