
NoneDriver: Minikube start restarts containerd, hence it kills all my non k8s containers #4541

Closed
marcosdiez opened this issue Jun 20, 2019 · 3 comments · Fixed by #4545
Labels: co/none-driver, co/runtime/docker

Comments

@marcosdiez
Contributor

marcosdiez commented Jun 20, 2019

I am using minikube version 1.1.1.

Whenever I do a

sudo -E minikube start --vm-driver=none --extra-config kubeadm.ignore-preflight-errors=SystemVerification ${EXTRA_DNS_PARAMETER} --logtostderr

I see this:

I0620 13:33:02.265561   22899 exec_runner.go:39] Run: systemctl is-active --quiet service containerd
I0620 13:33:02.279081   22899 exec_runner.go:39] Run: sudo systemctl stop containerd
I0620 13:33:02.361533   22899 exec_runner.go:39] Run: systemctl is-active --quiet service containerd
I0620 13:33:02.381293   22899 exec_runner.go:39] Run: systemctl is-active --quiet service crio
I0620 13:33:02.396272   22899 exec_runner.go:39] Run: sudo systemctl start docker

which I traced to

https://github.com/kubernetes/minikube/blob/master/pkg/minikube/cruntime/docker.go#L78

// Enable idempotently enables Docker on a host
func (r *Docker) Enable() error {
	if err := disableOthers(r, r.Runner); err != nil {
		glog.Warningf("disableOthers: %v", err)
	}
	return r.Runner.Run("sudo systemctl start docker")
}

disableOthers is here by the way: https://github.com/kubernetes/minikube/blob/master/pkg/minikube/cruntime/cruntime.go#L96
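
For readers who don't want to follow the link, here is a rough sketch of what disableOthers does, paraphrased from the linked cruntime.go rather than copied verbatim: it walks over the known runtimes and disables every one that isn't the runtime being enabled, which is why my containerd gets stopped even though it is also serving non-k8s containers.

// Paraphrased sketch of disableOthers (not the exact upstream code):
// disable every known runtime except the one we are about to enable.
func disableOthers(me Manager, runner CommandRunner) error {
	for _, name := range []string{"containerd", "crio", "docker"} {
		r, err := New(Config{Type: name, Runner: runner})
		if err != nil {
			return fmt.Errorf("runtime(%s): %v", name, err)
		}
		if r.Name() == me.Name() {
			continue // don't disable the runtime we want to use
		}
		if err := r.Disable(); err != nil {
			glog.Warningf("disable %s: %v", r.Name(), err)
		}
	}
	return nil
}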

What's the rationale for that?
Why does minikube mess with my container daemons in the first place?

Should we just remove disableOthers? Should we remove it only for the none driver?
Should we skip restarting containerd if docker is > SOME_VERSION?

Should we just never stop containerd, since these days everybody uses it?

All of these solutions are trivial to implement, so I am opening this discussion to figure out the right PR to create :)
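
For the none-driver option in particular, a minimal sketch of what the change could look like (hypothetical signature, written here only for discussion; the actual fix landed in #4545 and may differ): thread a flag down to Enable so that disableOthers is skipped when minikube does not own the machine.

// Hypothetical variant of Docker.Enable, for discussion only:
// skip disableOthers when running with --vm-driver=none, so that
// user-managed runtimes such as containerd are left alone.
func (r *Docker) Enable(disableOtherRuntimes bool) error {
	if disableOtherRuntimes {
		if err := disableOthers(r, r.Runner); err != nil {
			glog.Warningf("disableOthers: %v", err)
		}
	}
	return r.Runner.Run("sudo systemctl start docker")
}

// Call-site sketch, where vmDriver holds the --vm-driver value:
// err := cr.Enable(vmDriver != "none")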

@afbjorklund
Collaborator

The Minikube ISO starts all three container runtimes at boot (Docker, in particular, is mandatory), so the start script stops the others again in order not to waste resources.

When you give Minikube its own dedicated VM, it doesn't expect there to be any other daemons. But we could skip "disableOthers" for the none driver - similar to how we e.g. skip the caching?


Switching to containerd as the default is definitely something we have discussed, but today people are using Minikube both for docker (via docker-env) and for kubectl. But we could start it on demand?

If the current restart code could be improved, that would be interesting. I know there were some issues when stopping dockerd, and we need to look further into how dockerd and containerd play together.

@afbjorklund added the co/runtime/docker, co/none-driver, and triage/discuss labels on Jun 20, 2019
@marcosdiez
Contributor Author

That makes sense now. I'll send a PR that skips disableOthers when the driver is none.
Thank you @afbjorklund!

@afbjorklund
Collaborator

Also, we only recently started running containerd on its own - before that it was docker-containerd.

Right now the code still expects each container runtime to be separate (docker/rkt/containerd/crio/etc).
Since that is not really true anymore (and probably wasn't for docker 18.06 either), it needs to improve:

https://docs.docker.com/engine/release-notes/#18090
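
To make that concrete: since Docker 18.09, containerd ships as its own systemd unit that dockerd depends on, so stopping containerd also disrupts docker. A hedged sketch (not current minikube behaviour) of how one could detect that relationship before deciding to stop containerd, by asking systemd about the dependency:

// Hypothetical check, for illustration only: does docker.service declare a
// systemd dependency on containerd.service? If so, stopping containerd will
// also take down docker and any non-k8s containers it runs.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func dockerUsesSystemContainerd() (bool, error) {
	// Depending on packaging, the dependency may appear as Requires=,
	// BindsTo= or Wants= in docker.service.
	for _, prop := range []string{"Requires", "BindsTo", "Wants"} {
		out, err := exec.Command("systemctl", "show", "docker", "--property="+prop).Output()
		if err != nil {
			return false, err
		}
		if strings.Contains(string(out), "containerd.service") {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	shared, err := dockerUsesSystemContainerd()
	if err != nil {
		fmt.Println("could not query systemd:", err)
		return
	}
	if shared {
		fmt.Println("docker depends on containerd.service; don't stop containerd")
	} else {
		fmt.Println("containerd appears independent of docker")
	}
}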
