
None driver error on minikube 1.11.0 #8361

Closed
staticdev opened this issue Jun 2, 2020 · 19 comments
Labels
co/none-driver kind/support Categorizes issue or PR as a support question.

Comments

@staticdev
Contributor

Since the first versions of minikube I have had a bad experience with the none driver, and I would really like to get it working without VMs. Even using KVM2, I think it wastes resources and has poor performance.

Steps to reproduce the issue:

  1. minikube start --vm-driver=none

Full output of failed command:

$ sudo minikube start --vm-driver=none --alsologtostderr
I0602 17:21:13.290278   51066 start.go:98] hostinfo: {"hostname":"static-Aspire-R5-471T","uptime":35216,"bootTime":1591094057,"procs":374,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.4.0-33-generic","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"65f6c450-e2bf-81e5-328e-441ca823e081"}
I0602 17:21:13.291406   51066 start.go:108] virtualization: kvm host
😄  minikube v1.11.0 on Ubuntu 20.04
I0602 17:21:13.295673   51066 driver.go:253] Setting default libvirt URI to qemu:///system
✨  Using the none driver based on user configuration
I0602 17:21:13.296039   51066 notify.go:125] Checking for updates...
I0602 17:21:13.298818   51066 start.go:214] selected driver: none
I0602 17:21:13.298839   51066 start.go:611] validating driver "none" against <nil>
I0602 17:21:13.298855   51066 start.go:617] status for none: {Installed:true Healthy:true Error:<nil> Fix: Doc:}
I0602 17:21:13.299021   51066 start_flags.go:218] no existing cluster config was found, will generate one from the flags 
I0602 17:21:13.299201   51066 start_flags.go:232] Using suggested 2200MB memory alloc based on sys=7843MB, container=0MB
I0602 17:21:13.299376   51066 start_flags.go:556] Wait components to verify : map[apiserver:true system_pods:true]
👍  Starting control plane node minikube in cluster minikube
I0602 17:21:13.302962   51066 profile.go:156] Saving config to /root/.minikube/profiles/minikube/config.json ...
I0602 17:21:13.303093   51066 lock.go:35] WriteFile acquiring /root/.minikube/profiles/minikube/config.json: {Name:mk270d1b5db5965f2dc9e9e25770a63417031943 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0602 17:21:13.303310   51066 exit.go:58] WithError(error provisioning host)=Failed to save config: failed to acquire lock for /root/.minikube/profiles/minikube/config.json: {Name:mk270d1b5db5965f2dc9e9e25770a63417031943 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}: unable to open /tmp/juju-mk270d1b5db5965f2dc9e9e25770a63417031943: permission denied called from:
goroutine 1 [running]:
runtime/debug.Stack(0x40c11a, 0x187eca0, 0x1982e80)
	/usr/local/go/src/runtime/debug/stack.go:24 +0x9d
k8s.io/minikube/pkg/minikube/exit.WithError(0x1ae579a, 0x17, 0x1db6b80, 0xc00010de20)
	/app/pkg/minikube/exit/exit.go:58 +0x34
k8s.io/minikube/cmd/minikube/cmd.runStart(0x2b00360, 0xc000777220, 0x0, 0x2)
	/app/cmd/minikube/cmd/start.go:169 +0xac2
github.com/spf13/cobra.(*Command).execute(0x2b00360, 0xc000777200, 0x2, 0x2, 0x2b00360, 0xc000777200)
	/go/pkg/mod/github.com/spf13/[email protected]/command.go:846 +0x2aa
github.com/spf13/cobra.(*Command).ExecuteC(0x2b05220, 0x0, 0x1, 0xc00078ea40)
	/go/pkg/mod/github.com/spf13/[email protected]/command.go:950 +0x349
github.com/spf13/cobra.(*Command).Execute(...)
	/go/pkg/mod/github.com/spf13/[email protected]/command.go:887
k8s.io/minikube/cmd/minikube/cmd.Execute()
	/app/cmd/minikube/cmd/root.go:112 +0x747
main.main()
	/app/cmd/minikube/main.go:66 +0xea

❌  [JUJU_LOCK_DENIED] error provisioning host Failed to save config: failed to acquire lock for /root/.minikube/profiles/minikube/config.json: {Name:mk270d1b5db5965f2dc9e9e25770a63417031943 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}: unable to open /tmp/juju-mk270d1b5db5965f2dc9e9e25770a63417031943: permission denied
💡  Suggestion: Run 'sudo sysctl fs.protected_regular=0', or try a driver which does not require root, such as '--driver=docker'
⁉️   Related issue: https://github.com/kubernetes/minikube/issues/6391

Full output of minikube start command used, if not already included:
N/A

Optional: Full output of minikube logs command:

$ minikube logs
🤷  There is no local cluster named "minikube"
👉  To fix this, run: "minikube start"

@afbjorklund
Collaborator

Sorry you got bitten by the systemd vs the world (and juju) bug #6391, as triggered by sudo...

Have you tried the "docker" driver? There is no VM involved with that one, if that is your concern.

@afbjorklund afbjorklund added co/none-driver kind/support Categorizes issue or PR as a support question. labels Jun 2, 2020
@staticdev
Contributor Author

staticdev commented Jun 2, 2020

@afbjorklund I saw that in the issue, but it is closed. Should it be marked as a won't-fix?

I just tried docker driver:

$ sudo minikube start --driver=docker
😄  minikube v1.11.0 on Ubuntu 20.04
✨  Using the docker driver based on user configuration
🛑  The "docker" driver should not be used with root privileges.
💡  If you are running minikube within a VM, consider using --driver=none:
📘    https://minikube.sigs.k8s.io/docs/reference/drivers/none/

The user experience here (regarding the messages) is not great, since I had just tried --driver=none.

It also did not work out of the box without root:

$ minikube start --driver=docker
😄  minikube v1.11.0 on Ubuntu 20.04
    ▪ MINIKUBE_ACTIVE_DOCKERD=minikube
✨  Using the docker driver based on user configuration

❗  'docker' driver reported an issue: "docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.40/version: dial unix /var/run/docker.sock: connect: permission denied
💡  Suggestion: Add your user to the 'docker' group: 'sudo usermod -aG docker $USER && newgrp docker'
📘  Documentation: https://docs.docker.com/engine/install/linux-postinstall/

💣  Failed to validate 'docker' driver

@afbjorklund
Collaborator

afbjorklund commented Jun 2, 2020

The issue was closed because there is a workaround and an upstream ticket for the library (juju/mutex).
Supposedly it could also be fixed by switching the library used, but that's a bigger change.

There was some discussion about the default minikube permissions and root recently: #8257
Basically, minikube assumes a regular laptop installation where the user is part of the docker group.

@staticdev
Contributor Author

staticdev commented Jun 2, 2020

I also tried the juju installation of kubeadm in the past, and that experience was also a bit frustrating. I will take a look at #8257. I really don't think requiring root for minikube is a big deal, since any kubeadm node would require that.

@afbjorklund
Collaborator

I'm not sure if juju-the-installer is related to the mutex library beyond the name association.

Basically systemd introduced a breaking change, and this particular library failed to adapt to it.
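The breakage can be sketched in a few shell commands (a hedged illustration, assuming a systemd-based distro where fs.protected_regular defaults on; the lock-file name below is a made-up example, not the real hash minikube uses):

```shell
# fs.protected_regular >= 1 forbids O_CREAT opens of files you don't own
# in sticky, world-writable directories such as /tmp:
sysctl fs.protected_regular

# A lock file created earlier by the regular user (example name):
touch /tmp/juju-example-lock

# Re-running minikube under sudo makes juju/mutex open the same path as
# root with O_CREAT, which the kernel now denies even for root:
sudo sh -c ': >> /tmp/juju-example-lock'    # Permission denied

# The workaround suggested in the minikube error message disables that
# protection system-wide:
sudo sysctl fs.protected_regular=0
```

These commands require root and modify system state, so they are shown as an ops sketch rather than something to run blindly.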

@staticdev
Contributor Author

@afbjorklund do you have a link for this juju-mutex issue?

@afbjorklund
Collaborator

afbjorklund commented Jun 2, 2020

@afbjorklund do you have a link for this juju-mutex issue?

juju/mutex#7

But the last commit is from June 2018, whereas the systemd breakage is from February 2019.

@afbjorklund
Collaborator

Since the first versions of minikube I have had a bad experience with the none driver, and I would really like to get it working without VMs. Even using KVM2, I think it wastes resources and has poor performance.

There also seems to be a mismatch of expectations about what minikube actually does...
It creates a Kubernetes cluster and configures your console (or desktop) to talk to it.

So if all you want is to run locally on the master node as root, maybe minikube is overkill?
You could just run kubeadm directly (which is what "none" does) and remove the node taint.
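For that minimal run-as-root scenario, the manual equivalent is roughly the following (a hedged sketch; it assumes kubeadm and kubectl are already installed, and flags and the taint key vary by Kubernetes version):

```shell
# Bring up a single-node control plane directly, as root:
kubeadm init

# Point kubectl at the new cluster:
export KUBECONFIG=/etc/kubernetes/admin.conf

# Allow regular workloads to schedule on the master node by removing
# its taint (the key is 'node-role.kubernetes.io/master' on Kubernetes
# of this era; newer releases use 'control-plane' instead):
kubectl taint nodes --all node-role.kubernetes.io/master-
```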

@staticdev
Contributor Author

@afbjorklund the last commit to this dependency was in June 2018. I saw that a Juju member responded to the issue but is waiting for a response. Maybe that is why they have not fixed it yet.

@staticdev
Contributor Author

Since the first versions of minikube I have had a bad experience with the none driver, and I would really like to get it working without VMs. Even using KVM2, I think it wastes resources and has poor performance.

There also seems to be a mismatch of expectations about what minikube actually does...
It creates a Kubernetes cluster and configures your console (or desktop) to talk to it.

So if all you want is to run locally on the master node as root, maybe minikube is overkill?
You could just run kubeadm directly (which is what "none" does) and remove the node taint.

I have always liked minikube and what it does, and I have tried to get my teams to use it for running applications locally that would otherwise run on Kubernetes, without them having to know the internals of how to install kubeadm. minikube has improved a lot, but this none driver was always a stone in my shoe.

@afbjorklund
Collaborator

afbjorklund commented Jun 2, 2020

minikube improved a lot but this none driver was always a stone in my shoe.

The none driver is intended for running on an existing VM of your choice, such as in CI.

It has always been a bit flawed. Requiring root and binding publicly are just two of the issues.

#3760

#4313

@staticdev
Contributor Author

staticdev commented Jun 2, 2020

@afbjorklund indeed this new docker option is more appropriate for my use case. Should it work without this sudo usermod -aG docker $USER && newgrp docker?

@medyagh
Member

medyagh commented Jun 2, 2020

@staticdev I also recommend the docker driver. As long as you have Docker installed from your distro, you should be good to go.
More info:
https://minikube.sigs.k8s.io/docs/drivers/docker/

The none driver is not our recommended driver, and there is a warning in minikube not to use it unless it is inside a CI VM. The docker driver can replace the none driver for all use cases, and it supports more features, such as multi-node and load balancers.

And yes, we don't allow running as root for security reasons.

I believe the distro's package manager will take care of the user groups (Ubuntu takes care of it), but if your distro doesn't, you might need to add the user to the group yourself.

I recommend checking the Docker docs on how to install Docker on your distro.

@staticdev
Contributor Author

@medyagh Nice. I am using Ubuntu 20.04 and installed Docker from the Docker repository, following their documentation. And yet, I get this error when running the driver:

❗  'docker' driver reported an issue: "docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.40/version: dial unix /var/run/docker.sock: connect: permission denied
💡  Suggestion: Add your user to the 'docker' group: 'sudo usermod -aG docker $USER && newgrp docker'

@afbjorklund
Collaborator

@afbjorklund indeed this new docker option is more appropriate for my use case. Should it work without this sudo usermod -aG docker $USER && newgrp docker?

No, that is currently required. It is related to #7963.

We default to running docker through their root group (rather than using sudo docker) and running podman through passwordless sudo (rather than newgrp podman) according to upstream wishes...

When using KVM2, it uses a similar libvirt group...

As discussed in that other issue, the goal here is to run minikube as the user but the driver as root. We don't want root-owned files under $HOME, but we don't support rootless kubernetes either.
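A quick way to confirm the group-based setup is in place before retrying without sudo (a hedged sketch; the last command is the exact probe minikube runs, per the error output above):

```shell
# After 'sudo usermod -aG docker $USER', log out and back in (or use
# newgrp), then check the group membership took effect:
id -nG "$USER"                       # should now include 'docker'

# The daemon socket should be group-owned by 'docker' and group-writable:
ls -l /var/run/docker.sock           # e.g. srw-rw---- 1 root docker ...

# minikube's own health probe; it must succeed without sudo:
docker version --format '{{.Server.Os}}-{{.Server.Version}}'
```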

@staticdev
Contributor Author

Thanks for the enlightenment, @afbjorklund and @medyagh. I am happy with the docker option; we can close this one.

@siddubellanki

minikube v1.26.0 on Ubuntu 22.04 (xen/amd64)

  • Using the docker driver based on user configuration
  • The "docker" driver should not be used with root privileges. If you wish to continue as root, use --force.
  • If you are running minikube within a VM, consider using --driver=none:
  • https://minikube.sigs.k8s.io/docs/reference/drivers/none/

@siddubellanki

Please, can anyone solve this?

@staticdev
Contributor Author

@siddubellanki please open another issue; this one is already closed.
