
deleting Docker cluster hangs on password prompt: sudo podman ps #7958

Closed
andk opened this issue May 1, 2020 · 5 comments · Fixed by #7959 or #8038
Labels
co/podman-driver podman driver issues kind/bug Categorizes issue or PR as related to a bug. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release.

andk commented May 1, 2020

Steps to reproduce the issue:

  1. I start minikube with ./out/minikube start
  2. I try to delete all minikube clusters with ./out/minikube delete --all

The start command looks good. The delete command leads to a prompt from sudo (actually nine prompts, since I always answer with just RETURN); I expected no sudo prompt at all. Is this a regression or a deliberate change in behaviour? The last time I tried the same sequence of commands (a couple of days ago), the delete step worked without any sudo prompt. After the third unsuccessful sudo attempt, the delete command appears to complete, and no logs are left over (./out/minikube logs answers: There is no local cluster named "minikube"). Something seems wrong with this setup. What should I try?

Full output of failed command:
% ./out/minikube delete --all --alsologtostderr
I0501 11:28:55.690433 31894 cli_runner.go:108] Run: docker ps -a --filter label=name.minikube.sigs.k8s.io --format {{.Names}}
I0501 11:28:55.768173 31894 cli_runner.go:108] Run: docker ps -a --filter label=created_by.minikube.sigs.k8s.io=true --format {{.Names}}
I0501 11:28:55.857771 31894 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Status}}
I0501 11:28:55.945993 31894 cli_runner.go:108] Run: docker exec --privileged -t minikube /bin/bash -c "sudo init 0"
I0501 11:28:57.153000 31894 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Status}}
I0501 11:28:57.355540 31894 oci.go:504] container minikube status is Stopped
I0501 11:28:57.355566 31894 oci.go:516] Successfully shutdown container minikube
I0501 11:28:57.355612 31894 cli_runner.go:108] Run: docker rm -f -v minikube
I0501 11:28:57.489665 31894 volumes.go:34] trying to delete all docker volumes with label created_by.minikube.sigs.k8s.io=true
I0501 11:28:57.490017 31894 cli_runner.go:108] Run: docker volume ls --filter label=created_by.minikube.sigs.k8s.io=true --format {{.Name}}
I0501 11:28:57.557507 31894 cli_runner.go:108] Run: docker volume rm --force minikube
I0501 11:28:57.920941 31894 volumes.go:56] trying to prune all docker volumes with label created_by.minikube.sigs.k8s.io=true
I0501 11:28:57.921009 31894 cli_runner.go:108] Run: docker volume prune -f --filter label=created_by.minikube.sigs.k8s.io=true
🔥 Deleting "minikube" in docker ...
I0501 11:28:58.006147 31894 cli_runner.go:108] Run: docker ps -a --filter label=name.minikube.sigs.k8s.io=minikube --format {{.Names}}
I0501 11:28:58.087536 31894 volumes.go:34] trying to delete all docker volumes with label name.minikube.sigs.k8s.io=minikube
I0501 11:28:58.087599 31894 cli_runner.go:108] Run: docker volume ls --filter label=name.minikube.sigs.k8s.io=minikube --format {{.Name}}
I0501 11:28:58.179693 31894 volumes.go:56] trying to prune all docker volumes with label name.minikube.sigs.k8s.io=minikube
I0501 11:28:58.179757 31894 cli_runner.go:108] Run: docker volume prune -f --filter label=name.minikube.sigs.k8s.io=minikube
I0501 11:28:58.258713 31894 cli_runner.go:108] Run: sudo podman ps -a --filter label=name.minikube.sigs.k8s.io=minikube --format {{.Names}}

We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.

[sudo] password for sand:
Sorry, try again.
[sudo] password for sand:
Sorry, try again.
[sudo] password for sand:
I0501 11:29:06.663100 31894 cli_runner.go:147] Completed: sudo podman ps -a --filter label=name.minikube.sigs.k8s.io=minikube --format {{.Names}}: (8.404352766s)
I0501 11:29:06.663135 31894 volumes.go:34] trying to delete all podman volumes with label name.minikube.sigs.k8s.io=minikube
I0501 11:29:06.663208 31894 cli_runner.go:108] Run: sudo podman volume ls --filter label=name.minikube.sigs.k8s.io=minikube --format {{.Name}}

We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.

[sudo] password for sand:

Sorry, try again.
[sudo] password for sand:
Sorry, try again.
[sudo] password for sand:
I0501 11:29:14.111421 31894 cli_runner.go:147] Completed: sudo podman volume ls --filter label=name.minikube.sigs.k8s.io=minikube --format {{.Name}}: (7.448182524s)
W0501 11:29:14.111462 31894 delete.go:211] error deleting volumes (might be okay).
To see the list of volumes run: 'docker volume ls'
:[listing volumes by label "name.minikube.sigs.k8s.io=minikube": sudo podman volume ls --filter label=name.minikube.sigs.k8s.io=minikube --format {{.Name}}: exit status 1
stdout:

stderr:
sudo: 3 incorrect password attempts
]
I0501 11:29:14.111580 31894 volumes.go:56] trying to prune all podman volumes with label name.minikube.sigs.k8s.io=minikube
I0501 11:29:14.111655 31894 cli_runner.go:108] Run: sudo podman volume prune -f --filter label=name.minikube.sigs.k8s.io=minikube

We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.

[sudo] password for sand:
Sorry, try again.
[sudo] password for sand:
Sorry, try again.
[sudo] password for sand:
I0501 11:29:23.003258 31894 cli_runner.go:147] Completed: sudo podman volume prune -f --filter label=name.minikube.sigs.k8s.io=minikube: (8.891577654s)
W0501 11:29:23.003307 31894 delete.go:216] error pruning volume (might be okay):
[prune volume by label name.minikube.sigs.k8s.io=minikube: sudo podman volume prune -f --filter label=name.minikube.sigs.k8s.io=minikube: exit status 1
stdout:

stderr:
sudo: 3 incorrect password attempts
]
I0501 11:29:23.005642 31894 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Status}}
I0501 11:29:23.086619 31894 delete.go:75] Unable to get host status for minikube, assuming it has already been deleted: state: unknown state "minikube": docker inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such object: minikube
🔥 Removing /home/sand/.minikube/machines/minikube ...
I0501 11:29:23.097300 31894 lock.go:35] WriteFile acquiring /home/sand/.kube/config: {Name:mk505f793881700367e2a950c92de29206d7625a Clock:{} Delay:500ms Timeout:1m0s Cancel:}
💀 Removed all traces of the "minikube" cluster.
🔥 Successfully deleted all profiles

Full output of minikube start command used, if not already included:
% ./out/minikube start
😄 minikube v1.10.0-beta.2 on Debian bullseye/sid (xen/amd64)
✨ Automatically selected the docker driver
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
💾 Downloading Kubernetes v1.18.1 preload ...
> preloaded-images-k8s-v3-v1.18.1-docker-overlay2-amd64.tar.lz4: 525.47 MiB
🔥 Creating docker container (CPUs=2, Memory=3200MB) ...
🐳 Preparing Kubernetes v1.18.1 on Docker 19.03.2 ...
▪ kubeadm.pod-network-cidr=10.244.0.0/16
🔎 Verifying Kubernetes components...
🌟 Enabled addons: default-storageclass, storage-provisioner
🏄 Done! kubectl is now configured to use "minikube"

@afbjorklund afbjorklund added the co/podman-driver podman driver issues label May 1, 2020

afbjorklund commented May 1, 2020

The delete command leads to a prompt from sudo (actually nine prompts since I always answer with just RETURN). Expected is no prompt from sudo. Is this a regression or a changed behaviour?

This is a side-effect of the podman driver; it should probably not run unless that driver is configured.

sudo podman ps -a --filter label=name.minikube.sigs.k8s.io=minikube --format {{.Names}}

When you do use the podman driver, it first asks you to set up passwordless sudo access.
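For reference, that passwordless setup boils down to a sudoers rule along these lines (the username, file path, and podman location below are examples for illustration, not something this thread specifies):

```
# /etc/sudoers.d/podman -- example only; install with visudo
sand ALL=(ALL) NOPASSWD: /usr/bin/podman
```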

Most likely the best approach here is to always use sudo -n (to avoid asking for a password).

@afbjorklund afbjorklund added kind/bug Categorizes issue or PR as related to a bug. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. labels May 1, 2020

afbjorklund commented May 1, 2020

@andk : thanks for reporting, definitely not intended behaviour!

Will open another issue about not always running the docker and podman volume commands.
The docker volume command is especially annoying, because it is so slow (15 seconds).

@afbjorklund

Added #7960 to avoid running podman in the first place (if not using the podman driver, that is).

@afbjorklund

With the new behaviour (sudo -n), it will cry silently in the logs rather than bothering the user:

I0502 13:00:21.209379   12931 volumes.go:34] trying to delete all docker volumes with label name.minikube.sigs.k8s.io=minikube
I0502 13:00:21.209691   12931 cli_runner.go:108] Run: docker volume ls --filter label=name.minikube.sigs.k8s.io=minikube --format {{.Name}}
W0502 13:00:21.248676   12931 delete.go:211] error deleting volumes (might be okay).
To see the list of volumes run: 'docker volume ls'
:[listing volumes by label "name.minikube.sigs.k8s.io=minikube": docker volume ls --filter label=name.minikube.sigs.k8s.io=minikube --format {{.Name}}: exit status 1
stdout:

stderr:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.40/volumes?filters=%7B%22label%22%3A%7B%22name.minikube.sigs.k8s.io%3Dminikube%22%3Atrue%7D%7D: dial unix /var/run/docker.sock: connect: permission denied
]
I0502 13:00:21.248911   12931 volumes.go:56] trying to prune all docker volumes with label name.minikube.sigs.k8s.io=minikube
I0502 13:00:21.248959   12931 cli_runner.go:108] Run: docker volume prune -f --filter label=name.minikube.sigs.k8s.io=minikube
W0502 13:00:21.288456   12931 delete.go:216] error pruning volume (might be okay):
[prune volume by label name.minikube.sigs.k8s.io=minikube: docker volume prune -f --filter label=name.minikube.sigs.k8s.io=minikube: exit status 1
stdout:

stderr:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.40/volumes/prune?filters=%7B%22label%22%3A%7B%22name.minikube.sigs.k8s.io%3Dminikube%22%3Atrue%7D%7D: dial unix /var/run/docker.sock: connect: permission denied
]
I0502 13:00:21.288559   12931 cli_runner.go:108] Run: sudo -n podman ps -a --filter label=name.minikube.sigs.k8s.io=minikube --format {{.Names}}
I0502 13:00:21.293284   12931 volumes.go:34] trying to delete all podman volumes with label name.minikube.sigs.k8s.io=minikube
I0502 13:00:21.293349   12931 cli_runner.go:108] Run: sudo -n podman volume ls --filter label=name.minikube.sigs.k8s.io=minikube --format {{.Name}}
W0502 13:00:21.298132   12931 delete.go:211] error deleting volumes (might be okay).
To see the list of volumes run: 'docker volume ls'
:[listing volumes by label "name.minikube.sigs.k8s.io=minikube": sudo -n podman volume ls --filter label=name.minikube.sigs.k8s.io=minikube --format {{.Name}}: exit status 1
stdout:

stderr:
sudo: a password is required
]
I0502 13:00:21.298161   12931 volumes.go:56] trying to prune all podman volumes with label name.minikube.sigs.k8s.io=minikube
I0502 13:00:21.298236   12931 cli_runner.go:108] Run: sudo -n podman volume prune -f --filter label=name.minikube.sigs.k8s.io=minikube
W0502 13:00:21.304770   12931 delete.go:216] error pruning volume (might be okay):
[prune volume by label name.minikube.sigs.k8s.io=minikube: sudo -n podman volume prune -f --filter label=name.minikube.sigs.k8s.io=minikube: exit status 1
stdout:

stderr:
sudo: a password is required
]

This assumed that docker and podman were installed, but not yet given root access (#7963).

Otherwise it would look more like:


I0502 13:04:30.612220   13742 volumes.go:34] trying to delete all docker volumes with label name.minikube.sigs.k8s.io=minikube
I0502 13:04:30.612280   13742 cli_runner.go:108] Run: docker volume ls --filter label=name.minikube.sigs.k8s.io=minikube --format {{.Name}}
W0502 13:04:30.612333   13742 delete.go:211] error deleting volumes (might be okay).
To see the list of volumes run: 'docker volume ls'
:[listing volumes by label "name.minikube.sigs.k8s.io=minikube": docker volume ls --filter label=name.minikube.sigs.k8s.io=minikube --format {{.Name}}: exec: "docker": executable file not found in $PATH
stdout:

stderr:
]
I0502 13:04:30.612564   13742 volumes.go:56] trying to prune all docker volumes with label name.minikube.sigs.k8s.io=minikube
I0502 13:04:30.612631   13742 cli_runner.go:108] Run: docker volume prune -f --filter label=name.minikube.sigs.k8s.io=minikube
W0502 13:04:30.612658   13742 delete.go:216] error pruning volume (might be okay):
[prune volume by label name.minikube.sigs.k8s.io=minikube: docker volume prune -f --filter label=name.minikube.sigs.k8s.io=minikube: exec: "docker": executable file not found in $PATH
stdout:

stderr:
]
I0502 13:04:30.612748   13742 cli_runner.go:108] Run: sudo -n podman ps -a --filter label=name.minikube.sigs.k8s.io=minikube --format {{.Names}}
I0502 13:04:30.618194   13742 volumes.go:34] trying to delete all podman volumes with label name.minikube.sigs.k8s.io=minikube
I0502 13:04:30.618297   13742 cli_runner.go:108] Run: sudo -n podman volume ls --filter label=name.minikube.sigs.k8s.io=minikube --format {{.Name}}
W0502 13:04:30.623514   13742 delete.go:211] error deleting volumes (might be okay).
To see the list of volumes run: 'docker volume ls'
:[listing volumes by label "name.minikube.sigs.k8s.io=minikube": sudo -n podman volume ls --filter label=name.minikube.sigs.k8s.io=minikube --format {{.Name}}: exit status 1
stdout:

stderr:
sudo: podman: command not found
]
I0502 13:04:30.623537   13742 volumes.go:56] trying to prune all podman volumes with label name.minikube.sigs.k8s.io=minikube
I0502 13:04:30.623668   13742 cli_runner.go:108] Run: sudo -n podman volume prune -f --filter label=name.minikube.sigs.k8s.io=minikube
W0502 13:04:30.629107   13742 delete.go:216] error pruning volume (might be okay):
[prune volume by label name.minikube.sigs.k8s.io=minikube: sudo -n podman volume prune -f --filter label=name.minikube.sigs.k8s.io=minikube: exit status 1
stdout:

stderr:
sudo: podman: command not found
]

But nothing to the user:

🙄 "minikube" profile does not exist, trying anyways.
💀 Removed all traces of the "minikube" cluster.

@tstromberg tstromberg added this to the v1.10.0 milestone May 7, 2020
@tstromberg tstromberg changed the title Changed behaviour of minikube delete --all: a prompt from sudo appears deleting Docker cluster hangs on password prompt: sudo podman ps May 7, 2020
@afbjorklund

It was only partially resolved.
