
Podman driver crashes after timeout and volume recreate #8056

Closed
afbjorklund opened this issue May 9, 2020 · 5 comments · Fixed by #8057
Labels
co/podman-driver: podman driver issues
kind/bug: Categorizes issue or PR as related to a bug.
priority/important-soon: Must be staffed and worked on either currently, or very soon, ideally in time for the next release.

Comments

@afbjorklund (Collaborator)

It is almost working, but there is an arbitrary 13-second timeout on start (for some reason).

Then it tries to delete the container and start over, but fails because the podman volume from the first attempt still exists:

😄  [podman] minikube v1.10.0-beta.2 on Ubuntu 18.04
✨  Using the podman (experimental) driver based on user configuration
👍  Starting control plane node podman in cluster podman
🔥  Creating podman container (CPUs=2, Memory=8000MB) ...
✋  Stopping "podman" in podman ...
🔥  Deleting "podman" in podman ...
🤦  StartHost failed, but will try again: creating host: create: creating: prepare kic ssh: apply authorized_keys file ownership, output 
** stderr ** 
Error: can only create exec sessions on running containers: container state improper

** /stderr **: chown docker:docker /home/docker/.ssh/authorized_keys: exit status 255
stdout:

stderr:
Error: can only create exec sessions on running containers: container state improper

🔥  Creating podman container (CPUs=2, Memory=8000MB) ...
😿  Failed to start podman container. "minikube start -p podman" may fix it: creating host: create: creating: setting up container node: creating volume for podman container: sudo -n podman volume create podman --label name.minikube.sigs.k8s.io=podman --label created_by.minikube.sigs.k8s.io=true: exit status 125
stdout:

stderr:
Error: volume with name podman already exists: volume already exists


💣  error provisioning host: Failed to start host: creating host: create: creating: setting up container node: creating volume for podman container: sudo -n podman volume create podman --label name.minikube.sigs.k8s.io=podman --label created_by.minikube.sigs.k8s.io=true: exit status 125
stdout:

stderr:
Error: volume with name podman already exists: volume already exists


😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
👉  https://github.com/kubernetes/minikube/issues/new/choose
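The `exit status 125` above comes from running `podman volume create` unconditionally on the retry path, even though the volume from the first attempt was never removed. A minimal sketch of the idempotent pattern, with `volume_exists`/`volume_create` as hypothetical stand-ins for the real `sudo podman volume inspect`/`sudo podman volume create` calls:

```shell
# Idempotent "ensure volume" step: only create the volume when it is missing,
# instead of failing with "volume already exists" on the retry path.
create_volume_idempotent() {
  name="$1"
  if volume_exists "$name"; then
    echo "volume $name already exists, reusing"
  else
    volume_create "$name"
  fi
}

# Stub implementations so the sketch runs without podman:
volumes=""
volume_exists() { case " $volumes " in *" $1 "*) return 0 ;; *) return 1 ;; esac; }
volume_create() { volumes="$volumes $1"; echo "created $1"; }

create_volume_idempotent podman   # → created podman
create_volume_idempotent podman   # → volume podman already exists, reusing
```

The same effect can be had by deleting the leftover volume during the cleanup step before retrying; either way, the retry must not assume a clean slate.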
@afbjorklund afbjorklund added kind/bug Categorizes issue or PR as related to a bug. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. co/podman-driver podman driver issues labels May 9, 2020
@afbjorklund (Collaborator, Author)

Looks like the previous error is back:

$ sudo podman logs podman
...
INFO: setting iptables to detected mode: legacy
update-alternatives: error: no alternatives for iptables

Even now with the named volumes :-(

Version:            1.9.1
RemoteAPI Version:  1
Go Version:         go1.10.1
OS/Arch:            linux/amd64

@afbjorklund (Collaborator, Author) commented May 9, 2020

ls: cannot access '/var/lib/dpkg': No such file or directory

root@podman:/# ls /var/lib     
docker  minikube

That is, by mounting the /var volume we "lose" the original contents.

I thought this was working, but I guess that must have been temporary?


./out/minikube start --profile docker --driver docker

$ docker run -it -v docker:/var docker.io/library/ubuntu:19.10 /bin/bash
root@787ed620f1f8:/# ls /var/lib/
apt  containerd  containers  docker  dockershim  dpkg  kubelet  minikube  misc  pam  polkit-1  private  sudo  systemd  ucf

./out/minikube start --profile podman --driver podman

$ sudo podman run -it -v podman:/var docker.io/library/ubuntu:19.10 /bin/bash
root@d83b0437e686:/# ls /var/lib
docker  minikube
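The difference between the two listings is consistent with named-volume "copy-up" semantics: a volume that is still empty when the container starts gets populated from the image's /var, while a volume that already has contents (here, from the preload untar) masks the image contents instead. A rough simulation with plain shell values, no container engine involved (all names here are illustrative):

```shell
# Simulated copy-up: what ends up visible under /var/lib depends on whether
# the named volume was empty when the container started.
image_var_lib="apt containerd dpkg kubelet"   # baked into the image

mount_var_volume() {
  volume_contents="$1"
  if [ -z "$volume_contents" ]; then
    echo "$image_var_lib"          # empty volume: copy-up from the image
  else
    echo "$volume_contents"        # pre-filled volume: masks the image
  fi
}

mount_var_volume ""                  # → apt containerd dpkg kubelet
mount_var_volume "docker minikube"   # → docker minikube   (dpkg is gone)
```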

@afbjorklund (Collaborator, Author)

Apparently this is a timing issue. It's now a race between the untar of the preload into the volume and the regular systemd boot-up of the container. On docker the "run" wins, but on podman the "tar" wins...

I0509 14:36:56.815426   29243 kic.go:138] duration metric: took 7.063881 seconds to extract preloaded images to volume
I0509 14:36:58.658581   29243 client.go:164] LocalClient.Create took 9.005540283s
I0509 14:52:39.151024   27759 kic.go:138] duration metric: took 7.043847 seconds to extract preloaded images to volume
I0509 14:53:13.930638   27759 client.go:164] LocalClient.Create took 42.012570905s

Previously the preload tar was hardcoded to the docker volume, so it didn't break podman until e2d7d94.
It's probably a design flaw in KIC anyway; I think it should not mount the entire /var directory.
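One way to avoid this kind of race is to wait for the container to actually be running before extracting the preload. This is only a sketch of that mitigation, not the actual fix in #8057; `container_is_running` is a hypothetical stand-in for polling something like `podman inspect`:

```shell
# Poll the container state before extracting the preload, so the untar cannot
# race the engine's own startup sequence.
wait_until_running() {
  name="$1"; tries=0
  until container_is_running "$name"; do
    tries=$((tries + 1))
    [ "$tries" -ge 30 ] && return 1   # give up after ~30 attempts
    sleep 0.1                          # real code would use a longer interval
  done
  return 0
}

# Stub: reports "running" from the third poll onwards, so the sketch runs
# without a real container engine.
polls=0
container_is_running() {
  polls=$((polls + 1))
  [ "$polls" -ge 3 ]
}

wait_until_running podman && echo "ready: extract preload to volume"
```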

@afbjorklund (Collaborator, Author)

Commenting out the oci.ExtractTarballToVolume call makes it boot properly again...

😄  [podman] minikube v1.10.0-beta.2 on Ubuntu 18.04
❗  Both driver=podman and vm-driver=virtualbox have been set.

    Since vm-driver is deprecated, minikube will default to driver=podman.

    If vm-driver is set in the global config, please run "minikube config unset vm-driver" to resolve this warning.
✨  Using the podman (experimental) driver based on user configuration
👍  Starting control plane node podman in cluster podman
🔥  Creating podman container (CPUs=2, Memory=8000MB) ...
🐳  Preparing Kubernetes v1.18.1 on Docker 19.03.2 ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "podman"

real	0m43,249s
user	0m5,277s
sys	0m2,442s

@afbjorklund (Collaborator, Author)

$ ./out/minikube profile list
|----------|------------|---------|----------------|------|---------|---------|
| Profile  | VM Driver  | Runtime |       IP       | Port | Version | Status  |
|----------|------------|---------|----------------|------|---------|---------|
| docker   | docker     | docker  | 172.17.0.3     | 8443 | v1.18.1 | Running |
| minikube | virtualbox | docker  | 192.168.99.101 | 8443 | v1.18.0 | Running |
| podman   | podman     | docker  | 10.88.0.118    | 8443 | v1.18.1 | Running |
|----------|------------|---------|----------------|------|---------|---------|

virtualbox

                         _             _            
            _         _ ( )           ( )           
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __  
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

$ findmnt --tree /dev/sda1
TARGET                    SOURCE                           FSTYPE OPTIONS
/tmp/hostpath_pv          /dev/sda1[/hostpath_pv]          ext4   rw,relatime
/tmp/hostpath-provisioner /dev/sda1[/hostpath-provisioner] ext4   rw,relatime
/mnt/sda1                 /dev/sda1                        ext4   rw,relatime
/var/lib/boot2docker      /dev/sda1[/var/lib/boot2docker]  ext4   rw,relatime
/var/lib/docker           /dev/sda1[/var/lib/docker]       ext4   rw,relatime
/var/lib/containers       /dev/sda1[/var/lib/containers]   ext4   rw,relatime
/var/log                  /dev/sda1[/var/log]              ext4   rw,relatime
/var/lib/cni              /dev/sda1[/var/lib/cni]          ext4   rw,relatime
/var/lib/kubelet          /dev/sda1[/var/lib/kubelet]      ext4   rw,relatime
/data                     /dev/sda1[/data]                 ext4   rw,relatime
/var/lib/minikube         /dev/sda1[/var/lib/minikube]     ext4   rw,relatime
/var/lib/toolbox          /dev/sda1[/var/lib/toolbox]      ext4   rw,relatime
/var/lib/minishift        /dev/sda1[/var/lib/minishift]    ext4   rw,relatime

docker

docker@docker:~$ findmnt -u /dev/mapper/ubuntu--vg-root
TARGET         SOURCE                                                                                                                              FSTYPE OPTIONS
/var           /dev/mapper/ubuntu--vg-root[/var/lib/docker/volumes/docker/_data]                                                                   ext4   rw,relatime,errors=remount-ro,data=ordered
/usr/lib/modules
               /dev/mapper/ubuntu--vg-root[/lib/modules]                                                                                           ext4   ro,relatime,errors=remount-ro,data=ordered
/etc/resolv.conf
               /dev/mapper/ubuntu--vg-root[/var/lib/docker/containers/5d401c58befd06c0c0f33a826173c3a611b23ab54a40515000a9fc571785fc5e/resolv.conf]
                                                                                                                                                   ext4   rw,relatime,errors=remount-ro,data=ordered
/etc/hostname  /dev/mapper/ubuntu--vg-root[/var/lib/docker/containers/5d401c58befd06c0c0f33a826173c3a611b23ab54a40515000a9fc571785fc5e/hostname]   ext4   rw,relatime,errors=remount-ro,data=ordered
/etc/hosts     /dev/mapper/ubuntu--vg-root[/var/lib/docker/containers/5d401c58befd06c0c0f33a826173c3a611b23ab54a40515000a9fc571785fc5e/hosts]      ext4   rw,relatime,errors=remount-ro,data=ordered

podman

docker@podman:~$ findmnt -u /dev/mapper/ubuntu--vg-root      
TARGET           SOURCE                                                                        FSTYPE OPTIONS
/var             /dev/mapper/ubuntu--vg-root[/var/lib/containers/storage/volumes/podman/_data] ext4   rw,nosuid,nodev,relatime,errors=remount-ro,data=ordered
/usr/lib/modules /dev/mapper/ubuntu--vg-root[/lib/modules]                                     ext4   ro,relatime,errors=remount-ro,data=ordered
