
podman on Fedora 32: cannot exec into container that is not running: container state improper (no alternatives for iptables) #7885

Open · tstromberg opened this issue Apr 24, 2020 · 6 comments

Labels: co/podman-driver, kind/bug, lifecycle/frozen, os/linux, priority/backlog

@tstromberg (Contributor)

To replicate:

  • Install Fedora 32 beta (I used Hyper-V)
  • sudo sysctl fs.protected_regular=0
  • sudo ./minikube-linux-amd64 start --driver=podman
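For context (my addition, not from the report): fs.protected_regular is a kernel hardening knob that Fedora 32 enables by default, and it can make creating files in world-writable sticky directories fail for root. A hedged sketch of applying the workaround from the repro steps, optionally persisting it (file name is arbitrary):

```shell
# Apply the workaround for the current boot...
sudo sysctl fs.protected_regular=0
# ...and, if desired, persist it across reboots via a sysctl.d drop-in:
echo 'fs.protected_regular = 0' | sudo tee /etc/sysctl.d/99-protected-regular.conf
```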

Here's the output:

😄  minikube v1.10.0-beta.1 on Fedora 32
✨  Using the podman (experimental) driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
E0423 21:26:42.139792    6132 cache.go:117] Error downloading kic artifacts:  writing image: error loading image: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
🔥  Creating podman container (CPUs=2, Memory=2200MB) ...
🤦  StartHost failed, but will try again: creating host: create: creating: prepare kic ssh: apply authorized_keys file ownership, output
** stderr **
Error: cannot exec into container that is not running: container state improper
 
** /stderr **: chown docker:docker /home/docker/.ssh/authorized_keys: exit status 126
stdout:
 
stderr:
Error: cannot exec into container that is not running: container state improper
 
✋  Stopping "minikube" in podman ...
🔥  Deleting "minikube" in podman ...
🔥  Creating podman container (CPUs=2, Memory=2200MB) ...
😿  Failed to start podman container. "minikube start" may fix it: creating host: create: creating: create kic node: check container "minikube" running: temporary error created container "minikube" is not running yet
 
💣  error provisioning host: Failed to start host: creating host: create: creating: create kic node: check container "minikube" running: temporary error created container "minikube" is not running yet

Looking at podman, I see that the container exited unceremoniously:

[root@localhost t]# podman ps -a
CONTAINER ID  IMAGE                               COMMAND  CREATED        STATUS                    PORTS                                                                                                    NAMES
d929e0421912  gcr.io/k8s-minikube/kicbase:v0.0.9           3 minutes ago  Exited (2) 3 minutes ago  127.0.0.1:40851->22/tcp, 127.0.0.1:43399->2376/tcp, 127.0.0.1:36819->5000/tcp, 127.0.0.1:42079->8443/tcp  minikube
[root@localhost t]# podman logs d929e0421912
INFO: ensuring we can execute /bin/mount even with userns-remap
INFO: remounting /sys read-only
INFO: making mounts shared
INFO: fix cgroup mounts for all subsystems
INFO: clearing and regenerating /etc/machine-id
Initializing machine ID from random generator.
INFO: faking /sys/class/dmi/id/product_name to be "kind"
INFO: faking /sys/class/dmi/id/product_uuid to be random
INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well
INFO: setting iptables to detected mode: legacy
update-alternatives: error: no alternatives for iptables
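The failure above happens because the entrypoint calls `update-alternatives --set` for an iptables group that was never registered. As illustration only (a hypothetical sketch, not the actual kicbase fix; binary paths assume a Debian/Ubuntu-based image layout), registering the group first would let the mode switch succeed:

```shell
# Register legacy and nft variants as alternatives for iptables,
# so a later `update-alternatives --set` has something to select.
update-alternatives --install /usr/sbin/iptables iptables /usr/sbin/iptables-legacy 10 \
  --slave /usr/sbin/iptables-restore iptables-restore /usr/sbin/iptables-legacy-restore \
  --slave /usr/sbin/iptables-save iptables-save /usr/sbin/iptables-legacy-save
update-alternatives --install /usr/sbin/iptables iptables /usr/sbin/iptables-nft 20 \
  --slave /usr/sbin/iptables-restore iptables-restore /usr/sbin/iptables-nft-restore \
  --slave /usr/sbin/iptables-save iptables-save /usr/sbin/iptables-nft-save

# Now the entrypoint's detected-mode switch can work:
update-alternatives --set iptables /usr/sbin/iptables-legacy
```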
@afbjorklund (Collaborator) commented Apr 24, 2020

This looks like the same issue as #7631 (comment) and friends.

It should be fixed once #7480 is done; it still needs testing on Fedora.

@afbjorklund added the co/podman-driver, kind/bug, and priority/important-longterm labels Apr 24, 2020
@afbjorklund (Collaborator) commented Apr 25, 2020

Verified that the PR works OK on Fedora 32 Beta-1.2, but it needs a rebase.
(EDIT: I have updated the branch and fixed the merge conflicts; see #7631)

[anders@localhost ~]$ minikube start --driver=podman
😄  minikube v1.9.2 on Fedora 32 (vbox/amd64)
✨  Using the podman (experimental) driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating podman container (CPUs=2, Memory=2200MB) ...
🐳  Preparing Kubernetes v1.18.0 on Docker 19.03.2 ...
🕵️   Verifying Kubernetes Components:
    🔎 verifying node conditions ...
    🔎 verifying api server ...
    🔎 verifying system pods ...
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"
💡  For best results, install kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/
[anders@localhost ~]$ minikube version
minikube version: v1.9.2
commit: e99340b4ac8daa800ae9955a377afd4c44cba16a
[anders@localhost ~]$ sudo podman ps
CONTAINER ID  IMAGE                               COMMAND  CREATED         STATUS             PORTS                                                                                                     NAMES
adca0c6644b6  gcr.io/k8s-minikube/kicbase:v0.0.9           21 minutes ago  Up 21 minutes ago  127.0.0.1:38511->22/tcp, 127.0.0.1:33585->2376/tcp, 127.0.0.1:37965->5000/tcp, 127.0.0.1:37303->8443/tcp  minikube

It uses either docker (moby-engine) or the pre-installed podman (1.8.2).
The user needs to do the docker usermod or the podman visudo setup as required,
i.e. so that "docker version" and "sudo podman version" work properly.
But minikube start has checks and solution messages for these.
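A hedged sketch of the one-time host setup alluded to above (group name, sudoers path, and podman binary path are conventional assumptions, not taken from the thread):

```shell
# docker (moby-engine): add yourself to the docker group, then log out/in
sudo usermod -aG docker "$USER"

# podman (rootful): allow passwordless sudo for podman via a sudoers drop-in
echo "$USER ALL=(ALL) NOPASSWD: /usr/bin/podman" | sudo tee /etc/sudoers.d/minikube-podman
sudo chmod 0440 /etc/sudoers.d/minikube-podman

# afterwards both of these should work without prompting:
docker version
sudo podman version
```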

As per the discussion above, it is still broken on master (and the 1.10.0 betas).

@afbjorklund (Collaborator)

It also gives the proper error messages when trying to run minikube with sudo...
(except when using the none driver, which still installs under /root/.minikube)

There also seem to be known DNS errors when running this under virtualization;
that is, my containers have some issues reaching the 10.0.2.3 NAT DHCP server.

But the Kubernetes installation works OK from the preload, so that part boots up.
Maybe this needs to be tweaked to use an external DNS server or something?

Anyway, those issues are the same for all types of containers, not only minikube.
It is possible that something in the default Fedora network setup is blocking it...
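One way the external-DNS idea above could be tried (a hypothetical sketch; the resolver address and test hostname are examples, not from the thread):

```shell
# Per-container override: bypass the 10.0.2.3 stub resolver entirely
sudo podman run --rm --dns 1.1.1.1 alpine nslookup gcr.io

# Or globally — newer podman releases read /etc/containers/containers.conf:
#   [containers]
#   dns_servers = ["1.1.1.1"]
```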

@afbjorklund (Collaborator) commented Apr 25, 2020

Here is the plan to fix pulling of the base image, which currently only works for Docker: #7766
For now the minikube pull is disabled for podman, which means the run itself will do it.

Another benefit of dropping go-containerregistry is that we can show progress: #7012
Since the API is called pkg/v1/daemon, it's not really possible to "fix" it (it requires dockerd).

Currently we just show an image of a tractor (which doesn't really translate very well).
I know it is from https://en.wikipedia.org/wiki/Tractor_pulling, but the pun is lost in translation.

Currently the docker image is about twice as big as the VM ISO, after compression...
When unpacked on disk in the podman storage after the pull, it is five (5) times as big.

175M minikube-v1.9.0.iso
313M gcr.io/k8s-minikube/kicbase_v0.0.9

REPOSITORY                                                  TAG                  IMAGE ID       CREATED         SIZE
gcr.io/k8s-minikube/kicbase                                 v0.0.9               819f427ce275   2 weeks ago     975 MB

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Jul 24, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added lifecycle/rotten and removed lifecycle/stale labels Aug 23, 2020
@sharifelgamal added lifecycle/frozen and removed lifecycle/rotten labels Sep 16, 2020
@priyawadhwa added priority/backlog and removed priority/important-longterm labels Jan 25, 2021