driver=podman, container-engine=docker, restart stopped minikube OCI runtime error #7996

Closed · elegos opened this issue May 4, 2020 · 17 comments · Fixed by #8001
Labels: co/podman-driver · co/runtime/docker · kind/bug · priority/important-longterm

Comments

@elegos (Contributor) commented May 4, 2020

Minikube starts fine the first time. Once stopped, though, podman cannot start it again due to an OCI runtime error. This seems to happen only with minikube (i.e. there are no problems stopping and starting other services).

Steps to reproduce the issue:

  1. minikube start --driver=podman
  2. minikube stop
  3. minikube start

Full output of failed command:

    ~ : minikube start
😄  minikube v1.10.0-beta.2 on Fedora 32
✨  Using the podman (experimental) driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating podman container (CPUs=2, Memory=3900MB) ...
🐳  Preparing Kubernetes v1.18.1 on Docker 19.03.2 ...
🤦  Unable to restart cluster, will reset it: getting k8s client: client config: client config: context "minikube" does not exist
🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"
    ~ : minikube stop
✋  Stopping "minikube" in podman ...
🛑  Powering off "minikube" via SSH ...
✋  Stopping "minikube" in podman ...
🛑  Powering off "minikube" via SSH ...
✋  Stopping "minikube" in podman ...
🛑  Node "" stopped.
    ~ : minikube start
😄  minikube v1.10.0-beta.2 on Fedora 32
✨  Using the podman (experimental) driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🔄  Restarting existing podman container for "minikube" ...
🤦  StartHost failed, but will try again: driver start: start: sudo podman start minikube: exit status 125
stdout:

stderr:
Error: unable to start container "minikube": sd-bus call: Invalid argument: OCI runtime error

🔄  Restarting existing podman container for "minikube" ...
😿  Failed to start podman container. "minikube start" may fix it: driver start: start: sudo podman start minikube: exit status 125
stdout:

stderr:
Error: unable to start container "minikube": sd-bus call: Invalid argument: OCI runtime error


💣  error provisioning host: Failed to start host: driver start: start: sudo podman start minikube: exit status 125
stdout:

stderr:
Error: unable to start container "minikube": sd-bus call: Invalid argument: OCI runtime error


😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
👉  https://github.com/kubernetes/minikube/issues/new/choose

minikube start with --alsologtostderr option:

    ~ : minikube start --alsologtostderr
I0504 19:42:04.195690   17511 start.go:99] hostinfo: {"hostname":"localhost.localdomain","uptime":642,"bootTime":1588613482,"procs":432,"os":"linux","platform":"fedora","platformFamily":"fedora","platformVersion":"32","kernelVersion":"5.6.8-300.fc32.x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"5a3e2727-374c-4665-8b07-e67a1fc66448"}
I0504 19:42:04.196612   17511 start.go:109] virtualization: kvm host
😄  minikube v1.10.0-beta.2 on Fedora 32
I0504 19:42:04.196779   17511 notify.go:125] Checking for updates...
I0504 19:42:04.197146   17511 driver.go:253] Setting default libvirt URI to qemu:///system
I0504 19:42:04.253079   17511 podman.go:97] podman version: 1.9.1
✨  Using the podman (experimental) driver based on existing profile
I0504 19:42:04.253177   17511 start.go:206] selected driver: podman
I0504 19:42:04.253188   17511 start.go:579] validating driver "podman" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: Memory:3900 CPUs:2 DiskSize:20000 Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.1 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.1 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] VerifyComponents:map[apiserver:true system_pods:true]}
I0504 19:42:04.253277   17511 start.go:585] status for podman: {Installed:true Healthy:true Error:<nil> Fix: Doc:}
👍  Starting control plane node minikube in cluster minikube
I0504 19:42:04.253410   17511 cache.go:103] Beginning downloading kic artifacts for podman with docker
I0504 19:42:04.253423   17511 cache.go:115] Driver isn't docker, skipping base-image download
I0504 19:42:04.253434   17511 preload.go:81] Checking if preload exists for k8s version v1.18.1 and runtime docker
I0504 19:42:04.253465   17511 preload.go:96] Found local preload: /home/elegos/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.1-docker-overlay2-amd64.tar.lz4
I0504 19:42:04.253475   17511 cache.go:47] Caching tarball of preloaded images
I0504 19:42:04.253491   17511 preload.go:122] Found /home/elegos/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0504 19:42:04.253501   17511 cache.go:50] Finished verifying existence of preloaded tar for  v1.18.1 on docker
I0504 19:42:04.253600   17511 profile.go:156] Saving config to /home/elegos/.minikube/profiles/minikube/config.json ...
I0504 19:42:04.253873   17511 cache.go:125] Successfully downloaded all kic artifacts
I0504 19:42:04.253898   17511 start.go:223] acquiring machines lock for minikube: {Name:mk54bbd76b9ba071d84e6139eee3a3cd7ecc36f4 Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0504 19:42:04.254300   17511 start.go:227] acquired machines lock for "minikube" in 379.272µs
I0504 19:42:04.254323   17511 start.go:87] Skipping create...Using existing machine configuration
I0504 19:42:04.254332   17511 fix.go:53] fixHost starting: 
I0504 19:42:04.254721   17511 cli_runner.go:108] Run: sudo podman inspect minikube --format={{.State.Status}}
I0504 19:42:04.324147   17511 fix.go:105] recreateIfNeeded on minikube: state=Stopped err=<nil>
W0504 19:42:04.324175   17511 fix.go:131] unexpected machine state, will restart: <nil>
🔄  Restarting existing podman container for "minikube" ...
I0504 19:42:04.324623   17511 exec_runner.go:49] Run: sudo podman start minikube
I0504 19:42:04.669868   17511 fix.go:55] fixHost completed within 415.534003ms
I0504 19:42:04.669885   17511 start.go:74] releasing machines lock for "minikube", held for 415.570933ms
🤦  StartHost failed, but will try again: driver start: start: sudo podman start minikube: exit status 125
stdout:

stderr:
Error: unable to start container "minikube": sd-bus call: Invalid argument: OCI runtime error

I0504 19:42:09.670062   17511 start.go:223] acquiring machines lock for minikube: {Name:mk54bbd76b9ba071d84e6139eee3a3cd7ecc36f4 Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0504 19:42:09.670314   17511 start.go:227] acquired machines lock for "minikube" in 218.322µs
I0504 19:42:09.670337   17511 start.go:87] Skipping create...Using existing machine configuration
I0504 19:42:09.670346   17511 fix.go:53] fixHost starting: 
I0504 19:42:09.670748   17511 cli_runner.go:108] Run: sudo podman inspect minikube --format={{.State.Status}}
I0504 19:42:09.749097   17511 fix.go:105] recreateIfNeeded on minikube: state=Stopped err=<nil>
W0504 19:42:09.749114   17511 fix.go:131] unexpected machine state, will restart: <nil>
🔄  Restarting existing podman container for "minikube" ...
I0504 19:42:09.749272   17511 exec_runner.go:49] Run: sudo podman start minikube
I0504 19:42:10.077585   17511 fix.go:55] fixHost completed within 407.236685ms
I0504 19:42:10.077604   17511 start.go:74] releasing machines lock for "minikube", held for 407.277395ms
😿  Failed to start podman container. "minikube start" may fix it: driver start: start: sudo podman start minikube: exit status 125
stdout:

stderr:
Error: unable to start container "minikube": sd-bus call: Invalid argument: OCI runtime error

I0504 19:42:10.077735   17511 exit.go:58] WithError(error provisioning host)=Failed to start host: driver start: start: sudo podman start minikube: exit status 125
stdout:

stderr:
Error: unable to start container "minikube": sd-bus call: Invalid argument: OCI runtime error
 called from:
goroutine 1 [running]:
runtime/debug.Stack(0x0, 0x0, 0x0)
        /usr/lib/golang/src/runtime/debug/stack.go:24 +0x9d
k8s.io/minikube/pkg/minikube/exit.WithError(0x1ade51e, 0x17, 0x1d98a00, 0xc0007c21a0)
        /home/elegos/Development/minikube/pkg/minikube/exit/exit.go:58 +0x34
k8s.io/minikube/cmd/minikube/cmd.runStart(0x2ae98a0, 0xc0004b72d0, 0x0, 0x1)
        /home/elegos/Development/minikube/cmd/minikube/cmd/start.go:161 +0xa7f
github.com/spf13/cobra.(*Command).execute(0x2ae98a0, 0xc0004b72c0, 0x1, 0x1, 0x2ae98a0, 0xc0004b72c0)
        /home/elegos/go/pkg/mod/github.com/spf13/[email protected]/command.go:846 +0x2aa
github.com/spf13/cobra.(*Command).ExecuteC(0x2ae88e0, 0x0, 0x1, 0xc000049ea0)
        /home/elegos/go/pkg/mod/github.com/spf13/[email protected]/command.go:950 +0x349
github.com/spf13/cobra.(*Command).Execute(...)
        /home/elegos/go/pkg/mod/github.com/spf13/[email protected]/command.go:887
k8s.io/minikube/cmd/minikube/cmd.Execute()
        /home/elegos/Development/minikube/cmd/minikube/cmd/root.go:108 +0x6a4
main.main()
        /home/elegos/Development/minikube/cmd/minikube/main.go:66 +0xea
W0504 19:42:10.077876   17511 out.go:201] error provisioning host: Failed to start host: driver start: start: sudo podman start minikube: exit status 125
stdout:

stderr:
Error: unable to start container "minikube": sd-bus call: Invalid argument: OCI runtime error

💣  error provisioning host: Failed to start host: driver start: start: sudo podman start minikube: exit status 125
stdout:

stderr:
Error: unable to start container "minikube": sd-bus call: Invalid argument: OCI runtime error


😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
👉  https://github.com/kubernetes/minikube/issues/new/choose

sudo podman start minikube

    ~ : sudo podman start minikube
Error: unable to start container "minikube": sd-bus call: Invalid argument: OCI runtime error


Thank you for your awesome work :)

@elegos elegos changed the title driver=podman, runtime-engine=docker, restart stopped minikube OCI runtime error driver=podman, container-engine=docker, restart stopped minikube OCI runtime error May 4, 2020
@afbjorklund afbjorklund added the co/podman-driver podman driver issues label May 4, 2020
@afbjorklund (Collaborator) commented May 4, 2020

Possibly related to containers/podman#4481 and migration issues from cgroups v2

EDIT: Nope, seems like only the error was the same. Easy to reproduce this (stop and start)

unable to start container "minikube": sd-bus call: Invalid argument: OCI runtime error

@afbjorklund afbjorklund added kind/bug Categorizes issue or PR as related to a bug. priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. labels May 4, 2020
@afbjorklund (Collaborator) commented May 4, 2020

But it does seem like sudo podman --cgroup-manager cgroupfs start minikube makes it work...

So it might be something related after all, though more like systemd vs cgroupfs than v1 vs v2?

Seems to be more like an accident, but anyway.

The usual start was:

WARN[0000] Failed to add conmon to systemd sandbox cgroup: Invalid unit name '/libpod_parent'
DEBU[000] Received: -1

This start now is:

WARN[0000] Failed to add conmon to cgroupfs sandbox cgroup: write /sys/fs/cgroup/unified/libpod_parent/conmon/tasks: open /sys/fs/cgroup/unified/libpod_parent/conmon/tasks: permission denied
DEBU[000] Received: 3114

So this failure is somehow more acceptable to it.

@elegos (Contributor, Author) commented May 4, 2020

This bug seems reproducible with --container-runtime=cri-o, too.

@afbjorklund (Collaborator)

The code says:

        // to run nested container from privileged container in podman https://bugzilla.redhat.com/show_bug.cgi?id=1687713
        if ociBin == Podman {
                args = append(args, "--cgroup-manager", "cgroupfs")
        }

So if we are going to use that workaround, we need it for "podman start" as well (see the sketch below)...

Because on fedora, the default is systemd. But it seems it doesn't work for DIND:

We default to using systemd for management, but systemd is not available inside of the container.
Another option that might work would be to volume mount in /run/systemd into the container.
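
A minimal sketch of what extending that workaround to the start path could look like. The startPodmanContainer helper and its error handling are assumptions for illustration, not minikube's actual code; note that --cgroup-manager is a global podman flag, so it is placed before the subcommand, mirroring the manual command tried above:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // startPodmanContainer shows the same --cgroup-manager cgroupfs
    // workaround applied to "podman start". The flag is global, so it
    // goes before the subcommand, as in:
    //   sudo podman --cgroup-manager cgroupfs start minikube
    func startPodmanContainer(name string) error {
        args := []string{"podman", "--cgroup-manager", "cgroupfs", "start", name}
        out, err := exec.Command("sudo", args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("sudo podman start %s: %v\noutput: %s", name, err, out)
        }
        return nil
    }

    func main() {
        if err := startPodmanContainer("minikube"); err != nil {
            fmt.Println(err)
        }
    }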


On a side note, the error reporting code is broken.

There is an {{error}} where it should be {{.error}}
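
For reference, a standalone illustration of why that one character matters in Go's text/template (a generic sketch, not minikube's actual template code): {{.error}} looks up a field on the data passed in, while {{error}} is parsed as a call to an undefined function, so the template fails to parse at all.

    package main

    import (
        "os"
        "text/template"
    )

    func main() {
        data := map[string]string{"error": "exit status 125"}

        // {{.error}} is a lookup of the "error" key on the data passed in.
        ok := template.Must(template.New("ok").Parse("failed: {{.error}}\n"))
        _ = ok.Execute(os.Stdout, data) // prints: failed: exit status 125

        // {{error}} is parsed as a call to a function named "error", which
        // is not defined, so parsing fails before any data is rendered.
        if _, err := template.New("bad").Parse("failed: {{error}}\n"); err != nil {
            os.Stdout.WriteString(err.Error() + "\n") // function "error" not defined
        }
    }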

@afbjorklund afbjorklund added priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. and removed priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. labels May 4, 2020
@afbjorklund (Collaborator)

This bug seems reproducible with --container-runtime=cri-o, too.

The bug is in the driver runtime, so it is independent of the inner runtime.

@elegos (Contributor, Author) commented May 4, 2020

Hello @afbjorklund

I've tried your tree, and I've encountered the same issue:

    minikube    podman-start  : minikube start --container-runtime=cri-o
😄  minikube v1.10.0-beta.2 on Fedora 32
✨  Using the podman (experimental) driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating podman container (CPUs=2, Memory=3900MB) ...
🎁  Preparing Kubernetes v1.18.1 on CRI-O 1.17.3 ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"
    minikube    podman-start  : minikube stop
✋  Stopping "minikube" in podman ...
🛑  Powering off "minikube" via SSH ...
✋  Stopping "minikube" in podman ...
🛑  Node "" stopped.
    minikube    podman-start  : minikube start
😄  minikube v1.10.0-beta.2 on Fedora 32
✨  Using the podman (experimental) driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🔄  Restarting existing podman container for "minikube" ...
🤦  StartHost failed, but will try again: driver start: start: sudo podman start minikube: exit status 125
stdout:

stderr:
Error: unable to start container "minikube": sd-bus call: Invalid argument: OCI runtime error

🔄  Restarting existing podman container for "minikube" ...
😿  Failed to start podman container. "minikube start" may fix it: driver start: start: sudo podman start minikube: exit status 125
stdout:

stderr:
Error: unable to start container "minikube": sd-bus call: Invalid argument: OCI runtime error


💣  error provisioning host: Failed to start host: driver start: start: sudo podman start minikube: exit status 125
stdout:

stderr:
Error: unable to start container "minikube": sd-bus call: Invalid argument: OCI runtime error


😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
👉  https://github.com/kubernetes/minikube/issues/new/choose

@afbjorklund (Collaborator)

That is weird, where did the "--cgroup-manager cgroupfs" go?

as in: sudo podman --cgroup-manager cgroupfs start minikube

@afbjorklund (Collaborator)

That is weird, where did the "--cgroup-manager cgroupfs" go?

as in: sudo podman --cgroup-manager cgroupfs start minikube

It is not in your output:

🤦 StartHost failed, but will try again: driver start: start: sudo podman start minikube: exit status 125

@elegos (Contributor, Author) commented May 4, 2020

I've added out.Ln(strings.Join(args[:], " ")) just before oci.go#270 and it prints out start --cgroup-manager cgroupfs minikube, so I'm not sure whether the arguments get cut out somewhere else...

    minikube    podman-start  : minikube start
😄  minikube v1.10.0-beta.2 on Fedora 32
✨  Using the podman (experimental) driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🔄  Restarting existing podman container for "minikube" ...
start --cgroup-manager cgroupfs minikube
🤦  StartHost failed, but will try again: driver start: start: exit status 125
🔄  Restarting existing podman container for "minikube" ...
start --cgroup-manager cgroupfs minikube
😿  Failed to start podman container. "minikube start" may fix it: driver start: start: exit status 125

💣  error provisioning host: Failed to start host: driver start: start: exit status 125

😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
👉  https://github.com/kubernetes/minikube/issues/new/choose

@elegos (Contributor, Author) commented May 5, 2020

In any case I've tried manually restarting the minikube container, but with no better results (though it fails with a more descriptive error):

$ sudo podman start --cgroup-manager cgroupfs minikube
Error: unable to start container "minikube": writing file `devices.allow`: Invalid argument: OCI runtime error

@afbjorklund (Collaborator)

I think you also want to increase verbosity:

sudo podman --log-level debug --cgroup-manager cgroupfs start minikube

But the error looked different this time?

@elegos (Contributor, Author) commented May 5, 2020

I know nothing but what the console tells me :(

$ sudo podman --log-level debug --cgroup-manager cgroupfs start minikube
DEBU[0000] Found deprecated file /usr/share/containers/libpod.conf, please remove. Use /etc/containers/containers.conf to override defaults. 
DEBU[0000] Reading configuration file "/usr/share/containers/libpod.conf" 
DEBU[0000] Ignoring lipod.conf EventsLogger setting "journald". Use containers.conf if you want to change this setting and remove libpod.conf files. 
DEBU[0000] Reading configuration file "/usr/share/containers/containers.conf" 
DEBU[0000] Merged system config "/usr/share/containers/containers.conf": &{{[] [] container-default [] host [CAP_AUDIT_WRITE CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER CAP_FSETID CAP_KILL CAP_MKNOD CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETFCAP CAP_SETGID CAP_SETPCAP CAP_SETUID CAP_SYS_CHROOT] [] [nproc=4194304:4194304]  [] [] [] true [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] false false false  private k8s-file -1 bridge false 2048 private /usr/share/containers/seccomp.json 65536k private host 65536} {false systemd [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] [/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] ctrl-p,ctrl-q true /var/run/libpod/events/events.log file [/usr/share/containers/oci/hooks.d] docker:// /pause k8s.gcr.io/pause:3.2 /usr/libexec/podman/catatonit shm   false 2048 crun map[crun:[/usr/bin/crun /usr/sbin/crun /usr/local/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime /usr/bin/kata-qemu /usr/bin/kata-fc] kata-fc:[/usr/bin/kata-fc] kata-qemu:[/usr/bin/kata-qemu] kata-runtime:[/usr/bin/kata-runtime] runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc]] missing [] [crun runc] [crun] {false false false true true true}  false 3 /var/lib/containers/storage/libpod 10 /var/run/libpod /var/lib/containers/storage/volumes} {[/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] podman /etc/cni/net.d/}} 
DEBU[0000] Using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /var/lib/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /var/lib/containers/storage 
DEBU[0000] Using run root /var/run/containers/storage   
DEBU[0000] Using static dir /var/lib/containers/storage/libpod 
DEBU[0000] Using tmp dir /var/run/libpod                
DEBU[0000] Using volume path /var/lib/containers/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] overlay test mount with multiple lowers succeeded 
DEBU[0000] overlay test mount indicated that metacopy is being used 
WARN[0000] Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled 
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true 
DEBU[0000] Initializing event backend file              
WARN[0000] Error initializing configured OCI runtime kata: no valid executable found for OCI runtime kata: invalid argument 
WARN[0000] Error initializing configured OCI runtime kata-runtime: no valid executable found for OCI runtime kata-runtime: invalid argument 
WARN[0000] Error initializing configured OCI runtime kata-qemu: no valid executable found for OCI runtime kata-qemu: invalid argument 
WARN[0000] Error initializing configured OCI runtime kata-fc: no valid executable found for OCI runtime kata-fc: invalid argument 
DEBU[0000] using runtime "/usr/bin/runc"                
DEBU[0000] using runtime "/usr/bin/crun"                
INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist 
WARN[0000] Default CNI network name podman is unchangeable 
DEBU[0000] Initialized SHM lock manager at path /libpod_lock 
DEBU[0000] Podman detected system restart - performing state refresh 
DEBU[0000] Made network namespace at /var/run/netns/cni-6b8f6035-3e7c-8aa9-3477-955f2ad0265a for container 5c3e1424232ca058aba55490fc143574428f6f63aa8cb97dc637417ac9ca1864 
INFO[0000] About to add CNI network lo (type=loopback)  
INFO[0000] Got pod network &{Name:minikube Namespace:minikube ID:5c3e1424232ca058aba55490fc143574428f6f63aa8cb97dc637417ac9ca1864 NetNS:/var/run/netns/cni-6b8f6035-3e7c-8aa9-3477-955f2ad0265a Networks:[] RuntimeConfig:map[podman:{IP: MAC: PortMappings:[{HostPort:43077 ContainerPort:5000 Protocol:tcp HostIP:127.0.0.1} {HostPort:42465 ContainerPort:8443 Protocol:tcp HostIP:127.0.0.1} {HostPort:38729 ContainerPort:22 Protocol:tcp HostIP:127.0.0.1} {HostPort:39535 ContainerPort:2376 Protocol:tcp HostIP:127.0.0.1}] Bandwidth:<nil> IpRanges:[]}]} 
INFO[0000] About to add CNI network podman (type=bridge) 
DEBU[0000] overlay: mount_data=nodev,metacopy=on,lowerdir=/var/lib/containers/storage/overlay/l/TFX323LODTL2RJPGRAISRGRBS7:/var/lib/containers/storage/overlay/l/SXHVX7FDEE5OZFTJHI6YQ5ELLB:/var/lib/containers/storage/overlay/l/AYSYXAN44FUQZFABTKGS4WACMC:/var/lib/containers/storage/overlay/l/YGULYJQKYBWJZOUDC2WT5232Q2:/var/lib/containers/storage/overlay/l/43DRKDH7Q7PMPIRUTTJIPXRUZ4:/var/lib/containers/storage/overlay/l/AQ2LPE36QEKHWPMJOE2KSOLYU2:/var/lib/containers/storage/overlay/l/BRS2OVYIJHBWAZKMMB2P7XYK5A:/var/lib/containers/storage/overlay/l/NGSSKMRHDA2WAO6ACDFKWIS625:/var/lib/containers/storage/overlay/l/FCS536OPFGW6QCZVKFZCUYHZPY:/var/lib/containers/storage/overlay/l/XQG3QBEAULUXSOEP3EEEL56GEQ:/var/lib/containers/storage/overlay/l/IAP6SBUS74DZ7BL6OUQE3IVNUW:/var/lib/containers/storage/overlay/l/PWEKOX4BCKL4IZLE3Y7EZAA7GL:/var/lib/containers/storage/overlay/l/XYXW3NE6E7FN54YHSQFBLXAMSC:/var/lib/containers/storage/overlay/l/I4OY2QILLQM5TYOBBWZCIKQYNR:/var/lib/containers/storage/overlay/l/QB36IXDZTJBKUFULJBAFRPFECT:/var/lib/containers/storage/overlay/l/GNHON5A7C6RR2OICZAYP7KQTT2:/var/lib/containers/storage/overlay/l/DLMF53FULDJBMBUKIUKS3COP2I:/var/lib/containers/storage/overlay/l/GMLLDGHBA7VQFC4G45M37HDP7G:/var/lib/containers/storage/overlay/l/SLXUGJA7TYYHZ6WAYVX7WHGFRV:/var/lib/containers/storage/overlay/l/YJS6UANXRCJM6KYXRMGIMDLI2K:/var/lib/containers/storage/overlay/l/4VM2CHPSLEAPIGQKOGP2BOJIYT:/var/lib/containers/storage/overlay/l/XKKFBLCJDWVRA7YLUEHUIX4CDM,upperdir=/var/lib/containers/storage/overlay/9549f24bf386f7f2bff2457e346c093781c0e3bfdd8f36e73400aead71a2b26d/diff,workdir=/var/lib/containers/storage/overlay/9549f24bf386f7f2bff2457e346c093781c0e3bfdd8f36e73400aead71a2b26d/work,context="system_u:object_r:container_file_t:s0:c364,c430" 
DEBU[0000] mounted container "5c3e1424232ca058aba55490fc143574428f6f63aa8cb97dc637417ac9ca1864" at "/var/lib/containers/storage/overlay/9549f24bf386f7f2bff2457e346c093781c0e3bfdd8f36e73400aead71a2b26d/merged" 
DEBU[0000] Created root filesystem for container 5c3e1424232ca058aba55490fc143574428f6f63aa8cb97dc637417ac9ca1864 at /var/lib/containers/storage/overlay/9549f24bf386f7f2bff2457e346c093781c0e3bfdd8f36e73400aead71a2b26d/merged 
DEBU[0000] [0] CNI result: &{0.4.0 [{Name:cni-podman0 Mac:6a:ec:90:4c:45:62 Sandbox:} {Name:veth88817626 Mac:fe:b7:f0:cc:2c:92 Sandbox:} {Name:eth0 Mac:06:c9:d5:a1:ac:a6 Sandbox:/var/run/netns/cni-6b8f6035-3e7c-8aa9-3477-955f2ad0265a}] [{Version:4 Interface:0xc00037fa48 Address:{IP:10.88.0.108 Mask:ffff0000} Gateway:10.88.0.1}] [{Dst:{IP:0.0.0.0 Mask:00000000} GW:<nil>}] {[]  [] []}} 
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode secret 
DEBU[0000] Setting CGroup path for container 5c3e1424232ca058aba55490fc143574428f6f63aa8cb97dc637417ac9ca1864 to /libpod_parent/libpod-5c3e1424232ca058aba55490fc143574428f6f63aa8cb97dc637417ac9ca1864 
DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d 
DEBU[0000] Created OCI spec for container 5c3e1424232ca058aba55490fc143574428f6f63aa8cb97dc637417ac9ca1864 at /var/lib/containers/storage/overlay-containers/5c3e1424232ca058aba55490fc143574428f6f63aa8cb97dc637417ac9ca1864/userdata/config.json 
DEBU[0000] /usr/bin/conmon messages will be logged to syslog 
DEBU[0000] running conmon: /usr/bin/conmon               args="[--api-version 1 -c 5c3e1424232ca058aba55490fc143574428f6f63aa8cb97dc637417ac9ca1864 -u 5c3e1424232ca058aba55490fc143574428f6f63aa8cb97dc637417ac9ca1864 -r /usr/bin/crun -b /var/lib/containers/storage/overlay-containers/5c3e1424232ca058aba55490fc143574428f6f63aa8cb97dc637417ac9ca1864/userdata -p /var/run/containers/storage/overlay-containers/5c3e1424232ca058aba55490fc143574428f6f63aa8cb97dc637417ac9ca1864/userdata/pidfile -l k8s-file:/var/lib/containers/storage/overlay-containers/5c3e1424232ca058aba55490fc143574428f6f63aa8cb97dc637417ac9ca1864/userdata/ctr.log --exit-dir /var/run/libpod/exits --socket-dir-path /var/run/libpod/socket --log-level debug --syslog -t --conmon-pidfile /var/run/containers/storage/overlay-containers/5c3e1424232ca058aba55490fc143574428f6f63aa8cb97dc637417ac9ca1864/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /var/run/containers/storage --exit-command-arg --log-level --exit-command-arg error --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /var/run/libpod --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 5c3e1424232ca058aba55490fc143574428f6f63aa8cb97dc637417ac9ca1864]"
WARN[0000] Failed to add conmon to cgroupfs sandbox cgroup: write /sys/fs/cgroup/unified/libpod_parent/conmon/tasks: open /sys/fs/cgroup/unified/libpod_parent/conmon/tasks: permission denied 
DEBU[0000] Received: 10814                              
INFO[0000] Got Conmon PID as 10810                      
DEBU[0000] Created container 5c3e1424232ca058aba55490fc143574428f6f63aa8cb97dc637417ac9ca1864 in OCI runtime 
DEBU[0000] Starting container 5c3e1424232ca058aba55490fc143574428f6f63aa8cb97dc637417ac9ca1864 with command [/usr/local/bin/entrypoint /sbin/init] 
DEBU[0000] Started container 5c3e1424232ca058aba55490fc143574428f6f63aa8cb97dc637417ac9ca1864 
minikube

And before you ask:

dnf provides /usr/share/containers/libpod.conf
Last metadata expiration check: 18:00:33 ago on Mon 4 May 2020, 14:36:16.
podman-2:1.8.2-2.fc32.x86_64 : Manage Pods, Containers and Container Images
Repo         : fedora
Matched from:
Filename    : /usr/share/containers/libpod.conf

podman-2:1.9.1-1.fc32.x86_64 : Manage Pods, Containers and Container Images
Repo         : @System
Matched from:
Filename    : /usr/share/containers/libpod.conf

podman-2:1.9.1-1.fc32.x86_64 : Manage Pods, Containers and Container Images
Repo         : updates
Matched from:
Filename    : /usr/share/containers/libpod.conf

dnf provides /usr/share/containers/containers.conf
Last metadata expiration check: 18:01:03 ago on Mon 4 May 2020, 14:36:16.
containers-common-1:0.1.41-1.fc32.x86_64 : Configuration files for working with image signatures
Repo         : fedora
Matched from:
Filename    : /usr/share/containers/containers.conf

containers-common-1:0.2.0-1.fc32.x86_64 : Configuration files for working with image signatures
Repo         : @System
Matched from:
Filename    : /usr/share/containers/containers.conf

containers-common-1:0.2.0-1.fc32.x86_64 : Configuration files for working with image signatures
Repo         : updates
Matched from:
Filename    : /usr/share/containers/containers.conf

@afbjorklund (Collaborator) commented May 5, 2020

Looks happy enough now

This is just an rpm packaging issue:
Found deprecated file /usr/share/containers/libpod.conf, please remove. Use /etc/containers/containers.conf to override defaults.
(shouldn't affect anything)

@elegos (Contributor, Author) commented May 5, 2020

Unfortunately it did nothing :(

$ sudo podman --log-level debug --cgroup-manager cgroupfs start minikube
DEBU[0000] Ignoring lipod.conf EventsLogger setting "journald". Use containers.conf if you want to change this setting and remove libpod.conf files. 
DEBU[0000] Reading configuration file "/usr/share/containers/containers.conf" 
DEBU[0000] Merged system config "/usr/share/containers/containers.conf": &{{[] [] container-default [] host [CAP_AUDIT_WRITE CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER CAP_FSETID CAP_KILL CAP_MKNOD CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETFCAP CAP_SETGID CAP_SETPCAP CAP_SETUID CAP_SYS_CHROOT] [] [nproc=4194304:4194304]  [] [] [] true [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] false false false  private k8s-file -1 bridge false 2048 private /usr/share/containers/seccomp.json 65536k private host 65536} {false systemd [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] [/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] ctrl-p,ctrl-q true /var/run/libpod/events/events.log file [/usr/share/containers/oci/hooks.d] docker:// /pause k8s.gcr.io/pause:3.2 /usr/libexec/podman/catatonit shm   false 2048 runc map[crun:[/usr/bin/crun /usr/sbin/crun /usr/local/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime /usr/bin/kata-qemu /usr/bin/kata-fc] runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc]] missing [] [crun runc] [crun] {false false false true true true}  false 3 /var/lib/containers/storage/libpod 10 /var/run/libpod /var/lib/containers/storage/volumes} {[/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] podman /etc/cni/net.d/}} 
DEBU[0000] Using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /var/lib/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /var/lib/containers/storage 
DEBU[0000] Using run root /var/run/containers/storage   
DEBU[0000] Using static dir /var/lib/containers/storage/libpod 
DEBU[0000] Using tmp dir /var/run/libpod                
DEBU[0000] Using volume path /var/lib/containers/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] cached value indicated that overlay is supported 
DEBU[0000] cached value indicated that metacopy is being used 
DEBU[0000] cached value indicated that native-diff is not being used 
WARN[0000] Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled 
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true 
DEBU[0000] Initializing event backend file              
DEBU[0000] using runtime "/usr/bin/runc"                
DEBU[0000] using runtime "/usr/bin/crun"                
WARN[0000] Error initializing configured OCI runtime kata: no valid executable found for OCI runtime kata: invalid argument 
INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist 
WARN[0000] Default CNI network name podman is unchangeable 
DEBU[0000] Made network namespace at /var/run/netns/cni-03015af0-a448-1270-613c-0462929404aa for container 9bdac471247f794fb7514b388233a67592254e3502b9f4b1640721863783d93c 
INFO[0000] About to add CNI network lo (type=loopback)  
DEBU[0000] overlay: mount_data=nodev,metacopy=on,lowerdir=/var/lib/containers/storage/overlay/l/TFX323LODTL2RJPGRAISRGRBS7:/var/lib/containers/storage/overlay/l/SXHVX7FDEE5OZFTJHI6YQ5ELLB:/var/lib/containers/storage/overlay/l/AYSYXAN44FUQZFABTKGS4WACMC:/var/lib/containers/storage/overlay/l/YGULYJQKYBWJZOUDC2WT5232Q2:/var/lib/containers/storage/overlay/l/43DRKDH7Q7PMPIRUTTJIPXRUZ4:/var/lib/containers/storage/overlay/l/AQ2LPE36QEKHWPMJOE2KSOLYU2:/var/lib/containers/storage/overlay/l/BRS2OVYIJHBWAZKMMB2P7XYK5A:/var/lib/containers/storage/overlay/l/NGSSKMRHDA2WAO6ACDFKWIS625:/var/lib/containers/storage/overlay/l/FCS536OPFGW6QCZVKFZCUYHZPY:/var/lib/containers/storage/overlay/l/XQG3QBEAULUXSOEP3EEEL56GEQ:/var/lib/containers/storage/overlay/l/IAP6SBUS74DZ7BL6OUQE3IVNUW:/var/lib/containers/storage/overlay/l/PWEKOX4BCKL4IZLE3Y7EZAA7GL:/var/lib/containers/storage/overlay/l/XYXW3NE6E7FN54YHSQFBLXAMSC:/var/lib/containers/storage/overlay/l/I4OY2QILLQM5TYOBBWZCIKQYNR:/var/lib/containers/storage/overlay/l/QB36IXDZTJBKUFULJBAFRPFECT:/var/lib/containers/storage/overlay/l/GNHON5A7C6RR2OICZAYP7KQTT2:/var/lib/containers/storage/overlay/l/DLMF53FULDJBMBUKIUKS3COP2I:/var/lib/containers/storage/overlay/l/GMLLDGHBA7VQFC4G45M37HDP7G:/var/lib/containers/storage/overlay/l/SLXUGJA7TYYHZ6WAYVX7WHGFRV:/var/lib/containers/storage/overlay/l/YJS6UANXRCJM6KYXRMGIMDLI2K:/var/lib/containers/storage/overlay/l/4VM2CHPSLEAPIGQKOGP2BOJIYT:/var/lib/containers/storage/overlay/l/XKKFBLCJDWVRA7YLUEHUIX4CDM,upperdir=/var/lib/containers/storage/overlay/acf7053a46135b868c8a75fd3b59a81c7ad50b9230cb44cace7be06132d41d3d/diff,workdir=/var/lib/containers/storage/overlay/acf7053a46135b868c8a75fd3b59a81c7ad50b9230cb44cace7be06132d41d3d/work,context="system_u:object_r:container_file_t:s0:c852,c959" 
DEBU[0000] mounted container "9bdac471247f794fb7514b388233a67592254e3502b9f4b1640721863783d93c" at "/var/lib/containers/storage/overlay/acf7053a46135b868c8a75fd3b59a81c7ad50b9230cb44cace7be06132d41d3d/merged" 
DEBU[0000] Created root filesystem for container 9bdac471247f794fb7514b388233a67592254e3502b9f4b1640721863783d93c at /var/lib/containers/storage/overlay/acf7053a46135b868c8a75fd3b59a81c7ad50b9230cb44cace7be06132d41d3d/merged 
INFO[0000] Got pod network &{Name:minikube Namespace:minikube ID:9bdac471247f794fb7514b388233a67592254e3502b9f4b1640721863783d93c NetNS:/var/run/netns/cni-03015af0-a448-1270-613c-0462929404aa Networks:[] RuntimeConfig:map[podman:{IP: MAC: PortMappings:[{HostPort:44127 ContainerPort:8443 Protocol:tcp HostIP:127.0.0.1} {HostPort:46431 ContainerPort:22 Protocol:tcp HostIP:127.0.0.1} {HostPort:35847 ContainerPort:2376 Protocol:tcp HostIP:127.0.0.1} {HostPort:35335 ContainerPort:5000 Protocol:tcp HostIP:127.0.0.1}] Bandwidth:<nil> IpRanges:[]}]} 
INFO[0000] About to add CNI network podman (type=bridge) 
DEBU[0000] [0] CNI result: &{0.4.0 [{Name:cni-podman0 Mac:46:ce:f2:b1:fd:03 Sandbox:} {Name:vethdd0390a4 Mac:86:73:b3:48:5b:af Sandbox:} {Name:eth0 Mac:76:5c:ea:cb:b3:0f Sandbox:/var/run/netns/cni-03015af0-a448-1270-613c-0462929404aa}] [{Version:4 Interface:0xc000456268 Address:{IP:10.88.0.111 Mask:ffff0000} Gateway:10.88.0.1}] [{Dst:{IP:0.0.0.0 Mask:00000000} GW:<nil>}] {[]  [] []}} 
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode secret 
DEBU[0000] Setting CGroup path for container 9bdac471247f794fb7514b388233a67592254e3502b9f4b1640721863783d93c to /libpod_parent/libpod-9bdac471247f794fb7514b388233a67592254e3502b9f4b1640721863783d93c 
DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d 
DEBU[0000] Created OCI spec for container 9bdac471247f794fb7514b388233a67592254e3502b9f4b1640721863783d93c at /var/lib/containers/storage/overlay-containers/9bdac471247f794fb7514b388233a67592254e3502b9f4b1640721863783d93c/userdata/config.json 
DEBU[0000] /usr/bin/conmon messages will be logged to syslog 
DEBU[0000] running conmon: /usr/bin/conmon               args="[--api-version 1 -c 9bdac471247f794fb7514b388233a67592254e3502b9f4b1640721863783d93c -u 9bdac471247f794fb7514b388233a67592254e3502b9f4b1640721863783d93c -r /usr/bin/runc -b /var/lib/containers/storage/overlay-containers/9bdac471247f794fb7514b388233a67592254e3502b9f4b1640721863783d93c/userdata -p /var/run/containers/storage/overlay-containers/9bdac471247f794fb7514b388233a67592254e3502b9f4b1640721863783d93c/userdata/pidfile -l k8s-file:/var/lib/containers/storage/overlay-containers/9bdac471247f794fb7514b388233a67592254e3502b9f4b1640721863783d93c/userdata/ctr.log --exit-dir /var/run/libpod/exits --socket-dir-path /var/run/libpod/socket --log-level debug --syslog -t --conmon-pidfile /var/run/containers/storage/overlay-containers/9bdac471247f794fb7514b388233a67592254e3502b9f4b1640721863783d93c/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /var/run/containers/storage --exit-command-arg --log-level --exit-command-arg error --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /var/run/libpod --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 9bdac471247f794fb7514b388233a67592254e3502b9f4b1640721863783d93c]"
WARN[0000] Failed to add conmon to cgroupfs sandbox cgroup: write /sys/fs/cgroup/unified/libpod_parent/conmon/tasks: open /sys/fs/cgroup/unified/libpod_parent/conmon/tasks: permission denied 
DEBU[0000] Received: 19107                              
INFO[0000] Got Conmon PID as 19096                      
DEBU[0000] Created container 9bdac471247f794fb7514b388233a67592254e3502b9f4b1640721863783d93c in OCI runtime 
DEBU[0000] Starting container 9bdac471247f794fb7514b388233a67592254e3502b9f4b1640721863783d93c with command [/usr/local/bin/entrypoint /sbin/init] 
DEBU[0000] Started container 9bdac471247f794fb7514b388233a67592254e3502b9f4b1640721863783d93c 
minikube

@elegos (Contributor, Author) commented May 5, 2020

New fact:

now it no longer seems to fail "hard", but there is still something going on with the systemd cgroup. This is the minikube container's log (from when I try to wake it up):

INFO: ensuring we can execute /bin/mount even with userns-remap
INFO: remounting /sys read-only
INFO: making mounts shared
INFO: fix cgroup mounts for all subsystems
INFO: clearing and regenerating /etc/machine-id
Initializing machine ID from random generator.
INFO: faking /sys/class/dmi/id/product_name to be "kind"
INFO: faking /sys/class/dmi/id/product_uuid to be random
INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well
INFO: setting iptables to detected mode: legacy
systemd 242 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid)
Detected virtualization container-other.
Detected architecture x86-64.

Welcome to Ubuntu 19.10!

Set hostname to <minikube>.
Failed to bump fs.file-max, ignoring: Invalid argument
Failed to attach 1 to compat systemd cgroup /libpod_parent/libpod-00efc8a8417971d4edccc018bec5ddb3c3229fb45d238f85dff2b69e59784ee7/init.scope: No such file or directory
Failed to open pin file: No such file or directory
Failed to allocate manager object: No such file or directory
[!!!!!!] Failed to allocate manager object.
Exiting PID 1...

It seems that, even if the cgroup-manager is set to cgroupfs, minikube's image is still trying to use systemd?
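
If it helps correlate: the container log above shows systemd 242 built with default-hierarchy=hybrid, and the earlier conmon warning writes under /sys/fs/cgroup/unified, which is the hybrid-layout mount point. Here is a rough host-side check of which cgroup layout the kernel exposes (an illustrative heuristic only, mirroring how systemd distinguishes the layouts, not minikube's detection code):

    package main

    import (
        "fmt"
        "os"
    )

    // cgroupMode reports which cgroup layout the host kernel exposes.
    // Illustrative heuristic only, not minikube's actual detection code.
    func cgroupMode() string {
        if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
            return "unified (cgroups v2 only)"
        }
        if _, err := os.Stat("/sys/fs/cgroup/unified"); err == nil {
            return "hybrid (v1 hierarchies plus v2 at /sys/fs/cgroup/unified)"
        }
        return "legacy (cgroups v1 only)"
    }

    func main() {
        fmt.Println("host cgroup layout:", cgroupMode())
    }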

@elegos (Contributor, Author) commented May 5, 2020

Another fact:

the previous logs are with --container-runtime=cri-o. Using Docker, it tries to start the container, only to delete it and start it again without deleting the "minikube" volume, ending with an error (a sketch of one way to sidestep the volume collision follows the log):

$ minikube start
😄  minikube v1.10.0-beta.2 on Fedora 32
✨  Using the podman (experimental) driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating podman container (CPUs=2, Memory=3900MB) ...
✋  Stopping "minikube" in podman ...
🔥  Deleting "minikube" in podman ...
🤦  StartHost failed, but will try again: creating host: create: creating: create kic node: check container "minikube" running: temporary error created container "minikube" is not running yet
🔥  Creating podman container (CPUs=2, Memory=3900MB) ...
😿  Failed to start podman container. "minikube start" may fix it: creating host: create: creating: setting up container node: creating volume for minikube container: sudo podman volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true: exit status 125
stdout:

stderr:
Error: volume with name minikube already exists: volume already exists


💣  error provisioning host: Failed to start host: creating host: create: creating: setting up container node: creating volume for minikube container: sudo podman volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true: exit status 125
stdout:

stderr:
Error: volume with name minikube already exists: volume already exists


😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
👉  https://github.com/kubernetes/minikube/issues/new/choose
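
One way the volume collision could be sidestepped is to make volume creation idempotent. A rough sketch under that assumption (the ensureVolume helper is hypothetical, not minikube's actual fix):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ensureVolume creates a podman volume only when it does not already
    // exist. "podman volume inspect" exits non-zero for a missing volume,
    // so a zero exit means the volume is already there and the create
    // (and its "volume already exists" failure) can be skipped.
    func ensureVolume(name string) error {
        if err := exec.Command("sudo", "podman", "volume", "inspect", name).Run(); err == nil {
            return nil // volume already exists
        }
        out, err := exec.Command("sudo", "podman", "volume", "create", name).CombinedOutput()
        if err != nil {
            return fmt.Errorf("podman volume create %s: %v\noutput: %s", name, err, out)
        }
        return nil
    }

    func main() {
        if err := ensureVolume("minikube"); err != nil {
            fmt.Println(err)
        }
    }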

@afbjorklund (Collaborator)

See #8033 (comment) and #8033 (comment) for why systemd isn't starting when running with podman

@afbjorklund afbjorklund added priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. and removed priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. labels May 7, 2020