minikube fails to start with podman driver #8384

Closed
FlorianLudwig opened this issue Jun 5, 2020 · 9 comments
Labels
co/podman-driver (podman driver issues), kind/support (categorizes issue or PR as a support question)

Comments

@FlorianLudwig

Steps to reproduce the issue:

  1. use the podman driver
  2. start / stop minikube several times (see the loop sketch below)
  3. exact trigger unknown
  4. minikube stops starting
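
For reference, a minimal loop matching these steps (a sketch; the exact number of restarts needed to trigger the failure is unknown):

    # restart the cluster a few times with the podman driver
    for i in 1 2 3 4 5; do
        minikube start --driver=podman
        minikube stop
    done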

Full output of failed command:

minikube start --alsologtostderr
I0605 13:57:54.305851   83952 start.go:98] hostinfo: {"hostname":"knight1","uptime":57386,"bootTime":1591300888,"procs":385,"os":"linux","platform":"fedora","platformFamily":"fedora","platformVersion":"32","kernelVersion":"5.6.15-300.fc32.x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"8add5042-9071-47e4-b19b-6ab65e3d6572"}
I0605 13:57:54.306347   83952 start.go:108] virtualization: kvm host
😄  minikube v1.11.0 on Fedora 32
I0605 13:57:54.306507   83952 notify.go:125] Checking for updates...
I0605 13:57:54.306703   83952 driver.go:253] Setting default libvirt URI to qemu:///system
I0605 13:57:54.354616   83952 podman.go:99] podman version: 1.9.3
✨  Using the podman (experimental) driver based on existing profile
I0605 13:57:54.354682   83952 start.go:214] selected driver: podman
I0605 13:57:54.354686   83952 start.go:611] validating driver "podman" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:3900 CPUs:2 DiskSize:20000 Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.3 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:10.88.0.3 Port:8443 KubernetesVersion:v1.18.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false dashboard:false default-storageclass:true efk:false freshpod:false gvisor:false helm-tiller:false ingress:true ingress-dns:false istio:false istio-provisioner:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false] VerifyComponents:map[apiserver:true system_pods:true]}
I0605 13:57:54.354748   83952 start.go:617] status for podman: {Installed:true Healthy:true Error:<nil> Fix: Doc:}
👍  Starting control plane node minikube in cluster minikube
I0605 13:57:54.354825   83952 cache.go:105] Beginning downloading kic artifacts for podman with docker
I0605 13:57:54.354832   83952 cache.go:127] Driver isn't docker, skipping base-image download
I0605 13:57:54.354838   83952 preload.go:95] Checking if preload exists for k8s version v1.18.3 and runtime docker
I0605 13:57:54.354857   83952 preload.go:103] Found local preload: /home/root2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.3-docker-overlay2-amd64.tar.lz4
I0605 13:57:54.354863   83952 cache.go:49] Caching tarball of preloaded images
I0605 13:57:54.354871   83952 preload.go:129] Found /home/root2/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0605 13:57:54.354877   83952 cache.go:52] Finished verifying existence of preloaded tar for  v1.18.3 on docker
I0605 13:57:54.354952   83952 profile.go:156] Saving config to /home/root2/.minikube/profiles/minikube/config.json ...
I0605 13:57:54.355151   83952 cache.go:152] Successfully downloaded all kic artifacts
I0605 13:57:54.355170   83952 start.go:240] acquiring machines lock for minikube: {Name:mk2b1aca88a11c4ee9c5ef88eb362d7622ca20e8 Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0605 13:57:54.355327   83952 start.go:244] acquired machines lock for "minikube" in 134.341µs
I0605 13:57:54.355340   83952 start.go:88] Skipping create...Using existing machine configuration
I0605 13:57:54.355346   83952 fix.go:53] fixHost starting: 
I0605 13:57:54.355520   83952 cli_runner.go:108] Run: sudo -n podman container inspect minikube --format={{.State.Status}}
I0605 13:57:54.430512   83952 fix.go:105] recreateIfNeeded on minikube: state=Stopped err=<nil>
W0605 13:57:54.430538   83952 fix.go:131] unexpected machine state, will restart: <nil>
🔄  Restarting existing podman container for "minikube" ...
I0605 13:57:54.430777   83952 cli_runner.go:108] Run: sudo -n podman start --cgroup-manager cgroupfs minikube
I0605 13:57:54.806293   83952 fix.go:55] fixHost completed within 450.941437ms
I0605 13:57:54.806318   83952 start.go:75] releasing machines lock for "minikube", held for 450.980859ms
🤦  StartHost failed, but will try again: driver start: start: sudo -n podman start --cgroup-manager cgroupfs minikube: exit status 125
stdout:

stderr:
Error: unable to start container "minikube": writing file `devices.allow`: Invalid argument: OCI runtime error

I0605 13:57:59.806570   83952 start.go:240] acquiring machines lock for minikube: {Name:mk2b1aca88a11c4ee9c5ef88eb362d7622ca20e8 Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0605 13:57:59.806711   83952 start.go:244] acquired machines lock for "minikube" in 114.19µs
I0605 13:57:59.806793   83952 start.go:88] Skipping create...Using existing machine configuration
I0605 13:57:59.806799   83952 fix.go:53] fixHost starting: 
I0605 13:57:59.807106   83952 cli_runner.go:108] Run: sudo -n podman container inspect minikube --format={{.State.Status}}
I0605 13:57:59.885478   83952 fix.go:105] recreateIfNeeded on minikube: state=Stopped err=<nil>
W0605 13:57:59.885504   83952 fix.go:131] unexpected machine state, will restart: <nil>
🔄  Restarting existing podman container for "minikube" ...
I0605 13:57:59.885621   83952 cli_runner.go:108] Run: sudo -n podman start --cgroup-manager cgroupfs minikube
I0605 13:58:00.278054   83952 fix.go:55] fixHost completed within 471.240699ms
I0605 13:58:00.278123   83952 start.go:75] releasing machines lock for "minikube", held for 471.394778ms
😿  Failed to start podman container. "minikube start" may fix it: driver start: start: sudo -n podman start --cgroup-manager cgroupfs minikube: exit status 125
stdout:

stderr:
Error: unable to start container "minikube": writing file `devices.allow`: Invalid argument: OCI runtime error

I0605 13:58:00.278835   83952 exit.go:58] WithError(error provisioning host)=Failed to start host: driver start: start: sudo -n podman start --cgroup-manager cgroupfs minikube: exit status 125
stdout:

stderr:
Error: unable to start container "minikube": writing file `devices.allow`: Invalid argument: OCI runtime error
 called from:
goroutine 1 [running]:
runtime/debug.Stack(0x40c11a, 0x187eca0, 0x1863880)
	/usr/local/go/src/runtime/debug/stack.go:24 +0x9d
k8s.io/minikube/pkg/minikube/exit.WithError(0x1ae579a, 0x17, 0x1db6b80, 0xc000a82740)
	/app/pkg/minikube/exit/exit.go:58 +0x34
k8s.io/minikube/cmd/minikube/cmd.runStart(0x2b00360, 0xc0001cf910, 0x0, 0x1)
	/app/cmd/minikube/cmd/start.go:169 +0xac2
github.com/spf13/cobra.(*Command).execute(0x2b00360, 0xc0001cf8e0, 0x1, 0x1, 0x2b00360, 0xc0001cf8e0)
	/go/pkg/mod/github.com/spf13/[email protected]/command.go:846 +0x2aa
github.com/spf13/cobra.(*Command).ExecuteC(0x2b05220, 0x0, 0x1, 0xc0006fe140)
	/go/pkg/mod/github.com/spf13/[email protected]/command.go:950 +0x349
github.com/spf13/cobra.(*Command).Execute(...)
	/go/pkg/mod/github.com/spf13/[email protected]/command.go:887
k8s.io/minikube/cmd/minikube/cmd.Execute()
	/app/cmd/minikube/cmd/root.go:112 +0x747
main.main()
	/app/cmd/minikube/main.go:66 +0xea
W0605 13:58:00.279952   83952 out.go:201] error provisioning host: Failed to start host: driver start: start: sudo -n podman start --cgroup-manager cgroupfs minikube: exit status 125
stdout:

stderr:
Error: unable to start container "minikube": writing file `devices.allow`: Invalid argument: OCI runtime error

💣  error provisioning host: Failed to start host: driver start: start: sudo -n podman start --cgroup-manager cgroupfs minikube: exit status 125
stdout:

stderr:
Error: unable to start container "minikube": writing file `devices.allow`: Invalid argument: OCI runtime error


😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
👉  https://github.com/kubernetes/minikube/issues/new/choose

@afbjorklund (Collaborator) commented Jun 5, 2020

Which OCI runtime is podman using? Can you share your podman configuration?

@afbjorklund added the co/podman-driver and kind/support labels Jun 5, 2020
@afbjorklund (Collaborator)

It seems like the Fedora 32 package for podman defaults to the crun runtime.

# Default OCI runtime
runtime = "crun"

podman-1.9.3-1.fc32.x86_64
crun-0.13-2.fc32.x86_64
runc-1.0.0-144.dev.gite6555cc.fc32.x86_64

This appears to be the source of the "writing file devices.allow: Invalid argument: OCI runtime error" message.
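
If crun is the problem, a possible workaround (untested here) would be to point podman back at runc, either for a single invocation or persistently via the runtime setting:

    # one-off: override the OCI runtime for this invocation only
    sudo podman --runtime /usr/bin/runc start --cgroup-manager cgroupfs minikube

    # persistent: in /etc/containers/libpod.conf
    # (overrides /usr/share/containers/libpod.conf)
    # runtime = "runc"

Note that minikube itself invokes "sudo -n podman start ..." (see the log above), so only the persistent config change would affect "minikube start".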

@afbjorklund (Collaborator)

Similar errors were seen in #7996.

@FlorianLudwig (Author)

Hi @afbjorklund, it is indeed the crun runtime.

# cat /usr/share/containers/libpod.conf
# libpod.conf is the default configuration file for all tools using libpod to
# manage containers

# Default transport method for pulling and pushing for images
image_default_transport = "docker://"

# Paths to look for the conmon container manager binary.
# If the paths are empty or no valid path was found, then the `$PATH`
# environment variable will be used as the fallback.
conmon_path = [
	    "/usr/libexec/podman/conmon",
	    "/usr/local/libexec/podman/conmon",
	    "/usr/local/lib/podman/conmon",
	    "/usr/bin/conmon",
	    "/usr/sbin/conmon",
	    "/usr/local/bin/conmon",
	    "/usr/local/sbin/conmon",
	    "/run/current-system/sw/bin/conmon",
]

# Environment variables to pass into conmon
conmon_env_vars = [
		"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
]

# CGroup Manager - valid values are "systemd" and "cgroupfs"
cgroup_manager = "systemd"

# Container init binary
#init_path = "/usr/libexec/podman/catatonit"

# Directory for persistent libpod files (database, etc)
# By default, this will be configured relative to where containers/storage
# stores containers
# Uncomment to change location from this default
#static_dir = "/var/lib/containers/storage/libpod"

# Directory for temporary files. Must be tmpfs (wiped after reboot)
tmp_dir = "/var/run/libpod"

# Maximum size of log files (in bytes)
# -1 is unlimited
max_log_size = -1

# Whether to use chroot instead of pivot_root in the runtime
no_pivot_root = false

# Directory containing CNI plugin configuration files
cni_config_dir = "/etc/cni/net.d/"

# Directories where the CNI plugin binaries may be located
cni_plugin_dir = [
	       "/usr/libexec/cni",
	       "/usr/lib/cni",
	       "/usr/local/lib/cni",
	       "/opt/cni/bin"
]

# Default CNI network for libpod.
# If multiple CNI network configs are present, libpod will use the network with
# the name given here for containers unless explicitly overridden.
# The default here is set to the name we set in the
# 87-podman-bridge.conflist included in the repository.
# Not setting this, or setting it to the empty string, will use normal CNI
# precedence rules for selecting between multiple networks.
cni_default_network = "podman"

# Default libpod namespace
# If libpod is joined to a namespace, it will see only containers and pods
# that were created in the same namespace, and will create new containers and
# pods in that namespace.
# The default namespace is "", which corresponds to no namespace. When no
# namespace is set, all containers and pods are visible.
#namespace = ""

# Default infra (pause) image name for pod infra containers
infra_image = "k8s.gcr.io/pause:3.2"

# Default command to run the infra container
infra_command = "/pause"

# Determines whether libpod will reserve ports on the host when they are
# forwarded to containers. When enabled, when ports are forwarded to containers,
# they are held open by conmon as long as the container is running, ensuring that
# they cannot be reused by other programs on the host. However, this can cause
# significant memory usage if a container has many ports forwarded to it.
# Disabling this can save memory.
#enable_port_reservation = true

# Default libpod support for container labeling
# label=true

# The locking mechanism to use
lock_type = "shm"

# Number of locks available for containers and pods.
# If this is changed, a lock renumber must be performed (e.g. with the
# 'podman system renumber' command).
num_locks = 2048

# Directory for libpod named volumes.
# By default, this will be configured relative to where containers/storage
# stores containers.
# Uncomment to change location from this default.
#volume_path = "/var/lib/containers/storage/volumes"

# Selects which logging mechanism to use for Podman events.  Valid values
# are `journald` or `file`.
# events_logger = "journald"

# Specify the keys sequence used to detach a container.
# Format is a single character [a-Z] or a comma separated sequence of
# `ctrl-<value>`, where `<value>` is one of:
# `a-z`, `@`, `^`, `[`, `\`, `]`, `^` or `_`
#
# detach_keys = "ctrl-p,ctrl-q"

# Default OCI runtime
runtime = "crun"

# List of the OCI runtimes that support --format=json.  When json is supported
# libpod will use it for reporting nicer errors.
runtime_supports_json = ["crun", "runc"]

# List of all the OCI runtimes that support --cgroup-manager=disable to disable
# creation of CGroups for containers.
runtime_supports_nocgroups = ["crun"]

# Paths to look for a valid OCI runtime (runc, runv, etc)
# If the paths are empty or no valid path was found, then the `$PATH`
# environment variable will be used as the fallback.
[runtimes]
runc = [
	    "/usr/bin/runc",
	    "/usr/sbin/runc",
	    "/usr/local/bin/runc",
	    "/usr/local/sbin/runc",
	    "/sbin/runc",
	    "/bin/runc",
	    "/usr/lib/cri-o-runc/sbin/runc",
	    "/run/current-system/sw/bin/runc",
]

crun = [
		"/usr/bin/crun",
		"/usr/sbin/crun",
		"/usr/local/bin/crun",
		"/usr/local/sbin/crun",
		"/sbin/crun",
		"/bin/crun",
		"/run/current-system/sw/bin/crun",
]

# Kata Containers is an OCI runtime, where containers are run inside lightweight
# Virtual Machines (VMs). Kata provides additional isolation towards the host,
# minimizing the host attack surface and mitigating the consequences of
# containers breakout.
# Please notes that Kata does not support rootless podman yet, but we can leave
# the paths below blank to let them be discovered by the $PATH environment
# variable.

# Kata Containers with the default configured VMM
kata-runtime = [
    "/usr/bin/kata-runtime",
]

# Kata Containers with the QEMU VMM
kata-qemu = [
    "/usr/bin/kata-qemu",
]

# Kata Containers with the Firecracker VMM
kata-fc = [
    "/usr/bin/kata-fc",
]

# The [runtimes] table MUST be the last thing in this file.
# (Unless another table is added)
# TOML does not provide a way to end a table other than a further table being
# defined, so every key hereafter will be part of [runtimes] and not the main
# config.
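
To double-check which runtime podman actually resolved, the info output can be grepped (a quick sketch; the exact layout of "podman info" output differs between podman versions):

    sudo podman info | grep -i -A 3 runtime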

@afbjorklund (Collaborator)

It is supposed to work, but then again we don't do any testing on Fedora

@mazzystr

I do a ton of testing on Fedora 32 with a non-privileged user and minikube --driver=podman ... it runs beautifully. (A pre-flight check sketch follows this list.)

  • Fedora 32 uses cgroups v2 by default. Since Kubernetes does not support v2 at this time, the kernel must be directed to use v1. Run this command, then reboot:
    sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"

  • Firewalld must be running. If it is not, CNI will not create the nat or filter rules.
    firewall-cmd --add-port=30000-65535/tcp --permanent
    firewall-cmd --add-port=30000-65535/udp --permanent
    firewall-cmd --add-port=8443/tcp --permanent
    firewall-cmd --add-interface=cni-podman0 --permanent
    ** DO NOT run firewall-cmd --reload after minikube is running!

  • Install rpms
    yum install -y podman podman-docker

  • Install CRI-O
    The CRI-O install and quickstart guide is here.
    systemctl enable --now crio

  • Install kubeadm, kubectl, kubelet, and kubernetes-cni
    The quick install guide is here.
    Follow the sections "Letting iptables see bridged traffic" and "Installing kubeadm, kubelet and kubectl".

  • Install minikube
    https://minikube.sigs.k8s.io/docs/start/

  • Create standard user account

  • Use visudo to allow the wheel group to run all commands passwordless

  • Start minikube
    su - user
    minikube start --driver=podman --container-runtime=cri-o
    kubectl get nodes
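
Putting the checks above together, a quick pre-flight sketch before the first start (the commands are standard; expected outputs are noted in the comments):

    # cgroups: "tmpfs" means v1 (good), "cgroup2fs" means the host is still on v2
    stat -fc %T /sys/fs/cgroup/

    # both services must report "active" before starting minikube
    systemctl is-active firewalld crio

    # then, as the non-privileged user:
    minikube start --driver=podman --container-runtime=cri-o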

@afbjorklund (Collaborator)

There are lots of issues with podman v2

@medyagh (Member) commented Aug 12, 2020

we don't support podman 2

@medyagh closed this as completed Aug 12, 2020
@afbjorklund (Collaborator)

we don't support podman 2

I opened up a tracking bug for it: #9120

podman 1 is deprecated and partly unavailable.
