
minikube 1.11.0 unable to start with podman 1.9.3 on Fedora 32 because of an already existing volume #8508

Closed
FilBot3 opened this issue Jun 17, 2020 · 14 comments
Labels: co/podman-driver, kind/bug, os/linux, priority/awaiting-more-evidence, triage/needs-information

Comments

FilBot3 commented Jun 17, 2020

Steps to reproduce the issue:

  1. Installed Podman with DNF
  2. Installed minikube with the RPM
  3. Ran minikube using the podman driver (a sketch of these commands is below)
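For reference, a minimal sketch of those setup commands (the dnf package name and the minikube RPM URL are assumptions based on the upstream install docs, not quoted from this report):

  # Install podman from the Fedora repositories:
  sudo dnf install -y podman
  # Install the minikube RPM from the upstream release bucket (URL assumed):
  sudo rpm -Uvh https://storage.googleapis.com/minikube/releases/latest/minikube-latest.x86_64.rpm
  # Start minikube with the podman driver:
  minikube start --driver=podman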

Full output of failed command:

➜  ~ sudo podman system prune -a -f
Deleted Pods
Deleted Containers
Deleted Images
gcr.io/k8s-minikube/kicbase:v0.0.10
➜  ~ minikube delete               
🔥  Deleting "minikube" in podman ...
🔥  Removing /home/filbot/.minikube/machines/minikube ...
💀  Removed all traces of the "minikube" cluster.
➜  ~ sudo podman system prune -a -f
Deleted Pods
Deleted Containers
➜  ~ minikube start --driver=podman
😄  minikube v1.11.0 on Fedora 32
✨  Using the podman (experimental) driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating podman container (CPUs=2, Memory=3900MB) ...
✋  Stopping "minikube" in podman ...
🛑  Powering off "minikube" via SSH ...
🔥  Deleting "minikube" in podman ...
🤦  StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
err     : Process exited with status 1
output  : --- /lib/systemd/system/docker.service        2019-08-29 04:42:14.000000000 +0000
+++ /lib/systemd/system/docker.service.new      2020-06-17 21:32:49.663321464 +0000
@@ -8,24 +8,22 @@
 
 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=podman --insecure-registry 10.96.0.0/12 
+ExecReload=/bin/kill -s HUP 
 
 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -33,9 +31,10 @@
 LimitNPROC=infinity
 LimitCORE=infinity
 
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0
 
 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.

🔥  Creating podman container (CPUs=2, Memory=3900MB) ...
😿  Failed to start podman container. "minikube start" may fix it: creating host: create: creating: setting up container node: creating volume for minikube container: sudo -n podman volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true: exit status 125
stdout:

stderr:
Error: volume with name minikube already exists: volume already exists


💣  error provisioning host: Failed to start host: creating host: create: creating: setting up container node: creating volume for minikube container: sudo -n podman volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true: exit status 125
stdout:

stderr:
Error: volume with name minikube already exists: volume already exists


😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
👉  https://github.com/kubernetes/minikube/issues/new/choose

Full output of minikube start command used, if not already included:

See Above

Optional: Full output of minikube logs command:

NAME=Fedora
VERSION="32 (KDE Plasma)"
ID=fedora
VERSION_ID=32
VERSION_CODENAME=""
PLATFORM_ID="platform:f32"
PRETTY_NAME="Fedora 32 (KDE Plasma)"
ANSI_COLOR="0;34"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:32"
HOME_URL="https://fedoraproject.org/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f32/system-administrators-guide/"
SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=32
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=32
PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy"
VARIANT="KDE Plasma"
VARIANT_ID=kde
➜  ~ uname -a
Linux Reincarnate 5.6.18-300.fc32.x86_64 #1 SMP Wed Jun 10 21:38:25 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
➜  ~ podman version
Version:            1.9.3
RemoteAPI Version:  1
Go Version:         go1.14.2
OS/Arch:            linux/amd64
➜  ~ minikube version
minikube version: v1.11.0
commit: 57e2f55f47effe9ce396cea42a1e0eb4f611ebbd
➜  ~ go version
go version go1.14.3 linux/amd64

I followed the installation instructions for Podman stable:

FilBot3 commented Jun 17, 2020

I also pruned all the volumes using

sudo podman volume prune -f

and I still got the same error about the volume already existing.
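For anyone hitting the same error, a minimal cleanup sketch (assuming the leftover volume really is the default one named minikube; this is just the targeted equivalent of the prune, not a confirmed fix):

  # List the volumes the root podman instance knows about (the podman driver uses sudo):
  sudo podman volume ls
  # Remove the stale minikube volume by name:
  sudo podman volume rm minikube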

@afbjorklund (Collaborator):

There are two problems here: the first is that the start fails, and the second is that the restart fails too (on the existing volume).

🔥 Creating podman container (CPUs=2, Memory=3900MB) ...
🔥 Deleting "minikube" in podman ...

🤦 StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
See "systemctl status docker.service" and "journalctl -xe" for details.

So we need some more logs to understand why "podman run" and "start docker" failed; the volume error is likely collateral.

afbjorklund added the co/podman-driver, kind/bug, priority/awaiting-more-evidence, and os/linux labels on Jun 18, 2020

FilBot3 commented Jun 18, 2020

From what I understand, Podman does not require Docker or the Docker daemon/socket to be running, since it uses the OCI standards rather than Docker's approach.

@afbjorklund (Collaborator):

There are two runtimes involved here.

The default is docker-in-docker (driver: docker), which has now changed to docker-in-podman (driver: podman)

If you want to change the container runtime for Kubernetes as well, you need to use the --container-runtime flag.

The choices are containerd and cri-o.

FilBot3 commented Jun 18, 2020

So the command I'd be looking to use would be:

minikube start --driver=podman --container-runtime=cri-o

since containerd is mostly for Docker, and I assume cri-o would be the one for Podman?

@afbjorklund (Collaborator):

Yes, that seems reasonable.

FilBot3 commented Jun 18, 2020

Got a different error this time:

Seems it timed out. I ran the suggested systemctl status kubelet and journalctl -xeu kubelet commands, which returned no unit file and no log entries, respectively.

I don't seem to have crictl on my system, however, so I'm looking for that now.
--edit--
Updated the Gist with the crictl output.
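Worth noting: with the podman driver, kubelet and crictl live inside the minikube node container rather than on the Fedora host, so (assuming the node container is actually up) something like the following is usually the place to look:

  # Check kubelet and the container runtime inside the kicbase node, not on the host:
  minikube ssh "sudo systemctl status kubelet"
  minikube ssh "sudo crictl ps -a"
  # Collect everything minikube knows about the failure:
  minikube logs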

@mazzystr:

Yup, you need CRI-O. Follow the guide here.

@mazzystr:

To run minikube --driver=podman --container-runtime=cri-o on Fedora 32 you need the following (a consolidated sketch follows this list)...

  • Fedora 32 uses cgroups v2. Since k8s doesn't support v2 at this time, the kernel must be directed to use v1. Execute this command to trigger that, then reboot:
    sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
  • Firewalld must be running. If it's not running, CNI will not create iptables rules.
    firewall-cmd --add-port=30000-65535/tcp --permanent
    firewall-cmd --add-port=30000-65535/udp --permanent
    ** DO NOT run firewall-cmd --reload after minikube is running!!!
  • Install RPMs:
    yum install -y podman podman-docker podman-plugins
  • Install CRI-O (the cri-o install and quickstart guide is here):
    systemctl enable --now crio
  • Install kubeadm, kubectl, kubelet, and kubernetes-cni (the quick install link is here).
    Follow the sections "Letting iptables see bridged traffic" and "Installing kubeadm, kubelet and kubectl".
  • Create a standard user account and add it to the wheel group.
  • Run visudo and allow the wheel group to run all commands without a password.
  • Start minikube:
    su - user
    minikube start --driver=podman --container-runtime=cri-o
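The same sequence, as a rough shell sketch of the list above (run on the Fedora host; the reboot after the grubby step and the CRI-O/kubeadm repository setup from the linked guides are omitted):

  # 1. Tell the kernel to use cgroups v1, then reboot.
  sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"

  # 2. Open the NodePort range in firewalld (and do NOT run firewall-cmd --reload later).
  sudo firewall-cmd --add-port=30000-65535/tcp --permanent
  sudo firewall-cmd --add-port=30000-65535/udp --permanent

  # 3. Install the podman packages.
  sudo dnf install -y podman podman-docker podman-plugins

  # 4. After installing CRI-O from the linked quickstart, enable it.
  sudo systemctl enable --now crio

  # 5. As the unprivileged user, start minikube.
  minikube start --driver=podman --container-runtime=cri-o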

@afbjorklund (Collaborator):

@mazzystr: You don't need to install CRI-O or Kubernetes on the host:

    Install CRI-O
    cri-o install and quickstart is here
    systemctl enable --now crio

    I don't seem to have crictl on my system however. So I'm looking for that now.

    Install kubeadm, kubectl, kubelet, and kubernetes-cni
    Quick install link is here
    Follow section Letting iptables see bridged traffic and Installing kubeadm, kubelet and kubectl

When using the "podman" driver, these will come with the kicbase image.

You might want to install kubectl (but you can also use minikube kubectl).

Running as a special user is not required if you allow sudo podman...
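To illustrate those last two points (the username and the sudoers file name below are only examples, not taken from this thread):

  # Use the kubectl bundled with minikube instead of installing one on the host:
  minikube kubectl -- get nodes

  # Allow your own user to run podman via sudo without a password prompt,
  # so the podman driver can call "sudo podman" non-interactively:
  echo 'filbot ALL=(ALL) NOPASSWD: /usr/bin/podman' | sudo tee /etc/sudoers.d/minikube-podman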

medyagh commented Jul 29, 2020

@mazzystr: has the advice in this comment helped? #8508 (comment)
If you don't mind, please retry with the latest version and comment whether you still have this issue.

medyagh closed this as completed on Jul 29, 2020
medyagh reopened this on Jul 29, 2020
@priyawadhwa:

Hey @mazzystr, are you still seeing this issue?

priyawadhwa added the triage/needs-information label on Aug 12, 2020
medyagh commented Aug 26, 2020

Hi @FilBot3, I haven't heard back from you; I wonder if you still have this issue?
Regrettably, there isn't enough information in this issue to make it actionable, and enough time has passed that it is likely difficult to replicate.

I will close this issue for now but please feel free to reopen whenever you feel ready to provide more information.

medyagh closed this as completed on Aug 26, 2020