Fedora 31 vm-driver=podman fail to start trying to start docker service #6795
The driver uses podman to create a fake VM; inside this node there is another container runtime...
Even though the driver is podman, the runtime is docker. We recently added support for the cri-o runtime (it will be in the next release). I think it would be a good default behavior to set cri-o as the default runtime for the podman driver, since they are built together.
Could you share the output of ... ?
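The exact command requested was lost in this copy, but a typical diagnostic set for a report like this would be something along these lines (illustrative, not necessarily what was asked for):

```sh
# Illustrative diagnostics for a minikube/podman bug report
# (not necessarily the exact command that was asked for here):
minikube version
minikube status
minikube logs
podman version
```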
In the meantime I updated to 1.9.2. Output of: ...
Minikube logs: ...
@thobianchi: this has been fixed in PR #7631 and other ongoing work on ... But it (the podman driver) is not working yet with the current minikube 1.9.x releases. I don't think containerd or cri-o works yet, though, so initially it will be running docker-in-podman.
The juju stuff is a known bug when running ... See #7053 (and #6391 (comment)).
With the current master version:
The fix is not merged yet (WIP). Hopefully it will be included in minikube v1.10, though. The suggested "fix" looks bogus, since you want to use ... Once merged, we will require that ...
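The requirement being discussed here is presumably passwordless sudo for podman, since the driver invokes podman through non-interactive sudo (`sudo -n podman`). A minimal sketch of a sudoers drop-in, assuming podman lives at /usr/bin/podman and `youruser` is a placeholder for your account:

```
# /etc/sudoers.d/podman -- hypothetical file name; edit with `visudo -f`
# Allow non-interactive sudo for podman so the driver's
# `sudo -n podman ...` calls do not fail on a password prompt.
youruser ALL=(ALL) NOPASSWD: /usr/bin/podman
```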
Oh, I'm sorry, I was sure that PR was merged. I will look forward to the merge. Yes.
There is a new version of the podman-sudo branch now, updated to v1.10.
So I'm trying commit 947dc21, but there are still errors:
That looks like the wrong commit; it was supposed to be 6644c5c. Otherwise it should have complained about your use of ... I updated it again, so the latest available is currently 28106fa. Make sure to use ... Those errors look temporary; can you delete it explicitly (and make sure it does not exist)?
You can check with ... I think they were both fixed in 22aa1af. Apparently podman status is a bit broken...
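For anyone following along, an explicit check and cleanup could look like this (assuming the default profile name `minikube`, and matching the driver's `sudo -n` prefix):

```sh
sudo -n podman ps -a               # is there a leftover minikube container?
sudo -n podman volume ls           # is there a leftover minikube volume?
sudo -n podman rm -f minikube      # remove the container if it exists
sudo -n podman volume rm minikube  # remove the volume if it exists
```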
I'm sorry.
I verified that there was no existing container or volume.
That's weird. Were you able to look at the logs (to see what the real docker startup error was)?
Do ...
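If the container survives long enough, the docker failure inside it could be inspected roughly like this (assuming the node container is named `minikube` and is still running):

```sh
# Unit status and recent journal of docker inside the node container
sudo -n podman exec -it minikube systemctl status docker
sudo -n podman exec -it minikube journalctl -u docker --no-pager | tail -n 50
```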
@thobianchi minikube v1.10.1 includes a lot of fixes for the podman driver, do you mind giving it another try?
I think it is the same error:
Looks like the same issue. We need to capture the logs before the container is torn down. Possibly you could start the container again (podman start) and try to start the docker service? But it seems that something goes wrong the first time, and the logs for that aren't shown here.
And when it deletes the first container and tries again, is there nothing deleting the volume?
It could probably keep the volume from the first time and just avoid trying to create it again. I'm not 100% convinced about the auto-kill feature; it might just as well have stayed down...
The container is deleted, so I can't do podman start. The failure is too quick to allow me to exec into the container.
You can see all the logs with ... The actual start is something like:
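The exact invocation was stripped from this copy, but based on how the KIC driver starts the node container, the run step is roughly like the sketch below (the flag set and image tag vary by release, so treat this as an approximation rather than the exact command):

```sh
sudo -n podman run --detach --tty --privileged \
  --security-opt seccomp=unconfined \
  --tmpfs /tmp --tmpfs /run \
  --volume minikube:/var \
  --name minikube --hostname minikube \
  --cpus=2 --memory=2000mb \
  gcr.io/k8s-minikube/kicbase:<tag>  # tag depends on the minikube release
```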
(I'm on F32, minikube 1.10.1, same issue.) The volume is created, but the container isn't. The logs seem to indicate that there is nothing between 'volume create' and 'inspect' (sans volume); the sudo logs confirm "podman volume create" and then "podman inspect". Is there a missing command? Logs attached, I hope they help.
There is supposed to be a matching "sudo -n podman run" in there somewhere.
I executed that command and exec'ed into the container.
I think the error here is: failed to start daemon: Devices cgroup isn't mounted
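A standard way to check which cgroup version the host is running (a general check, not something quoted from the thread):

```sh
stat -fc %T /sys/fs/cgroup/
# prints "cgroup2fs" on a cgroups v2 (unified hierarchy) host,
# and "tmpfs" on a cgroups v1 host
```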
Are you trying to run it with cgroups v2? Because you need to revert Fedora 31+ to cgroups v1.
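On Fedora 31/32 the commonly documented way to switch back to cgroups v1 is a kernel boot argument, followed by a reboot:

```sh
sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
sudo reboot
```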
Oh... I'm using podman because on Fedora I can't use the docker driver with cgroups v2. If the podman driver has the same dependency as docker, I have to continue to use kvm... :(
You can use the cri-o container runtime instead of the docker runtime, but I don't think that Kubernetes works with cgroups v2 just yet, so I'm not sure it helps in this case.
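Selecting the runtime is done with minikube's `--container-runtime` flag; something like the following (illustrative, given that cri-o support for the podman driver was still landing at the time):

```sh
minikube start --driver=podman --container-runtime=cri-o
```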
I think that we need to extend some of the warnings for the "none" driver (#7905) to also cover the "docker" and "podman" drivers, especially for things like these kernel and cgroups limitations...
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The exact command to reproduce the issue:
minikube --vm-driver=podman start
The full output of the command that failed:
[root@thomas-work]~# minikube --vm-driver=podman start
😄 minikube v1.7.3 on Fedora 31
✨ Using the podman (experimental) driver based on user configuration
🔥 Creating Kubernetes in podman container with (CPUs=2), Memory=2000MB (15719MB available) ...
💣 Unable to start VM. Please investigate and run 'minikube delete' if possible: creating host: create: provisioning: ssh command error:
command : sudo systemctl -f restart docker
err : Process exited with status 1
output : Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.
😿 minikube is exiting due to an error. If the above message is not useful, open an issue:
👉 https://github.com/kubernetes/minikube/issues/new/choose
The operating system version:
Fedora 31 : 5.5.5-200.fc31.x86_64
minikube version: v1.7.3
commit: 436667c
podman version 1.8.0
SELinux in permissive mode
It seems that even with the podman driver, minikube is trying to restart the docker service.