
Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: start: sudo -n podman start --cgroup-manager cgroupfs minipod: exit status 125 stdout: #18620

Closed
freeolive-guru opened this issue Apr 11, 2024 · 4 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@freeolive-guru

What Happened?

minikube start --kubernetes-version=v1.27.10 \
  --driver=podman --profile minipod

😄 [minipod] minikube v1.32.0 on Ubuntu 22.04
🆕 Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
✨ Using the podman driver based on existing profile
👍 Starting control plane node minipod in cluster minipod
🚜 Pulling base image ...
E0411 12:49:04.504994 65089 cache.go:189] Error downloading kic artifacts: not yet implemented, see issue #8426
🔄 Restarting existing podman container for "minipod" ...
🤦 StartHost failed, but will try again: driver start: start: sudo -n podman start --cgroup-manager cgroupfs minipod: exit status 125
stdout:

stderr:
time="2024-04-11T12:49:04Z" level=warning msg="Error validating CNI config file /etc/cni/net.d/minipod.conflist: [plugin bridge does not support config version \"1.0.0\" plugin portmap does not support config version \"1.0.0\" plugin firewall does not support config version \"1.0.0\" plugin tuning does not support config version \"1.0.0\"]"
Error: no container with name or ID "minipod" found: no such container

🔄 Restarting existing podman container for "minipod" ...
😿 Failed to start podman container. Running "minikube delete -p minipod" may fix it: driver start: start: sudo -n podman start --cgroup-manager cgroupfs minipod: exit status 125
stdout:

stderr:
time="2024-04-11T12:49:14Z" level=warning msg="Error validating CNI config file /etc/cni/net.d/minipod.conflist: [plugin bridge does not support config version \"1.0.0\" plugin portmap does not support config version \"1.0.0\" plugin firewall does not support config version \"1.0.0\" plugin tuning does not support config version \"1.0.0\"]"
Error: no container with name or ID "minipod" found: no such container

❌ Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: start: sudo -n podman start --cgroup-manager cgroupfs minipod: exit status 125
stdout:

stderr:
time="2024-04-11T12:49:14Z" level=warning msg="Error validating CNI config file /etc/cni/net.d/minipod.conflist: [plugin bridge does not support config version \"1.0.0\" plugin portmap does not support config version \"1.0.0\" plugin firewall does not support config version \"1.0.0\" plugin tuning does not support config version \"1.0.0\"]"
Error: no container with name or ID "minipod" found: no such container

╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                            │
│    😿  If the above advice does not help, please let us know:                              │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                            │
│                                                                                            │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.     │
│                                                                                            │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
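For anyone landing on this issue with the same failure: the output above already names the two likely culprits, a stale container record and a stale CNI config. A minimal recovery sketch, assuming the profile is still called minipod and podman is still the driver:

# Check whether podman actually knows about a "minipod" container; the
# "no container with name or ID" error suggests minikube's saved state
# and podman's state have drifted apart.
sudo podman ps -a --filter name=minipod

# Wipe the stale profile and recreate it, as the CLI output itself suggests.
minikube delete -p minipod
minikube start --kubernetes-version=v1.27.10 \
  --driver=podman --profile minipod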


Operating System

Ubuntu 22.04

Driver

Podman
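The repeated CNI warning is a separate clue: the conflist left behind by the old profile declares config version 1.0.0, which the host's installed CNI plugins are too old to parse. A hedged way to check, assuming Ubuntu 22.04 and the usual containernetworking-plugins package (path and package name may differ on other distros):

# Inspect the config version the stale profile wrote out.
grep cniVersion /etc/cni/net.d/minipod.conflist

# Check the installed CNI plugins; config version "1.0.0" needs a
# plugins release from the 1.x series.
dpkg -l containernetworking-plugins

# After minikube delete -p minipod, the leftover conflist can be removed too.
sudo rm -f /etc/cni/net.d/minipod.conflist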

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Jul 10, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Aug 9, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot closed this as not planned on Sep 8, 2024
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
