
Minikube start after previous delete fails with "network ... is already being used by a cni configuration" #10379

Closed
adamish opened this issue Feb 5, 2021 · 10 comments
Assignees
Labels: co/podman-driver (podman driver issues), kind/bug (Categorizes issue or PR as related to a bug), triage/duplicate (Indicates an issue is a duplicate of other open issue)

Comments

adamish commented Feb 5, 2021

Steps to reproduce the issue:

  1. minikube start --driver=podman
  2. minikube delete
  3. minikube start --driver=podman

Full output of minikube start command used

πŸ˜„  minikube v1.17.1 on Redhat 8.2
✨  Using the podman (experimental) driver based on user configuration
πŸ‘  Starting control plane node minikube in cluster minikube
πŸ”₯  Creating podman container (CPUs=2, Memory=3800MB) ...| E0205 22:30:38.549998  304006 network_create.go:85] error while trying to create network create network minikube 192.168.49.0/24: sudo -n podman network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 minikube: exit status 125
stdout:

stderr:
Error: network 192.168.49.0/24 is already being used by a cni configuration

The cause is that minikube delete doesn't remove the podman network it created, so the subsequent start sees the leftover CNI configuration and refuses to reuse the subnet.

This can be worked around with the following command, though the fix isn't obvious:

sudo podman network rm minikube
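
For reference, a fuller manual cleanup sequence (a sketch; it assumes the rootful podman setup the minikube podman driver uses, where the leftover network is named "minikube"):

# list the CNI networks podman knows about; the stale "minikube" entry shows up here
sudo podman network ls
# confirm it is the one still claiming 192.168.49.0/24
sudo podman network inspect minikube
# remove it so the next "minikube start" can recreate it
sudo podman network rm minikube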
BLasan (Contributor) commented Feb 6, 2021

@adamish try running minikube delete --all instead of minikube delete

alekonko commented Feb 7, 2021

@BLasan I have the same problem. The cluster itself seems to work correctly despite the error.

Steps to reproduce the issue:

  1. minikube delete --all
  2. minikube config set driver podman
  3. minikube start --driver=podman --container-runtime=cri-o -p minikube --cpus=4 --memory=12000MB

Full output of the failed minikube start command:
πŸ˜„ minikube v1.17.1 on Fedora 33
✨ Using the podman (experimental) driver based on user configuration
πŸ‘ Starting control plane node minikube in cluster minikube
πŸ”₯ Creating podman container (CPUs=4, Memory=12000MB) ...- E0207 12:41:09.265224 831751 network_create.go:85] error while trying to create network create network minikube 192.168.49.0/24: sudo -n podman network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 minikube: exit status 125
stdout:

stderr:
Error: network 192.168.49.0/24 is already being used by a cni configuration

❗ Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create network minikube 192.168.49.0/24: sudo -n podman network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 minikube: exit status 125
stdout:

stderr:
Error: network 192.168.49.0/24 is already being used by a cni configuration

🎁 Preparing Kubernetes v1.20.2 on CRI-O 1.19.1 ...
β–ͺ Generating certificates and keys ...
πŸ”— Configuring CNI (Container Networking Interface) ...
β–ͺ Booting up control plane ...
β–ͺ Configuring RBAC rules ...
πŸ”Ž Verifying Kubernetes components...
🌟 Enabled addons: storage-provisioner, default-storageclass

❗ /usr/local/bin/kubectl is version 1.18.2-0-g52c56ce, which may have incompatibilites with Kubernetes 1.20.2.
β–ͺ Want kubectl v1.20.2? Try 'minikube kubectl -- get pods -A'
πŸ„ Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Optional: Full output of minikube logs command:
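
Note that in this run the network error is non-fatal: minikube warns that the cluster IP may change after a restart, falls back to the default network, and the cluster comes up anyway. A quick way to check which IP the node container actually received (a sketch; the inspect format string is an assumption and may vary across podman versions):

# print the IP address assigned to the minikube node container
sudo podman inspect minikube --format '{{.NetworkSettings.IPAddress}}'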

BLasan (Contributor) commented Feb 7, 2021

> @BLasan I have the same problem. The cluster itself seems to work correctly despite the error. [full repro steps and start output quoted above]

I think we need to run sudo podman network rm minikube as part of cluster deletion, as sketched below. @medyagh Please take a look
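
In shell terms, the behaviour being asked for is roughly this (a hypothetical wrapper script for illustration only; the real fix belongs in minikube's own delete path):

#!/bin/sh
# Hypothetical wrapper: delete the cluster, then remove the leftover
# podman network named after the profile.
PROFILE=${1:-minikube}
minikube delete -p "$PROFILE"
# the network may already be gone, so ignore failures here
sudo podman network rm "$PROFILE" 2>/dev/null || true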

afbjorklund (Collaborator) commented Feb 7, 2021
Related to #9705 - we were relying on docker labels to do the cleanup, but they were either buggy or missing in podman...

The correct implementation would be to clean up the volumes and networks explicitly, and then only do labels as a fallback.
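
Expressed as shell, that order of operations would look something like this (a minimal sketch; the label key name.minikube.sigs.k8s.io is the one minikube applies to its docker resources, and whether podman propagates such labels reliably is exactly what was in question here):

PROFILE=minikube
# explicit cleanup by well-known resource names
sudo podman network rm "$PROFILE" 2>/dev/null || true
sudo podman volume rm "$PROFILE" 2>/dev/null || true
# label-based fallback for anything the explicit pass missed
sudo podman volume ls -q --filter "label=name.minikube.sigs.k8s.io=$PROFILE" | xargs -r sudo podman volume rm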

afbjorklund added the co/podman-driver, kind/bug, and triage/duplicate labels Feb 7, 2021
BLasan (Contributor) commented Feb 7, 2021

> Related to #9705 - we were relying on docker labels to do the cleanup, but they were either buggy or missing in podman...
>
> The correct implementation would be to clean up the volumes and networks explicitly, and then only do labels as a fallback.

Can I take up this issue?

BLasan (Contributor) commented Feb 7, 2021

/assign

alekonko commented Feb 7, 2021

Hi, after cleaning up the podman bridge with "sudo podman network rm minikube", the startup errors are gone. So it only appeared to work before because it was using the old CNI config.

Thanks a lot

BLasan (Contributor) commented Feb 7, 2021

> Hi, after cleaning up the podman bridge with "sudo podman network rm minikube", the startup errors are gone. So it only appeared to work before because it was using the old CNI config. Thanks a lot

Yeah, we need to clean up those configurations as part of the minikube cleanup process instead of running the command manually :)

BLasan (Contributor) commented Feb 11, 2021

@medyagh Can we close this issue sir?

sharifelgamal (Collaborator) commented
Yeah, this issue should be fixed at HEAD and will be included in the next release. I'll close this for now.
