
unable to disable preinstalled bridge CNI(s): failed to disable all bridge cni configs in "/etc/cni/net.d": #16197

Closed
Sivayes opened this issue Mar 30, 2023 · 6 comments
Labels
co/none-driver kind/support Categorizes issue or PR as a support question. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments


Sivayes commented Mar 30, 2023

What Happened?

Hi Team,

I am getting a CNI-related error; please advise.

root@worker-ub:~/cri-dockerd# minikube start --network-plugin=cni --cni=calico

  • minikube v1.29.0 on Ubuntu 22.04 (vbox/amd64)
  • Automatically selected the docker driver. Other choices: none, ssh
  • The "docker" driver should not be used with root privileges. If you wish to continue as root, use --force.
  • If you are running minikube within a VM, consider using --driver=none:
  • https://minikube.sigs.k8s.io/docs/reference/drivers/none/

X Exiting due to DRV_AS_ROOT: The "docker" driver should not be used with root privileges.

root@worker-ub:~/cri-dockerd#
root@worker-ub:~/cri-dockerd# apt-get install -y conntrack
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
conntrack is already the newest version (1:1.4.6-2build2).
0 upgraded, 0 newly installed, 0 to remove and 45 not upgraded.

root@worker-ub:~/cri-dockerd# minikube start --network-plugin=cni --cni=calico

  • minikube v1.29.0 on Ubuntu 22.04 (vbox/amd64)
  • Automatically selected the docker driver. Other choices: none, ssh
  • The "docker" driver should not be used with root privileges. If you wish to continue as root, use --force.
  • If you are running minikube within a VM, consider using --driver=none:
  • https://minikube.sigs.k8s.io/docs/reference/drivers/none/

X Exiting due to DRV_AS_ROOT: The "docker" driver should not be used with root privileges.

root@worker-ub:~/cri-dockerd#

root@worker-ub:~/cri-dockerd# minikube status
minikube
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

root@worker-ub:~/cri-dockerd#

Attach the log file

root@worker-ub:~/cri-dockerd# cat log.txt

==> Audit <==

    |---------|--------------------------------|----------|------|---------|---------------------|----------|
    | Command | Args | Profile | User | Version | Start Time | End Time |
    |---------|--------------------------------|----------|------|---------|---------------------|----------|
    | start | --network-plugin=cni | minikube | root | v1.29.0 | 30 Mar 23 05:26 UTC | |
    | | --cni=calico | | | | | |
    | start | --network-plugin=cni | minikube | root | v1.29.0 | 30 Mar 23 05:28 UTC | |
    | | --cni=calico | | | | | |
    | start | --network-plugin=cni | minikube | root | v1.29.0 | 30 Mar 23 05:28 UTC | |
    | | --cni=calico --driver=none | | | | | |
    | | --force | | | | | |
    | start | --network-plugin=cni | minikube | root | v1.29.0 | 30 Mar 23 05:39 UTC | |
    | | --cni=calico --wait=false | | | | | |
    | start | --network-plugin=cni | minikube | root | v1.29.0 | 30 Mar 23 05:41 UTC | |
    | | --cni=calico | | | | | |
    | start | --network-plugin=cni | minikube | root | v1.29.0 | 30 Mar 23 05:45 UTC | |
    | | --cni=calico | | | | | |
    | start | --network-plugin=cni | minikube | root | v1.29.0 | 30 Mar 23 05:51 UTC | |
    | | --cni=calico | | | | | |
    |---------|--------------------------------|----------|------|---------|---------------------|----------|

==> Last Start <==

    Log file created at: 2023/03/30 05:51:20
    Running on machine: worker-ub
    Binary: Built with gc go1.19.5 for linux/amd64
    Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    I0330 05:51:20.698408 1967 out.go:296] Setting OutFile to fd 1 ...
    I0330 05:51:20.698534 1967 out.go:343] TERM=xterm,COLORTERM=, which probably does not support color
    I0330 05:51:20.698536 1967 out.go:309] Setting ErrFile to fd 2...
    I0330 05:51:20.698539 1967 out.go:343] TERM=xterm,COLORTERM=, which probably does not support color
    I0330 05:51:20.698613 1967 root.go:334] Updating PATH: /root/.minikube/bin
    W0330 05:51:20.698696 1967 root.go:311] Error reading config file at /root/.minikube/config/config.json: open /root/.minikube/config/config.json: no such file or directory
    I0330 05:51:20.698812 1967 out.go:303] Setting JSON to false
    I0330 05:51:20.699554 1967 start.go:125] hostinfo: {"hostname":"worker-ub","uptime":454,"bootTime":1680155027,"procs":119,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"5.15.0-67-generic","kernelArch":"x86_64","virtualizationSystem":"vbox","virtualizationRole":"guest","hostId":"7e891d43-49f8-8b42-84da-82ad6f100850"}
    I0330 05:51:20.699595 1967 start.go:135] virtualization: vbox guest
    I0330 05:51:20.703835 1967 out.go:177] * minikube v1.29.0 on Ubuntu 22.04 (vbox/amd64)
    W0330 05:51:20.709739 1967 preload.go:295] Failed to list preload files: open /root/.minikube/cache/preloaded-tarball: no such file or directory
    I0330 05:51:20.710334 1967 config.go:180] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.26.1
    I0330 05:51:20.710381 1967 driver.go:365] Setting default libvirt URI to qemu:///system
    I0330 05:51:20.710714 1967 notify.go:220] Checking for updates...
    I0330 05:51:20.712068 1967 exec_runner.go:51] Run: systemctl --version
    I0330 05:51:20.718691 1967 out.go:177] * Using the none driver based on existing profile
    I0330 05:51:20.721886 1967 start.go:296] selected driver: none
    I0330 05:51:20.721899 1967 start.go:857] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:1975 CPUs:2 DiskSize:20000 VMDriver: Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:192.168.26.190 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/root:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
    I0330 05:51:20.722005 1967 start.go:868] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
    I0330 05:51:20.722022 1967 start.go:1617] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
    I0330 05:51:20.725832 1967 out.go:177]
    W0330 05:51:20.728762 1967 out.go:239] X The requested memory allocation of 1975MiB does not leave room for system overhead (total system memory: 2980MiB). You may face stability issues.
    W0330 05:51:20.728963 1967 out.go:239] * Suggestion: Start minikube with less memory allocated: 'minikube start --memory=2200mb'
    I0330 05:51:20.733951 1967 out.go:177]
    I0330 05:51:20.736117 1967 cni.go:84] Creating CNI manager for "calico"
    I0330 05:51:20.736152 1967 start_flags.go:319] config:
    {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:1975 CPUs:2 DiskSize:20000 VMDriver: Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:192.168.26.190 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/root:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: 
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
    I0330 05:51:20.738516 1967 out.go:177] * Starting control plane node minikube in cluster minikube
    I0330 05:51:20.740835 1967 profile.go:148] Saving config to /root/.minikube/profiles/minikube/config.json ...
    I0330 05:51:20.741152 1967 cache.go:193] Successfully downloaded all kic artifacts
    I0330 05:51:20.741177 1967 start.go:364] acquiring machines lock for minikube: {Name:mkc8ab01ad3ea83211c505c81a7ee49a8e3ecb89 Clock:{} Delay:500ms Timeout:13m0s Cancel:}
    W0330 05:51:20.741464 1967 start.go:689] error starting host: boot lock: unable to open /tmp/juju-mkc8ab01ad3ea83211c505c81a7ee49a8e3ecb89: permission denied
    W0330 05:51:20.741682 1967 none.go:130] unable to get port: "minikube" does not appear in /root/.kube/config
    I0330 05:51:20.741696 1967 api_server.go:165] Checking apiserver status ...
    I0330 05:51:20.741716 1967 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.minikube.
    W0330 05:51:20.782740 1967 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.minikube.: exit status 1
    stdout:

stderr:
I0330 05:51:20.782769 1967 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I0330 05:51:20.799619 1967 out.go:177] * Deleting "minikube" in none ...
I0330 05:51:20.804084 1967 exec_runner.go:51] Run: sudo systemctl stop -f kubelet
W0330 05:51:20.819316 1967 none.go:151] couldn't force stop kubelet. will continue with kill anyways: sudo systemctl stop -f kubelet: exit status 5
stdout:

stderr:
Failed to stop kubelet.service: Unit kubelet.service not loaded.
I0330 05:51:20.819353 1967 exec_runner.go:51] Run: docker ps -a --filter=name=k8s_ --format={{.ID}}
I0330 05:51:21.733912 1967 none.go:185] Removing: [/var/tmp/minikube /etc/kubernetes/manifests /var/lib/minikube]
I0330 05:51:21.733962 1967 exec_runner.go:51] Run: sudo rm -rf /var/tmp/minikube /etc/kubernetes/manifests /var/lib/minikube
W0330 05:51:21.744011 1967 out.go:239] ! StartHost failed, but will try again: boot lock: unable to open /tmp/juju-mkc8ab01ad3ea83211c505c81a7ee49a8e3ecb89: permission denied
I0330 05:51:21.744071 1967 start.go:704] Will try again in 5 seconds ...
I0330 05:51:26.744961 1967 start.go:364] acquiring machines lock for minikube: {Name:mkc8ab01ad3ea83211c505c81a7ee49a8e3ecb89 Clock:{} Delay:500ms Timeout:13m0s Cancel:}
W0330 05:51:26.745315 1967 out.go:239] * Failed to start none bare metal machine. Running "minikube delete" may fix it: boot lock: unable to open /tmp/juju-mkc8ab01ad3ea83211c505c81a7ee49a8e3ecb89: permission denied
I0330 05:51:26.758837 1967 out.go:177]
W0330 05:51:26.766024 1967 out.go:239] X Exiting due to HOST_JUJU_LOCK_PERMISSION: Failed to start host: boot lock: unable to open /tmp/juju-mkc8ab01ad3ea83211c505c81a7ee49a8e3ecb89: permission denied
W0330 05:51:26.766133 1967 out.go:239] * Suggestion: Run 'sudo sysctl fs.protected_regular=0', or try a driver which does not require root, such as '--driver=docker'
W0330 05:51:26.766187 1967 out.go:239] * Related issue: #6391
I0330 05:51:26.774025 1967 out.go:177]

root@worker-ub:~/cri-dockerd#

Operating System

Ubuntu

Driver

Docker
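The error in the issue title comes from minikube trying to disable any preinstalled bridge CNI configs before applying Calico, which it does by renaming config files under /etc/cni/net.d. A minimal sketch of that rename step, run in a scratch directory so nothing on the host is touched (the config file name here is an example, not taken from this report):

```shell
# Simulate the disable step in a temp dir instead of the real /etc/cni/net.d.
dir=$(mktemp -d)
touch "$dir/87-podman-bridge.conflist"   # example bridge CNI config

# Rename each bridge config so the container runtime stops loading it;
# minikube uses a suffix like .mk_disabled for the same purpose.
for f in "$dir"/*bridge*.conflist; do
  mv "$f" "$f.mk_disabled"
done

ls "$dir"
```

If the real rename fails (for example, due to permission problems under the none driver), minikube reports the "failed to disable all bridge cni configs" error seen in the title.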

@kundan2707
Contributor

/kind support

@k8s-ci-robot k8s-ci-robot added the kind/support Categorizes issue or PR as a support question. label Mar 30, 2023
@afbjorklund
Collaborator

afbjorklund commented Apr 2, 2023

You don't need to run as root to use minikube start, as long as the user has access to Docker.

For the none driver, the issue seems to be with unsupported systemd settings (fs.protected_regular).
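Both points translate into a couple of commands. This is a sketch of system-configuration steps, not something from this report; only the sysctl line comes from minikube's own suggestion in the log above:

```shell
# Preferred: run minikube as a regular (non-root) user who is in the
# docker group, so the docker driver works without --force.
sudo usermod -aG docker "$USER"
newgrp docker          # or log out and back in to pick up the new group
minikube start --network-plugin=cni --cni=calico

# If the none driver must be used as root, the HOST_JUJU_LOCK_PERMISSION
# failure in the log points at fs.protected_regular; minikube itself
# suggests relaxing it:
sudo sysctl fs.protected_regular=0
```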

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 1, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 19, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned Won't fix, can't repro, duplicate, stale Feb 18, 2024
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

5 participants