
[cgroup2, driver=docker, runtime=containerd] failed to create containerd task: cgroups: cgroup mountpoint does not exist #11310

Closed
AkihiroSuda opened this issue May 6, 2021 · 5 comments · Fixed by #11325
Labels
co/cgroup kind/bug Categorizes issue or PR as related to a bug. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release.

Comments

@AkihiroSuda (Member):

Steps to reproduce the issue:

  1. Install minikube v1.19.0 to Fedora 34 (cgroup v2), with Docker v20.10.6
  2. Run minikube start --driver=docker --container-runtime=containerd
  3. Pods cannot be started: "CreatePodSandbox for pod \"kube-apiserver-minikube_kube-system(c767dbeb9ddd2d01964c2fc02c621c4e)\" failed: rpc error: code = Unknown desc = failed to create containerd task: cgroups: cgroup mountpoint does not exist: unknown"

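Step 1 above hinges on the host running a pure cgroup v2 (unified) hierarchy, which is Fedora's default since Fedora 31. A quick way to confirm this before reproducing is to check for the `cgroup.controllers` file at the cgroup mount root; the sketch below is illustrative only (the `root` parameter exists purely so the check can be exercised against a test directory):

```python
from pathlib import Path

def cgroup_version(root: str = "/sys/fs/cgroup") -> int:
    """Return 2 on a unified (cgroup v2) hierarchy, else 1.

    On a pure cgroup v2 host, the unified hierarchy exposes a
    `cgroup.controllers` file directly at the mount root; on v1
    (or hybrid) hosts that file is absent at the root.
    """
    return 2 if (Path(root) / "cgroup.controllers").is_file() else 1
```

This is equivalent to checking that `stat -fc %T /sys/fs/cgroup` prints `cgroup2fs`.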
Vagrantfile:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "fedora/34-cloud-base"
  config.vm.provider :virtualbox do |v|
    v.memory = 4096
    v.cpus = 2
  end
  config.vm.provision "shell", inline: <<-SHELL
  set -eux -o pipefail
  # Disable SELinux to avoid hitting potential issues
  setenforce 0

  # Install Docker
  curl -fsSL https://get.docker.com | sh
  systemctl enable --now docker
  usermod -aG docker vagrant

  # Install minikube
  curl -o /usr/local/bin/minikube -fsSL https://github.com/kubernetes/minikube/releases/download/v1.19.0/minikube-linux-amd64
  chmod +x /usr/local/bin/minikube
  SHELL
end

Full output of minikube logs command:

...
* May 06 14:52:05 minikube kubelet[2495]: E0506 14:52:05.830441    2495 pod_workers.go:191] Error syncing pod c767dbeb9ddd2d01964c2fc02c621c4e ("kube-apiserver-minikube_kube-system(c767dbeb9ddd2d01964c2fc02c621c4e)"), skipping: failed to "CreatePodSandbox" for "kube-apiserver-minikube_kube-system(c767dbeb9ddd2d01964c2fc02c621c4e)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-apiserver-minikube_kube-system(c767dbeb9ddd2d01964c2fc02c621c4e)\" failed: rpc error: code = Unknown desc = failed to create containerd task: cgroups: cgroup mountpoint does not exist: unknown"
...

Full output of failed command:

[vagrant@fedora ~]$ minikube start --driver=docker --container-runtime=containerd                                                                                                                                                                                                     
* minikube v1.19.0 on Fedora 34 (vbox/amd64)                                                                                                                                                                                                                                          
* Using the docker driver based on user configuration                                                                                                  
* Starting control plane node minikube in cluster minikube                                                                                             
* Pulling base image ...                                                                                                                               
* Downloading Kubernetes v1.20.2 preload ...                                                                                                                           
    > gcr.io/k8s-minikube/kicbase...: 357.67 MiB / 357.67 MiB  100.00% 3.85 MiB                                                                                                          
    > preloaded-images-k8s-v10-v1...: 911.27 MiB / 911.27 MiB  100.00% 6.70 MiB                                                                                        
* Creating docker container (CPUs=2, Memory=2200MB) ...                                                                                                                                                                                       
* Preparing Kubernetes v1.20.2 on containerd 1.4.4 ...                                                                                                 
  - Generating certificates and keys ...                                                                                                                               
  - Booting up control plane ...                                                                                                                       
! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable
--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap
,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1                                                                          
stdout:                                                                                                                                                                                                         
[init] Using Kubernetes version: v1.20.2                                                                                                                                                 
[preflight] Running pre-flight checks                                                                                                                                                    
[preflight] Pulling images required for setting up a Kubernetes cluster                                                                                
[preflight] This might take a minute or two, depending on the speed of your internet connection                                                                                                                                               
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'                                                          
[certs] Using certificateDir folder "/var/lib/minikube/certs"                                                                                          
[certs] Using existing ca certificate authority                                                                                                        
[certs] Using existing apiserver certificate and key on disk                                                                                                                                                    
[certs] Generating "apiserver-kubelet-client" certificate and key                                                                                                      
[certs] Generating "front-proxy-ca" certificate and key                                                                                                                
[certs] Generating "front-proxy-client" certificate and key                                                                                                            
[certs] Generating "etcd/ca" certificate and key                                                                                                                                                                                              
[certs] Generating "etcd/server" certificate and key                                                                                                   
[certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]                                                     
[certs] Generating "etcd/peer" certificate and key                                                                                                                     
[certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]                                                                                                                              
[certs] Generating "etcd/healthcheck-client" certificate and key                                                                                       
[certs] Generating "apiserver-etcd-client" certificate and key                                                                                                                           
[certs] Generating "sa" key and public key                                                                                                                             
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"                                                                                                                 
[kubeconfig] Writing "admin.conf" kubeconfig file                                                                                                                      
[kubeconfig] Writing "kubelet.conf" kubeconfig file                                                                                                                    
[kubeconfig] Writing "controller-manager.conf" kubeconfig file                                                                                                                           
[kubeconfig] Writing "scheduler.conf" kubeconfig file                                                                                                                                    
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"                                                                                                        
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"                                                                                                     
[kubelet-start] Starting the kubelet                                                                                                                                                     
[control-plane] Using manifest folder "/etc/kubernetes/manifests"                                                                                                                                               
[control-plane] Creating static Pod manifest for "kube-apiserver"                                                                                                                                                                             
[control-plane] Creating static Pod manifest for "kube-controller-manager"                                                                                                                                                                    
[control-plane] Creating static Pod manifest for "kube-scheduler"                                                                                                                                                                             
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"                                                                                                                               
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s                                                   
[kubelet-check] Initial timeout of 40s passed.       
medyagh (Member) commented May 6, 2021:

@AkihiroSuda I believe we currently don't have an integration test with cgroup v2, but we improved support a bit in 1.20.0-beta0.
I'm curious: have you tried 1.20.0-beta0, and if so, what error do you get?

medyagh (Member) commented May 6, 2021:

/triage needs-information
/kind support

@k8s-ci-robot k8s-ci-robot added triage/needs-information Indicates an issue needs more information in order to work on it. kind/support Categorizes issue or PR as a support question. labels May 6, 2021
AkihiroSuda (Member, Author) commented:

Tried 1.20.0-beta0; same error.

AkihiroSuda (Member, Author) commented May 7, 2021:

The problem is that minikube still uses the deprecated containerd runtime "io.containerd.runtime.v1.linux", which does not support cgroup v2.
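For reference, the cgroup-v2-capable replacement is the `io.containerd.runc.v2` shim. A minimal sketch of a CRI runtime section using it is below; this is an illustrative fragment (assuming containerd 1.4's version-2 config schema), not minikube's actual shipped config:

```toml
# Hypothetical /etc/containerd/config.toml fragment (containerd 1.4, config version 2).
# The runc v2 shim supports cgroup v2; the legacy io.containerd.runtime.v1.linux does not.
version = 2

[plugins."io.containerd.grpc.v1.cri".containerd]
  default_runtime_name = "runc"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      # With systemd as the cgroup manager (typical on cgroup v2 hosts):
      SystemdCgroup = true
```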

This patch works:

--- /etc/containerd/config.toml.bak     2021-05-07 08:34:12.071667558 +0000
+++ /etc/containerd/config.toml 2021-05-07 08:35:41.652779974 +0000
@@ -36,15 +36,6 @@
     max_container_log_line_size = 16384
     [plugins.cri.containerd]
       snapshotter = "overlayfs"
-      no_pivot = true
-      [plugins.cri.containerd.default_runtime]
-        runtime_type = "io.containerd.runtime.v1.linux"
-        runtime_engine = ""
-        runtime_root = ""
-      [plugins.cri.containerd.untrusted_workload_runtime]
-        runtime_type = ""
-        runtime_engine = ""
-        runtime_root = ""
     [plugins.cri.cni]
       bin_dir = "/opt/cni/bin"
       conf_dir = "/etc/cni/net.mk"
@@ -55,12 +46,6 @@
           endpoint = ["https://registry-1.docker.io"]
         [plugins.diff-service]
     default = ["walking"]
-  [plugins.linux]
-    shim = "containerd-shim"
-    runtime = "runc"
-    runtime_root = ""
-    no_shim = false
-    shim_debug = false
   [plugins.scheduler]
     pause_threshold = 0.02
     deletion_threshold = 0
With this patch applied, all pods start successfully:

$ minikube kubectl get -- pods -A
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-74ff55c5b-2q8pj            1/1     Running   0          109s
kube-system   etcd-minikube                      1/1     Running   0          113s
kube-system   kindnet-858kj                      1/1     Running   0          109s
kube-system   kube-apiserver-minikube            1/1     Running   0          113s
kube-system   kube-controller-manager-minikube   1/1     Running   0          113s
kube-system   kube-proxy-rs987                   1/1     Running   0          109s
kube-system   kube-scheduler-minikube            1/1     Running   0          113s
kube-system   storage-provisioner                1/1     Running   0          2m1s
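A check like the one above can be scripted by parsing the `kubectl get pods -A` table and verifying that every pod is Running and fully ready. A minimal sketch, assuming the whitespace-separated column layout shown in the output above:

```python
def all_pods_healthy(table: str) -> bool:
    """Return True iff every data row shows STATUS == Running and READY == n/n."""
    rows = table.strip().splitlines()[1:]  # skip the header line
    for row in rows:
        cols = row.split()
        # Columns: NAMESPACE NAME READY STATUS RESTARTS AGE
        ready, status = cols[2], cols[3]
        up, total = ready.split("/")
        if status != "Running" or up != total:
            return False
    return bool(rows)  # an empty table is not "healthy"
```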

AkihiroSuda (Member, Author) commented:

PR: #11325

@spowelljr spowelljr removed the triage/needs-information Indicates an issue needs more information in order to work on it. label May 17, 2021
@ilya-zuyev ilya-zuyev added kind/bug Categorizes issue or PR as related to a bug. co/cgroup and removed kind/support Categorizes issue or PR as a support question. labels May 18, 2021
@spowelljr spowelljr added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label May 18, 2021