
Kubelet: misconfiguration: kubelet cgroup driver: "'systemd'" is different from docker cgroup driver: "systemd" #6544

Closed
zigarn opened this issue Feb 7, 2020 · 6 comments · Fixed by #6549
Labels: co/none-driver, co/runtime/docker, kind/bug

zigarn (Contributor) commented Feb 7, 2020

The exact command to reproduce the issue:

$ minikube start --vm-driver none

The full output of the command that failed:

πŸ˜„  minikube v1.7.1 on Centos 7.6.1810 (xen/amd64)
    β–ͺ KUBECONFIG=/home/centos/.kube/config
    β–ͺ MINIKUBE_WANTREPORTERRORPROMPT=false
    β–ͺ MINIKUBE_WANTUPDATENOTIFICATION=false
    β–ͺ MINIKUBE_HOME=/home/centos
✨  Using the none driver based on user configuration
🀹  Running on localhost (CPUs=2, Memory=7819MB, Disk=8181MB) ...
ℹ️   OS release is CentOS Linux 7 (Core)
⚠️  Node may be unable to resolve external DNS records
🐳  Preparing Kubernetes v1.17.2 on Docker '19.03.5' ...
πŸ’Ύ  Downloading kubectl v1.17.2
πŸ’Ύ  Downloading kubelet v1.17.2
πŸ’Ύ  Downloading kubeadm v1.17.2
🚜  Pulling images ...
πŸš€  Launching Kubernetes ... 

πŸ’£  Error starting cluster: init failed. output: "-- stdout --\n[init] Using Kubernetes version: v1.17.2\n[preflight] Running pre-flight checks\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[certs] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [kubernetes.2020-02-07-k8s-user-test.local localhost] and IPs [10.0.101.58 127.0.0.1 ::1]\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [kubernetes.2020-02-07-k8s-user-test.local localhost] and IPs [10.0.101.58 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". 
This can take up to 4m0s\n[kubelet-check] Initial timeout of 40s passed.\n[kubelet-check] It seems like the kubelet isn't running or healthy.\n[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.\n[kubelet-check] It seems like the kubelet isn't running or healthy.\n[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.\n[kubelet-check] It seems like the kubelet isn't running or healthy.\n[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.\n[kubelet-check] It seems like the kubelet isn't running or healthy.\n[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.\n[kubelet-check] It seems like the kubelet isn't running or healthy.\n[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.\n\nUnfortunately, an error has occurred:\n\ttimed out waiting for the condition\n\nThis error is likely caused by:\n\t- The kubelet is not running\n\t- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)\n\nIf you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:\n\t- 'systemctl status kubelet'\n\t- 'journalctl -xeu kubelet'\n\nAdditionally, a control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI, e.g. 
docker.\nHere is one example how you may list all Kubernetes containers running in docker:\n\t- 'docker ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'docker logs CONTAINERID'\n\n-- /stdout --\n** stderr ** \nW0207 16:49:51.447922    5412 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:\"kubeadm.k8s.io\", Version:\"v1beta2\", Kind:\"ClusterConfiguration\"}: error converting YAML to JSON: yaml: unmarshal errors:\n  line 9: key \"apiServer\" already set in map\nW0207 16:49:51.448633    5412 validation.go:28] Cannot validate kube-proxy config - no validator is available\nW0207 16:49:51.448652    5412 validation.go:28] Cannot validate kubelet config - no validator is available\n\t[WARNING FileExisting-ebtables]: ebtables not found in system path\n\t[WARNING Hostname]: hostname \"kubernetes.2020-02-07-k8s-user-test.local\" could not be reached\n\t[WARNING Hostname]: hostname \"kubernetes.2020-02-07-k8s-user-test.local\": lookup kubernetes.2020-02-07-k8s-user-test.local on 10.0.0.2:53: no such host\nW0207 16:49:56.102743    5412 manifests.go:214] the default kube-apiserver authorization-mode is \"Node,RBAC\"; using \"Node,RBAC\"\nW0207 16:49:56.103742    5412 manifests.go:214] the default kube-apiserver authorization-mode is \"Node,RBAC\"; using \"Node,RBAC\"\nerror execution phase wait-control-plane: couldn't initialize a Kubernetes cluster\nTo see the stack trace of this error execute with --v=5 or higher\n\n** /stderr **": /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification": exit status 1
stdout:
[init] Using Kubernetes version: v1.17.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubernetes.2020-02-07-k8s-user-test.local localhost] and IPs [10.0.101.58 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubernetes.2020-02-07-k8s-user-test.local localhost] and IPs [10.0.101.58 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
	- 'docker ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'docker logs CONTAINERID'

stderr:
W0207 16:49:51.447922    5412 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta2", Kind:"ClusterConfiguration"}: error converting YAML to JSON: yaml: unmarshal errors:
  line 9: key "apiServer" already set in map
W0207 16:49:51.448633    5412 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0207 16:49:51.448652    5412 validation.go:28] Cannot validate kubelet config - no validator is available
	[WARNING FileExisting-ebtables]: ebtables not found in system path
	[WARNING Hostname]: hostname "kubernetes.2020-02-07-k8s-user-test.local" could not be reached
	[WARNING Hostname]: hostname "kubernetes.2020-02-07-k8s-user-test.local": lookup kubernetes.2020-02-07-k8s-user-test.local on 10.0.0.2:53: no such host
W0207 16:49:56.102743    5412 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0207 16:49:56.103742    5412 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher


😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
πŸ‘‰  https://github.com/kubernetes/minikube/issues/new/choose

The output of the minikube logs command:

==> Docker <==
-- Logs begin at Fri 2020-02-07 15:48:35 UTC, end at Fri 2020-02-07 16:53:07 UTC. --
Feb 07 16:28:06 kubernetes.2020-02-07-k8s-user-test.local systemd[1]: Failed to start Docker Application Container Engine.
Feb 07 16:28:06 kubernetes.2020-02-07-k8s-user-test.local systemd[1]: docker.service failed.
Feb 07 16:28:07 kubernetes.2020-02-07-k8s-user-test.local systemd[1]: start request repeated too quickly for docker.service
Feb 07 16:28:07 kubernetes.2020-02-07-k8s-user-test.local systemd[1]: Failed to start Docker Application Container Engine.
Feb 07 16:28:07 kubernetes.2020-02-07-k8s-user-test.local systemd[1]: docker.service failed.
Feb 07 16:28:08 kubernetes.2020-02-07-k8s-user-test.local systemd[1]: start request repeated too quickly for docker.service
Feb 07 16:28:08 kubernetes.2020-02-07-k8s-user-test.local systemd[1]: Failed to start Docker Application Container Engine.
Feb 07 16:28:08 kubernetes.2020-02-07-k8s-user-test.local systemd[1]: docker.service failed.
Feb 07 16:28:09 kubernetes.2020-02-07-k8s-user-test.local systemd[1]: start request repeated too quickly for docker.service
Feb 07 16:28:09 kubernetes.2020-02-07-k8s-user-test.local systemd[1]: Failed to start Docker Application Container Engine.
Feb 07 16:28:09 kubernetes.2020-02-07-k8s-user-test.local systemd[1]: docker.service failed.
Feb 07 16:28:09 kubernetes.2020-02-07-k8s-user-test.local systemd[1]: start request repeated too quickly for docker.service
Feb 07 16:28:09 kubernetes.2020-02-07-k8s-user-test.local systemd[1]: Failed to start Docker Application Container Engine.
Feb 07 16:28:09 kubernetes.2020-02-07-k8s-user-test.local systemd[1]: docker.service failed.
Feb 07 16:28:10 kubernetes.2020-02-07-k8s-user-test.local systemd[1]: start request repeated too quickly for docker.service
Feb 07 16:28:10 kubernetes.2020-02-07-k8s-user-test.local systemd[1]: Failed to start Docker Application Container Engine.
Feb 07 16:28:10 kubernetes.2020-02-07-k8s-user-test.local systemd[1]: docker.service failed.
Feb 07 16:28:11 kubernetes.2020-02-07-k8s-user-test.local systemd[1]: start request repeated too quickly for docker.service
Feb 07 16:28:11 kubernetes.2020-02-07-k8s-user-test.local systemd[1]: Failed to start Docker Application Container Engine.
Feb 07 16:28:11 kubernetes.2020-02-07-k8s-user-test.local systemd[1]: docker.service failed.
Feb 07 16:28:12 kubernetes.2020-02-07-k8s-user-test.local systemd[1]: Starting Docker Application Container Engine...
Feb 07 16:28:12 kubernetes.2020-02-07-k8s-user-test.local dockerd[30166]: time="2020-02-07T16:28:12.199842461Z" level=info msg="Starting up"
Feb 07 16:28:12 kubernetes.2020-02-07-k8s-user-test.local dockerd[30166]: time="2020-02-07T16:28:12.202731924Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 07 16:28:12 kubernetes.2020-02-07-k8s-user-test.local dockerd[30166]: time="2020-02-07T16:28:12.202756998Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 07 16:28:12 kubernetes.2020-02-07-k8s-user-test.local dockerd[30166]: time="2020-02-07T16:28:12.202778040Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
Feb 07 16:28:12 kubernetes.2020-02-07-k8s-user-test.local dockerd[30166]: time="2020-02-07T16:28:12.202791502Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 07 16:28:12 kubernetes.2020-02-07-k8s-user-test.local dockerd[30166]: time="2020-02-07T16:28:12.204427629Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 07 16:28:12 kubernetes.2020-02-07-k8s-user-test.local dockerd[30166]: time="2020-02-07T16:28:12.204452199Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 07 16:28:12 kubernetes.2020-02-07-k8s-user-test.local dockerd[30166]: time="2020-02-07T16:28:12.204468881Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
Feb 07 16:28:12 kubernetes.2020-02-07-k8s-user-test.local dockerd[30166]: time="2020-02-07T16:28:12.204482155Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 07 16:28:12 kubernetes.2020-02-07-k8s-user-test.local dockerd[30166]: time="2020-02-07T16:28:12.214316790Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Feb 07 16:28:12 kubernetes.2020-02-07-k8s-user-test.local dockerd[30166]: time="2020-02-07T16:28:12.220403720Z" level=info msg="Loading containers: start."
Feb 07 16:28:12 kubernetes.2020-02-07-k8s-user-test.local dockerd[30166]: time="2020-02-07T16:28:12.305131137Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Feb 07 16:28:12 kubernetes.2020-02-07-k8s-user-test.local dockerd[30166]: time="2020-02-07T16:28:12.338183984Z" level=info msg="Loading containers: done."
Feb 07 16:28:12 kubernetes.2020-02-07-k8s-user-test.local dockerd[30166]: time="2020-02-07T16:28:12.351368298Z" level=info msg="Docker daemon" commit=633a0ea graphdriver(s)=overlay2 version=19.03.5
Feb 07 16:28:12 kubernetes.2020-02-07-k8s-user-test.local dockerd[30166]: time="2020-02-07T16:28:12.351431706Z" level=info msg="Daemon has completed initialization"
Feb 07 16:28:12 kubernetes.2020-02-07-k8s-user-test.local systemd[1]: Started Docker Application Container Engine.
Feb 07 16:28:12 kubernetes.2020-02-07-k8s-user-test.local dockerd[30166]: time="2020-02-07T16:28:12.371425564Z" level=info msg="API listen on /var/run/docker.sock"
Feb 07 16:28:24 kubernetes.2020-02-07-k8s-user-test.local systemd[1]: Stopping Docker Application Container Engine...
Feb 07 16:28:24 kubernetes.2020-02-07-k8s-user-test.local dockerd[30166]: time="2020-02-07T16:28:24.641935004Z" level=info msg="Processing signal 'terminated'"
Feb 07 16:28:24 kubernetes.2020-02-07-k8s-user-test.local dockerd[30166]: time="2020-02-07T16:28:24.642774608Z" level=info msg="Daemon shutdown complete"
Feb 07 16:28:24 kubernetes.2020-02-07-k8s-user-test.local systemd[1]: Stopped Docker Application Container Engine.
Feb 07 16:28:24 kubernetes.2020-02-07-k8s-user-test.local systemd[1]: Starting Docker Application Container Engine...
Feb 07 16:28:24 kubernetes.2020-02-07-k8s-user-test.local dockerd[31364]: time="2020-02-07T16:28:24.713425251Z" level=info msg="Starting up"
Feb 07 16:28:24 kubernetes.2020-02-07-k8s-user-test.local dockerd[31364]: time="2020-02-07T16:28:24.714686209Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 07 16:28:24 kubernetes.2020-02-07-k8s-user-test.local dockerd[31364]: time="2020-02-07T16:28:24.714713286Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 07 16:28:24 kubernetes.2020-02-07-k8s-user-test.local dockerd[31364]: time="2020-02-07T16:28:24.714737047Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
Feb 07 16:28:24 kubernetes.2020-02-07-k8s-user-test.local dockerd[31364]: time="2020-02-07T16:28:24.715190355Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 07 16:28:24 kubernetes.2020-02-07-k8s-user-test.local dockerd[31364]: time="2020-02-07T16:28:24.716557760Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 07 16:28:24 kubernetes.2020-02-07-k8s-user-test.local dockerd[31364]: time="2020-02-07T16:28:24.716582971Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 07 16:28:24 kubernetes.2020-02-07-k8s-user-test.local dockerd[31364]: time="2020-02-07T16:28:24.716599179Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
Feb 07 16:28:24 kubernetes.2020-02-07-k8s-user-test.local dockerd[31364]: time="2020-02-07T16:28:24.716631318Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 07 16:28:24 kubernetes.2020-02-07-k8s-user-test.local dockerd[31364]: time="2020-02-07T16:28:24.726506829Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Feb 07 16:28:24 kubernetes.2020-02-07-k8s-user-test.local dockerd[31364]: time="2020-02-07T16:28:24.733490028Z" level=info msg="Loading containers: start."
Feb 07 16:28:24 kubernetes.2020-02-07-k8s-user-test.local dockerd[31364]: time="2020-02-07T16:28:24.813929773Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Feb 07 16:28:24 kubernetes.2020-02-07-k8s-user-test.local dockerd[31364]: time="2020-02-07T16:28:24.846686157Z" level=info msg="Loading containers: done."
Feb 07 16:28:24 kubernetes.2020-02-07-k8s-user-test.local dockerd[31364]: time="2020-02-07T16:28:24.860182361Z" level=info msg="Docker daemon" commit=633a0ea graphdriver(s)=overlay2 version=19.03.5
Feb 07 16:28:24 kubernetes.2020-02-07-k8s-user-test.local dockerd[31364]: time="2020-02-07T16:28:24.860238489Z" level=info msg="Daemon has completed initialization"
Feb 07 16:28:24 kubernetes.2020-02-07-k8s-user-test.local dockerd[31364]: time="2020-02-07T16:28:24.880391558Z" level=info msg="API listen on /var/run/docker.sock"
Feb 07 16:28:24 kubernetes.2020-02-07-k8s-user-test.local systemd[1]: Started Docker Application Container Engine.

==> container status <==
which: no crictl in (/home/centos/.minikube/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin)
sudo: crictl: command not found
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

==> dmesg <==
dmesg: invalid option -- '='

Usage:
 dmesg [options]

Options:
 -C, --clear                 clear the kernel ring buffer
 -c, --read-clear            read and clear all messages
 -D, --console-off           disable printing messages to console
 -d, --show-delta            show time delta between printed messages
 -e, --reltime               show local time and time delta in readable format
 -E, --console-on            enable printing messages to console
 -F, --file <file>           use the file instead of the kernel log buffer
 -f, --facility <list>       restrict output to defined facilities
 -H, --human                 human readable output
 -k, --kernel                display kernel messages
 -L, --color                 colorize messages
 -l, --level <list>          restrict output to defined levels
 -n, --console-level <level> set level of messages printed to console
 -P, --nopager               do not pipe output into a pager
 -r, --raw                   print the raw message buffer
 -S, --syslog                force to use syslog(2) rather than /dev/kmsg
 -s, --buffer-size <size>    buffer size to query the kernel ring buffer
 -T, --ctime                 show human readable timestamp (could be 
                               inaccurate if you have used SUSPEND/RESUME)
 -t, --notime                don't print messages timestamp
 -u, --userspace             display userspace messages
 -w, --follow                wait for new messages
 -x, --decode                decode facility and level to readable string

 -h, --help     display this help and exit
 -V, --version  output version information and exit

Supported log facilities:
    kern - kernel messages
    user - random user-level messages
    mail - mail system
  daemon - system daemons
    auth - security/authorization messages
  syslog - messages generated internally by syslogd
     lpr - line printer subsystem
    news - network news subsystem

Supported log levels (priorities):
   emerg - system is unusable
   alert - action must be taken immediately
    crit - critical conditions
     err - error conditions
    warn - warning conditions
  notice - normal but significant condition
    info - informational
   debug - debug-level messages


For more details see dmesg(q).

==> kernel <==
 16:53:07 up  1:04,  2 users,  load average: 0.01, 0.09, 0.22
Linux kubernetes.2020-02-07-k8s-user-test.local 3.10.0-957.1.3.el7.x86_64 #1 SMP Thu Nov 29 14:49:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="CentOS Linux 7 (Core)"

==> kubelet <==
-- Logs begin at Fri 2020-02-07 15:48:35 UTC, end at Fri 2020-02-07 16:53:07 UTC. --
Feb 07 16:53:06 kubernetes.2020-02-07-k8s-user-test.local kubelet[21321]: E0207 16:53:06.477160   21321 reflector.go:153] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
Feb 07 16:53:06 kubernetes.2020-02-07-k8s-user-test.local kubelet[21321]: E0207 16:53:06.477230   21321 reflector.go:153] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes.2020-02-07-k8s-user-test.local&limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
Feb 07 16:53:06 kubernetes.2020-02-07-k8s-user-test.local kubelet[21321]: W0207 16:53:06.483957   21321 docker_service.go:563] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Feb 07 16:53:06 kubernetes.2020-02-07-k8s-user-test.local kubelet[21321]: I0207 16:53:06.483988   21321 docker_service.go:240] Hairpin mode set to "hairpin-veth"
Feb 07 16:53:06 kubernetes.2020-02-07-k8s-user-test.local kubelet[21321]: W0207 16:53:06.484090   21321 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Feb 07 16:53:06 kubernetes.2020-02-07-k8s-user-test.local kubelet[21321]: W0207 16:53:06.488027   21321 hostport_manager.go:69] The binary conntrack is not installed, this can cause failures in network connection cleanup.
Feb 07 16:53:06 kubernetes.2020-02-07-k8s-user-test.local kubelet[21321]: W0207 16:53:06.488055   21321 hostport_manager.go:69] The binary conntrack is not installed, this can cause failures in network connection cleanup.
Feb 07 16:53:06 kubernetes.2020-02-07-k8s-user-test.local kubelet[21321]: I0207 16:53:06.489397   21321 docker_service.go:255] Docker cri networking managed by kubernetes.io/no-op
Feb 07 16:53:06 kubernetes.2020-02-07-k8s-user-test.local kubelet[21321]: I0207 16:53:06.498472   21321 docker_service.go:260] Docker Info: &{ID:75Z5:TBOJ:QJLT:XNW6:TGM5:YZ6K:B5OT:5STN:4ROQ:BTMN:OJYF:HFIN Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:13 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:22 OomKillDisable:true NGoroutines:35 SystemTime:2020-02-07T16:53:06.490169125Z LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:3.10.0-957.1.3.el7.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0001fed90 NCPU:2 MemTotal:8199827456 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:kubernetes.2020-02-07-k8s-user-test.local Labels:[] ExperimentalBuild:false ServerVersion:19.03.5 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b34a5c8af56e510852c35414db4c1f4fa6172339 Expected:b34a5c8af56e510852c35414db4c1f4fa6172339} RuncCommit:{ID:3e425f80a8c931f88e6d94a8c831b9d5aa481657 Expected:3e425f80a8c931f88e6d94a8c831b9d5aa481657} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[]}
Feb 07 16:53:06 kubernetes.2020-02-07-k8s-user-test.local kubelet[21321]: F0207 16:53:06.498563   21321 server.go:273] failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "'systemd'" is different from docker cgroup driver: "systemd"
Feb 07 16:53:06 kubernetes.2020-02-07-k8s-user-test.local systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Feb 07 16:53:06 kubernetes.2020-02-07-k8s-user-test.local systemd[1]: Unit kubelet.service entered failed state.
Feb 07 16:53:06 kubernetes.2020-02-07-k8s-user-test.local systemd[1]: kubelet.service failed.
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local systemd[1]: kubelet.service holdoff time over, scheduling restart.
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local systemd[1]: Started kubelet: The Kubernetes Node Agent.
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: I0207 16:53:07.397892   21449 server.go:416] Version: v1.17.2
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: I0207 16:53:07.398096   21449 plugins.go:100] No cloud provider specified.
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: I0207 16:53:07.398114   21449 server.go:821] Client rotation is on, will bootstrap in background
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: I0207 16:53:07.401287   21449 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: I0207 16:53:07.529311   21449 server.go:641] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: I0207 16:53:07.529662   21449 container_manager_linux.go:265] container manager verified user specified cgroup-root exists: []
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: I0207 16:53:07.529677   21449 container_manager_linux.go:270] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:'systemd' KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: I0207 16:53:07.529784   21449 fake_topology_manager.go:29] [fake topologymanager] NewFakeManager
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: I0207 16:53:07.529794   21449 container_manager_linux.go:305] Creating device plugin manager: true
E0207 16:53:07.620235   21406 style.go:172] unable to parse "Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: I0207 16:53:07.529818   21449 fake_topology_manager.go:39] [fake topologymanager] AddHintProvider HintProvider:  &{kubelet.sock /var/lib/kubelet/device-plugins/ map[] {0 0} <nil> {{} [0 0 0]} 0x1b1bbe0 0x6e97c50 0x1b1c4b0 map[] map[] map[] map[] map[] 0xc00061cde0 [0] 0x6e97c50}\n": template: Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: I0207 16:53:07.529818   21449 fake_topology_manager.go:39] [fake topologymanager] AddHintProvider HintProvider:  &{kubelet.sock /var/lib/kubelet/device-plugins/ map[] {0 0} <nil> {{} [0 0 0]} 0x1b1bbe0 0x6e97c50 0x1b1c4b0 map[] map[] map[] map[] map[] 0xc00061cde0 [0] 0x6e97c50}
:1: unexpected "}" in command - returning raw string.
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: I0207 16:53:07.529818   21449 fake_topology_manager.go:39] [fake topologymanager] AddHintProvider HintProvider:  &{kubelet.sock /var/lib/kubelet/device-plugins/ map[] {0 0} <nil> {{} [0 0 0]} 0x1b1bbe0 0x6e97c50 0x1b1c4b0 map[] map[] map[] map[] map[] 0xc00061cde0 [0] 0x6e97c50}
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: I0207 16:53:07.529863   21449 state_mem.go:36] [cpumanager] initializing new in-memory state store
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: I0207 16:53:07.529983   21449 state_mem.go:84] [cpumanager] updated default cpuset: ""
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: I0207 16:53:07.529992   21449 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
E0207 16:53:07.620710   21406 style.go:172] unable to parse "Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: I0207 16:53:07.530010   21449 fake_topology_manager.go:39] [fake topologymanager] AddHintProvider HintProvider:  &{{0 0} 0x6e97c50 10000000000 0xc0005f8d20 <nil> <nil> <nil> <nil> map[memory:{{104857600 0} {<nil>}  BinarySI}] 0x6e97c50}\n": template: Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: I0207 16:53:07.530010   21449 fake_topology_manager.go:39] [fake topologymanager] AddHintProvider HintProvider:  &{{0 0} 0x6e97c50 10000000000 0xc0005f8d20 <nil> <nil> <nil> <nil> map[memory:{{104857600 0} {<nil>}  BinarySI}] 0x6e97c50}
:1: unexpected "}" in operand - returning raw string.
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: I0207 16:53:07.530010   21449 fake_topology_manager.go:39] [fake topologymanager] AddHintProvider HintProvider:  &{{0 0} 0x6e97c50 10000000000 0xc0005f8d20 <nil> <nil> <nil> <nil> map[memory:{{104857600 0} {<nil>}  BinarySI}] 0x6e97c50}
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: I0207 16:53:07.530080   21449 kubelet.go:286] Adding pod path: /etc/kubernetes/manifests
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: I0207 16:53:07.530109   21449 kubelet.go:311] Watching apiserver
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: I0207 16:53:07.539757   21449 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: I0207 16:53:07.539782   21449 client.go:104] Start docker client with request timeout=2m0s
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: E0207 16:53:07.550722   21449 reflector.go:153] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes.2020-02-07-k8s-user-test.local&limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: W0207 16:53:07.559594   21449 docker_service.go:563] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: I0207 16:53:07.559620   21449 docker_service.go:240] Hairpin mode set to "hairpin-veth"
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: W0207 16:53:07.559757   21449 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: E0207 16:53:07.562770   21449 reflector.go:153] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dkubernetes.2020-02-07-k8s-user-test.local&limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: E0207 16:53:07.562860   21449 reflector.go:153] k8s.io/kubernetes/pkg/kubelet/kubelet.go:449: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: W0207 16:53:07.563770   21449 hostport_manager.go:69] The binary conntrack is not installed, this can cause failures in network connection cleanup.
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: W0207 16:53:07.563790   21449 hostport_manager.go:69] The binary conntrack is not installed, this can cause failures in network connection cleanup.
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: I0207 16:53:07.565116   21449 docker_service.go:255] Docker cri networking managed by kubernetes.io/no-op
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: I0207 16:53:07.576484   21449 docker_service.go:260] Docker Info: &{ID:75Z5:TBOJ:QJLT:XNW6:TGM5:YZ6K:B5OT:5STN:4ROQ:BTMN:OJYF:HFIN Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:13 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:22 OomKillDisable:true NGoroutines:35 SystemTime:2020-02-07T16:53:07.566491697Z LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:3.10.0-957.1.3.el7.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0007f8070 NCPU:2 MemTotal:8199827456 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:kubernetes.2020-02-07-k8s-user-test.local Labels:[] ExperimentalBuild:false ServerVersion:19.03.5 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b34a5c8af56e510852c35414db4c1f4fa6172339 Expected:b34a5c8af56e510852c35414db4c1f4fa6172339} RuncCommit:{ID:3e425f80a8c931f88e6d94a8c831b9d5aa481657 Expected:3e425f80a8c931f88e6d94a8c831b9d5aa481657} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[]}
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local kubelet[21449]: F0207 16:53:07.576611   21449 server.go:273] failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "'systemd'" is different from docker cgroup driver: "systemd"
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local systemd[1]: Unit kubelet.service entered failed state.
Feb 07 16:53:07 kubernetes.2020-02-07-k8s-user-test.local systemd[1]: kubelet.service failed.

Main problem:

Feb 07 16:55:31 kubernetes.2020-02-07-k8s-user-test.local kubelet[780]: F0207 16:55:31.497042     780 server.go:273] failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "'systemd'" is different from docker cgroup driver: "systemd"
$ docker info --format '{{.CgroupDriver}}'
systemd

$ cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.17.2/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver='systemd' --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-domain=cluster.local --config=/var/lib/kubelet/config.yaml --container-runtime=docker --fail-swap-on=false --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.101.58 --pod-manifest-path=/etc/kubernetes/manifests

[Install]

There are single quotes around the --cgroup-driver option value.
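
For illustration only (a standalone snippet, not minikube code): the literal quote characters become part of the flag value, so the kubelet's string comparison against Docker's reported driver fails even though both read "systemd".

package main

import "fmt"

func main() {
	kubeletDriver := `'systemd'` // the quotes from the unit file end up inside the value
	dockerDriver := "systemd"    // what `docker info --format '{{.CgroupDriver}}'` reports
	fmt.Println(kubeletDriver == dockerDriver) // prints false, so the kubelet refuses to start
}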

The operating system version:

CentOS Linux release 7.6.1810 (Core)

@zigarn changed the title from "Kubelet doesn't start because of simple quotes around de cgroup-driver option" to "Kubelet: misconfiguration: kubelet cgroup driver: "'systemd'" is different from docker cgroup driver: "systemd"" Feb 7, 2020
afbjorklund (Collaborator) commented Feb 7, 2020

I think this is a generic bug with the docker --format handling in the code I copied this from.

🐳  Preparing Kubernetes v1.17.2 on Docker '19.03.5' ...
ExecStart=/var/lib/minikube/binaries/v1.17.2/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver='systemd' --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-domain=cluster.local --config=/var/lib/kubelet/config.yaml --container-runtime=docker --fail-swap-on=false --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.101.58 --pod-manifest-path=/etc/kubernetes/manifests

We don't need the extra quotes when running through exec.Command; they are only needed when invoking from a shell:

	c := exec.Command("docker", "version", "--format", "'{{.Server.Version}}'")
	c := exec.Command("docker", "info", "--format", "'{{.CgroupDriver}}'")

Unfortunately we don't have any CentOS test machines yet (#3552).

@afbjorklund added the kind/bug, co/none-driver, and co/runtime/docker labels Feb 7, 2020
khassel commented Feb 7, 2020

see kubernetes/kubernetes#87918

afbjorklund (Collaborator) commented Feb 7, 2020

Unfortunately the unit test was doctored to match the expected output, not reality.

Slightly odd that nobody has complained about the extra quotes in the version...

afbjorklund (Collaborator) commented

Note that Kubernetes (kubeadm, really) totally ignores all driver values we pass in for Docker...

		driver, err := kubeadmutil.GetCgroupDriverDocker(opts.execer)
		if err != nil {
			klog.Warningf("cannot automatically assign a '--cgroup-driver' value when starting the Kubelet: %v\n", err)
		} else {
			kubeletFlags["cgroup-driver"] = driver
		}

And it then fails to detect the default driver for any other container runtime, even though the documentation says so.

// TODO: add support for detecting the cgroup driver for CRI other than
// Docker. Currently only Docker driver detection is supported:
// Discussion:
//     https://github.com/kubernetes/kubeadm/issues/844

Since v1.14.0, kubeadm will try to automatically detect the container runtime on Linux nodes by scanning through a list of well known domain sockets.


So even when we pass in a totally bogus value for docker, we still get a valid kubelet config...

ExecStart=/var/lib/minikube/binaries/v1.17.2/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver='cgroupfs' --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-domain=cluster.local --config=/var/lib/kubelet/config.yaml --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.99.100 --pod-manifest-path=/etc/kubernetes/manifests

root 4475 4.0 4.6 1345528 92572 ? Ssl 21:32 0:30 /var/lib/minikube/binaries/v1.17.2/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-domain=cluster.local --config=/var/lib/kubelet/config.yaml --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.99.100 --pod-manifest-path=/etc/kubernetes/manifests

There are too many Docker-specific hacks still present; it would be much better if it started to use CRI.

afbjorklund (Collaborator) commented

@zigarn: Thanks for reporting, hopefully this is fixed in v1.7.2?

zigarn (Contributor, Author) commented Feb 9, 2020

I confirm that the problem is solved in v1.7.2.

Thanks everyone for the great responsiveness and the quick fix release!
