kubeadm doesn't start due to /var/lib/kubelet #37063


Closed
bronger opened this issue Nov 18, 2016 · 28 comments

@bronger commented Nov 18, 2016

Amongst the pre-flight checks of kubeadm is a check that /var/lib/kubelet does not exist. However, when I follow the instructions on http://kubernetes.io/docs/getting-started-guides/kubeadm/ for CentOS 7, using RHEL 7, right after the commands

# setenforce 0
# yum install -y docker kubelet kubeadm kubectl kubernetes-cni
# systemctl enable docker && systemctl start docker
# systemctl enable kubelet && systemctl start kubelet

the directory /var/lib/kubelet comes into existence. This does not happen every time; in fact it is quite rare, so it is probably due to a race.

@errordeveloper (Member)

cc @dgoodwin

@dgoodwin (Contributor)

Did you install kubelet and kubeadm via the official CentOS 7 rpms? The packaging carries an important systemd drop-in that configures kubelet to crash-loop until its configuration is written. If this is not in place, kubelet starts, creates /var/lib/kubelet, and isn't running in the correct mode for kubeadm. First guess is that this is perhaps a manually set up kubeadm environment?

Come to think of it, we should add a pre-flight check that kubelet is not fully running, or that the systemd drop-in exists.
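
For illustration, here is a minimal sketch of such a drop-in, reconstructed from the ExecStart line visible in the systemctl status output later in this thread; treat the exact paths and flags as illustrative rather than the literal packaged file:

# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (sketch)
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_EXTRA_ARGS

Until kubeadm init writes /etc/kubernetes/kubelet.conf, kubelet exits immediately ("invalid kubeconfig: stat /etc/kubernetes/kubelet.conf: no such file or directory") and systemd keeps restarting it; that is the intended crash loop.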

@bronger (Author) commented Nov 23, 2016

I followed http://kubernetes.io/docs/getting-started-guides/kubeadm/:

# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
# setenforce 0
# yum install -y docker kubelet kubeadm kubectl kubernetes-cni
# systemctl enable docker && systemctl start docker
# systemctl enable kubelet && systemctl start kubelet

… after a fresh RHEL7 install.

@dgoodwin (Contributor)

OK, I will try to reproduce ASAP; if possible, I would love to see the output of "systemctl status kubelet".

@dgoodwin (Contributor)

@bronger can you confirm your versions of kubeadm/kubelet? I am unable to reproduce. Well, /var/lib/kubelet exists as soon as the rpms are installed, but this is OK; it just has to be empty. The pre-flight checks should be fine with this.

(root@centos1 ~) $ rpm -qa | grep kube
kubelet-1.4.4-1.x86_64
kubeadm-1.5.0-1.alpha.2.380.85fe0f1aadf91e.0.x86_64
kubectl-1.4.4-1.x86_64
kubernetes-cni-0.3.0.1-1.07a8a2.x86_64
(root@centos1 ~) $ ll /var/lib/kubelet
total 0

Pre-flight checks pass on kubeadm init, as well as join.

I will need more info here; please show the output from the above commands, the exact error you got and the command you ran to get it, the contents of /var/lib/kubelet, and systemctl status kubelet.

@bronger (Author) commented Nov 24, 2016

I can check in 12h.

@bronger (Author) commented Nov 25, 2016

The following is in /var/lib/kubelet: https://gist.github.com/bronger/92d8cf703628c6d1ff9e93aa920515de

The versions:

[root@kubmaster ~]# rpm -qa | grep kube
kubernetes-cni-0.3.0.1-1.07a8a2.x86_64
kubeadm-1.5.0-1.alpha.2.380.85fe0f1aadf91e.0.x86_64
kubelet-1.4.4-1.x86_64
kubectl-1.4.4-1.x86_64

@lesaux commented Nov 25, 2016

I am seeing the same thing on Ubuntu Xenial.

I am deploying my kube cluster with Terraform and seeing this behaviour once in a while. The outcome is not consistent.

kubernetes-cni_0.3.0.1-07a8a2-00_amd64.deb
kubeadm_1.5.0-alpha.2-421-a6bea3d79b8bba-00_amd64.deb
kubectl_1.4.4-00_amd64.deb
kubelet_1.4.4-01_amd64.deb

For now my workaround is to run

kubeadm reset
systemctl start kubelet.service

before running kubeadm init or kubeadm join.

@dgoodwin (Contributor)

@bronger systemctl status kubelet if you get a chance.

From your gist, however, it is very clear that kubeadm init has already been run on this system. Can you confirm whether there were previous or failed attempts at kubeadm init before you ran this, or was the system completely clean? Are you using kubeadm reset between attempts?

@dgoodwin (Contributor)

Additionally, if anyone could capture the output from apt-get install when this happens, it would be helpful. I suspect folks are being bitten by the automatic starting of services when a package is installed via the .debs. The kubeadm deb should drop that systemd drop-in to make kubelet crash-loop and wait for config, but I suspect this is not happening transactionally, and kubelet might in some cases try to start itself before the kubeadm systemd drop-in is written to disk, so kubelet starts normally.

This theory isn't perfect, because in @bronger's output above we can be certain that kubeadm init was literally run (the secrets and such are all present), so hopefully that is just a result of running kubeadm init multiple times. I can't imagine the .debs are automatically running kubeadm init.

So for anyone who sees this, grab the output from your apt-get install command, and list the contents of /var/lib/kubelet when this happens please!

CC @mikedanese @errordeveloper for their experience with the .debs.

@lesaux commented Nov 25, 2016

In my case I am definitely not running any kubeadm commands beforehand.

After the package installation, I have two empty folders, plugins and pods, in /var/lib/kubelet.

I've put all of the Terraform output log in a gist here (sorry for the lengthy log): https://gist.github.com/lesaux/0c480bab173eee71203162a725f90e9d#file-apt-get-install-journalctl-n-200

I am just creating a network, a bastion host, and four instances in OpenStack, using a Xenial cloud image.
The Kubernetes installation starts at line 5017.
The command just adds the repo and installs the 4 kube packages.
Immediately after that, I print the journalctl log to show that kubelet was started.

In this example you can see that kubelet started on all four nodes and thus the /var/lib/kubelet/pods and /var/lib/kubelet/plugins folders were created.

@dgoodwin (Contributor)

Thanks @lesaux

However, kubeadm appears to have already been unpacked and installed earlier in the logs.

This section is pretty interesting:

module.kube-nodes.null_resource.kubernetes_install.1 (remote-exec): Nov 25 18:30:04 kube-node-1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
module.kube-nodes.null_resource.kubernetes_install.1 (remote-exec): Nov 25 18:30:04 kube-node-1 kubelet[6250]: W1125 18:30:04.815974    6250 server.go:383] No API client: no api servers specified
module.kube-nodes.null_resource.kubernetes_install.1 (remote-exec): Nov 25 18:30:04 kube-node-1 kubelet[6250]: I1125 18:30:04.969993    6250 docker.go:375] Connecting to docker on unix:///var/run/docker.sock
module.kube-nodes.null_resource.kubernetes_install.1 (remote-exec): Nov 25 18:30:04 kube-node-1 kubelet[6250]: I1125 18:30:04.970284    6250 docker.go:395] Start docker client with request timeout=2m0s
module.kube-nodes.null_resource.kubernetes_install.1 (remote-exec): Nov 25 18:30:04 kube-node-1 kubelet[6250]: E1125 18:30:04.970572    6250 cni.go:163] error updating cni config: No networks found in /etc/cni/net.d
module.kube-nodes.null_resource.kubernetes_install.1 (remote-exec): Nov 25 [SNIP] 18:30:05 kube-node-1 systemd[1]: Reloading.
module.kube-nodes.null_resource.kubernetes_install.1 (remote-exec): Nov 25 [SNIP] 18:30:05 kube-node-1 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
module.kube-nodes.null_resource.kubernetes_install.1 (remote-exec): Nov 25 18:30:05 kube-node-1 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
module.kube-nodes.null_resource.kubernetes_install.1 (remote-exec): Nov 25 18:30:05 kube-node-1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
module.kube-nodes.null_resource.kubernetes_install.1 (remote-exec): Nov 25 18:30:05 kube-node-1 kubelet[6316]: error: failed to run Kubelet: invalid kubeconfig: stat /etc/kubernetes/kubelet.conf: no such file or directory
module.kube-nodes.null_resource.kubernetes_install.1 (remote-exec): Nov 25 18:30:05 kube-node-1 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
module.kube-nodes.null_resource.kubernetes_install.1 (remote-exec): Nov 25 18:30:05 kube-node-1 systemd[1]: kubelet.service: Unit entered failed state.
module.kube-nodes.null_resource.kubernetes_install.1 (remote-exec): Nov 25 18:30:05 kube-node-1 systemd[1]: kubelet.service: Failed with result 'exit-code'.
module.kube-nodes.null_resource.kubernetes_install.1 (remote-exec): Nov 25 18:30:05 kube-node-1 systemd[1]: Reloading.

Kubelet starts once, fully; then there is a suspicious systemd reload, after which systemd stops kubelet and starts it again, and this time it sees our kubeadm drop-in and goes into the crash loop we expect for kubeadm usage. When the user actually goes to run kubeadm, /var/lib/kubelet isn't empty because kubelet was fully started once already.

Something's not right with the .debs, I think. It appears kubeadm is installed in the correct order, but perhaps the systemd drop-in isn't loaded yet; is there a missing daemon-reload or equivalent in the packaging scripts?
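
If that is the cause, the fix would be something along these lines in the deb maintainer scripts. This is a hypothetical sketch of the idea, not the actual packaging code:

# hypothetical kubeadm.postinst fragment: force systemd to pick up the
# freshly written drop-in before any hook (re)starts kubelet
systemctl daemon-reload
# restart kubelet only if it is already running, so it re-reads the drop-in
systemctl try-restart kubelet.service

That way kubelet could never complete a "normal" start in the window between the deb being unpacked and the drop-in taking effect.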

@bronger (Author) commented Nov 28, 2016

Please don't put any more effort into this.

I still have not found the root cause, but it is not kubeadm. Apparently our VM snapshot (to which we return after every kubeadm run) is unclean, but the final analysis is surprisingly difficult.

@dgoodwin (Contributor)

@bronger OK, good; that explains the existence of the secrets and such. Thanks for the update.

However, there is definitely something up here; it has been reported by multiple people, and I believe it is demonstrated clearly in the gist above.

@bronger (Author) commented Nov 29, 2016

The problem was the following:

# rm -Rf /var/lib/kubelet
rm: cannot remove '/var/lib/kubelet/pods/60f036bc-b608-11e6-92c6-005056be54be/volumes/kubernetes.io~secret/default-token-t74z7': Device or resource busy

We return to the old snapshot with rsync, which probably has the same problem as rm with removing this directory. After a reboot, one can remove /var/lib/kubelet. It is surprising to me that the device is busy, though: no Kubernetes-related process is running, and the kubelet service is no longer enabled when the above error occurs.
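
A plausible explanation (an assumption, not confirmed in this thread): kubelet mounts secret volumes like the one in the error above as tmpfs filesystems under /var/lib/kubelet/pods/<uid>/volumes/kubernetes.io~secret/, and those mounts outlive the kubelet process, so rm fails with "Device or resource busy" on the mount points even though nothing Kubernetes-related is still running. A sketch of a cleanup that should work without a reboot:

# list any leftover mounts below /var/lib/kubelet
mount | grep /var/lib/kubelet
# unmount them all, then the directory can be removed
umount $(mount | grep '/var/lib/kubelet' | awk '{print $3}')
rm -rf /var/lib/kubelet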

@luxas (Member) commented Nov 29, 2016

@mikedanese Please move this issue

@kenzhaoyihui

Hey, I hit the same problem.
I just followed the official Kubernetes guide:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y docker kubelet kubeadm kubectl kubernetes-cni
systemctl enable docker && systemctl start docker
systemctl enable kubelet && systemctl start kubelet

Problem: kubelet.service doesn't start. But when I use the command line "systemctl start kubelet", it doesn't give any warnings or errors:
#######################################################################
[root@master log]# systemctl start kubelet
[root@master log]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: activating (auto-restart) (Result: exit-code) since Thu 2017-01-05 05:00:07 EST; 2s ago
Docs: http://kubernetes.io/docs/
Process: 3356 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
Main PID: 3356 (code=exited, status=1/FAILURE)

Jan 05 05:00:07 master.redhat.com systemd[1]: kubelet.service: main process exite...E
Jan 05 05:00:07 master.redhat.com systemd[1]: Unit kubelet.service entered failed....
Jan 05 05:00:07 master.redhat.com systemd[1]: kubelet.service failed.
Hint: Some lines were ellipsized, use -l to show in full.
[root@master log]#
#########################################################################

I checked the log:
#####################################################################
error: failed to run Kubelet: invalid kubeconfig: stat /etc/kubernetes/kubelet.conf: no such file or directory
Jan 5 05:02:00 master systemd: kubelet.service: main process exited, code=exited, status=1/FAILURE
Jan 5 05:02:00 master systemd: Unit kubelet.service entered failed state.
Jan 5 05:02:00 master systemd: kubelet.service failed.
Jan 5 05:02:10 master systemd: kubelet.service holdoff time over, scheduling restart.
Jan 5 05:02:10 master systemd: Started kubelet: The Kubernetes Node Agent.
Jan 5 05:02:10 master systemd: Starting kubelet: The Kubernetes Node Agent...
Jan 5 05:02:10 master kubelet: I0105 05:02:10.610953 3462 feature_gate.go:181] feature gates: map[]
Jan 5 05:02:10 master kubelet: error: failed to run Kubelet: invalid kubeconfig: stat /etc/kubernetes/kubelet.conf: no such file or directory
Jan 5 05:02:10 master systemd: kubelet.service: main process exited, code=exited, status=1/FAILURE
Jan 5 05:02:10 master systemd: Unit kubelet.service entered failed state.
Jan 5 05:02:10 master systemd: kubelet.service failed.
####################################################################
It says that it needs kubelet.conf in /etc/kubernetes, so does the official guide need additional steps?

Versions, if needed:
System: CentOS 7.0 or Fedora 25
[root@master log]# rpm -qa |grep kube
kubelet-1.5.1-0.x86_64
kubectl-1.5.1-0.x86_64
kubernetes-cni-0.3.0.1-0.07a8a2.x86_64
kubeadm-1.6.0-0.alpha.0.2074.a092d8e0f95f52.x86_64

@luxas (Member) commented Jan 5, 2017

@kenzhaoyihui This is totally by design and isn't related to the issue above.
When you run kubeadm init, kubelet will get the file it is currently failing on and run with it.
You should be good to go; just continue...
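
In other words, the expected sequence looks roughly like this (a sketch; the status strings are abbreviated):

systemctl status kubelet   # activating (auto-restart): crash-looping on the missing kubeconfig
kubeadm init               # writes /etc/kubernetes/kubelet.conf, among other things
systemctl status kubelet   # now active (running)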

@kenzhaoyihui

@luxas Thanks for your explanation. I will try again. That is a strange design; what is its purpose? Could you explain it to me? Thanks in advance!

@alvarolmedo commented Feb 24, 2017

So, is the solution

systemctl enable docker && systemctl start docker
kubeadm init
systemctl enable kubelet && systemctl start kubelet

?
Thanks in advance.
I am testing the kubeadm installation in a Vagrant VM with RHEL7, and when I run "kubeadm init" I get:
[apiclient] Creating a test deployment
[apiclient] Failed to create test deployment [namespaces "kube-system" not found] (will retry)
[apiclient] Failed to create test deployment [namespaces "kube-system" not found] (will retry)
[apiclient] Failed to get test deployment [Get https://10.0.2.15:6443/apis/extensions/v1beta1/namespaces/kube-system/deployments/dummy: stream error: stream ID 1333; INTERNAL_ERROR] (will retry)

@mnnit-geek

I have a similar issue; the kubelet service is exiting for some reason:
~ # ❯❯❯ systemctl status kubelet.service
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: activating (auto-restart) (Result: exit-code) since Thu 2017-04-20 07:30:45 UTC; 5s ago
Docs: http://kubernetes.io/docs/
Process: 18660 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
Main PID: 18660 (code=exited, status=1/FAILURE)

Apr 20 07:30:45 ip-172-23-12-94.ap-south-1.compute.internal systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE
Apr 20 07:30:45 ip-172-23-12-94.ap-south-1.compute.internal systemd[1]: Unit kubelet.service entered failed state.
Apr 20 07:30:45 ip-172-23-12-94.ap-south-1.compute.internal systemd[1]: kubelet.service failed.
~ # ❯❯❯ kubeadm version ⏎
kubeadm version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.1", GitCommit:"b0b7a323cc5a4a2019b2e9520c21c7830b7f708e", GitTreeState:"clean", BuildDate:"2017-04-03T20:33:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
~ # ❯❯❯ kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.1", GitCommit:"b0b7a323cc5a4a2019b2e9520c21c7830b7f708e", GitTreeState:"clean", BuildDate:"2017-04-03T20:44:38Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
~ # ❯❯❯ kubelet version ⏎
I0420 07:34:04.446790 18904 feature_gate.go:144] feature gates: map[]
W0420 07:34:04.446891 18904 server.go:469] No API client: no api servers specified
I0420 07:34:04.446921 18904 docker.go:364] Connecting to docker on unix:///var/run/docker.sock
I0420 07:34:04.446932 18904 docker.go:384] Start docker client with request timeout=2m0s
W0420 07:34:04.477693 18904 cni.go:157] Unable to update cni config: No networks found in /etc/cni/net.d
I0420 07:34:04.483662 18904 manager.go:143] cAdvisor running in container: "/"
W0420 07:34:04.556788 18904 manager.go:151] unable to connect to Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp [::1]:15441: getsockopt: connection refused
I0420 07:34:04.616276 18904 fs.go:117] Filesystem partitions: map[/dev/xvda1:{mountpoint:/ major:202 minor:1 fsType:xfs blockSize:0}]
I0420 07:34:04.619039 18904 manager.go:198] Machine: {NumCores:2 CpuFrequency:2300207 MemoryCapacity:7933222912 MachineID:f32e0af35637b5dfcbedcb0a1de8dca1 SystemUUID:EC27A431-0154-76A3-F6BB-C87E25E07975 BootID:bbc7e50b-6490-4d4b-93e1-adddf749dc4f Filesystems:[{Device:/dev/xvda1 Capacity:107361267712 Type:vfs Inodes:104855168 HasInodes:true}] DiskMap:map[253:0:{Name:dm-0 Major:253 Minor:0 Size:107374182400 Scheduler:none} 202:0:{Name:xvda Major:202 Minor:0 Size:107374182400 Scheduler:deadline}] NetworkDevices:[{Name:eth0 MacAddress:02:45:6c:cb:8a:c5 Speed:0 Mtu:9001}] Topology:[{Id:0 Memory:8589529088 Cores:[{Id:0 Threads:[0 1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]}] Caches:[{Size:47185920 Type:Unified Level:3}]}] CloudProvider:AWS InstanceType:m4.large InstanceID:i-0f02b9e383741ba55}
I0420 07:34:04.647565 18904 manager.go:204] Version: {KernelVersion:3.10.0-514.16.1.el7.x86_64 ContainerOsVersion:CentOS Linux 7 (Core) DockerVersion:1.12.6 CadvisorVersion: CadvisorRevision:}
W0420 07:34:04.648158 18904 server.go:350] No api server defined - no events will be sent to API server.
I0420 07:34:04.648169 18904 server.go:509] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
I0420 07:34:04.649575 18904 container_manager_linux.go:245] container manager verified user specified cgroup-root exists: /
I0420 07:34:04.649616 18904 container_manager_linux.go:250] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs ProtectKernelDefaults:false EnableCRI:true NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:}]} ExperimentalQOSReserved:map[]}
W0420 07:34:04.651413 18904 kubelet_network.go:70] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
I0420 07:34:04.651453 18904 kubelet.go:494] Hairpin mode set to "hairpin-veth"
W0420 07:34:04.653379 18904 cni.go:157] Unable to update cni config: No networks found in /etc/cni/net.d
I0420 07:34:04.659366 18904 docker_service.go:187] Docker cri networking managed by kubernetes.io/no-op
error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
~ # ❯❯❯ docker version ⏎
Client:
Version: 1.12.6
API version: 1.24
Package version: docker-common-1.12.6-16.el7.centos.x86_64
Go version: go1.7.4
Git commit: 3a094bd/1.12.6
Built: Fri Apr 14 13:46:13 2017
OS/Arch: linux/amd64

Server:
Version: 1.12.6
API version: 1.24
Package version: docker-common-1.12.6-16.el7.centos.x86_64
Go version: go1.7.4
Git commit: 3a094bd/1.12.6
Built: Fri Apr 14 13:46:13 2017
OS/Arch: linux/amd64
~ # ❯❯❯ cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

@dgoodwin (Contributor)

@mnnit-geek please see kubernetes/release#306
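
For context, the failure in the log above is the cgroup driver mismatch that issue tracks: kubelet was started with --cgroup-driver=cgroupfs while Docker uses systemd. A sketch of the usual workaround from that era, assuming the 10-kubeadm.conf drop-in layout shown earlier in the thread; treat the exact sed as illustrative:

docker info | grep -i 'cgroup driver'   # e.g. "Cgroup Driver: systemd"
# make kubelet use the same cgroup driver as docker
# (add the flag to the drop-in if it is not already present)
sed -i 's/--cgroup-driver=cgroupfs/--cgroup-driver=systemd/' \
    /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload
systemctl restart kubelet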

@luxas (Member) commented May 30, 2017

I think we can/should move the conversation to kubernetes/kubeadm#262 in case this problem still exists.

@awesomemayank007

Please switch off your swap memory. I faced the same problem, and when I removed the swap entry from my /etc/fstab file, it worked in my case. Also do swapoff.
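
For completeness, a sketch of that workaround:

swapoff -a                       # disable swap immediately
# comment out the swap line in /etc/fstab so it stays off after a reboot
sed -i '/\sswap\s/ s/^/#/' /etc/fstab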

@sachinar

I am also facing the same issue with kubelet on CentOS 7.

@awesomemayank007 I have applied the same settings on my machine, but it doesn't work.

[root@master1 ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: activating (auto-restart) (Result: exit-code) since Thu 2017-11-16 08:29:00 UTC; 3s ago
Docs: http://kubernetes.io/docs/
Process: 1761 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
Main PID: 1761 (code=exited, status=1/FAILURE)

Nov 16 08:29:00 master1.com systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE
Nov 16 08:29:00 master1.com systemd[1]: Unit kubelet.service entered failed state.
Nov 16 08:29:00 master1.com systemd[1]: kubelet.service failed.

@luxas (Member) commented Nov 16, 2017

@sachinar Please don't comment on closed issues; instead, open a new issue in kubernetes/kubeadm with sufficient details.

@arunm8489

# yum install -y kubelet kubeadm kubectl docker
Just try installing docker 1.12; I am not getting a solution with docker 1.17ce (I think it's not supported with kubelet).
Make swap off with # swapoff -a
Now reset kubeadm with # kubeadm reset
Now try # kudeadm init
After that, check # systemctl status kubelet
For me it worked.

@balp0001

Thanks @arunm8489, it works for me as well.
A small correction: kubeadm init (the spelling above is wrong).
