initial timeout of 40s passed: 'minikube' resolves to: 92.242.140.2 (unallocated.barefruit.co.uk) #9051

Closed
moonlight16 opened this issue Aug 21, 2020 · 4 comments · Fixed by #9029

@moonlight16

This feels like a fairly fundamental issue. I'm unable to start a minikube cluster from my MacBook using the default Docker driver. I'd rather use Docker than a VM. Docker is set up with 4 CPUs and 8 GB of RAM.

$ minikube start
😄  minikube v1.12.3 on Darwin 10.15.6
✨  Using the docker driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🤷  docker "minikube" container is missing, will recreate.
🔥  Creating docker container (CPUs=2, Memory=7918MB) ...
🐳  Preparing Kubernetes v1.18.3 on Docker 19.03.8 ...
🤦  Unable to restart cluster, will reset it: getting k8s client: client config: client config: context "minikube" does not exist
💥  initialization failed, will try again: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'


stderr:
W0821 20:32:30.603346    1073 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	[WARNING Swap]: running with swap on is not supported. Please disable swap
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[WARNING DirAvailable--var-lib-minikube-etcd]: /var/lib/minikube/etcd is not empty
W0821 20:32:32.227248    1073 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0821 20:32:32.230995    1073 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
$ docker logs minikube
+ select_iptables
+ local mode=nft
++ grep '^-'
++ wc -l
++ true
+ num_legacy_lines=0
+ '[' 0 -ge 10 ']'
++ grep '^-'
++ wc -l
++ true
+ num_nft_lines=0
+ '[' 0 -ge 0 ']'
+ mode=legacy
+ echo 'INFO: setting iptables to detected mode: legacy'
INFO: setting iptables to detected mode: legacy
+ update-alternatives --set iptables /usr/sbin/iptables-legacy
+ echo 'retryable update-alternatives: --set iptables /usr/sbin/iptables-legacy'
+ local 'args=--set iptables /usr/sbin/iptables-legacy'
++ seq 0 15
+ for i in $(seq 0 15)
+ /usr/bin/update-alternatives --set iptables /usr/sbin/iptables-legacy
+ return
+ update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
+ echo 'retryable update-alternatives: --set ip6tables /usr/sbin/ip6tables-legacy'
+ local 'args=--set ip6tables /usr/sbin/ip6tables-legacy'
++ seq 0 15
+ for i in $(seq 0 15)
+ /usr/bin/update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
+ return
+ fix_kmsg
+ [[ ! -e /dev/kmsg ]]
+ fix_mount
+ echo 'INFO: ensuring we can execute mount/umount even with userns-remap'
INFO: ensuring we can execute mount/umount even with userns-remap
++ which mount
++ which umount
+ chown root:root /usr/bin/mount /usr/bin/umount
++ which mount
++ which umount
+ chmod -s /usr/bin/mount /usr/bin/umount
++ stat -f -c %T /bin/mount
+ [[ overlayfs == \a\u\f\s ]]
+ echo 'INFO: remounting /sys read-only'
INFO: remounting /sys read-only
+ mount -o remount,ro /sys
+ echo 'INFO: making mounts shared'
INFO: making mounts shared
+ mount --make-rshared /
+ fix_cgroup
+ echo 'INFO: fix cgroup mounts for all subsystems'
INFO: fix cgroup mounts for all subsystems
+ local docker_cgroup_mounts
++ grep docker
++ grep /sys/fs/cgroup /proc/self/mountinfo
+ docker_cgroup_mounts='537 536 0:29 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:138 master:18 - cgroup cpuset rw,cpuset
538 536 0:30 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/cpu rw,nosuid,nodev,noexec,relatime shared:139 master:19 - cgroup cpu rw,cpu
539 536 0:31 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/cpuacct rw,nosuid,nodev,noexec,relatime shared:140 master:20 - cgroup cpuacct rw,cpuacct
540 536 0:32 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:141 master:21 - cgroup blkio rw,blkio
541 536 0:33 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:142 master:22 - cgroup memory rw,memory
542 536 0:34 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:143 master:23 - cgroup devices rw,devices
543 536 0:35 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:144 master:24 - cgroup freezer rw,freezer
544 536 0:36 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/net_cls rw,nosuid,nodev,noexec,relatime shared:145 master:25 - cgroup net_cls rw,net_cls
545 536 0:37 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:146 master:26 - cgroup perf_event rw,perf_event
546 536 0:38 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/net_prio rw,nosuid,nodev,noexec,relatime shared:147 master:27 - cgroup net_prio rw,net_prio
547 536 0:39 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:148 master:28 - cgroup hugetlb rw,hugetlb
548 536 0:40 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:149 master:29 - cgroup pids rw,pids
550 536 0:42 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:151 master:31 - cgroup cgroup rw,name=systemd'
+ [[ -n 537 536 0:29 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:138 master:18 - cgroup cpuset rw,cpuset
538 536 0:30 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/cpu rw,nosuid,nodev,noexec,relatime shared:139 master:19 - cgroup cpu rw,cpu
539 536 0:31 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/cpuacct rw,nosuid,nodev,noexec,relatime shared:140 master:20 - cgroup cpuacct rw,cpuacct
540 536 0:32 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:141 master:21 - cgroup blkio rw,blkio
541 536 0:33 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:142 master:22 - cgroup memory rw,memory
542 536 0:34 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:143 master:23 - cgroup devices rw,devices
543 536 0:35 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:144 master:24 - cgroup freezer rw,freezer
544 536 0:36 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/net_cls rw,nosuid,nodev,noexec,relatime shared:145 master:25 - cgroup net_cls rw,net_cls
545 536 0:37 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:146 master:26 - cgroup perf_event rw,perf_event
546 536 0:38 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/net_prio rw,nosuid,nodev,noexec,relatime shared:147 master:27 - cgroup net_prio rw,net_prio
547 536 0:39 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:148 master:28 - cgroup hugetlb rw,hugetlb
548 536 0:40 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:149 master:29 - cgroup pids rw,pids
550 536 0:42 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:151 master:31 - cgroup cgroup rw,name=systemd ]]
+ local docker_cgroup cgroup_subsystems subsystem
++ echo '537 536 0:29 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:138 master:18 - cgroup cpuset rw,cpuset
538 536 0:30 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/cpu rw,nosuid,nodev,noexec,relatime shared:139 master:19 - cgroup cpu rw,cpu
539 536 0:31 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/cpuacct rw,nosuid,nodev,noexec,relatime shared:140 master:20 - cgroup cpuacct rw,cpuacct
540 536 0:32 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:141 master:21 - cgroup blkio rw,blkio
541 536 0:33 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:142 master:22 - cgroup memory rw,memory
542 536 0:34 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:143 master:23 - cgroup devices rw,devices
543 536 0:35 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:144 master:24 - cgroup freezer rw,freezer
544 536 0:36 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/net_cls rw,nosuid,nodev,noexec,relatime shared:145 master:25 - cgroup net_cls rw,net_cls
545 536 0:37 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:146 master:26 - cgroup perf_event rw,perf_event
546 536 0:38 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/net_prio rw,nosuid,nodev,noexec,relatime shared:147 master:27 - cgroup net_prio rw,net_prio
547 536 0:39 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:148 master:28 - cgroup hugetlb rw,hugetlb
548 536 0:40 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:149 master:29 - cgroup pids rw,pids
550 536 0:42 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:151 master:31 - cgroup cgroup rw,name=systemd'
++ cut '-d ' -f 4
++ head -n 1
+ docker_cgroup=/docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef
++ echo '537 536 0:29 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:138 master:18 - cgroup cpuset rw,cpuset
538 536 0:30 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/cpu rw,nosuid,nodev,noexec,relatime shared:139 master:19 - cgroup cpu rw,cpu
539 536 0:31 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/cpuacct rw,nosuid,nodev,noexec,relatime shared:140 master:20 - cgroup cpuacct rw,cpuacct
540 536 0:32 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:141 master:21 - cgroup blkio rw,blkio
541 536 0:33 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:142 master:22 - cgroup memory rw,memory
542 536 0:34 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:143 master:23 - cgroup devices rw,devices
543 536 0:35 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:144 master:24 - cgroup freezer rw,freezer
544 536 0:36 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/net_cls rw,nosuid,nodev,noexec,relatime shared:145 master:25 - cgroup net_cls rw,net_cls
545 536 0:37 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:146 master:26 - cgroup perf_event rw,perf_event
546 536 0:38 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/net_prio rw,nosuid,nodev,noexec,relatime shared:147 master:27 - cgroup net_prio rw,net_prio
547 536 0:39 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:148 master:28 - cgroup hugetlb rw,hugetlb
548 536 0:40 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:149 master:29 - cgroup pids rw,pids
550 536 0:42 /docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:151 master:31 - cgroup cgroup rw,name=systemd'
++ cut '-d ' -f 5
+ cgroup_subsystems='/sys/fs/cgroup/cpuset
/sys/fs/cgroup/cpu
/sys/fs/cgroup/cpuacct
/sys/fs/cgroup/blkio
/sys/fs/cgroup/memory
/sys/fs/cgroup/devices
/sys/fs/cgroup/freezer
/sys/fs/cgroup/net_cls
/sys/fs/cgroup/perf_event
/sys/fs/cgroup/net_prio
/sys/fs/cgroup/hugetlb
/sys/fs/cgroup/pids
/sys/fs/cgroup/systemd'
+ IFS=
+ read -r subsystem
+ echo '/sys/fs/cgroup/cpuset
/sys/fs/cgroup/cpu
/sys/fs/cgroup/cpuacct
/sys/fs/cgroup/blkio
/sys/fs/cgroup/memory
/sys/fs/cgroup/devices
/sys/fs/cgroup/freezer
/sys/fs/cgroup/net_cls
/sys/fs/cgroup/perf_event
/sys/fs/cgroup/net_prio
/sys/fs/cgroup/hugetlb
/sys/fs/cgroup/pids
/sys/fs/cgroup/systemd'
+ mkdir -p /sys/fs/cgroup/cpuset/docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef
+ mount --bind /sys/fs/cgroup/cpuset /sys/fs/cgroup/cpuset/docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef
+ IFS=
+ read -r subsystem
+ mkdir -p /sys/fs/cgroup/cpu/docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef
+ mount --bind /sys/fs/cgroup/cpu /sys/fs/cgroup/cpu/docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef
+ IFS=
+ read -r subsystem
+ mkdir -p /sys/fs/cgroup/cpuacct/docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef
+ mount --bind /sys/fs/cgroup/cpuacct /sys/fs/cgroup/cpuacct/docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef
+ IFS=
+ read -r subsystem
+ mkdir -p /sys/fs/cgroup/blkio/docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef
+ mount --bind /sys/fs/cgroup/blkio /sys/fs/cgroup/blkio/docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef
+ IFS=
+ read -r subsystem
+ mkdir -p /sys/fs/cgroup/memory/docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef
+ mount --bind /sys/fs/cgroup/memory /sys/fs/cgroup/memory/docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef
+ IFS=
+ read -r subsystem
+ mkdir -p /sys/fs/cgroup/devices/docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef
+ mount --bind /sys/fs/cgroup/devices /sys/fs/cgroup/devices/docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef
+ IFS=
+ read -r subsystem
+ mkdir -p /sys/fs/cgroup/freezer/docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef
+ mount --bind /sys/fs/cgroup/freezer /sys/fs/cgroup/freezer/docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef
+ IFS=
+ read -r subsystem
+ mkdir -p /sys/fs/cgroup/net_cls/docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef
+ mount --bind /sys/fs/cgroup/net_cls /sys/fs/cgroup/net_cls/docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef
+ IFS=
+ read -r subsystem
+ mkdir -p /sys/fs/cgroup/perf_event/docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef
+ mount --bind /sys/fs/cgroup/perf_event /sys/fs/cgroup/perf_event/docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef
+ IFS=
+ read -r subsystem
+ mkdir -p /sys/fs/cgroup/net_prio/docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef
+ mount --bind /sys/fs/cgroup/net_prio /sys/fs/cgroup/net_prio/docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef
+ IFS=
+ read -r subsystem
+ mkdir -p /sys/fs/cgroup/hugetlb/docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef
+ mount --bind /sys/fs/cgroup/hugetlb /sys/fs/cgroup/hugetlb/docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef
+ IFS=
+ read -r subsystem
+ mkdir -p /sys/fs/cgroup/pids/docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef
+ mount --bind /sys/fs/cgroup/pids /sys/fs/cgroup/pids/docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef
+ IFS=
+ read -r subsystem
+ mkdir -p /sys/fs/cgroup/systemd/docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef
+ mount --bind /sys/fs/cgroup/systemd /sys/fs/cgroup/systemd/docker/5193bb0acc53d4c787f99d2b6462dc697631a0a25983b5e7c6a7cf8a41fa7aef
+ IFS=
+ read -r subsystem
+ local podman_cgroup_mounts
++ grep /sys/fs/cgroup /proc/self/mountinfo
++ grep libpod_parent
++ true
+ podman_cgroup_mounts=
+ [[ -n '' ]]
+ fix_machine_id
+ echo 'INFO: clearing and regenerating /etc/machine-id'
INFO: clearing and regenerating /etc/machine-id
+ rm -f /etc/machine-id
+ systemd-machine-id-setup
Initializing machine ID from random generator.
+ fix_product_name
+ [[ -f /sys/class/dmi/id/product_name ]]
+ echo 'INFO: faking /sys/class/dmi/id/product_name to be "kind"'
INFO: faking /sys/class/dmi/id/product_name to be "kind"
+ echo kind
+ mount -o ro,bind /kind/product_name /sys/class/dmi/id/product_name
+ fix_product_uuid
+ [[ ! -f /kind/product_uuid ]]
+ cat /proc/sys/kernel/random/uuid
+ [[ -f /sys/class/dmi/id/product_uuid ]]
+ echo 'INFO: faking /sys/class/dmi/id/product_uuid to be random'
INFO: faking /sys/class/dmi/id/product_uuid to be random
+ mount -o ro,bind /kind/product_uuid /sys/class/dmi/id/product_uuid
+ [[ -f /sys/devices/virtual/dmi/id/product_uuid ]]
+ echo 'INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well'
INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well
+ mount -o ro,bind /kind/product_uuid /sys/devices/virtual/dmi/id/product_uuid
+ configure_proxy
+ mkdir -p /etc/systemd/system.conf.d/
+ cat
+ enable_network_magic
+ local docker_embedded_dns_ip=127.0.0.11
+ local docker_host_ip
++ getent ahostsv4 host.docker.internal
++ head -n1
++ cut '-d ' -f1
+ docker_host_ip=192.168.65.2
+ [[ -z 192.168.65.2 ]]
+ iptables-restore
+ iptables-save
+ sed -e 's/-d 127.0.0.11/-d 192.168.65.2/g' -e 's/-A OUTPUT \(.*\) -j DOCKER_OUTPUT/\0\n-A PREROUTING \1 -j DOCKER_OUTPUT/' -e 's/--to-source :53/--to-source 192.168.65.2:53/g'
+ cp /etc/resolv.conf /etc/resolv.conf.original
+ sed -e s/127.0.0.11/192.168.65.2/g /etc/resolv.conf.original
++ head -n1
+++ hostname
++ cut '-d ' -f1
++ getent ahostsv4 minikube
+ curr_ipv4=92.242.140.2
+ echo 'INFO: Detected IPv4 address: 92.242.140.2'
INFO: Detected IPv4 address: 92.242.140.2
+ '[' -f /kind/old-ipv4 ']'
+ [[ -n 92.242.140.2 ]]
+ echo -n 92.242.140.2
++ cut '-d ' -f1
++ head -n1
+++ hostname
++ getent ahostsv6 minikube
++ true
+ curr_ipv6=
+ echo 'INFO: Detected IPv6 address: '
INFO: Detected IPv6 address:
+ '[' -f /kind/old-ipv6 ']'
+ [[ -n '' ]]
+ exec /sbin/init
Failed to find module 'autofs4'
systemd 245.4-4ubuntu3 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid)
Detected virtualization docker.
Detected architecture x86-64.
Failed to create symlink /sys/fs/cgroup/cpuacct: File exists
Failed to create symlink /sys/fs/cgroup/cpu: File exists
Failed to create symlink /sys/fs/cgroup/net_prio: File exists
Failed to create symlink /sys/fs/cgroup/net_cls: File exists

Welcome to Ubuntu 20.04 LTS!

Set hostname to <minikube>.
/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
[  OK  ] Started Dispatch Password …ts to Console Directory Watch.
[UNSUPP] Starting of Arbitrary Exec…Automount Point not supported.
[  OK  ] Reached target Local Encrypted Volumes.
[  OK  ] Reached target Network is Online.
[  OK  ] Reached target Paths.
[  OK  ] Reached target Slices.
[  OK  ] Reached target Swap.
[  OK  ] Listening on Journal Audit Socket.
[  OK  ] Listening on Journal Socket (/dev/log).
[  OK  ] Listening on Journal Socket.
         Mounting Huge Pages File System...
         Mounting Kernel Debug File System...
         Mounting Kernel Trace File System...
         Starting Journal Service...
         Starting Create list of st…odes for the current kernel...
         Mounting FUSE Control File System...
         Starting Remount Root and Kernel File Systems...
         Starting Apply Kernel Variables...
[  OK  ] Mounted Huge Pages File System.
[  OK  ] Mounted Kernel Debug File System.
[  OK  ] Mounted Kernel Trace File System.
[  OK  ] Finished Create list of st… nodes for the current kernel.
[  OK  ] Finished Remount Root and Kernel File Systems.
         Starting Create System Users...
         Starting Update UTMP about System Boot/Shutdown...
[  OK  ] Mounted FUSE Control File System.
[  OK  ] Finished Apply Kernel Variables.
[  OK  ] Finished Update UTMP about System Boot/Shutdown.
[  OK  ] Finished Create System Users.
         Starting Create Static Device Nodes in /dev...
[  OK  ] Finished Create Static Device Nodes in /dev.
[  OK  ] Reached target Local File Systems (Pre).
[  OK  ] Reached target Local File Systems.
[  OK  ] Started Journal Service.
[  OK  ] Reached target System Initialization.
[  OK  ] Started Daily Cleanup of Temporary Directories.
[  OK  ] Reached target Timers.
         Starting Docker Socket for the API.
         Starting Flush Journal to Persistent Storage...
[  OK  ] Listening on Docker Socket for the API.
[  OK  ] Reached target Sockets.
[  OK  ] Reached target Basic System.
         Starting containerd container runtime...
         Starting minikube automount...
         Starting OpenBSD Secure Shell server...
[  OK  ] Finished Flush Journal to Persistent Storage.
[  OK  ] Started containerd container runtime.
[  OK  ] Started OpenBSD Secure Shell server.
[  OK  ] Finished minikube automount.
         Starting Docker Application Container Engine...
[  OK  ] Started Docker Application Container Engine.
[  OK  ] Reached target Multi-User System.
[  OK  ] Reached target Graphical Interface.
         Starting Update UTMP about System Runlevel Changes...
[  OK  ] Finished Update UTMP about System Runlevel Changes.

I reran with --logtostderr and it's a huge amount of output; I'm hoping this is the relevant part. The blank lines are where it hung for a long time.

I0821 13:38:03.078856   49103 ssh_runner.go:215] scp memory --> /var/lib/minikube/kubeconfig (392 bytes)
I0821 13:38:03.101504   49103 ssh_runner.go:148] Run: openssl version
I0821 13:38:03.107397   49103 ssh_runner.go:148] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0821 13:38:03.117039   49103 ssh_runner.go:148] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0821 13:38:03.121242   49103 certs.go:389] hashing: -rw-r--r-- 1 root root 1066 Aug 21 17:51 /usr/share/ca-certificates/minikubeCA.pem
I0821 13:38:03.121302   49103 ssh_runner.go:148] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0821 13:38:03.127592   49103 ssh_runner.go:148] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0821 13:38:03.138131   49103 kubeadm.go:327] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.11@sha256:6fee59db7d67ed8ae6835e4bcb02f32056dc95f11cb369c51e352b62dd198aa0 Memory:7918 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.3 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.3 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s}
I0821 13:38:03.138288   49103 ssh_runner.go:148] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0821 13:38:03.177648   49103 ssh_runner.go:148] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0821 13:38:03.188459   49103 ssh_runner.go:148] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0821 13:38:03.198767   49103 kubeadm.go:211] ignoring SystemVerification for kubeadm because of docker driver
I0821 13:38:03.198873   49103 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0821 13:38:03.211119   49103 kubeadm.go:147] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0821 13:38:03.211144   49103 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"




I0821 13:42:27.106050   49103 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": (4m23.907355247s)
W0821 13:42:27.106196   49103 out.go:151] 💥  initialization failed, will try again: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [172.17.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [172.17.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
@tstromberg
Contributor

Your ISP's DNS server is issuing fraudulent DNS replies for minikube, which breaks the IP autodetection mechanism:

++ getent ahostsv4 minikube
+ curr_ipv4=92.242.140.2
+ echo 'INFO: Detected IPv4 address: 92.242.140.2'
INFO: Detected IPv4 address: 92.242.140.2

This IP resolves to unallocated.barefruit.co.uk. You can read more about it here: https://forums.verizon.com/t5/Fios-Internet/FIOS-DNS-Hack-Directed-to-unallocated-barefruit-co-uk92-242-140/td-p/723697

My suggestion is to change your host's DNS to a more trustworthy service, such as 8.8.8.8. If you are on Verizon, you can also follow their instructions: https://www.verizon.com/support/residential/internet/home-network/settings/opt-out-of-dns-assist
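
A quick way to check for this kind of NXDOMAIN rewriting yourself (a minimal manual check, assuming dig is available, as it is on macOS; the hijacked IP will vary by ISP):

$ dig +short minikube
$ dig +short does-not-exist.invalid
# Both names should return nothing (NXDOMAIN). If either prints an address
# such as 92.242.140.2, the resolver is rewriting NXDOMAIN responses, and
# the IP autodetection inside the container will pick up the bogus IP.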

@tstromberg changed the title from "Minikube fails to start on Docker, initial timeout of 40s passed" to "initial timeout of 40s passed: 'minikube' resolves to: 92.242.140.2 (unallocated.barefruit.co.uk)" Aug 21, 2020
@moonlight16
Author

Oh wow. Apparently my router was set up to get DNS servers from the ISP (which is not Verizon!). I switched it to Google's DNS servers (8.8.8.8 and 8.8.4.4) and now minikube comes up without issue. This came as a surprise. Thanks!
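
To double-check that the new resolvers are actually in effect on macOS (a small sanity check, not part of the original report):

$ scutil --dns | grep 'nameserver\['   # should now list 8.8.8.8 / 8.8.4.4
$ dig +short minikube                  # should print nothing, not 92.242.140.2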

@medyagh reopened this Aug 21, 2020
@medyagh
Member

medyagh commented Aug 21, 2020

@moonlight16 thanks for confirming this. We at minikube should still do better here: detect when the user is behind a fraudulent, sneaky DNS server and avoid using it.
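
A minimal sketch of what such a detection could look like (hypothetical; not necessarily what the fix in #9029 implements): probe a random name under the reserved .invalid TLD, which must never resolve, and warn if it gets an answer.

#!/bin/bash
# Hypothetical NXDOMAIN-hijack probe. A random label under .invalid
# (reserved by RFC 2606) can never legitimately resolve, so any answer
# means the resolver rewrites NXDOMAIN responses.
probe="minikube-dns-probe-$RANDOM.invalid"
answer=$(getent ahostsv4 "$probe" | head -n1 | cut -d' ' -f1)
if [ -n "$answer" ]; then
  echo "WARNING: '$probe' resolved to $answer; your DNS server appears to hijack unresolvable names."
fi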

@keivanipchihagh

Oh wow. Apparently my router was set up to get DNS servers from the ISP (which is not Verizon!). I switched it to Google's DNS servers (8.8.8.8 and 8.8.4.4) and now minikube comes up without issue. This came as a surprise. Thanks!

I did change the DNS servers, but I am still getting the same error. Can you explain more about the process you went through to fix this?
