docker: start fails on storage provisioner addon "x509: certificate is valid for 172.17.0.3" #8936

Closed
srilumpa opened this issue Aug 7, 2020 · 6 comments · Fixed by #9294
Labels
kind/bug: Categorizes issue or PR as related to a bug.
priority/important-soon: Must be staffed and worked on either currently, or very soon, ideally in time for the next release.

srilumpa commented Aug 7, 2020

Steps to reproduce the issue:

  1. minikube start
  2. minikube stop
  3. minikube start

In my setup, it seems that minikube's IP address changes between startups. This causes the enabling of some addons to fail because the certificate is seen as invalid.

The only way for me to start minikube again is to either restart my workstation (which does not always fix the issue) or to delete my minikube instance and recreate it.
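
For what it's worth, here is a rough way to confirm the mismatch (a sketch only: the certificate path and the inspect format string are taken from the provisioning log below, not from documentation):

# Current IP of the minikube container on the Docker bridge
docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' minikube

# SANs baked into the apiserver certificate (path as seen in the log below)
minikube ssh -- sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'

# My current workaround, which destroys the cluster:
minikube delete
minikube start

When the bug occurs, the first command prints an IP (here 172.17.0.2) that is absent from the certificate's SAN list (here 172.17.0.3, 10.96.0.1, 127.0.0.1, 10.0.0.1), matching the x509 errors at the end of the log.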

Full output of failed command:

I0807 10:43:57.377118   18062 out.go:191] Setting JSON to false
I0807 10:43:57.400335   18062 start.go:100] hostinfo: {"hostname":"LAPL-MAD-01","uptime":90098,"bootTime":1596699739,"procs":395,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"19.10","kernelVersion":"5.3.0-64-generic","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"bdc6865b-4fdf-43b8-b043-bbae16150497"}
I0807 10:43:57.405974   18062 start.go:110] virtualization: kvm host
😄  minikube v1.12.2 on Ubuntu 19.10
I0807 10:43:57.418027   18062 notify.go:125] Checking for updates...
I0807 10:43:57.418833   18062 driver.go:287] Setting default libvirt URI to qemu:///system
I0807 10:43:57.485269   18062 docker.go:87] docker version: linux-19.03.6
✨  Using the docker driver based on existing profile
I0807 10:43:57.493236   18062 start.go:229] selected driver: docker
I0807 10:43:57.493242   18062 start.go:635] validating driver "docker" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:8192 CPUs:4 DiskSize:40960 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.3 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false dashboard:false default-storageclass:true efk:false freshpod:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false] VerifyComponents:map[apiserver:true system_pods:true]}
I0807 10:43:57.493321   18062 start.go:646] status for docker: {Installed:true Healthy:true NeedsImprovement:false Error:<nil> Fix: Doc:}
I0807 10:43:57.493382   18062 cli_runner.go:109] Run: docker system info --format "{{json .}}"
I0807 10:43:57.576231   18062 start_flags.go:344] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:8192 CPUs:4 DiskSize:40960 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.3 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false dashboard:false default-storageclass:true efk:false freshpod:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false] VerifyComponents:map[apiserver:true system_pods:true]}
👍  Starting control plane node minikube in cluster minikube
I0807 10:43:57.628348   18062 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 in local docker daemon, skipping pull
I0807 10:43:57.628370   18062 cache.go:113] gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 exists in daemon, skipping pull
I0807 10:43:57.628379   18062 preload.go:97] Checking if preload exists for k8s version v1.18.3 and runtime docker
I0807 10:43:57.628410   18062 preload.go:105] Found local preload: /home/mad/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v5-v1.18.3-docker-overlay2-amd64.tar.lz4
I0807 10:43:57.628416   18062 cache.go:51] Caching tarball of preloaded images
I0807 10:43:57.628425   18062 preload.go:131] Found /home/mad/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v5-v1.18.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0807 10:43:57.628430   18062 cache.go:54] Finished verifying existence of preloaded tar for  v1.18.3 on docker
I0807 10:43:57.628529   18062 profile.go:150] Saving config to /home/mad/.minikube/profiles/minikube/config.json ...
I0807 10:43:57.628685   18062 cache.go:181] Successfully downloaded all kic artifacts
I0807 10:43:57.628704   18062 start.go:241] acquiring machines lock for minikube: {Name:mk7fdbf93c5af258f9be2a80affc880ca4e5b3a7 Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0807 10:43:57.628806   18062 start.go:245] acquired machines lock for "minikube" in 83.907µs
I0807 10:43:57.628821   18062 start.go:89] Skipping create...Using existing machine configuration
I0807 10:43:57.628828   18062 fix.go:53] fixHost starting: 
I0807 10:43:57.629094   18062 cli_runner.go:109] Run: docker container inspect minikube --format={{.State.Status}}
I0807 10:43:57.670987   18062 fix.go:105] recreateIfNeeded on minikube: state=Stopped err=<nil>
W0807 10:43:57.671008   18062 fix.go:131] unexpected machine state, will restart: <nil>
🔄  Restarting existing docker container for "minikube" ...
I0807 10:43:57.679741   18062 cli_runner.go:109] Run: docker start minikube
I0807 10:43:58.048843   18062 cli_runner.go:109] Run: docker container inspect minikube --format={{.State.Status}}
I0807 10:43:58.097725   18062 kic.go:330] container "minikube" state is running.
I0807 10:43:58.098036   18062 cli_runner.go:109] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0807 10:43:58.153597   18062 profile.go:150] Saving config to /home/mad/.minikube/profiles/minikube/config.json ...
I0807 10:43:58.153787   18062 machine.go:88] provisioning docker machine ...
I0807 10:43:58.153811   18062 ubuntu.go:166] provisioning hostname "minikube"
I0807 10:43:58.153877   18062 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0807 10:43:58.238255   18062 main.go:115] libmachine: Using SSH client type: native
I0807 10:43:58.238514   18062 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7b9850] 0x7b9820 <nil>  [] 0s} 127.0.0.1 32779 <nil> <nil>}
I0807 10:43:58.238542   18062 main.go:115] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0807 10:43:58.239128   18062 main.go:115] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45166->127.0.0.1:32779: read: connection reset by peer
I0807 10:44:01.422690   18062 main.go:115] libmachine: SSH cmd err, output: <nil>: minikube

I0807 10:44:01.423127   18062 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0807 10:44:01.479333   18062 main.go:115] libmachine: Using SSH client type: native
I0807 10:44:01.479484   18062 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7b9850] 0x7b9820 <nil>  [] 0s} 127.0.0.1 32779 <nil> <nil>}
I0807 10:44:01.479511   18062 main.go:115] libmachine: About to run SSH command:

		if ! grep -xq '.*\sminikube' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
			else 
				echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
			fi
		fi
I0807 10:44:01.612265   18062 main.go:115] libmachine: SSH cmd err, output: <nil>: 
I0807 10:44:01.612471   18062 ubuntu.go:172] set auth options {CertDir:/home/mad/.minikube CaCertPath:/home/mad/.minikube/certs/ca.pem CaPrivateKeyPath:/home/mad/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/mad/.minikube/machines/server.pem ServerKeyPath:/home/mad/.minikube/machines/server-key.pem ClientKeyPath:/home/mad/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/mad/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/mad/.minikube}
I0807 10:44:01.612529   18062 ubuntu.go:174] setting up certificates
I0807 10:44:01.612561   18062 provision.go:82] configureAuth start
I0807 10:44:01.612718   18062 cli_runner.go:109] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0807 10:44:01.678334   18062 provision.go:131] copyHostCerts
I0807 10:44:01.678378   18062 exec_runner.go:91] found /home/mad/.minikube/ca.pem, removing ...
I0807 10:44:01.678431   18062 exec_runner.go:98] cp: /home/mad/.minikube/certs/ca.pem --> /home/mad/.minikube/ca.pem (1029 bytes)
I0807 10:44:01.678499   18062 exec_runner.go:91] found /home/mad/.minikube/cert.pem, removing ...
I0807 10:44:01.678528   18062 exec_runner.go:98] cp: /home/mad/.minikube/certs/cert.pem --> /home/mad/.minikube/cert.pem (1070 bytes)
I0807 10:44:01.678599   18062 exec_runner.go:91] found /home/mad/.minikube/key.pem, removing ...
I0807 10:44:01.678626   18062 exec_runner.go:98] cp: /home/mad/.minikube/certs/key.pem --> /home/mad/.minikube/key.pem (1675 bytes)
I0807 10:44:01.678668   18062 provision.go:105] generating server cert: /home/mad/.minikube/machines/server.pem ca-key=/home/mad/.minikube/certs/ca.pem private-key=/home/mad/.minikube/certs/ca-key.pem org=mad.minikube san=[172.17.0.2 localhost 127.0.0.1 minikube minikube]
I0807 10:44:01.761149   18062 provision.go:159] copyRemoteCerts
I0807 10:44:01.761263   18062 ssh_runner.go:148] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0807 10:44:01.761342   18062 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0807 10:44:01.804057   18062 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/mad/.minikube/machines/minikube/id_rsa Username:docker}
I0807 10:44:01.909113   18062 ssh_runner.go:215] scp /home/mad/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1029 bytes)
I0807 10:44:01.948689   18062 ssh_runner.go:215] scp /home/mad/.minikube/machines/server.pem --> /etc/docker/server.pem (1139 bytes)
I0807 10:44:01.970920   18062 ssh_runner.go:215] scp /home/mad/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0807 10:44:01.990378   18062 provision.go:85] duration metric: configureAuth took 377.79991ms
I0807 10:44:01.990402   18062 ubuntu.go:190] setting minikube options for container-runtime
I0807 10:44:01.990619   18062 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0807 10:44:02.037870   18062 main.go:115] libmachine: Using SSH client type: native
I0807 10:44:02.038037   18062 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7b9850] 0x7b9820 <nil>  [] 0s} 127.0.0.1 32779 <nil> <nil>}
I0807 10:44:02.038052   18062 main.go:115] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0807 10:44:02.163946   18062 main.go:115] libmachine: SSH cmd err, output: <nil>: overlay

I0807 10:44:02.163976   18062 ubuntu.go:71] root file system type: overlay
I0807 10:44:02.164180   18062 provision.go:290] Updating docker unit: /lib/systemd/system/docker.service ...
I0807 10:44:02.164263   18062 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0807 10:44:02.212561   18062 main.go:115] libmachine: Using SSH client type: native
I0807 10:44:02.212710   18062 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7b9850] 0x7b9820 <nil>  [] 0s} 127.0.0.1 32779 <nil> <nil>}
I0807 10:44:02.212788   18062 main.go:115] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0807 10:44:02.393799   18062 main.go:115] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP 

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0807 10:44:02.394241   18062 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0807 10:44:02.458034   18062 main.go:115] libmachine: Using SSH client type: native
I0807 10:44:02.458228   18062 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7b9850] 0x7b9820 <nil>  [] 0s} 127.0.0.1 32779 <nil> <nil>}
I0807 10:44:02.458261   18062 main.go:115] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0807 10:44:02.600183   18062 main.go:115] libmachine: SSH cmd err, output: <nil>: 
I0807 10:44:02.600265   18062 machine.go:91] provisioned docker machine in 4.446442438s
I0807 10:44:02.600289   18062 start.go:204] post-start starting for "minikube" (driver="docker")
I0807 10:44:02.600335   18062 start.go:214] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0807 10:44:02.600558   18062 ssh_runner.go:148] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0807 10:44:02.600738   18062 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0807 10:44:02.670881   18062 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/mad/.minikube/machines/minikube/id_rsa Username:docker}
I0807 10:44:02.764150   18062 ssh_runner.go:148] Run: cat /etc/os-release
I0807 10:44:02.769387   18062 main.go:115] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0807 10:44:02.769427   18062 main.go:115] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0807 10:44:02.769460   18062 main.go:115] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0807 10:44:02.769480   18062 info.go:101] Remote host: Ubuntu 19.10
I0807 10:44:02.769497   18062 filesync.go:118] Scanning /home/mad/.minikube/addons for local assets ...
I0807 10:44:02.769566   18062 filesync.go:118] Scanning /home/mad/.minikube/files for local assets ...
I0807 10:44:02.769606   18062 start.go:207] post-start completed in 169.280614ms
I0807 10:44:02.769621   18062 fix.go:55] fixHost completed within 5.140793177s
I0807 10:44:02.769632   18062 start.go:76] releasing machines lock for "minikube", held for 5.140815719s
I0807 10:44:02.769732   18062 cli_runner.go:109] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0807 10:44:02.826906   18062 ssh_runner.go:148] Run: systemctl --version
I0807 10:44:02.826919   18062 ssh_runner.go:148] Run: curl -sS -m 2 https://k8s.gcr.io/
I0807 10:44:02.826986   18062 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0807 10:44:02.827018   18062 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0807 10:44:02.882140   18062 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/mad/.minikube/machines/minikube/id_rsa Username:docker}
I0807 10:44:02.885993   18062 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/mad/.minikube/machines/minikube/id_rsa Username:docker}
I0807 10:44:03.049481   18062 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service containerd
I0807 10:44:03.070615   18062 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I0807 10:44:03.097434   18062 cruntime.go:192] skipping containerd shutdown because we are bound to it
I0807 10:44:03.097580   18062 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service crio
I0807 10:44:03.112331   18062 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I0807 10:44:03.123548   18062 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I0807 10:44:03.196286   18062 ssh_runner.go:148] Run: sudo systemctl start docker
I0807 10:44:03.205265   18062 ssh_runner.go:148] Run: docker version --format {{.Server.Version}}
🐳  Preparing Kubernetes v1.18.3 on Docker 19.03.2 ...
I0807 10:44:03.287209   18062 cli_runner.go:109] Run: docker network ls --filter name=bridge --format {{.ID}}
I0807 10:44:03.335329   18062 cli_runner.go:109] Run: docker network inspect --format "{{(index .IPAM.Config 0).Gateway}}" b4840d89ac59
I0807 10:44:03.377635   18062 network.go:77] got host ip for mount in container by inspect docker network: 172.17.0.1
I0807 10:44:03.377732   18062 ssh_runner.go:148] Run: grep 172.17.0.1	host.minikube.internal$ /etc/hosts
I0807 10:44:03.380741   18062 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "172.17.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0807 10:44:03.390457   18062 preload.go:97] Checking if preload exists for k8s version v1.18.3 and runtime docker
I0807 10:44:03.390494   18062 preload.go:105] Found local preload: /home/mad/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v5-v1.18.3-docker-overlay2-amd64.tar.lz4
I0807 10:44:03.390595   18062 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0807 10:44:03.471843   18062 docker.go:381] Got preloaded images: -- stdout --
perso/analysis-api:latest
perso/fuglu:latest
perso/url-processing:latest
<none>:<none>
<none>:<none>
<none>:<none>
<none>:<none>
<none>:<none>
<none>:<none>
<none>:<none>
<none>:<none>
<none>:<none>
<none>:<none>
python:3.7.8-slim-buster
gcr.io/k8s-minikube/storage-provisioner:v2
kubernetesui/dashboard:v2.0.1
k8s.gcr.io/kube-proxy:v1.18.3
k8s.gcr.io/kube-controller-manager:v1.18.3
k8s.gcr.io/kube-scheduler:v1.18.3
k8s.gcr.io/kube-apiserver:v1.18.3
kubernetesui/metrics-scraper:v1.0.4
node:13.8.0-alpine
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
node:13.3.0-alpine
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/metrics-server-amd64:v0.2.1
gcr.io/k8s-minikube/storage-provisioner:v1.8.1

-- /stdout --
I0807 10:44:03.471880   18062 docker.go:319] Images already preloaded, skipping extraction
I0807 10:44:03.471943   18062 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0807 10:44:03.548289   18062 docker.go:381] Got preloaded images: -- stdout --
perso/analysis-api:latest
perso/fuglu:latest
perso/url-processing:latest
<none>:<none>
<none>:<none>
<none>:<none>
<none>:<none>
<none>:<none>
<none>:<none>
<none>:<none>
<none>:<none>
<none>:<none>
<none>:<none>
python:3.7.8-slim-buster
gcr.io/k8s-minikube/storage-provisioner:v2
kubernetesui/dashboard:v2.0.1
k8s.gcr.io/kube-proxy:v1.18.3
k8s.gcr.io/kube-controller-manager:v1.18.3
k8s.gcr.io/kube-apiserver:v1.18.3
k8s.gcr.io/kube-scheduler:v1.18.3
kubernetesui/metrics-scraper:v1.0.4
node:13.8.0-alpine
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
node:13.3.0-alpine
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/metrics-server-amd64:v0.2.1
gcr.io/k8s-minikube/storage-provisioner:v1.8.1

-- /stdout --
I0807 10:44:03.548322   18062 cache_images.go:69] Images are preloaded, skipping loading
I0807 10:44:03.548384   18062 ssh_runner.go:148] Run: docker info --format {{.CgroupDriver}}
I0807 10:44:03.601068   18062 cni.go:74] Creating CNI manager for ""
I0807 10:44:03.601085   18062 cni.go:117] CNI unnecessary in this configuration, recommending no CNI
I0807 10:44:03.601096   18062 kubeadm.go:84] Using pod CIDR: 
I0807 10:44:03.601118   18062 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet: AdvertiseAddress:172.17.0.2 APIServerPort:8443 KubernetesVersion:v1.18.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.2"]]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0807 10:44:03.601216   18062 kubeadm.go:154] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.0.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 172.17.0.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "172.17.0.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
controllerManager:
  extraArgs:
    "leader-elect": "false"
scheduler:
  extraArgs:
    "leader-elect": "false"
kubernetesVersion: v1.18.3
networking:
  dnsDomain: cluster.local
  podSubnet: ""
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: ""
metricsBindAddress: 172.17.0.2:10249

I0807 10:44:03.601349   18062 kubeadm.go:796] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.2

[Install]
 config:
{KubernetesVersion:v1.18.3 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0807 10:44:03.601437   18062 ssh_runner.go:148] Run: sudo ls /var/lib/minikube/binaries/v1.18.3
I0807 10:44:03.608713   18062 binaries.go:43] Found k8s binaries, skipping transfer
I0807 10:44:03.608788   18062 ssh_runner.go:148] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0807 10:44:03.616949   18062 ssh_runner.go:215] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
I0807 10:44:03.634210   18062 ssh_runner.go:215] scp memory --> /lib/systemd/system/kubelet.service (349 bytes)
I0807 10:44:03.653195   18062 ssh_runner.go:215] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1730 bytes)
I0807 10:44:03.671431   18062 ssh_runner.go:148] Run: grep 172.17.0.2	control-plane.minikube.internal$ /etc/hosts
I0807 10:44:03.674633   18062 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "172.17.0.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0807 10:44:03.684548   18062 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I0807 10:44:03.753517   18062 ssh_runner.go:148] Run: sudo systemctl start kubelet
I0807 10:44:03.766827   18062 certs.go:52] Setting up /home/mad/.minikube/profiles/minikube for IP: 172.17.0.2
I0807 10:44:03.766876   18062 certs.go:169] skipping minikubeCA CA generation: /home/mad/.minikube/ca.key
I0807 10:44:03.766897   18062 certs.go:169] skipping proxyClientCA CA generation: /home/mad/.minikube/proxy-client-ca.key
I0807 10:44:03.766955   18062 certs.go:269] skipping minikube-user signed cert generation: /home/mad/.minikube/profiles/minikube/client.key
I0807 10:44:03.766979   18062 certs.go:269] skipping minikube signed cert generation: /home/mad/.minikube/profiles/minikube/apiserver.key.7b749c5f
I0807 10:44:03.767001   18062 certs.go:269] skipping aggregator signed cert generation: /home/mad/.minikube/profiles/minikube/proxy-client.key
I0807 10:44:03.767134   18062 certs.go:348] found cert: /home/mad/.minikube/certs/home/mad/.minikube/certs/ca-key.pem (1679 bytes)
I0807 10:44:03.767178   18062 certs.go:348] found cert: /home/mad/.minikube/certs/home/mad/.minikube/certs/ca.pem (1029 bytes)
I0807 10:44:03.767219   18062 certs.go:348] found cert: /home/mad/.minikube/certs/home/mad/.minikube/certs/cert.pem (1070 bytes)
I0807 10:44:03.767255   18062 certs.go:348] found cert: /home/mad/.minikube/certs/home/mad/.minikube/certs/key.pem (1675 bytes)
I0807 10:44:03.768340   18062 ssh_runner.go:215] scp /home/mad/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1350 bytes)
I0807 10:44:03.793733   18062 ssh_runner.go:215] scp /home/mad/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0807 10:44:03.812314   18062 ssh_runner.go:215] scp /home/mad/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1103 bytes)
I0807 10:44:03.832463   18062 ssh_runner.go:215] scp /home/mad/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0807 10:44:03.854057   18062 ssh_runner.go:215] scp /home/mad/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1066 bytes)
I0807 10:44:03.874062   18062 ssh_runner.go:215] scp /home/mad/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0807 10:44:03.894248   18062 ssh_runner.go:215] scp /home/mad/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1074 bytes)
I0807 10:44:03.917066   18062 ssh_runner.go:215] scp /home/mad/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0807 10:44:03.935308   18062 ssh_runner.go:215] scp /home/mad/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1066 bytes)
I0807 10:44:03.954125   18062 ssh_runner.go:215] scp memory --> /var/lib/minikube/kubeconfig (392 bytes)
I0807 10:44:03.971866   18062 ssh_runner.go:148] Run: openssl version
I0807 10:44:03.977459   18062 ssh_runner.go:148] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0807 10:44:03.988077   18062 ssh_runner.go:148] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0807 10:44:03.991552   18062 certs.go:389] hashing: -rw-r--r-- 1 root root 1066 Oct 11  2018 /usr/share/ca-certificates/minikubeCA.pem
I0807 10:44:03.991621   18062 ssh_runner.go:148] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0807 10:44:03.996853   18062 ssh_runner.go:148] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0807 10:44:04.003567   18062 kubeadm.go:327] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:8192 CPUs:4 DiskSize:40960 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.3 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false dashboard:false default-storageclass:true efk:false freshpod:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false] VerifyComponents:map[apiserver:true system_pods:true]}
I0807 10:44:04.003704   18062 ssh_runner.go:148] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0807 10:44:04.056754   18062 ssh_runner.go:148] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0807 10:44:04.065257   18062 kubeadm.go:338] found existing configuration files, will attempt cluster restart
I0807 10:44:04.065282   18062 kubeadm.go:512] restartCluster start
I0807 10:44:04.065344   18062 ssh_runner.go:148] Run: sudo test -d /data/minikube
I0807 10:44:04.074339   18062 kubeadm.go:122] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:

stderr:
I0807 10:44:04.079519   18062 ssh_runner.go:148] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0807 10:44:04.087050   18062 api_server.go:146] Checking apiserver status ...
I0807 10:44:04.087123   18062 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0807 10:44:04.098072   18062 api_server.go:150] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I0807 10:44:04.098116   18062 kubeadm.go:491] needs reconfigure: apiserver in state Stopped
I0807 10:44:04.098131   18062 kubeadm.go:919] stopping kube-system containers ...
I0807 10:44:04.098213   18062 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0807 10:44:04.154609   18062 docker.go:229] Stopping containers: [a364f86d3696 deeb26eb5757 b56435be3215 cc0a28316f12 05436888c471 fe90050e2ed0 463762f7110a 23887d2bb44c ef79cd663416 5a4191dc1a33 ba55f84c040f 180c95715c3d d1602d8abe70 2ac675887546 feae86461b4d fe890484bb13]
I0807 10:44:04.154700   18062 ssh_runner.go:148] Run: docker stop a364f86d3696 deeb26eb5757 b56435be3215 cc0a28316f12 05436888c471 fe90050e2ed0 463762f7110a 23887d2bb44c ef79cd663416 5a4191dc1a33 ba55f84c040f 180c95715c3d d1602d8abe70 2ac675887546 feae86461b4d fe890484bb13
I0807 10:44:04.210907   18062 ssh_runner.go:148] Run: sudo systemctl stop kubelet
I0807 10:44:10.142664   18062 ssh_runner.go:188] Completed: sudo systemctl stop kubelet: (5.93172146s)
I0807 10:44:10.142792   18062 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0807 10:44:10.152130   18062 kubeadm.go:150] found existing configuration files:
-rw------- 1 root root 5491 Aug  7 08:33 /etc/kubernetes/admin.conf
-rw------- 1 root root 5527 Aug  7 08:33 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 1911 Aug  7 08:34 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5475 Aug  7 08:33 /etc/kubernetes/scheduler.conf

I0807 10:44:10.152223   18062 ssh_runner.go:148] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0807 10:44:10.161867   18062 ssh_runner.go:148] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0807 10:44:10.171926   18062 ssh_runner.go:148] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0807 10:44:10.181937   18062 ssh_runner.go:148] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0807 10:44:10.198127   18062 ssh_runner.go:148] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0807 10:44:10.207243   18062 kubeadm.go:582] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0807 10:44:10.207274   18062 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0807 10:44:10.386633   18062 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0807 10:44:10.440428   18062 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0807 10:44:11.366764   18062 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0807 10:44:11.416857   18062 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0807 10:44:11.467046   18062 api_server.go:48] waiting for apiserver process to appear ...
I0807 10:44:11.467112   18062 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0807 10:44:11.974894   18062 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0807 10:44:12.474959   18062 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0807 10:44:12.974892   18062 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0807 10:44:13.475068   18062 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0807 10:44:13.974853   18062 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0807 10:44:14.474904   18062 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0807 10:44:14.974845   18062 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0807 10:44:15.474883   18062 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0807 10:44:15.974972   18062 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0807 10:44:16.474999   18062 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0807 10:44:16.974906   18062 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0807 10:44:17.474792   18062 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0807 10:44:17.974898   18062 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0807 10:44:17.983100   18062 api_server.go:68] duration metric: took 6.516055291s to wait for apiserver process to appear ...
I0807 10:44:17.983116   18062 api_server.go:84] waiting for apiserver healthz status ...
I0807 10:44:17.983124   18062 api_server.go:221] Checking apiserver healthz at https://172.17.0.2:8443/healthz ...
I0807 10:44:20.500264   18062 api_server.go:241] https://172.17.0.2:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0807 10:44:20.500295   18062 api_server.go:99] status: https://172.17.0.2:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0807 10:44:21.000397   18062 api_server.go:221] Checking apiserver healthz at https://172.17.0.2:8443/healthz ...
I0807 10:44:21.005934   18062 api_server.go:241] https://172.17.0.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0807 10:44:21.005978   18062 api_server.go:99] status: https://172.17.0.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0807 10:44:21.500443   18062 api_server.go:221] Checking apiserver healthz at https://172.17.0.2:8443/healthz ...
I0807 10:44:21.509093   18062 api_server.go:241] https://172.17.0.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0807 10:44:21.509168   18062 api_server.go:99] status: https://172.17.0.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0807 10:44:22.000491   18062 api_server.go:221] Checking apiserver healthz at https://172.17.0.2:8443/healthz ...
I0807 10:44:22.004742   18062 api_server.go:241] https://172.17.0.2:8443/healthz returned 200:
ok
W0807 10:44:22.007250   18062 api_server.go:117] api server version match failed: server version: Get "https://172.17.0.2:8443/version?timeout=32s": x509: certificate is valid for 172.17.0.3, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 172.17.0.2
W0807 10:44:22.520760   18062 api_server.go:117] api server version match failed: server version: Get "https://172.17.0.2:8443/version?timeout=32s": x509: certificate is valid for 172.17.0.3, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 172.17.0.2
W0807 10:44:23.017798   18062 api_server.go:117] api server version match failed: server version: Get "https://172.17.0.2:8443/version?timeout=32s": x509: certificate is valid for 172.17.0.3, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 172.17.0.2
W0807 10:44:23.519523   18062 api_server.go:117] api server version match failed: server version: Get "https://172.17.0.2:8443/version?timeout=32s": x509: certificate is valid for 172.17.0.3, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 172.17.0.2
W0807 10:44:24.018951   18062 api_server.go:117] api server version match failed: server version: Get "https://172.17.0.2:8443/version?timeout=32s": x509: certificate is valid for 172.17.0.3, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 172.17.0.2
W0807 10:44:24.518822   18062 api_server.go:117] api server version match failed: server version: Get "https://172.17.0.2:8443/version?timeout=32s": x509: certificate is valid for 172.17.0.3, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 172.17.0.2
[... snip ...]
W0807 10:48:17.018816   18062 api_server.go:117] api server version match failed: server version: Get "https://172.17.0.2:8443/version?timeout=32s": x509: certificate is valid for 172.17.0.3, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 172.17.0.2
W0807 10:48:17.511556   18062 api_server.go:117] api server version match failed: server version: Get "https://172.17.0.2:8443/version?timeout=32s": x509: certificate is valid for 172.17.0.3, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 172.17.0.2
I0807 10:48:18.008700   18062 kubeadm.go:516] restartCluster took 4m13.943396484s
🤦  Unable to restart cluster, will reset it: apiserver health: controlPlane never updated to v1.18.3
I0807 10:48:18.008898   18062 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0807 10:49:02.007468   18062 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force": (43.998547024s)
I0807 10:49:02.007541   18062 ssh_runner.go:148] Run: sudo systemctl stop -f kubelet
I0807 10:49:02.017521   18062 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0807 10:49:02.061942   18062 ssh_runner.go:148] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0807 10:49:02.069087   18062 kubeadm.go:211] ignoring SystemVerification for kubeadm because of docker driver
I0807 10:49:02.069160   18062 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0807 10:49:02.076815   18062 kubeadm.go:147] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0807 10:49:02.076860   18062 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0807 10:49:16.589523   18062 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": (14.512626984s)
I0807 10:49:16.589579   18062 cni.go:74] Creating CNI manager for ""
I0807 10:49:16.589607   18062 cni.go:117] CNI unnecessary in this configuration, recommending no CNI
I0807 10:49:16.589664   18062 ssh_runner.go:148] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0807 10:49:16.589767   18062 ssh_runner.go:148] Run: sudo /var/lib/minikube/binaries/v1.18.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0807 10:49:16.589829   18062 ssh_runner.go:148] Run: sudo /var/lib/minikube/binaries/v1.18.3/kubectl label nodes minikube.k8s.io/version=v1.12.2 minikube.k8s.io/commit=be7c19d391302656d27f1f213657d925c4e1cfc2-dirty minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2020_08_07T10_49_16_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0807 10:49:17.189784   18062 ops.go:34] apiserver oom_adj: -16
I0807 10:49:17.189930   18062 kubeadm.go:872] duration metric: took 600.251983ms to wait for elevateKubeSystemPrivileges.
I0807 10:49:17.190362   18062 kubeadm.go:329] StartCluster complete in 5m13.186789574s
I0807 10:49:17.190450   18062 settings.go:123] acquiring lock: {Name:mk0cbfb55225c947572b1681cc76d271badaebd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0807 10:49:17.190738   18062 settings.go:131] Updating kubeconfig:  /home/mad/.kube/config
I0807 10:49:17.198347   18062 lock.go:35] WriteFile acquiring /home/mad/.kube/config: {Name:mkbc27101fe30991d987ef813e5317436055df40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0807 10:49:17.199031   18062 start.go:195] Will wait wait-timeout for node ...
🔎  Verifying Kubernetes components...
I0807 10:49:17.199172   18062 addons.go:353] enableAddons start: toEnable=map[ambassador:false dashboard:false default-storageclass:true efk:false freshpod:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false], additional=[]
I0807 10:49:17.206854   18062 api_server.go:48] waiting for apiserver process to appear ...
I0807 10:49:17.206948   18062 addons.go:53] Setting metrics-server=true in profile "minikube"
I0807 10:49:17.206895   18062 addons.go:53] Setting storage-provisioner=true in profile "minikube"
I0807 10:49:17.207035   18062 addons.go:129] Setting addon metrics-server=true in "minikube"
W0807 10:49:17.207080   18062 addons.go:138] addon metrics-server should already be in state true
I0807 10:49:17.206949   18062 addons.go:53] Setting default-storageclass=true in profile "minikube"
I0807 10:49:17.207125   18062 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0807 10:49:17.207174   18062 addons.go:267] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0807 10:49:17.199370   18062 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.3/kubectl scale deployment --replicas=1 coredns -n=kube-system
I0807 10:49:17.207134   18062 host.go:65] Checking if "minikube" exists ...
I0807 10:49:17.207081   18062 addons.go:129] Setting addon storage-provisioner=true in "minikube"
W0807 10:49:17.207631   18062 addons.go:138] addon storage-provisioner should already be in state true
I0807 10:49:17.207682   18062 host.go:65] Checking if "minikube" exists ...
I0807 10:49:17.208558   18062 cli_runner.go:109] Run: docker container inspect minikube --format={{.State.Status}}
I0807 10:49:17.209661   18062 cli_runner.go:109] Run: docker container inspect minikube --format={{.State.Status}}
I0807 10:49:17.210200   18062 cli_runner.go:109] Run: docker container inspect minikube --format={{.State.Status}}
❗  Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://172.17.0.2:8443/apis/storage.k8s.io/v1/storageclasses": x509: certificate is valid for 172.17.0.3, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 172.17.0.2]
I0807 10:49:17.297244   18062 addons.go:236] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0807 10:49:17.297262   18062 ssh_runner.go:215] scp deploy/addons/metrics-server/metrics-apiservice.yaml.tmpl --> /etc/kubernetes/addons/metrics-apiservice.yaml (401 bytes)
I0807 10:49:17.297270   18062 addons.go:236] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0807 10:49:17.297286   18062 ssh_runner.go:215] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0807 10:49:17.297334   18062 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0807 10:49:17.297335   18062 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0807 10:49:17.345061   18062 start.go:549] successfully scaled coredns replicas to 1
I0807 10:49:17.345090   18062 api_server.go:68] duration metric: took 145.982033ms to wait for apiserver process to appear ...
I0807 10:49:17.345106   18062 api_server.go:84] waiting for apiserver healthz status ...
I0807 10:49:17.345118   18062 api_server.go:221] Checking apiserver healthz at https://172.17.0.2:8443/healthz ...
I0807 10:49:17.348387   18062 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/mad/.minikube/machines/minikube/id_rsa Username:docker}
I0807 10:49:17.349313   18062 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/mad/.minikube/machines/minikube/id_rsa Username:docker}
I0807 10:49:17.349391   18062 api_server.go:241] https://172.17.0.2:8443/healthz returned 200:
ok
W0807 10:49:17.352860   18062 api_server.go:117] api server version match failed: server version: Get "https://172.17.0.2:8443/version?timeout=32s": x509: certificate is valid for 172.17.0.3, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 172.17.0.2
I0807 10:49:17.499463   18062 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0807 10:49:17.500041   18062 addons.go:236] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0807 10:49:17.500104   18062 ssh_runner.go:215] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (699 bytes)
I0807 10:49:17.558070   18062 addons.go:236] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0807 10:49:17.558186   18062 ssh_runner.go:215] scp deploy/addons/metrics-server/metrics-server-service.yaml.tmpl --> /etc/kubernetes/addons/metrics-server-service.yaml (401 bytes)
I0807 10:49:17.589053   18062 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0807 10:49:17.861350   18062 api_server.go:117] api server version match failed: server version: Get "https://172.17.0.2:8443/version?timeout=32s": x509: certificate is valid for 172.17.0.3, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 172.17.0.2
🌟  Enabled addons: default-storageclass, metrics-server, storage-provisioner
I0807 10:49:17.908746   18062 addons.go:355] enableAddons completed in 709.598727ms
W0807 10:49:18.363407   18062 api_server.go:117] api server version match failed: server version: Get "https://172.17.0.2:8443/version?timeout=32s": x509: certificate is valid for 172.17.0.3, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 172.17.0.2
W0807 10:49:18.865318   18062 api_server.go:117] api server version match failed: server version: Get "https://172.17.0.2:8443/version?timeout=32s": x509: certificate is valid for 172.17.0.3, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 172.17.0.2
W0807 10:49:19.364450   18062 api_server.go:117] api server version match failed: server version: Get "https://172.17.0.2:8443/version?timeout=32s": x509: certificate is valid for 172.17.0.3, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 172.17.0.2
W0807 10:49:19.856311   18062 api_server.go:117] api server version match failed: server version: Get "https://172.17.0.2:8443/version?timeout=32s": x509: certificate is valid for 172.17.0.3, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 172.17.0.2
W0807 10:49:20.365011   18062 api_server.go:117] api server version match failed: server version: Get "https://172.17.0.2:8443/version?timeout=32s": x509: certificate is valid for 172.17.0.3, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 172.17.0.2
W0807 10:49:20.865068   18062 api_server.go:117] api server version match failed: server version: Get "https://172.17.0.2:8443/version?timeout=32s": x509: certificate is valid for 172.17.0.3, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 172.17.0.2
[.... snip ....]
W0807 10:53:17.375662   18062 api_server.go:117] api server version match failed: server version: Get "https://172.17.0.2:8443/version?timeout=32s": x509: certificate is valid for 172.17.0.3, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 172.17.0.2
I0807 10:53:17.375918   18062 exit.go:58] WithError(failed to start node)=startup failed: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.18.3 called from:
goroutine 1 [running]:
runtime/debug.Stack(0x0, 0x0, 0x100000000000000)
	/usr/local/go/src/runtime/debug/stack.go:24 +0x9d
k8s.io/minikube/pkg/minikube/exit.WithError(0x1baef73, 0x14, 0x1ec7e20, 0xc000988be0)
	/app/pkg/minikube/exit/exit.go:58 +0x34
k8s.io/minikube/cmd/minikube/cmd.runStart(0x2ccd8a0, 0xc00096a730, 0x0, 0x1)
	/app/cmd/minikube/cmd/start.go:218 +0x6c9
github.com/spf13/cobra.(*Command).execute(0x2ccd8a0, 0xc00096a720, 0x1, 0x1, 0x2ccd8a0, 0xc00096a720)
	/go/pkg/mod/github.com/spf13/[email protected]/command.go:846 +0x29d
github.com/spf13/cobra.(*Command).ExecuteC(0x2ccc8e0, 0x0, 0x1, 0xc0002c1440)
	/go/pkg/mod/github.com/spf13/[email protected]/command.go:950 +0x349
github.com/spf13/cobra.(*Command).Execute(...)
	/go/pkg/mod/github.com/spf13/[email protected]/command.go:887
k8s.io/minikube/cmd/minikube/cmd.Execute()
	/app/cmd/minikube/cmd/root.go:106 +0x72c
main.main()
	/app/cmd/minikube/main.go:71 +0x11f
W0807 10:53:17.376321   18062 out.go:252] failed to start node: startup failed: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.18.3

💣  failed to start node: startup failed: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.18.3

😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
👉  https://github.com/kubernetes/minikube/issues/new/choose

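To make the mismatch easier to verify from the host, here is a diagnostic sketch (my own check, not part of the minikube output above; it assumes the docker driver with the default container name minikube, and uses the 172.17.0.2:8443 endpoint copied from the x509 errors):

    # IP currently assigned to the minikube container on the default bridge
    docker container inspect minikube --format '{{.NetworkSettings.IPAddress}}'

    # SANs in the certificate the apiserver is actually serving
    echo | openssl s_client -connect 172.17.0.2:8443 2>/dev/null \
      | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'

On the failing start, the first command returns 172.17.0.2, which is missing from the SAN list (172.17.0.3, 10.96.0.1, 127.0.0.1, 10.0.0.1) printed by the second.
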
Full output of minikube start command used, if not already included:

😄  minikube v1.12.2 on Ubuntu 19.10
✨  Using the docker driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
💾  Downloading Kubernetes v1.18.3 preload ...
    > preloaded-images-k8s-v5-v1.18.3-docker-overlay2-amd64.tar.lz4: 510.91 MiB
🔄  Restarting existing docker container for "minikube" ...
🐳  Preparing Kubernetes v1.18.3 on Docker 19.03.2 ...
🤦  Unable to restart cluster, will reset it: apiserver health: controlPlane never updated to v1.18.3
🔎  Verifying Kubernetes components...
❗  Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://172.17.0.2:8443/apis/storage.k8s.io/v1/storageclasses": x509: certificate is valid for 172.17.0.3, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 172.17.0.2]
🌟  Enabled addons: default-storageclass, metrics-server, storage-provisioner

💣  failed to start node: startup failed: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.18.3

😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
👉  https://github.com/kubernetes/minikube/issues/new/choose

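The client side can be checked the same way; a sketch assuming kubectl is on the PATH and the cluster entry is named minikube, as in the kubeconfig updated in the trace above:

    # Endpoint the kubeconfig points at for the minikube cluster
    kubectl config view -o jsonpath='{.clusters[?(@.name=="minikube")].cluster.server}'

If this prints https://172.17.0.2:8443 while the certificate only lists 172.17.0.3, the kubeconfig has been rewritten to the container's new IP but the apiserver is still serving the certificate generated for the old one.
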
Optional: Full output of minikube logs command:

==> Docker <==
-- Logs begin at Fri 2020-08-07 08:16:34 UTC, end at Fri 2020-08-07 08:40:33 UTC. --
Aug 07 08:26:51 minikube dockerd[487]: time="2020-08-07T08:26:51.715176313Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 08:26:52 minikube dockerd[487]: time="2020-08-07T08:26:52.645882189Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 08:26:52 minikube dockerd[487]: time="2020-08-07T08:26:52.645957837Z" level=warning msg="c755e8262f0a183579569bbabfc80447bb39bd35d992f9da0287caa456352339 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/c755e8262f0a183579569bbabfc80447bb39bd35d992f9da0287caa456352339/mounts/shm, flags: 0x2: no such file or directory"
Aug 07 08:26:53 minikube dockerd[487]: time="2020-08-07T08:26:53.225430160Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 08:26:53 minikube dockerd[487]: time="2020-08-07T08:26:53.225521647Z" level=warning msg="bcf53a34f16a79b4eb42123bb7ae6fcb888340d313a7123873dc65d19fa4ad38 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/bcf53a34f16a79b4eb42123bb7ae6fcb888340d313a7123873dc65d19fa4ad38/mounts/shm, flags: 0x2: no such file or directory"
Aug 07 08:26:56 minikube dockerd[487]: time="2020-08-07T08:26:56.532370813Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 08:26:56 minikube dockerd[487]: time="2020-08-07T08:26:56.532444643Z" level=warning msg="97a3ea8c81264c2260d9a99316d2120acd910e5eac8d9630bf05781dbbfcfd8a cleanup: failed to unmount IPC: umount /var/lib/docker/containers/97a3ea8c81264c2260d9a99316d2120acd910e5eac8d9630bf05781dbbfcfd8a/mounts/shm, flags: 0x2: no such file or directory"
Aug 07 08:26:57 minikube dockerd[487]: time="2020-08-07T08:26:57.243370339Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 08:26:57 minikube dockerd[487]: time="2020-08-07T08:26:57.243454727Z" level=warning msg="9b510d0812df5bad0fc37db8c1fdd409c038c86ac38969330eab5930b17dcd17 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/9b510d0812df5bad0fc37db8c1fdd409c038c86ac38969330eab5930b17dcd17/mounts/shm, flags: 0x2: no such file or directory"
Aug 07 08:27:05 minikube dockerd[487]: time="2020-08-07T08:27:05.716794271Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
Aug 07 08:29:06 minikube dockerd[487]: time="2020-08-07T08:29:06.221011887Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 08:29:06 minikube dockerd[487]: time="2020-08-07T08:29:06.320298819Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 08:29:06 minikube dockerd[487]: time="2020-08-07T08:29:06.320382283Z" level=warning msg="27b8ac67d35fb410d43bacfb4bf80d431bc3bff0a504bd64e6c1dedbeaf2de93 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/27b8ac67d35fb410d43bacfb4bf80d431bc3bff0a504bd64e6c1dedbeaf2de93/mounts/shm, flags: 0x2: no such file or directory"
Aug 07 08:29:06 minikube dockerd[487]: time="2020-08-07T08:29:06.331834150Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 08:29:06 minikube dockerd[487]: time="2020-08-07T08:29:06.331969605Z" level=warning msg="4fca384549959de0f1ce69871b6074368b2a6730f86b7cc8b962ce43e38cd98b cleanup: failed to unmount IPC: umount /var/lib/docker/containers/4fca384549959de0f1ce69871b6074368b2a6730f86b7cc8b962ce43e38cd98b/mounts/shm, flags: 0x2: no such file or directory"
Aug 07 08:29:06 minikube dockerd[487]: time="2020-08-07T08:29:06.332015126Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 08:29:06 minikube dockerd[487]: time="2020-08-07T08:29:06.332144393Z" level=warning msg="4c5f34e14af1e96a23d83f02cb3f3860b212a4e667ad6ca37058ba00117a4fce cleanup: failed to unmount IPC: umount /var/lib/docker/containers/4c5f34e14af1e96a23d83f02cb3f3860b212a4e667ad6ca37058ba00117a4fce/mounts/shm, flags: 0x2: no such file or directory"
Aug 07 08:29:06 minikube dockerd[487]: time="2020-08-07T08:29:06.378609964Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 08:29:06 minikube dockerd[487]: time="2020-08-07T08:29:06.389899116Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 08:29:06 minikube dockerd[487]: time="2020-08-07T08:29:06.390870393Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 08:29:06 minikube dockerd[487]: time="2020-08-07T08:29:06.392247966Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 08:29:06 minikube dockerd[487]: time="2020-08-07T08:29:06.406353631Z" level=warning msg="31c06aac7281ed7417fc826a48cedfcce8cd3d1d2bf27f19e2be3de1f1f525a0 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/31c06aac7281ed7417fc826a48cedfcce8cd3d1d2bf27f19e2be3de1f1f525a0/mounts/shm, flags: 0x2: no such file or directory"
Aug 07 08:29:06 minikube dockerd[487]: time="2020-08-07T08:29:06.406452914Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 08:29:06 minikube dockerd[487]: time="2020-08-07T08:29:06.410672286Z" level=warning msg="6a987092d18cbe970d3992d1854f699d5b06f9de52ca779a77e5700042691c00 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/6a987092d18cbe970d3992d1854f699d5b06f9de52ca779a77e5700042691c00/mounts/shm, flags: 0x2: no such file or directory"
Aug 07 08:29:06 minikube dockerd[487]: time="2020-08-07T08:29:06.410974334Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 08:29:06 minikube dockerd[487]: time="2020-08-07T08:29:06.411658440Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 08:29:06 minikube dockerd[487]: time="2020-08-07T08:29:06.411811870Z" level=warning msg="69b1f7ac017844678eb88817599bf633f144f316f8f7582a782cd5b111caaa63 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/69b1f7ac017844678eb88817599bf633f144f316f8f7582a782cd5b111caaa63/mounts/shm, flags: 0x2: no such file or directory"
Aug 07 08:29:06 minikube dockerd[487]: time="2020-08-07T08:29:06.429982157Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 08:29:06 minikube dockerd[487]: time="2020-08-07T08:29:06.430073019Z" level=warning msg="444ae043573488bb9f981ae242b500eb879910c856808f81197295d835f1f143 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/444ae043573488bb9f981ae242b500eb879910c856808f81197295d835f1f143/mounts/shm, flags: 0x2: no such file or directory"
Aug 07 08:29:06 minikube dockerd[487]: time="2020-08-07T08:29:06.439193397Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 08:29:06 minikube dockerd[487]: time="2020-08-07T08:29:06.447858782Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 08:29:06 minikube dockerd[487]: time="2020-08-07T08:29:06.448561461Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 08:29:11 minikube dockerd[487]: time="2020-08-07T08:29:11.233281583Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 08:29:11 minikube dockerd[487]: time="2020-08-07T08:29:11.233346598Z" level=warning msg="5dd1832f1a149b0cfe7c90029c9220fd07dfd504df1801a7069d32015d712aff cleanup: failed to unmount IPC: umount /var/lib/docker/containers/5dd1832f1a149b0cfe7c90029c9220fd07dfd504df1801a7069d32015d712aff/mounts/shm, flags: 0x2: no such file or directory"
Aug 07 08:29:19 minikube dockerd[487]: time="2020-08-07T08:29:19.030982277Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
Aug 07 08:33:52 minikube dockerd[487]: time="2020-08-07T08:33:52.649300394Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 08:33:52 minikube dockerd[487]: time="2020-08-07T08:33:52.649388246Z" level=warning msg="8c861e3756d5e7cace9fc76d0c65fbefd6b27311126300a51764333cbc579449 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/8c861e3756d5e7cace9fc76d0c65fbefd6b27311126300a51764333cbc579449/mounts/shm, flags: 0x2: no such file or directory"
Aug 07 08:33:52 minikube dockerd[487]: time="2020-08-07T08:33:52.796699163Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 08:33:52 minikube dockerd[487]: time="2020-08-07T08:33:52.796777642Z" level=warning msg="f7bc32c944e95d25dd45ed32ed1970855f8e63bcadc9734ef50cb55743de0a1f cleanup: failed to unmount IPC: umount /var/lib/docker/containers/f7bc32c944e95d25dd45ed32ed1970855f8e63bcadc9734ef50cb55743de0a1f/mounts/shm, flags: 0x2: no such file or directory"
Aug 07 08:33:52 minikube dockerd[487]: time="2020-08-07T08:33:52.922601301Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 08:33:52 minikube dockerd[487]: time="2020-08-07T08:33:52.922893631Z" level=warning msg="3e8ead2eabbe7021160e8c29f8097372aa4a58e9d81a94d74799d27b357b13cc cleanup: failed to unmount IPC: umount /var/lib/docker/containers/3e8ead2eabbe7021160e8c29f8097372aa4a58e9d81a94d74799d27b357b13cc/mounts/shm, flags: 0x2: no such file or directory"
Aug 07 08:33:53 minikube dockerd[487]: time="2020-08-07T08:33:53.129072811Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 08:33:53 minikube dockerd[487]: time="2020-08-07T08:33:53.474934030Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 08:33:53 minikube dockerd[487]: time="2020-08-07T08:33:53.475124055Z" level=warning msg="00edd50bf1689b121e5fdd01a6271c30be18b5cdf3842b3a7bc8ec23840c634f cleanup: failed to unmount IPC: umount /var/lib/docker/containers/00edd50bf1689b121e5fdd01a6271c30be18b5cdf3842b3a7bc8ec23840c634f/mounts/shm, flags: 0x2: no such file or directory"
Aug 07 08:33:53 minikube dockerd[487]: time="2020-08-07T08:33:53.644909657Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 08:33:53 minikube dockerd[487]: time="2020-08-07T08:33:53.644985643Z" level=warning msg="129ab0cbacd000967a50288eed04cea31dfeb1238b796a1031b2794e23c94ffc cleanup: failed to unmount IPC: umount /var/lib/docker/containers/129ab0cbacd000967a50288eed04cea31dfeb1238b796a1031b2794e23c94ffc/mounts/shm, flags: 0x2: no such file or directory"
Aug 07 08:33:53 minikube dockerd[487]: time="2020-08-07T08:33:53.785400035Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 08:33:53 minikube dockerd[487]: time="2020-08-07T08:33:53.785408665Z" level=warning msg="c2fe3e3ced47f1baf9118b311576061333b7be04ae090e66a62ae54c898f449f cleanup: failed to unmount IPC: umount /var/lib/docker/containers/c2fe3e3ced47f1baf9118b311576061333b7be04ae090e66a62ae54c898f449f/mounts/shm, flags: 0x2: no such file or directory"
Aug 07 08:33:53 minikube dockerd[487]: time="2020-08-07T08:33:53.941331060Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 08:33:54 minikube dockerd[487]: time="2020-08-07T08:33:54.099177213Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 08:33:54 minikube dockerd[487]: time="2020-08-07T08:33:54.099356252Z" level=warning msg="db5dbc5fa5ec8bb258580cd54410a35737005c81e3ebf137ce7d7d8066b8a6fb cleanup: failed to unmount IPC: umount /var/lib/docker/containers/db5dbc5fa5ec8bb258580cd54410a35737005c81e3ebf137ce7d7d8066b8a6fb/mounts/shm, flags: 0x2: no such file or directory"
Aug 07 08:33:54 minikube dockerd[487]: time="2020-08-07T08:33:54.303285257Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 08:33:54 minikube dockerd[487]: time="2020-08-07T08:33:54.528033562Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 08:33:54 minikube dockerd[487]: time="2020-08-07T08:33:54.834650577Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 08:33:54 minikube dockerd[487]: time="2020-08-07T08:33:54.834720661Z" level=warning msg="c50cb92ba572ec1cd2d7c1255e6629ac469bf5dc84f80cc653b7ccfbc29b067c cleanup: failed to unmount IPC: umount /var/lib/docker/containers/c50cb92ba572ec1cd2d7c1255e6629ac469bf5dc84f80cc653b7ccfbc29b067c/mounts/shm, flags: 0x2: no such file or directory"
Aug 07 08:33:55 minikube dockerd[487]: time="2020-08-07T08:33:55.018309067Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 08:33:55 minikube dockerd[487]: time="2020-08-07T08:33:55.165053513Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 08:33:55 minikube dockerd[487]: time="2020-08-07T08:33:55.337549094Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 08:33:55 minikube dockerd[487]: time="2020-08-07T08:33:55.499552666Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 08:34:30 minikube dockerd[487]: time="2020-08-07T08:34:30.569320737Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."

==> container status <==
CONTAINER           IMAGE                                                                                                     CREATED             STATE               NAME                      ATTEMPT             POD ID
a364f86d36962       9c3ca9f065bb1                                                                                             5 minutes ago       Running             storage-provisioner       0                   deeb26eb57572
b56435be32155       k8s.gcr.io/metrics-server-amd64@sha256:49a9f12f7067d11f42c803dbe61ed2c1299959ad85cb315b25ff7eef8e6b8892   6 minutes ago       Running             metrics-server            0                   cc0a28316f120
05436888c471f       67da37a9a360e                                                                                             6 minutes ago       Running             coredns                   0                   fe90050e2ed01
463762f7110ac       3439b7546f29b                                                                                             6 minutes ago       Running             kube-proxy                0                   23887d2bb44cb
ef79cd663416a       303ce5db0e90d                                                                                             6 minutes ago       Running             etcd                      0                   d1602d8abe707
5a4191dc1a33a       76216c34ed0c7                                                                                             6 minutes ago       Running             kube-scheduler            0                   2ac6758875468
ba55f84c040f8       da26705ccb4b5                                                                                             6 minutes ago       Running             kube-controller-manager   0                   feae86461b4de
180c95715c3d4       7e28efa976bd1                                                                                             6 minutes ago       Running             kube-apiserver            0                   fe890484bb135

==> coredns [05436888c471] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b

==> describe nodes <==
Name:               minikube
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=be7c19d391302656d27f1f213657d925c4e1cfc2-dirty
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/updated_at=2020_08_07T10_34_11_0700
                    minikube.k8s.io/version=v1.12.2
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 07 Aug 2020 08:34:07 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  minikube
  AcquireTime:     <unset>
  RenewTime:       Fri, 07 Aug 2020 08:40:27 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Fri, 07 Aug 2020 08:39:29 +0000   Fri, 07 Aug 2020 08:34:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Fri, 07 Aug 2020 08:39:29 +0000   Fri, 07 Aug 2020 08:34:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Fri, 07 Aug 2020 08:39:29 +0000   Fri, 07 Aug 2020 08:34:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Fri, 07 Aug 2020 08:39:29 +0000   Fri, 07 Aug 2020 08:34:27 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  172.17.0.2
  Hostname:    minikube
Capacity:
  cpu:                8
  ephemeral-storage:  233372712Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             16242936Ki
  pods:               110
Allocatable:
  cpu:                8
  ephemeral-storage:  233372712Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             16242936Ki
  pods:               110
System Info:
  Machine ID:                 4754caffd24943f09508ec416ccf13a4
  System UUID:                31a5ef2f-8c5e-45fe-b569-c0ec2cdf5b07
  Boot ID:                    2e4b1082-3f37-4340-bd29-513e095f3ce4
  Kernel Version:             5.3.0-64-generic
  OS Image:                   Ubuntu 19.10
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://19.3.2
  Kubelet Version:            v1.18.3
  Kube-Proxy Version:         v1.18.3
Non-terminated Pods:          (8 in total)
  Namespace                   Name                                CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                ------------  ----------  ---------------  -------------  ---
  kube-system                 coredns-66bff467f8-9dt6n            100m (1%)     0 (0%)      70Mi (0%)        170Mi (1%)     6m17s
  kube-system                 etcd-minikube                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
  kube-system                 kube-apiserver-minikube             250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m16s
  kube-system                 kube-controller-manager-minikube    200m (2%)     0 (0%)      0 (0%)           0 (0%)         6m16s
  kube-system                 kube-proxy-5cj9s                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
  kube-system                 kube-scheduler-minikube             100m (1%)     0 (0%)      0 (0%)           0 (0%)         6m15s
  kube-system                 metrics-server-7bc6d75975-4z22q     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
  kube-system                 storage-provisioner                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests   Limits
  --------           --------   ------
  cpu                650m (8%)  0 (0%)
  memory             70Mi (0%)  170Mi (1%)
  ephemeral-storage  0 (0%)     0 (0%)
  hugepages-1Gi      0 (0%)     0 (0%)
  hugepages-2Mi      0 (0%)     0 (0%)
Events:
  Type    Reason                   Age                    From                  Message
  ----    ------                   ----                   ----                  -------
  Normal  NodeHasSufficientMemory  6m30s (x5 over 6m30s)  kubelet, minikube     Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    6m30s (x5 over 6m30s)  kubelet, minikube     Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     6m30s (x5 over 6m30s)  kubelet, minikube     Node minikube status is now: NodeHasSufficientPID
  Normal  Starting                 6m16s                  kubelet, minikube     Starting kubelet.
  Normal  NodeHasSufficientMemory  6m16s                  kubelet, minikube     Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    6m16s                  kubelet, minikube     Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     6m16s                  kubelet, minikube     Node minikube status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  6m16s                  kubelet, minikube     Updated Node Allocatable limit across pods
  Normal  Starting                 6m14s                  kube-proxy, minikube  Starting kube-proxy.
  Normal  NodeReady                6m6s                   kubelet, minikube     Node minikube status is now: NodeReady

==> dmesg <==
[  +0.000001] mce: CPU4: Package temperature above threshold, cpu clock throttled (total events = 44)
[  +0.000062] mce: CPU7: Package temperature above threshold, cpu clock throttled (total events = 44)
[  +0.000001] mce: CPU3: Package temperature above threshold, cpu clock throttled (total events = 44)
[Aug 6 16:18] mce: CPU0: Core temperature above threshold, cpu clock throttled (total events = 86)
[  +0.000001] mce: CPU4: Core temperature above threshold, cpu clock throttled (total events = 86)
[  +0.000001] mce: CPU7: Package temperature above threshold, cpu clock throttled (total events = 171)
[  +0.000001] mce: CPU3: Package temperature above threshold, cpu clock throttled (total events = 171)
[  +0.000001] mce: CPU4: Package temperature above threshold, cpu clock throttled (total events = 171)
[  +0.000001] mce: CPU0: Package temperature above threshold, cpu clock throttled (total events = 171)
[  +0.000053] mce: CPU1: Package temperature above threshold, cpu clock throttled (total events = 171)
[  +0.000001] mce: CPU5: Package temperature above threshold, cpu clock throttled (total events = 171)
[  +0.000001] mce: CPU2: Package temperature above threshold, cpu clock throttled (total events = 171)
[  +0.000000] mce: CPU6: Package temperature above threshold, cpu clock throttled (total events = 171)
[Aug 6 16:32] atkbd serio0: Unknown key pressed (translated set 2, code 0x85 on isa0060/serio0).
[  +0.000006] atkbd serio0: Use 'setkeycodes e005 <keycode>' to make it known.
[  +4.680104] IRQ 138: no longer affine to CPU4
[  +0.008047] IRQ 16: no longer affine to CPU5
[  +0.000008] IRQ 125: no longer affine to CPU5
[  +0.009574] IRQ 139: no longer affine to CPU6
[  +0.314914] iwlwifi 0000:01:00.0: FW already configured (0) - re-configuring
[Aug 6 16:50] mce: CPU0: Core temperature above threshold, cpu clock throttled (total events = 100)
[  +0.000001] mce: CPU0: Package temperature above threshold, cpu clock throttled (total events = 185)
[  +0.000002] mce: CPU6: Package temperature above threshold, cpu clock throttled (total events = 185)
[  +0.000001] mce: CPU2: Package temperature above threshold, cpu clock throttled (total events = 185)
[  +0.000001] mce: CPU5: Package temperature above threshold, cpu clock throttled (total events = 185)
[  +0.000001] mce: CPU4: Core temperature above threshold, cpu clock throttled (total events = 100)
[  +0.000000] mce: CPU1: Package temperature above threshold, cpu clock throttled (total events = 185)
[  +0.000002] mce: CPU3: Package temperature above threshold, cpu clock throttled (total events = 185)
[  +0.000001] mce: CPU7: Package temperature above threshold, cpu clock throttled (total events = 185)
[  +0.000000] mce: CPU4: Package temperature above threshold, cpu clock throttled (total events = 185)
[Aug 6 17:10] mce: CPU6: Core temperature above threshold, cpu clock throttled (total events = 45)
[  +0.000001] mce: CPU2: Core temperature above threshold, cpu clock throttled (total events = 45)
[  +0.000002] mce: CPU3: Package temperature above threshold, cpu clock throttled (total events = 230)
[  +0.000001] mce: CPU7: Package temperature above threshold, cpu clock throttled (total events = 230)
[  +0.000000] mce: CPU2: Package temperature above threshold, cpu clock throttled (total events = 230)
[  +0.000002] mce: CPU6: Package temperature above threshold, cpu clock throttled (total events = 230)
[  +0.000005] mce: CPU0: Package temperature above threshold, cpu clock throttled (total events = 230)
[  +0.000001] mce: CPU4: Package temperature above threshold, cpu clock throttled (total events = 230)
[  +0.000029] mce: CPU5: Package temperature above threshold, cpu clock throttled (total events = 230)
[  +0.000001] mce: CPU1: Package temperature above threshold, cpu clock throttled (total events = 230)
[Aug 6 17:20] mce: CPU4: Core temperature above threshold, cpu clock throttled (total events = 158)
[  +0.000000] mce: CPU0: Core temperature above threshold, cpu clock throttled (total events = 158)
[  +0.000002] mce: CPU5: Package temperature above threshold, cpu clock throttled (total events = 255)
[  +0.000001] mce: CPU1: Package temperature above threshold, cpu clock throttled (total events = 255)
[  +0.000001] mce: CPU2: Package temperature above threshold, cpu clock throttled (total events = 255)
[  +0.000001] mce: CPU6: Package temperature above threshold, cpu clock throttled (total events = 255)
[  +0.000001] mce: CPU3: Package temperature above threshold, cpu clock throttled (total events = 255)
[  +0.000001] mce: CPU7: Package temperature above threshold, cpu clock throttled (total events = 255)
[  +0.000001] mce: CPU0: Package temperature above threshold, cpu clock throttled (total events = 255)
[  +0.000001] mce: CPU4: Package temperature above threshold, cpu clock throttled (total events = 255)
[Aug 6 17:27] mce: CPU3: Core temperature above threshold, cpu clock throttled (total events = 46)
[  +0.000001] mce: CPU7: Core temperature above threshold, cpu clock throttled (total events = 46)
[  +0.000001] mce: CPU7: Package temperature above threshold, cpu clock throttled (total events = 295)
[  +0.000000] mce: CPU3: Package temperature above threshold, cpu clock throttled (total events = 295)
[  +0.000049] mce: CPU1: Package temperature above threshold, cpu clock throttled (total events = 295)
[  +0.000000] mce: CPU5: Package temperature above threshold, cpu clock throttled (total events = 295)
[  +0.000002] mce: CPU4: Package temperature above threshold, cpu clock throttled (total events = 295)
[  +0.000000] mce: CPU0: Package temperature above threshold, cpu clock throttled (total events = 295)
[  +0.000002] mce: CPU6: Package temperature above threshold, cpu clock throttled (total events = 295)
[  +0.000000] mce: CPU2: Package temperature above threshold, cpu clock throttled (total events = 295)

==> etcd [ef79cd663416] <==
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-08-07 08:34:04.663867 I | etcdmain: etcd Version: 3.4.3
2020-08-07 08:34:04.663895 I | etcdmain: Git SHA: 3cf2f69b5
2020-08-07 08:34:04.663898 I | etcdmain: Go Version: go1.12.12
2020-08-07 08:34:04.663901 I | etcdmain: Go OS/Arch: linux/amd64
2020-08-07 08:34:04.663904 I | etcdmain: setting maximum number of CPUs to 8, total number of available CPUs is 8
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-08-07 08:34:04.663968 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
2020-08-07 08:34:04.664652 I | embed: name = minikube
2020-08-07 08:34:04.664657 I | embed: data dir = /var/lib/minikube/etcd
2020-08-07 08:34:04.664661 I | embed: member dir = /var/lib/minikube/etcd/member
2020-08-07 08:34:04.664663 I | embed: heartbeat = 100ms
2020-08-07 08:34:04.664666 I | embed: election = 1000ms
2020-08-07 08:34:04.664669 I | embed: snapshot count = 10000
2020-08-07 08:34:04.664684 I | embed: advertise client URLs = https://172.17.0.2:2379
2020-08-07 08:34:04.679376 I | etcdserver: starting member b8e14bda2255bc24 in cluster 38b0e74a458e7a1f
raft2020/08/07 08:34:04 INFO: b8e14bda2255bc24 switched to configuration voters=()
raft2020/08/07 08:34:04 INFO: b8e14bda2255bc24 became follower at term 0
raft2020/08/07 08:34:04 INFO: newRaft b8e14bda2255bc24 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
raft2020/08/07 08:34:04 INFO: b8e14bda2255bc24 became follower at term 1
raft2020/08/07 08:34:04 INFO: b8e14bda2255bc24 switched to configuration voters=(13322012572989635620)
2020-08-07 08:34:04.687183 W | auth: simple token is not cryptographically signed
2020-08-07 08:34:04.691509 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
2020-08-07 08:34:04.691711 I | etcdserver: b8e14bda2255bc24 as single-node; fast-forwarding 9 ticks (election ticks 10)
raft2020/08/07 08:34:04 INFO: b8e14bda2255bc24 switched to configuration voters=(13322012572989635620)
2020-08-07 08:34:04.691952 I | etcdserver/membership: added member b8e14bda2255bc24 [https://172.17.0.2:2380] to cluster 38b0e74a458e7a1f
2020-08-07 08:34:04.694141 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
2020-08-07 08:34:04.694215 I | embed: listening for peers on 172.17.0.2:2380
2020-08-07 08:34:04.694287 I | embed: listening for metrics on http://127.0.0.1:2381
raft2020/08/07 08:34:05 INFO: b8e14bda2255bc24 is starting a new election at term 1
raft2020/08/07 08:34:05 INFO: b8e14bda2255bc24 became candidate at term 2
raft2020/08/07 08:34:05 INFO: b8e14bda2255bc24 received MsgVoteResp from b8e14bda2255bc24 at term 2
raft2020/08/07 08:34:05 INFO: b8e14bda2255bc24 became leader at term 2
raft2020/08/07 08:34:05 INFO: raft.node: b8e14bda2255bc24 elected leader b8e14bda2255bc24 at term 2
2020-08-07 08:34:05.380286 I | etcdserver: published {Name:minikube ClientURLs:[https://172.17.0.2:2379]} to cluster 38b0e74a458e7a1f
2020-08-07 08:34:05.380300 I | embed: ready to serve client requests
2020-08-07 08:34:05.380328 I | embed: ready to serve client requests
2020-08-07 08:34:05.380470 I | etcdserver: setting up the initial cluster version to 3.4
2020-08-07 08:34:05.381325 I | embed: serving client requests on 127.0.0.1:2379
2020-08-07 08:34:05.381358 I | embed: serving client requests on 172.17.0.2:2379
2020-08-07 08:34:05.389128 N | etcdserver/membership: set the initial cluster version to 3.4
2020-08-07 08:34:05.389189 I | etcdserver/api: enabled capabilities for version 3.4

==> kernel <==
 08:40:33 up 1 day, 58 min,  0 users,  load average: 2.04, 1.88, 1.97
Linux minikube 5.3.0-64-generic #58-Ubuntu SMP Fri Jul 10 19:33:51 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 19.10"

==> kube-apiserver [180c95715c3d] <==
I0807 08:37:44.944197       1 log.go:172] http: TLS handshake error from 172.17.0.1:41808: remote error: tls: bad certificate
I0807 08:37:45.447510       1 log.go:172] http: TLS handshake error from 172.17.0.1:41812: remote error: tls: bad certificate
I0807 08:37:45.947577       1 log.go:172] http: TLS handshake error from 172.17.0.1:41816: remote error: tls: bad certificate
I0807 08:37:46.447712       1 log.go:172] http: TLS handshake error from 172.17.0.1:41820: remote error: tls: bad certificate
I0807 08:37:46.951217       1 log.go:172] http: TLS handshake error from 172.17.0.1:41824: remote error: tls: bad certificate
I0807 08:37:47.448125       1 log.go:172] http: TLS handshake error from 172.17.0.1:41826: remote error: tls: bad certificate
I0807 08:37:47.948364       1 log.go:172] http: TLS handshake error from 172.17.0.1:41832: remote error: tls: bad certificate
I0807 08:37:48.447879       1 log.go:172] http: TLS handshake error from 172.17.0.1:41834: remote error: tls: bad certificate
I0807 08:37:48.943540       1 log.go:172] http: TLS handshake error from 172.17.0.1:41838: remote error: tls: bad certificate
I0807 08:37:49.440560       1 log.go:172] http: TLS handshake error from 172.17.0.1:41840: remote error: tls: bad certificate
I0807 08:37:49.948277       1 log.go:172] http: TLS handshake error from 172.17.0.1:41844: remote error: tls: bad certificate
I0807 08:37:50.450220       1 log.go:172] http: TLS handshake error from 172.17.0.1:41846: remote error: tls: bad certificate
I0807 08:37:50.945448       1 log.go:172] http: TLS handshake error from 172.17.0.1:41850: remote error: tls: bad certificate
I0807 08:37:51.444418       1 log.go:172] http: TLS handshake error from 172.17.0.1:41852: remote error: tls: bad certificate
I0807 08:37:51.948511       1 log.go:172] http: TLS handshake error from 172.17.0.1:41856: remote error: tls: bad certificate
I0807 08:37:52.446914       1 log.go:172] http: TLS handshake error from 172.17.0.1:41860: remote error: tls: bad certificate
I0807 08:37:52.944253       1 log.go:172] http: TLS handshake error from 172.17.0.1:41864: remote error: tls: bad certificate
I0807 08:37:53.447794       1 log.go:172] http: TLS handshake error from 172.17.0.1:41866: remote error: tls: bad certificate
I0807 08:37:53.951201       1 log.go:172] http: TLS handshake error from 172.17.0.1:41872: remote error: tls: bad certificate
I0807 08:37:54.447770       1 log.go:172] http: TLS handshake error from 172.17.0.1:41876: remote error: tls: bad certificate
I0807 08:37:54.948729       1 log.go:172] http: TLS handshake error from 172.17.0.1:41880: remote error: tls: bad certificate
I0807 08:37:55.447731       1 log.go:172] http: TLS handshake error from 172.17.0.1:41884: remote error: tls: bad certificate
I0807 08:37:55.948010       1 log.go:172] http: TLS handshake error from 172.17.0.1:41888: remote error: tls: bad certificate
I0807 08:37:56.447852       1 log.go:172] http: TLS handshake error from 172.17.0.1:41892: remote error: tls: bad certificate
I0807 08:37:56.948334       1 log.go:172] http: TLS handshake error from 172.17.0.1:41896: remote error: tls: bad certificate
I0807 08:37:57.448222       1 log.go:172] http: TLS handshake error from 172.17.0.1:41898: remote error: tls: bad certificate
I0807 08:37:57.947737       1 log.go:172] http: TLS handshake error from 172.17.0.1:41904: remote error: tls: bad certificate
I0807 08:37:58.450863       1 log.go:172] http: TLS handshake error from 172.17.0.1:41906: remote error: tls: bad certificate
I0807 08:37:58.947585       1 log.go:172] http: TLS handshake error from 172.17.0.1:41910: remote error: tls: bad certificate
I0807 08:37:59.448036       1 log.go:172] http: TLS handshake error from 172.17.0.1:41912: remote error: tls: bad certificate
I0807 08:37:59.947934       1 log.go:172] http: TLS handshake error from 172.17.0.1:41916: remote error: tls: bad certificate
I0807 08:38:00.451076       1 log.go:172] http: TLS handshake error from 172.17.0.1:41918: remote error: tls: bad certificate
I0807 08:38:00.947899       1 log.go:172] http: TLS handshake error from 172.17.0.1:41922: remote error: tls: bad certificate
I0807 08:38:01.447661       1 log.go:172] http: TLS handshake error from 172.17.0.1:41924: remote error: tls: bad certificate
I0807 08:38:01.940212       1 log.go:172] http: TLS handshake error from 172.17.0.1:41928: remote error: tls: bad certificate
I0807 08:38:02.447361       1 log.go:172] http: TLS handshake error from 172.17.0.1:41932: remote error: tls: bad certificate
I0807 08:38:02.941927       1 log.go:172] http: TLS handshake error from 172.17.0.1:41936: remote error: tls: bad certificate
I0807 08:38:03.448154       1 log.go:172] http: TLS handshake error from 172.17.0.1:41938: remote error: tls: bad certificate
I0807 08:38:03.939994       1 log.go:172] http: TLS handshake error from 172.17.0.1:41944: remote error: tls: bad certificate
I0807 08:38:04.449694       1 log.go:172] http: TLS handshake error from 172.17.0.1:41948: remote error: tls: bad certificate
I0807 08:38:04.949764       1 log.go:172] http: TLS handshake error from 172.17.0.1:41952: remote error: tls: bad certificate
I0807 08:38:05.448673       1 log.go:172] http: TLS handshake error from 172.17.0.1:41956: remote error: tls: bad certificate
I0807 08:38:05.942122       1 log.go:172] http: TLS handshake error from 172.17.0.1:41960: remote error: tls: bad certificate
I0807 08:38:06.447103       1 log.go:172] http: TLS handshake error from 172.17.0.1:41964: remote error: tls: bad certificate
I0807 08:38:06.948448       1 log.go:172] http: TLS handshake error from 172.17.0.1:41968: remote error: tls: bad certificate
I0807 08:38:07.447648       1 log.go:172] http: TLS handshake error from 172.17.0.1:41970: remote error: tls: bad certificate
I0807 08:38:07.951023       1 log.go:172] http: TLS handshake error from 172.17.0.1:41976: remote error: tls: bad certificate
I0807 08:38:08.448404       1 log.go:172] http: TLS handshake error from 172.17.0.1:41978: remote error: tls: bad certificate
I0807 08:38:08.944795       1 log.go:172] http: TLS handshake error from 172.17.0.1:41982: remote error: tls: bad certificate
I0807 08:38:09.443284       1 log.go:172] http: TLS handshake error from 172.17.0.1:41984: remote error: tls: bad certificate
I0807 08:38:09.940033       1 log.go:172] http: TLS handshake error from 172.17.0.1:41988: remote error: tls: bad certificate
I0807 08:38:10.448307       1 log.go:172] http: TLS handshake error from 172.17.0.1:41990: remote error: tls: bad certificate
I0807 08:38:10.949210       1 log.go:172] http: TLS handshake error from 172.17.0.1:41994: remote error: tls: bad certificate
I0807 08:38:11.447779       1 log.go:172] http: TLS handshake error from 172.17.0.1:41996: remote error: tls: bad certificate
I0807 08:38:11.948806       1 log.go:172] http: TLS handshake error from 172.17.0.1:42000: remote error: tls: bad certificate
I0807 08:38:11.960371       1 log.go:172] http: TLS handshake error from 172.17.0.1:42002: remote error: tls: bad certificate
E0807 08:39:08.691664       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
I0807 08:39:08.691714       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E0807 08:40:08.697916       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
I0807 08:40:08.697970       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.

==> kube-controller-manager [ba55f84c040f] <==
W0807 08:34:15.224668       1 controllermanager.go:525] Skipping "route"
I0807 08:34:15.224682       1 tokencleaner.go:118] Starting token cleaner controller
I0807 08:34:15.224713       1 shared_informer.go:223] Waiting for caches to sync for token_cleaner
I0807 08:34:15.224737       1 shared_informer.go:230] Caches are synced for token_cleaner 
I0807 08:34:15.923308       1 controllermanager.go:533] Started "horizontalpodautoscaling"
I0807 08:34:15.923372       1 horizontal.go:169] Starting HPA controller
I0807 08:34:15.923402       1 shared_informer.go:223] Waiting for caches to sync for HPA
I0807 08:34:16.175135       1 controllermanager.go:533] Started "bootstrapsigner"
I0807 08:34:16.175225       1 shared_informer.go:223] Waiting for caches to sync for bootstrap_signer
I0807 08:34:16.426054       1 controllermanager.go:533] Started "attachdetach"
I0807 08:34:16.426148       1 attach_detach_controller.go:338] Starting attach detach controller
I0807 08:34:16.426187       1 shared_informer.go:223] Waiting for caches to sync for attach detach
I0807 08:34:16.674981       1 controllermanager.go:533] Started "daemonset"
I0807 08:34:16.675576       1 daemon_controller.go:257] Starting daemon sets controller
I0807 08:34:16.675617       1 shared_informer.go:223] Waiting for caches to sync for daemon sets
I0807 08:34:16.675976       1 shared_informer.go:223] Waiting for caches to sync for resource quota
W0807 08:34:16.712652       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0807 08:34:16.722857       1 shared_informer.go:230] Caches are synced for certificate-csrsigning 
I0807 08:34:16.725098       1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator 
I0807 08:34:16.725782       1 shared_informer.go:230] Caches are synced for expand 
I0807 08:34:16.737355       1 shared_informer.go:230] Caches are synced for job 
I0807 08:34:16.776158       1 shared_informer.go:230] Caches are synced for certificate-csrapproving 
I0807 08:34:16.776159       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
I0807 08:34:16.776165       1 shared_informer.go:230] Caches are synced for persistent volume 
I0807 08:34:16.776172       1 shared_informer.go:230] Caches are synced for taint 
I0807 08:34:16.777996       1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: 
W0807 08:34:16.778435       1 node_lifecycle_controller.go:1048] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0807 08:34:16.778549       1 node_lifecycle_controller.go:1199] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0807 08:34:16.778637       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"f93ac88f-45a8-45a4-b292-4d300d1e36c4", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
I0807 08:34:16.776200       1 shared_informer.go:230] Caches are synced for ReplicaSet 
I0807 08:34:16.778768       1 taint_manager.go:187] Starting NoExecuteTaintManager
I0807 08:34:16.776224       1 shared_informer.go:230] Caches are synced for PV protection 
I0807 08:34:16.776243       1 shared_informer.go:230] Caches are synced for deployment 
I0807 08:34:16.780667       1 shared_informer.go:230] Caches are synced for endpoint_slice 
I0807 08:34:16.776424       1 shared_informer.go:230] Caches are synced for daemon sets 
I0807 08:34:16.777369       1 shared_informer.go:230] Caches are synced for TTL 
I0807 08:34:16.787673       1 shared_informer.go:230] Caches are synced for PVC protection 
I0807 08:34:16.795394       1 shared_informer.go:230] Caches are synced for stateful set 
I0807 08:34:16.796431       1 shared_informer.go:230] Caches are synced for GC 
I0807 08:34:16.813376       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"metrics-server", UID:"d8660d07-2d0f-49e3-a2d1-0d0801da4954", APIVersion:"apps/v1", ResourceVersion:"257", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set metrics-server-7bc6d75975 to 1
I0807 08:34:16.818016       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"6f6cba00-31ba-48aa-b649-821389537bbb", APIVersion:"apps/v1", ResourceVersion:"238", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 1
I0807 08:34:16.825381       1 shared_informer.go:230] Caches are synced for ReplicationController 
I0807 08:34:16.836748       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"b0c77485-2ccf-4912-af62-3801b7d365f7", APIVersion:"apps/v1", ResourceVersion:"313", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-9dt6n
I0807 08:34:16.888362       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"bb52eaac-0a75-4bfd-8576-158557727cf3", APIVersion:"apps/v1", ResourceVersion:"223", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-5cj9s
I0807 08:34:16.923704       1 shared_informer.go:230] Caches are synced for HPA 
I0807 08:34:17.209433       1 shared_informer.go:230] Caches are synced for namespace 
I0807 08:34:17.224298       1 shared_informer.go:230] Caches are synced for disruption 
I0807 08:34:17.224355       1 disruption.go:339] Sending events to api server.
I0807 08:34:17.227074       1 shared_informer.go:230] Caches are synced for service account 
I0807 08:34:17.326509       1 shared_informer.go:230] Caches are synced for attach detach 
I0807 08:34:17.327821       1 shared_informer.go:230] Caches are synced for resource quota 
I0807 08:34:17.333443       1 shared_informer.go:230] Caches are synced for garbage collector 
I0807 08:34:17.333507       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0807 08:34:17.371750       1 shared_informer.go:230] Caches are synced for endpoint 
W0807 08:34:17.375770       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0807 08:34:17.376578       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
I0807 08:34:17.376692       1 shared_informer.go:230] Caches are synced for garbage collector 
I0807 08:34:17.376832       1 shared_informer.go:230] Caches are synced for resource quota 
I0807 08:34:18.265307       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-7bc6d75975", UID:"bb2607c7-331d-4108-bdae-af622d679b17", APIVersion:"apps/v1", ResourceVersion:"312", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-7bc6d75975-4z22q
I0807 08:34:31.780065       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.

==> kube-proxy [463762f7110a] <==
W0807 08:34:19.138669       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
I0807 08:34:19.145557       1 node.go:136] Successfully retrieved node IP: 172.17.0.2
I0807 08:34:19.145590       1 server_others.go:186] Using iptables Proxier.
W0807 08:34:19.145597       1 server_others.go:436] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
I0807 08:34:19.145602       1 server_others.go:447] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
I0807 08:34:19.145916       1 server.go:583] Version: v1.18.3
I0807 08:34:19.146501       1 conntrack.go:52] Setting nf_conntrack_max to 262144
I0807 08:34:19.146875       1 config.go:133] Starting endpoints config controller
I0807 08:34:19.146901       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
I0807 08:34:19.146898       1 config.go:315] Starting service config controller
I0807 08:34:19.146932       1 shared_informer.go:223] Waiting for caches to sync for service config
I0807 08:34:19.247246       1 shared_informer.go:230] Caches are synced for service config 
I0807 08:34:19.247292       1 shared_informer.go:230] Caches are synced for endpoints config 

==> kube-scheduler [5a4191dc1a33] <==
I0807 08:34:04.684323       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0807 08:34:04.684377       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0807 08:34:04.954749       1 serving.go:313] Generated self-signed cert in-memory
W0807 08:34:07.704146       1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0807 08:34:07.705356       1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0807 08:34:07.705377       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
W0807 08:34:07.705386       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0807 08:34:07.714523       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0807 08:34:07.714541       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
W0807 08:34:07.715686       1 authorization.go:47] Authorization is disabled
W0807 08:34:07.715697       1 authentication.go:40] Authentication is disabled
I0807 08:34:07.715705       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0807 08:34:07.717178       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0807 08:34:07.717303       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0807 08:34:07.717318       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0807 08:34:07.717334       1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0807 08:34:07.718354       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0807 08:34:07.718598       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0807 08:34:07.718939       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0807 08:34:07.719131       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0807 08:34:07.719192       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0807 08:34:07.719295       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0807 08:34:07.719371       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0807 08:34:07.719563       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0807 08:34:07.719803       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0807 08:34:08.654871       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0807 08:34:08.807789       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0807 08:34:08.854968       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0807 08:34:08.880696       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0807 08:34:11.517918       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
E0807 08:34:16.907751       1 factory.go:503] pod: kube-system/coredns-66bff467f8-9dt6n is already present in unschedulable queue

==> kubelet <==
-- Logs begin at Fri 2020-08-07 08:16:34 UTC, end at Fri 2020-08-07 08:40:33 UTC. --
Aug 07 08:34:17 minikube kubelet[15561]: I0807 08:34:17.508508   15561 desired_state_of_world_populator.go:139] Desired state populator starts to run
Aug 07 08:34:17 minikube kubelet[15561]: I0807 08:34:17.508950   15561 server.go:393] Adding debug handlers to kubelet server.
Aug 07 08:34:17 minikube kubelet[15561]: I0807 08:34:17.573748   15561 clientconn.go:106] parsed scheme: "unix"
Aug 07 08:34:17 minikube kubelet[15561]: I0807 08:34:17.573828   15561 clientconn.go:106] scheme "unix" not registered, fallback to default scheme
Aug 07 08:34:17 minikube kubelet[15561]: I0807 08:34:17.574043   15561 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
Aug 07 08:34:17 minikube kubelet[15561]: I0807 08:34:17.574255   15561 clientconn.go:933] ClientConn switching balancer to "pick_first"
Aug 07 08:34:17 minikube kubelet[15561]: I0807 08:34:17.592773   15561 status_manager.go:158] Starting to sync pod status with apiserver
Aug 07 08:34:17 minikube kubelet[15561]: I0807 08:34:17.592876   15561 kubelet.go:1821] Starting kubelet main sync loop.
Aug 07 08:34:17 minikube kubelet[15561]: E0807 08:34:17.592987   15561 kubelet.go:1845] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
Aug 07 08:34:17 minikube kubelet[15561]: I0807 08:34:17.606703   15561 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
Aug 07 08:34:17 minikube kubelet[15561]: I0807 08:34:17.643853   15561 kubelet_node_status.go:70] Attempting to register node minikube
Aug 07 08:34:17 minikube kubelet[15561]: I0807 08:34:17.654632   15561 kubelet_node_status.go:112] Node minikube was previously registered
Aug 07 08:34:17 minikube kubelet[15561]: I0807 08:34:17.654710   15561 kubelet_node_status.go:73] Successfully registered node minikube
Aug 07 08:34:17 minikube kubelet[15561]: E0807 08:34:17.693096   15561 kubelet.go:1845] skipping pod synchronization - container runtime status check may not have completed yet
Aug 07 08:34:17 minikube kubelet[15561]: I0807 08:34:17.745683   15561 cpu_manager.go:184] [cpumanager] starting with none policy
Aug 07 08:34:17 minikube kubelet[15561]: I0807 08:34:17.745697   15561 cpu_manager.go:185] [cpumanager] reconciling every 10s
Aug 07 08:34:17 minikube kubelet[15561]: I0807 08:34:17.745710   15561 state_mem.go:36] [cpumanager] initializing new in-memory state store
Aug 07 08:34:17 minikube kubelet[15561]: I0807 08:34:17.745861   15561 state_mem.go:88] [cpumanager] updated default cpuset: ""
Aug 07 08:34:17 minikube kubelet[15561]: I0807 08:34:17.745868   15561 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
Aug 07 08:34:17 minikube kubelet[15561]: I0807 08:34:17.745875   15561 policy_none.go:43] [cpumanager] none policy: Start
Aug 07 08:34:17 minikube kubelet[15561]: I0807 08:34:17.747139   15561 plugin_manager.go:114] Starting Kubelet Plugin Manager
Aug 07 08:34:17 minikube kubelet[15561]: I0807 08:34:17.893436   15561 topology_manager.go:233] [topologymanager] Topology Admit Handler
Aug 07 08:34:17 minikube kubelet[15561]: I0807 08:34:17.901045   15561 topology_manager.go:233] [topologymanager] Topology Admit Handler
Aug 07 08:34:17 minikube kubelet[15561]: I0807 08:34:17.905128   15561 topology_manager.go:233] [topologymanager] Topology Admit Handler
Aug 07 08:34:17 minikube kubelet[15561]: I0807 08:34:17.909443   15561 topology_manager.go:233] [topologymanager] Topology Admit Handler
Aug 07 08:34:17 minikube kubelet[15561]: I0807 08:34:17.910755   15561 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/26697bfbbac02de77868d7e47b99d36f-usr-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "26697bfbbac02de77868d7e47b99d36f")
Aug 07 08:34:17 minikube kubelet[15561]: I0807 08:34:17.910889   15561 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/8a9925b92c1bf68a9656aa86994b3aca-k8s-certs") pod "kube-controller-manager-minikube" (UID: "8a9925b92c1bf68a9656aa86994b3aca")
Aug 07 08:34:17 minikube kubelet[15561]: I0807 08:34:17.911142   15561 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/ca02679f24a416493e1c288b16539a55-etcd-data") pod "etcd-minikube" (UID: "ca02679f24a416493e1c288b16539a55")
Aug 07 08:34:17 minikube kubelet[15561]: I0807 08:34:17.911307   15561 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/26697bfbbac02de77868d7e47b99d36f-ca-certs") pod "kube-apiserver-minikube" (UID: "26697bfbbac02de77868d7e47b99d36f")
Aug 07 08:34:17 minikube kubelet[15561]: I0807 08:34:17.912964   15561 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/26697bfbbac02de77868d7e47b99d36f-k8s-certs") pod "kube-apiserver-minikube" (UID: "26697bfbbac02de77868d7e47b99d36f")
Aug 07 08:34:17 minikube kubelet[15561]: I0807 08:34:17.914163   15561 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/26697bfbbac02de77868d7e47b99d36f-usr-local-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "26697bfbbac02de77868d7e47b99d36f")
Aug 07 08:34:17 minikube kubelet[15561]: I0807 08:34:17.914332   15561 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/8a9925b92c1bf68a9656aa86994b3aca-ca-certs") pod "kube-controller-manager-minikube" (UID: "8a9925b92c1bf68a9656aa86994b3aca")
Aug 07 08:34:17 minikube kubelet[15561]: I0807 08:34:17.914475   15561 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/8a9925b92c1bf68a9656aa86994b3aca-etc-ca-certificates") pod "kube-controller-manager-minikube" (UID: "8a9925b92c1bf68a9656aa86994b3aca")
Aug 07 08:34:17 minikube kubelet[15561]: I0807 08:34:17.914653   15561 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/8a9925b92c1bf68a9656aa86994b3aca-flexvolume-dir") pod "kube-controller-manager-minikube" (UID: "8a9925b92c1bf68a9656aa86994b3aca")
Aug 07 08:34:17 minikube kubelet[15561]: I0807 08:34:17.914764   15561 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/ca02679f24a416493e1c288b16539a55-etcd-certs") pod "etcd-minikube" (UID: "ca02679f24a416493e1c288b16539a55")
Aug 07 08:34:17 minikube kubelet[15561]: I0807 08:34:17.914890   15561 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/26697bfbbac02de77868d7e47b99d36f-etc-ca-certificates") pod "kube-apiserver-minikube" (UID: "26697bfbbac02de77868d7e47b99d36f")
Aug 07 08:34:17 minikube kubelet[15561]: I0807 08:34:17.915940   15561 topology_manager.go:233] [topologymanager] Topology Admit Handler
Aug 07 08:34:18 minikube kubelet[15561]: I0807 08:34:18.015480   15561 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/8a9925b92c1bf68a9656aa86994b3aca-usr-local-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "8a9925b92c1bf68a9656aa86994b3aca")
Aug 07 08:34:18 minikube kubelet[15561]: I0807 08:34:18.015743   15561 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/8a9925b92c1bf68a9656aa86994b3aca-usr-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "8a9925b92c1bf68a9656aa86994b3aca")
Aug 07 08:34:18 minikube kubelet[15561]: I0807 08:34:18.015872   15561 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/8c280dd6-a774-40f2-bcd4-91706068808a-lib-modules") pod "kube-proxy-5cj9s" (UID: "8c280dd6-a774-40f2-bcd4-91706068808a")
Aug 07 08:34:18 minikube kubelet[15561]: I0807 08:34:18.016431   15561 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/8a9925b92c1bf68a9656aa86994b3aca-kubeconfig") pod "kube-controller-manager-minikube" (UID: "8a9925b92c1bf68a9656aa86994b3aca")
Aug 07 08:34:18 minikube kubelet[15561]: I0807 08:34:18.016668   15561 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/dcddbd0cc8c89e2cbf4de5d3cca8769f-kubeconfig") pod "kube-scheduler-minikube" (UID: "dcddbd0cc8c89e2cbf4de5d3cca8769f")
Aug 07 08:34:18 minikube kubelet[15561]: I0807 08:34:18.016895   15561 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/8c280dd6-a774-40f2-bcd4-91706068808a-kube-proxy") pod "kube-proxy-5cj9s" (UID: "8c280dd6-a774-40f2-bcd4-91706068808a")
Aug 07 08:34:18 minikube kubelet[15561]: I0807 08:34:18.018233   15561 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/8c280dd6-a774-40f2-bcd4-91706068808a-xtables-lock") pod "kube-proxy-5cj9s" (UID: "8c280dd6-a774-40f2-bcd4-91706068808a")
Aug 07 08:34:18 minikube kubelet[15561]: I0807 08:34:18.018353   15561 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-lr7zv" (UniqueName: "kubernetes.io/secret/8c280dd6-a774-40f2-bcd4-91706068808a-kube-proxy-token-lr7zv") pod "kube-proxy-5cj9s" (UID: "8c280dd6-a774-40f2-bcd4-91706068808a")
Aug 07 08:34:18 minikube kubelet[15561]: I0807 08:34:18.018561   15561 reconciler.go:157] Reconciler: start to sync state
Aug 07 08:34:18 minikube kubelet[15561]: W0807 08:34:18.829274   15561 pod_container_deletor.go:77] Container "23887d2bb44cbb7fe8f866013cb2a5c1fe0551cd1b109cd43756888a7ed1dbc9" not found in pod's containers
Aug 07 08:34:29 minikube kubelet[15561]: I0807 08:34:29.832954   15561 topology_manager.go:233] [topologymanager] Topology Admit Handler
Aug 07 08:34:29 minikube kubelet[15561]: I0807 08:34:29.861480   15561 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9d23af6b-64c6-4966-b16f-5ced06c0e455-config-volume") pod "coredns-66bff467f8-9dt6n" (UID: "9d23af6b-64c6-4966-b16f-5ced06c0e455")
Aug 07 08:34:29 minikube kubelet[15561]: I0807 08:34:29.861569   15561 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-wvwxq" (UniqueName: "kubernetes.io/secret/9d23af6b-64c6-4966-b16f-5ced06c0e455-coredns-token-wvwxq") pod "coredns-66bff467f8-9dt6n" (UID: "9d23af6b-64c6-4966-b16f-5ced06c0e455")
Aug 07 08:34:30 minikube kubelet[15561]: W0807 08:34:30.551805   15561 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-9dt6n through plugin: invalid network status for
Aug 07 08:34:30 minikube kubelet[15561]: W0807 08:34:30.938521   15561 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-9dt6n through plugin: invalid network status for
Aug 07 08:34:31 minikube kubelet[15561]: I0807 08:34:31.830420   15561 topology_manager.go:233] [topologymanager] Topology Admit Handler
Aug 07 08:34:31 minikube kubelet[15561]: I0807 08:34:31.868725   15561 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-rh9l9" (UniqueName: "kubernetes.io/secret/d3d11fe9-2953-47f7-8cc7-e79251d02c35-default-token-rh9l9") pod "metrics-server-7bc6d75975-4z22q" (UID: "d3d11fe9-2953-47f7-8cc7-e79251d02c35")
Aug 07 08:34:32 minikube kubelet[15561]: W0807 08:34:32.517023   15561 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/metrics-server-7bc6d75975-4z22q through plugin: invalid network status for
Aug 07 08:34:32 minikube kubelet[15561]: W0807 08:34:32.968903   15561 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/metrics-server-7bc6d75975-4z22q through plugin: invalid network status for
Aug 07 08:34:34 minikube kubelet[15561]: W0807 08:34:34.010547   15561 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/metrics-server-7bc6d75975-4z22q through plugin: invalid network status for
Aug 07 08:34:35 minikube kubelet[15561]: I0807 08:34:35.834070   15561 topology_manager.go:233] [topologymanager] Topology Admit Handler
Aug 07 08:34:35 minikube kubelet[15561]: I0807 08:34:35.880222   15561 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/b164bc43-63f5-4844-82a2-bd152a25b347-tmp") pod "storage-provisioner" (UID: "b164bc43-63f5-4844-82a2-bd152a25b347")
Aug 07 08:34:35 minikube kubelet[15561]: I0807 08:34:35.880434   15561 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-n5z42" (UniqueName: "kubernetes.io/secret/b164bc43-63f5-4844-82a2-bd152a25b347-storage-provisioner-token-n5z42") pod "storage-provisioner" (UID: "b164bc43-63f5-4844-82a2-bd152a25b347")

==> storage-provisioner [a364f86d3696] <==
I0807 08:34:36.575117       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
I0807 08:34:36.581994       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0807 08:34:36.582175       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_minikube_229bf056-cd45-4f6f-806e-f516bd7715da!
I0807 08:34:36.582209       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"595751f7-bc6b-4940-a492-719239f8e000", APIVersion:"v1", ResourceVersion:"421", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_229bf056-cd45-4f6f-806e-f516bd7715da became leader
I0807 08:34:36.682443       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_minikube_229bf056-cd45-4f6f-806e-f516bd7715da!
@medyagh (Member) commented Aug 12, 2020

@srilumpa thank you for reporting this. Does this happen consistently, or is it a flake?

I believe this is because the docker container IP was changed

❗  Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://172.17.0.2:8443/apis/storage.k8s.io/v1/storageclasses": x509: certificate is valid for 172.17.0.3, 10.96.0.1, 127.0.0.1, 10.0.0.1, not 172.17.0.2

I believe that once we have a static IP for the docker container, this error should no longer happen.
Do you mind verifying that this happens only as a flake, and that minikube delete fixes it?
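
For anyone hitting this, a quick way to confirm the mismatch is to compare the IP Docker currently assigns to the cluster container with the SANs baked into the apiserver certificate at creation time. This is a minimal sketch, assuming the default profile name minikube, that openssl is available in the node image, and the standard cert path /var/lib/minikube/certs/apiserver.crt:

# IP Docker assigned to the minikube container on this boot
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' minikube

# SANs in the apiserver certificate generated when the cluster was created;
# if the IP above is not listed here, the x509 error is expected
docker exec minikube openssl x509 -noout -text \
    -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'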

@medyagh medyagh changed the title Minikube fails to restart after proper stop docker: start fails on storage provisioner addon "x509: certificate is valid for 172.17.0.3" Aug 12, 2020
@medyagh medyagh added the kind/bug Categorizes issue or PR as related to a bug. label Aug 12, 2020
@medyagh (Member) commented Aug 12, 2020

This PR might fix this issue: #8764

However, I am still puzzled as to why we have not seen this issue more often, if it is indeed an IP-change problem.

@medyagh (Member) commented Aug 12, 2020

/triage needs-information
/triage support

@k8s-ci-robot k8s-ci-robot added triage/needs-information Indicates an issue needs more information in order to work on it. kind/support Categorizes issue or PR as a support question. labels Aug 12, 2020
@srilumpa (Author) commented

@medyagh, thank you for your time on this.

Yes, running minikube delete "fixes" it, and I am able to start a new minikube cluster properly again.
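
For reference, the workaround amounts to recreating the cluster so that fresh certificates are generated for whatever IP Docker assigns next:

# Workaround until the fix lands: recreate the cluster;
# the new apiserver certificate will match the newly assigned IP
minikube delete
minikube start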

@medyagh (Member) commented Aug 26, 2020

I believe this could be a result of the IP change after a minikube restart, which should be fixed by the static IP PR.
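
Conceptually, the static-IP approach pins the cluster container to a user-defined Docker network, where its address can be fixed and therefore survives stop/start, keeping the certificate SANs valid. A rough sketch of the underlying Docker mechanism (not minikube's exact implementation; the network name, subnet, and image placeholder here are illustrative):

# User-defined bridge network with a known subnet
docker network create --subnet=192.168.49.0/24 minikube-net

# --ip is only honored on user-defined networks; the address is stored in the
# container config, so it is kept across docker stop/start
docker run -d --name minikube --network minikube-net --ip 192.168.49.2 <kicbase-image>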

@medyagh (Member) commented Sep 16, 2020

The PR fixing this bug is still in progress.

@medyagh medyagh added priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. and removed triage/needs-information Indicates an issue needs more information in order to work on it. kind/support Categorizes issue or PR as a support question. labels Sep 16, 2020