Add warning for --network-plugin=cni (CNI has to be provided, see --cni) #8445

Closed
AurelienGasser opened this issue Jun 10, 2020 · 12 comments · Fixed by #9368
Labels
area/cni: CNI support
good first issue: Denotes an issue ready for a new contributor, according to the "help wanted" guidelines.
help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
kind/cleanup: Categorizes issue or PR as related to cleaning up code, process, or technical debt.
priority/backlog: Higher priority than priority/awaiting-more-evidence.

Comments

AurelienGasser commented Jun 10, 2020

Steps to reproduce the issue:

$ minikube version
minikube version: v1.11.0
commit: 57e2f55f47effe9ce396cea42a1e0eb4f611ebbd
$ minikube start \
    --cpus 6 \
    --memory 8192 \
    --disk-size 50g \
    --kubernetes-version="v1.16.5"  \
    --network-plugin=cni \
    --driver=docker  \
    --alsologtostderr

Full output of failed command:

The coredns pods stay in the ContainerCreating state forever.
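The stuck pods can be listed with a standard kubectl query; the label selector below assumes the default k8s-app=kube-dns label that CoreDNS pods normally carry:

$ kubectl -n kube-system get pods -l k8s-app=kube-dns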

kubectl describe output for the coredns pod:

Events:                                                                                      
  Type     Reason                  Age   From               Message                          
  ----     ------                  ----  ----               -------                          
  Warning  FailedScheduling        4s    default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
  Normal   Scheduled               2s    default-scheduler  Successfully assigned kube-system/coredns-5644d7b6d9-bs8j2 to minikube
  Warning  FailedCreatePodSandBox  0s    kubelet, minikube  Failed create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "01c6adcb4ffe7ab8c8373791d65c48e3eec7bcdc811ddb453f12a2f3c0d90139" network for pod "coredns-5644d7b6d9-bs8j2": networkPlugin cni failed to set up pod "coredns-5644d7b6d9-bs8j2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "01c6adcb4ffe7ab8c8373791d65c48e3eec7bcdc811ddb453f12a2f3c0d90139" network for pod "coredns-5644d7b6d9-bs8j2": networkPlugin cni failed to teardown pod "coredns-5644d7b6d9-bs8j2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.3 -j CNI-8b95f04587605305e55098fc -m comment --comment name: "crio-bridge" id: "01c6adcb4ffe7ab8c8373791d65c48e3eec7bcdc811ddb453f12a2f3c0d90139" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target `CNI-8b95f04587605305e55098fc':No such file or directory

Try `iptables -h' or 'iptables --help' for more information.
]
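
As the issue title notes, --network-plugin=cni does not by itself provide a CNI; one still has to be supplied, e.g. via the --cni flag. On minikube releases that support that flag, a sketch of an invocation that also names a CNI (calico here is only an example choice) would be:

$ minikube start \
    --kubernetes-version="v1.16.5" \
    --driver=docker \
    --network-plugin=cni \
    --cni=calico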

Full output of minikube start command used, if not already included:


I0610 15:51:15.674167 2332048 start.go:98] hostinfo: {"hostname":"aurelien-XPS-15-7590","uptime":703929,"bootTime":1591114746,"procs":746,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.4.0-33-generic","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"c05fbee0-7d07-44e2-a1f3-3e8b637e5a75"}
I0610 15:51:15.674945 2332048 start.go:108] virtualization: kvm host
😄  minikube v1.11.0 on Ubuntu 20.04
I0610 15:51:15.677445 2332048 driver.go:253] Setting default libvirt URI to qemu:///system
I0610 15:51:15.744881 2332048 docker.go:95] docker version: linux-19.03.8
✨  Using the docker driver based on user configuration
I0610 15:51:15.746302 2332048 start.go:214] selected driver: docker
I0610 15:51:15.746309 2332048 start.go:611] validating driver "docker" against <nil>
I0610 15:51:15.746314 2332048 start.go:617] status for docker: {Installed:true Healthy:true Error:<nil> Fix: Doc:}
I0610 15:51:15.746323 2332048 start.go:935] auto setting extra-config to "kubeadm.pod-network-cidr=10.244.0.0/16".
I0610 15:51:15.746412 2332048 start_flags.go:218] no existing cluster config was found, will generate one from the flags
I0610 15:51:15.746541 2332048 cli_runner.go:108] Run: docker system info --format "{{json .}}"
I0610 15:51:15.806183 2332048 start_flags.go:556] Wait components to verify : map[apiserver:true system_pods:true]
🆕  Kubernetes 1.18.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.18.3
👍  Starting control plane node minikube in cluster minikube
I0610 15:51:15.807912 2332048 cache.go:105] Beginning downloading kic artifacts for docker with docker
I0610 15:51:16.237413 2332048 image.go:88] Found gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 in local docker daemon, skipping pull
I0610 15:51:16.237433 2332048 preload.go:95] Checking if preload exists for k8s version v1.16.5 and runtime docker
I0610 15:51:16.237486 2332048 preload.go:103] Found local preload: /home/aurelien/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.16.5-docker-overlay2-amd64.tar.lz4
I0610 15:51:16.237490 2332048 cache.go:49] Caching tarball of preloaded images
I0610 15:51:16.237522 2332048 preload.go:129] Found /home/aurelien/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.16.5-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0610 15:51:16.237527 2332048 cache.go:52] Finished verifying existence of preloaded tar for  v1.16.5 on docker
I0610 15:51:16.237747 2332048 profile.go:156] Saving config to /home/aurelien/.minikube/profiles/minikube/config.json ...
I0610 15:51:16.237878 2332048 lock.go:35] WriteFile acquiring /home/aurelien/.minikube/profiles/minikube/config.json: {Name:mkd70d182c58d972f857a014e9806132c250fc0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0610 15:51:16.238088 2332048 cache.go:152] Successfully downloaded all kic artifacts
I0610 15:51:16.238126 2332048 start.go:240] acquiring machines lock for minikube: {Name:mk0ee58fa15a4d4d0f69ebb22520a64a3bfe9901 Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0610 15:51:16.238169 2332048 start.go:244] acquired machines lock for "minikube" in 36.137µs
I0610 15:51:16.238210 2332048 start.go:84] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:8192 CPUs:6 DiskSize:51200 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.16.5 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.5 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]} &{Name: IP: Port:8443 KubernetesVersion:v1.16.5 ControlPlane:true Worker:true}
I0610 15:51:16.238271 2332048 start.go:121] createHost starting for "" (driver="docker")
🔥  Creating docker container (CPUs=6, Memory=8192MB) ...
I0610 15:51:16.240832 2332048 start.go:157] libmachine.API.Create for "minikube" (driver="docker")
I0610 15:51:16.240850 2332048 client.go:161] LocalClient.Create starting
I0610 15:51:16.240873 2332048 main.go:110] libmachine: Reading certificate data from /home/aurelien/.minikube/certs/ca.pem
I0610 15:51:16.240894 2332048 main.go:110] libmachine: Decoding PEM data...
I0610 15:51:16.240906 2332048 main.go:110] libmachine: Parsing certificate...
I0610 15:51:16.240986 2332048 main.go:110] libmachine: Reading certificate data from /home/aurelien/.minikube/certs/cert.pem
I0610 15:51:16.240999 2332048 main.go:110] libmachine: Decoding PEM data...
I0610 15:51:16.241007 2332048 main.go:110] libmachine: Parsing certificate...
I0610 15:51:16.241225 2332048 cli_runner.go:108] Run: docker ps -a --format {{.Names}}
I0610 15:51:16.379686 2332048 cli_runner.go:108] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0610 15:51:16.444645 2332048 oci.go:98] Successfully created a docker volume minikube
W0610 15:51:16.444695 2332048 oci.go:158] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0610 15:51:16.444758 2332048 preload.go:95] Checking if preload exists for k8s version v1.16.5 and runtime docker
I0610 15:51:16.444979 2332048 cli_runner.go:108] Run: docker info --format "'{{json .SecurityOptions}}'"
I0610 15:51:16.444992 2332048 preload.go:103] Found local preload: /home/aurelien/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.16.5-docker-overlay2-amd64.tar.lz4
I0610 15:51:16.444997 2332048 kic.go:134] Starting extracting preloaded images to volume ...
I0610 15:51:16.445020 2332048 cli_runner.go:108] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/aurelien/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.16.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 -I lz4 -xvf /preloaded.tar -C /extractDir
I0610 15:51:16.502903 2332048 cli_runner.go:108] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --security-opt apparmor=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --volume minikube:/var --cpus=6 --memory=8192mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438
I0610 15:51:16.909656 2332048 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Running}}
I0610 15:51:16.961939 2332048 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Status}}
I0610 15:51:17.033339 2332048 oci.go:212] the created container "minikube" has a running status.
I0610 15:51:17.033363 2332048 kic.go:162] Creating ssh key for kic: /home/aurelien/.minikube/machines/minikube/id_rsa...
I0610 15:51:17.413016 2332048 kic_runner.go:179] docker (temp): /home/aurelien/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0610 15:51:17.634743 2332048 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0610 15:51:17.634760 2332048 kic_runner.go:114] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0610 15:51:19.012088 2332048 cli_runner.go:150] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/aurelien/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.16.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 -I lz4 -xvf /preloaded.tar -C /extractDir: (2.567044009s)
I0610 15:51:19.012170 2332048 kic.go:139] duration metric: took 2.567148 seconds to extract preloaded images to volume
I0610 15:51:19.012277 2332048 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Status}}
I0610 15:51:19.064609 2332048 machine.go:88] provisioning docker machine ...
I0610 15:51:19.064664 2332048 ubuntu.go:166] provisioning hostname "minikube"
I0610 15:51:19.064715 2332048 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0610 15:51:19.118633 2332048 main.go:110] libmachine: Using SSH client type: native
I0610 15:51:19.118836 2332048 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf820] 0x7bf7f0 <nil>  [] 0s} 127.0.0.1 32800 <nil> <nil>}
I0610 15:51:19.118869 2332048 main.go:110] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0610 15:51:19.238614 2332048 main.go:110] libmachine: SSH cmd err, output: <nil>: minikube

I0610 15:51:19.238657 2332048 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0610 15:51:19.296240 2332048 main.go:110] libmachine: Using SSH client type: native
I0610 15:51:19.296408 2332048 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf820] 0x7bf7f0 <nil>  [] 0s} 127.0.0.1 32800 <nil> <nil>}
I0610 15:51:19.296422 2332048 main.go:110] libmachine: About to run SSH command:

                if ! grep -xq '.*\sminikube' /etc/hosts; then
                        if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                                sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
                        else
                                echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
                        fi
                fi
I0610 15:51:19.397275 2332048 main.go:110] libmachine: SSH cmd err, output: <nil>:
I0610 15:51:19.397313 2332048 ubuntu.go:172] set auth options {CertDir:/home/aurelien/.minikube CaCertPath:/home/aurelien/.minikube/certs/ca.pem CaPrivateKeyPath:/home/aurelien/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/aurelien/.minikube/machines/server.pem ServerKeyPath:/home/aurelien/.minikube/machines/server-key.pem ClientKeyPath:/home/aurelien/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/aurelien/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/aurelien/.minikube}
I0610 15:51:19.397350 2332048 ubuntu.go:174] setting up certificates
I0610 15:51:19.397356 2332048 provision.go:82] configureAuth start
I0610 15:51:19.397395 2332048 cli_runner.go:108] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0610 15:51:19.447667 2332048 provision.go:131] copyHostCerts
I0610 15:51:19.447741 2332048 exec_runner.go:91] found /home/aurelien/.minikube/key.pem, removing ...
I0610 15:51:19.448172 2332048 exec_runner.go:98] cp: /home/aurelien/.minikube/certs/key.pem --> /home/aurelien/.minikube/key.pem (1679 bytes)
I0610 15:51:19.448287 2332048 exec_runner.go:91] found /home/aurelien/.minikube/ca.pem, removing ...
I0610 15:51:19.448326 2332048 exec_runner.go:98] cp: /home/aurelien/.minikube/certs/ca.pem --> /home/aurelien/.minikube/ca.pem (1042 bytes)
I0610 15:51:19.448372 2332048 exec_runner.go:91] found /home/aurelien/.minikube/cert.pem, removing ...
I0610 15:51:19.448387 2332048 exec_runner.go:98] cp: /home/aurelien/.minikube/certs/cert.pem --> /home/aurelien/.minikube/cert.pem (1082 bytes)
I0610 15:51:19.448410 2332048 provision.go:105] generating server cert: /home/aurelien/.minikube/machines/server.pem ca-key=/home/aurelien/.minikube/certs/ca.pem private-key=/home/aurelien/.minikube/certs/ca-key.pem org=aurelien.minikube san=[172.17.0.3 localhost 127.0.0.1]
I0610 15:51:19.520451 2332048 provision.go:159] copyRemoteCerts
I0610 15:51:19.520485 2332048 ssh_runner.go:148] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0610 15:51:19.520515 2332048 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0610 15:51:19.570761 2332048 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32800 SSHKeyPath:/home/aurelien/.minikube/machines/minikube/id_rsa Username:docker}
I0610 15:51:19.648129 2332048 ssh_runner.go:215] scp /home/aurelien/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0610 15:51:19.658058 2332048 ssh_runner.go:215] scp /home/aurelien/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1042 bytes)
I0610 15:51:19.668225 2332048 ssh_runner.go:215] scp /home/aurelien/.minikube/machines/server.pem --> /etc/docker/server.pem (1123 bytes)
I0610 15:51:19.678824 2332048 provision.go:85] duration metric: configureAuth took 281.409765ms
I0610 15:51:19.678905 2332048 ubuntu.go:190] setting minikube options for container-runtime
I0610 15:51:19.679066 2332048 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0610 15:51:19.737933 2332048 main.go:110] libmachine: Using SSH client type: native
I0610 15:51:19.738132 2332048 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf820] 0x7bf7f0 <nil>  [] 0s} 127.0.0.1 32800 <nil> <nil>}
I0610 15:51:19.738165 2332048 main.go:110] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0610 15:51:19.841869 2332048 main.go:110] libmachine: SSH cmd err, output: <nil>: overlay

I0610 15:51:19.841919 2332048 ubuntu.go:71] root file system type: overlay
I0610 15:51:19.842020 2332048 provision.go:290] Updating docker unit: /lib/systemd/system/docker.service ...
I0610 15:51:19.842146 2332048 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0610 15:51:19.903487 2332048 main.go:110] libmachine: Using SSH client type: native
I0610 15:51:19.903633 2332048 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf820] 0x7bf7f0 <nil>  [] 0s} 127.0.0.1 32800 <nil> <nil>}
I0610 15:51:19.903728 2332048 main.go:110] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0610 15:51:20.014179 2332048 main.go:110] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0610 15:51:20.014317 2332048 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0610 15:51:20.063507 2332048 main.go:110] libmachine: Using SSH client type: native
I0610 15:51:20.063603 2332048 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf820] 0x7bf7f0 <nil>  [] 0s} 127.0.0.1 32800 <nil> <nil>}
I0610 15:51:20.063616 2332048 main.go:110] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0610 15:51:20.447077 2332048 main.go:110] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service    2019-08-29 04:42:14.000000000 +0000
+++ /lib/systemd/system/docker.service.new      2020-06-10 19:51:20.010499319 +0000
@@ -8,24 +8,22 @@

 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP

 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -33,9 +31,10 @@
 LimitNPROC=infinity
 LimitCORE=infinity

-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0

 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes

I0610 15:51:20.447130 2332048 machine.go:91] provisioned docker machine in 1.382506579s
I0610 15:51:20.447137 2332048 client.go:164] LocalClient.Create took 4.206281605s
I0610 15:51:20.447144 2332048 start.go:162] duration metric: libmachine.API.Create for "minikube" took 4.206312337s
I0610 15:51:20.447148 2332048 start.go:203] post-start starting for "minikube" (driver="docker")
I0610 15:51:20.447152 2332048 start.go:213] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0610 15:51:20.447282 2332048 ssh_runner.go:148] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0610 15:51:20.447309 2332048 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0610 15:51:20.497537 2332048 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32800 SSHKeyPath:/home/aurelien/.minikube/machines/minikube/id_rsa Username:docker}
I0610 15:51:20.570975 2332048 ssh_runner.go:148] Run: cat /etc/os-release
I0610 15:51:20.572874 2332048 main.go:110] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0610 15:51:20.572910 2332048 main.go:110] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0610 15:51:20.572921 2332048 main.go:110] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0610 15:51:20.572927 2332048 info.go:96] Remote host: Ubuntu 19.10
I0610 15:51:20.572955 2332048 filesync.go:118] Scanning /home/aurelien/.minikube/addons for local assets ...
I0610 15:51:25.692404 2332048 filesync.go:118] Scanning /home/aurelien/.minikube/files for local assets ...
I0610 15:51:25.692470 2332048 start.go:206] post-start completed in 5.245316177s
I0610 15:51:25.692879 2332048 cli_runner.go:108] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0610 15:51:25.752442 2332048 profile.go:156] Saving config to /home/aurelien/.minikube/profiles/minikube/config.json ...
I0610 15:51:25.752729 2332048 start.go:124] duration metric: createHost completed in 9.514424363s
I0610 15:51:25.752762 2332048 start.go:75] releasing machines lock for "minikube", held for 9.514565972s
I0610 15:51:25.752875 2332048 cli_runner.go:108] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0610 15:51:25.804564 2332048 ssh_runner.go:148] Run: systemctl --version
I0610 15:51:25.804600 2332048 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0610 15:51:25.804612 2332048 ssh_runner.go:148] Run: curl -sS -m 2 https://k8s.gcr.io/
I0610 15:51:25.804717 2332048 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0610 15:51:25.857713 2332048 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32800 SSHKeyPath:/home/aurelien/.minikube/machines/minikube/id_rsa Username:docker}
I0610 15:51:25.859070 2332048 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32800 SSHKeyPath:/home/aurelien/.minikube/machines/minikube/id_rsa Username:docker}
I0610 15:51:25.933428 2332048 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I0610 15:51:25.941358 2332048 cruntime.go:189] skipping containerd shutdown because we are bound to it
I0610 15:51:25.941426 2332048 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service crio
I0610 15:51:25.948024 2332048 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I0610 15:51:25.983683 2332048 ssh_runner.go:148] Run: sudo systemctl start docker
I0610 15:51:25.990650 2332048 ssh_runner.go:148] Run: docker version --format {{.Server.Version}}
🐳  Preparing Kubernetes v1.16.5 on Docker 19.03.2 ...
I0610 15:51:26.051113 2332048 cli_runner.go:108] Run: docker network ls --filter name=bridge --format {{.ID}}
I0610 15:51:26.099914 2332048 cli_runner.go:108] Run: docker network inspect --format "{{(index .IPAM.Config 0).Gateway}}" 2833faae6943
I0610 15:51:26.146986 2332048 network.go:77] got host ip for mount in container by inspect docker network: 172.17.0.1
I0610 15:51:26.147001 2332048 start.go:268] checking
I0610 15:51:26.147037 2332048 ssh_runner.go:148] Run: grep 172.17.0.1   host.minikube.internal$ /etc/hosts
I0610 15:51:26.148897 2332048 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "172.17.0.1       host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
    ▪ kubeadm.pod-network-cidr=10.244.0.0/16
I0610 15:51:26.157041 2332048 preload.go:95] Checking if preload exists for k8s version v1.16.5 and runtime docker
I0610 15:51:26.157064 2332048 preload.go:103] Found local preload: /home/aurelien/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.16.5-docker-overlay2-amd64.tar.lz4
I0610 15:51:26.157108 2332048 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0610 15:51:26.185695 2332048 docker.go:379] Got preloaded images: -- stdout --
kubernetesui/dashboard:v2.0.0
k8s.gcr.io/kube-controller-manager:v1.16.5
k8s.gcr.io/kube-scheduler:v1.16.5
k8s.gcr.io/kube-proxy:v1.16.5
k8s.gcr.io/kube-apiserver:v1.16.5
kubernetesui/metrics-scraper:v1.0.2
k8s.gcr.io/etcd:3.3.15-0
k8s.gcr.io/coredns:1.6.2
k8s.gcr.io/pause:3.1
gcr.io/k8s-minikube/storage-provisioner:v1.8.1

-- /stdout --
I0610 15:51:26.185734 2332048 docker.go:317] Images already preloaded, skipping extraction
I0610 15:51:26.185761 2332048 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0610 15:51:26.214259 2332048 docker.go:379] Got preloaded images: -- stdout --
kubernetesui/dashboard:v2.0.0
k8s.gcr.io/kube-scheduler:v1.16.5
k8s.gcr.io/kube-apiserver:v1.16.5
k8s.gcr.io/kube-controller-manager:v1.16.5
k8s.gcr.io/kube-proxy:v1.16.5
kubernetesui/metrics-scraper:v1.0.2
k8s.gcr.io/etcd:3.3.15-0
k8s.gcr.io/coredns:1.6.2
k8s.gcr.io/pause:3.1
gcr.io/k8s-minikube/storage-provisioner:v1.8.1

-- /stdout --
I0610 15:51:26.214279 2332048 cache_images.go:69] Images are preloaded, skipping loading
I0610 15:51:26.214595 2332048 kubeadm.go:124] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.3 APIServerPort:8443 KubernetesVersion:v1.16.5 EtcdDataDir:/var/lib/minikube/etcd ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.3"]]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.3 ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0610 15:51:26.214766 2332048 kubeadm.go:128] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.0.3
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 172.17.0.3
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "172.17.0.3"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs
clusterName: minikube
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      listen-metrics-urls: http://127.0.0.1:2381,http://172.17.0.3:2381
kubernetesVersion: v1.16.5
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 172.17.0.3:10249

I0610 15:51:26.215018 2332048 ssh_runner.go:148] Run: docker info --format {{.CgroupDriver}}
I0610 15:51:26.246729 2332048 kubeadm.go:755] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.16.5/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-domain=cluster.local --config=/var/lib/kubelet/config.yaml --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=172.17.0.3 --pod-manifest-path=/etc/kubernetes/manifests

[Install]
 config:
{KubernetesVersion:v1.16.5 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:}
I0610 15:51:26.246817 2332048 ssh_runner.go:148] Run: sudo ls /var/lib/minikube/binaries/v1.16.5
I0610 15:51:26.251358 2332048 binaries.go:43] Found k8s binaries, skipping transfer
I0610 15:51:26.251433 2332048 ssh_runner.go:148] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0610 15:51:26.255139 2332048 ssh_runner.go:215] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (553 bytes)
I0610 15:51:26.264387 2332048 ssh_runner.go:215] scp memory --> /lib/systemd/system/kubelet.service (349 bytes)
I0610 15:51:26.273773 2332048 ssh_runner.go:215] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1551 bytes)
I0610 15:51:26.283604 2332048 start.go:268] checking
I0610 15:51:26.283691 2332048 ssh_runner.go:148] Run: grep 172.17.0.3   control-plane.minikube.internal$ /etc/hosts
I0610 15:51:26.285239 2332048 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "172.17.0.3      control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0610 15:51:26.290688 2332048 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I0610 15:51:26.337417 2332048 ssh_runner.go:148] Run: sudo systemctl start kubelet
I0610 15:51:26.345564 2332048 certs.go:52] Setting up /home/aurelien/.minikube/profiles/minikube for IP: 172.17.0.3
I0610 15:51:26.345604 2332048 certs.go:169] skipping minikubeCA CA generation: /home/aurelien/.minikube/ca.key
I0610 15:51:26.345623 2332048 certs.go:169] skipping proxyClientCA CA generation: /home/aurelien/.minikube/proxy-client-ca.key
I0610 15:51:26.345672 2332048 certs.go:273] generating minikube-user signed cert: /home/aurelien/.minikube/profiles/minikube/client.key
I0610 15:51:26.345680 2332048 crypto.go:69] Generating cert /home/aurelien/.minikube/profiles/minikube/client.crt with IP's: []
I0610 15:51:26.430398 2332048 crypto.go:157] Writing cert to /home/aurelien/.minikube/profiles/minikube/client.crt ...
I0610 15:51:26.430416 2332048 lock.go:35] WriteFile acquiring /home/aurelien/.minikube/profiles/minikube/client.crt: {Name:mkf9533d637068707a83d7570062997171d4356e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0610 15:51:26.430513 2332048 crypto.go:165] Writing key to /home/aurelien/.minikube/profiles/minikube/client.key ...
I0610 15:51:26.430519 2332048 lock.go:35] WriteFile acquiring /home/aurelien/.minikube/profiles/minikube/client.key: {Name:mk63becf07a08e2aa7beaff2e2efc2d3028a5a67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0610 15:51:26.430565 2332048 certs.go:273] generating minikube signed cert: /home/aurelien/.minikube/profiles/minikube/apiserver.key.0f3e66d0
I0610 15:51:26.430570 2332048 crypto.go:69] Generating cert /home/aurelien/.minikube/profiles/minikube/apiserver.crt.0f3e66d0 with IP's: [172.17.0.3 10.96.0.1 127.0.0.1 10.0.0.1]
I0610 15:51:26.565657 2332048 crypto.go:157] Writing cert to /home/aurelien/.minikube/profiles/minikube/apiserver.crt.0f3e66d0 ...
I0610 15:51:26.565675 2332048 lock.go:35] WriteFile acquiring /home/aurelien/.minikube/profiles/minikube/apiserver.crt.0f3e66d0: {Name:mk231f34d1c8d191e520b39860219dc4029b42ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0610 15:51:26.565765 2332048 crypto.go:165] Writing key to /home/aurelien/.minikube/profiles/minikube/apiserver.key.0f3e66d0 ...
I0610 15:51:26.565772 2332048 lock.go:35] WriteFile acquiring /home/aurelien/.minikube/profiles/minikube/apiserver.key.0f3e66d0: {Name:mk85bbf77c9e9b1106fea12984d17e0f9871d46a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0610 15:51:26.565815 2332048 certs.go:284] copying /home/aurelien/.minikube/profiles/minikube/apiserver.crt.0f3e66d0 -> /home/aurelien/.minikube/profiles/minikube/apiserver.crt
I0610 15:51:26.565847 2332048 certs.go:288] copying /home/aurelien/.minikube/profiles/minikube/apiserver.key.0f3e66d0 -> /home/aurelien/.minikube/profiles/minikube/apiserver.key
I0610 15:51:26.565899 2332048 certs.go:273] generating aggregator signed cert: /home/aurelien/.minikube/profiles/minikube/proxy-client.key
I0610 15:51:26.565904 2332048 crypto.go:69] Generating cert /home/aurelien/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0610 15:51:26.646515 2332048 crypto.go:157] Writing cert to /home/aurelien/.minikube/profiles/minikube/proxy-client.crt ...
I0610 15:51:26.646531 2332048 lock.go:35] WriteFile acquiring /home/aurelien/.minikube/profiles/minikube/proxy-client.crt: {Name:mkd702074be9bc1a9c0e9f347e58877d21d15cdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0610 15:51:26.646702 2332048 crypto.go:165] Writing key to /home/aurelien/.minikube/profiles/minikube/proxy-client.key ...
I0610 15:51:26.646709 2332048 lock.go:35] WriteFile acquiring /home/aurelien/.minikube/profiles/minikube/proxy-client.key: {Name:mk33582d0cea0dfe3bb8901a4e0f5679bd5afafd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0610 15:51:26.646898 2332048 certs.go:348] found cert: /home/aurelien/.minikube/certs/home/aurelien/.minikube/certs/ca-key.pem (1679 bytes)
I0610 15:51:26.646920 2332048 certs.go:348] found cert: /home/aurelien/.minikube/certs/home/aurelien/.minikube/certs/ca.pem (1042 bytes)
I0610 15:51:26.646965 2332048 certs.go:348] found cert: /home/aurelien/.minikube/certs/home/aurelien/.minikube/certs/cert.pem (1082 bytes)
I0610 15:51:26.646983 2332048 certs.go:348] found cert: /home/aurelien/.minikube/certs/home/aurelien/.minikube/certs/key.pem (1679 bytes)
I0610 15:51:26.647443 2332048 ssh_runner.go:215] scp /home/aurelien/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1350 bytes)
I0610 15:51:26.658661 2332048 ssh_runner.go:215] scp /home/aurelien/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0610 15:51:26.669150 2332048 ssh_runner.go:215] scp /home/aurelien/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1103 bytes)
I0610 15:51:26.681360 2332048 ssh_runner.go:215] scp /home/aurelien/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0610 15:51:26.693783 2332048 ssh_runner.go:215] scp /home/aurelien/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1066 bytes)
I0610 15:51:26.705465 2332048 ssh_runner.go:215] scp /home/aurelien/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0610 15:51:26.717829 2332048 ssh_runner.go:215] scp /home/aurelien/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1074 bytes)
I0610 15:51:26.730010 2332048 ssh_runner.go:215] scp /home/aurelien/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0610 15:51:26.741488 2332048 ssh_runner.go:215] scp /home/aurelien/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1066 bytes)
I0610 15:51:26.753402 2332048 ssh_runner.go:215] scp memory --> /var/lib/minikube/kubeconfig (392 bytes)
I0610 15:51:26.764131 2332048 ssh_runner.go:148] Run: openssl version
I0610 15:51:26.767454 2332048 ssh_runner.go:148] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0610 15:51:26.772353 2332048 ssh_runner.go:148] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0610 15:51:26.774397 2332048 certs.go:389] hashing: -rw-r--r-- 1 root root 1066 Oct  1  2019 /usr/share/ca-certificates/minikubeCA.pem
I0610 15:51:26.774452 2332048 ssh_runner.go:148] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0610 15:51:26.777771 2332048 ssh_runner.go:148] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0610 15:51:26.782255 2332048 kubeadm.go:293] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:8192 CPUs:6 DiskSize:51200 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.16.5 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.3 Port:8443 KubernetesVersion:v1.16.5 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]}
I0610 15:51:26.782361 2332048 ssh_runner.go:148] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0610 15:51:26.818770 2332048 ssh_runner.go:148] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0610 15:51:26.824874 2332048 ssh_runner.go:148] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0610 15:51:26.830978 2332048 kubeadm.go:211] ignoring SystemVerification for kubeadm because of docker driver
I0610 15:51:26.831018 2332048 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0610 15:51:26.835847 2332048 kubeadm.go:147] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0610 15:51:26.835869 2332048 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.16.5:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0610 15:51:39.030900 2332048 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.16.5:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": (12.194952758s)
I0610 15:51:39.030950 2332048 ssh_runner.go:148] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0610 15:51:39.031062 2332048 ssh_runner.go:148] Run: sudo /var/lib/minikube/binaries/v1.16.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0610 15:51:39.031073 2332048 ssh_runner.go:148] Run: sudo /var/lib/minikube/binaries/v1.16.5/kubectl label nodes minikube.k8s.io/version=v1.11.0 minikube.k8s.io/commit=57e2f55f47effe9ce396cea42a1e0eb4f611ebbd minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2020_06_10T15_51_39_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0610 15:51:39.036115 2332048 ops.go:35] apiserver oom_adj: -16
I0610 15:51:39.288932 2332048 kubeadm.go:890] duration metric: took 257.871458ms to wait for elevateKubeSystemPrivileges.
I0610 15:51:39.296259 2332048 kubeadm.go:295] StartCluster complete in 12.514004639s
I0610 15:51:39.296284 2332048 settings.go:123] acquiring lock: {Name:mka38e6d98ba4b3ca12911b7730c655289c87d20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0610 15:51:39.296378 2332048 settings.go:131] Updating kubeconfig:  /home/aurelien/.kube/config
I0610 15:51:39.323028 2332048 lock.go:35] WriteFile acquiring /home/aurelien/.kube/config: {Name:mk6624bc5cc76fcb30beb1c7182a4f459370260e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
🔎  Verifying Kubernetes components...
I0610 15:51:39.323593 2332048 addons.go:320] enableAddons start: toEnable=map[], additional=[]
I0610 15:51:39.327695 2332048 addons.go:50] Setting storage-provisioner=true in profile "minikube"
I0610 15:51:39.327753 2332048 addons.go:126] Setting addon storage-provisioner=true in "minikube"
W0610 15:51:39.327759 2332048 addons.go:135] addon storage-provisioner should already be in state true
I0610 15:51:39.327792 2332048 addons.go:50] Setting default-storageclass=true in profile "minikube"
I0610 15:51:39.327796 2332048 host.go:65] Checking if "minikube" exists ...
I0610 15:51:39.327810 2332048 addons.go:266] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0610 15:51:39.328016 2332048 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Status}}
I0610 15:51:39.328157 2332048 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Status}}
I0610 15:51:39.341545 2332048 api_server.go:47] waiting for apiserver process to appear ...
I0610 15:51:39.341608 2332048 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0610 15:51:39.347033 2332048 api_server.go:67] duration metric: took 23.533119ms to wait for apiserver process to appear ...
I0610 15:51:39.347053 2332048 api_server.go:83] waiting for apiserver healthz status ...
I0610 15:51:39.347061 2332048 api_server.go:193] Checking apiserver healthz at https://172.17.0.3:8443/healthz ...
I0610 15:51:39.350921 2332048 api_server.go:213] https://172.17.0.3:8443/healthz returned 200:
ok
I0610 15:51:39.356574 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:39.356603 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:51:39.387263 2332048 addons.go:233] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0610 15:51:39.387278 2332048 ssh_runner.go:215] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (1709 bytes)
I0610 15:51:39.387320 2332048 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0610 15:51:39.398646 2332048 addons.go:126] Setting addon default-storageclass=true in "minikube"
W0610 15:51:39.398661 2332048 addons.go:135] addon default-storageclass should already be in state true
I0610 15:51:39.398672 2332048 host.go:65] Checking if "minikube" exists ...
I0610 15:51:39.399260 2332048 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Status}}
I0610 15:51:39.452372 2332048 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32800 SSHKeyPath:/home/aurelien/.minikube/machines/minikube/id_rsa Username:docker}
I0610 15:51:39.461813 2332048 addons.go:233] installing /etc/kubernetes/addons/storageclass.yaml
I0610 15:51:39.461874 2332048 ssh_runner.go:215] scp deploy/addons/storageclass/storageclass.yaml.tmpl --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0610 15:51:39.461966 2332048 cli_runner.go:108] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0610 15:51:39.526902 2332048 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32800 SSHKeyPath:/home/aurelien/.minikube/machines/minikube/id_rsa Username:docker}
I0610 15:51:39.534106 2332048 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0610 15:51:39.611358 2332048 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0610 15:51:39.857221 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:39.857239 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:51:40.357373 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:40.357414 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:51:40.857264 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:40.857281 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:51:41.357693 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:41.357711 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:51:41.682876 2332048 ssh_runner.go:188] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.148749779s)
🌟  Enabled addons: default-storageclass, storage-provisioner
I0610 15:51:41.685873 2332048 addons.go:322] enableAddons completed in 2.362283626s
I0610 15:51:41.857279 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:41.857297 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:51:42.357665 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:42.357704 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:51:42.857577 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:42.857626 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:51:43.357501 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:43.357541 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:51:43.857336 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:43.857360 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:51:44.357508 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:44.357525 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:51:44.857252 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:44.857270 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:51:45.357714 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:45.357730 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:51:45.857665 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:45.857680 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:51:46.357702 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:46.357741 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:51:46.857509 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:46.857562 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:51:47.357755 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:47.357796 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:51:47.857574 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:47.857613 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:51:48.357551 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:48.357610 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:51:48.857515 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:48.857604 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:51:49.357639 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:49.357689 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:51:49.857444 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:49.857503 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:51:50.357732 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:50.357748 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:51:50.857415 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:50.857455 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:51:51.357493 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:51.357529 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:51:51.857346 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:51.857388 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:51:52.357564 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:52.357606 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:51:52.857213 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:52.857231 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:51:53.357456 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:53.357498 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:51:53.857397 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:53.857414 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:51:54.357392 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:54.357434 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:51:54.857312 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:54.857329 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:51:55.357235 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:55.357274 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:51:55.857171 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:55.857187 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:51:56.357372 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:56.357389 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:51:56.857311 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:56.857329 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:51:57.357474 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:57.357497 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:51:57.857338 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:57.857396 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:51:58.357487 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:58.357551 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:51:58.857746 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:58.857772 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:51:59.357472 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:59.357487 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:51:59.857279 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:51:59.857320 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:00.357489 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:00.357530 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:00.857250 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:00.857298 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:01.357254 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:01.357281 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:01.857490 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:01.857531 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:02.357445 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:02.357483 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:02.857286 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:02.857302 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:03.357301 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:03.357317 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:03.857593 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:03.857609 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:04.357243 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:04.357259 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:04.857669 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:04.857692 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:05.357547 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:05.357588 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:05.857445 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:05.857464 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:06.357395 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:06.357419 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:06.857378 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:06.857395 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:07.357217 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:07.357233 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:07.857393 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:07.857434 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:08.357536 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:08.357597 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:08.857501 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:08.857542 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:09.357432 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:09.357492 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:09.857702 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:09.857718 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:10.357231 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:10.357250 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:10.857540 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:10.857558 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:11.357249 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:11.357265 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:11.857786 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:11.857803 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:12.357237 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:12.357254 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:12.857381 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:12.857397 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:13.357503 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:13.357518 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:13.857541 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:13.857557 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:14.357500 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:14.357529 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:14.857683 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:14.857699 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:15.357305 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:15.357321 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:15.857627 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:15.857644 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:16.357280 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:16.357339 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:16.857626 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:16.857649 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:17.357292 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:17.357333 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:17.857512 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:17.857528 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:18.358230 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:18.358284 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:18.857427 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:18.857448 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:19.357359 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:19.357377 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:19.858760 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:19.858879 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:20.357220 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:20.357236 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:20.857376 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:20.857395 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:21.357364 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:21.357420 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:21.857413 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:21.857428 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:22.357875 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:22.357916 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:22.857209 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:22.857246 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:23.357390 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:23.357406 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:23.858371 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:23.858462 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:24.358319 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:24.358397 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:24.857346 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:24.857370 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:25.357442 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:25.357458 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:25.857624 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:25.857640 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:26.357304 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:26.357323 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:26.857614 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:26.857652 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:27.357480 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:27.357524 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:27.857337 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:27.857354 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:28.357157 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:28.357174 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:28.857464 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:28.857481 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:29.357452 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:29.357494 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:29.857692 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:29.857733 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:30.357514 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:30.357552 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:30.857392 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:30.857411 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:31.357693 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:31.357729 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:31.857442 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:31.857482 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:32.357648 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:32.357665 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:32.857235 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:32.857252 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:33.357432 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:33.357472 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:33.857580 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:33.857598 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:34.357568 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:34.357584 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:34.857258 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:34.857274 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:35.357627 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:35.357670 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:35.857349 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:35.857365 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:36.357475 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:36.357517 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:36.857409 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:36.857428 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:37.357452 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:37.357470 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:37.857968 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:37.858059 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:38.357339 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:38.357377 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:38.857448 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:38.857464 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:39.357505 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:39.357521 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:39.857514 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:39.857530 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:40.357357 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:40.357373 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:40.857286 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:40.857304 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:41.357776 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:41.357812 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:41.857329 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:41.857391 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:42.357607 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:42.357627 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:42.857327 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:42.857343 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:43.357707 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:43.357724 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:43.857304 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:43.857334 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:44.357518 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:44.357561 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:44.857316 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:44.857356 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:45.357462 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:45.357497 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:45.857471 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:45.857530 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:46.358103 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:46.358120 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:46.857401 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:46.857417 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:47.357605 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:47.357647 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:47.857455 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:47.857497 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:48.357503 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:48.357544 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:48.857300 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:48.857316 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:49.357343 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:49.357391 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:49.857430 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:49.857460 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:50.357596 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:50.357672 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:50.857279 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:50.857295 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:51.357478 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:51.357497 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:51.857636 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:51.857682 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:52.357605 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:52.357630 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:52.857556 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:52.857590 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:53.357447 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:53.357464 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:53.857379 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:53.857398 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:54.357298 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:54.357320 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:54.857437 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:54.857457 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:55.357378 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:55.357421 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:55.857265 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:55.857283 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:56.357417 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:56.357435 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:56.857350 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:56.857367 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:57.357716 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:57.357732 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:57.857251 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:57.857292 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:58.357345 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:58.357360 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:58.857751 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:58.857768 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:59.357536 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:59.357552 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:52:59.857487 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:52:59.857508 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:00.357283 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:00.357298 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:00.858003 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:00.858023 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:01.357472 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:01.357512 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:01.857652 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:01.857669 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:02.357782 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:02.357853 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:02.857714 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:02.857753 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:03.357511 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:03.357529 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:03.857601 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:03.857618 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:04.357396 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:04.357437 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:04.857423 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:04.857471 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:05.357919 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:05.357937 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:05.857603 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:05.857621 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:06.357411 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:06.357429 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:06.857617 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:06.857657 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:07.357330 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:07.357348 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:07.857271 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:07.857292 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:08.357489 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:08.357524 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:08.857394 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:08.857412 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:09.357355 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:09.357372 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:09.857505 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:09.857626 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:10.357273 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:10.357290 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:10.857586 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:10.857628 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:11.357461 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:11.357499 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:11.857422 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:11.857438 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:12.357488 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:12.357529 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:12.857319 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:12.857355 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:13.357423 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:13.357439 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:13.857407 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:13.857448 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:14.357498 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:14.357546 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:14.857513 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:14.857530 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:15.357570 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:15.357603 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:15.857385 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:15.857404 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:16.357302 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:16.357318 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:16.857229 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:16.857289 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:17.357530 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:17.357586 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:17.857346 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:17.857362 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:18.358810 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:18.358836 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:18.857710 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:18.857746 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:19.357257 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:19.357272 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:19.857657 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:19.857675 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:20.357480 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:20.357515 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:20.857304 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:20.857321 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:21.357306 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:21.357338 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:21.857803 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:21.857821 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:22.357586 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:22.357604 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:22.857252 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:22.857269 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:23.357345 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:23.357363 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:23.857320 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:23.857336 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:24.357799 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:24.357877 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:24.857536 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:24.857581 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:25.357637 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:25.357758 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:25.857486 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:25.857523 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:26.357551 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:26.357606 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:26.857939 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:26.857958 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:27.357828 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:27.357845 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:27.857251 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:27.857268 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:28.357277 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:28.357294 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:28.857370 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:28.857427 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:29.357410 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:29.357471 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:29.857233 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:29.857252 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:30.357191 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:30.357207 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:30.857267 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:30.857306 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:31.357615 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:31.357663 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:31.857567 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:31.857602 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:32.357635 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:32.357676 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:53:32.857301 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W0610 15:53:32.857335 2332048 api_server.go:116] api server version match failed: controlPane = "v1.16.6-beta.0", expected: "v1.16.5"
I0610 15:55:39.357780 2332048 node_conditions.go:99] verifying NodePressure condition ...
I0610 15:55:39.360180 2332048 node_conditions.go:111] node storage ephemeral capacity is 321488636Ki
I0610 15:55:39.360192 2332048 node_conditions.go:112] node cpu capacity is 16
I0610 15:55:39.360201 2332048 node_conditions.go:102] duration metric: took 2.413864ms to run NodePressure ...
I0610 15:55:39.360236 2332048 exit.go:58] WithError(failed to start node)=startup failed: Wait failed: wait for healthy API server: controlPlane never updated to v1.16.5 called from:
goroutine 1 [running]:
runtime/debug.Stack(0x0, 0x0, 0x0)
        /usr/local/go/src/runtime/debug/stack.go:24 +0x9d
k8s.io/minikube/pkg/minikube/exit.WithError(0x1adff52, 0x14, 0x1db6b80, 0xc0009d2c20)
        /app/pkg/minikube/exit/exit.go:58 +0x34
k8s.io/minikube/cmd/minikube/cmd.runStart(0x2b00360, 0xc0007aed20, 0x0, 0xa)
        /app/cmd/minikube/cmd/start.go:203 +0x7f7
github.com/spf13/cobra.(*Command).execute(0x2b00360, 0xc0007aebe0, 0xa, 0xa, 0x2b00360, 0xc0007aebe0)
        /go/pkg/mod/github.com/spf13/[email protected]/command.go:846 +0x2aa
github.com/spf13/cobra.(*Command).ExecuteC(0x2b05220, 0x0, 0x1, 0xc000627400)
        /go/pkg/mod/github.com/spf13/[email protected]/command.go:950 +0x349
github.com/spf13/cobra.(*Command).Execute(...)
        /go/pkg/mod/github.com/spf13/[email protected]/command.go:887
k8s.io/minikube/cmd/minikube/cmd.Execute()
        /app/cmd/minikube/cmd/root.go:112 +0x747
main.main()
        /app/cmd/minikube/main.go:66 +0xea
W0610 15:55:39.360389 2332048 out.go:201] failed to start node: startup failed: Wait failed: wait for healthy API server: controlPlane never updated to v1.16.5

💣  failed to start node: startup failed: Wait failed: wait for healthy API server: controlPlane never updated to v1.16.5

😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
👉  https://github.com/kubernetes/minikube/issues/new/choose
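
A possible workaround, not taken from the failing run above: explicitly provide a CNI whenever `--network-plugin=cni` is used. The sketch below is hedged, it assumes a minikube build that exposes the `--cni` flag, and the value `bridge` is only an illustrative choice, not something confirmed against this environment.

```shell
# Hedged workaround sketch (assumes minikube supports the --cni flag;
# "bridge" is an example value, a path to a CNI manifest should also be accepted):
minikube delete
minikube start \
  --network-plugin=cni \
  --cni=bridge \
  --driver=docker
```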

Optional: Full output of minikube logs command:

==> Docker <==
-- Logs begin at Wed 2020-06-10 19:51:17 UTC, end at Wed 2020-06-10 19:56:30 UTC. --
Jun 10 19:55:07 minikube dockerd[351]: time="2020-06-10T19:55:07.613515887Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:55:09 minikube dockerd[351]: time="2020-06-10T19:55:09.358769167Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:55:10 minikube dockerd[351]: time="2020-06-10T19:55:10.777692053Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:55:11 minikube dockerd[351]: time="2020-06-10T19:55:11.843196407Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:55:13 minikube dockerd[351]: time="2020-06-10T19:55:13.720084046Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:55:14 minikube dockerd[351]: time="2020-06-10T19:55:14.508237474Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:55:16 minikube dockerd[351]: time="2020-06-10T19:55:16.747878185Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:55:16 minikube dockerd[351]: time="2020-06-10T19:55:16.839992034Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:55:20 minikube dockerd[351]: time="2020-06-10T19:55:20.001231209Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:55:20 minikube dockerd[351]: time="2020-06-10T19:55:20.172778062Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:55:22 minikube dockerd[351]: time="2020-06-10T19:55:22.826206988Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:55:22 minikube dockerd[351]: time="2020-06-10T19:55:22.850751728Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:55:25 minikube dockerd[351]: time="2020-06-10T19:55:25.825160316Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:55:26 minikube dockerd[351]: time="2020-06-10T19:55:26.163798017Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:55:28 minikube dockerd[351]: time="2020-06-10T19:55:28.982343093Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:55:29 minikube dockerd[351]: time="2020-06-10T19:55:29.284608466Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:55:31 minikube dockerd[351]: time="2020-06-10T19:55:31.723880671Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:55:32 minikube dockerd[351]: time="2020-06-10T19:55:32.064304088Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:55:34 minikube dockerd[351]: time="2020-06-10T19:55:34.267977556Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:55:35 minikube dockerd[351]: time="2020-06-10T19:55:35.499172704Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:55:37 minikube dockerd[351]: time="2020-06-10T19:55:37.553131831Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:55:38 minikube dockerd[351]: time="2020-06-10T19:55:38.328817971Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:55:40 minikube dockerd[351]: time="2020-06-10T19:55:40.444896953Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:55:40 minikube dockerd[351]: time="2020-06-10T19:55:40.984829330Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:55:43 minikube dockerd[351]: time="2020-06-10T19:55:43.241544328Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:55:43 minikube dockerd[351]: time="2020-06-10T19:55:43.452096629Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:55:46 minikube dockerd[351]: time="2020-06-10T19:55:46.387838199Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:55:46 minikube dockerd[351]: time="2020-06-10T19:55:46.415688021Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:55:49 minikube dockerd[351]: time="2020-06-10T19:55:49.328684887Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:55:49 minikube dockerd[351]: time="2020-06-10T19:55:49.420418895Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:55:51 minikube dockerd[351]: time="2020-06-10T19:55:51.988707278Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:55:52 minikube dockerd[351]: time="2020-06-10T19:55:52.673267231Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:55:54 minikube dockerd[351]: time="2020-06-10T19:55:54.252545985Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:55:55 minikube dockerd[351]: time="2020-06-10T19:55:55.480342497Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:55:56 minikube dockerd[351]: time="2020-06-10T19:55:56.837497508Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:55:58 minikube dockerd[351]: time="2020-06-10T19:55:58.504707772Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:55:59 minikube dockerd[351]: time="2020-06-10T19:55:59.201177293Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:56:01 minikube dockerd[351]: time="2020-06-10T19:56:01.865301511Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:56:01 minikube dockerd[351]: time="2020-06-10T19:56:01.902285351Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:56:04 minikube dockerd[351]: time="2020-06-10T19:56:04.461416773Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:56:05 minikube dockerd[351]: time="2020-06-10T19:56:05.004171351Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:56:07 minikube dockerd[351]: time="2020-06-10T19:56:07.474827048Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:56:07 minikube dockerd[351]: time="2020-06-10T19:56:07.575474943Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:56:09 minikube dockerd[351]: time="2020-06-10T19:56:09.906156009Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:56:10 minikube dockerd[351]: time="2020-06-10T19:56:10.819789412Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:56:12 minikube dockerd[351]: time="2020-06-10T19:56:12.447939039Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:56:13 minikube dockerd[351]: time="2020-06-10T19:56:13.917691390Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:56:15 minikube dockerd[351]: time="2020-06-10T19:56:15.096080670Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:56:16 minikube dockerd[351]: time="2020-06-10T19:56:16.892945480Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:56:17 minikube dockerd[351]: time="2020-06-10T19:56:17.479602642Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:56:20 minikube dockerd[351]: time="2020-06-10T19:56:20.180312575Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:56:20 minikube dockerd[351]: time="2020-06-10T19:56:20.280583631Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:56:23 minikube dockerd[351]: time="2020-06-10T19:56:23.219225556Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:56:23 minikube dockerd[351]: time="2020-06-10T19:56:23.386258552Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:56:25 minikube dockerd[351]: time="2020-06-10T19:56:25.624830054Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:56:25 minikube dockerd[351]: time="2020-06-10T19:56:25.743138976Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:56:27 minikube dockerd[351]: time="2020-06-10T19:56:27.703894884Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:56:27 minikube dockerd[351]: time="2020-06-10T19:56:27.848258596Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:56:30 minikube dockerd[351]: time="2020-06-10T19:56:30.240468595Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 10 19:56:30 minikube dockerd[351]: time="2020-06-10T19:56:30.356883693Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"

==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
0759f69a4af6e 4689081edb103 4 minutes ago Running storage-provisioner 1 c6817888babc9
81c45417ced3b 4689081edb103 4 minutes ago Exited storage-provisioner 0 c6817888babc9
16300c620458b 0ee1b8a3ebe00 4 minutes ago Running kube-proxy 0 e268b92dde244
51ec15cc07c28 b4d073a9efda2 4 minutes ago Running kube-scheduler 0 145f9d2603a98
a6b51cde8ba92 441835dd23012 4 minutes ago Running kube-controller-manager 0 f0d726bc763c3
edebbd0259832 fc838b21afbb7 4 minutes ago Running kube-apiserver 0 d2f0e05c11c3e
f2beb8df3fb53 b2756210eeabf 4 minutes ago Running etcd 0 df60f9799cf8d

==> describe nodes <==
Name: minikube
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=minikube
kubernetes.io/os=linux
minikube.k8s.io/commit=57e2f55f47effe9ce396cea42a1e0eb4f611ebbd
minikube.k8s.io/name=minikube
minikube.k8s.io/updated_at=2020_06_10T15_51_39_0700
minikube.k8s.io/version=v1.11.0
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 10 Jun 2020 19:51:36 +0000
Taints: <none>
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message


MemoryPressure False Wed, 10 Jun 2020 19:55:37 +0000 Wed, 10 Jun 2020 19:51:34 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 10 Jun 2020 19:55:37 +0000 Wed, 10 Jun 2020 19:51:34 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 10 Jun 2020 19:55:37 +0000 Wed, 10 Jun 2020 19:51:34 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 10 Jun 2020 19:55:37 +0000 Wed, 10 Jun 2020 19:51:34 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 172.17.0.3
Hostname: minikube
Capacity:
cpu: 16
ephemeral-storage: 321488636Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 31751Mi
pods: 110
Allocatable:
cpu: 16
ephemeral-storage: 321488636Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 31751Mi
pods: 110
System Info:
Machine ID: d6e045164ee9466eabde0715cb5c092f
System UUID: 68c5f04d-6006-4544-88ec-1fc44b5d524b
Boot ID: 64937377-cd63-4e9c-ac64-6a2113f9c4cc
Kernel Version: 5.4.0-33-generic
OS Image: Ubuntu 19.10
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://19.3.2
Kubelet Version: v1.16.6-beta.0
Kube-Proxy Version: v1.16.6-beta.0
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (8 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE


kube-system coredns-5644d7b6d9-bs8j2 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 4m34s
kube-system coredns-5644d7b6d9-dhzdb 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 4m34s
kube-system etcd-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m33s
kube-system kube-apiserver-minikube 250m (1%) 0 (0%) 0 (0%) 0 (0%) 3m36s
kube-system kube-controller-manager-minikube 200m (1%) 0 (0%) 0 (0%) 0 (0%) 3m49s
kube-system kube-proxy-mbrr6 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m34s
kube-system kube-scheduler-minikube 100m (0%) 0 (0%) 0 (0%) 0 (0%) 3m39s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m49s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits


cpu 750m (4%) 0 (0%)
memory 140Mi (0%) 340Mi (1%)
ephemeral-storage 0 (0%) 0 (0%)
Events:
Type Reason Age From Message


Normal Starting 4m58s kubelet, minikube Starting kubelet.
Normal NodeHasSufficientMemory 4m57s (x8 over 4m58s) kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m57s (x8 over 4m58s) kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m57s (x7 over 4m58s) kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 4m57s kubelet, minikube Updated Node Allocatable limit across pods
Warning readOnlySysFS 4m33s kube-proxy, minikube CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)
Normal Starting 4m33s kube-proxy, minikube Starting kube-proxy.

==> dmesg <==
[ +0.000001] mce: CPU3: Core temperature above threshold, cpu clock throttled (total events = 144050)
[ +0.000040] mce: CPU7: Package temperature above threshold, cpu clock throttled (total events = 437596)
[ +0.000001] mce: CPU15: Package temperature above threshold, cpu clock throttled (total events = 437596)
[ +0.000001] mce: CPU9: Package temperature above threshold, cpu clock throttled (total events = 437594)
[ +0.000000] mce: CPU3: Package temperature above threshold, cpu clock throttled (total events = 437596)
[ +0.000002] mce: CPU8: Package temperature above threshold, cpu clock throttled (total events = 437595)
[ +0.000001] mce: CPU0: Package temperature above threshold, cpu clock throttled (total events = 437596)
[ +0.000001] mce: CPU2: Package temperature above threshold, cpu clock throttled (total events = 437596)
[ +0.000001] mce: CPU10: Package temperature above threshold, cpu clock throttled (total events = 437596)
[ +0.000001] mce: CPU5: Package temperature above threshold, cpu clock throttled (total events = 437595)
[ +0.000001] mce: CPU4: Package temperature above threshold, cpu clock throttled (total events = 437596)
[ +0.000001] mce: CPU12: Package temperature above threshold, cpu clock throttled (total events = 437595)
[ +0.000001] mce: CPU13: Package temperature above threshold, cpu clock throttled (total events = 437596)
[ +0.000001] mce: CPU14: Package temperature above threshold, cpu clock throttled (total events = 437596)
[ +0.000001] mce: CPU6: Package temperature above threshold, cpu clock throttled (total events = 437596)
[ +0.000000] mce: CPU11: Package temperature above threshold, cpu clock throttled (total events = 437596)
[ +0.000001] mce: CPU1: Package temperature above threshold, cpu clock throttled (total events = 437594)
[Jun 5 01:23] vboxdrv: 0000000000000000 VMMR0.r0
[ +0.049031] VBoxNetFlt: attached to 'vboxnet0' / 0a:00:27:00:00:00
[ +0.072080] vboxdrv: 0000000000000000 VBoxDDR0.r0
[ +0.021942] VMMR0InitVM: eflags=246 fKernelFeatures=0x0 (SUPKERNELFEATURES_SMAP=0)
[Jun 5 01:27] [drm:intel_pipe_update_end [i915]] ERROR Atomic update failure on pipe B (start=1459097 end=1459098) time 258 us, min 1590, max 1599, scanline start 1588, end 1612
[Jun 5 01:29] mce: CPU1: Core temperature above threshold, cpu clock throttled (total events = 45333)
[ +0.000001] mce: CPU9: Core temperature above threshold, cpu clock throttled (total events = 45333)
[ +0.000001] mce: CPU6: Package temperature above threshold, cpu clock throttled (total events = 439857)
[ +0.000001] mce: CPU14: Package temperature above threshold, cpu clock throttled (total events = 439857)
[ +0.000000] mce: CPU9: Package temperature above threshold, cpu clock throttled (total events = 439855)
[ +0.000002] mce: CPU1: Package temperature above threshold, cpu clock throttled (total events = 439854)
[ +0.000050] mce: CPU4: Package temperature above threshold, cpu clock throttled (total events = 439857)
[ +0.000001] mce: CPU12: Package temperature above threshold, cpu clock throttled (total events = 439856)
[ +0.000001] mce: CPU2: Package temperature above threshold, cpu clock throttled (total events = 439857)
[ +0.000001] mce: CPU10: Package temperature above threshold, cpu clock throttled (total events = 439857)
[ +0.000027] mce: CPU3: Package temperature above threshold, cpu clock throttled (total events = 439857)
[ +0.000002] mce: CPU5: Package temperature above threshold, cpu clock throttled (total events = 439856)
[ +0.000001] mce: CPU11: Package temperature above threshold, cpu clock throttled (total events = 439857)
[ +0.000001] mce: CPU13: Package temperature above threshold, cpu clock throttled (total events = 439857)
[ +0.000001] mce: CPU7: Package temperature above threshold, cpu clock throttled (total events = 439857)
[ +0.000001] mce: CPU15: Package temperature above threshold, cpu clock throttled (total events = 439857)
[ +0.000001] mce: CPU0: Package temperature above threshold, cpu clock throttled (total events = 439857)
[ +0.000001] mce: CPU8: Package temperature above threshold, cpu clock throttled (total events = 439856)
[ +37.760471] vboxnetflt: 1641 out of 1674 packets were not sent (directed to host)
[Jun 5 01:32] [drm:intel_pipe_update_end [i915]] ERROR Atomic update failure on pipe B (start=1478655 end=1478656) time 232 us, min 1590, max 1599, scanline start 1579, end 1601
[Jun 5 01:36] mce: CPU3: Core temperature above threshold, cpu clock throttled (total events = 144987)
[ +0.000001] mce: CPU11: Core temperature above threshold, cpu clock throttled (total events = 144987)
[ +0.000001] mce: CPU1: Package temperature above threshold, cpu clock throttled (total events = 440378)
[ +0.000001] mce: CPU9: Package temperature above threshold, cpu clock throttled (total events = 440379)
[ +0.000000] mce: CPU11: Package temperature above threshold, cpu clock throttled (total events = 440381)
[ +0.000001] mce: CPU3: Package temperature above threshold, cpu clock throttled (total events = 440381)
[ +0.000054] mce: CPU0: Package temperature above threshold, cpu clock throttled (total events = 440381)
[ +0.000001] mce: CPU8: Package temperature above threshold, cpu clock throttled (total events = 440380)
[ +0.000001] mce: CPU4: Package temperature above threshold, cpu clock throttled (total events = 440381)
[ +0.000001] mce: CPU2: Package temperature above threshold, cpu clock throttled (total events = 440381)
[ +0.000001] mce: CPU12: Package temperature above threshold, cpu clock throttled (total events = 440380)
[ +0.000001] mce: CPU10: Package temperature above threshold, cpu clock throttled (total events = 440381)
[ +0.000001] mce: CPU6: Package temperature above threshold, cpu clock throttled (total events = 440381)
[ +0.000001] mce: CPU14: Package temperature above threshold, cpu clock throttled (total events = 440381)
[ +0.000001] mce: CPU15: Package temperature above threshold, cpu clock throttled (total events = 440381)
[ +0.000001] mce: CPU7: Package temperature above threshold, cpu clock throttled (total events = 440381)
[ +0.000001] mce: CPU13: Package temperature above threshold, cpu clock throttled (total events = 440381)
[ +0.000001] mce: CPU5: Package temperature above threshold, cpu clock throttled (total events = 440380)

==> etcd [f2beb8df3fb5] <==
2020-06-10 19:51:33.670683 I | etcdmain: etcd Version: 3.3.15
2020-06-10 19:51:33.670711 I | etcdmain: Git SHA: 94745a4ee
2020-06-10 19:51:33.670713 I | etcdmain: Go Version: go1.12.9
2020-06-10 19:51:33.670715 I | etcdmain: Go OS/Arch: linux/amd64
2020-06-10 19:51:33.670717 I | etcdmain: setting maximum number of CPUs to 16, total number of available CPUs is 16
2020-06-10 19:51:33.670873 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-06-10 19:51:33.671223 I | embed: listening for peers on https://172.17.0.3:2380
2020-06-10 19:51:33.671253 I | embed: listening for client requests on 127.0.0.1:2379
2020-06-10 19:51:33.671271 I | embed: listening for client requests on 172.17.0.3:2379
2020-06-10 19:51:33.674156 I | etcdserver: name = minikube
2020-06-10 19:51:33.674175 I | etcdserver: data dir = /var/lib/minikube/etcd
2020-06-10 19:51:33.674181 I | etcdserver: member dir = /var/lib/minikube/etcd/member
2020-06-10 19:51:33.674183 I | etcdserver: heartbeat = 100ms
2020-06-10 19:51:33.674185 I | etcdserver: election = 1000ms
2020-06-10 19:51:33.674188 I | etcdserver: snapshot count = 10000
2020-06-10 19:51:33.674201 I | etcdserver: advertise client URLs = https://172.17.0.3:2379
2020-06-10 19:51:33.674204 I | etcdserver: initial advertise peer URLs = https://172.17.0.3:2380
2020-06-10 19:51:33.674209 I | etcdserver: initial cluster = minikube=https://172.17.0.3:2380
2020-06-10 19:51:33.680788 I | etcdserver: starting member b273bc7741bcb020 in cluster 86482fea2286a1d2
2020-06-10 19:51:33.680813 I | raft: b273bc7741bcb020 became follower at term 0
2020-06-10 19:51:33.680821 I | raft: newRaft b273bc7741bcb020 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2020-06-10 19:51:33.680825 I | raft: b273bc7741bcb020 became follower at term 1
2020-06-10 19:51:33.687231 W | auth: simple token is not cryptographically signed
2020-06-10 19:51:33.690399 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
2020-06-10 19:51:33.690467 I | etcdserver: b273bc7741bcb020 as single-node; fast-forwarding 9 ticks (election ticks 10)
2020-06-10 19:51:33.690693 I | etcdserver/membership: added member b273bc7741bcb020 [https://172.17.0.3:2380] to cluster 86482fea2286a1d2
2020-06-10 19:51:33.691661 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-06-10 19:51:33.691781 I | embed: listening for metrics on http://127.0.0.1:2381
2020-06-10 19:51:33.691839 I | embed: listening for metrics on http://172.17.0.3:2381
2020-06-10 19:51:34.381109 I | raft: b273bc7741bcb020 is starting a new election at term 1
2020-06-10 19:51:34.381124 I | raft: b273bc7741bcb020 became candidate at term 2
2020-06-10 19:51:34.381160 I | raft: b273bc7741bcb020 received MsgVoteResp from b273bc7741bcb020 at term 2
2020-06-10 19:51:34.381166 I | raft: b273bc7741bcb020 became leader at term 2
2020-06-10 19:51:34.381192 I | raft: raft.node: b273bc7741bcb020 elected leader b273bc7741bcb020 at term 2
2020-06-10 19:51:34.381432 I | etcdserver: published {Name:minikube ClientURLs:[https://172.17.0.3:2379]} to cluster 86482fea2286a1d2
2020-06-10 19:51:34.381440 I | embed: ready to serve client requests
2020-06-10 19:51:34.381480 I | embed: ready to serve client requests
2020-06-10 19:51:34.381965 I | etcdserver: setting up the initial cluster version to 3.3
2020-06-10 19:51:34.382722 N | etcdserver/membership: set the initial cluster version to 3.3
2020-06-10 19:51:34.382746 I | etcdserver/api: enabled capabilities for version 3.3
2020-06-10 19:51:34.382944 I | embed: serving client requests on 127.0.0.1:2379
2020-06-10 19:51:34.383101 I | embed: serving client requests on 172.17.0.3:2379

==> kernel <==
19:56:30 up 8 days, 3:37, 0 users, load average: 3.10, 2.81, 2.88
Linux minikube 5.4.0-33-generic #37-Ubuntu SMP Thu May 21 12:53:59 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 19.10"

==> kube-apiserver [edebbd025983] <==
I0610 19:51:34.768894 1 client.go:357] parsed scheme: "endpoint"
I0610 19:51:34.768936 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0610 19:51:34.817551 1 client.go:357] parsed scheme: "endpoint"
I0610 19:51:34.817568 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0610 19:51:34.822342 1 client.go:357] parsed scheme: "endpoint"
I0610 19:51:34.822354 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
W0610 19:51:34.895330 1 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
W0610 19:51:34.905566 1 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0610 19:51:34.915849 1 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0610 19:51:34.917855 1 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0610 19:51:34.929601 1 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0610 19:51:34.947463 1 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0610 19:51:34.947475 1 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0610 19:51:34.954688 1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I0610 19:51:34.954696 1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
I0610 19:51:34.956345 1 client.go:357] parsed scheme: "endpoint"
I0610 19:51:34.956369 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0610 19:51:34.961626 1 client.go:357] parsed scheme: "endpoint"
I0610 19:51:34.961640 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0610 19:51:35.132074 1 client.go:357] parsed scheme: "endpoint"
I0610 19:51:35.132096 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0610 19:51:36.339796 1 secure_serving.go:123] Serving securely on [::]:8443
I0610 19:51:36.339825 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0610 19:51:36.339828 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0610 19:51:36.339952 1 autoregister_controller.go:140] Starting autoregister controller
I0610 19:51:36.339964 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0610 19:51:36.340039 1 crd_finalizer.go:274] Starting CRDFinalizer
I0610 19:51:36.340048 1 naming_controller.go:288] Starting NamingConditionController
I0610 19:51:36.340056 1 establishing_controller.go:73] Starting EstablishingController
I0610 19:51:36.340059 1 customresource_discovery_controller.go:208] Starting DiscoveryController
I0610 19:51:36.340072 1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
I0610 19:51:36.340096 1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0610 19:51:36.340125 1 controller.go:81] Starting OpenAPI AggregationController
I0610 19:51:36.340138 1 available_controller.go:383] Starting AvailableConditionController
I0610 19:51:36.340150 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0610 19:51:36.340173 1 controller.go:85] Starting OpenAPI controller
I0610 19:51:36.340178 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0610 19:51:36.340182 1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
E0610 19:51:36.343151 1 controller.go:154] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.17.0.3, ResourceVersion: 0, AdditionalErrorMsg:
I0610 19:51:36.408355 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0610 19:51:36.440016 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0610 19:51:36.440071 1 cache.go:39] Caches are synced for autoregister controller
I0610 19:51:36.440263 1 shared_informer.go:204] Caches are synced for crd-autoregister
I0610 19:51:36.440265 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0610 19:51:37.339952 1 controller.go:107] OpenAPI AggregationController: Processing item
I0610 19:51:37.340021 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0610 19:51:37.340027 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0610 19:51:37.342374 1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0610 19:51:37.344546 1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0610 19:51:37.344556 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0610 19:51:37.501981 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0610 19:51:37.517827 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0610 19:51:37.569775 1 lease.go:222] Resetting endpoints for master service "kubernetes" to [172.17.0.3]
I0610 19:51:37.570041 1 controller.go:606] quota admission added evaluator for: endpoints
I0610 19:51:38.636929 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0610 19:51:38.645725 1 controller.go:606] quota admission added evaluator for: deployments.apps
I0610 19:51:39.021210 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0610 19:51:43.778343 1 log.go:172] http: TLS handshake error from 172.17.0.1:49598: EOF
I0610 19:51:56.818662 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0610 19:51:56.890956 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps

==> kube-controller-manager [a6b51cde8ba9] <==
I0610 19:51:55.038318 1 certificate_controller.go:113] Starting certificate controller
I0610 19:51:55.038321 1 shared_informer.go:197] Waiting for caches to sync for certificate
I0610 19:51:55.738279 1 controllermanager.go:534] Started "horizontalpodautoscaling"
I0610 19:51:55.738320 1 horizontal.go:156] Starting HPA controller
I0610 19:51:55.738326 1 shared_informer.go:197] Waiting for caches to sync for HPA
I0610 19:51:55.991365 1 controllermanager.go:534] Started "namespace"
I0610 19:51:55.991410 1 namespace_controller.go:186] Starting namespace controller
I0610 19:51:55.991453 1 shared_informer.go:197] Waiting for caches to sync for namespace
I0610 19:51:56.793108 1 garbagecollector.go:130] Starting garbage collector controller
I0610 19:51:56.793119 1 shared_informer.go:197] Waiting for caches to sync for garbage collector
I0610 19:51:56.793130 1 graph_builder.go:282] GraphBuilder running
I0610 19:51:56.793232 1 controllermanager.go:534] Started "garbagecollector"
I0610 19:51:56.794532 1 shared_informer.go:197] Waiting for caches to sync for resource quota
W0610 19:51:56.796966 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0610 19:51:56.810367 1 shared_informer.go:204] Caches are synced for GC
I0610 19:51:56.817624 1 shared_informer.go:204] Caches are synced for deployment
I0610 19:51:56.819711 1 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"37a13869-78a5-475e-a8eb-7035902abaae", APIVersion:"apps/v1", ResourceVersion:"184", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-5644d7b6d9 to 2
I0610 19:51:56.825434 1 shared_informer.go:204] Caches are synced for expand
I0610 19:51:56.838303 1 shared_informer.go:204] Caches are synced for disruption
I0610 19:51:56.838329 1 disruption.go:338] Sending events to api server.
I0610 19:51:56.838400 1 shared_informer.go:204] Caches are synced for certificate
I0610 19:51:56.838478 1 shared_informer.go:204] Caches are synced for job
I0610 19:51:56.838697 1 shared_informer.go:204] Caches are synced for bootstrap_signer
I0610 19:51:56.838753 1 shared_informer.go:204] Caches are synced for PVC protection
I0610 19:51:56.838866 1 shared_informer.go:204] Caches are synced for certificate
I0610 19:51:56.838909 1 shared_informer.go:204] Caches are synced for ReplicationController
I0610 19:51:56.838946 1 shared_informer.go:204] Caches are synced for attach detach
I0610 19:51:56.848534 1 log.go:172] [INFO] signed certificate with serial number 448076519053696427626673704938515373589762678104
I0610 19:51:56.859440 1 shared_informer.go:204] Caches are synced for ReplicaSet
I0610 19:51:56.862072 1 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"ed7bb0ec-c9b8-4d80-ba11-354253bcebed", APIVersion:"apps/v1", ResourceVersion:"337", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-bs8j2
I0610 19:51:56.866460 1 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"ed7bb0ec-c9b8-4d80-ba11-354253bcebed", APIVersion:"apps/v1", ResourceVersion:"337", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-dhzdb
I0610 19:51:56.867211 1 shared_informer.go:204] Caches are synced for stateful set
I0610 19:51:56.874881 1 shared_informer.go:204] Caches are synced for TTL
I0610 19:51:56.887795 1 shared_informer.go:204] Caches are synced for node
I0610 19:51:56.887811 1 range_allocator.go:172] Starting range CIDR allocator
I0610 19:51:56.887814 1 shared_informer.go:197] Waiting for caches to sync for cidrallocator
I0610 19:51:56.887817 1 shared_informer.go:204] Caches are synced for cidrallocator
I0610 19:51:56.888715 1 shared_informer.go:204] Caches are synced for daemon sets
I0610 19:51:56.888899 1 shared_informer.go:204] Caches are synced for service account
I0610 19:51:56.888945 1 shared_informer.go:204] Caches are synced for persistent volume
I0610 19:51:56.889170 1 shared_informer.go:204] Caches are synced for PV protection
I0610 19:51:56.889811 1 range_allocator.go:359] Set node minikube PodCIDR to [10.244.0.0/24]
I0610 19:51:56.890187 1 shared_informer.go:204] Caches are synced for taint
I0610 19:51:56.890243 1 node_lifecycle_controller.go:1208] Initializing eviction metric for zone:
W0610 19:51:56.890265 1 node_lifecycle_controller.go:903] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0610 19:51:56.890266 1 taint_manager.go:186] Starting NoExecuteTaintManager
I0610 19:51:56.890289 1 node_lifecycle_controller.go:1108] Controller detected that zone is now in state Normal.
I0610 19:51:56.890339 1 event.go:274] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"83cc614b-1fd0-473d-a267-b9ceddc0a501", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
I0610 19:51:56.891545 1 shared_informer.go:204] Caches are synced for namespace
I0610 19:51:56.894247 1 event.go:274] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"813cabc7-b1ed-4ce0-a1cf-9f260a3aed5a", APIVersion:"apps/v1", ResourceVersion:"191", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-mbrr6
E0610 19:51:56.905082 1 daemon_controller.go:302] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"813cabc7-b1ed-4ce0-a1cf-9f260a3aed5a", ResourceVersion:"191", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63727415499, loc:(*time.Location)(0x6c143a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000a15d00), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc00132e640), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000a15d20), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), 
DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000a15d40), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.16.5", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000a15d80)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc000a2c500), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001788a98), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", 
NodeSelector:map[string]string{"beta.kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0016bc480), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000118158)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001788ad8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
I0610 19:51:56.988990 1 shared_informer.go:204] Caches are synced for endpoint
I0610 19:51:57.039200 1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator
I0610 19:51:57.238504 1 shared_informer.go:204] Caches are synced for HPA
I0610 19:51:57.393281 1 shared_informer.go:204] Caches are synced for garbage collector
I0610 19:51:57.393296 1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0610 19:51:57.394708 1 shared_informer.go:204] Caches are synced for resource quota
I0610 19:51:57.440830 1 shared_informer.go:204] Caches are synced for resource quota
I0610 19:51:58.289853 1 shared_informer.go:197] Waiting for caches to sync for garbage collector
I0610 19:51:58.289954 1 shared_informer.go:204] Caches are synced for garbage collector

==> kube-proxy [16300c620458] <==
W0610 19:51:57.458248 1 server_others.go:330] Flag proxy-mode="" unknown, assuming iptables proxy
I0610 19:51:57.462596 1 node.go:135] Successfully retrieved node IP: 172.17.0.3
I0610 19:51:57.462612 1 server_others.go:150] Using iptables Proxier.
I0610 19:51:57.462866 1 server.go:529] Version: v1.16.6-beta.0
I0610 19:51:57.463183 1 conntrack.go:52] Setting nf_conntrack_max to 524288
E0610 19:51:57.463414 1 conntrack.go:127] sysfs is not writable: {Device:sysfs Path:/sys Type:sysfs Opts:[ro nosuid nodev noexec relatime] Freq:0 Pass:0} (mount options are [ro nosuid nodev noexec relatime])
I0610 19:51:57.463536 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0610 19:51:57.463586 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0610 19:51:57.463695 1 config.go:313] Starting service config controller
I0610 19:51:57.463703 1 shared_informer.go:197] Waiting for caches to sync for service config
I0610 19:51:57.463752 1 config.go:131] Starting endpoints config controller
I0610 19:51:57.463979 1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I0610 19:51:57.563808 1 shared_informer.go:204] Caches are synced for service config
I0610 19:51:57.564080 1 shared_informer.go:204] Caches are synced for endpoints config

==> kube-scheduler [51ec15cc07c2] <==
I0610 19:51:34.187103 1 serving.go:319] Generated self-signed cert in-memory
W0610 19:51:36.351521 1 authentication.go:262] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0610 19:51:36.351658 1 authentication.go:199] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0610 19:51:36.351748 1 authentication.go:200] Continuing without authentication configuration. This may treat all requests as anonymous.
W0610 19:51:36.351813 1 authentication.go:201] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0610 19:51:36.355338 1 server.go:148] Version: v1.16.6-beta.0
I0610 19:51:36.355397 1 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W0610 19:51:36.361868 1 authorization.go:47] Authorization is disabled
W0610 19:51:36.361880 1 authentication.go:79] Authentication is disabled
I0610 19:51:36.361887 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0610 19:51:36.362232 1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
E0610 19:51:36.363622 1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:250: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0610 19:51:36.363724 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0610 19:51:36.363737 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0610 19:51:36.363743 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0610 19:51:36.363764 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0610 19:51:36.363768 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0610 19:51:36.363770 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0610 19:51:36.363767 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0610 19:51:36.363767 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0610 19:51:36.363982 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0610 19:51:36.364007 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0610 19:51:37.364392 1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:250: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0610 19:51:37.365393 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0610 19:51:37.366199 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0610 19:51:37.367350 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0610 19:51:37.368655 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0610 19:51:37.369684 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0610 19:51:37.370714 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0610 19:51:37.371993 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0610 19:51:37.372975 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0610 19:51:37.374138 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0610 19:51:37.375233 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
I0610 19:51:38.462467 1 leaderelection.go:241] attempting to acquire leader lease kube-system/kube-scheduler...
I0610 19:51:38.465089 1 leaderelection.go:251] successfully acquired lease kube-system/kube-scheduler
E0610 19:51:41.679409 1 factory.go:585] pod is already present in the activeQ

==> kubelet <==
-- Logs begin at Wed 2020-06-10 19:51:17 UTC, end at Wed 2020-06-10 19:56:31 UTC. --
Jun 10 19:56:26 minikube kubelet[1100]: W0610 19:56:26.025781 1100 pod_container_deletor.go:75] Container "c588b942ff9ef0bcb4a04c3f7ff584bb909799c42af5824271d167527be59e77" not found in pod's containers
Jun 10 19:56:26 minikube kubelet[1100]: W0610 19:56:26.026755 1100 cni.go:328] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "c588b942ff9ef0bcb4a04c3f7ff584bb909799c42af5824271d167527be59e77"
Jun 10 19:56:27 minikube kubelet[1100]: E0610 19:56:27.525670 1100 cni.go:358] Error adding kube-system_coredns-5644d7b6d9-bs8j2/13e2f33b7736e548be396bb2fdcb93d028a0c4c305ebd239b7a928413c647ad6 to network bridge/crio-bridge: failed to set bridge addr: could not add IP address to "cni0": permission denied
Jun 10 19:56:27 minikube kubelet[1100]: E0610 19:56:27.624934 1100 cni.go:379] Error deleting kube-system_coredns-5644d7b6d9-bs8j2/13e2f33b7736e548be396bb2fdcb93d028a0c4c305ebd239b7a928413c647ad6 from network bridge/crio-bridge: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.192 -j CNI-11d8a3d015e6c4572e2e383e -m comment --comment name: "crio-bridge" id: "13e2f33b7736e548be396bb2fdcb93d028a0c4c305ebd239b7a928413c647ad6" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target `CNI-11d8a3d015e6c4572e2e383e':No such file or directory
Jun 10 19:56:27 minikube kubelet[1100]: Try `iptables -h' or 'iptables --help' for more information.
Jun 10 19:56:27 minikube kubelet[1100]: E0610 19:56:27.706200 1100 cni.go:358] Error adding kube-system_coredns-5644d7b6d9-dhzdb/78e6032008d97fc555ee17f9b918d8fd2e44422f6a55a1f43033a6288da38a90 to network bridge/crio-bridge: failed to set bridge addr: could not add IP address to "cni0": permission denied
Jun 10 19:56:27 minikube kubelet[1100]: E0610 19:56:27.721026 1100 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = [failed to set up sandbox container "13e2f33b7736e548be396bb2fdcb93d028a0c4c305ebd239b7a928413c647ad6" network for pod "coredns-5644d7b6d9-bs8j2": networkPlugin cni failed to set up pod "coredns-5644d7b6d9-bs8j2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "13e2f33b7736e548be396bb2fdcb93d028a0c4c305ebd239b7a928413c647ad6" network for pod "coredns-5644d7b6d9-bs8j2": networkPlugin cni failed to teardown pod "coredns-5644d7b6d9-bs8j2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.192 -j CNI-11d8a3d015e6c4572e2e383e -m comment --comment name: "crio-bridge" id: "13e2f33b7736e548be396bb2fdcb93d028a0c4c305ebd239b7a928413c647ad6" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-11d8a3d015e6c4572e2e383e':No such file or directory Jun 10 19:56:27 minikube kubelet[1100]: Try iptables -h' or 'iptables --help' for more information.
Jun 10 19:56:27 minikube kubelet[1100]: ]
Jun 10 19:56:27 minikube kubelet[1100]: E0610 19:56:27.721059 1100 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-5644d7b6d9-bs8j2_kube-system(676c28c4-99ce-4e58-9189-ce25a87be14a)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "13e2f33b7736e548be396bb2fdcb93d028a0c4c305ebd239b7a928413c647ad6" network for pod "coredns-5644d7b6d9-bs8j2": networkPlugin cni failed to set up pod "coredns-5644d7b6d9-bs8j2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "13e2f33b7736e548be396bb2fdcb93d028a0c4c305ebd239b7a928413c647ad6" network for pod "coredns-5644d7b6d9-bs8j2": networkPlugin cni failed to teardown pod "coredns-5644d7b6d9-bs8j2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.192 -j CNI-11d8a3d015e6c4572e2e383e -m comment --comment name: "crio-bridge" id: "13e2f33b7736e548be396bb2fdcb93d028a0c4c305ebd239b7a928413c647ad6" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-11d8a3d015e6c4572e2e383e':No such file or directory Jun 10 19:56:27 minikube kubelet[1100]: Try iptables -h' or 'iptables --help' for more information.
Jun 10 19:56:27 minikube kubelet[1100]: ]
Jun 10 19:56:27 minikube kubelet[1100]: E0610 19:56:27.721067 1100 kuberuntime_manager.go:710] createPodSandbox for pod "coredns-5644d7b6d9-bs8j2_kube-system(676c28c4-99ce-4e58-9189-ce25a87be14a)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "13e2f33b7736e548be396bb2fdcb93d028a0c4c305ebd239b7a928413c647ad6" network for pod "coredns-5644d7b6d9-bs8j2": networkPlugin cni failed to set up pod "coredns-5644d7b6d9-bs8j2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "13e2f33b7736e548be396bb2fdcb93d028a0c4c305ebd239b7a928413c647ad6" network for pod "coredns-5644d7b6d9-bs8j2": networkPlugin cni failed to teardown pod "coredns-5644d7b6d9-bs8j2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.192 -j CNI-11d8a3d015e6c4572e2e383e -m comment --comment name: "crio-bridge" id: "13e2f33b7736e548be396bb2fdcb93d028a0c4c305ebd239b7a928413c647ad6" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-11d8a3d015e6c4572e2e383e':No such file or directory Jun 10 19:56:27 minikube kubelet[1100]: Try iptables -h' or 'iptables --help' for more information.
Jun 10 19:56:27 minikube kubelet[1100]: ]
Jun 10 19:56:27 minikube kubelet[1100]: E0610 19:56:27.721100 1100 pod_workers.go:191] Error syncing pod 676c28c4-99ce-4e58-9189-ce25a87be14a ("coredns-5644d7b6d9-bs8j2_kube-system(676c28c4-99ce-4e58-9189-ce25a87be14a)"), skipping: failed to "CreatePodSandbox" for "coredns-5644d7b6d9-bs8j2_kube-system(676c28c4-99ce-4e58-9189-ce25a87be14a)" with CreatePodSandboxError: "CreatePodSandbox for pod "coredns-5644d7b6d9-bs8j2_kube-system(676c28c4-99ce-4e58-9189-ce25a87be14a)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "13e2f33b7736e548be396bb2fdcb93d028a0c4c305ebd239b7a928413c647ad6" network for pod "coredns-5644d7b6d9-bs8j2": networkPlugin cni failed to set up pod "coredns-5644d7b6d9-bs8j2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "13e2f33b7736e548be396bb2fdcb93d028a0c4c305ebd239b7a928413c647ad6" network for pod "coredns-5644d7b6d9-bs8j2": networkPlugin cni failed to teardown pod "coredns-5644d7b6d9-bs8j2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.192 -j CNI-11d8a3d015e6c4572e2e383e -m comment --comment name: "crio-bridge" id: "13e2f33b7736e548be396bb2fdcb93d028a0c4c305ebd239b7a928413c647ad6" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-11d8a3d015e6c4572e2e383e':No such file or directory\n\nTry iptables -h' or 'iptables --help' for more information.\n]"
Jun 10 19:56:27 minikube kubelet[1100]: E0610 19:56:27.773467 1100 cni.go:379] Error deleting kube-system_coredns-5644d7b6d9-dhzdb/78e6032008d97fc555ee17f9b918d8fd2e44422f6a55a1f43033a6288da38a90 from network bridge/crio-bridge: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.191 -j CNI-ab775a86f094eaf654e6e888 -m comment --comment name: "crio-bridge" id: "78e6032008d97fc555ee17f9b918d8fd2e44422f6a55a1f43033a6288da38a90" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target `CNI-ab775a86f094eaf654e6e888':No such file or directory
Jun 10 19:56:27 minikube kubelet[1100]: Try `iptables -h' or 'iptables --help' for more information.
Jun 10 19:56:27 minikube kubelet[1100]: E0610 19:56:27.863281 1100 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = [failed to set up sandbox container "78e6032008d97fc555ee17f9b918d8fd2e44422f6a55a1f43033a6288da38a90" network for pod "coredns-5644d7b6d9-dhzdb": networkPlugin cni failed to set up pod "coredns-5644d7b6d9-dhzdb_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "78e6032008d97fc555ee17f9b918d8fd2e44422f6a55a1f43033a6288da38a90" network for pod "coredns-5644d7b6d9-dhzdb": networkPlugin cni failed to teardown pod "coredns-5644d7b6d9-dhzdb_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.191 -j CNI-ab775a86f094eaf654e6e888 -m comment --comment name: "crio-bridge" id: "78e6032008d97fc555ee17f9b918d8fd2e44422f6a55a1f43033a6288da38a90" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-ab775a86f094eaf654e6e888':No such file or directory Jun 10 19:56:27 minikube kubelet[1100]: Try iptables -h' or 'iptables --help' for more information.
Jun 10 19:56:27 minikube kubelet[1100]: ]
Jun 10 19:56:27 minikube kubelet[1100]: E0610 19:56:27.863309 1100 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-5644d7b6d9-dhzdb_kube-system(f7b5822f-3077-458b-a7ae-cd685b48b09a)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "78e6032008d97fc555ee17f9b918d8fd2e44422f6a55a1f43033a6288da38a90" network for pod "coredns-5644d7b6d9-dhzdb": networkPlugin cni failed to set up pod "coredns-5644d7b6d9-dhzdb_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "78e6032008d97fc555ee17f9b918d8fd2e44422f6a55a1f43033a6288da38a90" network for pod "coredns-5644d7b6d9-dhzdb": networkPlugin cni failed to teardown pod "coredns-5644d7b6d9-dhzdb_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.191 -j CNI-ab775a86f094eaf654e6e888 -m comment --comment name: "crio-bridge" id: "78e6032008d97fc555ee17f9b918d8fd2e44422f6a55a1f43033a6288da38a90" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-ab775a86f094eaf654e6e888':No such file or directory Jun 10 19:56:27 minikube kubelet[1100]: Try iptables -h' or 'iptables --help' for more information.
Jun 10 19:56:27 minikube kubelet[1100]: ]
Jun 10 19:56:27 minikube kubelet[1100]: E0610 19:56:27.863317 1100 kuberuntime_manager.go:710] createPodSandbox for pod "coredns-5644d7b6d9-dhzdb_kube-system(f7b5822f-3077-458b-a7ae-cd685b48b09a)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "78e6032008d97fc555ee17f9b918d8fd2e44422f6a55a1f43033a6288da38a90" network for pod "coredns-5644d7b6d9-dhzdb": networkPlugin cni failed to set up pod "coredns-5644d7b6d9-dhzdb_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "78e6032008d97fc555ee17f9b918d8fd2e44422f6a55a1f43033a6288da38a90" network for pod "coredns-5644d7b6d9-dhzdb": networkPlugin cni failed to teardown pod "coredns-5644d7b6d9-dhzdb_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.191 -j CNI-ab775a86f094eaf654e6e888 -m comment --comment name: "crio-bridge" id: "78e6032008d97fc555ee17f9b918d8fd2e44422f6a55a1f43033a6288da38a90" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-ab775a86f094eaf654e6e888':No such file or directory Jun 10 19:56:27 minikube kubelet[1100]: Try iptables -h' or 'iptables --help' for more information.
Jun 10 19:56:27 minikube kubelet[1100]: ]
Jun 10 19:56:27 minikube kubelet[1100]: E0610 19:56:27.863357 1100 pod_workers.go:191] Error syncing pod f7b5822f-3077-458b-a7ae-cd685b48b09a ("coredns-5644d7b6d9-dhzdb_kube-system(f7b5822f-3077-458b-a7ae-cd685b48b09a)"), skipping: failed to "CreatePodSandbox" for "coredns-5644d7b6d9-dhzdb_kube-system(f7b5822f-3077-458b-a7ae-cd685b48b09a)" with CreatePodSandboxError: "CreatePodSandbox for pod "coredns-5644d7b6d9-dhzdb_kube-system(f7b5822f-3077-458b-a7ae-cd685b48b09a)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "78e6032008d97fc555ee17f9b918d8fd2e44422f6a55a1f43033a6288da38a90" network for pod "coredns-5644d7b6d9-dhzdb": networkPlugin cni failed to set up pod "coredns-5644d7b6d9-dhzdb_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "78e6032008d97fc555ee17f9b918d8fd2e44422f6a55a1f43033a6288da38a90" network for pod "coredns-5644d7b6d9-dhzdb": networkPlugin cni failed to teardown pod "coredns-5644d7b6d9-dhzdb_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.191 -j CNI-ab775a86f094eaf654e6e888 -m comment --comment name: "crio-bridge" id: "78e6032008d97fc555ee17f9b918d8fd2e44422f6a55a1f43033a6288da38a90" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-ab775a86f094eaf654e6e888':No such file or directory\n\nTry iptables -h' or 'iptables --help' for more information.\n]"
Jun 10 19:56:28 minikube kubelet[1100]: W0610 19:56:28.074160 1100 docker_sandbox.go:394] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-5644d7b6d9-bs8j2_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "13e2f33b7736e548be396bb2fdcb93d028a0c4c305ebd239b7a928413c647ad6"
Jun 10 19:56:28 minikube kubelet[1100]: W0610 19:56:28.087167 1100 pod_container_deletor.go:75] Container "13e2f33b7736e548be396bb2fdcb93d028a0c4c305ebd239b7a928413c647ad6" not found in pod's containers
Jun 10 19:56:28 minikube kubelet[1100]: W0610 19:56:28.088640 1100 cni.go:328] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "13e2f33b7736e548be396bb2fdcb93d028a0c4c305ebd239b7a928413c647ad6"
Jun 10 19:56:28 minikube kubelet[1100]: W0610 19:56:28.091954 1100 docker_sandbox.go:394] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "coredns-5644d7b6d9-dhzdb_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "78e6032008d97fc555ee17f9b918d8fd2e44422f6a55a1f43033a6288da38a90"
Jun 10 19:56:28 minikube kubelet[1100]: W0610 19:56:28.101109 1100 pod_container_deletor.go:75] Container "78e6032008d97fc555ee17f9b918d8fd2e44422f6a55a1f43033a6288da38a90" not found in pod's containers
Jun 10 19:56:28 minikube kubelet[1100]: W0610 19:56:28.101998 1100 cni.go:328] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "78e6032008d97fc555ee17f9b918d8fd2e44422f6a55a1f43033a6288da38a90"
Jun 10 19:56:30 minikube kubelet[1100]: E0610 19:56:30.069714 1100 cni.go:358] Error adding kube-system_coredns-5644d7b6d9-dhzdb/00affd08714828121002ac2d5890937ccace9ff2b8d5081d67fb91fd599fe138 to network bridge/crio-bridge: failed to set bridge addr: could not add IP address to "cni0": permission denied
Jun 10 19:56:30 minikube kubelet[1100]: E0610 19:56:30.157058 1100 cni.go:379] Error deleting kube-system_coredns-5644d7b6d9-dhzdb/00affd08714828121002ac2d5890937ccace9ff2b8d5081d67fb91fd599fe138 from network bridge/crio-bridge: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.193 -j CNI-193654f39dd86ca648a09336 -m comment --comment name: "crio-bridge" id: "00affd08714828121002ac2d5890937ccace9ff2b8d5081d67fb91fd599fe138" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-193654f39dd86ca648a09336':No such file or directory Jun 10 19:56:30 minikube kubelet[1100]: Try iptables -h' or 'iptables --help' for more information.
Jun 10 19:56:30 minikube kubelet[1100]: E0610 19:56:30.189997 1100 cni.go:358] Error adding kube-system_coredns-5644d7b6d9-bs8j2/9bd323eb91ffbce81262ecd33c5c0dc34f349f8d48304ce2cd251fa0f9e0aabf to network bridge/crio-bridge: failed to set bridge addr: could not add IP address to "cni0": permission denied
Jun 10 19:56:30 minikube kubelet[1100]: E0610 19:56:30.259299 1100 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = [failed to set up sandbox container "00affd08714828121002ac2d5890937ccace9ff2b8d5081d67fb91fd599fe138" network for pod "coredns-5644d7b6d9-dhzdb": networkPlugin cni failed to set up pod "coredns-5644d7b6d9-dhzdb_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "00affd08714828121002ac2d5890937ccace9ff2b8d5081d67fb91fd599fe138" network for pod "coredns-5644d7b6d9-dhzdb": networkPlugin cni failed to teardown pod "coredns-5644d7b6d9-dhzdb_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.193 -j CNI-193654f39dd86ca648a09336 -m comment --comment name: "crio-bridge" id: "00affd08714828121002ac2d5890937ccace9ff2b8d5081d67fb91fd599fe138" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-193654f39dd86ca648a09336':No such file or directory Jun 10 19:56:30 minikube kubelet[1100]: Try iptables -h' or 'iptables --help' for more information.
Jun 10 19:56:30 minikube kubelet[1100]: ]
Jun 10 19:56:30 minikube kubelet[1100]: E0610 19:56:30.259333 1100 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-5644d7b6d9-dhzdb_kube-system(f7b5822f-3077-458b-a7ae-cd685b48b09a)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "00affd08714828121002ac2d5890937ccace9ff2b8d5081d67fb91fd599fe138" network for pod "coredns-5644d7b6d9-dhzdb": networkPlugin cni failed to set up pod "coredns-5644d7b6d9-dhzdb_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "00affd08714828121002ac2d5890937ccace9ff2b8d5081d67fb91fd599fe138" network for pod "coredns-5644d7b6d9-dhzdb": networkPlugin cni failed to teardown pod "coredns-5644d7b6d9-dhzdb_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.193 -j CNI-193654f39dd86ca648a09336 -m comment --comment name: "crio-bridge" id: "00affd08714828121002ac2d5890937ccace9ff2b8d5081d67fb91fd599fe138" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-193654f39dd86ca648a09336':No such file or directory Jun 10 19:56:30 minikube kubelet[1100]: Try iptables -h' or 'iptables --help' for more information.
Jun 10 19:56:30 minikube kubelet[1100]: ]
Jun 10 19:56:30 minikube kubelet[1100]: E0610 19:56:30.259341 1100 kuberuntime_manager.go:710] createPodSandbox for pod "coredns-5644d7b6d9-dhzdb_kube-system(f7b5822f-3077-458b-a7ae-cd685b48b09a)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "00affd08714828121002ac2d5890937ccace9ff2b8d5081d67fb91fd599fe138" network for pod "coredns-5644d7b6d9-dhzdb": networkPlugin cni failed to set up pod "coredns-5644d7b6d9-dhzdb_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "00affd08714828121002ac2d5890937ccace9ff2b8d5081d67fb91fd599fe138" network for pod "coredns-5644d7b6d9-dhzdb": networkPlugin cni failed to teardown pod "coredns-5644d7b6d9-dhzdb_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.193 -j CNI-193654f39dd86ca648a09336 -m comment --comment name: "crio-bridge" id: "00affd08714828121002ac2d5890937ccace9ff2b8d5081d67fb91fd599fe138" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-193654f39dd86ca648a09336':No such file or directory Jun 10 19:56:30 minikube kubelet[1100]: Try iptables -h' or 'iptables --help' for more information.
Jun 10 19:56:30 minikube kubelet[1100]: ]
Jun 10 19:56:30 minikube kubelet[1100]: E0610 19:56:30.259397 1100 pod_workers.go:191] Error syncing pod f7b5822f-3077-458b-a7ae-cd685b48b09a ("coredns-5644d7b6d9-dhzdb_kube-system(f7b5822f-3077-458b-a7ae-cd685b48b09a)"), skipping: failed to "CreatePodSandbox" for "coredns-5644d7b6d9-dhzdb_kube-system(f7b5822f-3077-458b-a7ae-cd685b48b09a)" with CreatePodSandboxError: "CreatePodSandbox for pod "coredns-5644d7b6d9-dhzdb_kube-system(f7b5822f-3077-458b-a7ae-cd685b48b09a)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "00affd08714828121002ac2d5890937ccace9ff2b8d5081d67fb91fd599fe138" network for pod "coredns-5644d7b6d9-dhzdb": networkPlugin cni failed to set up pod "coredns-5644d7b6d9-dhzdb_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "00affd08714828121002ac2d5890937ccace9ff2b8d5081d67fb91fd599fe138" network for pod "coredns-5644d7b6d9-dhzdb": networkPlugin cni failed to teardown pod "coredns-5644d7b6d9-dhzdb_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.193 -j CNI-193654f39dd86ca648a09336 -m comment --comment name: "crio-bridge" id: "00affd08714828121002ac2d5890937ccace9ff2b8d5081d67fb91fd599fe138" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-193654f39dd86ca648a09336':No such file or directory\n\nTry iptables -h' or 'iptables --help' for more information.\n]"
Jun 10 19:56:30 minikube kubelet[1100]: E0610 19:56:30.273235 1100 cni.go:379] Error deleting kube-system_coredns-5644d7b6d9-bs8j2/9bd323eb91ffbce81262ecd33c5c0dc34f349f8d48304ce2cd251fa0f9e0aabf from network bridge/crio-bridge: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.194 -j CNI-b69b1b39772ef93b00789b1b -m comment --comment name: "crio-bridge" id: "9bd323eb91ffbce81262ecd33c5c0dc34f349f8d48304ce2cd251fa0f9e0aabf" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-b69b1b39772ef93b00789b1b':No such file or directory Jun 10 19:56:30 minikube kubelet[1100]: Try iptables -h' or 'iptables --help' for more information.
Jun 10 19:56:30 minikube kubelet[1100]: E0610 19:56:30.372122 1100 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = [failed to set up sandbox container "9bd323eb91ffbce81262ecd33c5c0dc34f349f8d48304ce2cd251fa0f9e0aabf" network for pod "coredns-5644d7b6d9-bs8j2": networkPlugin cni failed to set up pod "coredns-5644d7b6d9-bs8j2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "9bd323eb91ffbce81262ecd33c5c0dc34f349f8d48304ce2cd251fa0f9e0aabf" network for pod "coredns-5644d7b6d9-bs8j2": networkPlugin cni failed to teardown pod "coredns-5644d7b6d9-bs8j2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.194 -j CNI-b69b1b39772ef93b00789b1b -m comment --comment name: "crio-bridge" id: "9bd323eb91ffbce81262ecd33c5c0dc34f349f8d48304ce2cd251fa0f9e0aabf" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-b69b1b39772ef93b00789b1b':No such file or directory Jun 10 19:56:30 minikube kubelet[1100]: Try iptables -h' or 'iptables --help' for more information.
Jun 10 19:56:30 minikube kubelet[1100]: ]
Jun 10 19:56:30 minikube kubelet[1100]: E0610 19:56:30.372164 1100 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-5644d7b6d9-bs8j2_kube-system(676c28c4-99ce-4e58-9189-ce25a87be14a)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "9bd323eb91ffbce81262ecd33c5c0dc34f349f8d48304ce2cd251fa0f9e0aabf" network for pod "coredns-5644d7b6d9-bs8j2": networkPlugin cni failed to set up pod "coredns-5644d7b6d9-bs8j2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "9bd323eb91ffbce81262ecd33c5c0dc34f349f8d48304ce2cd251fa0f9e0aabf" network for pod "coredns-5644d7b6d9-bs8j2": networkPlugin cni failed to teardown pod "coredns-5644d7b6d9-bs8j2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.194 -j CNI-b69b1b39772ef93b00789b1b -m comment --comment name: "crio-bridge" id: "9bd323eb91ffbce81262ecd33c5c0dc34f349f8d48304ce2cd251fa0f9e0aabf" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-b69b1b39772ef93b00789b1b':No such file or directory Jun 10 19:56:30 minikube kubelet[1100]: Try iptables -h' or 'iptables --help' for more information.
Jun 10 19:56:30 minikube kubelet[1100]: ]
Jun 10 19:56:30 minikube kubelet[1100]: E0610 19:56:30.372175 1100 kuberuntime_manager.go:710] createPodSandbox for pod "coredns-5644d7b6d9-bs8j2_kube-system(676c28c4-99ce-4e58-9189-ce25a87be14a)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "9bd323eb91ffbce81262ecd33c5c0dc34f349f8d48304ce2cd251fa0f9e0aabf" network for pod "coredns-5644d7b6d9-bs8j2": networkPlugin cni failed to set up pod "coredns-5644d7b6d9-bs8j2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "9bd323eb91ffbce81262ecd33c5c0dc34f349f8d48304ce2cd251fa0f9e0aabf" network for pod "coredns-5644d7b6d9-bs8j2": networkPlugin cni failed to teardown pod "coredns-5644d7b6d9-bs8j2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.194 -j CNI-b69b1b39772ef93b00789b1b -m comment --comment name: "crio-bridge" id: "9bd323eb91ffbce81262ecd33c5c0dc34f349f8d48304ce2cd251fa0f9e0aabf" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-b69b1b39772ef93b00789b1b':No such file or directory Jun 10 19:56:30 minikube kubelet[1100]: Try iptables -h' or 'iptables --help' for more information.
Jun 10 19:56:30 minikube kubelet[1100]: ]
Jun 10 19:56:30 minikube kubelet[1100]: E0610 19:56:30.372220 1100 pod_workers.go:191] Error syncing pod 676c28c4-99ce-4e58-9189-ce25a87be14a ("coredns-5644d7b6d9-bs8j2_kube-system(676c28c4-99ce-4e58-9189-ce25a87be14a)"), skipping: failed to "CreatePodSandbox" for "coredns-5644d7b6d9-bs8j2_kube-system(676c28c4-99ce-4e58-9189-ce25a87be14a)" with CreatePodSandboxError: "CreatePodSandbox for pod "coredns-5644d7b6d9-bs8j2_kube-system(676c28c4-99ce-4e58-9189-ce25a87be14a)" failed: rpc error: code = Unknown desc = [failed to set up sandbox container "9bd323eb91ffbce81262ecd33c5c0dc34f349f8d48304ce2cd251fa0f9e0aabf" network for pod "coredns-5644d7b6d9-bs8j2": networkPlugin cni failed to set up pod "coredns-5644d7b6d9-bs8j2_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "9bd323eb91ffbce81262ecd33c5c0dc34f349f8d48304ce2cd251fa0f9e0aabf" network for pod "coredns-5644d7b6d9-bs8j2": networkPlugin cni failed to teardown pod "coredns-5644d7b6d9-bs8j2_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.88.0.194 -j CNI-b69b1b39772ef93b00789b1b -m comment --comment name: "crio-bridge" id: "9bd323eb91ffbce81262ecd33c5c0dc34f349f8d48304ce2cd251fa0f9e0aabf" --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target CNI-b69b1b39772ef93b00789b1b':No such file or directory\n\nTry iptables -h' or 'iptables --help' for more information.\n]"

==> storage-provisioner [0759f69a4af6] <==

==> storage-provisioner [81c45417ced3] <==
F0610 19:52:27.508017 1 main.go:37] Error getting server version: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout

@medyagh medyagh added area/cni CNI support kind/support Categorizes issue or PR as a support question. labels Jun 10, 2020
@medyagh
Member

medyagh commented Jun 10, 2020

@AurelienGasser is there a reason you provide --network-plugin=cni? The docker runtime does not need a CNI and comes with one by default.

@medyagh
Member

medyagh commented Jun 10, 2020

@AurelienGasser I also noticed:

I0610 15:51:39.356574 2332048 api_server.go:136] control plane version: v1.16.6-beta.0
W

You are using a k8s version with known upstream problems:

kubernetes/kubernetes#87424

Maybe try a newer k8s version?
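
For example, something along these lines should work (illustrative only; v1.16.10 is just one newer patch release, and any other flags you already pass can stay the same):

minikube start --kubernetes-version=v1.16.10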

@AurelienGasser
Author

AurelienGasser commented Jun 11, 2020

@medyagh I provide --network-plugin=cni to be able to use Calico.

I get the same error if I use either

--network-plugin=cni

or

--extra-config=kubelet.network-plugin=cni

or both.

With Kubernetes v1.16.10 the control plane version issue is gone, but the main error is still there:

networkPlugin cni failed to set up pod "coredns-5644d7b6d9-45s55_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied
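
For reference, the kind of invocation that still reproduces this (illustrative; only the driver, Kubernetes version, and network-plugin flag discussed in this thread are shown, other flags omitted):

minikube start --driver=docker --kubernetes-version=v1.16.10 --network-plugin=cni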

@medyagh
Member

medyagh commented Jun 11, 2020

Hmm, I have not tested the docker driver with CNI. Could you please try the KVM driver and see if you have the same issue?

@AurelienGasser
Author

AurelienGasser commented Jun 11, 2020

@medyagh No issue with the KVM driver (using --network-plugin=cni)
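
For comparison, the working KVM run looks roughly like this (illustrative; the kvm2 driver name is an assumption here, other flags omitted):

minikube start --driver=kvm2 --network-plugin=cni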

@medyagh
Member

medyagh commented Jun 11, 2020

OK, thanks for confirming this. This is a bug and we should fix it!

Could you please confirm something else? Can you try the docker driver with the containerd runtime and see if it works with that?

minikube start --container-runtime=containerd
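
A fuller version of that test, with the flags already in play in this thread spelled out (illustrative, not a prescribed command):

minikube start --driver=docker --container-runtime=containerd --network-plugin=cni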

@medyagh medyagh added priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. kind/bug Categorizes issue or PR as related to a bug. labels Jun 11, 2020
@medyagh medyagh changed the title docker: Permission denied using CNI network plugin docker driver: Permission denied using CNI network plugin Jun 11, 2020
@AurelienGasser
Author

@medmedchiheb No error with containerd

@medyagh
Member

medyagh commented Jun 11, 2020

Thanks for confirming this, @AurelienGasser.
We will have to fix CNI with the docker runtime on the docker driver.

@medyagh
Member

medyagh commented Jul 15, 2020

@AurelienGasser I am curious: does this error happen in v1.12.0?

@AurelienGasser
Author

Hi @medyagh, the error persists in v1.12.0.

@tstromberg tstromberg changed the title docker driver: Permission denied using CNI network plugin docker driver: Permission denied if --network-plugin=cni Jul 22, 2020
@tstromberg
Contributor

tstromberg commented Sep 23, 2020

The TL;DR here is that if you specify --network-plugin=cni, you need to provide a CNI for CoreDNS to come up successfully.

Please note that this flag is deprecated, but for some reason we hide the deprecation notice in the logs instead of showing it to the user. The new equivalent is --cni=/path/to/cni.yaml, or one of the other options:

--cni='': CNI plug-in to use. Valid options: auto, bridge, calico, cilium, flannel, kindnet, or path to a CNI manifest (default: auto)

I can verify that, for instance, minikube start --cni=bridge --driver=docker does not create this issue.
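
Since the original goal here was Calico, the newer-flag equivalent would presumably be something like this (illustrative, based on the --cni options listed above):

minikube start --driver=docker --cni=calico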

Renaming the issue to capture the primary remaining issue.

@tstromberg tstromberg changed the title docker driver: Permission denied if --network-plugin=cni Add deprecation notice for --network-plugin=cni Sep 23, 2020
@tstromberg tstromberg changed the title Add deprecation notice for --network-plugin=cni Add deprecation notice for --network-plugin=cni (new flag: --cni) Sep 23, 2020
@tstromberg tstromberg changed the title Add deprecation notice for --network-plugin=cni (new flag: --cni) Add deprecation warning for --network-plugin=cni (new flag: --cni) Sep 23, 2020
@tstromberg tstromberg added kind/cleanup Categorizes issue or PR as related to cleaning up code, process, or technical debt. priority/backlog Higher priority than priority/awaiting-more-evidence. good first issue Denotes an issue ready for a new contributor, according to the "help wanted" guidelines. help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. and removed kind/bug Categorizes issue or PR as related to a bug. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. kind/support Categorizes issue or PR as a support question. labels Sep 23, 2020
@tstromberg tstromberg changed the title Add deprecation warning for --network-plugin=cni (new flag: --cni) Add warning for --network-plugin=cni (CNI has to be provided, see --cni) Oct 2, 2020
@tstromberg
Contributor

Fixing the title because I was a bit in error here: the deprecated flag is actually --enable-default-cni, which should show a deprecation warning. --network-plugin isn't deprecated. That said, it can lead to unexpected behavior, so we should show a warning for it as well.
