Error enabling csi-hostpth-driver addon #9633

Closed
mcortinas opened this issue Nov 8, 2020 · 2 comments

Labels
kind/support Categorizes issue or PR as a support question.

Comments

@mcortinas

Steps to reproduce the issue:

1. minikube start
2. minikube addons enable csi-hostpth-driver

❌ Exiting due to MK_ENABLE: run callbacks: csi-hostpth-driver is not a valid addon
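
The addon name in step 2 looks misspelled: minikube spells this addon csi-hostpath-driver (note "hostpath", not "hostpth"), and it may not even be shipped in v1.14.2, since the CSI hostpath addon appears only in later releases. A minimal check, using only standard minikube commands:

minikube addons list                          # prints every addon name this minikube binary actually supports
minikube addons enable csi-hostpath-driver   # corrected spelling; succeeds only on releases that include the addon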

Full output of failed command:

I1108 10:50:08.918156  429068 addons.go:55] Setting csi-hostpth-driver=true in profile "minikube"
I1108 10:50:08.918526  429068 out.go:110] 

W1108 10:50:08.918609  429068 out.go:146] ❌  Exiting due to MK_ENABLE: run callbacks: csi-hostpth-driver is not a valid addon
❌  Exiting due to MK_ENABLE: run callbacks: csi-hostpth-driver is not a valid addon
W1108 10:50:08.918777  429068 out.go:146] 

W1108 10:50:08.918810  429068 out.go:146] 😿  If the above advice does not help, please let us know: 
😿  If the above advice does not help, please let us know: 
W1108 10:50:08.918842  429068 out.go:146] 👉  https://github.com/kubernetes/minikube/issues/new/choose
👉  https://github.com/kubernetes/minikube/issues/new/choose
I1108 10:50:08.918857  429068 out.go:110] 

Full output of minikube start command used, if not already included:

minikube start
😄 minikube v1.14.2 on Fedora 32
✨ Automatically selected the docker driver
👍 Starting control plane node minikube in cluster minikube
🔥 Creating docker container (CPUs=2, Memory=3900MB) ...

🧯 Docker is nearly out of disk space, which may cause deployments to fail! (95% of capacity)
💡 Suggestion:

Try at least one of the following to free up space on the device:

1. Run "docker system prune" to remove unused docker data
2. Increase the amount of memory allocated to Docker for Desktop via
Docker icon > Preferences > Resources > Disk Image Size
3. Run "minikube ssh -- docker system prune" if using the docker container runtime

🍿 Related issue: #9024
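
Before pruning anything, it can help to see where the space is actually going; docker system df is the standard Docker command for that:

docker system df       # summary of disk usage by images, containers, local volumes, and build cache
docker system df -v    # verbose per-image / per-container / per-volume breakdown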

🐳 Preparing Kubernetes v1.19.2 on Docker 19.03.8 ...
🔎 Verifying Kubernetes components...
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" by default

Optional: Full output of minikube logs command:

==> Docker <==
-- Logs begin at Sun 2020-11-08 09:54:14 UTC, end at Sun 2020-11-08 09:58:23 UTC. --
Nov 08 09:54:15 minikube systemd[1]: Starting Docker Application Container Engine...
Nov 08 09:54:15 minikube dockerd[157]: time="2020-11-08T09:54:15.093898587Z" level=info msg="Starting up"
Nov 08 09:54:15 minikube dockerd[157]: time="2020-11-08T09:54:15.095179676Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Nov 08 09:54:15 minikube dockerd[157]: time="2020-11-08T09:54:15.095216419Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Nov 08 09:54:15 minikube dockerd[157]: time="2020-11-08T09:54:15.095236269Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Nov 08 09:54:15 minikube dockerd[157]: time="2020-11-08T09:54:15.095248639Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Nov 08 09:54:15 minikube dockerd[157]: time="2020-11-08T09:54:15.106866514Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Nov 08 09:54:15 minikube dockerd[157]: time="2020-11-08T09:54:15.106890635Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Nov 08 09:54:15 minikube dockerd[157]: time="2020-11-08T09:54:15.106906244Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Nov 08 09:54:15 minikube dockerd[157]: time="2020-11-08T09:54:15.106918299Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Nov 08 09:54:15 minikube dockerd[157]: time="2020-11-08T09:54:15.521899897Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Nov 08 09:54:15 minikube dockerd[157]: time="2020-11-08T09:54:15.546790861Z" level=warning msg="Your kernel does not support cgroup rt period"
Nov 08 09:54:15 minikube dockerd[157]: time="2020-11-08T09:54:15.546815051Z" level=warning msg="Your kernel does not support cgroup rt runtime"
Nov 08 09:54:15 minikube dockerd[157]: time="2020-11-08T09:54:15.546821826Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Nov 08 09:54:15 minikube dockerd[157]: time="2020-11-08T09:54:15.546827367Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Nov 08 09:54:15 minikube dockerd[157]: time="2020-11-08T09:54:15.546977826Z" level=info msg="Loading containers: start."
Nov 08 09:54:15 minikube dockerd[157]: time="2020-11-08T09:54:15.674758297Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Nov 08 09:54:15 minikube dockerd[157]: time="2020-11-08T09:54:15.738270900Z" level=info msg="Loading containers: done."
Nov 08 09:54:15 minikube dockerd[157]: time="2020-11-08T09:54:15.924559280Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8
Nov 08 09:54:15 minikube dockerd[157]: time="2020-11-08T09:54:15.924658935Z" level=info msg="Daemon has completed initialization"
Nov 08 09:54:15 minikube dockerd[157]: time="2020-11-08T09:54:15.953144276Z" level=info msg="API listen on /run/docker.sock"
Nov 08 09:54:15 minikube systemd[1]: Started Docker Application Container Engine.
Nov 08 09:54:25 minikube systemd[1]: docker.service: Current command vanished from the unit file, execution of the command list won't be resumed.
Nov 08 09:54:25 minikube systemd[1]: Stopping Docker Application Container Engine...
Nov 08 09:54:25 minikube dockerd[157]: time="2020-11-08T09:54:25.546750201Z" level=info msg="Processing signal 'terminated'"
Nov 08 09:54:25 minikube dockerd[157]: time="2020-11-08T09:54:25.550273543Z" level=info msg="Daemon shutdown complete"
Nov 08 09:54:25 minikube systemd[1]: docker.service: Succeeded.
Nov 08 09:54:25 minikube systemd[1]: Stopped Docker Application Container Engine.
Nov 08 09:54:25 minikube systemd[1]: Starting Docker Application Container Engine...
Nov 08 09:54:25 minikube dockerd[391]: time="2020-11-08T09:54:25.669489783Z" level=info msg="Starting up"
Nov 08 09:54:25 minikube dockerd[391]: time="2020-11-08T09:54:25.674010082Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Nov 08 09:54:25 minikube dockerd[391]: time="2020-11-08T09:54:25.674110561Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Nov 08 09:54:25 minikube dockerd[391]: time="2020-11-08T09:54:25.674153564Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Nov 08 09:54:25 minikube dockerd[391]: time="2020-11-08T09:54:25.674175798Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Nov 08 09:54:25 minikube dockerd[391]: time="2020-11-08T09:54:25.676073898Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Nov 08 09:54:25 minikube dockerd[391]: time="2020-11-08T09:54:25.676111346Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Nov 08 09:54:25 minikube dockerd[391]: time="2020-11-08T09:54:25.676134049Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Nov 08 09:54:25 minikube dockerd[391]: time="2020-11-08T09:54:25.676146142Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Nov 08 09:54:25 minikube dockerd[391]: time="2020-11-08T09:54:25.727560867Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Nov 08 09:54:25 minikube dockerd[391]: time="2020-11-08T09:54:25.777698834Z" level=warning msg="Your kernel does not support cgroup rt period"
Nov 08 09:54:25 minikube dockerd[391]: time="2020-11-08T09:54:25.777774061Z" level=warning msg="Your kernel does not support cgroup rt runtime"
Nov 08 09:54:25 minikube dockerd[391]: time="2020-11-08T09:54:25.777798042Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Nov 08 09:54:25 minikube dockerd[391]: time="2020-11-08T09:54:25.777817366Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Nov 08 09:54:25 minikube dockerd[391]: time="2020-11-08T09:54:25.778297534Z" level=info msg="Loading containers: start."
Nov 08 09:54:25 minikube dockerd[391]: time="2020-11-08T09:54:25.957920140Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Nov 08 09:54:26 minikube dockerd[391]: time="2020-11-08T09:54:26.065149683Z" level=info msg="Loading containers: done."
Nov 08 09:54:26 minikube dockerd[391]: time="2020-11-08T09:54:26.137721206Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8
Nov 08 09:54:26 minikube dockerd[391]: time="2020-11-08T09:54:26.137801325Z" level=info msg="Daemon has completed initialization"
Nov 08 09:54:26 minikube dockerd[391]: time="2020-11-08T09:54:26.182772931Z" level=info msg="API listen on /var/run/docker.sock"
Nov 08 09:54:26 minikube dockerd[391]: time="2020-11-08T09:54:26.182778814Z" level=info msg="API listen on [::]:2376"
Nov 08 09:54:26 minikube systemd[1]: Started Docker Application Container Engine.

==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
3038b7190915a bad58561c4be7 3 minutes ago Running storage-provisioner 0 c645140e7063d
5c2b63e69a4dd bfe3a36ebd252 3 minutes ago Running coredns 0 28a505c7025da
bd0541d19c6b4 d373dd5a8593a 3 minutes ago Running kube-proxy 0 6238bdc751afd
df4785f1698f4 607331163122e 3 minutes ago Running kube-apiserver 0 217c7ad21566a
3a862b689fbe0 8603821e1a7a5 3 minutes ago Running kube-controller-manager 0 40d9ab485188a
b0ac70ca337b9 0369cf4303ffd 3 minutes ago Running etcd 0 dac61768b9689
f3ffdc3d9d772 2f32d66b884f8 3 minutes ago Running kube-scheduler 0 68a5c8072c040

==> coredns [5c2b63e69a4d] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.7.0
linux/amd64, go1.14.4, f59c03d

==> describe nodes <==
Name: minikube
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=minikube
kubernetes.io/os=linux
minikube.k8s.io/commit=2c82918e2347188e21c4e44c8056fc80408bce10
minikube.k8s.io/name=minikube
minikube.k8s.io/updated_at=2020_11_08T10_54_52_0700
minikube.k8s.io/version=v1.14.2
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sun, 08 Nov 2020 09:54:48 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: minikube
AcquireTime: <unset>
RenewTime: Sun, 08 Nov 2020 09:58:19 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sun, 08 Nov 2020 09:55:10 +0000 Sun, 08 Nov 2020 09:54:42 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sun, 08 Nov 2020 09:55:10 +0000 Sun, 08 Nov 2020 09:54:42 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sun, 08 Nov 2020 09:55:10 +0000 Sun, 08 Nov 2020 09:54:42 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sun, 08 Nov 2020 09:55:10 +0000 Sun, 08 Nov 2020 09:55:10 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: minikube
Capacity:
cpu: 8
ephemeral-storage: 51343840Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 16231256Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 51343840Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 16231256Ki
pods: 110
System Info:
Machine ID: 43555588164c40469990b6bee73852d4
System UUID: 25ff943a-a514-4294-b187-f2df90759f2f
Boot ID: d8056ebd-9392-4f7f-8307-2767193bdfbb
Kernel Version: 5.8.14-200.fc32.x86_64
OS Image: Ubuntu 20.04 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://19.3.8
Kubelet Version: v1.19.2
Kube-Proxy Version: v1.19.2
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-f9fd979d6-mknv9 100m (1%) 0 (0%) 70Mi (0%) 170Mi (1%) 3m26s
kube-system etcd-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m24s
kube-system kube-apiserver-minikube 250m (3%) 0 (0%) 0 (0%) 0 (0%) 3m24s
kube-system kube-controller-manager-minikube 200m (2%) 0 (0%) 0 (0%) 0 (0%) 3m24s
kube-system kube-proxy-rt7wn 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m26s
kube-system kube-scheduler-minikube 100m (1%) 0 (0%) 0 (0%) 0 (0%) 3m24s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m29s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 650m (8%) 0 (0%)
memory 70Mi (0%) 170Mi (1%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ --- ---- -------
Normal NodeHasSufficientMemory 3m45s (x5 over 3m45s) kubelet Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 3m45s (x5 over 3m45s) kubelet Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 3m45s (x4 over 3m45s) kubelet Node minikube status is now: NodeHasSufficientPID
Normal Starting 3m25s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 3m25s kubelet Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 3m25s kubelet Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 3m25s kubelet Node minikube status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 3m24s kubelet Updated Node Allocatable limit across pods
Normal Starting 3m23s kube-proxy Starting kube-proxy.
Normal NodeReady 3m14s kubelet Node minikube status is now: NodeReady

==> dmesg <==
[ +0.255400] iwlwifi 0000:01:00.0: FW already configured (0) - re-configuring
[ +7.916213] L1TF CPU bug present and SMT on, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/l1tf.html for details.
[ +2.052729] process 'docker/tmp/qemu-check417060378/check' started with executable stack
[Nov 7 14:42] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to [email protected] if you depend on this functionality.
[Nov 7 18:22] IRQ 127: no longer affine to CPU4
[ +0.007868] IRQ 123: no longer affine to CPU5
[ +0.000006] IRQ 129: no longer affine to CPU5
[ +0.000008] IRQ 150: no longer affine to CPU5
[ +0.007405] IRQ 16: no longer affine to CPU6
[ +0.000007] IRQ 122: no longer affine to CPU6
[ +0.000008] IRQ 132: no longer affine to CPU6
[ +0.009266] IRQ 128: no longer affine to CPU7
[ +0.020711] smpboot: Scheduler frequency invariance went wobbly, disabling!
[ +1.586496] iwlwifi 0000:01:00.0: FW already configured (0) - re-configuring
[ +0.088905] ata3.00: supports DRM functions and may not be fully accessible
[ +0.004307] ata3.00: supports DRM functions and may not be fully accessible
[ +1.093728] iwlwifi 0000:01:00.0: FW already configured (0) - re-configuring
[Nov 7 21:03] atkbd serio0: Unknown key pressed (translated set 2, code 0x85 on isa0060/serio0).
[ +0.000006] atkbd serio0: Use 'setkeycodes e005 <keycode>' to make it known.
[ +4.668627] IRQ 127: no longer affine to CPU4
[ +0.012102] IRQ 16: no longer affine to CPU5
[ +0.000018] IRQ 128: no longer affine to CPU5
[ +0.011712] IRQ 122: no longer affine to CPU6
[ +0.000019] IRQ 132: no longer affine to CPU6
[ +0.012482] IRQ 125: no longer affine to CPU7
[ +0.000017] IRQ 129: no longer affine to CPU7
[ +0.000019] IRQ 150: no longer affine to CPU7
[ +1.589960] iwlwifi 0000:01:00.0: FW already configured (0) - re-configuring
[ +0.094368] ata3.00: supports DRM functions and may not be fully accessible
[ +0.004006] ata3.00: supports DRM functions and may not be fully accessible
[ +0.454988] done.
[ +1.247886] iwlwifi 0000:01:00.0: FW already configured (0) - re-configuring
[ +0.298010] iwlwifi 0000:01:00.0: FW already configured (0) - re-configuring
[Nov 7 21:26] IRQ 127: no longer affine to CPU4
[ +0.004707] IRQ 16: no longer affine to CPU5
[ +0.000011] IRQ 128: no longer affine to CPU5
[ +0.004630] IRQ 122: no longer affine to CPU6
[ +0.000010] IRQ 132: no longer affine to CPU6
[ +0.005957] IRQ 125: no longer affine to CPU7
[ +0.000013] IRQ 129: no longer affine to CPU7
[ +0.000009] IRQ 150: no longer affine to CPU7
[ +1.564870] iwlwifi 0000:01:00.0: FW already configured (0) - re-configuring
[ +0.096925] ata3.00: supports DRM functions and may not be fully accessible
[ +0.004391] ata3.00: supports DRM functions and may not be fully accessible
[ +0.419092] done.
[ +0.701865] iwlwifi 0000:01:00.0: FW already configured (0) - re-configuring
[ +0.280721] iwlwifi 0000:01:00.0: FW already configured (0) - re-configuring
[Nov 7 22:34] IRQ 127: no longer affine to CPU4
[ +0.009710] IRQ 123: no longer affine to CPU5
[ +0.000011] IRQ 129: no longer affine to CPU5
[ +0.000013] IRQ 150: no longer affine to CPU5
[ +0.008507] IRQ 16: no longer affine to CPU6
[ +0.000011] IRQ 122: no longer affine to CPU6
[ +0.000011] IRQ 132: no longer affine to CPU6
[ +0.008333] IRQ 128: no longer affine to CPU7
[ +1.605873] iwlwifi 0000:01:00.0: FW already configured (0) - re-configuring
[ +0.097351] ata3.00: supports DRM functions and may not be fully accessible
[ +0.004479] ata3.00: supports DRM functions and may not be fully accessible
[ +0.435064] done.
[ +0.710904] iwlwifi 0000:01:00.0: FW already configured (0) - re-configuring

==> etcd [b0ac70ca337b] <==
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-11-08 09:54:42.572789 I | etcdmain: etcd Version: 3.4.13
2020-11-08 09:54:42.572939 I | etcdmain: Git SHA: ae9734ed2
2020-11-08 09:54:42.572957 I | etcdmain: Go Version: go1.12.17
2020-11-08 09:54:42.572971 I | etcdmain: Go OS/Arch: linux/amd64
2020-11-08 09:54:42.572989 I | etcdmain: setting maximum number of CPUs to 8, total number of available CPUs is 8
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-11-08 09:54:42.575399 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-11-08 09:54:42.654984 I | embed: name = minikube
2020-11-08 09:54:42.655272 I | embed: data dir = /var/lib/minikube/etcd
2020-11-08 09:54:42.655433 I | embed: member dir = /var/lib/minikube/etcd/member
2020-11-08 09:54:42.655615 I | embed: heartbeat = 100ms
2020-11-08 09:54:42.655751 I | embed: election = 1000ms
2020-11-08 09:54:42.655786 I | embed: snapshot count = 10000
2020-11-08 09:54:42.655860 I | embed: advertise client URLs = https://192.168.49.2:2379
2020-11-08 09:54:42.849790 I | etcdserver: starting member aec36adc501070cc in cluster fa54960ea34d58be
raft2020/11/08 09:54:42 INFO: aec36adc501070cc switched to configuration voters=()
raft2020/11/08 09:54:42 INFO: aec36adc501070cc became follower at term 0
raft2020/11/08 09:54:42 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
raft2020/11/08 09:54:42 INFO: aec36adc501070cc became follower at term 1
raft2020/11/08 09:54:42 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
2020-11-08 09:54:42.869778 W | auth: simple token is not cryptographically signed
2020-11-08 09:54:42.902076 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
2020-11-08 09:54:42.902718 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
raft2020/11/08 09:54:42 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
2020-11-08 09:54:42.951139 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
2020-11-08 09:54:42.964002 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-11-08 09:54:42.964264 I | embed: listening for peers on 192.168.49.2:2380
2020-11-08 09:54:42.964767 I | embed: listening for metrics on http://127.0.0.1:2381
raft2020/11/08 09:54:43 INFO: aec36adc501070cc is starting a new election at term 1
raft2020/11/08 09:54:43 INFO: aec36adc501070cc became candidate at term 2
raft2020/11/08 09:54:43 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
raft2020/11/08 09:54:43 INFO: aec36adc501070cc became leader at term 2
raft2020/11/08 09:54:43 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
2020-11-08 09:54:43.765645 I | etcdserver: setting up the initial cluster version to 3.4
2020-11-08 09:54:43.771078 N | etcdserver/membership: set the initial cluster version to 3.4
2020-11-08 09:54:43.771325 I | etcdserver/api: enabled capabilities for version 3.4
2020-11-08 09:54:43.771692 I | etcdserver: published {Name:minikube ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
2020-11-08 09:54:43.771756 I | embed: ready to serve client requests
2020-11-08 09:54:43.772135 I | embed: ready to serve client requests
2020-11-08 09:54:43.862956 I | embed: serving client requests on 192.168.49.2:2379
2020-11-08 09:54:43.866395 I | embed: serving client requests on 127.0.0.1:2379
E1108 10:56:45.437763 442679 out.go:286] unable to execute 2020-11-08 09:54:58.671962 W | etcdserver: request "header:<ID:8128000788816606011 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/serviceaccounts/kube-public/default" mod_revision:305 > success:<request_put:<key:"/registry/serviceaccounts/kube-public/default" value_size:149 >> failure:<request_range:<key:"/registry/serviceaccounts/kube-public/default" > >>" with result "size:16" took too long (102.958936ms) to execute
: html/template:2020-11-08 09:54:58.671962 W | etcdserver: request "header:<ID:8128000788816606011 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/serviceaccounts/kube-public/default" mod_revision:305 > success:<request_put:<key:"/registry/serviceaccounts/kube-public/default" value_size:149 >> failure:<request_range:<key:"/registry/serviceaccounts/kube-public/default" > >>" with result "size:16" took too long (102.958936ms) to execute
: "<" in attribute name: " username:\"kube-apiserver-etcd-" - returning raw string.
2020-11-08 09:54:58.671962 W | etcdserver: request "header:<ID:8128000788816606011 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/serviceaccounts/kube-public/default" mod_revision:305 > success:<request_put:<key:"/registry/serviceaccounts/kube-public/default" value_size:149 >> failure:<request_range:<key:"/registry/serviceaccounts/kube-public/default" > >>" with result "size:16" took too long (102.958936ms) to execute
2020-11-08 09:54:58.672303 W | etcdserver: read-only range request "key:"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath" " with result "range_response_count:1 size:729" took too long (180.87193ms) to execute
2020-11-08 09:54:58.866175 W | etcdserver: read-only range request "key:"/registry/clusterroles/edit" " with result "range_response_count:1 size:3252" took too long (106.327848ms) to execute
2020-11-08 09:54:58.866507 W | etcdserver: read-only range request "key:"/registry/clusterroles/admin" " with result "range_response_count:1 size:2109" took too long (105.244704ms) to execute
2020-11-08 09:54:58.867243 W | etcdserver: read-only range request "key:"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath" " with result "range_response_count:0 size:5" took too long (107.656716ms) to execute
2020-11-08 09:55:00.864663 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-08 09:55:01.608015 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-08 09:55:11.608882 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-08 09:55:21.608717 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-08 09:55:31.608147 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-08 09:55:41.608266 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-08 09:55:51.609436 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-08 09:56:01.608822 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-08 09:56:11.608814 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-08 09:56:21.608119 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-08 09:56:31.608990 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-11-08 09:56:41.608162 I | etcdserver/api/etcdhttp: /health OK (status code 200)

==> kernel <==
09:56:45 up 19:16, 0 users, load average: 2.16, 2.42, 2.04
Linux minikube 5.8.14-200.fc32.x86_64 #1 SMP Wed Oct 7 14:47:56 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04 LTS"

==> kube-apiserver [df4785f1698f] <==
I1108 09:54:46.261355 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I1108 09:54:46.267722 1 client.go:360] parsed scheme: "endpoint"
I1108 09:54:46.267743 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I1108 09:54:48.033726 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I1108 09:54:48.033727 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I1108 09:54:48.034016 1 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
I1108 09:54:48.034353 1 secure_serving.go:197] Serving securely on [::]:8443
I1108 09:54:48.034464 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I1108 09:54:48.034494 1 autoregister_controller.go:141] Starting autoregister controller
I1108 09:54:48.034492 1 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key
I1108 09:54:48.034505 1 cache.go:32] Waiting for caches to sync for autoregister controller
I1108 09:54:48.034526 1 controller.go:83] Starting OpenAPI AggregationController
I1108 09:54:48.034547 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I1108 09:54:48.034552 1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
I1108 09:54:48.035000 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I1108 09:54:48.035077 1 available_controller.go:404] Starting AvailableConditionController
I1108 09:54:48.035101 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I1108 09:54:48.035080 1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
I1108 09:54:48.035305 1 apiservice_controller.go:97] Starting APIServiceRegistrationController
I1108 09:54:48.035312 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I1108 09:54:48.035307 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I1108 09:54:48.035316 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I1108 09:54:48.035496 1 customresource_discovery_controller.go:209] Starting DiscoveryController
I1108 09:54:48.035560 1 controller.go:86] Starting OpenAPI controller
I1108 09:54:48.035601 1 naming_controller.go:291] Starting NamingConditionController
I1108 09:54:48.035661 1 establishing_controller.go:76] Starting EstablishingController
I1108 09:54:48.035677 1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
I1108 09:54:48.035720 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I1108 09:54:48.035782 1 crd_finalizer.go:266] Starting CRDFinalizer
E1108 09:54:48.041208 1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg:
I1108 09:54:48.134564 1 cache.go:39] Caches are synced for autoregister controller
I1108 09:54:48.134589 1 shared_informer.go:247] Caches are synced for crd-autoregister
I1108 09:54:48.135141 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
I1108 09:54:48.135143 1 cache.go:39] Caches are synced for AvailableConditionController controller
I1108 09:54:48.135442 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1108 09:54:49.033787 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I1108 09:54:49.033900 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1108 09:54:49.048927 1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
I1108 09:54:49.064289 1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
I1108 09:54:49.064365 1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
I1108 09:54:50.378563 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1108 09:54:50.501731 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W1108 09:54:50.725403 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
I1108 09:54:50.728763 1 controller.go:606] quota admission added evaluator for: endpoints
I1108 09:54:50.740268 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1108 09:54:51.428975 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I1108 09:54:52.454709 1 controller.go:606] quota admission added evaluator for: deployments.apps
I1108 09:54:52.672389 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I1108 09:54:58.451973 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I1108 09:54:58.475979 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I1108 09:54:59.831907 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I1108 09:55:21.998241 1 client.go:360] parsed scheme: "passthrough"
I1108 09:55:21.998316 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I1108 09:55:21.998335 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1108 09:55:52.890082 1 client.go:360] parsed scheme: "passthrough"
I1108 09:55:52.890367 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I1108 09:55:52.890417 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1108 09:56:26.298315 1 client.go:360] parsed scheme: "passthrough"
I1108 09:56:26.298436 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I1108 09:56:26.298471 1 clientconn.go:948] ClientConn switching balancer to "pick_first"

==> kube-controller-manager [3a862b689fbe] <==
I1108 09:54:57.970689 1 shared_informer.go:240] Waiting for caches to sync for expand
I1108 09:54:58.219722 1 controllermanager.go:549] Started "podgc"
I1108 09:54:58.219895 1 gc_controller.go:89] Starting GC controller
I1108 09:54:58.219924 1 shared_informer.go:240] Waiting for caches to sync for GC
I1108 09:54:58.369574 1 controllermanager.go:549] Started "csrapproving"
W1108 09:54:58.369628 1 controllermanager.go:541] Skipping "nodeipam"
I1108 09:54:58.370326 1 certificate_controller.go:118] Starting certificate controller "csrapproving"
I1108 09:54:58.370370 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrapproving
W1108 09:54:58.388374 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I1108 09:54:58.390465 1 shared_informer.go:247] Caches are synced for namespace
I1108 09:54:58.390821 1 shared_informer.go:247] Caches are synced for ReplicaSet
I1108 09:54:58.391135 1 shared_informer.go:247] Caches are synced for TTL
I1108 09:54:58.412206 1 shared_informer.go:247] Caches are synced for service account
I1108 09:54:58.417273 1 shared_informer.go:247] Caches are synced for PVC protection
I1108 09:54:58.417349 1 shared_informer.go:247] Caches are synced for taint
I1108 09:54:58.417437 1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone:
I1108 09:54:58.417444 1 taint_manager.go:187] Starting NoExecuteTaintManager
W1108 09:54:58.417483 1 node_lifecycle_controller.go:1044] Missing timestamp for Node minikube. Assuming now as a timestamp.
I1108 09:54:58.417518 1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I1108 09:54:58.417570 1 event.go:291] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller"
I1108 09:54:58.418059 1 shared_informer.go:247] Caches are synced for ReplicationController
I1108 09:54:58.420070 1 shared_informer.go:247] Caches are synced for GC
I1108 09:54:58.421962 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
I1108 09:54:58.422152 1 shared_informer.go:247] Caches are synced for persistent volume
I1108 09:54:58.441446 1 shared_informer.go:247] Caches are synced for daemon sets
I1108 09:54:58.448375 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving
I1108 09:54:58.448404 1 shared_informer.go:247] Caches are synced for endpoint_slice
I1108 09:54:58.448465 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client
I1108 09:54:58.449910 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client
I1108 09:54:58.450723 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown
I1108 09:54:58.450816 1 shared_informer.go:247] Caches are synced for job
I1108 09:54:58.455673 1 shared_informer.go:247] Caches are synced for PV protection
I1108 09:54:58.466313 1 shared_informer.go:247] Caches are synced for deployment
I1108 09:54:58.466348 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
I1108 09:54:58.468686 1 shared_informer.go:247] Caches are synced for attach detach
I1108 09:54:58.468885 1 shared_informer.go:247] Caches are synced for HPA
I1108 09:54:58.470593 1 shared_informer.go:247] Caches are synced for certificate-csrapproving
I1108 09:54:58.470597 1 shared_informer.go:247] Caches are synced for endpoint
I1108 09:54:58.471256 1 shared_informer.go:247] Caches are synced for expand
I1108 09:54:58.491308 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-f9fd979d6 to 1"
I1108 09:54:58.549008 1 request.go:645] Throttling request took 1.027912333s, request: GET:https://192.168.49.2:8443/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s
I1108 09:54:58.549658 1 shared_informer.go:247] Caches are synced for bootstrap_signer
I1108 09:54:58.566342 1 shared_informer.go:247] Caches are synced for disruption
I1108 09:54:58.566410 1 disruption.go:339] Sending events to api server.
I1108 09:54:58.571045 1 shared_informer.go:247] Caches are synced for stateful set
I1108 09:54:58.686294 1 event.go:291] "Event occurred" object="kube-system/coredns-f9fd979d6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-f9fd979d6-mknv9"
I1108 09:54:58.686787 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rt7wn"
I1108 09:54:58.750000 1 shared_informer.go:247] Caches are synced for resource quota
I1108 09:54:58.773966 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
E1108 09:54:58.878992 1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
E1108 09:54:58.881924 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
E1108 09:54:58.950849 1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"4c4b9c03-af94-413c-b3f7-f1f76632052e", ResourceVersion:"230", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63740426092, loc:(*time.Location)(0x6a59c80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001bacb40), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001bacb60)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001bacb80), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc00172fa40), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001bacba0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), 
RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001bacbc0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.19.2", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001bacc00)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(0xc0013a4fc0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000978528), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0004b2d20), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00011acf0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000978578)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
E1108 09:54:59.049354 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
I1108 09:54:59.057405 1 shared_informer.go:247] Caches are synced for garbage collector
I1108 09:54:59.057485 1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1108 09:54:59.074555 1 shared_informer.go:247] Caches are synced for garbage collector
E1108 09:54:59.079328 1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"4c4b9c03-af94-413c-b3f7-f1f76632052e", ResourceVersion:"333", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63740426092, loc:(*time.Location)(0x6a59c80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000fa6620), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000fa6680)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000fa66e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000fa6740)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000fa67a0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0014a1600), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000fa6800), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), 
Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000fa6880), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.19.2", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000fa6a00)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), 
StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00188b140), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000d5b9d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0001f1ce0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001e180f8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000d5ba68)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
I1108 09:54:59.271612 1 shared_informer.go:240] Waiting for caches to sync for resource quota
I1108 09:54:59.271702 1 shared_informer.go:247] Caches are synced for resource quota
I1108 09:55:13.418589 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.

==> kube-proxy [bd0541d19c6b] <==
I1108 09:55:01.191351 1 node.go:136] Successfully retrieved node IP: 192.168.49.2
I1108 09:55:01.191432 1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.49.2), assume IPv4 operation
W1108 09:55:01.359354 1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy
I1108 09:55:01.359695 1 server_others.go:186] Using iptables Proxier.
W1108 09:55:01.359737 1 server_others.go:456] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
I1108 09:55:01.359761 1 server_others.go:467] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
I1108 09:55:01.360614 1 server.go:650] Version: v1.19.2
I1108 09:55:01.362147 1 conntrack.go:52] Setting nf_conntrack_max to 262144
I1108 09:55:01.362577 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I1108 09:55:01.362853 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I1108 09:55:01.363364 1 config.go:315] Starting service config controller
I1108 09:55:01.363403 1 shared_informer.go:240] Waiting for caches to sync for service config
I1108 09:55:01.363469 1 config.go:224] Starting endpoint slice config controller
I1108 09:55:01.363491 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I1108 09:55:01.463697 1 shared_informer.go:247] Caches are synced for endpoint slice config
I1108 09:55:01.463707 1 shared_informer.go:247] Caches are synced for service config

==> kube-scheduler [f3ffdc3d9d77] <==
I1108 09:54:42.463359 1 registry.go:173] Registering SelectorSpread plugin
I1108 09:54:42.463573 1 registry.go:173] Registering SelectorSpread plugin
I1108 09:54:44.782031 1 serving.go:331] Generated self-signed cert in-memory
W1108 09:54:48.054900 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1108 09:54:48.055079 1 authentication.go:294] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1108 09:54:48.055177 1 authentication.go:295] Continuing without authentication configuration. This may treat all requests as anonymous.
W1108 09:54:48.055247 1 authentication.go:296] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1108 09:54:48.069835 1 registry.go:173] Registering SelectorSpread plugin
I1108 09:54:48.069848 1 registry.go:173] Registering SelectorSpread plugin
I1108 09:54:48.071787 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1108 09:54:48.071818 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1108 09:54:48.072119 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I1108 09:54:48.072390 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E1108 09:54:48.074718 1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1108 09:54:48.075214 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1108 09:54:48.075368 1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1108 09:54:48.075478 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1108 09:54:48.075590 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1108 09:54:48.075690 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1108 09:54:48.075806 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1108 09:54:48.075961 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1108 09:54:48.075997 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1108 09:54:48.076020 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1108 09:54:48.076166 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1108 09:54:48.076213 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1108 09:54:48.076218 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1108 09:54:48.926304 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1108 09:54:48.967687 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1108 09:54:48.978975 1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1108 09:54:49.038142 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1108 09:54:49.073355 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1108 09:54:49.117116 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1108 09:54:49.207731 1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1108 09:54:49.223595 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1108 09:54:49.296438 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1108 09:54:49.420295 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1108 09:54:49.451259 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1108 09:54:49.565686 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1108 09:54:49.606825 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
I1108 09:54:51.671962 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
W1108 09:54:58.749662 1 factory.go:467] Pod kube-system/coredns-f9fd979d6-mknv9 doesn't exist in informer cache: pod "coredns-f9fd979d6-mknv9" not found

==> kubelet <==
-- Logs begin at Sun 2020-11-08 09:54:14 UTC, end at Sun 2020-11-08 09:56:45 UTC. --
Nov 08 09:54:59 minikube kubelet[2159]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Nov 08 09:54:59 minikube kubelet[2159]: I1108 09:54:59.806345 2159 kuberuntime_manager.go:214] Container runtime docker initialized, version: 19.03.8, apiVersion: 1.40.0
Nov 08 09:54:59 minikube kubelet[2159]: I1108 09:54:59.807694 2159 server.go:1147] Started kubelet
Nov 08 09:54:59 minikube kubelet[2159]: I1108 09:54:59.808039 2159 server.go:152] Starting to listen on 0.0.0.0:10250
Nov 08 09:54:59 minikube kubelet[2159]: I1108 09:54:59.812875 2159 server.go:424] Adding debug handlers to kubelet server.
Nov 08 09:54:59 minikube kubelet[2159]: I1108 09:54:59.814619 2159 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
Nov 08 09:54:59 minikube kubelet[2159]: I1108 09:54:59.815455 2159 volume_manager.go:265] Starting Kubelet Volume Manager
Nov 08 09:54:59 minikube kubelet[2159]: I1108 09:54:59.816286 2159 desired_state_of_world_populator.go:139] Desired state populator starts to run
Nov 08 09:54:59 minikube kubelet[2159]: I1108 09:54:59.862229 2159 status_manager.go:158] Starting to sync pod status with apiserver
Nov 08 09:54:59 minikube kubelet[2159]: I1108 09:54:59.862293 2159 kubelet.go:1741] Starting kubelet main sync loop.
Nov 08 09:54:59 minikube kubelet[2159]: E1108 09:54:59.862339 2159 kubelet.go:1765] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
Nov 08 09:54:59 minikube kubelet[2159]: I1108 09:54:59.865888 2159 client.go:87] parsed scheme: "unix"
Nov 08 09:54:59 minikube kubelet[2159]: I1108 09:54:59.865913 2159 client.go:87] scheme "unix" not registered, fallback to default scheme
Nov 08 09:54:59 minikube kubelet[2159]: I1108 09:54:59.865959 2159 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }
Nov 08 09:54:59 minikube kubelet[2159]: I1108 09:54:59.865972 2159 clientconn.go:948] ClientConn switching balancer to "pick_first"
Nov 08 09:54:59 minikube kubelet[2159]: E1108 09:54:59.962418 2159 kubelet.go:1765] skipping pod synchronization - container runtime status check may not have completed yet
Nov 08 09:54:59 minikube kubelet[2159]: I1108 09:54:59.964486 2159 kubelet_node_status.go:70] Attempting to register node minikube
Nov 08 09:54:59 minikube kubelet[2159]: I1108 09:54:59.997859 2159 kubelet_node_status.go:108] Node minikube was previously registered
Nov 08 09:54:59 minikube kubelet[2159]: I1108 09:54:59.997949 2159 kubelet_node_status.go:73] Successfully registered node minikube
Nov 08 09:55:00 minikube kubelet[2159]: I1108 09:55:00.075433 2159 cpu_manager.go:184] [cpumanager] starting with none policy
Nov 08 09:55:00 minikube kubelet[2159]: I1108 09:55:00.075465 2159 cpu_manager.go:185] [cpumanager] reconciling every 10s
Nov 08 09:55:00 minikube kubelet[2159]: I1108 09:55:00.075497 2159 state_mem.go:36] [cpumanager] initializing new in-memory state store
Nov 08 09:55:00 minikube kubelet[2159]: I1108 09:55:00.075774 2159 state_mem.go:88] [cpumanager] updated default cpuset: ""
Nov 08 09:55:00 minikube kubelet[2159]: I1108 09:55:00.075792 2159 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
Nov 08 09:55:00 minikube kubelet[2159]: I1108 09:55:00.075809 2159 policy_none.go:43] [cpumanager] none policy: Start
Nov 08 09:55:00 minikube kubelet[2159]: I1108 09:55:00.079106 2159 plugin_manager.go:114] Starting Kubelet Plugin Manager
Nov 08 09:55:00 minikube kubelet[2159]: I1108 09:55:00.162711 2159 topology_manager.go:233] [topologymanager] Topology Admit Handler
Nov 08 09:55:00 minikube kubelet[2159]: I1108 09:55:00.173535 2159 topology_manager.go:233] [topologymanager] Topology Admit Handler
Nov 08 09:55:00 minikube kubelet[2159]: I1108 09:55:00.186152 2159 topology_manager.go:233] [topologymanager] Topology Admit Handler
Nov 08 09:55:00 minikube kubelet[2159]: I1108 09:55:00.198808 2159 topology_manager.go:233] [topologymanager] Topology Admit Handler
Nov 08 09:55:00 minikube kubelet[2159]: I1108 09:55:00.211236 2159 topology_manager.go:233] [topologymanager] Topology Admit Handler
Nov 08 09:55:00 minikube kubelet[2159]: I1108 09:55:00.217809 2159 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/d186e6390814d4dd7e770f47c08e98a2-etcd-data") pod "etcd-minikube" (UID: "d186e6390814d4dd7e770f47c08e98a2")
Nov 08 09:55:00 minikube kubelet[2159]: I1108 09:55:00.217855 2159 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/f7c3d51df5e2ce4e433b64661ac4503c-k8s-certs") pod "kube-apiserver-minikube" (UID: "f7c3d51df5e2ce4e433b64661ac4503c")
Nov 08 09:55:00 minikube kubelet[2159]: I1108 09:55:00.217887 2159 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/dcc127c185c80a61d90d8e659e768641-flexvolume-dir") pod "kube-controller-manager-minikube" (UID: "dcc127c185c80a61d90d8e659e768641")
Nov 08 09:55:00 minikube kubelet[2159]: I1108 09:55:00.217980 2159 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/dcc127c185c80a61d90d8e659e768641-usr-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "dcc127c185c80a61d90d8e659e768641")
Nov 08 09:55:00 minikube kubelet[2159]: I1108 09:55:00.218022 2159 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/f7c3d51df5e2ce4e433b64661ac4503c-ca-certs") pod "kube-apiserver-minikube" (UID: "f7c3d51df5e2ce4e433b64661ac4503c")
Nov 08 09:55:00 minikube kubelet[2159]: I1108 09:55:00.218060 2159 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/f7c3d51df5e2ce4e433b64661ac4503c-usr-local-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "f7c3d51df5e2ce4e433b64661ac4503c")
Nov 08 09:55:00 minikube kubelet[2159]: I1108 09:55:00.218160 2159 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/f7c3d51df5e2ce4e433b64661ac4503c-usr-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "f7c3d51df5e2ce4e433b64661ac4503c")
Nov 08 09:55:00 minikube kubelet[2159]: I1108 09:55:00.218252 2159 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/421cc5ca-3dd5-4c91-8224-76a40d917726-xtables-lock") pod "kube-proxy-rt7wn" (UID: "421cc5ca-3dd5-4c91-8224-76a40d917726")
Nov 08 09:55:00 minikube kubelet[2159]: I1108 09:55:00.218331 2159 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/d186e6390814d4dd7e770f47c08e98a2-etcd-certs") pod "etcd-minikube" (UID: "d186e6390814d4dd7e770f47c08e98a2")
Nov 08 09:55:00 minikube kubelet[2159]: I1108 09:55:00.218387 2159 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/f7c3d51df5e2ce4e433b64661ac4503c-etc-ca-certificates") pod "kube-apiserver-minikube" (UID: "f7c3d51df5e2ce4e433b64661ac4503c")
Nov 08 09:55:00 minikube kubelet[2159]: I1108 09:55:00.218486 2159 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/dcc127c185c80a61d90d8e659e768641-ca-certs") pod "kube-controller-manager-minikube" (UID: "dcc127c185c80a61d90d8e659e768641")
Nov 08 09:55:00 minikube kubelet[2159]: I1108 09:55:00.218543 2159 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/ff7d12f9e4f14e202a85a7c5534a3129-kubeconfig") pod "kube-scheduler-minikube" (UID: "ff7d12f9e4f14e202a85a7c5534a3129")
Nov 08 09:55:00 minikube kubelet[2159]: I1108 09:55:00.218576 2159 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/421cc5ca-3dd5-4c91-8224-76a40d917726-kube-proxy") pod "kube-proxy-rt7wn" (UID: "421cc5ca-3dd5-4c91-8224-76a40d917726")
Nov 08 09:55:00 minikube kubelet[2159]: I1108 09:55:00.218615 2159 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/421cc5ca-3dd5-4c91-8224-76a40d917726-lib-modules") pod "kube-proxy-rt7wn" (UID: "421cc5ca-3dd5-4c91-8224-76a40d917726")
Nov 08 09:55:00 minikube kubelet[2159]: I1108 09:55:00.218694 2159 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-p6d8s" (UniqueName: "kubernetes.io/secret/421cc5ca-3dd5-4c91-8224-76a40d917726-kube-proxy-token-p6d8s") pod "kube-proxy-rt7wn" (UID: "421cc5ca-3dd5-4c91-8224-76a40d917726")
Nov 08 09:55:00 minikube kubelet[2159]: I1108 09:55:00.218745 2159 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/dcc127c185c80a61d90d8e659e768641-etc-ca-certificates") pod "kube-controller-manager-minikube" (UID: "dcc127c185c80a61d90d8e659e768641")
Nov 08 09:55:00 minikube kubelet[2159]: I1108 09:55:00.218799 2159 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/dcc127c185c80a61d90d8e659e768641-k8s-certs") pod "kube-controller-manager-minikube" (UID: "dcc127c185c80a61d90d8e659e768641")
Nov 08 09:55:00 minikube kubelet[2159]: I1108 09:55:00.218854 2159 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/dcc127c185c80a61d90d8e659e768641-kubeconfig") pod "kube-controller-manager-minikube" (UID: "dcc127c185c80a61d90d8e659e768641")
Nov 08 09:55:00 minikube kubelet[2159]: I1108 09:55:00.218898 2159 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/dcc127c185c80a61d90d8e659e768641-usr-local-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "dcc127c185c80a61d90d8e659e768641")
Nov 08 09:55:00 minikube kubelet[2159]: I1108 09:55:00.218947 2159 reconciler.go:157] Reconciler: start to sync state
Nov 08 09:55:00 minikube kubelet[2159]: W1108 09:55:00.893580 2159 pod_container_deletor.go:79] Container "6238bdc751afd6b92cd0ff11ff0ee79409c85faa3d3770e566abd3c51d17222d" not found in pod's containers
Nov 08 09:55:12 minikube kubelet[2159]: I1108 09:55:12.694314 2159 topology_manager.go:233] [topologymanager] Topology Admit Handler
Nov 08 09:55:12 minikube kubelet[2159]: I1108 09:55:12.861564 2159 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f521e387-2f1e-4fb1-b152-89b82531077c-config-volume") pod "coredns-f9fd979d6-mknv9" (UID: "f521e387-2f1e-4fb1-b152-89b82531077c")
Nov 08 09:55:12 minikube kubelet[2159]: I1108 09:55:12.861786 2159 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-zkl85" (UniqueName: "kubernetes.io/secret/f521e387-2f1e-4fb1-b152-89b82531077c-coredns-token-zkl85") pod "coredns-f9fd979d6-mknv9" (UID: "f521e387-2f1e-4fb1-b152-89b82531077c")
Nov 08 09:55:13 minikube kubelet[2159]: W1108 09:55:13.496642 2159 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-f9fd979d6-mknv9 through plugin: invalid network status for
Nov 08 09:55:13 minikube kubelet[2159]: W1108 09:55:13.977463 2159 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-f9fd979d6-mknv9 through plugin: invalid network status for
Nov 08 09:55:18 minikube kubelet[2159]: I1108 09:55:18.686931 2159 topology_manager.go:233] [topologymanager] Topology Admit Handler
Nov 08 09:55:18 minikube kubelet[2159]: I1108 09:55:18.873920 2159 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/fe5e0127-816f-4478-b7c1-ada4a399eea0-tmp") pod "storage-provisioner" (UID: "fe5e0127-816f-4478-b7c1-ada4a399eea0")
Nov 08 09:55:18 minikube kubelet[2159]: I1108 09:55:18.874016 2159 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-t8qq5" (UniqueName: "kubernetes.io/secret/fe5e0127-816f-4478-b7c1-ada4a399eea0-storage-provisioner-token-t8qq5") pod "storage-provisioner" (UID: "fe5e0127-816f-4478-b7c1-ada4a399eea0")

==> storage-provisioner [3038b7190915] <==
I1108 09:55:19.980208 1 leaderelection.go:242] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I1108 09:55:19.990389 1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1108 09:55:19.990492 1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_minikube_0ec3f6e0-72da-46f3-a1db-2ba97d27019c!
I1108 09:55:19.990483 1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7edd7c74-fb83-44cb-8523-6b60db7c7890", APIVersion:"v1", ResourceVersion:"401", FieldPath:""}): type:


RA489 commented Nov 11, 2020

/kind support

k8s-ci-robot added the kind/support label Nov 11, 2020
@sharifelgamal (Collaborator) commented

You have misspelled the name of the addon; it should be csi-hostpath-driver.

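For reference, the working command with the corrected addon name is:

minikube addons enable csi-hostpath-driver

If you are unsure of an addon's exact spelling, minikube addons list prints every available addon together with its current status, so you can copy the name directly from there.
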