all sorts of startup issues with apiserver & kubelet #6822

Closed
balopat opened this issue Feb 27, 2020 · 2 comments
Labels
co/docker-driver: Issues related to kubernetes in container
kind/support: Categorizes issue or PR as a support question.
priority/important-soon: Must be staffed and worked on either currently, or very soon, ideally in time for the next release.
triage/not-reproducible: Indicates an issue can not be reproduced as described.

Comments

@balopat
Contributor

balopat commented Feb 27, 2020

The exact command to reproduce the issue:

minikube start -p docker2
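
If more startup detail is needed, the same run can be repeated with minikube's klog flags turned up (a sketch; --alsologtostderr and -v are standard minikube flags, and the verbosity level shown is just an example):

minikube start -p docker2 --vm-driver=docker --alsologtostderr -v=8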

The full output of the command that failed:

minikube start -p docker2 --vm-driver=docker
😄 [docker2] minikube v1.7.3 on Darwin 10.15.3
✨ Using the docker (experimental) driver based on user configuration
👍 Kubernetes 1.17.3 is now available. If you would like to upgrade, specify: --kubernetes-version=1.17.3
⌛ Reconfiguring existing host ...
🔄 Starting existing docker VM for "docker2" ...
🐳 Preparing Kubernetes v1.17.2 on Docker 19.03.2 ...
▪ kubeadm.pod-network-cidr=10.244.0.0/16
🚀 Launching Kubernetes ...

💣 Error starting cluster: addon phase cmd:"/bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.2:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"": /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.2:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": exit status 1
stdout:

stderr:
W0227 16:07:04.879129 3556 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0227 16:07:04.879181 3556 validation.go:28] Cannot validate kubelet config - no validator is available
error execution phase addon/coredns: Get https://localhost:8443/api/v1/namespaces/kube-system/configmaps/kube-dns?timeout=10s: dial tcp 127.0.0.1:8443: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher

😿 minikube is exiting due to an error. If the above message is not useful, open an issue:
👉 https://github.com/kubernetes/minikube/issues/new/choose
❌ Problems detected in kube-apiserver [0a9082b63aa8]:
Error: failed to create listener: failed to listen on 0.0.0.0:8443: listen tcp 0.0.0.0:8443: bind: address already in use
❌ Problems detected in kubelet:
Feb 27 16:07:05 docker2 kubelet[1516]: W0227 16:07:05.151637 1516 eviction_manager.go:417] eviction manager: unexpected error when attempting to reduce ephemeral-storage pressure: wanted to free 9223372036854775807 bytes, but freed 490843812 bytes space with errors in image deletion: [rpc error: code = Unknown desc = Error response from daemon: conflict: unable to remove repository reference "alpine:3.10" (must force) - container 878177864ae7 is using its referenced image af341ccd2df8, rpc error: code = Unknown desc = Error response from daemon: conflict: unable to delete e0d646523991 (cannot be forced) - image has dependent child images]
Feb 27 16:07:07 docker2 kubelet[1516]: I0227 16:07:07.998801 1516 eviction_manager.go:566] eviction manager: pod kindnet-6gtcm_kube-system(526aec17-bfd0-48a5-8707-40c46c98973e) is evicted successfully
Feb 27 16:07:07 docker2 kubelet[1516]: I0227 16:07:07.998830 1516 eviction_manager.go:190] eviction manager: pods kindnet-6gtcm_kube-system(526aec17-bfd0-48a5-8707-40c46c98973e) evicted, waiting for pod to be cleaned up
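
The two "Problems detected" blocks above point at a stale kube-apiserver still bound to 0.0.0.0:8443 and at disk pressure driving the kubelet evictions. A minimal way to confirm both from inside the node (a sketch, assuming ss and the usual coreutils are available in the minikube container):

minikube ssh -p docker2
sudo ss -tlnp | grep 8443                  # which process is holding the apiserver port
docker ps -a --filter name=kube-apiserver  # any older apiserver container still running?
df -h /var                                 # free space behind the ephemeral-storage evictions
docker system df                           # how much of it is images/containers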

The output of the minikube logs command:

==> Docker <==
-- Logs begin at Thu 2020-02-27 16:06:31 UTC, end at Thu 2020-02-27 16:13:20 UTC. --
Feb 27 16:06:45 docker2 systemd[1]: Started Docker Application Container Engine.
Feb 27 16:06:55 docker2 dockerd[1097]: time="2020-02-27T16:06:55.639775200Z" level=info msg="shim containerd-shim started" address=/containerd-shim/5b93e8e109f84df58842d79a8429469b0821f3a2c17c1bdbe3a30984da7f1adf.sock debug=false pid=2232
Feb 27 16:06:55 docker2 dockerd[1097]: time="2020-02-27T16:06:55.646936700Z" level=info msg="shim containerd-shim started" address=/containerd-shim/922fa6c624ebd3b6427a197c9e42b16b6432866eab72dbde7eaeb658c6ad11f5.sock debug=false pid=2244
Feb 27 16:06:55 docker2 dockerd[1097]: time="2020-02-27T16:06:55.647875600Z" level=info msg="shim containerd-shim started" address=/containerd-shim/fe52abeab02e1e419b5e25897bef5892d4189fc9baf33588a487be69bcfd4e33.sock debug=false pid=2246
Feb 27 16:06:55 docker2 dockerd[1097]: time="2020-02-27T16:06:55.659341300Z" level=info msg="shim containerd-shim started" address=/containerd-shim/5fe696cf0c8f1de7a438d3c5436413471a75c3c9d0c0a478ab8696faae3f3411.sock debug=false pid=2276
Feb 27 16:06:56 docker2 dockerd[1097]: time="2020-02-27T16:06:56.020513400Z" level=info msg="shim containerd-shim started" address=/containerd-shim/1a5134400c2400f92de6f4721e454aa12f5a6d48febad0721c6f9dd7f6a378d9.sock debug=false pid=2422
Feb 27 16:06:56 docker2 dockerd[1097]: time="2020-02-27T16:06:56.021560500Z" level=info msg="shim containerd-shim started" address=/containerd-shim/b319788752fe175a8836b5b9a43956a93bf13673c40da530563711049c8b0087.sock debug=false pid=2423
Feb 27 16:06:56 docker2 dockerd[1097]: time="2020-02-27T16:06:56.101712400Z" level=info msg="shim containerd-shim started" address=/containerd-shim/f2e91e497e0b47b49aae7f0bb23004054bc340c37ff4dabdf8135bdec0a69270.sock debug=false pid=2471
Feb 27 16:06:56 docker2 dockerd[1097]: time="2020-02-27T16:06:56.119819400Z" level=info msg="shim containerd-shim started" address=/containerd-shim/0cab813ef1098c5949f26ddeade1bd0ca52f54bfd3faef9e33de7d3537250f8d.sock debug=false pid=2488
Feb 27 16:06:56 docker2 dockerd[1097]: time="2020-02-27T16:06:56.752088300Z" level=info msg="shim reaped" id=19ac63496c0f1cddef7dc88780af799db85823dc15686c964a91aac24a35cb39
Feb 27 16:06:56 docker2 dockerd[1097]: time="2020-02-27T16:06:56.794686100Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 27 16:06:56 docker2 dockerd[1097]: time="2020-02-27T16:06:56.794869600Z" level=warning msg="19ac63496c0f1cddef7dc88780af799db85823dc15686c964a91aac24a35cb39 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/19ac63496c0f1cddef7dc88780af799db85823dc15686c964a91aac24a35cb39/mounts/shm, flags: 0x2: no such file or directory"
Feb 27 16:06:59 docker2 dockerd[1097]: time="2020-02-27T16:06:59.736706600Z" level=info msg="shim containerd-shim started" address=/containerd-shim/bc583cd5bd01b37aceee66ac151ab0bf1aad0a25ae90d02336fe9db7b97e065b.sock debug=false pid=3119
Feb 27 16:06:59 docker2 dockerd[1097]: time="2020-02-27T16:06:59.883559600Z" level=info msg="shim containerd-shim started" address=/containerd-shim/4620221ca5e646236b95aba502de329e5a04478e04cc1fa170afc01189eadee0.sock debug=false pid=3151
Feb 27 16:06:59 docker2 dockerd[1097]: time="2020-02-27T16:06:59.971780700Z" level=info msg="shim containerd-shim started" address=/containerd-shim/ab59eca2f1f04b48d1cc200f2a565058017b8e207bcd294999b3e90ee12d8f50.sock debug=false pid=3174
Feb 27 16:07:00 docker2 dockerd[1097]: time="2020-02-27T16:07:00.162726900Z" level=info msg="shim containerd-shim started" address=/containerd-shim/b5d6c33199c9370b8b184196a3a0a68ea62846160f65f1941edab430f5031d99.sock debug=false pid=3255
Feb 27 16:07:00 docker2 dockerd[1097]: time="2020-02-27T16:07:00.170779000Z" level=info msg="shim reaped" id=a61f8d8779b6a212a27637f70d631f2ab818614ef520a82d1c6bda9cc0da9ebc
Feb 27 16:07:00 docker2 dockerd[1097]: time="2020-02-27T16:07:00.180937500Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 27 16:07:00 docker2 dockerd[1097]: time="2020-02-27T16:07:00.181080900Z" level=warning msg="a61f8d8779b6a212a27637f70d631f2ab818614ef520a82d1c6bda9cc0da9ebc cleanup: failed to unmount IPC: umount /var/lib/docker/containers/a61f8d8779b6a212a27637f70d631f2ab818614ef520a82d1c6bda9cc0da9ebc/mounts/shm, flags: 0x2: no such file or directory"
Feb 27 16:07:01 docker2 dockerd[1097]: time="2020-02-27T16:07:01.230851800Z" level=info msg="shim containerd-shim started" address=/containerd-shim/8124e29b36d1f379bda1da0c336a3b32e7d011f2dc8a91e49c21e5c8bc6e3cec.sock debug=false pid=3378
Feb 27 16:07:01 docker2 dockerd[1097]: time="2020-02-27T16:07:01.532999100Z" level=info msg="shim reaped" id=0a9082b63aa893d60900df6577c9c8a9323a1525766190624143f1bcc12e25b7
Feb 27 16:07:01 docker2 dockerd[1097]: time="2020-02-27T16:07:01.543325800Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 27 16:07:01 docker2 dockerd[1097]: time="2020-02-27T16:07:01.543496300Z" level=warning msg="0a9082b63aa893d60900df6577c9c8a9323a1525766190624143f1bcc12e25b7 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/0a9082b63aa893d60900df6577c9c8a9323a1525766190624143f1bcc12e25b7/mounts/shm, flags: 0x2: no such file or directory"
Feb 27 16:07:04 docker2 dockerd[1097]: time="2020-02-27T16:07:04.899356200Z" level=info msg="shim reaped" id=1130a4653a9a346924bcc5973e44ed801b6a65a0ea4a68a427306974daa69fd6
Feb 27 16:07:04 docker2 dockerd[1097]: time="2020-02-27T16:07:04.960602800Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 27 16:07:04 docker2 dockerd[1097]: time="2020-02-27T16:07:04.980255900Z" level=info msg="shim reaped" id=7d6509004baaea9384a0d72ef37db62c9183614fa00fb4edf7e8ecad27eb573d
Feb 27 16:07:04 docker2 dockerd[1097]: time="2020-02-27T16:07:04.989136900Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 27 16:07:04 docker2 dockerd[1097]: time="2020-02-27T16:07:04.989301200Z" level=warning msg="7d6509004baaea9384a0d72ef37db62c9183614fa00fb4edf7e8ecad27eb573d cleanup: failed to unmount IPC: umount /var/lib/docker/containers/7d6509004baaea9384a0d72ef37db62c9183614fa00fb4edf7e8ecad27eb573d/mounts/shm, flags: 0x2: no such file or directory"
Feb 27 16:07:05 docker2 dockerd[1097]: time="2020-02-27T16:07:05.111459500Z" level=info msg="shim reaped" id=9ac1518fb45b749d45d57078271feab0c2ae886eb8a8db3f5aebb101b53c2df8
Feb 27 16:07:05 docker2 dockerd[1097]: time="2020-02-27T16:07:05.120608000Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 27 16:07:05 docker2 dockerd[1097]: time="2020-02-27T16:07:05.182680000Z" level=info msg="shim containerd-shim started" address=/containerd-shim/639481db9ca164fc6a572285e872d56ebc43371f4ac2101fe1bc301917afafbc.sock debug=false pid=3647
Feb 27 16:07:05 docker2 dockerd[1097]: time="2020-02-27T16:07:05.243774100Z" level=info msg="shim containerd-shim started" address=/containerd-shim/ee9ccb3e19bf726f07979396a8e2d77f777d7f929e605a58ab6e10e2dd74aaca.sock debug=false pid=3671
Feb 27 16:07:06 docker2 dockerd[1097]: time="2020-02-27T16:07:06.412696400Z" level=info msg="shim containerd-shim started" address=/containerd-shim/a21e8f4c5ae3d1f57a63b0c89712311cc31e49c6a077eba53485ef42f468d4f4.sock debug=false pid=3727
Feb 27 16:07:06 docker2 dockerd[1097]: time="2020-02-27T16:07:06.414761700Z" level=info msg="shim containerd-shim started" address=/containerd-shim/2b7d7731b3f6a09922359e795f2144f2a0804f86bd5e904dc4af3198a476c91e.sock debug=false pid=3731
Feb 27 16:07:07 docker2 dockerd[1097]: time="2020-02-27T16:07:07.712602900Z" level=info msg="Container 893faf75fab55fefe9926702fc29e91f92d46240024a45ea1bd471047189ee34 failed to exit within 0 seconds of signal 15 - using the force"
Feb 27 16:07:07 docker2 dockerd[1097]: time="2020-02-27T16:07:07.819718500Z" level=info msg="shim reaped" id=893faf75fab55fefe9926702fc29e91f92d46240024a45ea1bd471047189ee34
Feb 27 16:07:07 docker2 dockerd[1097]: time="2020-02-27T16:07:07.829847700Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 27 16:07:07 docker2 dockerd[1097]: time="2020-02-27T16:07:07.829962200Z" level=warning msg="893faf75fab55fefe9926702fc29e91f92d46240024a45ea1bd471047189ee34 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/893faf75fab55fefe9926702fc29e91f92d46240024a45ea1bd471047189ee34/mounts/shm, flags: 0x2: no such file or directory"
Feb 27 16:07:07 docker2 dockerd[1097]: time="2020-02-27T16:07:07.933819200Z" level=info msg="shim reaped" id=b21f459e9c061b9c9b32352d2e39ed11eb98be80dc71ca3e0a1119350d0a12cd
Feb 27 16:07:07 docker2 dockerd[1097]: time="2020-02-27T16:07:07.942686200Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 27 16:07:08 docker2 dockerd[1097]: time="2020-02-27T16:07:08.788300900Z" level=info msg="shim containerd-shim started" address=/containerd-shim/dba9fa4164472ebbad028631d80469b1d4716feb3da59d252fc6514958fbbe06.sock debug=false pid=4064
Feb 27 16:07:12 docker2 dockerd[1097]: time="2020-02-27T16:07:12.469957300Z" level=info msg="shim containerd-shim started" address=/containerd-shim/4bfad1e49005fae40685f26630d1231507515a063dfddf04a470d617779e4bd0.sock debug=false pid=4235
Feb 27 16:07:22 docker2 dockerd[1097]: time="2020-02-27T16:07:22.574457200Z" level=info msg="shim containerd-shim started" address=/containerd-shim/a52c11903b5b17dd939fb3e9e4f3cc95d42846e3f7315834fbc96432c78a67e6.sock debug=false pid=4412
Feb 27 16:07:23 docker2 dockerd[1097]: time="2020-02-27T16:07:23.061947000Z" level=info msg="shim reaped" id=5bd6e70e07e639e2d8018960d7be75222d6128a559bf00611940a0015423e0ba
Feb 27 16:07:23 docker2 dockerd[1097]: time="2020-02-27T16:07:23.072460200Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 27 16:07:23 docker2 dockerd[1097]: time="2020-02-27T16:07:23.072594600Z" level=warning msg="5bd6e70e07e639e2d8018960d7be75222d6128a559bf00611940a0015423e0ba cleanup: failed to unmount IPC: umount /var/lib/docker/containers/5bd6e70e07e639e2d8018960d7be75222d6128a559bf00611940a0015423e0ba/mounts/shm, flags: 0x2: no such file or directory"
Feb 27 16:07:25 docker2 dockerd[1097]: time="2020-02-27T16:07:25.612521800Z" level=info msg="shim containerd-shim started" address=/containerd-shim/e0479b48ca8b490f143e78c03edc3d8a24c7674a6ea65d6eaa2b7ba8c1a4b1c6.sock debug=false pid=4468
Feb 27 16:07:25 docker2 dockerd[1097]: time="2020-02-27T16:07:25.615212400Z" level=info msg="shim containerd-shim started" address=/containerd-shim/ad59033d75fb87dc0fc387c27e4924eb3c77fa06df31ddebdef6d28bbb741164.sock debug=false pid=4475
Feb 27 16:07:25 docker2 dockerd[1097]: time="2020-02-27T16:07:25.668218700Z" level=info msg="shim containerd-shim started" address=/containerd-shim/58d842895322e309fe1eda5c02a72f80dd93322e03ebd1ac55f4c11b34874457.sock debug=false pid=4515
Feb 27 16:07:25 docker2 dockerd[1097]: time="2020-02-27T16:07:25.820642500Z" level=info msg="shim containerd-shim started" address=/containerd-shim/f5cb9e84203eecaa09046a3cc39beaa5596267ef54a09c3506a7193a401c148a.sock debug=false pid=4552
Feb 27 16:07:26 docker2 dockerd[1097]: time="2020-02-27T16:07:26.220274500Z" level=info msg="shim containerd-shim started" address=/containerd-shim/504685f8a64e4a43ad2437b7f1ddfd9d31b8af6ddaff3321ecd8863eca18cca9.sock debug=false pid=4634
Feb 27 16:07:26 docker2 dockerd[1097]: time="2020-02-27T16:07:26.298715000Z" level=info msg="shim containerd-shim started" address=/containerd-shim/414e2767b3824da07819ab0a253ee60f7c7d3c0da7df5f24fad4e1c698557595.sock debug=false pid=4659
Feb 27 16:07:38 docker2 dockerd[1097]: time="2020-02-27T16:07:38.074509000Z" level=info msg="Container e92c9aab14b6e17fd2b875824dd1d73861d7b978de6b0a1384a24c335b943cea failed to exit within 0 seconds of signal 15 - using the force"
Feb 27 16:07:38 docker2 dockerd[1097]: time="2020-02-27T16:07:38.148622800Z" level=info msg="shim reaped" id=e92c9aab14b6e17fd2b875824dd1d73861d7b978de6b0a1384a24c335b943cea
Feb 27 16:07:38 docker2 dockerd[1097]: time="2020-02-27T16:07:38.158655400Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 27 16:07:38 docker2 dockerd[1097]: time="2020-02-27T16:07:38.158773200Z" level=warning msg="e92c9aab14b6e17fd2b875824dd1d73861d7b978de6b0a1384a24c335b943cea cleanup: failed to unmount IPC: umount /var/lib/docker/containers/e92c9aab14b6e17fd2b875824dd1d73861d7b978de6b0a1384a24c335b943cea/mounts/shm, flags: 0x2: no such file or directory"
Feb 27 16:07:38 docker2 dockerd[1097]: time="2020-02-27T16:07:38.262954000Z" level=info msg="shim reaped" id=fafced3b60a43856622575e7d56f02d098ba033e8d122578d2e221ca68f8cc5c
Feb 27 16:07:38 docker2 dockerd[1097]: time="2020-02-27T16:07:38.273193100Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 27 16:07:41 docker2 dockerd[1097]: time="2020-02-27T16:07:41.463257400Z" level=info msg="shim reaped" id=91ba496743730b94cd5e0e36af4caf871d48201fd58443717067034e4c8f3778
Feb 27 16:07:41 docker2 dockerd[1097]: time="2020-02-27T16:07:41.466006100Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"

==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
4c12b25bbbf50 70f311871ae12 5 minutes ago Running coredns 1 658d92ccfd6e2
193be02e07200 70f311871ae12 5 minutes ago Running coredns 1 03704283f39a2
394c040d571b3 41ef50a5f06a7 6 minutes ago Running kube-apiserver 2 1a4b9e21446ed
52f67bd17a376 cba2a99699bdf 6 minutes ago Running kube-proxy 1 10823c1010b8e
0a9082b63aa89 41ef50a5f06a7 6 minutes ago Exited kube-apiserver 1 1a4b9e21446ed
1e470485358d4 303ce5db0e90d 6 minutes ago Running etcd 0 c5fb7ab209919
878c82c1ad00d da5fd66c4068c 6 minutes ago Running kube-controller-manager 3 6af964f86d60d
0f0cdc1da7c6a f52d4c527ef2f 6 minutes ago Running kube-scheduler 3 fc90c99eae894
d55dd687a5afa f52d4c527ef2f 12 days ago Exited kube-scheduler 2 592044e6db648
80a6b09fdd3ef da5fd66c4068c 12 days ago Exited kube-controller-manager 2 410edf24da074
7a452c6194299 cba2a99699bdf 12 days ago Exited kube-proxy 0 9de93ad0032ba
98287e17a9373 70f311871ae12 12 days ago Exited coredns 0 08a5eae345477
5e175c58d6246 70f311871ae12 12 days ago Exited coredns 0 25ce9a2f3c3a6

==> coredns [193be02e0720] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.5
linux/amd64, go1.13.4, c2fd1b2

==> coredns [4c12b25bbbf5] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.5
linux/amd64, go1.13.4, c2fd1b2

==> coredns [5e175c58d624] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.5
linux/amd64, go1.13.4, c2fd1b2
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
I0214 21:50:00.987870 1 trace.go:82] Trace[1122880088]: "Reflector pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98 ListAndWatch" (started: 2020-02-14 21:49:31.0195425 +0000 UTC m=+0.104898401) (total time: 30.002693s):
Trace[1122880088]: [30.002693s] [30.002693s] END
E0214 21:50:00.987896 1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0214 21:50:00.987968 1 trace.go:82] Trace[335965467]: "Reflector pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98 ListAndWatch" (started: 2020-02-14 21:49:31.0194961 +0000 UTC m=+0.104869901) (total time: 30.0027721s):
Trace[335965467]: [30.0027721s] [30.0027721s] END
E0214 21:50:00.987983 1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0214 21:50:00.988023 1 trace.go:82] Trace[1147019383]: "Reflector pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98 ListAndWatch" (started: 2020-02-14 21:49:31.0196196 +0000 UTC m=+0.104976701) (total time: 30.0028717s):
Trace[1147019383]: [30.0028717s] [30.0028717s] END
E0214 21:50:00.988043 1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout

==> coredns [98287e17a937] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.5
linux/amd64, go1.13.4, c2fd1b2
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
I0214 21:50:00.988290 1 trace.go:82] Trace[1122880088]: "Reflector pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98 ListAndWatch" (started: 2020-02-14 21:49:31.0194852 +0000 UTC m=+0.103836601) (total time: 30.0028781s):
Trace[1122880088]: [30.0028781s] [30.0028781s] END
E0214 21:50:00.988323 1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0214 21:50:00.988349 1 trace.go:82] Trace[335965467]: "Reflector pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98 ListAndWatch" (started: 2020-02-14 21:49:31.0194763 +0000 UTC m=+0.103826901) (total time: 30.0029669s):
Trace[335965467]: [30.0029669s] [30.0029669s] END
E0214 21:50:00.988367 1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0214 21:50:00.988801 1 trace.go:82] Trace[1147019383]: "Reflector pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98 ListAndWatch" (started: 2020-02-14 21:49:31.0194763 +0000 UTC m=+0.103826901) (total time: 30.0037989s):
Trace[1147019383]: [30.0037989s] [30.0037989s] END
E0214 21:50:00.988835 1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout

==> dmesg <==
[Feb26 01:06] #4
[ +0.064067] #5
[ +0.064745] #6
[ +0.065135] #7
[ +0.391725] virtio-pci 0000:00:01.0: can't derive routing for PCI INT A
[ +0.000954] virtio-pci 0000:00:01.0: PCI INT A: no GSI
[ +0.005761] virtio-pci 0000:00:07.0: can't derive routing for PCI INT A
[ +0.000789] virtio-pci 0000:00:07.0: PCI INT A: no GSI
[ +0.053160] Hangcheck: starting hangcheck timer 0.9.1 (tick is 180 seconds, margin is 60 seconds).
[ +0.023864] ahci 0000:00:02.0: can't derive routing for PCI INT A
[ +0.000825] ahci 0000:00:02.0: PCI INT A: no GSI
[ +0.747121] i8042: Can't read CTR while initializing i8042
[ +0.001878] i8042: probe of i8042 failed with error -5
[ +0.003334] ACPI Error: Could not enable RealTimeClock event (20180810/evxfevnt-184)
[ +0.000988] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20180810/evxface-620)
[ +0.108969] ata1.00: ATA Identify Device Log not supported
[ +0.000689] ata1.00: Security Log not supported
[ +0.002292] ata1.00: ATA Identify Device Log not supported
[ +0.000730] ata1.00: Security Log not supported
[ +0.161421] FAT-fs (sr0): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[ +0.020007] FAT-fs (sr0): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[ +6.637971] FAT-fs (sr2): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[ +0.073676] FAT-fs (sr2): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[Feb26 01:23] clocksource: timekeeping watchdog on CPU5: Marking clocksource 'tsc' as unstable because the skew is too large:
[ +0.001724] clocksource: 'hpet' wd_now: 7b1885ef wd_last: 7a28498c mask: ffffffff
[ +0.001361] clocksource: 'tsc' cs_now: 35cdd14633e cs_last: 35c0edde556 mask: ffffffffffffffff
[ +0.002257] TSC found unstable after boot, most likely due to broken BIOS. Use 'tsc=unstable'.
[Feb26 16:20] hrtimer: interrupt took 11066700 ns
[Feb27 15:56] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.000003] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.005388] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.000003] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.005036] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.000003] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.000013] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.000003] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.001299] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.000002] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.016733] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.000003] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.

==> kernel <==
16:13:21 up 1 day, 15:07, 0 users, load average: 0.60, 0.49, 0.43
Linux docker2 4.19.76-linuxkit #1 SMP Thu Oct 17 19:31:58 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 19.10"

==> kube-apiserver [0a9082b63aa8] <==
api/all=true|false controls all API versions
api/ga=true|false controls all API versions of the form v[0-9]+
api/beta=true|false controls all API versions of the form v[0-9]+beta[0-9]+
api/alpha=true|false controls all API versions of the form v[0-9]+alpha[0-9]+
api/legacy is deprecated, and will be removed in a future version

Egress selector flags:

  --egress-selector-config-file string   File with apiserver egress selector configuration.

Admission flags:

  --admission-control strings              Admission is divided into two phases. In the first phase, only mutating admission plugins run. In the second phase, only validating admission plugins run. The names in the below list may represent a validating plugin, a mutating plugin, or both. The order of plugins in which they are passed to this flag does not matter. Comma-delimited list of: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. (DEPRECATED: Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.)
  --admission-control-config-file string   File with admission control configuration.
  --disable-admission-plugins strings      admission plugins that should be disabled although they are in the default enabled plugins list (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, RuntimeClass, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
  --enable-admission-plugins strings       admission plugins that should be enabled in addition to default enabled ones (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, RuntimeClass, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, DefaultStorageClass, DefaultTolerationSeconds, DenyEscalatingExec, DenyExecOnPrivileged, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodPreset, PodSecurityPolicy, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.

Metrics flags:

  --show-hidden-metrics-for-version string   The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful, other values will not be allowed. The format is <major>.<minor>, e.g.: '1.16'. The purpose of this format is make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that.

Misc flags:

  --allow-privileged                          If true, allow privileged containers. [default=false]
  --apiserver-count int                       The number of apiservers running in the cluster, must be a positive number. (In use when --endpoint-reconciler-type=master-count is enabled.) (default 1)
  --enable-aggregator-routing                 Turns on aggregator routing requests to endpoints IP rather than cluster IP.
  --endpoint-reconciler-type string           Use an endpoint reconciler (master-count, lease, none) (default "lease")
  --event-ttl duration                        Amount of time to retain events. (default 1h0m0s)
  --kubelet-certificate-authority string      Path to a cert file for the certificate authority.
  --kubelet-client-certificate string         Path to a client cert file for TLS.
  --kubelet-client-key string                 Path to a client key file for TLS.
  --kubelet-https                             Use https for kubelet connections. (default true)
  --kubelet-preferred-address-types strings   List of the preferred NodeAddressTypes to use for kubelet connections. (default [Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP])
  --kubelet-timeout duration                  Timeout for kubelet operations. (default 5s)
  --kubernetes-service-node-port int          If non-zero, the Kubernetes master service (which apiserver creates/maintains) will be of type NodePort, using this as the value of the port. If zero, the Kubernetes master service will be of type ClusterIP.
  --max-connection-bytes-per-sec int          If non-zero, throttle each user connection to this number of bytes/sec. Currently only applies to long-running requests.
  --proxy-client-cert-file string             Client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins. It is expected that this cert includes a signature from the CA in the --requestheader-client-ca-file flag. That CA is published in the 'extension-apiserver-authentication' configmap in the kube-system namespace. Components receiving calls from kube-aggregator should use that CA to perform their half of the mutual TLS verification.
  --proxy-client-key-file string              Private key for the client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins.
  --service-account-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)
  --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.
  --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)

Global flags:

  --add-dir-header                   If true, adds the file directory to the header
  --alsologtostderr                  log to standard error as well as files

  -h, --help                             help for kube-apiserver
      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)
      --log-dir string                   If non-empty, write log files in this directory
      --log-file string                  If non-empty, use this log file
      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)
      --logtostderr                      log to standard error instead of files (default true)
      --skip-headers                     If true, avoid header prefixes in the log messages
      --skip-log-headers                 If true, avoid headers when opening log files
      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)
  -v, --v Level                          number for the log level verbosity
      --version version[=true]           Print version information and quit
      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging

==> kube-apiserver [394c040d571b] <==
I0227 16:07:14.177541 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0227 16:07:14.185212 1 client.go:361] parsed scheme: "endpoint"
I0227 16:07:14.185256 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0227 16:07:14.192785 1 client.go:361] parsed scheme: "endpoint"
I0227 16:07:14.192824 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
W0227 16:07:14.303101 1 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
W0227 16:07:14.313499 1 genericapiserver.go:404] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W0227 16:07:14.323731 1 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0227 16:07:14.339429 1 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0227 16:07:14.343991 1 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0227 16:07:14.373749 1 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0227 16:07:14.386863 1 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0227 16:07:14.386901 1 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0227 16:07:14.393485 1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I0227 16:07:14.393520 1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
I0227 16:07:14.394872 1 client.go:361] parsed scheme: "endpoint"
I0227 16:07:14.394912 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0227 16:07:14.401495 1 client.go:361] parsed scheme: "endpoint"
I0227 16:07:14.401521 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0227 16:07:14.530016 1 client.go:361] parsed scheme: "endpoint"
I0227 16:07:14.530194 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0227 16:07:16.046434 1 dynamic_cafile_content.go:166] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0227 16:07:16.046482 1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0227 16:07:16.046571 1 dynamic_serving_content.go:129] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
I0227 16:07:16.047128 1 secure_serving.go:178] Serving securely on [::]:8443
I0227 16:07:16.047233 1 controller.go:81] Starting OpenAPI AggregationController
I0227 16:07:16.047270 1 tlsconfig.go:219] Starting DynamicServingCertificateController
I0227 16:07:16.047547 1 available_controller.go:386] Starting AvailableConditionController
I0227 16:07:16.047614 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0227 16:07:16.047820 1 naming_controller.go:288] Starting NamingConditionController
I0227 16:07:16.047883 1 controller.go:85] Starting OpenAPI controller
I0227 16:07:16.047926 1 customresource_discovery_controller.go:208] Starting DiscoveryController
I0227 16:07:16.048092 1 crd_finalizer.go:263] Starting CRDFinalizer
I0227 16:07:16.048278 1 autoregister_controller.go:140] Starting autoregister controller
I0227 16:07:16.048294 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0227 16:07:16.048254 1 establishing_controller.go:73] Starting EstablishingController
I0227 16:07:16.048645 1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
I0227 16:07:16.048645 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0227 16:07:16.048672 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0227 16:07:16.048676 1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0227 16:07:16.048334 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0227 16:07:16.048774 1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
I0227 16:07:16.049226 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0227 16:07:16.049262 1 shared_informer.go:197] Waiting for caches to sync for cluster_authentication_trust_controller
I0227 16:07:16.049691 1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0227 16:07:16.049732 1 dynamic_cafile_content.go:166] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
E0227 16:07:16.074322 1 controller.go:151] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.17.0.4, ResourceVersion: 0, AdditionalErrorMsg:
I0227 16:07:16.294679 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0227 16:07:16.360306 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0227 16:07:16.361028 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0227 16:07:16.361318 1 shared_informer.go:204] Caches are synced for cluster_authentication_trust_controller
I0227 16:07:16.361754 1 shared_informer.go:204] Caches are synced for crd-autoregister
I0227 16:07:16.361040 1 cache.go:39] Caches are synced for autoregister controller
I0227 16:07:17.046490 1 controller.go:107] OpenAPI AggregationController: Processing item
I0227 16:07:17.046527 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0227 16:07:17.046563 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0227 16:07:17.059008 1 storage_scheduling.go:142] all system priority classes are created successfully or already exist.
W0227 16:07:17.300834 1 lease.go:224] Resetting endpoints for master service "kubernetes" to [172.17.0.4]
I0227 16:07:17.301755 1 controller.go:606] quota admission added evaluator for: endpoints
I0227 16:07:52.119473 1 controller.go:606] quota admission added evaluator for: events.events.k8s.io

==> kube-controller-manager [80a6b09fdd3e] <==
I0215 00:50:19.883521 1 shared_informer.go:197] Waiting for caches to sync for GC
I0215 00:50:19.919175 1 controllermanager.go:533] Started "job"
I0215 00:50:19.923149 1 job_controller.go:143] Starting job controller
I0215 00:50:19.923269 1 shared_informer.go:197] Waiting for caches to sync for job
I0215 00:50:19.959067 1 controllermanager.go:533] Started "cronjob"
I0215 00:50:19.960198 1 shared_informer.go:197] Waiting for caches to sync for resource quota
I0215 00:50:19.962532 1 cronjob_controller.go:97] Starting CronJob Manager
I0215 00:50:19.973080 1 shared_informer.go:197] Waiting for caches to sync for garbage collector
I0215 00:50:20.013026 1 shared_informer.go:204] Caches are synced for certificate-csrsigning
I0215 00:50:20.049210 1 shared_informer.go:204] Caches are synced for certificate-csrapproving
I0215 00:50:20.098449 1 shared_informer.go:204] Caches are synced for namespace
I0215 00:50:20.113070 1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator
I0215 00:50:20.113075 1 shared_informer.go:204] Caches are synced for service account
I0215 00:50:20.168573 1 shared_informer.go:204] Caches are synced for PV protection
I0215 00:50:20.207012 1 shared_informer.go:204] Caches are synced for HPA
I0215 00:50:20.207749 1 shared_informer.go:204] Caches are synced for PVC protection
W0215 00:50:20.222407 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="docker2" does not exist
I0215 00:50:20.224109 1 shared_informer.go:204] Caches are synced for job
I0215 00:50:20.225450 1 shared_informer.go:204] Caches are synced for taint
I0215 00:50:20.228092 1 shared_informer.go:204] Caches are synced for TTL
I0215 00:50:20.228255 1 taint_manager.go:186] Starting NoExecuteTaintManager
I0215 00:50:20.228255 1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone:
W0215 00:50:20.229694 1 node_lifecycle_controller.go:1058] Missing timestamp for Node docker2. Assuming now as a timestamp.
I0215 00:50:20.229858 1 event.go:281] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"docker2", UID:"c9105832-6226-4027-8250-447fab204fe7", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node docker2 event: Registered Node docker2 in Controller
I0215 00:50:20.230508 1 node_lifecycle_controller.go:1259] Controller detected that zone is now in state Normal.
I0215 00:50:20.237281 1 shared_informer.go:204] Caches are synced for node
I0215 00:50:20.237381 1 range_allocator.go:172] Starting range CIDR allocator
I0215 00:50:20.237451 1 shared_informer.go:197] Waiting for caches to sync for cidrallocator
I0215 00:50:20.237471 1 shared_informer.go:204] Caches are synced for cidrallocator
I0215 00:50:20.247768 1 shared_informer.go:204] Caches are synced for stateful set
I0215 00:50:20.251529 1 shared_informer.go:204] Caches are synced for deployment
I0215 00:50:20.254761 1 shared_informer.go:204] Caches are synced for ReplicaSet
I0215 00:50:20.272294 1 shared_informer.go:204] Caches are synced for endpoint
I0215 00:50:20.294255 1 shared_informer.go:204] Caches are synced for GC
I0215 00:50:20.299376 1 shared_informer.go:204] Caches are synced for attach detach
I0215 00:50:20.315742 1 shared_informer.go:204] Caches are synced for daemon sets
I0215 00:50:20.435695 1 shared_informer.go:204] Caches are synced for disruption
I0215 00:50:20.435741 1 disruption.go:338] Sending events to api server.
I0215 00:50:20.469040 1 shared_informer.go:204] Caches are synced for ReplicationController
I0215 00:50:20.473167 1 shared_informer.go:204] Caches are synced for persistent volume
I0215 00:50:20.550451 1 shared_informer.go:204] Caches are synced for expand
I0215 00:50:20.559741 1 shared_informer.go:204] Caches are synced for bootstrap_signer
I0215 00:50:20.560639 1 shared_informer.go:204] Caches are synced for resource quota
I0215 00:50:20.582132 1 shared_informer.go:204] Caches are synced for garbage collector
I0215 00:50:20.594483 1 shared_informer.go:204] Caches are synced for garbage collector
I0215 00:50:20.594924 1 garbagecollector.go:138] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0215 00:50:20.607758 1 shared_informer.go:204] Caches are synced for resource quota
I0217 02:21:50.292252 1 event.go:281] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"docker2", UID:"c9105832-6226-4027-8250-447fab204fe7", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node docker2 status is now: NodeNotReady
I0217 02:21:51.571128 1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-dns", UID:"abd6301e-c179-4762-8e01-01b5eaada1d5", APIVersion:"v1", ResourceVersion:"20417", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpoint' Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints "kube-dns": the object has been modified; please apply your changes to the latest version and try again
I0217 02:21:51.730336 1 node_lifecycle_controller.go:1209] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0217 02:22:06.881428 1 node_lifecycle_controller.go:1236] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0217 14:41:37.439852 1 event.go:281] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"docker2", UID:"c9105832-6226-4027-8250-447fab204fe7", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node docker2 status is now: NodeNotReady
I0217 14:41:37.795013 1 node_lifecycle_controller.go:1209] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0217 14:41:47.796573 1 node_lifecycle_controller.go:1236] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0218 06:27:25.544675 1 event.go:281] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"docker2", UID:"c9105832-6226-4027-8250-447fab204fe7", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node docker2 status is now: NodeNotReady
I0218 06:27:26.516197 1 node_lifecycle_controller.go:1209] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0218 06:27:36.524365 1 node_lifecycle_controller.go:1236] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0220 18:34:05.389294 1 event.go:281] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"docker2", UID:"c9105832-6226-4027-8250-447fab204fe7", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeNotReady' Node docker2 status is now: NodeNotReady
I0220 18:34:05.787994 1 node_lifecycle_controller.go:1209] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0220 18:34:15.795264 1 node_lifecycle_controller.go:1236] Controller detected that some Nodes are Ready. Exiting master disruption mode.

==> kube-controller-manager [878c82c1ad00] <==
I0227 16:07:51.614629 1 shared_informer.go:204] Caches are synced for certificate-csrapproving
I0227 16:07:51.618531 1 shared_informer.go:204] Caches are synced for TTL
I0227 16:07:51.619466 1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator
I0227 16:07:51.619565 1 shared_informer.go:204] Caches are synced for service account
I0227 16:07:51.832484 1 shared_informer.go:204] Caches are synced for taint
I0227 16:07:51.832642 1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone:
W0227 16:07:51.832707 1 node_lifecycle_controller.go:1058] Missing timestamp for Node docker2. Assuming now as a timestamp.
I0227 16:07:51.832788 1 node_lifecycle_controller.go:1259] Controller detected that zone is now in state Normal.
I0227 16:07:51.832929 1 taint_manager.go:186] Starting NoExecuteTaintManager
I0227 16:07:51.833288 1 event.go:281] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"docker2", UID:"c9105832-6226-4027-8250-447fab204fe7", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node docker2 event: Registered Node docker2 in Controller
I0227 16:07:51.853055 1 shared_informer.go:204] Caches are synced for HPA
I0227 16:07:51.870579 1 shared_informer.go:204] Caches are synced for PVC protection
I0227 16:07:51.889502 1 shared_informer.go:204] Caches are synced for deployment
I0227 16:07:51.892270 1 shared_informer.go:204] Caches are synced for job
I0227 16:07:51.918554 1 shared_informer.go:204] Caches are synced for ReplicaSet
I0227 16:07:51.919069 1 shared_informer.go:204] Caches are synced for disruption
I0227 16:07:51.919118 1 disruption.go:338] Sending events to api server.
I0227 16:07:51.919416 1 shared_informer.go:204] Caches are synced for ReplicationController
I0227 16:07:51.919511 1 shared_informer.go:204] Caches are synced for GC
I0227 16:07:51.922016 1 shared_informer.go:204] Caches are synced for endpoint
I0227 16:07:51.971587 1 shared_informer.go:204] Caches are synced for attach detach
I0227 16:07:51.980516 1 shared_informer.go:204] Caches are synced for persistent volume
I0227 16:07:52.025487 1 shared_informer.go:204] Caches are synced for stateful set
I0227 16:07:52.026068 1 shared_informer.go:204] Caches are synced for resource quota
I0227 16:07:52.038565 1 shared_informer.go:204] Caches are synced for PV protection
I0227 16:07:52.069810 1 shared_informer.go:204] Caches are synced for expand
I0227 16:07:52.078130 1 shared_informer.go:204] Caches are synced for resource quota
I0227 16:07:52.086566 1 shared_informer.go:204] Caches are synced for daemon sets
I0227 16:07:52.088776 1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"0a2afc85-2c67-4ba3-9087-cfd31d94216b", APIVersion:"apps/v1", ResourceVersion:"148583", FieldPath:""}): type: 'Warning' reason: 'FailedDaemonPod' Found failed daemon pod kube-system/kindnet-6gtcm on node docker2, will try to kill it
I0227 16:07:52.098088 1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"0a2afc85-2c67-4ba3-9087-cfd31d94216b", APIVersion:"apps/v1", ResourceVersion:"148583", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: kindnet-6gtcm
I0227 16:07:52.112180 1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"0a2afc85-2c67-4ba3-9087-cfd31d94216b", APIVersion:"apps/v1", ResourceVersion:"148583", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-pzrnw
I0227 16:07:52.120862 1 shared_informer.go:204] Caches are synced for garbage collector
E0227 16:07:52.122910 1 daemon_controller.go:290] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"0a2afc85-2c67-4ba3-9087-cfd31d94216b", ResourceVersion:"148583", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717313756, loc:(*time.Location)(0x6b971e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001241a40), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001241a60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001241a80), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), 
Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001241aa0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:0.5.3", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001241ac0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001241b00)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", 
SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc000eca370), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000844f68), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001417380), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000882120)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000844fb0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:1, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:1, ObservedGeneration:1, UpdatedNumberScheduled:1, NumberAvailable:1, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
I0227 16:07:52.125314 1 shared_informer.go:204] Caches are synced for garbage collector
I0227 16:07:52.125354 1 garbagecollector.go:138] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0227 16:07:53.131484 1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"0a2afc85-2c67-4ba3-9087-cfd31d94216b", APIVersion:"apps/v1", ResourceVersion:"182296", FieldPath:""}): type: 'Warning' reason: 'FailedDaemonPod' Found failed daemon pod kube-system/kindnet-pzrnw on node docker2, will try to kill it
I0227 16:07:53.143135 1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"0a2afc85-2c67-4ba3-9087-cfd31d94216b", APIVersion:"apps/v1", ResourceVersion:"182296", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: kindnet-pzrnw
I0227 16:07:53.150304 1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"0a2afc85-2c67-4ba3-9087-cfd31d94216b", APIVersion:"apps/v1", ResourceVersion:"182300", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-hwhjc
I0227 16:07:55.163810 1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"0a2afc85-2c67-4ba3-9087-cfd31d94216b", APIVersion:"apps/v1", ResourceVersion:"182305", FieldPath:""}): type: 'Warning' reason: 'FailedDaemonPod' Found failed daemon pod kube-system/kindnet-hwhjc on node docker2, will try to kill it
I0227 16:07:55.171939 1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"0a2afc85-2c67-4ba3-9087-cfd31d94216b", APIVersion:"apps/v1", ResourceVersion:"182305", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: kindnet-hwhjc
I0227 16:07:55.180571 1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"0a2afc85-2c67-4ba3-9087-cfd31d94216b", APIVersion:"apps/v1", ResourceVersion:"182316", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-2mh2j
I0227 16:07:59.195755 1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"0a2afc85-2c67-4ba3-9087-cfd31d94216b", APIVersion:"apps/v1", ResourceVersion:"182320", FieldPath:""}): type: 'Warning' reason: 'FailedDaemonPod' Found failed daemon pod kube-system/kindnet-2mh2j on node docker2, will try to kill it
I0227 16:07:59.207161 1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"0a2afc85-2c67-4ba3-9087-cfd31d94216b", APIVersion:"apps/v1", ResourceVersion:"182320", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: kindnet-2mh2j
I0227 16:07:59.217440 1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"0a2afc85-2c67-4ba3-9087-cfd31d94216b", APIVersion:"apps/v1", ResourceVersion:"182340", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-zb5wl
I0227 16:08:07.199653 1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"0a2afc85-2c67-4ba3-9087-cfd31d94216b", APIVersion:"apps/v1", ResourceVersion:"182342", FieldPath:""}): type: 'Warning' reason: 'FailedDaemonPod' Found failed daemon pod kube-system/kindnet-zb5wl on node docker2, will try to kill it
I0227 16:08:07.209329 1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"0a2afc85-2c67-4ba3-9087-cfd31d94216b", APIVersion:"apps/v1", ResourceVersion:"182342", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: kindnet-zb5wl
I0227 16:08:07.217643 1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"0a2afc85-2c67-4ba3-9087-cfd31d94216b", APIVersion:"apps/v1", ResourceVersion:"182368", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-k6gqv
I0227 16:08:23.238998 1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"0a2afc85-2c67-4ba3-9087-cfd31d94216b", APIVersion:"apps/v1", ResourceVersion:"182371", FieldPath:""}): type: 'Warning' reason: 'FailedDaemonPod' Found failed daemon pod kube-system/kindnet-k6gqv on node docker2, will try to kill it
I0227 16:08:23.248177 1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"0a2afc85-2c67-4ba3-9087-cfd31d94216b", APIVersion:"apps/v1", ResourceVersion:"182371", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: kindnet-k6gqv
I0227 16:08:23.259549 1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"0a2afc85-2c67-4ba3-9087-cfd31d94216b", APIVersion:"apps/v1", ResourceVersion:"182417", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-zrv54
I0227 16:08:55.242237 1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"0a2afc85-2c67-4ba3-9087-cfd31d94216b", APIVersion:"apps/v1", ResourceVersion:"182419", FieldPath:""}): type: 'Warning' reason: 'FailedDaemonPod' Found failed daemon pod kube-system/kindnet-zrv54 on node docker2, will try to kill it
I0227 16:08:55.251852 1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"0a2afc85-2c67-4ba3-9087-cfd31d94216b", APIVersion:"apps/v1", ResourceVersion:"182419", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: kindnet-zrv54
I0227 16:08:55.264781 1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"0a2afc85-2c67-4ba3-9087-cfd31d94216b", APIVersion:"apps/v1", ResourceVersion:"182503", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-2v8w5
E0227 16:08:55.282777 1 daemon_controller.go:290] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"0a2afc85-2c67-4ba3-9087-cfd31d94216b", ResourceVersion:"182503", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717313756, loc:(*time.Location)(0x6b971e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001c21780), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001c217a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001c217c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), 
Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001c217e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:0.5.3", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001c21800)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001c21840)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", 
SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00183e820), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001c80508), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001bb1500), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000c568a8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001c80550)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
I0227 16:09:59.220994 1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"0a2afc85-2c67-4ba3-9087-cfd31d94216b", APIVersion:"apps/v1", ResourceVersion:"182507", FieldPath:""}): type: 'Warning' reason: 'FailedDaemonPod' Found failed daemon pod kube-system/kindnet-2v8w5 on node docker2, will try to kill it
I0227 16:09:59.233710 1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"0a2afc85-2c67-4ba3-9087-cfd31d94216b", APIVersion:"apps/v1", ResourceVersion:"182507", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: kindnet-2v8w5
I0227 16:09:59.248389 1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"0a2afc85-2c67-4ba3-9087-cfd31d94216b", APIVersion:"apps/v1", ResourceVersion:"182656", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-s6bn8
I0227 16:12:07.098700 1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"0a2afc85-2c67-4ba3-9087-cfd31d94216b", APIVersion:"apps/v1", ResourceVersion:"182659", FieldPath:""}): type: 'Warning' reason: 'FailedDaemonPod' Found failed daemon pod kube-system/kindnet-s6bn8 on node docker2, will try to kill it
I0227 16:12:07.110399 1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"0a2afc85-2c67-4ba3-9087-cfd31d94216b", APIVersion:"apps/v1", ResourceVersion:"182659", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: kindnet-s6bn8
I0227 16:12:07.121874 1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"0a2afc85-2c67-4ba3-9087-cfd31d94216b", APIVersion:"apps/v1", ResourceVersion:"182947", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-mbpc5
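
The controller-manager log above shows the kindnet DaemonSet stuck in a kill/recreate loop: FailedDaemonPod -> SuccessfulDelete -> SuccessfulCreate for nine failed pods in a row (kindnet-6gtcm through kindnet-s6bn8, with kindnet-mbpc5 the latest replacement), the retry interval backing off from about a second to about two minutes. The two huge 'Operation cannot be fulfilled on daemonsets.apps "kindnet"' errors are ordinary optimistic-concurrency conflicts on a stale resourceVersion and are retried automatically, so the interesting question is why each kindnet pod keeps failing. A quick way to check, assuming kubectl is pointed at this cluster (the app=kindnet label is taken from the DaemonSet spec dumped above):

# current kindnet pod and its phase
kubectl -n kube-system get pods -l app=kindnet

# status/reason for the failing pod (the pod name changes on every retry)
kubectl -n kube-system describe pods -l app=kindnet

Given the kubelet eviction-manager output further down, the likely answer is that each new kindnet pod is immediately failed for ephemeral-storage pressure, which describe should confirm in its Status/Reason fields.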

==> kube-proxy [52f67bd17a37] <==
W0227 16:07:06.899034 1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
E0227 16:07:06.903213 1 node.go:124] Failed to retrieve node info: Get https://localhost:8443/api/v1/nodes/docker2: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:08.050730 1 node.go:124] Failed to retrieve node info: Get https://localhost:8443/api/v1/nodes/docker2: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:10.416899 1 node.go:124] Failed to retrieve node info: Get https://localhost:8443/api/v1/nodes/docker2: dial tcp 127.0.0.1:8443: connect: connection refused
I0227 16:07:16.290973 1 node.go:135] Successfully retrieved node IP: 172.17.0.3
I0227 16:07:16.291031 1 server_others.go:145] Using iptables Proxier.
I0227 16:07:16.293186 1 server.go:571] Version: v1.17.2
I0227 16:07:16.296746 1 conntrack.go:52] Setting nf_conntrack_max to 262144
I0227 16:07:16.296840 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0227 16:07:16.297000 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0227 16:07:16.299049 1 config.go:313] Starting service config controller
I0227 16:07:16.299227 1 config.go:131] Starting endpoints config controller
I0227 16:07:16.299673 1 shared_informer.go:197] Waiting for caches to sync for service config
I0227 16:07:16.299686 1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I0227 16:07:16.399996 1 shared_informer.go:204] Caches are synced for service config
I0227 16:07:16.400030 1 shared_informer.go:204] Caches are synced for endpoints config
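
This kube-proxy instance could not reach the apiserver at 127.0.0.1:8443 for its first ~10 seconds (connection refused at 16:07:06-16:07:10) and then synced cleanly at 16:07:16, which fits an apiserver that was still coming up, or fighting another process for the port, during that window. To see what is actually bound to 8443 inside the node, assuming the docker-driver node container is named after the profile (docker2) and that ss and an inner docker CLI are available in the image:

# what is listening on 8443 inside the node container
docker exec docker2 ss -ltnp | grep 8443

# is the apiserver container flapping? (docker here is the dockerd inside the node)
docker exec docker2 docker ps -a --filter name=kube-apiserver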

==> kube-proxy [7a452c619429] <==
W0214 21:49:32.031634 1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
I0214 21:49:32.048542 1 node.go:135] Successfully retrieved node IP: 172.17.0.3
I0214 21:49:32.048582 1 server_others.go:145] Using iptables Proxier.
I0214 21:49:32.048899 1 server.go:571] Version: v1.17.2
I0214 21:49:32.051409 1 conntrack.go:52] Setting nf_conntrack_max to 262144
E0214 21:49:32.051937 1 conntrack.go:127] sysfs is not writable: {Device:sysfs Path:/sys Type:sysfs Opts:[ro nosuid nodev noexec relatime] Freq:0 Pass:0} (mount options are [ro nosuid nodev noexec relatime])
I0214 21:49:32.052480 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0214 21:49:32.052527 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0214 21:49:32.054267 1 config.go:313] Starting service config controller
I0214 21:49:32.054459 1 config.go:131] Starting endpoints config controller
I0214 21:49:32.054507 1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I0214 21:49:32.054507 1 shared_informer.go:197] Waiting for caches to sync for service config
I0214 21:49:32.154956 1 shared_informer.go:204] Caches are synced for service config
I0214 21:49:32.155017 1 shared_informer.go:204] Caches are synced for endpoints config
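
This older kube-proxy instance (from Feb 14) came up cleanly; the "sysfs is not writable" line is expected with the docker driver because /sys is mounted read-only into the node container, and kube-proxy degrades gracefully. If in doubt, the mount options can be checked with (same container-naming assumption as above, and assuming findmnt is present in the image):

docker exec docker2 findmnt -o TARGET,OPTIONS /sys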

==> kube-scheduler [0f0cdc1da7c6] <==
E0227 16:07:08.804295 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:08.805836 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:08.808357 1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: Get https://localhost:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:08.809227 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: Get https://localhost:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:08.809279 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:08.810959 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:08.812077 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:08.813126 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:08.814204 1 reflector.go:307] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to watch *v1.Pod: Get https://localhost:8443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=182065&timeoutSeconds=498&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:09.800936 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:09.804660 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:09.807116 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:09.807345 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:09.809606 1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: Get https://localhost:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:09.809788 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:09.811053 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: Get https://localhost:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:09.813304 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:09.814710 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:09.815474 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:09.816994 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:09.817584 1 reflector.go:307] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to watch *v1.Pod: Get https://localhost:8443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=182065&timeoutSeconds=553&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:10.803984 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:10.806583 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:10.809099 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:10.810535 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:10.811532 1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: Get https://localhost:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:10.812589 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:10.814182 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: Get https://localhost:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:10.815588 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:10.816732 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:10.818010 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:10.819325 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:10.820280 1 reflector.go:307] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to watch *v1.Pod: Get https://localhost:8443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=182065&timeoutSeconds=428&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:11.805445 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:11.807645 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:11.810628 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:11.811767 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:11.812659 1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: Get https://localhost:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:11.813955 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:11.815262 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: Get https://localhost:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:11.816705 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:11.818067 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:11.818858 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:11.820257 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:11.821167 1 reflector.go:307] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to watch *v1.Pod: Get https://localhost:8443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=182065&timeoutSeconds=550&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:12.262772 1 leaderelection.go:331] error retrieving resource lock kube-system/kube-scheduler: Get https://localhost:8443/api/v1/namespaces/kube-system/endpoints/kube-scheduler?timeout=10s: dial tcp 127.0.0.1:8443: connect: connection refused
E0227 16:07:16.093341 1 leaderelection.go:331] error retrieving resource lock kube-system/kube-scheduler: endpoints "kube-scheduler" is forbidden: User "system:kube-scheduler" cannot get resource "endpoints" in API group "" in the namespace "kube-system"
E0227 16:07:16.093367 1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0227 16:07:16.188162 1 reflector.go:307] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to watch *v1.Pod: unknown (get pods)
E0227 16:07:16.188280 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0227 16:07:16.188537 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0227 16:07:16.188605 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0227 16:07:16.188775 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0227 16:07:16.188930 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0227 16:07:16.189007 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0227 16:07:16.189182 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0227 16:07:16.189243 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0227 16:07:16.189351 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0227 16:07:16.190479 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
I0227 16:07:21.847295 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler
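
The scheduler log above tells the same startup story in three phases: connection refused while the apiserver was unreachable (16:07:08-16:07:12), a burst of RBAC "forbidden" errors at 16:07:16, likely while the apiserver was up but its authorization caches were still warming, and a successful leader election at 16:07:21. The forbidden errors here look transient; if they persisted, the scheduler's permissions could be spot-checked with something like:

kubectl auth can-i list nodes --as=system:kube-scheduler
kubectl auth can-i list persistentvolumes --as=system:kube-scheduler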

==> kube-scheduler [d55dd687a5af] <==
I0215 00:49:45.452738 1 serving.go:312] Generated self-signed cert in-memory
W0215 00:49:47.438005 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W0215 00:49:47.438470 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W0215 00:49:47.740143 1 authorization.go:47] Authorization is disabled
W0215 00:49:47.740188 1 authentication.go:92] Authentication is disabled
I0215 00:49:47.760859 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0215 00:49:47.764957 1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0215 00:49:47.765029 1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0215 00:49:47.765070 1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0215 00:49:47.765105 1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0215 00:49:47.765159 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0215 00:49:47.765288 1 tlsconfig.go:219] Starting DynamicServingCertificateController
I0215 00:49:47.865818 1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0215 00:49:47.866673 1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0215 00:49:47.904043 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler...
I0215 00:50:05.156343 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler

==> kubelet <==
-- Logs begin at Thu 2020-02-27 16:06:31 UTC, end at Thu 2020-02-27 16:13:23 UTC. --
Feb 27 16:12:53 docker2 kubelet[1516]: W0227 16:12:53.604438 1516 eviction_manager.go:330] eviction manager: attempting to reclaim ephemeral-storage
Feb 27 16:12:53 docker2 kubelet[1516]: I0227 16:12:53.604529 1516 container_gc.go:85] attempting to delete unused containers
Feb 27 16:12:53 docker2 kubelet[1516]: I0227 16:12:53.613407 1516 image_gc_manager.go:317] attempting to delete unused images
Feb 27 16:12:53 docker2 kubelet[1516]: I0227 16:12:53.622605 1516 image_gc_manager.go:371] [imageGCManager]: Removing image "sha256:af341ccd2df8b0e2d67cf8dd32e087bfda4e5756ebd1c76bbf3efa0dc246590e" to free 5556786 bytes
Feb 27 16:12:53 docker2 kubelet[1516]: E0227 16:12:53.624515 1516 remote_image.go:135] RemoveImage "sha256:af341ccd2df8b0e2d67cf8dd32e087bfda4e5756ebd1c76bbf3efa0dc246590e" from image service failed: rpc error: code = Unknown desc = Error response from daemon: conflict: unable to remove repository reference "alpine:3.10" (must force) - container 878177864ae7 is using its referenced image af341ccd2df8
Feb 27 16:12:53 docker2 kubelet[1516]: E0227 16:12:53.624581 1516 kuberuntime_image.go:120] Remove image "sha256:af341ccd2df8b0e2d67cf8dd32e087bfda4e5756ebd1c76bbf3efa0dc246590e" failed: rpc error: code = Unknown desc = Error response from daemon: conflict: unable to remove repository reference "alpine:3.10" (must force) - container 878177864ae7 is using its referenced image af341ccd2df8
Feb 27 16:12:53 docker2 kubelet[1516]: I0227 16:12:53.624606 1516 image_gc_manager.go:371] [imageGCManager]: Removing image "sha256:656679563d3056aab37d10312ba5e3531e3c62da465d592db0445b103a6d32a5" to free 350340155 bytes
Feb 27 16:12:53 docker2 kubelet[1516]: E0227 16:12:53.626560 1516 remote_image.go:135] RemoveImage "sha256:656679563d3056aab37d10312ba5e3531e3c62da465d592db0445b103a6d32a5" from image service failed: rpc error: code = Unknown desc = Error response from daemon: conflict: unable to delete 656679563d30 (must be forced) - image is being used by stopped container 7fd55458cbfe
Feb 27 16:12:53 docker2 kubelet[1516]: E0227 16:12:53.626621 1516 kuberuntime_image.go:120] Remove image "sha256:656679563d3056aab37d10312ba5e3531e3c62da465d592db0445b103a6d32a5" failed: rpc error: code = Unknown desc = Error response from daemon: conflict: unable to delete 656679563d30 (must be forced) - image is being used by stopped container 7fd55458cbfe
Feb 27 16:12:53 docker2 kubelet[1516]: W0227 16:12:53.626653 1516 eviction_manager.go:417] eviction manager: unexpected error when attempting to reduce ephemeral-storage pressure: wanted to free 9223372036854775807 bytes, but freed 0 bytes space with errors in image deletion: [rpc error: code = Unknown desc = Error response from daemon: conflict: unable to remove repository reference "alpine:3.10" (must force) - container 878177864ae7 is using its referenced image af341ccd2df8, rpc error: code = Unknown desc = Error response from daemon: conflict: unable to delete 656679563d30 (must be forced) - image is being used by stopped container 7fd55458cbfe]
Feb 27 16:12:53 docker2 kubelet[1516]: I0227 16:12:53.631500 1516 eviction_manager.go:341] eviction manager: must evict pod(s) to reclaim ephemeral-storage
Feb 27 16:12:53 docker2 kubelet[1516]: I0227 16:12:53.631634 1516 eviction_manager.go:359] eviction manager: pods ranked for eviction: kube-controller-manager-docker2_kube-system(091462203a51a29b462d059b44429ffa), kube-apiserver-docker2_kube-system(d5c4653e86e73ffdfab210fa258c1e08), kube-scheduler-docker2_kube-system(9c994ea62a2d8d6f1bb7498f10aa6fcf), etcd-docker2_kube-system(bd69997820e8e7727464019240391c5b), coredns-6955765f44-7fdrd_kube-system(967d04bc-91bb-4f22-b808-0f86bb60e318), coredns-6955765f44-7nblb_kube-system(93859660-a67d-423a-8999-92a8728e207c), kube-proxy-mz24d_kube-system(18e61b52-d9d8-48d3-8ee6-b4177dbf9411)
Feb 27 16:12:53 docker2 kubelet[1516]: E0227 16:12:53.631692 1516 eviction_manager.go:551] eviction manager: cannot evict a critical pod kube-controller-manager-docker2_kube-system(091462203a51a29b462d059b44429ffa)
Feb 27 16:12:53 docker2 kubelet[1516]: E0227 16:12:53.631717 1516 eviction_manager.go:551] eviction manager: cannot evict a critical pod kube-apiserver-docker2_kube-system(d5c4653e86e73ffdfab210fa258c1e08)
Feb 27 16:12:53 docker2 kubelet[1516]: E0227 16:12:53.631733 1516 eviction_manager.go:551] eviction manager: cannot evict a critical pod kube-scheduler-docker2_kube-system(9c994ea62a2d8d6f1bb7498f10aa6fcf)
Feb 27 16:12:53 docker2 kubelet[1516]: E0227 16:12:53.631756 1516 eviction_manager.go:551] eviction manager: cannot evict a critical pod etcd-docker2_kube-system(bd69997820e8e7727464019240391c5b)
Feb 27 16:12:53 docker2 kubelet[1516]: E0227 16:12:53.631773 1516 eviction_manager.go:551] eviction manager: cannot evict a critical pod coredns-6955765f44-7fdrd_kube-system(967d04bc-91bb-4f22-b808-0f86bb60e318)
Feb 27 16:12:53 docker2 kubelet[1516]: E0227 16:12:53.631795 1516 eviction_manager.go:551] eviction manager: cannot evict a critical pod coredns-6955765f44-7nblb_kube-system(93859660-a67d-423a-8999-92a8728e207c)
Feb 27 16:12:53 docker2 kubelet[1516]: E0227 16:12:53.631814 1516 eviction_manager.go:551] eviction manager: cannot evict a critical pod kube-proxy-mz24d_kube-system(18e61b52-d9d8-48d3-8ee6-b4177dbf9411)
Feb 27 16:12:53 docker2 kubelet[1516]: I0227 16:12:53.631831 1516 eviction_manager.go:383] eviction manager: unable to evict any pods from the node
Feb 27 16:13:03 docker2 kubelet[1516]: W0227 16:13:03.615417 1516 eviction_manager.go:330] eviction manager: attempting to reclaim ephemeral-storage
Feb 27 16:13:03 docker2 kubelet[1516]: I0227 16:13:03.615504 1516 container_gc.go:85] attempting to delete unused containers
Feb 27 16:13:03 docker2 kubelet[1516]: I0227 16:13:03.624000 1516 image_gc_manager.go:317] attempting to delete unused images
Feb 27 16:13:03 docker2 kubelet[1516]: I0227 16:13:03.632188 1516 image_gc_manager.go:371] [imageGCManager]: Removing image "sha256:af341ccd2df8b0e2d67cf8dd32e087bfda4e5756ebd1c76bbf3efa0dc246590e" to free 5556786 bytes
Feb 27 16:13:03 docker2 kubelet[1516]: E0227 16:13:03.634004 1516 remote_image.go:135] RemoveImage "sha256:af341ccd2df8b0e2d67cf8dd32e087bfda4e5756ebd1c76bbf3efa0dc246590e" from image service failed: rpc error: code = Unknown desc = Error response from daemon: conflict: unable to remove repository reference "alpine:3.10" (must force) - container 878177864ae7 is using its referenced image af341ccd2df8
Feb 27 16:13:03 docker2 kubelet[1516]: E0227 16:13:03.634055 1516 kuberuntime_image.go:120] Remove image "sha256:af341ccd2df8b0e2d67cf8dd32e087bfda4e5756ebd1c76bbf3efa0dc246590e" failed: rpc error: code = Unknown desc = Error response from daemon: conflict: unable to remove repository reference "alpine:3.10" (must force) - container 878177864ae7 is using its referenced image af341ccd2df8
Feb 27 16:13:03 docker2 kubelet[1516]: I0227 16:13:03.634078 1516 image_gc_manager.go:371] [imageGCManager]: Removing image "sha256:656679563d3056aab37d10312ba5e3531e3c62da465d592db0445b103a6d32a5" to free 350340155 bytes
Feb 27 16:13:03 docker2 kubelet[1516]: E0227 16:13:03.635693 1516 remote_image.go:135] RemoveImage "sha256:656679563d3056aab37d10312ba5e3531e3c62da465d592db0445b103a6d32a5" from image service failed: rpc error: code = Unknown desc = Error response from daemon: conflict: unable to delete 656679563d30 (must be forced) - image is being used by stopped container 7fd55458cbfe
Feb 27 16:13:03 docker2 kubelet[1516]: E0227 16:13:03.635812 1516 kuberuntime_image.go:120] Remove image "sha256:656679563d3056aab37d10312ba5e3531e3c62da465d592db0445b103a6d32a5" failed: rpc error: code = Unknown desc = Error response from daemon: conflict: unable to delete 656679563d30 (must be forced) - image is being used by stopped container 7fd55458cbfe
Feb 27 16:13:03 docker2 kubelet[1516]: W0227 16:13:03.635836 1516 eviction_manager.go:417] eviction manager: unexpected error when attempting to reduce ephemeral-storage pressure: wanted to free 9223372036854775807 bytes, but freed 0 bytes space with errors in image deletion: [rpc error: code = Unknown desc = Error response from daemon: conflict: unable to remove repository reference "alpine:3.10" (must force) - container 878177864ae7 is using its referenced image af341ccd2df8, rpc error: code = Unknown desc = Error response from daemon: conflict: unable to delete 656679563d30 (must be forced) - image is being used by stopped container 7fd55458cbfe]
Feb 27 16:13:03 docker2 kubelet[1516]: I0227 16:13:03.640307 1516 eviction_manager.go:341] eviction manager: must evict pod(s) to reclaim ephemeral-storage
Feb 27 16:13:03 docker2 kubelet[1516]: I0227 16:13:03.640394 1516 eviction_manager.go:359] eviction manager: pods ranked for eviction: kube-controller-manager-docker2_kube-system(091462203a51a29b462d059b44429ffa), kube-apiserver-docker2_kube-system(d5c4653e86e73ffdfab210fa258c1e08), kube-scheduler-docker2_kube-system(9c994ea62a2d8d6f1bb7498f10aa6fcf), etcd-docker2_kube-system(bd69997820e8e7727464019240391c5b), coredns-6955765f44-7fdrd_kube-system(967d04bc-91bb-4f22-b808-0f86bb60e318), coredns-6955765f44-7nblb_kube-system(93859660-a67d-423a-8999-92a8728e207c), kube-proxy-mz24d_kube-system(18e61b52-d9d8-48d3-8ee6-b4177dbf9411)
Feb 27 16:13:03 docker2 kubelet[1516]: E0227 16:13:03.640447 1516 eviction_manager.go:551] eviction manager: cannot evict a critical pod kube-controller-manager-docker2_kube-system(091462203a51a29b462d059b44429ffa)
Feb 27 16:13:03 docker2 kubelet[1516]: E0227 16:13:03.640470 1516 eviction_manager.go:551] eviction manager: cannot evict a critical pod kube-apiserver-docker2_kube-system(d5c4653e86e73ffdfab210fa258c1e08)
Feb 27 16:13:03 docker2 kubelet[1516]: E0227 16:13:03.640487 1516 eviction_manager.go:551] eviction manager: cannot evict a critical pod kube-scheduler-docker2_kube-system(9c994ea62a2d8d6f1bb7498f10aa6fcf)
Feb 27 16:13:03 docker2 kubelet[1516]: E0227 16:13:03.640508 1516 eviction_manager.go:551] eviction manager: cannot evict a critical pod etcd-docker2_kube-system(bd69997820e8e7727464019240391c5b)
Feb 27 16:13:03 docker2 kubelet[1516]: E0227 16:13:03.640527 1516 eviction_manager.go:551] eviction manager: cannot evict a critical pod coredns-6955765f44-7fdrd_kube-system(967d04bc-91bb-4f22-b808-0f86bb60e318)
Feb 27 16:13:03 docker2 kubelet[1516]: E0227 16:13:03.640544 1516 eviction_manager.go:551] eviction manager: cannot evict a critical pod coredns-6955765f44-7nblb_kube-system(93859660-a67d-423a-8999-92a8728e207c)
Feb 27 16:13:03 docker2 kubelet[1516]: E0227 16:13:03.640561 1516 eviction_manager.go:551] eviction manager: cannot evict a critical pod kube-proxy-mz24d_kube-system(18e61b52-d9d8-48d3-8ee6-b4177dbf9411)
Feb 27 16:13:03 docker2 kubelet[1516]: I0227 16:13:03.640576 1516 eviction_manager.go:383] eviction manager: unable to evict any pods from the node
Feb 27 16:13:13 docker2 kubelet[1516]: W0227 16:13:13.656671 1516 eviction_manager.go:330] eviction manager: attempting to reclaim ephemeral-storage
Feb 27 16:13:13 docker2 kubelet[1516]: I0227 16:13:13.656783 1516 container_gc.go:85] attempting to delete unused containers
Feb 27 16:13:13 docker2 kubelet[1516]: I0227 16:13:13.664454 1516 image_gc_manager.go:317] attempting to delete unused images
Feb 27 16:13:13 docker2 kubelet[1516]: I0227 16:13:13.672957 1516 image_gc_manager.go:371] [imageGCManager]: Removing image "sha256:af341ccd2df8b0e2d67cf8dd32e087bfda4e5756ebd1c76bbf3efa0dc246590e" to free 5556786 bytes
Feb 27 16:13:13 docker2 kubelet[1516]: E0227 16:13:13.674556 1516 remote_image.go:135] RemoveImage "sha256:af341ccd2df8b0e2d67cf8dd32e087bfda4e5756ebd1c76bbf3efa0dc246590e" from image service failed: rpc error: code = Unknown desc = Error response from daemon: conflict: unable to remove repository reference "alpine:3.10" (must force) - container 878177864ae7 is using its referenced image af341ccd2df8
Feb 27 16:13:13 docker2 kubelet[1516]: E0227 16:13:13.674635 1516 kuberuntime_image.go:120] Remove image "sha256:af341ccd2df8b0e2d67cf8dd32e087bfda4e5756ebd1c76bbf3efa0dc246590e" failed: rpc error: code = Unknown desc = Error response from daemon: conflict: unable to remove repository reference "alpine:3.10" (must force) - container 878177864ae7 is using its referenced image af341ccd2df8
Feb 27 16:13:13 docker2 kubelet[1516]: I0227 16:13:13.674660 1516 image_gc_manager.go:371] [imageGCManager]: Removing image "sha256:656679563d3056aab37d10312ba5e3531e3c62da465d592db0445b103a6d32a5" to free 350340155 bytes
Feb 27 16:13:13 docker2 kubelet[1516]: E0227 16:13:13.676476 1516 remote_image.go:135] RemoveImage "sha256:656679563d3056aab37d10312ba5e3531e3c62da465d592db0445b103a6d32a5" from image service failed: rpc error: code = Unknown desc = Error response from daemon: conflict: unable to delete 656679563d30 (must be forced) - image is being used by stopped container 7fd55458cbfe
Feb 27 16:13:13 docker2 kubelet[1516]: E0227 16:13:13.676533 1516 kuberuntime_image.go:120] Remove image "sha256:656679563d3056aab37d10312ba5e3531e3c62da465d592db0445b103a6d32a5" failed: rpc error: code = Unknown desc = Error response from daemon: conflict: unable to delete 656679563d30 (must be forced) - image is being used by stopped container 7fd55458cbfe
Feb 27 16:13:13 docker2 kubelet[1516]: W0227 16:13:13.676563 1516 eviction_manager.go:417] eviction manager: unexpected error when attempting to reduce ephemeral-storage pressure: wanted to free 9223372036854775807 bytes, but freed 0 bytes space with errors in image deletion: [rpc error: code = Unknown desc = Error response from daemon: conflict: unable to remove repository reference "alpine:3.10" (must force) - container 878177864ae7 is using its referenced image af341ccd2df8, rpc error: code = Unknown desc = Error response from daemon: conflict: unable to delete 656679563d30 (must be forced) - image is being used by stopped container 7fd55458cbfe]
Feb 27 16:13:13 docker2 kubelet[1516]: I0227 16:13:13.680937 1516 eviction_manager.go:341] eviction manager: must evict pod(s) to reclaim ephemeral-storage
Feb 27 16:13:13 docker2 kubelet[1516]: I0227 16:13:13.681040 1516 eviction_manager.go:359] eviction manager: pods ranked for eviction: kube-controller-manager-docker2_kube-system(091462203a51a29b462d059b44429ffa), kube-apiserver-docker2_kube-system(d5c4653e86e73ffdfab210fa258c1e08), kube-scheduler-docker2_kube-system(9c994ea62a2d8d6f1bb7498f10aa6fcf), etcd-docker2_kube-system(bd69997820e8e7727464019240391c5b), coredns-6955765f44-7fdrd_kube-system(967d04bc-91bb-4f22-b808-0f86bb60e318), coredns-6955765f44-7nblb_kube-system(93859660-a67d-423a-8999-92a8728e207c), kube-proxy-mz24d_kube-system(18e61b52-d9d8-48d3-8ee6-b4177dbf9411)
Feb 27 16:13:13 docker2 kubelet[1516]: E0227 16:13:13.681105 1516 eviction_manager.go:551] eviction manager: cannot evict a critical pod kube-controller-manager-docker2_kube-system(091462203a51a29b462d059b44429ffa)
Feb 27 16:13:13 docker2 kubelet[1516]: E0227 16:13:13.681125 1516 eviction_manager.go:551] eviction manager: cannot evict a critical pod kube-apiserver-docker2_kube-system(d5c4653e86e73ffdfab210fa258c1e08)
Feb 27 16:13:13 docker2 kubelet[1516]: E0227 16:13:13.681139 1516 eviction_manager.go:551] eviction manager: cannot evict a critical pod kube-scheduler-docker2_kube-system(9c994ea62a2d8d6f1bb7498f10aa6fcf)
Feb 27 16:13:13 docker2 kubelet[1516]: E0227 16:13:13.681159 1516 eviction_manager.go:551] eviction manager: cannot evict a critical pod etcd-docker2_kube-system(bd69997820e8e7727464019240391c5b)
Feb 27 16:13:13 docker2 kubelet[1516]: E0227 16:13:13.681180 1516 eviction_manager.go:551] eviction manager: cannot evict a critical pod coredns-6955765f44-7fdrd_kube-system(967d04bc-91bb-4f22-b808-0f86bb60e318)
Feb 27 16:13:13 docker2 kubelet[1516]: E0227 16:13:13.681198 1516 eviction_manager.go:551] eviction manager: cannot evict a critical pod coredns-6955765f44-7nblb_kube-system(93859660-a67d-423a-8999-92a8728e207c)
Feb 27 16:13:13 docker2 kubelet[1516]: E0227 16:13:13.681213 1516 eviction_manager.go:551] eviction manager: cannot evict a critical pod kube-proxy-mz24d_kube-system(18e61b52-d9d8-48d3-8ee6-b4177dbf9411)
Feb 27 16:13:13 docker2 kubelet[1516]: I0227 16:13:13.681228 1516 eviction_manager.go:383] eviction manager: unable to evict any pods from the node
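
A note on the loop above: it repeats roughly every 10 seconds. The "wanted to free 9223372036854775807 bytes" figure is math.MaxInt64, i.e. the kubelet is trying to reclaim as much ephemeral storage as it possibly can. Image GC fails because both candidate images are still referenced (alpine:3.10 / af341ccd2df8 by the running container 878177864ae7, and 656679563d30 by the stopped container 7fd55458cbfe), and pod eviction fails because every remaining pod is a critical kube-system pod. A possible manual workaround, as a sketch only (the container IDs are taken from the log above, and this assumes the node is still reachable via minikube ssh):

$ minikube ssh -p docker2
# inside the node: removing the stopped container unpins image 656679563d30
$ docker rm 7fd55458cbfe
# clean up any remaining stopped containers and dangling images
$ docker system prune
# verify how much space was reclaimed
$ docker system df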

The operating system version:
macOS Catalina (10.15.3)

@balopat added the co/docker-env, co/docker-driver, and co/runtime/docker labels on Feb 27, 2020
@balopat
Contributor Author

balopat commented Feb 27, 2020

FYI, I think this cluster might have been an invalid profile (by the way, when is a profile considered invalid?) - but after starting it became a valid profile...
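
For anyone wondering the same thing, a quick way to check is (assuming a minikube version that supports it; profile list should flag any profile it cannot load as invalid):

$ minikube profile list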

@priyawadhwa added the kind/support and priority/important-soon labels on Mar 4, 2020
@afbjorklund removed the co/docker-env and co/runtime/docker labels on Mar 15, 2020
@tstromberg added the triage/not-reproducible label on Mar 18, 2020
@tstromberg
Contributor

Hopefully it's OK if I close this - there wasn't enough information to make it actionable, and some time has already passed. If you are able to provide additional details, you may reopen it at any point by adding /reopen to your comment.

Here is additional information that may be helpful to us:

  • The full output of minikube logs
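
For example, this should capture everything in one file for attaching (assuming your minikube version has the --file flag on the logs command):

$ minikube logs -p docker2 --file=logs.txt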

Thank you for sharing your experience!
