
Flaky TestFunctional/parallel: NodeLabels, ComponentHealth: connection refused #10128

Closed
lingsamuel opened this issue Jan 12, 2021 · 8 comments
Labels
help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
kind/failing-test: Categorizes issue or PR as related to a consistently or frequently failing test.
kind/flake: Categorizes issue or PR as related to a flaky test.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
priority/important-longterm: Important over the long term, but may not be staffed and/or may need multiple releases to complete.


lingsamuel commented Jan 12, 2021

ComponentHealth fails because kubectl's connection to the apiserver at 10.1.0.4:8441 is refused:

2021-01-12T05:25:37.3463444Z === RUN   TestFunctional/parallel/ComponentHealth
2021-01-12T05:25:37.3464788Z     functional_test.go:376: (dbg) Run:  kubectl --context minikube get po -l tier=control-plane -n kube-system -o=json
2021-01-12T05:25:37.5662203Z     functional_test.go:376: (dbg) Non-zero exit: kubectl --context minikube get po -l tier=control-plane -n kube-system -o=json: exit status 1 (224.949397ms)
2021-01-12T05:25:37.5923987Z         ** stderr ** 
2021-01-12T05:25:37.5929096Z         	The connection to the server 10.1.0.4:8441 was refused - did you specify the right host or port?
2021-01-12T05:25:37.5929831Z         
2021-01-12T05:25:37.5930205Z         ** /stderr **

2021-01-12T05:25:44.7054532Z         	* ==> container status <==
2021-01-12T05:25:44.7055371Z         	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
2021-01-12T05:25:44.7056413Z         	* 96e341e465df2       bfe3a36ebd252       6 seconds ago       Running             coredns                   1                   01dc39edd2b7f
2021-01-12T05:25:44.7058160Z         	* d6580bff778da       ca9843d3b5454       6 seconds ago       Running             kube-apiserver            0                   59f7ff5a48cae
2021-01-12T05:25:44.7059822Z         	* 60d2752fe8f49       85069258b98ac       7 seconds ago       Running             storage-provisioner       2                   d1c5f845b4bd1
2021-01-12T05:25:44.7061932Z         	* 12cc801d8bd4d       ca9843d3b5454       14 seconds ago      Exited              kube-apiserver            1                   ec33feccee719
2021-01-12T05:25:44.7064175Z         	* 0a5da4299472b       85069258b98ac       15 seconds ago      Exited              storage-provisioner       1                   d1c5f845b4bd1
2021-01-12T05:25:44.7065832Z         	* 72c887a753f72       10cc881966cfd       15 seconds ago      Running             kube-proxy                1                   6cc485284b50b
2021-01-12T05:25:44.7067452Z         	* 8407cbea3f2bc       b9fa1895dcaa6       15 seconds ago      Running             kube-controller-manager   1                   60c5f61dacb56
2021-01-12T05:25:44.7068761Z         	* 9f013e8027699       0369cf4303ffd       15 seconds ago      Running             etcd                      1                   a59e467efb040
2021-01-12T05:25:44.7070434Z         	* b677276b51079       3138b6e3d4712       15 seconds ago      Running             kube-scheduler            1                   b2ef13c9f20df
2021-01-12T05:25:44.7071593Z         	* 2fed919fb79c9       bfe3a36ebd252       21 seconds ago      Exited              coredns                   0                   cd7ee1b325f2e
2021-01-12T05:25:44.7073167Z         	* cabf497080b12       10cc881966cfd       24 seconds ago      Exited              kube-proxy                0                   7ee0171f17651
2021-01-12T05:25:44.7074339Z         	* 6c8942dc3c5c0       0369cf4303ffd       48 seconds ago      Exited              etcd                      0                   65d6cc4c95ed3
2021-01-12T05:25:44.7075753Z         	* 391959bafdb17       3138b6e3d4712       48 seconds ago      Exited              kube-scheduler            0                   b49046e7f27dc
2021-01-12T05:25:44.7077338Z         	* 89e2f58a9be63       b9fa1895dcaa6       48 seconds ago      Exited              kube-controller-manager   0                   70eb703baef61



2021-01-12T05:25:44.7231317Z         	* ==> etcd [6c8942dc3c5c] <==
2021-01-12T05:25:44.7232380Z         	* 2021-01-12 05:24:56.481969 I | etcdserver: ea713dbad49c9c1c as single-node; fast-forwarding 9 ticks (election ticks 10)
2021-01-12T05:25:44.7233458Z         	* 2021-01-12 05:24:56.482026 I | embed: listening for peers on 10.1.0.4:2380
2021-01-12T05:25:44.7234248Z         	* raft2021/01/12 05:24:56 INFO: ea713dbad49c9c1c switched to configuration voters=(16893351549883685916)
2021-01-12T05:25:44.7235719Z         	* 2021-01-12 05:24:56.484882 I | etcdserver/membership: added member ea713dbad49c9c1c [https://10.1.0.4:2380] to cluster ac34d98d3f2e481e
2021-01-12T05:25:44.7236812Z         	* raft2021/01/12 05:24:56 INFO: ea713dbad49c9c1c is starting a new election at term 1
2021-01-12T05:25:44.7237614Z         	* raft2021/01/12 05:24:56 INFO: ea713dbad49c9c1c became candidate at term 2
2021-01-12T05:25:44.7238537Z         	* raft2021/01/12 05:24:56 INFO: ea713dbad49c9c1c received MsgVoteResp from ea713dbad49c9c1c at term 2
2021-01-12T05:25:44.7239421Z         	* raft2021/01/12 05:24:56 INFO: ea713dbad49c9c1c became leader at term 2
2021-01-12T05:25:44.7240555Z         	* raft2021/01/12 05:24:56 INFO: raft.node: ea713dbad49c9c1c elected leader ea713dbad49c9c1c at term 2
2021-01-12T05:25:44.7242043Z         	* 2021-01-12 05:24:56.949380 I | etcdserver: setting up the initial cluster version to 3.4
2021-01-12T05:25:44.7243186Z         	* 2021-01-12 05:24:56.952792 N | etcdserver/membership: set the initial cluster version to 3.4
2021-01-12T05:25:44.7244310Z         	* 2021-01-12 05:24:56.952840 I | etcdserver/api: enabled capabilities for version 3.4
2021-01-12T05:25:44.7245734Z         	* 2021-01-12 05:24:56.952873 I | etcdserver: published {Name:fv-az183-750 ClientURLs:[https://10.1.0.4:2379]} to cluster ac34d98d3f2e481e
2021-01-12T05:25:44.7248895Z         	* 2021-01-12 05:24:56.952976 I | embed: ready to serve client requests
2021-01-12T05:25:44.7251115Z         	* 2021-01-12 05:24:56.954140 I | embed: serving client requests on 127.0.0.1:2379
2021-01-12T05:25:44.7257769Z         	* 2021-01-12 05:24:56.961163 I | embed: ready to serve client requests
2021-01-12T05:25:44.7258974Z         	* 2021-01-12 05:24:56.965758 I | embed: serving client requests on 10.1.0.4:2379
2021-01-12T05:25:44.7260032Z         	* 2021-01-12 05:25:06.289953 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-12T05:25:44.7261078Z         	* 2021-01-12 05:25:06.371771 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-12T05:25:44.7262129Z         	* 2021-01-12 05:25:16.371803 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-12T05:25:44.7263175Z         	* 2021-01-12 05:25:26.371736 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-12T05:25:44.7264358Z         	* 2021-01-12 05:25:26.940143 N | pkg/osutil: received terminated signal, shutting down...
2021-01-12T05:25:44.7266553Z         	* WARNING: 2021/01/12 05:25:26 grpc: addrConn.createTransport failed to connect to {10.1.0.4:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.1.0.4:2379: connect: connection refused". Reconnecting...
2021-01-12T05:25:44.7269233Z         	* 2021-01-12 05:25:26.958966 I | etcdserver: skipped leadership transfer for single voting member cluster
2021-01-12T05:25:44.7272473Z         	* WARNING: 2021/01/12 05:25:26 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
2021-01-12T05:25:44.7273705Z         	* 

/kind failing-test
/kind flake

Full logs


2021-01-12T05:25:37.3463444Z === RUN   TestFunctional/parallel/ComponentHealth
2021-01-12T05:25:37.3464788Z     functional_test.go:376: (dbg) Run:  kubectl --context minikube get po -l tier=control-plane -n kube-system -o=json
2021-01-12T05:25:37.5662203Z     functional_test.go:376: (dbg) Non-zero exit: kubectl --context minikube get po -l tier=control-plane -n kube-system -o=json: exit status 1 (224.949397ms)
2021-01-12T05:25:37.5728717Z         
2021-01-12T05:25:37.5773060Z         -- stdout --
2021-01-12T05:25:37.5896563Z         	{
2021-01-12T05:25:37.5897972Z         	    "apiVersion": "v1",
2021-01-12T05:25:37.5898798Z         	    "items": [],
2021-01-12T05:25:37.5899379Z         	    "kind": "List",
2021-01-12T05:25:37.5899936Z         	    "metadata": {
2021-01-12T05:25:37.5911946Z         	        "resourceVersion": "",
2021-01-12T05:25:37.5919573Z         	        "selfLink": ""
2021-01-12T05:25:37.5920173Z         	    }
2021-01-12T05:25:37.5920653Z         	}
2021-01-12T05:25:37.5921149Z         
2021-01-12T05:25:37.5922255Z         -- /stdout --
2021-01-12T05:25:37.5923987Z         ** stderr ** 
2021-01-12T05:25:37.5929096Z         	The connection to the server 10.1.0.4:8441 was refused - did you specify the right host or port?
2021-01-12T05:25:37.5929831Z         
2021-01-12T05:25:37.5930205Z         ** /stderr **
2021-01-12T05:25:37.5931613Z     functional_test.go:378: failed to get components. args "kubectl --context minikube get po -l tier=control-plane -n kube-system -o=json": exit status 1
2021-01-12T05:25:37.5933248Z     helpers_test.go:216: -----------------------post-mortem--------------------------------
2021-01-12T05:25:37.5934725Z     helpers_test.go:233: (dbg) Run:  ./minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
2021-01-12T05:25:42.7979113Z     helpers_test.go:233: (dbg) Done: ./minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube: (5.233219186s)
2021-01-12T05:25:42.7981717Z     helpers_test.go:238: <<< TestFunctional/parallel/ComponentHealth FAILED: start of post-mortem logs <<<
2021-01-12T05:25:42.7983984Z     helpers_test.go:239: ======>  post-mortem[TestFunctional/parallel/ComponentHealth]: minikube logs <======
2021-01-12T05:25:42.7985850Z     helpers_test.go:241: (dbg) Run:  ./minikube-linux-amd64 -p minikube logs -n 25
2021-01-12T05:25:44.6975109Z     helpers_test.go:241: (dbg) Done: ./minikube-linux-amd64 -p minikube logs -n 25: (1.895640799s)
2021-01-12T05:25:44.6990091Z     helpers_test.go:246: TestFunctional/parallel/ComponentHealth logs: 
2021-01-12T05:25:44.6991537Z         -- stdout --
2021-01-12T05:25:44.6992357Z         	* ==> Docker <==
2021-01-12T05:25:44.6994218Z         	* Journal file /var/log/journal/71212933024b41b3962b1df8d52a7d31/user-1000.journal is truncated, ignoring file.
2021-01-12T05:25:44.6995426Z         	* -- Logs begin at Fri 2020-12-18 20:44:42 UTC, end at Tue 2021-01-12 05:25:43 UTC. --
2021-01-12T05:25:44.6996521Z         	* Jan 12 05:22:48 fv-az183-750 dockerd[1611]: time="2021-01-12T05:22:48.300718000Z" level=info msg="Loading containers: start."
2021-01-12T05:25:44.6998427Z         	* Jan 12 05:22:48 fv-az183-750 dockerd[1611]: time="2021-01-12T05:22:48.518565900Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
2021-01-12T05:25:44.6999916Z         	* Jan 12 05:22:48 fv-az183-750 dockerd[1611]: time="2021-01-12T05:22:48.560009100Z" level=info msg="Loading containers: done."
2021-01-12T05:25:44.7002172Z         	* Jan 12 05:22:48 fv-az183-750 dockerd[1611]: time="2021-01-12T05:22:48.854791000Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
2021-01-12T05:25:44.7004558Z         	* Jan 12 05:22:48 fv-az183-750 dockerd[1611]: time="2021-01-12T05:22:48.855043800Z" level=info msg="Docker daemon" commit=bd33bbf0497b2327516dc799a5e541b720822a4c graphdriver(s)=overlay2 version=19.03.13+azure
2021-01-12T05:25:44.7006315Z         	* Jan 12 05:22:48 fv-az183-750 dockerd[1611]: time="2021-01-12T05:22:48.856372100Z" level=info msg="Daemon has completed initialization"
2021-01-12T05:25:44.7007782Z         	* Jan 12 05:22:48 fv-az183-750 systemd[1]: Started Docker Application Container Engine.
2021-01-12T05:25:44.7009080Z         	* Jan 12 05:22:48 fv-az183-750 dockerd[1611]: time="2021-01-12T05:22:48.892748000Z" level=info msg="API listen on /var/run/docker.sock"
2021-01-12T05:25:44.7010799Z         	* Jan 12 05:25:27 fv-az183-750 dockerd[1611]: time="2021-01-12T05:25:27.259636575Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-01-12T05:25:44.7012951Z         	* Jan 12 05:25:27 fv-az183-750 dockerd[1611]: time="2021-01-12T05:25:27.259681177Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-01-12T05:25:44.7015390Z         	* Jan 12 05:25:27 fv-az183-750 dockerd[1611]: time="2021-01-12T05:25:27.305774864Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-01-12T05:25:44.7017327Z         	* Jan 12 05:25:27 fv-az183-750 dockerd[1611]: time="2021-01-12T05:25:27.330321722Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-01-12T05:25:44.7019224Z         	* Jan 12 05:25:27 fv-az183-750 dockerd[1611]: time="2021-01-12T05:25:27.362708917Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-01-12T05:25:44.7021274Z         	* Jan 12 05:25:27 fv-az183-750 dockerd[1611]: time="2021-01-12T05:25:27.386005321Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-01-12T05:25:44.7024010Z         	* Jan 12 05:25:27 fv-az183-750 dockerd[1611]: time="2021-01-12T05:25:27.400223734Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-01-12T05:25:44.7027091Z         	* Jan 12 05:25:27 fv-az183-750 dockerd[1611]: time="2021-01-12T05:25:27.408642997Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-01-12T05:25:44.7030322Z         	* Jan 12 05:25:27 fv-az183-750 dockerd[1611]: time="2021-01-12T05:25:27.409312826Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-01-12T05:25:44.7033368Z         	* Jan 12 05:25:27 fv-az183-750 dockerd[1611]: time="2021-01-12T05:25:27.432384620Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-01-12T05:25:44.7036658Z         	* Jan 12 05:25:27 fv-az183-750 dockerd[1611]: time="2021-01-12T05:25:27.433886885Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-01-12T05:25:44.7039318Z         	* Jan 12 05:25:27 fv-az183-750 dockerd[1611]: time="2021-01-12T05:25:27.436584701Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-01-12T05:25:44.7041745Z         	* Jan 12 05:25:28 fv-az183-750 dockerd[1611]: time="2021-01-12T05:25:28.739585889Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-01-12T05:25:44.7044323Z         	* Jan 12 05:25:29 fv-az183-750 dockerd[1611]: time="2021-01-12T05:25:29.380744504Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-01-12T05:25:44.7047595Z         	* Jan 12 05:25:32 fv-az183-750 dockerd[1611]: time="2021-01-12T05:25:32.003663141Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-01-12T05:25:44.7050240Z         	* Jan 12 05:25:37 fv-az183-750 dockerd[1611]: time="2021-01-12T05:25:37.765622440Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-01-12T05:25:44.7052702Z         	* Jan 12 05:25:37 fv-az183-750 dockerd[1611]: time="2021-01-12T05:25:37.765662142Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-01-12T05:25:44.7053968Z         	* 
2021-01-12T05:25:44.7054532Z         	* ==> container status <==
2021-01-12T05:25:44.7055371Z         	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
2021-01-12T05:25:44.7056413Z         	* 96e341e465df2       bfe3a36ebd252       6 seconds ago       Running             coredns                   1                   01dc39edd2b7f
2021-01-12T05:25:44.7058160Z         	* d6580bff778da       ca9843d3b5454       6 seconds ago       Running             kube-apiserver            0                   59f7ff5a48cae
2021-01-12T05:25:44.7059822Z         	* 60d2752fe8f49       85069258b98ac       7 seconds ago       Running             storage-provisioner       2                   d1c5f845b4bd1
2021-01-12T05:25:44.7061932Z         	* 12cc801d8bd4d       ca9843d3b5454       14 seconds ago      Exited              kube-apiserver            1                   ec33feccee719
2021-01-12T05:25:44.7064175Z         	* 0a5da4299472b       85069258b98ac       15 seconds ago      Exited              storage-provisioner       1                   d1c5f845b4bd1
2021-01-12T05:25:44.7065832Z         	* 72c887a753f72       10cc881966cfd       15 seconds ago      Running             kube-proxy                1                   6cc485284b50b
2021-01-12T05:25:44.7067452Z         	* 8407cbea3f2bc       b9fa1895dcaa6       15 seconds ago      Running             kube-controller-manager   1                   60c5f61dacb56
2021-01-12T05:25:44.7068761Z         	* 9f013e8027699       0369cf4303ffd       15 seconds ago      Running             etcd                      1                   a59e467efb040
2021-01-12T05:25:44.7070434Z         	* b677276b51079       3138b6e3d4712       15 seconds ago      Running             kube-scheduler            1                   b2ef13c9f20df
2021-01-12T05:25:44.7071593Z         	* 2fed919fb79c9       bfe3a36ebd252       21 seconds ago      Exited              coredns                   0                   cd7ee1b325f2e
2021-01-12T05:25:44.7073167Z         	* cabf497080b12       10cc881966cfd       24 seconds ago      Exited              kube-proxy                0                   7ee0171f17651
2021-01-12T05:25:44.7074339Z         	* 6c8942dc3c5c0       0369cf4303ffd       48 seconds ago      Exited              etcd                      0                   65d6cc4c95ed3
2021-01-12T05:25:44.7075753Z         	* 391959bafdb17       3138b6e3d4712       48 seconds ago      Exited              kube-scheduler            0                   b49046e7f27dc
2021-01-12T05:25:44.7077338Z         	* 89e2f58a9be63       b9fa1895dcaa6       48 seconds ago      Exited              kube-controller-manager   0                   70eb703baef61
2021-01-12T05:25:44.7078178Z         	* 
2021-01-12T05:25:44.7078775Z         	* ==> coredns [2fed919fb79c] <==
2021-01-12T05:25:44.7079218Z         	* .:53
2021-01-12T05:25:44.7079950Z         	* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
2021-01-12T05:25:44.7080931Z         	* CoreDNS-1.7.0
2021-01-12T05:25:44.7081451Z         	* linux/amd64, go1.14.4, f59c03d
2021-01-12T05:25:44.7082197Z         	* [INFO] SIGTERM: Shutting down servers then terminating
2021-01-12T05:25:44.7083529Z         	* [INFO] plugin/health: Going into lameduck mode for 5s
2021-01-12T05:25:44.7084067Z         	* 
2021-01-12T05:25:44.7084494Z         	* ==> coredns [96e341e465df] <==
2021-01-12T05:25:44.7084931Z         	* .:53
2021-01-12T05:25:44.7085649Z         	* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
2021-01-12T05:25:44.7086660Z         	* CoreDNS-1.7.0
2021-01-12T05:25:44.7087890Z         	* linux/amd64, go1.14.4, f59c03d
2021-01-12T05:25:44.7090143Z         	* W0112 05:25:37.510967       1 reflector.go:404] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: watch of *v1.Namespace ended with: very short watch: pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Unexpected watch close - watch lasted less than a second and no items received
2021-01-12T05:25:44.7093037Z         	* W0112 05:25:37.510999       1 reflector.go:404] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: watch of *v1.Endpoints ended with: very short watch: pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Unexpected watch close - watch lasted less than a second and no items received
2021-01-12T05:25:44.7095856Z         	* W0112 05:25:37.511052       1 reflector.go:404] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: watch of *v1.Service ended with: very short watch: pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Unexpected watch close - watch lasted less than a second and no items received
2021-01-12T05:25:44.7097479Z         	* 
2021-01-12T05:25:44.7097912Z         	* ==> describe nodes <==
2021-01-12T05:25:44.7100356Z         	* Name:               fv-az183-750
2021-01-12T05:25:44.7101143Z         	* Roles:              control-plane,master
2021-01-12T05:25:44.7101906Z         	* Labels:             beta.kubernetes.io/arch=amd64
2021-01-12T05:25:44.7102641Z         	*                     beta.kubernetes.io/os=linux
2021-01-12T05:25:44.7103306Z         	*                     kubernetes.io/arch=amd64
2021-01-12T05:25:44.7104170Z         	*                     kubernetes.io/hostname=fv-az183-750
2021-01-12T05:25:44.7104848Z         	*                     kubernetes.io/os=linux
2021-01-12T05:25:44.7105692Z         	*                     minikube.k8s.io/commit=edc415c06435a77cb867d0997d33533b1507ed0b
2021-01-12T05:25:44.7107207Z         	*                     minikube.k8s.io/name=minikube
2021-01-12T05:25:44.7108222Z         	*                     minikube.k8s.io/updated_at=2021_01_12T05_25_03_0700
2021-01-12T05:25:44.7108911Z         	*                     minikube.k8s.io/version=v1.16.0
2021-01-12T05:25:44.7109988Z         	*                     node-role.kubernetes.io/control-plane=
2021-01-12T05:25:44.7111621Z         	*                     node-role.kubernetes.io/master=
2021-01-12T05:25:44.7114240Z         	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
2021-01-12T05:25:44.7115386Z         	*                     node.alpha.kubernetes.io/ttl: 0
2021-01-12T05:25:44.7117135Z         	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
2021-01-12T05:25:44.7118236Z         	* CreationTimestamp:  Tue, 12 Jan 2021 05:25:00 +0000
2021-01-12T05:25:44.7118834Z         	* Taints:             <none>
2021-01-12T05:25:44.7119349Z         	* Unschedulable:      false
2021-01-12T05:25:44.7119844Z         	* Lease:
2021-01-12T05:25:44.7120574Z         	*   HolderIdentity:  fv-az183-750
2021-01-12T05:25:44.7121163Z         	*   AcquireTime:     <unset>
2021-01-12T05:25:44.7121737Z         	*   RenewTime:       Tue, 12 Jan 2021 05:25:35 +0000
2021-01-12T05:25:44.7122238Z         	* Conditions:
2021-01-12T05:25:44.7123088Z         	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
2021-01-12T05:25:44.7124499Z         	*   ----             ------  -----------------                 ------------------                ------                       -------
2021-01-12T05:25:44.7125757Z         	*   MemoryPressure   False   Tue, 12 Jan 2021 05:25:35 +0000   Tue, 12 Jan 2021 05:24:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
2021-01-12T05:25:44.7127427Z         	*   DiskPressure     False   Tue, 12 Jan 2021 05:25:35 +0000   Tue, 12 Jan 2021 05:24:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
2021-01-12T05:25:44.7128912Z         	*   PIDPressure      False   Tue, 12 Jan 2021 05:25:35 +0000   Tue, 12 Jan 2021 05:24:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
2021-01-12T05:25:44.7130253Z         	*   Ready            True    Tue, 12 Jan 2021 05:25:35 +0000   Tue, 12 Jan 2021 05:25:35 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled
2021-01-12T05:25:44.7131099Z         	* Addresses:
2021-01-12T05:25:44.7131569Z         	*   InternalIP:  10.1.0.4
2021-01-12T05:25:44.7132313Z         	*   Hostname:    fv-az183-750
2021-01-12T05:25:44.7132791Z         	* Capacity:
2021-01-12T05:25:44.7133224Z         	*   cpu:                2
2021-01-12T05:25:44.7133929Z         	*   ephemeral-storage:  87218124Ki
2021-01-12T05:25:44.7134696Z         	*   hugepages-1Gi:      0
2021-01-12T05:25:44.7135374Z         	*   hugepages-2Mi:      0
2021-01-12T05:25:44.7135858Z         	*   memory:             7121296Ki
2021-01-12T05:25:44.7136312Z         	*   pods:               110
2021-01-12T05:25:44.7136763Z         	* Allocatable:
2021-01-12T05:25:44.7137216Z         	*   cpu:                2
2021-01-12T05:25:44.7137911Z         	*   ephemeral-storage:  87218124Ki
2021-01-12T05:25:44.7138652Z         	*   hugepages-1Gi:      0
2021-01-12T05:25:44.7139323Z         	*   hugepages-2Mi:      0
2021-01-12T05:25:44.7139812Z         	*   memory:             7121296Ki
2021-01-12T05:25:44.7140262Z         	*   pods:               110
2021-01-12T05:25:44.7140682Z         	* System Info:
2021-01-12T05:25:44.7141247Z         	*   Machine ID:                 71212933024b41b3962b1df8d52a7d31
2021-01-12T05:25:44.7142261Z         	*   System UUID:                9d1bd067-cf7a-ad4f-89c1-d8367f57d5d5
2021-01-12T05:25:44.7143393Z         	*   Boot ID:                    54c15c7b-bcb1-47c6-84dc-deb0c386b474
2021-01-12T05:25:44.7144305Z         	*   Kernel Version:             5.4.0-1032-azure
2021-01-12T05:25:44.7144876Z         	*   OS Image:                   Ubuntu 18.04.5 LTS
2021-01-12T05:25:44.7145518Z         	*   Operating System:           linux
2021-01-12T05:25:44.7146078Z         	*   Architecture:               amd64
2021-01-12T05:25:44.7146765Z         	*   Container Runtime Version:  docker://19.3.13
2021-01-12T05:25:44.7147419Z         	*   Kubelet Version:            v1.20.0
2021-01-12T05:25:44.7148179Z         	*   Kube-Proxy Version:         v1.20.0
2021-01-12T05:25:44.7148710Z         	* PodCIDR:                      10.244.0.0/24
2021-01-12T05:25:44.7149216Z         	* PodCIDRs:                     10.244.0.0/24
2021-01-12T05:25:44.7149952Z         	* Non-terminated Pods:          (7 in total)
2021-01-12T05:25:44.7150807Z         	*   Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
2021-01-12T05:25:44.7151962Z         	*   ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
2021-01-12T05:25:44.7153146Z         	*   kube-system                 coredns-74ff55c5b-czs6s                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (2%)     24s
2021-01-12T05:25:44.7154325Z         	*   kube-system                 etcd-fv-az183-750                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         39s
2021-01-12T05:25:44.7155590Z         	*   kube-system                 kube-apiserver-fv-az183-750             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7s
2021-01-12T05:25:44.7158279Z         	*   kube-system                 kube-controller-manager-fv-az183-750    200m (10%)    0 (0%)      0 (0%)           0 (0%)         39s
2021-01-12T05:25:44.7159731Z         	*   kube-system                 kube-proxy-8dnk2                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
2021-01-12T05:25:44.7160967Z         	*   kube-system                 kube-scheduler-fv-az183-750             100m (5%)     0 (0%)      0 (0%)           0 (0%)         39s
2021-01-12T05:25:44.7163548Z         	*   kube-system                 storage-provisioner                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
2021-01-12T05:25:44.7164461Z         	* Allocated resources:
2021-01-12T05:25:44.7165337Z         	*   (Total limits may be over 100 percent, i.e., overcommitted.)
2021-01-12T05:25:44.7166229Z         	*   Resource           Requests    Limits
2021-01-12T05:25:44.7167488Z         	*   --------           --------    ------
2021-01-12T05:25:44.7168188Z         	*   cpu                750m (37%)  0 (0%)
2021-01-12T05:25:44.7168737Z         	*   memory             170Mi (2%)  170Mi (2%)
2021-01-12T05:25:44.7169575Z         	*   ephemeral-storage  100Mi (0%)  0 (0%)
2021-01-12T05:25:44.7170287Z         	*   hugepages-1Gi      0 (0%)      0 (0%)
2021-01-12T05:25:44.7170965Z         	*   hugepages-2Mi      0 (0%)      0 (0%)
2021-01-12T05:25:44.7171404Z         	* Events:
2021-01-12T05:25:44.7171900Z         	*   Type    Reason                   Age                From        Message
2021-01-12T05:25:44.7172973Z         	*   ----    ------                   ----               ----        -------
2021-01-12T05:25:44.7173613Z         	*   Normal  Starting                 51s                kubelet     Starting kubelet.
2021-01-12T05:25:44.7174961Z         	*   Normal  NodeHasSufficientMemory  51s (x4 over 51s)  kubelet     Node fv-az183-750 status is now: NodeHasSufficientMemory
2021-01-12T05:25:44.7177050Z         	*   Normal  NodeHasNoDiskPressure    51s (x3 over 51s)  kubelet     Node fv-az183-750 status is now: NodeHasNoDiskPressure
2021-01-12T05:25:44.7178857Z         	*   Normal  NodeHasSufficientPID     51s (x3 over 51s)  kubelet     Node fv-az183-750 status is now: NodeHasSufficientPID
2021-01-12T05:25:44.7180246Z         	*   Normal  NodeAllocatableEnforced  51s                kubelet     Updated Node Allocatable limit across pods
2021-01-12T05:25:44.7181987Z         	*   Normal  Starting                 39s                kubelet     Starting kubelet.
2021-01-12T05:25:44.7183687Z         	*   Normal  NodeHasSufficientMemory  39s                kubelet     Node fv-az183-750 status is now: NodeHasSufficientMemory
2021-01-12T05:25:44.7185440Z         	*   Normal  NodeHasNoDiskPressure    39s                kubelet     Node fv-az183-750 status is now: NodeHasNoDiskPressure
2021-01-12T05:25:44.7187133Z         	*   Normal  NodeHasSufficientPID     39s                kubelet     Node fv-az183-750 status is now: NodeHasSufficientPID
2021-01-12T05:25:44.7190033Z         	*   Normal  NodeNotReady             39s                kubelet     Node fv-az183-750 status is now: NodeNotReady
2021-01-12T05:25:44.7191548Z         	*   Normal  NodeAllocatableEnforced  39s                kubelet     Updated Node Allocatable limit across pods
2021-01-12T05:25:44.7193426Z         	*   Normal  NodeReady                29s                kubelet     Node fv-az183-750 status is now: NodeReady
2021-01-12T05:25:44.7195000Z         	*   Normal  Starting                 23s                kube-proxy  Starting kube-proxy.
2021-01-12T05:25:44.7197284Z         	*   Normal  Starting                 9s                 kube-proxy  Starting kube-proxy.
2021-01-12T05:25:44.7198312Z         	*   Normal  Starting                 8s                 kubelet     Starting kubelet.
2021-01-12T05:25:44.7199778Z         	*   Normal  NodeHasSufficientMemory  8s                 kubelet     Node fv-az183-750 status is now: NodeHasSufficientMemory
2021-01-12T05:25:44.7201684Z         	*   Normal  NodeHasNoDiskPressure    8s                 kubelet     Node fv-az183-750 status is now: NodeHasNoDiskPressure
2021-01-12T05:25:44.7203385Z         	*   Normal  NodeHasSufficientPID     8s                 kubelet     Node fv-az183-750 status is now: NodeHasSufficientPID
2021-01-12T05:25:44.7204866Z         	*   Normal  NodeNotReady             8s                 kubelet     Node fv-az183-750 status is now: NodeNotReady
2021-01-12T05:25:44.7206040Z         	*   Normal  NodeAllocatableEnforced  8s                 kubelet     Updated Node Allocatable limit across pods
2021-01-12T05:25:44.7207633Z         	*   Normal  NodeReady                8s                 kubelet     Node fv-az183-750 status is now: NodeReady
2021-01-12T05:25:44.7208275Z         	* 
2021-01-12T05:25:44.7208650Z         	* ==> dmesg <==
2021-01-12T05:25:44.7209808Z         	* [Jan12 05:22] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
2021-01-12T05:25:44.7211127Z         	* [  +0.076032] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
2021-01-12T05:25:44.7211997Z         	*               * this clock source is slow. Consider trying other clock sources
2021-01-12T05:25:44.7212840Z         	* [  +1.402387] platform eisa.0: EISA: Cannot allocate resource for mainboard
2021-01-12T05:25:44.7213767Z         	* [  +0.008264] platform eisa.0: Cannot allocate resource for EISA slot 1
2021-01-12T05:25:44.7214523Z         	* [  +0.006519] platform eisa.0: Cannot allocate resource for EISA slot 2
2021-01-12T05:25:44.7215468Z         	* [  +0.014804] platform eisa.0: Cannot allocate resource for EISA slot 3
2021-01-12T05:25:44.7216401Z         	* [  +0.013868] platform eisa.0: Cannot allocate resource for EISA slot 4
2021-01-12T05:25:44.7217166Z         	* [  +0.009885] platform eisa.0: Cannot allocate resource for EISA slot 5
2021-01-12T05:25:44.7217912Z         	* [  +0.010665] platform eisa.0: Cannot allocate resource for EISA slot 6
2021-01-12T05:25:44.7218672Z         	* [  +0.006854] platform eisa.0: Cannot allocate resource for EISA slot 7
2021-01-12T05:25:44.7219615Z         	* [  +0.006597] platform eisa.0: Cannot allocate resource for EISA slot 8
2021-01-12T05:25:44.7220496Z         	* [  +0.171538] Unstable clock detected, switching default tracing clock to "global"
2021-01-12T05:25:44.7221315Z         	*               If you want to keep using the local clock, then add:
2021-01-12T05:25:44.7221913Z         	*                 "trace_clock=local"
2021-01-12T05:25:44.7222485Z         	*               on the kernel command line
2021-01-12T05:25:44.7224337Z         	* [  +5.357688] systemd[1]: Configuration file /etc/systemd/system/runner-provisioner.service is marked executable. Please remove executable permission bits. Proceeding anyway.
2021-01-12T05:25:44.7226140Z         	* [  +0.160800] systemd[1]: /etc/systemd/system/runner-provisioner.service:3: Invalid URL, ignoring: None
2021-01-12T05:25:44.7227181Z         	* [  +8.708016] new mount options do not match the existing superblock, will be ignored
2021-01-12T05:25:44.7227868Z         	* [  +4.886468] Started bpfilter
2021-01-12T05:25:44.7228448Z         	* [  +0.612620] kauditd_printk_skb: 7 callbacks suppressed
2021-01-12T05:25:44.7230041Z         	* [Jan12 05:24] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to [email protected] if you depend on this functionality.
2021-01-12T05:25:44.7230936Z         	* 
2021-01-12T05:25:44.7231317Z         	* ==> etcd [6c8942dc3c5c] <==
2021-01-12T05:25:44.7232380Z         	* 2021-01-12 05:24:56.481969 I | etcdserver: ea713dbad49c9c1c as single-node; fast-forwarding 9 ticks (election ticks 10)
2021-01-12T05:25:44.7233458Z         	* 2021-01-12 05:24:56.482026 I | embed: listening for peers on 10.1.0.4:2380
2021-01-12T05:25:44.7234248Z         	* raft2021/01/12 05:24:56 INFO: ea713dbad49c9c1c switched to configuration voters=(16893351549883685916)
2021-01-12T05:25:44.7235719Z         	* 2021-01-12 05:24:56.484882 I | etcdserver/membership: added member ea713dbad49c9c1c [https://10.1.0.4:2380] to cluster ac34d98d3f2e481e
2021-01-12T05:25:44.7236812Z         	* raft2021/01/12 05:24:56 INFO: ea713dbad49c9c1c is starting a new election at term 1
2021-01-12T05:25:44.7237614Z         	* raft2021/01/12 05:24:56 INFO: ea713dbad49c9c1c became candidate at term 2
2021-01-12T05:25:44.7238537Z         	* raft2021/01/12 05:24:56 INFO: ea713dbad49c9c1c received MsgVoteResp from ea713dbad49c9c1c at term 2
2021-01-12T05:25:44.7239421Z         	* raft2021/01/12 05:24:56 INFO: ea713dbad49c9c1c became leader at term 2
2021-01-12T05:25:44.7240555Z         	* raft2021/01/12 05:24:56 INFO: raft.node: ea713dbad49c9c1c elected leader ea713dbad49c9c1c at term 2
2021-01-12T05:25:44.7242043Z         	* 2021-01-12 05:24:56.949380 I | etcdserver: setting up the initial cluster version to 3.4
2021-01-12T05:25:44.7243186Z         	* 2021-01-12 05:24:56.952792 N | etcdserver/membership: set the initial cluster version to 3.4
2021-01-12T05:25:44.7244310Z         	* 2021-01-12 05:24:56.952840 I | etcdserver/api: enabled capabilities for version 3.4
2021-01-12T05:25:44.7245734Z         	* 2021-01-12 05:24:56.952873 I | etcdserver: published {Name:fv-az183-750 ClientURLs:[https://10.1.0.4:2379]} to cluster ac34d98d3f2e481e
2021-01-12T05:25:44.7248895Z         	* 2021-01-12 05:24:56.952976 I | embed: ready to serve client requests
2021-01-12T05:25:44.7251115Z         	* 2021-01-12 05:24:56.954140 I | embed: serving client requests on 127.0.0.1:2379
2021-01-12T05:25:44.7257769Z         	* 2021-01-12 05:24:56.961163 I | embed: ready to serve client requests
2021-01-12T05:25:44.7258974Z         	* 2021-01-12 05:24:56.965758 I | embed: serving client requests on 10.1.0.4:2379
2021-01-12T05:25:44.7260032Z         	* 2021-01-12 05:25:06.289953 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-12T05:25:44.7261078Z         	* 2021-01-12 05:25:06.371771 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-12T05:25:44.7262129Z         	* 2021-01-12 05:25:16.371803 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-12T05:25:44.7263175Z         	* 2021-01-12 05:25:26.371736 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-12T05:25:44.7264358Z         	* 2021-01-12 05:25:26.940143 N | pkg/osutil: received terminated signal, shutting down...
2021-01-12T05:25:44.7266553Z         	* WARNING: 2021/01/12 05:25:26 grpc: addrConn.createTransport failed to connect to {10.1.0.4:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.1.0.4:2379: connect: connection refused". Reconnecting...
2021-01-12T05:25:44.7269233Z         	* 2021-01-12 05:25:26.958966 I | etcdserver: skipped leadership transfer for single voting member cluster
2021-01-12T05:25:44.7272473Z         	* WARNING: 2021/01/12 05:25:26 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
2021-01-12T05:25:44.7273705Z         	* 
2021-01-12T05:25:44.7274129Z         	* ==> etcd [9f013e802769] <==
2021-01-12T05:25:44.7275224Z         	* 2021-01-12 05:25:29.349720 I | embed: initial advertise peer URLs = https://10.1.0.4:2380
2021-01-12T05:25:44.7276206Z         	* 2021-01-12 05:25:29.349725 I | embed: initial cluster = 
2021-01-12T05:25:44.7277939Z         	* 2021-01-12 05:25:29.366998 I | etcdserver: restarting member ea713dbad49c9c1c in cluster ac34d98d3f2e481e at commit index 493
2021-01-12T05:25:44.7279047Z         	* raft2021/01/12 05:25:29 INFO: ea713dbad49c9c1c switched to configuration voters=()
2021-01-12T05:25:44.7279966Z         	* raft2021/01/12 05:25:29 INFO: ea713dbad49c9c1c became follower at term 2
2021-01-12T05:25:44.7280952Z         	* raft2021/01/12 05:25:29 INFO: newRaft ea713dbad49c9c1c [peers: [], term: 2, commit: 493, applied: 0, lastindex: 493, lastterm: 2]
2021-01-12T05:25:44.7282407Z         	* 2021-01-12 05:25:29.369694 W | auth: simple token is not cryptographically signed
2021-01-12T05:25:44.7283771Z         	* 2021-01-12 05:25:29.425311 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
2021-01-12T05:25:44.7284808Z         	* raft2021/01/12 05:25:29 INFO: ea713dbad49c9c1c switched to configuration voters=(16893351549883685916)
2021-01-12T05:25:44.7286284Z         	* 2021-01-12 05:25:29.425906 I | etcdserver/membership: added member ea713dbad49c9c1c [https://10.1.0.4:2380] to cluster ac34d98d3f2e481e
2021-01-12T05:25:44.7287827Z         	* 2021-01-12 05:25:29.425970 N | etcdserver/membership: set the initial cluster version to 3.4
2021-01-12T05:25:44.7288964Z         	* 2021-01-12 05:25:29.425999 I | etcdserver/api: enabled capabilities for version 3.4
2021-01-12T05:25:44.7290891Z         	* 2021-01-12 05:25:29.432111 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
2021-01-12T05:25:44.7292617Z         	* 2021-01-12 05:25:29.432261 I | embed: listening for metrics on http://127.0.0.1:2381
2021-01-12T05:25:44.7293620Z         	* 2021-01-12 05:25:29.432326 I | embed: listening for peers on 10.1.0.4:2380
2021-01-12T05:25:44.7294422Z         	* raft2021/01/12 05:25:30 INFO: ea713dbad49c9c1c is starting a new election at term 2
2021-01-12T05:25:44.7295305Z         	* raft2021/01/12 05:25:30 INFO: ea713dbad49c9c1c became candidate at term 3
2021-01-12T05:25:44.7296280Z         	* raft2021/01/12 05:25:30 INFO: ea713dbad49c9c1c received MsgVoteResp from ea713dbad49c9c1c at term 3
2021-01-12T05:25:44.7297255Z         	* raft2021/01/12 05:25:30 INFO: ea713dbad49c9c1c became leader at term 3
2021-01-12T05:25:44.7298201Z         	* raft2021/01/12 05:25:30 INFO: raft.node: ea713dbad49c9c1c elected leader ea713dbad49c9c1c at term 3
2021-01-12T05:25:44.7299732Z         	* 2021-01-12 05:25:30.873519 I | etcdserver: published {Name:fv-az183-750 ClientURLs:[https://10.1.0.4:2379]} to cluster ac34d98d3f2e481e
2021-01-12T05:25:44.7300970Z         	* 2021-01-12 05:25:30.873839 I | embed: ready to serve client requests
2021-01-12T05:25:44.7301907Z         	* 2021-01-12 05:25:30.875304 I | embed: serving client requests on 127.0.0.1:2379
2021-01-12T05:25:44.7302814Z         	* 2021-01-12 05:25:30.875506 I | embed: ready to serve client requests
2021-01-12T05:25:44.7303748Z         	* 2021-01-12 05:25:30.876739 I | embed: serving client requests on 10.1.0.4:2379
2021-01-12T05:25:44.7304271Z         	* 
2021-01-12T05:25:44.7304646Z         	* ==> kernel <==
2021-01-12T05:25:44.7305272Z         	*  05:25:43 up 3 min,  0 users,  load average: 2.22, 1.11, 0.44
2021-01-12T05:25:44.7306297Z         	* Linux fv-az183-750 5.4.0-1032-azure #33~18.04.1-Ubuntu SMP Tue Nov 17 11:40:52 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
2021-01-12T05:25:44.7307026Z         	* PRETTY_NAME="Ubuntu 18.04.5 LTS"
2021-01-12T05:25:44.7307464Z         	* 
2021-01-12T05:25:44.7308137Z         	* ==> kube-apiserver [12cc801d8bd4] <==
2021-01-12T05:25:44.7308884Z         	* I0112 05:25:34.506876       1 controller.go:86] Starting OpenAPI controller
2021-01-12T05:25:44.7309897Z         	* I0112 05:25:34.506900       1 naming_controller.go:291] Starting NamingConditionController
2021-01-12T05:25:44.7311056Z         	* I0112 05:25:34.506916       1 establishing_controller.go:76] Starting EstablishingController
2021-01-12T05:25:44.7312603Z         	* I0112 05:25:34.506927       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
2021-01-12T05:25:44.7314924Z         	* I0112 05:25:34.506937       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
2021-01-12T05:25:44.7316704Z         	* I0112 05:25:34.506948       1 crd_finalizer.go:266] Starting CRDFinalizer
2021-01-12T05:25:44.7317978Z         	* I0112 05:25:34.561092       1 crdregistration_controller.go:111] Starting crd-autoregister controller
2021-01-12T05:25:44.7319312Z         	* I0112 05:25:34.561260       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
2021-01-12T05:25:44.7320340Z         	* I0112 05:25:34.591952       1 cache.go:39] Caches are synced for autoregister controller
2021-01-12T05:25:44.7321299Z         	* I0112 05:25:34.595678       1 apf_controller.go:253] Running API Priority and Fairness config worker
2021-01-12T05:25:44.7322228Z         	* I0112 05:25:34.600236       1 shared_informer.go:247] Caches are synced for node_authorizer 
2021-01-12T05:25:44.7323422Z         	* I0112 05:25:34.601456       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
2021-01-12T05:25:44.7324836Z         	* I0112 05:25:34.610900       1 cache.go:39] Caches are synced for AvailableConditionController controller
2021-01-12T05:25:44.7326497Z         	* E0112 05:25:34.617771       1 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
2021-01-12T05:25:44.7328305Z         	* I0112 05:25:34.661367       1 shared_informer.go:247] Caches are synced for crd-autoregister 
2021-01-12T05:25:44.7329312Z         	* I0112 05:25:34.687674       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
2021-01-12T05:25:44.7330458Z         	* I0112 05:25:35.394795       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
2021-01-12T05:25:44.7331714Z         	* I0112 05:25:35.485160       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
2021-01-12T05:25:44.7333333Z         	* I0112 05:25:35.485200       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
2021-01-12T05:25:44.7334693Z         	* I0112 05:25:35.494651       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
2021-01-12T05:25:44.7335950Z         	* I0112 05:25:36.067278       1 controller.go:606] quota admission added evaluator for: serviceaccounts
2021-01-12T05:25:44.7337047Z         	* I0112 05:25:36.082382       1 controller.go:606] quota admission added evaluator for: deployments.apps
2021-01-12T05:25:44.7338126Z         	* I0112 05:25:36.116973       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
2021-01-12T05:25:44.7342397Z         	* I0112 05:25:36.136249       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
2021-01-12T05:25:44.7344375Z         	* I0112 05:25:36.149761       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
2021-01-12T05:25:44.7345427Z         	* 
2021-01-12T05:25:44.7346289Z         	* ==> kube-apiserver [d6580bff778d] <==
2021-01-12T05:25:44.7347357Z         	* I0112 05:25:41.596905       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
2021-01-12T05:25:44.7348888Z         	* I0112 05:25:41.596911       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
2021-01-12T05:25:44.7350176Z         	* I0112 05:25:41.596928       1 controller.go:83] Starting OpenAPI AggregationController
2021-01-12T05:25:44.7352039Z         	* I0112 05:25:41.623370       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
2021-01-12T05:25:44.7362639Z         	* I0112 05:25:41.623395       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
2021-01-12T05:25:44.7363926Z         	* I0112 05:25:41.574470       1 apf_controller.go:249] Starting API Priority and Fairness config controller
2021-01-12T05:25:44.7365296Z         	* I0112 05:25:41.623956       1 crdregistration_controller.go:111] Starting crd-autoregister controller
2021-01-12T05:25:44.7366636Z         	* I0112 05:25:41.623963       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
2021-01-12T05:25:44.7367894Z         	* I0112 05:25:41.648793       1 controller.go:86] Starting OpenAPI controller
2021-01-12T05:25:44.7368927Z         	* I0112 05:25:41.648954       1 naming_controller.go:291] Starting NamingConditionController
2021-01-12T05:25:44.7370104Z         	* I0112 05:25:41.648966       1 establishing_controller.go:76] Starting EstablishingController
2021-01-12T05:25:44.7371654Z         	* I0112 05:25:41.648978       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
2021-01-12T05:25:44.7373963Z         	* I0112 05:25:41.648990       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
2021-01-12T05:25:44.7375754Z         	* I0112 05:25:41.649003       1 crd_finalizer.go:266] Starting CRDFinalizer
2021-01-12T05:25:44.7377065Z         	* E0112 05:25:41.760300       1 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
2021-01-12T05:25:44.7378431Z         	* I0112 05:25:41.769065       1 shared_informer.go:247] Caches are synced for node_authorizer 
2021-01-12T05:25:44.7379457Z         	* I0112 05:25:41.773479       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
2021-01-12T05:25:44.7380711Z         	* I0112 05:25:41.774387       1 cache.go:39] Caches are synced for AvailableConditionController controller
2021-01-12T05:25:44.7381816Z         	* I0112 05:25:41.775024       1 cache.go:39] Caches are synced for autoregister controller
2021-01-12T05:25:44.7383037Z         	* I0112 05:25:41.796992       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
2021-01-12T05:25:44.7384531Z         	* I0112 05:25:41.826809       1 shared_informer.go:247] Caches are synced for crd-autoregister 
2021-01-12T05:25:44.7385524Z         	* I0112 05:25:41.826855       1 apf_controller.go:253] Running API Priority and Fairness config worker
2021-01-12T05:25:44.7387172Z         	* I0112 05:25:42.570719       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
2021-01-12T05:25:44.7388720Z         	* I0112 05:25:42.570839       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
2021-01-12T05:25:44.7390172Z         	* I0112 05:25:42.574590       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
2021-01-12T05:25:44.7391117Z         	* 
2021-01-12T05:25:44.7391969Z         	* ==> kube-controller-manager [8407cbea3f2b] <==
2021-01-12T05:25:44.7393040Z         	* Flag --port has been deprecated, see --secure-port instead.
2021-01-12T05:25:44.7394137Z         	* I0112 05:25:30.325798       1 serving.go:331] Generated self-signed cert in-memory
2021-01-12T05:25:44.7395012Z         	* I0112 05:25:31.100576       1 controllermanager.go:176] Version: v1.20.0
2021-01-12T05:25:44.7395822Z         	* I0112 05:25:31.102570       1 secure_serving.go:197] Serving securely on 127.0.0.1:10257
2021-01-12T05:25:44.7397522Z         	* I0112 05:25:31.102657       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
2021-01-12T05:25:44.7402702Z         	* I0112 05:25:31.102675       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
2021-01-12T05:25:44.7404680Z         	* I0112 05:25:31.102704       1 tlsconfig.go:240] Starting DynamicServingCertificateController
2021-01-12T05:25:44.7406119Z         	* I0112 05:25:37.071725       1 shared_informer.go:240] Waiting for caches to sync for tokens
2021-01-12T05:25:44.7408264Z         	* I0112 05:25:37.084125       1 node_ipam_controller.go:91] Sending events to api server.
2021-01-12T05:25:44.7410424Z         	* I0112 05:25:37.172580       1 shared_informer.go:247] Caches are synced for tokens 
2021-01-12T05:25:44.7415419Z         	* W0112 05:25:37.504580       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
2021-01-12T05:25:44.7419569Z         	* W0112 05:25:37.504619       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.ServiceAccount ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
2021-01-12T05:25:44.7422344Z         	* E0112 05:25:41.697708       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Secret: failed to list *v1.Secret: secrets is forbidden: User "system:kube-controller-manager" cannot list resource "secrets" in API group "" at the cluster scope
2021-01-12T05:25:44.7425553Z         	* E0112 05:25:41.698188       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ServiceAccount: failed to list *v1.ServiceAccount: serviceaccounts is forbidden: User "system:kube-controller-manager" cannot list resource "serviceaccounts" in API group "" at the cluster scope
2021-01-12T05:25:44.7427861Z         	* 
2021-01-12T05:25:44.7429753Z         	* ==> kube-controller-manager [89e2f58a9be6] <==
2021-01-12T05:25:44.7432054Z         	* I0112 05:25:19.294319       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
2021-01-12T05:25:44.7433615Z         	* I0112 05:25:19.322353       1 range_allocator.go:373] Set node fv-az183-750 PodCIDR to [10.244.0.0/24]
2021-01-12T05:25:44.7434494Z         	* I0112 05:25:19.400144       1 shared_informer.go:247] Caches are synced for disruption 
2021-01-12T05:25:44.7435354Z         	* I0112 05:25:19.400166       1 disruption.go:339] Sending events to api server.
2021-01-12T05:25:44.7436196Z         	* I0112 05:25:19.401479       1 shared_informer.go:247] Caches are synced for deployment 
2021-01-12T05:25:44.7438033Z         	* I0112 05:25:19.407458       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
2021-01-12T05:25:44.7440544Z         	* I0112 05:25:19.414777       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-czs6s"
2021-01-12T05:25:44.7442311Z         	* I0112 05:25:19.421263       1 request.go:655] Throttling request took 1.050127056s, request: GET:https://10.1.0.4:8441/apis/batch/v1?timeout=32s
2021-01-12T05:25:44.7443651Z         	* I0112 05:25:19.421490       1 shared_informer.go:247] Caches are synced for ReplicationController 
2021-01-12T05:25:44.7445717Z         	* I0112 05:25:19.435472       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-jjth9"
2021-01-12T05:25:44.7447507Z         	* I0112 05:25:19.467429       1 shared_informer.go:247] Caches are synced for taint 
2021-01-12T05:25:44.7448451Z         	* I0112 05:25:19.467565       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
2021-01-12T05:25:44.7450262Z         	* W0112 05:25:19.467654       1 node_lifecycle_controller.go:1044] Missing timestamp for Node fv-az183-750. Assuming now as a timestamp.
2021-01-12T05:25:44.7471311Z         	* I0112 05:25:19.467696       1 node_lifecycle_controller.go:1245] Controller detected that zone  is now in state Normal.
2021-01-12T05:25:44.7473907Z         	* I0112 05:25:19.468017       1 event.go:291] "Event occurred" object="fv-az183-750" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node fv-az183-750 event: Registered Node fv-az183-750 in Controller"
2021-01-12T05:25:44.7475680Z         	* I0112 05:25:19.468039       1 taint_manager.go:187] Starting NoExecuteTaintManager
2021-01-12T05:25:44.7476951Z         	* I0112 05:25:19.472941       1 shared_informer.go:247] Caches are synced for resource quota 
2021-01-12T05:25:44.7478524Z         	* I0112 05:25:19.622563       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
2021-01-12T05:25:44.7479462Z         	* I0112 05:25:19.901010       1 shared_informer.go:247] Caches are synced for garbage collector 
2021-01-12T05:25:44.7483908Z         	* I0112 05:25:19.901059       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
2021-01-12T05:25:44.7487296Z         	* I0112 05:25:19.922710       1 shared_informer.go:247] Caches are synced for garbage collector 
2021-01-12T05:25:44.7489580Z         	* I0112 05:25:20.090140       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
2021-01-12T05:25:44.7493542Z         	* I0112 05:25:20.118035       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-jjth9"
2021-01-12T05:25:44.7495191Z         	* I0112 05:25:20.223059       1 shared_informer.go:240] Waiting for caches to sync for resource quota
2021-01-12T05:25:44.7496553Z         	* I0112 05:25:20.223093       1 shared_informer.go:247] Caches are synced for resource quota 
2021-01-12T05:25:44.7497978Z         	* 
2021-01-12T05:25:44.7499466Z         	* ==> kube-proxy [72c887a753f7] <==
2021-01-12T05:25:44.7500749Z         	* I0112 05:25:34.649480       1 node.go:172] Successfully retrieved node IP: 10.1.0.4
2021-01-12T05:25:44.7501992Z         	* I0112 05:25:34.657798       1 server_others.go:142] kube-proxy node IP is an IPv4 address (10.1.0.4), assume IPv4 operation
2021-01-12T05:25:44.7503002Z         	* W0112 05:25:34.703560       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
2021-01-12T05:25:44.7504860Z         	* I0112 05:25:34.703640       1 server_others.go:185] Using iptables Proxier.
2021-01-12T05:25:44.7505811Z         	* I0112 05:25:34.703824       1 server.go:650] Version: v1.20.0
2021-01-12T05:25:44.7506971Z         	* I0112 05:25:34.704097       1 conntrack.go:52] Setting nf_conntrack_max to 131072
2021-01-12T05:25:44.7507777Z         	* I0112 05:25:34.705616       1 config.go:315] Starting service config controller
2021-01-12T05:25:44.7508645Z         	* I0112 05:25:34.705629       1 shared_informer.go:240] Waiting for caches to sync for service config
2021-01-12T05:25:44.7509719Z         	* I0112 05:25:34.705646       1 config.go:224] Starting endpoint slice config controller
2021-01-12T05:25:44.7511524Z         	* I0112 05:25:34.705650       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
2021-01-12T05:25:44.7512506Z         	* I0112 05:25:34.805726       1 shared_informer.go:247] Caches are synced for endpoint slice config 
2021-01-12T05:25:44.7513409Z         	* I0112 05:25:34.805774       1 shared_informer.go:247] Caches are synced for service config 
2021-01-12T05:25:44.7515923Z         	* 
2021-01-12T05:25:44.7517176Z         	* ==> kube-proxy [cabf497080b1] <==
2021-01-12T05:25:44.7519371Z         	* I0112 05:25:20.284298       1 node.go:172] Successfully retrieved node IP: 10.1.0.4
2021-01-12T05:25:44.7523711Z         	* I0112 05:25:20.284354       1 server_others.go:142] kube-proxy node IP is an IPv4 address (10.1.0.4), assume IPv4 operation
2021-01-12T05:25:44.7524758Z         	* W0112 05:25:20.316182       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
2021-01-12T05:25:44.7529645Z         	* I0112 05:25:20.316256       1 server_others.go:185] Using iptables Proxier.
2021-01-12T05:25:44.7532301Z         	* I0112 05:25:20.316462       1 server.go:650] Version: v1.20.0
2021-01-12T05:25:44.7534332Z         	* I0112 05:25:20.316743       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
2021-01-12T05:25:44.7536092Z         	* I0112 05:25:20.316770       1 conntrack.go:52] Setting nf_conntrack_max to 131072
2021-01-12T05:25:44.7538075Z         	* I0112 05:25:20.320927       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
2021-01-12T05:25:44.7539409Z         	* I0112 05:25:20.321010       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
2021-01-12T05:25:44.7540354Z         	* I0112 05:25:20.321829       1 config.go:315] Starting service config controller
2021-01-12T05:25:44.7541212Z         	* I0112 05:25:20.321847       1 shared_informer.go:240] Waiting for caches to sync for service config
2021-01-12T05:25:44.7542116Z         	* I0112 05:25:20.321872       1 config.go:224] Starting endpoint slice config controller
2021-01-12T05:25:44.7543026Z         	* I0112 05:25:20.321881       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
2021-01-12T05:25:44.7543994Z         	* I0112 05:25:20.422367       1 shared_informer.go:247] Caches are synced for endpoint slice config 
2021-01-12T05:25:44.7544883Z         	* I0112 05:25:20.422413       1 shared_informer.go:247] Caches are synced for service config 
2021-01-12T05:25:44.7545498Z         	* 
2021-01-12T05:25:44.7546156Z         	* ==> kube-scheduler [391959bafdb1] <==
2021-01-12T05:25:44.7548429Z         	* W0112 05:25:00.754977       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
2021-01-12T05:25:44.7550720Z         	* W0112 05:25:00.755128       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
2021-01-12T05:25:44.7552890Z         	* W0112 05:25:00.755211       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
2021-01-12T05:25:44.7554403Z         	* I0112 05:25:00.822297       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
2021-01-12T05:25:44.7556438Z         	* I0112 05:25:00.823078       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2021-01-12T05:25:44.7559221Z         	* I0112 05:25:00.823267       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2021-01-12T05:25:44.7561101Z         	* I0112 05:25:00.823358       1 tlsconfig.go:240] Starting DynamicServingCertificateController
2021-01-12T05:25:44.7563269Z         	* E0112 05:25:00.833034       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
2021-01-12T05:25:44.7566069Z         	* E0112 05:25:00.833257       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
2021-01-12T05:25:44.7569581Z         	* E0112 05:25:00.833460       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
2021-01-12T05:25:44.7572590Z         	* E0112 05:25:00.835854       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
2021-01-12T05:25:44.7575496Z         	* E0112 05:25:00.836070       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
2021-01-12T05:25:44.7578779Z         	* E0112 05:25:00.836241       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
2021-01-12T05:25:44.7590632Z         	* E0112 05:25:00.836370       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
2021-01-12T05:25:44.7594174Z         	* E0112 05:25:00.836529       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
2021-01-12T05:25:44.7597467Z         	* E0112 05:25:00.836705       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
2021-01-12T05:25:44.7600905Z         	* E0112 05:25:00.836879       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
2021-01-12T05:25:44.7604535Z         	* E0112 05:25:00.837077       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
2021-01-12T05:25:44.7610764Z         	* E0112 05:25:00.841901       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
2021-01-12T05:25:44.7621670Z         	* E0112 05:25:01.762307       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
2021-01-12T05:25:44.7632738Z         	* E0112 05:25:01.776601       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
2021-01-12T05:25:44.7642017Z         	* E0112 05:25:01.825620       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
2021-01-12T05:25:44.7649381Z         	* E0112 05:25:01.842721       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
2021-01-12T05:25:44.7654021Z         	* E0112 05:25:01.898494       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
2021-01-12T05:25:44.7657528Z         	* I0112 05:25:03.623458       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
2021-01-12T05:25:44.7658954Z         	* 
2021-01-12T05:25:44.7659766Z         	* ==> kube-scheduler [b677276b5107] <==
2021-01-12T05:25:44.7660628Z         	* I0112 05:25:34.663935       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
2021-01-12T05:25:44.7662534Z         	* I0112 05:25:34.664569       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
2021-01-12T05:25:44.7665730Z         	* I0112 05:25:34.664587       1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
2021-01-12T05:25:44.7668839Z         	* I0112 05:25:34.664661       1 tlsconfig.go:240] Starting DynamicServingCertificateController
2021-01-12T05:25:44.7672512Z         	* I0112 05:25:34.672938       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2021-01-12T05:25:44.7676968Z         	* I0112 05:25:34.672958       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2021-01-12T05:25:44.7681862Z         	* I0112 05:25:34.672978       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2021-01-12T05:25:44.7687490Z         	* I0112 05:25:34.672986       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
2021-01-12T05:25:44.7689525Z         	* I0112 05:25:34.764728       1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController 
2021-01-12T05:25:44.7691866Z         	* I0112 05:25:34.773119       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
2021-01-12T05:25:44.7694898Z         	* I0112 05:25:34.773124       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
2021-01-12T05:25:44.7697161Z         	* E0112 05:25:41.698676       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: unknown (get configmaps)
2021-01-12T05:25:44.7699721Z         	* E0112 05:25:41.698902       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: unknown (get configmaps)
2021-01-12T05:25:44.7701619Z         	* E0112 05:25:41.699016       1 reflector.go:138] k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172: Failed to watch *v1.ConfigMap: unknown (get configmaps)
2021-01-12T05:25:44.7703727Z         	* E0112 05:25:41.756958       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)
2021-01-12T05:25:44.7706135Z         	* E0112 05:25:41.757212       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)
2021-01-12T05:25:44.7708429Z         	* E0112 05:25:41.757362       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)
2021-01-12T05:25:44.7710722Z         	* E0112 05:25:41.757466       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)
2021-01-12T05:25:44.7712941Z         	* E0112 05:25:41.757620       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)
2021-01-12T05:25:44.7715164Z         	* E0112 05:25:41.757867       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)
2021-01-12T05:25:44.7716857Z         	* E0112 05:25:41.758631       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: unknown (get pods)
2021-01-12T05:25:44.7718327Z         	* E0112 05:25:41.761624       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: unknown (get nodes)
2021-01-12T05:25:44.7720046Z         	* E0112 05:25:41.764381       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)
2021-01-12T05:25:44.7721770Z         	* E0112 05:25:41.764736       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: unknown (get services)
2021-01-12T05:25:44.7723766Z         	* E0112 05:25:41.764945       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)
2021-01-12T05:25:44.7724784Z         	* 
2021-01-12T05:25:44.7725187Z         	* ==> kubelet <==
2021-01-12T05:25:44.7726321Z         	* Journal file /var/log/journal/71212933024b41b3962b1df8d52a7d31/user-1000.journal is truncated, ignoring file.
2021-01-12T05:25:44.7727708Z         	* -- Logs begin at Fri 2020-12-18 20:44:42 UTC, end at Tue 2021-01-12 05:25:44 UTC. --
2021-01-12T05:25:44.7731505Z         	* Jan 12 05:25:36 fv-az183-750 kubelet[10301]: I0112 05:25:36.193454   10301 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/65286fb339aec5ec7d2fb601f0d82340-usr-local-share-ca-certificates") pod "kube-apiserver-fv-az183-750" (UID: "65286fb339aec5ec7d2fb601f0d82340")
2021-01-12T05:25:44.7737469Z         	* Jan 12 05:25:36 fv-az183-750 kubelet[10301]: I0112 05:25:36.193470   10301 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/a3e7be694ef7cf952503c5d331abc0ac-flexvolume-dir") pod "kube-controller-manager-fv-az183-750" (UID: "a3e7be694ef7cf952503c5d331abc0ac")
2021-01-12T05:25:44.7743093Z         	* Jan 12 05:25:36 fv-az183-750 kubelet[10301]: I0112 05:25:36.193491   10301 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/a3e7be694ef7cf952503c5d331abc0ac-k8s-certs") pod "kube-controller-manager-fv-az183-750" (UID: "a3e7be694ef7cf952503c5d331abc0ac")
2021-01-12T05:25:44.7748296Z         	* Jan 12 05:25:36 fv-az183-750 kubelet[10301]: I0112 05:25:36.193535   10301 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/df6ac209-ca79-4fd7-a031-7bc6daba4cb8-lib-modules") pod "kube-proxy-8dnk2" (UID: "df6ac209-ca79-4fd7-a031-7bc6daba4cb8")
2021-01-12T05:25:44.7752808Z         	* Jan 12 05:25:36 fv-az183-750 kubelet[10301]: I0112 05:25:36.193602   10301 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/90323557-5558-4561-9177-9f948dc07d69-tmp") pod "storage-provisioner" (UID: "90323557-5558-4561-9177-9f948dc07d69")
2021-01-12T05:25:44.7757055Z         	* Jan 12 05:25:36 fv-az183-750 kubelet[10301]: I0112 05:25:36.193617   10301 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/118f2a7de72604883658c3510cd0e586-etcd-data") pod "etcd-fv-az183-750" (UID: "118f2a7de72604883658c3510cd0e586")
2021-01-12T05:25:44.7761703Z         	* Jan 12 05:25:36 fv-az183-750 kubelet[10301]: I0112 05:25:36.193631   10301 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/65286fb339aec5ec7d2fb601f0d82340-ca-certs") pod "kube-apiserver-fv-az183-750" (UID: "65286fb339aec5ec7d2fb601f0d82340")
2021-01-12T05:25:44.7766962Z         	* Jan 12 05:25:36 fv-az183-750 kubelet[10301]: I0112 05:25:36.193646   10301 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/65286fb339aec5ec7d2fb601f0d82340-etc-ca-certificates") pod "kube-apiserver-fv-az183-750" (UID: "65286fb339aec5ec7d2fb601f0d82340")
2021-01-12T05:25:44.7772628Z         	* Jan 12 05:25:36 fv-az183-750 kubelet[10301]: I0112 05:25:36.193695   10301 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/65286fb339aec5ec7d2fb601f0d82340-usr-share-ca-certificates") pod "kube-apiserver-fv-az183-750" (UID: "65286fb339aec5ec7d2fb601f0d82340")
2021-01-12T05:25:44.7778220Z         	* Jan 12 05:25:36 fv-az183-750 kubelet[10301]: I0112 05:25:36.193744   10301 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/a3e7be694ef7cf952503c5d331abc0ac-ca-certs") pod "kube-controller-manager-fv-az183-750" (UID: "a3e7be694ef7cf952503c5d331abc0ac")
2021-01-12T05:25:44.7781778Z         	* Jan 12 05:25:36 fv-az183-750 kubelet[10301]: I0112 05:25:36.193756   10301 reconciler.go:157] Reconciler: start to sync state
2021-01-12T05:25:44.7783762Z         	* Jan 12 05:25:36 fv-az183-750 kubelet[10301]: I0112 05:25:36.449005   10301 scope.go:95] [topologymanager] RemoveContainer - Container ID: 0a5da4299472b48a1af0ffe770d9640707d99d8626982b7654586338f31af0c6
2021-01-12T05:25:44.7786314Z         	* Jan 12 05:25:36 fv-az183-750 kubelet[10301]: W0112 05:25:36.610689   10301 kubelet.go:1622] Deleted mirror pod "kube-apiserver-fv-az183-750_kube-system(b7fd540b-d413-47c5-9b1a-4363987b30eb)" because it is outdated
2021-01-12T05:25:44.7788814Z         	* Jan 12 05:25:37 fv-az183-750 kubelet[10301]: W0112 05:25:37.067652   10301 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-74ff55c5b-czs6s through plugin: invalid network status for
2021-01-12T05:25:44.7791607Z         	* Jan 12 05:25:37 fv-az183-750 kubelet[10301]: W0112 05:25:37.087150   10301 pod_container_deletor.go:79] Container "01dc39edd2b7f60b4c4198c1b8ab5ab05b3c2184f26dfc1ee7d00a9c66e56c72" not found in pod's containers
2021-01-12T05:25:44.7802900Z         	* Jan 12 05:25:37 fv-az183-750 kubelet[10301]: E0112 05:25:37.503417   10301 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-fv-az183-750.165964c4a3b115bf", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-fv-az183-750", UID:"65286fb339aec5ec7d2fb601f0d82340", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Started", Message:"Started container kube-apiserver", Source:v1.EventSource{Component:"kubelet", Host:"fv-az183-750"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbff76a945cde2bbf, ext:2609389098, loc:(*time.Location)(0x70c7020)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbff76a945cde2bbf, ext:2609389098, loc:(*time.Location)(0x70c7020)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events": read tcp 10.1.0.4:49970->10.1.0.4:8441: read: connection reset by peer'(may retry after sleeping)
2021-01-12T05:25:44.7812777Z         	* Jan 12 05:25:38 fv-az183-750 kubelet[10301]: W0112 05:25:38.095870   10301 pod_container_deletor.go:79] Container "ec33feccee719798c5ee071d8e73405efb17468b6a11b02c6414d0b3633f1986" not found in pod's containers
2021-01-12T05:25:44.7815652Z         	* Jan 12 05:25:38 fv-az183-750 kubelet[10301]: I0112 05:25:38.096239   10301 scope.go:95] [topologymanager] RemoveContainer - Container ID: ccfc079c38466c57c40fea4364e17a1a4422f9547361fe2b5adcfa646c6bd71a
2021-01-12T05:25:44.7818339Z         	* Jan 12 05:25:38 fv-az183-750 kubelet[10301]: W0112 05:25:38.116189   10301 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-74ff55c5b-czs6s through plugin: invalid network status for
2021-01-12T05:25:44.7820621Z         	* Jan 12 05:25:41 fv-az183-750 kubelet[10301]: E0112 05:25:41.717801   10301 reflector.go:138] object-"kube-system"/"kube-proxy-token-prps9": Failed to watch *v1.Secret: unknown (get secrets)
2021-01-12T05:25:44.7822826Z         	* Jan 12 05:25:41 fv-az183-750 kubelet[10301]: E0112 05:25:41.718494   10301 reflector.go:138] object-"kube-system"/"storage-provisioner-token-mmxd4": Failed to watch *v1.Secret: unknown (get secrets)
2021-01-12T05:25:44.7824975Z         	* Jan 12 05:25:41 fv-az183-750 kubelet[10301]: E0112 05:25:41.718904   10301 reflector.go:138] object-"kube-system"/"coredns-token-nqxvn": Failed to watch *v1.Secret: unknown (get secrets)
2021-01-12T05:25:44.7826887Z         	* Jan 12 05:25:41 fv-az183-750 kubelet[10301]: E0112 05:25:41.719833   10301 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps)
2021-01-12T05:25:44.7828711Z         	* Jan 12 05:25:41 fv-az183-750 kubelet[10301]: E0112 05:25:41.720132   10301 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps)
2021-01-12T05:25:44.7831192Z         	* Jan 12 05:25:44 fv-az183-750 kubelet[10301]: I0112 05:25:44.018538   10301 request.go:655] Throttling request took 1.039531292s, request: GET:https://control-plane.minikube.internal:8441/apis/storage.k8s.io/v1/csidrivers?resourceVersion=477
2021-01-12T05:25:44.7832779Z         	* 
2021-01-12T05:25:44.7833522Z         	* ==> storage-provisioner [0a5da4299472] <==
2021-01-12T05:25:44.7834469Z         	* I0112 05:25:28.977760       1 storage_provisioner.go:115] Initializing the minikube storage provisioner...
2021-01-12T05:25:44.7835797Z         	* F0112 05:25:28.980154       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
2021-01-12T05:25:44.7836703Z         	* 
2021-01-12T05:25:44.7837427Z         	* ==> storage-provisioner [60d2752fe8f4] <==
2021-01-12T05:25:44.7838376Z         	* I0112 05:25:36.898093       1 storage_provisioner.go:115] Initializing the minikube storage provisioner...
2021-01-12T05:25:44.7839559Z         	* I0112 05:25:36.919057       1 storage_provisioner.go:140] Storage provisioner initialized, now starting service!
2021-01-12T05:25:44.7841188Z         	* I0112 05:25:36.919091       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
2021-01-12T05:25:44.7842124Z         
2021-01-12T05:25:44.7842633Z         -- /stdout --
2021-01-12T05:25:44.7843827Z     helpers_test.go:248: (dbg) Run:  ./minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
2021-01-12T05:25:44.8677121Z     helpers_test.go:255: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
2021-01-12T05:25:44.9456989Z     helpers_test.go:261: non-running pods: 
2021-01-12T05:25:44.9459982Z     helpers_test.go:263: ======> post-mortem[TestFunctional/parallel/ComponentHealth]: describe non-running pods <======
2021-01-12T05:25:44.9461415Z     helpers_test.go:266: (dbg) Run:  kubectl --context minikube describe pod 
2021-01-12T05:25:45.0017385Z     helpers_test.go:266: (dbg) Non-zero exit: kubectl --context minikube describe pod : exit status 1 (56.007184ms)
2021-01-12T05:25:45.0018823Z         
2021-01-12T05:25:45.0019812Z         ** stderr ** 
2021-01-12T05:25:45.0020409Z         	error: resource name may not be empty
2021-01-12T05:25:45.0020904Z         
2021-01-12T05:25:45.0021341Z         ** /stderr **
2021-01-12T05:25:45.0022543Z     helpers_test.go:268: kubectl --context minikube describe pod : exit status 1
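The ComponentHealth failure above is a restart race: `kubectl` runs while the apiserver on 10.1.0.4:8441 is still coming back up, so the connection is refused. A minimal, self-contained sketch of how a test could poll the endpoint before querying (the address handling and timings here are illustrative, not minikube's actual helpers):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForAPIServer polls a TCP endpoint until it accepts connections or the
// deadline passes. A hypothetical hardening step, not code from the test suite.
func waitForAPIServer(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not reachable within %s", addr, timeout)
}

func main() {
	// Dial a listener we start ourselves so the sketch is runnable anywhere.
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	defer ln.Close()
	if err := waitForAPIServer(ln.Addr().String(), 2*time.Second); err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("apiserver reachable")
}
```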

NodeLabels:


2021-01-12T09:00:30.1906980Z === CONT  TestFunctional/parallel/NodeLabels
2021-01-12T09:00:30.1961180Z     functional_test.go:149: (dbg) Run:  kubectl --context functional-20210112085556-1482 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
2021-01-12T09:00:31.3766830Z     functional_test.go:149: (dbg) Non-zero exit: kubectl --context functional-20210112085556-1482 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (1.20762357s)
2021-01-12T09:00:31.3778330Z         
2021-01-12T09:00:31.3783630Z         -- stdout --
2021-01-12T09:00:31.3944230Z         	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
2021-01-12T09:00:31.4005150Z         		template was:
2021-01-12T09:00:31.4210980Z         			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
2021-01-12T09:00:31.4212040Z         		raw data was:
2021-01-12T09:00:31.4287660Z         			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":"","selfLink":""}}
2021-01-12T09:00:31.4375910Z         		object given to template engine was:
2021-01-12T09:00:31.4410480Z         			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion: selfLink:]]
2021-01-12T09:00:31.4412520Z         	
2021-01-12T09:00:31.4414230Z         
2021-01-12T09:00:31.4516400Z         -- /stdout --
2021-01-12T09:00:31.4523020Z         ** stderr ** 
2021-01-12T09:00:31.4563670Z         	The connection to the server 192.168.99.100:8441 was refused - did you specify the right host or port?
2021-01-12T09:00:31.4572110Z         	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range
2021-01-12T09:00:31.4612610Z         
2021-01-12T09:00:31.4725050Z         ** /stderr **
2021-01-12T09:00:31.4773650Z     functional_test.go:151: failed to 'kubectl get nodes' with args "kubectl --context functional-20210112085556-1482 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
2021-01-12T09:00:31.4874500Z     functional_test.go:156: expected to have label "minikube.k8s.io/commit" in node labels but got : 
2021-01-12T09:00:31.4925740Z         -- stdout --
2021-01-12T09:00:31.4932270Z         	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
2021-01-12T09:00:31.4934230Z         		template was:
2021-01-12T09:00:31.4938420Z         			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
2021-01-12T09:00:31.4943640Z         		raw data was:
2021-01-12T09:00:31.4952830Z         			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":"","selfLink":""}}
2021-01-12T09:00:31.4958890Z         		object given to template engine was:
2021-01-12T09:00:31.4963060Z         			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion: selfLink:]]
2021-01-12T09:00:31.4969070Z         	
2021-01-12T09:00:31.4972510Z         
2021-01-12T09:00:31.4977840Z         -- /stdout --
2021-01-12T09:00:31.4978960Z         ** stderr ** 
2021-01-12T09:00:31.4986320Z         	The connection to the server 192.168.99.100:8441 was refused - did you specify the right host or port?
2021-01-12T09:00:31.4990600Z         	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range
2021-01-12T09:00:31.4997590Z         
2021-01-12T09:00:31.5001490Z         ** /stderr **
2021-01-12T09:00:31.5004910Z     functional_test.go:156: expected to have label "minikube.k8s.io/version" in node labels but got : 
2021-01-12T09:00:31.5009450Z         -- stdout --
2021-01-12T09:00:31.5014320Z         	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
2021-01-12T09:00:31.5019640Z         		template was:
2021-01-12T09:00:31.5024160Z         			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
2021-01-12T09:00:31.5028070Z         		raw data was:
2021-01-12T09:00:31.5032000Z         			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":"","selfLink":""}}
2021-01-12T09:00:31.5036880Z         		object given to template engine was:
2021-01-12T09:00:31.5042210Z         			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion: selfLink:]]
2021-01-12T09:00:31.5046590Z         	
2021-01-12T09:00:31.5050210Z         
2021-01-12T09:00:31.5054740Z         -- /stdout --
2021-01-12T09:00:31.5058920Z         ** stderr ** 
2021-01-12T09:00:31.5063410Z         	The connection to the server 192.168.99.100:8441 was refused - did you specify the right host or port?
2021-01-12T09:00:31.5069710Z         	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range
2021-01-12T09:00:31.5074590Z         
2021-01-12T09:00:31.5078320Z         ** /stderr **
2021-01-12T09:00:31.5083420Z     functional_test.go:156: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
2021-01-12T09:00:31.5087910Z         -- stdout --
2021-01-12T09:00:31.5093110Z         	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
2021-01-12T09:00:31.5097410Z         		template was:
2021-01-12T09:00:31.5101480Z         			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
2021-01-12T09:00:31.5105320Z         		raw data was:
2021-01-12T09:00:31.5109040Z         			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":"","selfLink":""}}
2021-01-12T09:00:31.5113060Z         		object given to template engine was:
2021-01-12T09:00:31.5116750Z         			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion: selfLink:]]
2021-01-12T09:00:31.5122780Z         	
2021-01-12T09:00:31.5126300Z         
2021-01-12T09:00:31.5130960Z         -- /stdout --
2021-01-12T09:00:31.5135730Z         ** stderr ** 
2021-01-12T09:00:31.5141260Z         	The connection to the server 192.168.99.100:8441 was refused - did you specify the right host or port?
2021-01-12T09:00:31.5146110Z         	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range
2021-01-12T09:00:31.5150460Z         
2021-01-12T09:00:31.5154000Z         ** /stderr **
2021-01-12T09:00:31.5158400Z     functional_test.go:156: expected to have label "minikube.k8s.io/name" in node labels but got : 
2021-01-12T09:00:31.5163070Z         -- stdout --
2021-01-12T09:00:31.5168000Z         	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
2021-01-12T09:00:31.5172290Z         		template was:
2021-01-12T09:00:31.5176520Z         			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
2021-01-12T09:00:31.5180290Z         		raw data was:
2021-01-12T09:00:31.5184200Z         			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":"","selfLink":""}}
2021-01-12T09:00:31.5188180Z         		object given to template engine was:
2021-01-12T09:00:31.5194080Z         			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion: selfLink:]]
2021-01-12T09:00:31.5198030Z         	
2021-01-12T09:00:31.5201360Z         
2021-01-12T09:00:31.5205570Z         -- /stdout --
2021-01-12T09:00:31.5209020Z         ** stderr ** 
2021-01-12T09:00:31.5213220Z         	The connection to the server 192.168.99.100:8441 was refused - did you specify the right host or port?
2021-01-12T09:00:31.5218710Z         	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range
2021-01-12T09:00:31.5225020Z         
2021-01-12T09:00:31.5228540Z         ** /stderr **
2021-01-12T09:00:31.5233370Z     helpers_test.go:216: -----------------------post-mortem--------------------------------
2021-01-12T09:00:31.5237110Z     helpers_test.go:233: (dbg) Run:  ./minikube-darwin-amd64 status --format={{.Host}} -p functional-20210112085556-1482 -n functional-20210112085556-1482
2021-01-12T09:00:32.6568070Z     helpers_test.go:233: (dbg) Non-zero exit: ./minikube-darwin-amd64 status --format={{.Host}} -p functional-20210112085556-1482 -n functional-20210112085556-1482: exit status 2 (1.280669041s)
2021-01-12T09:00:32.6666800Z         
2021-01-12T09:00:32.6769350Z         -- stdout --
2021-01-12T09:00:32.6870910Z         	Running
2021-01-12T09:00:32.6956350Z         
2021-01-12T09:00:32.7058980Z         -- /stdout --
2021-01-12T09:00:32.7160630Z     helpers_test.go:233: status error: exit status 2 (may be ok)
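The NodeLabels failure compounds the same connection refusal: with the apiserver down, `kubectl` hands the go-template an empty `items` list, and `index .items 0` fails with the out-of-range error seen in the output above. A small sketch reproducing both behaviors with Go's `text/template` against the exact raw data from the log (the guarded variant is a hypothetical hardening, not the test's actual template):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"text/template"
)

func main() {
	// The raw data the apiserver returned in the failing run: an empty List.
	raw := `{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":"","selfLink":""}}`
	var data map[string]interface{}
	if err := json.Unmarshal([]byte(raw), &data); err != nil {
		panic(err)
	}

	// The template the test uses: indexes item 0 unconditionally.
	unguarded := template.Must(template.New("output").Parse(
		`{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`))
	var buf bytes.Buffer
	err := unguarded.Execute(&buf, data) // errors: slice index out of range
	fmt.Println("unguarded template errored:", err != nil)

	// A guarded variant skips the index when the list is empty.
	guarded := template.Must(template.New("output").Parse(
		`{{if .items}}{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}{{else}}no nodes{{end}}`))
	buf.Reset()
	if err := guarded.Execute(&buf, data); err != nil {
		panic(err)
	}
	fmt.Println(buf.String())
}
```

The guard only cleans up the error message; the root cause is still that the node list is empty while the apiserver restarts.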
@k8s-ci-robot k8s-ci-robot added kind/failing-test Categorizes issue or PR as related to a consistently or frequently failing test. kind/flake Categorizes issue or PR as related to a flaky test. labels Jan 12, 2021
@lingsamuel lingsamuel changed the title Flaky TestFunctional/parallel/ComponentHealth: connection refused Flaky TestFunctional/parallel/ComponentHealth: etcd terminated Jan 12, 2021
@lingsamuel lingsamuel changed the title Flaky TestFunctional/parallel/ComponentHealth: etcd terminated Flaky TestFunctional/parallel/ComponentHealth: connection refused Jan 12, 2021
@lingsamuel lingsamuel changed the title Flaky TestFunctional/parallel/ComponentHealth: connection refused Flaky TestFunctional/parallel: NodeLabels, ComponentHealth: connection refused Jan 12, 2021
@priyawadhwa priyawadhwa added help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. labels Jan 25, 2021
@priyawadhwa

Hey @lingsamuel, thank you for opening this issue; this is definitely a very flaky test. Would you be interested in trying to fix it? If you, or anyone else, want to take a look, please feel free to comment /assign on this issue.

@lingsamuel
Contributor Author

I am afraid I don't have enough time to resolve this.

@spowelljr spowelljr added priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. and removed priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. labels Mar 31, 2021
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 29, 2021
@sharifelgamal sharifelgamal removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 14, 2021
@sharifelgamal
Collaborator

I still see this flake every so often, definitely would be nice to fix.

@sharifelgamal sharifelgamal added priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. and removed priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. labels Jul 14, 2021
@sharifelgamal sharifelgamal added this to the 1.24.0-candidate milestone Jul 14, 2021
@spowelljr spowelljr added priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. and removed priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. labels Sep 1, 2021
@spowelljr spowelljr modified the milestones: 1.24.0, 1.25.0-candidate Nov 5, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 26, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 26, 2022
@spowelljr spowelljr removed this from the 1.26.0 milestone Jun 24, 2022
@spowelljr spowelljr added this to the 1.27.0-candidate milestone Jun 24, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to the triage bot's /close command above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.


7 participants