init container is unable to resolve pods #5598

Closed
e-dard opened this issue Oct 11, 2019 · 9 comments
Labels
area/dns DNS issues co/kvm2-driver KVM2 driver related issues kind/bug Categorizes issue or PR as related to a bug. kind/support Categorizes issue or PR as a support question. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments


e-dard commented Oct 11, 2019

Hi, I'm having DNS problems from within init containers when using minikube on Arch Linux with the kvm2 VM driver.

The issue is as follows: an initContainer that checks whether another pod is up and listening on a port is unable to connect to it, even though that pod is ready and waiting. If I run the exact same spec on other setups (I have tried macOS/minikube/hyperkit, and a colleague tried arch/minikube/virtualbox) it works just fine. This seems to be an issue specific to either my Arch box or the kvm2 driver; I assume the latter.

Here is how to reproduce the issue:

Versions:

local/docker-machine-driver-kvm2 1.3.1-1
local/minikube-bin 1.4.0-1
local/kubectl-bin 1.16.1-1

docker-machine  0.16.2-1
libvirt         5.6.0-1
qemu-headless   4.1.0-2
ebtables        2.0.10_4-7
dnsmasq         2.80-4

Steps:

$ minikube start --vm-driver kvm2 --cpus 22 --disk-size 20000m --memory 20g --v=7 --logtostderr
...
...
$ kubectl apply -f test.yaml

Here are the contents of test.yaml: test.yaml.txt

The test is pretty simple:

  1. bring up etcd
  2. bring up a pod with an init container. The init container waits for etcd to be ready as follows: until nc -z -w 1 etcd-0.etcd 2379; do echo waiting for etcd-0.etcd; sleep 2; done;
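
For reference, a minimal sketch of what such a manifest might look like (my own illustration of the setup described above, not the attached test.yaml; the names match the issue, everything else is hypothetical):

```yaml
# Hypothetical minimal reproduction. A headless Service named "etcd" gives the
# StatefulSet pod the DNS name etcd-0.etcd within the namespace.
apiVersion: v1
kind: Service
metadata:
  name: etcd
  namespace: yay
spec:
  clusterIP: None
  selector:
    app: etcd
  ports:
    - port: 2379
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: etcd
  namespace: yay
spec:
  serviceName: etcd
  replicas: 1
  selector:
    matchLabels:
      app: etcd
  template:
    metadata:
      labels:
        app: etcd
    spec:
      containers:
        - name: etcd
          image: quay.io/coreos/etcd
          ports:
            - containerPort: 2379
---
apiVersion: v1
kind: Pod
metadata:
  name: app-that-uses-etcd
  namespace: yay
spec:
  initContainers:
    - name: init-myservice
      image: busybox
      command: ['sh', '-c', 'until nc -z -w 1 etcd-0.etcd 2379; do echo waiting for etcd-0.etcd; sleep 2; done;']
  containers:
    - name: app
      image: busybox
      command: ['sleep', '3600']
```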

As I mentioned earlier, this spec works on macOS/minikube/hyperkit and on a colleague's Arch machine with arch/minikube/virtualbox.

Here is the failed situation:

edd@tr:~|⇒  kubectl -n yay get pods
NAME                 READY   STATUS     RESTARTS   AGE
app-that-uses-etcd   0/1     Init:0/1   0          14m
etcd-0               1/1     Running    0          14m

If I then go and find the init container for the app-that-uses-etcd pod...

edd@tr:~|⇒  docker ps -a | grep "init-myservice"
0ca39631e5e2        busybox                "sh -c 'until nc -z …"   15 minutes ago      Up 14 minutes                           k8s_init-myservice_app-that-uses-etcd_yay_069cbba6-e96b-4cc8-ab57-c439bc67bdbd_0
edd@tr:~|⇒

I can then check the init container logs:

edd@tr:~|⇒  docker logs 0ca39631e5e2
nc: bad address 'etcd-0.etcd'
waiting for etcd-0.etcd
waiting for etcd-0.etcd
nc: bad address 'etcd-0.etcd'
nc: bad address 'etcd-0.etcd'
waiting for etcd-0.etcd
waiting for etcd-0.etcd
nc: bad address 'etcd-0.etcd'
nc: bad address 'etcd-0.etcd'
waiting for etcd-0.etcd
nc: bad address 'etcd-0.etcd'
waiting for etcd-0.etcd
waiting for etcd-0.etcd
<snip>

The init container is not able to reach etcd, which it needs to do before it can complete. Next, I exec into etcd-0:

edd@tr:~|⇒  kubectl -n yay exec -ti etcd-0 sh
/ # nc -z -w 1 etcd-0.etcd 2379
/ # echo $?
0
/ # ping etcd-0.etcd
PING etcd-0.etcd (172.17.0.5): 56 data bytes
64 bytes from 172.17.0.5: seq=0 ttl=64 time=0.025 ms
64 bytes from 172.17.0.5: seq=1 ttl=64 time=0.038 ms
^C
--- etcd-0.etcd ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.025/0.031/0.038 ms

/ # wget etcd-0.etcd:2379
Connecting to etcd-0.etcd:2379 (172.17.0.5:2379)
wget: server returned error: HTTP/1.1 404 Not Found
/ #
/ #

/ # wget localhost:2379
Connecting to localhost:2379 (127.0.0.1:2379)
wget: server returned error: HTTP/1.1 404 Not Found
/ #
/ #
/ # wget 172.17.0.5:2379
Connecting to 172.17.0.5:2379 (172.17.0.5:2379)

wget: server returned error: HTTP/1.1 404 Not Found
/ #

/ # cat /etc/resolv.conf
nameserver 10.96.0.10
search yay.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
/ #

So inside the etcd-0 pod everything looks fine; it's just that the init container doesn't seem to be able to reach it by name.

Finally, I exec into the init container:

edd@tr:~|⇒  docker exec -ti 0ca39631e5e2 sh
/ #  nc -z -w 1 etcd-0.etcd 2379
nc: bad address 'etcd-0.etcd'
/ # nc -z -w 1 etcd-0.etcd 2379
nc: bad address 'etcd-0.etcd'

/ # nc -z -w 1 172.17.0.5 2379
/ # echo $?
0

/ #
/ #
/ # cat /etc/resolv.conf
nameserver 10.96.0.10
search yay.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
/ #
/ #

/ # nslookup etcd-0.etcd
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

nslookup: can't resolve 'etcd-0.etcd'
/ #
/ 

So in this case the container can reach the other pod by using its IP directly, but it can't resolve 'etcd-0.etcd'.
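
For context on why the search list matters here: with the options ndots:5 shown in /etc/resolv.conf, a name with fewer than five dots such as etcd-0.etcd is first expanded through the search domains before being tried verbatim, so resolution depends on kube-dns answering for etcd-0.etcd.yay.svc.cluster.local. A self-contained sketch of that expansion order (my own illustration of standard resolver behavior, not code from this issue):

```python
def candidate_names(name, search_domains, ndots=5):
    """Illustrative sketch: the order in which a glibc/musl-style resolver
    tries candidate FQDNs for a relative name under an ndots policy."""
    dots = name.count(".")
    candidates = []
    if dots >= ndots:
        # Enough dots: the name is tried as-is first.
        candidates.append(name)
    # Expand through each search domain in resolv.conf order.
    candidates += [f"{name}.{domain}" for domain in search_domains]
    if dots < ndots:
        # Too few dots: the bare name is only tried last.
        candidates.append(name)
    return candidates

search = ["yay.svc.cluster.local", "svc.cluster.local", "cluster.local"]
print(candidate_names("etcd-0.etcd", search))
# -> ['etcd-0.etcd.yay.svc.cluster.local', 'etcd-0.etcd.svc.cluster.local',
#     'etcd-0.etcd.cluster.local', 'etcd-0.etcd']
```

The "bad address" from nc in the init container means none of those candidates got an answer, even though the same query from the etcd-0 pod, with an identical resolv.conf, succeeds.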

Here are my minikube logs:

edd@tr:~|⇒  minikube logs

==> Docker <==
-- Logs begin at Fri 2019-10-11 14:05:12 UTC, end at Fri 2019-10-11 14:31:30 UTC. --
Oct 11 14:05:20 minikube dockerd[2514]: time="2019-10-11T14:05:20.377106610Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Oct 11 14:05:20 minikube dockerd[2514]: time="2019-10-11T14:05:20.377154120Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00016c4b0, CONNECTING" module=grpc
Oct 11 14:05:20 minikube dockerd[2514]: time="2019-10-11T14:05:20.377624451Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00016c4b0, READY" module=grpc
Oct 11 14:05:20 minikube dockerd[2514]: time="2019-10-11T14:05:20.427731547Z" level=info msg="Graph migration to content-addressability took 0.00 seconds"
Oct 11 14:05:20 minikube dockerd[2514]: time="2019-10-11T14:05:20.428039072Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Oct 11 14:05:20 minikube dockerd[2514]: time="2019-10-11T14:05:20.428083614Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Oct 11 14:05:20 minikube dockerd[2514]: time="2019-10-11T14:05:20.428096924Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
Oct 11 14:05:20 minikube dockerd[2514]: time="2019-10-11T14:05:20.428108048Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
Oct 11 14:05:20 minikube dockerd[2514]: time="2019-10-11T14:05:20.428119142Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
Oct 11 14:05:20 minikube dockerd[2514]: time="2019-10-11T14:05:20.428132101Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
Oct 11 14:05:20 minikube dockerd[2514]: time="2019-10-11T14:05:20.428547440Z" level=info msg="Loading containers: start."
Oct 11 14:05:20 minikube dockerd[2514]: time="2019-10-11T14:05:20.612892441Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Oct 11 14:05:20 minikube dockerd[2514]: time="2019-10-11T14:05:20.765559611Z" level=info msg="Loading containers: done."
Oct 11 14:05:20 minikube dockerd[2514]: time="2019-10-11T14:05:20.778526981Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
Oct 11 14:05:20 minikube dockerd[2514]: time="2019-10-11T14:05:20.780249463Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH"
Oct 11 14:05:20 minikube dockerd[2514]: time="2019-10-11T14:05:20.801432605Z" level=info msg="Docker daemon" commit=039a7df9ba graphdriver(s)=overlay2 version=18.09.9
Oct 11 14:05:20 minikube dockerd[2514]: time="2019-10-11T14:05:20.801562217Z" level=info msg="Daemon has completed initialization"
Oct 11 14:05:20 minikube dockerd[2514]: time="2019-10-11T14:05:20.877212679Z" level=info msg="API listen on /var/run/docker.sock"
Oct 11 14:05:20 minikube dockerd[2514]: time="2019-10-11T14:05:20.877398756Z" level=info msg="API listen on [::]:2376"
Oct 11 14:05:20 minikube systemd[1]: Started Docker Application Container Engine.
Oct 11 14:06:13 minikube dockerd[2514]: time="2019-10-11T14:06:13.320399736Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
Oct 11 14:06:13 minikube dockerd[2514]: time="2019-10-11T14:06:13.321362328Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH"
Oct 11 14:06:13 minikube dockerd[2514]: time="2019-10-11T14:06:13.393167346Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
Oct 11 14:06:13 minikube dockerd[2514]: time="2019-10-11T14:06:13.394245060Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH"
Oct 11 14:06:13 minikube dockerd[2514]: time="2019-10-11T14:06:13.465495142Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
Oct 11 14:06:13 minikube dockerd[2514]: time="2019-10-11T14:06:13.466018946Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH"
Oct 11 14:06:14 minikube dockerd[2514]: time="2019-10-11T14:06:14.167844267Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
Oct 11 14:06:14 minikube dockerd[2514]: time="2019-10-11T14:06:14.168888959Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH"
Oct 11 14:06:18 minikube dockerd[2514]: time="2019-10-11T14:06:18.318257569Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
Oct 11 14:06:18 minikube dockerd[2514]: time="2019-10-11T14:06:18.319493520Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH"
Oct 11 14:06:18 minikube dockerd[2514]: time="2019-10-11T14:06:18.360575531Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
Oct 11 14:06:18 minikube dockerd[2514]: time="2019-10-11T14:06:18.361856754Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH"
Oct 11 14:06:18 minikube dockerd[2514]: time="2019-10-11T14:06:18.698528089Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
Oct 11 14:06:18 minikube dockerd[2514]: time="2019-10-11T14:06:18.699615265Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH"
Oct 11 14:06:18 minikube dockerd[2514]: time="2019-10-11T14:06:18.715021169Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
Oct 11 14:06:18 minikube dockerd[2514]: time="2019-10-11T14:06:18.715588009Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH"
Oct 11 14:06:39 minikube dockerd[2514]: time="2019-10-11T14:06:39.131966056Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
Oct 11 14:06:39 minikube dockerd[2514]: time="2019-10-11T14:06:39.132698160Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH"
Oct 11 14:06:39 minikube dockerd[2514]: time="2019-10-11T14:06:39.743110744Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/cfe41ac13a3b8cd19ecb901e237a81c6c2f143109087a6b151a73a0d444d3b9b/shim.sock" debug=false pid=4311
Oct 11 14:06:39 minikube dockerd[2514]: time="2019-10-11T14:06:39.984921222Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0f16e0fed6ee29607621a6c3021601c04ae568f9d320b4309788b1255cd4c405/shim.sock" debug=false pid=4350
Oct 11 14:06:40 minikube dockerd[2514]: time="2019-10-11T14:06:40.046080263Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2ca1fbf850e44c47db7094cd2be93d1906c673136e28fd6a5164609348c15d0f/shim.sock" debug=false pid=4372
Oct 11 14:06:40 minikube dockerd[2514]: time="2019-10-11T14:06:40.083821242Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/76aa5ed58b8a573d32f672a715eb4fe29812e8acd086b874763f05e131d7f21c/shim.sock" debug=false pid=4395
Oct 11 14:06:40 minikube dockerd[2514]: time="2019-10-11T14:06:40.117050355Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8f305f76264921f804efba8a1ee80990e7037f59e31ee359853786d00a898b1f/shim.sock" debug=false pid=4419
Oct 11 14:06:40 minikube dockerd[2514]: time="2019-10-11T14:06:40.118156896Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c1a7344a5d42b75abaa5f84be10012702e06ac38679406cc687585f58a7ea659/shim.sock" debug=false pid=4421
Oct 11 14:06:40 minikube dockerd[2514]: time="2019-10-11T14:06:40.332872985Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d23b71d2b9c6b9c8e1524ccbb9bf36195558dd93a0ed977f2fb2136a3a4fec98/shim.sock" debug=false pid=4578
Oct 11 14:06:40 minikube dockerd[2514]: time="2019-10-11T14:06:40.380670964Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0c327625690ebd976dd2de3ec6f0adabfa8546c3e6f72bdec64bbaedd8f25457/shim.sock" debug=false pid=4602
Oct 11 14:06:40 minikube dockerd[2514]: time="2019-10-11T14:06:40.446654003Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d07419f3294072ed521c5b61ba65e877073889ca39993d84c440431536be1036/shim.sock" debug=false pid=4632
Oct 11 14:06:40 minikube dockerd[2514]: time="2019-10-11T14:06:40.447469857Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ca3b74a07b4d3a4df03a276aa52a4fedb489bfee728f8efb2fcc92554294a28b/shim.sock" debug=false pid=4638
Oct 11 14:06:55 minikube dockerd[2514]: time="2019-10-11T14:06:55.646731375Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9081c66ae6ad8e8bea106928766676b3ccb82d225ab2e011d73d687c9da3e7e3/shim.sock" debug=false pid=5356
Oct 11 14:06:55 minikube dockerd[2514]: time="2019-10-11T14:06:55.974907222Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/37d488539c8e04b691ade8d33af5e5d3c548d0a314dab3d178eb35ba610b873e/shim.sock" debug=false pid=5419
Oct 11 14:06:56 minikube dockerd[2514]: time="2019-10-11T14:06:56.066140432Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9117c6887ac790c57c734f6b03ddddef0370a3eacb1dc5518e201b08aebbd988/shim.sock" debug=false pid=5465
Oct 11 14:06:56 minikube dockerd[2514]: time="2019-10-11T14:06:56.077817261Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/425dba55ffa6b754463b60f680fc5fc05d0aa24a913a95e8de7c884e85a8b091/shim.sock" debug=false pid=5498
Oct 11 14:06:56 minikube dockerd[2514]: time="2019-10-11T14:06:56.554483146Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a55243adbc678a953a01189f04a2a8fc9ddb569f0d7ba37e9f7374c6113657f8/shim.sock" debug=false pid=5727
Oct 11 14:06:56 minikube dockerd[2514]: time="2019-10-11T14:06:56.606886256Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e361259e6bf1fcce72bdb6dfb5f03e852e64fedba52e710f32722f7c6f0ec393/shim.sock" debug=false pid=5751
Oct 11 14:06:57 minikube dockerd[2514]: time="2019-10-11T14:06:57.952988503Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5c1a02c016287ec06597ed4824c3dc9c1a17a046c8fc44ee997e776f5d590b43/shim.sock" debug=false pid=5850
Oct 11 14:06:58 minikube dockerd[2514]: time="2019-10-11T14:06:58.229435438Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/13013805c556097f9e8cd107c39117a8f6e957be0c6d3535ddf27c0485e4cf3e/shim.sock" debug=false pid=5892
Oct 11 14:11:10 minikube dockerd[2514]: time="2019-10-11T14:11:10.095920863Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3e31d3a12fbe39e40cbc7172dcc0cdeaf544473990bd0fa0efe55d058e75eeca/shim.sock" debug=false pid=11052
Oct 11 14:11:10 minikube dockerd[2514]: time="2019-10-11T14:11:10.143748031Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/496bb62746a68c71ec0c786e3c0d67184b1ad5a528e306dcb2a45e8b9018a40e/shim.sock" debug=false pid=11069
Oct 11 14:11:14 minikube dockerd[2514]: time="2019-10-11T14:11:14.049554252Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0ca39631e5e2ea62a6772b8bf72cc35d194216958a9de0c123a8eeab58fa6c28/shim.sock" debug=false pid=11319
Oct 11 14:11:18 minikube dockerd[2514]: time="2019-10-11T14:11:18.764813310Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f762597dc4916dc586af11338f69717fa9d4bb1ca840c34cd8f561c1cdcf82ec/shim.sock" debug=false pid=11575

==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
f762597dc4916 quay.io/coreos/etcd@sha256:ea49a3d44a50a50770bff84eab87bac2542c7171254c4d84c609b8c66aefc211 20 minutes ago Running etcd 0 496bb62746a68
0ca39631e5e2e busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 20 minutes ago Running init-myservice 0 3e31d3a12fbe3
13013805c5560 4689081edb103 24 minutes ago Running storage-provisioner 0 5c1a02c016287
e361259e6bf1f bf261d1579144 24 minutes ago Running coredns 0 425dba55ffa6b
a55243adbc678 bf261d1579144 24 minutes ago Running coredns 0 9117c6887ac79
37d488539c8e0 c21b0c7400f98 24 minutes ago Running kube-proxy 0 9081c66ae6ad8
d07419f329407 06a629a7e51cd 24 minutes ago Running kube-controller-manager 0 c1a7344a5d42b
ca3b74a07b4d3 b305571ca60a5 24 minutes ago Running kube-apiserver 0 8f305f7626492
0c327625690eb b2756210eeabf 24 minutes ago Running etcd 0 2ca1fbf850e44
d23b71d2b9c6b bd12a212f9dcb 24 minutes ago Running kube-addon-manager 0 0f16e0fed6ee2
76aa5ed58b8a5 301ddc62b80b1 24 minutes ago Running kube-scheduler 0 cfe41ac13a3b8

==> coredns [a55243adbc67] <==
.:53
2019-10-11T14:06:56.812Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
2019-10-11T14:06:56.812Z [INFO] CoreDNS-1.6.2
2019-10-11T14:06:56.812Z [INFO] linux/amd64, go1.12.8, 795a3eb
CoreDNS-1.6.2
linux/amd64, go1.12.8, 795a3eb

==> coredns [e361259e6bf1] <==
.:53
2019-10-11T14:06:56.883Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
2019-10-11T14:06:56.884Z [INFO] CoreDNS-1.6.2
2019-10-11T14:06:56.884Z [INFO] linux/amd64, go1.12.8, 795a3eb
CoreDNS-1.6.2
linux/amd64, go1.12.8, 795a3eb

==> dmesg <==
[Oct11 14:04] Decoding supported only on Scalable MCA processors.
[ +0.011611] #2
[ +0.010190] #3
[ +0.010785] #4
[ +0.010215] #5
[ +0.010658] #6
[ +0.010342] #7
[ +0.010735] #8
[ +0.003263] #9
[ +0.002520] #10
[ +0.001484] #11
[ +0.001026] #12
[ +0.007010] #13
[ +0.001319] #14
[ +0.001146] #15
[ +0.001011] #16
[ +0.001022] #17
[ +0.005471] #18
[ +0.001152] #19
[ +0.000978] #20
[ +0.000914] #21
[ +0.153929] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 11
[ +16.062791] ACPI: PCI Interrupt Link [LNKB] enabled at IRQ 10
[ +0.021367] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11
[ +0.021790] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 10
[ +0.158478] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[Oct11 14:05] systemd-fstab-generator[1376]: Ignoring "noauto" for root device
[ +0.008040] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:35 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
[ +0.000002] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[ +0.516937] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[ +0.535372] vboxguest: loading out-of-tree module taints kernel.
[ +0.008599] vboxguest: PCI device not found, probably running on physical hardware.
[ +4.117999] systemd-fstab-generator[2410]: Ignoring "noauto" for root device
[Oct11 14:06] systemd-fstab-generator[3579]: Ignoring "noauto" for root device
[ +8.688321] systemd-fstab-generator[4107]: Ignoring "noauto" for root device
[ +24.895801] kauditd_printk_skb: 68 callbacks suppressed
[ +16.770156] kauditd_printk_skb: 20 callbacks suppressed
[Oct11 14:07] NFSD: Unable to end grace period: -110
[ +23.184394] kauditd_printk_skb: 47 callbacks suppressed

==> kernel <==
14:31:30 up 26 min, 0 users, load average: 1.27, 1.25, 1.06
Linux minikube 4.15.0 #1 SMP Wed Sep 18 07:44:58 PDT 2019 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2018.05.3"

==> kube-addon-manager [d23b71d2b9c6] <==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-10-11T14:30:47+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-10-11T14:30:50+00:00 ==
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-10-11T14:30:52+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-10-11T14:30:55+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-10-11T14:30:57+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-10-11T14:31:00+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-10-11T14:31:02+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-10-11T14:31:05+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-10-11T14:31:07+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-10-11T14:31:10+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-10-11T14:31:11+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-10-11T14:31:16+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-10-11T14:31:17+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-10-11T14:31:21+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-10-11T14:31:22+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-10-11T14:31:25+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-10-11T14:31:27+00:00 ==

==> kube-apiserver [ca3b74a07b4d] <==
I1011 14:06:42.277714 1 client.go:361] parsed scheme: "endpoint"
I1011 14:06:42.277836 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I1011 14:06:42.285326 1 client.go:361] parsed scheme: "endpoint"
I1011 14:06:42.285349 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I1011 14:06:42.299756 1 client.go:361] parsed scheme: "endpoint"
I1011 14:06:42.299961 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
W1011 14:06:42.488425 1 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
W1011 14:06:42.512070 1 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W1011 14:06:42.527546 1 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W1011 14:06:42.533833 1 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W1011 14:06:42.547278 1 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W1011 14:06:42.574874 1 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W1011 14:06:42.574912 1 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I1011 14:06:42.589623 1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I1011 14:06:42.589775 1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
I1011 14:06:42.592207 1 client.go:361] parsed scheme: "endpoint"
I1011 14:06:42.592242 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I1011 14:06:42.605861 1 client.go:361] parsed scheme: "endpoint"
I1011 14:06:42.605899 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I1011 14:06:44.585126 1 secure_serving.go:123] Serving securely on [::]:8443
I1011 14:06:44.585524 1 controller.go:81] Starting OpenAPI AggregationController
I1011 14:06:44.585556 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I1011 14:06:44.585598 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I1011 14:06:44.585657 1 autoregister_controller.go:140] Starting autoregister controller
I1011 14:06:44.585675 1 cache.go:32] Waiting for caches to sync for autoregister controller
I1011 14:06:44.585702 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I1011 14:06:44.585711 1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
I1011 14:06:44.586082 1 available_controller.go:383] Starting AvailableConditionController
I1011 14:06:44.586137 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I1011 14:06:44.586331 1 crd_finalizer.go:274] Starting CRDFinalizer
I1011 14:06:44.586416 1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
I1011 14:06:44.586393 1 naming_controller.go:288] Starting NamingConditionController
I1011 14:06:44.586476 1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
I1011 14:06:44.586500 1 establishing_controller.go:73] Starting EstablishingController
I1011 14:06:44.587076 1 controller.go:85] Starting OpenAPI controller
I1011 14:06:44.587103 1 customresource_discovery_controller.go:208] Starting DiscoveryController
E1011 14:06:44.587081 1 controller.go:154] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.39.134, ResourceVersion: 0, AdditionalErrorMsg:
I1011 14:06:44.664651 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I1011 14:06:44.685884 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1011 14:06:44.685955 1 shared_informer.go:204] Caches are synced for crd-autoregister
I1011 14:06:44.685964 1 cache.go:39] Caches are synced for autoregister controller
I1011 14:06:44.686256 1 cache.go:39] Caches are synced for AvailableConditionController controller
I1011 14:06:45.586019 1 controller.go:107] OpenAPI AggregationController: Processing item
I1011 14:06:45.586168 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I1011 14:06:45.586312 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1011 14:06:45.591985 1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I1011 14:06:45.607523 1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I1011 14:06:45.607541 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I1011 14:06:47.147089 1 controller.go:606] quota admission added evaluator for: endpoints
I1011 14:06:47.368855 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1011 14:06:47.650161 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W1011 14:06:47.969668 1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.39.134]
I1011 14:06:48.368309 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I1011 14:06:49.472916 1 controller.go:606] quota admission added evaluator for: deployments.apps
I1011 14:06:49.805316 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I1011 14:06:55.121382 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I1011 14:06:55.145614 1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
I1011 14:06:55.418008 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I1011 14:11:09.630001 1 controller.go:606] quota admission added evaluator for: statefulsets.apps
E1011 14:23:46.904304 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted

==> kube-controller-manager [d07419f32940] <==
W1011 14:06:53.216515 1 controllermanager.go:526] Skipping "ttl-after-finished"
I1011 14:06:53.216523 1 shared_informer.go:197] Waiting for caches to sync for attach detach
I1011 14:06:53.915260 1 controllermanager.go:534] Started "horizontalpodautoscaling"
I1011 14:06:53.915289 1 horizontal.go:156] Starting HPA controller
I1011 14:06:53.915447 1 shared_informer.go:197] Waiting for caches to sync for HPA
I1011 14:06:54.166294 1 controllermanager.go:534] Started "cronjob"
I1011 14:06:54.166380 1 cronjob_controller.go:96] Starting CronJob Manager
I1011 14:06:54.316139 1 controllermanager.go:534] Started "csrsigning"
I1011 14:06:54.316211 1 certificate_controller.go:113] Starting certificate controller
I1011 14:06:54.316232 1 shared_informer.go:197] Waiting for caches to sync for certificate
E1011 14:06:54.565607 1 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W1011 14:06:54.565639 1 controllermanager.go:526] Skipping "service"
W1011 14:06:54.565651 1 controllermanager.go:526] Skipping "root-ca-cert-publisher"
I1011 14:06:54.816062 1 controllermanager.go:534] Started "deployment"
I1011 14:06:54.816096 1 deployment_controller.go:152] Starting deployment controller
I1011 14:06:54.816123 1 shared_informer.go:197] Waiting for caches to sync for deployment
I1011 14:06:54.816681 1 shared_informer.go:197] Waiting for caches to sync for resource quota
I1011 14:06:54.818923 1 shared_informer.go:197] Waiting for caches to sync for garbage collector
W1011 14:06:54.821068 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I1011 14:06:54.824832 1 shared_informer.go:204] Caches are synced for namespace
I1011 14:06:54.838816 1 shared_informer.go:204] Caches are synced for bootstrap_signer
I1011 14:06:54.859433 1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator
I1011 14:06:54.866018 1 shared_informer.go:204] Caches are synced for service account
E1011 14:06:54.885931 1 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
I1011 14:06:54.915781 1 shared_informer.go:204] Caches are synced for certificate
I1011 14:06:54.917066 1 shared_informer.go:204] Caches are synced for certificate
I1011 14:06:54.918205 1 shared_informer.go:204] Caches are synced for TTL
I1011 14:06:54.966132 1 shared_informer.go:204] Caches are synced for taint
I1011 14:06:54.966623 1 node_lifecycle_controller.go:1208] Initializing eviction metric for zone:
W1011 14:06:54.966789 1 node_lifecycle_controller.go:903] Missing timestamp for Node minikube. Assuming now as a timestamp.
I1011 14:06:54.966814 1 taint_manager.go:186] Starting NoExecuteTaintManager
I1011 14:06:54.966886 1 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"6e7845e4-c0d0-4d3f-86a2-19c99c182da5", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
I1011 14:06:54.966821 1 node_lifecycle_controller.go:1108] Controller detected that zone is now in state Normal.
I1011 14:06:54.966969 1 shared_informer.go:204] Caches are synced for PVC protection
I1011 14:06:54.967000 1 shared_informer.go:204] Caches are synced for GC
I1011 14:06:54.974379 1 shared_informer.go:204] Caches are synced for endpoint
I1011 14:06:54.983019 1 shared_informer.go:204] Caches are synced for ReplicationController
I1011 14:06:54.998965 1 shared_informer.go:204] Caches are synced for job
I1011 14:06:55.098576 1 shared_informer.go:204] Caches are synced for ReplicaSet
I1011 14:06:55.116801 1 shared_informer.go:204] Caches are synced for daemon sets
I1011 14:06:55.129835 1 event.go:255] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"64c27295-f217-4a09-9cf4-569c73153c79", APIVersion:"apps/v1", ResourceVersion:"224", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-pl5ts
E1011 14:06:55.157296 1 daemon_controller.go:302] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"64c27295-f217-4a09-9cf4-569c73153c79", ResourceVersion:"224", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63706399609, loc:(*time.Location)(0x7776000)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001485200), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), 
PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0012a27c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001485220), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), 
PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001485240), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.16.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001485280)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), 
Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc000607090), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0014b87e8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"beta.kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0017bc4e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), 
RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0017be088)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0014b8828)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
I1011 14:06:55.166074 1 shared_informer.go:204] Caches are synced for stateful set
I1011 14:06:55.215835 1 shared_informer.go:204] Caches are synced for HPA
I1011 14:06:55.316462 1 shared_informer.go:204] Caches are synced for PV protection
I1011 14:06:55.316853 1 shared_informer.go:204] Caches are synced for attach detach
I1011 14:06:55.319148 1 shared_informer.go:204] Caches are synced for expand
I1011 14:06:55.321218 1 shared_informer.go:204] Caches are synced for resource quota
I1011 14:06:55.365210 1 shared_informer.go:204] Caches are synced for disruption
I1011 14:06:55.365293 1 disruption.go:341] Sending events to api server.
I1011 14:06:55.366090 1 shared_informer.go:204] Caches are synced for persistent volume
I1011 14:06:55.376128 1 shared_informer.go:204] Caches are synced for garbage collector
I1011 14:06:55.376166 1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1011 14:06:55.416273 1 shared_informer.go:204] Caches are synced for deployment
I1011 14:06:55.416983 1 shared_informer.go:204] Caches are synced for resource quota
I1011 14:06:55.419211 1 shared_informer.go:204] Caches are synced for garbage collector
I1011 14:06:55.421504 1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"812ac6cb-eb47-4eac-9993-88a363e7424c", APIVersion:"apps/v1", ResourceVersion:"212", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-5644d7b6d9 to 2
I1011 14:06:55.427109 1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"3b66ddc2-b03e-4838-acec-c519bdea6010", APIVersion:"apps/v1", ResourceVersion:"331", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-fxkhv
I1011 14:06:55.435481 1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"3b66ddc2-b03e-4838-acec-c519bdea6010", APIVersion:"apps/v1", ResourceVersion:"331", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-kgq7n
I1011 14:11:09.652638 1 event.go:255] Event(v1.ObjectReference{Kind:"StatefulSet", Namespace:"yay", Name:"etcd", UID:"73f1517d-a3cd-46fa-8757-2c49f0b0f721", APIVersion:"apps/v1", ResourceVersion:"712", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' create Pod etcd-0 in StatefulSet etcd successful

==> kube-proxy [37d488539c8e] <==
W1011 14:06:56.306397 1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
I1011 14:06:56.316024 1 node.go:135] Successfully retrieved node IP: 192.168.122.11
I1011 14:06:56.316072 1 server_others.go:149] Using iptables Proxier.
W1011 14:06:56.316238 1 proxier.go:287] clusterCIDR not specified, unable to distinguish between internal and external traffic
I1011 14:06:56.316959 1 server.go:529] Version: v1.16.0
I1011 14:06:56.317837 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 720896
I1011 14:06:56.317888 1 conntrack.go:52] Setting nf_conntrack_max to 720896
I1011 14:06:56.318338 1 conntrack.go:83] Setting conntrack hashsize to 180224
I1011 14:06:56.328491 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I1011 14:06:56.328575 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I1011 14:06:56.328773 1 config.go:313] Starting service config controller
I1011 14:06:56.328831 1 shared_informer.go:197] Waiting for caches to sync for service config
I1011 14:06:56.328887 1 config.go:131] Starting endpoints config controller
I1011 14:06:56.328916 1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I1011 14:06:56.429153 1 shared_informer.go:204] Caches are synced for service config
I1011 14:06:56.429232 1 shared_informer.go:204] Caches are synced for endpoints config

==> kube-scheduler [76aa5ed58b8a] <==
I1011 14:06:40.653423 1 serving.go:319] Generated self-signed cert in-memory
W1011 14:06:44.603804 1 authentication.go:262] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1011 14:06:44.603833 1 authentication.go:199] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1011 14:06:44.603842 1 authentication.go:200] Continuing without authentication configuration. This may treat all requests as anonymous.
W1011 14:06:44.603848 1 authentication.go:201] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1011 14:06:44.606987 1 server.go:143] Version: v1.16.0
I1011 14:06:44.607078 1 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W1011 14:06:44.609096 1 authorization.go:47] Authorization is disabled
W1011 14:06:44.609146 1 authentication.go:79] Authentication is disabled
I1011 14:06:44.609224 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I1011 14:06:44.610747 1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
E1011 14:06:44.635583 1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1011 14:06:44.636361 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1011 14:06:44.636371 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1011 14:06:44.636442 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1011 14:06:44.636518 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1011 14:06:44.637150 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1011 14:06:44.637267 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1011 14:06:44.637336 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1011 14:06:44.637426 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1011 14:06:44.637515 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1011 14:06:44.637741 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1011 14:06:45.637538 1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1011 14:06:45.638061 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1011 14:06:45.638936 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1011 14:06:45.640017 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1011 14:06:45.641682 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1011 14:06:45.642518 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1011 14:06:45.643630 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1011 14:06:45.644698 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1011 14:06:45.645502 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1011 14:06:45.646531 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1011 14:06:45.647856 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
I1011 14:06:47.614221 1 leaderelection.go:241] attempting to acquire leader lease kube-system/kube-scheduler...
I1011 14:06:47.625356 1 leaderelection.go:251] successfully acquired lease kube-system/kube-scheduler

==> kubelet <==
-- Logs begin at Fri 2019-10-11 14:05:12 UTC, end at Fri 2019-10-11 14:31:30 UTC. --
Oct 11 14:06:43 minikube kubelet[4190]: E1011 14:06:43.619621 4190 kubelet.go:2267] node "minikube" not found
Oct 11 14:06:43 minikube kubelet[4190]: E1011 14:06:43.720059 4190 kubelet.go:2267] node "minikube" not found
Oct 11 14:06:43 minikube kubelet[4190]: E1011 14:06:43.820667 4190 kubelet.go:2267] node "minikube" not found
Oct 11 14:06:43 minikube kubelet[4190]: E1011 14:06:43.921152 4190 kubelet.go:2267] node "minikube" not found
Oct 11 14:06:44 minikube kubelet[4190]: E1011 14:06:44.021810 4190 kubelet.go:2267] node "minikube" not found
Oct 11 14:06:44 minikube kubelet[4190]: E1011 14:06:44.122279 4190 kubelet.go:2267] node "minikube" not found
Oct 11 14:06:44 minikube kubelet[4190]: E1011 14:06:44.223078 4190 kubelet.go:2267] node "minikube" not found
Oct 11 14:06:44 minikube kubelet[4190]: E1011 14:06:44.324003 4190 kubelet.go:2267] node "minikube" not found
Oct 11 14:06:44 minikube kubelet[4190]: E1011 14:06:44.424314 4190 kubelet.go:2267] node "minikube" not found
Oct 11 14:06:44 minikube kubelet[4190]: E1011 14:06:44.524686 4190 kubelet.go:2267] node "minikube" not found
Oct 11 14:06:44 minikube kubelet[4190]: E1011 14:06:44.613305 4190 controller.go:220] failed to get node "minikube" when trying to set owner ref to the node lease: nodes "minikube" not found
Oct 11 14:06:44 minikube kubelet[4190]: E1011 14:06:44.624879 4190 kubelet.go:2267] node "minikube" not found
Oct 11 14:06:44 minikube kubelet[4190]: I1011 14:06:44.636508 4190 kubelet_node_status.go:75] Successfully registered node minikube
Oct 11 14:06:44 minikube kubelet[4190]: I1011 14:06:44.646301 4190 reconciler.go:154] Reconciler: start to sync state
Oct 11 14:06:44 minikube kubelet[4190]: E1011 14:06:44.660312 4190 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cc9cd2c2ede59f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf6040fbc4584f9f, ext:20997184110, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf6040fbc4584f9f, ext:20997184110, loc:(*time.Location)(0x797f100)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 11 14:06:44 minikube kubelet[4190]: E1011 14:06:44.718005 4190 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cc9cd2c9956249", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf6040fbcaffcc49, ext:21108823715, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf6040fbcaffcc49, ext:21108823715, loc:(*time.Location)(0x797f100)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 11 14:06:44 minikube kubelet[4190]: E1011 14:06:44.772492 4190 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cc9cd2c99580fe", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf6040fbcaffeafe, ext:21108831576, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf6040fbcaffeafe, ext:21108831576, loc:(*time.Location)(0x797f100)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 11 14:06:44 minikube kubelet[4190]: E1011 14:06:44.826476 4190 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cc9cd2c995a421", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf6040fbcb000e21, ext:21108840560, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf6040fbcb000e21, ext:21108840560, loc:(*time.Location)(0x797f100)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 11 14:06:44 minikube kubelet[4190]: E1011 14:06:44.882413 4190 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cc9cd2cb1fd9d8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf6040fbcc8a43d8, ext:21134675506, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf6040fbcc8a43d8, ext:21134675506, loc:(*time.Location)(0x797f100)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 11 14:06:44 minikube kubelet[4190]: E1011 14:06:44.942266 4190 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cc9cd2c9956249", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf6040fbcaffcc49, ext:21108823715, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf6040fbcd32cf7d, ext:21145721293, loc:(*time.Location)(0x797f100)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 11 14:06:44 minikube kubelet[4190]: E1011 14:06:44.999708 4190 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cc9cd2c99580fe", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf6040fbcaffeafe, ext:21108831576, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf6040fbcd332db5, ext:21145745413, loc:(*time.Location)(0x797f100)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 11 14:06:45 minikube kubelet[4190]: E1011 14:06:45.056261 4190 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cc9cd2c995a421", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf6040fbcb000e21, ext:21108840560, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf6040fbcd334293, ext:21145750755, loc:(*time.Location)(0x797f100)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 11 14:06:45 minikube kubelet[4190]: E1011 14:06:45.115635 4190 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cc9cd2c9956249", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf6040fbcaffcc49, ext:21108823715, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf6040fbcd39703d, ext:21146155660, loc:(*time.Location)(0x797f100)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 11 14:06:45 minikube kubelet[4190]: E1011 14:06:45.514779 4190 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cc9cd2c99580fe", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf6040fbcaffeafe, ext:21108831576, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf6040fbcd39c1f2, ext:21146176578, loc:(*time.Location)(0x797f100)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 11 14:06:45 minikube kubelet[4190]: E1011 14:06:45.915375 4190 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cc9cd2c995a421", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf6040fbcb000e21, ext:21108840560, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf6040fbcd39d62a, ext:21146181754, loc:(*time.Location)(0x797f100)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 11 14:06:46 minikube kubelet[4190]: E1011 14:06:46.315763 4190 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cc9cd2c9956249", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf6040fbcaffcc49, ext:21108823715, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf6040fbd98da104, ext:21352999764, loc:(*time.Location)(0x797f100)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 11 14:06:46 minikube kubelet[4190]: E1011 14:06:46.714687 4190 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cc9cd2c99580fe", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf6040fbcaffeafe, ext:21108831576, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf6040fbd98dc636, ext:21353009285, loc:(*time.Location)(0x797f100)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 11 14:06:47 minikube kubelet[4190]: E1011 14:06:47.114109 4190 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cc9cd2c995a421", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf6040fbcb000e21, ext:21108840560, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf6040fbd98deb4a, ext:21353018777, loc:(*time.Location)(0x797f100)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 11 14:06:47 minikube kubelet[4190]: E1011 14:06:47.515850 4190 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cc9cd2c9956249", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf6040fbcaffcc49, ext:21108823715, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf6040fbdc584df9, ext:21399836745, loc:(*time.Location)(0x797f100)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 11 14:06:47 minikube kubelet[4190]: E1011 14:06:47.916716 4190 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cc9cd2c995a421", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf6040fbcb000e21, ext:21108840560, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf6040fbdc587bd4, ext:21399848483, loc:(*time.Location)(0x797f100)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 11 14:06:55 minikube kubelet[4190]: I1011 14:06:55.195168 4190 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-mvgr4" (UniqueName: "kubernetes.io/secret/f8f46ca0-e3e0-4e58-a79b-85933dc5bcfe-kube-proxy-token-mvgr4") pod "kube-proxy-pl5ts" (UID: "f8f46ca0-e3e0-4e58-a79b-85933dc5bcfe")
Oct 11 14:06:55 minikube kubelet[4190]: I1011 14:06:55.195247 4190 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/f8f46ca0-e3e0-4e58-a79b-85933dc5bcfe-xtables-lock") pod "kube-proxy-pl5ts" (UID: "f8f46ca0-e3e0-4e58-a79b-85933dc5bcfe")
Oct 11 14:06:55 minikube kubelet[4190]: I1011 14:06:55.195282 4190 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/f8f46ca0-e3e0-4e58-a79b-85933dc5bcfe-lib-modules") pod "kube-proxy-pl5ts" (UID: "f8f46ca0-e3e0-4e58-a79b-85933dc5bcfe")
Oct 11 14:06:55 minikube kubelet[4190]: I1011 14:06:55.195317 4190 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/f8f46ca0-e3e0-4e58-a79b-85933dc5bcfe-kube-proxy") pod "kube-proxy-pl5ts" (UID: "f8f46ca0-e3e0-4e58-a79b-85933dc5bcfe")
Oct 11 14:06:55 minikube kubelet[4190]: I1011 14:06:55.497263 4190 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-2t7lv" (UniqueName: "kubernetes.io/secret/926bc3c0-16c3-480d-9b21-a3f1682950a3-coredns-token-2t7lv") pod "coredns-5644d7b6d9-fxkhv" (UID: "926bc3c0-16c3-480d-9b21-a3f1682950a3")
Oct 11 14:06:55 minikube kubelet[4190]: I1011 14:06:55.497337 4190 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f836da2d-ec06-4aa2-81a4-534f83dd8ba2-config-volume") pod "coredns-5644d7b6d9-kgq7n" (UID: "f836da2d-ec06-4aa2-81a4-534f83dd8ba2")
Oct 11 14:06:55 minikube kubelet[4190]: I1011 14:06:55.497385 4190 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/926bc3c0-16c3-480d-9b21-a3f1682950a3-config-volume") pod "coredns-5644d7b6d9-fxkhv" (UID: "926bc3c0-16c3-480d-9b21-a3f1682950a3")
Oct 11 14:06:55 minikube kubelet[4190]: I1011 14:06:55.497478 4190 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-2t7lv" (UniqueName: "kubernetes.io/secret/f836da2d-ec06-4aa2-81a4-534f83dd8ba2-coredns-token-2t7lv") pod "coredns-5644d7b6d9-kgq7n" (UID: "f836da2d-ec06-4aa2-81a4-534f83dd8ba2")
Oct 11 14:06:56 minikube kubelet[4190]: W1011 14:06:56.434521 4190 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-5644d7b6d9-fxkhv through plugin: invalid network status for
Oct 11 14:06:56 minikube kubelet[4190]: W1011 14:06:56.487968 4190 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-5644d7b6d9-kgq7n through plugin: invalid network status for
Oct 11 14:06:56 minikube kubelet[4190]: W1011 14:06:56.491454 4190 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-5644d7b6d9-fxkhv through plugin: invalid network status for
Oct 11 14:06:56 minikube kubelet[4190]: E1011 14:06:56.493941 4190 remote_runtime.go:295] ContainerStatus "a55243adbc678a953a01189f04a2a8fc9ddb569f0d7ba37e9f7374c6113657f8" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: a55243adbc678a953a01189f04a2a8fc9ddb569f0d7ba37e9f7374c6113657f8
Oct 11 14:06:56 minikube kubelet[4190]: E1011 14:06:56.494005 4190 kuberuntime_manager.go:935] getPodContainerStatuses for pod "coredns-5644d7b6d9-fxkhv_kube-system(926bc3c0-16c3-480d-9b21-a3f1682950a3)" failed: rpc error: code = Unknown desc = Error: No such container: a55243adbc678a953a01189f04a2a8fc9ddb569f0d7ba37e9f7374c6113657f8
Oct 11 14:06:56 minikube kubelet[4190]: W1011 14:06:56.503817 4190 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-5644d7b6d9-kgq7n through plugin: invalid network status for
Oct 11 14:06:56 minikube kubelet[4190]: W1011 14:06:56.505231 4190 pod_container_deletor.go:75] Container "425dba55ffa6b754463b60f680fc5fc05d0aa24a913a95e8de7c884e85a8b091" not found in pod's containers
Oct 11 14:06:57 minikube kubelet[4190]: W1011 14:06:57.516749 4190 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-5644d7b6d9-kgq7n through plugin: invalid network status for
Oct 11 14:06:57 minikube kubelet[4190]: W1011 14:06:57.528204 4190 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-5644d7b6d9-fxkhv through plugin: invalid network status for
Oct 11 14:06:57 minikube kubelet[4190]: I1011 14:06:57.607236 4190 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/4bb39430-5f74-4204-a21f-6874b003f288-tmp") pod "storage-provisioner" (UID: "4bb39430-5f74-4204-a21f-6874b003f288")
Oct 11 14:06:57 minikube kubelet[4190]: I1011 14:06:57.607376 4190 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-6ntxm" (UniqueName: "kubernetes.io/secret/4bb39430-5f74-4204-a21f-6874b003f288-storage-provisioner-token-6ntxm") pod "storage-provisioner" (UID: "4bb39430-5f74-4204-a21f-6874b003f288")
Oct 11 14:11:09 minikube kubelet[4190]: I1011 14:11:09.652473 4190 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-9c72q" (UniqueName: "kubernetes.io/secret/069cbba6-e96b-4cc8-ab57-c439bc67bdbd-default-token-9c72q") pod "app-that-uses-etcd" (UID: "069cbba6-e96b-4cc8-ab57-c439bc67bdbd")
Oct 11 14:11:09 minikube kubelet[4190]: I1011 14:11:09.753319 4190 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-9c72q" (UniqueName: "kubernetes.io/secret/228229c6-8ee3-45f4-a1f0-6afcb36bd85a-default-token-9c72q") pod "etcd-0" (UID: "228229c6-8ee3-45f4-a1f0-6afcb36bd85a")
Oct 11 14:11:10 minikube kubelet[4190]: W1011 14:11:10.589111 4190 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for yay/app-that-uses-etcd through plugin: invalid network status for
Oct 11 14:11:10 minikube kubelet[4190]: W1011 14:11:10.689454 4190 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for yay/etcd-0 through plugin: invalid network status for
Oct 11 14:11:10 minikube kubelet[4190]: W1011 14:11:10.969991 4190 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for yay/app-that-uses-etcd through plugin: invalid network status for
Oct 11 14:11:10 minikube kubelet[4190]: W1011 14:11:10.973654 4190 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for yay/etcd-0 through plugin: invalid network status for
Oct 11 14:11:14 minikube kubelet[4190]: W1011 14:11:14.004287 4190 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for yay/app-that-uses-etcd through plugin: invalid network status for
Oct 11 14:11:14 minikube kubelet[4190]: E1011 14:11:14.007517 4190 remote_runtime.go:295] ContainerStatus "0ca39631e5e2ea62a6772b8bf72cc35d194216958a9de0c123a8eeab58fa6c28" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 0ca39631e5e2ea62a6772b8bf72cc35d194216958a9de0c123a8eeab58fa6c28
Oct 11 14:11:14 minikube kubelet[4190]: E1011 14:11:14.007574 4190 kuberuntime_manager.go:935] getPodContainerStatuses for pod "app-that-uses-etcd_yay(069cbba6-e96b-4cc8-ab57-c439bc67bdbd)" failed: rpc error: code = Unknown desc = Error: No such container: 0ca39631e5e2ea62a6772b8bf72cc35d194216958a9de0c123a8eeab58fa6c28
Oct 11 14:11:15 minikube kubelet[4190]: W1011 14:11:15.022946 4190 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for yay/app-that-uses-etcd through plugin: invalid network status for
Oct 11 14:11:19 minikube kubelet[4190]: W1011 14:11:19.074539 4190 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for yay/etcd-0 through plugin: invalid network status for

==> storage-provisioner [13013805c556] <==

@tstromberg tstromberg added area/dns DNS issues co/kvm2-driver KVM2 driver related issues kind/support Categorizes issue or PR as a support question. labels Oct 16, 2019
@tstromberg
Contributor

Interesting error. Generally, DNS should not behave differently between drivers, although driver proxies do sometimes cause issues. We will have to see about duplicating this on our end with kvm2.

@e-dard
Author

e-dard commented Oct 17, 2019

@tstromberg thanks. Is there anything else I can do to help the investigation? I'm currently unable to run my $WORK k8s cluster on my development machine because of this issue. I'm having to instead run it with minikube/hyperkit on my drastically underpowered MBP, which causes much sadness...

@e-dard
Author

e-dard commented Oct 28, 2019

Ping. Does anyone have any thoughts on things I can do to help with this?

@e-dard
Author

e-dard commented Nov 1, 2019

I have confirmed that this still happens on minikube 1.5.2 and docker-machine-driver-kvm2 1.4.

@tstromberg tstromberg changed the title init container DNS problems with kvm2 driver on Arch init container is unable to resolve pods Nov 6, 2019
@medyagh medyagh added the kind/bug Categorizes issue or PR as related to a bug. label Nov 6, 2019
@medyagh
Member

medyagh commented Nov 6, 2019

@e-dard
thanks for providing steps to replicate this issue, and sorry that you're hitting it. I would love to find out the root cause of this!

I am trying to replicate this issue myself. How did you exec into the init container while it's already dead (exited)?

$ docker ps  | grep "init"

$ docker ps -a  | grep "init"
dd73e8a51096        busybox                         "sh -c 'until nc -z …"   6 minutes ago       Exited (0) 6 minutes ago                       k8s_init-myservice_app-that-uses-etcd_yay_26185742-26a6-470e-88c0-20159a967f41_0

Another question: is there anything different in your KVM machine, like corporate network settings?
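For context, the init container shown in the `docker ps -a` output above is just a TCP connect probe (`nc -z host port`). A minimal Python sketch of the same check, useful for debugging the probe outside the cluster — the host and port below are placeholders, not values from the issue's manifest:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds, which is all `nc -z` tests."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # connection refused, timed out, or name unresolvable
        return False
```

Note that a name-resolution failure (`socket.gaierror`) is also an `OSError`, so this returns `False` both when DNS fails and when the port is closed — the same ambiguity that makes the init container's failure here look like a connectivity problem when it is actually a DNS one.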

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 4, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 5, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
