docker: Ingress not exposed on MacOS #7332
I suspect something may be missing to forward the port with the docker driver. I don't know if this is a documentation issue or an implementation issue. @medyagh - can you comment? Do you mind trying to see if it works properly with …
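One quick way to check what the docker driver actually publishes to the host (a sketch, assuming the node container uses the default name minikube):

docker port minikube
# lists the container's published port mappings; on macOS only these forwarded
# ports are reachable from the host, since the docker bridge network itself is
# not routable there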
Works just fine with …
the ingress addon is currently not supported with the docker driver on macOS. This is due to a limitation of the docker bridge on Mac. We could add some workaround for the ingress addon on the docker driver on Mac and Windows. Sorry that you faced this issue; the LEAST we could do is not allow the user to enable this addon on the docker driver on macOS for now, until it is fixed @jkornata
cc: @josedonizetti
Thank you @medyagh
@medyagh, could you please re-open this until the defect is fixed?
@medyagh +1
This issue is referenced in the CLI output when trying to enable the ingress addon, yet the status is closed? It would probably be better to reopen it @medyagh
I think the bot heard it wrong; the comment said not to close this bug.
I've been trying to enable ingress on Windows 10. When I try, I get the following error:
I believe this error message was introduced as part of fix #7393, which redirects to this error. Is this the correct ticket? If so, why does the ticket only refer to macOS? If not, what is the correct ticket? I'm sorry if this comment doesn't have anything to do with this ticket, but I reached a dead end with this error and I wanted to make sure I'm tracking it correctly.
Yes, this error message will show up for the docker driver on both macOS and Windows, since this ticket applies to both. This is still an outstanding bug we need to address.
@oconnelc have you tried the suggestion that minikube gave?
This issue still exists. If you want a small workaround, I suggest you install VirtualBox. If you get the below error on macOS, then try the following steps.
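A minimal sketch of that VirtualBox workaround, assuming VirtualBox is already installed (the exact steps from the original comment were not preserved):

minikube delete                      # remove the existing docker-driver cluster
minikube start --driver=virtualbox   # recreate it on the virtualbox driver
minikube addons enable ingress       # the ingress addon works with this driver
minikube ip                          # ingress is reachable at this IP from the host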
Our next release should be at the end of August. |
Any update regarding this issue? |
Release is underway right now, 1.23.0 will be released today. |
@sharifelgamal @zhan9san We ran into an issue due to this change. We were running K8s 1.17.4 using minikube version 1.23.0.
I believe the reason is that in K8s 1.17, Ingress is only present in v1beta1 - https://v1-17.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#ingresslist-v1beta1-networking-k8s-io The PR tries to list the ingress resources using the v1 apiVersion, which is not present in versions prior to K8s 1.19.
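A quick way to confirm which API version serves Ingress on a given cluster (a sketch using standard kubectl):

kubectl api-resources --api-group=networking.k8s.io
# on K8s 1.17 this lists ingresses under networking.k8s.io/v1beta1;
# the networking.k8s.io/v1 Ingress API is only served from 1.19 onwards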
Hi @mdhume Would it be possible to upgrade the k8s cluster? Supporting older versions would introduce more logic for backward compatibility.
Is …
For the docker driver, yes.
@zhan9san unfortunately we won't be able to, since that is the version we are running currently. One option could be to revert to the previous behavior, i.e. disable ingress support if the K8s version detected is prior to 1.19.
How about adding an option like minikube tunnel --service-only or something else to set up tunnels for 'service' only?
@zhan9san that would work too 👍
To follow the convention of existing flags, I'd like to implement the following command.
while
But this would have an impact on ingress on non-Mac systems. Do you have any concerns?
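For illustration, hypothetical usage of the proposed flag (not an existing option; the name comes from the suggestion above):

minikube tunnel --service-only   # hypothetical: tunnel LoadBalancer services only, leave ingress alone
minikube tunnel                  # existing behavior: tunnel services (and, per #12089 referenced below, ingress on the docker driver on macOS)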
What helped me:
(Since …
The referenced issue (kubernetes/minikube#7332) appears to have been resolved by kubernetes/minikube#12089
This is (at the time of posting) still the only way to make it work on Apple Silicon (M1, 2020), using:
Is there a specific reason the workaround cannot be incorporated into master? To date, the Apple Silicon virtualization drivers are still limited, so working with docker is rather useful, and this workaround literally saved my day.
What is the workaround for using ingress with minikube on the docker driver on an M1-chip macOS?
The one described by @zhan9san above
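A sketch of that tunnel-based workaround on the docker driver on macOS (the hostname below is hypothetical; tunnel may prompt for a password to bind ports 80/443):

minikube addons enable ingress
minikube tunnel                   # run in a separate terminal; exposes ingress on 127.0.0.1 on the docker driver on macOS, per #12089
echo "127.0.0.1 hello-world.info" | sudo tee -a /etc/hosts
curl http://hello-world.info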
Oh, I have been trying to expose a NodePort of a service to my host machine running minikube (macOS) and now I see this open issue. Well, is there a workaround? It really is the most basic thing to try to reach minikube with a client outside of minikube, isn't it? I really wonder how this can be, but maybe I am missing the point of why someone would set up a cluster without having access to it.
I was able to get ingress and ingress-dns exposed properly on minikube with the docker driver by using docker-mac-net-connect
@michelesr I am not using ingress but a regular NodePort. It is only possible using the VirtualBox driver on Intel-based Macs.
That would work with the tool I linked. It basically allows you to reach docker containers using their IP addresses, just like you would on a Linux machine, and so makes the minikube IP reachable from the host and your node ports accessible.
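For reference, installing and starting the tool looks roughly like this (commands as I recall from the docker-mac-net-connect README; verify against the project page):

brew install chipmk/tap/docker-mac-net-connect
sudo brew services start chipmk/tap/docker-mac-net-connect
# afterwards the node IP is routable from the host, so NodePorts work directly:
curl "$(minikube ip):30080"       # 30080 is a hypothetical NodePort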
@michelesr I tried it out already and unfortunately it didn't work for me either. Still, thank you very much for trying to help.
@michelesr Thanks for sharing - that tool is incredibly useful. It's the only way I've been able to get ingress-dns to work on a Mac with an ARM64 chip.
If this issue is closed, why does the documentation say that ingress doesn't work for Docker on Windows?
Hi @rahil-p, do you mind sharing the steps you took?
Steps to reproduce the issue:
I can't access ingress on a fresh installation. It's on macOS, with Docker for Mac, and Kubernetes disabled in Docker for Mac.
minikube start --vm-driver=docker --kubernetes-version v1.14.0
minikube addons enable ingress
The issue is not affected by the Kubernetes version; it also happens on the newest. I've tried following this guide (sketched below) but it doesn't work without the ingress service. I thought that, as suggested here, adding the service manually would fix the issue, but it doesn't.
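For context, the guide's setup is roughly the following (a sketch; the image and Ingress host match the logs below, the image tag is assumed):

kubectl create deployment web --image=gcr.io/google-samples/hello-app:1.0
kubectl expose deployment web --type=NodePort --port=8080
# plus an Ingress routing host hello-world.info to the web service
curl "$(minikube ip):$(kubectl get svc web -o jsonpath='{.spec.ports[0].nodePort}')"
# this curl fails on the docker driver on macOS, as described below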
But if I try to curl 172.18.0.5:8080, it cannot connect. curl 172.18.0.4 doesn't work either. Neither does curl 172.17.0.2 or curl hello-world.info (with /etc/hosts modified).
Full output of failed command:
Full output of minikube start command used, if not already included:
😄 minikube v1.9.0 on Darwin 10.12.6
✨ Using the docker driver based on user configuration
🚜 Pulling base image ...
🔥 Creating Kubernetes in docker container with (CPUs=2) (2 available), Memory=1989MB (1989MB available) ...
🐳 Preparing Kubernetes v1.14.0 on Docker 19.03.2...
▪ kubeadm.pod-network-cidr=10.244.0.0/16
🌟 Enabling addons: default-storageclass, storage-provisioner
🏄 Done! kubectl is configured to use "minikube".
❗ /usr/local/bin/kubectl is v1.18.0, which may be incompatible with Kubernetes v1.14.0.
💡 You can also use 'minikube kubectl -- get pods' to invoke a matching version
Optional: Full output of minikube logs command:
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
49548f067e0fb gcr.io/google-samples/hello-app@sha256:c62ead5b8c15c231f9e786250b07909daf6c266d0fcddd93fea882eb722c3be4 14 minutes ago Running web 0 cc3588d4252ea
6e356d38f6644 quay.io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:d0b22f715fcea5598ef7f869d308b55289a3daaa12922fa52a1abf17703c88e7 19 minutes ago Running nginx-ingress-controller 0 0254de39b3801
fdf3890ae6cad kindest/kindnetd@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555 23 minutes ago Running kindnet-cni 0 fdc9efa64e13c
5987d4d29db7b eb516548c180f 24 minutes ago Running coredns 0 a48e9875ea2d7
6a507738d34a6 eb516548c180f 24 minutes ago Running coredns 0 55124d3804fb1
31fa7a07f95ed 5cd54e388abaf 24 minutes ago Running kube-proxy 0 00fed65b89e57
791695c1a1a89 4689081edb103 24 minutes ago Running storage-provisioner 0 4e5d751c70346
b82aa41df356b 2c4adeb21b4ff 24 minutes ago Running etcd 0 09a6124253491
636cbc28b02a5 00638a24688b0 24 minutes ago Running kube-scheduler 0 59929901cfb8d
a15a83b0d226f ecf910f40d6e0 24 minutes ago Running kube-apiserver 0 1702fda9a509f
c3fe71e5fc3a8 b95b1efa0436b 24 minutes ago Running kube-controller-manager 0 d91b6fdb43251
==> coredns [5987d4d29db7] <==
.:53
2020-03-31T08:32:08.712Z [INFO] CoreDNS-1.3.1
2020-03-31T08:32:08.713Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2020-03-31T08:32:08.713Z [INFO] plugin/reload: Running configuration MD5 = 599b9eb76b8c147408aed6a0bbe0f669
==> coredns [6a507738d34a] <==
.:53
2020-03-31T08:32:08.711Z [INFO] CoreDNS-1.3.1
2020-03-31T08:32:08.711Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2020-03-31T08:32:08.711Z [INFO] plugin/reload: Running configuration MD5 = 599b9eb76b8c147408aed6a0bbe0f669
==> describe nodes <==
Name: minikube
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=minikube
kubernetes.io/os=linux
minikube.k8s.io/commit=8af1ea66d8a0cb7202a44a91b6dc775577868ed1
minikube.k8s.io/name=minikube
minikube.k8s.io/updated_at=2020_03_31T10_31_49_0700
minikube.k8s.io/version=v1.9.0
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 31 Mar 2020 08:31:43 +0000
Taints:
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
MemoryPressure False Tue, 31 Mar 2020 08:55:44 +0000 Tue, 31 Mar 2020 08:31:35 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 31 Mar 2020 08:55:44 +0000 Tue, 31 Mar 2020 08:31:35 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 31 Mar 2020 08:55:44 +0000 Tue, 31 Mar 2020 08:31:35 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 31 Mar 2020 08:55:44 +0000 Tue, 31 Mar 2020 08:31:35 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 172.17.0.2
Hostname: minikube
Capacity:
cpu: 2
ephemeral-storage: 61255492Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 2037620Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 61255492Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 2037620Ki
pods: 110
System Info:
Machine ID: 8545c5f5c4eb42e884baacaf5fa1f5fb
System UUID: e80618a3-0f92-4608-98b0-196f69922a9e
Boot ID: 598d6f3e-313e-44ba-867d-08468399f9d3
Kernel Version: 4.19.76-linuxkit
OS Image: Ubuntu 19.10
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://19.3.2
Kubelet Version: v1.14.0
Kube-Proxy Version: v1.14.0
PodCIDR: 10.244.0.0/24
Non-terminated Pods: (11 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
default web 0 (0%) 0 (0%) 0 (0%) 0 (0%) 14m
kube-system coredns-fb8b8dccf-bktjn 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 24m
kube-system coredns-fb8b8dccf-lbpbz 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 24m
kube-system etcd-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 23m
kube-system kindnet-hcl42 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 24m
kube-system kube-apiserver-minikube 250m (12%) 0 (0%) 0 (0%) 0 (0%) 23m
kube-system kube-controller-manager-minikube 200m (10%) 0 (0%) 0 (0%) 0 (0%) 23m
kube-system kube-proxy-m7v6p 0 (0%) 0 (0%) 0 (0%) 0 (0%) 24m
kube-system kube-scheduler-minikube 100m (5%) 0 (0%) 0 (0%) 0 (0%) 23m
kube-system nginx-ingress-controller-b84556868-kh8n6 0 (0%) 0 (0%) 0 (0%) 0 (0%) 21m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 24m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
cpu 850m (42%) 100m (5%)
memory 190Mi (9%) 390Mi (19%)
ephemeral-storage 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
Normal NodeHasSufficientMemory 24m (x8 over 24m) kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 24m (x8 over 24m) kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 24m (x7 over 24m) kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Warning readOnlySysFS 24m kube-proxy, minikube CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)
Normal Starting 24m kube-proxy, minikube Starting kube-proxy.
==> dmesg <==
[Mar31 07:34] tsc: Unable to calibrate against PIT
[ +0.597814] virtio-pci 0000:00:01.0: can't derive routing for PCI INT A
[ +0.001924] virtio-pci 0000:00:01.0: PCI INT A: no GSI
[ +0.005139] virtio-pci 0000:00:07.0: can't derive routing for PCI INT A
[ +0.001680] virtio-pci 0000:00:07.0: PCI INT A: no GSI
[ +0.058545] Hangcheck: starting hangcheck timer 0.9.1 (tick is 180 seconds, margin is 60 seconds).
[ +0.022298] ahci 0000:00:02.0: can't derive routing for PCI INT A
[ +0.001507] ahci 0000:00:02.0: PCI INT A: no GSI
[ +0.683851] i8042: Can't read CTR while initializing i8042
[ +0.001417] i8042: probe of i8042 failed with error -5
[ +0.006370] ACPI Error: Could not enable RealTimeClock event (20180810/evxfevnt-184)
[ +0.001774] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20180810/evxface-620)
[ +0.260204] ata1.00: ATA Identify Device Log not supported
[ +0.001281] ata1.00: Security Log not supported
[ +0.002459] ata1.00: ATA Identify Device Log not supported
[ +0.001264] ata1.00: Security Log not supported
[ +0.154008] FAT-fs (sr0): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[ +0.021992] FAT-fs (sr0): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[Mar31 07:35] FAT-fs (sr2): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[ +0.077989] FAT-fs (sr2): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[Mar31 07:40] hrtimer: interrupt took 2316993 ns
[Mar31 07:47] tee (5973): /proc/5576/oom_adj is deprecated, please use /proc/5576/oom_score_adj instead.
==> etcd [b82aa41df356] <==
2020-03-31 08:31:34.136909 I | etcdmain: etcd Version: 3.3.10
2020-03-31 08:31:34.139611 I | etcdmain: Git SHA: 27fc7e2
2020-03-31 08:31:34.139688 I | etcdmain: Go Version: go1.10.4
2020-03-31 08:31:34.140806 I | etcdmain: Go OS/Arch: linux/amd64
2020-03-31 08:31:34.141644 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
2020-03-31 08:31:34.144109 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-03-31 08:31:34.161365 I | embed: listening for peers on https://172.17.0.2:2380
2020-03-31 08:31:34.162964 I | embed: listening for client requests on 127.0.0.1:2379
2020-03-31 08:31:34.163139 I | embed: listening for client requests on 172.17.0.2:2379
2020-03-31 08:31:34.193488 I | etcdserver: name = minikube
2020-03-31 08:31:34.194252 I | etcdserver: data dir = /var/lib/minikube/etcd
2020-03-31 08:31:34.195167 I | etcdserver: member dir = /var/lib/minikube/etcd/member
2020-03-31 08:31:34.195636 I | etcdserver: heartbeat = 100ms
2020-03-31 08:31:34.195985 I | etcdserver: election = 1000ms
2020-03-31 08:31:34.196385 I | etcdserver: snapshot count = 10000
2020-03-31 08:31:34.196656 I | etcdserver: advertise client URLs = https://172.17.0.2:2379
2020-03-31 08:31:34.197009 I | etcdserver: initial advertise peer URLs = https://172.17.0.2:2380
2020-03-31 08:31:34.197237 I | etcdserver: initial cluster = minikube=https://172.17.0.2:2380
2020-03-31 08:31:34.236216 I | etcdserver: starting member b8e14bda2255bc24 in cluster 38b0e74a458e7a1f
2020-03-31 08:31:34.236303 I | raft: b8e14bda2255bc24 became follower at term 0
2020-03-31 08:31:34.236320 I | raft: newRaft b8e14bda2255bc24 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2020-03-31 08:31:34.236334 I | raft: b8e14bda2255bc24 became follower at term 1
2020-03-31 08:31:34.340367 W | auth: simple token is not cryptographically signed
2020-03-31 08:31:34.401667 I | etcdserver: starting server... [version: 3.3.10, cluster version: to_be_decided]
2020-03-31 08:31:34.409456 I | etcdserver: b8e14bda2255bc24 as single-node; fast-forwarding 9 ticks (election ticks 10)
2020-03-31 08:31:34.424575 I | etcdserver/membership: added member b8e14bda2255bc24 [https://172.17.0.2:2380] to cluster 38b0e74a458e7a1f
2020-03-31 08:31:34.442258 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-03-31 08:31:34.444013 I | embed: listening for metrics on http://172.17.0.2:2381
2020-03-31 08:31:34.444133 I | embed: listening for metrics on http://127.0.0.1:2381
2020-03-31 08:31:34.702254 I | raft: b8e14bda2255bc24 is starting a new election at term 1
2020-03-31 08:31:34.702335 I | raft: b8e14bda2255bc24 became candidate at term 2
2020-03-31 08:31:34.702368 I | raft: b8e14bda2255bc24 received MsgVoteResp from b8e14bda2255bc24 at term 2
2020-03-31 08:31:34.702389 I | raft: b8e14bda2255bc24 became leader at term 2
2020-03-31 08:31:34.702402 I | raft: raft.node: b8e14bda2255bc24 elected leader b8e14bda2255bc24 at term 2
2020-03-31 08:31:34.931189 I | etcdserver: published {Name:minikube ClientURLs:[https://172.17.0.2:2379]} to cluster 38b0e74a458e7a1f
2020-03-31 08:31:35.006979 I | etcdserver: setting up the initial cluster version to 3.3
2020-03-31 08:31:35.060823 I | embed: ready to serve client requests
2020-03-31 08:31:35.391969 N | etcdserver/membership: set the initial cluster version to 3.3
2020-03-31 08:31:35.432869 I | etcdserver/api: enabled capabilities for version 3.3
2020-03-31 08:31:35.461278 I | embed: ready to serve client requests
2020-03-31 08:31:35.497338 I | embed: serving client requests on 127.0.0.1:2379
2020-03-31 08:31:35.498302 I | embed: serving client requests on 172.17.0.2:2379
proto: no coders for int
proto: no encoder for ValueSize int [GetProperties]
2020-03-31 08:32:26.935952 W | etcdserver: request "header:<ID:13557085228049851706 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/masterleases/172.17.0.2" mod_revision:439 > success:<request_put:<key:"/registry/masterleases/172.17.0.2" value_size:65 lease:4333713191195075896 >> failure:<request_range:<key:"/registry/masterleases/172.17.0.2" > >>" with result "size:16" took too long (262.086409ms) to execute
2020-03-31 08:32:26.936285 W | etcdserver: read-only range request "key:"/registry/services/endpoints/kube-system/kube-scheduler" " with result "range_response_count:1 size:430" took too long (178.640542ms) to execute
2020-03-31 08:36:01.834832 W | etcdserver: read-only range request "key:"/registry/services/endpoints/kube-system/kube-controller-manager" " with result "range_response_count:1 size:448" took too long (537.346776ms) to execute
2020-03-31 08:36:01.837558 W | etcdserver: read-only range request "key:"/registry/deployments" range_end:"/registry/deploymentt" count_only:true " with result "range_response_count:0 size:7" took too long (237.353514ms) to execute
2020-03-31 08:36:50.822689 W | etcdserver: read-only range request "key:"/registry/persistentvolumeclaims" range_end:"/registry/persistentvolumeclaimt" count_only:true " with result "range_response_count:0 size:5" took too long (268.036763ms) to execute
2020-03-31 08:36:50.823106 W | etcdserver: read-only range request "key:"/registry/leases/kube-node-lease/minikube" " with result "range_response_count:1 size:289" took too long (313.963517ms) to execute
2020-03-31 08:36:52.839697 W | etcdserver: read-only range request "key:"/registry/runtimeclasses" range_end:"/registry/runtimeclasset" count_only:true " with result "range_response_count:0 size:5" took too long (521.345081ms) to execute
2020-03-31 08:41:36.476771 I | mvcc: store.index: compact 792
2020-03-31 08:41:36.485267 I | mvcc: finished scheduled compaction at 792 (took 4.328598ms)
2020-03-31 08:46:36.273524 I | mvcc: store.index: compact 1204
2020-03-31 08:46:36.277749 I | mvcc: finished scheduled compaction at 1204 (took 1.397204ms)
2020-03-31 08:51:36.069722 I | mvcc: store.index: compact 1625
2020-03-31 08:51:36.071463 I | mvcc: finished scheduled compaction at 1625 (took 836.551µs)
==> kernel <==
08:56:17 up 1:21, 0 users, load average: 0.33, 0.36, 0.53
Linux minikube 4.19.76-linuxkit #1 SMP Thu Oct 17 19:31:58 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 19.10"
==> kube-apiserver [a15a83b0d226] <==
I0331 08:55:48.541814 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:55:48.542070 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:55:49.542325 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:55:49.542511 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:55:50.543443 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:55:50.543681 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:55:51.545228 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:55:51.545400 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:55:52.548788 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:55:52.549108 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:55:53.550212 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:55:53.550512 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:55:54.550920 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:55:54.559542 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:55:55.552142 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:55:55.562253 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:55:56.552804 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:55:56.563460 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:55:57.554372 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:55:57.564611 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:55:58.555926 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:55:58.565912 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:55:59.557787 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:55:59.567042 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:00.558500 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:00.567752 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:01.559200 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:01.568257 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:02.560176 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:02.568718 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:03.560969 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:03.569388 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:04.562444 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:04.570431 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:05.563591 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:05.571439 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:06.542265 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:06.551395 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:07.545431 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:07.551901 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:08.546286 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:08.552996 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:09.547546 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:09.553592 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:10.553217 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:10.554171 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:11.554591 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:11.554731 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:12.555210 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:12.555426 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:13.555827 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:13.556101 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:14.556416 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:14.556718 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:15.557116 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:15.557383 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:16.558507 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:16.558968 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0331 08:56:17.559695 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0331 08:56:17.565042 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
==> kube-controller-manager [c3fe71e5fc3a] <==
I0331 08:32:01.282471 1 controllermanager.go:497] Started "daemonset"
W0331 08:32:01.282653 1 controllermanager.go:489] Skipping "root-ca-cert-publisher"
I0331 08:32:01.738243 1 controllermanager.go:497] Started "horizontalpodautoscaling"
I0331 08:32:01.739200 1 horizontal.go:156] Starting HPA controller
I0331 08:32:01.741221 1 controller_utils.go:1027] Waiting for caches to sync for HPA controller
I0331 08:32:01.989670 1 controllermanager.go:497] Started "tokencleaner"
W0331 08:32:01.990240 1 controllermanager.go:489] Skipping "ttl-after-finished"
E0331 08:32:01.990935 1 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
I0331 08:32:01.990176 1 tokencleaner.go:116] Starting token cleaner controller
I0331 08:32:01.994933 1 controller_utils.go:1027] Waiting for caches to sync for token_cleaner controller
W0331 08:32:02.083571 1 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0331 08:32:02.086826 1 controller_utils.go:1034] Caches are synced for bootstrap_signer controller
I0331 08:32:02.087999 1 controller_utils.go:1034] Caches are synced for deployment controller
I0331 08:32:02.089057 1 controller_utils.go:1034] Caches are synced for certificate controller
I0331 08:32:02.092192 1 controller_utils.go:1034] Caches are synced for ReplicaSet controller
I0331 08:32:02.093670 1 controller_utils.go:1034] Caches are synced for endpoint controller
I0331 08:32:02.093757 1 controller_utils.go:1034] Caches are synced for certificate controller
I0331 08:32:02.096554 1 controller_utils.go:1034] Caches are synced for token_cleaner controller
I0331 08:32:02.132764 1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"0f9b3570-732a-11ea-9f29-02429a45b1b2", APIVersion:"apps/v1", ResourceVersion:"197", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-fb8b8dccf to 2
I0331 08:32:02.135841 1 controller_utils.go:1034] Caches are synced for node controller
I0331 08:32:02.135926 1 range_allocator.go:157] Starting range CIDR allocator
I0331 08:32:02.136016 1 controller_utils.go:1027] Waiting for caches to sync for cidrallocator controller
I0331 08:32:02.139985 1 controller_utils.go:1034] Caches are synced for GC controller
I0331 08:32:02.142975 1 controller_utils.go:1034] Caches are synced for HPA controller
I0331 08:32:02.143886 1 controller_utils.go:1034] Caches are synced for TTL controller
I0331 08:32:02.153627 1 controller_utils.go:1034] Caches are synced for PV protection controller
I0331 08:32:02.156638 1 controller_utils.go:1034] Caches are synced for taint controller
I0331 08:32:02.156788 1 node_lifecycle_controller.go:1159] Initializing eviction metric for zone:
W0331 08:32:02.156892 1 node_lifecycle_controller.go:833] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0331 08:32:02.157068 1 node_lifecycle_controller.go:1059] Controller detected that zone is now in state Normal.
I0331 08:32:02.158108 1 taint_manager.go:198] Starting NoExecuteTaintManager
I0331 08:32:02.160204 1 event.go:209] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"0cf13fa1-732a-11ea-9f29-02429a45b1b2", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
I0331 08:32:02.170846 1 controller_utils.go:1034] Caches are synced for job controller
I0331 08:32:02.173867 1 log.go:172] [INFO] signed certificate with serial number 348836518710746890614976265293012047567942960152
I0331 08:32:02.190539 1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-fb8b8dccf", UID:"18426705-732a-11ea-9f29-02429a45b1b2", APIVersion:"apps/v1", ResourceVersion:"327", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-fb8b8dccf-lbpbz
I0331 08:32:02.221681 1 controller_utils.go:1034] Caches are synced for service account controller
I0331 08:32:02.225773 1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-fb8b8dccf", UID:"18426705-732a-11ea-9f29-02429a45b1b2", APIVersion:"apps/v1", ResourceVersion:"327", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-fb8b8dccf-bktjn
I0331 08:32:02.236302 1 controller_utils.go:1034] Caches are synced for cidrallocator controller
I0331 08:32:02.260197 1 controller_utils.go:1034] Caches are synced for namespace controller
I0331 08:32:02.319173 1 range_allocator.go:310] Set node minikube PodCIDR to 10.244.0.0/24
I0331 08:32:02.483880 1 controller_utils.go:1034] Caches are synced for daemon sets controller
I0331 08:32:02.553898 1 controller_utils.go:1034] Caches are synced for ClusterRoleAggregator controller
I0331 08:32:02.561597 1 controller_utils.go:1034] Caches are synced for persistent volume controller
I0331 08:32:02.594321 1 controller_utils.go:1034] Caches are synced for attach detach controller
I0331 08:32:02.623154 1 controller_utils.go:1034] Caches are synced for stateful set controller
I0331 08:32:02.626184 1 controller_utils.go:1034] Caches are synced for expand controller
I0331 08:32:02.641836 1 controller_utils.go:1034] Caches are synced for PVC protection controller
I0331 08:32:02.675653 1 controller_utils.go:1034] Caches are synced for disruption controller
I0331 08:32:02.675749 1 disruption.go:294] Sending events to api server.
I0331 08:32:02.678210 1 controller_utils.go:1034] Caches are synced for ReplicationController controller
I0331 08:32:02.693864 1 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
I0331 08:32:02.724773 1 event.go:209] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"0fb881c6-732a-11ea-9f29-02429a45b1b2", APIVersion:"apps/v1", ResourceVersion:"208", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-m7v6p
I0331 08:32:02.753727 1 event.go:209] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"109cdd5b-732a-11ea-9f29-02429a45b1b2", APIVersion:"apps/v1", ResourceVersion:"240", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-hcl42
I0331 08:32:02.815582 1 controller_utils.go:1034] Caches are synced for garbage collector controller
I0331 08:32:02.815791 1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0331 08:32:02.863222 1 controller_utils.go:1034] Caches are synced for resource quota controller
I0331 08:32:02.894120 1 controller_utils.go:1034] Caches are synced for garbage collector controller
E0331 08:32:03.056559 1 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
I0331 08:35:06.194335 1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"nginx-ingress-controller", UID:"85f7245a-732a-11ea-9f29-02429a45b1b2", APIVersion:"apps/v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-ingress-controller-b84556868 to 1
I0331 08:35:06.243687 1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"nginx-ingress-controller-b84556868", UID:"85f8a669-732a-11ea-9f29-02429a45b1b2", APIVersion:"apps/v1", ResourceVersion:"669", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-ingress-controller-b84556868-kh8n6
==> kube-proxy [31fa7a07f95e] <==
W0331 08:32:06.518547 1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
I0331 08:32:06.672751 1 server_others.go:148] Using iptables Proxier.
I0331 08:32:06.675746 1 server_others.go:178] Tearing down inactive rules.
I0331 08:32:07.027370 1 server.go:555] Version: v1.14.0
I0331 08:32:07.066710 1 conntrack.go:52] Setting nf_conntrack_max to 131072
E0331 08:32:07.067346 1 conntrack.go:127] sysfs is not writable: {Device:sysfs Path:/sys Type:sysfs Opts:[ro nosuid nodev noexec relatime] Freq:0 Pass:0} (mount options are [ro nosuid nodev noexec relatime])
I0331 08:32:07.067633 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0331 08:32:07.067763 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0331 08:32:07.068184 1 config.go:202] Starting service config controller
I0331 08:32:07.068371 1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I0331 08:32:07.089152 1 config.go:102] Starting endpoints config controller
I0331 08:32:07.089722 1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I0331 08:32:07.195756 1 controller_utils.go:1034] Caches are synced for endpoints config controller
I0331 08:32:07.269068 1 controller_utils.go:1034] Caches are synced for service config controller
==> kube-scheduler [636cbc28b02a] <==
I0331 08:31:35.938018 1 serving.go:319] Generated self-signed cert in-memory
W0331 08:31:36.608645 1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0331 08:31:36.608726 1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0331 08:31:36.608757 1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0331 08:31:36.621912 1 server.go:142] Version: v1.14.0
I0331 08:31:36.625207 1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W0331 08:31:36.638219 1 authorization.go:47] Authorization is disabled
W0331 08:31:36.638287 1 authentication.go:55] Authentication is disabled
I0331 08:31:36.638311 1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
I0331 08:31:36.640459 1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
E0331 08:31:43.052618 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0331 08:31:43.053184 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0331 08:31:43.053690 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0331 08:31:43.055118 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0331 08:31:43.055202 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0331 08:31:43.055360 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0331 08:31:43.055806 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0331 08:31:43.055849 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0331 08:31:43.056810 1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0331 08:31:43.070097 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0331 08:31:44.058160 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0331 08:31:44.059737 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0331 08:31:44.059875 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0331 08:31:44.069524 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0331 08:31:44.070192 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0331 08:31:44.073620 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0331 08:31:44.073938 1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0331 08:31:44.074342 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0331 08:31:44.080776 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0331 08:31:44.081063 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
I0331 08:31:45.926301 1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
I0331 08:31:46.026597 1 controller_utils.go:1034] Caches are synced for scheduler controller
I0331 08:31:46.027034 1 leaderelection.go:217] attempting to acquire leader lease kube-system/kube-scheduler...
I0331 08:31:46.066937 1 leaderelection.go:227] successfully acquired lease kube-system/kube-scheduler
==> kubelet <==
-- Logs begin at Tue 2020-03-31 08:29:37 UTC, end at Tue 2020-03-31 08:56:19 UTC. --
Mar 31 08:31:43 minikube kubelet[1618]: E0331 08:31:43.308177 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a249aa607", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f3e607, ext:711236661, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f3e607, ext:711236661, loc:(*time.Location)(0x7ff88e0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:43 minikube kubelet[1618]: E0331 08:31:43.363793 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a249a41c8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f381c8, ext:711211006, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f381c8, ext:711211006, loc:(*time.Location)(0x7ff88e0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:43 minikube kubelet[1618]: E0331 08:31:43.419479 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a249abc1a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f3fc1a, ext:711242326, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f3fc1a, ext:711242326, loc:(*time.Location)(0x7ff88e0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:43 minikube kubelet[1618]: E0331 08:31:43.481355 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a249a41c8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f381c8, ext:711211006, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd84f838bb8, ext:999230435, loc:(*time.Location)(0x7ff88e0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:43 minikube kubelet[1618]: E0331 08:31:43.543899 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a249aa607", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f3e607, ext:711236661, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd84f83b36e, ext:999240601, loc:(*time.Location)(0x7ff88e0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:43 minikube kubelet[1618]: E0331 08:31:43.627373 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a249abc1a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f3fc1a, ext:711242326, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd84f83d361, ext:999248781, loc:(*time.Location)(0x7ff88e0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:43 minikube kubelet[1618]: E0331 08:31:43.692428 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a249abc1a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f3fc1a, ext:711242326, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd85f75d96f, ext:1266768277, loc:(*time.Location)(0x7ff88e0)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:43 minikube kubelet[1618]: E0331 08:31:43.851375 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a249a41c8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f381c8, ext:711211006, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd85f75aebd, ext:1266757353, loc:(*time.Location)(0x7ff88e0)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:44 minikube kubelet[1618]: E0331 08:31:44.249636 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a249aa607", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f3e607, ext:711236661, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd85f75ca73, ext:1266764442, loc:(*time.Location)(0x7ff88e0)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:44 minikube kubelet[1618]: E0331 08:31:44.452700 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a4a335340", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd863f1c940, ext:1341999475, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd863f1c940, ext:1341999475, loc:(*time.Location)(0x7ff88e0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:44 minikube kubelet[1618]: E0331 08:31:44.847316 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a249a41c8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f381c8, ext:711211006, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd86b2d1402, ext:1463325782, loc:(*time.Location)(0x7ff88e0)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:45 minikube kubelet[1618]: E0331 08:31:45.248634 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a249aa607", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f3e607, ext:711236661, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd86b2eae4a, ext:1463430769, loc:(*time.Location)(0x7ff88e0)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:45 minikube kubelet[1618]: E0331 08:31:45.655875 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a249abc1a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f3fc1a, ext:711242326, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd86b33e875, ext:1463773344, loc:(*time.Location)(0x7ff88e0)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:46 minikube kubelet[1618]: E0331 08:31:46.055732 1618 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.1601565a249a41c8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd839f381c8, ext:711211006, loc:(*time.Location)(0x7ff88e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf98ddd893eb4f75, ext:2073139617, loc:(*time.Location)(0x7ff88e0)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Mar 31 08:31:49 minikube kubelet[1618]: E0331 08:31:49.648591 1618 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
Mar 31 08:31:49 minikube kubelet[1618]: E0331 08:31:49.657552 1618 helpers.go:721] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
Mar 31 08:31:59 minikube kubelet[1618]: E0331 08:31:59.692174 1618 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
Mar 31 08:31:59 minikube kubelet[1618]: E0331 08:31:59.692365 1618 helpers.go:721] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.343909 1618 kuberuntime_manager.go:946] updating runtime config through cri with podcidr 10.244.0.0/24
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.345132 1618 docker_service.go:353] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.345503 1618 kubelet_network.go:77] Setting Pod CIDR: -> 10.244.0.0/24
Mar 31 08:32:02 minikube kubelet[1618]: E0331 08:32:02.399773 1618 reflector.go:126] object-"kube-system"/"coredns": Failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object
Mar 31 08:32:02 minikube kubelet[1618]: E0331 08:32:02.402299 1618 reflector.go:126] object-"kube-system"/"coredns-token-sflpk": Failed to list *v1.Secret: secrets "coredns-token-sflpk" is forbidden: User "system:node:minikube" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.651242 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1846dd62-732a-11ea-9f29-02429a45b1b2-config-volume") pod "coredns-fb8b8dccf-lbpbz" (UID: "1846dd62-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.663192 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-sflpk" (UniqueName: "kubernetes.io/secret/184fbeb3-732a-11ea-9f29-02429a45b1b2-coredns-token-sflpk") pod "coredns-fb8b8dccf-bktjn" (UID: "184fbeb3-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.663343 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/12c200e7-732a-11ea-9f29-02429a45b1b2-tmp") pod "storage-provisioner" (UID: "12c200e7-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.663423 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/184fbeb3-732a-11ea-9f29-02429a45b1b2-config-volume") pod "coredns-fb8b8dccf-bktjn" (UID: "184fbeb3-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.666767 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-sflpk" (UniqueName: "kubernetes.io/secret/1846dd62-732a-11ea-9f29-02429a45b1b2-coredns-token-sflpk") pod "coredns-fb8b8dccf-lbpbz" (UID: "1846dd62-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.682574 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-cjc6f" (UniqueName: "kubernetes.io/secret/12c200e7-732a-11ea-9f29-02429a45b1b2-storage-provisioner-token-cjc6f") pod "storage-provisioner" (UID: "12c200e7-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.791486 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-cfg" (UniqueName: "kubernetes.io/host-path/18876401-732a-11ea-9f29-02429a45b1b2-cni-cfg") pod "kindnet-hcl42" (UID: "18876401-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.791875 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/18876401-732a-11ea-9f29-02429a45b1b2-lib-modules") pod "kindnet-hcl42" (UID: "18876401-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.792173 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kindnet-token-n82c5" (UniqueName: "kubernetes.io/secret/18876401-732a-11ea-9f29-02429a45b1b2-kindnet-token-n82c5") pod "kindnet-hcl42" (UID: "18876401-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.792706 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/18876401-732a-11ea-9f29-02429a45b1b2-xtables-lock") pod "kindnet-hcl42" (UID: "18876401-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:02 minikube kubelet[1618]: E0331 08:32:02.798128 1618 reflector.go:126] object-"kube-system"/"kindnet-token-n82c5": Failed to list *v1.Secret: secrets "kindnet-token-n82c5" is forbidden: User "system:node:minikube" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.893841 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/18869b9f-732a-11ea-9f29-02429a45b1b2-lib-modules") pod "kube-proxy-m7v6p" (UID: "18869b9f-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.895351 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/18869b9f-732a-11ea-9f29-02429a45b1b2-kube-proxy") pod "kube-proxy-m7v6p" (UID: "18869b9f-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.896545 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/18869b9f-732a-11ea-9f29-02429a45b1b2-xtables-lock") pod "kube-proxy-m7v6p" (UID: "18869b9f-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:02 minikube kubelet[1618]: I0331 08:32:02.900684 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-82nbp" (UniqueName: "kubernetes.io/secret/18869b9f-732a-11ea-9f29-02429a45b1b2-kube-proxy-token-82nbp") pod "kube-proxy-m7v6p" (UID: "18869b9f-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:32:03 minikube kubelet[1618]: E0331 08:32:03.793791 1618 secret.go:198] Couldn't get secret kube-system/coredns-token-sflpk: couldn't propagate object cache: timed out waiting for the condition
Mar 31 08:32:03 minikube kubelet[1618]: E0331 08:32:03.793998 1618 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/secret/184fbeb3-732a-11ea-9f29-02429a45b1b2-coredns-token-sflpk\" (\"184fbeb3-732a-11ea-9f29-02429a45b1b2\")" failed. No retries permitted until 2020-03-31 08:32:04.293967663 +0000 UTC m=+36.055171975 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"coredns-token-sflpk\" (UniqueName: \"kubernetes.io/secret/184fbeb3-732a-11ea-9f29-02429a45b1b2-coredns-token-sflpk\") pod \"coredns-fb8b8dccf-bktjn\" (UID: \"184fbeb3-732a-11ea-9f29-02429a45b1b2\") : couldn't propagate object cache: timed out waiting for the condition"
Mar 31 08:32:03 minikube kubelet[1618]: E0331 08:32:03.794879 1618 secret.go:198] Couldn't get secret kube-system/coredns-token-sflpk: couldn't propagate object cache: timed out waiting for the condition
Mar 31 08:32:03 minikube kubelet[1618]: E0331 08:32:03.794952 1618 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/secret/1846dd62-732a-11ea-9f29-02429a45b1b2-coredns-token-sflpk\" (\"1846dd62-732a-11ea-9f29-02429a45b1b2\")" failed. No retries permitted until 2020-03-31 08:32:04.294926895 +0000 UTC m=+36.056131206 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"coredns-token-sflpk\" (UniqueName: \"kubernetes.io/secret/1846dd62-732a-11ea-9f29-02429a45b1b2-coredns-token-sflpk\") pod \"coredns-fb8b8dccf-lbpbz\" (UID: \"1846dd62-732a-11ea-9f29-02429a45b1b2\") : couldn't propagate object cache: timed out waiting for the condition"
Mar 31 08:32:03 minikube kubelet[1618]: E0331 08:32:03.900920 1618 secret.go:198] Couldn't get secret kube-system/kindnet-token-n82c5: couldn't propagate object cache: timed out waiting for the condition
Mar 31 08:32:03 minikube kubelet[1618]: E0331 08:32:03.901234 1618 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/secret/18876401-732a-11ea-9f29-02429a45b1b2-kindnet-token-n82c5\" (\"18876401-732a-11ea-9f29-02429a45b1b2\")" failed. No retries permitted until 2020-03-31 08:32:04.401170675 +0000 UTC m=+36.162375074 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"kindnet-token-n82c5\" (UniqueName: \"kubernetes.io/secret/18876401-732a-11ea-9f29-02429a45b1b2-kindnet-token-n82c5\") pod \"kindnet-hcl42\" (UID: \"18876401-732a-11ea-9f29-02429a45b1b2\") : couldn't propagate object cache: timed out waiting for the condition"
Mar 31 08:32:04 minikube kubelet[1618]: W0331 08:32:04.418840 1618 container.go:409] Failed to create summary reader for "/system.slice/run-rfbc88cf5398744519564ad9cbf4ff678.scope": none of the resources are being tracked.
Mar 31 08:32:04 minikube kubelet[1618]: W0331 08:32:04.419588 1618 container.go:409] Failed to create summary reader for "/system.slice/run-r0435686948fa4809aafd2bfdbacf7779.scope": none of the resources are being tracked.
Mar 31 08:32:05 minikube kubelet[1618]: W0331 08:32:05.976174 1618 pod_container_deletor.go:75] Container "fdc9efa64e13c2ce2c3745c444a18be062347bf4c9dd4e17f131c14e020b9101" not found in pod's containers
Mar 31 08:32:06 minikube kubelet[1618]: W0331 08:32:06.837949 1618 pod_container_deletor.go:75] Container "55124d3804fb1e46a3df0165b6a8e99f7b1ccc3fd80da91f0645219a283f7b79" not found in pod's containers
Mar 31 08:32:06 minikube kubelet[1618]: W0331 08:32:06.868003 1618 pod_container_deletor.go:75] Container "a48e9875ea2d71897bfcb6a9d5163006cbc89e4d738c41f651c47396299b93fb" not found in pod's containers
Mar 31 08:32:08 minikube kubelet[1618]: I0331 08:32:08.373210 1618 transport.go:132] certificate rotation detected, shutting down client connections to start using new credentials
Mar 31 08:32:09 minikube kubelet[1618]: E0331 08:32:09.711731 1618 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
Mar 31 08:32:09 minikube kubelet[1618]: E0331 08:32:09.711865 1618 helpers.go:721] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
Mar 31 08:32:19 minikube kubelet[1618]: E0331 08:32:19.816629 1618 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
Mar 31 08:32:19 minikube kubelet[1618]: E0331 08:32:19.817086 1618 helpers.go:721] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
Mar 31 08:35:06 minikube kubelet[1618]: I0331 08:35:06.390555 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "nginx-ingress-token-6hbxw" (UniqueName: "kubernetes.io/secret/86005fa7-732a-11ea-9f29-02429a45b1b2-nginx-ingress-token-6hbxw") pod "nginx-ingress-controller-b84556868-kh8n6" (UID: "86005fa7-732a-11ea-9f29-02429a45b1b2")
Mar 31 08:35:07 minikube kubelet[1618]: W0331 08:35:07.441583 1618 pod_container_deletor.go:75] Container "0254de39b3801b1cdce25aea2b15a6cf57f9d4c13e50b84459be2a1b197f73aa" not found in pod's containers
Mar 31 08:41:53 minikube kubelet[1618]: E0331 08:41:53.448884 1618 reflector.go:126] object-"default"/"default-token-jp22c": Failed to list *v1.Secret: secrets "default-token-jp22c" is forbidden: User "system:node:minikube" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node "minikube" and this object
Mar 31 08:41:53 minikube kubelet[1618]: I0331 08:41:53.535630 1618 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-jp22c" (UniqueName: "kubernetes.io/secret/78b61bb0-732b-11ea-9f29-02429a45b1b2-default-token-jp22c") pod "web" (UID: "78b61bb0-732b-11ea-9f29-02429a45b1b2")
Mar 31 08:41:55 minikube kubelet[1618]: W0331 08:41:55.682086 1618 pod_container_deletor.go:75] Container "cc3588d4252ea6a8587eecc630d55d513d07e8630a4f8eb3bbffb6ed7c4bc995" not found in pod's containers
Mar 31 08:52:32 minikube kubelet[1618]: W0331 08:52:32.579484 1618 reflector.go:289] object-"kube-system"/"coredns": watch of *v1.ConfigMap ended with: too old resource version: 325 (1077)
==> storage-provisioner [791695c1a1a8] <==