
docker driver on macOS: hangs after wake from hibernation #6462

Closed · balopat opened this issue Feb 2, 2020 · 6 comments
Labels: co/docker-driver, co/runtime/docker, kind/bug, lifecycle/stale, priority/important-longterm

balopat (Contributor) commented Feb 2, 2020

The exact command to reproduce the issue:

minikube start --vm-driver=docker --container-runtime=docker 
# wait...
minikube status 
....
⚠️  /Users/balintp/dev/tools/google-cloud-sdk/bin/kubectl is version 1.13.11-dispatcher, and is incompatible with Kubernetes 1.17.2. You will need to update /Users/balintp/dev/tools/google-cloud-sdk/bin/kubectl or use 'minikube kubectl' to connect with this cluster

The full output of the command that failed:

kubectl works fine

kubectl get pods --all-namespaces
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-6955765f44-7rd8t           1/1     Running   0          37s
kube-system   coredns-6955765f44-qrvqz           1/1     Running   0          37s
kube-system   etcd-minikube                      1/1     Running   0          52s
kube-system   kindnet-rq767                      1/1     Running   0          37s
kube-system   kube-apiserver-minikube            1/1     Running   0          52s
kube-system   kube-controller-manager-minikube   1/1     Running   0          52s
kube-system   kube-proxy-psqfk                   1/1     Running   0          37s
kube-system   kube-scheduler-minikube            1/1     Running   0          52s

which shows that the apiserver is running!

But minikube status is very slow (probably because we are pointing at a stale apiserver endpoint?)
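Presumably `minikube status` probes the apiserver endpoint recorded in the kubeconfig; if that IP went stale after wake, each probe has to wait out a full connect timeout instead of failing fast, which would explain the slowness. A minimal sketch of such a TCP health probe (the function name, port, and timeout are my own for illustration, not minikube's actual code):

```python
import socket

def apiserver_reachable(host: str, port: int = 8443, timeout: float = 2.0) -> bool:
    """Try a plain TCP connect to host:port; give up after `timeout` seconds.

    A stale endpoint usually means the SYN is simply never answered, so the
    caller blocks for the full timeout rather than getting a quick refusal.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False
```

Against a closed local port this returns quickly; against a black-holed IP it burns the whole timeout, and a few such probes in a row would add up to the delay observed here.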

$ minikube status --alsologtostderr --v 10 --logtostderr                                                                                                             
W0201 21:23:19.624076    5352 root.go:244] Error reading config file at /Users/balintp/.minikube/config/config.json: open /Users/balintp/.minikube/config/config.json: no such file or directory
I0201 21:23:19.904916    5352 kubeadm.go:89] kubelet is-active: active
I0201 21:23:20.401802    5352 kubeadm.go:118] apiserver freezer: "7:freezer:/docker/39d00f61548d1716e963ccfe215a7623b45ec59f7c7df5154983ed6a6f91fda8/kubepods/burstable/pod9a370179267f6fbf6b7a1e0f95ad1e25/6ee39e375ec76cea1b2cadb2bc6d48e537e092625c600db511591102bf396355"
I0201 21:23:20.589871    5352 kubeadm.go:132] freezer state: "THAWED"
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Misconfigured

WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`

This takes about 1.3 minutes on my Mac.
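Note that the log above shows minikube reading the apiserver container's cgroup freezer state, and it comes back "THAWED" (not paused), yet status still reports the apiserver as Stopped; the verdict must therefore come from a different check, presumably the network one. A rough sketch of how the logged freezer value maps to a paused/running verdict (the helper name is my own):

```python
def apiserver_paused(freezer_state: str) -> bool:
    """Interpret a cgroup freezer.state value ("THAWED", "FREEZING", "FROZEN").

    Only FROZEN/FREEZING mean the container's processes are actually paused;
    THAWED, as logged above, means the apiserver process is running normally.
    """
    state = freezer_state.strip().strip('"').upper()
    if state not in {"THAWED", "FREEZING", "FROZEN"}:
        raise ValueError(f"unexpected freezer state: {freezer_state!r}")
    return state != "THAWED"
```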

$ minikube update-context                                                                                                                                           
🎉  minikube IP has been updated to point at 172.17.0.2

This did not help with the speed, but at least kubectl is configured correctly now...
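What `minikube update-context` does, in effect, is rewrite the cluster's server URL in the kubeconfig to the container's current IP (172.17.0.2 here). A simplified sketch of that rewrite, assuming a kubeconfig already parsed into a dict (the field names follow the kubeconfig schema, but the function itself is illustrative, not minikube's code):

```python
def update_context(kubeconfig: dict, cluster: str, new_ip: str, port: int = 8443) -> dict:
    """Point the named cluster's server entry at the new IP address."""
    for entry in kubeconfig.get("clusters", []):
        if entry["name"] == cluster:
            entry["cluster"]["server"] = f"https://{new_ip}:{port}"
    return kubeconfig

# Example: a stale entry left over from before the wake/restart
cfg = {"clusters": [{"name": "minikube",
                     "cluster": {"server": "https://127.0.0.1:32768"}}]}
update_context(cfg, "minikube", "172.17.0.2")
```

This only fixes the client side; it does nothing about the slow status probes, which matches the behavior reported above.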

minikube status --alsologtostderr --v 10 --logtostderr                                                                                                               
W0201 21:25:36.421924    5547 root.go:244] Error reading config file at /Users/balintp/.minikube/config/config.json: open /Users/balintp/.minikube/config/config.json: no such file or directory
I0201 21:25:36.670968    5547 kubeadm.go:89] kubelet is-active: active
I0201 21:25:37.151922    5547 kubeadm.go:118] apiserver freezer: "7:freezer:/docker/39d00f61548d1716e963ccfe215a7623b45ec59f7c7df5154983ed6a6f91fda8/kubepods/burstable/pod9a370179267f6fbf6b7a1e0f95ad1e25/6ee39e375ec76cea1b2cadb2bc6d48e537e092625c600db511591102bf396355"
I0201 21:25:37.325578    5547 kubeadm.go:132] freezer state: "THAWED"
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured

And now kubectl is broken :(

kubectl get pods --all-namespaces

hangs and never returns...

The output of the minikube logs command:

==> Docker <==
-- Logs begin at Sun 2020-02-02 05:14:36 UTC, end at Sun 2020-02-02 05:29:37 UTC. --
Feb 02 05:14:44 minikube systemd[1]: Starting Docker Application Container Engine...
Feb 02 05:14:44 minikube dockerd[175]: time="2020-02-02T05:14:44.747213200Z" level=info msg="Starting up"
Feb 02 05:14:44 minikube dockerd[175]: time="2020-02-02T05:14:44.748796000Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 02 05:14:44 minikube dockerd[175]: time="2020-02-02T05:14:44.748836100Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 02 05:14:44 minikube dockerd[175]: time="2020-02-02T05:14:44.748861100Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] }" module=grpc
Feb 02 05:14:44 minikube dockerd[175]: time="2020-02-02T05:14:44.748890300Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 02 05:14:44 minikube dockerd[175]: time="2020-02-02T05:14:44.749171200Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0007b0e30, CONNECTING" module=grpc
Feb 02 05:14:44 minikube dockerd[175]: time="2020-02-02T05:14:44.749257900Z" level=info msg="blockingPicker: the picked transport is not ready, loop back to repick" module=grpc
Feb 02 05:14:44 minikube dockerd[175]: time="2020-02-02T05:14:44.750256100Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0007b0e30, READY" module=grpc
Feb 02 05:14:44 minikube dockerd[175]: time="2020-02-02T05:14:44.751305300Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 02 05:14:44 minikube dockerd[175]: time="2020-02-02T05:14:44.751341800Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 02 05:14:44 minikube dockerd[175]: time="2020-02-02T05:14:44.751365600Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] }" module=grpc
Feb 02 05:14:44 minikube dockerd[175]: time="2020-02-02T05:14:44.751388300Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 02 05:14:44 minikube dockerd[175]: time="2020-02-02T05:14:44.751503200Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00015cd60, CONNECTING" module=grpc
Feb 02 05:14:44 minikube dockerd[175]: time="2020-02-02T05:14:44.751851400Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00015cd60, READY" module=grpc
Feb 02 05:14:44 minikube dockerd[175]: time="2020-02-02T05:14:44.775899900Z" level=info msg="Loading containers: start."
Feb 02 05:14:44 minikube dockerd[175]: time="2020-02-02T05:14:44.844098400Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Feb 02 05:14:44 minikube dockerd[175]: time="2020-02-02T05:14:44.882746000Z" level=info msg="Loading containers: done."
Feb 02 05:14:44 minikube dockerd[175]: time="2020-02-02T05:14:44.961652500Z" level=info msg="Docker daemon" commit=6a30dfca03 graphdriver(s)=overlay2 version=19.03.2
Feb 02 05:14:44 minikube dockerd[175]: time="2020-02-02T05:14:44.961858800Z" level=info msg="Daemon has completed initialization"
Feb 02 05:14:44 minikube dockerd[175]: time="2020-02-02T05:14:44.997615500Z" level=info msg="API listen on /run/docker.sock"
Feb 02 05:14:44 minikube systemd[1]: Started Docker Application Container Engine.

==> container status <==
CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID

==> coredns [59d8c75a5e5f] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.5
linux/amd64, go1.13.4, c2fd1b2

==> coredns [c019a1e96ecc] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.5
linux/amd64, go1.13.4, c2fd1b2

==> dmesg <==
[  +0.000000] lapic_timer_shutdown

[  +0.000001]  periodic: 
[  +0.000000] lapic_timer_set_periodic

[  +0.000000]  oneshot:  
[  +0.000001] lapic_timer_set_oneshot

[  +0.000000]  event_handler:  
[  +0.000001] hrtimer_interrupt

[  +0.000000]  retries:        6404

[  +0.000001] Tick Device: mode:     1
[  +0.000000] Per CPU device: 7
[  +0.000000] Clock Event Device: 
[  +0.000000] lapic
[  +0.000001]  max_delta_ns:   258352911376
[  +0.000000]  min_delta_ns:   1805
[  +0.000000]  mult:           35700670
[  +0.000000]  shift:          32
[  +0.000001]  mode:           3
[  +0.000000]  next_event:     216724780000000 nsecs
[  +0.000000]  set_next_event: 
[  +0.000001] lapic_next_event

[  +0.000000]  shutdown: 
[  +0.000001] lapic_timer_shutdown

[  +0.000000]  periodic: 
[  +0.000001] lapic_timer_set_periodic

[  +0.000007]  oneshot:  
[  +0.000001] lapic_timer_set_oneshot

[  +0.000000]  event_handler:  
[  +0.000001] hrtimer_interrupt

[  +0.000000]  retries:        5608

[  +5.019936]  0000000000000086 0000000000019100 0000000000000000 ffff8c18d8ad4000
[  +0.000002]  ffffffff8dc13500 ffff8c18f5a2acc0 ffff8c18fe619100 ffffffff8d7d6484
[  +0.000002]  0000000000000002 7fffffffffffffff ffffa796c19ffdb8 ffffa796c19ffe88
[  +0.000002] Call Trace:
[  +0.000006]  [<ffffffff8d7d6484>] ? __schedule+0x3ed/0x46e
[  +0.000001]  [<ffffffff8d7d8152>] ? usleep_range+0x66/0x66
[  +0.000002]  [<ffffffff8d7d6565>] ? schedule+0x60/0x71
[  +0.000001]  [<ffffffff8d7d818b>] ? schedule_timeout+0x39/0xf2
[  +0.000002]  [<ffffffff8d7d6d97>] ? __wait_for_common+0xce/0x13e
[  +0.000014]  [<ffffffff8d1024b3>] ? wake_up_q+0x45/0x45
[  +0.000012]  [<ffffffff8d122cbb>] ? __wait_rcu_gp+0xae/0xc2
[  +0.000003]  [<ffffffff8d124ebf>] ? synchronize_sched+0x4e/0x68
[  +0.000008]  [<ffffffff8d126545>] ? call_rcu_bh+0x13/0x13
[  +0.000002]  [<ffffffff8d122bff>] ? trace_raw_output_rcu_utilization+0x5f/0x5f
[  +0.000942]  [<ffffffff8d212f04>] ? namespace_unlock+0x52/0x74
[  +0.000004]  [<ffffffff8d214197>] ? SYSC_umount+0x317/0x353
[  +0.000014]  [<ffffffff8d13a70d>] ? SyS_futex+0x139/0x15e
[  +0.000004]  [<ffffffff8d00341a>] ? do_syscall_64+0x6f/0x7f
[  +0.000002]  [<ffffffff8d7d8f4e>] ? entry_SYSCALL_64_after_swapgs+0x58/0xc6
[Feb 2 05:09] cgroup: cgroup2: unknown option "nsdelegate"

==> kernel <==
 05:29:38 up 2 days, 12:31,  0 users,  load average: 0.28, 0.46, 0.44
Linux minikube 4.9.184-linuxkit #1 SMP Tue Jul 2 22:58:16 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 19.10"

==> kube-apiserver [6ee39e375ec7] <==
W0202 05:19:45.757354       1 genericapiserver.go:404] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W0202 05:19:45.763196       1 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0202 05:19:45.774480       1 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0202 05:19:45.776540       1 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0202 05:19:45.785604       1 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0202 05:19:45.799964       1 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0202 05:19:45.799997       1 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0202 05:19:45.807148       1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I0202 05:19:45.807179       1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
I0202 05:19:45.808839       1 client.go:361] parsed scheme: "endpoint"
I0202 05:19:45.808876       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]
I0202 05:19:45.815901       1 client.go:361] parsed scheme: "endpoint"
I0202 05:19:45.815939       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]
I0202 05:19:47.164486       1 secure_serving.go:178] Serving securely on [::]:8443
I0202 05:19:47.164548       1 autoregister_controller.go:140] Starting autoregister controller
I0202 05:19:47.164562       1 cache.go:32] Waiting for caches to sync for autoregister controller
I0202 05:19:47.164681       1 dynamic_cafile_content.go:166] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0202 05:19:47.164730       1 dynamic_serving_content.go:129] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
I0202 05:19:47.164758       1 tlsconfig.go:219] Starting DynamicServingCertificateController
I0202 05:19:47.165410       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0202 05:19:47.165707       1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0202 05:19:47.165739       1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
I0202 05:19:47.165843       1 crd_finalizer.go:263] Starting CRDFinalizer
I0202 05:19:47.166616       1 establishing_controller.go:73] Starting EstablishingController
I0202 05:19:47.166962       1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0202 05:19:47.167124       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0202 05:19:47.167274       1 controller.go:85] Starting OpenAPI controller
I0202 05:19:47.167456       1 customresource_discovery_controller.go:208] Starting DiscoveryController
I0202 05:19:47.167579       1 naming_controller.go:288] Starting NamingConditionController
I0202 05:19:47.167639       1 available_controller.go:386] Starting AvailableConditionController
I0202 05:19:47.167901       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0202 05:19:47.168234       1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
I0202 05:19:47.168273       1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0202 05:19:47.170126       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0202 05:19:47.170186       1 shared_informer.go:197] Waiting for caches to sync for cluster_authentication_trust_controller
I0202 05:19:47.170266       1 controller.go:81] Starting OpenAPI AggregationController
I0202 05:19:47.170357       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0202 05:19:47.170489       1 dynamic_cafile_content.go:166] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
E0202 05:19:47.170416       1 controller.go:151] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.17.0.2, ResourceVersion: 0, AdditionalErrorMsg: 
I0202 05:19:47.266228       1 cache.go:39] Caches are synced for autoregister controller
I0202 05:19:47.266842       1 shared_informer.go:204] Caches are synced for crd-autoregister 
I0202 05:19:47.270592       1 shared_informer.go:204] Caches are synced for cluster_authentication_trust_controller 
I0202 05:19:47.271041       1 cache.go:39] Caches are synced for AvailableConditionController controller
I0202 05:19:47.271185       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0202 05:19:48.165776       1 controller.go:107] OpenAPI AggregationController: Processing item 
I0202 05:19:48.165829       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0202 05:19:48.165871       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0202 05:19:48.170519       1 storage_scheduling.go:133] created PriorityClass system-node-critical with value 2000001000
I0202 05:19:48.175994       1 storage_scheduling.go:133] created PriorityClass system-cluster-critical with value 2000000000
I0202 05:19:48.176026       1 storage_scheduling.go:142] all system priority classes are created successfully or already exist.
I0202 05:19:48.523882       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0202 05:19:48.578518       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0202 05:19:48.720905       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [172.17.0.2]
I0202 05:19:48.723073       1 controller.go:606] quota admission added evaluator for: endpoints
I0202 05:19:49.409802       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0202 05:19:49.846655       1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0202 05:19:50.276304       1 controller.go:606] quota admission added evaluator for: deployments.apps
I0202 05:19:50.475540       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0202 05:20:06.180038       1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0202 05:20:06.540345       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps

==> kube-controller-manager [bf75c97aded3] <==
I0202 05:20:04.574879       1 controllermanager.go:533] Started "replicaset"
I0202 05:20:04.575044       1 replica_set.go:180] Starting replicaset controller
I0202 05:20:04.575375       1 shared_informer.go:197] Waiting for caches to sync for ReplicaSet
I0202 05:20:04.824268       1 controllermanager.go:533] Started "job"
I0202 05:20:04.824349       1 job_controller.go:143] Starting job controller
I0202 05:20:04.824362       1 shared_informer.go:197] Waiting for caches to sync for job
I0202 05:20:05.073601       1 controllermanager.go:533] Started "podgc"
I0202 05:20:05.073679       1 gc_controller.go:88] Starting GC controller
I0202 05:20:05.073711       1 shared_informer.go:197] Waiting for caches to sync for GC
I0202 05:20:05.983388       1 garbagecollector.go:129] Starting garbage collector controller
I0202 05:20:05.983430       1 shared_informer.go:197] Waiting for caches to sync for garbage collector
I0202 05:20:05.983388       1 controllermanager.go:533] Started "garbagecollector"
I0202 05:20:05.983495       1 graph_builder.go:282] GraphBuilder running
I0202 05:20:05.983704       1 shared_informer.go:197] Waiting for caches to sync for resource quota
W0202 05:20:05.999120       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0202 05:20:06.022010       1 shared_informer.go:204] Caches are synced for node 
I0202 05:20:06.022075       1 range_allocator.go:172] Starting range CIDR allocator
I0202 05:20:06.022121       1 shared_informer.go:197] Waiting for caches to sync for cidrallocator
I0202 05:20:06.022144       1 shared_informer.go:204] Caches are synced for cidrallocator 
I0202 05:20:06.024804       1 shared_informer.go:204] Caches are synced for job 
I0202 05:20:06.025365       1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator 
I0202 05:20:06.026232       1 shared_informer.go:204] Caches are synced for ReplicationController 
I0202 05:20:06.029782       1 range_allocator.go:373] Set node minikube PodCIDR to [10.244.0.0/24]
I0202 05:20:06.038020       1 shared_informer.go:204] Caches are synced for stateful set 
I0202 05:20:06.043264       1 shared_informer.go:204] Caches are synced for certificate-csrsigning 
I0202 05:20:06.074109       1 shared_informer.go:204] Caches are synced for GC 
I0202 05:20:06.075576       1 shared_informer.go:204] Caches are synced for TTL 
I0202 05:20:06.076319       1 shared_informer.go:204] Caches are synced for certificate-csrapproving 
I0202 05:20:06.078135       1 shared_informer.go:204] Caches are synced for endpoint 
I0202 05:20:06.092818       1 shared_informer.go:204] Caches are synced for PVC protection 
I0202 05:20:06.161927       1 shared_informer.go:204] Caches are synced for disruption 
I0202 05:20:06.162007       1 disruption.go:338] Sending events to api server.
I0202 05:20:06.174836       1 shared_informer.go:204] Caches are synced for HPA 
I0202 05:20:06.175894       1 shared_informer.go:204] Caches are synced for ReplicaSet 
I0202 05:20:06.176076       1 shared_informer.go:204] Caches are synced for deployment 
I0202 05:20:06.184121       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"e2cbf482-bf8c-4603-a70d-ed4a8a366017", APIVersion:"apps/v1", ResourceVersion:"205", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-6955765f44 to 2
I0202 05:20:06.193815       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-6955765f44", UID:"24a4655d-d788-4c88-b454-78f2b114cc3b", APIVersion:"apps/v1", ResourceVersion:"353", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-6955765f44-7rd8t
I0202 05:20:06.200905       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-6955765f44", UID:"24a4655d-d788-4c88-b454-78f2b114cc3b", APIVersion:"apps/v1", ResourceVersion:"353", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-6955765f44-qrvqz
I0202 05:20:06.275415       1 shared_informer.go:204] Caches are synced for persistent volume 
I0202 05:20:06.275571       1 shared_informer.go:204] Caches are synced for expand 
I0202 05:20:06.324427       1 shared_informer.go:204] Caches are synced for PV protection 
I0202 05:20:06.331356       1 shared_informer.go:204] Caches are synced for attach detach 
I0202 05:20:06.362148       1 shared_informer.go:204] Caches are synced for namespace 
I0202 05:20:06.377508       1 shared_informer.go:204] Caches are synced for service account 
I0202 05:20:06.525575       1 shared_informer.go:204] Caches are synced for taint 
I0202 05:20:06.525807       1 taint_manager.go:186] Starting NoExecuteTaintManager
I0202 05:20:06.526148       1 event.go:281] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"9dab2739-62bf-4633-b2d5-219feb812abf", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
I0202 05:20:06.526032       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
W0202 05:20:06.527115       1 node_lifecycle_controller.go:1058] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0202 05:20:06.527247       1 node_lifecycle_controller.go:1259] Controller detected that zone  is now in state Normal.
I0202 05:20:06.537318       1 shared_informer.go:204] Caches are synced for daemon sets 
I0202 05:20:06.546333       1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"9ed2d4ad-5e5b-48ce-9d40-645b2dc7581a", APIVersion:"apps/v1", ResourceVersion:"212", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-psqfk
I0202 05:20:06.548569       1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"0b4c5cc0-90f2-40e8-942a-a1c69a63c7bb", APIVersion:"apps/v1", ResourceVersion:"225", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-rq767
I0202 05:20:06.583746       1 shared_informer.go:204] Caches are synced for garbage collector 
I0202 05:20:06.583777       1 garbagecollector.go:138] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0202 05:20:06.583996       1 shared_informer.go:204] Caches are synced for resource quota 
I0202 05:20:06.601350       1 shared_informer.go:204] Caches are synced for bootstrap_signer 
I0202 05:20:06.616878       1 shared_informer.go:204] Caches are synced for resource quota 
I0202 05:20:07.477646       1 shared_informer.go:197] Waiting for caches to sync for garbage collector
I0202 05:20:07.477705       1 shared_informer.go:204] Caches are synced for garbage collector 

==> kube-proxy [c6cef0f5ec4e] <==
W0202 05:20:07.815909       1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
I0202 05:20:07.831317       1 node.go:135] Successfully retrieved node IP: 172.17.0.2
I0202 05:20:07.831364       1 server_others.go:145] Using iptables Proxier.
I0202 05:20:07.832294       1 server.go:571] Version: v1.17.2
I0202 05:20:07.833171       1 conntrack.go:52] Setting nf_conntrack_max to 262144
E0202 05:20:07.833927       1 conntrack.go:127] sysfs is not writable: {Device:sysfs Path:/sys Type:sysfs Opts:[ro nosuid nodev noexec relatime] Freq:0 Pass:0} (mount options are [ro nosuid nodev noexec relatime])
I0202 05:20:07.834213       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0202 05:20:07.834469       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0202 05:20:07.836432       1 config.go:313] Starting service config controller
I0202 05:20:07.836557       1 shared_informer.go:197] Waiting for caches to sync for service config
I0202 05:20:07.865947       1 config.go:131] Starting endpoints config controller
I0202 05:20:07.866602       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I0202 05:20:07.966162       1 shared_informer.go:204] Caches are synced for service config 
I0202 05:20:07.967114       1 shared_informer.go:204] Caches are synced for endpoints config 

==> kube-scheduler [90e3771fb98c] <==
I0202 05:19:44.258806       1 serving.go:312] Generated self-signed cert in-memory
W0202 05:19:44.752066       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W0202 05:19:44.752194       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W0202 05:19:47.267861       1 authentication.go:348] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0202 05:19:47.267983       1 authentication.go:296] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0202 05:19:47.268010       1 authentication.go:297] Continuing without authentication configuration. This may treat all requests as anonymous.
W0202 05:19:47.268030       1 authentication.go:298] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
W0202 05:19:47.294736       1 authorization.go:47] Authorization is disabled
W0202 05:19:47.295989       1 authentication.go:92] Authentication is disabled
I0202 05:19:47.296223       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0202 05:19:47.298569       1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0202 05:19:47.299492       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0202 05:19:47.299650       1 tlsconfig.go:219] Starting DynamicServingCertificateController
I0202 05:19:47.299714       1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
E0202 05:19:47.371161       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0202 05:19:47.371666       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0202 05:19:47.371666       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0202 05:19:47.372115       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0202 05:19:47.372293       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0202 05:19:47.372564       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0202 05:19:47.372628       1 reflector.go:153] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0202 05:19:47.372691       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0202 05:19:47.372949       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0202 05:19:47.373014       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0202 05:19:47.373063       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0202 05:19:47.373298       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0202 05:19:48.372564       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0202 05:19:48.374592       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0202 05:19:48.375690       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0202 05:19:48.376861       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0202 05:19:48.378101       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0202 05:19:48.379578       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0202 05:19:48.381209       1 reflector.go:153] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0202 05:19:48.384597       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0202 05:19:48.386288       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0202 05:19:48.387823       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0202 05:19:48.391737       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0202 05:19:48.393165       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
I0202 05:19:49.400185       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
I0202 05:19:49.400798       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-scheduler...
I0202 05:19:49.411638       1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler

==> kubelet <==
-- Logs begin at Sun 2020-02-02 05:14:36 UTC, end at Sun 2020-02-02 05:29:40 UTC. --
Feb 02 05:19:50 minikube kubelet[3212]: I0202 05:19:50.751513    3212 volume_manager.go:265] Starting Kubelet Volume Manager
Feb 02 05:19:50 minikube kubelet[3212]: I0202 05:19:50.751543    3212 desired_state_of_world_populator.go:138] Desired state populator starts to run
Feb 02 05:19:50 minikube kubelet[3212]: I0202 05:19:50.799528    3212 status_manager.go:157] Starting to sync pod status with apiserver
Feb 02 05:19:50 minikube kubelet[3212]: I0202 05:19:50.799604    3212 kubelet.go:1820] Starting kubelet main sync loop.
Feb 02 05:19:50 minikube kubelet[3212]: E0202 05:19:50.799686    3212 kubelet.go:1844] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
Feb 02 05:19:50 minikube kubelet[3212]: I0202 05:19:50.865183    3212 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
Feb 02 05:19:50 minikube kubelet[3212]: I0202 05:19:50.871323    3212 clientconn.go:104] parsed scheme: "unix"
Feb 02 05:19:50 minikube kubelet[3212]: I0202 05:19:50.871462    3212 clientconn.go:104] scheme "unix" not registered, fallback to default scheme
Feb 02 05:19:50 minikube kubelet[3212]: I0202 05:19:50.871637    3212 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}
Feb 02 05:19:50 minikube kubelet[3212]: I0202 05:19:50.871729    3212 clientconn.go:577] ClientConn switching balancer to "pick_first"
Feb 02 05:19:50 minikube kubelet[3212]: E0202 05:19:50.900066    3212 kubelet.go:1844] skipping pod synchronization - container runtime status check may not have completed yet
Feb 02 05:19:50 minikube kubelet[3212]: I0202 05:19:50.910325    3212 kubelet_node_status.go:70] Attempting to register node minikube
Feb 02 05:19:50 minikube kubelet[3212]: I0202 05:19:50.976288    3212 kubelet_node_status.go:112] Node minikube was previously registered
Feb 02 05:19:50 minikube kubelet[3212]: I0202 05:19:50.976565    3212 kubelet_node_status.go:73] Successfully registered node minikube
Feb 02 05:19:51 minikube kubelet[3212]: E0202 05:19:51.100401    3212 kubelet.go:1844] skipping pod synchronization - container runtime status check may not have completed yet
Feb 02 05:19:51 minikube kubelet[3212]: I0202 05:19:51.219979    3212 cpu_manager.go:173] [cpumanager] starting with none policy
Feb 02 05:19:51 minikube kubelet[3212]: I0202 05:19:51.220016    3212 cpu_manager.go:174] [cpumanager] reconciling every 10s
Feb 02 05:19:51 minikube kubelet[3212]: I0202 05:19:51.220037    3212 policy_none.go:43] [cpumanager] none policy: Start
Feb 02 05:19:51 minikube kubelet[3212]: I0202 05:19:51.222117    3212 plugin_manager.go:114] Starting Kubelet Plugin Manager
Feb 02 05:19:51 minikube kubelet[3212]: I0202 05:19:51.568603    3212 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/9a370179267f6fbf6b7a1e0f95ad1e25-etc-ca-certificates") pod "kube-apiserver-minikube" (UID: "9a370179267f6fbf6b7a1e0f95ad1e25")
Feb 02 05:19:51 minikube kubelet[3212]: I0202 05:19:51.568680    3212 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/9a370179267f6fbf6b7a1e0f95ad1e25-usr-local-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "9a370179267f6fbf6b7a1e0f95ad1e25")
Feb 02 05:19:51 minikube kubelet[3212]: I0202 05:19:51.568729    3212 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/091462203a51a29b462d059b44429ffa-usr-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "091462203a51a29b462d059b44429ffa")
Feb 02 05:19:51 minikube kubelet[3212]: I0202 05:19:51.568771    3212 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/9c994ea62a2d8d6f1bb7498f10aa6fcf-kubeconfig") pod "kube-scheduler-minikube" (UID: "9c994ea62a2d8d6f1bb7498f10aa6fcf")
Feb 02 05:19:51 minikube kubelet[3212]: I0202 05:19:51.568815    3212 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/9e79c287f01afa6c2b58d4247258897c-etcd-certs") pod "etcd-minikube" (UID: "9e79c287f01afa6c2b58d4247258897c")
Feb 02 05:19:51 minikube kubelet[3212]: I0202 05:19:51.568851    3212 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/9a370179267f6fbf6b7a1e0f95ad1e25-ca-certs") pod "kube-apiserver-minikube" (UID: "9a370179267f6fbf6b7a1e0f95ad1e25")
Feb 02 05:19:51 minikube kubelet[3212]: I0202 05:19:51.568888    3212 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/091462203a51a29b462d059b44429ffa-ca-certs") pod "kube-controller-manager-minikube" (UID: "091462203a51a29b462d059b44429ffa")
Feb 02 05:19:51 minikube kubelet[3212]: I0202 05:19:51.568923    3212 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/091462203a51a29b462d059b44429ffa-flexvolume-dir") pod "kube-controller-manager-minikube" (UID: "091462203a51a29b462d059b44429ffa")
Feb 02 05:19:51 minikube kubelet[3212]: I0202 05:19:51.568982    3212 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/091462203a51a29b462d059b44429ffa-kubeconfig") pod "kube-controller-manager-minikube" (UID: "091462203a51a29b462d059b44429ffa")
Feb 02 05:19:51 minikube kubelet[3212]: I0202 05:19:51.569056    3212 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/091462203a51a29b462d059b44429ffa-usr-local-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "091462203a51a29b462d059b44429ffa")
Feb 02 05:19:51 minikube kubelet[3212]: I0202 05:19:51.569108    3212 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/9e79c287f01afa6c2b58d4247258897c-etcd-data") pod "etcd-minikube" (UID: "9e79c287f01afa6c2b58d4247258897c")
Feb 02 05:19:51 minikube kubelet[3212]: I0202 05:19:51.569146    3212 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/9a370179267f6fbf6b7a1e0f95ad1e25-usr-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "9a370179267f6fbf6b7a1e0f95ad1e25")
Feb 02 05:19:51 minikube kubelet[3212]: I0202 05:19:51.569214    3212 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/091462203a51a29b462d059b44429ffa-etc-ca-certificates") pod "kube-controller-manager-minikube" (UID: "091462203a51a29b462d059b44429ffa")
Feb 02 05:19:51 minikube kubelet[3212]: I0202 05:19:51.569279    3212 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/9a370179267f6fbf6b7a1e0f95ad1e25-k8s-certs") pod "kube-apiserver-minikube" (UID: "9a370179267f6fbf6b7a1e0f95ad1e25")
Feb 02 05:19:51 minikube kubelet[3212]: I0202 05:19:51.569321    3212 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/091462203a51a29b462d059b44429ffa-k8s-certs") pod "kube-controller-manager-minikube" (UID: "091462203a51a29b462d059b44429ffa")
Feb 02 05:19:51 minikube kubelet[3212]: I0202 05:19:51.569351    3212 reconciler.go:156] Reconciler: start to sync state
Feb 02 05:20:06 minikube kubelet[3212]: I0202 05:20:06.058144    3212 kuberuntime_manager.go:981] updating runtime config through cri with podcidr 10.244.0.0/24
Feb 02 05:20:06 minikube kubelet[3212]: I0202 05:20:06.058710    3212 docker_service.go:355] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}
Feb 02 05:20:06 minikube kubelet[3212]: I0202 05:20:06.059013    3212 kubelet_network.go:77] Setting Pod CIDR:  -> 10.244.0.0/24
Feb 02 05:20:06 minikube kubelet[3212]: I0202 05:20:06.582927    3212 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/8014b5f0-6daa-4aa6-981c-5e69ac6d17a5-kube-proxy") pod "kube-proxy-psqfk" (UID: "8014b5f0-6daa-4aa6-981c-5e69ac6d17a5")
Feb 02 05:20:06 minikube kubelet[3212]: I0202 05:20:06.583037    3212 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-fvzq5" (UniqueName: "kubernetes.io/secret/8014b5f0-6daa-4aa6-981c-5e69ac6d17a5-kube-proxy-token-fvzq5") pod "kube-proxy-psqfk" (UID: "8014b5f0-6daa-4aa6-981c-5e69ac6d17a5")
Feb 02 05:20:06 minikube kubelet[3212]: I0202 05:20:06.583080    3212 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/f8f6959c-c0fa-4164-8155-87cf28869d1e-xtables-lock") pod "kindnet-rq767" (UID: "f8f6959c-c0fa-4164-8155-87cf28869d1e")
Feb 02 05:20:06 minikube kubelet[3212]: I0202 05:20:06.583136    3212 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/f8f6959c-c0fa-4164-8155-87cf28869d1e-lib-modules") pod "kindnet-rq767" (UID: "f8f6959c-c0fa-4164-8155-87cf28869d1e")
Feb 02 05:20:06 minikube kubelet[3212]: I0202 05:20:06.583206    3212 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/8014b5f0-6daa-4aa6-981c-5e69ac6d17a5-xtables-lock") pod "kube-proxy-psqfk" (UID: "8014b5f0-6daa-4aa6-981c-5e69ac6d17a5")
Feb 02 05:20:06 minikube kubelet[3212]: I0202 05:20:06.583251    3212 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/8014b5f0-6daa-4aa6-981c-5e69ac6d17a5-lib-modules") pod "kube-proxy-psqfk" (UID: "8014b5f0-6daa-4aa6-981c-5e69ac6d17a5")
Feb 02 05:20:06 minikube kubelet[3212]: I0202 05:20:06.583288    3212 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-cfg" (UniqueName: "kubernetes.io/host-path/f8f6959c-c0fa-4164-8155-87cf28869d1e-cni-cfg") pod "kindnet-rq767" (UID: "f8f6959c-c0fa-4164-8155-87cf28869d1e")
Feb 02 05:20:06 minikube kubelet[3212]: I0202 05:20:06.583323    3212 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kindnet-token-l2vt7" (UniqueName: "kubernetes.io/secret/f8f6959c-c0fa-4164-8155-87cf28869d1e-kindnet-token-l2vt7") pod "kindnet-rq767" (UID: "f8f6959c-c0fa-4164-8155-87cf28869d1e")
Feb 02 05:20:07 minikube kubelet[3212]: W0202 05:20:07.284789    3212 pod_container_deletor.go:75] Container "600a5d0752e10c030811c7c216da9b977de0a3f1139fee6af64484e3c4b52f5b" not found in pod's containers
Feb 02 05:20:07 minikube kubelet[3212]: I0202 05:20:07.391963    3212 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-s9d5x" (UniqueName: "kubernetes.io/secret/9249fd79-1485-4f9d-be31-7c08ba159cba-coredns-token-s9d5x") pod "coredns-6955765f44-7rd8t" (UID: "9249fd79-1485-4f9d-be31-7c08ba159cba")
Feb 02 05:20:07 minikube kubelet[3212]: I0202 05:20:07.392058    3212 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/68ae5f19-c4ec-49ef-821d-192857b5925c-config-volume") pod "coredns-6955765f44-qrvqz" (UID: "68ae5f19-c4ec-49ef-821d-192857b5925c")
Feb 02 05:20:07 minikube kubelet[3212]: I0202 05:20:07.392114    3212 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-s9d5x" (UniqueName: "kubernetes.io/secret/68ae5f19-c4ec-49ef-821d-192857b5925c-coredns-token-s9d5x") pod "coredns-6955765f44-qrvqz" (UID: "68ae5f19-c4ec-49ef-821d-192857b5925c")
Feb 02 05:20:07 minikube kubelet[3212]: I0202 05:20:07.392161    3212 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9249fd79-1485-4f9d-be31-7c08ba159cba-config-volume") pod "coredns-6955765f44-7rd8t" (UID: "9249fd79-1485-4f9d-be31-7c08ba159cba")
Feb 02 05:20:07 minikube kubelet[3212]: W0202 05:20:07.442580    3212 pod_container_deletor.go:75] Container "25bdda04fc29fb0a77ca8b169233957a62b4bd91cab7bc96d049ae521e006e1a" not found in pod's containers
Feb 02 05:20:08 minikube kubelet[3212]: W0202 05:20:08.404519    3212 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-qrvqz through plugin: invalid network status for
Feb 02 05:20:08 minikube kubelet[3212]: W0202 05:20:08.464762    3212 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-qrvqz through plugin: invalid network status for
Feb 02 05:20:08 minikube kubelet[3212]: E0202 05:20:08.469995    3212 remote_runtime.go:295] ContainerStatus "c019a1e96eccbf9abb1536cdd7c95217eec411eb02f1992b14b161485f4f8c7f" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: c019a1e96eccbf9abb1536cdd7c95217eec411eb02f1992b14b161485f4f8c7f
Feb 02 05:20:08 minikube kubelet[3212]: E0202 05:20:08.470087    3212 kuberuntime_manager.go:955] getPodContainerStatuses for pod "coredns-6955765f44-qrvqz_kube-system(68ae5f19-c4ec-49ef-821d-192857b5925c)" failed: rpc error: code = Unknown desc = Error: No such container: c019a1e96eccbf9abb1536cdd7c95217eec411eb02f1992b14b161485f4f8c7f
Feb 02 05:20:08 minikube kubelet[3212]: W0202 05:20:08.488384    3212 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-7rd8t through plugin: invalid network status for
Feb 02 05:20:08 minikube kubelet[3212]: W0202 05:20:08.492226    3212 pod_container_deletor.go:75] Container "daf0f49de31f6c2e9ca85897c2bf41b52a4ee4fd883e536510ef888a95de997f" not found in pod's containers
Feb 02 05:20:09 minikube kubelet[3212]: W0202 05:20:09.502604    3212 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-qrvqz through plugin: invalid network status for
Feb 02 05:20:09 minikube kubelet[3212]: W0202 05:20:09.509765    3212 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-7rd8t through plugin: invalid network status for

The operating system version:
macOS Mojave 10.14.6

@balopat balopat added kind/bug Categorizes issue or PR as related to a bug. co/runtime/docker Issues specific to a docker runtime co/docker-driver Issues related to kubernetes in container labels Feb 2, 2020
@tstromberg tstromberg changed the title minikube status is slow / hung with docker driver due to wrong apiserver address? kic on macOS: "minikube status" hangs for over a minute Feb 3, 2020
@tstromberg
Contributor

I wasn't able to reproduce this on Linux, but I suspect Balint is correct here: we are using either the wrong IP or the wrong port for the apiserver healthz check with the Docker driver.

Although unrelated, the debugging added in #6452 may help with this; really, any additional logging will.

@medyagh
Member

medyagh commented Feb 3, 2020

@balopat do you remember if this was the beta or HEAD? On HEAD I have fixed status to use the external IP instead of the internal IP. I hope HEAD doesn't have this issue, but if it does, it is a bug and will be fixed!

@medyagh
Member

medyagh commented Feb 4, 2020

@tstromberg

I was able to replicate this after my laptop hibernated overnight:

 ./out/minikube status -p k1
...
hangs forever

It is clear that Docker is hung and a Docker restart is needed,
but it is not clear how we would detect that, because Docker reports the right status even though it is actually stuck:

medya@~/workspace/minikube (obvious_denv) $ docker inspect -f "{{.State.Status}}" k2
running

all following commands get stuck:

docker rm k2
docker start k2
docker logs k2
# (trying with id)
docker restart 2a8584d66bef 

However, it does let me create new containers:

docker run -it ubuntu:latest /bin/bash
works

I suspect this is related to docker/for-mac#1835;
there is also docker/for-mac#1990 (comment).

@medyagh medyagh changed the title kic on macOS: "minikube status" hangs for over a minute docker driver on macOS: "minikube status" hangs for over a minute Feb 4, 2020
@medyagh medyagh changed the title docker driver on macOS: "minikube status" hangs for over a minute docker driver on macOS: "minikube status" hangs after wake from hibernation Feb 4, 2020
@medyagh medyagh changed the title docker driver on macOS: "minikube status" hangs after wake from hibernation docker driver on macOS: hangs after wake from hibernation Feb 4, 2020
@medyagh
Member

medyagh commented Feb 4, 2020

It is worth noting that after I restart Docker Desktop on Mac, everything is back to normal
and minikube status works (with the caveat that the cluster will be stopped and will need to be started again):

$ ./out/minikube status -p k2
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
medya@~/workspace/minikube (obvious_denv) $ ./out/minikube start -p k2 --vm-driver=docker
😄  [k2] minikube v1.7.0-beta.2 on Darwin 10.14.6
✨  Selecting experimental 'docker' driver from user configuration (alternates: [hyperkit])
🔄  Starting existing docker VM for "k2" ...
⌛  Waiting for the host to be provisioned ...
🐳  Preparing Kubernetes v1.17.2 on Docker '19.03.2' ...
    ▪ kubeadm.pod-network-cidr=10.244.0.0/16
🚀  Launching Kubernetes ... 
🌟  Enabling addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "k2"
medya@~/workspace/minikube (obvious_denv) $ ./out/minikube status -p k2
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

@tstromberg tstromberg added the priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. label Feb 5, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 5, 2020
@medyagh
Member

medyagh commented May 11, 2020

I believe this issue was fixed by the PR I made improving our delete logic. @balopat, have you seen this issue recently?

@medyagh medyagh closed this as completed May 11, 2020