
kube-dns timed out: eviction manager: must evict pod(s) to reclaim ephemeral-storage #4084

Closed
ahmadreza9 opened this issue Apr 12, 2019 · 4 comments
Labels
ev/eviction (pod evictions) · priority/awaiting-more-evidence (Lowest priority. Possibly useful, but not yet enough support to actually get it done.) · triage/needs-information (Indicates an issue needs more information in order to work on it.)

Comments

@ahmadreza9

minikube start --vm-driver=none
😄 minikube v1.0.0 on linux (amd64)
🤹 Downloading Kubernetes v1.14.0 images in the background ...
🔥 Creating none VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
📶 "minikube" IP address is 192.168.1.101
🐳 Configuring Docker as the container runtime ...
🐳 Version of container runtime is 18.03.1-ce
⌛ Waiting for image downloads to complete ...
✨ Preparing Kubernetes environment ...
🚜 Pulling images required by Kubernetes v1.14.0 ...
🚀 Launching Kubernetes v1.14.0 using kubeadm ...
⌛ Waiting for pods: apiserver proxy etcd scheduler controller dns
💣 Error starting cluster: wait: waiting for k8s-app=kube-dns: timed out waiting for the condition

😿 Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉 https://github.com/kubernetes/minikube/issues/new
❌ Problems detected in "kubelet":
Apr 12 00:36:15 Ahmadreza kubelet[14260]: I0412 00:36:15.463531 14260 eviction_manager.go:578] eviction manager: pod kube-proxy-qnsmv_kube-system(2eb4232c-5c95-11e9-a558-3085a9043d1c) is evicted successfully
Apr 12 00:36:15 Ahmadreza kubelet[14260]: I0412 00:36:15.463553 14260 eviction_manager.go:191] eviction manager: pods kube-proxy-qnsmv_kube-system(2eb4232c-5c95-11e9-a558-3085a9043d1c) evicted, waiting for pod to be cleaned up
Apr 12 00:36:47 Ahmadreza kubelet[14260]: I0412 00:36:47.382783 14260 eviction_manager.go:578] eviction manager: pod kube-proxy-nz7nm_kube-system(41dbbccc-5c95-11e9-a558-3085a9043d1c) is evicted successfully

minikube logs:
==> dmesg <==
[Apr11 23:51] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
[ +0.000000] ENERGY_PERF_BIAS: View and update with x86_energy_perf_policy(8)
[ +0.022470] pmd_set_huge: Cannot satisfy [mem 0xf8000000-0xf8200000] with a huge-page mapping due to MTRR override.
[ +0.135868] ACPI Error: Needed type [Reference], found [Integer] 00000000adda63d9 (20180810/exresop-69)
[ +0.000065] ACPI Error: AE_AML_OPERAND_TYPE, While resolving operands for [OpcodeName unavailable] (20180810/dswexec-427)
[ +0.000064] ACPI Error: Method parse/execution failed _PR.CPU0._PDC, AE_AML_OPERAND_TYPE (20180810/psparse-516)
[ +1.657952] ACPI Warning: SystemIO range 0x0000000000000428-0x000000000000042F conflicts with OpRegion 0x0000000000000400-0x000000000000044F (\GPIS) (20180810/utaddress-213)
[ +0.000006] ACPI Warning: SystemIO range 0x0000000000000428-0x000000000000042F conflicts with OpRegion 0x0000000000000400-0x000000000000047F (\PMIO) (20180810/utaddress-213)
[ +0.000009] ACPI Warning: SystemIO range 0x0000000000000540-0x000000000000054F conflicts with OpRegion 0x0000000000000500-0x000000000000057F (\GPIO) (20180810/utaddress-213)
[ +0.000004] ACPI Warning: SystemIO range 0x0000000000000540-0x000000000000054F conflicts with OpRegion 0x0000000000000500-0x0000000000000563 (\GP01) (20180810/utaddress-213)
[ +0.000004] ACPI Warning: SystemIO range 0x0000000000000530-0x000000000000053F conflicts with OpRegion 0x0000000000000500-0x000000000000057F (\GPIO) (20180810/utaddress-213)
[ +0.000004] ACPI Warning: SystemIO range 0x0000000000000530-0x000000000000053F conflicts with OpRegion 0x0000000000000500-0x0000000000000563 (\GP01) (20180810/utaddress-213)
[ +0.000004] ACPI Warning: SystemIO range 0x0000000000000500-0x000000000000052F conflicts with OpRegion 0x0000000000000500-0x000000000000057F (\GPIO) (20180810/utaddress-213)
[ +0.000004] ACPI Warning: SystemIO range 0x0000000000000500-0x000000000000052F conflicts with OpRegion 0x0000000000000500-0x0000000000000563 (\GP01) (20180810/utaddress-213)
[ +0.000004] lpc_ich: Resource conflict(s) found affecting gpio_ich
[ +7.405140] systemd[1]: Configuration file /lib/systemd/system/kubelet.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
[ +0.034932] systemd[1]: Configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
[ +1.314142] nvidia: loading out-of-tree module taints kernel.
[ +0.000020] nvidia: module license 'NVIDIA' taints kernel.
[ +0.000001] Disabling lock debugging due to kernel taint
[ +0.020773] NVRM: loading NVIDIA UNIX x86_64 Kernel Module 390.116 Sun Jan 27 07:21:36 PST 2019 (using threaded interrupts)
[ +2.296245] iwlwifi 0000:03:00.0: can't disable ASPM; OS doesn't have ASPM control
[ +1.635454] uvcvideo 3-1.3:1.0: Entity type for entity Extension 6 was not initialized!
[ +0.000005] uvcvideo 3-1.3:1.0: Entity type for entity Processing 5 was not initialized!
[ +0.000002] uvcvideo 3-1.3:1.0: Entity type for entity Selector 4 was not initialized!
[ +0.000003] uvcvideo 3-1.3:1.0: Entity type for entity Camera 1 was not initialized!
[Apr11 23:52] resource sanity check: requesting [mem 0x000e0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000e0000-0x000e3fff window]
[ +0.000166] caller _nv029937rm+0x57/0x90 [nvidia] mapping multiple BARs
[ +0.006168] ACPI Warning: _SB.PCI0.PEG0.PEGP._DSM: Argument #4 type mismatch - Found [Buffer], ACPI requires [Package] (20180810/nsarguments-66)
[Apr11 23:54] FAT-fs (sdb1): Volume was not properly unmounted. Some data may be corrupt. Please run fsck.
[Apr12 01:05] IRQ 26: no longer affine to CPU3
[ +0.024055] IRQ 32: no longer affine to CPU4
[ +0.023832] IRQ 23: no longer affine to CPU5
[ +0.000005] IRQ 25: no longer affine to CPU5
[ +0.024015] IRQ 30: no longer affine to CPU6
[ +0.020027] IRQ 29: no longer affine to CPU7
[ +0.012773] cache: parent cpu1 should not be sleeping
[ +0.003686] cache: parent cpu2 should not be sleeping
[ +0.003691] cache: parent cpu3 should not be sleeping
[ +0.002928] cache: parent cpu4 should not be sleeping
[ +0.002914] cache: parent cpu5 should not be sleeping
[ +0.002790] cache: parent cpu6 should not be sleeping
[ +0.002828] cache: parent cpu7 should not be sleeping
[ +0.124481] iwlwifi 0000:03:00.0: RF_KILL bit toggled to enable radio.
[ +0.000388] ACPI: button: The lid device is not compliant to SW_LID.
[ +2.919453] done.
[Apr12 01:25] FAT-fs (sdb1): Volume was not properly unmounted. Some data may be corrupt. Please run fsck.

==> kernel <==
09:29:01 up 9:38, 1 user, load average: 0.36, 0.36, 0.36
Linux Ahmadreza 4.19.0-kali4-amd64 #1 SMP Debian 4.19.28-2kali1 (2019-03-18) x86_64 GNU/Linux

==> kube-apiserver <==
I0412 04:58:36.970657 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0412 04:58:36.970978 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0412 04:58:37.971211 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0412 04:58:37.971510 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0412 04:58:38.971818 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0412 04:58:38.972095 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0412 04:58:39.972371 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0412 04:58:39.972743 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0412 04:58:40.973033 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0412 04:58:40.973502 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0412 04:58:41.973787 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0412 04:58:41.974124 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0412 04:58:42.974421 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0412 04:58:42.974672 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0412 04:58:43.974948 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0412 04:58:43.975248 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0412 04:58:44.975479 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0412 04:58:44.975784 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0412 04:58:45.976098 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0412 04:58:45.976381 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0412 04:58:46.976605 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0412 04:58:46.976878 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0412 04:58:47.977187 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0412 04:58:47.977588 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0412 04:58:48.977644 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0412 04:58:48.977870 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0412 04:58:49.978099 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0412 04:58:49.978347 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0412 04:58:50.978608 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0412 04:58:50.978879 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0412 04:58:51.979168 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0412 04:58:51.979465 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0412 04:58:52.979684 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0412 04:58:52.979924 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0412 04:58:53.980158 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0412 04:58:53.980385 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0412 04:58:54.980569 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0412 04:58:54.980777 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0412 04:58:55.981193 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0412 04:58:55.981453 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0412 04:58:56.981872 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0412 04:58:56.982112 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0412 04:58:57.982361 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0412 04:58:57.982674 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0412 04:58:58.982885 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0412 04:58:58.983115 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0412 04:58:59.983348 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0412 04:58:59.983615 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
I0412 04:59:00.983880 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
I0412 04:59:00.984191 1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002

==> kube-scheduler <==
E0412 04:36:03.168373 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
E0412 04:36:03.168903 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
E0412 04:36:03.169769 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
E0412 04:36:03.170799 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
E0412 04:36:03.171895 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
E0412 04:36:03.173387 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
E0412 04:36:03.174887 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
E0412 04:36:03.175915 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
E0412 04:36:03.177100 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
E0412 04:36:03.178144 1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%3DFailed%!C(MISSING)status.phase%3DSucceeded&limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
E0412 04:36:04.170310 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
E0412 04:36:04.170853 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
E0412 04:36:04.171870 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
E0412 04:36:04.172826 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
E0412 04:36:04.173793 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
E0412 04:36:04.174924 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
E0412 04:36:04.176162 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
E0412 04:36:04.177068 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
E0412 04:36:04.178181 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
E0412 04:36:04.179249 1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%3DFailed%!C(MISSING)status.phase%3DSucceeded&limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
E0412 04:36:05.172313 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
E0412 04:36:05.172836 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
E0412 04:36:05.173646 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
E0412 04:36:05.174833 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
E0412 04:36:05.175868 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
E0412 04:36:05.176495 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
E0412 04:36:05.177510 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
E0412 04:36:05.179249 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
E0412 04:36:05.180349 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
E0412 04:36:05.181178 1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%3DFailed%!C(MISSING)status.phase%3DSucceeded&limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
E0412 04:36:06.174279 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
E0412 04:36:06.174945 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
E0412 04:36:06.175600 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
E0412 04:36:06.176751 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
E0412 04:36:06.177801 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
E0412 04:36:06.178919 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
E0412 04:36:06.179901 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
E0412 04:36:06.181021 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
E0412 04:36:06.182162 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
E0412 04:36:06.183291 1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%3DFailed%!C(MISSING)status.phase%3DSucceeded&limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
E0412 04:36:12.235033 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0412 04:36:12.244568 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0412 04:36:12.244653 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0412 04:36:12.244725 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0412 04:36:12.251132 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0412 04:36:12.251228 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
I0412 04:36:14.142562 1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
I0412 04:36:14.242795 1 controller_utils.go:1034] Caches are synced for scheduler controller
I0412 04:36:14.242918 1 leaderelection.go:217] attempting to acquire leader lease kube-system/kube-scheduler...
I0412 04:36:32.452078 1 leaderelection.go:227] successfully acquired lease kube-system/kube-scheduler

==> kubelet <==
-- Logs begin at Thu 2019-04-11 23:51:07 +0430, end at Fri 2019-04-12 09:29:01 +0430. --
Apr 12 09:28:14 Ahmadreza kubelet[14260]: W0412 09:28:14.744599 14260 eviction_manager.go:333] eviction manager: attempting to reclaim ephemeral-storage
Apr 12 09:28:14 Ahmadreza kubelet[14260]: I0412 09:28:14.744640 14260 container_gc.go:85] attempting to delete unused containers
Apr 12 09:28:14 Ahmadreza kubelet[14260]: I0412 09:28:14.756764 14260 image_gc_manager.go:317] attempting to delete unused images
Apr 12 09:28:14 Ahmadreza kubelet[14260]: I0412 09:28:14.769517 14260 eviction_manager.go:344] eviction manager: must evict pod(s) to reclaim ephemeral-storage
Apr 12 09:28:14 Ahmadreza kubelet[14260]: I0412 09:28:14.769630 14260 eviction_manager.go:362] eviction manager: pods ranked for eviction: kube-scheduler-minikube_kube-system(58272442e226c838b193bbba4c44091e), kube-apiserver-minikube_kube-system(bed9378230899740b9b543ba67fc6aba), kube-controller-manager-minikube_kube-system(bdfb77c730b28423d94cceb95c2f9b79), etcd-minikube_kube-system(9b3117e6b116da497f1c7069f2976961)
Apr 12 09:28:14 Ahmadreza kubelet[14260]: E0412 09:28:14.769670 14260 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-scheduler-minikube_kube-system(58272442e226c838b193bbba4c44091e)
Apr 12 09:28:14 Ahmadreza kubelet[14260]: E0412 09:28:14.769688 14260 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-apiserver-minikube_kube-system(bed9378230899740b9b543ba67fc6aba)
Apr 12 09:28:14 Ahmadreza kubelet[14260]: E0412 09:28:14.769704 14260 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-controller-manager-minikube_kube-system(bdfb77c730b28423d94cceb95c2f9b79)
Apr 12 09:28:14 Ahmadreza kubelet[14260]: E0412 09:28:14.769719 14260 eviction_manager.go:557] eviction manager: cannot evict a critical static pod etcd-minikube_kube-system(9b3117e6b116da497f1c7069f2976961)
Apr 12 09:28:14 Ahmadreza kubelet[14260]: I0412 09:28:14.769729 14260 eviction_manager.go:385] eviction manager: unable to evict any pods from the node
Apr 12 09:28:24 Ahmadreza kubelet[14260]: W0412 09:28:24.794494 14260 eviction_manager.go:333] eviction manager: attempting to reclaim ephemeral-storage
Apr 12 09:28:24 Ahmadreza kubelet[14260]: I0412 09:28:24.794522 14260 container_gc.go:85] attempting to delete unused containers
Apr 12 09:28:24 Ahmadreza kubelet[14260]: I0412 09:28:24.804204 14260 image_gc_manager.go:317] attempting to delete unused images
Apr 12 09:28:24 Ahmadreza kubelet[14260]: I0412 09:28:24.824277 14260 eviction_manager.go:344] eviction manager: must evict pod(s) to reclaim ephemeral-storage
Apr 12 09:28:24 Ahmadreza kubelet[14260]: I0412 09:28:24.824630 14260 eviction_manager.go:362] eviction manager: pods ranked for eviction: kube-scheduler-minikube_kube-system(58272442e226c838b193bbba4c44091e), kube-apiserver-minikube_kube-system(bed9378230899740b9b543ba67fc6aba), kube-controller-manager-minikube_kube-system(bdfb77c730b28423d94cceb95c2f9b79), etcd-minikube_kube-system(9b3117e6b116da497f1c7069f2976961)
Apr 12 09:28:24 Ahmadreza kubelet[14260]: E0412 09:28:24.824736 14260 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-scheduler-minikube_kube-system(58272442e226c838b193bbba4c44091e)
Apr 12 09:28:24 Ahmadreza kubelet[14260]: E0412 09:28:24.824788 14260 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-apiserver-minikube_kube-system(bed9378230899740b9b543ba67fc6aba)
Apr 12 09:28:24 Ahmadreza kubelet[14260]: E0412 09:28:24.824832 14260 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-controller-manager-minikube_kube-system(bdfb77c730b28423d94cceb95c2f9b79)
Apr 12 09:28:24 Ahmadreza kubelet[14260]: E0412 09:28:24.824878 14260 eviction_manager.go:557] eviction manager: cannot evict a critical static pod etcd-minikube_kube-system(9b3117e6b116da497f1c7069f2976961)
Apr 12 09:28:24 Ahmadreza kubelet[14260]: I0412 09:28:24.824919 14260 eviction_manager.go:385] eviction manager: unable to evict any pods from the node
Apr 12 09:28:34 Ahmadreza kubelet[14260]: W0412 09:28:34.857205 14260 eviction_manager.go:333] eviction manager: attempting to reclaim ephemeral-storage
Apr 12 09:28:34 Ahmadreza kubelet[14260]: I0412 09:28:34.857240 14260 container_gc.go:85] attempting to delete unused containers
Apr 12 09:28:34 Ahmadreza kubelet[14260]: I0412 09:28:34.865290 14260 image_gc_manager.go:317] attempting to delete unused images
Apr 12 09:28:34 Ahmadreza kubelet[14260]: I0412 09:28:34.883108 14260 eviction_manager.go:344] eviction manager: must evict pod(s) to reclaim ephemeral-storage
Apr 12 09:28:34 Ahmadreza kubelet[14260]: I0412 09:28:34.883293 14260 eviction_manager.go:362] eviction manager: pods ranked for eviction: kube-scheduler-minikube_kube-system(58272442e226c838b193bbba4c44091e), kube-apiserver-minikube_kube-system(bed9378230899740b9b543ba67fc6aba), kube-controller-manager-minikube_kube-system(bdfb77c730b28423d94cceb95c2f9b79), etcd-minikube_kube-system(9b3117e6b116da497f1c7069f2976961)
Apr 12 09:28:34 Ahmadreza kubelet[14260]: E0412 09:28:34.883356 14260 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-scheduler-minikube_kube-system(58272442e226c838b193bbba4c44091e)
Apr 12 09:28:34 Ahmadreza kubelet[14260]: E0412 09:28:34.883397 14260 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-apiserver-minikube_kube-system(bed9378230899740b9b543ba67fc6aba)
Apr 12 09:28:34 Ahmadreza kubelet[14260]: E0412 09:28:34.883420 14260 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-controller-manager-minikube_kube-system(bdfb77c730b28423d94cceb95c2f9b79)
Apr 12 09:28:34 Ahmadreza kubelet[14260]: E0412 09:28:34.883442 14260 eviction_manager.go:557] eviction manager: cannot evict a critical static pod etcd-minikube_kube-system(9b3117e6b116da497f1c7069f2976961)
Apr 12 09:28:34 Ahmadreza kubelet[14260]: I0412 09:28:34.883457 14260 eviction_manager.go:385] eviction manager: unable to evict any pods from the node
Apr 12 09:28:44 Ahmadreza kubelet[14260]: W0412 09:28:44.907339 14260 eviction_manager.go:333] eviction manager: attempting to reclaim ephemeral-storage
Apr 12 09:28:44 Ahmadreza kubelet[14260]: I0412 09:28:44.907367 14260 container_gc.go:85] attempting to delete unused containers
Apr 12 09:28:44 Ahmadreza kubelet[14260]: I0412 09:28:44.916626 14260 image_gc_manager.go:317] attempting to delete unused images
Apr 12 09:28:44 Ahmadreza kubelet[14260]: I0412 09:28:44.935398 14260 eviction_manager.go:344] eviction manager: must evict pod(s) to reclaim ephemeral-storage
Apr 12 09:28:44 Ahmadreza kubelet[14260]: I0412 09:28:44.935610 14260 eviction_manager.go:362] eviction manager: pods ranked for eviction: kube-scheduler-minikube_kube-system(58272442e226c838b193bbba4c44091e), kube-apiserver-minikube_kube-system(bed9378230899740b9b543ba67fc6aba), kube-controller-manager-minikube_kube-system(bdfb77c730b28423d94cceb95c2f9b79), etcd-minikube_kube-system(9b3117e6b116da497f1c7069f2976961)
Apr 12 09:28:44 Ahmadreza kubelet[14260]: E0412 09:28:44.935678 14260 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-scheduler-minikube_kube-system(58272442e226c838b193bbba4c44091e)
Apr 12 09:28:44 Ahmadreza kubelet[14260]: E0412 09:28:44.935717 14260 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-apiserver-minikube_kube-system(bed9378230899740b9b543ba67fc6aba)
Apr 12 09:28:44 Ahmadreza kubelet[14260]: E0412 09:28:44.935751 14260 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-controller-manager-minikube_kube-system(bdfb77c730b28423d94cceb95c2f9b79)
Apr 12 09:28:44 Ahmadreza kubelet[14260]: E0412 09:28:44.935785 14260 eviction_manager.go:557] eviction manager: cannot evict a critical static pod etcd-minikube_kube-system(9b3117e6b116da497f1c7069f2976961)
Apr 12 09:28:44 Ahmadreza kubelet[14260]: I0412 09:28:44.935808 14260 eviction_manager.go:385] eviction manager: unable to evict any pods from the node
Apr 12 09:28:54 Ahmadreza kubelet[14260]: W0412 09:28:54.961559 14260 eviction_manager.go:333] eviction manager: attempting to reclaim ephemeral-storage
Apr 12 09:28:54 Ahmadreza kubelet[14260]: I0412 09:28:54.961588 14260 container_gc.go:85] attempting to delete unused containers
Apr 12 09:28:54 Ahmadreza kubelet[14260]: I0412 09:28:54.969837 14260 image_gc_manager.go:317] attempting to delete unused images
Apr 12 09:28:54 Ahmadreza kubelet[14260]: I0412 09:28:54.987630 14260 eviction_manager.go:344] eviction manager: must evict pod(s) to reclaim ephemeral-storage
Apr 12 09:28:54 Ahmadreza kubelet[14260]: I0412 09:28:54.987806 14260 eviction_manager.go:362] eviction manager: pods ranked for eviction: kube-scheduler-minikube_kube-system(58272442e226c838b193bbba4c44091e), kube-apiserver-minikube_kube-system(bed9378230899740b9b543ba67fc6aba), kube-controller-manager-minikube_kube-system(bdfb77c730b28423d94cceb95c2f9b79), etcd-minikube_kube-system(9b3117e6b116da497f1c7069f2976961)
Apr 12 09:28:54 Ahmadreza kubelet[14260]: E0412 09:28:54.987862 14260 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-scheduler-minikube_kube-system(58272442e226c838b193bbba4c44091e)
Apr 12 09:28:54 Ahmadreza kubelet[14260]: E0412 09:28:54.987891 14260 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-apiserver-minikube_kube-system(bed9378230899740b9b543ba67fc6aba)
Apr 12 09:28:54 Ahmadreza kubelet[14260]: E0412 09:28:54.987914 14260 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-controller-manager-minikube_kube-system(bdfb77c730b28423d94cceb95c2f9b79)
Apr 12 09:28:54 Ahmadreza kubelet[14260]: E0412 09:28:54.987939 14260 eviction_manager.go:557] eviction manager: cannot evict a critical static pod etcd-minikube_kube-system(9b3117e6b116da497f1c7069f2976961)
Apr 12 09:28:54 Ahmadreza kubelet[14260]: I0412 09:28:54.987956 14260 eviction_manager.go:385] eviction manager: unable to evict any pods from the node

Operating system: Kali Linux 2019.1a
All software is at the latest version.

I would appreciate any response and help.

@tstromberg
Contributor

Your system seems to be out of disk space and/or memory. Do you mind sharing the output of:

df -k
free

Thanks!
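
For reference, the kubelet's default hard-eviction thresholds are memory.available<100Mi, nodefs.available<10% and imagefs.available<15%, so anything above roughly 90% usage on the filesystem backing Docker and the kubelet will trigger exactly these evictions. With the none driver the host filesystem is the node filesystem, so a rough sketch of what to look at (adjust the paths to wherever Docker keeps its data on your machine):

df -k /var/lib/docker /var/lib/kubelet      # node/image filesystem usage; above ~90% used trips the default thresholds
free -m                                     # "available" memory; the default memory.available threshold is only 100Mi
kubectl describe nodes | grep -i pressure   # DiskPressure=True confirms what the eviction manager is reacting to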

tstromberg added the ev/eviction (pod evictions) label on Apr 25, 2019
tstromberg changed the title from "Error starting cluster: wait: waiting for k8s-app=kube-dns: timed out waiting for the condition" to "kube-dns timed out: eviction manager: must evict pod(s) to reclaim ephemeral-storage" on Apr 25, 2019
tstromberg added the priority/awaiting-more-evidence and triage/needs-information labels on Apr 25, 2019
@ahmadreza9
Author

ahmadreza9 commented May 8, 2019

😄 minikube v1.0.0 on linux (amd64)
🤹 Downloading Kubernetes v1.14.0 images in the background ...
🔥 Creating none VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
📶 "minikube" IP address is 10.0.0.68
🐳 Configuring Docker as the container runtime ...
🐳 Version of container runtime is 18.03.1-ce
⌛ Waiting for image downloads to complete ...
✨ Preparing Kubernetes environment ...
🚜 Pulling images required by Kubernetes v1.14.0 ...
🚀 Launching Kubernetes v1.14.0 using kubeadm ...
⌛ Waiting for pods: apiserver proxy etcd scheduler controller dns
🔑 Configuring cluster permissions ...
🤔 Verifying component health .....
🤹 Configuring local host environment ...

⚠️ The 'none' driver provides limited isolation and may reduce system security and reliability.
⚠️ For more information, see:
👉 https://github.com/kubernetes/minikube/blob/master/docs/vmdriver-none.md

⚠️ kubectl and minikube configuration will be stored in /root
⚠️ To use kubectl or minikube commands as your own user, you may
⚠️ need to relocate them. For example, to overwrite your own settings:

▪ sudo mv /root/.kube /root/.minikube $HOME
▪ sudo chown -R $USER $HOME/.kube $HOME/.minikube

💡 This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
💗 kubectl is now configured to use "minikube"
🏄 Done! Thank you for using minikube!

@ahmadreza9
Author

ahmadreza9 commented May 8, 2019

Your system seems to be out of disk space and/or memory. Do you mind sharing the output of:

df -k
free

Thanks!

You were right. I didn't have enough storage for the installation, so I freed up some space, deleted minikube, and started again. The output above shows that minikube installed correctly this time.
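
For anyone who hits the same symptom, the rough sequence that freed things up here (standard minikube and Docker commands; note that prune is destructive and removes every unused image and stopped container on the host):

minikube delete                    # throw away the half-started cluster state
docker system prune -a --volumes   # reclaim unused images, containers, networks and volumes
minikube start --vm-driver=none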

@amruthar

amruthar commented May 29, 2019

When initially deployed, my application seems to work fine and all pods are up and running. But after several hours of inactivity, minikube encounters disk pressure and tries to evict the critical pods as well as my application pods. I have no idea why this is happening, especially since the resources used by the application aren't large.

Minikube logs are as follows:

E0528 22:35:38.451727       1 scheduling_queue.go:468] Unable to find backoff value for pod application-pod-1 in backoffQ
E0528 22:35:39.451914       1 scheduling_queue.go:468] Unable to find backoff value for pod application-pod-2 in backoffQ
E0528 22:35:41.452216       1 scheduling_queue.go:468] Unable to find backoff value for pod application-pod-3 in backoffQ
E0528 22:35:55.454173       1 scheduling_queue.go:468] Unable to find backoff value for pod application-pod-4 in backoffQ
E0528 22:36:05.455245       1 scheduling_queue.go:468] Unable to find backoff value for pod application-pod-5 in backoffQ
E0528 21:19:02.661157       1 watcher.go:208] watch chan error: etcdserver: mvcc: required revision has been compacted
E0528 21:26:50.766291       1 watcher.go:208] watch chan error: etcdserver: mvcc: required revision has been compacted
E0528 21:42:02.829178       1 watcher.go:208] watch chan error: etcdserver: mvcc: required revision has been compacted
E0528 21:56:34.885640       1 watcher.go:208] watch chan error: etcdserver: mvcc: required revision has been compacted
E0528 22:12:01.934972       1 watcher.go:208] watch chan error: etcdserver: mvcc: required revision has been compacted
E0528 22:21:10.978633       1 watcher.go:208] watch chan error: etcdserver: mvcc: required revision has been compacted
May 29 08:56:29 arch-srv01 kubelet[12544]: W0529 08:56:29.743360   12544 eviction_manager.go:333] eviction manager: attempting to reclaim ephemeral-storage
May 29 08:56:29 arch-srv01 kubelet[12544]: I0529 08:56:29.743381   12544 eviction_manager.go:344] eviction manager: must evict pod(s) to reclaim ephemeral-storage
May 29 08:56:29 arch-srv01 kubelet[12544]: I0529 08:56:29.743414   12544 eviction_manager.go:362] eviction manager: pods ranked for eviction: etcd-minikube_kube-system(b1c09cdca0c7af7dd622b51ca28a13f6), kube-apiserver-minikube_kube-system(6464641da67082de81d7aba70b212620), kube-controller-manager-minikube_kube-system(7602d8a1f5a1462a5e0792a6488cbbef), kube-scheduler-minikube_kube-system(9b290132363a92652555896288ca3f88)
May 29 08:56:29 arch-srv01 kubelet[12544]: E0529 08:56:29.743433   12544 eviction_manager.go:557] eviction manager: cannot evict a critical static pod etcd-minikube_kube-system(b1c09cdca0c7af7dd622b51ca28a13f6)
May 29 08:56:29 arch-srv01 kubelet[12544]: E0529 08:56:29.743441   12544 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-apiserver-minikube_kube-system(6464641da67082de81d7aba70b212620)
May 29 08:56:29 arch-srv01 kubelet[12544]: E0529 08:56:29.743448   12544 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-controller-manager-minikube_kube-system(7602d8a1f5a1462a5e0792a6488cbbef)
May 29 08:56:29 arch-srv01 kubelet[12544]: E0529 08:56:29.743455   12544 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-scheduler-minikube_kube-system(9b290132363a92652555896288ca3f88)
May 29 08:56:29 arch-srv01 kubelet[12544]: I0529 08:56:29.743459   12544 eviction_manager.go:385] eviction manager: unable to evict any pods from the node
May 29 08:56:39 arch-srv01 kubelet[12544]: W0529 08:56:39.757272   12544 eviction_manager.go:333] eviction manager: attempting to reclaim ephemeral-storage
May 29 08:56:39 arch-srv01 kubelet[12544]: I0529 08:56:39.757292   12544 eviction_manager.go:344] eviction manager: must evict pod(s) to reclaim ephemeral-storage
May 29 08:56:39 arch-srv01 kubelet[12544]: I0529 08:56:39.757322   12544 eviction_manager.go:362] eviction manager: pods ranked for eviction: kube-scheduler-minikube_kube-system(9b290132363a92652555896288ca3f88), etcd-minikube_kube-system(b1c09cdca0c7af7dd622b51ca28a13f6), kube-apiserver-minikube_kube-system(6464641da67082de81d7aba70b212620), kube-controller-manager-minikube_kube-system(7602d8a1f5a1462a5e0792a6488cbbef)
May 29 08:56:39 arch-srv01 kubelet[12544]: E0529 08:56:39.757341   12544 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-scheduler-minikube_kube-system(9b290132363a92652555896288ca3f88)
May 29 08:56:39 arch-srv01 kubelet[12544]: E0529 08:56:39.757350   12544 eviction_manager.go:557] eviction manager: cannot evict a critical static pod etcd-minikube_kube-system(b1c09cdca0c7af7dd622b51ca28a13f6)
May 29 08:56:39 arch-srv01 kubelet[12544]: E0529 08:56:39.757356   12544 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-apiserver-minikube_kube-system(6464641da67082de81d7aba70b212620)
May 29 08:56:39 arch-srv01 kubelet[12544]: E0529 08:56:39.757362   12544 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-controller-manager-minikube_kube-system(7602d8a1f5a1462a5e0792a6488cbbef)
May 29 08:56:39 arch-srv01 kubelet[12544]: I0529 08:56:39.757367   12544 eviction_manager.go:385] eviction manager: unable to evict any pods from the node
May 29 08:56:49 arch-srv01 kubelet[12544]: W0529 08:56:49.773164   12544 eviction_manager.go:333] eviction manager: attempting to reclaim ephemeral-storage
May 29 08:56:49 arch-srv01 kubelet[12544]: I0529 08:56:49.773185   12544 eviction_manager.go:344] eviction manager: must evict pod(s) to reclaim ephemeral-storage
May 29 08:56:49 arch-srv01 kubelet[12544]: I0529 08:56:49.773215   12544 eviction_manager.go:362] eviction manager: pods ranked for eviction: kube-scheduler-minikube_kube-system(9b290132363a92652555896288ca3f88), etcd-minikube_kube-system(b1c09cdca0c7af7dd622b51ca28a13f6), kube-apiserver-minikube_kube-system(6464641da67082de81d7aba70b212620), kube-controller-manager-minikube_kube-system(7602d8a1f5a1462a5e0792a6488cbbef)
May 29 08:56:49 arch-srv01 kubelet[12544]: E0529 08:56:49.773234   12544 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-scheduler-minikube_kube-system(9b290132363a92652555896288ca3f88)
May 29 08:56:49 arch-srv01 kubelet[12544]: E0529 08:56:49.773243   12544 eviction_manager.go:557] eviction manager: cannot evict a critical static pod etcd-minikube_kube-system(b1c09cdca0c7af7dd622b51ca28a13f6)
May 29 08:56:49 arch-srv01 kubelet[12544]: E0529 08:56:49.773249   12544 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-apiserver-minikube_kube-system(6464641da67082de81d7aba70b212620)
May 29 08:56:49 arch-srv01 kubelet[12544]: E0529 08:56:49.773256   12544 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-controller-manager-minikube_kube-system(7602d8a1f5a1462a5e0792a6488cbbef)
May 29 08:56:49 arch-srv01 kubelet[12544]: I0529 08:56:49.773261   12544 eviction_manager.go:385] eviction manager: unable to evict any pods from the node
May 29 08:56:59 arch-srv01 kubelet[12544]: W0529 08:56:59.787176   12544 eviction_manager.go:333] eviction manager: attempting to reclaim ephemeral-storage
May 29 08:56:59 arch-srv01 kubelet[12544]: I0529 08:56:59.787196   12544 eviction_manager.go:344] eviction manager: must evict pod(s) to reclaim ephemeral-storage
May 29 08:56:59 arch-srv01 kubelet[12544]: I0529 08:56:59.787226   12544 eviction_manager.go:362] eviction manager: pods ranked for eviction: kube-scheduler-minikube_kube-system(9b290132363a92652555896288ca3f88), etcd-minikube_kube-system(b1c09cdca0c7af7dd622b51ca28a13f6), kube-apiserver-minikube_kube-system(6464641da67082de81d7aba70b212620), kube-controller-manager-minikube_kube-system(7602d8a1f5a1462a5e0792a6488cbbef)
May 29 08:56:59 arch-srv01 kubelet[12544]: E0529 08:56:59.787245   12544 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-scheduler-minikube_kube-system(9b290132363a92652555896288ca3f88)
May 29 08:56:59 arch-srv01 kubelet[12544]: E0529 08:56:59.787253   12544 eviction_manager.go:557] eviction manager: cannot evict a critical static pod etcd-minikube_kube-system(b1c09cdca0c7af7dd622b51ca28a13f6)
May 29 08:56:59 arch-srv01 kubelet[12544]: E0529 08:56:59.787259   12544 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-apiserver-minikube_kube-system(6464641da67082de81d7aba70b212620)
May 29 08:56:59 arch-srv01 kubelet[12544]: E0529 08:56:59.787266   12544 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-controller-manager-minikube_kube-system(7602d8a1f5a1462a5e0792a6488cbbef)
May 29 08:56:59 arch-srv01 kubelet[12544]: I0529 08:56:59.787270   12544 eviction_manager.go:385] eviction manager: unable to evict any pods from the node

I'm having to stop and delete minikube, clean out the overlay directory under Docker's lib directory, and start minikube again to bring my application back up, only for this to happen again after several hours.

Does anybody know what's going wrong here?

Edit: Now the above-mentioned steps no longer work either, and I am unable to bring minikube back up.
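
Two things worth checking to narrow this down are what is actually consuming the space, and whether loosening the kubelet's eviction thresholds keeps the control plane alive long enough to debug. The flag pass-through in the last line is an assumption about how this minikube version handles --extra-config, so treat it as a sketch rather than a confirmed fix:

docker system df                  # where Docker's disk usage is going (images vs. containers vs. volumes)
du -sh /var/lib/docker/overlay2   # the overlay directory that keeps filling up
minikube start --vm-driver=none --extra-config=kubelet.eviction-hard="nodefs.available<5%,imagefs.available<5%"   # assumes minikube forwards this flag to the kubelet

Fundamentally, though, the eviction manager is only reacting to the node running low on free disk, so freeing space on the filesystem that holds /var/lib/docker is the real fix.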
