
none: Some fatal errors occurred: [ERROR Port-8443]: Port 8443 is in use #4251

Closed
vchekan opened this issue May 14, 2019 · 7 comments
Labels
cause/port-conflict: Start failures due to port or other network conflict
co/none-driver
kind/bug: Categorizes issue or PR as related to a bug.
priority/backlog: Higher priority than priority/awaiting-more-evidence.
r/2019q2: Issue was last reviewed 2019q2
triage/needs-information: Indicates an issue needs more information in order to work on it.

Comments

@vchekan

vchekan commented May 14, 2019

The exact command to reproduce the issue:

export MINIKUBE_WANTUPDATENOTIFICATION=false
export MINIKUBE_WANTREPORTERRORPROMPT=false
export CHANGE_MINIKUBE_NONE_USER=true
sudo -E minikube start --vm-driver=none

The full output of the command that failed:

😄  minikube v1.0.1 on linux (amd64)
🔥  Creating none VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
📶  "minikube" IP address is 192.168.8.159
🐳  Configuring Docker as the container runtime ...
🐳  Version of container runtime is 18.09.6
✨  Preparing Kubernetes environment ...
❌  Unable to load cached images: loading cached images: loading image /home/vadim/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.14.1: stat /home/vadim/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.14.1: no such file or directory
💾  Downloading kubeadm v1.14.1
💾  Downloading kubelet v1.14.1
🚜  Pulling images required by Kubernetes v1.14.1 ...
🚀  Launching Kubernetes v1.14.1 using kubeadm ... 

💣  Error starting cluster: kubeadm init: 
sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI 


: running command: 
sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI 

 output: [init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
	[WARNING FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
	[WARNING FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
	[WARNING FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
	[WARNING FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING Swap]: running with swap on is not supported. Please disable swap
	[WARNING FileExisting-ebtables]: ebtables not found in system path
	[WARNING FileExisting-ethtool]: ethtool not found in system path
	[WARNING FileExisting-socat]: socat not found in system path
	[WARNING Hostname]: hostname "minikube" could not be reached
	[WARNING Hostname]: hostname "minikube": lookup minikube on 127.0.0.53:53: server misbehaving
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[WARNING Port-10250]: Port 10250 is in use
	[WARNING DirAvailable--data-minikube]: /data/minikube is not empty
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR Port-8443]: Port 8443 is in use
	[ERROR Port-10251]: Port 10251 is in use
	[ERROR Port-10252]: Port 10252 is in use
	[ERROR Port-2379]: Port 2379 is in use
	[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
: running command: 
sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI 

.: exit status 1

😿  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉  https://github.com/kubernetes/minikube/issues/new
❌  Problems detected in "kube-addon-manager":
    error: unable to recognize "STDIN": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused

The output of the minikube logs command:

==> dmesg <==
[May11 16:28] secureboot: Secure boot could not be determined (mode 0)
[  +0.000000] pmd_set_huge: Cannot satisfy [mem 0xf8000000-0xf8200000] with a huge-page mapping due to MTRR override.
[  +1.100563] r8169 0000:02:00.0: can't disable ASPM; OS doesn't have ASPM control
[  +0.333285] ata5.00: supports DRM functions and may not be fully accessible
[  +0.000140] ata5.00: READ LOG DMA EXT failed, trying PIO
[  +0.018473] ata5.00: supports DRM functions and may not be fully accessible
[  +0.650433] usb 3-5.2: device descriptor read/64, error -32
[  +9.859326] kauditd_printk_skb: 58 callbacks suppressed
[  +0.489483] aufs au_opts_verify:1609:dockerd[1937]: dirperm1 breaks the protection by the permission bits on the lower branch
[May12 03:01] IRQ 16: no longer affine to CPU1
[  +0.000007] IRQ 29: no longer affine to CPU1
[  +0.024017] IRQ 23: no longer affine to CPU2
[  +0.000007] IRQ 27: no longer affine to CPU2
[  +0.032058] IRQ 26: no longer affine to CPU3
[  +0.000010] IRQ 28: no longer affine to CPU3
[  +0.012294]  cache: parent cpu1 should not be sleeping
[  +0.002224]  cache: parent cpu2 should not be sleeping
[  +0.002101]  cache: parent cpu3 should not be sleeping
[  +0.375470] ata5.00: supports DRM functions and may not be fully accessible
[  +0.018803] ata5.00: supports DRM functions and may not be fully accessible
[  +5.010804] ata1: link is slow to respond, please be patient (ready=0)
[  +4.651973] ata1: COMRESET failed (errno=-16)
[May12 06:52] sd 6:0:0:0: [sdc] No Caching mode page found
[  +0.000003] sd 6:0:0:0: [sdc] Assuming drive cache: write through
[  +0.012436] sd 6:0:0:1: [sdd] No Caching mode page found
[  +0.000009] sd 6:0:0:1: [sdd] Assuming drive cache: write through
[May12 11:07] sd 6:0:0:0: [sdc] No Caching mode page found
[  +0.000003] sd 6:0:0:0: [sdc] Assuming drive cache: write through
[  +0.001174] sd 6:0:0:1: [sdd] No Caching mode page found
[  +0.000003] sd 6:0:0:1: [sdd] Assuming drive cache: write through
[May12 12:03] kauditd_printk_skb: 32 callbacks suppressed
[May12 18:13] IRQ 16: no longer affine to CPU1
[  +0.000006] IRQ 29: no longer affine to CPU1
[  +0.024149] IRQ 23: no longer affine to CPU2
[  +0.000006] IRQ 27: no longer affine to CPU2
[  +0.032001] IRQ 26: no longer affine to CPU3
[  +0.000007] IRQ 28: no longer affine to CPU3
[  +0.011035]  cache: parent cpu1 should not be sleeping
[  +0.002241]  cache: parent cpu2 should not be sleeping
[  +0.002119]  cache: parent cpu3 should not be sleeping
[  +0.369614] ata5.00: supports DRM functions and may not be fully accessible
[  +0.019113] ata5.00: supports DRM functions and may not be fully accessible
[  +5.037481] ata1: link is slow to respond, please be patient (ready=0)
[  +4.676005] ata1: COMRESET failed (errno=-16)
[May13 02:54] kauditd_printk_skb: 37 callbacks suppressed

==> kernel <==
 20:38:38 up 2 days,  4:10,  1 user,  load average: 0.96, 0.86, 0.82
Linux desktop 4.18.0-18-generic #19~18.04.1-Ubuntu SMP Fri Apr 5 10:22:13 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

==> kube-addon-manager <==
INFO: == Kubernetes addon manager started at 2019-05-14T00:33:24+00:00 with ADDON_CHECK_INTERVAL_SEC=60 ==
error: unable to recognize "STDIN": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
WRN: == Failed to start /opt/namespace.yaml in namespace  at 2019-05-14T00:33:24+00:00. 99 tries remaining. ==
INFO: == Default service account in the kube-system namespace has token default-token-hr5j9 ==
find: '/etc/kubernetes/admission-controls': No such file or directory
INFO: == Entering periodical apply loop at 2019-05-14T00:33:29+00:00 ==
INFO: Leader is desktop
INFO: == Kubernetes addon ensure completed at 2019-05-14T00:33:29+00:00 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-05-14T00:33:31+00:00 ==
namespace/kube-system unchanged
INFO: == Successfully started /opt/namespace.yaml in namespace  at 2019-05-14T00:33:34+00:00
INFO: Leader is desktop
INFO: == Kubernetes addon ensure completed at 2019-05-14T00:34:29+00:00 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-05-14T00:34:30+00:00 ==
INFO: Leader is desktop
INFO: == Kubernetes addon ensure completed at 2019-05-14T00:35:30+00:00 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-05-14T00:35:31+00:00 ==
INFO: Leader is desktop
INFO: == Kubernetes addon ensure completed at 2019-05-14T00:36:29+00:00 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-05-14T00:36:31+00:00 ==
INFO: Leader is desktop
INFO: == Kubernetes addon ensure completed at 2019-05-14T00:37:29+00:00 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-05-14T00:37:31+00:00 ==
INFO: Leader is desktop
INFO: == Kubernetes addon ensure completed at 2019-05-14T00:38:29+00:00 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-05-14T00:38:30+00:00 ==

==> kube-apiserver <==
I0514 00:38:34.831507       1 log.go:172] http: TLS handshake error from 127.0.0.1:49094: remote error: tls: bad certificate
I0514 00:38:35.021488       1 log.go:172] http: TLS handshake error from 127.0.0.1:49096: remote error: tls: bad certificate
I0514 00:38:35.235536       1 log.go:172] http: TLS handshake error from 127.0.0.1:49100: remote error: tls: bad certificate
I0514 00:38:35.436229       1 log.go:172] http: TLS handshake error from 127.0.0.1:49102: remote error: tls: bad certificate
I0514 00:38:35.632126       1 log.go:172] http: TLS handshake error from 127.0.0.1:49104: remote error: tls: bad certificate
I0514 00:38:35.708998       1 log.go:172] http: TLS handshake error from 127.0.0.1:49106: remote error: tls: bad certificate
I0514 00:38:35.711809       1 log.go:172] http: TLS handshake error from 127.0.0.1:49108: remote error: tls: bad certificate
I0514 00:38:35.711850       1 log.go:172] http: TLS handshake error from 127.0.0.1:49110: remote error: tls: bad certificate
I0514 00:38:35.711876       1 log.go:172] http: TLS handshake error from 127.0.0.1:49112: remote error: tls: bad certificate
I0514 00:38:35.713858       1 log.go:172] http: TLS handshake error from 127.0.0.1:49114: remote error: tls: bad certificate
I0514 00:38:35.714111       1 log.go:172] http: TLS handshake error from 127.0.0.1:49116: remote error: tls: bad certificate
I0514 00:38:35.715785       1 log.go:172] http: TLS handshake error from 127.0.0.1:49118: remote error: tls: bad certificate
I0514 00:38:35.716377       1 log.go:172] http: TLS handshake error from 127.0.0.1:49124: remote error: tls: bad certificate
I0514 00:38:35.716620       1 log.go:172] http: TLS handshake error from 127.0.0.1:49120: remote error: tls: bad certificate
I0514 00:38:35.716870       1 log.go:172] http: TLS handshake error from 127.0.0.1:49122: remote error: tls: bad certificate
I0514 00:38:35.722733       1 log.go:172] http: TLS handshake error from 127.0.0.1:49126: remote error: tls: bad certificate
I0514 00:38:35.840634       1 log.go:172] http: TLS handshake error from 127.0.0.1:49128: remote error: tls: bad certificate
I0514 00:38:36.027863       1 log.go:172] http: TLS handshake error from 127.0.0.1:49130: remote error: tls: bad certificate
I0514 00:38:36.243197       1 log.go:172] http: TLS handshake error from 127.0.0.1:49132: remote error: tls: bad certificate
I0514 00:38:36.299517       1 log.go:172] http: TLS handshake error from 127.0.0.1:49134: remote error: tls: bad certificate
I0514 00:38:36.446919       1 log.go:172] http: TLS handshake error from 127.0.0.1:49136: remote error: tls: bad certificate
I0514 00:38:36.640301       1 log.go:172] http: TLS handshake error from 127.0.0.1:49138: remote error: tls: bad certificate
I0514 00:38:36.717558       1 log.go:172] http: TLS handshake error from 127.0.0.1:49140: remote error: tls: bad certificate
I0514 00:38:36.720112       1 log.go:172] http: TLS handshake error from 127.0.0.1:49150: remote error: tls: bad certificate
I0514 00:38:36.720144       1 log.go:172] http: TLS handshake error from 127.0.0.1:49144: remote error: tls: bad certificate
I0514 00:38:36.720168       1 log.go:172] http: TLS handshake error from 127.0.0.1:49142: remote error: tls: bad certificate
I0514 00:38:36.721405       1 log.go:172] http: TLS handshake error from 127.0.0.1:49146: remote error: tls: bad certificate
I0514 00:38:36.721835       1 log.go:172] http: TLS handshake error from 127.0.0.1:49152: remote error: tls: bad certificate
I0514 00:38:36.721908       1 log.go:172] http: TLS handshake error from 127.0.0.1:49148: remote error: tls: bad certificate
I0514 00:38:36.722280       1 log.go:172] http: TLS handshake error from 127.0.0.1:49154: remote error: tls: bad certificate
I0514 00:38:36.723077       1 log.go:172] http: TLS handshake error from 127.0.0.1:49156: remote error: tls: bad certificate
I0514 00:38:36.723116       1 log.go:172] http: TLS handshake error from 127.0.0.1:49158: remote error: tls: bad certificate
I0514 00:38:36.843667       1 log.go:172] http: TLS handshake error from 127.0.0.1:49160: remote error: tls: bad certificate
I0514 00:38:37.032639       1 log.go:172] http: TLS handshake error from 127.0.0.1:49164: remote error: tls: bad certificate
I0514 00:38:37.253082       1 log.go:172] http: TLS handshake error from 127.0.0.1:49166: remote error: tls: bad certificate
I0514 00:38:37.461185       1 log.go:172] http: TLS handshake error from 127.0.0.1:49168: remote error: tls: bad certificate
I0514 00:38:37.647529       1 log.go:172] http: TLS handshake error from 127.0.0.1:49170: remote error: tls: bad certificate
I0514 00:38:37.726883       1 log.go:172] http: TLS handshake error from 127.0.0.1:49172: remote error: tls: bad certificate
I0514 00:38:37.731172       1 log.go:172] http: TLS handshake error from 127.0.0.1:49178: remote error: tls: bad certificate
I0514 00:38:37.731797       1 log.go:172] http: TLS handshake error from 127.0.0.1:49174: remote error: tls: bad certificate
I0514 00:38:37.731833       1 log.go:172] http: TLS handshake error from 127.0.0.1:49184: remote error: tls: bad certificate
I0514 00:38:37.731857       1 log.go:172] http: TLS handshake error from 127.0.0.1:49182: remote error: tls: bad certificate
I0514 00:38:37.731881       1 log.go:172] http: TLS handshake error from 127.0.0.1:49176: remote error: tls: bad certificate
I0514 00:38:37.733898       1 log.go:172] http: TLS handshake error from 127.0.0.1:49186: remote error: tls: bad certificate
I0514 00:38:37.733938       1 log.go:172] http: TLS handshake error from 127.0.0.1:49180: remote error: tls: bad certificate
I0514 00:38:37.734391       1 log.go:172] http: TLS handshake error from 127.0.0.1:49190: remote error: tls: bad certificate
I0514 00:38:37.734678       1 log.go:172] http: TLS handshake error from 127.0.0.1:49188: remote error: tls: bad certificate
I0514 00:38:37.847115       1 log.go:172] http: TLS handshake error from 127.0.0.1:49194: remote error: tls: bad certificate
I0514 00:38:38.035683       1 log.go:172] http: TLS handshake error from 127.0.0.1:49198: remote error: tls: bad certificate
I0514 00:38:38.202140       1 log.go:172] http: TLS handshake error from 127.0.0.1:49200: remote error: tls: bad certificate

==> kube-scheduler <==
E0514 00:38:33.696199       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "minikubeCA")
E0514 00:38:33.696962       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "minikubeCA")
E0514 00:38:33.696988       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%3DFailed%!C(MISSING)status.phase%3DSucceeded&limit=500&resourceVersion=0: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "minikubeCA")
E0514 00:38:33.699704       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "minikubeCA")
E0514 00:38:33.700091       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "minikubeCA")
E0514 00:38:33.700353       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "minikubeCA")
E0514 00:38:33.700372       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "minikubeCA")
E0514 00:38:33.700394       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "minikubeCA")
E0514 00:38:33.701981       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "minikubeCA")

The operating system version:
Linux Mint 19.1 Tessa

@tstromberg
Contributor

The actual fatal error here is:

error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR Port-8443]: Port 8443 is in use
	[ERROR Port-10251]: Port 10251 is in use
	[ERROR Port-10252]: Port 10252 is in use
	[ERROR Port-2379]: Port 2379 is in use
	[ERROR Port-2380]: Port 2380 is in use

Basically, it seems that the apiserver and the other control-plane components are already running, but for whatever reason minikube doesn't know about them. Could you check whether minikube delete fixes this case?
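
For reference, a quick way to see what is holding those ports before deleting (a general sketch, not specific to this report; adjust the port list as needed):

# list listening sockets on the conflicting ports, with the owning process
sudo ss -tlnp | grep -E ':(8443|10251|10252|2379|2380) '
# then tear down the existing cluster state; with --vm-driver=none, run as root like the start command
sudo minikube delete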

@tstromberg tstromberg changed the title Certificate error none: Some fatal errors occurred: [ERROR Port-8443]: Port 8443 is in use May 14, 2019
@tstromberg tstromberg added cause/port-conflict, co/none-driver, kind/bug, and triage/needs-information labels May 14, 2019
@vchekan
Author

vchekan commented May 14, 2019

I ran minikube delete:

🔄  Uninstalling Kubernetes v1.14.1 using kubeadm ...
🔥  Deleting "minikube" from none ...
💔  The "minikube" cluster has been deleted.

and restarted:

sudo -E minikube start --vm-driver=none
😄  minikube v1.0.1 on linux (amd64)
🔥  Creating none VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
📶  "minikube" IP address is 192.168.8.159
🐳  Configuring Docker as the container runtime ...
🐳  Version of container runtime is 18.09.6
✨  Preparing Kubernetes environment ...
❌  Unable to load cached images: loading cached images: loading image /home/vadim/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1: stat /home/vadim/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1: no such file or directory
🚜  Pulling images required by Kubernetes v1.14.1 ...
🚀  Launching Kubernetes v1.14.1 using kubeadm ... 
⌛  Waiting for pods: apiserver proxy etcd scheduler controller dns
💣  Error starting cluster: wait: waiting for k8s-app=kube-dns: timed out waiting for the condition

😿  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉  https://github.com/kubernetes/minikube/issues/new
❌  Problems detected in "kubelet":
    May 14 17:36:25 desktop kubelet[8695]: I0514 17:36:25.887169    8695 eviction_manager.go:191] eviction manager: pods kube-proxy-p67m8_kube-system(3d555417-7690-11e9-b6ac-448a5ba0643b) evicted, waiting for pod to be cleaned up

Now it gives a different error:

==> dmesg <==
[  +0.000003] sd 6:0:0:0: [sdc] Assuming drive cache: write through
[  +0.001174] sd 6:0:0:1: [sdd] No Caching mode page found
[  +0.000003] sd 6:0:0:1: [sdd] Assuming drive cache: write through
[May12 12:03] kauditd_printk_skb: 32 callbacks suppressed
[May12 18:13] IRQ 16: no longer affine to CPU1
[  +0.000006] IRQ 29: no longer affine to CPU1
[  +0.024149] IRQ 23: no longer affine to CPU2
[  +0.000006] IRQ 27: no longer affine to CPU2
[  +0.032001] IRQ 26: no longer affine to CPU3
[  +0.000007] IRQ 28: no longer affine to CPU3
[  +0.011035]  cache: parent cpu1 should not be sleeping
[  +0.002241]  cache: parent cpu2 should not be sleeping
[  +0.002119]  cache: parent cpu3 should not be sleeping
[  +0.369614] ata5.00: supports DRM functions and may not be fully accessible
[  +0.019113] ata5.00: supports DRM functions and may not be fully accessible
[  +5.037481] ata1: link is slow to respond, please be patient (ready=0)
[  +4.676005] ata1: COMRESET failed (errno=-16)
[May13 02:54] kauditd_printk_skb: 37 callbacks suppressed
[May13 08:29] IRQ 16: no longer affine to CPU1
[  +0.000005] IRQ 29: no longer affine to CPU1
[  +0.032078] IRQ 23: no longer affine to CPU2
[  +0.000005] IRQ 27: no longer affine to CPU2
[  +0.028013] IRQ 26: no longer affine to CPU3
[  +0.000007] IRQ 28: no longer affine to CPU3
[  +0.011918]  cache: parent cpu1 should not be sleeping
[  +0.002257]  cache: parent cpu2 should not be sleeping
[  +0.002134]  cache: parent cpu3 should not be sleeping
[  +0.372018] ata5.00: supports DRM functions and may not be fully accessible
[  +0.018552] ata5.00: supports DRM functions and may not be fully accessible
[  +4.990702] ata1: link is slow to respond, please be patient (ready=0)
[  +4.707984] ata1: COMRESET failed (errno=-16)
[May13 09:16] IRQ 16: no longer affine to CPU1
[  +0.000005] IRQ 29: no longer affine to CPU1
[  +0.024103] IRQ 23: no longer affine to CPU2
[  +0.000005] IRQ 27: no longer affine to CPU2
[  +0.028057] IRQ 26: no longer affine to CPU3
[  +0.000008] IRQ 28: no longer affine to CPU3
[  +0.011724]  cache: parent cpu1 should not be sleeping
[  +0.002245]  cache: parent cpu2 should not be sleeping
[  +0.002123]  cache: parent cpu3 should not be sleeping
[  +0.381063] ata5.00: supports DRM functions and may not be fully accessible
[  +0.018664] ata5.00: supports DRM functions and may not be fully accessible
[  +2.561400] usb 4-3: Disable of device-initiated U1 failed.
[  +0.003499] usb 4-3: Disable of device-initiated U2 failed.
[  +2.495338] ata1: link is slow to respond, please be patient (ready=0)
[  +4.698135] ata1: COMRESET failed (errno=-16)
[May13 09:23] kauditd_printk_skb: 37 callbacks suppressed
[May13 09:27] kauditd_printk_skb: 37 callbacks suppressed
[May13 09:28] kauditd_printk_skb: 37 callbacks suppressed
[May13 12:32] kauditd_printk_skb: 37 callbacks suppressed

==> kernel <==
 17:43:45 up 3 days,  1:15,  1 user,  load average: 0.99, 0.95, 0.61
Linux desktop 4.18.0-18-generic #19~18.04.1-Ubuntu SMP Fri Apr 5 10:22:13 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

==> kube-apiserver <==
I0514 21:27:57.623015       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0514 21:27:57.652431       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0514 21:27:57.692877       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0514 21:27:57.751311       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0514 21:27:57.772741       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0514 21:27:57.812965       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0514 21:27:57.853583       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0514 21:27:57.892367       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0514 21:27:57.932762       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0514 21:27:57.974039       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0514 21:27:58.014396       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0514 21:27:58.054655       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0514 21:27:58.093226       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0514 21:27:58.132660       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0514 21:27:58.183463       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0514 21:27:58.214075       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0514 21:27:58.253741       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0514 21:27:58.294238       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0514 21:27:58.332971       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0514 21:27:58.373221       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0514 21:27:58.414098       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0514 21:27:58.453905       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0514 21:27:58.494003       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0514 21:27:58.534901       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0514 21:27:58.573597       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0514 21:27:58.614619       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0514 21:27:58.652112       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0514 21:27:58.653337       1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0514 21:27:58.692924       1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0514 21:27:58.744151       1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0514 21:27:58.772514       1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0514 21:27:58.814724       1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0514 21:27:58.822621       1 controller.go:606] quota admission added evaluator for: endpoints
I0514 21:27:58.864590       1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0514 21:27:58.895448       1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0514 21:27:58.931729       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0514 21:27:58.933279       1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0514 21:27:58.972681       1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0514 21:27:59.012657       1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0514 21:27:59.052714       1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0514 21:27:59.093623       1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0514 21:27:59.135067       1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0514 21:27:59.174455       1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
W0514 21:27:59.230747       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.8.159]
I0514 21:28:00.311707       1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0514 21:28:00.600570       1 controller.go:606] quota admission added evaluator for: deployments.apps
I0514 21:28:00.869186       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0514 21:28:02.329357       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0514 21:28:06.716329       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0514 21:28:06.982735       1 controller.go:606] quota admission added evaluator for: replicasets.apps

==> kube-scheduler <==
I0514 21:27:52.172086       1 serving.go:319] Generated self-signed cert in-memory
W0514 21:27:52.710042       1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0514 21:27:52.710056       1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0514 21:27:52.710075       1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0514 21:27:52.810714       1 server.go:142] Version: v1.14.1
I0514 21:27:52.810988       1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W0514 21:27:52.811549       1 authorization.go:47] Authorization is disabled
W0514 21:27:52.811557       1 authentication.go:55] Authentication is disabled
I0514 21:27:52.811563       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
I0514 21:27:52.811905       1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
E0514 21:27:55.862829       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0514 21:27:55.863080       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0514 21:27:55.863151       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0514 21:27:55.863223       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0514 21:27:55.863291       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0514 21:27:55.863342       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0514 21:27:55.863374       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0514 21:27:55.863475       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0514 21:27:55.863479       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0514 21:27:55.863504       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0514 21:27:56.863745       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0514 21:27:56.864660       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0514 21:27:56.865662       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0514 21:27:56.866692       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0514 21:27:56.867682       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0514 21:27:56.868680       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0514 21:27:56.869923       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0514 21:27:56.870904       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0514 21:27:56.872054       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0514 21:27:56.873098       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
I0514 21:27:58.713675       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
I0514 21:27:58.813998       1 controller_utils.go:1034] Caches are synced for scheduler controller
I0514 21:27:58.814355       1 leaderelection.go:217] attempting to acquire leader lease  kube-system/kube-scheduler...
I0514 21:27:58.823733       1 leaderelection.go:227] successfully acquired lease kube-system/kube-scheduler

==> kubelet <==
-- Logs begin at Tue 2018-12-25 22:22:39 EST, end at Tue 2019-05-14 17:43:45 EDT. --
May 14 17:42:57 desktop kubelet[8695]: W0514 17:42:57.861255    8695 eviction_manager.go:333] eviction manager: attempting to reclaim ephemeral-storage
May 14 17:42:57 desktop kubelet[8695]: I0514 17:42:57.861280    8695 container_gc.go:85] attempting to delete unused containers
May 14 17:42:57 desktop kubelet[8695]: I0514 17:42:57.864160    8695 image_gc_manager.go:317] attempting to delete unused images
May 14 17:42:57 desktop kubelet[8695]: I0514 17:42:57.874679    8695 eviction_manager.go:344] eviction manager: must evict pod(s) to reclaim ephemeral-storage
May 14 17:42:57 desktop kubelet[8695]: I0514 17:42:57.874742    8695 eviction_manager.go:362] eviction manager: pods ranked for eviction: kube-controller-manager-minikube_kube-system(decacd69524cb34d96e552c11c3ec281), kube-apiserver-minikube_kube-system(083310189e46973266844be77d59beb5), etcd-minikube_kube-system(8cc5a3f6cf69b1ca688335f21fff082c), kube-scheduler-minikube_kube-system(f44110a0ca540009109bfc32a7eb0baa)
May 14 17:42:57 desktop kubelet[8695]: E0514 17:42:57.874768    8695 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-controller-manager-minikube_kube-system(decacd69524cb34d96e552c11c3ec281)
May 14 17:42:57 desktop kubelet[8695]: E0514 17:42:57.874782    8695 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-apiserver-minikube_kube-system(083310189e46973266844be77d59beb5)
May 14 17:42:57 desktop kubelet[8695]: E0514 17:42:57.874791    8695 eviction_manager.go:557] eviction manager: cannot evict a critical static pod etcd-minikube_kube-system(8cc5a3f6cf69b1ca688335f21fff082c)
May 14 17:42:57 desktop kubelet[8695]: E0514 17:42:57.874799    8695 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-scheduler-minikube_kube-system(f44110a0ca540009109bfc32a7eb0baa)
May 14 17:42:57 desktop kubelet[8695]: I0514 17:42:57.874804    8695 eviction_manager.go:385] eviction manager: unable to evict any pods from the node
May 14 17:43:07 desktop kubelet[8695]: W0514 17:43:07.894164    8695 eviction_manager.go:333] eviction manager: attempting to reclaim ephemeral-storage
May 14 17:43:07 desktop kubelet[8695]: I0514 17:43:07.894197    8695 container_gc.go:85] attempting to delete unused containers
May 14 17:43:07 desktop kubelet[8695]: I0514 17:43:07.899027    8695 image_gc_manager.go:317] attempting to delete unused images
May 14 17:43:07 desktop kubelet[8695]: I0514 17:43:07.906021    8695 eviction_manager.go:344] eviction manager: must evict pod(s) to reclaim ephemeral-storage
May 14 17:43:07 desktop kubelet[8695]: I0514 17:43:07.906084    8695 eviction_manager.go:362] eviction manager: pods ranked for eviction: kube-controller-manager-minikube_kube-system(decacd69524cb34d96e552c11c3ec281), kube-apiserver-minikube_kube-system(083310189e46973266844be77d59beb5), etcd-minikube_kube-system(8cc5a3f6cf69b1ca688335f21fff082c), kube-scheduler-minikube_kube-system(f44110a0ca540009109bfc32a7eb0baa)
May 14 17:43:07 desktop kubelet[8695]: E0514 17:43:07.906106    8695 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-controller-manager-minikube_kube-system(decacd69524cb34d96e552c11c3ec281)
May 14 17:43:07 desktop kubelet[8695]: E0514 17:43:07.906116    8695 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-apiserver-minikube_kube-system(083310189e46973266844be77d59beb5)
May 14 17:43:07 desktop kubelet[8695]: E0514 17:43:07.906124    8695 eviction_manager.go:557] eviction manager: cannot evict a critical static pod etcd-minikube_kube-system(8cc5a3f6cf69b1ca688335f21fff082c)
May 14 17:43:07 desktop kubelet[8695]: E0514 17:43:07.906131    8695 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-scheduler-minikube_kube-system(f44110a0ca540009109bfc32a7eb0baa)
May 14 17:43:07 desktop kubelet[8695]: I0514 17:43:07.906136    8695 eviction_manager.go:385] eviction manager: unable to evict any pods from the node
May 14 17:43:17 desktop kubelet[8695]: W0514 17:43:17.927368    8695 eviction_manager.go:333] eviction manager: attempting to reclaim ephemeral-storage
May 14 17:43:17 desktop kubelet[8695]: I0514 17:43:17.927392    8695 container_gc.go:85] attempting to delete unused containers
May 14 17:43:17 desktop kubelet[8695]: I0514 17:43:17.930729    8695 image_gc_manager.go:317] attempting to delete unused images
May 14 17:43:17 desktop kubelet[8695]: I0514 17:43:17.937161    8695 eviction_manager.go:344] eviction manager: must evict pod(s) to reclaim ephemeral-storage
May 14 17:43:17 desktop kubelet[8695]: I0514 17:43:17.937242    8695 eviction_manager.go:362] eviction manager: pods ranked for eviction: kube-controller-manager-minikube_kube-system(decacd69524cb34d96e552c11c3ec281), kube-apiserver-minikube_kube-system(083310189e46973266844be77d59beb5), etcd-minikube_kube-system(8cc5a3f6cf69b1ca688335f21fff082c), kube-scheduler-minikube_kube-system(f44110a0ca540009109bfc32a7eb0baa)
May 14 17:43:17 desktop kubelet[8695]: E0514 17:43:17.937263    8695 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-controller-manager-minikube_kube-system(decacd69524cb34d96e552c11c3ec281)
May 14 17:43:17 desktop kubelet[8695]: E0514 17:43:17.937272    8695 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-apiserver-minikube_kube-system(083310189e46973266844be77d59beb5)
May 14 17:43:17 desktop kubelet[8695]: E0514 17:43:17.937279    8695 eviction_manager.go:557] eviction manager: cannot evict a critical static pod etcd-minikube_kube-system(8cc5a3f6cf69b1ca688335f21fff082c)
May 14 17:43:17 desktop kubelet[8695]: E0514 17:43:17.937286    8695 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-scheduler-minikube_kube-system(f44110a0ca540009109bfc32a7eb0baa)
May 14 17:43:17 desktop kubelet[8695]: I0514 17:43:17.937291    8695 eviction_manager.go:385] eviction manager: unable to evict any pods from the node
May 14 17:43:27 desktop kubelet[8695]: W0514 17:43:27.958762    8695 eviction_manager.go:333] eviction manager: attempting to reclaim ephemeral-storage
May 14 17:43:27 desktop kubelet[8695]: I0514 17:43:27.958807    8695 container_gc.go:85] attempting to delete unused containers
May 14 17:43:27 desktop kubelet[8695]: I0514 17:43:27.961522    8695 image_gc_manager.go:317] attempting to delete unused images
May 14 17:43:27 desktop kubelet[8695]: I0514 17:43:27.967516    8695 eviction_manager.go:344] eviction manager: must evict pod(s) to reclaim ephemeral-storage
May 14 17:43:27 desktop kubelet[8695]: I0514 17:43:27.967588    8695 eviction_manager.go:362] eviction manager: pods ranked for eviction: kube-controller-manager-minikube_kube-system(decacd69524cb34d96e552c11c3ec281), kube-apiserver-minikube_kube-system(083310189e46973266844be77d59beb5), etcd-minikube_kube-system(8cc5a3f6cf69b1ca688335f21fff082c), kube-scheduler-minikube_kube-system(f44110a0ca540009109bfc32a7eb0baa)
May 14 17:43:27 desktop kubelet[8695]: E0514 17:43:27.967609    8695 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-controller-manager-minikube_kube-system(decacd69524cb34d96e552c11c3ec281)
May 14 17:43:27 desktop kubelet[8695]: E0514 17:43:27.967619    8695 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-apiserver-minikube_kube-system(083310189e46973266844be77d59beb5)
May 14 17:43:27 desktop kubelet[8695]: E0514 17:43:27.967626    8695 eviction_manager.go:557] eviction manager: cannot evict a critical static pod etcd-minikube_kube-system(8cc5a3f6cf69b1ca688335f21fff082c)
May 14 17:43:27 desktop kubelet[8695]: E0514 17:43:27.967632    8695 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-scheduler-minikube_kube-system(f44110a0ca540009109bfc32a7eb0baa)
May 14 17:43:27 desktop kubelet[8695]: I0514 17:43:27.967637    8695 eviction_manager.go:385] eviction manager: unable to evict any pods from the node
May 14 17:43:37 desktop kubelet[8695]: W0514 17:43:37.989112    8695 eviction_manager.go:333] eviction manager: attempting to reclaim ephemeral-storage
May 14 17:43:37 desktop kubelet[8695]: I0514 17:43:37.989479    8695 container_gc.go:85] attempting to delete unused containers
May 14 17:43:37 desktop kubelet[8695]: I0514 17:43:37.993310    8695 image_gc_manager.go:317] attempting to delete unused images
May 14 17:43:38 desktop kubelet[8695]: I0514 17:43:38.003030    8695 eviction_manager.go:344] eviction manager: must evict pod(s) to reclaim ephemeral-storage
May 14 17:43:38 desktop kubelet[8695]: I0514 17:43:38.003108    8695 eviction_manager.go:362] eviction manager: pods ranked for eviction: kube-controller-manager-minikube_kube-system(decacd69524cb34d96e552c11c3ec281), kube-apiserver-minikube_kube-system(083310189e46973266844be77d59beb5), etcd-minikube_kube-system(8cc5a3f6cf69b1ca688335f21fff082c), kube-scheduler-minikube_kube-system(f44110a0ca540009109bfc32a7eb0baa)
May 14 17:43:38 desktop kubelet[8695]: E0514 17:43:38.003133    8695 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-controller-manager-minikube_kube-system(decacd69524cb34d96e552c11c3ec281)
May 14 17:43:38 desktop kubelet[8695]: E0514 17:43:38.003143    8695 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-apiserver-minikube_kube-system(083310189e46973266844be77d59beb5)
May 14 17:43:38 desktop kubelet[8695]: E0514 17:43:38.003150    8695 eviction_manager.go:557] eviction manager: cannot evict a critical static pod etcd-minikube_kube-system(8cc5a3f6cf69b1ca688335f21fff082c)
May 14 17:43:38 desktop kubelet[8695]: E0514 17:43:38.003157    8695 eviction_manager.go:557] eviction manager: cannot evict a critical static pod kube-scheduler-minikube_kube-system(f44110a0ca540009109bfc32a7eb0baa)
May 14 17:43:38 desktop kubelet[8695]: I0514 17:43:38.003201    8695 eviction_manager.go:385] eviction manager: unable to evict any pods from the node

@amruthar

amruthar commented May 22, 2019

Any updates on this? I'd like to know of any solution, as I'm facing the same issue at the kube-dns stage after a minikube stop, delete, and restart.

@vchekan
Author

vchekan commented May 22, 2019

I've managed to make it work with the kvm driver, after finding an article listing additional prerequisites: the docker-machine script and several more packages.
It seems like there are plenty of bugs in the documentation, namely the prerequisites chapter, which does not list all of them.
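
For anyone hitting the same gap, a rough sketch of the kvm2 prerequisites on an Ubuntu/Mint-family host (the package names and driver download URL are assumptions based on the minikube docs of that era, not confirmed by this thread):

# virtualization packages needed by the kvm2 driver
sudo apt-get install qemu-kvm libvirt-clients libvirt-daemon-system
sudo usermod -aG libvirt $(whoami)
# the docker-machine kvm2 driver binary that minikube expects on the PATH
curl -LO https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2
chmod +x docker-machine-driver-kvm2 && sudo mv docker-machine-driver-kvm2 /usr/local/bin/
minikube start --vm-driver=kvm2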

@tstromberg
Contributor

The root cause of the original issue (Port 8443 in use) is that an apiserver was already running. minikube delete appeared to fix that.

For the DNS issue, see: https://github.com/kubernetes/minikube/blob/master/docs/vmdriver-none.md#known-issues
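
If minikube delete alone does not free those ports with the none driver, a minimal cleanup sketch (assuming kubeadm and kubelet are installed on the host; not a procedure confirmed in this thread):

sudo minikube delete
sudo systemctl stop kubelet        # stop the kubelet so the static pods are not restarted
sudo kubeadm reset -f              # tears down the stale control plane and frees 8443/2379/2380/10251/10252
sudo ss -tlnp | grep 8443          # verify nothing is still listening before starting again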

@tstromberg
Contributor

I've managed to make it work with the kvm driver, after finding an article listing additional prerequisites: the docker-machine script and several more packages.
It seems like there are plenty of bugs in the documentation, namely the prerequisites chapter, which does not list all of them.

I'm curious: which install prerequisites are missing? If you know of any, please feel free to open a bug or a pull request to update them.

@tstromberg tstromberg added priority/backlog and r/2019q2 labels May 23, 2019
@tstromberg
Contributor

I'm closing this issue as it hasn't seen activity in a while, and it's unclear whether it still exists. If this issue does continue to exist in the most recent release of minikube, please feel free to re-open it.

Thank you for opening the issue!
