
none: certificate apiserver not signed by CA certificate ca: crypto/rsa: verification error #5899

Closed
roxax19 opened this issue Nov 13, 2019 · 4 comments
Labels
co/none-driver, kind/support (Categorizes issue or PR as a support question)

Comments

@roxax19

roxax19 commented Nov 13, 2019

The exact command to reproduce the issue:
sudo minikube start --vm-driver=none
The full output of the command that failed:

😄 minikube v1.5.2 on Ubuntu 18.04
💡 Tip: Use 'minikube start -p ' to create a new cluster, or 'minikube delete' to delete this one.
🏃 Using the running none "minikube" VM ...
⌛ Waiting for the host to be provisioned ...
🐳 Preparing Kubernetes v1.16.2 on Docker '18.06.1-ce' ...
▪ kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
🔄 Relaunching Kubernetes using kubeadm ...

💣 Error restarting cluster: running cmd: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.16.2:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml": command failed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.16.2:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
stdout: [certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority

stderr: error execution phase certs/apiserver: [certs] certificate apiserver not signed by CA certificate ca: crypto/rsa: verification error
To see the stack trace of this error execute with --v=5 or higher
: exit status 1
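The kubeadm message means the existing `apiserver.crt` no longer validates against the `ca.crt` next to it, typically because stale certificates from an earlier cluster were left on disk. A quick way to confirm the mismatch, assuming minikube's default certificate directory from the error above:

```shell
# Check whether the apiserver certificate chains to the on-disk CA.
# A mismatch prints a verification error instead of "OK".
sudo openssl verify -CAfile /var/lib/minikube/certs/ca.crt \
    /var/lib/minikube/certs/apiserver.crt
```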

The output of the minikube logs command:

==> Docker <==
-- Logs begin at Tue 2018-12-11 10:33:39 CET, end at Wed 2019-11-13 11:40:31 CET. --
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08+01:00" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08+01:00" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08+01:00" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08+01:00" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08+01:00" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08+01:00" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.zfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08+01:00" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08+01:00" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08+01:00" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08+01:00" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08+01:00" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08+01:00" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08+01:00" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08+01:00" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08+01:00" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08+01:00" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08+01:00" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08+01:00" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08+01:00" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08+01:00" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08+01:00" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08+01:00" level=info msg=serving... address="/var/run/docker/containerd/docker-containerd-debug.sock"
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08+01:00" level=info msg=serving... address="/var/run/docker/containerd/docker-containerd.sock"
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08+01:00" level=info msg="containerd successfully booted in 0.273860s"
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08.297765471+01:00" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc42024d690, READY" module=grpc
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08.975049443+01:00" level=info msg="parsed scheme: "unix"" module=grpc
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08.975075649+01:00" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08.975116642+01:00" level=info msg="ccResolverWrapper: sending new addresses to cc: [{unix:///var/run/docker/containerd/docker-containerd.sock 0 }]" module=grpc
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08.975132499+01:00" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08.975186198+01:00" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc420419b00, CONNECTING" module=grpc
nov 13 10:40:08 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:08.975347242+01:00" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc420419b00, READY" module=grpc
nov 13 10:40:14 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:14.423016983+01:00" level=info msg="Graph migration to content-addressability took 0.00 seconds"
nov 13 10:40:14 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:14.423293141+01:00" level=warning msg="Your kernel does not support swap memory limit"
nov 13 10:40:14 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:14.423329584+01:00" level=warning msg="Your kernel does not support cgroup rt period"
nov 13 10:40:14 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:14.423338457+01:00" level=warning msg="Your kernel does not support cgroup rt runtime"
nov 13 10:40:14 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:14.423780706+01:00" level=info msg="parsed scheme: "unix"" module=grpc
nov 13 10:40:14 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:14.423793684+01:00" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
nov 13 10:40:14 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:14.423824785+01:00" level=info msg="ccResolverWrapper: sending new addresses to cc: [{unix:///var/run/docker/containerd/docker-containerd.sock 0 }]" module=grpc
nov 13 10:40:14 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:14.423850807+01:00" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
nov 13 10:40:14 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:14.423887757+01:00" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc42024dbc0, CONNECTING" module=grpc
nov 13 10:40:14 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:14.424039037+01:00" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc42024dbc0, READY" module=grpc
nov 13 10:40:14 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:14.424076470+01:00" level=info msg="Loading containers: start."
nov 13 10:40:17 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:17.628034801+01:00" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
nov 13 10:40:18 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:18.222187553+01:00" level=info msg="Loading containers: done."
nov 13 10:40:23 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:23.847034403+01:00" level=info msg="Docker daemon" commit=6d37f41 graphdriver(s)=overlay2 version=18.06.2-ce
nov 13 10:40:23 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:23.914979527+01:00" level=info msg="Daemon has completed initialization"
nov 13 10:40:24 Lenovo-Y50-70 dockerd[3280]: time="2019-11-13T10:40:24.600867540+01:00" level=info msg="API listen on /var/run/docker.sock"
nov 13 10:40:24 Lenovo-Y50-70 systemd[1]: Started Docker Application Container Engine.

==> container status <==
sudo: crictl: command not found
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

==> dmesg <==
[ +0,000002] driver_register+0x60/0xe0
[ +0,000000] ? 0xffffffffc055c000
[ +0,000002] __pci_register_driver+0x5a/0x60
[ +0,000022] i915_init+0x5c/0x5f [i915]
[ +0,000003] do_one_initcall+0x52/0x19f
[ +0,000002] ? __vunmap+0x8e/0xc0
[ +0,000002] ? _cond_resched+0x19/0x40
[ +0,000002] ? kmem_cache_alloc_trace+0x14e/0x1b0
[ +0,000002] ? do_init_module+0x27/0x213
[ +0,000002] do_init_module+0x5f/0x213
[ +0,000002] load_module+0x16bc/0x1f10
[ +0,000004] ? ima_post_read_file+0x96/0xa0
[ +0,000002] SYSC_finit_module+0xfc/0x120
[ +0,000002] ? SYSC_finit_module+0xfc/0x120
[ +0,000002] SyS_finit_module+0xe/0x10
[ +0,000002] do_syscall_64+0x73/0x130
[ +0,000002] entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[ +0,000001] RIP: 0033:0x7f8a72ad5839
[ +0,000001] RSP: 002b:00007ffef1747d08 EFLAGS: 00000246 ORIG_RAX: 0000000000000139
[ +0,000001] RAX: ffffffffffffffda RBX: 00005652a0ec53e0 RCX: 00007f8a72ad5839
[ +0,000001] RDX: 0000000000000000 RSI: 00007f8a727b4145 RDI: 0000000000000015
[ +0,000000] RBP: 00007f8a727b4145 R08: 0000000000000000 R09: 00007ffef1747e20
[ +0,000001] R10: 0000000000000015 R11: 0000000000000246 R12: 0000000000000000
[ +0,000000] R13: 00005652a0eaff10 R14: 0000000000020000 R15: 00005652a0ec53e0
[ +0,000002] Code: e9 46 fc ff ff 48 c7 c6 d7 5d a0 c0 48 c7 c7 2f 51 a0 c0 e8 c4 68 11 c8 0f 0b e9 73 fe ff ff 48 c7 c7 b0 b5 a1 c0 e8 b1 68 11 c8 <0f> 0b e9 4b fe ff ff 48 c7 c6 e4 5d a0 c0 48 c7 c7 2f 51 a0 c0
[ +0,000023] ---[ end trace bb5423ed223a4603 ]---
[ +0,001945] [Firmware Bug]: ACPI(PEGP) defines _DOD but not _DOS
[ +1,353570] Bluetooth: hci0: unexpected event for opcode 0xfc2f
[ +0,311937] PKCS#7 signature not signed with a trusted key
[ +0,000024] nvidia: loading out-of-tree module taints kernel.
[ +0,000005] nvidia: module license 'NVIDIA' taints kernel.
[ +0,000001] Disabling lock debugging due to kernel taint
[ +0,008710] ACPI Error: [AR02] Namespace lookup failure, AE_NOT_FOUND (20170831/psargs-364)
[ +0,000007] No Local Variables are initialized for Method [_PRT]
[ +0,000000] No Arguments are initialized for method [_PRT]
[ +0,000002] ACPI Error: Method parse/execution failed _SB.PCI0.PEG0._PRT, AE_NOT_FOUND (20170831/psparse-550)
[ +0,000151] NVRM: loading NVIDIA UNIX x86_64 Kernel Module 390.116 Sun Jan 27 07:21:36 PST 2019 (using threaded interrupts)
[ +0,363815] PKCS#7 signature not signed with a trusted key
[ +0,147691] PKCS#7 signature not signed with a trusted key
[ +1,140565] uvcvideo 3-6:1.0: Entity type for entity Extension 4 was not initialized!
[ +0,000001] uvcvideo 3-6:1.0: Entity type for entity Processing 2 was not initialized!
[ +0,000002] uvcvideo 3-6:1.0: Entity type for entity Camera 1 was not initialized!
[ +1,841975] PKCS#7 signature not signed with a trusted key
[ +0,810944] ACPI Warning: _SB.PCI0.PEG0.PEGP._DSM: Argument #4 type mismatch - Found [Buffer], ACPI requires [Package] (20170831/nsarguments-100)
[nov13 10:39] kauditd_printk_skb: 78 callbacks suppressed
[ +0,523456] aufs aufs_fill_super:912:mount[2656]: no arg
[ +0,401335] overlayfs: missing 'lowerdir'
[ +6,681026] kauditd_printk_skb: 74 callbacks suppressed
[ +12,986929] kauditd_printk_skb: 77 callbacks suppressed
[ +8,267396] L1TF CPU bug present and SMT on, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/l1tf.html for details.
[ +6,783890] aufs au_opts_verify:1623:dockerd[1846]: dirperm1 breaks the protection by the permission bits on the lower branch
[nov13 10:40] kauditd_printk_skb: 76 callbacks suppressed
[ +15,737728] kauditd_printk_skb: 74 callbacks suppressed
[nov13 10:41] kauditd_printk_skb: 74 callbacks suppressed
[nov13 10:42] kauditd_printk_skb: 4 callbacks suppressed
[nov13 10:44] kauditd_printk_skb: 74 callbacks suppressed
[nov13 11:28] kauditd_printk_skb: 74 callbacks suppressed
[nov13 11:32] kauditd_printk_skb: 8 callbacks suppressed
[ +11,003330] kauditd_printk_skb: 74 callbacks suppressed
[nov13 11:36] kauditd_printk_skb: 74 callbacks suppressed

==> kernel <==
11:40:31 up 1:02, 1 user, load average: 2,29, 3,86, 2,98
Linux Lenovo-Y50-70 4.15.0-69-generic #78-Ubuntu SMP Wed Nov 6 11:30:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 18.04.3 LTS"

==> kubelet <==
-- Logs begin at Tue 2018-12-11 10:33:39 CET, end at Wed 2019-11-13 11:40:31 CET. --
nov 13 11:40:28 Lenovo-Y50-70 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
nov 13 11:40:28 Lenovo-Y50-70 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4935.
nov 13 11:40:28 Lenovo-Y50-70 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
nov 13 11:40:28 Lenovo-Y50-70 systemd[1]: Started kubelet: The Kubernetes Node Agent.
nov 13 11:40:28 Lenovo-Y50-70 kubelet[1627]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 13 11:40:28 Lenovo-Y50-70 kubelet[1627]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 13 11:40:28 Lenovo-Y50-70 kubelet[1627]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 13 11:40:28 Lenovo-Y50-70 kubelet[1627]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 13 11:40:28 Lenovo-Y50-70 kubelet[1627]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 13 11:40:28 Lenovo-Y50-70 kubelet[1627]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 13 11:40:28 Lenovo-Y50-70 kubelet[1627]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 13 11:40:28 Lenovo-Y50-70 kubelet[1627]: Flag --resolv-conf has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 13 11:40:28 Lenovo-Y50-70 kubelet[1627]: F1113 11:40:28.414819 1627 server.go:196] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
nov 13 11:40:28 Lenovo-Y50-70 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
nov 13 11:40:28 Lenovo-Y50-70 systemd[1]: kubelet.service: Failed with result 'exit-code'.
nov 13 11:40:29 Lenovo-Y50-70 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
nov 13 11:40:29 Lenovo-Y50-70 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4936.
nov 13 11:40:29 Lenovo-Y50-70 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
nov 13 11:40:29 Lenovo-Y50-70 systemd[1]: Started kubelet: The Kubernetes Node Agent.
nov 13 11:40:29 Lenovo-Y50-70 kubelet[1662]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 13 11:40:29 Lenovo-Y50-70 kubelet[1662]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 13 11:40:29 Lenovo-Y50-70 kubelet[1662]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 13 11:40:29 Lenovo-Y50-70 kubelet[1662]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 13 11:40:29 Lenovo-Y50-70 kubelet[1662]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 13 11:40:29 Lenovo-Y50-70 kubelet[1662]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 13 11:40:29 Lenovo-Y50-70 kubelet[1662]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 13 11:40:29 Lenovo-Y50-70 kubelet[1662]: Flag --resolv-conf has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 13 11:40:29 Lenovo-Y50-70 kubelet[1662]: F1113 11:40:29.164155 1662 server.go:196] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
nov 13 11:40:29 Lenovo-Y50-70 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
nov 13 11:40:29 Lenovo-Y50-70 systemd[1]: kubelet.service: Failed with result 'exit-code'.
nov 13 11:40:29 Lenovo-Y50-70 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
nov 13 11:40:29 Lenovo-Y50-70 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4937.
nov 13 11:40:29 Lenovo-Y50-70 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
nov 13 11:40:29 Lenovo-Y50-70 systemd[1]: Started kubelet: The Kubernetes Node Agent.
nov 13 11:40:29 Lenovo-Y50-70 kubelet[1699]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 13 11:40:29 Lenovo-Y50-70 kubelet[1699]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 13 11:40:29 Lenovo-Y50-70 kubelet[1699]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 13 11:40:29 Lenovo-Y50-70 kubelet[1699]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 13 11:40:29 Lenovo-Y50-70 kubelet[1699]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 13 11:40:29 Lenovo-Y50-70 kubelet[1699]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 13 11:40:29 Lenovo-Y50-70 kubelet[1699]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 13 11:40:29 Lenovo-Y50-70 kubelet[1699]: Flag --resolv-conf has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 13 11:40:29 Lenovo-Y50-70 kubelet[1699]: F1113 11:40:29.837725 1699 server.go:196] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
nov 13 11:40:29 Lenovo-Y50-70 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
nov 13 11:40:29 Lenovo-Y50-70 systemd[1]: kubelet.service: Failed with result 'exit-code'.
nov 13 11:40:30 Lenovo-Y50-70 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
nov 13 11:40:30 Lenovo-Y50-70 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4938.
nov 13 11:40:30 Lenovo-Y50-70 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
nov 13 11:40:30 Lenovo-Y50-70 systemd[1]: Started kubelet: The Kubernetes Node Agent.
nov 13 11:40:30 Lenovo-Y50-70 kubelet[1751]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 13 11:40:30 Lenovo-Y50-70 kubelet[1751]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 13 11:40:30 Lenovo-Y50-70 kubelet[1751]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 13 11:40:30 Lenovo-Y50-70 kubelet[1751]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 13 11:40:30 Lenovo-Y50-70 kubelet[1751]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 13 11:40:30 Lenovo-Y50-70 kubelet[1751]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 13 11:40:30 Lenovo-Y50-70 kubelet[1751]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 13 11:40:30 Lenovo-Y50-70 kubelet[1751]: Flag --resolv-conf has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 13 11:40:30 Lenovo-Y50-70 kubelet[1751]: F1113 11:40:30.668958 1751 server.go:196] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
nov 13 11:40:30 Lenovo-Y50-70 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
nov 13 11:40:30 Lenovo-Y50-70 systemd[1]: kubelet.service: Failed with result 'exit-code'.

The operating system version:
Ubuntu 18.04.3 LTS

@tstromberg tstromberg changed the title Can't start with mv-driver=none none: certificate apiserver not signed by CA certificate ca: crypto/rsa: verification error Nov 15, 2019
@tstromberg
Contributor

I suspect that sudo minikube delete may fix it. I'm not exactly sure what happened here, but based on the logs, I think #5916 may fix it.
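For reference, a minimal sketch of that reset with the none driver. The `rm -rf` paths are the usual host locations kubeadm and minikube write to, and removing them should only be needed if `delete` alone leaves stale certificates behind:

```shell
# Tear down the existing cluster (none driver keeps state on the host)
sudo minikube delete

# If stale certs/manifests survive the delete, clear the usual locations
sudo rm -rf /etc/kubernetes /var/lib/minikube

# Recreate the cluster from scratch
sudo minikube start --vm-driver=none
```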

@roxax19
Author

roxax19 commented Nov 15, 2019

I ran `sudo minikube delete` and then `sudo minikube start --vm-driver=none` again, and now it fails with a different error:

The full output of the command:

 sudo minikube start --vm-driver=none
😄  minikube v1.5.2 on Ubuntu 18.04
🤹  Running on localhost (CPUs=8, Memory=11920MB, Disk=46678MB) ...
ℹ️   OS release is Ubuntu 18.04.3 LTS
🐳  Preparing Kubernetes v1.16.2 on Docker '18.06.1-ce' ...
    ▪ kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
🚜  Pulling images ...
🚀  Launching Kubernetes ... 

💣  Error starting cluster: init failed. cmd: "/bin/bash -c \"sudo env PATH=/var/lib/minikube/binaries/v1.16.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap\"": command failed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.16.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap"
stdout: [init] Using Kubernetes version: v1.16.2
[preflight] Running pre-flight checks

stderr: 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING Swap]: running with swap on is not supported. Please disable swap
	[WARNING FileExisting-ethtool]: ethtool not found in system path
	[WARNING FileExisting-socat]: socat not found in system path
	[WARNING Hostname]: hostname "minikube" could not be reached
	[WARNING Hostname]: hostname "minikube": lookup minikube on 127.0.0.53:53: server misbehaving
	[WARNING Port-10250]: Port 10250 is in use
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR Port-10251]: Port 10251 is in use
	[ERROR Port-10252]: Port 10252 is in use
	[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
: exit status 1

😿  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
</details>
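The `Port ... is in use` preflight failures usually mean processes from the previous run (kubelet, scheduler, controller-manager, etcd) are still bound to the host's ports, since the none driver shares them. A quick, hypothetical way to see which of the flagged ports are actually occupied, using only bash's built-in `/dev/tcp` (no extra tools needed):

```shell
# check_port PORT: reports whether something on localhost accepts
# connections on PORT (a successful connect means the port is in use).
# The subshell closes the test fd automatically on exit.
check_port() {
  if (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null; then
    echo "$1 in use"
  else
    echo "$1 free"
  fi
}

# Ports from the kubeadm preflight errors above.
for p in 10250 10251 10252 2380; do
  check_port "$p"
done
```

For an occupied port, `sudo ss -ltnp "sport = :10250"` (iproute2) will also show the owning process; if it's a leftover kubelet, `sudo systemctl stop kubelet` before retrying should release it.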

**The output of the `minikube logs` command:**

<details>
minikube logs

💣  Error getting config: open /home/manuel/.minikube/profiles/minikube/config.json: permission denied

😿  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉  https://github.com/kubernetes/minikube/issues/new/choose
manuel@Lenovo-Y50-70:~$ sudo minikube logs
==> Docker <==
-- Logs begin at Wed 2018-12-12 18:08:57 CET, end at Fri 2019-11-15 09:53:09 CET. --
nov 15 09:45:26 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:26+01:00" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
nov 15 09:45:26 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:26+01:00" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
nov 15 09:45:26 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:26+01:00" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
nov 15 09:45:26 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:26+01:00" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
nov 15 09:45:26 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:26+01:00" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
nov 15 09:45:26 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:26+01:00" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.zfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
nov 15 09:45:26 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:26+01:00" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
nov 15 09:45:26 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:26+01:00" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
nov 15 09:45:26 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:26+01:00" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
nov 15 09:45:26 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:26+01:00" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
nov 15 09:45:26 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:26+01:00" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
nov 15 09:45:26 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:26+01:00" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
nov 15 09:45:26 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:26+01:00" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
nov 15 09:45:26 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:26+01:00" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
nov 15 09:45:26 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:26+01:00" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
nov 15 09:45:26 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:26+01:00" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
nov 15 09:45:26 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:26+01:00" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
nov 15 09:45:26 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:26+01:00" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
nov 15 09:45:26 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:26+01:00" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
nov 15 09:45:26 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:26+01:00" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
nov 15 09:45:26 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:26+01:00" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
nov 15 09:45:26 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:26+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
nov 15 09:45:26 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:26+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
nov 15 09:45:26 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:26+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
nov 15 09:45:26 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:26+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
nov 15 09:45:26 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:26+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
nov 15 09:45:26 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:26+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
nov 15 09:45:26 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:26+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
nov 15 09:45:26 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:26+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
nov 15 09:45:26 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:26+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
nov 15 09:45:26 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:26+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
nov 15 09:45:26 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:26+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
nov 15 09:45:26 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:26+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
nov 15 09:45:26 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:26+01:00" level=info msg=serving... address="/var/run/docker/containerd/docker-containerd-debug.sock"
nov 15 09:45:26 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:26+01:00" level=info msg=serving... address="/var/run/docker/containerd/docker-containerd.sock"
nov 15 09:45:26 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:26+01:00" level=info msg="containerd successfully booted in 0.428394s"
nov 15 09:45:26 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:26.685689918+01:00" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc42007f960, READY" module=grpc
nov 15 09:45:27 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:27.544555687+01:00" level=info msg="parsed scheme: \"unix\"" module=grpc
nov 15 09:45:27 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:27.544582955+01:00" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
nov 15 09:45:27 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:27.544639558+01:00" level=info msg="ccResolverWrapper: sending new addresses to cc: [{unix:///var/run/docker/containerd/docker-containerd.sock 0  <nil>}]" module=grpc
nov 15 09:45:27 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:27.544664515+01:00" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
nov 15 09:45:27 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:27.544719792+01:00" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc4202a6b50, CONNECTING" module=grpc
nov 15 09:45:27 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:27.544946568+01:00" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc4202a6b50, READY" module=grpc
nov 15 09:45:32 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:32.634669647+01:00" level=info msg="Graph migration to content-addressability took 0.00 seconds"
nov 15 09:45:32 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:32.635188300+01:00" level=warning msg="Your kernel does not support swap memory limit"
nov 15 09:45:32 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:32.635260582+01:00" level=warning msg="Your kernel does not support cgroup rt period"
nov 15 09:45:32 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:32.635279351+01:00" level=warning msg="Your kernel does not support cgroup rt runtime"
nov 15 09:45:32 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:32.636062533+01:00" level=info msg="parsed scheme: \"unix\"" module=grpc
nov 15 09:45:32 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:32.636101542+01:00" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
nov 15 09:45:32 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:32.636190008+01:00" level=info msg="ccResolverWrapper: sending new addresses to cc: [{unix:///var/run/docker/containerd/docker-containerd.sock 0  <nil>}]" module=grpc
nov 15 09:45:32 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:32.636229037+01:00" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
nov 15 09:45:32 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:32.636377207+01:00" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc4202760d0, CONNECTING" module=grpc
nov 15 09:45:32 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:32.636741149+01:00" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc4202760d0, READY" module=grpc
nov 15 09:45:32 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:32.636776743+01:00" level=info msg="Loading containers: start."
nov 15 09:45:35 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:35.274228028+01:00" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
nov 15 09:45:36 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:36.231714568+01:00" level=info msg="Loading containers: done."
nov 15 09:45:40 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:40.371068953+01:00" level=info msg="Docker daemon" commit=6d37f41 graphdriver(s)=overlay2 version=18.06.2-ce
nov 15 09:45:40 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:40.417098086+01:00" level=info msg="Daemon has completed initialization"
nov 15 09:45:40 Lenovo-Y50-70 dockerd[3001]: time="2019-11-15T09:45:40.675089716+01:00" level=info msg="API listen on /var/run/docker.sock"
nov 15 09:45:40 Lenovo-Y50-70 systemd[1]: Started Docker Application Container Engine.

==> container status <==
sudo: crictl: command not found
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

==> dmesg <==
[  +0,000001]  __driver_attach+0xcc/0xf0
[  +0,000002]  ? driver_probe_device+0x490/0x490
[  +0,000000]  bus_for_each_dev+0x70/0xc0
[  +0,000002]  driver_attach+0x1e/0x20
[  +0,000000]  bus_add_driver+0x1c7/0x270
[  +0,000001]  ? 0xffffffffc086c000
[  +0,000001]  driver_register+0x60/0xe0
[  +0,000001]  ? 0xffffffffc086c000
[  +0,000001]  __pci_register_driver+0x5a/0x60
[  +0,000024]  i915_init+0x5c/0x5f [i915]
[  +0,000002]  do_one_initcall+0x52/0x19f
[  +0,000002]  ? __slab_alloc+0x20/0x40
[  +0,000002]  ? kmem_cache_alloc_trace+0x14e/0x1b0
[  +0,000002]  ? do_init_module+0x27/0x213
[  +0,000002]  do_init_module+0x5f/0x213
[  +0,000001]  load_module+0x16bc/0x1f10
[  +0,000003]  ? ima_post_read_file+0x96/0xa0
[  +0,000002]  SYSC_finit_module+0xfc/0x120
[  +0,000001]  ? SYSC_finit_module+0xfc/0x120
[  +0,000002]  SyS_finit_module+0xe/0x10
[  +0,000001]  do_syscall_64+0x73/0x130
[  +0,000003]  entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[  +0,000000] RIP: 0033:0x7f2c2069c839
[  +0,000001] RSP: 002b:00007fff9a3ad6d8 EFLAGS: 00000246 ORIG_RAX: 0000000000000139
[  +0,000001] RAX: ffffffffffffffda RBX: 00005569afe7b3e0 RCX: 00007f2c2069c839
[  +0,000000] RDX: 0000000000000000 RSI: 00007f2c2037b145 RDI: 0000000000000015
[  +0,000001] RBP: 00007f2c2037b145 R08: 0000000000000000 R09: 00007fff9a3ad7f0
[  +0,000000] R10: 0000000000000015 R11: 0000000000000246 R12: 0000000000000000
[  +0,000001] R13: 00005569afe62aa0 R14: 0000000000020000 R15: 00005569afe7b3e0
[  +0,000001] Code: e9 46 fc ff ff 48 c7 c6 d7 ad 7f c0 48 c7 c7 2f a1 7f c0 e8 c4 18 92 c5 0f 0b e9 73 fe ff ff 48 c7 c7 b0 05 81 c0 e8 b1 18 92 c5 <0f> 0b e9 4b fe ff ff 48 c7 c6 e4 ad 7f c0 48 c7 c7 2f a1 7f c0 
[  +0,000017] ---[ end trace 4e6af55edf8d47c0 ]---
[  +0,001783] [Firmware Bug]: ACPI(PEGP) defines _DOD but not _DOS
[  +2,812787] Bluetooth: hci0: unexpected event for opcode 0xfc2f
[  +0,323945] PKCS#7 signature not signed with a trusted key
[  +0,000011] nvidia: loading out-of-tree module taints kernel.
[  +0,000026] nvidia: module license 'NVIDIA' taints kernel.
[  +0,000000] Disabling lock debugging due to kernel taint
[  +0,008762] ACPI Error: [AR02] Namespace lookup failure, AE_NOT_FOUND (20170831/psargs-364)
[  +0,000007] No Local Variables are initialized for Method [_PRT]
[  +0,000001] No Arguments are initialized for method [_PRT]
[  +0,000002] ACPI Error: Method parse/execution failed \_SB.PCI0.PEG0._PRT, AE_NOT_FOUND (20170831/psparse-550)
[  +0,000166] NVRM: loading NVIDIA UNIX x86_64 Kernel Module  390.116  Sun Jan 27 07:21:36 PST 2019 (using threaded interrupts)
[  +0,689935] uvcvideo 3-6:1.0: Entity type for entity Extension 4 was not initialized!
[  +0,000002] uvcvideo 3-6:1.0: Entity type for entity Processing 2 was not initialized!
[  +0,000001] uvcvideo 3-6:1.0: Entity type for entity Camera 1 was not initialized!
[  +0,493962] PKCS#7 signature not signed with a trusted key
[  +0,845056] PKCS#7 signature not signed with a trusted key
[nov15 09:44] PKCS#7 signature not signed with a trusted key
[  +1,020024] ACPI Warning: \_SB.PCI0.PEG0.PEGP._DSM: Argument #4 type mismatch - Found [Buffer], ACPI requires [Package] (20170831/nsarguments-100)
[ +31,461864] aufs aufs_fill_super:912:mount[2411]: no arg
[  +0,576832] overlayfs: missing 'lowerdir'
[  +0,236059] kauditd_printk_skb: 78 callbacks suppressed
[  +7,260292] kauditd_printk_skb: 74 callbacks suppressed
[  +5,015957] kauditd_printk_skb: 16 callbacks suppressed
[  +6,218582] kauditd_printk_skb: 53 callbacks suppressed
[nov15 09:45] aufs au_opts_verify:1623:dockerd[1937]: dirperm1 breaks the protection by the permission bits on the lower branch
[  +1,332932] kauditd_printk_skb: 78 callbacks suppressed
[ +33,712069] kauditd_printk_skb: 74 callbacks suppressed
[nov15 09:46] kauditd_printk_skb: 18 callbacks suppressed
[  +5,635147] kauditd_printk_skb: 50 callbacks suppressed

==> kernel <==
 09:53:09 up 9 min,  1 user,  load average: 1,33, 3,82, 2,76
Linux Lenovo-Y50-70 4.15.0-69-generic #78-Ubuntu SMP Wed Nov 6 11:30:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 18.04.3 LTS"

==> kubelet <==
-- Logs begin at Wed 2018-12-12 18:08:57 CET, end at Fri 2019-11-15 09:53:09 CET. --
nov 15 09:53:06 Lenovo-Y50-70 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
nov 15 09:53:06 Lenovo-Y50-70 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 535.
nov 15 09:53:06 Lenovo-Y50-70 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
nov 15 09:53:06 Lenovo-Y50-70 systemd[1]: Started kubelet: The Kubernetes Node Agent.
nov 15 09:53:06 Lenovo-Y50-70 kubelet[3625]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 15 09:53:06 Lenovo-Y50-70 kubelet[3625]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 15 09:53:06 Lenovo-Y50-70 kubelet[3625]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 15 09:53:06 Lenovo-Y50-70 kubelet[3625]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 15 09:53:06 Lenovo-Y50-70 kubelet[3625]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 15 09:53:06 Lenovo-Y50-70 kubelet[3625]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 15 09:53:06 Lenovo-Y50-70 kubelet[3625]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 15 09:53:06 Lenovo-Y50-70 kubelet[3625]: Flag --resolv-conf has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 15 09:53:06 Lenovo-Y50-70 kubelet[3625]: F1115 09:53:06.729039    3625 server.go:196] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
nov 15 09:53:06 Lenovo-Y50-70 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
nov 15 09:53:06 Lenovo-Y50-70 systemd[1]: kubelet.service: Failed with result 'exit-code'.
nov 15 09:53:07 Lenovo-Y50-70 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
nov 15 09:53:07 Lenovo-Y50-70 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 536.
nov 15 09:53:07 Lenovo-Y50-70 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
nov 15 09:53:07 Lenovo-Y50-70 systemd[1]: Started kubelet: The Kubernetes Node Agent.
nov 15 09:53:07 Lenovo-Y50-70 kubelet[3655]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 15 09:53:07 Lenovo-Y50-70 kubelet[3655]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 15 09:53:07 Lenovo-Y50-70 kubelet[3655]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 15 09:53:07 Lenovo-Y50-70 kubelet[3655]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 15 09:53:07 Lenovo-Y50-70 kubelet[3655]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 15 09:53:07 Lenovo-Y50-70 kubelet[3655]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 15 09:53:07 Lenovo-Y50-70 kubelet[3655]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 15 09:53:07 Lenovo-Y50-70 kubelet[3655]: Flag --resolv-conf has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 15 09:53:07 Lenovo-Y50-70 kubelet[3655]: F1115 09:53:07.480412    3655 server.go:196] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
nov 15 09:53:07 Lenovo-Y50-70 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
nov 15 09:53:07 Lenovo-Y50-70 systemd[1]: kubelet.service: Failed with result 'exit-code'.
nov 15 09:53:08 Lenovo-Y50-70 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
nov 15 09:53:08 Lenovo-Y50-70 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 537.
nov 15 09:53:08 Lenovo-Y50-70 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
nov 15 09:53:08 Lenovo-Y50-70 systemd[1]: Started kubelet: The Kubernetes Node Agent.
nov 15 09:53:08 Lenovo-Y50-70 kubelet[3688]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 15 09:53:08 Lenovo-Y50-70 kubelet[3688]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 15 09:53:08 Lenovo-Y50-70 kubelet[3688]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 15 09:53:08 Lenovo-Y50-70 kubelet[3688]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 15 09:53:08 Lenovo-Y50-70 kubelet[3688]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 15 09:53:08 Lenovo-Y50-70 kubelet[3688]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 15 09:53:08 Lenovo-Y50-70 kubelet[3688]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 15 09:53:08 Lenovo-Y50-70 kubelet[3688]: Flag --resolv-conf has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 15 09:53:08 Lenovo-Y50-70 kubelet[3688]: F1115 09:53:08.233421    3688 server.go:196] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
nov 15 09:53:08 Lenovo-Y50-70 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
nov 15 09:53:08 Lenovo-Y50-70 systemd[1]: kubelet.service: Failed with result 'exit-code'.
nov 15 09:53:08 Lenovo-Y50-70 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
nov 15 09:53:08 Lenovo-Y50-70 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 538.
nov 15 09:53:08 Lenovo-Y50-70 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
nov 15 09:53:08 Lenovo-Y50-70 systemd[1]: Started kubelet: The Kubernetes Node Agent.
nov 15 09:53:08 Lenovo-Y50-70 kubelet[3839]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 15 09:53:08 Lenovo-Y50-70 kubelet[3839]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 15 09:53:08 Lenovo-Y50-70 kubelet[3839]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 15 09:53:08 Lenovo-Y50-70 kubelet[3839]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 15 09:53:08 Lenovo-Y50-70 kubelet[3839]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 15 09:53:08 Lenovo-Y50-70 kubelet[3839]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 15 09:53:08 Lenovo-Y50-70 kubelet[3839]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 15 09:53:08 Lenovo-Y50-70 kubelet[3839]: Flag --resolv-conf has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
nov 15 09:53:08 Lenovo-Y50-70 kubelet[3839]: F1115 09:53:08.977542    3839 server.go:196] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
nov 15 09:53:08 Lenovo-Y50-70 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
nov 15 09:53:08 Lenovo-Y50-70 systemd[1]: kubelet.service: Failed with result 'exit-code'.
</details>
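The kubelet restart loop in the log (restart counter 535+) looks like a symptom rather than the cause: the kubelet unit keeps restarting until `kubeadm init` has written `/var/lib/kubelet/config.yaml`, and init never got that far here. A small, hypothetical helper to confirm that state:

```shell
# kubelet_config_state FILE: prints "present" once kubeadm init has
# written the kubelet config, "missing" while init has not completed.
kubelet_config_state() {
  if [ -f "$1" ]; then
    echo present
  else
    echo missing
  fi
}

kubelet_config_state /var/lib/kubelet/config.yaml
```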

I also did `sudo minikube start --vm-driver=none --alsologtostderr -v=1` and got this output:

<details>
sudo minikube start --vm-driver=none --alsologtostderr -v=1
I1115 09:54:31.694615    8440 notify.go:125] Checking for updates...
I1115 09:54:31.854138    8440 start.go:251] hostinfo: {"hostname":"Lenovo-Y50-70","uptime":660,"bootTime":1573807411,"procs":402,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"18.04","kernelVersion":"4.15.0-69-generic","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"a2c06a2e-cae7-e311-b42f-f8a96334b593"}
I1115 09:54:31.855104    8440 start.go:261] virtualization: kvm host
😄  minikube v1.5.2 on Ubuntu 18.04
I1115 09:54:31.855577    8440 start.go:547] selectDriver: flag="none", old=&{{false false https://storage.googleapis.com/minikube/iso/minikube-v1.5.1.iso 2000 2 20000 none docker  [] [] [] [] 192.168.99.1/24  default qemu:///system false false <nil> [] false [] /nfsshares  false false true} {v1.16.2 172.16.174.93 8443 minikube minikubeCA [] [] cluster.local docker    10.96.0.0/12  [{kubelet resolv-conf /run/systemd/resolve/resolv.conf}] true false}}
I1115 09:54:31.857353    8440 start.go:293] selected: none
I1115 09:54:31.857603    8440 profile.go:82] Saving config to /home/manuel/.minikube/profiles/minikube/config.json ...
I1115 09:54:31.857708    8440 lock.go:41] attempting to write to file "/home/manuel/.minikube/profiles/minikube/config.json.tmp485489369" with filemode -rw-------
I1115 09:54:31.857778    8440 cache_images.go:296] CacheImage: gcr.io/k8s-minikube/storage-provisioner:v1.8.1 -> /home/manuel/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1
I1115 09:54:31.857847    8440 cache_images.go:296] CacheImage: k8s.gcr.io/kube-scheduler:v1.16.2 -> /home/manuel/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.2
I1115 09:54:31.857874    8440 cache_images.go:296] CacheImage: k8s.gcr.io/kube-addon-manager:v9.0 -> /home/manuel/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0
I1115 09:54:31.857892    8440 cache_images.go:302] /home/manuel/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.2 exists
I1115 09:54:31.857910    8440 cache_images.go:302] /home/manuel/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0 exists
I1115 09:54:31.857873    8440 cache_images.go:302] /home/manuel/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 exists
I1115 09:54:31.857949    8440 cache_images.go:298] CacheImage: gcr.io/k8s-minikube/storage-provisioner:v1.8.1 -> /home/manuel/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 completed in 184.842µs
I1115 09:54:31.857964    8440 cache_images.go:83] CacheImage gcr.io/k8s-minikube/storage-provisioner:v1.8.1 -> /home/manuel/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 succeeded
I1115 09:54:31.857858    8440 cache_images.go:296] CacheImage: k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13 -> /home/manuel/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13
I1115 09:54:31.857992    8440 cache_images.go:302] /home/manuel/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13 exists
I1115 09:54:31.858003    8440 cache_images.go:298] CacheImage: k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13 -> /home/manuel/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13 completed in 154.243µs
I1115 09:54:31.858016    8440 cache_images.go:83] CacheImage k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13 -> /home/manuel/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13 succeeded
I1115 09:54:31.857909    8440 cache_images.go:298] CacheImage: k8s.gcr.io/kube-scheduler:v1.16.2 -> /home/manuel/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.2 completed in 69.783µs
I1115 09:54:31.858031    8440 cache_images.go:83] CacheImage k8s.gcr.io/kube-scheduler:v1.16.2 -> /home/manuel/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.2 succeeded
I1115 09:54:31.857793    8440 cache_images.go:296] CacheImage: k8s.gcr.io/etcd:3.3.15-0 -> /home/manuel/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0
I1115 09:54:31.858051    8440 cache_images.go:302] /home/manuel/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0 exists
I1115 09:54:31.858060    8440 cache_images.go:298] CacheImage: k8s.gcr.io/etcd:3.3.15-0 -> /home/manuel/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0 completed in 287.231µs
I1115 09:54:31.857826    8440 cache_images.go:296] CacheImage: k8s.gcr.io/kube-apiserver:v1.16.2 -> /home/manuel/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.2
I1115 09:54:31.858102    8440 cache_images.go:302] /home/manuel/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.2 exists
I1115 09:54:31.857815    8440 cache_images.go:296] CacheImage: k8s.gcr.io/kube-controller-manager:v1.16.2 -> /home/manuel/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.2
I1115 09:54:31.858121    8440 cache_images.go:298] CacheImage: k8s.gcr.io/kube-apiserver:v1.16.2 -> /home/manuel/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.2 completed in 301.743µs
I1115 09:54:31.858143    8440 cache_images.go:302] /home/manuel/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.2 exists
I1115 09:54:31.858154    8440 cache_images.go:298] CacheImage: k8s.gcr.io/kube-controller-manager:v1.16.2 -> /home/manuel/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.2 completed in 366.455µs
I1115 09:54:31.858172    8440 cluster.go:101] Skipping create...Using existing machine configuration
I1115 09:54:31.858145    8440 cache_images.go:83] CacheImage k8s.gcr.io/kube-apiserver:v1.16.2 -> /home/manuel/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.2 succeeded
I1115 09:54:31.858172    8440 cache_images.go:83] CacheImage k8s.gcr.io/kube-controller-manager:v1.16.2 -> /home/manuel/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.2 succeeded
I1115 09:54:31.857838    8440 cache_images.go:296] CacheImage: k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13 -> /home/manuel/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13
I1115 09:54:31.858255    8440 cache_images.go:302] /home/manuel/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13 exists
I1115 09:54:31.858268    8440 cache_images.go:298] CacheImage: k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13 -> /home/manuel/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13 completed in 429.532µs
I1115 09:54:31.858285    8440 cache_images.go:83] CacheImage k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13 -> /home/manuel/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13 succeeded
I1115 09:54:31.857842    8440 cache_images.go:296] CacheImage: k8s.gcr.io/pause:3.1 -> /home/manuel/.minikube/cache/images/k8s.gcr.io/pause_3.1
I1115 09:54:31.857859    8440 cache_images.go:296] CacheImage: k8s.gcr.io/coredns:1.6.2 -> /home/manuel/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2
I1115 09:54:31.857859    8440 cache_images.go:296] CacheImage: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1 -> /home/manuel/.minikube/cache/images/k8s.gcr.io/kubernetes-dashboard-amd64_v1.10.1
I1115 09:54:31.858343    8440 cache_images.go:302] /home/manuel/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2 exists
I1115 09:54:31.858356    8440 cache_images.go:302] /home/manuel/.minikube/cache/images/k8s.gcr.io/kubernetes-dashboard-amd64_v1.10.1 exists
I1115 09:54:31.858363    8440 cache_images.go:298] CacheImage: k8s.gcr.io/coredns:1.6.2 -> /home/manuel/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2 completed in 509.473µs
I1115 09:54:31.858382    8440 cache_images.go:83] CacheImage k8s.gcr.io/coredns:1.6.2 -> /home/manuel/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2 succeeded
I1115 09:54:31.857782    8440 cache_images.go:296] CacheImage: k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13 -> /home/manuel/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13
I1115 09:54:31.858414    8440 cache_images.go:302] /home/manuel/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13 exists
I1115 09:54:31.858427    8440 cache_images.go:298] CacheImage: k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13 -> /home/manuel/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13 completed in 663.377µs
I1115 09:54:31.857927    8440 cache_images.go:298] CacheImage: k8s.gcr.io/kube-addon-manager:v9.0 -> /home/manuel/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0 completed in 57.421µs
I1115 09:54:31.858465    8440 cache_images.go:83] CacheImage k8s.gcr.io/kube-addon-manager:v9.0 -> /home/manuel/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0 succeeded
I1115 09:54:31.858070    8440 cache_images.go:83] CacheImage k8s.gcr.io/etcd:3.3.15-0 -> /home/manuel/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0 succeeded
💡  Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
I1115 09:54:31.857819    8440 cache_images.go:296] CacheImage: k8s.gcr.io/kube-proxy:v1.16.2 -> /home/manuel/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.2
I1115 09:54:31.858493    8440 none.go:257] checking for running kubelet ...
I1115 09:54:31.858505    8440 cache_images.go:302] /home/manuel/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.2 exists
I1115 09:54:31.858518    8440 cache_images.go:298] CacheImage: k8s.gcr.io/kube-proxy:v1.16.2 -> /home/manuel/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.2 completed in 718.504µs
I1115 09:54:31.858316    8440 cache_images.go:302] /home/manuel/.minikube/cache/images/k8s.gcr.io/pause_3.1 exists
I1115 09:54:31.858542    8440 exec_runner.go:42] (ExecRunner) Run:  systemctl is-active --quiet service kubelet
I1115 09:54:31.858545    8440 cache_images.go:298] CacheImage: k8s.gcr.io/pause:3.1 -> /home/manuel/.minikube/cache/images/k8s.gcr.io/pause_3.1 completed in 707.932µs
I1115 09:54:31.858673    8440 cache_images.go:83] CacheImage k8s.gcr.io/pause:3.1 -> /home/manuel/.minikube/cache/images/k8s.gcr.io/pause_3.1 succeeded
I1115 09:54:31.858447    8440 cache_images.go:83] CacheImage k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13 -> /home/manuel/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13 succeeded
I1115 09:54:31.858533    8440 cache_images.go:83] CacheImage k8s.gcr.io/kube-proxy:v1.16.2 -> /home/manuel/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.2 succeeded
I1115 09:54:31.858372    8440 cache_images.go:298] CacheImage: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1 -> /home/manuel/.minikube/cache/images/k8s.gcr.io/kubernetes-dashboard-amd64_v1.10.1 completed in 512.425µs
I1115 09:54:31.858705    8440 cache_images.go:83] CacheImage k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1 -> /home/manuel/.minikube/cache/images/k8s.gcr.io/kubernetes-dashboard-amd64_v1.10.1 succeeded
I1115 09:54:31.858714    8440 cache_images.go:90] Successfully cached all images.
I1115 09:54:31.863648    8440 exec_runner.go:74] (ExecRunner) Non-zero exit: systemctl is-active --quiet service kubelet: exit status 3 (5.08039ms)
I1115 09:54:31.863700    8440 none.go:127] kubelet not running: check kubelet: command failed: systemctl is-active --quiet service kubelet
stdout: 
stderr: : exit status 3
I1115 09:54:31.863716    8440 cluster.go:113] Machine state:  Stopped
🔄  Starting existing none VM for "minikube" ...
I1115 09:54:31.865698    8440 cluster.go:131] engine options: &{ArbitraryFlags:[] DNS:[] GraphDir: Env:[] Ipv6:false InsecureRegistry:[10.96.0.0/12] Labels:[] LogLevel: StorageDriver: SelinuxEnabled:false TLSVerify:false RegistryMirror:[] InstallURL:https://get.docker.com}
⌛  Waiting for the host to be provisioned ...
I1115 09:54:31.865754    8440 cluster.go:144] configureHost: &{BaseDriver:0xc00034e280 CommonDriver:<nil> URL:tcp://172.16.174.93:2376 runtime:0xc0007905a0 exec:0x276da88}
I1115 09:54:31.865780    8440 cluster.go:163] none is a local driver, skipping auth/time setup
I1115 09:54:31.865788    8440 cluster.go:146] configureHost completed within 35.442µs
I1115 09:54:31.866543    8440 exec_runner.go:42] (ExecRunner) Run:  nslookup kubernetes.io
I1115 09:54:31.877935    8440 exec_runner.go:42] (ExecRunner) Run:  curl -sS https://k8s.gcr.io/
I1115 09:54:32.014752    8440 profile.go:82] Saving config to /home/manuel/.minikube/profiles/minikube/config.json ...
I1115 09:54:32.014871    8440 lock.go:41] attempting to write to file "/home/manuel/.minikube/profiles/minikube/config.json.tmp374253683" with filemode -rw-------
I1115 09:54:32.015258    8440 exec_runner.go:42] (ExecRunner) Run:  sudo systemctl start docker
I1115 09:54:32.030293    8440 exec_runner.go:42] (ExecRunner) Run:  docker version --format '{{.Server.Version}}'
🐳  Preparing Kubernetes v1.16.2 on Docker '18.06.1-ce' ...
I1115 09:54:32.079622    8440 settings.go:124] acquiring lock: {Name:kubeconfigUpdate Clock:{} Delay:10s Timeout:0s Cancel:<nil>}
I1115 09:54:32.079754    8440 settings.go:132] Updating kubeconfig:  /home/manuel/.kube/config
I1115 09:54:32.081444    8440 lock.go:41] attempting to write to file "/home/manuel/.kube/config" with filemode -rw-------
    ▪ kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
I1115 09:54:32.082130    8440 cache_images.go:96] LoadImages start: [k8s.gcr.io/kube-proxy:v1.16.2 k8s.gcr.io/kube-scheduler:v1.16.2 k8s.gcr.io/kube-controller-manager:v1.16.2 k8s.gcr.io/kube-apiserver:v1.16.2 k8s.gcr.io/pause:3.1 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13 k8s.gcr.io/etcd:3.3.15-0 k8s.gcr.io/coredns:1.6.2 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kube-addon-manager:v9.0 gcr.io/k8s-minikube/storage-provisioner:v1.8.1]
I1115 09:54:32.082304    8440 cache_images.go:211] Loading image from cache: /home/manuel/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1
I1115 09:54:32.082312    8440 cache_images.go:211] Loading image from cache: /home/manuel/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.2
I1115 09:54:32.082333    8440 cache_images.go:211] Loading image from cache: /home/manuel/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.2
I1115 09:54:32.082342    8440 cache_images.go:211] Loading image from cache: /home/manuel/.minikube/cache/images/k8s.gcr.io/kubernetes-dashboard-amd64_v1.10.1
I1115 09:54:32.082370    8440 cache_images.go:211] Loading image from cache: /home/manuel/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0
I1115 09:54:32.082341    8440 cache_images.go:211] Loading image from cache: /home/manuel/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0
I1115 09:54:32.082320    8440 cache_images.go:211] Loading image from cache: /home/manuel/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.2
I1115 09:54:32.082353    8440 cache_images.go:211] Loading image from cache: /home/manuel/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13
I1115 09:54:32.082351    8440 cache_images.go:211] Loading image from cache: /home/manuel/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13
I1115 09:54:32.082355    8440 cache_images.go:211] Loading image from cache: /home/manuel/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13
I1115 09:54:32.105782    8440 docker.go:107] Loading image: /var/lib/minikube/images/k8s-dns-sidecar-amd64_1.14.13
I1115 09:54:32.105985    8440 exec_runner.go:42] (ExecRunner) Run:  docker load -i /var/lib/minikube/images/k8s-dns-sidecar-amd64_1.14.13
I1115 09:54:32.082365    8440 cache_images.go:211] Loading image from cache: /home/manuel/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.2
I1115 09:54:32.082307    8440 cache_images.go:211] Loading image from cache: /home/manuel/.minikube/cache/images/k8s.gcr.io/pause_3.1
I1115 09:54:32.082373    8440 cache_images.go:211] Loading image from cache: /home/manuel/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2
I1115 09:54:32.316288    8440 cache_images.go:237] Successfully loaded image /home/manuel/.minikube/cache/images/k8s.gcr.io/k8s-dns-sidecar-amd64_1.14.13 from cache
I1115 09:54:32.316312    8440 docker.go:107] Loading image: /var/lib/minikube/images/storage-provisioner_v1.8.1
I1115 09:54:32.316366    8440 exec_runner.go:42] (ExecRunner) Run:  docker load -i /var/lib/minikube/images/storage-provisioner_v1.8.1
I1115 09:54:32.461175    8440 cache_images.go:237] Successfully loaded image /home/manuel/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 from cache
I1115 09:54:32.461202    8440 docker.go:107] Loading image: /var/lib/minikube/images/k8s-dns-dnsmasq-nanny-amd64_1.14.13
I1115 09:54:32.461264    8440 exec_runner.go:42] (ExecRunner) Run:  docker load -i /var/lib/minikube/images/k8s-dns-dnsmasq-nanny-amd64_1.14.13
I1115 09:54:32.600464    8440 cache_images.go:237] Successfully loaded image /home/manuel/.minikube/cache/images/k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64_1.14.13 from cache
I1115 09:54:32.600503    8440 docker.go:107] Loading image: /var/lib/minikube/images/k8s-dns-kube-dns-amd64_1.14.13
I1115 09:54:32.600553    8440 exec_runner.go:42] (ExecRunner) Run:  docker load -i /var/lib/minikube/images/k8s-dns-kube-dns-amd64_1.14.13
I1115 09:54:32.736412    8440 cache_images.go:237] Successfully loaded image /home/manuel/.minikube/cache/images/k8s.gcr.io/k8s-dns-kube-dns-amd64_1.14.13 from cache
I1115 09:54:32.736439    8440 docker.go:107] Loading image: /var/lib/minikube/images/pause_3.1
I1115 09:54:32.736494    8440 exec_runner.go:42] (ExecRunner) Run:  docker load -i /var/lib/minikube/images/pause_3.1
I1115 09:54:32.861402    8440 cache_images.go:237] Successfully loaded image /home/manuel/.minikube/cache/images/k8s.gcr.io/pause_3.1 from cache
I1115 09:54:32.861427    8440 docker.go:107] Loading image: /var/lib/minikube/images/kube-scheduler_v1.16.2
I1115 09:54:32.861491    8440 exec_runner.go:42] (ExecRunner) Run:  docker load -i /var/lib/minikube/images/kube-scheduler_v1.16.2
I1115 09:54:33.029401    8440 cache_images.go:237] Successfully loaded image /home/manuel/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.2 from cache
I1115 09:54:33.029447    8440 docker.go:107] Loading image: /var/lib/minikube/images/coredns_1.6.2
I1115 09:54:33.029516    8440 exec_runner.go:42] (ExecRunner) Run:  docker load -i /var/lib/minikube/images/coredns_1.6.2
I1115 09:54:33.174186    8440 cache_images.go:237] Successfully loaded image /home/manuel/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2 from cache
I1115 09:54:33.174219    8440 docker.go:107] Loading image: /var/lib/minikube/images/kube-apiserver_v1.16.2
I1115 09:54:33.174289    8440 exec_runner.go:42] (ExecRunner) Run:  docker load -i /var/lib/minikube/images/kube-apiserver_v1.16.2
I1115 09:54:33.334208    8440 cache_images.go:237] Successfully loaded image /home/manuel/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.2 from cache
I1115 09:54:33.334252    8440 docker.go:107] Loading image: /var/lib/minikube/images/kube-addon-manager_v9.0
I1115 09:54:33.334313    8440 exec_runner.go:42] (ExecRunner) Run:  docker load -i /var/lib/minikube/images/kube-addon-manager_v9.0
I1115 09:54:33.502653    8440 cache_images.go:237] Successfully loaded image /home/manuel/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0 from cache
I1115 09:54:33.502699    8440 docker.go:107] Loading image: /var/lib/minikube/images/kubernetes-dashboard-amd64_v1.10.1
I1115 09:54:33.502746    8440 exec_runner.go:42] (ExecRunner) Run:  docker load -i /var/lib/minikube/images/kubernetes-dashboard-amd64_v1.10.1
I1115 09:54:33.677184    8440 cache_images.go:237] Successfully loaded image /home/manuel/.minikube/cache/images/k8s.gcr.io/kubernetes-dashboard-amd64_v1.10.1 from cache
I1115 09:54:33.677205    8440 docker.go:107] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.16.2
I1115 09:54:33.677252    8440 exec_runner.go:42] (ExecRunner) Run:  docker load -i /var/lib/minikube/images/kube-controller-manager_v1.16.2
I1115 09:54:33.850350    8440 cache_images.go:237] Successfully loaded image /home/manuel/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.2 from cache
I1115 09:54:33.850375    8440 docker.go:107] Loading image: /var/lib/minikube/images/kube-proxy_v1.16.2
I1115 09:54:33.850426    8440 exec_runner.go:42] (ExecRunner) Run:  docker load -i /var/lib/minikube/images/kube-proxy_v1.16.2
I1115 09:54:33.995201    8440 cache_images.go:237] Successfully loaded image /home/manuel/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.2 from cache
I1115 09:54:33.995229    8440 docker.go:107] Loading image: /var/lib/minikube/images/etcd_3.3.15-0
I1115 09:54:33.995282    8440 exec_runner.go:42] (ExecRunner) Run:  docker load -i /var/lib/minikube/images/etcd_3.3.15-0
I1115 09:54:34.192751    8440 cache_images.go:237] Successfully loaded image /home/manuel/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0 from cache
I1115 09:54:34.192785    8440 cache_images.go:120] Successfully loaded all cached images.
I1115 09:54:34.192792    8440 cache_images.go:121] LoadImages end
I1115 09:54:34.192958    8440 kubeadm.go:665] kubelet v1.16.2 config:
[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.16.2/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --config=/var/lib/kubelet/config.yaml --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.16.174.93 --pod-manifest-path=/etc/kubernetes/manifests --resolv-conf=/run/systemd/resolve/resolv.conf

[Install]
I1115 09:54:34.192991    8440 exec_runner.go:42] (ExecRunner) Run:  /bin/bash -c "pgrep kubelet && sudo systemctl stop kubelet"
I1115 09:54:34.209386    8440 cache_binaries.go:74] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.16.2/bin/linux/amd64/kubelet
I1115 09:54:34.209386    8440 cache_binaries.go:74] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.16.2/bin/linux/amd64/kubeadm
I1115 09:54:34.298448    8440 exec_runner.go:42] (ExecRunner) Run:  /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl start kubelet"
I1115 09:54:34.525029    8440 certs.go:75] acquiring lock: {Name:setupCerts Clock:{} Delay:15s Timeout:0s Cancel:<nil>}
I1115 09:54:34.525166    8440 certs.go:83] Setting up /home/manuel/.minikube for IP: 172.16.174.93
I1115 09:54:34.525223    8440 crypto.go:69] Generating cert /home/manuel/.minikube/client.crt with IP's: []
I1115 09:54:34.529200    8440 crypto.go:157] Writing cert to /home/manuel/.minikube/client.crt ...
I1115 09:54:34.529229    8440 lock.go:41] attempting to write to file "/home/manuel/.minikube/client.crt" with filemode -rw-r--r--
I1115 09:54:34.529440    8440 crypto.go:165] Writing key to /home/manuel/.minikube/client.key ...
I1115 09:54:34.529455    8440 lock.go:41] attempting to write to file "/home/manuel/.minikube/client.key" with filemode -rw-------
I1115 09:54:34.529577    8440 crypto.go:69] Generating cert /home/manuel/.minikube/apiserver.crt with IP's: [172.16.174.93 10.96.0.1 10.0.0.1]
I1115 09:54:34.532772    8440 crypto.go:157] Writing cert to /home/manuel/.minikube/apiserver.crt ...
I1115 09:54:34.532790    8440 lock.go:41] attempting to write to file "/home/manuel/.minikube/apiserver.crt" with filemode -rw-r--r--
I1115 09:54:34.532941    8440 crypto.go:165] Writing key to /home/manuel/.minikube/apiserver.key ...
I1115 09:54:34.532955    8440 lock.go:41] attempting to write to file "/home/manuel/.minikube/apiserver.key" with filemode -rw-------
I1115 09:54:34.533069    8440 crypto.go:69] Generating cert /home/manuel/.minikube/proxy-client.crt with IP's: []
I1115 09:54:34.536329    8440 crypto.go:157] Writing cert to /home/manuel/.minikube/proxy-client.crt ...
I1115 09:54:34.536344    8440 lock.go:41] attempting to write to file "/home/manuel/.minikube/proxy-client.crt" with filemode -rw-r--r--
I1115 09:54:34.536483    8440 crypto.go:165] Writing key to /home/manuel/.minikube/proxy-client.key ...
I1115 09:54:34.536496    8440 lock.go:41] attempting to write to file "/home/manuel/.minikube/proxy-client.key" with filemode -rw-------
I1115 09:54:34.537895    8440 exec_runner.go:42] (ExecRunner) Run:  openssl version
I1115 09:54:34.540420    8440 exec_runner.go:42] (ExecRunner) Run:  sudo test -f /etc/ssl/certs/minikubeCA.pem
I1115 09:54:34.545213    8440 exec_runner.go:42] (ExecRunner) Run:  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1115 09:54:34.547906    8440 exec_runner.go:42] (ExecRunner) Run:  sudo test -f /etc/ssl/certs/b5213941.0
🔄  Relaunching Kubernetes using kubeadm ... 
I1115 09:54:34.558955    8440 kubeadm.go:436] RestartCluster start
I1115 09:54:34.559004    8440 exec_runner.go:42] (ExecRunner) Run:  sudo test -d /data/minikube
I1115 09:54:34.564427    8440 exec_runner.go:74] (ExecRunner) Non-zero exit: sudo test -d /data/minikube: exit status 1 (5.394854ms)
I1115 09:54:34.564476    8440 kubeadm.go:229] /data/minikube skipping compat symlinks: command failed: sudo test -d /data/minikube
stdout: 
stderr: : exit status 1
I1115 09:54:34.564504    8440 exec_runner.go:42] (ExecRunner) Run:  /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.16.2:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I1115 09:54:34.610546    8440 exec_runner.go:74] (ExecRunner) Non-zero exit: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.16.2:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml": exit status 1 (46.004903ms)
-- stdout --
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority

-- /stdout --
** stderr ** 
error execution phase certs/apiserver: [certs] certificate apiserver not signed by CA certificate ca: crypto/rsa: verification error
To see the stack trace of this error execute with --v=5 or higher

** /stderr **
I1115 09:54:34.610603    8440 kubeadm.go:439] RestartCluster took 51.636311ms
I1115 09:54:34.610641    8440 exec_runner.go:42] (ExecRunner) Run:  docker ps -a --filter=name=k8s_kube-apiserver --format="{{.ID}}"
I1115 09:54:34.659708    8440 logs.go:178] 0 containers: []
W1115 09:54:34.659730    8440 logs.go:180] No container was found matching "kube-apiserver"
I1115 09:54:34.659917    8440 exec_runner.go:42] (ExecRunner) Run:  docker ps -a --filter=name=k8s_coredns --format="{{.ID}}"
I1115 09:54:34.707159    8440 logs.go:178] 0 containers: []
W1115 09:54:34.707191    8440 logs.go:180] No container was found matching "coredns"
I1115 09:54:34.707240    8440 exec_runner.go:42] (ExecRunner) Run:  docker ps -a --filter=name=k8s_kube-scheduler --format="{{.ID}}"
I1115 09:54:34.754625    8440 logs.go:178] 0 containers: []
W1115 09:54:34.754648    8440 logs.go:180] No container was found matching "kube-scheduler"
I1115 09:54:34.754686    8440 exec_runner.go:42] (ExecRunner) Run:  docker ps -a --filter=name=k8s_kube-proxy --format="{{.ID}}"
I1115 09:54:34.805427    8440 logs.go:178] 0 containers: []
W1115 09:54:34.805450    8440 logs.go:180] No container was found matching "kube-proxy"
I1115 09:54:34.805490    8440 exec_runner.go:42] (ExecRunner) Run:  docker ps -a --filter=name=k8s_kube-addon-manager --format="{{.ID}}"
I1115 09:54:34.855381    8440 logs.go:178] 0 containers: []
W1115 09:54:34.855401    8440 logs.go:180] No container was found matching "kube-addon-manager"
I1115 09:54:34.855443    8440 exec_runner.go:42] (ExecRunner) Run:  docker ps -a --filter=name=k8s_kubernetes-dashboard --format="{{.ID}}"
I1115 09:54:34.906057    8440 logs.go:178] 0 containers: []
W1115 09:54:34.906079    8440 logs.go:180] No container was found matching "kubernetes-dashboard"
I1115 09:54:34.906123    8440 exec_runner.go:42] (ExecRunner) Run:  docker ps -a --filter=name=k8s_storage-provisioner --format="{{.ID}}"
I1115 09:54:34.954550    8440 logs.go:178] 0 containers: []
W1115 09:54:34.954572    8440 logs.go:180] No container was found matching "storage-provisioner"
I1115 09:54:34.954614    8440 exec_runner.go:42] (ExecRunner) Run:  docker ps -a --filter=name=k8s_kube-controller-manager --format="{{.ID}}"
I1115 09:54:35.002605    8440 logs.go:178] 0 containers: []
W1115 09:54:35.002629    8440 logs.go:180] No container was found matching "kube-controller-manager"
I1115 09:54:35.002645    8440 logs.go:92] Gathering logs for Docker ...
I1115 09:54:35.002657    8440 exec_runner.go:42] (ExecRunner) Run:  /bin/bash -c "sudo journalctl -u docker -n 200"
I1115 09:54:35.039281    8440 logs.go:92] Gathering logs for container status ...
I1115 09:54:35.039320    8440 exec_runner.go:42] (ExecRunner) Run:  /bin/bash -c "sudo crictl ps -a || sudo docker ps -a"
I1115 09:54:35.102578    8440 logs.go:92] Gathering logs for kubelet ...
I1115 09:54:35.102604    8440 exec_runner.go:42] (ExecRunner) Run:  /bin/bash -c "sudo journalctl -u kubelet -n 200"
I1115 09:54:35.131450    8440 logs.go:92] Gathering logs for dmesg ...
I1115 09:54:35.131480    8440 exec_runner.go:42] (ExecRunner) Run:  /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 200"
W1115 09:54:35.143322    8440 exit.go:101] Error restarting cluster: running cmd: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.16.2:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml": command failed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.16.2:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
stdout: [certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority

stderr: error execution phase certs/apiserver: [certs] certificate apiserver not signed by CA certificate ca: crypto/rsa: verification error
To see the stack trace of this error execute with --v=5 or higher
: exit status 1

💣  Error restarting cluster: running cmd: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.16.2:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml": command failed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.16.2:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
stdout: [certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority

stderr: error execution phase certs/apiserver: [certs] certificate apiserver not signed by CA certificate ca: crypto/rsa: verification error
To see the stack trace of this error execute with --v=5 or higher
: exit status 1

😿  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:

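The stderr above means kubeadm found an apiserver certificate on disk that no longer validates against the existing CA (`[certs] Using existing ca certificate authority` followed by the `crypto/rsa: verification error`). As a diagnostic sketch — not something suggested in this thread — the chain can be checked directly with openssl. The paths are minikube's none-driver defaults taken from the log, and `check_cert_chain` is a hypothetical helper name:

```shell
#!/bin/sh
# Hypothetical helper: verify that a certificate was signed by a given CA.
# Prints "<cert>: OK" on success; a mismatched pair reproduces the same
# class of failure kubeadm reports ("not signed by CA certificate ca").
check_cert_chain() {
  ca="$1"; cert="$2"
  openssl verify -CAfile "$ca" "$cert"
}

# On the affected host, the relevant files (paths from the log above) would be:
#   check_cert_chain /var/lib/minikube/certs/ca.crt /var/lib/minikube/certs/apiserver.crt
```

If verification fails while the CA file itself is intact, the apiserver cert/key pair under `/var/lib/minikube/certs` is likely a stale leftover from a previous cluster that was signed by a different CA.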
@priyawadhwa priyawadhwa added kind/support Categorizes issue or PR as a support question. co/none-driver labels Nov 20, 2019
@medyagh
medyagh commented Jan 29, 2020

@roxax19 do you mind trying to free the ports manually (by killing the processes that hold them)?

In minikube we could do better here: make sure the ports are free and, if they are not, let the user know beforehand!

I will file this as a bug for the none driver, so that we detect when the ports are already in use.
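One way to act on the suggestion above is to check which, if any, process is already listening on the relevant TCP ports before killing it. This is a sketch, not from the thread: the port list assumes a default kubeadm control plane, `ss` comes from iproute2, and `port_in_use` is a hypothetical helper name:

```shell
#!/bin/sh
# Hypothetical helper: return success (0) if something listens on a TCP port.
port_in_use() {
  # -t TCP, -l listening sockets, -n numeric; the last argument is an ss
  # filter expression. tail skips the header line ss always prints.
  ss -tln "sport = :$1" | tail -n +2 | grep -q .
}

# Default control-plane ports (assumption; adjust for your setup):
for port in 8443 10250 10251 10252 2379 2380; do
  if port_in_use "$port"; then
    echo "port $port is taken; find the owner with: sudo ss -tlnp \"sport = :$port\""
  fi
done
```

Running `ss` with `-p` under sudo additionally shows the PID and name of the owning process, which is what you would kill or stop (e.g. a kubelet left running by a previous `--vm-driver=none` start).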

@medyagh
medyagh commented Jan 29, 2020

@roxax19 thank you so much for helping us make minikube better. I created a bug to track this issue, so that we can tell users in a friendlier way that the ports are taken and provide a better experience. I will close this one; you can track the progress of that bug here #5899

@medyagh medyagh closed this as completed Jan 29, 2020