minikube start fails with driver kvm2 on AMD Ryzen CPU #6168

Closed
ateijelo opened this issue Dec 27, 2019 · 20 comments
Labels: area/guest-vm, kind/bug, priority/important-soon

Comments

@ateijelo

ateijelo commented Dec 27, 2019

minikube fails to create a new cluster on Manjaro, using driver kvm2. The VM using boot2docker fails to boot properly.

The exact command to reproduce the issue:

minikube start --vm-driver=kvm2

The full output of the command that failed:

😄  minikube v1.6.2 on Arch 18.1.4
✨  Selecting 'kvm2' driver from user configuration (alternates: [virtualbox none])
🔥  Creating kvm2 VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...

💣  Unable to start VM. Please investigate and run 'minikube delete' if possible: create:
Error creating machine: Error in driver during machine creation: machine didn't return
an IP after 120 seconds

😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
👉  https://github.com/kubernetes/minikube/issues/new/choose

The output of the minikube logs command:

💣  command runner
❌  Error: [SSH_AUTH_FAILURE] getting ssh client for bootstrapper: Error dialing tcp via ssh client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
💡  Suggestion: Your host is failing to route packets to the minikube VM. If you have VPN software, try turning it off or configuring it so that it does not re-route traffic to the VM IP. If not, check your VM environment routing options.
📘  Documentation: https://minikube.sigs.k8s.io/docs/reference/networking/vpn/
⁉️   Related issues:
    ▪ https://github.com/kubernetes/minikube/issues/3930

😿  If the above advice does not help, please let us know: 
👉  https://github.com/kubernetes/minikube/issues/new/choose

The operating system version: Manjaro 18.1.4 (Arch), running kernel 5.4.2-1-MANJARO, with QEMU 4.2

More details:

Despite the message about not returning an IP after 120 seconds, the issue doesn't seem to be related to networking. After a lot of digging around I narrowed the problem down to this command:

/usr/bin/qemu-system-x86_64
   -machine pc-i440fx-4.2,accel=kvm,usb=off,dump-guest-core=off
   -cpu host
   -m 1908
   -boot menu=on
   -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
   -device lsi,id=scsi0,bus=pci.0,addr=0x4
   -drive file=$HOME/.minikube/machines/minikube/boot2docker.iso,format=raw,if=none,id=drive-scsi0-0-2,readonly=on
   -device scsi-cd,bus=scsi0.0,scsi-id=2,device_id=drive-scsi0-0-2,drive=drive-scsi0-0-2,id=scsi0-0-2,bootindex=1
   -drive file=$HOME/.minikube/machines/minikube/minikube.rawdisk,format=raw,if=none,id=drive-virtio-disk0,aio=threads
   -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2
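For reference, the boot output shows up on the guest's serial console; standard QEMU options to put that console on the launching terminal (the exact flags here are illustrative, not necessarily what libvirt passes) are:

   -display none
   -serial mon:stdio

-display none suppresses the graphical window, and -serial mon:stdio multiplexes the guest serial console and the QEMU monitor onto stdio.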

On a Debian system running the same QEMU on kernel 5.3.0-2-amd64, that command prints a long series of [ OK ] lines on the VM serial console, all the way to the minikube login: prompt. But on Manjaro, I get this:

Welcome to Buildroot 2019.02.7!

[  OK  ] Created slice User and Session Slice.
[FAILED] Failed to start Slices.
See 'systemctl status slices.target' for details.
[FAILED] Failed to listen on Journal Audit Socket.
See 'systemctl status systemd-journald-audit.socket' for details.
[FAILED] Failed to listen on Network Service Netlink Socket.
See 'systemctl status systemd-networkd.socket' for details.
[FAILED] Failed to listen on Journal Socket.
See 'systemctl status systemd-journald.socket' for details.
[DEPEND] Dependency failed for Journal Service.
[DEPEND] Dependency failed for Flus…Journal to Persistent Storage.
[FAILED] Failed to mount Huge Pages File System.
See 'systemctl status dev-hugepages.mount' for details.
[FAILED] Failed to start Remount Root and Kernel File Systems.
See 'systemctl status systemd-remount-fs.service' for details.
[FAILED] Failed to listen on Journal Socket (/dev/log).
See 'systemctl status systemd-journald-dev-log.socket' for details.
[FAILED] Failed to start system-getty.slice.
See 'systemctl status system-getty.slice' for details.
[DEPEND] Dependency failed for Getty on tty1.
[FAILED] Failed to listen on udev Kernel Socket.
See 'systemctl status systemd-udevd-kernel.socket' for details.
[FAILED] Failed to start NFS client services.
See 'systemctl status nfs-client.target' for details.
[FAILED] Failed to start Swap.
See 'systemctl status swap.target' for details.
[FAILED] Failed to mount Temporary Directory (/tmp).
See 'systemctl status tmp.mount' for details.
[DEPEND] Dependency failed for Network Time Synchronization.
[DEPEND] Dependency failed for Network Name Resolution.
[FAILED] Failed to start Host and Network Name Lookups.
See 'systemctl status nss-lookup.target' for details.
[DEPEND] Dependency failed for NFS … monitor for NFSv2/3 locking..
[FAILED] Failed to start System Time Synchronized.
See 'systemctl status time-sync.target' for details.
[FAILED] Failed to start Create lis… nodes for the current kernel.
See 'systemctl status kmod-static-nodes.service' for details.
[FAILED] Failed to mount POSIX Message Queue File System.
See 'systemctl status dev-mqueue.mount' for details.
[FAILED] Failed to mount FUSE Control File System.
See 'systemctl status sys-fs-fuse-connections.mount' for details.
[FAILED] Failed to start Forward Pa…uests to Wall Directory Watch.
See 'systemctl status systemd-ask-password-wall.path' for details.
[FAILED] Failed to mount Kernel Debug File System.
See 'systemctl status sys-kernel-debug.mount' for details.
[FAILED] Failed to listen on initctl Compatibility Named Pipe.
See 'systemctl status systemd-initctl.socket' for details.
[FAILED] Failed to start Apply Kernel Variables.
See 'systemctl status systemd-sysctl.service' for details.
[FAILED] Failed to mount NFSD configuration filesystem.
See 'systemctl status proc-fs-nfsd.mount' for details.
[DEPEND] Dependency failed for NFS Mount Daemon.
[DEPEND] Dependency failed for NFS server and services.
[FAILED] Failed to start Remote File Systems (Pre).
See 'systemctl status remote-fs-pre.target' for details.
[FAILED] Failed to start Dispatch P…ts to Console Directory Watch.
See 'systemctl status systemd-ask-password-console.path' for details.
[FAILED] Failed to listen on udev Control Socket.
See 'systemctl status systemd-udevd-control.socket' for details.
[FAILED] Failed to start udev Coldplug all Devices.
See 'systemctl status systemd-udev-trigger.service' for details.
[FAILED] Failed to start udev Wait …omplete Device Initialization.
See 'systemctl status systemd-udev-settle.service' for details.
[DEPEND] Dependency failed for minikube automount.
[FAILED] Failed to start Paths.
See 'systemctl status paths.target' for details.
[FAILED] Failed to start system-serial\x2dgetty.slice.
See 'systemctl status "system-serial\\x2dgetty.slice"' for details.
[DEPEND] Dependency failed for Serial Getty on ttyS0.
[FAILED] Failed to start Login Prompts.
See 'systemctl status getty.target' for details.
[FAILED] Failed to start Create Static Device Nodes in /dev.
See 'systemctl status systemd-tmpfiles-setup-dev.service' for details.
[FAILED] Failed to start Local File Systems (Pre).
See 'systemctl status local-fs-pre.target' for details.
[FAILED] Failed to start Local File Systems.
See 'systemctl status local-fs.target' for details.
[FAILED] Failed to start Preprocess NFS configuration.
See 'systemctl status nfs-config.service' for details.
[FAILED] Failed to start udev Kernel Device Manager.
See 'systemctl status systemd-udevd.service' for details.
[FAILED] Failed to start Network Service.
See 'systemctl status systemd-networkd.service' for details.
[FAILED] Failed to start Network.
See 'systemctl status network.target' for details.
[DEPEND] Dependency failed for Notify NFS peers of a restart.
[FAILED] Failed to start Remote File Systems.
See 'systemctl status remote-fs.target' for details.
[FAILED] Failed to start Containers.
See 'systemctl status machines.target' for details.
[FAILED] Failed to start RPC Port Mapper.
See 'systemctl status rpcbind.target' for details.
[FAILED] Failed to start Create Volatile Files and Directories.
See 'systemctl status systemd-tmpfiles-setup.service' for details.
[FAILED] Failed to start Update UTMP about System Boot/Shutdown.
See 'systemctl status systemd-update-utmp.service' for details.
[DEPEND] Dependency failed for Upda…about System Runlevel Changes.
[FAILED] Failed to start Rebuild Journal Catalog.
See 'systemctl status systemd-journal-catalog-update.service' for details.
[FAILED] Failed to start Update is Completed.
See 'systemctl status systemd-update-done.service' for details.
[FAILED] Failed to start System Initialization.
See 'systemctl status sysinit.target' for details.
[DEPEND] Dependency failed for RPCbind Server Activation Socket.
[DEPEND] Dependency failed for RPC bind service.
[DEPEND] Dependency failed for OpenSSH server daemon.
[DEPEND] Dependency failed for Hyper-V FCOPY Daemon.
[DEPEND] Dependency failed for Hyper-V VSS Daemon.
[DEPEND] Dependency failed for Basic System.
[DEPEND] Dependency failed for Multi-User System.
[DEPEND] Dependency failed for Login Service.
[DEPEND] Dependency failed for D-Bus System Message Bus Socket.
[DEPEND] Dependency failed for D-Bus System Message Bus.
[DEPEND] Dependency failed for vmtoolsd for openvmtools.
[DEPEND] Dependency failed for Hyper-V Key Value Pair Daemon.
[DEPEND] Dependency failed for VirtualBox Guest Service.
[DEPEND] Dependency failed for Dail…anup of Temporary Directories.
[FAILED] Failed to start Timers.
See 'systemctl status timers.target' for details.
[FAILED] Failed to start Sockets.
See 'systemctl status sockets.target' for details.

And the VM just stays hung there. Is there any way I could troubleshoot that boot process?

Both computers are loading the same boot2docker.iso (sha256: a24153a2e49f082d5f4a36ea5d1608cba2482d563e8642a8dffd6560c40f3ed2).

The other difference between the systems is the CPU:

  • Debian: Intel(R) Core(TM) i7-8550U
  • Manjaro: AMD Ryzen 7 3800X 8-Core
@ateijelo
Author

ateijelo commented Dec 28, 2019

I just tried a live Ubuntu 19.10 (running kernel 5.3.0-18-generic) on the same AMD desktop I have Manjaro on. I followed these steps:

  • downloaded minikube-linux-amd64, v1.6.2
  • installed libvirt-daemon, libvirt-daemon-system and virt-manager
  • ran minikube start

That failed again with "machine didn't return an IP after 120 seconds", and the minikube VM console in Virtual Machine Manager shows the same errors as in my previous comment:

Welcome to Buildroot 2019.02.7!

[  OK  ] Listening on udev Kernel Socket.
[FAILED] Failed to listen on initctl Compatibility Named Pipe.
See 'systemctl status systemd-initctl.socket' for details.
...

My working theory at the moment is that there's some incompatibility between the current minikube ISO and my CPU. I edited the issue title to reflect that. I'll keep investigating.

@ateijelo ateijelo changed the title minikube start fails with driver kvm2 on Manjaro minikube start fails with driver kvm2 on AMD Ryzen CPU Dec 28, 2019
@afbjorklund
Collaborator

Probably yet another systemd bug, similar to this one:
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1835809

@ateijelo
Author

Another data point:

I just ran the same qemu command as above, except I replaced -cpu host with -cpu Haswell-v4. QEMU spit out a number of warnings about TCG not supporting some CPU features, but minikube booted successfully.

The question now is, will using a different emulated CPU impact minikube somehow?

And a minor question: is there a way for me to change the default -cpu flag that minikube/libvirt sends to qemu (without hacking a script that replaces the real qemu binary and changes the flags on the fly)?
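For what it's worth, libvirt derives the -cpu flag from the <cpu> element of the domain XML, so one thing to try (untested here, and minikube may regenerate the domain definition when it recreates the machine) is editing the existing domain directly:

virsh --connect qemu:///system edit minikube
# change the CPU element, e.g. from host passthrough (which becomes -cpu host):
#   <cpu mode='host-passthrough'/>
# to an explicit model:
#   <cpu mode='custom' match='exact'>
#     <model fallback='allow'>kvm64</model>
#   </cpu>
virsh --connect qemu:///system destroy minikube   # force off the stuck VM
virsh --connect qemu:///system start minikube     # boot it with the new CPU model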

@ateijelo
Author

ateijelo commented Dec 28, 2019

Ok, I managed to make it work. I tried a handful of CPUs manually and found -cpu kvm64 to have good performance. Other CPUs ran considerably slower.

I couldn't find any clean/elegant way to have minikube start the VM with a different CPU. So I ended up renaming /usr/bin/qemu-system-x86_64 to /usr/bin/qemu-system-x86_64.orig and putting this script in its place:

#!/usr/bin/env python

# Thin wrapper around the real QEMU binary: rewrite "-cpu host" to "-cpu kvm64",
# then exec the renamed original with the otherwise unchanged arguments.

import os
import sys

argv = sys.argv[:]

if "-cpu" in argv:
    i = argv.index("-cpu")
    if argv[i + 1] == "host":
        argv[i + 1] = "kvm64"

os.execvp("qemu-system-x86_64.orig", argv)

After this, minikube start created the cluster successfully.

This is clearly not a solution, and the bug remains somewhere in boot2docker or systemd, but at least this unblocks me and I hope it'll help others.
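For completeness, putting the wrapper in place looked roughly like this (qemu-cpu-wrapper.py below is just a stand-in name for wherever the script above was saved, and any qemu package upgrade will overwrite the wrapper):

sudo mv /usr/bin/qemu-system-x86_64 /usr/bin/qemu-system-x86_64.orig
sudo install -m 0755 qemu-cpu-wrapper.py /usr/bin/qemu-system-x86_64   # stand-in file name
minikube delete
minikube start --vm-driver=kvm2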

@yuchengwu

yuchengwu commented Dec 29, 2019

I probably ran into the same issue: executing minikube start <args> on machine 1 starts successfully, while machine 2 always fails. Both machines are Ubuntu 18.04 with VirtualBox 5.2.32; the other major differences are the CPU and kernel:

  • machine 1: Intel(R) Core(TM) i5-5200U, 4.15.0-72-generic
  • machine 2: AMD Ryzen 5 3600 6-Core, 5.0.0-23-generic

Then I upgraded both to Ubuntu 19.04, so their kernel versions are now identical (5.0.0-37-generic). Again, machine 2 still has problems while machine 1 works OK.

For machine 2:
The full output of the command that failed:


wyc@3600:~/go/src/k8s.io/minikube$ ./out/minikube-linux-amd64 start --iso-url=https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.6.0.iso --image-mirror-country='cn' --image-repository='registry.cn-hangzhou.aliyuncs.com/google_containers' --registry-mirror=https://registry.docker-cn.com
😄 minikube v1.6.2 on Ubuntu 19.04
✨ Selecting 'virtualbox' driver from existing profile (alternates: [none])
✅ Using image repository registry.cn-hangzhou.aliyuncs.com/google_containers
💡 Tip: Use 'minikube start -p ' to create a new cluster, or 'minikube delete' to delete this one.
🏃 Using the running virtualbox "minikube" VM ...
⌛ Waiting for the host to be provisioned ...

💣 Unable to start VM. Please investigate and run 'minikube delete' if possible
❌ Error: [DOCKER_UNAVAILABLE] Temporary Error: Error configuring auth on host: OS type not recognized
💡 Suggestion: Docker inside the VM is unavailable. Try running 'minikube delete' to reset the VM.
⁉️ Related issues:
#3952

😿 If the above advice does not help, please let us know:
👉 https://github.com/kubernetes/minikube/issues/new/choose

The output of the minikube logs command

wyc@3600:~/go/src/k8s.io/minikube$ ./out/minikube-linux-amd64 logs

💣 command runner
❌ Error: [SSH_AUTH_FAILURE] getting ssh client for bootstrapper: Error dialing tcp via ssh client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
💡 Suggestion: Your host is failing to route packets to the minikube VM. If you have VPN software, try turning it off or configuring it so that it does not re-route traffic to the VM IP. If not, check your VM environment routing options.
📘 Documentation: https://minikube.sigs.k8s.io/docs/reference/networking/vpn/
⁉️ Related issues:
#3930

😿 If the above advice does not help, please let us know:
👉 https://github.com/kubernetes/minikube/issues/new/choose

The minikube start process is totally a black box to me. Any suggestions to help me debug the situation? Thanks.

@ateijelo
Author

I'm also new to minikube, but I had played with VirtualBox, libvirt and QEMU before, so I had an idea of how to poke around and figure out what was happening.

minikube's goal is to create a Kubernetes cluster and set up kubectl to use it. It has two distinct ways of doing that:

  • use Docker images & containers on your own host; that's what --vm-driver=none does
  • create a virtual machine and run the whole cluster in it

What I was trying to solve above was the second approach. If you don't mind the noise from using your real host, try minikube start --vm-driver=none. That approach may even have advantages in performance and resource allocation (you don't have to pre-allocate RAM and disk for the VM).
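That path looks roughly like this (the none driver uses the host's Docker and, at least in this minikube version, needs root):

sudo apt install docker.io
sudo minikube start --vm-driver=none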

If you want to use the VM approach, minikube can, on Linux, use either KVM or VirtualBox. For KVM, the process goes like this:

  • minikube asks libvirtd to create a VM
  • libvirtd uses qemu-system-x86_64 to run it

What I did above was to force qemu-system-x86_64 to use a different CPU, which seems to fix the incompatibility between the VM and the Ryzen CPU.

I see your minikube tried to use VirtualBox. Try installing libvirt-daemon-system and running minikube start --vm-driver=kvm2 and then follow the steps above. Hope this helps.
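On Ubuntu, that would look roughly like this (package names from memory, adjust for your release):

sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients virt-manager
sudo usermod -aG libvirt $USER    # log out and back in so the group change applies
virt-host-validate                # quick sanity check that KVM/libvirt are usable
minikube start --vm-driver=kvm2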

@afbjorklund
Collaborator

afbjorklund commented Dec 29, 2019

@ateijelo : Thanks for investigating, and coming up with a reasonable workaround in the meantime.

It would be interesting to see if Boot2Docker (Docker Machine) still works properly, since it doesn't use systemd (but init) and shouldn't have the same issue. If you could verify, that would be great!

https://github.com/boot2docker/boot2docker/releases/download/v19.03.5/boot2docker.iso
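For example, something along these lines (untested suggestion, reusing the trimmed qemu command from the first comment):

curl -LO https://github.com/boot2docker/boot2docker/releases/download/v19.03.5/boot2docker.iso
# then re-run the earlier qemu-system-x86_64 command with -cpu host, pointing the
# scsi-cd -drive file=... at this boot2docker.iso, and see whether it boots to a login prompt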

@yuchengwu: Your problem sounds different, although possibly also related to the AMD CPU?

We need more verbose logs to determine why it didn't start, and that "provisioning" error text posted by libmachine is horrible ("Error configuring auth on host: OS type not recognized"); see #5215 et al.

@afbjorklund afbjorklund added area/guest-vm General configuration issues with the minikube guest VM kind/bug Categorizes issue or PR as related to a bug. priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. co/kvm2-driver KVM2 driver related issues labels Dec 29, 2019
@yuchengwu

Thanks for the advice. To avoid polluting my host environment, I first tried switching to --vm-driver=kvm2 but encountered #2991 (comment).

Then I considered the --vm-driver=none option. Instead of running the command directly on the host machine, I created a VM and executed minikube start --vm-driver=none inside it; this time everything went well.

@yuchengwu

yuchengwu commented Dec 29, 2019

@afbjorklund

Hi, here is the more verbose output of minikube start --alsologtostderr -v=7 --vm-driver=virtualbox:

wyc@3600:~/go/src/k8s.io/minikube$ ./out/minikube-linux-amd64 delete 
🙄  "minikube" profile does not exist
🙄  "minikube" cluster does not exist. Proceeding ahead with cleanup.
💔  The "minikube" cluster has been deleted.
🔥  Successfully deleted profile "minikube"
wyc@3600:~/go/src/k8s.io/minikube$ ./out/minikube-linux-amd64 start --alsologtostderr -v=7 --vm-driver=virtualbox --iso-url=https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.6.0.iso --image-mirror-country='cn' --image-repository='registry.cn-hangzhou.aliyuncs.com/google_containers' --registry-mirror=https://registry.docker-cn.com
I1229 23:48:59.168485   21341 notify.go:125] Checking for updates...
I1229 23:49:02.367192   21341 start.go:255] hostinfo: {"hostname":"3600","uptime":8064,"bootTime":1577626478,"procs":506,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"19.04","kernelVersion":"5.0.0-37-generic","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"e124648c-1671-4f43-8a72-9804d20188e4"}
I1229 23:49:02.367929   21341 start.go:265] virtualization: kvm host
😄  minikube v1.6.2 on Ubuntu 19.04
I1229 23:49:02.368036   21341 start.go:555] selectDriver: flag="virtualbox", old=
I1229 23:49:02.368056   21341 global.go:60] Querying for installed drivers using PATH=/home/wyc/.minikube/bin:/usr/local/go/bin:/home/wyc/go/bin:/home/wyc/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
I1229 23:49:02.475863   21341 global.go:68] kvm2 priority: 6, state: {Installed:true Healthy:false Error:/usr/bin/virsh domcapabilities --virttype kvm failed:
error: failed to get emulator capabilities
error: invalid argument: KVM is not supported by '/usr/bin/qemu-system-x86_64' on this host Fix:Follow your Linux distribution instructions for configuring KVM Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/}
I1229 23:49:02.475950   21341 global.go:68] none priority: 2, state: {Installed:true Healthy:true Error: Fix: Doc:}
I1229 23:49:02.517229   21341 global.go:68] virtualbox priority: 4, state: {Installed:true Healthy:true Error: Fix: Doc:}
I1229 23:49:02.517281   21341 global.go:68] vmware priority: 5, state: {Installed:false Healthy:false Error:exec: "docker-machine-driver-vmware": executable file not found in $PATH Fix:Install docker-machine-driver-vmware Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/}
I1229 23:49:02.517298   21341 driver.go:109] requested: "virtualbox"
I1229 23:49:02.517305   21341 driver.go:113] choosing "virtualbox" because it was requested
I1229 23:49:02.517311   21341 driver.go:128] not recommending "none" due to priority: 2
I1229 23:49:02.517317   21341 driver.go:123] not recommending "kvm2" due to health: /usr/bin/virsh domcapabilities --virttype kvm failed:
error: failed to get emulator capabilities
error: invalid argument: KVM is not supported by '/usr/bin/qemu-system-x86_64' on this host
I1229 23:49:02.517323   21341 driver.go:146] Picked: virtualbox
I1229 23:49:02.517330   21341 driver.go:147] Alternatives: [none]
✨  Selecting 'virtualbox' driver from user configuration (alternates: [none])
I1229 23:49:02.517376   21341 start.go:297] selected driver: virtualbox
I1229 23:49:02.517380   21341 start.go:585] validating driver "virtualbox" against 
I1229 23:49:02.559894   21341 start.go:591] status for virtualbox: {Installed:true Healthy:true Error: Fix: Doc:}
I1229 23:49:02.560035   21341 start.go:650] selecting image repository for country cn ...
✅  Using image repository registry.cn-hangzhou.aliyuncs.com/google_containers
I1229 23:49:02.712122   21341 downloader.go:60] Not caching ISO, using https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.6.0.iso
I1229 23:49:02.712313   21341 cache_images.go:347] CacheImage: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.17.0 -> /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy_v1.17.0
I1229 23:49:02.712333   21341 cache_images.go:347] CacheImage: registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 -> /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/pause_3.1
I1229 23:49:02.712348   21341 cache_images.go:353] /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy_v1.17.0 exists
I1229 23:49:02.712363   21341 cache_images.go:353] /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/pause_3.1 exists
I1229 23:49:02.712535   21341 cache_images.go:349] CacheImage: registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 -> /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/pause_3.1 completed in 205.603µs
I1229 23:49:02.712553   21341 cache_images.go:89] CacheImage registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 -> /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/pause_3.1 succeeded
I1229 23:49:02.712581   21341 cache_images.go:347] CacheImage: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.17.0 -> /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver_v1.17.0
I1229 23:49:02.712606   21341 cache_images.go:347] CacheImage: registry.cn-hangzhou.aliyuncs.com/google_containers/dashboard:v2.0.0-beta8 -> /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/dashboard_v2.0.0-beta8
I1229 23:49:02.712595   21341 cache_images.go:347] CacheImage: registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.5 -> /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/coredns_1.6.5
I1229 23:49:02.712316   21341 cache_images.go:347] CacheImage: registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-scraper:v1.0.2 -> /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-scraper_v1.0.2
I1229 23:49:02.712660   21341 cache_images.go:353] /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/coredns_1.6.5 exists
I1229 23:49:02.712673   21341 cache_images.go:349] CacheImage: registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.5 -> /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/coredns_1.6.5 completed in 101.201µs
I1229 23:49:02.712685   21341 cache_images.go:89] CacheImage registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.5 -> /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/coredns_1.6.5 succeeded
I1229 23:49:02.712340   21341 cache_images.go:347] CacheImage: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.17.0 -> /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler_v1.17.0
I1229 23:49:02.712709   21341 cache_images.go:353] /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler_v1.17.0 exists
I1229 23:49:02.712716   21341 cache_images.go:349] CacheImage: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.17.0 -> /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler_v1.17.0 completed in 382.736µs
I1229 23:49:02.712736   21341 cache_images.go:89] CacheImage registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.17.0 -> /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler_v1.17.0 succeeded
I1229 23:49:02.712322   21341 cache_images.go:347] CacheImage: registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0 -> /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/etcd_3.4.3-0
I1229 23:49:02.712758   21341 cache_images.go:353] /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/etcd_3.4.3-0 exists
I1229 23:49:02.712356   21341 cache_images.go:347] CacheImage: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.17.0 -> /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager_v1.17.0
I1229 23:49:02.712763   21341 cache_images.go:347] CacheImage: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-addon-manager:v9.0.2 -> /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-addon-manager_v9.0.2
I1229 23:49:02.712800   21341 cache_images.go:353] /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager_v1.17.0 exists
I1229 23:49:02.712811   21341 cache_images.go:349] CacheImage: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.17.0 -> /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager_v1.17.0 completed in 459.866µs
I1229 23:49:02.712815   21341 cache_images.go:353] /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-addon-manager_v9.0.2 exists
I1229 23:49:02.712802   21341 cache_images.go:353] /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-scraper_v1.0.2 exists
I1229 23:49:02.712852   21341 cache_images.go:349] CacheImage: registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-scraper:v1.0.2 -> /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-scraper_v1.0.2 completed in 534.808µs
I1229 23:49:02.712871   21341 cache_images.go:89] CacheImage registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-scraper:v1.0.2 -> /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-scraper_v1.0.2 succeeded
I1229 23:49:02.712826   21341 cache_images.go:89] CacheImage registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.17.0 -> /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager_v1.17.0 succeeded
I1229 23:49:02.712596   21341 cache_images.go:349] CacheImage: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.17.0 -> /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy_v1.17.0 completed in 151.952µs
I1229 23:49:02.713165   21341 cache_images.go:349] CacheImage: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-addon-manager:v9.0.2 -> /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-addon-manager_v9.0.2 completed in 399.146µs
I1229 23:49:02.713184   21341 cache_images.go:89] CacheImage registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.17.0 -> /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy_v1.17.0 succeeded
I1229 23:49:02.713196   21341 cache_images.go:89] CacheImage registry.cn-hangzhou.aliyuncs.com/google_containers/kube-addon-manager:v9.0.2 -> /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-addon-manager_v9.0.2 succeeded
I1229 23:49:02.712620   21341 cache_images.go:353] /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver_v1.17.0 exists
I1229 23:49:02.713216   21341 cache_images.go:349] CacheImage: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.17.0 -> /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver_v1.17.0 completed in 853.982µs
I1229 23:49:02.713230   21341 cache_images.go:89] CacheImage registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.17.0 -> /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver_v1.17.0 succeeded
I1229 23:49:02.712636   21341 cache_images.go:353] /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/dashboard_v2.0.0-beta8 exists
I1229 23:49:02.713265   21341 cache_images.go:349] CacheImage: registry.cn-hangzhou.aliyuncs.com/google_containers/dashboard:v2.0.0-beta8 -> /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/dashboard_v2.0.0-beta8 completed in 654.34µs
I1229 23:49:02.712765   21341 cache_images.go:349] CacheImage: registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0 -> /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/etcd_3.4.3-0 completed in 445.717µs
I1229 23:49:02.713287   21341 cache_images.go:89] CacheImage registry.cn-hangzhou.aliyuncs.com/google_containers/dashboard:v2.0.0-beta8 -> /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/dashboard_v2.0.0-beta8 succeeded
I1229 23:49:02.712358   21341 cache_images.go:347] CacheImage: registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v1.8.1 -> /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner_v1.8.1
I1229 23:49:02.713308   21341 cache_images.go:89] CacheImage registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0 -> /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/etcd_3.4.3-0 succeeded
I1229 23:49:02.712354   21341 profile.go:89] Saving config to /home/wyc/.minikube/profiles/minikube/config.json ...
I1229 23:49:02.713459   21341 cache_images.go:353] /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner_v1.8.1 exists
I1229 23:49:02.713477   21341 cache_images.go:349] CacheImage: registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v1.8.1 -> /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner_v1.8.1 completed in 1.121726ms
I1229 23:49:02.713516   21341 cache_images.go:89] CacheImage registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v1.8.1 -> /home/wyc/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner_v1.8.1 succeeded
I1229 23:49:02.713539   21341 cache_images.go:96] Successfully cached all images.
I1229 23:49:02.713543   21341 lock.go:35] WriteFile acquiring /home/wyc/.minikube/profiles/minikube/config.json: {Name:mkbd64491712af7accb77961d0d4b5df4102cc8c Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I1229 23:49:02.713737   21341 cluster.go:97] Machine does not exist... provisioning new machine
I1229 23:49:02.713747   21341 cluster.go:98] Provisioning machine with config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO:https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.6.0.iso Memory:2000 CPUs:2 DiskSize:20000 VMDriver:virtualbox ContainerRuntime:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[https://registry.docker-cn.com] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false Downloader:{} DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true KubernetesConfig:{KubernetesVersion:v1.17.0 NodeIP: NodePort:8443 NodeName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository:registry.cn-hangzhou.aliyuncs.com/google_containers ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false} HostOnlyNicType:virtio NatNicType:virtio}
🔥  Creating virtualbox VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...
I1229 23:49:02.714065   21341 main.go:110] libmachine: Reading certificate data from /home/wyc/.minikube/certs/ca.pem
I1229 23:49:02.714089   21341 main.go:110] libmachine: Decoding PEM data...
I1229 23:49:02.714106   21341 main.go:110] libmachine: Parsing certificate...
I1229 23:49:02.714179   21341 main.go:110] libmachine: Reading certificate data from /home/wyc/.minikube/certs/cert.pem
I1229 23:49:02.714198   21341 main.go:110] libmachine: Decoding PEM data...
I1229 23:49:02.714209   21341 main.go:110] libmachine: Parsing certificate...
I1229 23:49:02.714265   21341 main.go:110] libmachine: COMMAND: /usr/bin/VBoxManage --version
I1229 23:49:02.738182   21341 main.go:110] libmachine: STDOUT:
{
6.0.6_Ubuntur129722
}
I1229 23:49:02.738199   21341 main.go:110] libmachine: STDERR:
{
}
I1229 23:49:02.738220   21341 main.go:110] libmachine: COMMAND: /usr/bin/VBoxManage list hostonlyifs
I1229 23:49:02.793595   21341 main.go:110] libmachine: STDOUT:
{
Name:            vboxnet2
GUID:            786f6276-656e-4274-8000-0a0027000002
DHCP:            Disabled
IPAddress:       192.168.33.1
NetworkMask:     255.255.255.0
IPV6Address:     
IPV6NetworkMaskPrefixLength: 0
HardwareAddress: 0a:00:27:00:00:02
MediumType:      Ethernet
Wireless:        No
Status:          Down
VBoxNetworkName: HostInterfaceNetworking-vboxnet2

Name: vboxnet1
GUID: 786f6276-656e-4174-8000-0a0027000001
DHCP: Disabled
IPAddress: 172.28.128.1
NetworkMask: 255.255.255.0
IPV6Address:
IPV6NetworkMaskPrefixLength: 0
HardwareAddress: 0a:00:27:00:00:01
MediumType: Ethernet
Wireless: No
Status: Down
VBoxNetworkName: HostInterfaceNetworking-vboxnet1

Name: vboxnet3
GUID: 786f6276-656e-4374-8000-0a0027000003
DHCP: Disabled
IPAddress: 192.168.59.1
NetworkMask: 255.255.255.0
IPV6Address: fde4:8dba:82e1::1
IPV6NetworkMaskPrefixLength: 64
HardwareAddress: 0a:00:27:00:00:03
MediumType: Ethernet
Wireless: No
Status: Down
VBoxNetworkName: HostInterfaceNetworking-vboxnet3

Name: vboxnet0
GUID: 786f6276-656e-4074-8000-0a0027000000
DHCP: Disabled
IPAddress: 192.168.99.1
NetworkMask: 255.255.255.0
IPV6Address: fe80::800:27ff:fe00:0
IPV6NetworkMaskPrefixLength: 64
HardwareAddress: 0a:00:27:00:00:00
MediumType: Ethernet
Wireless: No
Status: Up
VBoxNetworkName: HostInterfaceNetworking-vboxnet0

}
I1229 23:49:02.793670 21341 main.go:110] libmachine: STDERR:
{
}
I1229 23:49:02.794185 21341 main.go:110] libmachine: Downloading /home/wyc/.minikube/cache/boot2docker.iso from file:///home/wyc/.minikube/cache/iso/minikube-v1.6.0.iso...
I1229 23:49:02.862001 21341 main.go:110] libmachine: Creating VirtualBox VM...
I1229 23:49:02.862022 21341 main.go:110] libmachine: Creating SSH key...
I1229 23:49:03.130812 21341 main.go:110] libmachine: Creating disk image...
I1229 23:49:03.130851 21341 main.go:110] libmachine: Creating 20000 MB hard disk image...
I1229 23:49:03.130865 21341 main.go:110] libmachine: Writing magic tar header
I1229 23:49:03.130908 21341 main.go:110] libmachine: Writing SSH key tar header
I1229 23:49:03.130943 21341 main.go:110] libmachine: Calling inner createDiskImage
I1229 23:49:03.130970 21341 main.go:110] libmachine: /usr/bin/VBoxManage convertfromraw stdin /home/wyc/.minikube/machines/minikube/disk.vmdk 20971520000 --format VMDK
I1229 23:49:03.130999 21341 main.go:110] libmachine: Starting command
I1229 23:49:03.131526 21341 main.go:110] libmachine: Copying to stdin
I1229 23:49:03.131592 21341 main.go:110] libmachine: Filling zeroes
I1229 23:49:07.030808 21341 main.go:110] libmachine: Closing STDIN
I1229 23:49:07.030832 21341 main.go:110] libmachine: Waiting on cmd
I1229 23:49:07.031866 21341 main.go:110] libmachine: COMMAND: /usr/bin/VBoxManage createvm --basefolder /home/wyc/.minikube/machines/minikube --name minikube --register
I1229 23:49:07.069747 21341 main.go:110] libmachine: STDOUT:
{
Virtual machine 'minikube' is created and registered.
UUID: 8ae3a993-ee93-467e-beee-9e6f06161ac7
Settings file: '/home/wyc/.minikube/machines/minikube/minikube/minikube.vbox'
}
I1229 23:49:07.069772 21341 main.go:110] libmachine: STDERR:
{
}
I1229 23:49:07.069782 21341 main.go:110] libmachine: VM CPUS: 2
I1229 23:49:07.069792 21341 main.go:110] libmachine: VM Memory: 2000
I1229 23:49:07.069827 21341 main.go:110] libmachine: COMMAND: /usr/bin/VBoxManage modifyvm minikube --firmware bios --bioslogofadein off --bioslogofadeout off --bioslogodisplaytime 0 --biosbootmenu disabled --ostype Linux26_64 --cpus 2 --memory 2000 --acpi on --ioapic on --rtcuseutc on --natdnshostresolver1 on --natdnsproxy1 off --cpuhotplug off --pae on --hpet on --hwvirtex on --nestedpaging on --largepages on --vtxvpid on --accelerate3d off --boot1 dvd
I1229 23:49:07.114149 21341 main.go:110] libmachine: STDOUT:
{
}
I1229 23:49:07.114181 21341 main.go:110] libmachine: STDERR:
{
}
I1229 23:49:07.114206 21341 main.go:110] libmachine: COMMAND: /usr/bin/VBoxManage modifyvm minikube --nic1 nat --nictype1 virtio --cableconnected1 on
I1229 23:49:07.158171 21341 main.go:110] libmachine: STDOUT:
{
}
I1229 23:49:07.158200 21341 main.go:110] libmachine: STDERR:
{
}
I1229 23:49:07.158224 21341 main.go:110] libmachine: COMMAND: /usr/bin/VBoxManage storagectl minikube --name SATA --add sata --hostiocache on
I1229 23:49:07.192786 21341 main.go:110] libmachine: STDOUT:
{
}
I1229 23:49:07.192810 21341 main.go:110] libmachine: STDERR:
{
}
I1229 23:49:07.192834 21341 main.go:110] libmachine: COMMAND: /usr/bin/VBoxManage storageattach minikube --storagectl SATA --port 0 --device 0 --type dvddrive --medium /home/wyc/.minikube/machines/minikube/boot2docker.iso
I1229 23:49:07.234633 21341 main.go:110] libmachine: STDOUT:
{
}
I1229 23:49:07.234656 21341 main.go:110] libmachine: STDERR:
{
}
I1229 23:49:07.234685 21341 main.go:110] libmachine: COMMAND: /usr/bin/VBoxManage storageattach minikube --storagectl SATA --port 1 --device 0 --type hdd --medium /home/wyc/.minikube/machines/minikube/disk.vmdk
I1229 23:49:07.278659 21341 main.go:110] libmachine: STDOUT:
{
}
I1229 23:49:07.278686 21341 main.go:110] libmachine: STDERR:
{
}
I1229 23:49:07.278709 21341 main.go:110] libmachine: COMMAND: /usr/bin/VBoxManage guestproperty set minikube /VirtualBox/GuestAdd/SharedFolders/MountPrefix /
I1229 23:49:07.319667 21341 main.go:110] libmachine: STDOUT:
{
}
I1229 23:49:07.319696 21341 main.go:110] libmachine: STDERR:
{
}
I1229 23:49:07.319716 21341 main.go:110] libmachine: COMMAND: /usr/bin/VBoxManage guestproperty set minikube /VirtualBox/GuestAdd/SharedFolders/MountDir /
I1229 23:49:07.356630 21341 main.go:110] libmachine: STDOUT:
{
}
I1229 23:49:07.356653 21341 main.go:110] libmachine: STDERR:
{
}
I1229 23:49:07.356665 21341 main.go:110] libmachine: setting up shareDir '/home' -> 'hosthome'
I1229 23:49:07.356687 21341 main.go:110] libmachine: COMMAND: /usr/bin/VBoxManage sharedfolder add minikube --name hosthome --hostpath /home --automount
I1229 23:49:07.397456 21341 main.go:110] libmachine: STDOUT:
{
}
I1229 23:49:07.397476 21341 main.go:110] libmachine: STDERR:
{
}
I1229 23:49:07.397493 21341 main.go:110] libmachine: COMMAND: /usr/bin/VBoxManage setextradata minikube VBoxInternal2/SharedFoldersEnableSymlinksCreate/hosthome 1
I1229 23:49:07.438973 21341 main.go:110] libmachine: STDOUT:
{
}
I1229 23:49:07.438999 21341 main.go:110] libmachine: STDERR:
{
}
I1229 23:49:07.439012 21341 main.go:110] libmachine: Starting the VM...
I1229 23:49:07.439028 21341 main.go:110] libmachine: COMMAND: /usr/bin/VBoxManage showvminfo minikube --machinereadable
I1229 23:49:07.498480 21341 main.go:110] libmachine: STDOUT:
{
name="minikube"
groups="/"
ostype="Linux 2.6 / 3.x / 4.x (64-bit)"
UUID="8ae3a993-ee93-467e-beee-9e6f06161ac7"
CfgFile="/home/wyc/.minikube/machines/minikube/minikube/minikube.vbox"
SnapFldr="/home/wyc/.minikube/machines/minikube/minikube/Snapshots"
LogFldr="/home/wyc/.minikube/machines/minikube/minikube/Logs"
hardwareuuid="8ae3a993-ee93-467e-beee-9e6f06161ac7"
memory=2000
pagefusion="off"
vram=8
cpuexecutioncap=100
hpet="on"
cpu-profile="host"
chipset="piix3"
firmware="BIOS"
cpus=2
pae="on"
longmode="on"
triplefaultreset="off"
apic="on"
x2apic="off"
nested-hw-virt="off"
cpuid-portability-level=0
bootmenu="disabled"
boot1="dvd"
boot2="dvd"
boot3="disk"
boot4="none"
acpi="on"
ioapic="on"
biosapic="apic"
biossystemtimeoffset=0
rtcuseutc="on"
hwvirtex="on"
nestedpaging="on"
largepages="on"
vtxvpid="on"
vtxux="on"
paravirtprovider="default"
effparavirtprovider="kvm"
VMState="poweroff"
VMStateChangeTime="2019-12-29T15:49:07.055000000"
monitorcount=1
accelerate3d="off"
accelerate2dvideo="off"
teleporterenabled="off"
teleporterport=0
teleporteraddress=""
teleporterpassword=""
tracing-enabled="off"
tracing-allow-vm-access="off"
tracing-config=""
autostart-enabled="off"
autostart-delay=0
defaultfrontend=""
storagecontrollername0="SATA"
storagecontrollertype0="IntelAhci"
storagecontrollerinstance0="0"
storagecontrollermaxportcount0="30"
storagecontrollerportcount0="30"
storagecontrollerbootable0="on"
"SATA-0-0"="/home/wyc/.minikube/machines/minikube/boot2docker.iso"
"SATA-ImageUUID-0-0"="00a314ca-4a39-430c-a461-1c67eb56aee9"
"SATA-tempeject"="off"
"SATA-IsEjected"="off"
"SATA-1-0"="/home/wyc/.minikube/machines/minikube/disk.vmdk"
"SATA-ImageUUID-1-0"="bd524d27-6195-4c73-811d-4f3296ff81b5"
"SATA-2-0"="none"
"SATA-3-0"="none"
"SATA-4-0"="none"
"SATA-5-0"="none"
"SATA-6-0"="none"
"SATA-7-0"="none"
"SATA-8-0"="none"
"SATA-9-0"="none"
"SATA-10-0"="none"
"SATA-11-0"="none"
"SATA-12-0"="none"
"SATA-13-0"="none"
"SATA-14-0"="none"
"SATA-15-0"="none"
"SATA-16-0"="none"
"SATA-17-0"="none"
"SATA-18-0"="none"
"SATA-19-0"="none"
"SATA-20-0"="none"
"SATA-21-0"="none"
"SATA-22-0"="none"
"SATA-23-0"="none"
"SATA-24-0"="none"
"SATA-25-0"="none"
"SATA-26-0"="none"
"SATA-27-0"="none"
"SATA-28-0"="none"
"SATA-29-0"="none"
natnet1="nat"
macaddress1="080027C78B54"
cableconnected1="on"
nic1="nat"
nictype1="virtio"
nicspeed1="0"
mtu="0"
sockSnd="64"
sockRcv="64"
tcpWndSnd="64"
tcpWndRcv="64"
nic2="none"
nic3="none"
nic4="none"
nic5="none"
nic6="none"
nic7="none"
nic8="none"
hidpointing="ps2mouse"
hidkeyboard="ps2kbd"
uart1="off"
uart2="off"
uart3="off"
uart4="off"
lpt1="off"
lpt2="off"
audio="pulse"
audio_in="off"
audio_out="off"
clipboard="disabled"
draganddrop="disabled"
vrde="off"
usb="off"
ehci="off"
xhci="off"
SharedFolderNameMachineMapping1="hosthome"
SharedFolderPathMachineMapping1="/home"
videocap="off"
videocapaudio="off"
capturescreens=""
capturefilename="/home/wyc/.minikube/machines/minikube/minikube/minikube.webm"
captureres="1024x768"
capturevideorate=512
capturevideofps=25
captureopts=""
GuestMemoryBalloon=0
}
I1229 23:49:07.498575 21341 main.go:110] libmachine: STDERR:
{
}
I1229 23:49:07.498632 21341 main.go:110] libmachine: Check network to re-create if needed...
I1229 23:49:07.498649 21341 main.go:110] libmachine: COMMAND: /usr/bin/VBoxManage list hostonlyifs
I1229 23:49:07.541696 21341 main.go:110] libmachine: STDOUT:
{
Name: vboxnet2
GUID: 786f6276-656e-4274-8000-0a0027000002
DHCP: Disabled
IPAddress: 192.168.33.1
NetworkMask: 255.255.255.0
IPV6Address:
IPV6NetworkMaskPrefixLength: 0
HardwareAddress: 0a:00:27:00:00:02
MediumType: Ethernet
Wireless: No
Status: Down
VBoxNetworkName: HostInterfaceNetworking-vboxnet2

Name: vboxnet1
GUID: 786f6276-656e-4174-8000-0a0027000001
DHCP: Disabled
IPAddress: 172.28.128.1
NetworkMask: 255.255.255.0
IPV6Address:
IPV6NetworkMaskPrefixLength: 0
HardwareAddress: 0a:00:27:00:00:01
MediumType: Ethernet
Wireless: No
Status: Down
VBoxNetworkName: HostInterfaceNetworking-vboxnet1

Name: vboxnet3
GUID: 786f6276-656e-4374-8000-0a0027000003
DHCP: Disabled
IPAddress: 192.168.59.1
NetworkMask: 255.255.255.0
IPV6Address: fde4:8dba:82e1::1
IPV6NetworkMaskPrefixLength: 64
HardwareAddress: 0a:00:27:00:00:03
MediumType: Ethernet
Wireless: No
Status: Down
VBoxNetworkName: HostInterfaceNetworking-vboxnet3

Name: vboxnet0
GUID: 786f6276-656e-4074-8000-0a0027000000
DHCP: Disabled
IPAddress: 192.168.99.1
NetworkMask: 255.255.255.0
IPV6Address: fe80::800:27ff:fe00:0
IPV6NetworkMaskPrefixLength: 64
HardwareAddress: 0a:00:27:00:00:00
MediumType: Ethernet
Wireless: No
Status: Up
VBoxNetworkName: HostInterfaceNetworking-vboxnet0

}
I1229 23:49:07.541746 21341 main.go:110] libmachine: STDERR:
{
}
I1229 23:49:07.542370 21341 main.go:110] libmachine: Searching for hostonly interface for IPv4: 192.168.99.1 and Mask: ffffff00
I1229 23:49:07.542381 21341 main.go:110] libmachine: Found: vboxnet0
I1229 23:49:07.542394 21341 main.go:110] libmachine: COMMAND: /usr/bin/VBoxManage list dhcpservers
I1229 23:49:07.580335 21341 main.go:110] libmachine: STDOUT:
{
NetworkName: HostInterfaceNetworking-vboxnet0
IP: 192.168.99.11
NetworkMask: 255.255.255.0
lowerIPAddress: 192.168.99.100
upperIPAddress: 192.168.99.254
Enabled: Yes
Global options:
1:255.255.255.0

NetworkName: HostInterfaceNetworking-vboxnet1
IP: 172.28.128.2
NetworkMask: 255.255.255.0
lowerIPAddress: 172.28.128.3
upperIPAddress: 172.28.128.254
Enabled: Yes
Global options:
1:255.255.255.0

}
I1229 23:49:07.580383 21341 main.go:110] libmachine: STDERR:
{
}
I1229 23:49:07.580467 21341 main.go:110] libmachine: Removing orphan DHCP servers...
I1229 23:49:07.580487 21341 main.go:110] libmachine: COMMAND: /usr/bin/VBoxManage list hostonlyifs
I1229 23:49:07.634817 21341 main.go:110] libmachine: STDOUT:
{
Name: vboxnet2
GUID: 786f6276-656e-4274-8000-0a0027000002
DHCP: Disabled
IPAddress: 192.168.33.1
NetworkMask: 255.255.255.0
IPV6Address:
IPV6NetworkMaskPrefixLength: 0
HardwareAddress: 0a:00:27:00:00:02
MediumType: Ethernet
Wireless: No
Status: Down
VBoxNetworkName: HostInterfaceNetworking-vboxnet2

Name: vboxnet1
GUID: 786f6276-656e-4174-8000-0a0027000001
DHCP: Disabled
IPAddress: 172.28.128.1
NetworkMask: 255.255.255.0
IPV6Address:
IPV6NetworkMaskPrefixLength: 0
HardwareAddress: 0a:00:27:00:00:01
MediumType: Ethernet
Wireless: No
Status: Down
VBoxNetworkName: HostInterfaceNetworking-vboxnet1

Name: vboxnet3
GUID: 786f6276-656e-4374-8000-0a0027000003
DHCP: Disabled
IPAddress: 192.168.59.1
NetworkMask: 255.255.255.0
IPV6Address: fde4:8dba:82e1::1
IPV6NetworkMaskPrefixLength: 64
HardwareAddress: 0a:00:27:00:00:03
MediumType: Ethernet
Wireless: No
Status: Down
VBoxNetworkName: HostInterfaceNetworking-vboxnet3

Name: vboxnet0
GUID: 786f6276-656e-4074-8000-0a0027000000
DHCP: Disabled
IPAddress: 192.168.99.1
NetworkMask: 255.255.255.0
IPV6Address: fe80::800:27ff:fe00:0
IPV6NetworkMaskPrefixLength: 64
HardwareAddress: 0a:00:27:00:00:00
MediumType: Ethernet
Wireless: No
Status: Up
VBoxNetworkName: HostInterfaceNetworking-vboxnet0

}
I1229 23:49:07.634915 21341 main.go:110] libmachine: STDERR:
{
}
I1229 23:49:07.635094 21341 main.go:110] libmachine: Adding/Modifying DHCP server "192.168.99.13" with address range "192.168.99.100" - "192.168.99.254"...
I1229 23:49:07.635114 21341 main.go:110] libmachine: COMMAND: /usr/bin/VBoxManage list dhcpservers
I1229 23:49:07.672508 21341 main.go:110] libmachine: STDOUT:
{
NetworkName: HostInterfaceNetworking-vboxnet0
IP: 192.168.99.11
NetworkMask: 255.255.255.0
lowerIPAddress: 192.168.99.100
upperIPAddress: 192.168.99.254
Enabled: Yes
Global options:
1:255.255.255.0

NetworkName: HostInterfaceNetworking-vboxnet1
IP: 172.28.128.2
NetworkMask: 255.255.255.0
lowerIPAddress: 172.28.128.3
upperIPAddress: 172.28.128.254
Enabled: Yes
Global options:
1:255.255.255.0

}
I1229 23:49:07.672544 21341 main.go:110] libmachine: STDERR:
{
}
I1229 23:49:07.672626 21341 main.go:110] libmachine: COMMAND: /usr/bin/VBoxManage dhcpserver modify --netname HostInterfaceNetworking-vboxnet0 --ip 192.168.99.13 --netmask 255.255.255.0 --lowerip 192.168.99.100 --upperip 192.168.99.254 --enable
I1229 23:49:07.713458 21341 main.go:110] libmachine: STDOUT:
{
}
I1229 23:49:07.713482 21341 main.go:110] libmachine: STDERR:
{
}
I1229 23:49:07.713517 21341 main.go:110] libmachine: COMMAND: /usr/bin/VBoxManage modifyvm minikube --nic2 hostonly --nictype2 virtio --nicpromisc2 deny --hostonlyadapter2 vboxnet0 --cableconnected2 on
I1229 23:49:07.755550 21341 main.go:110] libmachine: STDOUT:
{
}
I1229 23:49:07.755569 21341 main.go:110] libmachine: STDERR:
{
}
I1229 23:49:07.755633 21341 main.go:110] libmachine: COMMAND: /usr/bin/VBoxManage modifyvm minikube --natpf1 delete ssh
I1229 23:49:07.798920 21341 main.go:110] libmachine: STDOUT:
{
}
I1229 23:49:07.798957 21341 main.go:110] libmachine: STDERR:
{
VBoxManage: error: Code NS_ERROR_INVALID_ARG (0x80070057) - Invalid argument value (extended info not available)
VBoxManage: error: Context: "RemoveRedirect(Bstr(ValueUnion.psz).raw())" at line 1866 of file VBoxManageModifyVM.cpp
}
I1229 23:49:07.798990 21341 main.go:110] libmachine: COMMAND: /usr/bin/VBoxManage modifyvm minikube --natpf1 ssh,tcp,127.0.0.1,40755,,22
I1229 23:49:07.839930 21341 main.go:110] libmachine: STDOUT:
{
}
I1229 23:49:07.839952 21341 main.go:110] libmachine: STDERR:
{
}
I1229 23:49:07.839968 21341 main.go:110] libmachine: COMMAND: /usr/bin/VBoxManage startvm minikube --type headless
I1229 23:49:08.035882 21341 main.go:110] libmachine: STDOUT:
{
Waiting for VM "minikube" to power on...
VM "minikube" has been successfully started.
}
I1229 23:49:08.035905 21341 main.go:110] libmachine: STDERR:
{
}
I1229 23:49:08.035922 21341 main.go:110] libmachine: Checking vm logs: /home/wyc/.minikube/machines/minikube/minikube/Logs/VBox.log
I1229 23:49:08.036306 21341 main.go:110] libmachine: Waiting for an IP...
I1229 23:49:08.036324 21341 main.go:110] libmachine: Getting to WaitForSSH function...
I1229 23:49:08.036373 21341 main.go:110] libmachine: Using SSH client type: native
I1229 23:49:08.036552 21341 main.go:110] libmachine: &{{{ 0 [] [] []} docker [0x7bbd30] 0x7bbd00 [] 0s} 127.0.0.1 40755 }
I1229 23:49:08.036568 21341 main.go:110] libmachine: About to run SSH command:
exit 0
I1229 23:50:23.119550 21341 main.go:110] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41538->127.0.0.1:40755: read: connection reset by peer
I1229 23:51:41.193917 21341 main.go:110] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41542->127.0.0.1:40755: read: connection reset by peer
I1229 23:52:59.267413 21341 main.go:110] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41550->127.0.0.1:40755: read: connection reset by peer
I1229 23:54:17.831805 21341 main.go:110] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41600->127.0.0.1:40755: read: connection reset by peer
I1229 23:55:35.902879 21341 main.go:110] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41622->127.0.0.1:40755: read: connection reset by peer
....

minikube logs output:


$ ./out/minikube-linux-amd64 logs

💣 command runner
❌ Error: [SSH_AUTH_FAILURE] getting ssh client for bootstrapper: Error dialing tcp via ssh client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
💡 Suggestion: Your host is failing to route packets to the minikube VM. If you have VPN software, try turning it off or configuring it so that it does not re-route traffic to the VM IP. If not, check your VM environment routing options.
📘 Documentation: https://minikube.sigs.k8s.io/docs/reference/networking/vpn/
⁉️ Related issues:
#3930

😿 If the above advice does not help, please let us know:
👉 https://github.com/kubernetes/minikube/issues/new/choose

This time the previous error is gone and nothing obvious complains, but the VM still fails to connect. Viewing the VM boot console:

[Screenshot from 2019-12-30 00-01-16: VirtualBox console of the minikube VM]

I found that the minikube VM is stuck during boot.

@ateijelo
Author

@yuchengwu for both KVM and VirtualBox, the standard VM console stops there, even when it's successful. The actual output goes to the serial port. For VirtualBox, you need to enable the serial port in the settings:

[screenshot: VirtualBox VM settings, Serial Ports tab]

and then connect to that port with telnet localhost 8888 to interact with it.
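If you prefer the command line, the equivalent should be something like this (the exact values are my guess at the settings in the screenshot; run it while the VM is powered off):

VBoxManage modifyvm minikube --uart1 0x3F8 4 --uartmode1 tcpserver 8888
VBoxManage startvm minikube --type headless
telnet localhost 8888             # attach to the guest serial console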

@yuchengwu

@ateijelo, using this setting I got this:


$ telnet localhost 8888
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.

Welcome to Buildroot 2019.02.7!

[ OK ] Created slice User and Session Slice.
[FAILED] Failed to start Slices.
See 'systemctl status slices.target' for details.
[FAILED] Failed to listen on Journal Audit Socket.
See 'systemctl status systemd-journald-audit.socket' for details.
[FAILED] Failed to listen on Network Service Netlink Socket.
See 'systemctl status systemd-networkd.socket' for details.
[FAILED] Failed to listen on Journal Socket.
See 'systemctl status systemd-journald.socket' for details.
[DEPEND] Dependency failed for Journal Service.
[DEPEND] Dependency failed for Flus…Journal to Persistent Storage.
[FAILED] Failed to mount Huge Pages File System.
See 'systemctl status dev-hugepages.mount' for details.
[FAILED] Failed to start Remount Root and Kernel File Systems.
See 'systemctl status systemd-remount-fs.service' for details.
[FAILED] Failed to listen on Journal Socket (/dev/log).
See 'systemctl status systemd-journald-dev-log.socket' for details.
[FAILED] Failed to start system-getty.slice.
See 'systemctl status system-getty.slice' for details.
[DEPEND] Dependency failed for Getty on tty1.
[FAILED] Failed to listen on udev Kernel Socket.
See 'systemctl status systemd-udevd-kernel.socket' for details.
[FAILED] Failed to start NFS client services.
See 'systemctl status nfs-client.target' for details.
[FAILED] Failed to start Swap.
See 'systemctl status swap.target' for details.
[FAILED] Failed to mount Temporary Directory (/tmp).
See 'systemctl status tmp.mount' for details.
[DEPEND] Dependency failed for Network Time Synchronization.
[DEPEND] Dependency failed for Network Name Resolution.
[FAILED] Failed to start Host and Network Name Lookups.
See 'systemctl status nss-lookup.target' for details.
[DEPEND] Dependency failed for NFS … monitor for NFSv2/3 locking..
[FAILED] Failed to start System Time Synchronized.
See 'systemctl status time-sync.target' for details.
[FAILED] Failed to start Create lis… nodes for the current kernel.
See 'systemctl status kmod-static-nodes.service' for details.
[FAILED] Failed to mount POSIX Message Queue File System.
See 'systemctl status dev-mqueue.mount' for details.
[FAILED] Failed to mount FUSE Control File System.
See 'systemctl status sys-fs-fuse-connections.mount' for details.
[FAILED] Failed to start Forward Pa…uests to Wall Directory Watch.
See 'systemctl status systemd-ask-password-wall.path' for details.
[FAILED] Failed to mount Kernel Debug File System.
See 'systemctl status sys-kernel-debug.mount' for details.
[FAILED] Failed to listen on initctl Compatibility Named Pipe.
See 'systemctl status systemd-initctl.socket' for details.
[FAILED] Failed to start Apply Kernel Variables.
See 'systemctl status systemd-sysctl.service' for details.
[FAILED] Failed to mount NFSD configuration filesystem.
See 'systemctl status proc-fs-nfsd.mount' for details.
[DEPEND] Dependency failed for NFS Mount Daemon.
[DEPEND] Dependency failed for NFS server and services.
[FAILED] Failed to start Remote File Systems (Pre).
See 'systemctl status remote-fs-pre.target' for details.
[FAILED] Failed to start Dispatch P…ts to Console Directory Watch.
See 'systemctl status systemd-ask-password-console.path' for details.
[FAILED] Failed to listen on udev Control Socket.
See 'systemctl status systemd-udevd-control.socket' for details.
[FAILED] Failed to start udev Coldplug all Devices.
See 'systemctl status systemd-udev-trigger.service' for details.
[FAILED] Failed to start udev Wait …omplete Device Initialization.
See 'systemctl status systemd-udev-settle.service' for details.
[DEPEND] Dependency failed for minikube automount.
[FAILED] Failed to start Paths.
See 'systemctl status paths.target' for details.
[FAILED] Failed to start system-serial\x2dgetty.slice.
See 'systemctl status "system-serial\x2dgetty.slice"' for details.
[DEPEND] Dependency failed for Serial Getty on ttyS0.
[FAILED] Failed to start Login Prompts.
See 'systemctl status getty.target' for details.
[FAILED] Failed to start Create Static Device Nodes in /dev.
See 'systemctl status systemd-tmpfiles-setup-dev.service' for details.
[FAILED] Failed to start Local File Systems (Pre).
See 'systemctl status local-fs-pre.target' for details.
[FAILED] Failed to start Local File Systems.
See 'systemctl status local-fs.target' for details.
[FAILED] Failed to start Preprocess NFS configuration.
See 'systemctl status nfs-config.service' for details.
[FAILED] Failed to start udev Kernel Device Manager.
See 'systemctl status systemd-udevd.service' for details.
[FAILED] Failed to start Network Service.
See 'systemctl status systemd-networkd.service' for details.
[FAILED] Failed to start Network.
See 'systemctl status network.target' for details.
[DEPEND] Dependency failed for Notify NFS peers of a restart.
[FAILED] Failed to start Remote File Systems.
See 'systemctl status remote-fs.target' for details.
[FAILED] Failed to start Containers.
See 'systemctl status machines.target' for details.
[FAILED] Failed to start RPC Port Mapper.
See 'systemctl status rpcbind.target' for details.
[FAILED] Failed to start Create Volatile Files and Directories.
See 'systemctl status systemd-tmpfiles-setup.service' for details.
[FAILED] Failed to start Update UTMP about System Boot/Shutdown.
See 'systemctl status systemd-update-utmp.service' for details.
[DEPEND] Dependency failed for Upda…about System Runlevel Changes.
[FAILED] Failed to start Rebuild Journal Catalog.
See 'systemctl status systemd-journal-catalog-update.service' for details.
[FAILED] Failed to start Update is Completed.
See 'systemctl status systemd-update-done.service' for details.
[FAILED] Failed to start System Initialization.
See 'systemctl status sysinit.target' for details.
[DEPEND] Dependency failed for RPCbind Server Activation Socket.
[DEPEND] Dependency failed for RPC bind service.
[DEPEND] Dependency failed for OpenSSH server daemon.
[DEPEND] Dependency failed for Hyper-V FCOPY Daemon.
[DEPEND] Dependency failed for Hyper-V VSS Daemon.
[DEPEND] Dependency failed for Basic System.
[DEPEND] Dependency failed for Multi-User System.
[DEPEND] Dependency failed for Login Service.
[DEPEND] Dependency failed for D-Bus System Message Bus Socket.
[DEPEND] Dependency failed for D-Bus System Message Bus.
[DEPEND] Dependency failed for vmtoolsd for openvmtools.
[DEPEND] Dependency failed for Hyper-V Key Value Pair Daemon.
[DEPEND] Dependency failed for VirtualBox Guest Service.
[DEPEND] Dependency failed for Dail…anup of Temporary Directories.
[FAILED] Failed to start Timers.
See 'systemctl status timers.target' for details.
[FAILED] Failed to start Sockets.
See 'systemctl status sockets.target' for details.

@ateijelo
Author

Yup, it's the same issue that KVM has, but I see no way of changing the virtual CPU when using VirtualBox. All in all, this doesn't look like a minikube issue, but rather the systemd issue with Ryzen CPUs that @afbjorklund pointed out. I'm going to try the other boot2docker.iso file now and will report my findings.

@ateijelo
Author

I just tried the boot2docker.iso linked above, directly with QEMU, and yes, it seems to work fine with -cpu host. I ran this (with the original, "unhacked" qemu):

/usr/bin/qemu-system-x86_64 \
    -machine pc-i440fx-4.2,accel=kvm,usb=off,dump-guest-core=off \
    -cpu host \
    -m 1908 \
    -smp 8 \
    -boot menu=off,strict=off \
    -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
    -device lsi,id=scsi0,bus=pci.0,addr=0x4 \
    -drive file=$HOME/Downloads/boot2docker.iso,format=raw,if=none,id=drive-scsi0-0-2,readonly=on \
    -device scsi-cd,bus=scsi0.0,scsi-id=2,device_id=drive-scsi0-0-2,drive=drive-scsi0-0-2,id=scsi0-0-2,bootindex=1 \
    -drive file=$HOME/.minikube/machines/minikube/minikube.rawdisk,format=raw,if=none,id=drive-virtio-disk0,aio=threads \
    -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2

and got this:

     ┌----------------------------------------------------┐
     │                    Boot2Docker                     │
     ├----------------------------------------------------┤
     │ Boot2Docker                                        │
     │                                                    │
     │                                                    │
     │                                                    │
     │                                                    │
     └----------------------------------------------------┘
                Press [Tab] to edit options
               Automatic boot in 0 seconds...
Loading /boot/vmlinuz... ok
Loading /boot/initrd.img...ok

   ( '>')
  /) TC (\   Core is distributed with ABSOLUTELY NO WARRANTY.
 (/-_--_-\)           www.tinycorelinux.net

docker@boot2docker:~$ docker images
REPOSITORY                                TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                     v1.17.0             7d54289267dc        3 weeks ago         116MB
k8s.gcr.io/kube-controller-manager        v1.17.0             5eb3b7486872        3 weeks ago         161MB
k8s.gcr.io/kube-apiserver                 v1.17.0             0cae8d5cc64c        3 weeks ago         171MB
k8s.gcr.io/kube-scheduler                 v1.17.0             78c190f736b1        3 weeks ago         94.4MB
kubernetesui/dashboard                    v2.0.0-beta8        eb51a3597525        3 weeks ago         90.8MB
k8s.gcr.io/coredns                        1.6.5               70f311871ae1        7 weeks ago         41.6MB
k8s.gcr.io/etcd                           3.4.3-0             303ce5db0e90        2 months ago        288MB
kubernetesui/metrics-scraper              v1.0.2              3b08661dc379        2 months ago        40.1MB
k8s.gcr.io/kube-addon-manager             v9.0.2              bd12a212f9dc        5 months ago        83.1MB
k8s.gcr.io/pause                          3.1                 da86e6ba6ca1        2 years ago         742kB
gcr.io/k8s-minikube/storage-provisioner   v1.8.1              4689081edb10        2 years ago         80.8MB

docker@boot2docker:~$ cat /proc/cpuinfo
processor	: 0
vendor_id	: AuthenticAMD
cpu family	: 23
model		: 113
model name	: AMD Ryzen 7 3800X 8-Core Processor
...

So, that looks like more evidence that this is the aforementioned systemd issue.

@afbjorklund
Collaborator

Okay, thanks for confirming. Patching systemd will be problematic; I would prefer not to have to fork Buildroot, and we try to stick with the long-term support version of Linux (not sure if systemd has such a concept).

The patch itself is small enough: systemd/systemd@1c53d4a

And there is already quite a number of patches, so maybe it could be added upstream? I guess we should build an ISO with the fix and confirm that it actually resolves the issue before doing so.

https://github.com/buildroot/buildroot/tree/2019.02.x/package/systemd
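
For reference, Buildroot picks up patch files placed next to a package's .mk file, so carrying the fix could look roughly like the sketch below. The patch file name and number prefix are made up here, and whether the upstream commit applies cleanly to the systemd version shipped in Buildroot 2019.02 is an assumption:

# inside the Buildroot tree: fetch the upstream commit as a patch and let
# Buildroot apply it when the systemd package is (re)built
$ curl -L https://github.com/systemd/systemd/commit/1c53d4a.patch \
    -o package/systemd/0005-fix-for-amd-cpus.patch
$ make systemd-dirclean && make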

@afbjorklund afbjorklund removed the co/kvm2-driver KVM2 driver related issues label Dec 30, 2019
@afbjorklund
Collaborator

instead of running the command directly on the host machine, I created a VM and executed minikube start --vm-driver=none inside it; this time everything went well

This is unfortunately the workaround if you are unable to patch QEMU (e.g. when not running KVM), until there is an updated ISO available. Using the previous major version of the ISO (with the older systemd) might also work, but it has side effects and is not guaranteed to work with new minikube releases.
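
If someone wants to try that, the boot image can be pinned with --iso-url. A sketch, assuming the older image is still hosted at the usual location; the exact version worth pinning is an assumption, so check the release listing first:

# point minikube at an older boot image instead of this release's default
$ minikube start --vm-driver=kvm2 \
    --iso-url=https://storage.googleapis.com/minikube/iso/minikube-v1.5.1.iso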

@afbjorklund afbjorklund added priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. and removed priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. labels Dec 30, 2019
@ateijelo
Author

ateijelo commented Jan 2, 2020

@afbjorklund how can I build and/or test the iso with the patch?

Edit: I found this: https://minikube.sigs.k8s.io/docs/contributing/iso/ ; am I on the right track there?
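
From that page, the build seems to come down to a couple of Makefile targets. This is roughly what I plan to run, assuming the target names in that doc are still current and that the patch lives on the PR branch from #6183:

# fetch the branch with the systemd patch and build the ISO
$ git clone https://github.com/kubernetes/minikube.git && cd minikube
$ git fetch origin pull/6183/head:systemd-amd && git checkout systemd-amd
$ make buildroot-image      # container image used for the ISO build
$ make out/minikube.iso     # produces out/minikube.iso

# then start minikube against the locally built image
$ minikube start --vm-driver=kvm2 --iso-url=file://$(pwd)/out/minikube.iso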

@ateijelo
Author

ateijelo commented Jan 2, 2020

Ok, I think I got it. I checked out branch afbjorklund:systemd-amd (from #6183) and built minikube.iso. I ran it with QEMU (without the -cpu hack) and it worked:

...
[  OK  ] Started Notify NFS peers of a restart.
[  OK  ] Started Login Service.
[  OK  ] Started OpenSSH server daemon.
[  OK  ] Reached target Multi-User System.
         Starting Update UTMP about System Runlevel Changes...
[  OK  ] Started Update UTMP about System Runlevel Changes.

Welcome to minikube
minikube login: root
                         _             _            
            _         _ ( )           ( )           
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __  
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

# cat /proc/cpuinfo
processor	: 0
vendor_id	: AuthenticAMD
cpu family	: 23
model		: 113
model name	: AMD Ryzen 7 3800X 8-Core Processor
stepping	: 0
microcode	: 0x1000065
cpu MHz		: 3899.998
cache size	: 512 KB
physical id	: 0
...

I tried it with VirtualBox as well, and it also worked fine.

@afbjorklund
Collaborator

Excellent news, thanks for building and testing it! Then we should include it with the next ISO...

@afbjorklund afbjorklund added this to the v1.7.0 milestone Jan 2, 2020
@afbjorklund afbjorklund self-assigned this Jan 4, 2020
@tstromberg
Contributor

This should be fixed in v1.7.0 beta 0
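
If you want to verify on your machine, the beta binary can be installed directly. A sketch, assuming the usual release download location for minikube binaries (double-check against the v1.7.0-beta.0 release page):

# install the beta binary and recreate the cluster
$ curl -LO https://storage.googleapis.com/minikube/releases/v1.7.0-beta.0/minikube-linux-amd64
$ sudo install minikube-linux-amd64 /usr/local/bin/minikube
$ minikube delete && minikube start --vm-driver=kvm2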

@ateijelo
Author

I just tried it, all looks good:

$ minikube start 
😄  minikube v1.7.0-beta.0 on Arch 18.1.5
✨  Automatically selected the  'kvm2' driver (alternates: [virtualbox none docker])
💾  Downloading driver docker-machine-driver-kvm2:
    > docker-machine-driver-kvm2.sha256: 65 B / 65 B [-------] 100.00% ? p/s 0s
    > docker-machine-driver-kvm2: 13.82 MiB / 13.82 MiB  100.00% 8.94 MiB p/s 1
💿  Downloading VM boot image ...
    > minikube-v1.7.0-beta.0.iso.sha256: 65 B / 65 B [-------] 100.00% ? p/s 0s
    > minikube-v1.7.0-beta.0.iso: 150.92 MiB / 150.92 MiB  100.00% 9.50 MiB p/s
🔥  Creating kvm2 VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.17.0 on Docker '19.03.5' ...
💾  Downloading kubectl v1.17.0
💾  Downloading kubelet v1.17.0
💾  Downloading kubeadm v1.17.0
🚜  Pulling images ...
🚀  Launching Kubernetes ... 
⌛  Waiting for cluster to come online ...
🏄  Done! kubectl is now configured to use "minikube"
