
minikube start fails on Windows 10 Home #6057

Closed
omriarnon-s1 opened this issue Dec 11, 2019 · 1 comment

Comments

omriarnon-s1 commented Dec 11, 2019

I'm trying to run `minikube start` on Windows 10 Home after installing minikube and placing kubectl.exe in a folder that an environment variable points to, but I'm getting the error below.

I've also set KUBECONFIG=C:\Users****.kube\config

which points to a config file I created with kubectl.
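As background, kubectl-style tools resolve the config file location roughly like this (a simplified sketch, not the actual client-go logic — `kubeconfig_path` is a hypothetical helper name):

```python
import os

def kubeconfig_path():
    """Return the kubeconfig path a kubectl-style tool would use (simplified sketch)."""
    # The KUBECONFIG environment variable wins; otherwise fall back to the default
    env = os.environ.get("KUBECONFIG")
    if env:
        # KUBECONFIG may list several files separated by the OS path separator
        # (";" on Windows, ":" elsewhere); take the first for this sketch
        return env.split(os.pathsep)[0]
    return os.path.join(os.path.expanduser("~"), ".kube", "config")
```

So with KUBECONFIG set as above, both kubectl and minikube should read and write that file rather than the default `%USERPROFILE%\.kube\config`.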

The exact command to reproduce the issue:
minikube start

The full output of the command that failed:

C:\WINDOWS\system32>minikube start

  • minikube v1.6.0 on Microsoft Windows 10 Home 10.0.18362 Build 18362
    • KUBECONFIG=C:\Users****.kube\config
  • Selecting 'virtualbox' driver from existing profile (alternates: [hyperv])
  • Tip: Use 'minikube start -p ' to create a new cluster, or 'minikube delete' to delete this one.
  • Using the running virtualbox "minikube" VM ...
  • Waiting for the host to be provisioned ...
  • Found network options:
    • NO_PROXY=192.168.99.100
    • no_proxy=192.168.99.100
  • Preparing Kubernetes v1.17.0 on Docker '19.03.5' ...
    • env NO_PROXY=192.168.99.100
    • env NO_PROXY=192.168.99.100

X Failed to setup kubeconfig: writing kubeconfig: Error writing file C:\Users*.kube\config: error acquiring lock for C:\Users*.kube\config: timeout acquiring mutex

The output of the minikube logs command:


C:\WINDOWS\system32>minikube logs

  • ==> Docker <==
  • -- Logs begin at Wed 2019-12-11 12:12:07 UTC, end at Wed 2019-12-11 12:24:23 UTC. --
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.097956172Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.098100212Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.098323437Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.098756217Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.098824656Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.098914651Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.098968051Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.099034119Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.101978936Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.102065576Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.102209584Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.102275681Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.102381326Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.102441269Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.102518529Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.102572808Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.102621845Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.102674533Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.102857456Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.102996855Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.103573928Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.103653488Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.103726492Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.103778743Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.103831781Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.103880350Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.103930229Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.103988625Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.104041526Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.104091488Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.104139821Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.104224298Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.104280496Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.104329885Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.104377498Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.104587444Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.104678769Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.104732058Z" level=info msg="containerd successfully booted in 0.009329s"
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.117748356Z" level=info msg="parsed scheme: "unix"" module=grpc
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.117784544Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.117805154Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.117815373Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.118722331Z" level=info msg="parsed scheme: "unix"" module=grpc
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.118757492Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.118775311Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.118787214Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.139130542Z" level=warning msg="Your kernel does not support cgroup blkio weight"
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.139246598Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.139295503Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.139343960Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.139386837Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.139427561Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.139692589Z" level=info msg="Loading containers: start."
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.219098819Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.263440611Z" level=info msg="Loading containers: done."
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.293948224Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.294074817Z" level=info msg="Daemon has completed initialization"
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.311263350Z" level=info msg="API listen on /var/run/docker.sock"
  • Dec 11 12:12:17 minikube dockerd[2455]: time="2019-12-11T12:12:17.311540027Z" level=info msg="API listen on [::]:2376"
  • Dec 11 12:12:17 minikube systemd[1]: Started Docker Application Container Engine.
  • ==> container status <==
  • time="2019-12-11T12:24:25Z" level=fatal msg="failed to connect: failed to connect, make sure you are running as root and the runtime has been started: context deadline exceeded"
  • CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
  • ==> dmesg <==
  • [ +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
  • [ +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
  • [ +0.187191] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
  • [ +28.776729] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
  • [Dec11 12:12] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument
  • [ +0.018762] systemd-fstab-generator[1349]: Ignoring "noauto" for root device
  • [ +0.001598] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
  • [ +0.000002] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
  • [ +0.533058] vboxvideo: loading out-of-tree module taints kernel.
  • [ +0.000036] vboxvideo: Unknown symbol ttm_bo_mmap (err -2)
  • [ +0.000012] vboxvideo: Unknown symbol ttm_bo_global_release (err -2)
  • [ +0.000011] vboxvideo: Unknown symbol ttm_bo_manager_func (err -2)
  • [ +0.000004] vboxvideo: Unknown symbol ttm_bo_global_init (err -2)
  • [ +0.000008] vboxvideo: Unknown symbol ttm_bo_device_release (err -2)
  • [ +0.000014] vboxvideo: Unknown symbol ttm_bo_kunmap (err -2)
  • [ +0.000006] vboxvideo: Unknown symbol ttm_bo_del_sub_from_lru (err -2)
  • [ +0.000007] vboxvideo: Unknown symbol ttm_bo_device_init (err -2)
  • [ +0.000001] vboxvideo: Unknown symbol ttm_bo_init_mm (err -2)
  • [ +0.000001] vboxvideo: Unknown symbol ttm_bo_dma_acc_size (err -2)
  • [ +0.000004] vboxvideo: Unknown symbol ttm_tt_init (err -2)
  • [ +0.000002] vboxvideo: Unknown symbol ttm_bo_kmap (err -2)
  • [ +0.000007] vboxvideo: Unknown symbol ttm_bo_add_to_lru (err -2)
  • [ +0.000004] vboxvideo: Unknown symbol ttm_mem_global_release (err -2)
  • [ +0.000002] vboxvideo: Unknown symbol ttm_mem_global_init (err -2)
  • [ +0.000012] vboxvideo: Unknown symbol ttm_bo_init (err -2)
  • [ +0.000002] vboxvideo: Unknown symbol ttm_bo_validate (err -2)
  • [ +0.000006] vboxvideo: Unknown symbol ttm_bo_put (err -2)
  • [ +0.000004] vboxvideo: Unknown symbol ttm_tt_fini (err -2)
  • [ +0.000002] vboxvideo: Unknown symbol ttm_bo_eviction_valuable (err -2)
  • [ +0.032803] vgdrvHeartbeatInit: Setting up heartbeat to trigger every 2000 milliseconds
  • [ +0.000250] vboxguest: misc device minor 57, IRQ 20, I/O port d020, MMIO at 00000000f0000000 (size 0x400000)
  • [ +0.224156] hpet1: lost 780 rtc interrupts
  • [ +0.045805] hpet1: lost 1 rtc interrupts
  • [ +0.003045] VBoxService 5.2.32 r132073 (verbosity: 0) linux.amd64 (Jul 12 2019 10:32:28) release log
  • 00:00:00.003588 main Log opened 2019-12-11T12:12:08.360998000Z
  • [ +0.000074] 00:00:00.003678 main OS Product: Linux
  • [ +0.000036] 00:00:00.003735 main OS Release: 4.19.81
  • [ +0.000029] 00:00:00.003768 main OS Version: #1 SMP Tue Dec 10 16:09:50 PST 2019
  • [ +0.000037] 00:00:00.003796 main Executable: /usr/sbin/VBoxService
  • 00:00:00.003797 main Process ID: 2084
  • 00:00:00.003798 main Package type: LINUX_64BITS_GENERIC
  • [ +0.000030] 00:00:00.003836 main 5.2.32 r132073 started. Verbose level = 0
  • [ +0.000948] 00:00:00.004761 main Error: Service 'control' failed to initialize: VERR_INVALID_PARAMETER
  • [ +0.000110] 00:00:00.004884 main Session 0 is about to close ...
  • [ +0.000057] 00:00:00.004927 main Stopping all guest processes ...
  • [ +0.000040] 00:00:00.004982 main Closing all guest files ...
  • [ +0.000479] 00:00:00.005453 main Ended.
  • [ +0.488137] hpet1: lost 16 rtc interrupts
  • [ +0.097512] hpet1: lost 3 rtc interrupts
  • [ +0.332638] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
  • [ +8.809070] systemd-fstab-generator[2342]: Ignoring "noauto" for root device
  • [Dec11 12:13] systemd-fstab-generator[2695]: Ignoring "noauto" for root device
  • [ +1.528679] systemd-fstab-generator[2822]: Ignoring "noauto" for root device
  • [Dec11 12:14] NFSD: Unable to end grace period: -110
  • [Dec11 12:16] systemd-fstab-generator[2996]: Ignoring "noauto" for root device
  • [ +2.163747] systemd-fstab-generator[3119]: Ignoring "noauto" for root device
  • [Dec11 12:19] systemd-fstab-generator[3296]: Ignoring "noauto" for root device
  • [ +1.787558] systemd-fstab-generator[3420]: Ignoring "noauto" for root device
  • [Dec11 12:21] systemd-fstab-generator[3596]: Ignoring "noauto" for root device
  • [ +1.593709] systemd-fstab-generator[3719]: Ignoring "noauto" for root device
  • ==> kernel <==
  • 12:24:25 up 13 min, 0 users, load average: 0.00, 0.07, 0.12
  • Linux minikube 4.19.81 #1 SMP Tue Dec 10 16:09:50 PST 2019 x86_64 GNU/Linux
  • PRETTY_NAME="Buildroot 2019.02.7"
  • ==> kubelet <==
  • -- Logs begin at Wed 2019-12-11 12:12:07 UTC, end at Wed 2019-12-11 12:24:25 UTC. --
  • -- No entries --

The operating system version:
Windows 10 Home

tstromberg (Contributor) commented

This issue appears to be a duplicate of #6058; do you mind if we move the conversation there?

This way we can centralize the content relating to the issue. If you feel that this issue is not in fact a duplicate, please re-open it using /reopen. If you have additional information to share, please add it to the new issue.

Thank you for reporting this!
