Win10: error acquiring lock for C:\Users<username>/.kube/config: timeout acquiring mutex #6058

Closed
nkolchenko opened this issue Dec 11, 2019 · 13 comments · Fixed by #6059
Labels: os/windows · priority/important-soon · top-10-issues

nkolchenko commented Dec 11, 2019

Found an issue with the latest minikube for Windows.
I installed the latest minikube-installer.exe from https://github.com/kubernetes/minikube/releases/latest/download/minikube-installer.exe and attempted to run minikube.
I already had VirtualBox 6.0.14
and kubectl 1.17.0 (from https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-on-windows).

The exact command to reproduce the issue:

minikube start --vm-driver=virtualbox

The full output of the command that failed:

  • minikube v1.6.0 on Microsoft Windows 10 Enterprise 10.0.18362 Build 18362
  • Selecting 'virtualbox' driver from user configuration (alternates: [hyperv])
  • Tip: Use 'minikube start -p ' to create a new cluster, or
    'minikube delete' to delete this one.
  • Using the running virtualbox "minikube" VM ...
  • Waiting for the host to be provisioned ...
  • Preparing Kubernetes v1.17.0 on Docker '19.03.5' ...

X Failed to setup kubeconfig: writing kubeconfig: Error writing file
C:\Users<username>/.kube/config: error acquiring lock for
C:\Users<username>/.kube/config: timeout acquiring mutex
*

The output of the minikube logs command:

Retry attempt:
minikube delete
! Unable to get the status of the minikube cluster.
! "minikube" cluster does not exist. Proceeding ahead with cleanup.

  • Removing C:\Users\Mykola_Kolchenko\.minikube\machines\minikube ...
  • The "minikube" cluster has been deleted.
  • Successfully deleted profile "minikube"

C:\Users\Mykola_Kolchenko>minikube start --vm-driver=virtualbox

  • minikube v1.6.0 on Microsoft Windows 10 Enterprise 10.0.18362 Build 18362
  • Selecting 'virtualbox' driver from user configuration (alternates: [])
  • Creating virtualbox VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...
  • Preparing Kubernetes v1.17.0 on Docker '19.03.5' ...

X Failed to setup kubeconfig: writing kubeconfig: Error writing file
C:\Users\Mykola_Kolchenko/.kube/config: error acquiring lock for
C:\Users\Mykola_Kolchenko/.kube/config: timeout acquiring mutex
*

logs:

minikube logs

  • ==> Docker <==
  • -- Logs begin at Wed 2019-12-11 12:43:25 UTC, end at Wed 2019-12-11
    12:46:26 UTC. --
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.070598370Z" level=info msg="loading plugin
    "io.containerd.snapshotter.v1.native"..."
    type=io.containerd.snapshotter.v1
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.070867643Z" level=info msg="loading plugin
    "io.containerd.snapshotter.v1.overlayfs"..."
    type=io.containerd.snapshotter.v1
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.071190572Z" level=info msg="loading plugin
    "io.containerd.snapshotter.v1.zfs"..."
    type=io.containerd.snapshotter.v1
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.071706854Z" level=info msg="skip loading
    plugin "io.containerd.snapshotter.v1.zfs"..."
    type=io.containerd.snapshotter.v1
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.071761385Z" level=info msg="loading plugin
    "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.071910465Z" level=warning msg="could not use
    snapshotter btrfs in metadata plugin" error="path
    /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs
    must be a btrfs filesystem to be used with the btrfs snapshotter"
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.071932415Z" level=warning msg="could not use
    snapshotter aufs in metadata plugin" error="modprobe aufs failed:
    "modprobe: FATAL: Module aufs not found in directory
    /lib/modules/4.19.81\n": exit status 1"
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.071947821Z" level=warning msg="could not use
    snapshotter zfs in metadata plugin" error="path
    /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs
    must be a zfs filesystem to be used with the zfs snapshotter: skip
    plugin"
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.078588688Z" level=info msg="loading plugin
    "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.078639561Z" level=info msg="loading plugin
    "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.078773499Z" level=info msg="loading plugin
    "io.containerd.service.v1.containers-service"..."
    type=io.containerd.service.v1
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.078792153Z" level=info msg="loading plugin
    "io.containerd.service.v1.content-service"..."
    type=io.containerd.service.v1
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.078805331Z" level=info msg="loading plugin
    "io.containerd.service.v1.diff-service"..."
    type=io.containerd.service.v1
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.078819423Z" level=info msg="loading plugin
    "io.containerd.service.v1.images-service"..."
    type=io.containerd.service.v1
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.078833815Z" level=info msg="loading plugin
    "io.containerd.service.v1.leases-service"..."
    type=io.containerd.service.v1
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.078847622Z" level=info msg="loading plugin
    "io.containerd.service.v1.namespaces-service"..."
    type=io.containerd.service.v1
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.078860845Z" level=info msg="loading plugin
    "io.containerd.service.v1.snapshots-service"..."
    type=io.containerd.service.v1
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.078901944Z" level=info msg="loading plugin
    "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.079088367Z" level=info msg="loading plugin
    "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.079180654Z" level=info msg="loading plugin
    "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.079593964Z" level=info msg="loading plugin
    "io.containerd.service.v1.tasks-service"..."
    type=io.containerd.service.v1
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.079668936Z" level=info msg="loading plugin
    "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.079765233Z" level=info msg="loading plugin
    "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.079784382Z" level=info msg="loading plugin
    "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.079799205Z" level=info msg="loading plugin
    "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.079813088Z" level=info msg="loading plugin
    "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.079826421Z" level=info msg="loading plugin
    "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.079900489Z" level=info msg="loading plugin
    "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.079930167Z" level=info msg="loading plugin
    "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.079951759Z" level=info msg="loading plugin
    "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.079972861Z" level=info msg="loading plugin
    "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.080058710Z" level=info msg="loading plugin
    "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.080093756Z" level=info msg="loading plugin
    "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.080116291Z" level=info msg="loading plugin
    "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.080137911Z" level=info msg="loading plugin
    "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.080326775Z" level=info msg=serving...
    address="/var/run/docker/containerd/containerd-debug.sock"
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.080401705Z" level=info msg=serving...
    address="/var/run/docker/containerd/containerd.sock"
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.080420689Z" level=info msg="containerd
    successfully booted in 0.015665s"
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.095593315Z" level=info msg="parsed scheme:
    "unix"" module=grpc
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.095773226Z" level=info msg="scheme "unix"
    not registered, fallback to default scheme" module=grpc
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.095905089Z" level=info
    msg="ccResolverWrapper: sending update to cc:
    {[{unix:///var/run/docker/containerd/containerd.sock 0 }]
    }" module=grpc
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.096177219Z" level=info msg="ClientConn
    switching balancer to "pick_first"" module=grpc
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.098346706Z" level=info msg="parsed scheme:
    "unix"" module=grpc
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.098431113Z" level=info msg="scheme "unix"
    not registered, fallback to default scheme" module=grpc
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.098500185Z" level=info
    msg="ccResolverWrapper: sending update to cc:
    {[{unix:///var/run/docker/containerd/containerd.sock 0 }]
    }" module=grpc
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.098535736Z" level=info msg="ClientConn
    switching balancer to "pick_first"" module=grpc
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.132093998Z" level=warning msg="Your kernel
    does not support cgroup blkio weight"
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.132229073Z" level=warning msg="Your kernel
    does not support cgroup blkio weight_device"
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.132304855Z" level=warning msg="Your kernel
    does not support cgroup blkio throttle.read_bps_device"
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.132381430Z" level=warning msg="Your kernel
    does not support cgroup blkio throttle.write_bps_device"
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.132452179Z" level=warning msg="Your kernel
    does not support cgroup blkio throttle.read_iops_device"
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.132521123Z" level=warning msg="Your kernel
    does not support cgroup blkio throttle.write_iops_device"
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.132992763Z" level=info msg="Loading
    containers: start."
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.418296330Z" level=info msg="Default bridge
    (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option
    --bip can be used to set a preferred IP address"
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.512437686Z" level=info msg="Loading
    containers: done."
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.554682425Z" level=info msg="Docker daemon"
    commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.554897086Z" level=info msg="Daemon has
    completed initialization"
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.590769662Z" level=info msg="API listen on
    /var/run/docker.sock"
  • Dec 11 12:43:42 minikube dockerd[2452]:
    time="2019-12-11T12:43:42.590950463Z" level=info msg="API listen on
    [::]:2376"
  • Dec 11 12:43:42 minikube systemd[1]: Started Docker Application
    Container Engine.
  • ==> container status <==
  • time="2019-12-11T12:46:28Z" level=fatal msg="failed to connect:
    failed to connect, make sure you are running as root and the runtime
    has been started: context deadline exceeded"
  • CONTAINER ID IMAGE COMMAND CREATED
    STATUS PORTS NAMES
  • ==> dmesg <==
  • [ +0.000007] vboxvideo: Unknown symbol ttm_mem_global_release (err -2)
  • [ +0.000006] vboxvideo: Unknown symbol ttm_mem_global_init (err -2)
  • [ +0.000022] vboxvideo: Unknown symbol ttm_bo_init (err -2)
  • [ +0.000004] vboxvideo: Unknown symbol ttm_bo_validate (err -2)
  • [ +0.000011] vboxvideo: Unknown symbol ttm_bo_put (err -2)
  • [ +0.000009] vboxvideo: Unknown symbol ttm_tt_fini (err -2)
  • [ +0.000003] vboxvideo: Unknown symbol ttm_bo_eviction_valuable (err -2)
  • [ +0.061576] vgdrvHeartbeatInit: Setting up heartbeat to trigger
    every 2000 milliseconds
  • [ +0.002439] vboxguest: misc device minor 57, IRQ 20, I/O port
    d020, MMIO at 00000000f0000000 (size 0x400000)
  • [ +0.463150] hpet1: lost 941 rtc interrupts
  • [ +0.060801] hpet1: lost 2 rtc interrupts
  • [ +0.001035] VBoxService 5.2.32 r132073 (verbosity: 0) linux.amd64 (Jul 12 2019 10:32:28) release log
  • 00:00:00.001576 main Log opened 2019-12-11T12:43:32.034004000Z
  • [ +0.000061] 00:00:00.001664 main OS Product: Linux
  • [ +0.000035] 00:00:00.001704 main OS Release: 4.19.81
  • [ +0.000035] 00:00:00.001739 main OS Version: #1 SMP Tue Dec 10 16:09:50 PST 2019
  • [ +0.000047] 00:00:00.001773 main Executable: /usr/sbin/VBoxService
  • 00:00:00.001774 main Process ID: 2089
  • 00:00:00.001775 main Package type: LINUX_64BITS_GENERIC
  • [ +0.000036] 00:00:00.001823 main 5.2.32 r132073 started. Verbose level = 0
  • [ +1.101222] hpet1: lost 43 rtc interrupts
  • [ +1.311508] NFSD: the nfsdcld client tracking upcall will be
    removed in 3.10. Please transition to using nfsdcltrack.
  • [ +2.641711] hpet1: lost 246 rtc interrupts
  • [ +4.637116] systemd-fstab-generator[2350]: Ignoring "noauto" for root device
  • [ +0.370050] hpet1: lost 318 rtc interrupts
  • [ +5.001085] hpet1: lost 318 rtc interrupts
  • [ +10.024981] hpet_rtc_timer_reinit: 64 callbacks suppressed
  • [ +0.000002] hpet1: lost 319 rtc interrupts
  • [ +5.007120] hpet1: lost 318 rtc interrupts
  • [ +5.004752] hpet1: lost 318 rtc interrupts
  • [Dec11 12:44] hpet1: lost 319 rtc interrupts
  • [ +5.015205] hpet1: lost 319 rtc interrupts
  • [ +5.009998] hpet1: lost 320 rtc interrupts
  • [ +5.018471] hpet1: lost 319 rtc interrupts
  • [ +5.018293] hpet1: lost 319 rtc interrupts
  • [ +5.018381] hpet1: lost 319 rtc interrupts
  • [ +5.017657] hpet1: lost 319 rtc interrupts
  • [ +5.018013] hpet1: lost 319 rtc interrupts
  • [ +5.021037] hpet1: lost 320 rtc interrupts
  • [ +5.016852] hpet1: lost 319 rtc interrupts
  • [ +5.020229] hpet1: lost 319 rtc interrupts
  • [Dec11 12:45] hpet1: lost 320 rtc interrupts
  • [ +5.002191] hpet1: lost 318 rtc interrupts
  • [ +5.003096] hpet1: lost 318 rtc interrupts
  • [ +5.007287] hpet1: lost 319 rtc interrupts
  • [ +5.009778] hpet1: lost 318 rtc interrupts
  • [ +5.001608] hpet1: lost 318 rtc interrupts
  • [ +5.006869] hpet1: lost 319 rtc interrupts
  • [ +0.327601] NFSD: Unable to end grace period: -110
  • [ +4.683225] hpet1: lost 319 rtc interrupts
  • [ +5.000428] hpet1: lost 318 rtc interrupts
  • [ +5.001182] hpet1: lost 318 rtc interrupts
  • [ +5.013529] hpet1: lost 319 rtc interrupts
  • [ +5.006682] hpet1: lost 318 rtc interrupts
  • [Dec11 12:46] hpet1: lost 318 rtc interrupts
  • [ +5.009824] hpet1: lost 319 rtc interrupts
  • [ +5.019722] hpet1: lost 319 rtc interrupts
  • [ +4.994733] hpet1: lost 318 rtc interrupts
  • [ +5.006073] hpet1: lost 318 rtc interrupts
  • [ +4.992710] hpet1: lost 318 rtc interrupts
  • ==> kernel <==
  • 12:46:28 up 4 min, 0 users, load average: 0.28, 0.45, 0.22
  • Linux minikube 4.19.81 #1 SMP Tue Dec 10 16:09:50 PST 2019 x86_64 GNU/Linux
  • PRETTY_NAME="Buildroot 2019.02.7"
  • ==> kubelet <==
  • -- Logs begin at Wed 2019-12-11 12:43:25 UTC, end at Wed 2019-12-11
    12:46:28 UTC. --
  • -- No entries --

The operating system version:
Microsoft Windows 10 Enterprise 10.0.18362 Build 18362

@blackcape commented:

I encountered the same problem today after upgrading from 1.5.2. I tried reinstalling (first uninstalling and deleting the old configs/folders under Users) with no success.


Gnafu commented Dec 11, 2019

Same error here; I also tried with an elevated command prompt, without luck.


fdbeirao commented Dec 11, 2019

Just happened to me. Quite surprised to see this was reported just 2 hours ago.

I'm running minikube start --vm-driver=hyperv

Output of minikube start --vm-driver=hyperv
* minikube v1.6.0 on Microsoft Windows 10 Enterprise 10.0.18362 Build 18362
* Selecting 'hyperv' driver from user configuration (alternates: [])
* Tip: Use 'minikube start -p ' to create a new cluster, or 'minikube delete' to delete this one.
* Using the running hyperv "minikube" VM ...
* Waiting for the host to be provisioned ...
* Preparing Kubernetes v1.17.0 on Docker '19.03.5' ...
*
X Failed to setup kubeconfig: writing kubeconfig: Error writing file C:\Users\32xxxxxxx/.kube/config: error acquiring lock for C:\Users\32xxxxxxx/.kube/config: timeout acquiring mutex
*
* Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
  - https://github.com/kubernetes/minikube/issues/new/choose
Output of minikube logs
* ==> Docker <==
* -- Logs begin at Wed 2019-12-11 14:09:14 UTC, end at Wed 2019-12-11 15:09:14 UTC. --
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.393786045Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.393930645Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.394091845Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.394268145Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.394297045Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.394338245Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.394347245Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.394353345Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.426914045Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.427009445Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.427051945Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.427062845Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.427070945Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.427081145Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.427090145Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.427149845Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.427165145Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.427174345Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.427346045Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.427458145Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.427887345Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.427931345Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.427962045Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.427971245Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.427994345Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.428002945Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.428010045Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.428018045Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.428025645Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.428033245Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.428040945Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.428099845Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.428109945Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.428117745Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.428125345Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.428222245Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.428278945Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.428288145Z" level=info msg="containerd successfully booted in 0.037273s"
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.438401945Z" level=info msg="parsed scheme: \"unix\"" module=grpc
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.438436745Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.438453245Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  }] }" module=grpc
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.438461745Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.439208145Z" level=info msg="parsed scheme: \"unix\"" module=grpc
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.439238645Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.439255745Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  }] }" module=grpc
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.439263845Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.506153745Z" level=warning msg="Your kernel does not support cgroup blkio weight"
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.506196445Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.506206145Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.506211645Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.506216845Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.506221845Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.506368945Z" level=info msg="Loading containers: start."
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.643001145Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.719495845Z" level=info msg="Loading containers: done."
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.749463945Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.749668245Z" level=info msg="Daemon has completed initialization"
* Dec 11 14:09:46 minikube systemd[1]: Started Docker Application Container Engine.
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.829557645Z" level=info msg="API listen on /var/run/docker.sock"
* Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.829644045Z" level=info msg="API listen on [::]:2376"
*
* ==> container status <==
* time="2019-12-11T14:26:07Z" level=fatal msg="failed to connect: failed to connect, make sure you are running as root and the runtime has been started: context deadline exceeded"
* CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
*
* ==> dmesg <==
* [Dec11 14:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
* [  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
* [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
* [  +0.033960] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
* [  +0.036381] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
* [  +0.010703] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
*               * this clock source is slow. Consider trying other clock sources
* [Dec11 14:09] Unstable clock detected, switching default tracing clock to "global"
*               If you want to keep using the local clock, then add:
*                 "trace_clock=local"
*               on the kernel command line
* [  +0.000120] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
* [  +0.830354] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
* [  +0.694804] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument
* [  +0.009835] systemd-fstab-generator[1222]: Ignoring "noauto" for root device
* [  +0.001639] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
* [  +0.000001] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
* [  +2.813761] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
* [  +0.787794] vboxguest: loading out-of-tree module taints kernel.
* [  +0.004798] vboxguest: PCI device not found, probably running on physical hardware.
* [ +15.135110] systemd-fstab-generator[2462]: Ignoring "noauto" for root device
* [Dec11 14:11] NFSD: Unable to end grace period: -110
* [Dec11 14:12] systemd-fstab-generator[3223]: Ignoring "noauto" for root device
* [Dec11 14:17] systemd-fstab-generator[3896]: Ignoring "noauto" for root device
*
* ==> kernel <==
*  14:26:07 up 17 min,  0 users,  load average: 0.00, 0.00, 0.00
* Linux minikube 4.19.81 #1 SMP Tue Dec 10 16:09:50 PST 2019 x86_64 GNU/Linux
* PRETTY_NAME="Buildroot 2019.02.7"
*
* ==> kubelet <==
* -- Logs begin at Wed 2019-12-11 14:09:14 UTC, end at Wed 2019-12-11 15:09:14 UTC. --
* -- No entries --

@andreysorokin commented:

Same here: Failed to setup kubeconfig: writing kubeconfig: Error writing file C:\Users<username>/.kube/config: error acquiring lock for C:\Users<username>/.kube/config: timeout acquiring mutex

@nkolchenko (Author) commented:

So, as a workaround, it is possible to roll back to 1.5.2, which works fine.
Proof: https://stackoverflow.com/questions/59282858/minikube-start-on-windows-is-failing-with-error-timeout-acquiring-mutex/59285606#59285606
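
For anyone needing the workaround right away, a rough sketch of the steps (assuming minikube was installed via the Windows installer; the exact uninstall step depends on how you installed it):

1. minikube delete
2. Uninstall minikube v1.6.0.
3. Install v1.5.2 from https://github.com/kubernetes/minikube/releases/tag/v1.5.2
4. minikube start --vm-driver=virtualbox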

@tstromberg tstromberg added priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. top-10-issues Top 10 support issues labels Dec 11, 2019
@tstromberg tstromberg changed the title minikube-1.6.0 can not write ".kube/config" on Win10 Win10: error acquiring lock for C:\Users<username>/.kube/config: timeout acquiring mutex Dec 11, 2019

tstromberg commented Dec 11, 2019

Thank you for reporting this. It appears to be a regression in v1.6.0, likely caused by #5912.

If someone runs into this, could they share the output of:

minikube start --alsologtostderr

I'm betting this is an issue in handling Windows path names, but a cursory look at the code doesn't make it obvious where the issue is. Alternatively, it may be that we now limit lock acquisition to 60 seconds.

@tstromberg tstromberg added help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. os/windows labels Dec 11, 2019
@tstromberg commented:

If any Windows users have time to help look into this, here's the function that's returning an error:

releaser, err := mutex.Acquire(spec)

Here is where it's being called:

if err := lock.WriteFile(fPath, data, 0600); err != nil {
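
To make those two call sites easier to follow, here is a minimal, self-contained sketch of the acquire/write/release flow they implement. It assumes the github.com/juju/mutex package, whose Spec shape matches the "{Name:... Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}" specs in the logs below; writeLocked, wallClock, and the fixed lock name are illustrative, not minikube's actual code.

package main

import (
	"fmt"
	"io/ioutil"
	"time"

	"github.com/juju/mutex"
)

// wallClock satisfies the mutex package's Clock interface using the system clock.
type wallClock struct{}

func (wallClock) After(d time.Duration) <-chan time.Time { return time.After(d) }
func (wallClock) Now() time.Time                         { return time.Now() }

// writeLocked serializes writers of the same file behind an OS-level mutex.
func writeLocked(path string, data []byte) error {
	spec := mutex.Spec{
		// The name must be unique per path; if two different paths map to
		// the same name, their writers block each other (the bug discussed below).
		Name:    "mk-kubeconfig",
		Clock:   wallClock{},
		Delay:   500 * time.Millisecond, // retry interval between acquisition attempts
		Timeout: time.Minute,            // matches the 60s wait seen in the CI log
	}
	releaser, err := mutex.Acquire(spec)
	if err != nil {
		return fmt.Errorf("error acquiring lock for %s: %v", path, err)
	}
	defer releaser.Release()
	return ioutil.WriteFile(path, data, 0600)
}

func main() {
	if err := writeLocked("config", []byte("apiVersion: v1\n")); err != nil {
		fmt.Println(err)
	}
}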

@tstromberg
Copy link
Contributor

tstromberg commented Dec 11, 2019

It looks like this is failing in our Windows CI tests, but it got lost in the noise:

I1210 21:21:06.759609    3972 settings.go:123] acquiring lock: {Name:m1-2473796964-3209652669-2317700591-1001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 21:21:06.759609    3972 settings.go:131] Updating kubeconfig:  C:\Users\admin/.kube/config
I1210 21:21:07.645605    3972 lock.go:44] acquiring lock for C:\Users\admin/.kube/config: {Name:m1-2473796964-3209652669-2317700591-1001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
W1210 21:22:07.646008    3972 exit.go:101] Failed to setup kubeconfig: writing kubeconfig: Error writing file C:\Users\admin/.kube/config: error acquiring lock for C:\Users\admin/.kube/config: timeout acquiring mutex

It appears to be waiting the full minute. I believe the issue may be that we are generating duplicate lock names in some cases. We appear to use the same lock name here for an unrelated file:

lock.go:44] acquiring lock for C:\Users\admin\.minikube\profiles\containerd-20191210T212325.7356633-8584\config.json: {Name:m1-2473796964-3209652669-2317700591-1001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}

My current theory is that our lock naming function is broken on Windows somehow. The expected lock name for C:\Users\admin/.kube/config is mC-Users-admin-kube-config-test, not m1-2473796964-3209652669-2317700591-1001.

@tstromberg commented:

At this point, I feel that we should generate lock names using a checksum of the filename rather than massaging it via regexp.
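
A minimal sketch of that checksum idea (not the actual #6059 change; the "m" prefix and the 40-character cap are inferred from the "last 39 characters" rule and the lock names discussed below):

package main

import (
	"crypto/sha1"
	"fmt"
)

// lockName derives a lock name from a hash of the full path, so two distinct
// paths can no longer collide no matter how long the path or the user ID is.
// It truncates to stay within the apparent 40-character limit.
func lockName(path string) string {
	sum := sha1.Sum([]byte(path))
	return fmt.Sprintf("m%x", sum)[:40]
}

func main() {
	// Distinct paths now yield distinct names.
	fmt.Println(lockName(`C:\Users\admin/.kube/config`))
	fmt.Println(lockName(`C:\Users\admin\.minikube\profiles\p1\config.json`))
}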

@tstromberg tstromberg removed the help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. label Dec 11, 2019
@tstromberg tstromberg self-assigned this Dec 11, 2019
@tstromberg commented:

NOTE: For those afflicted, we will be releasing v1.6.1 with a fix for this ASAP.


tstromberg commented Dec 11, 2019

We were able to root-cause the issue! Currently, we generate lock names by taking the last 39 characters of: fmt.Sprintf("%s-%s", path, uid)

On most platforms, UIDs are 0-6 characters long. However, on Windows, a SID is returned, which looks like: S-1-5-21-1180699209-877415012-3182924384-1004

That's 46 characters long. Effectively, on Windows, all of our locks shared the same name, based on the running user's SID: m21-1180699209-877415012-3182924384
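
A runnable sketch of the collision (the "m" prefix, the sanitize step, and the full SID are reconstructed from this description and the lock names in the CI log above, not copied from minikube's source):

package main

import (
	"fmt"
	"regexp"
)

// nonWord stands in for whatever sanitization the real code applies; it is a
// no-op here because the surviving tail is already only digits and dashes.
var nonWord = regexp.MustCompile(`[^a-zA-Z0-9-]+`)

// legacyLockName reconstructs the buggy derivation: keep only the last
// 39 characters of "path-uid", sanitize, and prefix "m".
func legacyLockName(path, uid string) string {
	s := fmt.Sprintf("%s-%s", path, uid)
	if len(s) > 39 {
		// A Windows SID alone is longer than 39 characters, so the
		// path contributes nothing to the surviving tail.
		s = s[len(s)-39:]
	}
	return "m" + nonWord.ReplaceAllString(s, "-")
}

func main() {
	sid := "S-1-5-21-2473796964-3209652669-2317700591-1001" // reconstructed from the CI log above
	fmt.Println(legacyLockName(`C:\Users\admin/.kube/config`, sid))
	fmt.Println(legacyLockName(`C:\Users\admin\.minikube\profiles\containerd-20191210T212325.7356633-8584\config.json`, sid))
	// Both print m1-2473796964-3209652669-2317700591-1001: every file lock
	// collapses onto one mutex, so the kubeconfig writer waits behind an
	// unrelated holder and fails after the one-minute Timeout with
	// "timeout acquiring mutex".
}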

The PR appears to work for Windows users. You can test it by downloading https://storage.googleapis.com/minikube-builds/6059/minikube-windows-amd64.exe - we plan to issue a v1.6.1 release within the next couple of hours.

@tstromberg commented:

Leaving open until v1.6.1 ships.

@tstromberg commented:

v1.6.1 is now available with this bug fixed: https://github.com/kubernetes/minikube/releases/tag/v1.6.1

Thank you for your patience!
