Win10: error acquiring lock for C:\Users\<username>/.kube/config: timeout acquiring mutex #6058
I encountered the same problem today after upgrading from 1.5.2. I tried to reinstall (first uninstalling and deleting the old configs/folders under Users), with no success.
Same error here; I also tried from an elevated command prompt, without luck.
Just happened to me. Quite surprised to see this has been reported just 2 hours ago.

Output of minikube start --vm-driver=hyperv:

* minikube v1.6.0 on Microsoft Windows 10 Enterprise 10.0.18362 Build 18362
* Selecting 'hyperv' driver from user configuration (alternates: [])
* Tip: Use 'minikube start -p ' to create a new cluster, or 'minikube delete' to delete this one.
* Using the running hyperv "minikube" VM ...
* Waiting for the host to be provisioned ...
* Preparing Kubernetes v1.17.0 on Docker '19.03.5' ...
* X Failed to setup kubeconfig: writing kubeconfig: Error writing file C:\Users\32xxxxxxx/.kube/config: error acquiring lock for C:\Users\32xxxxxxx/.kube/config: timeout acquiring mutex
*
* Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
  - https://github.com/kubernetes/minikube/issues/new/choose

Output of minikube logs:

==> Docker <==
-- Logs begin at Wed 2019-12-11 14:09:14 UTC, end at Wed 2019-12-11 15:09:14 UTC. --
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.393786045Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.393930645Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.394091845Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.394268145Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.394297045Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.394338245Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.394347245Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.394353345Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.426914045Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.427009445Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.427051945Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.427062845Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.427070945Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.427081145Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.427090145Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.427149845Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.427165145Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.427174345Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.427346045Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.427458145Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.427887345Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.427931345Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.427962045Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.427971245Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.427994345Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.428002945Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.428010045Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.428018045Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.428025645Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.428033245Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.428040945Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.428099845Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.428109945Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.428117745Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.428125345Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.428222245Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.428278945Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.428288145Z" level=info msg="containerd successfully booted in 0.037273s"
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.438401945Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.438436745Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.438453245Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.438461745Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.439208145Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.439238645Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.439255745Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.439263845Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.506153745Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.506196445Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.506206145Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.506211645Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.506216845Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.506221845Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.506368945Z" level=info msg="Loading containers: start."
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.643001145Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.719495845Z" level=info msg="Loading containers: done."
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.749463945Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.749668245Z" level=info msg="Daemon has completed initialization"
Dec 11 14:09:46 minikube systemd[1]: Started Docker Application Container Engine.
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.829557645Z" level=info msg="API listen on /var/run/docker.sock"
Dec 11 14:09:46 minikube dockerd[2736]: time="2019-12-11T14:09:46.829644045Z" level=info msg="API listen on [::]:2376"

==> container status <==
time="2019-12-11T14:26:07Z" level=fatal msg="failed to connect: failed to connect, make sure you are running as root and the runtime has been started: context deadline exceeded"
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

==> dmesg <==
[Dec11 14:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.033960] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
[ +0.036381] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +0.010703] Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
             this clock source is slow. Consider trying other clock sources
[Dec11 14:09] Unstable clock detected, switching default tracing clock to "global"
             If you want to keep using the local clock, then add:
             "trace_clock=local"
             on the kernel command line
[ +0.000120] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +0.830354] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
[ +0.694804] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument
[ +0.009835] systemd-fstab-generator[1222]: Ignoring "noauto" for root device
[ +0.001639] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
[ +0.000001] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[ +2.813761] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[ +0.787794] vboxguest: loading out-of-tree module taints kernel.
[ +0.004798] vboxguest: PCI device not found, probably running on physical hardware.
[ +15.135110] systemd-fstab-generator[2462]: Ignoring "noauto" for root device
[Dec11 14:11] NFSD: Unable to end grace period: -110
[Dec11 14:12] systemd-fstab-generator[3223]: Ignoring "noauto" for root device
[Dec11 14:17] systemd-fstab-generator[3896]: Ignoring "noauto" for root device

==> kernel <==
14:26:07 up 17 min, 0 users, load average: 0.00, 0.00, 0.00
Linux minikube 4.19.81 #1 SMP Tue Dec 10 16:09:50 PST 2019 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2019.02.7"

==> kubelet <==
-- Logs begin at Wed 2019-12-11 14:09:14 UTC, end at Wed 2019-12-11 15:09:14 UTC. --
-- No entries --
Same here: Failed to setup kubeconfig: writing kubeconfig: Error writing file C:\Users\<username>/.kube/config: error acquiring lock for C:\Users\<username>/.kube/config: timeout acquiring mutex
As a workaround, it is possible to roll back to 1.5.2, which works fine.
Thank you for reporting this. It appears to be a regression in v1.6.0, likely due to #5912. If someone runs into this, could they share the output of:
I'm betting this is an issue in handling Windows path names, but a cursory look at the code doesn't make it obvious where the issue is. Alternatively, it may be that we now limit lock acquisition to 60 seconds.
If any Windows users have time to help look into this, here's the function that's returning an error: minikube/pkg/util/lock/lock.go Line 45 in e394424
Here is where it's being called:
It looks like this is failing in our Windows CI tests, but got lost in the noise:
It appears to be waiting the full minute. I believe the issue may be that we are generating duplicate lock names in some cases. We appear to use the same lock name here for an unrelated file:
My current theory is that our lock naming function is broken on Windows somehow. The expected lock name for …
At this point, I feel that we should generate lock names using a checksum of the filename rather than massaging it via regexp.
NOTE: For those afflicted, we will be releasing v1.6.1 with a fix for this ASAP.
We were able to root-cause the issue! Currently, we generate lock names by taking the last 39 characters of:

On most platforms, uids are 0-6 characters long. However, on Windows, a SID is returned, which looks like:

That's 46 characters long. Effectively, on Windows, all of our locks shared the same name, based on the running user's SID:

The PR appears to work for Windows users. You can test it by downloading https://storage.googleapis.com/minikube-builds/6059/minikube-windows-amd64.exe - We plan to issue a v1.6.1 release within the next couple of hours.
Leaving open until v1.6.1 ships.

v1.6.1 is now available with this bug fixed: https://github.com/kubernetes/minikube/releases/tag/v1.6.1 Thank you for your patience!
Found an issue with the latest minikube for Windows.
I have installed the latest minikube-installer.exe from https://github.com/kubernetes/minikube/releases/latest/download/minikube-installer.exe and attempted to run minikube.
I already had VirtualBox 6.0.14
and kubectl 1.17.0 (from https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-on-windows)
The exact command to reproduce the issue:
minikube start --vm-driver=virtualbox
The full output of the command that failed:
'minikube delete' to delete this one.
X Failed to setup kubeconfig: writing kubeconfig: Error writing file
C:\Users\<username>/.kube/config: error acquiring lock for
C:\Users\<username>/.kube/config: timeout acquiring mutex
*
* Sorry that minikube crashed. If this was unexpected, we would love
to hear from you:
The output of the minikube logs command:

attempt:
minikube delete
! Unable to get the status of the minikube cluster.
! "minikube" cluster does not exist. Proceeding ahead with cleanup.
C:\Users\Mykola_Kolchenko>minikube start --vm-driver=virtualbox
X Failed to setup kubeconfig: writing kubeconfig: Error writing file
C:\Users\Mykola_Kolchenko/.kube/config: error acquiring lock for
C:\Users\Mykola_Kolchenko/.kube/config: timeout acquiring mutex
*
* Sorry that minikube crashed. If this was unexpected, we would love
to hear from you:
logs:
minikube logs
12:46:26 UTC. --
time="2019-12-11T12:43:42.070598370Z" level=info msg="loading plugin
"io.containerd.snapshotter.v1.native"..."
type=io.containerd.snapshotter.v1
time="2019-12-11T12:43:42.070867643Z" level=info msg="loading plugin
"io.containerd.snapshotter.v1.overlayfs"..."
type=io.containerd.snapshotter.v1
time="2019-12-11T12:43:42.071190572Z" level=info msg="loading plugin
"io.containerd.snapshotter.v1.zfs"..."
type=io.containerd.snapshotter.v1
time="2019-12-11T12:43:42.071706854Z" level=info msg="skip loading
plugin "io.containerd.snapshotter.v1.zfs"..."
type=io.containerd.snapshotter.v1
time="2019-12-11T12:43:42.071761385Z" level=info msg="loading plugin
"io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
time="2019-12-11T12:43:42.071910465Z" level=warning msg="could not use
snapshotter btrfs in metadata plugin" error="path
/var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs
must be a btrfs filesystem to be used with the btrfs snapshotter"
time="2019-12-11T12:43:42.071932415Z" level=warning msg="could not use
snapshotter aufs in metadata plugin" error="modprobe aufs failed:
"modprobe: FATAL: Module aufs not found in directory
/lib/modules/4.19.81\n": exit status 1"
time="2019-12-11T12:43:42.071947821Z" level=warning msg="could not use
snapshotter zfs in metadata plugin" error="path
/var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs
must be a zfs filesystem to be used with the zfs snapshotter: skip
plugin"
time="2019-12-11T12:43:42.078588688Z" level=info msg="loading plugin
"io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
time="2019-12-11T12:43:42.078639561Z" level=info msg="loading plugin
"io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
time="2019-12-11T12:43:42.078773499Z" level=info msg="loading plugin
"io.containerd.service.v1.containers-service"..."
type=io.containerd.service.v1
time="2019-12-11T12:43:42.078792153Z" level=info msg="loading plugin
"io.containerd.service.v1.content-service"..."
type=io.containerd.service.v1
time="2019-12-11T12:43:42.078805331Z" level=info msg="loading plugin
"io.containerd.service.v1.diff-service"..."
type=io.containerd.service.v1
time="2019-12-11T12:43:42.078819423Z" level=info msg="loading plugin
"io.containerd.service.v1.images-service"..."
type=io.containerd.service.v1
time="2019-12-11T12:43:42.078833815Z" level=info msg="loading plugin
"io.containerd.service.v1.leases-service"..."
type=io.containerd.service.v1
time="2019-12-11T12:43:42.078847622Z" level=info msg="loading plugin
"io.containerd.service.v1.namespaces-service"..."
type=io.containerd.service.v1
time="2019-12-11T12:43:42.078860845Z" level=info msg="loading plugin
"io.containerd.service.v1.snapshots-service"..."
type=io.containerd.service.v1
time="2019-12-11T12:43:42.078901944Z" level=info msg="loading plugin
"io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
time="2019-12-11T12:43:42.079088367Z" level=info msg="loading plugin
"io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
time="2019-12-11T12:43:42.079180654Z" level=info msg="loading plugin
"io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
time="2019-12-11T12:43:42.079593964Z" level=info msg="loading plugin
"io.containerd.service.v1.tasks-service"..."
type=io.containerd.service.v1
time="2019-12-11T12:43:42.079668936Z" level=info msg="loading plugin
"io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
time="2019-12-11T12:43:42.079765233Z" level=info msg="loading plugin
"io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
time="2019-12-11T12:43:42.079784382Z" level=info msg="loading plugin
"io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
time="2019-12-11T12:43:42.079799205Z" level=info msg="loading plugin
"io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
time="2019-12-11T12:43:42.079813088Z" level=info msg="loading plugin
"io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
time="2019-12-11T12:43:42.079826421Z" level=info msg="loading plugin
"io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
time="2019-12-11T12:43:42.079900489Z" level=info msg="loading plugin
"io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
time="2019-12-11T12:43:42.079930167Z" level=info msg="loading plugin
"io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
time="2019-12-11T12:43:42.079951759Z" level=info msg="loading plugin
"io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
time="2019-12-11T12:43:42.079972861Z" level=info msg="loading plugin
"io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
time="2019-12-11T12:43:42.080058710Z" level=info msg="loading plugin
"io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
time="2019-12-11T12:43:42.080093756Z" level=info msg="loading plugin
"io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
time="2019-12-11T12:43:42.080116291Z" level=info msg="loading plugin
"io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
time="2019-12-11T12:43:42.080137911Z" level=info msg="loading plugin
"io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
time="2019-12-11T12:43:42.080326775Z" level=info msg=serving...
address="/var/run/docker/containerd/containerd-debug.sock"
time="2019-12-11T12:43:42.080401705Z" level=info msg=serving...
address="/var/run/docker/containerd/containerd.sock"
time="2019-12-11T12:43:42.080420689Z" level=info msg="containerd
successfully booted in 0.015665s"
time="2019-12-11T12:43:42.095593315Z" level=info msg="parsed scheme:
"unix"" module=grpc
time="2019-12-11T12:43:42.095773226Z" level=info msg="scheme "unix"
not registered, fallback to default scheme" module=grpc
time="2019-12-11T12:43:42.095905089Z" level=info
msg="ccResolverWrapper: sending update to cc:
{[{unix:///var/run/docker/containerd/containerd.sock 0 }]
}" module=grpc
time="2019-12-11T12:43:42.096177219Z" level=info msg="ClientConn
switching balancer to "pick_first"" module=grpc
time="2019-12-11T12:43:42.098346706Z" level=info msg="parsed scheme:
"unix"" module=grpc
time="2019-12-11T12:43:42.098431113Z" level=info msg="scheme "unix"
not registered, fallback to default scheme" module=grpc
time="2019-12-11T12:43:42.098500185Z" level=info
msg="ccResolverWrapper: sending update to cc:
{[{unix:///var/run/docker/containerd/containerd.sock 0 }]
}" module=grpc
time="2019-12-11T12:43:42.098535736Z" level=info msg="ClientConn
switching balancer to "pick_first"" module=grpc
time="2019-12-11T12:43:42.132093998Z" level=warning msg="Your kernel
does not support cgroup blkio weight"
time="2019-12-11T12:43:42.132229073Z" level=warning msg="Your kernel
does not support cgroup blkio weight_device"
time="2019-12-11T12:43:42.132304855Z" level=warning msg="Your kernel
does not support cgroup blkio throttle.read_bps_device"
time="2019-12-11T12:43:42.132381430Z" level=warning msg="Your kernel
does not support cgroup blkio throttle.write_bps_device"
time="2019-12-11T12:43:42.132452179Z" level=warning msg="Your kernel
does not support cgroup blkio throttle.read_iops_device"
time="2019-12-11T12:43:42.132521123Z" level=warning msg="Your kernel
does not support cgroup blkio throttle.write_iops_device"
time="2019-12-11T12:43:42.132992763Z" level=info msg="Loading
containers: start."
time="2019-12-11T12:43:42.418296330Z" level=info msg="Default bridge
(docker0) is assigned with an IP address 172.17.0.0/16. Daemon option
--bip can be used to set a preferred IP address"
time="2019-12-11T12:43:42.512437686Z" level=info msg="Loading
containers: done."
time="2019-12-11T12:43:42.554682425Z" level=info msg="Docker daemon"
commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
time="2019-12-11T12:43:42.554897086Z" level=info msg="Daemon has
completed initialization"
time="2019-12-11T12:43:42.590769662Z" level=info msg="API listen on
/var/run/docker.sock"
time="2019-12-11T12:43:42.590950463Z" level=info msg="API listen on
[::]:2376"
Container Engine.
failed to connect, make sure you are running as root and the runtime
has been started: context deadline exceeded"
STATUS PORTS NAMES
every 2000 milliseconds
d020, MMIO at 00000000f0000000 (size 0x400000)
(Jul 12 2019 10:32:28) release log
2019-12-11T12:43:32.034004000Z
16:09:50 PST 2019
Verbose level = 0
removed in 3.10. Please transition to using nfsdcltrack.
12:46:28 UTC. --
The operating system version:
Microsoft Windows 10 Enterprise 10.0.18362 Build 18362