Volume mount files get corrupted and become inaccessible #11552

Closed
montmejat opened this issue Jun 1, 2021 · 12 comments
Labels
area/mount
kind/support: Categorizes issue or PR as a support question.
lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.
long-term-support: Long-term support issues that can't be fixed in code.
triage/needs-information: Indicates an issue needs more information in order to work on it.

Comments


montmejat commented Jun 1, 2021

logs_alsologtostderr.txt

Steps to reproduce the issue:

  1. minikube start
  2. minikube mount my/local/folder:/data
  3. kubectl apply -k cd/my_deployment
  4. minikube ssh
  5. ls /data
    ls: cannot access '/data': Input/output error
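
Note for anyone triaging: minikube mount shares the host folder with the VM over a 9p network filesystem, so an Input/output error on the mount point typically means the host-side 9p server has died or the connection dropped (for example, the terminal running minikube mount was closed or the machine slept). A rough diagnostic sketch using standard Linux tools (exact output formats will vary):

  # On the host: the mount process must stay running for as long as the share is used.
  minikube mount my/local/folder:/data

  # Inside the VM (minikube ssh): check whether /data is still a live 9p mount.
  mount | grep /data    # a healthy share shows: ... on /data type 9p (rw,...)
  dmesg | grep -i 9p    # 9p transport errors here usually explain the I/O error

  # If the share is dead, unmount it and re-run minikube mount from the host.
  sudo umount /data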

Full output of minikube logs command:

==> Audit <==
|---------|-------------------------|----------|------------------|---------|--------------------------------|--------------------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|-------------------------|----------|------------------|---------|--------------------------------|--------------------------------|
| -p | minikube docker-env | minikube | amontmejatdabaux | v1.19.0 | Thu, 27 May 2021 17:14:25 CEST | Thu, 27 May 2021 17:14:26 CEST |
| stop | | minikube | amontmejatdabaux | v1.19.0 | Fri, 28 May 2021 10:08:33 CEST | Fri, 28 May 2021 10:08:47 CEST |
| start | | minikube | amontmejatdabaux | v1.19.0 | Fri, 28 May 2021 10:12:32 CEST | Fri, 28 May 2021 10:13:38 CEST |
| -p | minikube docker-env | minikube | amontmejatdabaux | v1.19.0 | Fri, 28 May 2021 10:18:27 CEST | Fri, 28 May 2021 10:18:28 CEST |
| ssh | | minikube | amontmejatdabaux | v1.19.0 | Fri, 28 May 2021 10:28:50 CEST | Fri, 28 May 2021 10:29:03 CEST |
| ssh | | minikube | amontmejatdabaux | v1.19.0 | Fri, 28 May 2021 10:34:52 CEST | Fri, 28 May 2021 10:36:02 CEST |
| ssh | | minikube | amontmejatdabaux | v1.19.0 | Fri, 28 May 2021 10:50:14 CEST | Fri, 28 May 2021 10:50:18 CEST |
| -p | minikube docker-env | minikube | amontmejatdabaux | v1.19.0 | Fri, 28 May 2021 13:26:01 CEST | Fri, 28 May 2021 13:26:05 CEST |
| ssh | | minikube | amontmejatdabaux | v1.19.0 | Fri, 28 May 2021 13:28:36 CEST | Fri, 28 May 2021 13:28:52 CEST |
| -p | minikube docker-env | minikube | amontmejatdabaux | v1.19.0 | Fri, 28 May 2021 13:35:39 CEST | Fri, 28 May 2021 13:35:40 CEST |
| service | dash-service | minikube | amontmejatdabaux | v1.19.0 | Fri, 28 May 2021 13:30:09 CEST | Fri, 28 May 2021 13:36:14 CEST |
| ssh | | minikube | amontmejatdabaux | v1.19.0 | Fri, 28 May 2021 13:36:29 CEST | Fri, 28 May 2021 13:36:37 CEST |
| -p | minikube docker-env | minikube | amontmejatdabaux | v1.19.0 | Fri, 28 May 2021 14:03:18 CEST | Fri, 28 May 2021 14:03:19 CEST |
| ssh | | minikube | amontmejatdabaux | v1.19.0 | Fri, 28 May 2021 14:05:14 CEST | Fri, 28 May 2021 14:05:24 CEST |
| ssh | | minikube | amontmejatdabaux | v1.19.0 | Fri, 28 May 2021 14:38:55 CEST | Fri, 28 May 2021 14:39:26 CEST |
| service | django-webapp-svc --url | minikube | amontmejatdabaux | v1.19.0 | Fri, 28 May 2021 14:26:30 CEST | Fri, 28 May 2021 14:40:10 CEST |
| ssh | | minikube | amontmejatdabaux | v1.19.0 | Fri, 28 May 2021 14:40:25 CEST | Fri, 28 May 2021 14:40:34 CEST |
| delete | | minikube | amontmejatdabaux | v1.19.0 | Fri, 28 May 2021 14:41:16 CEST | Fri, 28 May 2021 14:41:39 CEST |
| start | | minikube | amontmejatdabaux | v1.19.0 | Fri, 28 May 2021 14:45:40 CEST | Fri, 28 May 2021 14:47:04 CEST |
| -p | minikube docker-env | minikube | amontmejatdabaux | v1.19.0 | Fri, 28 May 2021 14:49:10 CEST | Fri, 28 May 2021 14:49:11 CEST |
| ssh | | minikube | amontmejatdabaux | v1.19.0 | Fri, 28 May 2021 15:05:07 CEST | Fri, 28 May 2021 15:05:20 CEST |
| ssh | | minikube | amontmejatdabaux | v1.19.0 | Fri, 28 May 2021 15:06:34 CEST | Fri, 28 May 2021 15:06:53 CEST |
| ssh | | minikube | amontmejatdabaux | v1.19.0 | Fri, 28 May 2021 15:08:13 CEST | Fri, 28 May 2021 15:08:27 CEST |
| ssh | | minikube | amontmejatdabaux | v1.19.0 | Fri, 28 May 2021 15:11:53 CEST | Fri, 28 May 2021 15:12:05 CEST |
| ssh | | minikube | amontmejatdabaux | v1.19.0 | Fri, 28 May 2021 15:23:27 CEST | Fri, 28 May 2021 15:23:38 CEST |
| ssh | | minikube | amontmejatdabaux | v1.19.0 | Fri, 28 May 2021 15:33:02 CEST | Fri, 28 May 2021 15:33:11 CEST |
| ssh | | minikube | amontmejatdabaux | v1.19.0 | Fri, 28 May 2021 16:37:02 CEST | Fri, 28 May 2021 16:37:09 CEST |
| ssh | | minikube | amontmejatdabaux | v1.19.0 | Fri, 28 May 2021 16:40:58 CEST | Fri, 28 May 2021 16:40:59 CEST |
| ssh | ls / | minikube | amontmejatdabaux | v1.19.0 | Fri, 28 May 2021 16:48:05 CEST | Fri, 28 May 2021 16:48:06 CEST |
| ssh | ls /; ls | minikube | amontmejatdabaux | v1.19.0 | Fri, 28 May 2021 16:48:11 CEST | Fri, 28 May 2021 16:48:11 CEST |
| ssh | ls /; ls /etc | minikube | amontmejatdabaux | v1.19.0 | Fri, 28 May 2021 16:48:17 CEST | Fri, 28 May 2021 16:48:18 CEST |
| stop | | minikube | amontmejatdabaux | v1.19.0 | Fri, 28 May 2021 17:44:02 CEST | Fri, 28 May 2021 17:44:16 CEST |
| start | | minikube | amontmejatdabaux | v1.19.0 | Mon, 31 May 2021 11:06:37 CEST | Mon, 31 May 2021 11:07:39 CEST |
| -p | minikube docker-env | minikube | amontmejatdabaux | v1.19.0 | Mon, 31 May 2021 11:30:22 CEST | Mon, 31 May 2021 11:30:23 CEST |
| ssh | | minikube | amontmejatdabaux | v1.19.0 | Mon, 31 May 2021 11:54:56 CEST | Mon, 31 May 2021 11:55:06 CEST |
| -p | minikube docker-env | minikube | amontmejatdabaux | v1.19.0 | Mon, 31 May 2021 11:55:59 CEST | Mon, 31 May 2021 11:56:00 CEST |
| ssh | | minikube | amontmejatdabaux | v1.19.0 | Mon, 31 May 2021 12:00:04 CEST | Mon, 31 May 2021 12:00:15 CEST |
| ssh | | minikube | amontmejatdabaux | v1.19.0 | Mon, 31 May 2021 14:21:41 CEST | Mon, 31 May 2021 14:21:53 CEST |
| ssh | | minikube | amontmejatdabaux | v1.19.0 | Mon, 31 May 2021 14:21:27 CEST | Mon, 31 May 2021 14:22:52 CEST |
| ssh | | minikube | amontmejatdabaux | v1.19.0 | Mon, 31 May 2021 14:22:55 CEST | Mon, 31 May 2021 14:22:58 CEST |
| service | dash-service | minikube | amontmejatdabaux | v1.19.0 | Mon, 31 May 2021 14:27:37 CEST | Mon, 31 May 2021 14:37:19 CEST |
| ssh | | minikube | amontmejatdabaux | v1.19.0 | Mon, 31 May 2021 14:40:28 CEST | Mon, 31 May 2021 14:40:44 CEST |
| ssh | | minikube | amontmejatdabaux | v1.19.0 | Mon, 31 May 2021 15:37:49 CEST | Mon, 31 May 2021 15:37:52 CEST |
| start | | minikube | amontmejatdabaux | v1.19.0 | Tue, 01 Jun 2021 10:43:25 CEST | Tue, 01 Jun 2021 10:44:23 CEST |
| ssh | | minikube | amontmejatdabaux | v1.19.0 | Tue, 01 Jun 2021 10:48:40 CEST | Tue, 01 Jun 2021 10:48:50 CEST |
| ssh | | minikube | amontmejatdabaux | v1.19.0 | Tue, 01 Jun 2021 10:57:18 CEST | Tue, 01 Jun 2021 10:57:30 CEST |
| service | dash-service | minikube | amontmejatdabaux | v1.19.0 | Tue, 01 Jun 2021 11:03:18 CEST | Tue, 01 Jun 2021 11:12:38 CEST |
| service | dash-service | minikube | amontmejatdabaux | v1.19.0 | Tue, 01 Jun 2021 11:24:23 CEST | Tue, 01 Jun 2021 11:24:28 CEST |
| stop | | minikube | amontmejatdabaux | v1.19.0 | Tue, 01 Jun 2021 11:28:39 CEST | Tue, 01 Jun 2021 11:28:52 CEST |
| start | | minikube | amontmejatdabaux | v1.19.0 | Tue, 01 Jun 2021 11:29:21 CEST | Tue, 01 Jun 2021 11:30:15 CEST |
| ssh | | minikube | amontmejatdabaux | v1.19.0 | Tue, 01 Jun 2021 11:31:13 CEST | Tue, 01 Jun 2021 11:32:08 CEST |
| stop | | minikube | amontmejatdabaux | v1.19.0 | Tue, 01 Jun 2021 11:34:30 CEST | Tue, 01 Jun 2021 11:34:48 CEST |
| start | | minikube | amontmejatdabaux | v1.19.0 | Tue, 01 Jun 2021 11:36:06 CEST | Tue, 01 Jun 2021 11:37:00 CEST |
| ssh | | minikube | amontmejatdabaux | v1.19.0 | Tue, 01 Jun 2021 11:37:11 CEST | Tue, 01 Jun 2021 11:37:20 CEST |
| delete | | minikube | amontmejatdabaux | v1.19.0 | Tue, 01 Jun 2021 11:37:24 CEST | Tue, 01 Jun 2021 11:37:46 CEST |
| start | | minikube | amontmejatdabaux | v1.19.0 | Tue, 01 Jun 2021 11:37:51 CEST | Tue, 01 Jun 2021 11:39:20 CEST |
| ssh | | minikube | amontmejatdabaux | v1.19.0 | Tue, 01 Jun 2021 11:41:57 CEST | Tue, 01 Jun 2021 11:42:01 CEST |
| -p | minikube docker-env | minikube | amontmejatdabaux | v1.19.0 | Tue, 01 Jun 2021 11:44:17 CEST | Tue, 01 Jun 2021 11:44:18 CEST |
| ssh | | minikube | amontmejatdabaux | v1.19.0 | Tue, 01 Jun 2021 12:13:13 CEST | Tue, 01 Jun 2021 12:13:21 CEST |
| ssh | | minikube | amontmejatdabaux | v1.19.0 | Tue, 01 Jun 2021 13:21:45 CEST | Tue, 01 Jun 2021 13:21:54 CEST |
|---------|-------------------------|----------|------------------|---------|--------------------------------|--------------------------------|

==> Last Start <==
Log file created at: 2021/06/01 11:37:51
Running on machine: LFRC02DF22FMD6N
Binary: Built with gc go1.16 for darwin/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0601 11:37:51.116850 4623 out.go:278] Setting OutFile to fd 1 ...
I0601 11:37:51.117046 4623 out.go:330] isatty.IsTerminal(1) = true
I0601 11:37:51.117049 4623 out.go:291] Setting ErrFile to fd 2...
I0601 11:37:51.117053 4623 out.go:330] isatty.IsTerminal(2) = true
I0601 11:37:51.117150 4623 root.go:317] Updating PATH: /Users/amontmejatdabaux/.minikube/bin
I0601 11:37:51.117550 4623 out.go:285] Setting JSON to false
I0601 11:37:51.154588 4623 start.go:108] hostinfo: {"hostname":"LFRC02DF22FMD6N","uptime":8760,"bootTime":1622531511,"procs":446,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"10.15.7","kernelVersion":"19.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"64f23d6d-4c30-3fc4-b7c2-9ee0bba0d29a"}
W0601 11:37:51.154690 4623 start.go:116] gopshost.Virtualization returned error: not implemented yet
I0601 11:37:51.179285 4623 out.go:157] 😄 minikube v1.19.0 on Darwin 10.15.7
I0601 11:37:51.180256 4623 driver.go:322] Setting default libvirt URI to qemu:///system
I0601 11:37:51.180528 4623 global.go:103] Querying for installed drivers using PATH=/Users/amontmejatdabaux/.minikube/bin:/Users/amontmejatdabaux/Documents/Personnel/neovide/target/release:/Users/amontmejatdabaux/Documents/Altran/Projects/minikube:/Users/amontmejatdabaux/Documents/Altran/Projects/kubernetes:/Users/amontmejatdabaux/Documents/Altran/Projects/sonar-scanner-4.6.0.2311-macosx/bin:/Users/amontmejatdabaux/Library/Python/3.8/bin:/Users/amontmejatdabaux/.cargo/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/share/dotnet:~/.dotnet/tools:/Library/Apple/usr/bin:/Library/Frameworks/Mono.framework/Versions/Current/Commands
I0601 11:37:51.205188 4623 global.go:111] hyperkit default: true priority: 8, state: {Installed:true Healthy:true Running:true NeedsImprovement:false Error: Reason: Fix: Doc:}
I0601 11:37:51.205328 4623 global.go:111] parallels default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "prlctl": executable file not found in $PATH Reason: Fix:Install Parallels Desktop for Mac Doc:https://minikube.sigs.k8s.io/docs/drivers/parallels/}
I0601 11:37:51.208720 4623 global.go:111] podman default: true priority: 3, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "podman": executable file not found in $PATH Reason: Fix:Install Podman Doc:https://minikube.sigs.k8s.io/docs/drivers/podman/}
I0601 11:37:51.208742 4623 global.go:111] ssh default: false priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
I0601 11:37:51.209031 4623 global.go:111] virtualbox default: true priority: 6, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:unable to find VBoxManage in $PATH Reason: Fix:Install VirtualBox Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/virtualbox/}
I0601 11:37:51.209162 4623 global.go:111] vmware default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "docker-machine-driver-vmware": executable file not found in $PATH Reason: Fix:Install docker-machine-driver-vmware Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/}
I0601 11:37:51.209334 4623 global.go:111] vmwarefusion default: false priority: 1, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:the 'vmwarefusion' driver is no longer available Reason: Fix:Switch to the newer 'vmware' driver by using '--driver=vmware'. This may require first deleting your existing cluster Doc:https://minikube.sigs.k8s.io/docs/drivers/vmware/}
I0601 11:37:51.317788 4623 docker.go:119] docker version: linux-20.10.5
E0601 14:15:54.185650 6690 out.go:374] unable to parse "I0601 11:37:51.317971 4623 cli_runner.go:115] Run: docker system info --format "{{json .}}"\n": template: I0601 11:37:51.317971 4623 cli_runner.go:115] Run: docker system info --format "{{json .}}"
:1: function "json" not defined - returning raw string.
I0601 11:37:51.317971 4623 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0601 11:37:51.845843 4623 info.go:261] docker info: {ID:Z3ZN:O33Q:V6G5:X5BQ:Y7I3:DKU3:I3BE:W3CJ:KDDM:SF4P:PDAQ:6KNM Containers:12 ContainersRunning:0 ContainersPaused:0 ContainersStopped:12 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-06-01 09:37:51.443986111 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:2083364864 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:}}
I0601 11:37:51.845927 4623 global.go:111] docker default: true priority: 9, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
I0601 11:37:51.845942 4623 driver.go:258] not recommending "ssh" due to default: false
I0601 11:37:51.845954 4623 driver.go:292] Picked: docker
I0601 11:37:51.845958 4623 driver.go:293] Alternatives: [hyperkit ssh]
I0601 11:37:51.845960 4623 driver.go:294] Rejects: [podman virtualbox vmware vmwarefusion parallels]
I0601 11:37:51.871252 4623 out.go:157] ✨ Automatically selected the docker driver. Other choices: hyperkit, ssh
I0601 11:37:51.871353 4623 start.go:276] selected driver: docker
I0601 11:37:51.871379 4623 start.go:718] validating driver "docker" against
I0601 11:37:51.871398 4623 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
E0601 14:15:54.434415 6690 out.go:374] unable to parse "I0601 11:37:51.871884 4623 cli_runner.go:115] Run: docker system info --format "{{json .}}"\n": template: I0601 11:37:51.871884 4623 cli_runner.go:115] Run: docker system info --format "{{json .}}"
:1: function "json" not defined - returning raw string.
I0601 11:37:51.871884 4623 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0601 11:37:52.084016 4623 info.go:261] docker info: {ID:Z3ZN:O33Q:V6G5:X5BQ:Y7I3:DKU3:I3BE:W3CJ:KDDM:SF4P:PDAQ:6KNM Containers:12 ContainersRunning:0 ContainersPaused:0 ContainersStopped:12 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-06-01 09:37:52.021379726 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:2083364864 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:}}
I0601 11:37:52.084119 4623 start_flags.go:253] no existing cluster config was found, will generate one from the flags
I0601 11:37:52.088165 4623 start_flags.go:311] Using suggested 1986MB memory alloc based on sys=16384MB, container=1986MB
I0601 11:37:52.088779 4623 start_flags.go:730] Wait components to verify : map[apiserver:true system_pods:true]
I0601 11:37:52.088803 4623 cni.go:81] Creating CNI manager for ""
I0601 11:37:52.088809 4623 cni.go:153] CNI unnecessary in this configuration, recommending no CNI
I0601 11:37:52.088813 4623 start_flags.go:270] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 Memory:1986 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0601 11:37:52.112689 4623 out.go:157] 👍 Starting control plane node minikube in cluster minikube
I0601 11:37:52.112833 4623 image.go:107] Checking for gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 in local docker daemon
I0601 11:37:52.260383 4623 image.go:111] Found gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 in local docker daemon, skipping pull
I0601 11:37:52.260409 4623 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 exists in daemon, skipping pull
I0601 11:37:52.260420 4623 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
I0601 11:37:52.260460 4623 preload.go:105] Found local preload: /Users/amontmejatdabaux/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4
I0601 11:37:52.260465 4623 cache.go:54] Caching tarball of preloaded images
I0601 11:37:52.260478 4623 preload.go:131] Found /Users/amontmejatdabaux/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0601 11:37:52.260480 4623 cache.go:57] Finished verifying existence of preloaded tar for v1.20.2 on docker
I0601 11:37:52.260804 4623 profile.go:148] Saving config to /Users/amontmejatdabaux/.minikube/profiles/minikube/config.json ...
I0601 11:37:52.260830 4623 lock.go:36] WriteFile acquiring /Users/amontmejatdabaux/.minikube/profiles/minikube/config.json: {Name:mk7f14a86a9a54de24a5acf0868f0bdfb1728398 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0601 11:37:52.261202 4623 cache.go:185] Successfully downloaded all kic artifacts
I0601 11:37:52.261224 4623 start.go:313] acquiring machines lock for minikube: {Name:mk563fe32e8b25353e13d80e37c132b4d9158249 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0601 11:37:52.261285 4623 start.go:317] acquired machines lock for "minikube" in 53.059µs
I0601 11:37:52.261302 4623 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 Memory:1986 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
I0601 11:37:52.261344 4623 start.go:126] createHost starting for "" (driver="docker")
I0601 11:37:52.285734 4623 out.go:184] 🔥 Creating docker container (CPUs=2, Memory=1986MB) ...
I0601 11:37:52.285959 4623 start.go:160] libmachine.API.Create for "minikube" (driver="docker")
I0601 11:37:52.286331 4623 client.go:168] LocalClient.Create starting
I0601 11:37:52.286669 4623 main.go:126] libmachine: Reading certificate data from /Users/amontmejatdabaux/.minikube/certs/ca.pem
I0601 11:37:52.286971 4623 main.go:126] libmachine: Decoding PEM data...
I0601 11:37:52.286998 4623 main.go:126] libmachine: Parsing certificate...
I0601 11:37:52.287410 4623 main.go:126] libmachine: Reading certificate data from /Users/amontmejatdabaux/.minikube/certs/cert.pem
I0601 11:37:52.287658 4623 main.go:126] libmachine: Decoding PEM data...
I0601 11:37:52.287683 4623 main.go:126] libmachine: Parsing certificate...
E0601 14:15:55.227670 6690 out.go:379] unable to execute I0601 11:37:52.288507 4623 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
: template: I0601 11:37:52.288507 4623 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
:1:262: executing "I0601 11:37:52.288507 4623 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
I0601 11:37:52.288507 4623 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
E0601 14:15:55.250333 6690 out.go:379] unable to execute W0601 11:37:52.413603 4623 cli_runner.go:162] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
: template: W0601 11:37:52.413603 4623 cli_runner.go:162] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
:1:257: executing "W0601 11:37:52.413603 4623 cli_runner.go:162] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
W0601 11:37:52.413603 4623 cli_runner.go:162] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0601 11:37:52.414190 4623 network_create.go:249] running [docker network inspect minikube] to gather additional debugging logs...
I0601 11:37:52.414210 4623 cli_runner.go:115] Run: docker network inspect minikube
W0601 11:37:52.538363 4623 cli_runner.go:162] docker network inspect minikube returned with exit code 1
I0601 11:37:52.538391 4623 network_create.go:252] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
stdout:
[]

stderr:
Error: No such network: minikube
I0601 11:37:52.538401 4623 network_create.go:254] output of [docker network inspect minikube]: -- stdout --
[]

-- /stdout --
** stderr **
Error: No such network: minikube

** /stderr **
E0601 14:15:55.658459 6690 out.go:379] unable to execute I0601 11:37:52.538537 4623 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
: template: I0601 11:37:52.538537 4623 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
:1:260: executing "I0601 11:37:52.538537 4623 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"\n" at <index .Options "com.docker.network.driver.mtu">: error calling index: index of untyped nil - returning raw string.
I0601 11:37:52.538537 4623 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0601 11:37:52.666139 4623 network.go:263] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00000f0c8] misses:0}
I0601 11:37:52.666488 4623 network.go:210] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0601 11:37:52.666782 4623 network_create.go:100] attempt to create docker network minikube 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0601 11:37:52.666930 4623 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true minikube
I0601 11:37:58.517387 4623 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true minikube: (5.850396751s)
I0601 11:37:58.517405 4623 network_create.go:84] docker network minikube 192.168.49.0/24 created
I0601 11:37:58.517718 4623 kic.go:102] calculated static IP "192.168.49.2" for the "minikube" container
I0601 11:37:58.518112 4623 cli_runner.go:115] Run: docker ps -a --format
I0601 11:37:58.648180 4623 cli_runner.go:115] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0601 11:37:58.773108 4623 oci.go:102] Successfully created a docker volume minikube
I0601 11:37:58.773254 4623 cli_runner.go:115] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 -d /var/lib
I0601 11:37:59.580595 4623 oci.go:106] Successfully prepared a docker volume minikube
I0601 11:37:59.580679 4623 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
I0601 11:37:59.580732 4623 preload.go:105] Found local preload: /Users/amontmejatdabaux/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4
I0601 11:37:59.580741 4623 kic.go:175] Starting extracting preloaded images to volume ...
I0601 11:37:59.580880 4623 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/amontmejatdabaux/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 -I lz4 -xf /preloaded.tar -C /extractDir
E0601 14:15:56.040339 6690 out.go:374] unable to parse "I0601 11:37:59.580941 4623 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"\n": template: I0601 11:37:59.580941 4623 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
:1: function "json" not defined - returning raw string.
I0601 11:37:59.580941 4623 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
I0601 11:37:59.789479 4623 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=1986mb --memory-swap=1986mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6
I0601 11:38:06.700549 4623 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/amontmejatdabaux/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 -I lz4 -xf /preloaded.tar -C /extractDir: (7.119597979s)
I0601 11:38:06.700899 4623 kic.go:184] duration metric: took 7.119815 seconds to extract preloaded images to volume
I0601 11:38:13.840399 4623 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=1986mb --memory-swap=1986mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6: (14.050771322s)
I0601 11:38:13.840554 4623 cli_runner.go:115] Run: docker container inspect minikube --format=
I0601 11:38:13.968369 4623 cli_runner.go:115] Run: docker container inspect minikube --format=
I0601 11:38:14.096222 4623 cli_runner.go:115] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables
I0601 11:38:14.317750 4623 oci.go:278] the created container "minikube" has a running status.
I0601 11:38:14.317779 4623 kic.go:206] Creating ssh key for kic: /Users/amontmejatdabaux/.minikube/machines/minikube/id_rsa...
I0601 11:38:14.481698 4623 kic_runner.go:188] docker (temp): /Users/amontmejatdabaux/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0601 11:38:14.671825 4623 cli_runner.go:115] Run: docker container inspect minikube --format=
I0601 11:38:14.801710 4623 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0601 11:38:14.801721 4623 kic_runner.go:115] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0601 11:38:14.985547 4623 cli_runner.go:115] Run: docker container inspect minikube --format=
I0601 11:38:15.111433 4623 machine.go:88] provisioning docker machine ...
I0601 11:38:15.111468 4623 ubuntu.go:169] provisioning hostname "minikube"
E0601 14:15:56.423747 6690 out.go:379] unable to execute I0601 11:38:15.111615 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
: template: I0601 11:38:15.111615 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
:1:94: executing "I0601 11:38:15.111615 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
I0601 11:38:15.111615 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0601 11:38:15.235929 4623 main.go:126] libmachine: Using SSH client type: native
E0601 14:15:56.466858 6690 out.go:374] unable to parse "I0601 11:38:15.236171 4623 main.go:126] libmachine: &{{{ 0 [] [] []} docker [0x13f80c0] 0x13f8080 [] 0s} 127.0.0.1 53948 }\n": template: I0601 11:38:15.236171 4623 main.go:126] libmachine: &{{{ 0 [] [] []} docker [0x13f80c0] 0x13f8080 [] 0s} 127.0.0.1 53948 }
:1: unexpected "{" in command - returning raw string.
I0601 11:38:15.236171 4623 main.go:126] libmachine: &{{{ 0 [] [] []} docker [0x13f80c0] 0x13f8080 [] 0s} 127.0.0.1 53948 }
I0601 11:38:15.236179 4623 main.go:126] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0601 11:38:15.244689 4623 main.go:126] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I0601 11:38:18.390069 4623 main.go:126] libmachine: SSH cmd err, output: : minikube

E0601 14:15:56.598491 6690 out.go:379] unable to execute I0601 11:38:18.390194 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
: template: I0601 11:38:18.390194 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
:1:94: executing "I0601 11:38:18.390194 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
I0601 11:38:18.390194 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0601 11:38:18.525025 4623 main.go:126] libmachine: Using SSH client type: native
E0601 14:15:56.641597 6690 out.go:374] unable to parse "I0601 11:38:18.525230 4623 main.go:126] libmachine: &{{{ 0 [] [] []} docker [0x13f80c0] 0x13f8080 [] 0s} 127.0.0.1 53948 }\n": template: I0601 11:38:18.525230 4623 main.go:126] libmachine: &{{{ 0 [] [] []} docker [0x13f80c0] 0x13f8080 [] 0s} 127.0.0.1 53948 }
:1: unexpected "{" in command - returning raw string.
I0601 11:38:18.525230 4623 main.go:126] libmachine: &{{{ 0 [] [] []} docker [0x13f80c0] 0x13f8080 [] 0s} 127.0.0.1 53948 }
I0601 11:38:18.525242 4623 main.go:126] libmachine: About to run SSH command:

	if ! grep -xq '.*\sminikube' /etc/hosts; then
		if grep -xq '127.0.1.1\s.*' /etc/hosts; then
			sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
		else 
			echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
		fi
	fi

I0601 11:38:18.644902 4623 main.go:126] libmachine: SSH cmd err, output: :
I0601 11:38:18.644918 4623 ubuntu.go:175] set auth options {CertDir:/Users/amontmejatdabaux/.minikube CaCertPath:/Users/amontmejatdabaux/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/amontmejatdabaux/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/amontmejatdabaux/.minikube/machines/server.pem ServerKeyPath:/Users/amontmejatdabaux/.minikube/machines/server-key.pem ClientKeyPath:/Users/amontmejatdabaux/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/amontmejatdabaux/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/amontmejatdabaux/.minikube}
I0601 11:38:18.644938 4623 ubuntu.go:177] setting up certificates
I0601 11:38:18.644944 4623 provision.go:83] configureAuth start
I0601 11:38:18.645045 4623 cli_runner.go:115] Run: docker container inspect -f "" minikube
I0601 11:38:18.769341 4623 provision.go:137] copyHostCerts
I0601 11:38:18.769447 4623 exec_runner.go:145] found /Users/amontmejatdabaux/.minikube/ca.pem, removing ...
I0601 11:38:18.769453 4623 exec_runner.go:190] rm: /Users/amontmejatdabaux/.minikube/ca.pem
I0601 11:38:18.770541 4623 exec_runner.go:152] cp: /Users/amontmejatdabaux/.minikube/certs/ca.pem --> /Users/amontmejatdabaux/.minikube/ca.pem (1107 bytes)
I0601 11:38:18.771030 4623 exec_runner.go:145] found /Users/amontmejatdabaux/.minikube/cert.pem, removing ...
I0601 11:38:18.771033 4623 exec_runner.go:190] rm: /Users/amontmejatdabaux/.minikube/cert.pem
I0601 11:38:18.771105 4623 exec_runner.go:152] cp: /Users/amontmejatdabaux/.minikube/certs/cert.pem --> /Users/amontmejatdabaux/.minikube/cert.pem (1147 bytes)
I0601 11:38:18.771537 4623 exec_runner.go:145] found /Users/amontmejatdabaux/.minikube/key.pem, removing ...
I0601 11:38:18.771541 4623 exec_runner.go:190] rm: /Users/amontmejatdabaux/.minikube/key.pem
I0601 11:38:18.771773 4623 exec_runner.go:152] cp: /Users/amontmejatdabaux/.minikube/certs/key.pem --> /Users/amontmejatdabaux/.minikube/key.pem (1675 bytes)
I0601 11:38:18.772030 4623 provision.go:111] generating server cert: /Users/amontmejatdabaux/.minikube/machines/server.pem ca-key=/Users/amontmejatdabaux/.minikube/certs/ca.pem private-key=/Users/amontmejatdabaux/.minikube/certs/ca-key.pem org=amontmejatdabaux.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0601 11:38:18.915489 4623 provision.go:165] copyRemoteCerts
I0601 11:38:18.915828 4623 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
E0601 14:15:57.282303 6690 out.go:379] unable to execute I0601 11:38:18.915906 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
: template: I0601 11:38:18.915906 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
:1:94: executing "I0601 11:38:18.915906 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
I0601 11:38:18.915906 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0601 11:38:19.040072 4623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53948 SSHKeyPath:/Users/amontmejatdabaux/.minikube/machines/minikube/id_rsa Username:docker}
I0601 11:38:19.125248 4623 ssh_runner.go:316] scp /Users/amontmejatdabaux/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1107 bytes)
I0601 11:38:19.146749 4623 ssh_runner.go:316] scp /Users/amontmejatdabaux/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
I0601 11:38:19.164918 4623 ssh_runner.go:316] scp /Users/amontmejatdabaux/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0601 11:38:19.183369 4623 provision.go:86] duration metric: configureAuth took 538.411748ms
I0601 11:38:19.183380 4623 ubuntu.go:193] setting minikube options for container-runtime
E0601 14:15:57.440824 6690 out.go:379] unable to execute I0601 11:38:19.183912 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
: template: I0601 11:38:19.183912 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
:1:94: executing "I0601 11:38:19.183912 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
I0601 11:38:19.183912 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0601 11:38:19.308302 4623 main.go:126] libmachine: Using SSH client type: native
E0601 14:15:57.488150 6690 out.go:374] unable to parse "I0601 11:38:19.308506 4623 main.go:126] libmachine: &{{{ 0 [] [] []} docker [0x13f80c0] 0x13f8080 [] 0s} 127.0.0.1 53948 }\n": template: I0601 11:38:19.308506 4623 main.go:126] libmachine: &{{{ 0 [] [] []} docker [0x13f80c0] 0x13f8080 [] 0s} 127.0.0.1 53948 }
:1: unexpected "{" in command - returning raw string.
I0601 11:38:19.308506 4623 main.go:126] libmachine: &{{{ 0 [] [] []} docker [0x13f80c0] 0x13f8080 [] 0s} 127.0.0.1 53948 }
I0601 11:38:19.308512 4623 main.go:126] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0601 11:38:19.430663 4623 main.go:126] libmachine: SSH cmd err, output: : overlay

I0601 11:38:19.430677 4623 ubuntu.go:71] root file system type: overlay
I0601 11:38:19.431231 4623 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
E0601 14:15:57.630203 6690 out.go:379] unable to execute I0601 11:38:19.431335 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
: template: I0601 11:38:19.431335 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
:1:94: executing "I0601 11:38:19.431335 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
I0601 11:38:19.431335 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0601 11:38:19.559231 4623 main.go:126] libmachine: Using SSH client type: native
E0601 14:15:57.671238 6690 out.go:374] unable to parse "I0601 11:38:19.559442 4623 main.go:126] libmachine: &{{{ 0 [] [] []} docker [0x13f80c0] 0x13f8080 [] 0s} 127.0.0.1 53948 }\n": template: I0601 11:38:19.559442 4623 main.go:126] libmachine: &{{{ 0 [] [] []} docker [0x13f80c0] 0x13f8080 [] 0s} 127.0.0.1 53948 }
:1: unexpected "{" in command - returning raw string.
I0601 11:38:19.559442 4623 main.go:126] libmachine: &{{{ 0 [] [] []} docker [0x13f80c0] 0x13f8080 [] 0s} 127.0.0.1 53948 }
I0601 11:38:19.559505 4623 main.go:126] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0601 11:38:19.687998 4623 main.go:126] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

E0601 14:16:00.028334 6690 out.go:379] unable to execute I0601 11:38:19.688120 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
: template: I0601 11:38:19.688120 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
:1:94: executing "I0601 11:38:19.688120 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
I0601 11:38:19.688120 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0601 11:38:19.814294 4623 main.go:126] libmachine: Using SSH client type: native
E0601 14:16:00.076084 6690 out.go:374] unable to parse "I0601 11:38:19.814487 4623 main.go:126] libmachine: &{{{ 0 [] [] []} docker [0x13f80c0] 0x13f8080 [] 0s} 127.0.0.1 53948 }\n": template: I0601 11:38:19.814487 4623 main.go:126] libmachine: &{{{ 0 [] [] []} docker [0x13f80c0] 0x13f8080 [] 0s} 127.0.0.1 53948 }
:1: unexpected "{" in command - returning raw string.
I0601 11:38:19.814487 4623 main.go:126] libmachine: &{{{ 0 [] [] []} docker [0x13f80c0] 0x13f8080 [] 0s} 127.0.0.1 53948 }
I0601 11:38:19.814501 4623 main.go:126] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0601 11:38:45.916962 4623 main.go:126] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2021-03-02 20:16:15.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2021-06-01 09:38:19.688458873 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
+BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60

[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always

-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure

-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity

-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500

[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
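
The command that produced the diff above (`sudo diff -u ... || { mv ...; daemon-reload; restart docker; }`) is an idempotent-update idiom: `diff -u` exits non-zero on any difference, so the unit is only swapped in and docker only restarted when the rendered file actually changed. A reduced sketch of the same pattern (the `myapp` paths are illustrative, not from the log):

```shell
# Replace a config file and restart its service only when the contents changed;
# the || branch runs only when diff reports a difference (non-zero exit)
sudo diff -u /etc/myapp.conf /etc/myapp.conf.new || {
  sudo mv /etc/myapp.conf.new /etc/myapp.conf
  sudo systemctl daemon-reload
  sudo systemctl restart myapp
}
```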

I0601 11:38:45.916985 4623 machine.go:91] provisioned docker machine in 30.805462468s
I0601 11:38:45.916995 4623 client.go:171] LocalClient.Create took 53.630526935s
I0601 11:38:45.917016 4623 start.go:168] duration metric: libmachine.API.Create for "minikube" took 53.630916535s
I0601 11:38:45.917024 4623 start.go:267] post-start starting for "minikube" (driver="docker")
I0601 11:38:45.917026 4623 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0601 11:38:45.917182 4623 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
E0601 14:16:02.023147 6690 out.go:379] unable to execute I0601 11:38:45.917264 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
: template: I0601 11:38:45.917264 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
:1:94: executing "I0601 11:38:45.917264 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
I0601 11:38:45.917264 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0601 11:38:46.045526 4623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53948 SSHKeyPath:/Users/amontmejatdabaux/.minikube/machines/minikube/id_rsa Username:docker}
I0601 11:38:46.138275 4623 ssh_runner.go:149] Run: cat /etc/os-release
I0601 11:38:46.142655 4623 main.go:126] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0601 11:38:46.142671 4623 main.go:126] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0601 11:38:46.142678 4623 main.go:126] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0601 11:38:46.142688 4623 info.go:137] Remote host: Ubuntu 20.04.1 LTS
I0601 11:38:46.142705 4623 filesync.go:118] Scanning /Users/amontmejatdabaux/.minikube/addons for local assets ...
I0601 11:38:46.143142 4623 filesync.go:118] Scanning /Users/amontmejatdabaux/.minikube/files for local assets ...
I0601 11:38:46.143250 4623 start.go:270] post-start completed in 226.221557ms
I0601 11:38:46.144131 4623 cli_runner.go:115] Run: docker container inspect -f "" minikube
I0601 11:38:46.269851 4623 profile.go:148] Saving config to /Users/amontmejatdabaux/.minikube/profiles/minikube/config.json ...
I0601 11:38:46.271280 4623 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
E0601 14:16:02.313393 6690 out.go:379] unable to execute I0601 11:38:46.271374 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
: template: I0601 11:38:46.271374 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
:1:94: executing "I0601 11:38:46.271374 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
I0601 11:38:46.271374 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0601 11:38:46.397280 4623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53948 SSHKeyPath:/Users/amontmejatdabaux/.minikube/machines/minikube/id_rsa Username:docker}
I0601 11:38:46.482701 4623 start.go:129] duration metric: createHost completed in 54.221211254s
I0601 11:38:46.482716 4623 start.go:80] releasing machines lock for "minikube", held for 54.221285426s
I0601 11:38:46.482889 4623 cli_runner.go:115] Run: docker container inspect -f "" minikube
I0601 11:38:46.620200 4623 ssh_runner.go:149] Run: systemctl --version
E0601 14:16:02.446226 6690 out.go:379] unable to execute I0601 11:38:46.620276 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
: template: I0601 11:38:46.620276 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
:1:94: executing "I0601 11:38:46.620276 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
I0601 11:38:46.620276 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0601 11:38:46.621543 4623 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
E0601 14:16:02.492213 6690 out.go:379] unable to execute I0601 11:38:46.621818 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
: template: I0601 11:38:46.621818 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
:1:94: executing "I0601 11:38:46.621818 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
I0601 11:38:46.621818 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0601 11:38:46.752602 4623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53948 SSHKeyPath:/Users/amontmejatdabaux/.minikube/machines/minikube/id_rsa Username:docker}
I0601 11:38:46.752809 4623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53948 SSHKeyPath:/Users/amontmejatdabaux/.minikube/machines/minikube/id_rsa Username:docker}
I0601 11:38:47.257034 4623 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
I0601 11:38:47.268683 4623 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0601 11:38:47.279515 4623 cruntime.go:219] skipping containerd shutdown because we are bound to it
I0601 11:38:47.279623 4623 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
I0601 11:38:47.290309 4623 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0601 11:38:47.304275 4623 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0601 11:38:47.314130 4623 ssh_runner.go:149] Run: sudo systemctl daemon-reload
I0601 11:38:47.373195 4623 ssh_runner.go:149] Run: sudo systemctl start docker
I0601 11:38:47.384931 4623 ssh_runner.go:149] Run: docker version --format
I0601 11:38:47.562161 4623 out.go:184] 🐳 Preparing Kubernetes v1.20.2 on Docker 20.10.5 ...
I0601 11:38:47.571001 4623 cli_runner.go:115] Run: docker exec -t minikube dig +short host.docker.internal
I0601 11:38:47.795526 4623 network.go:68] got host ip for mount in container by digging dns: 192.168.65.2
I0601 11:38:47.796007 4623 ssh_runner.go:149] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts
I0601 11:38:47.800930 4623 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
E0601 14:16:02.925669 6690 out.go:379] unable to execute I0601 11:38:47.811871 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
: template: I0601 11:38:47.811871 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
:1:94: executing "I0601 11:38:47.811871 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube\n" at <index .NetworkSettings.Ports "8443/tcp">: error calling index: index of untyped nil - returning raw string.
I0601 11:38:47.811871 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0601 11:38:47.936863 4623 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
I0601 11:38:47.936891 4623 preload.go:105] Found local preload: /Users/amontmejatdabaux/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4
I0601 11:38:47.936992 4623 ssh_runner.go:149] Run: docker images --format :
I0601 11:38:47.978438 4623 docker.go:455] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/kube-proxy:v1.20.2
k8s.gcr.io/kube-apiserver:v1.20.2
k8s.gcr.io/kube-controller-manager:v1.20.2
k8s.gcr.io/kube-scheduler:v1.20.2
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2

-- /stdout --
I0601 11:38:47.978450 4623 docker.go:392] Images already preloaded, skipping extraction
I0601 11:38:47.978640 4623 ssh_runner.go:149] Run: docker images --format :
I0601 11:38:48.016751 4623 docker.go:455] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/kube-proxy:v1.20.2
k8s.gcr.io/kube-apiserver:v1.20.2
k8s.gcr.io/kube-controller-manager:v1.20.2
k8s.gcr.io/kube-scheduler:v1.20.2
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2

-- /stdout --
I0601 11:38:48.016771 4623 cache_images.go:74] Images are preloaded, skipping loading
I0601 11:38:48.016906 4623 ssh_runner.go:149] Run: docker info --format
I0601 11:38:48.142581 4623 cni.go:81] Creating CNI manager for ""
I0601 11:38:48.142588 4623 cni.go:153] CNI unnecessary in this configuration, recommending no CNI
I0601 11:38:48.142597 4623 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0601 11:38:48.142609 4623 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0601 11:38:48.142730 4623 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
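
To compare this generated config against what the cluster is actually running, kubeadm keeps the ClusterConfiguration in a ConfigMap, so (assuming kubectl is pointed at the minikube context) it can be dumped directly:

```shell
# Dump the ClusterConfiguration that kubeadm stored at init time
kubectl -n kube-system get configmap kubeadm-config -o yaml
```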

I0601 11:38:48.142812 4623 kubeadm.go:897] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

[Install]
config:
{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0601 11:38:48.142939 4623 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
I0601 11:38:48.151763 4623 binaries.go:44] Found k8s binaries, skipping transfer
I0601 11:38:48.151851 4623 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0601 11:38:48.159405 4623 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes)
I0601 11:38:48.172912 4623 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0601 11:38:48.185412 4623 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1840 bytes)
I0601 11:38:48.197846 4623 ssh_runner.go:149] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0601 11:38:48.201608 4623 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0601 11:38:48.210943 4623 certs.go:52] Setting up /Users/amontmejatdabaux/.minikube/profiles/minikube for IP: 192.168.49.2
I0601 11:38:48.211085 4623 certs.go:171] skipping minikubeCA CA generation: /Users/amontmejatdabaux/.minikube/ca.key
I0601 11:38:48.211141 4623 certs.go:171] skipping proxyClientCA CA generation: /Users/amontmejatdabaux/.minikube/proxy-client-ca.key
I0601 11:38:48.211254 4623 certs.go:286] generating minikube-user signed cert: /Users/amontmejatdabaux/.minikube/profiles/minikube/client.key
I0601 11:38:48.211619 4623 crypto.go:69] Generating cert /Users/amontmejatdabaux/.minikube/profiles/minikube/client.crt with IP's: []
I0601 11:38:48.368419 4623 crypto.go:157] Writing cert to /Users/amontmejatdabaux/.minikube/profiles/minikube/client.crt ...
I0601 11:38:48.368429 4623 lock.go:36] WriteFile acquiring /Users/amontmejatdabaux/.minikube/profiles/minikube/client.crt: {Name:mk997eef99b5be13f1e2496606935b94df33548c Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0601 11:38:48.370194 4623 crypto.go:165] Writing key to /Users/amontmejatdabaux/.minikube/profiles/minikube/client.key ...
I0601 11:38:48.370221 4623 lock.go:36] WriteFile acquiring /Users/amontmejatdabaux/.minikube/profiles/minikube/client.key: {Name:mkb684e7dcd04f85db5763cff501b27dec1586be Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0601 11:38:48.370958 4623 certs.go:286] generating minikube signed cert: /Users/amontmejatdabaux/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
I0601 11:38:48.370964 4623 crypto.go:69] Generating cert /Users/amontmejatdabaux/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0601 11:38:48.479178 4623 crypto.go:157] Writing cert to /Users/amontmejatdabaux/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ...
I0601 11:38:48.479192 4623 lock.go:36] WriteFile acquiring /Users/amontmejatdabaux/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mk19d90ce67d5bcd2555e3a7f4fc994b449559c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0601 11:38:48.479510 4623 crypto.go:165] Writing key to /Users/amontmejatdabaux/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ...
I0601 11:38:48.479515 4623 lock.go:36] WriteFile acquiring /Users/amontmejatdabaux/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mkc3923307c63c0eda6b6fa17705ff7b4962092e Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0601 11:38:48.480049 4623 certs.go:297] copying /Users/amontmejatdabaux/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /Users/amontmejatdabaux/.minikube/profiles/minikube/apiserver.crt
I0601 11:38:48.480788 4623 certs.go:301] copying /Users/amontmejatdabaux/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /Users/amontmejatdabaux/.minikube/profiles/minikube/apiserver.key
I0601 11:38:48.481196 4623 certs.go:286] generating aggregator signed cert: /Users/amontmejatdabaux/.minikube/profiles/minikube/proxy-client.key
I0601 11:38:48.481200 4623 crypto.go:69] Generating cert /Users/amontmejatdabaux/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0601 11:38:48.535286 4623 crypto.go:157] Writing cert to /Users/amontmejatdabaux/.minikube/profiles/minikube/proxy-client.crt ...
I0601 11:38:48.535291 4623 lock.go:36] WriteFile acquiring /Users/amontmejatdabaux/.minikube/profiles/minikube/proxy-client.crt: {Name:mk0601fdd1585869c865bec2fec55b0ff11ba8b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0601 11:38:48.535600 4623 crypto.go:165] Writing key to /Users/amontmejatdabaux/.minikube/profiles/minikube/proxy-client.key ...
I0601 11:38:48.535604 4623 lock.go:36] WriteFile acquiring /Users/amontmejatdabaux/.minikube/profiles/minikube/proxy-client.key: {Name:mk8ad7f1f3032b1bd6199fbafbc665598c01ad3e Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0601 11:38:48.537598 4623 certs.go:361] found cert: /Users/amontmejatdabaux/.minikube/certs/Users/amontmejatdabaux/.minikube/certs/ca-key.pem (1679 bytes)
I0601 11:38:48.537657 4623 certs.go:361] found cert: /Users/amontmejatdabaux/.minikube/certs/Users/amontmejatdabaux/.minikube/certs/ca.pem (1107 bytes)
I0601 11:38:48.537706 4623 certs.go:361] found cert: /Users/amontmejatdabaux/.minikube/certs/Users/amontmejatdabaux/.minikube/certs/cert.pem (1147 bytes)
I0601 11:38:48.537757 4623 certs.go:361] found cert: /Users/amontmejatdabaux/.minikube/certs/Users/amontmejatdabaux/.minikube/certs/key.pem (1675 bytes)
I0601 11:38:48.538551 4623 ssh_runner.go:316] scp /Users/amontmejatdabaux/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0601 11:38:48.559526 4623 ssh_runner.go:316] scp /Users/amontmejatdabaux/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0601 11:38:48.577308 4623 ssh_runner.go:316] scp /Users/amontmejatdabaux/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0601 11:38:48.595538 4623 ssh_runner.go:316] scp /Users/amontmejatdabaux/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0601 11:38:48.614704 4623 ssh_runner.go:316] scp /Users/amontmejatdabaux/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0601 11:38:48.632007 4623 ssh_runner.go:316] scp /Users/amontmejatdabaux/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0601 11:38:48.649157 4623 ssh_runner.go:316] scp /Users/amontmejatdabaux/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0601 11:38:48.666396 4623 ssh_runner.go:316] scp /Users/amontmejatdabaux/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0601 11:38:48.682727 4623 ssh_runner.go:316] scp /Users/amontmejatdabaux/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0601 11:38:48.700463 4623 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (740 bytes)
I0601 11:38:48.713665 4623 ssh_runner.go:149] Run: openssl version
I0601 11:38:48.721930 4623 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0601 11:38:48.730985 4623 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0601 11:38:48.734817 4623 certs.go:402] hashing: -rw-r--r-- 1 root root 1111 May 6 14:01 /usr/share/ca-certificates/minikubeCA.pem
I0601 11:38:48.734886 4623 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0601 11:38:48.740866 4623 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0601 11:38:48.749186 4623 kubeadm.go:386] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 Memory:1986 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0601 11:38:48.749307 4623 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*(kube-system) --format=
I0601 11:38:48.785447 4623 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0601 11:38:48.793370 4623 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0601 11:38:48.800999 4623 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
I0601 11:38:48.801075 4623 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0601 11:38:48.808280 4623 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0601 11:38:48.808299 4623 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0601 11:38:49.814503 4623 out.go:184] ▪ Generating certificates and keys ...
I0601 11:38:53.483194 4623 out.go:184] ▪ Booting up control plane ...
I0601 11:39:14.533325 4623 out.go:184] ▪ Configuring RBAC rules ...
I0601 11:39:14.925700 4623 cni.go:81] Creating CNI manager for ""
I0601 11:39:14.925711 4623 cni.go:153] CNI unnecessary in this configuration, recommending no CNI
I0601 11:39:14.925733 4623 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0601 11:39:14.926430 4623 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0601 11:39:14.926432 4623 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.19.0 minikube.k8s.io/commit=15cede53bdc5fe242228853e737333b09d4336b5 minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2021_06_01T11_39_14_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0601 11:39:14.949930 4623 ops.go:34] apiserver oom_adj: -16
I0601 11:39:15.203119 4623 kubeadm.go:973] duration metric: took 277.068651ms to wait for elevateKubeSystemPrivileges.
I0601 11:39:15.282948 4623 kubeadm.go:388] StartCluster complete in 26.533695785s
I0601 11:39:15.282969 4623 settings.go:142] acquiring lock: {Name:mk2f9e7c496e8eebbe13a6ff0d965022b2c7106d Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0601 11:39:15.283090 4623 settings.go:150] Updating kubeconfig: /Users/amontmejatdabaux/.kube/config
I0601 11:39:15.283834 4623 lock.go:36] WriteFile acquiring /Users/amontmejatdabaux/.kube/config: {Name:mka5878e515d85c3febb30f809dc934ef9e5fa84 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0601 11:39:15.812740 4623 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "minikube" rescaled to 1
I0601 11:39:15.812767 4623 start.go:200] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
I0601 11:39:15.812802 4623 addons.go:328] enableAddons start: toEnable=map[], additional=[]
I0601 11:39:15.812836 4623 addons.go:55] Setting storage-provisioner=true in profile "minikube"
I0601 11:39:15.812848 4623 addons.go:55] Setting default-storageclass=true in profile "minikube"
I0601 11:39:15.837569 4623 out.go:157] 🔎 Verifying Kubernetes components...
I0601 11:39:15.812849 4623 addons.go:131] Setting addon storage-provisioner=true in "minikube"
I0601 11:39:15.837588 4623 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
W0601 11:39:15.837591 4623 addons.go:140] addon storage-provisioner should already be in state true
I0601 11:39:15.837624 4623 host.go:66] Checking if "minikube" exists ...
I0601 11:39:15.837787 4623 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0601 11:39:15.838019 4623 cli_runner.go:115] Run: docker container inspect minikube --format=
I0601 11:39:15.838141 4623 cli_runner.go:115] Run: docker container inspect minikube --format=
E0601 14:16:07.719998 6690 out.go:379] unable to execute I0601 11:39:15.853439 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
: template: I0601 11:39:15.853439 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
:1:94: executing "I0601 11:39:15.853439 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube\n" at <index .NetworkSettings.Ports "8443/tcp">: error calling index: index of untyped nil - returning raw string.
I0601 11:39:15.853439 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0601 11:39:16.006652 4623 addons.go:131] Setting addon default-storageclass=true in "minikube"
I0601 11:39:16.040953 4623 out.go:157] ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0601 11:39:16.017530 4623 api_server.go:48] waiting for apiserver process to appear ...
W0601 11:39:16.040985 4623 addons.go:140] addon default-storageclass should already be in state true
I0601 11:39:16.041016 4623 host.go:66] Checking if "minikube" exists ...
I0601 11:39:16.041116 4623 addons.go:261] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0601 11:39:16.041123 4623 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0601 11:39:16.041123 4623 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
E0601 14:16:07.951024 6690 out.go:379] unable to execute I0601 11:39:16.041222 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
: template: I0601 11:39:16.041222 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
:1:94: executing "I0601 11:39:16.041222 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
I0601 11:39:16.041222 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0601 11:39:16.043052 4623 cli_runner.go:115] Run: docker container inspect minikube --format=
I0601 11:39:16.061314 4623 api_server.go:68] duration metric: took 248.504583ms to wait for apiserver process to appear ...
I0601 11:39:16.061335 4623 api_server.go:84] waiting for apiserver healthz status ...
I0601 11:39:16.061371 4623 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:53947/healthz ...
I0601 11:39:16.073472 4623 api_server.go:241] https://127.0.0.1:53947/healthz returned 200:
ok
I0601 11:39:16.097839 4623 api_server.go:137] control plane version: v1.20.2
I0601 11:39:16.097856 4623 api_server.go:127] duration metric: took 36.517779ms to wait for apiserver health ...
I0601 11:39:16.097864 4623 system_pods.go:42] waiting for kube-system pods to appear ...
I0601 11:39:16.105943 4623 system_pods.go:58] 0 kube-system pods found
I0601 11:39:16.105960 4623 retry.go:31] will retry after 263.082536ms: only 0 pod(s) have shown up
I0601 11:39:16.182622 4623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53948 SSHKeyPath:/Users/amontmejatdabaux/.minikube/machines/minikube/id_rsa Username:docker}
I0601 11:39:16.182714 4623 addons.go:261] installing /etc/kubernetes/addons/storageclass.yaml
I0601 11:39:16.182725 4623 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
E0601 14:16:08.294939 6690 out.go:379] unable to execute I0601 11:39:16.182853 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
: template: I0601 11:39:16.182853 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
:1:94: executing "I0601 11:39:16.182853 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube\n" at <index .NetworkSettings.Ports "22/tcp">: error calling index: index of untyped nil - returning raw string.
I0601 11:39:16.182853 4623 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0601 11:39:16.279599 4623 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0601 11:39:16.319974 4623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53948 SSHKeyPath:/Users/amontmejatdabaux/.minikube/machines/minikube/id_rsa Username:docker}
I0601 11:39:16.372499 4623 system_pods.go:58] 0 kube-system pods found
I0601 11:39:16.372541 4623 retry.go:31] will retry after 381.329545ms: only 0 pod(s) have shown up
I0601 11:39:16.418331 4623 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0601 11:39:16.700484 4623 out.go:157] 🌟 Enabled addons: storage-provisioner, default-storageclass
I0601 11:39:16.700532 4623 addons.go:330] enableAddons completed in 887.736098ms
I0601 11:39:16.758339 4623 system_pods.go:58] 1 kube-system pods found
I0601 11:39:16.758365 4623 system_pods.go:60] "storage-provisioner" [e3200d1b-efee-41e5-9b0a-a5ec4b58ad82] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0601 11:39:16.758371 4623 retry.go:31] will retry after 422.765636ms: only 1 pod(s) have shown up
I0601 11:39:17.188926 4623 system_pods.go:58] 1 kube-system pods found
I0601 11:39:17.188941 4623 system_pods.go:60] "storage-provisioner" [e3200d1b-efee-41e5-9b0a-a5ec4b58ad82] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0601 11:39:17.188948 4623 retry.go:31] will retry after 473.074753ms: only 1 pod(s) have shown up
I0601 11:39:17.665555 4623 system_pods.go:58] 1 kube-system pods found
I0601 11:39:17.665571 4623 system_pods.go:60] "storage-provisioner" [e3200d1b-efee-41e5-9b0a-a5ec4b58ad82] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0601 11:39:17.665576 4623 retry.go:31] will retry after 587.352751ms: only 1 pod(s) have shown up
I0601 11:39:18.259353 4623 system_pods.go:58] 1 kube-system pods found
I0601 11:39:18.259363 4623 system_pods.go:60] "storage-provisioner" [e3200d1b-efee-41e5-9b0a-a5ec4b58ad82] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0601 11:39:18.259368 4623 retry.go:31] will retry after 834.206799ms: only 1 pod(s) have shown up
I0601 11:39:19.097416 4623 system_pods.go:58] 1 kube-system pods found
I0601 11:39:19.097428 4623 system_pods.go:60] "storage-provisioner" [e3200d1b-efee-41e5-9b0a-a5ec4b58ad82] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0601 11:39:19.097434 4623 retry.go:31] will retry after 746.553905ms: only 1 pod(s) have shown up
I0601 11:39:19.853183 4623 system_pods.go:58] 5 kube-system pods found
I0601 11:39:19.853194 4623 system_pods.go:60] "etcd-minikube" [9610bea4-2ce0-4cf3-8f3d-5912e4c3f7d3] Pending
I0601 11:39:19.853197 4623 system_pods.go:60] "kube-apiserver-minikube" [1b530730-654c-434c-9db8-7d307c8e55da] Pending
I0601 11:39:19.853204 4623 system_pods.go:60] "kube-controller-manager-minikube" [ce6692ec-4870-4d12-8fb9-182ffb46195f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0601 11:39:19.853208 4623 system_pods.go:60] "kube-scheduler-minikube" [2ed5f36b-bffb-4728-a4b8-05ee5bd16f89] Pending
I0601 11:39:19.853212 4623 system_pods.go:60] "storage-provisioner" [e3200d1b-efee-41e5-9b0a-a5ec4b58ad82] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0601 11:39:19.853216 4623 system_pods.go:73] duration metric: took 3.755339041s to wait for pod list to return data ...
I0601 11:39:19.853223 4623 kubeadm.go:543] duration metric: took 4.04043159s to wait for : map[apiserver:true system_pods:true] ...
I0601 11:39:19.853235 4623 node_conditions.go:102] verifying NodePressure condition ...
I0601 11:39:19.857264 4623 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
I0601 11:39:19.857276 4623 node_conditions.go:123] node cpu capacity is 8
I0601 11:39:19.857289 4623 node_conditions.go:105] duration metric: took 4.050258ms to run NodePressure ...
I0601 11:39:19.857296 4623 start.go:205] waiting for startup goroutines ...
I0601 11:39:20.038894 4623 start.go:460] kubectl: 1.21.0, cluster: 1.20.2 (minor skew: 1)
I0601 11:39:20.062306 4623 out.go:157] 🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

==> Docker <==
-- Logs begin at Tue 2021-06-01 09:38:16 UTC, end at Tue 2021-06-01 12:16:10 UTC. --
Jun 01 10:06:04 minikube dockerd[474]: time="2021-06-01T10:06:04.529455800Z" level=info msg="Layer sha256:c88b2608bf0d3b6dff7486231bc540a89a2cd4d42a28bb74b38dd4a9eb8aec4f cleaned up"
Jun 01 10:06:04 minikube dockerd[474]: time="2021-06-01T10:06:04.626613800Z" level=info msg="Layer sha256:c88b2608bf0d3b6dff7486231bc540a89a2cd4d42a28bb74b38dd4a9eb8aec4f cleaned up"
Jun 01 10:06:04 minikube dockerd[474]: time="2021-06-01T10:06:04.724918900Z" level=info msg="Layer sha256:c88b2608bf0d3b6dff7486231bc540a89a2cd4d42a28bb74b38dd4a9eb8aec4f cleaned up"
Jun 01 10:06:37 minikube dockerd[474]: time="2021-06-01T10:06:37.000885200Z" level=info msg="ignoring event" container=ebf950e622ea376de447ad9ad577ec70cc9fea071ec1c2ad46757c5c5d8694e2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 10:06:39 minikube dockerd[474]: time="2021-06-01T10:06:39.332184000Z" level=info msg="ignoring event" container=dee2ff182cbe68edd92a5ab063dcfe89face97e80ab51fb1789246915cfa8b16 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 10:06:39 minikube dockerd[474]: time="2021-06-01T10:06:39.565715600Z" level=info msg="Layer sha256:b40846cb8b09401e473da99e626e9e4ef78babec285c25f6582c16064fc1c55e cleaned up"
Jun 01 10:06:40 minikube dockerd[474]: time="2021-06-01T10:06:40.441357900Z" level=info msg="ignoring event" container=d2be8a186693d644fb4b74e59cb7a174c723ead2a4c52897a623fc585ebd0878 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 10:06:40 minikube dockerd[474]: time="2021-06-01T10:06:40.685728700Z" level=info msg="Layer sha256:87abe438b2b256f7d75e366482d069994b846f24078ba666c0893b911e5b7142 cleaned up"
Jun 01 10:06:40 minikube dockerd[474]: time="2021-06-01T10:06:40.786203100Z" level=info msg="Layer sha256:87abe438b2b256f7d75e366482d069994b846f24078ba666c0893b911e5b7142 cleaned up"
Jun 01 10:06:40 minikube dockerd[474]: time="2021-06-01T10:06:40.884901000Z" level=info msg="Layer sha256:87abe438b2b256f7d75e366482d069994b846f24078ba666c0893b911e5b7142 cleaned up"
Jun 01 10:13:48 minikube dockerd[474]: time="2021-06-01T10:13:48.733733300Z" level=info msg="ignoring event" container=efbea2f6ce87abe44178a267159b5b2af3d9d8887a394cf6a21b15e4a29a1a2a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 10:13:52 minikube dockerd[474]: time="2021-06-01T10:13:52.347856400Z" level=info msg="ignoring event" container=d57dbc88ede74bf9e6816872859f2fb56ceda7e070b012a56ca15f4761e10216 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 10:14:08 minikube dockerd[474]: time="2021-06-01T10:14:08.335987900Z" level=info msg="ignoring event" container=45ec290f8fa87dc52777fa8a85fac857d1228a6863da5b768fe31f6da09603f9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 10:14:39 minikube dockerd[474]: time="2021-06-01T10:14:39.314363500Z" level=info msg="ignoring event" container=4c732a0e82828fd285631949d1596e834160490d0189628557c4708c384cdd67 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 10:15:32 minikube dockerd[474]: time="2021-06-01T10:15:32.155437000Z" level=info msg="ignoring event" container=91bd6795e11e4e20cdb21b797e9e45240fbd3b352579a3cb36886516ba56560c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 10:17:00 minikube dockerd[474]: time="2021-06-01T10:17:00.123404300Z" level=info msg="ignoring event" container=1e3b9920a7e7c8247877ea7b5fcf4908df8de6a3b43f78c8493ed4a3dba07c47 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 10:21:46 minikube dockerd[474]: time="2021-06-01T10:21:46.601989400Z" level=info msg="ignoring event" container=0935f7b565f191c785587a68c44d376c8fe40859cebf202491e76a30c5c7648e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 10:26:59 minikube dockerd[474]: time="2021-06-01T10:26:59.339268500Z" level=info msg="ignoring event" container=050fdcf54952b4892c780d885e1bc09ac48601064fe824694f0f059daa2d6236 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 10:32:09 minikube dockerd[474]: time="2021-06-01T10:32:09.963640600Z" level=info msg="ignoring event" container=955e48fc99e9b6963fee2f356232156bd7a1d9efc491ccd6ffa9d943b6407991 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 10:37:19 minikube dockerd[474]: time="2021-06-01T10:37:19.713343700Z" level=info msg="ignoring event" container=1778a2907489adcc256558ca851f27140d6b83434d2fbdafe37648c5bc4792f9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 10:42:25 minikube dockerd[474]: time="2021-06-01T10:42:25.357754200Z" level=info msg="ignoring event" container=598657ff4e945f4f6b76657578c63def953e1754b6bc05a697608af139b3f225 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 10:47:34 minikube dockerd[474]: time="2021-06-01T10:47:34.792897200Z" level=info msg="ignoring event" container=f7ee40b8ee44e49779121c1a3c0f8f47ff24c1564c01f4ec33f12818dec0a827 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 10:52:43 minikube dockerd[474]: time="2021-06-01T10:52:43.363676100Z" level=info msg="ignoring event" container=dd1560c07e57deed62a40e10f944dfa0d2bfa664c305a7d0c85103de06cdeb93 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 10:57:50 minikube dockerd[474]: time="2021-06-01T10:57:50.992710700Z" level=info msg="ignoring event" container=9f24b7aed6d7ed075869e6e72f25eb6bef2e06b1b043cc6f89534d4c32767825 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 11:03:05 minikube dockerd[474]: time="2021-06-01T11:03:05.911568000Z" level=info msg="ignoring event" container=58d9efcacf55e7f344fe1f4d28e45679d9d412005e15270dda3a76a1983c6504 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 11:08:12 minikube dockerd[474]: time="2021-06-01T11:08:12.361753000Z" level=info msg="ignoring event" container=1897525f6dffdb509ee73d737b3532484d390d4a67daa00473c18c69dc8e1620 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 11:13:21 minikube dockerd[474]: time="2021-06-01T11:13:21.946421300Z" level=info msg="ignoring event" container=d0eb6f83371c0011fb065c9e6afcf7dfd0531cc6c30ac5183c1be50e270b3b1b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 11:18:28 minikube dockerd[474]: time="2021-06-01T11:18:28.765791500Z" level=info msg="ignoring event" container=914ef7e697c9c964c481a5e7f78f8754f2bdf7dbd3f36fe7cd1fa28a1b313c8b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 11:19:07 minikube dockerd[474]: time="2021-06-01T11:19:07.102665900Z" level=info msg="ignoring event" container=6296af78103f27bec6acb8378f543cc9ceb9efc440cb2ff9fb910afd1261a483 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 11:19:07 minikube dockerd[474]: time="2021-06-01T11:19:07.706821200Z" level=info msg="ignoring event" container=632cb0ebea7a58a802e7119e02a24dadc5c0f51a1ca8cd0f5fa27e5efa17a7fe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 11:19:37 minikube dockerd[474]: time="2021-06-01T11:19:37.036889500Z" level=info msg="Container 72ecfe1f88c9f4aae754785f933609187dd0b4bfbce9b0c102b0e67808d56065 failed to exit within 30 seconds of signal 15 - using the force"
Jun 01 11:19:37 minikube dockerd[474]: time="2021-06-01T11:19:37.115257100Z" level=info msg="ignoring event" container=72ecfe1f88c9f4aae754785f933609187dd0b4bfbce9b0c102b0e67808d56065 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 11:19:37 minikube dockerd[474]: time="2021-06-01T11:19:37.207520100Z" level=info msg="ignoring event" container=eef8db98474136cb037613495999faa021d6cbb8bee40470313fc18a379413df module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 11:20:11 minikube dockerd[474]: time="2021-06-01T11:20:11.212425100Z" level=info msg="ignoring event" container=bd87f33843fc793233f924a89bbeed733a6c9a9eacfbd8afe251728305ee34c3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 11:20:11 minikube dockerd[474]: time="2021-06-01T11:20:11.444438400Z" level=info msg="Layer sha256:3cc312922e8b02f48b08b421ac22c2055cf57cfda3ea01c180d0fa469ad4e95e cleaned up"
Jun 01 11:20:11 minikube dockerd[474]: time="2021-06-01T11:20:11.550729300Z" level=info msg="Layer sha256:3cc312922e8b02f48b08b421ac22c2055cf57cfda3ea01c180d0fa469ad4e95e cleaned up"
Jun 01 11:20:11 minikube dockerd[474]: time="2021-06-01T11:20:11.653805000Z" level=info msg="Layer sha256:3cc312922e8b02f48b08b421ac22c2055cf57cfda3ea01c180d0fa469ad4e95e cleaned up"
Jun 01 11:20:36 minikube dockerd[474]: time="2021-06-01T11:20:36.035494700Z" level=error msg="54b3079be5c14e5259fc865be3bf6b84e472b6b2bff12749e5d9f494913546ca cleanup: failed to delete container from containerd: no such container"
Jun 01 11:20:36 minikube dockerd[474]: time="2021-06-01T11:20:36.035561800Z" level=error msg="Handler for POST /v1.40/containers/54b3079be5c14e5259fc865be3bf6b84e472b6b2bff12749e5d9f494913546ca/start returned error: error while creating mount source path '/frontend': mkdir /frontend: file exists"
Jun 01 11:20:36 minikube dockerd[474]: time="2021-06-01T11:20:36.099061700Z" level=error msg="d8a96b823ed7786be79e16384efdc97767881922c7534c28395c18f6f502da07 cleanup: failed to delete container from containerd: no such container"
Jun 01 11:20:36 minikube dockerd[474]: time="2021-06-01T11:20:36.099219500Z" level=error msg="Handler for POST /v1.40/containers/d8a96b823ed7786be79e16384efdc97767881922c7534c28395c18f6f502da07/start returned error: error while creating mount source path '/frontend': mkdir /frontend: file exists"
Jun 01 11:20:37 minikube dockerd[474]: time="2021-06-01T11:20:37.056165700Z" level=error msg="dc53e2131ce6d064d9c70364bfb4c6534295fcd36099608b72919fad6252e8c5 cleanup: failed to delete container from containerd: no such container"
Jun 01 11:20:37 minikube dockerd[474]: time="2021-06-01T11:20:37.056214100Z" level=error msg="Handler for POST /v1.40/containers/dc53e2131ce6d064d9c70364bfb4c6534295fcd36099608b72919fad6252e8c5/start returned error: error while creating mount source path '/frontend': mkdir /frontend: file exists"
Jun 01 11:20:37 minikube dockerd[474]: time="2021-06-01T11:20:37.114041300Z" level=error msg="098312bbaa39739a8da37946e28c9e5ee241c2997b47856807202078be5911f3 cleanup: failed to delete container from containerd: no such container"
Jun 01 11:20:37 minikube dockerd[474]: time="2021-06-01T11:20:37.114101900Z" level=error msg="Handler for POST /v1.40/containers/098312bbaa39739a8da37946e28c9e5ee241c2997b47856807202078be5911f3/start returned error: error while creating mount source path '/data': mkdir /data: file exists"
Jun 01 11:20:52 minikube dockerd[474]: time="2021-06-01T11:20:52.567867900Z" level=error msg="806d30698e13439b051a893777aeadcb1240f63347c5a4df5b5b79e1b279245c cleanup: failed to delete container from containerd: no such container"
Jun 01 11:20:52 minikube dockerd[474]: time="2021-06-01T11:20:52.567923700Z" level=error msg="Handler for POST /v1.40/containers/806d30698e13439b051a893777aeadcb1240f63347c5a4df5b5b79e1b279245c/start returned error: error while creating mount source path '/frontend': mkdir /frontend: file exists"
Jun 01 11:20:52 minikube dockerd[474]: time="2021-06-01T11:20:52.620946400Z" level=error msg="b3f08084fb3d1b2a11e3c72a1765b998da19b41eb24dcc31d03f75cd5fed7e4e cleanup: failed to delete container from containerd: no such container"
Jun 01 11:20:52 minikube dockerd[474]: time="2021-06-01T11:20:52.620998000Z" level=error msg="Handler for POST /v1.40/containers/b3f08084fb3d1b2a11e3c72a1765b998da19b41eb24dcc31d03f75cd5fed7e4e/start returned error: error while creating mount source path '/frontend': mkdir /frontend: file exists"
Jun 01 11:21:18 minikube dockerd[474]: time="2021-06-01T11:21:18.528467400Z" level=error msg="94e8007c2e1e3e17906f19a0a0744139c07de3045319eda3bafffea379f3a65c cleanup: failed to delete container from containerd: no such container"
Jun 01 11:21:18 minikube dockerd[474]: time="2021-06-01T11:21:18.528521400Z" level=error msg="Handler for POST /v1.40/containers/94e8007c2e1e3e17906f19a0a0744139c07de3045319eda3bafffea379f3a65c/start returned error: error while creating mount source path '/frontend': mkdir /frontend: file exists"
Jun 01 11:21:18 minikube dockerd[474]: time="2021-06-01T11:21:18.577096100Z" level=error msg="2d347d10bf71f23a47b88e3a949005f1e92bf95f445b690be68325f2da2e814e cleanup: failed to delete container from containerd: no such container"
Jun 01 11:21:18 minikube dockerd[474]: time="2021-06-01T11:21:18.577173100Z" level=error msg="Handler for POST /v1.40/containers/2d347d10bf71f23a47b88e3a949005f1e92bf95f445b690be68325f2da2e814e/start returned error: error while creating mount source path '/frontend': mkdir /frontend: file exists"
Jun 01 11:22:12 minikube dockerd[474]: time="2021-06-01T11:22:12.270954700Z" level=info msg="ignoring event" container=cfec0382f2de228219b04def29ab99acfd949d21b51de1fd72c67d2af9c53699 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 11:22:42 minikube dockerd[474]: time="2021-06-01T11:22:42.163289300Z" level=info msg="Container f564a9e307ef1be29e55f6a3983aeab9fdde09fd4d0b5b4172043941bdd4a481 failed to exit within 30 seconds of signal 15 - using the force"
Jun 01 11:22:42 minikube dockerd[474]: time="2021-06-01T11:22:42.172653000Z" level=info msg="Container f92fe63750b32e5f195548b50c964b55b5fab1482c9da5646bd625ae2279d125 failed to exit within 30 seconds of signal 15 - using the force"
Jun 01 11:22:42 minikube dockerd[474]: time="2021-06-01T11:22:42.228511500Z" level=info msg="ignoring event" container=f92fe63750b32e5f195548b50c964b55b5fab1482c9da5646bd625ae2279d125 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 11:22:42 minikube dockerd[474]: time="2021-06-01T11:22:42.232733300Z" level=info msg="ignoring event" container=f564a9e307ef1be29e55f6a3983aeab9fdde09fd4d0b5b4172043941bdd4a481 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 11:22:42 minikube dockerd[474]: time="2021-06-01T11:22:42.286712900Z" level=info msg="ignoring event" container=5e2854140751ce24efd656a6f57855211ecef53f0881d31ef0e052ca2d3a031e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 01 11:22:42 minikube dockerd[474]: time="2021-06-01T11:22:42.289208300Z" level=info msg="ignoring event" container=840c81cad7d4987e4d7157cffe1cf1307bbb10cc7cd6ea09d201c6810185144c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
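
The repeated `error while creating mount source path '/frontend': mkdir /frontend: file exists` entries above line up with the `Input/output error` from the reproduction steps: once the connection behind `minikube mount` drops, the mount point inside the node is left behind as a directory that can no longer be read, so containers referencing it fail to start. A rough way to confirm and clear the stale mount from inside the node (a sketch; on the docker driver, `minikube mount` is served over 9p):

```shell
minikube ssh

# A host folder shared via `minikube mount` shows up as a 9p filesystem
mount | grep 9p

# With the connection gone, listing the mount point fails with EIO
ls /data

# Force-unmount the stale mount point, then re-run `minikube mount` on the host
sudo umount -f /data
```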

==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
18846b891607b 98404fa5381c1 52 minutes ago Running back-end 0 13b2e7d69d1c5
9d4ea73025689 d1a364dc548d5 52 minutes ago Running front-end 0 13b2e7d69d1c5
9e0e1138284ce 17686a7bbcd4f 52 minutes ago Running django-webapp 0 66cd604f88f4b
4f9775804fa5b 6e38f40d628db 3 hours ago Running storage-provisioner 1 7f39cfd5020f8
770540eac42fd bfe3a36ebd252 3 hours ago Running coredns 0 3604b8a75da26
eb50b063cc1c1 43154ddb57a83 3 hours ago Running kube-proxy 0 2dcbe0190c5e8
a68fb72787e46 6e38f40d628db 3 hours ago Exited storage-provisioner 0 7f39cfd5020f8
928c59af93c3e ed2c44fbdd78b 3 hours ago Running kube-scheduler 0 d6debad8d4b2c
d9955d9699be6 a27166429d98e 3 hours ago Running kube-controller-manager 0 6a9719112e3b1
9c1385e7159f2 0369cf4303ffd 3 hours ago Running etcd 0 d6fbe42cf43d3
23d3a26d062a9 a8c2fdb8bf76e 3 hours ago Running kube-apiserver 0 12aa40fa0b77e

==> coredns [770540eac42f] <==
E0601 09:39:40.308915 1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Service: an error on the server ("") has prevented the request from succeeding (get services)
E0601 09:39:40.308961 1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Namespace: an error on the server ("") has prevented the request from succeeding (get namespaces)
E0601 09:39:40.309089 1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: an error on the server ("") has prevented the request from succeeding (get endpoints)
[INFO] plugin/ready: Still waiting on: "kubernetes"
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.7.0
linux/amd64, go1.14.4, f59c03d
[INFO] plugin/ready: Still waiting on: "kubernetes"

==> describe nodes <==
Name: minikube
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=minikube
kubernetes.io/os=linux
minikube.k8s.io/commit=15cede53bdc5fe242228853e737333b09d4336b5
minikube.k8s.io/name=minikube
minikube.k8s.io/updated_at=2021_06_01T11_39_14_0700
minikube.k8s.io/version=v1.19.0
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 01 Jun 2021 09:39:11 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: minikube
AcquireTime: <unset>
RenewTime: Tue, 01 Jun 2021 12:16:06 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message


MemoryPressure False Tue, 01 Jun 2021 12:15:46 +0000 Tue, 01 Jun 2021 10:05:32 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 01 Jun 2021 12:15:46 +0000 Tue, 01 Jun 2021 10:05:32 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 01 Jun 2021 12:15:46 +0000 Tue, 01 Jun 2021 10:05:32 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 01 Jun 2021 12:15:46 +0000 Tue, 01 Jun 2021 10:05:32 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: minikube
Capacity:
cpu: 8
ephemeral-storage: 61255492Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 2034536Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 61255492Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 2034536Ki
pods: 110
System Info:
Machine ID: 73c9fceff7724090ba72b64bf6e8eff8
System UUID: 18359240-320c-4ac0-a119-b73779305f33
Boot ID: ffbbfdac-8105-4008-b457-3cca147ffd65
Kernel Version: 5.10.25-linuxkit
OS Image: Ubuntu 20.04.1 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.5
Kubelet Version: v1.20.2
Kube-Proxy Version: v1.20.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (9 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE


default django-webapp-56976bffc8-qt4m5 250m (3%) 500m (6%) 128Mi (6%) 512Mi (25%) 52m
default mia-dashboard-595c88fdcd-vhdbf 400m (5%) 2 (25%) 512Mi (25%) 2Gi (103%) 52m
kube-system coredns-74ff55c5b-bs2ql 100m (1%) 0 (0%) 70Mi (3%) 170Mi (8%) 156m
kube-system etcd-minikube 100m (1%) 0 (0%) 100Mi (5%) 0 (0%) 156m
kube-system kube-apiserver-minikube 250m (3%) 0 (0%) 0 (0%) 0 (0%) 156m
kube-system kube-controller-manager-minikube 200m (2%) 0 (0%) 0 (0%) 0 (0%) 156m
kube-system kube-proxy-nf6rl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 156m
kube-system kube-scheduler-minikube 100m (1%) 0 (0%) 0 (0%) 0 (0%) 156m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 156m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits


cpu 1400m (17%) 2500m (31%)
memory 810Mi (40%) 2730Mi (137%)
ephemeral-storage 100Mi (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events: <none>

==> dmesg <==
[ +0.033981] bpfilter: write fail -32
[ +29.929766] bpfilter: read fail 0
[ +0.030540] bpfilter: read fail 0
[ +0.031825] bpfilter: read fail 0
[ +0.035595] bpfilter: write fail -32
[ +13.751650] bpfilter: write fail -32
[ +0.033859] bpfilter: write fail -32
[Jun 1 12:11] bpfilter: read fail 0
[ +0.039844] bpfilter: write fail -32
[ +0.028241] bpfilter: read fail 0
[ +0.031097] bpfilter: read fail 0
[ +29.898993] bpfilter: read fail 0
[ +0.032304] bpfilter: read fail 0
[ +0.024998] bpfilter: read fail 0
[ +0.034947] bpfilter: read fail 0
[ +13.755765] bpfilter: write fail -32
[ +0.043093] bpfilter: read fail 0
[ +0.028036] bpfilter: read fail 0
[Jun 1 12:12] bpfilter: read fail 0
[ +0.029913] bpfilter: write fail -32
[ +0.025841] bpfilter: read fail 0
[ +0.034617] bpfilter: read fail 0
[ +29.910771] bpfilter: read fail 0
[ +0.032289] bpfilter: write fail -32
[ +0.030226] bpfilter: write fail -32
[ +13.785497] bpfilter: write fail -32
[ +0.041620] bpfilter: read fail 0
[ +0.030403] bpfilter: read fail 0
[Jun 1 12:13] bpfilter: write fail -32
[ +0.036689] bpfilter: read fail 0
[ +0.038675] bpfilter: write fail -32
[ +29.924261] bpfilter: write fail -32
[ +0.032822] bpfilter: read fail 0
[ +0.030465] bpfilter: read fail 0
[ +13.785002] bpfilter: read fail 0
[ +0.034060] bpfilter: write fail -32
[ +0.026811] bpfilter: read fail 0
[ +0.036095] bpfilter: read fail 0
[Jun 1 12:14] bpfilter: read fail 0
[ +0.032885] bpfilter: write fail -32
[ +0.033268] bpfilter: read fail 0
[ +0.030191] bpfilter: read fail 0
[ +29.906522] bpfilter: write fail -32
[ +0.032223] bpfilter: write fail -32
[ +13.813574] bpfilter: read fail 0
[ +0.030079] bpfilter: write fail -32
[ +0.034050] bpfilter: write fail -32
[Jun 1 12:15] bpfilter: read fail 0
[ +0.038116] bpfilter: write fail -32
[ +0.034401] bpfilter: read fail 0
[ +0.029682] bpfilter: read fail 0
[ +29.898256] bpfilter: write fail -32
[ +0.034055] bpfilter: write fail -32
[ +13.812971] bpfilter: read fail 0
[ +0.032523] bpfilter: write fail -32
[ +0.031907] bpfilter: read fail 0
[ +0.039883] bpfilter: read fail 0
[Jun 1 12:16] bpfilter: read fail 0
[ +0.030374] bpfilter: write fail -32
[ +0.030364] bpfilter: write fail -32

==> etcd [9c1385e7159f] <==
2021-06-01 12:07:32.660657 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:07:42.661609 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:07:52.661552 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:08:02.625483 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:08:12.626060 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:08:22.626990 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:08:32.591287 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:08:42.591197 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:08:52.592989 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:08:58.155809 I | mvcc: store.index: compact 6820
2021-06-01 12:08:58.156739 I | mvcc: finished scheduled compaction at 6820 (took 740.4µs)
2021-06-01 12:09:02.556244 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:09:12.555922 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:09:22.556088 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:09:32.522216 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:09:42.521456 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:09:52.521024 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:10:02.487931 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:10:12.486287 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:10:22.488783 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:10:32.451494 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:10:42.451077 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:10:52.452705 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:11:02.417893 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:11:12.417647 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:11:22.416817 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:11:32.382073 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:11:42.382215 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:11:52.381552 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:12:02.346563 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:12:12.348258 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:12:22.347210 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:12:32.316680 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:12:42.315805 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:12:52.315434 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:13:02.283009 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:13:12.280981 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:13:22.282163 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:13:32.247114 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:13:42.247145 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:13:52.246717 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:13:57.816599 I | mvcc: store.index: compact 7031
2021-06-01 12:13:57.817222 I | mvcc: finished scheduled compaction at 7031 (took 406µs)
2021-06-01 12:14:02.211934 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:14:12.212963 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:14:22.214441 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:14:31.454092 I | etcdserver: start to snapshot (applied: 10001, lastsnap: 0)
2021-06-01 12:14:31.469418 I | etcdserver: saved snapshot at index 10001
2021-06-01 12:14:31.470684 I | etcdserver: compacted raft log at 5001
2021-06-01 12:14:32.177752 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:14:42.177338 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:14:52.177677 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:15:02.142526 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:15:12.143487 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:15:22.143453 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:15:32.108245 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:15:42.108918 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:15:52.107436 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:16:02.072962 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-06-01 12:16:12.072810 I | etcdserver/api/etcdhttp: /health OK (status code 200)

==> kernel <==
12:16:17 up 2:37, 0 users, load average: 0.33, 0.39, 0.35
Linux minikube 5.10.25-linuxkit #1 SMP Tue Mar 23 09:27:39 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.1 LTS"

==> kube-apiserver [23d3a26d062a] <==
I0601 12:04:16.298941 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0601 12:04:57.467705 1 client.go:360] parsed scheme: "passthrough"
I0601 12:04:57.467769 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0601 12:04:57.467795 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0601 12:05:30.513779 1 client.go:360] parsed scheme: "passthrough"
I0601 12:05:30.513840 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0601 12:05:30.513900 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0601 12:06:05.891454 1 client.go:360] parsed scheme: "passthrough"
I0601 12:06:05.891686 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0601 12:06:05.891895 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0601 12:06:36.090811 1 client.go:360] parsed scheme: "passthrough"
I0601 12:06:36.090964 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0601 12:06:36.090991 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0601 12:07:19.768083 1 client.go:360] parsed scheme: "passthrough"
I0601 12:07:19.768156 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0601 12:07:19.768180 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0601 12:07:53.468834 1 client.go:360] parsed scheme: "passthrough"
I0601 12:07:53.468919 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0601 12:07:53.468950 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0601 12:08:03.445795 1 watcher.go:220] watch chan error: etcdserver: mvcc: required revision has been compacted
I0601 12:08:25.109945 1 client.go:360] parsed scheme: "passthrough"
I0601 12:08:25.110050 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0601 12:08:25.110086 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0601 12:08:57.678791 1 client.go:360] parsed scheme: "passthrough"
I0601 12:08:57.678865 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0601 12:08:57.678896 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0601 12:09:34.161642 1 client.go:360] parsed scheme: "passthrough"
I0601 12:09:34.161704 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0601 12:09:34.161717 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0601 12:10:17.785475 1 client.go:360] parsed scheme: "passthrough"
I0601 12:10:17.785555 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0601 12:10:17.785582 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0601 12:10:58.820276 1 client.go:360] parsed scheme: "passthrough"
I0601 12:10:58.820366 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0601 12:10:58.820391 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0601 12:11:42.106793 1 client.go:360] parsed scheme: "passthrough"
I0601 12:11:42.106857 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0601 12:11:42.106874 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0601 12:12:21.966076 1 client.go:360] parsed scheme: "passthrough"
I0601 12:12:21.966278 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0601 12:12:21.966327 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0601 12:13:06.111794 1 client.go:360] parsed scheme: "passthrough"
I0601 12:13:06.111892 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0601 12:13:06.111924 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0601 12:13:43.516459 1 client.go:360] parsed scheme: "passthrough"
I0601 12:13:43.516606 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0601 12:13:43.516662 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0601 12:14:20.494378 1 client.go:360] parsed scheme: "passthrough"
I0601 12:14:20.494437 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0601 12:14:20.494457 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0601 12:15:00.076681 1 client.go:360] parsed scheme: "passthrough"
I0601 12:15:00.076824 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0601 12:15:00.076871 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0601 12:15:06.357926 1 watcher.go:220] watch chan error: etcdserver: mvcc: required revision has been compacted
I0601 12:15:32.813745 1 client.go:360] parsed scheme: "passthrough"
I0601 12:15:32.813876 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0601 12:15:32.813925 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0601 12:16:17.005837 1 client.go:360] parsed scheme: "passthrough"
I0601 12:16:17.006001 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0601 12:16:17.006035 1 clientconn.go:948] ClientConn switching balancer to "pick_first"

==> kube-controller-manager [d9955d9699be] <==
I0601 09:39:30.159623 1 shared_informer.go:247] Caches are synced for node
I0601 09:39:30.159654 1 range_allocator.go:172] Starting range CIDR allocator
I0601 09:39:30.159663 1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
I0601 09:39:30.159667 1 shared_informer.go:247] Caches are synced for cidrallocator
I0601 09:39:30.167344 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 1"
I0601 09:39:30.175779 1 shared_informer.go:247] Caches are synced for PV protection
I0601 09:39:30.182854 1 shared_informer.go:247] Caches are synced for GC
I0601 09:39:30.187143 1 shared_informer.go:247] Caches are synced for endpoint
I0601 09:39:30.194718 1 range_allocator.go:373] Set node minikube PodCIDR to [10.244.0.0/24]
I0601 09:39:30.196579 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-nf6rl"
I0601 09:39:30.198397 1 shared_informer.go:247] Caches are synced for stateful set
I0601 09:39:30.199937 1 shared_informer.go:247] Caches are synced for disruption
I0601 09:39:30.199968 1 disruption.go:339] Sending events to api server.
I0601 09:39:30.200138 1 shared_informer.go:247] Caches are synced for job
I0601 09:39:30.200239 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
I0601 09:39:30.200428 1 shared_informer.go:247] Caches are synced for TTL
I0601 09:39:30.200453 1 shared_informer.go:247] Caches are synced for persistent volume
I0601 09:39:30.201060 1 shared_informer.go:247] Caches are synced for expand
I0601 09:39:30.202430 1 shared_informer.go:247] Caches are synced for service account
I0601 09:39:30.208040 1 shared_informer.go:247] Caches are synced for PVC protection
I0601 09:39:30.208107 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-bs2ql"
I0601 09:39:30.211953 1 shared_informer.go:247] Caches are synced for bootstrap_signer
I0601 09:39:30.213932 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
I0601 09:39:30.220355 1 shared_informer.go:247] Caches are synced for ReplicationController
I0601 09:39:30.300096 1 shared_informer.go:247] Caches are synced for HPA
E0601 09:39:30.305599 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
I0601 09:39:30.350137 1 shared_informer.go:247] Caches are synced for attach detach
I0601 09:39:30.414711 1 shared_informer.go:247] Caches are synced for resource quota
I0601 09:39:30.447694 1 shared_informer.go:247] Caches are synced for resource quota
I0601 09:39:30.570045 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0601 09:39:30.870317 1 shared_informer.go:247] Caches are synced for garbage collector
I0601 09:39:30.899753 1 shared_informer.go:247] Caches are synced for garbage collector
I0601 09:39:30.899799 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0601 09:42:13.246183 1 event.go:291] "Event occurred" object="default/django-webapp" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set django-webapp-56976bffc8 to 1"
I0601 09:42:13.253464 1 event.go:291] "Event occurred" object="default/django-webapp-56976bffc8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: django-webapp-56976bffc8-vr5qg"
I0601 09:42:13.256053 1 event.go:291] "Event occurred" object="default/mia-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set mia-dashboard-595c88fdcd to 1"
I0601 09:42:13.299373 1 event.go:291] "Event occurred" object="default/mia-dashboard-595c88fdcd" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mia-dashboard-595c88fdcd-sc2qx"
I0601 10:05:22.810645 1 event.go:291] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node minikube status is now: NodeNotReady"
I0601 10:05:22.903129 1 event.go:291] "Event occurred" object="kube-system/kube-scheduler-minikube" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0601 10:05:22.915678 1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager-minikube" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0601 10:05:22.921966 1 event.go:291] "Event occurred" object="kube-system/etcd-minikube" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0601 10:05:22.928743 1 event.go:291] "Event occurred" object="kube-system/storage-provisioner" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0601 10:05:22.934571 1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-minikube" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0601 10:05:22.943082 1 event.go:291] "Event occurred" object="kube-system/kube-proxy-nf6rl" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0601 10:05:22.954259 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b-bs2ql" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0601 10:05:22.955913 1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0601 10:05:32.957849 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0601 10:13:45.251126 1 event.go:291] "Event occurred" object="default/django-webapp" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set django-webapp-56976bffc8 to 1"
I0601 10:13:45.257727 1 event.go:291] "Event occurred" object="default/mia-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set mia-dashboard-595c88fdcd to 1"
I0601 10:13:45.261075 1 event.go:291] "Event occurred" object="default/django-webapp-56976bffc8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: django-webapp-56976bffc8-dnbtm"
I0601 10:13:45.270606 1 event.go:291] "Event occurred" object="default/mia-dashboard-595c88fdcd" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mia-dashboard-595c88fdcd-k756j"
I0601 10:43:00.456028 1 cleaner.go:180] Cleaning CSR "csr-lxcnc" as it is more than 1h0m0s old and approved.
I0601 11:20:19.097317 1 event.go:291] "Event occurred" object="default/django-webapp" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set django-webapp-56976bffc8 to 1"
I0601 11:20:19.106187 1 event.go:291] "Event occurred" object="default/mia-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set mia-dashboard-595c88fdcd to 1"
I0601 11:20:19.108709 1 event.go:291] "Event occurred" object="default/django-webapp-56976bffc8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: django-webapp-56976bffc8-k4swq"
I0601 11:20:19.116247 1 event.go:291] "Event occurred" object="default/mia-dashboard-595c88fdcd" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mia-dashboard-595c88fdcd-bzdkf"
I0601 11:23:44.201470 1 event.go:291] "Event occurred" object="default/django-webapp" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set django-webapp-56976bffc8 to 1"
I0601 11:23:44.249651 1 event.go:291] "Event occurred" object="default/django-webapp-56976bffc8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: django-webapp-56976bffc8-qt4m5"
I0601 11:23:44.253146 1 event.go:291] "Event occurred" object="default/mia-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set mia-dashboard-595c88fdcd to 1"
I0601 11:23:44.261952 1 event.go:291] "Event occurred" object="default/mia-dashboard-595c88fdcd" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mia-dashboard-595c88fdcd-vhdbf"

==> kube-proxy [eb50b063cc1c] <==
I0601 09:39:31.073351 1 node.go:172] Successfully retrieved node IP: 192.168.49.2
I0601 09:39:31.073458 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.49.2), assume IPv4 operation
W0601 09:39:33.541494 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0601 09:39:33.541676 1 server_others.go:185] Using iptables Proxier.
I0601 09:39:33.543383 1 server.go:650] Version: v1.20.2
I0601 09:39:33.543895 1 conntrack.go:52] Setting nf_conntrack_max to 262144
E0601 09:39:33.544494 1 conntrack.go:127] sysfs is not writable: {Device:sysfs Path:/sys Type:sysfs Opts:[ro nosuid nodev noexec relatime] Freq:0 Pass:0} (mount options are [ro nosuid nodev noexec relatime])
I0601 09:39:33.544643 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0601 09:39:33.544802 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0601 09:39:33.546253 1 config.go:224] Starting endpoint slice config controller
I0601 09:39:33.546269 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0601 09:39:33.546330 1 config.go:315] Starting service config controller
I0601 09:39:33.546341 1 shared_informer.go:240] Waiting for caches to sync for service config
I0601 09:39:33.646583 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0601 09:39:33.646673 1 shared_informer.go:247] Caches are synced for service config
I0601 09:39:48.814739 1 trace.go:205] Trace[889545886]: "iptables restore" (01-Jun-2021 09:39:46.501) (total time: 2312ms):
Trace[889545886]: [2.312760105s] [2.312760105s] END
I0601 09:40:02.798928 1 trace.go:205] Trace[475688217]: "iptables restore" (01-Jun-2021 09:40:00.448) (total time: 2350ms):
Trace[475688217]: [2.350716297s] [2.350716297s] END
I0601 09:42:22.863878 1 trace.go:205] Trace[906390026]: "iptables restore" (01-Jun-2021 09:42:19.736) (total time: 3126ms):
Trace[906390026]: [3.126966879s] [3.126966879s] END
I0601 09:42:34.636834 1 trace.go:205] Trace[73885452]: "iptables restore" (01-Jun-2021 09:42:31.937) (total time: 2699ms):
Trace[73885452]: [2.699775701s] [2.699775701s] END
I0601 09:43:53.005099 1 trace.go:205] Trace[1022034513]: "iptables restore" (01-Jun-2021 09:43:50.467) (total time: 2537ms):
Trace[1022034513]: [2.537101377s] [2.537101377s] END
I0601 09:44:02.962333 1 trace.go:205] Trace[1462303341]: "iptables restore" (01-Jun-2021 09:44:00.699) (total time: 2262ms):
Trace[1462303341]: [2.262408724s] [2.262408724s] END
I0601 10:05:32.567047 1 trace.go:205] Trace[1573424644]: "iptables restore" (01-Jun-2021 10:05:30.154) (total time: 2408ms):
Trace[1573424644]: [2.4085606s] [2.4085606s] END
I0601 10:05:41.891379 1 trace.go:205] Trace[880211525]: "iptables restore" (01-Jun-2021 10:05:39.681) (total time: 2210ms):
Trace[880211525]: [2.2102055s] [2.2102055s] END
I0601 10:13:55.607872 1 trace.go:205] Trace[1427966137]: "iptables restore" (01-Jun-2021 10:13:53.384) (total time: 2223ms):
Trace[1427966137]: [2.2233628s] [2.2233628s] END
I0601 11:16:28.285923 1 trace.go:205] Trace[21283813]: "iptables restore" (01-Jun-2021 11:16:25.792) (total time: 2492ms):
Trace[21283813]: [2.4920503s] [2.4920503s] END
I0601 11:19:16.491250 1 trace.go:205] Trace[1896256671]: "iptables restore" (01-Jun-2021 11:19:14.196) (total time: 2295ms):
Trace[1896256671]: [2.295064s] [2.295064s] END
I0601 11:19:25.325488 1 trace.go:205] Trace[951594710]: "iptables restore" (01-Jun-2021 11:19:23.191) (total time: 2133ms):
Trace[951594710]: [2.1339458s] [2.1339458s] END
I0601 11:20:28.896005 1 trace.go:205] Trace[940687437]: "iptables restore" (01-Jun-2021 11:20:26.681) (total time: 2214ms):
Trace[940687437]: [2.2149253s] [2.2149253s] END
I0601 11:20:38.898613 1 trace.go:205] Trace[430454500]: "iptables restore" (01-Jun-2021 11:20:36.596) (total time: 2302ms):
Trace[430454500]: [2.3020199s] [2.3020199s] END
I0601 11:22:14.012608 1 trace.go:205] Trace[1380591178]: "iptables restore" (01-Jun-2021 11:22:11.337) (total time: 2675ms):
Trace[1380591178]: [2.675138s] [2.675138s] END
I0601 11:23:55.423483 1 trace.go:205] Trace[240008931]: "iptables restore" (01-Jun-2021 11:23:52.731) (total time: 2692ms):
Trace[240008931]: [2.6923865s] [2.6923865s] END
I0601 11:24:06.313801 1 trace.go:205] Trace[1485531]: "iptables restore" (01-Jun-2021 11:24:03.730) (total time: 2583ms):
Trace[1485531]: [2.5837015s] [2.5837015s] END

==> kube-scheduler [928c59af93c3] <==
I0601 09:39:09.431458 1 serving.go:331] Generated self-signed cert in-memory
W0601 09:39:11.739072 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0601 09:39:11.739182 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0601 09:39:11.739283 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0601 09:39:11.739299 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0601 09:39:11.847356 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0601 09:39:11.848921 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0601 09:39:11.849010 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0601 09:39:11.849022 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
E0601 09:39:11.916372 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0601 09:39:11.917288 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0601 09:39:11.917358 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0601 09:39:11.917789 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0601 09:39:11.917948 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0601 09:39:11.918662 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0601 09:39:11.919867 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0601 09:39:11.920234 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0601 09:39:11.920414 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0601 09:39:11.920507 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0601 09:39:11.921375 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0601 09:39:11.921473 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0601 09:39:12.779715 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0601 09:39:12.868890 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
I0601 09:39:13.249160 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file

==> kubelet <==
-- Logs begin at Tue 2021-06-01 09:38:16 UTC, end at Tue 2021-06-01 12:16:22 UTC. --
Jun 01 11:22:42 minikube kubelet[2422]: I0601 11:22:42.941860 2422 scope.go:95] [topologymanager] RemoveContainer - Container ID: f564a9e307ef1be29e55f6a3983aeab9fdde09fd4d0b5b4172043941bdd4a481
Jun 01 11:22:42 minikube kubelet[2422]: E0601 11:22:42.942572 2422 remote_runtime.go:332] ContainerStatus "f564a9e307ef1be29e55f6a3983aeab9fdde09fd4d0b5b4172043941bdd4a481" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: f564a9e307ef1be29e55f6a3983aeab9fdde09fd4d0b5b4172043941bdd4a481
Jun 01 11:22:42 minikube kubelet[2422]: W0601 11:22:42.942650 2422 pod_container_deletor.go:52] [pod_container_deletor] DeleteContainer returned error for (id={docker f564a9e307ef1be29e55f6a3983aeab9fdde09fd4d0b5b4172043941bdd4a481}): failed to get container status "f564a9e307ef1be29e55f6a3983aeab9fdde09fd4d0b5b4172043941bdd4a481": rpc error: code = Unknown desc = Error: No such container: f564a9e307ef1be29e55f6a3983aeab9fdde09fd4d0b5b4172043941bdd4a481
Jun 01 11:22:42 minikube kubelet[2422]: I0601 11:22:42.942672 2422 scope.go:95] [topologymanager] RemoveContainer - Container ID: cfec0382f2de228219b04def29ab99acfd949d21b51de1fd72c67d2af9c53699
Jun 01 11:22:42 minikube kubelet[2422]: E0601 11:22:42.943614 2422 remote_runtime.go:332] ContainerStatus "cfec0382f2de228219b04def29ab99acfd949d21b51de1fd72c67d2af9c53699" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: cfec0382f2de228219b04def29ab99acfd949d21b51de1fd72c67d2af9c53699
Jun 01 11:22:42 minikube kubelet[2422]: W0601 11:22:42.943687 2422 pod_container_deletor.go:52] [pod_container_deletor] DeleteContainer returned error for (id={docker cfec0382f2de228219b04def29ab99acfd949d21b51de1fd72c67d2af9c53699}): failed to get container status "cfec0382f2de228219b04def29ab99acfd949d21b51de1fd72c67d2af9c53699": rpc error: code = Unknown desc = Error: No such container: cfec0382f2de228219b04def29ab99acfd949d21b51de1fd72c67d2af9c53699
Jun 01 11:22:42 minikube kubelet[2422]: I0601 11:22:42.943707 2422 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2d347d10bf71f23a47b88e3a949005f1e92bf95f445b690be68325f2da2e814e
Jun 01 11:22:42 minikube kubelet[2422]: E0601 11:22:42.944392 2422 remote_runtime.go:332] ContainerStatus "2d347d10bf71f23a47b88e3a949005f1e92bf95f445b690be68325f2da2e814e" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 2d347d10bf71f23a47b88e3a949005f1e92bf95f445b690be68325f2da2e814e
Jun 01 11:22:42 minikube kubelet[2422]: W0601 11:22:42.944437 2422 pod_container_deletor.go:52] [pod_container_deletor] DeleteContainer returned error for (id={docker 2d347d10bf71f23a47b88e3a949005f1e92bf95f445b690be68325f2da2e814e}): failed to get container status "2d347d10bf71f23a47b88e3a949005f1e92bf95f445b690be68325f2da2e814e": rpc error: code = Unknown desc = Error: No such container: 2d347d10bf71f23a47b88e3a949005f1e92bf95f445b690be68325f2da2e814e
Jun 01 11:22:43 minikube kubelet[2422]: I0601 11:22:43.006362 2422 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-9ltq4" (UniqueName: "kubernetes.io/secret/1fff3ecf-be66-42f9-98d9-da85ad87c6e9-default-token-9ltq4") pod "1fff3ecf-be66-42f9-98d9-da85ad87c6e9" (UID: "1fff3ecf-be66-42f9-98d9-da85ad87c6e9")
Jun 01 11:22:43 minikube kubelet[2422]: I0601 11:22:43.006490 2422 reconciler.go:196] operationExecutor.UnmountVolume started for volume "frontend-storage" (UniqueName: "kubernetes.io/host-path/1fff3ecf-be66-42f9-98d9-da85ad87c6e9-html-storage-pv") pod "1fff3ecf-be66-42f9-98d9-da85ad87c6e9" (UID: "1fff3ecf-be66-42f9-98d9-da85ad87c6e9")
Jun 01 11:22:43 minikube kubelet[2422]: I0601 11:22:43.006738 2422 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-9ltq4" (UniqueName: "kubernetes.io/secret/b5d1afa6-2a03-4da0-b692-232cb4a692ba-default-token-9ltq4") pod "b5d1afa6-2a03-4da0-b692-232cb4a692ba" (UID: "b5d1afa6-2a03-4da0-b692-232cb4a692ba")
Jun 01 11:22:43 minikube kubelet[2422]: I0601 11:22:43.006810 2422 reconciler.go:196] operationExecutor.UnmountVolume started for volume "data-storage" (UniqueName: "kubernetes.io/host-path/1fff3ecf-be66-42f9-98d9-da85ad87c6e9-data-pv") pod "1fff3ecf-be66-42f9-98d9-da85ad87c6e9" (UID: "1fff3ecf-be66-42f9-98d9-da85ad87c6e9")
Jun 01 11:22:43 minikube kubelet[2422]: I0601 11:22:43.006927 2422 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1fff3ecf-be66-42f9-98d9-da85ad87c6e9-data-pv" (OuterVolumeSpecName: "data-storage") pod "1fff3ecf-be66-42f9-98d9-da85ad87c6e9" (UID: "1fff3ecf-be66-42f9-98d9-da85ad87c6e9"). InnerVolumeSpecName "data-pv". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 01 11:22:43 minikube kubelet[2422]: I0601 11:22:43.007014 2422 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1fff3ecf-be66-42f9-98d9-da85ad87c6e9-html-storage-pv" (OuterVolumeSpecName: "frontend-storage") pod "1fff3ecf-be66-42f9-98d9-da85ad87c6e9" (UID: "1fff3ecf-be66-42f9-98d9-da85ad87c6e9"). InnerVolumeSpecName "html-storage-pv". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 01 11:22:43 minikube kubelet[2422]: I0601 11:22:43.010942 2422 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fff3ecf-be66-42f9-98d9-da85ad87c6e9-default-token-9ltq4" (OuterVolumeSpecName: "default-token-9ltq4") pod "1fff3ecf-be66-42f9-98d9-da85ad87c6e9" (UID: "1fff3ecf-be66-42f9-98d9-da85ad87c6e9"). InnerVolumeSpecName "default-token-9ltq4". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jun 01 11:22:43 minikube kubelet[2422]: I0601 11:22:43.011622 2422 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5d1afa6-2a03-4da0-b692-232cb4a692ba-default-token-9ltq4" (OuterVolumeSpecName: "default-token-9ltq4") pod "b5d1afa6-2a03-4da0-b692-232cb4a692ba" (UID: "b5d1afa6-2a03-4da0-b692-232cb4a692ba"). InnerVolumeSpecName "default-token-9ltq4". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jun 01 11:22:43 minikube kubelet[2422]: I0601 11:22:43.107536 2422 reconciler.go:319] Volume detached for volume "default-token-9ltq4" (UniqueName: "kubernetes.io/secret/b5d1afa6-2a03-4da0-b692-232cb4a692ba-default-token-9ltq4") on node "minikube" DevicePath ""
Jun 01 11:22:43 minikube kubelet[2422]: I0601 11:22:43.107881 2422 reconciler.go:319] Volume detached for volume "data-pv" (UniqueName: "kubernetes.io/host-path/1fff3ecf-be66-42f9-98d9-da85ad87c6e9-data-pv") on node "minikube" DevicePath ""
Jun 01 11:22:43 minikube kubelet[2422]: I0601 11:22:43.107935 2422 reconciler.go:319] Volume detached for volume "default-token-9ltq4" (UniqueName: "kubernetes.io/secret/1fff3ecf-be66-42f9-98d9-da85ad87c6e9-default-token-9ltq4") on node "minikube" DevicePath ""
Jun 01 11:22:43 minikube kubelet[2422]: I0601 11:22:43.107991 2422 reconciler.go:319] Volume detached for volume "html-storage-pv" (UniqueName: "kubernetes.io/host-path/1fff3ecf-be66-42f9-98d9-da85ad87c6e9-html-storage-pv") on node "minikube" DevicePath ""
Jun 01 11:22:43 minikube kubelet[2422]: E0601 11:22:43.393931 2422 kuberuntime_container.go:662] killContainer "django-webapp"(id={"docker" "f92fe63750b32e5f195548b50c964b55b5fab1482c9da5646bd625ae2279d125"}) for pod "" failed: rpc error: code = Unknown desc = Error: No such container: f92fe63750b32e5f195548b50c964b55b5fab1482c9da5646bd625ae2279d125
Jun 01 11:22:43 minikube kubelet[2422]: E0601 11:22:43.394267 2422 kuberuntime_container.go:662] killContainer "back-end"(id={"docker" "f564a9e307ef1be29e55f6a3983aeab9fdde09fd4d0b5b4172043941bdd4a481"}) for pod "" failed: rpc error: code = Unknown desc = Error: No such container: f564a9e307ef1be29e55f6a3983aeab9fdde09fd4d0b5b4172043941bdd4a481
Jun 01 11:22:43 minikube kubelet[2422]: E0601 11:22:43.396283 2422 kubelet_pods.go:1256] Failed killing the pod "django-webapp-56976bffc8-k4swq": failed to "KillContainer" for "django-webapp" with KillContainerError: "rpc error: code = Unknown desc = Error: No such container: f92fe63750b32e5f195548b50c964b55b5fab1482c9da5646bd625ae2279d125"
Jun 01 11:22:43 minikube kubelet[2422]: E0601 11:22:43.396391 2422 kubelet_pods.go:1256] Failed killing the pod "mia-dashboard-595c88fdcd-bzdkf": failed to "KillContainer" for "back-end" with KillContainerError: "rpc error: code = Unknown desc = Error: No such container: f564a9e307ef1be29e55f6a3983aeab9fdde09fd4d0b5b4172043941bdd4a481"
Jun 01 11:22:43 minikube kubelet[2422]: W0601 11:22:43.914686 2422 docker_sandbox.go:240] Both sandbox container and checkpoint for id "840c81cad7d4987e4d7157cffe1cf1307bbb10cc7cd6ea09d201c6810185144c" could not be found. Proceed without further sandbox information.
Jun 01 11:22:43 minikube kubelet[2422]: W0601 11:22:43.917857 2422 docker_sandbox.go:240] Both sandbox container and checkpoint for id "840c81cad7d4987e4d7157cffe1cf1307bbb10cc7cd6ea09d201c6810185144c" could not be found. Proceed without further sandbox information.
Jun 01 11:22:55 minikube kubelet[2422]: W0601 11:22:55.980784 2422 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jun 01 11:22:55 minikube kubelet[2422]: W0601 11:22:55.981185 2422 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jun 01 11:23:44 minikube kubelet[2422]: I0601 11:23:44.257271 2422 topology_manager.go:187] [topologymanager] Topology Admit Handler
Jun 01 11:23:44 minikube kubelet[2422]: I0601 11:23:44.298690 2422 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-9ltq4" (UniqueName: "kubernetes.io/secret/462d47a2-55e1-49d1-bffa-7f4d6172efde-default-token-9ltq4") pod "django-webapp-56976bffc8-qt4m5" (UID: "462d47a2-55e1-49d1-bffa-7f4d6172efde")
Jun 01 11:23:44 minikube kubelet[2422]: W0601 11:23:44.872203 2422 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/django-webapp-56976bffc8-qt4m5 through plugin: invalid network status for
Jun 01 11:23:45 minikube kubelet[2422]: W0601 11:23:45.258621 2422 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/django-webapp-56976bffc8-qt4m5 through plugin: invalid network status for
Jun 01 11:23:47 minikube kubelet[2422]: I0601 11:23:47.258353 2422 topology_manager.go:187] [topologymanager] Topology Admit Handler
Jun 01 11:23:47 minikube kubelet[2422]: I0601 11:23:47.308816 2422 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "data-pv" (UniqueName: "kubernetes.io/host-path/47b651e5-16be-4adc-a18d-440e06f99658-data-pv") pod "mia-dashboard-595c88fdcd-vhdbf" (UID: "47b651e5-16be-4adc-a18d-440e06f99658")
Jun 01 11:23:47 minikube kubelet[2422]: I0601 11:23:47.308917 2422 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-9ltq4" (UniqueName: "kubernetes.io/secret/47b651e5-16be-4adc-a18d-440e06f99658-default-token-9ltq4") pod "mia-dashboard-595c88fdcd-vhdbf" (UID: "47b651e5-16be-4adc-a18d-440e06f99658")
Jun 01 11:23:47 minikube kubelet[2422]: I0601 11:23:47.308965 2422 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "html-storage-pv" (UniqueName: "kubernetes.io/host-path/47b651e5-16be-4adc-a18d-440e06f99658-html-storage-pv") pod "mia-dashboard-595c88fdcd-vhdbf" (UID: "47b651e5-16be-4adc-a18d-440e06f99658")
Jun 01 11:23:47 minikube kubelet[2422]: W0601 11:23:47.945655 2422 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/mia-dashboard-595c88fdcd-vhdbf through plugin: invalid network status for
Jun 01 11:23:48 minikube kubelet[2422]: W0601 11:23:48.284480 2422 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/mia-dashboard-595c88fdcd-vhdbf through plugin: invalid network status for
Jun 01 11:23:49 minikube kubelet[2422]: W0601 11:23:49.315517 2422 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/mia-dashboard-595c88fdcd-vhdbf through plugin: invalid network status for
Jun 01 11:27:55 minikube kubelet[2422]: W0601 11:27:55.633775 2422 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jun 01 11:27:55 minikube kubelet[2422]: W0601 11:27:55.636696 2422 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jun 01 11:32:55 minikube kubelet[2422]: W0601 11:32:55.285319 2422 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jun 01 11:32:55 minikube kubelet[2422]: W0601 11:32:55.288119 2422 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jun 01 11:37:54 minikube kubelet[2422]: W0601 11:37:54.955437 2422 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jun 01 11:37:54 minikube kubelet[2422]: W0601 11:37:54.960688 2422 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jun 01 11:42:54 minikube kubelet[2422]: W0601 11:42:54.604359 2422 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jun 01 11:42:54 minikube kubelet[2422]: W0601 11:42:54.605230 2422 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jun 01 11:47:54 minikube kubelet[2422]: W0601 11:47:54.255909 2422 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jun 01 11:47:54 minikube kubelet[2422]: W0601 11:47:54.259225 2422 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jun 01 11:52:53 minikube kubelet[2422]: W0601 11:52:53.905376 2422 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jun 01 11:52:53 minikube kubelet[2422]: W0601 11:52:53.907683 2422 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jun 01 11:57:53 minikube kubelet[2422]: W0601 11:57:53.556230 2422 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jun 01 11:57:53 minikube kubelet[2422]: W0601 11:57:53.558480 2422 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jun 01 12:02:53 minikube kubelet[2422]: W0601 12:02:53.210359 2422 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jun 01 12:02:53 minikube kubelet[2422]: W0601 12:02:53.214382 2422 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jun 01 12:07:52 minikube kubelet[2422]: W0601 12:07:52.878060 2422 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jun 01 12:07:52 minikube kubelet[2422]: W0601 12:07:52.880411 2422 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jun 01 12:12:52 minikube kubelet[2422]: W0601 12:12:52.532856 2422 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jun 01 12:12:52 minikube kubelet[2422]: W0601 12:12:52.535556 2422 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory

==> storage-provisioner [4f9775804fa5] <==
I0601 09:39:41.615764 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0601 09:39:49.620476 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0601 09:39:49.620673 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0601 09:39:49.641077 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0601 09:39:49.641151 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"eb52d39a-89ab-4cb7-98cd-be07951cbae5", APIVersion:"v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_61f40261-3111-4085-83ea-5a7da4a13e21 became leader
I0601 09:39:49.641254 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_minikube_61f40261-3111-4085-83ea-5a7da4a13e21!
I0601 09:39:49.742633 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_minikube_61f40261-3111-4085-83ea-5a7da4a13e21!

==> storage-provisioner [a68fb72787e4] <==
I0601 09:39:30.896418 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0601 09:39:40.928640 1 main.go:39] error getting server version: an error on the server ("") has prevented the request from succeeding

I've attached the logs from --alsologtostderr as a file because the output is too big to paste here.

Does anyone have an idea where this could come from? It seems to work fine at first: I can access the files in minikube and in my applications. Then, after I leave it untouched for a few minutes, the mount gets corrupted. I can't figure out the cause, so if anyone has had a similar problem, I would appreciate any information that could help. What procedure could I follow to debug an error like this? Is there any bad practice in how my Docker image uses that volume that could break it (it's a rather simple app)?
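For reference, one way to get more signal on the mount itself is something like this (a sketch, assuming the default 9p transport that minikube mount uses; the exact mount type can vary by driver):

minikube mount my/local/folder:/data --alsologtostderr -v 5   # rerun the mount with verbose logging to catch transport errors
minikube ssh                                                  # then, inside the node:
mount | grep /data                                            # a healthy mount shows a 9p filesystem entry here
dmesg | grep -i 9p                                            # kernel-side 9p errors often explain the I/O errors
stat /data                                                    # "Input/output error" here points at the transport, not the files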

@montmejat
Author

Update: this seems to come from the minikube mount command and the shared folders. If I copy the files directly into minikube instead, I don't get the corrupted-files error.
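For anyone wanting the same workaround, copying files straight into the node can look like this (a sketch: the paths are placeholders, docker cp assumes the docker driver where the node runs as a container named minikube, and minikube cp only ships in newer minikube releases):

docker cp my/local/folder/. minikube:/data/            # copy the folder's contents into the node container
minikube cp my/local/folder/app.conf /data/app.conf    # or copy a single file with newer minikube releases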

@RA489

RA489 commented Jun 7, 2021

/kind support

@k8s-ci-robot k8s-ci-robot added the kind/support Categorizes issue or PR as a support question. label Jun 7, 2021
@spowelljr spowelljr added area/mount long-term-support Long-term support issues that can't be fixed in code labels Jul 14, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 12, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Nov 11, 2021
@guiliguili

I have the same issue as @aurelien-m:

  • minikube
  • macOS 12.0.1

@sharifelgamal sharifelgamal added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels Dec 22, 2021
@spowelljr
Member

Hi @aurelien-m, this should have been fixed with #13013, try using the newest version of minikube (v1.24.0) and let me know if it's resolved, thanks!

@spowelljr
Member

Hi @guiliguili, I see you're using the newest version of minikube and still seem to be getting this error. What driver do you use with minikube? And have you tried deleting the cluster (minikube delete --all) and creating a new one since updating to minikube v1.24.0?
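For completeness, that reset sequence would look roughly like this (note that it deletes every local minikube cluster and its data):

minikube version                        # confirm v1.24.0 or newer
minikube delete --all                   # wipe all local clusters and profiles
minikube start                          # recreate the cluster from scratch
minikube mount my/local/folder:/data    # re-add the mount and retest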

@spowelljr spowelljr added the triage/needs-information Indicates an issue needs more information in order to work on it. label Jan 6, 2022
@montmejat
Author

Thank you for coming back to this. I can't test it, as I've moved away from using Kubernetes and minikube for the moment and no longer have access to my older work on it!

@RA489

RA489 commented Jan 18, 2022

@aurelien-m should we close this issue if there are no further queries?

@montmejat
Author

I'm fine with closing it; I don't have any further queries.

@RA489

RA489 commented Jan 18, 2022

/close

@k8s-ci-robot
Contributor

@RA489: Closing this issue.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
