
Exiting due to RUNTIME_ENABLE: sudo systemctl restart docker: Process exited with status 1 #15986

Closed
uniorder opened this issue Mar 7, 2023 · 5 comments
Labels: l/zh-CN (Issues in or relating to Chinese) · lifecycle/rotten (Denotes an issue or PR that has aged beyond stale and will be auto-closed.)

Comments

uniorder commented Mar 7, 2023

Commands needed to reproduce the problem

Full output of the failed command

😄 minikube v1.29.0 on Debian bookworm/sid
✨ Using the docker driver based on existing profile
❗ docker is currently using the zfs storage driver, consider switching to overlay2 for better performance
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
🏃 Updating the running docker "minikube" container ...

❌ Exiting due to RUNTIME_ENABLE: sudo systemctl restart docker: Process exited with status 1
stdout:

stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.

╭─────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                         │
│    😿  If the above advice does not help, please let us know:                           │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                         │
│                                                                                         │
│    Please run minikube logs --file=logs.txt and attach logs.txt to the GitHub issue.    │
│                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────╯
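A note on one possible avenue: the ❗ warning above shows the host Docker daemon is using the zfs storage driver, and the failure is `sudo systemctl restart docker` inside the minikube node. This is only a guess, not a confirmed root cause, but minikube itself recommends overlay2, and one hedged experiment is to switch the host daemon's storage driver before recreating the cluster. Caveat: changing the storage driver effectively abandons existing local images and containers, and overlay2 may require /var/lib/docker to live on a non-ZFS filesystem.

```json
{
  "storage-driver": "overlay2"
}
```

This fragment would go in /etc/docker/daemon.json on the host, followed by restarting the Docker service and recreating the minikube profile (minikube delete, then minikube start --driver=docker). As the stderr suggests, "systemctl status docker.service" and "journalctl -xe" on the node are the place to confirm whether the storage driver is actually the failing piece.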

Output of the minikube logs command

* 
* ==> Audit <==
* |---------|-------------------|----------|-------|---------|---------------------|---------------------|
| Command |       Args        | Profile  | User  | Version |     Start Time      |      End Time       |
|---------|-------------------|----------|-------|---------|---------------------|---------------------|
| start   |                   | minikube | kehaw | v1.29.0 | 07 Mar 23 10:46 CST |                     |
| config  | set driver docker | minikube | kehaw | v1.29.0 | 07 Mar 23 10:54 CST | 07 Mar 23 10:54 CST |
| start   | --driver=docker   | minikube | kehaw | v1.29.0 | 07 Mar 23 10:54 CST |                     |
| start   |                   | minikube | kehaw | v1.29.0 | 07 Mar 23 10:57 CST |                     |
| delete  |                   | minikube | kehaw | v1.29.0 | 07 Mar 23 11:16 CST | 07 Mar 23 11:16 CST |
| start   | --driver=docker   | minikube | kehaw | v1.29.0 | 07 Mar 23 11:16 CST |                     |
| start   |                   | minikube | kehaw | v1.29.0 | 07 Mar 23 11:18 CST |                     |
| kubectl | -- get po -A      | minikube | kehaw | v1.29.0 | 07 Mar 23 11:20 CST |                     |
| start   |                   | minikube | kehaw | v1.29.0 | 07 Mar 23 11:22 CST |                     |
|---------|-------------------|----------|-------|---------|---------------------|---------------------|

* 
* ==> Last Start <==
* Log file created at: 2023/03/07 11:22:49
Running on machine: kehaw-desktop
Binary: Built with gc go1.19.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0307 11:22:49.855154   10524 out.go:296] Setting OutFile to fd 1 ...
I0307 11:22:49.855606   10524 out.go:348] isatty.IsTerminal(1) = true
I0307 11:22:49.855610   10524 out.go:309] Setting ErrFile to fd 2...
I0307 11:22:49.855614   10524 out.go:348] isatty.IsTerminal(2) = true
I0307 11:22:49.855978   10524 root.go:334] Updating PATH: /home/kehaw/.minikube/bin
I0307 11:22:49.857102   10524 out.go:303] Setting JSON to false
I0307 11:22:49.858641   10524 start.go:125] hostinfo: {"hostname":"kehaw-desktop","uptime":91,"bootTime":1678159279,"procs":478,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"bookworm/sid","kernelVersion":"5.19.0-35-generic","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostId":"7495c662-ec99-4854-811f-debefcd8b2ec"}
I0307 11:22:49.858754   10524 start.go:135] virtualization: kvm host
I0307 11:22:49.860173   10524 out.go:177] 😄  minikube v1.29.0 on Debian bookworm/sid
I0307 11:22:49.861356   10524 notify.go:220] Checking for updates...
W0307 11:22:49.861667   10524 preload.go:295] Failed to list preload files: open /home/kehaw/.minikube/cache/preloaded-tarball: no such file or directory
I0307 11:22:49.861872   10524 config.go:180] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0307 11:22:49.863643   10524 driver.go:365] Setting default libvirt URI to qemu:///system
I0307 11:22:50.101195   10524 docker.go:141] docker version: linux-23.0.1:Docker Engine - Community
I0307 11:22:50.101617   10524 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0307 11:22:50.175134   10524 info.go:266] docker info: {ID:d55f1e6b-37fc-43fa-93a7-f5e4240d7783 Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:1 Driver:zfs DriverStatus:[[Zpool rpool] [Zpool Health ONLINE] [Parent Dataset rpool/ROOT/ubuntu_tneqb2/var/lib] [Space Used By Parent 4117721088] [Space Available 53699051520] [Parent Quota no] [Compression lz4]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:22 OomKillDisable:false NGoroutines:31 SystemTime:2023-03-07 11:22:50.166309899 +0800 CST LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:5.19.0-35-generic OperatingSystem:Ubuntu 22.10 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:33396248576 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:kehaw-desktop Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} 
SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0307 11:22:50.175354   10524 docker.go:282] overlay module found
I0307 11:22:50.176064   10524 out.go:177] ✨  Using the docker driver based on existing profile
I0307 11:22:50.176695   10524 start.go:296] selected driver: docker
I0307 11:22:50.176796   10524 start.go:857] validating driver "docker" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:7900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/kehaw:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: 
SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0307 11:22:50.176844   10524 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0307 11:22:50.177066   10524 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0307 11:22:50.248718   10524 info.go:266] docker info: {ID:d55f1e6b-37fc-43fa-93a7-f5e4240d7783 Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:1 Driver:zfs DriverStatus:[[Zpool rpool] [Zpool Health ONLINE] [Parent Dataset rpool/ROOT/ubuntu_tneqb2/var/lib] [Space Used By Parent 4117721088] [Space Available 53699051520] [Parent Quota no] [Compression lz4]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:22 OomKillDisable:false NGoroutines:31 SystemTime:2023-03-07 11:22:50.240823562 +0800 CST LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:5.19.0-35-generic OperatingSystem:Ubuntu 22.10 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:33396248576 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:kehaw-desktop Labels:[] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} 
SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.16.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
W0307 11:22:50.248921   10524 out.go:239] ❗  docker is currently using the zfs storage driver, consider switching to overlay2 for better performance
I0307 11:22:50.250349   10524 cni.go:84] Creating CNI manager for ""
I0307 11:22:50.250367   10524 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0307 11:22:50.250430   10524 start_flags.go:319] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:7900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/kehaw:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0307 11:22:50.251272   10524 out.go:177] 👍  Starting control plane node minikube in cluster minikube
I0307 11:22:50.251962   10524 cache.go:120] Beginning downloading kic base image for docker with docker
I0307 11:22:50.252537   10524 out.go:177] 🚜  Pulling base image ...
I0307 11:22:50.253208   10524 image.go:77] Checking for gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon
I0307 11:22:50.253288   10524 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0307 11:22:50.253371   10524 profile.go:148] Saving config to /home/kehaw/.minikube/profiles/minikube/config.json ...
I0307 11:22:50.253907   10524 cache.go:107] acquiring lock: {Name:mkf8cdde529cccaf12b5547876976aeac85ce853 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0307 11:22:50.253908   10524 cache.go:107] acquiring lock: {Name:mk58c5f5997525c765339fcea57ee731c8e6f67c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0307 11:22:50.254057   10524 cache.go:107] acquiring lock: {Name:mk436220cc95d2c6a2f94cc578e2082e8e090189 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0307 11:22:50.254059   10524 cache.go:107] acquiring lock: {Name:mkd5b03dff843a1dfc848f5fcd29bc891ef59933 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0307 11:22:50.254061   10524 cache.go:107] acquiring lock: {Name:mkfda1ef13131b8dbcfe27ee4e24d9725da36bb7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0307 11:22:50.254060   10524 cache.go:107] acquiring lock: {Name:mk72e9c1dfbf81120885cf91e675aa24a2acca5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0307 11:22:50.254063   10524 cache.go:107] acquiring lock: {Name:mk6f8369bc62d72b06ee620df2866dfffde55b43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0307 11:22:50.254064   10524 cache.go:107] acquiring lock: {Name:mk6b9ab5a76f2fff3f0e07c1a5af39077098336d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0307 11:22:50.254699   10524 cache.go:115] /home/kehaw/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.26.1 exists
I0307 11:22:50.254700   10524 cache.go:115] /home/kehaw/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
I0307 11:22:50.254701   10524 cache.go:115] /home/kehaw/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.26.1 exists
I0307 11:22:50.254718   10524 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.26.1" -> "/home/kehaw/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.26.1" took 817.327µs
I0307 11:22:50.254720   10524 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.26.1" -> "/home/kehaw/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.26.1" took 818.788µs
I0307 11:22:50.254720   10524 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/kehaw/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 814.942µs
I0307 11:22:50.254731   10524 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.26.1 -> /home/kehaw/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.26.1 succeeded
I0307 11:22:50.254731   10524 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/kehaw/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
I0307 11:22:50.254732   10524 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.26.1 -> /home/kehaw/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.26.1 succeeded
I0307 11:22:50.254775   10524 cache.go:115] /home/kehaw/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.26.1 exists
I0307 11:22:50.254777   10524 cache.go:115] /home/kehaw/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.6-0 exists
I0307 11:22:50.254783   10524 cache.go:115] /home/kehaw/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
I0307 11:22:50.254782   10524 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.26.1" -> "/home/kehaw/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.26.1" took 889.65µs
I0307 11:22:50.254783   10524 cache.go:96] cache image "registry.k8s.io/etcd:3.5.6-0" -> "/home/kehaw/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.6-0" took 889.292µs
I0307 11:22:50.254787   10524 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.26.1 -> /home/kehaw/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.26.1 succeeded
I0307 11:22:50.254789   10524 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.6-0 -> /home/kehaw/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.6-0 succeeded
I0307 11:22:50.254791   10524 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/kehaw/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 902.278µs
I0307 11:22:50.254799   10524 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/kehaw/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
I0307 11:22:50.254813   10524 cache.go:115] /home/kehaw/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 exists
I0307 11:22:50.254818   10524 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.9.3" -> "/home/kehaw/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3" took 930.691µs
I0307 11:22:50.254824   10524 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.9.3 -> /home/kehaw/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 succeeded
I0307 11:22:50.254895   10524 cache.go:115] /home/kehaw/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.26.1 exists
I0307 11:22:50.254900   10524 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.26.1" -> "/home/kehaw/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.26.1" took 1.01214ms
I0307 11:22:50.254904   10524 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.26.1 -> /home/kehaw/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.26.1 succeeded
I0307 11:22:50.254909   10524 cache.go:87] Successfully saved all images to host disk.
I0307 11:22:50.292057   10524 image.go:81] Found gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon, skipping pull
I0307 11:22:50.292078   10524 cache.go:143] gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 exists in daemon, skipping load
I0307 11:22:50.292100   10524 cache.go:193] Successfully downloaded all kic artifacts
I0307 11:22:50.292236   10524 start.go:364] acquiring machines lock for minikube: {Name:mk95f277aa6f507edc1589c727996cc5a31fb51f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0307 11:22:50.292330   10524 start.go:368] acquired machines lock for "minikube" in 82.604µs
I0307 11:22:50.292351   10524 start.go:96] Skipping create...Using existing machine configuration
I0307 11:22:50.292366   10524 fix.go:55] fixHost starting: 
I0307 11:22:50.292654   10524 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0307 11:22:50.329500   10524 fix.go:103] recreateIfNeeded on minikube: state=Stopped err=<nil>
W0307 11:22:50.329519   10524 fix.go:129] unexpected machine state, will restart: <nil>
I0307 11:22:50.330258   10524 out.go:177] 🔄  Restarting existing docker container for "minikube" ...
I0307 11:22:50.330764   10524 cli_runner.go:164] Run: docker start minikube
I0307 11:22:50.766951   10524 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0307 11:22:50.803700   10524 kic.go:426] container "minikube" state is running.
I0307 11:22:50.804558   10524 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0307 11:22:50.844183   10524 profile.go:148] Saving config to /home/kehaw/.minikube/profiles/minikube/config.json ...
I0307 11:22:50.844384   10524 machine.go:88] provisioning docker machine ...
I0307 11:22:50.844834   10524 ubuntu.go:169] provisioning hostname "minikube"
I0307 11:22:50.844950   10524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0307 11:22:50.882156   10524 main.go:141] libmachine: Using SSH client type: native
I0307 11:22:50.882725   10524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
I0307 11:22:50.882732   10524 main.go:141] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0307 11:22:50.883928   10524 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42810->127.0.0.1:32772: read: connection reset by peer
I0307 11:22:54.021788   10524 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube

I0307 11:22:54.021905   10524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0307 11:22:54.059016   10524 main.go:141] libmachine: Using SSH client type: native
I0307 11:22:54.059127   10524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
I0307 11:22:54.059136   10524 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\sminikube' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
			else 
				echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
			fi
		fi
I0307 11:22:54.173773   10524 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I0307 11:22:54.173870   10524 ubuntu.go:175] set auth options {CertDir:/home/kehaw/.minikube CaCertPath:/home/kehaw/.minikube/certs/ca.pem CaPrivateKeyPath:/home/kehaw/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/kehaw/.minikube/machines/server.pem ServerKeyPath:/home/kehaw/.minikube/machines/server-key.pem ClientKeyPath:/home/kehaw/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/kehaw/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/kehaw/.minikube}
I0307 11:22:54.173879   10524 ubuntu.go:177] setting up certificates
I0307 11:22:54.173938   10524 provision.go:83] configureAuth start
I0307 11:22:54.173969   10524 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0307 11:22:54.208836   10524 provision.go:138] copyHostCerts
I0307 11:22:54.209201   10524 exec_runner.go:144] found /home/kehaw/.minikube/ca.pem, removing ...
I0307 11:22:54.209284   10524 exec_runner.go:207] rm: /home/kehaw/.minikube/ca.pem
I0307 11:22:54.209334   10524 exec_runner.go:151] cp: /home/kehaw/.minikube/certs/ca.pem --> /home/kehaw/.minikube/ca.pem (1074 bytes)
I0307 11:22:54.209716   10524 exec_runner.go:144] found /home/kehaw/.minikube/cert.pem, removing ...
I0307 11:22:54.209719   10524 exec_runner.go:207] rm: /home/kehaw/.minikube/cert.pem
I0307 11:22:54.209750   10524 exec_runner.go:151] cp: /home/kehaw/.minikube/certs/cert.pem --> /home/kehaw/.minikube/cert.pem (1119 bytes)
I0307 11:22:54.209914   10524 exec_runner.go:144] found /home/kehaw/.minikube/key.pem, removing ...
I0307 11:22:54.209917   10524 exec_runner.go:207] rm: /home/kehaw/.minikube/key.pem
I0307 11:22:54.209945   10524 exec_runner.go:151] cp: /home/kehaw/.minikube/certs/key.pem --> /home/kehaw/.minikube/key.pem (1675 bytes)
I0307 11:22:54.210076   10524 provision.go:112] generating server cert: /home/kehaw/.minikube/machines/server.pem ca-key=/home/kehaw/.minikube/certs/ca.pem private-key=/home/kehaw/.minikube/certs/ca-key.pem org=kehaw.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0307 11:22:54.354678   10524 provision.go:172] copyRemoteCerts
I0307 11:22:54.358199   10524 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0307 11:22:54.358224   10524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0307 11:22:54.399402   10524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/kehaw/.minikube/machines/minikube/id_rsa Username:docker}
I0307 11:22:54.483682   10524 ssh_runner.go:362] scp /home/kehaw/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
I0307 11:22:54.497360   10524 ssh_runner.go:362] scp /home/kehaw/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0307 11:22:54.507329   10524 ssh_runner.go:362] scp /home/kehaw/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1074 bytes)
I0307 11:22:54.517326   10524 provision.go:86] duration metric: configureAuth took 343.369344ms
I0307 11:22:54.517333   10524 ubuntu.go:193] setting minikube options for container-runtime
I0307 11:22:54.517433   10524 config.go:180] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0307 11:22:54.517459   10524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0307 11:22:54.554604   10524 main.go:141] libmachine: Using SSH client type: native
I0307 11:22:54.554708   10524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
I0307 11:22:54.554713   10524 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0307 11:22:54.674042   10524 main.go:141] libmachine: SSH cmd err, output: <nil>: zfs

I0307 11:22:54.674051   10524 ubuntu.go:71] root file system type: zfs
I0307 11:22:54.674291   10524 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0307 11:22:54.674427   10524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0307 11:22:54.710897   10524 main.go:141] libmachine: Using SSH client type: native
I0307 11:22:54.711026   10524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
I0307 11:22:54.711069   10524 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0307 11:22:54.834852   10524 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0307 11:22:54.834902   10524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0307 11:22:54.872275   10524 main.go:141] libmachine: Using SSH client type: native
I0307 11:22:54.872377   10524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
I0307 11:22:54.872388   10524 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0307 11:22:54.994835   10524 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I0307 11:22:54.994843   10524 machine.go:91] provisioned docker machine in 4.15045535s
I0307 11:22:54.994862   10524 start.go:300] post-start starting for "minikube" (driver="docker")
I0307 11:22:54.994866   10524 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0307 11:22:54.994900   10524 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0307 11:22:54.994922   10524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0307 11:22:55.030814   10524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/kehaw/.minikube/machines/minikube/id_rsa Username:docker}
I0307 11:22:55.118816   10524 ssh_runner.go:195] Run: cat /etc/os-release
I0307 11:22:55.120233   10524 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0307 11:22:55.120242   10524 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0307 11:22:55.120247   10524 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0307 11:22:55.120250   10524 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I0307 11:22:55.120345   10524 filesync.go:126] Scanning /home/kehaw/.minikube/addons for local assets ...
I0307 11:22:55.120385   10524 filesync.go:126] Scanning /home/kehaw/.minikube/files for local assets ...
I0307 11:22:55.120410   10524 start.go:303] post-start completed in 125.544043ms
I0307 11:22:55.120441   10524 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0307 11:22:55.120463   10524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0307 11:22:55.156050   10524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/kehaw/.minikube/machines/minikube/id_rsa Username:docker}
I0307 11:22:55.238624   10524 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0307 11:22:55.240528   10524 fix.go:57] fixHost completed within 4.948167334s
I0307 11:22:55.240534   10524 start.go:83] releasing machines lock for "minikube", held for 4.948199263s
I0307 11:22:55.240567   10524 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0307 11:22:55.275733   10524 ssh_runner.go:195] Run: cat /version.json
I0307 11:22:55.275758   10524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0307 11:22:55.275859   10524 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0307 11:22:55.275923   10524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0307 11:22:55.316338   10524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/kehaw/.minikube/machines/minikube/id_rsa Username:docker}
I0307 11:22:55.316721   10524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/kehaw/.minikube/machines/minikube/id_rsa Username:docker}
I0307 11:22:55.694775   10524 ssh_runner.go:195] Run: systemctl --version
I0307 11:22:55.700050   10524 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0307 11:22:55.702449   10524 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0307 11:22:55.712537   10524 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0307 11:22:55.712823   10524 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0307 11:22:55.716797   10524 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
I0307 11:22:55.724602   10524 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0307 11:22:55.728591   10524 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0307 11:22:55.728609   10524 start.go:483] detecting cgroup driver to use...
I0307 11:22:55.728635   10524 detect.go:199] detected "systemd" cgroup driver on host os
I0307 11:22:55.728774   10524 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0307 11:22:55.736479   10524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0307 11:22:55.741252   10524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0307 11:22:55.745787   10524 containerd.go:145] configuring containerd to use "systemd" as cgroup driver...
I0307 11:22:55.745811   10524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
I0307 11:22:55.750100   10524 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0307 11:22:55.754439   10524 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0307 11:22:55.758736   10524 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0307 11:22:55.762794   10524 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0307 11:22:55.766620   10524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0307 11:22:55.770771   10524 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0307 11:22:55.774672   10524 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0307 11:22:55.778175   10524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0307 11:22:55.869813   10524 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0307 11:22:55.905116   10524 start.go:483] detecting cgroup driver to use...
I0307 11:22:55.905136   10524 detect.go:199] detected "systemd" cgroup driver on host os
I0307 11:22:55.905194   10524 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0307 11:22:55.910796   10524 cruntime.go:273] skipping containerd shutdown because we are bound to it
I0307 11:22:55.910829   10524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0307 11:22:55.916570   10524 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0307 11:22:55.923851   10524 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0307 11:22:56.023637   10524 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0307 11:22:56.069792   10524 docker.go:529] configuring docker to use "systemd" as cgroup driver...
I0307 11:22:56.069808   10524 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (143 bytes)
I0307 11:22:56.077501   10524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0307 11:22:56.199688   10524 ssh_runner.go:195] Run: sudo systemctl restart docker
I0307 11:22:56.233111   10524 out.go:177] 
W0307 11:22:56.233808   10524 out.go:239] ❌  Exiting due to RUNTIME_ENABLE: sudo systemctl restart docker: Process exited with status 1
stdout:

stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.

W0307 11:22:56.233816   10524 out.go:239] 
W0307 11:22:56.234726   10524 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
I0307 11:22:56.235369   10524 out.go:177] 

* 
* ==> Docker <==
* -- Logs begin at Tue 2023-03-07 03:22:50 UTC, end at Tue 2023-03-07 03:25:04 UTC. --
Mar 07 03:22:51 minikube systemd[1]: Failed to start Docker Application Container Engine.
Mar 07 03:22:51 minikube systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Mar 07 03:22:51 minikube systemd[1]: Stopped Docker Application Container Engine.
Mar 07 03:22:51 minikube systemd[1]: docker.service: Start request repeated too quickly.
Mar 07 03:22:51 minikube systemd[1]: docker.service: Failed with result 'exit-code'.
Mar 07 03:22:51 minikube systemd[1]: Failed to start Docker Application Container Engine.
Mar 07 03:22:56 minikube systemd[1]: Starting Docker Application Container Engine...
Mar 07 03:22:56 minikube dockerd[437]: time="2023-03-07T03:22:56.225791551Z" level=info msg="Starting up"
Mar 07 03:22:56 minikube dockerd[437]: time="2023-03-07T03:22:56.226782243Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 07 03:22:56 minikube dockerd[437]: time="2023-03-07T03:22:56.226792697Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 07 03:22:56 minikube dockerd[437]: time="2023-03-07T03:22:56.226806728Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Mar 07 03:22:56 minikube dockerd[437]: time="2023-03-07T03:22:56.226813164Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 07 03:22:56 minikube dockerd[437]: time="2023-03-07T03:22:56.227640383Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 07 03:22:56 minikube dockerd[437]: time="2023-03-07T03:22:56.227663783Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 07 03:22:56 minikube dockerd[437]: time="2023-03-07T03:22:56.227671741Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Mar 07 03:22:56 minikube dockerd[437]: time="2023-03-07T03:22:56.227692941Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 07 03:22:56 minikube dockerd[437]: time="2023-03-07T03:22:56.228851356Z" level=error msg="failed to mount overlay: invalid argument" storage-driver=overlay2
Mar 07 03:22:56 minikube dockerd[437]: failed to start daemon: error initializing graphdriver: driver not supported
Mar 07 03:22:56 minikube systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Mar 07 03:22:56 minikube systemd[1]: docker.service: Failed with result 'exit-code'.
Mar 07 03:22:56 minikube systemd[1]: Failed to start Docker Application Container Engine.
Mar 07 03:22:56 minikube systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
Mar 07 03:22:56 minikube systemd[1]: Stopped Docker Application Container Engine.
Mar 07 03:22:56 minikube systemd[1]: Starting Docker Application Container Engine...
Mar 07 03:22:56 minikube dockerd[453]: time="2023-03-07T03:22:56.501246089Z" level=info msg="Starting up"
Mar 07 03:22:56 minikube dockerd[453]: time="2023-03-07T03:22:56.502353664Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 07 03:22:56 minikube dockerd[453]: time="2023-03-07T03:22:56.502365361Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 07 03:22:56 minikube dockerd[453]: time="2023-03-07T03:22:56.502377537Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Mar 07 03:22:56 minikube dockerd[453]: time="2023-03-07T03:22:56.502383213Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 07 03:22:56 minikube dockerd[453]: time="2023-03-07T03:22:56.503689763Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 07 03:22:56 minikube dockerd[453]: time="2023-03-07T03:22:56.503715959Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 07 03:22:56 minikube dockerd[453]: time="2023-03-07T03:22:56.503738922Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Mar 07 03:22:56 minikube dockerd[453]: time="2023-03-07T03:22:56.503745839Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 07 03:22:56 minikube dockerd[453]: time="2023-03-07T03:22:56.504977713Z" level=error msg="failed to mount overlay: invalid argument" storage-driver=overlay2
Mar 07 03:22:56 minikube dockerd[453]: failed to start daemon: error initializing graphdriver: driver not supported
Mar 07 03:22:56 minikube systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Mar 07 03:22:56 minikube systemd[1]: docker.service: Failed with result 'exit-code'.
Mar 07 03:22:56 minikube systemd[1]: Failed to start Docker Application Container Engine.
Mar 07 03:22:56 minikube systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
Mar 07 03:22:56 minikube systemd[1]: Stopped Docker Application Container Engine.
Mar 07 03:22:56 minikube systemd[1]: Starting Docker Application Container Engine...
Mar 07 03:22:56 minikube dockerd[469]: time="2023-03-07T03:22:56.750896785Z" level=info msg="Starting up"
Mar 07 03:22:56 minikube dockerd[469]: time="2023-03-07T03:22:56.751859359Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 07 03:22:56 minikube dockerd[469]: time="2023-03-07T03:22:56.751868995Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 07 03:22:56 minikube dockerd[469]: time="2023-03-07T03:22:56.751880656Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Mar 07 03:22:56 minikube dockerd[469]: time="2023-03-07T03:22:56.751886463Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 07 03:22:56 minikube dockerd[469]: time="2023-03-07T03:22:56.752511036Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 07 03:22:56 minikube dockerd[469]: time="2023-03-07T03:22:56.752520233Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 07 03:22:56 minikube dockerd[469]: time="2023-03-07T03:22:56.752531543Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Mar 07 03:22:56 minikube dockerd[469]: time="2023-03-07T03:22:56.752543097Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 07 03:22:56 minikube dockerd[469]: time="2023-03-07T03:22:56.753562259Z" level=error msg="failed to mount overlay: invalid argument" storage-driver=overlay2
Mar 07 03:22:56 minikube dockerd[469]: failed to start daemon: error initializing graphdriver: driver not supported
Mar 07 03:22:56 minikube systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Mar 07 03:22:56 minikube systemd[1]: docker.service: Failed with result 'exit-code'.
Mar 07 03:22:56 minikube systemd[1]: Failed to start Docker Application Container Engine.
Mar 07 03:22:56 minikube systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Mar 07 03:22:56 minikube systemd[1]: Stopped Docker Application Container Engine.
Mar 07 03:22:56 minikube systemd[1]: docker.service: Start request repeated too quickly.
Mar 07 03:22:56 minikube systemd[1]: docker.service: Failed with result 'exit-code'.
Mar 07 03:22:56 minikube systemd[1]: Failed to start Docker Application Container Engine.

* 
* ==> container status <==
* 
* ==> describe nodes <==
* 
* ==> dmesg <==
* [Mar 7 03:21] x86/cpu: SGX disabled by BIOS.
[  +0.005713] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
[  +0.000000]   #7  #8  #9 #10 #11
[  +0.010659] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
[  +0.677978] hpet_acpi_add: no address or irqs in _CRS
[  +0.010912] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[  +0.000061] platform eisa.0: EISA: Cannot allocate resource for mainboard
[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 1
[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 2
[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 3
[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 4
[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 5
[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 6
[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 7
[  +0.000000] platform eisa.0: Cannot allocate resource for EISA slot 8
[  +0.011426] resource sanity check: requesting [mem 0xfdffe800-0xfe0007ff], which spans more than pnp 00:06 [mem 0xfdb00000-0xfdffffff]
[  +0.000002] caller pmc_core_probe+0xb6/0x250 mapping multiple BARs
[  +0.160994] acpi PNP0C14:01: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:00)
[  +0.000055] wmi_bus wmi_bus-PNP0C14:02: WQBC data block query control method not found
[  +0.000002] acpi PNP0C14:02: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:00)
[  +0.002735] acpi PNP0C14:03: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:00)
[  +0.002206] acpi PNP0C14:04: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:00)
[  +0.000075] acpi PNP0C14:05: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:00)
[  +0.001584] acpi PNP0C14:06: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:00)
[  +0.017814] usb: port power management may be unreliable
[  +0.083139] r8169 0000:02:00.0: can't disable ASPM; OS doesn't have ASPM control
[  +1.038974] spl: loading out-of-tree module taints kernel.
[  +0.007168] znvpair: module license 'CDDL' taints kernel.
[  +0.000001] Disabling lock debugging due to kernel taint
[  +1.701995] systemd[1]: Configuration file /etc/systemd/system/runsunloginclient.service is marked executable. Please remove executable permission bits. Proceeding anyway.
[  +5.675057] snd_hda_intel 0000:00:1f.3: Too many BDL entries: buffer=2097152, period=65536
[  +0.000597] snd_hda_intel 0000:00:1f.3: Too many BDL entries: buffer=2097152, period=65536
[  +0.000179] snd_hda_intel 0000:00:1f.3: Too many BDL entries: buffer=1048576, period=32768
[  +0.000485] snd_hda_intel 0000:00:1f.3: Too many BDL entries: buffer=2097152, period=65536
[  +0.000445] snd_hda_intel 0000:00:1f.3: Too many BDL entries: buffer=2097152, period=65536
[  +0.000182] snd_hda_intel 0000:00:1f.3: Too many BDL entries: buffer=1048576, period=32768
[  +3.800593] kauditd_printk_skb: 36 callbacks suppressed
[  +6.841681] snd_hda_intel 0000:00:1f.3: Too many BDL entries: buffer=2097152, period=65536
[  +0.000725] snd_hda_intel 0000:00:1f.3: Too many BDL entries: buffer=2097152, period=65536
[Mar 7 03:22] overlayfs: upper fs does not support RENAME_WHITEOUT.
[  +0.000020] overlayfs: upper fs missing required features.
[  +0.275788] overlayfs: upper fs does not support RENAME_WHITEOUT.
[  +0.000019] overlayfs: upper fs missing required features.
[  +0.246276] overlayfs: upper fs does not support RENAME_WHITEOUT.
[  +0.000020] overlayfs: upper fs missing required features.
[  +4.471535] overlayfs: upper fs does not support RENAME_WHITEOUT.
[  +0.000020] overlayfs: upper fs missing required features.
[  +0.276085] overlayfs: upper fs does not support RENAME_WHITEOUT.
[  +0.000019] overlayfs: upper fs missing required features.
[  +0.248527] overlayfs: upper fs does not support RENAME_WHITEOUT.
[  +0.000019] overlayfs: upper fs missing required features.

* 
* ==> kernel <==
*  03:25:06 up 3 min,  0 users,  load average: 0.77, 0.77, 0.36
Linux minikube 5.19.0-35-generic #36-Ubuntu SMP PREEMPT_DYNAMIC Fri Feb 3 18:36:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.5 LTS"

* 
* ==> kubelet <==
* -- Logs begin at Tue 2023-03-07 03:22:50 UTC, end at Tue 2023-03-07 03:25:06 UTC. --
-- No entries --

Operating system version used

Ubuntu 22.10
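Given the `failed to mount overlay: invalid argument` errors in the Docker logs and the `overlayfs: upper fs does not support RENAME_WHITEOUT` lines in dmesg, the failure pattern matches overlay2 being started inside the minikube container while the host Docker daemon uses the zfs storage driver (as the startup warning notes). A minimal diagnostic sketch to confirm this on the host; these commands are an assumption about a typical docker-driver setup, not something minikube runs itself:

```shell
# Hedged diagnostic sketch: check which storage driver the host daemon uses.
# Prints 'unknown' when docker is unavailable, so it is safe to run anywhere.
driver=$(docker info --format '{{.Driver}}' 2>/dev/null || echo unknown)
echo "host storage driver: $driver"

case "$driver" in
  zfs|btrfs)
    # overlay2 generally cannot be layered on these backing filesystems,
    # which matches the "failed to mount overlay: invalid argument" error above.
    echo "overlay2 inside the minikube container is likely to fail"
    ;;
esac
```

If this reports `zfs`, the commonly suggested directions are moving the host daemon's data root to a non-zfs filesystem or using a VM-based minikube driver; that is a hedge based on the storage-driver warning in the output above, not a verified fix.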

@uniorder uniorder added the l/zh-CN Issues in or relating to Chinese label Mar 7, 2023
@guigarfr

same here

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 14, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 14, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot closed this as not planned (Won't fix, can't repro, duplicate, stale) on Jan 19, 2024
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
