==> Audit <==
|---------|------|----------|----------------|---------|---------------------|----------|
| Command | Args | Profile  | User           | Version |     Start Time      | End Time |
|---------|------|----------|----------------|---------|---------------------|----------|
| start   |      | minikube | kchernopiatova | v1.33.1 | 14 Aug 24 13:45 +03 |          |
|---------|------|----------|----------------|---------|---------------------|----------|

==> Last Start <==
Log file created at: 2024/08/14 13:45:56
Running on machine: Ksenias-MacBook-Pro
Binary: Built with gc go1.22.3 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0814 13:45:56.090556 13432 out.go:291] Setting OutFile to fd 1 ...
I0814 13:45:56.090720 13432 out.go:343] isatty.IsTerminal(1) = true
I0814 13:45:56.090722 13432 out.go:304] Setting ErrFile to fd 2...
I0814 13:45:56.090725 13432 out.go:343] isatty.IsTerminal(2) = true
I0814 13:45:56.090868 13432 root.go:338] Updating PATH: /Users/kchernopiatova/.minikube/bin
W0814 13:45:56.090928 13432 root.go:314] Error reading config file at /Users/kchernopiatova/.minikube/config/config.json: open /Users/kchernopiatova/.minikube/config/config.json: no such file or directory
I0814 13:45:56.091697 13432 out.go:298] Setting JSON to false
I0814 13:45:56.114650 13432 start.go:129] hostinfo: {"hostname":"Ksenias-MacBook-Pro.local","uptime":72165,"bootTime":1723560191,"procs":323,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"8c737af5-2447-5719-8d76-83a5a51ee78b"}
W0814 13:45:56.114730 13432 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0814 13:45:56.119994 13432 out.go:177] 😄 minikube v1.33.1 on Darwin 14.5 (arm64)
W0814 13:45:56.128478 13432 preload.go:294] Failed to list preload files: open /Users/kchernopiatova/.minikube/cache/preloaded-tarball: no such file or directory
I0814 13:45:56.128817 13432 notify.go:220] Checking for updates...
I0814 13:45:56.129179 13432 driver.go:392] Setting default libvirt URI to qemu:///system I0814 13:45:56.129223 13432 global.go:112] Querying for installed drivers using PATH=/Users/kchernopiatova/.minikube/bin:/Users/kchernopiatova:/Users/kchernopiatova/go/bin:/Users/kchernopiatova/apache-maven-3.9.6/bin:/opt/homebrew/lib/ruby/gems/3.3.0/bin:/opt/homebrew/opt/ruby/bin:/opt/homebrew/opt/openjdk@11/bin:/opt/homebrew/lib/ruby/gems/3.3.0/bin:/opt/homebrew/opt/ruby/bin:/opt/homebrew/bin:/opt/homebrew/sbin:/usr/local/bin:/System/Cryptexes/App/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/local/bin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/bin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/appleinternal/bin:/Library/Apple/usr/bin:/Library/TeX/texbin I0814 13:45:56.129452 13432 global.go:133] virtualbox default: true priority: 6, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:unable to find VBoxManage in $PATH Reason: Fix:Install VirtualBox Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/virtualbox/ Version:} I0814 13:45:56.129540 13432 global.go:133] vmware default: false priority: 5, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "vmrun": executable file not found in $PATH Reason: Fix:Install vmrun Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/ Version:} I0814 13:45:56.170409 13432 docker.go:122] docker version: linux-27.1.1:Docker Desktop 4.33.0 (160616) I0814 13:45:56.170544 13432 cli_runner.go:164] Run: docker system info --format "{{json .}}" I0814 13:45:56.383030 13432 info.go:266] docker info: {ID:71ace211-dd19-4041-b323-e3d1ea226ad4 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:false NGoroutines:66 SystemTime:2024-08-14 10:45:56.371713344 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.10.0-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:4111306752 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/kchernopiatova/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init 
ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/kchernopiatova/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1-desktop.1] map[Name:compose Path:/Users/kchernopiatova/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1-desktop.1] map[Name:debug Path:/Users/kchernopiatova/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:/Users/kchernopiatova/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/kchernopiatova/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/kchernopiatova/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/kchernopiatova/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/kchernopiatova/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/kchernopiatova/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/kchernopiatova/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. 
Version:v1.11.0]] Warnings:}} I0814 13:45:56.383132 13432 global.go:133] docker default: true priority: 9, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:} I0814 13:45:56.383303 13432 global.go:133] podman default: true priority: 3, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "podman": executable file not found in $PATH Reason: Fix:Install Podman Doc:https://minikube.sigs.k8s.io/docs/drivers/podman/ Version:} I0814 13:45:56.383311 13432 global.go:133] ssh default: false priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:} I0814 13:45:56.383388 13432 global.go:133] hyperkit default: true priority: 8, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "hyperkit": executable file not found in $PATH Reason: Fix:Run 'brew install hyperkit' Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/hyperkit/ Version:} I0814 13:45:56.383449 13432 global.go:133] parallels default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "prlctl": executable file not found in $PATH Reason: Fix:Install Parallels Desktop for Mac Doc:https://minikube.sigs.k8s.io/docs/drivers/parallels/ Version:} I0814 13:45:56.383509 13432 global.go:133] qemu2 default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "qemu-system-aarch64": executable file not found in $PATH Reason: Fix:Install qemu-system Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/qemu/ Version:} I0814 13:45:56.383516 13432 driver.go:314] not recommending "ssh" due to default: false I0814 13:45:56.383530 13432 driver.go:349] Picked: docker I0814 13:45:56.383533 13432 driver.go:350] Alternatives: [ssh] I0814 13:45:56.383535 13432 driver.go:351] Rejects: [virtualbox vmware podman hyperkit parallels qemu2] I0814 13:45:56.391895 13432 out.go:177] ✨ Automatically selected the docker driver I0814 13:45:56.395674 13432 start.go:297] selected driver: docker I0814 13:45:56.395677 13432 start.go:901] validating driver "docker" against I0814 13:45:56.395698 13432 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:} I0814 13:45:56.395817 13432 cli_runner.go:164] Run: docker system info --format "{{json .}}" I0814 13:45:56.459342 13432 info.go:266] docker info: {ID:71ace211-dd19-4041-b323-e3d1ea226ad4 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:false NGoroutines:66 SystemTime:2024-08-14 10:45:56.447003594 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.10.0-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ 
RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:4111306752 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/kchernopiatova/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/kchernopiatova/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1-desktop.1] map[Name:compose Path:/Users/kchernopiatova/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1-desktop.1] map[Name:debug Path:/Users/kchernopiatova/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:/Users/kchernopiatova/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/kchernopiatova/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/kchernopiatova/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/kchernopiatova/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/kchernopiatova/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/kchernopiatova/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/kchernopiatova/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. 
Version:v1.11.0]] Warnings:}} I0814 13:45:56.459492 13432 start_flags.go:310] no existing cluster config was found, will generate one from the flags I0814 13:45:56.459606 13432 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=8192MB, container=3920MB I0814 13:45:56.459699 13432 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true] I0814 13:45:56.462955 13432 out.go:177] 📌 Using Docker Desktop driver with root privileges I0814 13:45:56.466717 13432 cni.go:84] Creating CNI manager for "" I0814 13:45:56.466737 13432 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge I0814 13:45:56.466744 13432 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni I0814 13:45:56.466785 13432 start.go:340] cluster config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} I0814 13:45:56.470828 13432 out.go:177] 👍 Starting "minikube" primary control-plane node in "minikube" cluster I0814 13:45:56.476790 13432 cache.go:121] Beginning downloading kic base image for docker with docker I0814 13:45:56.480838 13432 out.go:177] 🚜 Pulling base image v0.0.44 ... 
I0814 13:45:56.491891 13432 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
I0814 13:45:56.491894 13432 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e in local docker daemon
I0814 13:45:56.507281 13432 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e to local cache
I0814 13:45:56.507437 13432 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e in local cache directory
I0814 13:45:56.507577 13432 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e to local cache
I0814 13:45:56.631722 13432 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
I0814 13:45:56.631739 13432 cache.go:56] Caching tarball of preloaded images
I0814 13:45:56.631876 13432 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
I0814 13:45:56.641051 13432 out.go:177] 💾 Downloading Kubernetes v1.30.0 preload ...
I0814 13:45:56.644169 13432 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 ...
I0814 13:45:56.860896 13432 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4?checksum=md5:677034533668c42fec962cc52f9b3c42 -> /Users/kchernopiatova/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
I0814 13:46:50.308121 13432 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 ...
I0814 13:46:50.308313 13432 preload.go:255] verifying checksum of /Users/kchernopiatova/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 ...
I0814 13:46:50.857473 13432 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
I0814 13:46:50.857651 13432 profile.go:143] Saving config to /Users/kchernopiatova/.minikube/profiles/minikube/config.json ...
I0814 13:46:50.857665 13432 lock.go:35] WriteFile acquiring /Users/kchernopiatova/.minikube/profiles/minikube/config.json: {Name:mk4512775c7b7aa23b1dc27b66fcfe9943c349fb Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0814 13:47:11.110257 13432 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e as a tarball I0814 13:47:11.110278 13432 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e from local cache I0814 13:48:14.747496 13432 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e from cached tarball I0814 13:48:14.749200 13432 cache.go:194] Successfully downloaded all kic artifacts I0814 13:48:14.753893 13432 start.go:360] acquireMachinesLock for minikube: {Name:mk40276abb70f5c4e22c492e4e7a38c93dcc0896 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0814 13:48:14.756094 13432 start.go:364] duration metric: took 1.237708ms to acquireMachinesLock for "minikube" I0814 13:48:14.756792 13432 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} I0814 13:48:14.758738 13432 start.go:125] createHost starting for "" (driver="docker") I0814 13:48:14.773638 13432 out.go:204] 🔥 Creating docker container (CPUs=2, Memory=2200MB) ... 
I0814 13:48:14.781754 13432 start.go:159] libmachine.API.Create for "minikube" (driver="docker") I0814 13:48:14.782088 13432 client.go:168] LocalClient.Create starting I0814 13:48:14.783800 13432 main.go:141] libmachine: Creating CA: /Users/kchernopiatova/.minikube/certs/ca.pem I0814 13:48:14.961473 13432 main.go:141] libmachine: Creating client certificate: /Users/kchernopiatova/.minikube/certs/cert.pem I0814 13:48:15.255791 13432 cli_runner.go:164] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" W0814 13:48:15.274859 13432 cli_runner.go:211] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1 I0814 13:48:15.275099 13432 network_create.go:281] running [docker network inspect minikube] to gather additional debugging logs... I0814 13:48:15.275230 13432 cli_runner.go:164] Run: docker network inspect minikube W0814 13:48:15.288952 13432 cli_runner.go:211] docker network inspect minikube returned with exit code 1 I0814 13:48:15.288982 13432 network_create.go:284] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1 stdout: [] stderr: Error response from daemon: network minikube not found I0814 13:48:15.288992 13432 network_create.go:286] output of [docker network inspect minikube]: -- stdout -- [] -- /stdout -- ** stderr ** Error response from daemon: network minikube not found ** /stderr ** I0814 13:48:15.289829 13432 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0814 13:48:15.308112 13432 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x14001437d60} I0814 13:48:15.308446 13432 network_create.go:124] attempt to create docker network minikube 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 65535 ... 
I0814 13:48:15.308683 13432 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=minikube minikube I0814 13:48:15.503522 13432 network_create.go:108] docker network minikube 192.168.49.0/24 created I0814 13:48:15.504313 13432 kic.go:121] calculated static IP "192.168.49.2" for the "minikube" container I0814 13:48:15.505276 13432 cli_runner.go:164] Run: docker ps -a --format {{.Names}} I0814 13:48:15.519285 13432 cli_runner.go:164] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true I0814 13:48:15.534455 13432 oci.go:103] Successfully created a docker volume minikube I0814 13:48:15.534612 13432 cli_runner.go:164] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e -d /var/lib I0814 13:48:16.078013 13432 oci.go:107] Successfully prepared a docker volume minikube I0814 13:48:16.081231 13432 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker I0814 13:48:16.083631 13432 kic.go:194] Starting extracting preloaded images to volume ... I0814 13:48:16.085194 13432 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/kchernopiatova/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e -I lz4 -xf /preloaded.tar -C /extractDir I0814 13:48:19.317412 13432 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/kchernopiatova/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e -I lz4 -xf /preloaded.tar -C /extractDir: (3.227418666s) I0814 13:48:19.320247 13432 kic.go:203] duration metric: took 3.23395425s to extract preloaded images to volume ... 
I0814 13:48:19.323413 13432 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'" I0814 13:48:19.717428 13432 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e I0814 13:48:20.037240 13432 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Running}} I0814 13:48:20.063310 13432 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}} I0814 13:48:20.085493 13432 cli_runner.go:164] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables I0814 13:48:20.171701 13432 oci.go:144] the created container "minikube" has a running status. I0814 13:48:20.171780 13432 kic.go:225] Creating ssh key for kic: /Users/kchernopiatova/.minikube/machines/minikube/id_rsa... I0814 13:48:20.235328 13432 kic_runner.go:191] docker (temp): /Users/kchernopiatova/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes) I0814 13:48:20.272288 13432 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}} I0814 13:48:20.294294 13432 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys I0814 13:48:20.294313 13432 kic_runner.go:114] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys] I0814 13:48:20.370804 13432 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}} I0814 13:48:20.389199 13432 machine.go:94] provisionDockerMachine start ... 
I0814 13:48:20.389775 13432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0814 13:48:20.409323 13432 main.go:141] libmachine: Using SSH client type: native I0814 13:48:20.412409 13432 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x1031d7180] 0x1031d99e0 [] 0s} 127.0.0.1 53715 } I0814 13:48:20.412414 13432 main.go:141] libmachine: About to run SSH command: hostname I0814 13:48:20.417462 13432 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF I0814 13:48:23.782353 13432 main.go:141] libmachine: SSH cmd err, output: : minikube I0814 13:48:23.783057 13432 ubuntu.go:169] provisioning hostname "minikube" I0814 13:48:23.783910 13432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0814 13:48:23.921900 13432 main.go:141] libmachine: Using SSH client type: native I0814 13:48:23.922215 13432 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x1031d7180] 0x1031d99e0 [] 0s} 127.0.0.1 53715 } I0814 13:48:23.922219 13432 main.go:141] libmachine: About to run SSH command: sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname I0814 13:48:24.342352 13432 main.go:141] libmachine: SSH cmd err, output: : minikube I0814 13:48:24.342787 13432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0814 13:48:24.360963 13432 main.go:141] libmachine: Using SSH client type: native I0814 13:48:24.361152 13432 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x1031d7180] 0x1031d99e0 [] 0s} 127.0.0.1 53715 } I0814 13:48:24.361158 13432 main.go:141] libmachine: About to run SSH command: if ! grep -xq '.*\sminikube' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts; else echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; fi fi I0814 13:48:24.675783 13432 main.go:141] libmachine: SSH cmd err, output: : I0814 13:48:24.675823 13432 ubuntu.go:175] set auth options {CertDir:/Users/kchernopiatova/.minikube CaCertPath:/Users/kchernopiatova/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/kchernopiatova/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/kchernopiatova/.minikube/machines/server.pem ServerKeyPath:/Users/kchernopiatova/.minikube/machines/server-key.pem ClientKeyPath:/Users/kchernopiatova/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/kchernopiatova/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/kchernopiatova/.minikube} I0814 13:48:24.675849 13432 ubuntu.go:177] setting up certificates I0814 13:48:24.676189 13432 provision.go:84] configureAuth start I0814 13:48:24.676522 13432 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I0814 13:48:24.693759 13432 provision.go:143] copyHostCerts I0814 13:48:24.694087 13432 exec_runner.go:151] cp: /Users/kchernopiatova/.minikube/certs/key.pem --> /Users/kchernopiatova/.minikube/key.pem (1679 bytes) I0814 13:48:24.694362 13432 exec_runner.go:151] cp: /Users/kchernopiatova/.minikube/certs/ca.pem --> /Users/kchernopiatova/.minikube/ca.pem (1099 bytes) I0814 13:48:24.694526 13432 exec_runner.go:151] cp: /Users/kchernopiatova/.minikube/certs/cert.pem --> /Users/kchernopiatova/.minikube/cert.pem (1143 bytes) 
I0814 13:48:24.694824 13432 provision.go:117] generating server cert: /Users/kchernopiatova/.minikube/machines/server.pem ca-key=/Users/kchernopiatova/.minikube/certs/ca.pem private-key=/Users/kchernopiatova/.minikube/certs/ca-key.pem org=kchernopiatova.minikube san=[127.0.0.1 192.168.49.2 localhost minikube]
I0814 13:48:24.843223 13432 provision.go:177] copyRemoteCerts
I0814 13:48:24.843455 13432 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0814 13:48:24.843508 13432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0814 13:48:24.857005 13432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53715 SSHKeyPath:/Users/kchernopiatova/.minikube/machines/minikube/id_rsa Username:docker}
I0814 13:48:25.125126 13432 ssh_runner.go:362] scp /Users/kchernopiatova/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1099 bytes)
I0814 13:48:25.352500 13432 ssh_runner.go:362] scp /Users/kchernopiatova/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
I0814 13:48:25.573120 13432 ssh_runner.go:362] scp /Users/kchernopiatova/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0814 13:48:25.819482 13432 provision.go:87] duration metric: took 1.143276708s to configureAuth
I0814 13:48:25.819505 13432 ubuntu.go:193] setting minikube options for container-runtime
I0814 13:48:25.824344 13432 config.go:182] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0814 13:48:25.824498 13432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0814 13:48:25.840329 13432 main.go:141] libmachine: Using SSH client type: native
I0814 13:48:25.840477 13432 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x1031d7180] 0x1031d99e0 [] 0s} 127.0.0.1 53715 }
I0814 13:48:25.840481 13432 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0814 13:48:26.152522 13432 main.go:141] libmachine: SSH cmd err, output: : overlay
I0814 13:48:26.152545 13432 ubuntu.go:71] root file system type: overlay
I0814 13:48:26.154242 13432 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0814 13:48:26.154577 13432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0814 13:48:26.171564 13432 main.go:141] libmachine: Using SSH client type: native
I0814 13:48:26.171720 13432 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x1031d7180] 0x1031d99e0 [] 0s} 127.0.0.1 53715 }
I0814 13:48:26.171755 13432 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0814 13:48:26.596261 13432 main.go:141] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0814 13:48:26.596781 13432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0814 13:48:26.614449 13432 main.go:141] libmachine: Using SSH client type: native
I0814 13:48:26.614598 13432 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x1031d7180] 0x1031d99e0 [] 0s} 127.0.0.1 53715 }
I0814 13:48:26.614609 13432 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0814 13:48:30.416529 13432 main.go:141] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2024-04-30 11:46:26.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2024-08-14 10:48:26.588902011 +0000
@@ -1,46 +1,49 @@
 [Unit]
 Description=Docker Application Container Engine
 Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
 Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
 
 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
 
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
 
 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=infinity
 LimitNPROC=infinity
 LimitCORE=infinity
 
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0
 
 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes
 
 # kill only the docker process, not all processes in the cgroup
 KillMode=process
-OOMScoreAdjust=-500
 
 [Install]
 WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0814 13:48:30.416579 13432 machine.go:97] duration metric: took 10.027627208s to provisionDockerMachine
I0814 13:48:30.416880 13432 client.go:171] duration metric: took 15.635207333s to LocalClient.Create
I0814 13:48:30.416928 13432 start.go:167] duration metric: took 15.63560175s to libmachine.API.Create "minikube"
I0814 13:48:30.416937 13432 start.go:293] postStartSetup for "minikube" (driver="docker")
I0814 13:48:30.416975 13432 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0814 13:48:30.417199 13432 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0814 13:48:30.417439 13432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0814 13:48:30.446860 13432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53715 SSHKeyPath:/Users/kchernopiatova/.minikube/machines/minikube/id_rsa Username:docker}
I0814 13:48:30.705106 13432 ssh_runner.go:195] Run: cat /etc/os-release
I0814 13:48:30.739516 13432 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0814 13:48:30.739560 13432 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0814 13:48:30.739570 13432 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0814 13:48:30.739575 13432 info.go:137] Remote host: Ubuntu 22.04.4 LTS
I0814 13:48:30.739585 13432 filesync.go:126] Scanning /Users/kchernopiatova/.minikube/addons for local assets ...
I0814 13:48:30.739764 13432 filesync.go:126] Scanning /Users/kchernopiatova/.minikube/files for local assets ...
I0814 13:48:30.739834 13432 start.go:296] duration metric: took 322.898917ms for postStartSetup I0814 13:48:30.742321 13432 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I0814 13:48:30.769513 13432 profile.go:143] Saving config to /Users/kchernopiatova/.minikube/profiles/minikube/config.json ... I0814 13:48:30.770571 13432 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0814 13:48:30.770622 13432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0814 13:48:30.787768 13432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53715 SSHKeyPath:/Users/kchernopiatova/.minikube/machines/minikube/id_rsa Username:docker} I0814 13:48:31.008459 13432 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'" I0814 13:48:31.064196 13432 start.go:128] duration metric: took 16.305760667s to createHost I0814 13:48:31.065009 13432 start.go:83] releasing machines lock for "minikube", held for 16.3090325s I0814 13:48:31.065894 13432 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I0814 13:48:31.099494 13432 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/ I0814 13:48:31.099805 13432 ssh_runner.go:195] Run: cat /version.json I0814 13:48:31.099873 13432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0814 13:48:31.100334 13432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0814 13:48:31.117109 13432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53715 SSHKeyPath:/Users/kchernopiatova/.minikube/machines/minikube/id_rsa Username:docker} I0814 13:48:31.117163 13432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53715 SSHKeyPath:/Users/kchernopiatova/.minikube/machines/minikube/id_rsa Username:docker} I0814 13:48:31.320680 13432 ssh_runner.go:195] Run: systemctl --version I0814 13:48:31.574712 13432 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*" I0814 13:48:31.634067 13432 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ; I0814 13:48:31.931284 13432 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found I0814 13:48:31.931688 13432 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ; I0814 13:48:32.171195 13432 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s) I0814 13:48:32.171861 13432 start.go:494] detecting cgroup driver to use... 
I0814 13:48:32.171897 13432 detect.go:196] detected "cgroupfs" cgroup driver on host os I0814 13:48:32.173411 13432 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock " | sudo tee /etc/crictl.yaml" I0814 13:48:32.340925 13432 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml" I0814 13:48:32.445102 13432 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml" I0814 13:48:32.549494 13432 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver... I0814 13:48:32.550135 13432 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml" I0814 13:48:32.655407 13432 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml" I0814 13:48:32.757506 13432 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml" I0814 13:48:32.861763 13432 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml" I0814 13:48:32.963592 13432 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk" I0814 13:48:33.062668 13432 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml" I0814 13:48:33.165605 13432 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml" I0814 13:48:33.272420 13432 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml" I0814 13:48:33.382476 13432 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables I0814 13:48:33.474029 13432 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward" I0814 13:48:33.575265 13432 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0814 13:48:33.952515 13432 ssh_runner.go:195] Run: sudo systemctl restart containerd I0814 13:48:34.612995 13432 start.go:494] detecting cgroup driver to use... I0814 13:48:34.613038 13432 detect.go:196] detected "cgroupfs" cgroup driver on host os I0814 13:48:34.613493 13432 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0814 13:48:34.850551 13432 cruntime.go:279] skipping containerd shutdown because we are bound to it I0814 13:48:34.850972 13432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio I0814 13:48:35.025259 13432 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock " | sudo tee /etc/crictl.yaml" I0814 13:48:35.400879 13432 ssh_runner.go:195] Run: which cri-dockerd I0814 13:48:35.445930 13432 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d I0814 13:48:35.538644 13432 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes) I0814 13:48:35.730287 13432 ssh_runner.go:195] Run: sudo systemctl unmask docker.service I0814 13:48:36.141525 13432 ssh_runner.go:195] Run: sudo systemctl enable docker.socket I0814 13:48:36.684817 13432 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver... 
I0814 13:48:36.687818 13432 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes) I0814 13:48:36.877531 13432 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0814 13:48:37.247640 13432 ssh_runner.go:195] Run: sudo systemctl restart docker I0814 13:48:39.840399 13432 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.592797375s) I0814 13:48:39.840594 13432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket I0814 13:48:39.934786 13432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service I0814 13:48:40.027208 13432 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket I0814 13:48:40.338563 13432 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket I0814 13:48:40.641881 13432 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0814 13:48:40.943209 13432 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket I0814 13:48:41.060218 13432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service I0814 13:48:41.154884 13432 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0814 13:48:41.433583 13432 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service I0814 13:48:41.922113 13432 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock I0814 13:48:41.923019 13432 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock I0814 13:48:41.967788 13432 start.go:562] Will wait 60s for crictl version I0814 13:48:41.968014 13432 ssh_runner.go:195] Run: which crictl I0814 13:48:42.010975 13432 ssh_runner.go:195] Run: sudo /usr/bin/crictl version I0814 13:48:42.224770 13432 start.go:578] Version: 0.1.0 RuntimeName: docker RuntimeVersion: 26.1.1 RuntimeApiVersion: v1 I0814 13:48:42.224921 13432 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0814 13:48:42.412833 13432 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0814 13:48:42.597651 13432 out.go:204] 🐳 Preparing Kubernetes v1.30.0 on Docker 26.1.1 ... 
I0814 13:48:42.599693 13432 cli_runner.go:164] Run: docker exec -t minikube dig +short host.docker.internal I0814 13:48:42.769638 13432 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254 I0814 13:48:42.770273 13432 ssh_runner.go:195] Run: grep 192.168.65.254 host.minikube.internal$ /etc/hosts I0814 13:48:42.815034 13432 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0814 13:48:42.942888 13432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube I0814 13:48:42.975121 13432 kubeadm.go:877] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ... 
I0814 13:48:42.976084 13432 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker I0814 13:48:42.976171 13432 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0814 13:48:43.119766 13432 docker.go:685] Got preloaded images: -- stdout -- registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 registry.k8s.io/pause:3.9 gcr.io/k8s-minikube/storage-provisioner:v5 -- /stdout -- I0814 13:48:43.119783 13432 docker.go:615] Images already preloaded, skipping extraction I0814 13:48:43.120971 13432 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0814 13:48:43.254271 13432 docker.go:685] Got preloaded images: -- stdout -- registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 registry.k8s.io/pause:3.9 gcr.io/k8s-minikube/storage-provisioner:v5 -- /stdout -- I0814 13:48:43.254287 13432 cache_images.go:84] Images are preloaded, skipping loading I0814 13:48:43.254296 13432 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.0 docker true true} ... I0814 13:48:43.255378 13432 kubeadm.go:940] kubelet [Unit] Wants=docker.socket [Service] ExecStart= ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2 [Install] config: {KubernetesVersion:v1.30.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} I0814 13:48:43.255552 13432 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}} I0814 13:48:43.576660 13432 cni.go:84] Creating CNI manager for "" I0814 13:48:43.576686 13432 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge I0814 13:48:43.576702 13432 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16 I0814 13:48:43.576742 13432 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests 
ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0814 13:48:43.577485 13432 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.30.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0814 13:48:43.577954 13432 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
I0814 13:48:43.662879 13432 binaries.go:44] Found k8s binaries, skipping transfer
I0814 13:48:43.663138 13432 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0814 13:48:43.744292 13432 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
I0814 13:48:43.900989 13432 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0814 13:48:44.059704 13432 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
I0814 13:48:44.217534 13432 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0814 13:48:44.258175 13432 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0814 13:48:44.383823 13432 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0814 13:48:44.698975 13432 ssh_runner.go:195] Run: sudo
systemctl start kubelet I0814 13:48:44.807458 13432 certs.go:68] Setting up /Users/kchernopiatova/.minikube/profiles/minikube for IP: 192.168.49.2 I0814 13:48:44.807671 13432 certs.go:194] generating shared ca certs ... I0814 13:48:44.807696 13432 certs.go:226] acquiring lock for ca certs: {Name:mka446c2b1c65948a74eee58496e0e0c23ae1c1f Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0814 13:48:44.809312 13432 certs.go:240] generating "minikubeCA" ca cert: /Users/kchernopiatova/.minikube/ca.key I0814 13:48:45.000847 13432 crypto.go:156] Writing cert to /Users/kchernopiatova/.minikube/ca.crt ... I0814 13:48:45.000860 13432 lock.go:35] WriteFile acquiring /Users/kchernopiatova/.minikube/ca.crt: {Name:mk365f312ea717007e3c6ebd534187e7103b20b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0814 13:48:45.001098 13432 crypto.go:164] Writing key to /Users/kchernopiatova/.minikube/ca.key ... I0814 13:48:45.001101 13432 lock.go:35] WriteFile acquiring /Users/kchernopiatova/.minikube/ca.key: {Name:mkd0c98707f16b5564c669a34045c0996f3c5365 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0814 13:48:45.001196 13432 certs.go:240] generating "proxyClientCA" ca cert: /Users/kchernopiatova/.minikube/proxy-client-ca.key I0814 13:48:45.030893 13432 crypto.go:156] Writing cert to /Users/kchernopiatova/.minikube/proxy-client-ca.crt ... I0814 13:48:45.030895 13432 lock.go:35] WriteFile acquiring /Users/kchernopiatova/.minikube/proxy-client-ca.crt: {Name:mkd24320ec9e35e4c56443a4fdfb0189ab6498e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0814 13:48:45.030985 13432 crypto.go:164] Writing key to /Users/kchernopiatova/.minikube/proxy-client-ca.key ... I0814 13:48:45.030987 13432 lock.go:35] WriteFile acquiring /Users/kchernopiatova/.minikube/proxy-client-ca.key: {Name:mkb9c05e4d09d3cd4cc84ac19552d53d925ee325 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0814 13:48:45.031078 13432 certs.go:256] generating profile certs ... I0814 13:48:45.031118 13432 certs.go:363] generating signed profile cert for "minikube-user": /Users/kchernopiatova/.minikube/profiles/minikube/client.key I0814 13:48:45.031405 13432 crypto.go:68] Generating cert /Users/kchernopiatova/.minikube/profiles/minikube/client.crt with IP's: [] I0814 13:48:45.123544 13432 crypto.go:156] Writing cert to /Users/kchernopiatova/.minikube/profiles/minikube/client.crt ... I0814 13:48:45.123552 13432 lock.go:35] WriteFile acquiring /Users/kchernopiatova/.minikube/profiles/minikube/client.crt: {Name:mkc0d6c14396c4b512d6c612a710afa1d835c4b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0814 13:48:45.123741 13432 crypto.go:164] Writing key to /Users/kchernopiatova/.minikube/profiles/minikube/client.key ... I0814 13:48:45.123743 13432 lock.go:35] WriteFile acquiring /Users/kchernopiatova/.minikube/profiles/minikube/client.key: {Name:mka29b24a7c06ad825277d1ef16a70a14e20234f Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0814 13:48:45.123836 13432 certs.go:363] generating signed profile cert for "minikube": /Users/kchernopiatova/.minikube/profiles/minikube/apiserver.key.7fb57e3c I0814 13:48:45.123845 13432 crypto.go:68] Generating cert /Users/kchernopiatova/.minikube/profiles/minikube/apiserver.crt.7fb57e3c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2] I0814 13:48:45.361689 13432 crypto.go:156] Writing cert to /Users/kchernopiatova/.minikube/profiles/minikube/apiserver.crt.7fb57e3c ... 
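The CA and client material written above can be sanity-checked with openssl, the same tool the log invokes a few steps later; the paths come from the log, while the particular flags are an illustrative choice:

# Inspect the freshly generated CA and client certificate
openssl x509 -in /Users/kchernopiatova/.minikube/ca.crt -noout -subject -issuer -dates
openssl x509 -in /Users/kchernopiatova/.minikube/profiles/minikube/client.crt -noout -subject -dates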
I0814 13:48:45.361699 13432 lock.go:35] WriteFile acquiring /Users/kchernopiatova/.minikube/profiles/minikube/apiserver.crt.7fb57e3c: {Name:mk9113166b080770bca0290d24f89fc529a77214 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0814 13:48:45.362012 13432 crypto.go:164] Writing key to /Users/kchernopiatova/.minikube/profiles/minikube/apiserver.key.7fb57e3c ... I0814 13:48:45.362015 13432 lock.go:35] WriteFile acquiring /Users/kchernopiatova/.minikube/profiles/minikube/apiserver.key.7fb57e3c: {Name:mkcf01edc0729fb7cdbc30fad37b79ecaf98745e Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0814 13:48:45.362127 13432 certs.go:381] copying /Users/kchernopiatova/.minikube/profiles/minikube/apiserver.crt.7fb57e3c -> /Users/kchernopiatova/.minikube/profiles/minikube/apiserver.crt I0814 13:48:45.362530 13432 certs.go:385] copying /Users/kchernopiatova/.minikube/profiles/minikube/apiserver.key.7fb57e3c -> /Users/kchernopiatova/.minikube/profiles/minikube/apiserver.key I0814 13:48:45.362740 13432 certs.go:363] generating signed profile cert for "aggregator": /Users/kchernopiatova/.minikube/profiles/minikube/proxy-client.key I0814 13:48:45.362752 13432 crypto.go:68] Generating cert /Users/kchernopiatova/.minikube/profiles/minikube/proxy-client.crt with IP's: [] I0814 13:48:45.452161 13432 crypto.go:156] Writing cert to /Users/kchernopiatova/.minikube/profiles/minikube/proxy-client.crt ... I0814 13:48:45.452167 13432 lock.go:35] WriteFile acquiring /Users/kchernopiatova/.minikube/profiles/minikube/proxy-client.crt: {Name:mk56b4a554076f1c671243d950bf6e4d6afe38e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0814 13:48:45.452555 13432 crypto.go:164] Writing key to /Users/kchernopiatova/.minikube/profiles/minikube/proxy-client.key ... I0814 13:48:45.452566 13432 lock.go:35] WriteFile acquiring /Users/kchernopiatova/.minikube/profiles/minikube/proxy-client.key: {Name:mka7a835152c84bdcb0387cbfefbf0b151477992 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0814 13:48:45.453406 13432 certs.go:484] found cert: /Users/kchernopiatova/.minikube/certs/ca-key.pem (1675 bytes) I0814 13:48:45.453541 13432 certs.go:484] found cert: /Users/kchernopiatova/.minikube/certs/ca.pem (1099 bytes) I0814 13:48:45.453635 13432 certs.go:484] found cert: /Users/kchernopiatova/.minikube/certs/cert.pem (1143 bytes) I0814 13:48:45.453732 13432 certs.go:484] found cert: /Users/kchernopiatova/.minikube/certs/key.pem (1679 bytes) I0814 13:48:45.460281 13432 ssh_runner.go:362] scp /Users/kchernopiatova/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0814 13:48:45.721301 13432 ssh_runner.go:362] scp /Users/kchernopiatova/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes) I0814 13:48:45.939950 13432 ssh_runner.go:362] scp /Users/kchernopiatova/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0814 13:48:46.158787 13432 ssh_runner.go:362] scp /Users/kchernopiatova/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes) I0814 13:48:46.379207 13432 ssh_runner.go:362] scp /Users/kchernopiatova/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes) I0814 13:48:46.605942 13432 ssh_runner.go:362] scp /Users/kchernopiatova/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes) I0814 13:48:46.829879 13432 ssh_runner.go:362] scp /Users/kchernopiatova/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes) I0814 13:48:47.061876 
13432 ssh_runner.go:362] scp /Users/kchernopiatova/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes) I0814 13:48:47.282778 13432 ssh_runner.go:362] scp /Users/kchernopiatova/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0814 13:48:47.504429 13432 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes) I0814 13:48:47.661714 13432 ssh_runner.go:195] Run: openssl version I0814 13:48:47.704394 13432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0814 13:48:47.803960 13432 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0814 13:48:47.845848 13432 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 10:48 /usr/share/ca-certificates/minikubeCA.pem I0814 13:48:47.845946 13432 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0814 13:48:47.899655 13432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0814 13:48:47.999519 13432 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt I0814 13:48:48.036104 13432 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1 stdout: stderr: stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory I0814 13:48:48.036444 13432 kubeadm.go:391] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false 
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} I0814 13:48:48.036641 13432 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0814 13:48:48.177173 13432 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0814 13:48:48.265970 13432 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I0814 13:48:48.350200 13432 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver I0814 13:48:48.350757 13432 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0814 13:48:48.466601 13432 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0814 13:48:48.466817 13432 kubeadm.go:156] found existing configuration files: I0814 13:48:48.466953 13432 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf I0814 13:48:48.586084 13432 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2 stdout: stderr: grep: /etc/kubernetes/admin.conf: No such file or directory I0814 13:48:48.586242 13432 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf I0814 13:48:48.673543 13432 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf I0814 13:48:48.762694 13432 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2 stdout: stderr: grep: /etc/kubernetes/kubelet.conf: No such file or directory I0814 13:48:48.762852 13432 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf I0814 13:48:48.850923 13432 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf I0814 13:48:48.949677 13432 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2 stdout: stderr: grep: /etc/kubernetes/controller-manager.conf: No such file or directory I0814 13:48:48.949877 13432 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf I0814 13:48:49.059238 13432 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf I0814 13:48:49.189288 13432 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will 
remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0814 13:48:49.189475 13432 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0814 13:48:49.274802 13432 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0814 13:48:49.478540 13432 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
I0814 13:48:49.478622 13432 kubeadm.go:309] [preflight] Running pre-flight checks
I0814 13:48:49.757290 13432 kubeadm.go:309] [WARNING Swap]: swap is supported for cgroup v2 only; the NodeSwap feature gate of the kubelet is beta but disabled by default
I0814 13:48:49.757628 13432 kubeadm.go:309] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0814 13:48:49.757885 13432 kubeadm.go:309] error execution phase preflight: [preflight] Some fatal errors occurred:
I0814 13:48:49.758266 13432 kubeadm.go:309] [ERROR KubeletVersion]: couldn't get kubelet version: cannot execute 'kubelet --version': executable file not found in $PATH
I0814 13:48:49.758612 13432 kubeadm.go:309] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
I0814 13:48:49.758807 13432 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
W0814 13:48:49.759643 13432 out.go:239] 💢 initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.30.0
[preflight] Running pre-flight checks
stderr:
[WARNING Swap]: swap is supported for cgroup v2 only; the NodeSwap feature gate of the kubelet is beta but disabled by default
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR KubeletVersion]: couldn't get kubelet version: cannot execute 'kubelet --version': executable file not found in $PATH
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
I0814 13:48:49.759763 13432 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset
--cri-socket /var/run/cri-dockerd.sock --force" I0814 13:49:07.200009 13432 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (17.440635417s) I0814 13:49:07.200949 13432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0814 13:49:07.300574 13432 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver I0814 13:49:07.300794 13432 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0814 13:49:07.385723 13432 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0814 13:49:07.385774 13432 kubeadm.go:156] found existing configuration files: I0814 13:49:07.385967 13432 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf I0814 13:49:07.474313 13432 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2 stdout: stderr: grep: /etc/kubernetes/admin.conf: No such file or directory I0814 13:49:07.474502 13432 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf I0814 13:49:07.556451 13432 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf I0814 13:49:07.644693 13432 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2 stdout: stderr: grep: /etc/kubernetes/kubelet.conf: No such file or directory I0814 13:49:07.644887 13432 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf I0814 13:49:07.726930 13432 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf I0814 13:49:07.814935 13432 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2 stdout: stderr: grep: /etc/kubernetes/controller-manager.conf: No such file or directory I0814 13:49:07.815137 13432 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf I0814 13:49:07.896674 13432 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf I0814 13:49:07.985768 13432 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: grep: /etc/kubernetes/scheduler.conf: No such 
file or directory I0814 13:49:07.985965 13432 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf I0814 13:49:08.072788 13432 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables" I0814 13:49:08.218672 13432 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0 I0814 13:49:08.218821 13432 kubeadm.go:309] [preflight] Running pre-flight checks I0814 13:49:08.491389 13432 kubeadm.go:309] [WARNING Swap]: swap is supported for cgroup v2 only; the NodeSwap feature gate of the kubelet is beta but disabled by default I0814 13:49:08.491559 13432 kubeadm.go:309] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' I0814 13:49:08.491665 13432 kubeadm.go:309] error execution phase preflight: [preflight] Some fatal errors occurred: I0814 13:49:08.491825 13432 kubeadm.go:309] [ERROR KubeletVersion]: couldn't get kubelet version: cannot execute 'kubelet --version': executable file not found in $PATH I0814 13:49:08.491986 13432 kubeadm.go:309] [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...` I0814 13:49:08.492085 13432 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher I0814 13:49:08.492157 13432 kubeadm.go:393] duration metric: took 20.45630675s to StartCluster I0814 13:49:08.493262 13432 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]} I0814 13:49:08.493419 13432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver I0814 13:49:08.702079 13432 cri.go:89] found id: "" I0814 13:49:08.702404 13432 logs.go:276] 0 containers: [] W0814 13:49:08.702415 13432 logs.go:278] No container was found matching "kube-apiserver" I0814 13:49:08.702422 13432 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]} I0814 13:49:08.702626 13432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd I0814 13:49:08.906085 13432 cri.go:89] found id: "" I0814 13:49:08.906099 13432 logs.go:276] 0 containers: [] W0814 13:49:08.906107 13432 logs.go:278] No container was found matching "etcd" I0814 13:49:08.906113 13432 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]} I0814 13:49:08.906274 13432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns I0814 13:49:09.109410 13432 cri.go:89] found id: "" I0814 13:49:09.109434 13432 logs.go:276] 0 containers: [] W0814 13:49:09.109442 13432 logs.go:278] No container was found matching "coredns" I0814 13:49:09.109450 13432 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]} I0814 13:49:09.109672 13432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler I0814 13:49:09.312668 13432 cri.go:89] found id: "" I0814 13:49:09.312683 13432 logs.go:276] 0 containers: [] W0814 13:49:09.312691 13432 logs.go:278] No container was found matching "kube-scheduler" I0814 13:49:09.312697 13432 
cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]} I0814 13:49:09.312871 13432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy I0814 13:49:09.510790 13432 cri.go:89] found id: "" I0814 13:49:09.510804 13432 logs.go:276] 0 containers: [] W0814 13:49:09.510812 13432 logs.go:278] No container was found matching "kube-proxy" I0814 13:49:09.510818 13432 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]} I0814 13:49:09.510995 13432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager I0814 13:49:09.712061 13432 cri.go:89] found id: "" I0814 13:49:09.712075 13432 logs.go:276] 0 containers: [] W0814 13:49:09.712084 13432 logs.go:278] No container was found matching "kube-controller-manager" I0814 13:49:09.712089 13432 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]} I0814 13:49:09.712266 13432 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet I0814 13:49:09.912395 13432 cri.go:89] found id: "" I0814 13:49:09.912409 13432 logs.go:276] 0 containers: [] W0814 13:49:09.912418 13432 logs.go:278] No container was found matching "kindnet" I0814 13:49:09.912429 13432 logs.go:123] Gathering logs for dmesg ... I0814 13:49:09.912439 13432 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0814 13:49:10.022582 13432 logs.go:123] Gathering logs for describe nodes ... I0814 13:49:10.022597 13432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" W0814 13:49:10.162998 13432 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1 stdout: stderr: E0814 10:49:10.154305 3312 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused E0814 10:49:10.154569 3312 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused E0814 10:49:10.156764 3312 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused E0814 10:49:10.156895 3312 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused E0814 10:49:10.158753 3312 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused The connection to the server localhost:8443 was refused - did you specify the right host or port? 
output: ** stderr ** E0814 10:49:10.154305 3312 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused E0814 10:49:10.154569 3312 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused E0814 10:49:10.156764 3312 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused E0814 10:49:10.156895 3312 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused E0814 10:49:10.158753 3312 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused The connection to the server localhost:8443 was refused - did you specify the right host or port? ** /stderr ** I0814 13:49:10.163021 13432 logs.go:123] Gathering logs for Docker ... I0814 13:49:10.163031 13432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400" I0814 13:49:10.318917 13432 logs.go:123] Gathering logs for container status ... I0814 13:49:10.319105 13432 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0814 13:49:10.569328 13432 logs.go:123] Gathering logs for kubelet ... I0814 13:49:10.569344 13432 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" W0814 13:49:10.695623 13432 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.30.0 [preflight] Running pre-flight checks stderr: [WARNING Swap]: swap is supported for cgroup v2 only; the NodeSwap feature gate of the kubelet is beta but disabled by default [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' error execution phase preflight: [preflight] Some fatal errors occurred: [ERROR KubeletVersion]: couldn't get kubelet version: cannot execute 'kubelet --version': executable file not found in $PATH [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...` To see the stack trace of this error execute with --v=5 or higher W0814 13:49:10.695689 13432 out.go:239] W0814 13:49:10.695874 13432 out.go:239] 💣 Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml 
--ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.30.0 [preflight] Running pre-flight checks stderr: [WARNING Swap]: swap is supported for cgroup v2 only; the NodeSwap feature gate of the kubelet is beta but disabled by default [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' error execution phase preflight: [preflight] Some fatal errors occurred: [ERROR KubeletVersion]: couldn't get kubelet version: cannot execute 'kubelet --version': executable file not found in $PATH [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...` To see the stack trace of this error execute with --v=5 or higher W0814 13:49:10.696028 13432 out.go:239] W0814 13:49:10.699504 13432 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────╮ │ │ │ 😿 If the above advice does not help, please let us know: │ │ 👉 https://github.com/kubernetes/minikube/issues/new/choose │ │ │ │ Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │ │ │ ╰───────────────────────────────────────────────────────────────────────────────────────────╯ I0814 13:49:10.713743 13432 out.go:177] W0814 13:49:10.717794 13432 out.go:239] ❌ Exiting due to GUEST_START: failed to start node: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.30.0 [preflight] Running pre-flight checks stderr: [WARNING Swap]: swap is supported for cgroup v2 only; the NodeSwap feature gate of the kubelet is beta but disabled by default [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' error execution phase preflight: [preflight] Some fatal errors occurred: [ERROR KubeletVersion]: couldn't get kubelet version: cannot execute 'kubelet --version': executable file not found in $PATH [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...` To see the stack trace of this error execute with --v=5 or higher W0814 13:49:10.717885 13432 out.go:239] W0814 13:49:10.720633 13432 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────╮ │ │ │ 😿 If the above advice does not help, please let us know: │ │ 👉 https://github.com/kubernetes/minikube/issues/new/choose │ │ │ │ Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub 
issue. │ │ │ ╰───────────────────────────────────────────────────────────────────────────────────────────╯ I0814 13:49:10.726643 13432 out.go:177] ==> Docker <== Aug 14 10:49:05 minikube cri-dockerd[1344]: time="2024-08-14T10:49:05Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"format\\\"\". Proceed without further sandbox information." Aug 14 10:49:05 minikube cri-dockerd[1344]: time="2024-08-14T10:49:05Z" level=error msg="invalid key: \"format\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDformat\"" Aug 14 10:49:05 minikube cri-dockerd[1344]: time="2024-08-14T10:49:05Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"format\\\"\"" Aug 14 10:49:05 minikube cri-dockerd[1344]: time="2024-08-14T10:49:05Z" level=error msg="CNI failed to delete loopback network: could not retrieve port mappings: invalid key: \"format\\\"\"" Aug 14 10:49:05 minikube cri-dockerd[1344]: time="2024-08-14T10:49:05Z" level=error msg="invalid key: \"format\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDformat\"" Aug 14 10:49:05 minikube cri-dockerd[1344]: time="2024-08-14T10:49:05Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"format\\\"\"" Aug 14 10:49:05 minikube cri-dockerd[1344]: time="2024-08-14T10:49:05Z" level=error msg="Failed to delete corrupt checkpoint for sandbox format\": invalid key: \"format\\\"\"" Aug 14 10:49:05 minikube cri-dockerd[1344]: time="2024-08-14T10:49:05Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"format\\\"\". Proceed without further sandbox information." Aug 14 10:49:05 minikube cri-dockerd[1344]: time="2024-08-14T10:49:05Z" level=error msg="invalid key: \"format\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDformat\"" Aug 14 10:49:05 minikube cri-dockerd[1344]: time="2024-08-14T10:49:05Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"format\\\"\"" Aug 14 10:49:05 minikube cri-dockerd[1344]: time="2024-08-14T10:49:05Z" level=error msg="CNI failed to delete loopback network: could not retrieve port mappings: invalid key: \"format\\\"\"" Aug 14 10:49:05 minikube cri-dockerd[1344]: time="2024-08-14T10:49:05Z" level=error msg="invalid key: \"format\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDformat\"" Aug 14 10:49:05 minikube cri-dockerd[1344]: time="2024-08-14T10:49:05Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"format\\\"\"" Aug 14 10:49:05 minikube cri-dockerd[1344]: time="2024-08-14T10:49:05Z" level=error msg="Failed to delete corrupt checkpoint for sandbox format\": invalid key: \"format\\\"\"" Aug 14 10:49:05 minikube cri-dockerd[1344]: time="2024-08-14T10:49:05Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"format\\\"\". Proceed without further sandbox information." 
Aug 14 10:49:05 minikube cri-dockerd[1344]: time="2024-08-14T10:49:05Z" level=error msg="invalid key: \"format\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDformat\"" Aug 14 10:49:05 minikube cri-dockerd[1344]: time="2024-08-14T10:49:05Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"format\\\"\"" Aug 14 10:49:05 minikube cri-dockerd[1344]: time="2024-08-14T10:49:05Z" level=error msg="CNI failed to delete loopback network: could not retrieve port mappings: invalid key: \"format\\\"\"" Aug 14 10:49:05 minikube cri-dockerd[1344]: time="2024-08-14T10:49:05Z" level=error msg="invalid key: \"format\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDformat\"" Aug 14 10:49:05 minikube cri-dockerd[1344]: time="2024-08-14T10:49:05Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"format\\\"\"" Aug 14 10:49:06 minikube cri-dockerd[1344]: time="2024-08-14T10:49:06Z" level=error msg="Failed to delete corrupt checkpoint for sandbox endpoint=\"/var/run/cri-dockerd.sock\": invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" Aug 14 10:49:06 minikube cri-dockerd[1344]: time="2024-08-14T10:49:06Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\". Proceed without further sandbox information." Aug 14 10:49:06 minikube cri-dockerd[1344]: time="2024-08-14T10:49:06Z" level=error msg="invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDendpoint=\"/var/run/cri-dockerd.sock\"" Aug 14 10:49:06 minikube cri-dockerd[1344]: time="2024-08-14T10:49:06Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" Aug 14 10:49:06 minikube cri-dockerd[1344]: time="2024-08-14T10:49:06Z" level=error msg="CNI failed to delete loopback network: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" Aug 14 10:49:06 minikube cri-dockerd[1344]: time="2024-08-14T10:49:06Z" level=error msg="invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDendpoint=\"/var/run/cri-dockerd.sock\"" Aug 14 10:49:06 minikube cri-dockerd[1344]: time="2024-08-14T10:49:06Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" Aug 14 10:49:06 minikube cri-dockerd[1344]: time="2024-08-14T10:49:06Z" level=error msg="Failed to delete corrupt checkpoint for sandbox endpoint=\"/var/run/cri-dockerd.sock\": invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" Aug 14 10:49:06 minikube cri-dockerd[1344]: time="2024-08-14T10:49:06Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\". Proceed without further sandbox information." 
Aug 14 10:49:06 minikube cri-dockerd[1344]: time="2024-08-14T10:49:06Z" level=error msg="invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDendpoint=\"/var/run/cri-dockerd.sock\"" Aug 14 10:49:06 minikube cri-dockerd[1344]: time="2024-08-14T10:49:06Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" Aug 14 10:49:06 minikube cri-dockerd[1344]: time="2024-08-14T10:49:06Z" level=error msg="CNI failed to delete loopback network: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" Aug 14 10:49:06 minikube cri-dockerd[1344]: time="2024-08-14T10:49:06Z" level=error msg="invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDendpoint=\"/var/run/cri-dockerd.sock\"" Aug 14 10:49:06 minikube cri-dockerd[1344]: time="2024-08-14T10:49:06Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" Aug 14 10:49:06 minikube cri-dockerd[1344]: time="2024-08-14T10:49:06Z" level=error msg="Failed to delete corrupt checkpoint for sandbox endpoint=\"/var/run/cri-dockerd.sock\": invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" Aug 14 10:49:06 minikube cri-dockerd[1344]: time="2024-08-14T10:49:06Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\". Proceed without further sandbox information." Aug 14 10:49:06 minikube cri-dockerd[1344]: time="2024-08-14T10:49:06Z" level=error msg="invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDendpoint=\"/var/run/cri-dockerd.sock\"" Aug 14 10:49:06 minikube cri-dockerd[1344]: time="2024-08-14T10:49:06Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" Aug 14 10:49:06 minikube cri-dockerd[1344]: time="2024-08-14T10:49:06Z" level=error msg="CNI failed to delete loopback network: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" Aug 14 10:49:06 minikube cri-dockerd[1344]: time="2024-08-14T10:49:06Z" level=error msg="invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDendpoint=\"/var/run/cri-dockerd.sock\"" Aug 14 10:49:06 minikube cri-dockerd[1344]: time="2024-08-14T10:49:06Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" Aug 14 10:49:06 minikube cri-dockerd[1344]: time="2024-08-14T10:49:06Z" level=error msg="Failed to delete corrupt checkpoint for sandbox endpoint=\"/var/run/cri-dockerd.sock\": invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" Aug 14 10:49:06 minikube cri-dockerd[1344]: time="2024-08-14T10:49:06Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\". Proceed without further sandbox information." 
Aug 14 10:49:06 minikube cri-dockerd[1344]: time="2024-08-14T10:49:06Z" level=error msg="invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDendpoint=\"/var/run/cri-dockerd.sock\"" Aug 14 10:49:06 minikube cri-dockerd[1344]: time="2024-08-14T10:49:06Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" Aug 14 10:49:06 minikube cri-dockerd[1344]: time="2024-08-14T10:49:06Z" level=error msg="CNI failed to delete loopback network: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" Aug 14 10:49:06 minikube cri-dockerd[1344]: time="2024-08-14T10:49:06Z" level=error msg="invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDendpoint=\"/var/run/cri-dockerd.sock\"" Aug 14 10:49:06 minikube cri-dockerd[1344]: time="2024-08-14T10:49:06Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" Aug 14 10:49:06 minikube cri-dockerd[1344]: time="2024-08-14T10:49:06Z" level=error msg="Failed to delete corrupt checkpoint for sandbox endpoint=\"/var/run/cri-dockerd.sock\": invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" Aug 14 10:49:06 minikube cri-dockerd[1344]: time="2024-08-14T10:49:06Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\". Proceed without further sandbox information." Aug 14 10:49:06 minikube cri-dockerd[1344]: time="2024-08-14T10:49:06Z" level=error msg="invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDendpoint=\"/var/run/cri-dockerd.sock\"" Aug 14 10:49:06 minikube cri-dockerd[1344]: time="2024-08-14T10:49:06Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" Aug 14 10:49:06 minikube cri-dockerd[1344]: time="2024-08-14T10:49:06Z" level=error msg="CNI failed to delete loopback network: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" Aug 14 10:49:06 minikube cri-dockerd[1344]: time="2024-08-14T10:49:06Z" level=error msg="invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"Failed to delete corrupt checkpoint for sandboxpodSandboxIDendpoint=\"/var/run/cri-dockerd.sock\"" Aug 14 10:49:06 minikube cri-dockerd[1344]: time="2024-08-14T10:49:06Z" level=error msg="Error deleting network when building cni runtime conf: could not retrieve port mappings: invalid key: \"endpoint=\\\"/var/run/cri-dockerd.sock\\\"\"" Aug 14 10:49:06 minikube cri-dockerd[1344]: time="2024-08-14T10:49:06Z" level=error msg="Failed to delete corrupt checkpoint for sandbox URL=\"unix:///var/run/cri-dockerd.sock\": invalid key: \"URL=\\\"unix:///var/run/cri-dockerd.sock\\\"\"" Aug 14 10:49:06 minikube cri-dockerd[1344]: time="2024-08-14T10:49:06Z" level=error msg="Failed to delete corrupt checkpoint for sandbox URL=\"unix:///var/run/cri-dockerd.sock\": invalid key: \"URL=\\\"unix:///var/run/cri-dockerd.sock\\\"\"" Aug 14 10:49:06 minikube cri-dockerd[1344]: time="2024-08-14T10:49:06Z" level=error msg="Failed to delete corrupt checkpoint for sandbox URL=\"unix:///var/run/cri-dockerd.sock\": invalid key: 
\"URL=\\\"unix:///var/run/cri-dockerd.sock\\\"\"" Aug 14 10:49:07 minikube cri-dockerd[1344]: time="2024-08-14T10:49:07Z" level=error msg="Failed to delete corrupt checkpoint for sandbox URL=\"unix:///var/run/cri-dockerd.sock\": invalid key: \"URL=\\\"unix:///var/run/cri-dockerd.sock\\\"\"" Aug 14 10:49:07 minikube cri-dockerd[1344]: time="2024-08-14T10:49:07Z" level=error msg="Failed to delete corrupt checkpoint for sandbox URL=\"unix:///var/run/cri-dockerd.sock\": invalid key: \"URL=\\\"unix:///var/run/cri-dockerd.sock\\\"\"" ==> container status <== CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD ==> describe nodes <== command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1 stdout: stderr: E0814 10:49:44.356086 3481 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused E0814 10:49:44.356302 3481 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused E0814 10:49:44.358147 3481 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused E0814 10:49:44.358336 3481 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused E0814 10:49:44.360204 3481 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp [::1]:8443: connect: connection refused The connection to the server localhost:8443 was refused - did you specify the right host or port? ==> dmesg <== [Aug14 10:45] netlink: 'init': attribute type 4 has an invalid length. [ +0.040430] fakeowner: loading out-of-tree module taints kernel. ==> kernel <== 10:49:44 up 4 min, 0 users, load average: 1.87, 0.99, 0.40 Linux minikube 6.10.0-linuxkit #1 SMP Wed Jul 17 10:51:09 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux PRETTY_NAME="Ubuntu 22.04.4 LTS" ==> kubelet <== Aug 14 10:48:44 minikube systemd[1]: Started kubelet: The Kubernetes Node Agent. Aug 14 10:48:44 minikube systemd[1548]: kubelet.service: Failed to execute /var/lib/minikube/binaries/v1.30.0/kubelet: No such file or directory Aug 14 10:48:44 minikube systemd[1548]: kubelet.service: Failed at step EXEC spawning /var/lib/minikube/binaries/v1.30.0/kubelet: No such file or directory Aug 14 10:48:44 minikube systemd[1]: kubelet.service: Main process exited, code=exited, status=203/EXEC Aug 14 10:48:44 minikube systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 14 10:48:45 minikube systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 14 10:48:45 minikube systemd[1]: Stopped kubelet: The Kubernetes Node Agent. Aug 14 10:48:45 minikube systemd[1]: Started kubelet: The Kubernetes Node Agent. 
Aug 14 10:48:45 minikube systemd[1551]: kubelet.service: Failed to execute /var/lib/minikube/binaries/v1.30.0/kubelet: No such file or directory
Aug 14 10:48:45 minikube systemd[1551]: kubelet.service: Failed at step EXEC spawning /var/lib/minikube/binaries/v1.30.0/kubelet: No such file or directory
Aug 14 10:48:45 minikube systemd[1]: kubelet.service: Main process exited, code=exited, status=203/EXEC
Aug 14 10:48:45 minikube systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 14 10:48:46 minikube systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug 14 10:48:46 minikube systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Aug 14 10:48:46 minikube systemd[1]: Started kubelet: The Kubernetes Node Agent.
Aug 14 10:48:46 minikube systemd[1573]: kubelet.service: Failed to execute /var/lib/minikube/binaries/v1.30.0/kubelet: No such file or directory
Aug 14 10:48:46 minikube systemd[1573]: kubelet.service: Failed at step EXEC spawning /var/lib/minikube/binaries/v1.30.0/kubelet: No such file or directory
Aug 14 10:48:46 minikube systemd[1]: kubelet.service: Main process exited, code=exited, status=203/EXEC
Aug 14 10:48:46 minikube systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 14 10:48:47 minikube systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Aug 14 10:48:47 minikube systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Aug 14 10:48:47 minikube systemd[1]: Started kubelet: The Kubernetes Node Agent.
Aug 14 10:48:47 minikube systemd[1597]: kubelet.service: Failed to execute /var/lib/minikube/binaries/v1.30.0/kubelet: No such file or directory
Aug 14 10:48:47 minikube systemd[1597]: kubelet.service: Failed at step EXEC spawning /var/lib/minikube/binaries/v1.30.0/kubelet: No such file or directory
Aug 14 10:48:47 minikube systemd[1]: kubelet.service: Main process exited, code=exited, status=203/EXEC
Aug 14 10:48:47 minikube systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 14 10:48:47 minikube systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Aug 14 10:48:47 minikube systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Aug 14 10:48:47 minikube systemd[1]: Started kubelet: The Kubernetes Node Agent.
Aug 14 10:48:47 minikube systemd[1618]: kubelet.service: Failed to execute /var/lib/minikube/binaries/v1.30.0/kubelet: No such file or directory
Aug 14 10:48:47 minikube systemd[1618]: kubelet.service: Failed at step EXEC spawning /var/lib/minikube/binaries/v1.30.0/kubelet: No such file or directory
Aug 14 10:48:47 minikube systemd[1]: kubelet.service: Main process exited, code=exited, status=203/EXEC
Aug 14 10:48:47 minikube systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 14 10:48:48 minikube systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Aug 14 10:48:48 minikube systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Aug 14 10:48:48 minikube systemd[1]: Started kubelet: The Kubernetes Node Agent.
Aug 14 10:48:48 minikube systemd[1641]: kubelet.service: Failed to execute /var/lib/minikube/binaries/v1.30.0/kubelet: No such file or directory
Aug 14 10:48:48 minikube systemd[1641]: kubelet.service: Failed at step EXEC spawning /var/lib/minikube/binaries/v1.30.0/kubelet: No such file or directory
Aug 14 10:48:48 minikube systemd[1]: kubelet.service: Main process exited, code=exited, status=203/EXEC
Aug 14 10:48:48 minikube systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 14 10:48:49 minikube systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Aug 14 10:48:49 minikube systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Aug 14 10:48:49 minikube systemd[1]: Started kubelet: The Kubernetes Node Agent.
Aug 14 10:48:49 minikube systemd[1658]: kubelet.service: Failed to execute /var/lib/minikube/binaries/v1.30.0/kubelet: No such file or directory
Aug 14 10:48:49 minikube systemd[1658]: kubelet.service: Failed at step EXEC spawning /var/lib/minikube/binaries/v1.30.0/kubelet: No such file or directory
Aug 14 10:48:49 minikube systemd[1]: kubelet.service: Main process exited, code=exited, status=203/EXEC
Aug 14 10:48:49 minikube systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 14 10:48:49 minikube systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
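
The kubelet restart loop above is systemd failing to exec /var/lib/minikube/binaries/v1.30.0/kubelet, i.e. the binary was never placed inside the node. A minimal way to check this from the host, assuming the docker driver and the default profile name "minikube" (so the node container is also named "minikube"), is:

  # check whether the kubelet binary made it into the node container
  docker exec minikube ls -l /var/lib/minikube/binaries/v1.30.0/

  # if it is missing, recreating the cluster with a fresh binary cache is a
  # common next step (paths assume the default ~/.minikube location)
  minikube delete
  rm -rf ~/.minikube/cache
  minikube start --driver=docker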