* 
* ==> Audit <==
* |---------|------|----------|---------|---------|---------------------|----------|
| Command | Args | Profile  | User    | Version |     Start Time      | End Time |
|---------|------|----------|---------|---------|---------------------|----------|
| start   |      | minikube | sadique | v1.26.1 | 17 Aug 22 12:36 IST |          |
|---------|------|----------|---------|---------|---------------------|----------|
* 
* ==> Last Start <==
* Log file created at: 2022/08/17 12:36:02
Running on machine: sadique
Binary: Built with gc go1.18.3 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0817 12:36:02.194920 17705 out.go:296] Setting OutFile to fd 1 ...
I0817 12:36:02.195208 17705 out.go:348] isatty.IsTerminal(1) = true
I0817 12:36:02.195215 17705 out.go:309] Setting ErrFile to fd 2...
I0817 12:36:02.195227 17705 out.go:348] isatty.IsTerminal(2) = true
I0817 12:36:02.195398 17705 root.go:333] Updating PATH: /home/sadique/.minikube/bin
W0817 12:36:02.195587 17705 root.go:310] Error reading config file at /home/sadique/.minikube/config/config.json: open /home/sadique/.minikube/config/config.json: no such file or directory
I0817 12:36:02.196445 17705 out.go:303] Setting JSON to false
I0817 12:36:02.221373 17705 start.go:115] hostinfo: {"hostname":"sadique","uptime":6381,"bootTime":1660713581,"procs":295,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"5.15.0-46-generic","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostId":"edcbd977-6afb-41d6-a136-66c2173deefa"}
I0817 12:36:02.221471 17705 start.go:125] virtualization: kvm host
I0817 12:36:02.223830 17705 out.go:177] 😄 minikube v1.26.1 on Ubuntu 22.04
W0817 12:36:02.225331 17705 preload.go:295] Failed to list preload files: open /home/sadique/.minikube/cache/preloaded-tarball: no such file or directory
I0817 12:36:02.225410 17705 notify.go:193] Checking for updates...
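The header above documents the klog entry format these lines follow: `[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg`. When grepping through a long `minikube logs` dump it can help to split entries into fields; a minimal sketch in Python (the regex and field names here are my own, not part of minikube):

```python
import re

# klog entry: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
KLOG_RE = re.compile(
    r"^(?P<level>[IWEF])"                       # severity: Info/Warning/Error/Fatal
    r"(?P<month>\d{2})(?P<day>\d{2})\s+"
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d{6})"
    r"\s+(?P<threadid>\d+)\s+"
    r"(?P<file>[^:]+):(?P<line>\d+)\]\s"
    r"(?P<msg>.*)$"
)

def parse_klog(line: str) -> dict:
    """Split one klog line into its documented fields."""
    m = KLOG_RE.match(line)
    if m is None:
        raise ValueError(f"not a klog line: {line!r}")
    return m.groupdict()

entry = parse_klog("I0817 12:36:02.194920 17705 out.go:296] Setting OutFile to fd 1 ...")
```

Note that continuation lines of a multi-line message (e.g. the SSH scripts later in this log) carry no klog header, so a real filter would attach non-matching lines to the previous entry.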
I0817 12:36:02.225434 17705 driver.go:365] Setting default libvirt URI to qemu:///system
I0817 12:36:02.225462 17705 global.go:111] Querying for installed drivers using PATH=/home/sadique/.minikube/bin:/home/sadique/.yarn/bin:/home/sadique/.config/yarn/global/node_modules/.bin:/home/sadique/.nvm/versions/node/v16.15.0/bin:/home/sadique/bin:/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/snap/bin:/home/sadique/.local/share/JetBrains/Toolbox/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/snap/bin:/usr/local/go/bin:/home/sadique/.jdks/corretto-17.0.3/bin:/home/sadique/Android/Gradle/gradle-4.10.3-all/gradle-4.10.3/bin
I0817 12:36:02.225472 17705 global.go:119] ssh default: false priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0817 12:36:02.267938 17705 virtualbox.go:136] virtual box version: 6.1.34_Ubuntur150636
I0817 12:36:02.267953 17705 global.go:119] virtualbox default: true priority: 6, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:6.1.34_Ubuntur150636 }
I0817 12:36:02.268120 17705 global.go:119] vmware default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "docker-machine-driver-vmware": executable file not found in $PATH Reason: Fix:Install docker-machine-driver-vmware Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/ Version:}
I0817 12:36:02.386685 17705 docker.go:137] docker version: linux-20.10.17
I0817 12:36:02.386814 17705 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0817 12:36:02.643156 17705 info.go:265] docker info: {ID:KAKH:35OZ:XSZG:KC2C:NDSN:HKVH:NXCU:W3XU:Q6W3:Z7ZS:B4XV:ZEKZ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:false NGoroutines:51 SystemTime:2022-08-17 07:06:02.49040098 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:2957619200 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/libexec/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/libexec/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:}}
I0817 12:36:02.643281 17705 docker.go:254] overlay module found
I0817 12:36:02.643290 17705 global.go:119] docker default: true priority: 9, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0817 12:36:02.643441 17705 global.go:119] kvm2 default: true priority: 8, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "virsh": executable file not found in $PATH Reason: Fix:Install libvirt Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/ Version:}
I0817 12:36:02.659108 17705 global.go:119] none default: false priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0817 12:36:02.659326 17705 global.go:119] podman default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "podman": executable file not found in $PATH Reason: Fix:Install Podman Doc:https://minikube.sigs.k8s.io/docs/drivers/podman/ Version:}
I0817 12:36:02.659412 17705 global.go:119] qemu2 default: true priority: 3, state: {Installed:true Healthy:true Running:true NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0817 12:36:02.659439 17705 driver.go:300] not recommending "ssh" due to default: false
I0817 12:36:02.659447 17705 driver.go:300] not recommending "none" due to default: false
I0817 12:36:02.659455 17705 driver.go:305] not recommending "qemu2" due to priority: 3
I0817 12:36:02.659476 17705 driver.go:335] Picked: docker
I0817 12:36:02.659491 17705 driver.go:336] Alternatives: [virtualbox ssh none qemu2 (experimental)]
I0817 12:36:02.659502 17705 driver.go:337] Rejects: [vmware kvm2 podman]
I0817 12:36:02.661551 17705 out.go:177] ✨ Automatically selected the docker driver.
Other choices: virtualbox, ssh, none, qemu2 (experimental)
I0817 12:36:02.662953 17705 start.go:284] selected driver: docker
I0817 12:36:02.662960 17705 start.go:808] validating driver "docker" against
I0817 12:36:02.662975 17705 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0817 12:36:02.663084 17705 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0817 12:36:02.947789 17705 info.go:265] docker info: {ID:KAKH:35OZ:XSZG:KC2C:NDSN:HKVH:NXCU:W3XU:Q6W3:Z7ZS:B4XV:ZEKZ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:false NGoroutines:51 SystemTime:2022-08-17 07:06:02.772294781 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:2957619200 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/libexec/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/libexec/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:}}
I0817 12:36:02.948035 17705 start_flags.go:296] no existing cluster config was found, will generate one from the flags
I0817 12:36:02.968781 17705 start_flags.go:377] Using suggested 2772MB memory alloc based on sys=11867MB, container=2820MB
I0817 12:36:02.968986 17705 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
I0817 12:36:02.970743 17705 out.go:177] 📌 Using Docker driver with root privileges
I0817 12:36:02.977627 17705 cni.go:95] Creating CNI manager for ""
I0817 12:36:02.977646 17705 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0817 12:36:02.977655 17705 start_flags.go:310] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2772 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/sadique:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0817 12:36:02.979383 17705 out.go:177] 👍 Starting control plane node minikube in cluster minikube
I0817 12:36:02.980661 17705 cache.go:120] Beginning downloading kic base image for docker with docker
I0817 12:36:02.982254 17705 out.go:177] 🚜 Pulling base image ...
I0817 12:36:02.983680 17705 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
I0817 12:36:02.983708 17705 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon
I0817 12:36:03.102291 17705 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 to local cache
I0817 12:36:03.102611 17705 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local cache directory
I0817 12:36:03.102763 17705 image.go:119] Writing gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 to local cache
I0817 12:36:04.139841 17705 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.3/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4
I0817 12:36:04.139931 17705 cache.go:57] Caching tarball of preloaded images
I0817 12:36:04.140462 17705 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
I0817 12:36:04.148412 17705 out.go:177] 💾 Downloading Kubernetes v1.24.3 preload ...
I0817 12:36:04.150539 17705 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 ...
I0817 12:36:04.623363 17705 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.3/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4?checksum=md5:ae1c8e7b1fa116b4699d7551d3812287 -> /home/sadique/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4
I0817 12:39:25.471101 17705 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 ...
I0817 12:39:25.471179 17705 preload.go:256] verifying checksumm of /home/sadique/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 ...
I0817 12:39:27.971926 17705 cache.go:60] Finished verifying existence of preloaded tar for v1.24.3 on docker
I0817 12:39:27.972312 17705 profile.go:148] Saving config to /home/sadique/.minikube/profiles/minikube/config.json ...
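The preload download above carries its expected digest in the URL (`?checksum=md5:ae1c...`), and the `preload.go:256` line then verifies the tarball on disk. A rough Python sketch of that verification step, under the assumption that it is a plain MD5 comparison (the helper names and streaming chunk size are mine, not minikube's code):

```python
import hashlib

def md5_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in 1 MiB chunks so a multi-hundred-MB preload
    tarball is never loaded into memory at once."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def verify_preload(path: str, expected_md5: str) -> bool:
    # The expectation comes from the download URL's ?checksum=md5:<hex> query.
    return md5_of(path) == expected_md5
```

A corrupted or truncated download would fail this check and force a re-fetch rather than poisoning the image cache.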
I0817 12:39:27.972341 17705 lock.go:35] WriteFile acquiring /home/sadique/.minikube/profiles/minikube/config.json: {Name:mka0b7b5aefcefadcb4b49d70cf4e468a819e8d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0817 12:40:09.558721 17705 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 as a tarball
I0817 12:40:09.558744 17705 cache.go:161] Loading gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 from local cache
I0817 12:40:12.894683 17705 cache.go:164] successfully loaded gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 from cached tarball
I0817 12:40:12.894700 17705 cache.go:170] Downloading gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 to local daemon
I0817 12:40:12.894857 17705 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon
I0817 12:40:13.002655 17705 image.go:243] Writing gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 to local daemon
I0817 12:45:31.366839 17705 cache.go:173] successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8
I0817 12:45:31.367074 17705 cache.go:208] Successfully downloaded all kic artifacts
I0817 12:45:31.369420 17705 start.go:371] acquiring machines lock for minikube: {Name:mkf2a9b8c8b2761112bd33d5581f2ff3ef893e1a Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0817 12:45:31.369604 17705 start.go:375] acquired machines lock for "minikube" in 135.075µs
I0817 12:45:31.369646 17705 start.go:92] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2772 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/sadique:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0817 12:45:31.369761 17705 start.go:132] createHost starting for "" (driver="docker")
I0817 12:45:31.376413 17705 out.go:204] 🔥 Creating docker container (CPUs=2, Memory=2772MB) ...
I0817 12:45:31.378131 17705 start.go:166] libmachine.API.Create for "minikube" (driver="docker")
I0817 12:45:31.378172 17705 client.go:168] LocalClient.Create starting
I0817 12:45:31.378691 17705 main.go:134] libmachine: Creating CA: /home/sadique/.minikube/certs/ca.pem
I0817 12:45:31.627365 17705 main.go:134] libmachine: Creating client certificate: /home/sadique/.minikube/certs/cert.pem
I0817 12:45:31.797838 17705 cli_runner.go:164] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0817 12:45:31.904922 17705 cli_runner.go:211] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0817 12:45:31.905087 17705 network_create.go:272] running [docker network inspect minikube] to gather additional debugging logs...
I0817 12:45:31.905115 17705 cli_runner.go:164] Run: docker network inspect minikube
W0817 12:45:32.010810 17705 cli_runner.go:211] docker network inspect minikube returned with exit code 1
I0817 12:45:32.010836 17705 network_create.go:275] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
stdout:
[]
stderr:
Error: No such network: minikube
I0817 12:45:32.010856 17705 network_create.go:277] output of [docker network inspect minikube]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: minikube
** /stderr **
I0817 12:45:32.010918 17705 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0817 12:45:32.124204 17705 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00058c038] misses:0}
I0817 12:45:32.124248 17705 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0817 12:45:32.124278 17705 network_create.go:115] attempt to create docker network minikube 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
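The subnet record logged above (192.168.49.0/24 with Gateway .1, ClientMin .2, ClientMax .254, Broadcast .255) is ordinary /24 arithmetic. The same values can be reproduced with Python's `ipaddress` module; the variable names below are mine, chosen to mirror the log's field names:

```python
import ipaddress

net = ipaddress.ip_network("192.168.49.0/24")
hosts = list(net.hosts())            # usable addresses: .1 through .254

gateway    = hosts[0]                # 192.168.49.1, first usable host
client_min = hosts[1]                # 192.168.49.2, first assignable client
client_max = hosts[-1]               # 192.168.49.254, last assignable client
broadcast  = net.broadcast_address   # 192.168.49.255
```

This also explains the "calculated static IP 192.168.49.2" line that follows: the node container gets ClientMin, the first address after the gateway.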
I0817 12:45:32.124355 17705 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=minikube minikube
I0817 12:45:32.296929 17705 network_create.go:99] docker network minikube 192.168.49.0/24 created
I0817 12:45:32.298178 17705 kic.go:106] calculated static IP "192.168.49.2" for the "minikube" container
I0817 12:45:32.298268 17705 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0817 12:45:32.411756 17705 cli_runner.go:164] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0817 12:45:32.520152 17705 oci.go:103] Successfully created a docker volume minikube
I0817 12:45:32.520228 17705 cli_runner.go:164] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 -d /var/lib
I0817 12:45:33.718397 17705 cli_runner.go:217] Completed: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 -d /var/lib: (1.198110757s)
I0817 12:45:33.718414 17705 oci.go:107] Successfully prepared a docker volume minikube
I0817 12:45:33.718451 17705 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
I0817 12:45:33.718473 17705 kic.go:179] Starting extracting preloaded images to volume ...
I0817 12:45:33.718546 17705 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/sadique/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 -I lz4 -xf /preloaded.tar -C /extractDir
I0817 12:45:42.446289 17705 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/sadique/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 -I lz4 -xf /preloaded.tar -C /extractDir: (8.726074783s)
I0817 12:45:42.446349 17705 kic.go:188] duration metric: took 8.727856 seconds to extract preloaded images to volume
W0817 12:45:42.447577 17705 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0817 12:45:42.448328 17705 oci.go:240] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
I0817 12:45:42.448511 17705 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0817 12:45:42.775955 17705 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=2772mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8
I0817 12:45:43.695787 17705 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Running}}
I0817 12:45:43.939802 17705 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0817 12:45:44.072433 17705 cli_runner.go:164] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables
I0817 12:45:44.330007 17705 oci.go:144] the created container "minikube" has a running status.
I0817 12:45:44.330479 17705 kic.go:210] Creating ssh key for kic: /home/sadique/.minikube/machines/minikube/id_rsa...
I0817 12:45:44.565460 17705 kic_runner.go:191] docker (temp): /home/sadique/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0817 12:45:44.829856 17705 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0817 12:45:45.048599 17705 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0817 12:45:45.048618 17705 kic_runner.go:114] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0817 12:45:45.300136 17705 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0817 12:45:45.532790 17705 machine.go:88] provisioning docker machine ...
I0817 12:45:45.547765 17705 ubuntu.go:169] provisioning hostname "minikube"
I0817 12:45:45.549794 17705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0817 12:45:45.791705 17705 main.go:134] libmachine: Using SSH client type: native
I0817 12:45:45.821722 17705 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x7daec0] 0x7ddf20 [] 0s} 127.0.0.1 44133 }
I0817 12:45:45.821802 17705 main.go:134] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0817 12:45:46.123431 17705 main.go:134] libmachine: SSH cmd err, output: : minikube
I0817 12:45:46.124056 17705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0817 12:45:46.244863 17705 main.go:134] libmachine: Using SSH client type: native
I0817 12:45:46.245058 17705 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x7daec0] 0x7ddf20 [] 0s} 127.0.0.1 44133 }
I0817 12:45:46.245100 17705 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\sminikube' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
  else
    echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
  fi
fi
I0817 12:45:46.402441 17705 main.go:134] libmachine: SSH cmd err, output: :
I0817 12:45:46.402470 17705 ubuntu.go:175] set auth options {CertDir:/home/sadique/.minikube CaCertPath:/home/sadique/.minikube/certs/ca.pem CaPrivateKeyPath:/home/sadique/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/sadique/.minikube/machines/server.pem ServerKeyPath:/home/sadique/.minikube/machines/server-key.pem ClientKeyPath:/home/sadique/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/sadique/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/sadique/.minikube}
I0817 12:45:46.402507 17705 ubuntu.go:177] setting up certificates
I0817 12:45:46.402521 17705 provision.go:83] configureAuth start
I0817 12:45:46.402630 17705 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0817 12:45:46.528376 17705 provision.go:138] copyHostCerts
I0817 12:45:46.528450 17705 exec_runner.go:151] cp: /home/sadique/.minikube/certs/ca.pem --> /home/sadique/.minikube/ca.pem (1082 bytes)
I0817 12:45:46.528575 17705 exec_runner.go:151] cp: /home/sadique/.minikube/certs/cert.pem --> /home/sadique/.minikube/cert.pem (1123 bytes)
I0817 12:45:46.528658 17705 exec_runner.go:151] cp: /home/sadique/.minikube/certs/key.pem --> /home/sadique/.minikube/key.pem (1675 bytes)
I0817 12:45:46.528724 17705 provision.go:112] generating server cert: /home/sadique/.minikube/machines/server.pem ca-key=/home/sadique/.minikube/certs/ca.pem private-key=/home/sadique/.minikube/certs/ca-key.pem org=sadique.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0817 12:45:46.668600 17705 provision.go:172] copyRemoteCerts
I0817 12:45:46.684288 17705 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0817 12:45:46.684377 17705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0817 12:45:46.799370 17705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:44133 SSHKeyPath:/home/sadique/.minikube/machines/minikube/id_rsa Username:docker}
I0817 12:45:46.935953 17705 ssh_runner.go:362] scp /home/sadique/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0817 12:45:46.977776 17705 ssh_runner.go:362] scp /home/sadique/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
I0817 12:45:47.015777 17705 ssh_runner.go:362] scp /home/sadique/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0817 12:45:47.053254 17705 provision.go:86] duration metric: configureAuth took 650.719732ms
I0817 12:45:47.053274 17705 ubuntu.go:193] setting minikube options for container-runtime
I0817 12:45:47.054202 17705 config.go:180] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
I0817 12:45:47.054291 17705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0817 12:45:47.172810 17705 main.go:134] libmachine: Using SSH client type: native
I0817 12:45:47.173028 17705 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x7daec0] 0x7ddf20 [] 0s} 127.0.0.1 44133 }
I0817 12:45:47.173046 17705 main.go:134] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0817 12:45:47.338253 17705 main.go:134] libmachine: SSH cmd err, output: : overlay
I0817 12:45:47.338286 17705 ubuntu.go:71] root file system type: overlay
I0817 12:45:47.340094 17705 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
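The hostname step above runs an idempotent edit of /etc/hosts: only if no line already ends with the hostname does it either rewrite an existing `127.0.1.1` entry in place or append a new one. A minimal sketch of the same pattern against a scratch file (the file path and starting contents are illustrative, not taken from the log):

```shell
# Idempotent "ensure a 127.0.1.1 <name> entry" pattern, as in the logged SSH command.
# A scratch copy stands in for the real /etc/hosts.
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 oldname\n' > "$HOSTS"
NAME=minikube
if ! grep -q "\s$NAME$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1\s' "$HOSTS"; then
    # rewrite the existing 127.0.1.1 line in place
    sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 $NAME/" "$HOSTS"
  else
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
cat "$HOSTS"
```

Running it twice leaves the file unchanged, which is why minikube can re-run the command safely on every start.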
I0817 12:45:47.340292 17705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0817 12:45:47.457855 17705 main.go:134] libmachine: Using SSH client type: native
I0817 12:45:47.458054 17705 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x7daec0] 0x7ddf20 [] 0s} 127.0.0.1 44133 }
I0817 12:45:47.458302 17705 main.go:134] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0817 12:45:47.653340 17705 main.go:134] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
I0817 12:45:47.653442 17705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0817 12:45:47.767195 17705 main.go:134] libmachine: Using SSH client type: native
I0817 12:45:47.767366 17705 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x7daec0] 0x7ddf20 [] 0s} 127.0.0.1 44133 }
I0817 12:45:47.767387 17705 main.go:134] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0817 12:45:48.865297 17705 main.go:134] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service	2022-06-06 23:01:03.000000000 +0000
+++ /lib/systemd/system/docker.service.new	2022-08-17 07:15:47.653598000 +0000
@@ -1,30 +1,32 @@
 [Unit]
 Description=Docker Application Container Engine
 Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
 Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
 
 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
 
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
 
 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
 LimitNPROC=infinity
 LimitCORE=infinity
 
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0
 
 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes
 
 # kill only the docker process, not all processes in the cgroup
 KillMode=process
-OOMScoreAdjust=-500
 
 [Install]
 WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
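The unit swap above relies on a compare-then-replace idiom: `diff -u current new` exits 0 when the files are identical, so the `|| { ... }` branch installs the candidate file and reloads/restarts the service only when something actually changed. A reduced sketch with scratch files in place of the real unit, and the `systemctl` calls replaced by a marker variable (both stand-ins are illustrative):

```shell
# diff exits 0 when the files match, so the || branch fires only on change.
cur=$(mktemp); new=$(mktemp)
echo "ExecStart=/usr/bin/dockerd -H fd://" > "$cur"
echo "ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376" > "$new"
diff -u "$cur" "$new" > /dev/null || {
  mv "$new" "$cur"   # install the changed unit
  reload=needed      # stand-in for daemon-reload + restart
}
cat "$cur"
```

Because an unchanged file short-circuits the branch, repeated `minikube start` runs avoid needless daemon restarts.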
Executing: /lib/systemd/systemd-sysv-install enable docker
I0817 12:45:48.865319 17705 machine.go:91] provisioned docker machine in 3.332516347s
I0817 12:45:48.865331 17705 client.go:171] LocalClient.Create took 17.487152225s
I0817 12:45:48.865354 17705 start.go:174] duration metric: libmachine.API.Create for "minikube" took 17.487226334s
I0817 12:45:48.865364 17705 start.go:307] post-start starting for "minikube" (driver="docker")
I0817 12:45:48.865652 17705 start.go:335] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0817 12:45:48.865762 17705 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0817 12:45:48.865826 17705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0817 12:45:48.985029 17705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:44133 SSHKeyPath:/home/sadique/.minikube/machines/minikube/id_rsa Username:docker}
I0817 12:45:49.116786 17705 ssh_runner.go:195] Run: cat /etc/os-release
I0817 12:45:49.127591 17705 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0817 12:45:49.127634 17705 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0817 12:45:49.127658 17705 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0817 12:45:49.127673 17705 info.go:137] Remote host: Ubuntu 20.04.4 LTS
I0817 12:45:49.127694 17705 filesync.go:126] Scanning /home/sadique/.minikube/addons for local assets ...
I0817 12:45:49.128024 17705 filesync.go:126] Scanning /home/sadique/.minikube/files for local assets ...
I0817 12:45:49.128301 17705 start.go:310] post-start completed in 262.651578ms
I0817 12:45:49.128866 17705 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0817 12:45:49.240711 17705 profile.go:148] Saving config to /home/sadique/.minikube/profiles/minikube/config.json ...
I0817 12:45:49.241212 17705 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0817 12:45:49.241260 17705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0817 12:45:49.356718 17705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:44133 SSHKeyPath:/home/sadique/.minikube/machines/minikube/id_rsa Username:docker}
I0817 12:45:49.469594 17705 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0817 12:45:49.491672 17705 start.go:135] duration metric: createHost completed in 18.12189261s
I0817 12:45:49.491696 17705 start.go:82] releasing machines lock for "minikube", held for 18.122079712s
I0817 12:45:49.491936 17705 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0817 12:45:49.602113 17705 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0817 12:45:49.602188 17705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0817 12:45:49.603190 17705 ssh_runner.go:195] Run: systemctl --version
I0817 12:45:49.603243 17705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0817 12:45:49.744073 17705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:44133 SSHKeyPath:/home/sadique/.minikube/machines/minikube/id_rsa Username:docker}
I0817 12:45:49.749557 17705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:44133 SSHKeyPath:/home/sadique/.minikube/machines/minikube/id_rsa Username:docker}
I0817 12:45:50.350860 17705 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0817 12:45:50.409441 17705 cruntime.go:273] skipping containerd shutdown because we are bound to it
I0817 12:45:50.409678 17705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0817 12:45:50.454710 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0817 12:45:50.490198 17705 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0817 12:45:50.641471 17705 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0817 12:45:50.771457 17705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0817 12:45:50.906140 17705 ssh_runner.go:195] Run: sudo systemctl restart docker
I0817 12:45:51.309882 17705 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0817 12:45:51.468643 17705 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0817 12:45:51.596647 17705 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
I0817 12:45:51.625910 17705 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0817 12:45:51.626039 17705 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0817 12:45:51.635442 17705 start.go:471] Will wait 60s for crictl version
I0817 12:45:51.635544 17705 ssh_runner.go:195] Run: sudo crictl version
I0817 12:45:51.868092 17705 start.go:480] Version: 0.1.0 RuntimeName: docker RuntimeVersion: 20.10.17 RuntimeApiVersion: 1.41.0
I0817 12:45:51.868174 17705 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0817 12:45:51.979231 17705 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0817 12:45:52.045835 17705 out.go:204] 🐳 Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
I0817 12:45:52.046007 17705 cli_runner.go:164] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0817 12:45:52.149662 17705 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0817 12:45:52.158127 17705 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0817 12:45:52.181863 17705 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
I0817 12:45:52.181953 17705 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0817 12:45:52.241463 17705 docker.go:611] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.24.3
k8s.gcr.io/kube-scheduler:v1.24.3
k8s.gcr.io/kube-controller-manager:v1.24.3
k8s.gcr.io/kube-proxy:v1.24.3
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/pause:3.7
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0817 12:45:52.241480 17705 docker.go:542] Images already preloaded, skipping extraction
I0817 12:45:52.241553 17705 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0817 12:45:52.296828 17705 docker.go:611] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.24.3
k8s.gcr.io/kube-controller-manager:v1.24.3
k8s.gcr.io/kube-proxy:v1.24.3
k8s.gcr.io/kube-scheduler:v1.24.3
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/pause:3.7
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0817 12:45:52.296848 17705 cache_images.go:84] Images are preloaded, skipping loading
I0817 12:45:52.296937 17705 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0817 12:45:52.434503 17705 cni.go:95] Creating CNI manager for ""
I0817 12:45:52.434522 17705 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0817 12:45:52.435375 17705 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0817 12:45:52.435406 17705 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0817 12:45:52.435540 17705 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/cri-dockerd.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.24.3
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0817 12:45:52.436127 17705 kubeadm.go:961] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=minikube --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2 --runtime-request-timeout=15m

[Install]
 config:
{KubernetesVersion:v1.24.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0817 12:45:52.436210 17705 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
I0817 12:45:52.454132 17705 binaries.go:44] Found k8s binaries, skipping transfer
I0817 12:45:52.454239 17705 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0817 12:45:52.470634 17705 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (470 bytes)
I0817 12:45:52.500412 17705 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0817 12:45:52.529141 17705 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2030 bytes)
I0817 12:45:52.557265 17705 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0817 12:45:52.565139 17705 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0817 12:45:52.586150 17705 certs.go:54] Setting up /home/sadique/.minikube/profiles/minikube for IP: 192.168.49.2
I0817 12:45:52.586236 17705 certs.go:187] generating minikubeCA CA: /home/sadique/.minikube/ca.key
I0817 12:45:52.906711 17705 crypto.go:156] Writing cert to /home/sadique/.minikube/ca.crt ...
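The `control-plane.minikube.internal` (and earlier `host.minikube.internal`) hosts entries above use a different idiom than the hostname step: drop any existing tab-separated entry for the name with `grep -v`, append the desired line, then copy the rewritten file back. A sketch of the same remove-then-append rewrite against a scratch file (the starting contents are illustrative; `$'\t'` from the logged bash command is spelled via `printf` here for portability):

```shell
# Remove-then-append hosts rewrite, mirroring the logged /bin/bash -c command.
# A scratch file stands in for /etc/hosts; TAB replaces bash's $'\t'.
HOSTS=$(mktemp)
TAB=$(printf '\t')
printf '127.0.0.1 localhost\n192.168.49.2\tcontrol-plane.minikube.internal\n' > "$HOSTS"
{ grep -v "${TAB}control-plane.minikube.internal\$" "$HOSTS"
  echo "192.168.49.2 control-plane.minikube.internal"; } > "$HOSTS.new"
mv "$HOSTS.new" "$HOSTS"
cat "$HOSTS"
```

The rewrite is idempotent for the same reason as the logged command: the `grep -v` pass guarantees at most one entry for the name survives each run.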
I0817 12:45:52.906738 17705 lock.go:35] WriteFile acquiring /home/sadique/.minikube/ca.crt: {Name:mk7cdfca3c8a900d91d157435ae3aee7be9402d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0817 12:45:52.906994 17705 crypto.go:164] Writing key to /home/sadique/.minikube/ca.key ...
I0817 12:45:52.907007 17705 lock.go:35] WriteFile acquiring /home/sadique/.minikube/ca.key: {Name:mkf8db22f135b3aaf0a1cd17218476acfad29c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0817 12:45:52.907177 17705 certs.go:187] generating proxyClientCA CA: /home/sadique/.minikube/proxy-client-ca.key
I0817 12:45:53.356800 17705 crypto.go:156] Writing cert to /home/sadique/.minikube/proxy-client-ca.crt ...
I0817 12:45:53.356813 17705 lock.go:35] WriteFile acquiring /home/sadique/.minikube/proxy-client-ca.crt: {Name:mk91b2a757162cc60e4cd9562a87ca5f37acf639 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0817 12:45:53.357009 17705 crypto.go:164] Writing key to /home/sadique/.minikube/proxy-client-ca.key ...
I0817 12:45:53.357017 17705 lock.go:35] WriteFile acquiring /home/sadique/.minikube/proxy-client-ca.key: {Name:mkcb5308d9701a9e0cce25702c3fd529acb132d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0817 12:45:53.357172 17705 certs.go:302] generating minikube-user signed cert: /home/sadique/.minikube/profiles/minikube/client.key
I0817 12:45:53.357187 17705 crypto.go:68] Generating cert /home/sadique/.minikube/profiles/minikube/client.crt with IP's: []
I0817 12:45:53.610580 17705 crypto.go:156] Writing cert to /home/sadique/.minikube/profiles/minikube/client.crt ...
I0817 12:45:53.610593 17705 lock.go:35] WriteFile acquiring /home/sadique/.minikube/profiles/minikube/client.crt: {Name:mkf77343e703729f93e145fcd0a94d755280b624 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0817 12:45:53.610771 17705 crypto.go:164] Writing key to /home/sadique/.minikube/profiles/minikube/client.key ...
I0817 12:45:53.610779 17705 lock.go:35] WriteFile acquiring /home/sadique/.minikube/profiles/minikube/client.key: {Name:mkeb827f486c85d4a36e101f09de3cf800382f03 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0817 12:45:53.610892 17705 certs.go:302] generating minikube signed cert: /home/sadique/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
I0817 12:45:53.610913 17705 crypto.go:68] Generating cert /home/sadique/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0817 12:45:53.728171 17705 crypto.go:156] Writing cert to /home/sadique/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ...
I0817 12:45:53.728186 17705 lock.go:35] WriteFile acquiring /home/sadique/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mk20308e6c1a1d93e235284d2497831ac86caa78 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0817 12:45:53.728372 17705 crypto.go:164] Writing key to /home/sadique/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ...
I0817 12:45:53.728381 17705 lock.go:35] WriteFile acquiring /home/sadique/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mkd75509f6a33fdeaa42202dffe4c7f0bf07eb6e Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0817 12:45:53.728493 17705 certs.go:320] copying /home/sadique/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /home/sadique/.minikube/profiles/minikube/apiserver.crt
I0817 12:45:53.728582 17705 certs.go:324] copying /home/sadique/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /home/sadique/.minikube/profiles/minikube/apiserver.key
I0817 12:45:53.728652 17705 certs.go:302] generating aggregator signed cert: /home/sadique/.minikube/profiles/minikube/proxy-client.key
I0817 12:45:53.728670 17705 crypto.go:68] Generating cert /home/sadique/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0817 12:45:54.151801 17705 crypto.go:156] Writing cert to /home/sadique/.minikube/profiles/minikube/proxy-client.crt ...
I0817 12:45:54.151816 17705 lock.go:35] WriteFile acquiring /home/sadique/.minikube/profiles/minikube/proxy-client.crt: {Name:mk886571e3e58734eff1b27d2f2ab5904878e9e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0817 12:45:54.152012 17705 crypto.go:164] Writing key to /home/sadique/.minikube/profiles/minikube/proxy-client.key ...
I0817 12:45:54.152021 17705 lock.go:35] WriteFile acquiring /home/sadique/.minikube/profiles/minikube/proxy-client.key: {Name:mkf545fb7e67aa0fbce7aec974b6ff8aafc506eb Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0817 12:45:54.152279 17705 certs.go:388] found cert: /home/sadique/.minikube/certs/home/sadique/.minikube/certs/ca-key.pem (1679 bytes)
I0817 12:45:54.152324 17705 certs.go:388] found cert: /home/sadique/.minikube/certs/home/sadique/.minikube/certs/ca.pem (1082 bytes)
I0817 12:45:54.152360 17705 certs.go:388] found cert: /home/sadique/.minikube/certs/home/sadique/.minikube/certs/cert.pem (1123 bytes)
I0817 12:45:54.152395 17705 certs.go:388] found cert: /home/sadique/.minikube/certs/home/sadique/.minikube/certs/key.pem (1675 bytes)
I0817 12:45:54.160722 17705 ssh_runner.go:362] scp /home/sadique/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0817 12:45:54.200629 17705 ssh_runner.go:362] scp /home/sadique/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0817 12:45:54.239540 17705 ssh_runner.go:362] scp /home/sadique/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0817 12:45:54.280318 17705 ssh_runner.go:362] scp /home/sadique/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0817 12:45:54.318786 17705 ssh_runner.go:362] scp /home/sadique/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0817 12:45:54.356178 17705 ssh_runner.go:362] scp /home/sadique/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0817 12:45:54.394189 17705 ssh_runner.go:362] scp /home/sadique/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0817 12:45:54.431492 17705 ssh_runner.go:362] scp /home/sadique/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0817 12:45:54.469032 17705 ssh_runner.go:362] scp /home/sadique/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0817 12:45:54.518728 17705 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0817 12:45:54.547734 17705 ssh_runner.go:195] Run: openssl version
I0817 12:45:54.560973 17705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0817 12:45:54.578968 17705 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0817 12:45:54.589191 17705 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Aug 17 07:15 /usr/share/ca-certificates/minikubeCA.pem
I0817 12:45:54.589277 17705 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0817 12:45:54.601144 17705 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0817 12:45:54.618771 17705 kubeadm.go:395] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2772 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/sadique:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0817 12:45:54.618959 17705 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0817 12:45:54.669731 17705 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0817 12:45:54.687137 17705 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0817 12:45:54.703346 17705 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I0817 12:45:54.703462 17705 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0817 12:45:54.720459 17705 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0817 12:45:54.720572 17705 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0817 12:46:19.307164 17705 out.go:204] ▪ Generating certificates and keys ...
I0817 12:46:19.373636 17705 out.go:204] ▪ Booting up control plane ...
I0817 12:46:19.422068 17705 out.go:204] ▪ Configuring RBAC rules ...
I0817 12:46:19.435181 17705 cni.go:95] Creating CNI manager for ""
I0817 12:46:19.435221 17705 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0817 12:46:19.435313 17705 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0817 12:46:19.435610 17705 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0817 12:46:19.456636 17705 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl label nodes minikube.k8s.io/version=v1.26.1 minikube.k8s.io/commit=62e108c3dfdec8029a890ad6d8ef96b6461426dc minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2022_08_17T12_46_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0817 12:46:19.741981 17705 kubeadm.go:1045] duration metric: took 306.639345ms to wait for elevateKubeSystemPrivileges.
I0817 12:46:19.742001 17705 ops.go:34] apiserver oom_adj: -16
I0817 12:46:22.300020 17705 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.24.3/kubectl label nodes minikube.k8s.io/version=v1.26.1 minikube.k8s.io/commit=62e108c3dfdec8029a890ad6d8ef96b6461426dc minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2022_08_17T12_46_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (2.843336008s)
I0817 12:46:22.300058 17705 kubeadm.go:397] StartCluster complete in 27.681294723s
I0817 12:46:22.300080 17705 settings.go:142] acquiring lock: {Name:mk2e44fbb8d40d8a16c91db17e178648d0f24968 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0817 12:46:22.300193 17705 settings.go:150] Updating kubeconfig: /home/sadique/.kube/config
I0817 12:46:22.371231 17705 lock.go:35] WriteFile acquiring /home/sadique/.kube/config: {Name:mk383c78dda733b6f759844a7c33c81a9a5cfdb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
W0817 12:46:52.473103 17705 kapi.go:226] failed getting deployment scale, will retry: Get "https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8443: i/o timeout
W0817 12:47:22.974937 17705 kapi.go:226] failed getting deployment scale, will retry: Get "https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8443: i/o timeout
W0817 12:47:53.475971 17705 kapi.go:226] failed getting deployment scale, will retry: Get "https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8443: i/o timeout
W0817 12:48:23.975558 17705 kapi.go:226] failed getting deployment scale, will retry: Get "https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8443: i/o timeout
W0817 12:48:54.475236 17705 kapi.go:226] failed getting deployment scale, will retry: Get "https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8443: i/o timeout
W0817 12:49:24.476673 17705 kapi.go:226] failed getting deployment scale, will retry: Get "https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8443: i/o timeout
I0817 12:49:24.482243 17705 kapi.go:241] timed out trying to rescale deployment "coredns" in namespace "kube-system" and context "minikube" to 1: timed out waiting for the condition
E0817 12:49:24.482368 17705 start.go:267] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: timed out waiting for the condition
I0817 12:49:24.482634 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0817 12:49:24.485334 17705 config.go:180] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
I0817 12:49:24.493313 17705 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0817 12:49:24.496372 17705 out.go:177] 🔎 Verifying Kubernetes components...
I0817 12:49:24.493933 17705 addons.go:412] enableAddons start: toEnable=map[], additional=[]
I0817 12:49:24.496653 17705 addons.go:65] Setting storage-provisioner=true in profile "minikube"
I0817 12:49:24.498270 17705 addons.go:153] Setting addon storage-provisioner=true in "minikube"
I0817 12:49:24.498336 17705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
W0817 12:49:24.498337 17705 addons.go:162] addon storage-provisioner should already be in state true
I0817 12:49:24.496681 17705 addons.go:65] Setting default-storageclass=true in profile "minikube"
I0817 12:49:24.498821 17705 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0817 12:49:24.500944 17705 host.go:66] Checking if "minikube" exists ...
I0817 12:49:24.502320 17705 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0817 12:49:24.502786 17705 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0817 12:49:24.820682 17705 out.go:177] ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0817 12:49:24.819843 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0817 12:49:24.823337 17705 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0817 12:49:24.820047 17705 api_server.go:51] waiting for apiserver process to appear ...
I0817 12:49:24.823352 17705 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0817 12:49:24.823423 17705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0817 12:49:24.823427 17705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0817 12:49:24.963144 17705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:44133 SSHKeyPath:/home/sadique/.minikube/machines/minikube/id_rsa Username:docker}
I0817 12:49:25.288977 17705 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0817 12:49:26.529106 17705 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.705653441s)
I0817 12:49:26.529128 17705 api_server.go:71] duration metric: took 2.035732544s to wait for apiserver process to appear ...
I0817 12:49:26.529126 17705 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.705841661s)
I0817 12:49:26.529138 17705 api_server.go:87] waiting for apiserver healthz status ...
I0817 12:49:26.529148 17705 start.go:809] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
I0817 12:49:26.529150 17705 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0817 12:49:26.628480 17705 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.339467645s)
I0817 12:49:31.532792 17705 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0817 12:49:32.033773 17705 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0817 12:49:37.034660 17705 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0817 12:49:37.533819 17705 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0817 12:49:42.535103 17705 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0817 12:49:43.033887 17705 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0817 12:49:48.035306 17705 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0817 12:49:48.533072 17705 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0817 12:49:53.534111 17705 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0817 12:49:53.534237 17705 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
W0817 12:49:54.842904 17705 out.go:239] ❗ Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 192.168.49.2:8443: i/o timeout]
I0817 12:49:54.846742 17705 out.go:177] 🌟 Enabled addons: storage-provisioner
I0817 12:49:54.848310 17705 addons.go:414] enableAddons completed in 30.354740182s
I0817 12:49:58.535299 17705 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0817 12:49:59.033077 17705 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0817 12:50:04.034011 17705 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0817 12:50:04.533702 17705 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0817 12:50:09.534827 17705 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0817 12:50:10.033884 17705 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0817 12:50:15.034496 17705 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0817 12:50:15.532932 17705 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0817 12:50:20.533838 17705 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0817 12:50:20.533896 17705 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0817 12:50:25.534943 17705 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0817 12:50:26.033698 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0817 12:50:26.104551 17705 logs.go:274] 1 containers: [16de0fe9f580]
I0817 12:50:26.104641 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0817 12:50:26.179798 17705 logs.go:274] 1 containers: [8d4746db3da5]
I0817 12:50:26.179902 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0817 12:50:26.241393 17705 logs.go:274] 2 containers: [2e0fa136b09d 8609e3ef02fd]
I0817 12:50:26.241486 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0817 12:50:26.296016 17705 logs.go:274] 1 containers: [70b867fbed09]
I0817 12:50:26.296118 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0817 12:50:26.355955 17705 logs.go:274] 1 containers: [ef82ea44260e]
I0817 12:50:26.356046 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0817 12:50:26.414935 17705 logs.go:274] 0 containers: []
W0817 12:50:26.414950 17705 logs.go:276] No container was found matching "kubernetes-dashboard"
I0817 12:50:26.415034 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0817 12:50:26.471284 17705 logs.go:274] 1 containers: [0e7b7ae46342]
I0817 12:50:26.471377 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0817 12:50:26.529576 17705 logs.go:274] 1 containers: [b3c183ba6b8f]
I0817 12:50:26.529629 17705 logs.go:123] Gathering logs for describe nodes ...
I0817 12:50:26.529644 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0817 12:50:26.676626 17705 logs.go:123] Gathering logs for kube-apiserver [16de0fe9f580] ...
I0817 12:50:26.676644 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16de0fe9f580"
I0817 12:50:26.769536 17705 logs.go:123] Gathering logs for etcd [8d4746db3da5] ...
I0817 12:50:26.769552 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4746db3da5"
I0817 12:50:26.847441 17705 logs.go:123] Gathering logs for coredns [2e0fa136b09d] ...
I0817 12:50:26.847457 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0fa136b09d"
I0817 12:50:26.905020 17705 logs.go:123] Gathering logs for coredns [8609e3ef02fd] ...
I0817 12:50:26.905041 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8609e3ef02fd"
I0817 12:50:26.962873 17705 logs.go:123] Gathering logs for kube-scheduler [70b867fbed09] ...
I0817 12:50:26.962889 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b867fbed09"
I0817 12:50:27.046257 17705 logs.go:123] Gathering logs for kube-proxy [ef82ea44260e] ...
I0817 12:50:27.046284 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef82ea44260e"
I0817 12:50:27.121801 17705 logs.go:123] Gathering logs for kubelet ...
I0817 12:50:27.121821 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0817 12:50:27.242357 17705 logs.go:123] Gathering logs for dmesg ...
I0817 12:50:27.242374 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0817 12:50:27.266908 17705 logs.go:123] Gathering logs for storage-provisioner [0e7b7ae46342] ...
I0817 12:50:27.266928 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7b7ae46342"
I0817 12:50:27.328793 17705 logs.go:123] Gathering logs for kube-controller-manager [b3c183ba6b8f] ...
I0817 12:50:27.328809 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c183ba6b8f"
I0817 12:50:27.427425 17705 logs.go:123] Gathering logs for Docker ...
I0817 12:50:27.427443 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0817 12:50:27.459010 17705 logs.go:123] Gathering logs for container status ...
I0817 12:50:27.459031 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0817 12:50:30.010516 17705 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0817 12:50:35.011155 17705 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0817 12:50:35.033462 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0817 12:50:35.090385 17705 logs.go:274] 1 containers: [16de0fe9f580]
I0817 12:50:35.090473 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0817 12:50:35.158914 17705 logs.go:274] 1 containers: [8d4746db3da5]
I0817 12:50:35.159004 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0817 12:50:35.212814 17705 logs.go:274] 2 containers: [2e0fa136b09d 8609e3ef02fd]
I0817 12:50:35.212901 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0817 12:50:35.269473 17705 logs.go:274] 1 containers: [70b867fbed09]
I0817 12:50:35.269566 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0817 12:50:35.325299 17705 logs.go:274] 1 containers: [ef82ea44260e]
I0817 12:50:35.325395 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0817 12:50:35.377482 17705 logs.go:274] 0 containers: []
W0817 12:50:35.377500 17705 logs.go:276] No container was found matching "kubernetes-dashboard"
I0817 12:50:35.377574 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0817 12:50:35.444355 17705 logs.go:274] 1 containers: [0e7b7ae46342]
I0817 12:50:35.444456 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0817 12:50:35.520466 17705 logs.go:274] 1 containers: [b3c183ba6b8f]
I0817 12:50:35.520487 17705 logs.go:123] Gathering logs for kubelet ...
I0817 12:50:35.520497 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0817 12:50:35.627841 17705 logs.go:123] Gathering logs for kube-apiserver [16de0fe9f580] ...
I0817 12:50:35.627869 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16de0fe9f580"
I0817 12:50:35.698825 17705 logs.go:123] Gathering logs for etcd [8d4746db3da5] ...
I0817 12:50:35.698841 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4746db3da5"
I0817 12:50:35.799737 17705 logs.go:123] Gathering logs for kube-controller-manager [b3c183ba6b8f] ...
I0817 12:50:35.799754 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c183ba6b8f"
I0817 12:50:35.883027 17705 logs.go:123] Gathering logs for container status ...
I0817 12:50:35.883049 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0817 12:50:35.937016 17705 logs.go:123] Gathering logs for dmesg ...
I0817 12:50:35.937035 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0817 12:50:35.956809 17705 logs.go:123] Gathering logs for describe nodes ...
I0817 12:50:35.956831 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0817 12:50:36.104416 17705 logs.go:123] Gathering logs for coredns [2e0fa136b09d] ...
I0817 12:50:36.104439 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0fa136b09d"
I0817 12:50:36.191024 17705 logs.go:123] Gathering logs for coredns [8609e3ef02fd] ...
I0817 12:50:36.191048 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8609e3ef02fd"
I0817 12:50:36.300748 17705 logs.go:123] Gathering logs for kube-scheduler [70b867fbed09] ...
I0817 12:50:36.300771 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b867fbed09"
I0817 12:50:36.396934 17705 logs.go:123] Gathering logs for kube-proxy [ef82ea44260e] ...
I0817 12:50:36.396952 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef82ea44260e"
I0817 12:50:36.470240 17705 logs.go:123] Gathering logs for storage-provisioner [0e7b7ae46342] ...
I0817 12:50:36.470257 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7b7ae46342"
I0817 12:50:36.532138 17705 logs.go:123] Gathering logs for Docker ...
I0817 12:50:36.532159 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0817 12:50:39.066703 17705 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0817 12:50:44.068174 17705 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0817 12:50:44.534187 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0817 12:50:44.614061 17705 logs.go:274] 1 containers: [16de0fe9f580]
I0817 12:50:44.614151 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0817 12:50:44.674561 17705 logs.go:274] 1 containers: [8d4746db3da5]
I0817 12:50:44.674657 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0817 12:50:44.729295 17705 logs.go:274] 2 containers: [2e0fa136b09d 8609e3ef02fd]
I0817 12:50:44.729391 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0817 12:50:44.786258 17705 logs.go:274] 1 containers: [70b867fbed09]
I0817 12:50:44.786351 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0817 12:50:44.846930 17705 logs.go:274] 1 containers: [ef82ea44260e]
I0817 12:50:44.847030 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0817 12:50:44.898992 17705 logs.go:274] 0 containers: []
W0817 12:50:44.899009 17705 logs.go:276] No container was found matching "kubernetes-dashboard"
I0817 12:50:44.899070 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0817 12:50:44.963730 17705 logs.go:274] 1 containers: [0e7b7ae46342]
I0817 12:50:44.963825 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0817 12:50:45.019564 17705 logs.go:274] 1 containers: [b3c183ba6b8f]
I0817 12:50:45.019593 17705 logs.go:123] Gathering logs for dmesg ...
I0817 12:50:45.019609 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0817 12:50:45.040281 17705 logs.go:123] Gathering logs for describe nodes ...
I0817 12:50:45.040302 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0817 12:50:45.214989 17705 logs.go:123] Gathering logs for etcd [8d4746db3da5] ...
I0817 12:50:45.215011 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4746db3da5"
I0817 12:50:45.297535 17705 logs.go:123] Gathering logs for kube-scheduler [70b867fbed09] ...
I0817 12:50:45.297568 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b867fbed09"
I0817 12:50:45.390108 17705 logs.go:123] Gathering logs for kube-proxy [ef82ea44260e] ...
I0817 12:50:45.390127 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef82ea44260e"
I0817 12:50:45.458935 17705 logs.go:123] Gathering logs for container status ...
I0817 12:50:45.458965 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0817 12:50:45.537430 17705 logs.go:123] Gathering logs for Docker ...
I0817 12:50:45.537448 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0817 12:50:45.574251 17705 logs.go:123] Gathering logs for kubelet ...
I0817 12:50:45.574270 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0817 12:50:45.678186 17705 logs.go:123] Gathering logs for kube-apiserver [16de0fe9f580] ...
I0817 12:50:45.678208 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16de0fe9f580"
I0817 12:50:45.759123 17705 logs.go:123] Gathering logs for coredns [2e0fa136b09d] ...
I0817 12:50:45.759138 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0fa136b09d"
I0817 12:50:45.820980 17705 logs.go:123] Gathering logs for coredns [8609e3ef02fd] ...
I0817 12:50:45.821004 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8609e3ef02fd"
I0817 12:50:45.883237 17705 logs.go:123] Gathering logs for storage-provisioner [0e7b7ae46342] ...
I0817 12:50:45.883257 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7b7ae46342"
I0817 12:50:45.944233 17705 logs.go:123] Gathering logs for kube-controller-manager [b3c183ba6b8f] ...
I0817 12:50:45.944250 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c183ba6b8f"
I0817 12:50:48.540197 17705 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0817 12:50:53.541641 17705 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0817 12:50:54.033466 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0817 12:50:54.116707 17705 logs.go:274] 1 containers: [16de0fe9f580]
I0817 12:50:54.116797 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0817 12:50:54.168081 17705 logs.go:274] 1 containers: [8d4746db3da5]
I0817 12:50:54.168150 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0817 12:50:54.230948 17705 logs.go:274] 2 containers: [2e0fa136b09d 8609e3ef02fd]
I0817 12:50:54.231042 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0817 12:50:54.289633 17705 logs.go:274] 1 containers: [70b867fbed09]
I0817 12:50:54.289713 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0817 12:50:54.348656 17705 logs.go:274] 1 containers: [ef82ea44260e]
I0817 12:50:54.348738 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0817 12:50:54.405688 17705 logs.go:274] 0 containers: []
W0817 12:50:54.405703 17705 logs.go:276] No container was found matching "kubernetes-dashboard"
I0817 12:50:54.405782 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0817 12:50:54.467700 17705 logs.go:274] 1 containers: [0e7b7ae46342]
I0817 12:50:54.467808 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0817 12:50:54.525515 17705 logs.go:274] 1 containers: [b3c183ba6b8f]
I0817 12:50:54.525544 17705 logs.go:123] Gathering logs for coredns [8609e3ef02fd] ...
I0817 12:50:54.525556 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8609e3ef02fd"
I0817 12:50:54.586662 17705 logs.go:123] Gathering logs for kube-scheduler [70b867fbed09] ...
I0817 12:50:54.586682 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b867fbed09"
I0817 12:50:54.672222 17705 logs.go:123] Gathering logs for kube-proxy [ef82ea44260e] ...
I0817 12:50:54.672240 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef82ea44260e"
I0817 12:50:54.750077 17705 logs.go:123] Gathering logs for kube-controller-manager [b3c183ba6b8f] ...
I0817 12:50:54.750094 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c183ba6b8f"
I0817 12:50:54.835246 17705 logs.go:123] Gathering logs for Docker ...
I0817 12:50:54.835263 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0817 12:50:54.870984 17705 logs.go:123] Gathering logs for coredns [2e0fa136b09d] ...
I0817 12:50:54.871001 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0fa136b09d"
I0817 12:50:54.929802 17705 logs.go:123] Gathering logs for dmesg ...
I0817 12:50:54.929822 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0817 12:50:54.951068 17705 logs.go:123] Gathering logs for describe nodes ...
I0817 12:50:54.951084 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0817 12:50:55.093525 17705 logs.go:123] Gathering logs for kube-apiserver [16de0fe9f580] ...
I0817 12:50:55.093542 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16de0fe9f580"
I0817 12:50:55.175545 17705 logs.go:123] Gathering logs for etcd [8d4746db3da5] ...
I0817 12:50:55.175567 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4746db3da5" I0817 12:50:55.255150 17705 logs.go:123] Gathering logs for storage-provisioner [0e7b7ae46342] ... I0817 12:50:55.255168 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7b7ae46342" I0817 12:50:55.320124 17705 logs.go:123] Gathering logs for container status ... I0817 12:50:55.320140 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0817 12:50:55.370717 17705 logs.go:123] Gathering logs for kubelet ... I0817 12:50:55.370735 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0817 12:50:57.989406 17705 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0817 12:51:02.990849 17705 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) I0817 12:51:03.033357 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}} I0817 12:51:03.101412 17705 logs.go:274] 1 containers: [16de0fe9f580] I0817 12:51:03.101509 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}} I0817 12:51:03.169956 17705 logs.go:274] 1 containers: [8d4746db3da5] I0817 12:51:03.170047 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}} I0817 12:51:03.233412 17705 logs.go:274] 2 containers: [2e0fa136b09d 8609e3ef02fd] I0817 12:51:03.233505 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}} I0817 12:51:03.293105 17705 logs.go:274] 1 containers: [70b867fbed09] I0817 12:51:03.293191 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}} I0817 12:51:03.347518 17705 logs.go:274] 1 containers: [ef82ea44260e] I0817 12:51:03.347617 17705 ssh_runner.go:195] Run: docker ps -a 
--filter=name=k8s_kubernetes-dashboard --format={{.ID}} I0817 12:51:03.402754 17705 logs.go:274] 0 containers: [] W0817 12:51:03.402768 17705 logs.go:276] No container was found matching "kubernetes-dashboard" I0817 12:51:03.402839 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}} I0817 12:51:03.464404 17705 logs.go:274] 1 containers: [0e7b7ae46342] I0817 12:51:03.464483 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}} I0817 12:51:03.526116 17705 logs.go:274] 1 containers: [b3c183ba6b8f] I0817 12:51:03.526141 17705 logs.go:123] Gathering logs for coredns [2e0fa136b09d] ... I0817 12:51:03.526157 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0fa136b09d" I0817 12:51:03.597245 17705 logs.go:123] Gathering logs for kube-scheduler [70b867fbed09] ... I0817 12:51:03.597266 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b867fbed09" I0817 12:51:03.684950 17705 logs.go:123] Gathering logs for kube-proxy [ef82ea44260e] ... I0817 12:51:03.684968 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef82ea44260e" I0817 12:51:03.746448 17705 logs.go:123] Gathering logs for kube-controller-manager [b3c183ba6b8f] ... I0817 12:51:03.746468 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c183ba6b8f" I0817 12:51:03.831810 17705 logs.go:123] Gathering logs for Docker ... I0817 12:51:03.831827 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400" I0817 12:51:03.866267 17705 logs.go:123] Gathering logs for container status ... I0817 12:51:03.866283 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0817 12:51:03.920977 17705 logs.go:123] Gathering logs for kubelet ... 
I0817 12:51:03.920993 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0817 12:51:04.029408 17705 logs.go:123] Gathering logs for dmesg ... I0817 12:51:04.029426 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0817 12:51:04.052307 17705 logs.go:123] Gathering logs for describe nodes ... I0817 12:51:04.052327 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0817 12:51:04.186989 17705 logs.go:123] Gathering logs for kube-apiserver [16de0fe9f580] ... I0817 12:51:04.187006 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16de0fe9f580" I0817 12:51:04.260682 17705 logs.go:123] Gathering logs for etcd [8d4746db3da5] ... I0817 12:51:04.260698 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4746db3da5" I0817 12:51:04.341258 17705 logs.go:123] Gathering logs for coredns [8609e3ef02fd] ... I0817 12:51:04.341274 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8609e3ef02fd" I0817 12:51:04.397545 17705 logs.go:123] Gathering logs for storage-provisioner [0e7b7ae46342] ... I0817 12:51:04.397566 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7b7ae46342" I0817 12:51:06.962364 17705 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... 
I0817 12:51:11.963733   17705 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0817 12:51:12.033258   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0817 12:51:12.110623   17705 logs.go:274] 1 containers: [16de0fe9f580]
I0817 12:51:12.110710   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0817 12:51:12.166675   17705 logs.go:274] 1 containers: [8d4746db3da5]
I0817 12:51:12.166761   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0817 12:51:12.221471   17705 logs.go:274] 2 containers: [2e0fa136b09d 8609e3ef02fd]
I0817 12:51:12.221559   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0817 12:51:12.276108   17705 logs.go:274] 1 containers: [70b867fbed09]
I0817 12:51:12.276192   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0817 12:51:12.329815   17705 logs.go:274] 1 containers: [ef82ea44260e]
I0817 12:51:12.329898   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0817 12:51:12.385134   17705 logs.go:274] 0 containers: []
W0817 12:51:12.385147   17705 logs.go:276] No container was found matching "kubernetes-dashboard"
I0817 12:51:12.385226   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0817 12:51:12.440065   17705 logs.go:274] 1 containers: [0e7b7ae46342]
I0817 12:51:12.440157   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0817 12:51:12.506101   17705 logs.go:274] 1 containers: [b3c183ba6b8f]
I0817 12:51:12.506122   17705 logs.go:123] Gathering logs for dmesg ...
I0817 12:51:12.506149   17705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0817 12:51:12.527163   17705 logs.go:123] Gathering logs for kube-apiserver [16de0fe9f580] ...
I0817 12:51:12.527180   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16de0fe9f580"
I0817 12:51:12.606815   17705 logs.go:123] Gathering logs for coredns [8609e3ef02fd] ...
I0817 12:51:12.606835   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8609e3ef02fd"
I0817 12:51:12.664630   17705 logs.go:123] Gathering logs for kube-proxy [ef82ea44260e] ...
I0817 12:51:12.664650   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef82ea44260e"
I0817 12:51:12.730005   17705 logs.go:123] Gathering logs for kube-controller-manager [b3c183ba6b8f] ...
I0817 12:51:12.730021   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c183ba6b8f"
I0817 12:51:12.816846   17705 logs.go:123] Gathering logs for container status ...
I0817 12:51:12.816867   17705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0817 12:51:12.903678   17705 logs.go:123] Gathering logs for kubelet ...
I0817 12:51:12.903702   17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0817 12:51:13.027533   17705 logs.go:123] Gathering logs for describe nodes ...
I0817 12:51:13.027553   17705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0817 12:51:13.173095   17705 logs.go:123] Gathering logs for etcd [8d4746db3da5] ...
I0817 12:51:13.173113   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4746db3da5"
I0817 12:51:13.258670   17705 logs.go:123] Gathering logs for coredns [2e0fa136b09d] ...
I0817 12:51:13.258687   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0fa136b09d"
I0817 12:51:13.322406   17705 logs.go:123] Gathering logs for kube-scheduler [70b867fbed09] ...
I0817 12:51:13.322425   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b867fbed09"
I0817 12:51:13.408507   17705 logs.go:123] Gathering logs for storage-provisioner [0e7b7ae46342] ...
I0817 12:51:13.408528   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7b7ae46342"
I0817 12:51:13.475081   17705 logs.go:123] Gathering logs for Docker ...
I0817 12:51:13.475103   17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0817 12:51:16.012680   17705 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0817 12:51:21.013152   17705 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0817 12:51:21.033701   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0817 12:51:21.113773   17705 logs.go:274] 1 containers: [16de0fe9f580]
I0817 12:51:21.113872   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0817 12:51:21.165308   17705 logs.go:274] 1 containers: [8d4746db3da5]
I0817 12:51:21.165403   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0817 12:51:21.219248   17705 logs.go:274] 2 containers: [2e0fa136b09d 8609e3ef02fd]
I0817 12:51:21.219333   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0817 12:51:21.270846   17705 logs.go:274] 1 containers: [70b867fbed09]
I0817 12:51:21.270937   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0817 12:51:21.323342   17705 logs.go:274] 1 containers: [ef82ea44260e]
I0817 12:51:21.323437   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0817 12:51:21.377530   17705 logs.go:274] 0 containers: []
W0817 12:51:21.377545   17705 logs.go:276] No container was found matching "kubernetes-dashboard"
I0817 12:51:21.377629   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0817 12:51:21.432211   17705 logs.go:274] 1 containers: [0e7b7ae46342]
I0817 12:51:21.432300   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0817 12:51:21.486376   17705 logs.go:274] 1 containers: [b3c183ba6b8f]
I0817 12:51:21.486399   17705 logs.go:123] Gathering logs for kube-scheduler [70b867fbed09] ...
I0817 12:51:21.486413   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b867fbed09"
I0817 12:51:21.590393   17705 logs.go:123] Gathering logs for container status ...
I0817 12:51:21.590414   17705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0817 12:51:21.639350   17705 logs.go:123] Gathering logs for kubelet ...
I0817 12:51:21.639367   17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0817 12:51:21.743230   17705 logs.go:123] Gathering logs for dmesg ...
I0817 12:51:21.743248   17705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0817 12:51:21.766465   17705 logs.go:123] Gathering logs for kube-apiserver [16de0fe9f580] ...
I0817 12:51:21.766484   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16de0fe9f580"
I0817 12:51:21.852792   17705 logs.go:123] Gathering logs for etcd [8d4746db3da5] ...
I0817 12:51:21.852808   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4746db3da5"
I0817 12:51:21.930850   17705 logs.go:123] Gathering logs for storage-provisioner [0e7b7ae46342] ...
I0817 12:51:21.930866   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7b7ae46342"
I0817 12:51:21.990601   17705 logs.go:123] Gathering logs for kube-controller-manager [b3c183ba6b8f] ...
I0817 12:51:21.990618   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c183ba6b8f"
I0817 12:51:22.081637   17705 logs.go:123] Gathering logs for Docker ...
I0817 12:51:22.081654   17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0817 12:51:22.116553   17705 logs.go:123] Gathering logs for describe nodes ...
I0817 12:51:22.116568   17705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0817 12:51:22.255151   17705 logs.go:123] Gathering logs for coredns [2e0fa136b09d] ...
I0817 12:51:22.255169   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0fa136b09d"
I0817 12:51:22.317817   17705 logs.go:123] Gathering logs for coredns [8609e3ef02fd] ...
I0817 12:51:22.317832   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8609e3ef02fd"
I0817 12:51:22.373940   17705 logs.go:123] Gathering logs for kube-proxy [ef82ea44260e] ...
I0817 12:51:22.373959   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef82ea44260e"
I0817 12:51:24.936420   17705 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0817 12:51:29.937109   17705 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0817 12:51:30.033769   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0817 12:51:30.109318   17705 logs.go:274] 1 containers: [16de0fe9f580]
I0817 12:51:30.109407   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0817 12:51:30.169724   17705 logs.go:274] 1 containers: [8d4746db3da5]
I0817 12:51:30.169810   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0817 12:51:30.226065   17705 logs.go:274] 2 containers: [2e0fa136b09d 8609e3ef02fd]
I0817 12:51:30.226155   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0817 12:51:30.285323   17705 logs.go:274] 1 containers: [70b867fbed09]
I0817 12:51:30.285417   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0817 12:51:30.340116   17705 logs.go:274] 1 containers: [ef82ea44260e]
I0817 12:51:30.340217   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0817 12:51:30.392671   17705 logs.go:274] 0 containers: []
W0817 12:51:30.392684   17705 logs.go:276] No container was found matching "kubernetes-dashboard"
I0817 12:51:30.392754   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0817 12:51:30.475011   17705 logs.go:274] 1 containers: [0e7b7ae46342]
I0817 12:51:30.475103   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0817 12:51:30.549071   17705 logs.go:274] 1 containers: [b3c183ba6b8f]
I0817 12:51:30.549098   17705 logs.go:123] Gathering logs for coredns [2e0fa136b09d] ...
I0817 12:51:30.549111   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0fa136b09d"
I0817 12:51:30.623914   17705 logs.go:123] Gathering logs for kube-scheduler [70b867fbed09] ...
I0817 12:51:30.623931   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b867fbed09"
I0817 12:51:30.728855   17705 logs.go:123] Gathering logs for kube-proxy [ef82ea44260e] ...
I0817 12:51:30.728872   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef82ea44260e"
I0817 12:51:30.801609   17705 logs.go:123] Gathering logs for storage-provisioner [0e7b7ae46342] ...
I0817 12:51:30.801625   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7b7ae46342"
I0817 12:51:30.865790   17705 logs.go:123] Gathering logs for Docker ...
I0817 12:51:30.865807   17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0817 12:51:30.898720   17705 logs.go:123] Gathering logs for coredns [8609e3ef02fd] ...
I0817 12:51:30.898735   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8609e3ef02fd"
I0817 12:51:30.962370   17705 logs.go:123] Gathering logs for kube-controller-manager [b3c183ba6b8f] ...
I0817 12:51:30.962390   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c183ba6b8f"
I0817 12:51:31.049095   17705 logs.go:123] Gathering logs for container status ...
I0817 12:51:31.049111   17705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0817 12:51:31.125227   17705 logs.go:123] Gathering logs for kubelet ...
I0817 12:51:31.125244   17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0817 12:51:31.237956   17705 logs.go:123] Gathering logs for dmesg ...
I0817 12:51:31.237978   17705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0817 12:51:31.258485   17705 logs.go:123] Gathering logs for describe nodes ...
I0817 12:51:31.258504   17705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0817 12:51:31.389325   17705 logs.go:123] Gathering logs for kube-apiserver [16de0fe9f580] ...
I0817 12:51:31.389340   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16de0fe9f580"
I0817 12:51:31.457053   17705 logs.go:123] Gathering logs for etcd [8d4746db3da5] ...
I0817 12:51:31.457071   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4746db3da5"
I0817 12:51:34.039147   17705 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0817 12:51:39.040465   17705 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0817 12:51:39.533122   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0817 12:51:39.585485   17705 logs.go:274] 1 containers: [16de0fe9f580]
I0817 12:51:39.585584   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0817 12:51:39.647384   17705 logs.go:274] 1 containers: [8d4746db3da5]
I0817 12:51:39.647476   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0817 12:51:39.710048   17705 logs.go:274] 2 containers: [2e0fa136b09d 8609e3ef02fd]
I0817 12:51:39.710147   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0817 12:51:39.767679   17705 logs.go:274] 1 containers: [70b867fbed09]
I0817 12:51:39.767763   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0817 12:51:39.822336   17705 logs.go:274] 1 containers: [ef82ea44260e]
I0817 12:51:39.822435   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0817 12:51:39.892876   17705 logs.go:274] 0 containers: []
W0817 12:51:39.892893   17705 logs.go:276] No container was found matching "kubernetes-dashboard"
I0817 12:51:39.892957   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0817 12:51:39.954614   17705 logs.go:274] 1 containers: [0e7b7ae46342]
I0817 12:51:39.954697   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0817 12:51:40.017565   17705 logs.go:274] 1 containers: [b3c183ba6b8f]
I0817 12:51:40.017593   17705 logs.go:123] Gathering logs for kubelet ...
I0817 12:51:40.017607   17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0817 12:51:40.122773   17705 logs.go:123] Gathering logs for dmesg ...
I0817 12:51:40.122789   17705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0817 12:51:40.143844   17705 logs.go:123] Gathering logs for describe nodes ...
I0817 12:51:40.143882   17705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0817 12:51:40.309996   17705 logs.go:123] Gathering logs for coredns [8609e3ef02fd] ...
I0817 12:51:40.310012   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8609e3ef02fd"
I0817 12:51:40.370469   17705 logs.go:123] Gathering logs for kube-controller-manager [b3c183ba6b8f] ...
I0817 12:51:40.370488   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c183ba6b8f"
I0817 12:51:40.470457   17705 logs.go:123] Gathering logs for Docker ...
I0817 12:51:40.470480   17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0817 12:51:40.521179   17705 logs.go:123] Gathering logs for kube-apiserver [16de0fe9f580] ...
I0817 12:51:40.521195   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16de0fe9f580"
I0817 12:51:40.600028   17705 logs.go:123] Gathering logs for etcd [8d4746db3da5] ...
I0817 12:51:40.600044   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4746db3da5"
I0817 12:51:40.687287   17705 logs.go:123] Gathering logs for coredns [2e0fa136b09d] ...
I0817 12:51:40.687304   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0fa136b09d"
I0817 12:51:40.753199   17705 logs.go:123] Gathering logs for kube-scheduler [70b867fbed09] ...
I0817 12:51:40.753215   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b867fbed09"
I0817 12:51:40.841212   17705 logs.go:123] Gathering logs for kube-proxy [ef82ea44260e] ...
I0817 12:51:40.841229   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef82ea44260e"
I0817 12:51:40.904621   17705 logs.go:123] Gathering logs for storage-provisioner [0e7b7ae46342] ...
I0817 12:51:40.904639   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7b7ae46342"
I0817 12:51:40.963957   17705 logs.go:123] Gathering logs for container status ...
I0817 12:51:40.963973   17705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0817 12:51:43.521697   17705 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0817 12:51:48.522110   17705 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0817 12:51:48.533694   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0817 12:51:48.610722   17705 logs.go:274] 1 containers: [16de0fe9f580]
I0817 12:51:48.610797   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0817 12:51:48.676628   17705 logs.go:274] 1 containers: [8d4746db3da5]
I0817 12:51:48.676722   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0817 12:51:48.735081   17705 logs.go:274] 2 containers: [2e0fa136b09d 8609e3ef02fd]
I0817 12:51:48.735173   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0817 12:51:48.788182   17705 logs.go:274] 1 containers: [70b867fbed09]
I0817 12:51:48.788275   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0817 12:51:48.845554   17705 logs.go:274] 1 containers: [ef82ea44260e]
I0817 12:51:48.845647   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0817 12:51:48.898956   17705 logs.go:274] 0 containers: []
W0817 12:51:48.898978   17705 logs.go:276] No container was found matching "kubernetes-dashboard"
I0817 12:51:48.899052   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0817 12:51:48.958826   17705 logs.go:274] 1 containers: [0e7b7ae46342]
I0817 12:51:48.958911   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0817 12:51:49.015134   17705 logs.go:274] 1 containers: [b3c183ba6b8f]
I0817 12:51:49.015163   17705 logs.go:123] Gathering logs for kubelet ...
I0817 12:51:49.015175   17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0817 12:51:49.132274   17705 logs.go:123] Gathering logs for dmesg ...
I0817 12:51:49.132292   17705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0817 12:51:49.151975   17705 logs.go:123] Gathering logs for kube-apiserver [16de0fe9f580] ...
I0817 12:51:49.151992   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16de0fe9f580"
I0817 12:51:49.225660   17705 logs.go:123] Gathering logs for coredns [2e0fa136b09d] ...
I0817 12:51:49.225675   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0fa136b09d"
I0817 12:51:49.284320   17705 logs.go:123] Gathering logs for kube-controller-manager [b3c183ba6b8f] ...
I0817 12:51:49.284343   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c183ba6b8f"
I0817 12:51:49.375705   17705 logs.go:123] Gathering logs for Docker ...
I0817 12:51:49.375721   17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0817 12:51:49.412537   17705 logs.go:123] Gathering logs for container status ...
I0817 12:51:49.412554   17705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0817 12:51:49.465796   17705 logs.go:123] Gathering logs for describe nodes ...
I0817 12:51:49.465816   17705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0817 12:51:49.626393   17705 logs.go:123] Gathering logs for etcd [8d4746db3da5] ...
I0817 12:51:49.626414   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4746db3da5"
I0817 12:51:49.724006   17705 logs.go:123] Gathering logs for coredns [8609e3ef02fd] ...
I0817 12:51:49.724025   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8609e3ef02fd"
I0817 12:51:49.789761   17705 logs.go:123] Gathering logs for kube-scheduler [70b867fbed09] ...
I0817 12:51:49.789781   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b867fbed09"
I0817 12:51:49.882765   17705 logs.go:123] Gathering logs for kube-proxy [ef82ea44260e] ...
I0817 12:51:49.882785   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef82ea44260e"
I0817 12:51:49.946728   17705 logs.go:123] Gathering logs for storage-provisioner [0e7b7ae46342] ...
I0817 12:51:49.946744   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7b7ae46342"
I0817 12:51:52.523196   17705 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0817 12:51:57.523580   17705 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0817 12:51:57.534187   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0817 12:51:57.607673   17705 logs.go:274] 1 containers: [16de0fe9f580]
I0817 12:51:57.607800   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0817 12:51:57.667764   17705 logs.go:274] 1 containers: [8d4746db3da5]
I0817 12:51:57.667847   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0817 12:51:57.725742   17705 logs.go:274] 2 containers: [2e0fa136b09d 8609e3ef02fd]
I0817 12:51:57.725832   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0817 12:51:57.781049   17705 logs.go:274] 1 containers: [70b867fbed09]
I0817 12:51:57.781145   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0817 12:51:57.838420   17705 logs.go:274] 1 containers: [ef82ea44260e]
I0817 12:51:57.838517   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0817 12:51:57.891931   17705 logs.go:274] 0 containers: []
W0817 12:51:57.891952   17705 logs.go:276] No container was found matching "kubernetes-dashboard"
I0817 12:51:57.892031   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0817 12:51:57.962026   17705 logs.go:274] 1 containers: [0e7b7ae46342]
I0817 12:51:57.962122   17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0817 12:51:58.022889   17705 logs.go:274] 1 containers: [b3c183ba6b8f]
I0817 12:51:58.022915   17705 logs.go:123] Gathering logs for describe nodes ...
I0817 12:51:58.022930   17705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0817 12:51:58.188897   17705 logs.go:123] Gathering logs for etcd [8d4746db3da5] ...
I0817 12:51:58.188912   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4746db3da5"
I0817 12:51:58.273635   17705 logs.go:123] Gathering logs for coredns [2e0fa136b09d] ...
I0817 12:51:58.273654   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0fa136b09d"
I0817 12:51:58.335384   17705 logs.go:123] Gathering logs for kube-proxy [ef82ea44260e] ...
I0817 12:51:58.335407   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef82ea44260e"
I0817 12:51:58.395421   17705 logs.go:123] Gathering logs for kube-controller-manager [b3c183ba6b8f] ...
I0817 12:51:58.395438   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c183ba6b8f"
I0817 12:51:58.471359   17705 logs.go:123] Gathering logs for dmesg ...
I0817 12:51:58.471377   17705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0817 12:51:58.492988   17705 logs.go:123] Gathering logs for kube-apiserver [16de0fe9f580] ...
I0817 12:51:58.493007   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16de0fe9f580"
I0817 12:51:58.570664   17705 logs.go:123] Gathering logs for coredns [8609e3ef02fd] ...
I0817 12:51:58.570680   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8609e3ef02fd"
I0817 12:51:58.638789   17705 logs.go:123] Gathering logs for kube-scheduler [70b867fbed09] ...
I0817 12:51:58.638811   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b867fbed09"
I0817 12:51:58.739354   17705 logs.go:123] Gathering logs for storage-provisioner [0e7b7ae46342] ...
I0817 12:51:58.739377   17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7b7ae46342"
I0817 12:51:58.798599   17705 logs.go:123] Gathering logs for Docker ...
I0817 12:51:58.798617   17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0817 12:51:58.834471   17705 logs.go:123] Gathering logs for container status ...
I0817 12:51:58.834489   17705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0817 12:51:58.890853   17705 logs.go:123] Gathering logs for kubelet ...
I0817 12:51:58.890869   17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0817 12:52:01.504032   17705 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0817 12:52:06.504606 17705 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0817 12:52:06.534189 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0817 12:52:06.606530 17705 logs.go:274] 1 containers: [16de0fe9f580]
I0817 12:52:06.606633 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0817 12:52:06.663259 17705 logs.go:274] 1 containers: [8d4746db3da5]
I0817 12:52:06.663360 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0817 12:52:06.726248 17705 logs.go:274] 2 containers: [2e0fa136b09d 8609e3ef02fd]
I0817 12:52:06.726340 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0817 12:52:06.787142 17705 logs.go:274] 1 containers: [70b867fbed09]
I0817 12:52:06.787240 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0817 12:52:06.855534 17705 logs.go:274] 1 containers: [ef82ea44260e]
I0817 12:52:06.855614 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0817 12:52:06.914778 17705 logs.go:274] 0 containers: []
W0817 12:52:06.914794 17705 logs.go:276] No container was found matching "kubernetes-dashboard"
I0817 12:52:06.914870 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0817 12:52:06.973060 17705 logs.go:274] 1 containers: [0e7b7ae46342]
I0817 12:52:06.973136 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0817 12:52:07.034069 17705 logs.go:274] 1 containers: [b3c183ba6b8f]
I0817 12:52:07.034098 17705 logs.go:123] Gathering logs for describe nodes ...
I0817 12:52:07.034114 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0817 12:52:07.190993 17705 logs.go:123] Gathering logs for dmesg ...
I0817 12:52:07.191009 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0817 12:52:07.212560 17705 logs.go:123] Gathering logs for kube-apiserver [16de0fe9f580] ...
I0817 12:52:07.212578 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16de0fe9f580"
I0817 12:52:07.293233 17705 logs.go:123] Gathering logs for etcd [8d4746db3da5] ...
I0817 12:52:07.293249 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4746db3da5"
I0817 12:52:07.415305 17705 logs.go:123] Gathering logs for coredns [2e0fa136b09d] ...
I0817 12:52:07.415327 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0fa136b09d"
I0817 12:52:07.477937 17705 logs.go:123] Gathering logs for coredns [8609e3ef02fd] ...
I0817 12:52:07.477952 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8609e3ef02fd"
I0817 12:52:07.535140 17705 logs.go:123] Gathering logs for kube-scheduler [70b867fbed09] ...
I0817 12:52:07.535161 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b867fbed09"
I0817 12:52:07.630414 17705 logs.go:123] Gathering logs for kube-proxy [ef82ea44260e] ...
I0817 12:52:07.630433 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef82ea44260e"
I0817 12:52:07.700328 17705 logs.go:123] Gathering logs for kubelet ...
I0817 12:52:07.700348 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0817 12:52:07.810126 17705 logs.go:123] Gathering logs for kube-controller-manager [b3c183ba6b8f] ...
I0817 12:52:07.810145 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c183ba6b8f"
I0817 12:52:07.899720 17705 logs.go:123] Gathering logs for Docker ...
I0817 12:52:07.899737 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0817 12:52:07.934181 17705 logs.go:123] Gathering logs for container status ...
I0817 12:52:07.934198 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0817 12:52:07.989283 17705 logs.go:123] Gathering logs for storage-provisioner [0e7b7ae46342] ...
I0817 12:52:07.989301 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7b7ae46342"
I0817 12:52:10.550223 17705 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0817 12:52:15.551352 17705 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0817 12:52:16.033284 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0817 12:52:16.105028 17705 logs.go:274] 1 containers: [16de0fe9f580]
I0817 12:52:16.105115 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0817 12:52:16.166436 17705 logs.go:274] 1 containers: [8d4746db3da5]
I0817 12:52:16.166528 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0817 12:52:16.219627 17705 logs.go:274] 2 containers: [2e0fa136b09d 8609e3ef02fd]
I0817 12:52:16.219706 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0817 12:52:16.311678 17705 logs.go:274] 1 containers: [70b867fbed09]
I0817 12:52:16.311764 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0817 12:52:16.371841 17705 logs.go:274] 1 containers: [ef82ea44260e]
I0817 12:52:16.371951 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0817 12:52:16.429965 17705 logs.go:274] 0 containers: []
W0817 12:52:16.429979 17705 logs.go:276] No container was found matching "kubernetes-dashboard"
I0817 12:52:16.430059 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0817 12:52:16.492127 17705 logs.go:274] 1 containers: [0e7b7ae46342]
I0817 12:52:16.492215 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0817 12:52:16.545411 17705 logs.go:274] 1 containers: [b3c183ba6b8f]
I0817 12:52:16.545433 17705 logs.go:123] Gathering logs for storage-provisioner [0e7b7ae46342] ...
I0817 12:52:16.545447 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7b7ae46342"
I0817 12:52:16.611702 17705 logs.go:123] Gathering logs for kube-controller-manager [b3c183ba6b8f] ...
I0817 12:52:16.611720 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c183ba6b8f"
I0817 12:52:16.712282 17705 logs.go:123] Gathering logs for container status ...
I0817 12:52:16.712299 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0817 12:52:16.777778 17705 logs.go:123] Gathering logs for kubelet ...
I0817 12:52:16.777795 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0817 12:52:16.890149 17705 logs.go:123] Gathering logs for coredns [2e0fa136b09d] ...
I0817 12:52:16.890174 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0fa136b09d"
I0817 12:52:16.955093 17705 logs.go:123] Gathering logs for coredns [8609e3ef02fd] ...
I0817 12:52:16.955114 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8609e3ef02fd"
I0817 12:52:17.012748 17705 logs.go:123] Gathering logs for kube-proxy [ef82ea44260e] ...
I0817 12:52:17.012764 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef82ea44260e"
I0817 12:52:17.087998 17705 logs.go:123] Gathering logs for kube-scheduler [70b867fbed09] ...
I0817 12:52:17.088015 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b867fbed09"
I0817 12:52:17.190179 17705 logs.go:123] Gathering logs for Docker ...
I0817 12:52:17.190196 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0817 12:52:17.234350 17705 logs.go:123] Gathering logs for dmesg ...
I0817 12:52:17.234372 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0817 12:52:17.260152 17705 logs.go:123] Gathering logs for describe nodes ...
I0817 12:52:17.260172 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0817 12:52:17.407215 17705 logs.go:123] Gathering logs for kube-apiserver [16de0fe9f580] ...
I0817 12:52:17.407231 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16de0fe9f580"
I0817 12:52:17.489111 17705 logs.go:123] Gathering logs for etcd [8d4746db3da5] ...
I0817 12:52:17.489133 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4746db3da5"
I0817 12:52:20.080148 17705 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0817 12:52:25.081394 17705 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0817 12:52:25.533141 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0817 12:52:25.603635 17705 logs.go:274] 1 containers: [16de0fe9f580]
I0817 12:52:25.603729 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0817 12:52:25.668441 17705 logs.go:274] 1 containers: [8d4746db3da5]
I0817 12:52:25.668532 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0817 12:52:25.725662 17705 logs.go:274] 2 containers: [2e0fa136b09d 8609e3ef02fd]
I0817 12:52:25.725748 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0817 12:52:25.779124 17705 logs.go:274] 1 containers: [70b867fbed09]
I0817 12:52:25.779220 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0817 12:52:25.839605 17705 logs.go:274] 1 containers: [ef82ea44260e]
I0817 12:52:25.839676 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0817 12:52:25.893336 17705 logs.go:274] 0 containers: []
W0817 12:52:25.893355 17705 logs.go:276] No container was found matching "kubernetes-dashboard"
I0817 12:52:25.893442 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0817 12:52:25.948792 17705 logs.go:274] 1 containers: [0e7b7ae46342]
I0817 12:52:25.948865 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0817 12:52:26.004887 17705 logs.go:274] 1 containers: [b3c183ba6b8f]
I0817 12:52:26.004915 17705 logs.go:123] Gathering logs for kubelet ...
I0817 12:52:26.004931 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0817 12:52:26.123178 17705 logs.go:123] Gathering logs for describe nodes ...
I0817 12:52:26.123199 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0817 12:52:26.295276 17705 logs.go:123] Gathering logs for etcd [8d4746db3da5] ...
I0817 12:52:26.295295 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4746db3da5"
I0817 12:52:26.450402 17705 logs.go:123] Gathering logs for coredns [2e0fa136b09d] ...
I0817 12:52:26.450429 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0fa136b09d"
I0817 12:52:26.515607 17705 logs.go:123] Gathering logs for coredns [8609e3ef02fd] ...
I0817 12:52:26.515630 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8609e3ef02fd"
I0817 12:52:26.574940 17705 logs.go:123] Gathering logs for kube-scheduler [70b867fbed09] ...
I0817 12:52:26.574963 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b867fbed09"
I0817 12:52:26.673603 17705 logs.go:123] Gathering logs for kube-controller-manager [b3c183ba6b8f] ...
I0817 12:52:26.673620 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c183ba6b8f"
I0817 12:52:26.755939 17705 logs.go:123] Gathering logs for dmesg ...
I0817 12:52:26.755959 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0817 12:52:26.784910 17705 logs.go:123] Gathering logs for kube-apiserver [16de0fe9f580] ...
I0817 12:52:26.784926 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16de0fe9f580"
I0817 12:52:26.861408 17705 logs.go:123] Gathering logs for kube-proxy [ef82ea44260e] ...
I0817 12:52:26.861423 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef82ea44260e"
I0817 12:52:26.925456 17705 logs.go:123] Gathering logs for storage-provisioner [0e7b7ae46342] ...
I0817 12:52:26.925473 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7b7ae46342"
I0817 12:52:26.983590 17705 logs.go:123] Gathering logs for Docker ...
I0817 12:52:26.983608 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0817 12:52:27.019536 17705 logs.go:123] Gathering logs for container status ...
I0817 12:52:27.019561 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0817 12:52:29.575803 17705 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0817 12:52:34.577291 17705 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0817 12:52:35.033232 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0817 12:52:35.124149 17705 logs.go:274] 1 containers: [16de0fe9f580]
I0817 12:52:35.124238 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0817 12:52:35.180731 17705 logs.go:274] 1 containers: [8d4746db3da5]
I0817 12:52:35.180828 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0817 12:52:35.257099 17705 logs.go:274] 2 containers: [2e0fa136b09d 8609e3ef02fd]
I0817 12:52:35.257196 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0817 12:52:35.315193 17705 logs.go:274] 1 containers: [70b867fbed09]
I0817 12:52:35.315285 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0817 12:52:35.370152 17705 logs.go:274] 1 containers: [ef82ea44260e]
I0817 12:52:35.370248 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0817 12:52:35.424720 17705 logs.go:274] 0 containers: []
W0817 12:52:35.424738 17705 logs.go:276] No container was found matching "kubernetes-dashboard"
I0817 12:52:35.424818 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0817 12:52:35.503602 17705 logs.go:274] 1 containers: [0e7b7ae46342]
I0817 12:52:35.503680 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0817 12:52:35.565805 17705 logs.go:274] 1 containers: [b3c183ba6b8f]
I0817 12:52:35.565833 17705 logs.go:123] Gathering logs for etcd [8d4746db3da5] ...
I0817 12:52:35.565848 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4746db3da5"
I0817 12:52:35.662190 17705 logs.go:123] Gathering logs for kube-scheduler [70b867fbed09] ...
I0817 12:52:35.662207 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b867fbed09"
I0817 12:52:35.755923 17705 logs.go:123] Gathering logs for kube-controller-manager [b3c183ba6b8f] ...
I0817 12:52:35.755941 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c183ba6b8f"
I0817 12:52:35.839545 17705 logs.go:123] Gathering logs for Docker ...
I0817 12:52:35.839563 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0817 12:52:35.875046 17705 logs.go:123] Gathering logs for describe nodes ...
I0817 12:52:35.875062 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0817 12:52:36.025548 17705 logs.go:123] Gathering logs for dmesg ...
I0817 12:52:36.025564 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0817 12:52:36.045786 17705 logs.go:123] Gathering logs for kube-apiserver [16de0fe9f580] ...
I0817 12:52:36.045802 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16de0fe9f580"
I0817 12:52:36.123300 17705 logs.go:123] Gathering logs for coredns [2e0fa136b09d] ...
I0817 12:52:36.123317 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0fa136b09d"
I0817 12:52:36.182549 17705 logs.go:123] Gathering logs for coredns [8609e3ef02fd] ...
I0817 12:52:36.182570 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8609e3ef02fd"
I0817 12:52:36.246628 17705 logs.go:123] Gathering logs for kube-proxy [ef82ea44260e] ...
I0817 12:52:36.246645 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef82ea44260e"
I0817 12:52:36.306271 17705 logs.go:123] Gathering logs for storage-provisioner [0e7b7ae46342] ...
I0817 12:52:36.306288 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7b7ae46342"
I0817 12:52:36.364407 17705 logs.go:123] Gathering logs for container status ...
I0817 12:52:36.364424 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0817 12:52:36.426178 17705 logs.go:123] Gathering logs for kubelet ...
I0817 12:52:36.426198 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0817 12:52:39.048169 17705 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0817 12:52:44.049493 17705 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0817 12:52:44.533302 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0817 12:52:44.617418 17705 logs.go:274] 1 containers: [16de0fe9f580]
I0817 12:52:44.617507 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0817 12:52:44.682310 17705 logs.go:274] 1 containers: [8d4746db3da5]
I0817 12:52:44.682397 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0817 12:52:44.738836 17705 logs.go:274] 2 containers: [2e0fa136b09d 8609e3ef02fd]
I0817 12:52:44.738928 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0817 12:52:44.794769 17705 logs.go:274] 1 containers: [70b867fbed09]
I0817 12:52:44.794857 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0817 12:52:44.848120 17705 logs.go:274] 1 containers: [ef82ea44260e]
I0817 12:52:44.848214 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0817 12:52:44.908377 17705 logs.go:274] 0 containers: []
W0817 12:52:44.908395 17705 logs.go:276] No container was found matching "kubernetes-dashboard"
I0817 12:52:44.908473 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0817 12:52:44.972905 17705 logs.go:274] 1 containers: [0e7b7ae46342]
I0817 12:52:44.972984 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0817 12:52:45.027870 17705 logs.go:274] 1 containers: [b3c183ba6b8f]
I0817 12:52:45.027898 17705 logs.go:123] Gathering logs for kube-apiserver [16de0fe9f580] ...
I0817 12:52:45.027913 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16de0fe9f580"
I0817 12:52:45.111900 17705 logs.go:123] Gathering logs for etcd [8d4746db3da5] ...
I0817 12:52:45.111917 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4746db3da5"
I0817 12:52:45.196732 17705 logs.go:123] Gathering logs for coredns [2e0fa136b09d] ...
I0817 12:52:45.196749 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0fa136b09d"
I0817 12:52:45.263565 17705 logs.go:123] Gathering logs for kube-scheduler [70b867fbed09] ...
I0817 12:52:45.263585 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b867fbed09"
I0817 12:52:45.347571 17705 logs.go:123] Gathering logs for storage-provisioner [0e7b7ae46342] ...
I0817 12:52:45.347591 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7b7ae46342"
I0817 12:52:45.414805 17705 logs.go:123] Gathering logs for Docker ...
I0817 12:52:45.414823 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0817 12:52:45.455248 17705 logs.go:123] Gathering logs for describe nodes ...
I0817 12:52:45.455270 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0817 12:52:45.623941 17705 logs.go:123] Gathering logs for dmesg ...
I0817 12:52:45.623957 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0817 12:52:45.643746 17705 logs.go:123] Gathering logs for coredns [8609e3ef02fd] ...
I0817 12:52:45.643762 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8609e3ef02fd"
I0817 12:52:45.705354 17705 logs.go:123] Gathering logs for kube-proxy [ef82ea44260e] ...
I0817 12:52:45.705377 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef82ea44260e"
I0817 12:52:45.771047 17705 logs.go:123] Gathering logs for kube-controller-manager [b3c183ba6b8f] ...
I0817 12:52:45.771067 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c183ba6b8f"
I0817 12:52:45.851415 17705 logs.go:123] Gathering logs for container status ...
I0817 12:52:45.851437 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0817 12:52:45.907122 17705 logs.go:123] Gathering logs for kubelet ...
I0817 12:52:45.907141 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0817 12:52:48.514857 17705 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0817 12:52:53.516186 17705 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": dial tcp 192.168.49.2:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
I0817 12:52:53.533629 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0817 12:52:53.611449 17705 logs.go:274] 1 containers: [16de0fe9f580]
I0817 12:52:53.611542 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0817 12:52:53.668010 17705 logs.go:274] 1 containers: [8d4746db3da5]
I0817 12:52:53.668101 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0817 12:52:53.744821 17705 logs.go:274] 2 containers: [2e0fa136b09d 8609e3ef02fd]
I0817 12:52:53.744923 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0817 12:52:53.806860 17705 logs.go:274] 1 containers: [70b867fbed09]
I0817 12:52:53.806953 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0817 12:52:53.862580 17705 logs.go:274] 1 containers: [ef82ea44260e]
I0817 12:52:53.862675 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0817 12:52:53.915394 17705 logs.go:274] 0 containers: []
W0817 12:52:53.915412 17705 logs.go:276] No container was found matching "kubernetes-dashboard"
I0817 12:52:53.915484 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0817 12:52:53.978611 17705 logs.go:274] 1 containers: [0e7b7ae46342]
I0817 12:52:53.978709 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0817 12:52:54.035192 17705 logs.go:274] 1 containers: [b3c183ba6b8f]
I0817 12:52:54.035222 17705 logs.go:123] Gathering logs for describe nodes ...
I0817 12:52:54.035235 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0817 12:52:54.173043 17705 logs.go:123] Gathering logs for kube-apiserver [16de0fe9f580] ...
I0817 12:52:54.173066 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16de0fe9f580"
I0817 12:52:54.247878 17705 logs.go:123] Gathering logs for kube-scheduler [70b867fbed09] ...
I0817 12:52:54.247899 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b867fbed09"
I0817 12:52:54.340477 17705 logs.go:123] Gathering logs for kube-proxy [ef82ea44260e] ...
I0817 12:52:54.340498 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef82ea44260e"
I0817 12:52:54.402214 17705 logs.go:123] Gathering logs for storage-provisioner [0e7b7ae46342] ...
I0817 12:52:54.402234 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7b7ae46342"
I0817 12:52:54.465332 17705 logs.go:123] Gathering logs for kube-controller-manager [b3c183ba6b8f] ...
I0817 12:52:54.465350 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c183ba6b8f"
I0817 12:52:54.543559 17705 logs.go:123] Gathering logs for container status ...
I0817 12:52:54.543577 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0817 12:52:54.605779 17705 logs.go:123] Gathering logs for kubelet ...
I0817 12:52:54.605796 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0817 12:52:54.720126 17705 logs.go:123] Gathering logs for dmesg ...
I0817 12:52:54.720146 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0817 12:52:54.741129 17705 logs.go:123] Gathering logs for etcd [8d4746db3da5] ...
I0817 12:52:54.741143 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4746db3da5"
I0817 12:52:54.822905 17705 logs.go:123] Gathering logs for coredns [2e0fa136b09d] ...
I0817 12:52:54.822923 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0fa136b09d"
I0817 12:52:54.888635 17705 logs.go:123] Gathering logs for coredns [8609e3ef02fd] ...
I0817 12:52:54.888657 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8609e3ef02fd"
I0817 12:52:54.953710 17705 logs.go:123] Gathering logs for Docker ...
I0817 12:52:54.953730 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0817 12:52:57.489491 17705 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0817 12:53:02.490519 17705 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0817 12:53:02.534298 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0817 12:53:02.611252 17705 logs.go:274] 1 containers: [16de0fe9f580]
I0817 12:53:02.611348 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0817 12:53:02.668064 17705 logs.go:274] 1 containers: [8d4746db3da5]
I0817 12:53:02.668155 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0817 12:53:02.728437 17705 logs.go:274] 2 containers: [2e0fa136b09d 8609e3ef02fd]
I0817 12:53:02.728522 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0817 12:53:02.782843 17705 logs.go:274] 1 containers: [70b867fbed09]
I0817 12:53:02.782928 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0817 12:53:02.841893 17705 logs.go:274] 1 containers: [ef82ea44260e]
I0817 12:53:02.841988 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0817 12:53:02.904708 17705 logs.go:274] 0 containers: []
W0817 12:53:02.904726 17705 logs.go:276] No container was found matching "kubernetes-dashboard"
I0817 12:53:02.904799 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0817 12:53:02.969866 17705 logs.go:274] 1 containers: [0e7b7ae46342]
I0817 12:53:02.969954 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0817 12:53:03.022781 17705 logs.go:274] 1 containers: [b3c183ba6b8f]
I0817 12:53:03.022808 17705 logs.go:123] Gathering logs for storage-provisioner [0e7b7ae46342] ...
I0817 12:53:03.022822 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7b7ae46342"
I0817 12:53:03.083850 17705 logs.go:123] Gathering logs for Docker ...
I0817 12:53:03.083888 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0817 12:53:03.123141 17705 logs.go:123] Gathering logs for container status ...
I0817 12:53:03.123163 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0817 12:53:03.194877 17705 logs.go:123] Gathering logs for describe nodes ...
I0817 12:53:03.194899 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0817 12:53:03.353382 17705 logs.go:123] Gathering logs for kube-apiserver [16de0fe9f580] ...
I0817 12:53:03.353397 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16de0fe9f580"
I0817 12:53:03.427266 17705 logs.go:123] Gathering logs for etcd [8d4746db3da5] ...
I0817 12:53:03.427284 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4746db3da5"
I0817 12:53:03.512758 17705 logs.go:123] Gathering logs for coredns [2e0fa136b09d] ...
I0817 12:53:03.512774 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0fa136b09d"
I0817 12:53:03.573762 17705 logs.go:123] Gathering logs for kube-proxy [ef82ea44260e] ...
I0817 12:53:03.573778 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef82ea44260e"
I0817 12:53:03.642918 17705 logs.go:123] Gathering logs for kubelet ...
I0817 12:53:03.642938 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0817 12:53:03.753185 17705 logs.go:123] Gathering logs for dmesg ...
I0817 12:53:03.753204 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0817 12:53:03.775114 17705 logs.go:123] Gathering logs for coredns [8609e3ef02fd] ...
I0817 12:53:03.775130 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8609e3ef02fd"
I0817 12:53:03.847111 17705 logs.go:123] Gathering logs for kube-scheduler [70b867fbed09] ...
I0817 12:53:03.847131 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b867fbed09"
I0817 12:53:03.931355 17705 logs.go:123] Gathering logs for kube-controller-manager [b3c183ba6b8f] ...
I0817 12:53:03.931371 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c183ba6b8f"
I0817 12:53:06.510897 17705 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0817 12:53:11.512288 17705 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0817 12:53:11.533887 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0817 12:53:11.616581 17705 logs.go:274] 1 containers: [16de0fe9f580]
I0817 12:53:11.616670 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0817 12:53:11.684303 17705 logs.go:274] 1 containers: [8d4746db3da5]
I0817 12:53:11.684393 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0817 12:53:11.741901 17705 logs.go:274] 2 containers: [2e0fa136b09d 8609e3ef02fd]
I0817 12:53:11.741997 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0817 12:53:11.799480 17705 logs.go:274] 1 containers: [70b867fbed09]
I0817 12:53:11.799567 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0817 12:53:11.853722 17705 logs.go:274] 1 containers: [ef82ea44260e]
I0817 12:53:11.853804 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0817 12:53:11.905105 17705 logs.go:274] 0 containers: []
W0817 12:53:11.905119 17705 logs.go:276] No container was found matching "kubernetes-dashboard"
I0817 12:53:11.905198 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0817 12:53:11.962024 17705 logs.go:274] 1 containers: [0e7b7ae46342]
I0817 12:53:11.962120 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0817 12:53:12.018630 17705 logs.go:274] 1 containers: [b3c183ba6b8f]
I0817 12:53:12.018655 17705 logs.go:123] Gathering logs for kubelet ...
I0817 12:53:12.018669 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0817 12:53:12.131511 17705 logs.go:123] Gathering logs for dmesg ...
I0817 12:53:12.131529 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0817 12:53:12.154065 17705 logs.go:123] Gathering logs for etcd [8d4746db3da5] ...
I0817 12:53:12.154081 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4746db3da5"
I0817 12:53:12.231378 17705 logs.go:123] Gathering logs for coredns [2e0fa136b09d] ...
I0817 12:53:12.231394 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0fa136b09d"
I0817 12:53:12.292837 17705 logs.go:123] Gathering logs for kube-scheduler [70b867fbed09] ...
I0817 12:53:12.292854 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b867fbed09"
I0817 12:53:12.383207 17705 logs.go:123] Gathering logs for storage-provisioner [0e7b7ae46342] ...
I0817 12:53:12.383223 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7b7ae46342"
I0817 12:53:12.447676 17705 logs.go:123] Gathering logs for container status ...
I0817 12:53:12.447691 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0817 12:53:12.503508 17705 logs.go:123] Gathering logs for describe nodes ...
I0817 12:53:12.503526 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0817 12:53:12.654806 17705 logs.go:123] Gathering logs for kube-apiserver [16de0fe9f580] ... I0817 12:53:12.654828 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16de0fe9f580" I0817 12:53:12.747263 17705 logs.go:123] Gathering logs for coredns [8609e3ef02fd] ... I0817 12:53:12.747282 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8609e3ef02fd" I0817 12:53:12.815068 17705 logs.go:123] Gathering logs for kube-proxy [ef82ea44260e] ... I0817 12:53:12.815088 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef82ea44260e" I0817 12:53:12.884865 17705 logs.go:123] Gathering logs for kube-controller-manager [b3c183ba6b8f] ... I0817 12:53:12.884882 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c183ba6b8f" I0817 12:53:12.966097 17705 logs.go:123] Gathering logs for Docker ... I0817 12:53:12.966115 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400" I0817 12:53:15.499637 17705 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... 
I0817 12:53:20.500047 17705 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) I0817 12:53:20.533462 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}} I0817 12:53:20.590580 17705 logs.go:274] 1 containers: [16de0fe9f580] I0817 12:53:20.590676 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}} I0817 12:53:20.644950 17705 logs.go:274] 1 containers: [8d4746db3da5] I0817 12:53:20.645037 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}} I0817 12:53:20.702601 17705 logs.go:274] 2 containers: [2e0fa136b09d 8609e3ef02fd] I0817 12:53:20.702695 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}} I0817 12:53:20.758108 17705 logs.go:274] 1 containers: [70b867fbed09] I0817 12:53:20.758201 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}} I0817 12:53:20.813312 17705 logs.go:274] 1 containers: [ef82ea44260e] I0817 12:53:20.813416 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}} I0817 12:53:20.889067 17705 logs.go:274] 0 containers: [] W0817 12:53:20.889085 17705 logs.go:276] No container was found matching "kubernetes-dashboard" I0817 12:53:20.889163 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}} I0817 12:53:20.944354 17705 logs.go:274] 1 containers: [0e7b7ae46342] I0817 12:53:20.944446 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}} I0817 12:53:21.004107 17705 logs.go:274] 1 containers: [b3c183ba6b8f] I0817 12:53:21.004137 17705 logs.go:123] Gathering logs for storage-provisioner [0e7b7ae46342] ... 
I0817 12:53:21.004150 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7b7ae46342" I0817 12:53:21.062314 17705 logs.go:123] Gathering logs for kube-controller-manager [b3c183ba6b8f] ... I0817 12:53:21.062336 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c183ba6b8f" I0817 12:53:21.157440 17705 logs.go:123] Gathering logs for coredns [2e0fa136b09d] ... I0817 12:53:21.157457 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0fa136b09d" I0817 12:53:21.226152 17705 logs.go:123] Gathering logs for coredns [8609e3ef02fd] ... I0817 12:53:21.226168 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8609e3ef02fd" I0817 12:53:21.288013 17705 logs.go:123] Gathering logs for describe nodes ... I0817 12:53:21.288031 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0817 12:53:21.442709 17705 logs.go:123] Gathering logs for kube-apiserver [16de0fe9f580] ... I0817 12:53:21.442726 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16de0fe9f580" I0817 12:53:21.514187 17705 logs.go:123] Gathering logs for etcd [8d4746db3da5] ... I0817 12:53:21.514203 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4746db3da5" I0817 12:53:21.602687 17705 logs.go:123] Gathering logs for kube-scheduler [70b867fbed09] ... I0817 12:53:21.602704 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b867fbed09" I0817 12:53:21.698831 17705 logs.go:123] Gathering logs for kube-proxy [ef82ea44260e] ... I0817 12:53:21.698848 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef82ea44260e" I0817 12:53:21.774695 17705 logs.go:123] Gathering logs for Docker ... I0817 12:53:21.774711 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400" I0817 12:53:21.809933 17705 logs.go:123] Gathering logs for kubelet ... 
I0817 12:53:21.809949 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0817 12:53:21.913579 17705 logs.go:123] Gathering logs for dmesg ... I0817 12:53:21.913596 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0817 12:53:21.934661 17705 logs.go:123] Gathering logs for container status ... I0817 12:53:21.934680 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0817 12:53:24.496032 17705 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0817 12:53:29.496575 17705 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) I0817 12:53:29.534169 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}} I0817 12:53:29.606207 17705 logs.go:274] 1 containers: [16de0fe9f580] I0817 12:53:29.606292 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}} I0817 12:53:29.665731 17705 logs.go:274] 1 containers: [8d4746db3da5] I0817 12:53:29.665820 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}} I0817 12:53:29.740551 17705 logs.go:274] 2 containers: [2e0fa136b09d 8609e3ef02fd] I0817 12:53:29.740648 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}} I0817 12:53:29.830858 17705 logs.go:274] 1 containers: [70b867fbed09] I0817 12:53:29.830935 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}} I0817 12:53:29.885566 17705 logs.go:274] 1 containers: [ef82ea44260e] I0817 12:53:29.885656 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}} I0817 12:53:29.944312 17705 logs.go:274] 0 containers: [] W0817 12:53:29.944334 17705 logs.go:276] No container was 
found matching "kubernetes-dashboard" I0817 12:53:29.944410 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}} I0817 12:53:29.999602 17705 logs.go:274] 1 containers: [0e7b7ae46342] I0817 12:53:29.999696 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}} I0817 12:53:30.056226 17705 logs.go:274] 1 containers: [b3c183ba6b8f] I0817 12:53:30.056254 17705 logs.go:123] Gathering logs for dmesg ... I0817 12:53:30.056266 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0817 12:53:30.076734 17705 logs.go:123] Gathering logs for describe nodes ... I0817 12:53:30.076754 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0817 12:53:30.214550 17705 logs.go:123] Gathering logs for coredns [8609e3ef02fd] ... I0817 12:53:30.214566 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8609e3ef02fd" I0817 12:53:30.276893 17705 logs.go:123] Gathering logs for kube-scheduler [70b867fbed09] ... I0817 12:53:30.276915 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b867fbed09" I0817 12:53:30.363634 17705 logs.go:123] Gathering logs for kube-proxy [ef82ea44260e] ... I0817 12:53:30.363652 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef82ea44260e" I0817 12:53:30.427758 17705 logs.go:123] Gathering logs for storage-provisioner [0e7b7ae46342] ... I0817 12:53:30.427773 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7b7ae46342" I0817 12:53:30.507263 17705 logs.go:123] Gathering logs for Docker ... I0817 12:53:30.507279 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400" I0817 12:53:30.540910 17705 logs.go:123] Gathering logs for kubelet ... 
I0817 12:53:30.540926 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0817 12:53:30.645118 17705 logs.go:123] Gathering logs for kube-apiserver [16de0fe9f580] ... I0817 12:53:30.645136 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16de0fe9f580" I0817 12:53:30.731748 17705 logs.go:123] Gathering logs for etcd [8d4746db3da5] ... I0817 12:53:30.731766 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4746db3da5" I0817 12:53:30.822960 17705 logs.go:123] Gathering logs for coredns [2e0fa136b09d] ... I0817 12:53:30.822979 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0fa136b09d" I0817 12:53:30.915604 17705 logs.go:123] Gathering logs for kube-controller-manager [b3c183ba6b8f] ... I0817 12:53:30.915621 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c183ba6b8f" I0817 12:53:31.011278 17705 logs.go:123] Gathering logs for container status ... I0817 12:53:31.011297 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0817 12:53:33.572315 17705 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... 
I0817 12:53:38.573571 17705 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) I0817 12:53:38.573844 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}} I0817 12:53:38.657935 17705 logs.go:274] 1 containers: [16de0fe9f580] I0817 12:53:38.658023 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}} I0817 12:53:38.728668 17705 logs.go:274] 1 containers: [8d4746db3da5] I0817 12:53:38.728753 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}} I0817 12:53:38.791013 17705 logs.go:274] 2 containers: [2e0fa136b09d 8609e3ef02fd] I0817 12:53:38.791108 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}} I0817 12:53:38.848437 17705 logs.go:274] 1 containers: [70b867fbed09] I0817 12:53:38.848519 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}} I0817 12:53:38.908475 17705 logs.go:274] 1 containers: [ef82ea44260e] I0817 12:53:38.908571 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}} I0817 12:53:38.966548 17705 logs.go:274] 0 containers: [] W0817 12:53:38.966566 17705 logs.go:276] No container was found matching "kubernetes-dashboard" I0817 12:53:38.966656 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}} I0817 12:53:39.021664 17705 logs.go:274] 1 containers: [0e7b7ae46342] I0817 12:53:39.021752 17705 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}} I0817 12:53:39.090441 17705 logs.go:274] 1 containers: [b3c183ba6b8f] I0817 12:53:39.090467 17705 logs.go:123] Gathering logs for kube-proxy [ef82ea44260e] ... 
I0817 12:53:39.090484 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef82ea44260e" I0817 12:53:39.156355 17705 logs.go:123] Gathering logs for kube-controller-manager [b3c183ba6b8f] ... I0817 12:53:39.156371 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c183ba6b8f" I0817 12:53:39.233916 17705 logs.go:123] Gathering logs for Docker ... I0817 12:53:39.233934 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400" I0817 12:53:39.272367 17705 logs.go:123] Gathering logs for container status ... I0817 12:53:39.272383 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0817 12:53:39.330324 17705 logs.go:123] Gathering logs for kubelet ... I0817 12:53:39.330346 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0817 12:53:39.440339 17705 logs.go:123] Gathering logs for dmesg ... I0817 12:53:39.440356 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0817 12:53:39.461035 17705 logs.go:123] Gathering logs for kube-apiserver [16de0fe9f580] ... I0817 12:53:39.461055 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16de0fe9f580" I0817 12:53:39.542089 17705 logs.go:123] Gathering logs for coredns [2e0fa136b09d] ... I0817 12:53:39.542105 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e0fa136b09d" I0817 12:53:39.610947 17705 logs.go:123] Gathering logs for storage-provisioner [0e7b7ae46342] ... I0817 12:53:39.610967 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7b7ae46342" I0817 12:53:39.670172 17705 logs.go:123] Gathering logs for describe nodes ... I0817 12:53:39.670192 17705 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" I0817 12:53:39.810216 17705 logs.go:123] Gathering logs for etcd [8d4746db3da5] ... 
I0817 12:53:39.810232 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4746db3da5"
I0817 12:53:39.962328 17705 logs.go:123] Gathering logs for coredns [8609e3ef02fd] ...
I0817 12:53:39.962349 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8609e3ef02fd"
I0817 12:53:40.023777 17705 logs.go:123] Gathering logs for kube-scheduler [70b867fbed09] ...
I0817 12:53:40.023796 17705 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 70b867fbed09"
I0817 12:53:42.615422 17705 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0817 12:53:47.616777 17705 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0817 12:53:47.625079 17705 out.go:177]
W0817 12:53:47.627029 17705 out.go:239] ❌  Exiting due to GUEST_START: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: timed out waiting for the condition
W0817 12:53:47.627110 17705 out.go:239]
W0817 12:53:47.638638 17705 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
I0817 12:53:47.640504 17705 out.go:177]
*
* ==> Docker <==
* -- Logs begin at Wed 2022-08-17 07:15:44 UTC, end at Wed 2022-08-17 08:10:38 UTC. --
Aug 17 07:15:45 minikube dockerd[131]: time="2022-08-17T07:15:45.399393461Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Aug 17 07:15:45 minikube dockerd[131]: time="2022-08-17T07:15:45.587356702Z" level=info msg="Loading containers: start."
Aug 17 07:15:46 minikube dockerd[131]: time="2022-08-17T07:15:46.073578168Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Aug 17 07:15:46 minikube dockerd[131]: time="2022-08-17T07:15:46.282702045Z" level=info msg="Loading containers: done."
Aug 17 07:15:46 minikube dockerd[131]: time="2022-08-17T07:15:46.328934154Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
Aug 17 07:15:46 minikube dockerd[131]: time="2022-08-17T07:15:46.329156612Z" level=info msg="Daemon has completed initialization"
Aug 17 07:15:46 minikube systemd[1]: Started Docker Application Container Engine.
Aug 17 07:15:46 minikube dockerd[131]: time="2022-08-17T07:15:46.579257225Z" level=info msg="API listen on /run/docker.sock"
Aug 17 07:15:48 minikube systemd[1]: docker.service: Current command vanished from the unit file, execution of the command list won't be resumed.
Aug 17 07:15:48 minikube systemd[1]: Stopping Docker Application Container Engine...
Aug 17 07:15:48 minikube dockerd[131]: time="2022-08-17T07:15:48.510506261Z" level=info msg="Processing signal 'terminated'" Aug 17 07:15:48 minikube dockerd[131]: time="2022-08-17T07:15:48.511911756Z" level=info msg="stopping event stream following graceful shutdown" error="" module=libcontainerd namespace=moby Aug 17 07:15:48 minikube dockerd[131]: time="2022-08-17T07:15:48.513029013Z" level=info msg="Daemon shutdown complete" Aug 17 07:15:48 minikube dockerd[131]: time="2022-08-17T07:15:48.513286467Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby Aug 17 07:15:48 minikube systemd[1]: docker.service: Succeeded. Aug 17 07:15:48 minikube systemd[1]: Stopped Docker Application Container Engine. Aug 17 07:15:48 minikube systemd[1]: Starting Docker Application Container Engine... Aug 17 07:15:48 minikube dockerd[374]: time="2022-08-17T07:15:48.583733893Z" level=info msg="Starting up" Aug 17 07:15:48 minikube dockerd[374]: time="2022-08-17T07:15:48.586652700Z" level=info msg="parsed scheme: \"unix\"" module=grpc Aug 17 07:15:48 minikube dockerd[374]: time="2022-08-17T07:15:48.586804345Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Aug 17 07:15:48 minikube dockerd[374]: time="2022-08-17T07:15:48.586912593Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Aug 17 07:15:48 minikube dockerd[374]: time="2022-08-17T07:15:48.587008669Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Aug 17 07:15:48 minikube dockerd[374]: time="2022-08-17T07:15:48.588642445Z" level=info msg="parsed scheme: \"unix\"" module=grpc Aug 17 07:15:48 minikube dockerd[374]: time="2022-08-17T07:15:48.588787695Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Aug 17 07:15:48 minikube dockerd[374]: time="2022-08-17T07:15:48.588870593Z" 
level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Aug 17 07:15:48 minikube dockerd[374]: time="2022-08-17T07:15:48.588947753Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Aug 17 07:15:48 minikube dockerd[374]: time="2022-08-17T07:15:48.596885173Z" level=info msg="[graphdriver] using prior storage driver: overlay2" Aug 17 07:15:48 minikube dockerd[374]: time="2022-08-17T07:15:48.610205653Z" level=info msg="Loading containers: start." Aug 17 07:15:48 minikube dockerd[374]: time="2022-08-17T07:15:48.751146234Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Aug 17 07:15:48 minikube dockerd[374]: time="2022-08-17T07:15:48.812686864Z" level=info msg="Loading containers: done." Aug 17 07:15:48 minikube dockerd[374]: time="2022-08-17T07:15:48.830577057Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17 Aug 17 07:15:48 minikube dockerd[374]: time="2022-08-17T07:15:48.830683260Z" level=info msg="Daemon has completed initialization" Aug 17 07:15:48 minikube systemd[1]: Started Docker Application Container Engine. Aug 17 07:15:48 minikube dockerd[374]: time="2022-08-17T07:15:48.878905236Z" level=info msg="API listen on [::]:2376" Aug 17 07:15:48 minikube dockerd[374]: time="2022-08-17T07:15:48.895881221Z" level=info msg="API listen on /var/run/docker.sock" Aug 17 07:15:50 minikube systemd[1]: Stopping Docker Application Container Engine... 
Aug 17 07:15:50 minikube dockerd[374]: time="2022-08-17T07:15:50.928764171Z" level=info msg="Processing signal 'terminated'" Aug 17 07:15:50 minikube dockerd[374]: time="2022-08-17T07:15:50.930184995Z" level=info msg="stopping event stream following graceful shutdown" error="" module=libcontainerd namespace=moby Aug 17 07:15:50 minikube dockerd[374]: time="2022-08-17T07:15:50.930712196Z" level=info msg="Daemon shutdown complete" Aug 17 07:15:50 minikube systemd[1]: docker.service: Succeeded. Aug 17 07:15:50 minikube systemd[1]: Stopped Docker Application Container Engine. Aug 17 07:15:50 minikube systemd[1]: Starting Docker Application Container Engine... Aug 17 07:15:51 minikube dockerd[578]: time="2022-08-17T07:15:51.004794577Z" level=info msg="Starting up" Aug 17 07:15:51 minikube dockerd[578]: time="2022-08-17T07:15:51.007475567Z" level=info msg="parsed scheme: \"unix\"" module=grpc Aug 17 07:15:51 minikube dockerd[578]: time="2022-08-17T07:15:51.007514263Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Aug 17 07:15:51 minikube dockerd[578]: time="2022-08-17T07:15:51.007550850Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Aug 17 07:15:51 minikube dockerd[578]: time="2022-08-17T07:15:51.007574929Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Aug 17 07:15:51 minikube dockerd[578]: time="2022-08-17T07:15:51.009407863Z" level=info msg="parsed scheme: \"unix\"" module=grpc Aug 17 07:15:51 minikube dockerd[578]: time="2022-08-17T07:15:51.009575183Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Aug 17 07:15:51 minikube dockerd[578]: time="2022-08-17T07:15:51.009669698Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Aug 17 07:15:51 minikube dockerd[578]: time="2022-08-17T07:15:51.009743321Z" level=info 
msg="ClientConn switching balancer to \"pick_first\"" module=grpc Aug 17 07:15:51 minikube dockerd[578]: time="2022-08-17T07:15:51.025191065Z" level=info msg="[graphdriver] using prior storage driver: overlay2" Aug 17 07:15:51 minikube dockerd[578]: time="2022-08-17T07:15:51.040937709Z" level=info msg="Loading containers: start." Aug 17 07:15:51 minikube dockerd[578]: time="2022-08-17T07:15:51.192008168Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Aug 17 07:15:51 minikube dockerd[578]: time="2022-08-17T07:15:51.254999311Z" level=info msg="Loading containers: done." Aug 17 07:15:51 minikube dockerd[578]: time="2022-08-17T07:15:51.274572466Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17 Aug 17 07:15:51 minikube dockerd[578]: time="2022-08-17T07:15:51.274934903Z" level=info msg="Daemon has completed initialization" Aug 17 07:15:51 minikube systemd[1]: Started Docker Application Container Engine. 
Aug 17 07:15:51 minikube dockerd[578]: time="2022-08-17T07:15:51.323403616Z" level=info msg="API listen on [::]:2376" Aug 17 07:15:51 minikube dockerd[578]: time="2022-08-17T07:15:51.359022030Z" level=info msg="API listen on /var/run/docker.sock" * * ==> container status <== * CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID 0e7b7ae46342d 6e38f40d628db 51 minutes ago Running storage-provisioner 0 5bd87f9651d21 2e0fa136b09d9 a4ca41631cc7a 54 minutes ago Running coredns 0 cb151dfb9c1fe 8609e3ef02fd8 a4ca41631cc7a 54 minutes ago Running coredns 0 1d9c28e7a3aa5 ef82ea44260e8 2ae1ba6417cbc 54 minutes ago Running kube-proxy 0 6505fa4e3480c 70b867fbed091 3a5aa3a515f5d 54 minutes ago Running kube-scheduler 0 c81a0eeb5bfc9 8d4746db3da51 aebe758cef4cd 54 minutes ago Running etcd 0 ca36033aa753d 16de0fe9f580d d521dd763e2e3 54 minutes ago Running kube-apiserver 0 e019e2b79c9f9 b3c183ba6b8fb 586c112956dfc 54 minutes ago Running kube-controller-manager 0 4dc4ce5c0c204 * * ==> coredns [2e0fa136b09d] <== * .:53 [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7 CoreDNS-1.8.6 linux/amd64, go1.17.1, 13a9191 [INFO] Reloading [INFO] plugin/health: Going into lameduck mode for 5s [INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031 [INFO] Reloading complete * * ==> coredns [8609e3ef02fd] <== * .:53 [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7 CoreDNS-1.8.6 linux/amd64, go1.17.1, 13a9191 [INFO] Reloading [INFO] plugin/health: Going into lameduck mode for 5s [INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031 [INFO] Reloading complete * * ==> describe nodes <== * Name: minikube Roles: control-plane Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=minikube kubernetes.io/os=linux minikube.k8s.io/commit=62e108c3dfdec8029a890ad6d8ef96b6461426dc minikube.k8s.io/name=minikube 
minikube.k8s.io/primary=true minikube.k8s.io/updated_at=2022_08_17T12_46_19_0700 minikube.k8s.io/version=v1.26.1 node-role.kubernetes.io/control-plane= node.kubernetes.io/exclude-from-external-load-balancers= Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Wed, 17 Aug 2022 07:16:13 +0000 Taints: Unschedulable: false Lease: HolderIdentity: minikube AcquireTime: RenewTime: Wed, 17 Aug 2022 08:10:33 +0000 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure False Wed, 17 Aug 2022 08:07:31 +0000 Wed, 17 Aug 2022 07:16:13 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Wed, 17 Aug 2022 08:07:31 +0000 Wed, 17 Aug 2022 07:16:13 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Wed, 17 Aug 2022 08:07:31 +0000 Wed, 17 Aug 2022 07:16:13 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Wed, 17 Aug 2022 08:07:31 +0000 Wed, 17 Aug 2022 07:16:19 +0000 KubeletReady kubelet is posting ready status Addresses: InternalIP: 192.168.49.2 Hostname: minikube Capacity: cpu: 2 ephemeral-storage: 65792556Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 2888300Ki pods: 110 Allocatable: cpu: 2 ephemeral-storage: 65792556Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 2888300Ki pods: 110 System Info: Machine ID: 4c192b04687c403f8fbb9bc7975b21b3 System UUID: 4c192b04687c403f8fbb9bc7975b21b3 Boot ID: f9e129a8-413b-4ddd-adf0-27776be5616c Kernel Version: 5.10.104-linuxkit OS Image: Ubuntu 20.04.4 LTS Operating System: linux Architecture: amd64 Container Runtime Version: docker://20.10.17 Kubelet Version: v1.24.3 Kube-Proxy Version: v1.24.3 PodCIDR: 10.244.0.0/24 PodCIDRs: 10.244.0.0/24 Non-terminated Pods: (8 in total) Namespace Name CPU Requests CPU Limits 
Memory Requests Memory Limits Age --------- ---- ------------ ---------- --------------- ------------- --- kube-system coredns-6d4b75cb6d-2lg9g 100m (5%!)(MISSING) 0 (0%!)(MISSING) 70Mi (2%!)(MISSING) 170Mi (6%!)(MISSING) 54m kube-system coredns-6d4b75cb6d-n9gzg 100m (5%!)(MISSING) 0 (0%!)(MISSING) 70Mi (2%!)(MISSING) 170Mi (6%!)(MISSING) 54m kube-system etcd-minikube 100m (5%!)(MISSING) 0 (0%!)(MISSING) 100Mi (3%!)(MISSING) 0 (0%!)(MISSING) 54m kube-system kube-apiserver-minikube 250m (12%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 54m kube-system kube-controller-manager-minikube 200m (10%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 54m kube-system kube-proxy-4pk5n 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 54m kube-system kube-scheduler-minikube 100m (5%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 54m kube-system storage-provisioner 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 51m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 850m (42%!)(MISSING) 0 (0%!)(MISSING) memory 240Mi (8%!)(MISSING) 340Mi (12%!)(MISSING) ephemeral-storage 0 (0%!)(MISSING) 0 (0%!)(MISSING) hugepages-1Gi 0 (0%!)(MISSING) 0 (0%!)(MISSING) hugepages-2Mi 0 (0%!)(MISSING) 0 (0%!)(MISSING) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Starting 54m kube-proxy Normal Starting 54m kubelet Starting kubelet. Normal NodeHasSufficientMemory 54m (x6 over 54m) kubelet Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 54m (x5 over 54m) kubelet Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 54m (x5 over 54m) kubelet Node minikube status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 54m kubelet Updated Node Allocatable limit across pods Normal Starting 54m kubelet Starting kubelet. 
  Normal  NodeHasSufficientMemory  54m                kubelet          Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    54m                kubelet          Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     54m                kubelet          Node minikube status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  54m                kubelet          Updated Node Allocatable limit across pods
  Normal  NodeReady                54m                kubelet          Node minikube status is now: NodeReady
  Normal  RegisteredNode           54m                node-controller  Node minikube event: Registered Node minikube in Controller

*
* ==> dmesg <==
*
[Aug17 05:22] Hangcheck: starting hangcheck timer 0.9.1 (tick is 180 seconds, margin is 60 seconds).
[ +0.024963] the cryptoloop driver has been deprecated and will be removed in in Linux 5.16
[ +10.499686] grpcfuse: loading out-of-tree module taints kernel.
[Aug17 07:13] hrtimer: interrupt took 4519878 ns

*
* ==> etcd [8d4746db3da5] <==
*
{"level":"info","ts":"2022-08-17T07:16:08.249Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
{"level":"info","ts":"2022-08-17T07:16:08.250Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
{"level":"info","ts":"2022-08-17T07:16:08.250Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
{"level":"info","ts":"2022-08-17T07:16:08.250Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
{"level":"info","ts":"2022-08-17T07:16:08.250Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2022-08-17T07:16:08.251Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-08-17T07:16:08.252Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:minikube ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2022-08-17T07:16:08.255Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
{"level":"info","ts":"2022-08-17T07:16:08.256Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-08-17T07:16:08.261Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2022-08-17T07:16:08.259Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2022-08-17T07:16:08.271Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2022-08-17T07:16:08.291Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
{"level":"info","ts":"2022-08-17T07:16:08.291Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2022-08-17T07:16:08.291Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2022-08-17T07:16:12.030Z","caller":"traceutil/trace.go:171","msg":"trace[275596698] transaction","detail":"{read_only:false; response_revision:11; number_of_response:1; }","duration":"100.547425ms","start":"2022-08-17T07:16:11.930Z","end":"2022-08-17T07:16:12.030Z","steps":["trace[275596698] 'process raft request' (duration: 44.258206ms)","trace[275596698] 'compare' (duration: 55.295193ms)"],"step_count":2}
{"level":"warn","ts":"2022-08-17T07:16:12.044Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"104.090903ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/minikube\" ","response":"range_response_count:0 size:4"}
{"level":"info","ts":"2022-08-17T07:16:12.044Z","caller":"traceutil/trace.go:171","msg":"trace[44868844] range","detail":"{range_begin:/registry/csinodes/minikube; range_end:; response_count:0; response_revision:16; }","duration":"104.374374ms","start":"2022-08-17T07:16:11.939Z","end":"2022-08-17T07:16:12.044Z","steps":["trace[44868844] 'agreement among raft nodes before linearized reading' (duration: 104.040627ms)"],"step_count":1}
{"level":"warn","ts":"2022-08-17T07:16:12.045Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"103.688125ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
{"level":"info","ts":"2022-08-17T07:16:12.045Z","caller":"traceutil/trace.go:171","msg":"trace[785871745] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:16; }","duration":"103.803513ms","start":"2022-08-17T07:16:11.941Z","end":"2022-08-17T07:16:12.045Z","steps":["trace[785871745] 'agreement among raft nodes before linearized reading' (duration: 103.664222ms)"],"step_count":1}
{"level":"warn","ts":"2022-08-17T07:16:12.049Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"109.754441ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/extension-apiserver-authentication\" ","response":"range_response_count:0 size:4"}
{"level":"info","ts":"2022-08-17T07:16:12.049Z","caller":"traceutil/trace.go:171","msg":"trace[591668493] range","detail":"{range_begin:/registry/configmaps/kube-system/extension-apiserver-authentication; range_end:; response_count:0; response_revision:16; }","duration":"109.90896ms","start":"2022-08-17T07:16:11.939Z","end":"2022-08-17T07:16:12.049Z","steps":["trace[591668493] 'agreement among raft nodes before linearized reading' (duration: 109.694613ms)"],"step_count":1}
{"level":"warn","ts":"2022-08-17T07:16:12.236Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"128.111385ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
{"level":"info","ts":"2022-08-17T07:16:12.236Z","caller":"traceutil/trace.go:171","msg":"trace[1841426714] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:30; }","duration":"128.66856ms","start":"2022-08-17T07:16:12.108Z","end":"2022-08-17T07:16:12.236Z","steps":["trace[1841426714] 'agreement among raft nodes before linearized reading' (duration: 57.822469ms)","trace[1841426714] 'range keys from in-memory index tree' (duration: 70.248716ms)"],"step_count":2}
{"level":"warn","ts":"2022-08-17T07:16:12.583Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"121.902557ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/minikube\" ","response":"range_response_count:0 size:4"}
{"level":"info","ts":"2022-08-17T07:16:12.584Z","caller":"traceutil/trace.go:171","msg":"trace[1057011509] range","detail":"{range_begin:/registry/csinodes/minikube; range_end:; response_count:0; response_revision:46; }","duration":"122.760233ms","start":"2022-08-17T07:16:12.461Z","end":"2022-08-17T07:16:12.584Z","steps":["trace[1057011509] 'agreement among raft nodes before linearized reading' (duration: 50.380112ms)","trace[1057011509] 'range keys from in-memory index tree' (duration: 71.467291ms)"],"step_count":2}
{"level":"warn","ts":"2022-08-17T07:16:15.206Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"102.216625ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:controller:expand-controller\" ","response":"range_response_count:0 size:4"}
{"level":"info","ts":"2022-08-17T07:16:15.208Z","caller":"traceutil/trace.go:171","msg":"trace[1947876186] range","detail":"{range_begin:/registry/clusterroles/system:controller:expand-controller; range_end:; response_count:0; response_revision:118; }","duration":"104.835162ms","start":"2022-08-17T07:16:15.103Z","end":"2022-08-17T07:16:15.208Z","steps":["trace[1947876186] 'agreement among raft nodes before linearized reading' (duration: 46.860197ms)","trace[1947876186] 'range keys from in-memory index tree' (duration: 55.289395ms)"],"step_count":2}
{"level":"warn","ts":"2022-08-17T07:16:15.895Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"144.42433ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:controller:route-controller\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-08-17T07:16:15.897Z","caller":"traceutil/trace.go:171","msg":"trace[1011387277] range","detail":"{range_begin:/registry/clusterroles/system:controller:route-controller; range_end:; response_count:0; response_revision:130; }","duration":"145.769388ms","start":"2022-08-17T07:16:15.751Z","end":"2022-08-17T07:16:15.897Z","steps":["trace[1011387277] 'agreement among raft nodes before linearized reading' (duration: 63.685605ms)","trace[1011387277] 'range keys from in-memory index tree' (duration: 80.839188ms)"],"step_count":2}
{"level":"warn","ts":"2022-08-17T07:16:18.417Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"164.243239ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/namespace-controller\" ","response":"range_response_count:1 size:207"}
{"level":"info","ts":"2022-08-17T07:16:18.417Z","caller":"traceutil/trace.go:171","msg":"trace[817487691] transaction","detail":"{read_only:false; response_revision:248; number_of_response:1; }","duration":"122.357563ms","start":"2022-08-17T07:16:18.294Z","end":"2022-08-17T07:16:18.417Z","steps":["trace[817487691] 'process raft request' (duration: 32.957655ms)","trace[817487691] 'compare' (duration: 89.188843ms)"],"step_count":2}
{"level":"info","ts":"2022-08-17T07:16:18.417Z","caller":"traceutil/trace.go:171","msg":"trace[1168653486] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/namespace-controller; range_end:; response_count:1; response_revision:247; }","duration":"164.611559ms","start":"2022-08-17T07:16:18.252Z","end":"2022-08-17T07:16:18.417Z","steps":["trace[1168653486] 'agreement among raft nodes before linearized reading' (duration: 74.88043ms)","trace[1168653486] 'range keys from in-memory index tree' (duration: 89.317517ms)"],"step_count":2}
{"level":"info","ts":"2022-08-17T07:16:18.417Z","caller":"traceutil/trace.go:171","msg":"trace[1655095223] transaction","detail":"{read_only:false; response_revision:249; number_of_response:1; }","duration":"119.335144ms","start":"2022-08-17T07:16:18.297Z","end":"2022-08-17T07:16:18.417Z","steps":["trace[1655095223] 'process raft request' (duration: 119.27797ms)"],"step_count":1}
{"level":"warn","ts":"2022-08-17T07:16:19.477Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"197.159606ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-minikube\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-08-17T07:16:19.477Z","caller":"traceutil/trace.go:171","msg":"trace[144134378] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-minikube; range_end:; response_count:0; response_revision:278; }","duration":"197.412394ms","start":"2022-08-17T07:16:19.279Z","end":"2022-08-17T07:16:19.477Z","steps":["trace[144134378] 'agreement among raft nodes before linearized reading' (duration: 78.654176ms)","trace[144134378] 'range keys from in-memory index tree' (duration: 118.445649ms)"],"step_count":2}
{"level":"warn","ts":"2022-08-17T07:16:19.479Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"118.634595ms","expected-duration":"100ms","prefix":"","request":"header: txn: success:> failure:<>>","response":"size:16"}
{"level":"info","ts":"2022-08-17T07:16:19.480Z","caller":"traceutil/trace.go:171","msg":"trace[1287350660] transaction","detail":"{read_only:false; response_revision:279; number_of_response:1; }","duration":"200.305243ms","start":"2022-08-17T07:16:19.280Z","end":"2022-08-17T07:16:19.480Z","steps":["trace[1287350660] 'process raft request' (duration: 78.567219ms)","trace[1287350660] 'compare' (duration: 118.347931ms)"],"step_count":2}
{"level":"info","ts":"2022-08-17T07:16:19.481Z","caller":"traceutil/trace.go:171","msg":"trace[146849169] transaction","detail":"{read_only:false; response_revision:280; number_of_response:1; }","duration":"197.563434ms","start":"2022-08-17T07:16:19.284Z","end":"2022-08-17T07:16:19.481Z","steps":["trace[146849169] 'process raft request' (duration: 195.989846ms)"],"step_count":1}
{"level":"info","ts":"2022-08-17T07:16:19.500Z","caller":"traceutil/trace.go:171","msg":"trace[1512730691] transaction","detail":"{read_only:false; response_revision:281; number_of_response:1; }","duration":"216.386848ms","start":"2022-08-17T07:16:19.284Z","end":"2022-08-17T07:16:19.500Z","steps":["trace[1512730691] 'process raft request' (duration: 212.606344ms)"],"step_count":1}
{"level":"info","ts":"2022-08-17T07:16:19.518Z","caller":"traceutil/trace.go:171","msg":"trace[197899362] transaction","detail":"{read_only:false; response_revision:282; number_of_response:1; }","duration":"197.70534ms","start":"2022-08-17T07:16:19.320Z","end":"2022-08-17T07:16:19.518Z","steps":["trace[197899362] 'process raft request' (duration: 192.887755ms)"],"step_count":1}
{"level":"info","ts":"2022-08-17T07:16:22.455Z","caller":"traceutil/trace.go:171","msg":"trace[273631830] transaction","detail":"{read_only:false; response_revision:288; number_of_response:1; }","duration":"140.435986ms","start":"2022-08-17T07:16:22.315Z","end":"2022-08-17T07:16:22.455Z","steps":["trace[273631830] 'process raft request' (duration: 55.622064ms)","trace[273631830] 'compare' (duration: 84.278477ms)"],"step_count":2}
{"level":"info","ts":"2022-08-17T07:26:08.234Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":509}
{"level":"info","ts":"2022-08-17T07:26:08.238Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":509,"took":"2.544572ms"}
{"level":"info","ts":"2022-08-17T07:31:08.237Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":717}
{"level":"info","ts":"2022-08-17T07:31:08.240Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":717,"took":"2.142756ms"}
{"level":"info","ts":"2022-08-17T07:36:08.310Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":926}
{"level":"info","ts":"2022-08-17T07:36:08.312Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":926,"took":"1.099802ms"}
{"level":"info","ts":"2022-08-17T07:41:08.356Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1135}
{"level":"info","ts":"2022-08-17T07:41:08.366Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1135,"took":"8.282769ms"}
{"level":"info","ts":"2022-08-17T07:46:08.403Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1343}
{"level":"info","ts":"2022-08-17T07:46:08.406Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1343,"took":"2.519325ms"}
{"level":"info","ts":"2022-08-17T07:51:08.431Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1550}
{"level":"info","ts":"2022-08-17T07:51:08.434Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1550,"took":"2.141865ms"}
{"level":"info","ts":"2022-08-17T07:56:08.451Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1760}
{"level":"info","ts":"2022-08-17T07:56:08.453Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1760,"took":"1.430861ms"}
{"level":"info","ts":"2022-08-17T08:01:08.484Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1968}
{"level":"info","ts":"2022-08-17T08:01:08.486Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1968,"took":"1.599217ms"}
{"level":"info","ts":"2022-08-17T08:06:08.514Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2176}
{"level":"info","ts":"2022-08-17T08:06:08.516Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":2176,"took":"1.537225ms"}

*
* ==> kernel <==
*
08:10:39 up 2:48, 0 users, load average: 0.36, 0.42, 0.44
Linux minikube 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.4 LTS"

*
* ==> kube-apiserver [16de0fe9f580] <==
*
I0817 07:16:11.685861 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0817 07:16:11.686538 1 secure_serving.go:210] Serving securely on [::]:8443
I0817 07:16:11.686605 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
I0817 07:16:11.703014 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0817 07:16:11.703034 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0817 07:16:11.704410 1 controller.go:83] Starting OpenAPI AggregationController
I0817 07:16:11.705195 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0817 07:16:11.705396 1 shared_informer.go:255] Waiting for caches to sync for cluster_authentication_trust_controller
I0817 07:16:11.705683 1 customresource_discovery_controller.go:209] Starting DiscoveryController
I0817 07:16:11.705820 1 apiservice_controller.go:97] Starting APIServiceRegistrationController
I0817 07:16:11.705935 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0817 07:16:11.706037 1 available_controller.go:491] Starting AvailableConditionController
I0817 07:16:11.706146 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0817 07:16:11.707450 1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
I0817 07:16:11.709229 1 autoregister_controller.go:141] Starting autoregister controller
I0817 07:16:11.709252 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0817 07:16:11.709303 1 controller.go:80] Starting OpenAPI V3 AggregationController
I0817 07:16:11.711977 1 apf_controller.go:317] Starting API Priority and Fairness config controller
I0817 07:16:11.728081 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0817 07:16:11.740173 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0817 07:16:11.740205 1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
I0817 07:16:11.744440 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0817 07:16:11.747541 1 controller.go:85] Starting OpenAPI controller
I0817 07:16:11.747809 1 controller.go:85] Starting OpenAPI V3 controller
I0817 07:16:11.747932 1 naming_controller.go:291] Starting NamingConditionController
I0817 07:16:11.748087 1 establishing_controller.go:76] Starting EstablishingController
I0817 07:16:11.748192 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0817 07:16:11.748323 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0817 07:16:11.748459 1 crd_finalizer.go:266] Starting CRDFinalizer
I0817 07:16:11.905667 1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
I0817 07:16:11.906337 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0817 07:16:11.906666 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0817 07:16:11.906759 1 controller.go:611] quota admission added evaluator for: namespaces
I0817 07:16:11.909315 1 cache.go:39] Caches are synced for autoregister controller
I0817 07:16:11.912039 1 apf_controller.go:322] Running API Priority and Fairness config worker
I0817 07:16:11.940297 1 shared_informer.go:262] Caches are synced for crd-autoregister
I0817 07:16:11.947956 1 shared_informer.go:262] Caches are synced for node_authorizer
I0817 07:16:12.266205 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0817 07:16:12.788783 1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
I0817 07:16:12.943313 1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
I0817 07:16:12.943437 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0817 07:16:16.525742 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0817 07:16:16.579302 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0817 07:16:16.762955 1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
W0817 07:16:16.796065 1 lease.go:234] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
I0817 07:16:16.801777 1 controller.go:611] quota admission added evaluator for: endpoints
I0817 07:16:16.832523 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0817 07:16:16.908756 1 controller.go:611] quota admission added evaluator for: serviceaccounts
I0817 07:16:18.885292 1 controller.go:611] quota admission added evaluator for: deployments.apps
I0817 07:16:18.961842 1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
I0817 07:16:19.077437 1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I0817 07:16:19.086654 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I0817 07:16:30.591699 1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
I0817 07:16:30.686883 1 controller.go:611] quota admission added evaluator for: replicasets.apps
I0817 07:16:31.843031 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
W0817 07:30:38.844912 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
W0817 07:37:44.143423 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
W0817 07:47:29.287734 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
W0817 07:57:25.367475 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
W0817 08:06:25.612578 1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted

*
* ==> kube-controller-manager [b3c183ba6b8f] <==
*
I0817 07:16:29.487027 1 ttlafterfinished_controller.go:109] Starting TTL after finished controller
I0817 07:16:29.487049 1 shared_informer.go:255] Waiting for caches to sync for TTL after finished
I0817 07:16:29.685382 1 controllermanager.go:593] Started "disruption"
I0817 07:16:29.685883 1 disruption.go:363] Starting disruption controller
I0817 07:16:29.686062 1 shared_informer.go:255] Waiting for caches to sync for disruption
I0817 07:16:29.709292 1 shared_informer.go:255] Waiting for caches to sync for resource quota
W0817 07:16:29.751914 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0817 07:16:29.761170 1 shared_informer.go:262] Caches are synced for TTL
I0817 07:16:29.782803 1 shared_informer.go:262] Caches are synced for node
I0817 07:16:29.782939 1 range_allocator.go:173] Starting range CIDR allocator
I0817 07:16:29.783081 1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
I0817 07:16:29.783188 1 shared_informer.go:262] Caches are synced for cidrallocator
I0817 07:16:29.794660 1 shared_informer.go:262] Caches are synced for TTL after finished
I0817 07:16:29.796700 1 shared_informer.go:262] Caches are synced for cronjob
I0817 07:16:29.820759 1 shared_informer.go:262] Caches are synced for namespace
I0817 07:16:29.820882 1 range_allocator.go:374] Set node minikube PodCIDR to [10.244.0.0/24]
I0817 07:16:29.821837 1 shared_informer.go:262] Caches are synced for service account
I0817 07:16:29.834188 1 shared_informer.go:262] Caches are synced for endpoint
I0817 07:16:29.835358 1 shared_informer.go:255] Waiting for caches to sync for garbage collector
I0817 07:16:29.836304 1 shared_informer.go:262] Caches are synced for crt configmap
I0817 07:16:29.839389 1 shared_informer.go:262] Caches are synced for ephemeral
I0817 07:16:29.842815 1 shared_informer.go:262] Caches are synced for stateful set
I0817 07:16:29.844381 1 shared_informer.go:262] Caches are synced for PV protection
I0817 07:16:29.852383 1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
I0817 07:16:29.860516 1 shared_informer.go:262] Caches are synced for endpoint_slice
I0817 07:16:29.861980 1 shared_informer.go:262] Caches are synced for certificate-csrapproving
I0817 07:16:29.867763 1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
I0817 07:16:29.870109 1 shared_informer.go:262] Caches are synced for expand
I0817 07:16:29.876434 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
I0817 07:16:29.879051 1 shared_informer.go:262] Caches are synced for PVC protection
I0817 07:16:29.882482 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
I0817 07:16:29.883704 1 shared_informer.go:262] Caches are synced for HPA
I0817 07:16:29.883757 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
I0817 07:16:29.885970 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0817 07:16:29.886123 1 shared_informer.go:262] Caches are synced for daemon sets
I0817 07:16:29.890862 1 shared_informer.go:262] Caches are synced for bootstrap_signer
I0817 07:16:29.901610 1 shared_informer.go:262] Caches are synced for taint
I0817 07:16:29.901873 1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone:
W0817 07:16:29.902030 1 node_lifecycle_controller.go:1014] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0817 07:16:29.902162 1 node_lifecycle_controller.go:1215] Controller detected that zone is now in state Normal.
I0817 07:16:29.902309 1 event.go:294] "Event occurred" object="minikube" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller"
I0817 07:16:29.902646 1 taint_manager.go:187] "Starting NoExecuteTaintManager"
I0817 07:16:29.905741 1 shared_informer.go:262] Caches are synced for job
I0817 07:16:29.911902 1 shared_informer.go:262] Caches are synced for GC
I0817 07:16:29.916088 1 shared_informer.go:262] Caches are synced for persistent volume
I0817 07:16:29.920146 1 shared_informer.go:262] Caches are synced for ReplicaSet
I0817 07:16:29.920176 1 shared_informer.go:262] Caches are synced for deployment
I0817 07:16:29.920196 1 shared_informer.go:262] Caches are synced for ReplicationController
I0817 07:16:29.924596 1 shared_informer.go:262] Caches are synced for attach detach
I0817 07:16:29.981756 1 shared_informer.go:262] Caches are synced for resource quota
I0817 07:16:29.987216 1 shared_informer.go:262] Caches are synced for disruption
I0817 07:16:29.987561 1 disruption.go:371] Sending events to api server.
I0817 07:16:30.010482 1 shared_informer.go:262] Caches are synced for resource quota
I0817 07:16:30.427976 1 shared_informer.go:262] Caches are synced for garbage collector
I0817 07:16:30.429615 1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0817 07:16:30.436250 1 shared_informer.go:262] Caches are synced for garbage collector
I0817 07:16:30.613308 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4pk5n"
I0817 07:16:30.691450 1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
I0817 07:16:30.797776 1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-n9gzg"
I0817 07:16:30.820209 1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-2lg9g"

*
* ==> kube-proxy [ef82ea44260e] <==
*
I0817 07:16:31.655136 1 node.go:163] Successfully retrieved node IP: 192.168.49.2
I0817 07:16:31.655424 1 server_others.go:138] "Detected node IP" address="192.168.49.2"
I0817 07:16:31.655511 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0817 07:16:31.819309 1 server_others.go:206] "Using iptables Proxier"
I0817 07:16:31.819386 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0817 07:16:31.819402 1 server_others.go:214] "Creating dualStackProxier for iptables"
I0817 07:16:31.819418 1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0817 07:16:31.819443 1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0817 07:16:31.820702 1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0817 07:16:31.821236 1 server.go:661] "Version info" version="v1.24.3"
I0817 07:16:31.821298 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0817 07:16:31.825373 1 config.go:317] "Starting service config controller"
I0817 07:16:31.826259 1 shared_informer.go:255] Waiting for caches to sync for service config
I0817 07:16:31.826306 1 config.go:226] "Starting endpoint slice config controller"
I0817 07:16:31.826315 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0817 07:16:31.834585 1 config.go:444] "Starting node config controller"
I0817 07:16:31.834612 1 shared_informer.go:255] Waiting for caches to sync for node config
I0817 07:16:31.926656 1 shared_informer.go:262] Caches are synced for endpoint slice config
I0817 07:16:31.926735 1 shared_informer.go:262] Caches are synced for service config
I0817 07:16:31.937400 1 shared_informer.go:262] Caches are synced for node config

*
* ==> kube-scheduler [70b867fbed09] <==
*
E0817 07:16:11.885383 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0817 07:16:12.698326 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0817 07:16:12.698735 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0817 07:16:12.784528 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0817 07:16:12.784655 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0817 07:16:12.802319 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0817 07:16:12.802426 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0817 07:16:12.864581 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0817 07:16:12.864932 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0817 07:16:12.883884 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0817 07:16:12.883972 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0817 07:16:12.968395 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0817 07:16:12.968525 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0817 07:16:13.053716 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0817 07:16:13.053808 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0817 07:16:13.054557 1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0817 07:16:13.054635 1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden:
User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" W0817 07:16:13.060091 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope E0817 07:16:13.060177 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope W0817 07:16:13.149275 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E0817 07:16:13.149420 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope W0817 07:16:13.166975 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope E0817 07:16:13.168738 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope W0817 07:16:13.370740 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource 
"poddisruptionbudgets" in API group "policy" at the cluster scope E0817 07:16:13.371460 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope W0817 07:16:13.382514 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0817 07:16:13.383088 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope W0817 07:16:13.394468 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E0817 07:16:13.396003 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope W0817 07:16:13.440256 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0817 07:16:13.440438 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource 
"csinodes" in API group "storage.k8s.io" at the cluster scope W0817 07:16:14.603782 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0817 07:16:14.604751 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope W0817 07:16:15.055895 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0817 07:16:15.056964 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope W0817 07:16:15.056769 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0817 07:16:15.057866 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope W0817 07:16:15.067427 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster 
scope E0817 07:16:15.068473 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope W0817 07:16:15.234973 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0817 07:16:15.236164 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope W0817 07:16:15.287315 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0817 07:16:15.287981 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope W0817 07:16:15.488792 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E0817 07:16:15.488971 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope W0817 07:16:15.545328 1 reflector.go:324] 
vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E0817 07:16:15.546120 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope W0817 07:16:15.670629 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0817 07:16:15.670716 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope W0817 07:16:15.702576 1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0817 07:16:15.703634 1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" W0817 07:16:15.771899 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User 
"system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0817 07:16:15.772201 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope W0817 07:16:15.855602 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0817 07:16:15.857147 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope W0817 07:16:16.079739 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope E0817 07:16:16.079900 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope W0817 07:16:16.200747 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope E0817 07:16:16.201236 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is 
forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope I0817 07:16:21.274821 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file * * ==> kubelet <== * -- Logs begin at Wed 2022-08-17 07:15:44 UTC, end at Wed 2022-08-17 08:10:39 UTC. -- Aug 17 07:16:19 minikube kubelet[1768]: I0817 07:16:19.088984 1768 manager.go:610] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 17 07:16:19 minikube kubelet[1768]: I0817 07:16:19.089291 1768 plugin_manager.go:114] "Starting Kubelet Plugin Manager" Aug 17 07:16:19 minikube kubelet[1768]: I0817 07:16:19.163859 1768 kubelet_network_linux.go:76] "Initialized protocol iptables rules." protocol=IPv6 Aug 17 07:16:19 minikube kubelet[1768]: I0817 07:16:19.163894 1768 status_manager.go:161] "Starting to sync pod status with apiserver" Aug 17 07:16:19 minikube kubelet[1768]: I0817 07:16:19.163916 1768 kubelet.go:1986] "Starting kubelet main sync loop" Aug 17 07:16:19 minikube kubelet[1768]: E0817 07:16:19.163970 1768 kubelet.go:2010] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Aug 17 07:16:19 minikube kubelet[1768]: I0817 07:16:19.222858 1768 kubelet_node_status.go:108] "Node was previously registered" node="minikube" Aug 17 07:16:19 minikube kubelet[1768]: I0817 07:16:19.222989 1768 kubelet_node_status.go:73] "Successfully registered node" node="minikube" Aug 17 07:16:19 minikube kubelet[1768]: I0817 07:16:19.264388 1768 topology_manager.go:200] "Topology Admit Handler" Aug 17 07:16:19 minikube kubelet[1768]: I0817 07:16:19.264544 1768 topology_manager.go:200] "Topology Admit Handler" Aug 17 07:16:19 minikube kubelet[1768]: I0817 07:16:19.264663 1768 topology_manager.go:200] "Topology Admit Handler" Aug 17 07:16:19 minikube kubelet[1768]: I0817 07:16:19.264750 1768 
topology_manager.go:200] "Topology Admit Handler" Aug 17 07:16:19 minikube kubelet[1768]: I0817 07:16:19.359902 1768 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af8a252bb89a737e9c95199d01283487-usr-local-share-ca-certificates\") pod \"kube-apiserver-minikube\" (UID: \"af8a252bb89a737e9c95199d01283487\") " pod="kube-system/kube-apiserver-minikube" Aug 17 07:16:19 minikube kubelet[1768]: I0817 07:16:19.359994 1768 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/76444121a189d8a30add20fb32ab6d4e-kubeconfig\") pod \"kube-controller-manager-minikube\" (UID: \"76444121a189d8a30add20fb32ab6d4e\") " pod="kube-system/kube-controller-manager-minikube" Aug 17 07:16:19 minikube kubelet[1768]: I0817 07:16:19.360045 1768 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/906edd533192a4db2396a938662a5271-etcd-data\") pod \"etcd-minikube\" (UID: \"906edd533192a4db2396a938662a5271\") " pod="kube-system/etcd-minikube" Aug 17 07:16:19 minikube kubelet[1768]: I0817 07:16:19.360094 1768 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af8a252bb89a737e9c95199d01283487-etc-ca-certificates\") pod \"kube-apiserver-minikube\" (UID: \"af8a252bb89a737e9c95199d01283487\") " pod="kube-system/kube-apiserver-minikube" Aug 17 07:16:19 minikube kubelet[1768]: I0817 07:16:19.360141 1768 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/76444121a189d8a30add20fb32ab6d4e-ca-certs\") pod \"kube-controller-manager-minikube\" (UID: \"76444121a189d8a30add20fb32ab6d4e\") " pod="kube-system/kube-controller-manager-minikube" 
Aug 17 07:16:19 minikube kubelet[1768]: I0817 07:16:19.360186 1768 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af8a252bb89a737e9c95199d01283487-k8s-certs\") pod \"kube-apiserver-minikube\" (UID: \"af8a252bb89a737e9c95199d01283487\") " pod="kube-system/kube-apiserver-minikube"
Aug 17 07:16:19 minikube kubelet[1768]: I0817 07:16:19.360237 1768 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af8a252bb89a737e9c95199d01283487-usr-share-ca-certificates\") pod \"kube-apiserver-minikube\" (UID: \"af8a252bb89a737e9c95199d01283487\") " pod="kube-system/kube-apiserver-minikube"
Aug 17 07:16:19 minikube kubelet[1768]: I0817 07:16:19.360285 1768 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/76444121a189d8a30add20fb32ab6d4e-k8s-certs\") pod \"kube-controller-manager-minikube\" (UID: \"76444121a189d8a30add20fb32ab6d4e\") " pod="kube-system/kube-controller-manager-minikube"
Aug 17 07:16:19 minikube kubelet[1768]: I0817 07:16:19.360375 1768 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/76444121a189d8a30add20fb32ab6d4e-usr-local-share-ca-certificates\") pod \"kube-controller-manager-minikube\" (UID: \"76444121a189d8a30add20fb32ab6d4e\") " pod="kube-system/kube-controller-manager-minikube"
Aug 17 07:16:19 minikube kubelet[1768]: I0817 07:16:19.360433 1768 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/76444121a189d8a30add20fb32ab6d4e-usr-share-ca-certificates\") pod \"kube-controller-manager-minikube\" (UID: \"76444121a189d8a30add20fb32ab6d4e\") " pod="kube-system/kube-controller-manager-minikube"
Aug 17 07:16:19 minikube kubelet[1768]: I0817 07:16:19.360481 1768 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2e95d5efbc70e877d20097c03ba4ff89-kubeconfig\") pod \"kube-scheduler-minikube\" (UID: \"2e95d5efbc70e877d20097c03ba4ff89\") " pod="kube-system/kube-scheduler-minikube"
Aug 17 07:16:19 minikube kubelet[1768]: I0817 07:16:19.360531 1768 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af8a252bb89a737e9c95199d01283487-ca-certs\") pod \"kube-apiserver-minikube\" (UID: \"af8a252bb89a737e9c95199d01283487\") " pod="kube-system/kube-apiserver-minikube"
Aug 17 07:16:19 minikube kubelet[1768]: I0817 07:16:19.360580 1768 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/76444121a189d8a30add20fb32ab6d4e-flexvolume-dir\") pod \"kube-controller-manager-minikube\" (UID: \"76444121a189d8a30add20fb32ab6d4e\") " pod="kube-system/kube-controller-manager-minikube"
Aug 17 07:16:19 minikube kubelet[1768]: I0817 07:16:19.360660 1768 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/906edd533192a4db2396a938662a5271-etcd-certs\") pod \"etcd-minikube\" (UID: \"906edd533192a4db2396a938662a5271\") " pod="kube-system/etcd-minikube"
Aug 17 07:16:19 minikube kubelet[1768]: I0817 07:16:19.360713 1768 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/76444121a189d8a30add20fb32ab6d4e-etc-ca-certificates\") pod \"kube-controller-manager-minikube\" (UID: \"76444121a189d8a30add20fb32ab6d4e\") " pod="kube-system/kube-controller-manager-minikube"
Aug 17 07:16:19 minikube kubelet[1768]: E0817 07:16:19.642073 1768 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-minikube\" already exists" pod="kube-system/kube-controller-manager-minikube"
Aug 17 07:16:19 minikube kubelet[1768]: I0817 07:16:19.913215 1768 apiserver.go:52] "Watching apiserver"
Aug 17 07:16:20 minikube kubelet[1768]: I0817 07:16:20.168147 1768 reconciler.go:157] "Reconciler: start to sync state"
Aug 17 07:16:20 minikube kubelet[1768]: E0817 07:16:20.577911 1768 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-minikube\" already exists" pod="kube-system/kube-scheduler-minikube"
Aug 17 07:16:20 minikube kubelet[1768]: E0817 07:16:20.783265 1768 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-minikube\" already exists" pod="kube-system/etcd-minikube"
Aug 17 07:16:20 minikube kubelet[1768]: E0817 07:16:20.987714 1768 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-minikube\" already exists" pod="kube-system/kube-apiserver-minikube"
Aug 17 07:16:29 minikube kubelet[1768]: I0817 07:16:29.851761 1768 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Aug 17 07:16:29 minikube kubelet[1768]: I0817 07:16:29.853339 1768 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Aug 17 07:16:30 minikube kubelet[1768]: I0817 07:16:30.625410 1768 topology_manager.go:200] "Topology Admit Handler"
Aug 17 07:16:30 minikube kubelet[1768]: I0817 07:16:30.670867 1768 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25e8b7d3-bf07-4471-89fa-ee46229c2f1e-lib-modules\") pod \"kube-proxy-4pk5n\" (UID: \"25e8b7d3-bf07-4471-89fa-ee46229c2f1e\") " pod="kube-system/kube-proxy-4pk5n"
Aug 17 07:16:30 minikube kubelet[1768]: I0817 07:16:30.671000 1768 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2692p\" (UniqueName: \"kubernetes.io/projected/25e8b7d3-bf07-4471-89fa-ee46229c2f1e-kube-api-access-2692p\") pod \"kube-proxy-4pk5n\" (UID: \"25e8b7d3-bf07-4471-89fa-ee46229c2f1e\") " pod="kube-system/kube-proxy-4pk5n"
Aug 17 07:16:30 minikube kubelet[1768]: I0817 07:16:30.671048 1768 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25e8b7d3-bf07-4471-89fa-ee46229c2f1e-xtables-lock\") pod \"kube-proxy-4pk5n\" (UID: \"25e8b7d3-bf07-4471-89fa-ee46229c2f1e\") " pod="kube-system/kube-proxy-4pk5n"
Aug 17 07:16:30 minikube kubelet[1768]: I0817 07:16:30.671089 1768 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/25e8b7d3-bf07-4471-89fa-ee46229c2f1e-kube-proxy\") pod \"kube-proxy-4pk5n\" (UID: \"25e8b7d3-bf07-4471-89fa-ee46229c2f1e\") " pod="kube-system/kube-proxy-4pk5n"
Aug 17 07:16:30 minikube kubelet[1768]: I0817 07:16:30.802398 1768 topology_manager.go:200] "Topology Admit Handler"
Aug 17 07:16:30 minikube kubelet[1768]: I0817 07:16:30.829516 1768 topology_manager.go:200] "Topology Admit Handler"
Aug 17 07:16:30 minikube kubelet[1768]: I0817 07:16:30.975443 1768 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46a38aa2-9f21-4558-a1bc-563a5d36043e-config-volume\") pod \"coredns-6d4b75cb6d-2lg9g\" (UID: \"46a38aa2-9f21-4558-a1bc-563a5d36043e\") " pod="kube-system/coredns-6d4b75cb6d-2lg9g"
Aug 17 07:16:30 minikube kubelet[1768]: I0817 07:16:30.976773 1768 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/954af7a8-ec43-4d51-a6c3-5838fadda2cc-config-volume\") pod \"coredns-6d4b75cb6d-n9gzg\" (UID: \"954af7a8-ec43-4d51-a6c3-5838fadda2cc\") " pod="kube-system/coredns-6d4b75cb6d-n9gzg"
Aug 17 07:16:30 minikube kubelet[1768]: I0817 07:16:30.977014 1768 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9674m\" (UniqueName: \"kubernetes.io/projected/46a38aa2-9f21-4558-a1bc-563a5d36043e-kube-api-access-9674m\") pod \"coredns-6d4b75cb6d-2lg9g\" (UID: \"46a38aa2-9f21-4558-a1bc-563a5d36043e\") " pod="kube-system/coredns-6d4b75cb6d-2lg9g"
Aug 17 07:16:30 minikube kubelet[1768]: I0817 07:16:30.977151 1768 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hcq4\" (UniqueName: \"kubernetes.io/projected/954af7a8-ec43-4d51-a6c3-5838fadda2cc-kube-api-access-9hcq4\") pod \"coredns-6d4b75cb6d-n9gzg\" (UID: \"954af7a8-ec43-4d51-a6c3-5838fadda2cc\") " pod="kube-system/coredns-6d4b75cb6d-n9gzg"
Aug 17 07:16:31 minikube kubelet[1768]: I0817 07:16:31.878947 1768 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="1d9c28e7a3aa561486477431f17e96906ec48c5a48e3a89bc08e9f89feb316d5"
Aug 17 07:19:26 minikube kubelet[1768]: I0817 07:19:26.648808 1768 topology_manager.go:200] "Topology Admit Handler"
Aug 17 07:19:26 minikube kubelet[1768]: I0817 07:19:26.672707 1768 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddhcb\" (UniqueName: \"kubernetes.io/projected/fe645597-f938-45fa-8c97-0f864abd00a4-kube-api-access-ddhcb\") pod \"storage-provisioner\" (UID: \"fe645597-f938-45fa-8c97-0f864abd00a4\") " pod="kube-system/storage-provisioner"
Aug 17 07:19:26 minikube kubelet[1768]: I0817 07:19:26.672801 1768 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/fe645597-f938-45fa-8c97-0f864abd00a4-tmp\") pod \"storage-provisioner\" (UID: \"fe645597-f938-45fa-8c97-0f864abd00a4\") " pod="kube-system/storage-provisioner"
Aug 17 07:21:18 minikube kubelet[1768]: W0817 07:21:18.996579 1768 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Aug 17 07:26:18 minikube kubelet[1768]: W0817 07:26:18.974364 1768 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Aug 17 07:31:18 minikube kubelet[1768]: W0817 07:31:18.957313 1768 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Aug 17 07:36:19 minikube kubelet[1768]: W0817 07:36:19.022108 1768 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Aug 17 07:41:19 minikube kubelet[1768]: W0817 07:41:19.060853 1768 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Aug 17 07:46:19 minikube kubelet[1768]: W0817 07:46:19.047022 1768 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Aug 17 07:51:19 minikube kubelet[1768]: W0817 07:51:19.035971 1768 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Aug 17 07:56:19 minikube kubelet[1768]: W0817 07:56:19.036145 1768 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Aug 17 08:01:19 minikube kubelet[1768]: W0817 08:01:19.050440 1768 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Aug 17 08:06:19 minikube kubelet[1768]: W0817 08:06:19.064700 1768 sysinfo.go:203] Nodes topology is not available, providing CPU topology
*
* ==> storage-provisioner [0e7b7ae46342] <==
*
I0817 07:19:27.511837 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0817 07:19:27.536947 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0817 07:19:27.537456 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0817 07:19:27.555385 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0817 07:19:27.555922 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_minikube_5ed2fdb0-e9ee-4e0a-9341-37c94848ecec!
I0817 07:19:27.557305 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3a724f43-1279-4e36-a605-b5f6dd4f5d6e", APIVersion:"v1", ResourceVersion:"437", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_5ed2fdb0-e9ee-4e0a-9341-37c94848ecec became leader
I0817 07:19:27.657409 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_minikube_5ed2fdb0-e9ee-4e0a-9341-37c94848ecec!
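A note on triage: the kube-scheduler "forbidden" messages above look like the usual transient RBAC denials seen while the control plane is still bootstrapping, and they stop once the scheduler's final "Caches are synced" entry lands at 07:16:21. A quick way to sanity-check a dump like this is to tally the denials per resource and confirm no single resource dominates long after startup. The sketch below is illustrative only: the heredoc holds a few representative lines copied from the scheduler section, and the `/tmp/kube-scheduler.log` path is an assumption, not part of this report.

```shell
# Tally the scheduler's RBAC denials per resource from a pasted log excerpt.
# Sample lines are copied from the kube-scheduler section above.
cat > /tmp/kube-scheduler.log <<'EOF'
E0817 07:16:12.698735 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0817 07:16:15.772201 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0817 07:16:12.784655 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
EOF
# One count per denied resource, most frequent first.
grep -o 'cannot list resource "[^"]*"' /tmp/kube-scheduler.log | sort | uniq -c | sort -rn
```

If the denials persisted past startup, checking the live cluster's RBAC with `kubectl auth can-i list nodes --as=system:kube-scheduler` would be a reasonable next step; here the even spread across all informers is consistent with startup ordering rather than a broken ClusterRole.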