Podman driver does not work on Fedora 34 #11760
Comments
Can only see a timeout? Is SELinux still enabled?
I think SELinux is enabled. Would it matter?
@owenthereal unfortunately we don't have Fedora test infra in our test fleet.
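For anyone chasing the SELinux angle from the exchange above: whether SELinux is enforcing is quick to check, and a temporary switch to permissive mode is a cheap way to rule it in or out. A hedged sketch using stock Fedora tooling; the profile name wireguardians is taken from the log below:

getenforce                      # prints Enforcing, Permissive, or Disabled
sudo setenforce 0               # switch to Permissive until next boot (test only)
minikube delete -p wireguardians && minikube start --driver=podman -p wireguardians
sudo setenforce 1               # restore Enforcing afterwards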
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
I'll go ahead and close this for now. If you try again and still have issues, feel free to reopen with more details.
Steps to reproduce the issue:
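The reporter's exact steps were not preserved in this copy. Judging from the log below (minikube v1.21.0, podman driver, profile wireguardians), the failing invocation was presumably along these lines; this is a hypothetical reconstruction, and any extra flags the reporter used are unknown:

minikube start --driver=podman -p wireguardians   # reconstructed from the log; exact flags not preserved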
Full output of minikube logs command:

Running on machine: oinux
Binary: Built with gc go1.16.4 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0624 11:47:05.631890 94616 out.go:291] Setting OutFile to fd 1 ...
I0624 11:47:05.631963 94616 out.go:343] isatty.IsTerminal(1) = true
I0624 11:47:05.631973 94616 out.go:304] Setting ErrFile to fd 2...
I0624 11:47:05.631982 94616 out.go:343] isatty.IsTerminal(2) = true
I0624 11:47:05.632078 94616 root.go:316] Updating PATH: /home/owen/.minikube/bin
I0624 11:47:05.632290 94616 out.go:298] Setting JSON to false
I0624 11:47:05.645460 94616 start.go:111] hostinfo: {"hostname":"oinux","uptime":3190,"bootTime":1624557235,"procs":348,"os":"linux","platform":"fedora","platformFamily":"fedora","platformVersion":"34","kernelVersion":"5.12.11-300.fc34.x86_64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostId":"91634aa4-8b91-4b5c-9bb2-c663e905d594"}
I0624 11:47:05.645531 94616 start.go:121] virtualization: kvm host
I0624 11:47:05.649054 94616 out.go:170] 😄 [wireguardians] minikube v1.21.0 on Fedora 34
I0624 11:47:05.649281 94616 driver.go:335] Setting default libvirt URI to qemu:///system
I0624 11:47:05.649282 94616 notify.go:169] Checking for updates...
I0624 11:47:05.809572 94616 podman.go:121] podman version: 3.2.1
I0624 11:47:05.813507 94616 out.go:170] ✨ Using the podman driver based on user configuration
I0624 11:47:05.813586 94616 start.go:279] selected driver: podman
I0624 11:47:05.813621   94616 start.go:752] validating driver "podman" against <nil>
I0624 11:47:05.813692   94616 start.go:763] status for podman: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I0624 11:47:05.814175 94616 cli_runner.go:115] Run: sudo -n podman system info --format json
I0624 11:47:05.965489 94616 info.go:281] podman info: {Host:{BuildahVersion:1.21.0 CgroupVersion:v1 Conmon:{Package:conmon-2.0.27-2.fc34.x86_64 Path:/usr/bin/conmon Version:conmon version 2.0.27, commit: } Distribution:{Distribution:fedora Version:34} MemFree:8831356928 MemTotal:16523653120 OCIRuntime:{Name:crun Package:crun-0.20.1-1.fc34.x86_64 Path:/usr/bin/crun Version:crun version 0.20.1
commit: 0d42f1109fd73548f44b01b3e84d04a279e99d2e
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL} SwapFree:16924008448 SwapTotal:16924008448 Arch:amd64 Cpus:8 Eventlogger:journald Hostname:oinux Kernel:5.12.11-300.fc34.x86_64 Os:linux Rootless:false Uptime:53m 9.98s} Registries:{Search:[registry.fedoraproject.org registry.access.redhat.com docker.io quay.io]} Store:{ConfigFile:/etc/containers/storage.conf ContainerStore:{Number:0} GraphDriverName:overlay GraphOptions:{} GraphRoot:/var/lib/containers/storage GraphStatus:{BackingFilesystem:extfs NativeOverlayDiff:false SupportsDType:true UsingMetacopy:true} ImageStore:{Number:1} RunRoot:/var/run/containers/storage VolumePath:/var/lib/containers/storage/volumes}}
I0624 11:47:05.966571 94616 start_flags.go:259] no existing cluster config was found, will generate one from the flags
I0624 11:47:05.980552 94616 start_flags.go:311] Using suggested 3900MB memory alloc based on sys=15758MB, container=15758MB
I0624 11:47:05.980902 94616 start_flags.go:638] Wait components to verify : map[apiserver:true system_pods:true]
I0624 11:47:05.980973 94616 cni.go:93] Creating CNI manager for ""
I0624 11:47:05.981022 94616 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0624 11:47:05.981053 94616 start_flags.go:273] config:
{Name:wireguardians KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:3900 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.7 ClusterName:wireguardians Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0624 11:47:05.985148 94616 out.go:170] 👍 Starting control plane node wireguardians in cluster wireguardians
I0624 11:47:05.985256 94616 cache.go:115] Beginning downloading kic base image for podman with docker
I0624 11:47:05.988583 94616 out.go:170] 🚜 Pulling base image ...
I0624 11:47:05.988698 94616 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime docker
I0624 11:47:05.988790 94616 preload.go:125] Found local preload: /home/owen/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4
I0624 11:47:05.988820 94616 cache.go:54] Caching tarball of preloaded images
I0624 11:47:05.988865 94616 cache.go:134] Downloading gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 to local cache
I0624 11:47:05.989331 94616 preload.go:166] Found /home/owen/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0624 11:47:05.989342 94616 image.go:58] Checking for gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 in local cache directory
I0624 11:47:05.989424 94616 cache.go:57] Finished verifying existence of preloaded tar for v1.20.7 on docker
I0624 11:47:05.989461 94616 image.go:61] Found gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 in local cache directory, skipping pull
I0624 11:47:05.989493 94616 image.go:102] gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 exists in cache, skipping pull
I0624 11:47:05.989556 94616 cache.go:137] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 as a tarball
I0624 11:47:05.990435 94616 profile.go:148] Saving config to /home/owen/.minikube/profiles/wireguardians/config.json ...
I0624 11:47:05.990530   94616 lock.go:36] WriteFile acquiring /home/owen/.minikube/profiles/wireguardians/config.json: {Name:mk02891f14282352d722bcd04b397f9889d6bf44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
E0624 11:47:05.990946 94616 cache.go:197] Error downloading kic artifacts: not yet implemented, see issue podman: load kic base image from cache if available for offline mode #8426
I0624 11:47:05.990981 94616 cache.go:202] Successfully downloaded all kic artifacts
I0624 11:47:05.991027   94616 start.go:313] acquiring machines lock for wireguardians: {Name:mka27a057e80ea0e4525b09f607bfe564df50587 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0624 11:47:05.991197 94616 start.go:317] acquired machines lock for "wireguardians" in 119.471µs
I0624 11:47:05.991258 94616 start.go:89] Provisioning new machine with config: &{Name:wireguardians KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:3900 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.7 ClusterName:wireguardians Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}
I0624 11:47:05.991518 94616 start.go:126] createHost starting for "" (driver="podman")
I0624 11:47:05.995099 94616 out.go:197] 🔥 Creating podman container (CPUs=2, Memory=3900MB) ...
I0624 11:47:05.995273 94616 start.go:160] libmachine.API.Create for "wireguardians" (driver="podman")
I0624 11:47:05.995297 94616 client.go:168] LocalClient.Create starting
I0624 11:47:05.995366 94616 main.go:128] libmachine: Reading certificate data from /home/owen/.minikube/certs/ca.pem
I0624 11:47:05.995406 94616 main.go:128] libmachine: Decoding PEM data...
I0624 11:47:05.995434 94616 main.go:128] libmachine: Parsing certificate...
I0624 11:47:05.995595 94616 main.go:128] libmachine: Reading certificate data from /home/owen/.minikube/certs/cert.pem
I0624 11:47:05.995617 94616 main.go:128] libmachine: Decoding PEM data...
I0624 11:47:05.995641 94616 main.go:128] libmachine: Parsing certificate...
I0624 11:47:05.996205 94616 cli_runner.go:115] Run: sudo -n podman network inspect wireguardians --format "{{range .plugins}}{{if eq .type "bridge"}}{{(index (index .ipam.ranges 0) 0).subnet}},{{(index (index .ipam.ranges 0) 0).gateway}}{{end}}{{end}}"
I0624 11:47:06.154081 94616 network_create.go:67] Found existing network {name:wireguardians subnet:0xc000f0cb10 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:0}
I0624 11:47:06.154147 94616 kic.go:106] calculated static IP "192.168.49.2" for the "wireguardians" container
I0624 11:47:06.154425 94616 cli_runner.go:115] Run: sudo -n podman ps -a --format {{.Names}}
I0624 11:47:06.320135 94616 cli_runner.go:115] Run: sudo -n podman volume create wireguardians --label name.minikube.sigs.k8s.io=wireguardians --label created_by.minikube.sigs.k8s.io=true
I0624 11:47:06.502212 94616 oci.go:102] Successfully created a podman volume wireguardians
I0624 11:47:06.502646 94616 cli_runner.go:115] Run: sudo -n podman run --rm --name wireguardians-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=wireguardians --entrypoint /usr/bin/test -v wireguardians:/var gcr.io/k8s-minikube/kicbase:v0.0.23 -d /var/lib
I0624 11:47:06.985791 94616 oci.go:106] Successfully prepared a podman volume wireguardians
W0624 11:47:06.985938 94616 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0624 11:47:06.985967 94616 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime docker
W0624 11:47:06.985988 94616 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
I0624 11:47:06.986055 94616 kic.go:179] Starting extracting preloaded images to volume ...
I0624 11:47:06.986551 94616 cli_runner.go:115] Run: sudo -n podman info --format "'{{json .SecurityOptions}}'"
I0624 11:47:06.986551 94616 cli_runner.go:115] Run: sudo -n podman run --rm --entrypoint /usr/bin/tar --security-opt label=disable -v /home/owen/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v wireguardians:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.23 -I lz4 -xf /preloaded.tar -C /extractDir
W0624 11:47:07.097268 94616 cli_runner.go:162] sudo -n podman info --format "'{{json .SecurityOptions}}'" returned with exit code 125
I0624 11:47:07.097397 94616 cli_runner.go:115] Run: sudo -n podman run --cgroup-manager cgroupfs -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname wireguardians --name wireguardians --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=wireguardians --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=wireguardians --network wireguardians --ip 192.168.49.2 --volume wireguardians:/var:exec --cpus=2 -e container=podman --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.23
I0624 11:47:07.453977 94616 cli_runner.go:115] Run: sudo -n podman container inspect wireguardians --format={{.State.Running}}
I0624 11:47:07.608811 94616 cli_runner.go:115] Run: sudo -n podman container inspect wireguardians --format={{.State.Status}}
I0624 11:47:07.760406 94616 cli_runner.go:115] Run: sudo -n podman exec wireguardians stat /var/lib/dpkg/alternatives/iptables
I0624 11:47:08.017595 94616 oci.go:278] the created container "wireguardians" has a running status.
I0624 11:47:08.017618 94616 kic.go:210] Creating ssh key for kic: /home/owen/.minikube/machines/wireguardians/id_rsa...
I0624 11:47:08.211246 94616 vm_assets.go:98] NewFileAsset: /home/owen/.minikube/machines/wireguardians/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0624 11:47:08.211277 94616 kic_runner.go:188] podman (temp): /home/owen/.minikube/machines/wireguardians/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0624 11:47:08.211844 94616 kic_runner.go:241] Run: /usr/bin/sudo -n podman cp /tmp/tmpf-memory-asset712003108 wireguardians:/home/docker/.ssh/authorized_keys
I0624 11:47:08.458197 94616 cli_runner.go:115] Run: sudo -n podman container inspect wireguardians --format={{.State.Status}}
I0624 11:47:08.607180 94616 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0624 11:47:08.607222 94616 kic_runner.go:115] Args: [sudo -n podman exec --privileged wireguardians chown docker:docker /home/docker/.ssh/authorized_keys]
I0624 11:47:09.555974 94616 cli_runner.go:168] Completed: sudo -n podman run --rm --entrypoint /usr/bin/tar --security-opt label=disable -v /home/owen/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v wireguardians:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.23 -I lz4 -xf /preloaded.tar -C /extractDir: (2.569266484s)
I0624 11:47:09.555997 94616 kic.go:188] duration metric: took 2.569951 seconds to extract preloaded images to volume
I0624 11:47:09.556100 94616 cli_runner.go:115] Run: sudo -n podman container inspect wireguardians --format={{.State.Status}}
I0624 11:47:09.704640 94616 machine.go:88] provisioning docker machine ...
I0624 11:47:09.704690 94616 ubuntu.go:169] provisioning hostname "wireguardians"
I0624 11:47:09.704887 94616 cli_runner.go:115] Run: sudo -n podman version --format {{.Version}}
I0624 11:47:09.862201 94616 cli_runner.go:115] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" wireguardians
I0624 11:47:10.032906 94616 main.go:128] libmachine: Using SSH client type: native
I0624 11:47:10.033485   94616 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil> [] 0s} 127.0.0.1 38825 <nil> <nil>}
I0624 11:47:10.033556 94616 main.go:128] libmachine: About to run SSH command:
sudo hostname wireguardians && echo "wireguardians" | sudo tee /etc/hostname
I0624 11:47:10.220118   94616 main.go:128] libmachine: SSH cmd err, output: <nil>: wireguardians
I0624 11:47:10.220492 94616 cli_runner.go:115] Run: sudo -n podman version --format {{.Version}}
I0624 11:47:10.397711 94616 cli_runner.go:115] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" wireguardians
I0624 11:47:10.578285 94616 main.go:128] libmachine: Using SSH client type: native
I0624 11:47:10.578591   94616 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil> [] 0s} 127.0.0.1 38825 <nil> <nil>}
I0624 11:47:10.578655 94616 main.go:128] libmachine: About to run SSH command:
I0624 11:47:10.706239   94616 main.go:128] libmachine: SSH cmd err, output: <nil>:
I0624 11:47:10.706317 94616 ubuntu.go:175] set auth options {CertDir:/home/owen/.minikube CaCertPath:/home/owen/.minikube/certs/ca.pem CaPrivateKeyPath:/home/owen/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/owen/.minikube/machines/server.pem ServerKeyPath:/home/owen/.minikube/machines/server-key.pem ClientKeyPath:/home/owen/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/owen/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/owen/.minikube}
I0624 11:47:10.706417 94616 ubuntu.go:177] setting up certificates
I0624 11:47:10.706442 94616 provision.go:83] configureAuth start
I0624 11:47:10.706784 94616 cli_runner.go:115] Run: sudo -n podman container inspect -f {{.NetworkSettings.IPAddress}} wireguardians
I0624 11:47:10.892795 94616 cli_runner.go:115] Run: sudo -n podman container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" wireguardians
I0624 11:47:11.040614 94616 provision.go:137] copyHostCerts
I0624 11:47:11.040703 94616 vm_assets.go:98] NewFileAsset: /home/owen/.minikube/certs/ca.pem -> /home/owen/.minikube/ca.pem
I0624 11:47:11.040787 94616 exec_runner.go:145] found /home/owen/.minikube/ca.pem, removing ...
I0624 11:47:11.040812 94616 exec_runner.go:190] rm: /home/owen/.minikube/ca.pem
I0624 11:47:11.040961 94616 exec_runner.go:152] cp: /home/owen/.minikube/certs/ca.pem --> /home/owen/.minikube/ca.pem (1070 bytes)
I0624 11:47:11.041207 94616 vm_assets.go:98] NewFileAsset: /home/owen/.minikube/certs/cert.pem -> /home/owen/.minikube/cert.pem
I0624 11:47:11.041286 94616 exec_runner.go:145] found /home/owen/.minikube/cert.pem, removing ...
I0624 11:47:11.041318 94616 exec_runner.go:190] rm: /home/owen/.minikube/cert.pem
I0624 11:47:11.041455 94616 exec_runner.go:152] cp: /home/owen/.minikube/certs/cert.pem --> /home/owen/.minikube/cert.pem (1115 bytes)
I0624 11:47:11.041645 94616 vm_assets.go:98] NewFileAsset: /home/owen/.minikube/certs/key.pem -> /home/owen/.minikube/key.pem
I0624 11:47:11.041709 94616 exec_runner.go:145] found /home/owen/.minikube/key.pem, removing ...
I0624 11:47:11.041737 94616 exec_runner.go:190] rm: /home/owen/.minikube/key.pem
I0624 11:47:11.041830 94616 exec_runner.go:152] cp: /home/owen/.minikube/certs/key.pem --> /home/owen/.minikube/key.pem (1675 bytes)
I0624 11:47:11.041985 94616 provision.go:111] generating server cert: /home/owen/.minikube/machines/server.pem ca-key=/home/owen/.minikube/certs/ca.pem private-key=/home/owen/.minikube/certs/ca-key.pem org=owen.wireguardians san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube wireguardians]
I0624 11:47:11.353695 94616 provision.go:171] copyRemoteCerts
I0624 11:47:11.353759 94616 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0624 11:47:11.353822 94616 cli_runner.go:115] Run: sudo -n podman version --format {{.Version}}
I0624 11:47:11.502704 94616 cli_runner.go:115] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" wireguardians
I0624 11:47:11.658937 94616 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38825 SSHKeyPath:/home/owen/.minikube/machines/wireguardians/id_rsa Username:docker}
I0624 11:47:11.777079 94616 vm_assets.go:98] NewFileAsset: /home/owen/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0624 11:47:11.777155 94616 ssh_runner.go:316] scp /home/owen/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0624 11:47:11.801275 94616 vm_assets.go:98] NewFileAsset: /home/owen/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0624 11:47:11.801317 94616 ssh_runner.go:316] scp /home/owen/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1070 bytes)
I0624 11:47:11.819627 94616 vm_assets.go:98] NewFileAsset: /home/owen/.minikube/machines/server.pem -> /etc/docker/server.pem
I0624 11:47:11.819728 94616 ssh_runner.go:316] scp /home/owen/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0624 11:47:11.836683 94616 provision.go:86] duration metric: configureAuth took 1.130223836s
I0624 11:47:11.836703 94616 ubuntu.go:193] setting minikube options for container-runtime
I0624 11:47:11.836910 94616 cli_runner.go:115] Run: sudo -n podman version --format {{.Version}}
I0624 11:47:11.988428 94616 cli_runner.go:115] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" wireguardians
I0624 11:47:12.140846 94616 main.go:128] libmachine: Using SSH client type: native
I0624 11:47:12.141319   94616 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil> [] 0s} 127.0.0.1 38825 <nil> <nil>}
I0624 11:47:12.141405 94616 main.go:128] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0624 11:47:12.323575   94616 main.go:128] libmachine: SSH cmd err, output: <nil>: overlay
I0624 11:47:12.323653 94616 ubuntu.go:71] root file system type: overlay
I0624 11:47:12.324248 94616 provision.go:308] Updating docker unit: /lib/systemd/system/docker.service ...
I0624 11:47:12.324588 94616 cli_runner.go:115] Run: sudo -n podman version --format {{.Version}}
I0624 11:47:12.509700 94616 cli_runner.go:115] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" wireguardians
I0624 11:47:12.694622 94616 main.go:128] libmachine: Using SSH client type: native
I0624 11:47:12.695040   94616 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil> [] 0s} 127.0.0.1 38825 <nil> <nil>}
I0624 11:47:12.695355 94616 main.go:128] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=podman --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0624 11:47:12.886564   94616 main.go:128] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=podman --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0624 11:47:12.887012 94616 cli_runner.go:115] Run: sudo -n podman version --format {{.Version}}
I0624 11:47:13.057904 94616 cli_runner.go:115] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" wireguardians
I0624 11:47:13.216054 94616 main.go:128] libmachine: Using SSH client type: native
I0624 11:47:13.216238   94616 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil> [] 0s} 127.0.0.1 38825 <nil> <nil>}
I0624 11:47:13.216262 94616 main.go:128] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0624 11:47:14.078161   94616 main.go:128] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2021-06-02 11:54:50.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2021-06-24 18:47:12.881389670 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
+BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=podman --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0624 11:47:14.078236   94616 machine.go:91] provisioned docker machine in 4.373564721s
I0624 11:47:14.078248 94616 client.go:171] LocalClient.Create took 8.082944021s
I0624 11:47:14.078266 94616 start.go:168] duration metric: libmachine.API.Create for "wireguardians" took 8.082990342s
I0624 11:47:14.078277 94616 start.go:267] post-start starting for "wireguardians" (driver="podman")
I0624 11:47:14.078286 94616 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0624 11:47:14.078365 94616 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0624 11:47:14.078466 94616 cli_runner.go:115] Run: sudo -n podman version --format {{.Version}}
I0624 11:47:14.231740 94616 cli_runner.go:115] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" wireguardians
I0624 11:47:14.423951 94616 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38825 SSHKeyPath:/home/owen/.minikube/machines/wireguardians/id_rsa Username:docker}
I0624 11:47:14.536483 94616 ssh_runner.go:149] Run: cat /etc/os-release
I0624 11:47:14.544149 94616 main.go:128] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0624 11:47:14.544222 94616 main.go:128] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0624 11:47:14.544260 94616 main.go:128] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0624 11:47:14.544302 94616 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0624 11:47:14.544343 94616 filesync.go:126] Scanning /home/owen/.minikube/addons for local assets ...
I0624 11:47:14.544502 94616 filesync.go:126] Scanning /home/owen/.minikube/files for local assets ...
I0624 11:47:14.544583 94616 start.go:270] post-start completed in 466.291198ms
I0624 11:47:14.545525 94616 cli_runner.go:115] Run: sudo -n podman container inspect -f {{.NetworkSettings.IPAddress}} wireguardians
I0624 11:47:14.688885 94616 cli_runner.go:115] Run: sudo -n podman container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" wireguardians
I0624 11:47:14.844779 94616 profile.go:148] Saving config to /home/owen/.minikube/profiles/wireguardians/config.json ...
I0624 11:47:14.845497 94616 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0624 11:47:14.845792 94616 cli_runner.go:115] Run: sudo -n podman version --format {{.Version}}
I0624 11:47:15.027782 94616 cli_runner.go:115] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" wireguardians
I0624 11:47:15.222338 94616 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38825 SSHKeyPath:/home/owen/.minikube/machines/wireguardians/id_rsa Username:docker}
I0624 11:47:15.352067 94616 start.go:129] duration metric: createHost completed in 9.360517959s
I0624 11:47:15.352116 94616 start.go:80] releasing machines lock for "wireguardians", held for 9.360885501s
I0624 11:47:15.352484 94616 cli_runner.go:115] Run: sudo -n podman container inspect -f {{.NetworkSettings.IPAddress}} wireguardians
I0624 11:47:15.500860 94616 cli_runner.go:115] Run: sudo -n podman container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" wireguardians
I0624 11:47:15.650560 94616 ssh_runner.go:149] Run: systemctl --version
I0624 11:47:15.650601 94616 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
I0624 11:47:15.650655 94616 cli_runner.go:115] Run: sudo -n podman version --format {{.Version}}
I0624 11:47:15.650692 94616 cli_runner.go:115] Run: sudo -n podman version --format {{.Version}}
I0624 11:47:15.804457 94616 cli_runner.go:115] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" wireguardians
I0624 11:47:15.857093 94616 cli_runner.go:115] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" wireguardians
I0624 11:47:15.948611 94616 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38825 SSHKeyPath:/home/owen/.minikube/machines/wireguardians/id_rsa Username:docker}
I0624 11:47:16.003164 94616 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38825 SSHKeyPath:/home/owen/.minikube/machines/wireguardians/id_rsa Username:docker}
I0624 11:47:16.092365 94616 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
I0624 11:47:16.102741 94616 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0624 11:47:16.112185 94616 cruntime.go:225] skipping containerd shutdown because we are bound to it
I0624 11:47:16.112252 94616 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
I0624 11:47:16.120487 94616 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0624 11:47:16.130548 94616 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
I0624 11:47:16.190916 94616 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
I0624 11:47:16.249787 94616 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0624 11:47:16.258288 94616 ssh_runner.go:149] Run: sudo systemctl daemon-reload
I0624 11:47:16.316322 94616 ssh_runner.go:149] Run: sudo systemctl start docker
I0624 11:47:16.324927 94616 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
I0624 11:47:16.367475 94616 out.go:197] 🐳 Preparing Kubernetes v1.20.7 on Docker 20.10.7 ...
I0624 11:47:16.367645 94616 cli_runner.go:115] Run: sudo -n podman container inspect --format {{.NetworkSettings.Gateway}} wireguardians
I0624 11:47:16.520691 94616 cli_runner.go:115] Run: sudo -n podman container inspect --format "
{{ if index .NetworkSettings.Networks "wireguardians"}}
{{(index .NetworkSettings.Networks "wireguardians").Gateway}}
{{ end }}
" wireguardians
I0624 11:47:16.711439 94616 ssh_runner.go:149] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0624 11:47:16.714988   94616 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0624 11:47:16.732000 94616 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime docker
I0624 11:47:16.732180 94616 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0624 11:47:16.768652 94616 docker.go:535] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.20.7
k8s.gcr.io/kube-apiserver:v1.20.7
k8s.gcr.io/kube-controller-manager:v1.20.7
k8s.gcr.io/kube-scheduler:v1.20.7
gcr.io/k8s-minikube/storage-provisioner:v5
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2
-- /stdout --
I0624 11:47:16.768681 94616 docker.go:466] Images already preloaded, skipping extraction
I0624 11:47:16.768757 94616 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0624 11:47:16.797968 94616 docker.go:535] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.20.7
k8s.gcr.io/kube-apiserver:v1.20.7
k8s.gcr.io/kube-controller-manager:v1.20.7
k8s.gcr.io/kube-scheduler:v1.20.7
gcr.io/k8s-minikube/storage-provisioner:v5
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2
-- /stdout --
I0624 11:47:16.797991 94616 cache_images.go:74] Images are preloaded, skipping loading
I0624 11:47:16.798056 94616 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
I0624 11:47:16.877734 94616 cni.go:93] Creating CNI manager for ""
I0624 11:47:16.877755 94616 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0624 11:47:16.877768 94616 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0624 11:47:16.877785 94616 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.20.7 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:wireguardians NodeName:wireguardians DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0624 11:47:16.877933 94616 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: "wireguardians"
kubeletExtraArgs:
node-ip: 192.168.49.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.7
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%!"(MISSING)
nodefs.inodesFree: "0%!"(MISSING)
imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
I0624 11:47:16.878065 94616 kubeadm.go:909] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.7/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=wireguardians --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.20.7 ClusterName:wireguardians Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0624 11:47:16.878157 94616 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.7
I0624 11:47:16.883462 94616 binaries.go:44] Found k8s binaries, skipping transfer
I0624 11:47:16.883530 94616 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0624 11:47:16.888721 94616 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (339 bytes)
I0624 11:47:16.898402 94616 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0624 11:47:16.908207 94616 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1872 bytes)
I0624 11:47:16.918395 94616 ssh_runner.go:149] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0624 11:47:16.920530   94616 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0624 11:47:16.927477 94616 certs.go:52] Setting up /home/owen/.minikube/profiles/wireguardians for IP: 192.168.49.2
I0624 11:47:16.927522 94616 certs.go:179] skipping minikubeCA CA generation: /home/owen/.minikube/ca.key
I0624 11:47:16.927536 94616 certs.go:179] skipping proxyClientCA CA generation: /home/owen/.minikube/proxy-client-ca.key
I0624 11:47:16.927586 94616 certs.go:294] generating minikube-user signed cert: /home/owen/.minikube/profiles/wireguardians/client.key
I0624 11:47:16.927600 94616 crypto.go:69] Generating cert /home/owen/.minikube/profiles/wireguardians/client.crt with IP's: []
I0624 11:47:17.099505 94616 crypto.go:157] Writing cert to /home/owen/.minikube/profiles/wireguardians/client.crt ...
I0624 11:47:17.099521   94616 lock.go:36] WriteFile acquiring /home/owen/.minikube/profiles/wireguardians/client.crt: {Name:mk2d4c100097f28d05d2cc9ca7567e7a1456304e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0624 11:47:17.099661 94616 crypto.go:165] Writing key to /home/owen/.minikube/profiles/wireguardians/client.key ...
I0624 11:47:17.099669   94616 lock.go:36] WriteFile acquiring /home/owen/.minikube/profiles/wireguardians/client.key: {Name:mke3f20b884488d03476442fcf3c18861aa72b04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0624 11:47:17.099720 94616 certs.go:294] generating minikube signed cert: /home/owen/.minikube/profiles/wireguardians/apiserver.key.dd3b5fb2
I0624 11:47:17.099726 94616 crypto.go:69] Generating cert /home/owen/.minikube/profiles/wireguardians/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0624 11:47:17.207650 94616 crypto.go:157] Writing cert to /home/owen/.minikube/profiles/wireguardians/apiserver.crt.dd3b5fb2 ...
I0624 11:47:17.207672   94616 lock.go:36] WriteFile acquiring /home/owen/.minikube/profiles/wireguardians/apiserver.crt.dd3b5fb2: {Name:mk281e3593b2314bafcb2d8d755c9450701bc49b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0624 11:47:17.207784 94616 crypto.go:165] Writing key to /home/owen/.minikube/profiles/wireguardians/apiserver.key.dd3b5fb2 ...
I0624 11:47:17.207791   94616 lock.go:36] WriteFile acquiring /home/owen/.minikube/profiles/wireguardians/apiserver.key.dd3b5fb2: {Name:mkf00903d1628b8cff3872aa2b65e339203f5bc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0624 11:47:17.207838 94616 certs.go:305] copying /home/owen/.minikube/profiles/wireguardians/apiserver.crt.dd3b5fb2 -> /home/owen/.minikube/profiles/wireguardians/apiserver.crt
I0624 11:47:17.207882 94616 certs.go:309] copying /home/owen/.minikube/profiles/wireguardians/apiserver.key.dd3b5fb2 -> /home/owen/.minikube/profiles/wireguardians/apiserver.key
I0624 11:47:17.207918 94616 certs.go:294] generating aggregator signed cert: /home/owen/.minikube/profiles/wireguardians/proxy-client.key
I0624 11:47:17.207925 94616 crypto.go:69] Generating cert /home/owen/.minikube/profiles/wireguardians/proxy-client.crt with IP's: []
I0624 11:47:17.639012 94616 crypto.go:157] Writing cert to /home/owen/.minikube/profiles/wireguardians/proxy-client.crt ...
I0624 11:47:17.639034   94616 lock.go:36] WriteFile acquiring /home/owen/.minikube/profiles/wireguardians/proxy-client.crt: {Name:mk6957cafd393623911abfccfa8e6de09c317d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0624 11:47:17.639163 94616 crypto.go:165] Writing key to /home/owen/.minikube/profiles/wireguardians/proxy-client.key ...
I0624 11:47:17.639172   94616 lock.go:36] WriteFile acquiring /home/owen/.minikube/profiles/wireguardians/proxy-client.key: {Name:mk2bb3cda71f89b81d8ea4bd19153989239b77a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0624 11:47:17.639217 94616 vm_assets.go:98] NewFileAsset: /home/owen/.minikube/profiles/wireguardians/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0624 11:47:17.639229 94616 vm_assets.go:98] NewFileAsset: /home/owen/.minikube/profiles/wireguardians/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0624 11:47:17.639237 94616 vm_assets.go:98] NewFileAsset: /home/owen/.minikube/profiles/wireguardians/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0624 11:47:17.639244 94616 vm_assets.go:98] NewFileAsset: /home/owen/.minikube/profiles/wireguardians/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0624 11:47:17.639251 94616 vm_assets.go:98] NewFileAsset: /home/owen/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0624 11:47:17.639259 94616 vm_assets.go:98] NewFileAsset: /home/owen/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0624 11:47:17.639267 94616 vm_assets.go:98] NewFileAsset: /home/owen/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0624 11:47:17.639278 94616 vm_assets.go:98] NewFileAsset: /home/owen/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0624 11:47:17.639311 94616 certs.go:369] found cert: /home/owen/.minikube/certs/home/owen/.minikube/certs/ca-key.pem (1679 bytes)
I0624 11:47:17.639339 94616 certs.go:369] found cert: /home/owen/.minikube/certs/home/owen/.minikube/certs/ca.pem (1070 bytes)
I0624 11:47:17.639358 94616 certs.go:369] found cert: /home/owen/.minikube/certs/home/owen/.minikube/certs/cert.pem (1115 bytes)
I0624 11:47:17.639379 94616 certs.go:369] found cert: /home/owen/.minikube/certs/home/owen/.minikube/certs/key.pem (1675 bytes)
I0624 11:47:17.639400 94616 vm_assets.go:98] NewFileAsset: /home/owen/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0624 11:47:17.640042 94616 ssh_runner.go:316] scp /home/owen/.minikube/profiles/wireguardians/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0624 11:47:17.652942 94616 ssh_runner.go:316] scp /home/owen/.minikube/profiles/wireguardians/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0624 11:47:17.667353 94616 ssh_runner.go:316] scp /home/owen/.minikube/profiles/wireguardians/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0624 11:47:17.681001 94616 ssh_runner.go:316] scp /home/owen/.minikube/profiles/wireguardians/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0624 11:47:17.694729 94616 ssh_runner.go:316] scp /home/owen/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0624 11:47:17.708312 94616 ssh_runner.go:316] scp /home/owen/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0624 11:47:17.722259 94616 ssh_runner.go:316] scp /home/owen/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0624 11:47:17.736241 94616 ssh_runner.go:316] scp /home/owen/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0624 11:47:17.750351 94616 ssh_runner.go:316] scp /home/owen/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0624 11:47:17.763864 94616 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0624 11:47:17.773960 94616 ssh_runner.go:149] Run: openssl version
I0624 11:47:17.777776 94616 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0624 11:47:17.783864 94616 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0624 11:47:17.786195 94616 certs.go:410] hashing: -rw-r--r-- 1 root root 1111 Oct 27 2020 /usr/share/ca-certificates/minikubeCA.pem
I0624 11:47:17.786246 94616 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0624 11:47:17.789638 94616 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0624 11:47:17.795347 94616 kubeadm.go:390] StartCluster: {Name:wireguardians KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:3900 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.7 ClusterName:wireguardians Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0624 11:47:17.795479 94616 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*(kube-system) --format={{.ID}}
I0624 11:47:17.821854 94616 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0624 11:47:17.827200 94616 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0624 11:47:17.832708 94616 kubeadm.go:220] ignoring SystemVerification for kubeadm because of podman driver
I0624 11:47:17.832765 94616 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0624 11:47:17.837970 94616 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0624 11:47:17.837999 94616 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.7:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0624 11:47:18.411930 94616 out.go:197] ▪ Generating certificates and keys ...
I0624 11:47:20.630653 94616 out.go:197] ▪ Booting up control plane ...
W0624 11:49:15.656919 94616 out.go:235] 💢 initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.7:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.7
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost wireguardians] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost wireguardians] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
stderr:
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
I0624 11:49:15.657127 94616 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.7:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0624 11:49:16.065194 94616 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
I0624 11:49:16.073058   94616 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0624 11:49:16.099843 94616 kubeadm.go:220] ignoring SystemVerification for kubeadm because of podman driver
I0624 11:49:16.099905 94616 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0624 11:49:16.105471 94616 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0624 11:49:16.105505 94616 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.7:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0624 11:49:16.771267 94616 out.go:197] ▪ Generating certificates and keys ...
I0624 11:49:17.417888 94616 out.go:197] ▪ Booting up control plane ...
I0624 11:51:12.433646 94616 kubeadm.go:392] StartCluster complete in 3m54.638302541s
I0624 11:51:12.433731 94616 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0624 11:51:12.460943 94616 logs.go:270] 0 containers: []
W0624 11:51:12.460963 94616 logs.go:272] No container was found matching "kube-apiserver"
I0624 11:51:12.461020 94616 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0624 11:51:12.488362 94616 logs.go:270] 0 containers: []
W0624 11:51:12.488390 94616 logs.go:272] No container was found matching "etcd"
I0624 11:51:12.488444 94616 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0624 11:51:12.514976 94616 logs.go:270] 0 containers: []
W0624 11:51:12.514995 94616 logs.go:272] No container was found matching "coredns"
I0624 11:51:12.515056 94616 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0624 11:51:12.542139 94616 logs.go:270] 0 containers: []
W0624 11:51:12.542160 94616 logs.go:272] No container was found matching "kube-scheduler"
I0624 11:51:12.542217 94616 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0624 11:51:12.569174 94616 logs.go:270] 0 containers: []
W0624 11:51:12.569191 94616 logs.go:272] No container was found matching "kube-proxy"
I0624 11:51:12.569246 94616 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0624 11:51:12.596426 94616 logs.go:270] 0 containers: []
W0624 11:51:12.596442 94616 logs.go:272] No container was found matching "kubernetes-dashboard"
I0624 11:51:12.596504 94616 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0624 11:51:12.624036 94616 logs.go:270] 0 containers: []
W0624 11:51:12.624056 94616 logs.go:272] No container was found matching "storage-provisioner"
I0624 11:51:12.624121 94616 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0624 11:51:12.651233 94616 logs.go:270] 0 containers: []
W0624 11:51:12.651252 94616 logs.go:272] No container was found matching "kube-controller-manager"
I0624 11:51:12.651262 94616 logs.go:123] Gathering logs for kubelet ...
I0624 11:51:12.651272 94616 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0624 11:51:12.695359 94616 logs.go:123] Gathering logs for dmesg ...
I0624 11:51:12.695384 94616 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0624 11:51:12.706209 94616 logs.go:123] Gathering logs for describe nodes ...
I0624 11:51:12.706233 94616 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.7/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0624 11:51:12.752228 94616 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.7/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.7/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0624 11:51:12.752246 94616 logs.go:123] Gathering logs for Docker ...
I0624 11:51:12.752256 94616 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0624 11:51:12.763792 94616 logs.go:123] Gathering logs for container status ...
I0624 11:51:12.763815 94616 ssh_runner.go:149] Run: /bin/bash -c "sudo
which crictl || echo crictl
ps -a || sudo docker ps -a"W0624 11:51:12.787614 94616 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.7:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.7
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
stderr:
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0624 11:51:12.787682 94616 out.go:235]
W0624 11:51:12.787849 94616 out.go:235] 💣 Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.7:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.7
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
stderr:
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0624 11:51:12.787961 94616 out.go:235]
W0624 11:51:12.788677   94616 out.go:235] ╭────────────────────────────────────────────────────────────────────╮
W0624 11:51:12.788690   94616 out.go:235] │                                                                    │
W0624 11:51:12.788697   94616 out.go:235] │    😿  If the above advice does not help, please let us know:      │
W0624 11:51:12.788702   94616 out.go:235] │    👉  https://github.com/kubernetes/minikube/issues/new/choose    │
W0624 11:51:12.788707   94616 out.go:235] │                                                                    │
W0624 11:51:12.788713   94616 out.go:235] │    Please attach the following file to the GitHub issue:           │
W0624 11:51:12.788718   94616 out.go:235] │    - /home/owen/.minikube/logs/lastStart.txt                       │
W0624 11:51:12.788723   94616 out.go:235] │                                                                    │
W0624 11:51:12.788728   94616 out.go:235] ╰────────────────────────────────────────────────────────────────────╯
W0624 11:51:12.788734 94616 out.go:235]
I0624 11:51:12.793631 94616 out.go:170]
W0624 11:51:12.793763 94616 out.go:235] ❌ Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.7:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.7
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
stderr:
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0624 11:51:12.793937 94616 out.go:235] 💡 Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0624 11:51:12.793997 94616 out.go:235] 🍿 Related issue: #4172
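For reference, retrying with the suggestion minikube prints above would look roughly like the following. This is only a sketch: the profile name wireguardians and the podman driver are taken from this log, and whether the systemd cgroup driver alone fixes the kubelet crash-loop is unverified.

```shell
# Recreate the profile and pass the cgroup-driver hint from the suggestion above.
minikube delete -p wireguardians
minikube start -p wireguardians --driver=podman \
  --extra-config=kubelet.cgroup-driver=systemd

# If the kubelet still fails, inspect its journal inside the node, per the same suggestion:
minikube ssh -p wireguardians -- sudo journalctl -xeu kubelet
```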
==> Docker <==
-- Logs begin at Thu 2021-06-24 18:47:07 UTC, end at Thu 2021-06-24 18:54:45 UTC. --
Jun 24 18:47:07 wireguardians systemd[1]: Starting Docker Application Container Engine...
Jun 24 18:47:07 wireguardians dockerd[203]: time="2021-06-24T18:47:07.777773486Z" level=info msg="Starting up"
Jun 24 18:47:07 wireguardians dockerd[203]: time="2021-06-24T18:47:07.779007533Z" level=info msg="parsed scheme: "unix"" module=grpc
Jun 24 18:47:07 wireguardians dockerd[203]: time="2021-06-24T18:47:07.779027550Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
Jun 24 18:47:07 wireguardians dockerd[203]: time="2021-06-24T18:47:07.779045892Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Jun 24 18:47:07 wireguardians dockerd[203]: time="2021-06-24T18:47:07.779059146Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
Jun 24 18:47:07 wireguardians dockerd[203]: time="2021-06-24T18:47:07.780928220Z" level=info msg="parsed scheme: "unix"" module=grpc
Jun 24 18:47:07 wireguardians dockerd[203]: time="2021-06-24T18:47:07.780979160Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
Jun 24 18:47:07 wireguardians dockerd[203]: time="2021-06-24T18:47:07.781012583Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Jun 24 18:47:07 wireguardians dockerd[203]: time="2021-06-24T18:47:07.781032914Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
Jun 24 18:47:07 wireguardians dockerd[203]: time="2021-06-24T18:47:07.942802869Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Jun 24 18:47:07 wireguardians dockerd[203]: time="2021-06-24T18:47:07.973140662Z" level=warning msg="Your kernel does not support CPU realtime scheduler"
Jun 24 18:47:07 wireguardians dockerd[203]: time="2021-06-24T18:47:07.973161882Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Jun 24 18:47:07 wireguardians dockerd[203]: time="2021-06-24T18:47:07.973167452Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Jun 24 18:47:07 wireguardians dockerd[203]: time="2021-06-24T18:47:07.973337113Z" level=info msg="Loading containers: start."
Jun 24 18:47:08 wireguardians dockerd[203]: time="2021-06-24T18:47:08.023729327Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jun 24 18:47:08 wireguardians dockerd[203]: time="2021-06-24T18:47:08.054358325Z" level=info msg="Loading containers: done."
Jun 24 18:47:08 wireguardians dockerd[203]: time="2021-06-24T18:47:08.096279433Z" level=info msg="Docker daemon" commit=b0f5bc3 graphdriver(s)=overlay2 version=20.10.7
Jun 24 18:47:08 wireguardians dockerd[203]: time="2021-06-24T18:47:08.096361454Z" level=info msg="Daemon has completed initialization"
Jun 24 18:47:08 wireguardians systemd[1]: Started Docker Application Container Engine.
Jun 24 18:47:08 wireguardians dockerd[203]: time="2021-06-24T18:47:08.129763310Z" level=info msg="API listen on /run/docker.sock"
Jun 24 18:47:13 wireguardians systemd[1]: docker.service: Current command vanished from the unit file, execution of the command list won't be resumed.
Jun 24 18:47:13 wireguardians systemd[1]: Stopping Docker Application Container Engine...
Jun 24 18:47:13 wireguardians dockerd[203]: time="2021-06-24T18:47:13.617215440Z" level=info msg="Processing signal 'terminated'"
Jun 24 18:47:13 wireguardians dockerd[203]: time="2021-06-24T18:47:13.618245731Z" level=info msg="stopping event stream following graceful shutdown" error="" module=libcontainerd namespace=moby
Jun 24 18:47:13 wireguardians dockerd[203]: time="2021-06-24T18:47:13.618889063Z" level=info msg="Daemon shutdown complete"
Jun 24 18:47:13 wireguardians systemd[1]: docker.service: Succeeded.
Jun 24 18:47:13 wireguardians systemd[1]: Stopped Docker Application Container Engine.
Jun 24 18:47:13 wireguardians systemd[1]: Starting Docker Application Container Engine...
Jun 24 18:47:13 wireguardians dockerd[446]: time="2021-06-24T18:47:13.657285236Z" level=info msg="Starting up"
Jun 24 18:47:13 wireguardians dockerd[446]: time="2021-06-24T18:47:13.658733474Z" level=info msg="parsed scheme: "unix"" module=grpc
Jun 24 18:47:13 wireguardians dockerd[446]: time="2021-06-24T18:47:13.658749090Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
Jun 24 18:47:13 wireguardians dockerd[446]: time="2021-06-24T18:47:13.658766957Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Jun 24 18:47:13 wireguardians dockerd[446]: time="2021-06-24T18:47:13.658776679Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
Jun 24 18:47:13 wireguardians dockerd[446]: time="2021-06-24T18:47:13.659489656Z" level=info msg="parsed scheme: "unix"" module=grpc
Jun 24 18:47:13 wireguardians dockerd[446]: time="2021-06-24T18:47:13.659501117Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
Jun 24 18:47:13 wireguardians dockerd[446]: time="2021-06-24T18:47:13.659514578Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Jun 24 18:47:13 wireguardians dockerd[446]: time="2021-06-24T18:47:13.659523904Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
Jun 24 18:47:13 wireguardians dockerd[446]: time="2021-06-24T18:47:13.936063424Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Jun 24 18:47:13 wireguardians dockerd[446]: time="2021-06-24T18:47:13.943611065Z" level=warning msg="Your kernel does not support CPU realtime scheduler"
Jun 24 18:47:13 wireguardians dockerd[446]: time="2021-06-24T18:47:13.943626499Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Jun 24 18:47:13 wireguardians dockerd[446]: time="2021-06-24T18:47:13.943631912Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Jun 24 18:47:13 wireguardians dockerd[446]: time="2021-06-24T18:47:13.943639315Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
Jun 24 18:47:13 wireguardians dockerd[446]: time="2021-06-24T18:47:13.943644193Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
Jun 24 18:47:13 wireguardians dockerd[446]: time="2021-06-24T18:47:13.943648866Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
Jun 24 18:47:13 wireguardians dockerd[446]: time="2021-06-24T18:47:13.943653523Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
Jun 24 18:47:13 wireguardians dockerd[446]: time="2021-06-24T18:47:13.943762200Z" level=info msg="Loading containers: start."
Jun 24 18:47:14 wireguardians dockerd[446]: time="2021-06-24T18:47:14.017018683Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jun 24 18:47:14 wireguardians dockerd[446]: time="2021-06-24T18:47:14.049871934Z" level=info msg="Loading containers: done."
Jun 24 18:47:14 wireguardians dockerd[446]: time="2021-06-24T18:47:14.066241650Z" level=info msg="Docker daemon" commit=b0f5bc3 graphdriver(s)=overlay2 version=20.10.7
Jun 24 18:47:14 wireguardians dockerd[446]: time="2021-06-24T18:47:14.066423440Z" level=info msg="Daemon has completed initialization"
Jun 24 18:47:14 wireguardians systemd[1]: Started Docker Application Container Engine.
Jun 24 18:47:14 wireguardians dockerd[446]: time="2021-06-24T18:47:14.080636609Z" level=info msg="API listen on [::]:2376"
Jun 24 18:47:14 wireguardians dockerd[446]: time="2021-06-24T18:47:14.084576390Z" level=info msg="API listen on /var/run/docker.sock"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
==> describe nodes <==
==> dmesg <==
[Jun24 17:53] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
[ +1.314383] usb: port power management may be unreliable
[ +0.899748] acpi PNP0C14:02: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:01)
[ +0.000069] acpi PNP0C14:03: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:01)
[ +0.000120] acpi PNP0C14:04: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:01)
[ +0.000061] acpi PNP0C14:05: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:01)
[ +0.000059] acpi PNP0C14:06: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:01)
[ +0.000043] acpi PNP0C14:07: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:01)
[ +0.037771] nvme nvme0: missing or invalid SUBNQN field.
[ +1.185520] systemd-sysv-generator[631]: SysV service '/etc/rc.d/init.d/livesys-late' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[ +0.000533] systemd-sysv-generator[631]: SysV service '/etc/rc.d/init.d/livesys' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[ +0.247571] ACPI Error: Needed [Integer/String/Buffer], found [Package] 00000000526683c2 (20210105/exresop-469)
[ +0.000007] ACPI Error: AE_AML_OPERAND_TYPE, While resolving operands for [OpcodeName unavailable] (20210105/dswexec-431)
[ +0.000004] ACPI Error: Aborting method \ADBG due to previous error (AE_AML_OPERAND_TYPE) (20210105/psparse-529)
[ +0.000004] ACPI Error: Aborting method _SB.HIDD.DSM due to previous error (AE_AML_OPERAND_TYPE) (20210105/psparse-529)
[ +0.000007] ACPI: _SB.HIDD: failed to evaluate _DSM (0x3003)
[ +0.323169] resource sanity check: requesting [mem 0xfed10000-0xfed15fff], which spans more than pnp 00:07 [mem 0xfed10000-0xfed13fff]
[ +0.000004] caller snb_uncore_imc_init_box+0x6a/0xa0 [intel_uncore] mapping multiple BARs
[ +0.024556] iwlwifi 0000:00:14.3: api flags index 2 larger than supported by driver
[ +0.165003] i801_smbus 0000:00:1f.4: Timeout waiting for interrupt!
[ +0.000004] i801_smbus 0000:00:1f.4: Transaction timeout
[ +0.002029] i801_smbus 0000:00:1f.4: Failed terminating the transaction
[ +0.000090] i801_smbus 0000:00:1f.4: SMBus is busy, can't use it!
[ +0.202700] thermal thermal_zone6: failed to read out thermal zone (-61)
[Jun24 17:54] sof-audio-pci-intel-cnl 0000:00:1f.3: ASoC: Parent card not yet available, widget card binding deferred
[ +0.070830] snd_hda_codec_realtek ehdaudio0D0: ASoC: sink widget AIF1TX overwritten
[ +0.000009] snd_hda_codec_realtek ehdaudio0D0: ASoC: source widget AIF1RX overwritten
[ +0.000161] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: sink widget hifi3 overwritten
[ +0.000006] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: sink widget hifi2 overwritten
[ +0.000005] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: sink widget hifi1 overwritten
[ +0.000005] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: source widget Codec Output Pin1 overwritten
[ +0.000004] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: sink widget Codec Input Pin1 overwritten
[ +0.000005] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: sink widget Analog Codec Playback overwritten
[ +0.000006] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: sink widget Digital Codec Playback overwritten
[ +0.000006] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: sink widget Alt Analog Codec Playback overwritten
[ +0.000007] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: source widget Analog Codec Capture overwritten
[ +0.000006] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: source widget Digital Codec Capture overwritten
[ +0.000006] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: source widget Alt Analog Codec Capture overwritten
[ +0.005881] snd_hda_codec_hdmi ehdaudio0D2: Monitor plugged-in, Failed to power up codec ret=[-13]
[ +0.180388] vboxdrv: loading out-of-tree module taints kernel.
[ +0.024872] VBoxNetFlt: Successfully started.
[ +0.002933] VBoxNetAdp: Successfully started.
[ +0.785772] Bluetooth: hci0: MSFT filter_enable is already on
[ +1.241690] usb 8-2.1: current rate 16000 is different from the runtime rate 24000
[ +0.008000] usb 8-2.1: current rate 16000 is different from the runtime rate 32000
[ +0.008000] usb 8-2.1: current rate 16000 is different from the runtime rate 48000
[ +6.853959] usb 8-2.1: current rate 16000 is different from the runtime rate 48000
[ +0.013046] usb 8-2.1: current rate 16000 is different from the runtime rate 48000
[ +0.009985] usb 8-2.1: current rate 16000 is different from the runtime rate 48000
[ +8.691538] usb 8-2.1: current rate 16000 is different from the runtime rate 48000
[ +0.022014] usb 8-2.1: current rate 16000 is different from the runtime rate 48000
[ +0.011003] usb 8-2.1: current rate 16000 is different from the runtime rate 48000
[Jun24 17:57] systemd-sysv-generator[4619]: SysV service '/etc/rc.d/init.d/livesys-late' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[ +0.000038] systemd-sysv-generator[4619]: SysV service '/etc/rc.d/init.d/livesys' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[ +12.470029] overlayfs: upper fs does not support xattr, falling back to index=off and metacopy=off.
[Jun24 18:14] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to [email protected] if you depend on this functionality.
==> kernel <==
18:54:45 up 1:00, 0 users, load average: 0.78, 0.92, 0.67
Linux wireguardians 5.12.11-300.fc34.x86_64 #1 SMP Wed Jun 16 15:47:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.2 LTS"
==> kubelet <==
-- Logs begin at Thu 2021-06-24 18:47:07 UTC, end at Thu 2021-06-24 18:54:45 UTC. --
Jun 24 18:54:43 wireguardians kubelet[11515]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/raw.(*rawContainerWatcher).watchDirectory.func2(0xc0006c5320, 0xc000ef5290, 0x2f)
Jun 24 18:54:43 wireguardians kubelet[11515]: /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/raw/watcher.go:165 +0x68
Jun 24 18:54:43 wireguardians kubelet[11515]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/raw.(*rawContainerWatcher).watchDirectory
Jun 24 18:54:43 wireguardians kubelet[11515]: /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/raw/watcher.go:164 +0x475
Jun 24 18:54:43 wireguardians kubelet[11515]: goroutine 715 [chan send]:
Jun 24 18:54:43 wireguardians kubelet[11515]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/raw.(*rawContainerWatcher).watchDirectory.func2(0xc0006c5320, 0xc000ef52c0, 0x23)
Jun 24 18:54:43 wireguardians kubelet[11515]: /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/raw/watcher.go:165 +0x68
Jun 24 18:54:43 wireguardians kubelet[11515]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/raw.(*rawContainerWatcher).watchDirectory
Jun 24 18:54:43 wireguardians kubelet[11515]: /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/raw/watcher.go:164 +0x475
Jun 24 18:54:43 wireguardians kubelet[11515]: goroutine 716 [chan send]:
Jun 24 18:54:43 wireguardians kubelet[11515]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/raw.(*rawContainerWatcher).watchDirectory.func2(0xc0006c5320, 0xc0010adcc0, 0x17)
Jun 24 18:54:43 wireguardians kubelet[11515]: /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/raw/watcher.go:165 +0x68
Jun 24 18:54:43 wireguardians kubelet[11515]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/raw.(*rawContainerWatcher).watchDirectory
Jun 24 18:54:43 wireguardians kubelet[11515]: /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/raw/watcher.go:164 +0x475
Jun 24 18:54:43 wireguardians kubelet[11515]: goroutine 717 [chan send]:
Jun 24 18:54:43 wireguardians kubelet[11515]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/raw.(*rawContainerWatcher).watchDirectory.func2(0xc0006c5320, 0xc000ef52f0, 0x23)
Jun 24 18:54:43 wireguardians kubelet[11515]: /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/raw/watcher.go:165 +0x68
Jun 24 18:54:43 wireguardians kubelet[11515]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/raw.(*rawContainerWatcher).watchDirectory
Jun 24 18:54:43 wireguardians kubelet[11515]: /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/raw/watcher.go:164 +0x475
Jun 24 18:54:43 wireguardians kubelet[11515]: goroutine 718 [chan send]:
Jun 24 18:54:43 wireguardians kubelet[11515]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/raw.(*rawContainerWatcher).watchDirectory.func2(0xc0006c5320, 0xc0010adce0, 0x17)
Jun 24 18:54:43 wireguardians kubelet[11515]: /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/raw/watcher.go:165 +0x68
Jun 24 18:54:43 wireguardians kubelet[11515]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/raw.(*rawContainerWatcher).watchDirectory
Jun 24 18:54:43 wireguardians kubelet[11515]: /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/raw/watcher.go:164 +0x475
Jun 24 18:54:44 wireguardians systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 45.
Jun 24 18:54:44 wireguardians systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Jun 24 18:54:44 wireguardians systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jun 24 18:54:44 wireguardians kubelet[11673]: I0624 18:54:44.483774 11673 server.go:416] Version: v1.20.7
Jun 24 18:54:44 wireguardians kubelet[11673]: I0624 18:54:44.484017 11673 server.go:837] Client rotation is on, will bootstrap in background
Jun 24 18:54:44 wireguardians kubelet[11673]: I0624 18:54:44.486278 11673 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jun 24 18:54:44 wireguardians kubelet[11673]: I0624 18:54:44.487045 11673 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
Jun 24 18:54:44 wireguardians kubelet[11673]: I0624 18:54:44.553230 11673 server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Jun 24 18:54:44 wireguardians kubelet[11673]: I0624 18:54:44.553502 11673 container_manager_linux.go:274] container manager verified user specified cgroup-root exists: []
Jun 24 18:54:44 wireguardians kubelet[11673]: I0624 18:54:44.553514 11673 container_manager_linux.go:279] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
Jun 24 18:54:44 wireguardians kubelet[11673]: I0624 18:54:44.553553 11673 topology_manager.go:120] [topologymanager] Creating topology manager with none policy per container scope
Jun 24 18:54:44 wireguardians kubelet[11673]: I0624 18:54:44.553560 11673 container_manager_linux.go:310] [topologymanager] Initializing Topology Manager with none policy and container-level scope
Jun 24 18:54:44 wireguardians kubelet[11673]: I0624 18:54:44.553564 11673 container_manager_linux.go:315] Creating device plugin manager: true
Jun 24 18:54:44 wireguardians kubelet[11673]: W0624 18:54:44.553597 11673 kubelet.go:297] Using dockershim is deprecated, please consider using a full-fledged CRI implementation
Jun 24 18:54:44 wireguardians kubelet[11673]: I0624 18:54:44.553613 11673 client.go:77] Connecting to docker on unix:///var/run/docker.sock
Jun 24 18:54:44 wireguardians kubelet[11673]: I0624 18:54:44.553620 11673 client.go:94] Start docker client with request timeout=2m0s
Jun 24 18:54:44 wireguardians kubelet[11673]: W0624 18:54:44.560613 11673 docker_service.go:564] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Jun 24 18:54:44 wireguardians kubelet[11673]: I0624 18:54:44.560641 11673 docker_service.go:241] Hairpin mode set to "hairpin-veth"
Jun 24 18:54:44 wireguardians kubelet[11673]: I0624 18:54:44.564804 11673 docker_service.go:256] Docker cri networking managed by kubernetes.io/no-op
Jun 24 18:54:44 wireguardians kubelet[11673]: I0624 18:54:44.572200 11673 docker_service.go:263] Docker Info: &{ID:375L:C2KE:CO6P:RLVV:NXCW:FNCP:JORB:CEYB:463P:LI2I:J4CN:JK5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:35 SystemTime:2021-06-24T18:54:44.565819376Z LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.12.11-300.fc34.x86_64 OperatingSystem:Ubuntu 20.04.2 LTS (containerized) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc00016fce0 NCPU:8 MemTotal:16523653120 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy:control-plane.minikube.internal Name:wireguardians Labels:[provider=podman] ExperimentalBuild:false ServerVersion:20.10.7 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[]} io.containerd.runtime.v1.linux:{Path:runc Args:[]} runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster: Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d71fcd7d8303cbf684402823e425e9dd2e99285d Expected:d71fcd7d8303cbf684402823e425e9dd2e99285d} RuncCommit:{ID:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7 Expected:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support]}
Jun 24 18:54:44 wireguardians kubelet[11673]: I0624 18:54:44.572268 11673 docker_service.go:276] Setting cgroupDriver to cgroupfs
Jun 24 18:54:44 wireguardians kubelet[11673]: I0624 18:54:44.579641 11673 remote_runtime.go:62] parsed scheme: ""
Jun 24 18:54:44 wireguardians kubelet[11673]: I0624 18:54:44.579658 11673 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
Jun 24 18:54:44 wireguardians kubelet[11673]: I0624 18:54:44.579684 11673 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0 }] }
Jun 24 18:54:44 wireguardians kubelet[11673]: I0624 18:54:44.579692 11673 clientconn.go:948] ClientConn switching balancer to "pick_first"
Jun 24 18:54:44 wireguardians kubelet[11673]: I0624 18:54:44.579723 11673 remote_image.go:50] parsed scheme: ""
Jun 24 18:54:44 wireguardians kubelet[11673]: I0624 18:54:44.579728 11673 remote_image.go:50] scheme "" not registered, fallback to default scheme
Jun 24 18:54:44 wireguardians kubelet[11673]: I0624 18:54:44.579735 11673 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0 }] }
Jun 24 18:54:44 wireguardians kubelet[11673]: I0624 18:54:44.579739 11673 clientconn.go:948] ClientConn switching balancer to "pick_first"
Jun 24 18:54:44 wireguardians kubelet[11673]: I0624 18:54:44.579789 11673 kubelet.go:394] Attempting to sync node with API server
Jun 24 18:54:44 wireguardians kubelet[11673]: I0624 18:54:44.579798 11673 kubelet.go:262] Adding pod path: /etc/kubernetes/manifests
Jun 24 18:54:44 wireguardians kubelet[11673]: I0624 18:54:44.579814 11673 kubelet.go:273] Adding apiserver pod source
Jun 24 18:54:44 wireguardians kubelet[11673]: I0624 18:54:44.579826 11673 apiserver.go:43] Waiting for node sync before watching apiserver pods
Jun 24 18:54:44 wireguardians kubelet[11673]: E0624 18:54:44.580452 11673 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)wireguardians&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
Jun 24 18:54:44 wireguardians kubelet[11673]: E0624 18:54:44.580505 11673 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
Jun 24 18:54:44 wireguardians kubelet[11673]: I0624 18:54:44.589148 11673 kuberuntime_manager.go:216] Container runtime docker initialized, version: 20.10.7, apiVersion: 1.41.0
Full output of failed command:
I0624 11:47:10.220492 94616 cli_runner.go:115] Run: sudo -n podman version --format {{.Version}}
I0624 11:47:10.397711   94616 cli_runner.go:115] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports \"22/tcp\") 0).HostPort}}'" wireguardians
I0624 11:47:10.578285   94616 main.go:128] libmachine: Using SSH client type: native
I0624 11:47:10.578591 94616 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802f80] 0x802f40 [] 0s} 127.0.0.1 38825 }
I0624 11:47:10.578655 94616 main.go:128] libmachine: About to run SSH command:
I0624 11:47:10.706317 94616 ubuntu.go:175] set auth options {CertDir:/home/owen/.minikube CaCertPath:/home/owen/.minikube/certs/ca.pem CaPrivateKeyPath:/home/owen/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/owen/.minikube/machines/server.pem ServerKeyPath:/home/owen/.minikube/machines/server-key.pem ClientKeyPath:/home/owen/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/owen/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/owen/.minikube}
I0624 11:47:10.706417 94616 ubuntu.go:177] setting up certificates
I0624 11:47:10.706442 94616 provision.go:83] configureAuth start
I0624 11:47:10.706784 94616 cli_runner.go:115] Run: sudo -n podman container inspect -f {{.NetworkSettings.IPAddress}} wireguardians
I0624 11:47:10.892795   94616 cli_runner.go:115] Run: sudo -n podman container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" wireguardians
I0624 11:47:11.040703 94616 vm_assets.go:98] NewFileAsset: /home/owen/.minikube/certs/ca.pem -> /home/owen/.minikube/ca.pem
I0624 11:47:11.040787 94616 exec_runner.go:145] found /home/owen/.minikube/ca.pem, removing ...
I0624 11:47:11.040812 94616 exec_runner.go:190] rm: /home/owen/.minikube/ca.pem
I0624 11:47:11.040961 94616 exec_runner.go:152] cp: /home/owen/.minikube/certs/ca.pem --> /home/owen/.minikube/ca.pem (1070 bytes)
I0624 11:47:11.041207 94616 vm_assets.go:98] NewFileAsset: /home/owen/.minikube/certs/cert.pem -> /home/owen/.minikube/cert.pem
I0624 11:47:11.041286 94616 exec_runner.go:145] found /home/owen/.minikube/cert.pem, removing ...
I0624 11:47:11.041318 94616 exec_runner.go:190] rm: /home/owen/.minikube/cert.pem
I0624 11:47:11.041455 94616 exec_runner.go:152] cp: /home/owen/.minikube/certs/cert.pem --> /home/owen/.minikube/cert.pem (1115 bytes)
I0624 11:47:11.041645 94616 vm_assets.go:98] NewFileAsset: /home/owen/.minikube/certs/key.pem -> /home/owen/.minikube/key.pem
I0624 11:47:11.041709 94616 exec_runner.go:145] found /home/owen/.minikube/key.pem, removing ...
I0624 11:47:11.041737 94616 exec_runner.go:190] rm: /home/owen/.minikube/key.pem
I0624 11:47:11.041830 94616 exec_runner.go:152] cp: /home/owen/.minikube/certs/key.pem --> /home/owen/.minikube/key.pem (1675 bytes)
I0624 11:47:11.041985 94616 provision.go:111] generating server cert: /home/owen/.minikube/machines/server.pem ca-key=/home/owen/.minikube/certs/ca.pem private-key=/home/owen/.minikube/certs/ca-key.pem org=owen.wireguardians san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube wireguardians]
I0624 11:47:11.353695   94616 provision.go:171] copyRemoteCerts
I0624 11:47:11.353759 94616 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0624 11:47:11.353822 94616 cli_runner.go:115] Run: sudo -n podman version --format {{.Version}}
I0624 11:47:11.658937   94616 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38825 SSHKeyPath:/home/owen/.minikube/machines/wireguardians/id_rsa Username:docker}
I0624 11:47:11.777079   94616 vm_assets.go:98] NewFileAsset: /home/owen/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0624 11:47:11.777155 94616 ssh_runner.go:316] scp /home/owen/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0624 11:47:11.801275 94616 vm_assets.go:98] NewFileAsset: /home/owen/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0624 11:47:11.801317 94616 ssh_runner.go:316] scp /home/owen/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1070 bytes)
I0624 11:47:11.819627 94616 vm_assets.go:98] NewFileAsset: /home/owen/.minikube/machines/server.pem -> /etc/docker/server.pem
I0624 11:47:11.819728 94616 ssh_runner.go:316] scp /home/owen/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0624 11:47:11.836703 94616 ubuntu.go:193] setting minikube options for container-runtime
I0624 11:47:11.836910 94616 cli_runner.go:115] Run: sudo -n podman version --format {{.Version}}
I0624 11:47:11.988428 94616 cli_runner.go:115] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" wireguardians
I0624 11:47:12.140846 94616 main.go:128] libmachine: Using SSH client type: native
I0624 11:47:12.141319 94616 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802f80] 0x802f40 [] 0s} 127.0.0.1 38825 }
I0624 11:47:12.141405 94616 main.go:128] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0624 11:47:12.323653 94616 ubuntu.go:71] root file system type: overlay
I0624 11:47:12.324248 94616 provision.go:308] Updating docker unit: /lib/systemd/system/docker.service ...
I0624 11:47:12.324588 94616 cli_runner.go:115] Run: sudo -n podman version --format {{.Version}}
I0624 11:47:12.509700 94616 cli_runner.go:115] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" wireguardians
I0624 11:47:12.695040 94616 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802f80] 0x802f40 [] 0s} 127.0.0.1 38825 }
I0624 11:47:12.695355 94616 main.go:128] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=podman --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0624 11:47:12.886564 94616 main.go:128] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=podman --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0624 11:47:12.887012 94616 cli_runner.go:115] Run: sudo -n podman version --format {{.Version}}
I0624 11:47:13.216054 94616 main.go:128] libmachine: Using SSH client type: native
I0624 11:47:13.216238 94616 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802f80] 0x802f40 [] 0s} 127.0.0.1 38825 }
I0624 11:47:13.216262 94616 main.go:128] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0624 11:47:14.078161 94616 main.go:128] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2021-06-02 11:54:50.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2021-06-24 18:47:12.881389670 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
+BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=podman --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
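Note: the diff above is minikube replacing the node's stock docker.service with one that clears ExecStart and re-sets it, so dockerd listens on tcp://0.0.0.0:2376 with minikube's TLS certs. A minimal sketch for verifying what actually landed inside the node, assuming the "wireguardians" profile from this run:

# show the docker unit minikube installed in the kic node
minikube ssh -p wireguardians -- sudo systemctl cat docker.service
# confirm dockerd is really listening on the TLS port named in ExecStart
minikube ssh -p wireguardians -- sudo ss -ltnp | grep 2376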
I0624 11:47:14.078236 94616 machine.go:91] provisioned docker machine in 4.373564721s
I0624 11:47:14.078248 94616 client.go:171] LocalClient.Create took 8.082944021s
I0624 11:47:14.078266 94616 start.go:168] duration metric: libmachine.API.Create for "wireguardians" took 8.082990342s
I0624 11:47:14.078277 94616 start.go:267] post-start starting for "wireguardians" (driver="podman")
I0624 11:47:14.078286 94616 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0624 11:47:14.078365 94616 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0624 11:47:14.078466 94616 cli_runner.go:115] Run: sudo -n podman version --format {{.Version}}
I0624 11:47:14.231740 94616 cli_runner.go:115] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" wireguardians
I0624 11:47:14.423951 94616 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38825 SSHKeyPath:/home/owen/.minikube/machines/wireguardians/id_rsa Username:docker}
I0624 11:47:14.536483 94616 ssh_runner.go:149] Run: cat /etc/os-release
I0624 11:47:14.544149 94616 main.go:128] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0624 11:47:14.544222 94616 main.go:128] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0624 11:47:14.544260 94616 main.go:128] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0624 11:47:14.544302 94616 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0624 11:47:14.544343 94616 filesync.go:126] Scanning /home/owen/.minikube/addons for local assets ...
I0624 11:47:14.544502 94616 filesync.go:126] Scanning /home/owen/.minikube/files for local assets ...
I0624 11:47:14.544583 94616 start.go:270] post-start completed in 466.291198ms
I0624 11:47:14.545525 94616 cli_runner.go:115] Run: sudo -n podman container inspect -f {{.NetworkSettings.IPAddress}} wireguardians
I0624 11:47:14.844779 94616 profile.go:148] Saving config to /home/owen/.minikube/profiles/wireguardians/config.json ...
I0624 11:47:14.845497 94616 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0624 11:47:14.845792 94616 cli_runner.go:115] Run: sudo -n podman version --format {{.Version}}
I0624 11:47:15.027782 94616 cli_runner.go:115] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" wireguardians
I0624 11:47:15.222338 94616 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38825 SSHKeyPath:/home/owen/.minikube/machines/wireguardians/id_rsa Username:docker}
I0624 11:47:15.352067 94616 start.go:129] duration metric: createHost completed in 9.360517959s
I0624 11:47:15.352116 94616 start.go:80] releasing machines lock for "wireguardians", held for 9.360885501s
I0624 11:47:15.352484 94616 cli_runner.go:115] Run: sudo -n podman container inspect -f {{.NetworkSettings.IPAddress}} wireguardians
I0624 11:47:15.650560 94616 ssh_runner.go:149] Run: systemctl --version
I0624 11:47:15.650601 94616 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
I0624 11:47:15.650655 94616 cli_runner.go:115] Run: sudo -n podman version --format {{.Version}}
I0624 11:47:15.650692 94616 cli_runner.go:115] Run: sudo -n podman version --format {{.Version}}
I0624 11:47:15.804457 94616 cli_runner.go:115] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" wireguardians
I0624 11:47:15.948611 94616 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38825 SSHKeyPath:/home/owen/.minikube/machines/wireguardians/id_rsa Username:docker}
I0624 11:47:16.003164 94616 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38825 SSHKeyPath:/home/owen/.minikube/machines/wireguardians/id_rsa Username:docker}
I0624 11:47:16.092365 94616 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
I0624 11:47:16.102741 94616 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0624 11:47:16.112185 94616 cruntime.go:225] skipping containerd shutdown because we are bound to it
I0624 11:47:16.112252 94616 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
I0624 11:47:16.120487 94616 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0624 11:47:16.130548 94616 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
I0624 11:47:16.190916 94616 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
I0624 11:47:16.249787 94616 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0624 11:47:16.316322 94616 ssh_runner.go:149] Run: sudo systemctl start docker
I0624 11:47:16.324927 94616 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
I0624 11:47:16.367475 94616 out.go:197] 🐳 Preparing Kubernetes v1.20.7 on Docker 20.10.7 ...
I0624 11:47:16.367645 94616 cli_runner.go:115] Run: sudo -n podman container inspect --format {{.NetworkSettings.Gateway}} wireguardians
I0624 11:47:16.520691 94616 cli_runner.go:115] Run: sudo -n podman container inspect --format "
{{ if index .NetworkSettings.Networks "wireguardians"}}
{{(index .NetworkSettings.Networks "wireguardians").Gateway}}
{{ end }}
" wireguardians
I0624 11:47:16.714988 94616 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$ ' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0624 11:47:16.732000 94616 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime docker
I0624 11:47:16.732180 94616 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0624 11:47:16.768652 94616 docker.go:535] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.20.7
k8s.gcr.io/kube-apiserver:v1.20.7
k8s.gcr.io/kube-controller-manager:v1.20.7
k8s.gcr.io/kube-scheduler:v1.20.7
gcr.io/k8s-minikube/storage-provisioner:v5
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2
-- /stdout --
I0624 11:47:16.768681 94616 docker.go:466] Images already preloaded, skipping extraction
I0624 11:47:16.768757 94616 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0624 11:47:16.797968 94616 docker.go:535] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.20.7
k8s.gcr.io/kube-apiserver:v1.20.7
k8s.gcr.io/kube-controller-manager:v1.20.7
k8s.gcr.io/kube-scheduler:v1.20.7
gcr.io/k8s-minikube/storage-provisioner:v5
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2
-- /stdout --
I0624 11:47:16.797991 94616 cache_images.go:74] Images are preloaded, skipping loading
I0624 11:47:16.798056 94616 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
I0624 11:47:16.877734 94616 cni.go:93] Creating CNI manager for ""
I0624 11:47:16.877755 94616 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0624 11:47:16.877768 94616 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0624 11:47:16.877785 94616 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.20.7 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:wireguardians NodeName:wireguardians DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0624 11:47:16.877933 94616 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - ttl: 24h0m0s
    usages:
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "wireguardians"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.7
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
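Note: in the generated config above, cgroupDriver: cgroupfs has to match what dockerd inside the node reports, and kubeadm warns about exactly that further down. A hedged way to check both sides (profile name taken from this run; stat -fc %T prints cgroup2fs on a cgroup-v2 host):

# cgroup driver the node's dockerd is using; should match the kubelet config
minikube ssh -p wireguardians -- docker info --format '{{.CgroupDriver}}'
# whether the Fedora host itself runs cgroup v1 or v2 (cgroup2fs means v2)
stat -fc %T /sys/fs/cgroup/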
I0624 11:47:16.878065 94616 kubeadm.go:909] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.7/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=wireguardians --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.20.7 ClusterName:wireguardians Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0624 11:47:16.878157 94616 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.7
I0624 11:47:16.883462 94616 binaries.go:44] Found k8s binaries, skipping transfer
I0624 11:47:16.883530 94616 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0624 11:47:16.888721 94616 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (339 bytes)
I0624 11:47:16.898402 94616 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0624 11:47:16.908207 94616 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1872 bytes)
I0624 11:47:16.918395 94616 ssh_runner.go:149] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0624 11:47:16.920530 94616 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$ ' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0624 11:47:16.927477 94616 certs.go:52] Setting up /home/owen/.minikube/profiles/wireguardians for IP: 192.168.49.2
I0624 11:47:16.927522 94616 certs.go:179] skipping minikubeCA CA generation: /home/owen/.minikube/ca.key
I0624 11:47:16.927536 94616 certs.go:179] skipping proxyClientCA CA generation: /home/owen/.minikube/proxy-client-ca.key
I0624 11:47:16.927586 94616 certs.go:294] generating minikube-user signed cert: /home/owen/.minikube/profiles/wireguardians/client.key
I0624 11:47:16.927600 94616 crypto.go:69] Generating cert /home/owen/.minikube/profiles/wireguardians/client.crt with IP's: []
I0624 11:47:17.099521 94616 lock.go:36] WriteFile acquiring /home/owen/.minikube/profiles/wireguardians/client.crt: {Name:mk2d4c100097f28d05d2cc9ca7567e7a1456304e Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0624 11:47:17.099661 94616 crypto.go:165] Writing key to /home/owen/.minikube/profiles/wireguardians/client.key ...
I0624 11:47:17.099669 94616 lock.go:36] WriteFile acquiring /home/owen/.minikube/profiles/wireguardians/client.key: {Name:mke3f20b884488d03476442fcf3c18861aa72b04 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0624 11:47:17.099720 94616 certs.go:294] generating minikube signed cert: /home/owen/.minikube/profiles/wireguardians/apiserver.key.dd3b5fb2
I0624 11:47:17.099726 94616 crypto.go:69] Generating cert /home/owen/.minikube/profiles/wireguardians/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0624 11:47:17.207650 94616 crypto.go:157] Writing cert to /home/owen/.minikube/profiles/wireguardians/apiserver.crt.dd3b5fb2 ...
I0624 11:47:17.207672 94616 lock.go:36] WriteFile acquiring /home/owen/.minikube/profiles/wireguardians/apiserver.crt.dd3b5fb2: {Name:mk281e3593b2314bafcb2d8d755c9450701bc49b Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0624 11:47:17.207784 94616 crypto.go:165] Writing key to /home/owen/.minikube/profiles/wireguardians/apiserver.key.dd3b5fb2 ...
I0624 11:47:17.207791 94616 lock.go:36] WriteFile acquiring /home/owen/.minikube/profiles/wireguardians/apiserver.key.dd3b5fb2: {Name:mkf00903d1628b8cff3872aa2b65e339203f5bc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0624 11:47:17.207838 94616 certs.go:305] copying /home/owen/.minikube/profiles/wireguardians/apiserver.crt.dd3b5fb2 -> /home/owen/.minikube/profiles/wireguardians/apiserver.crt
I0624 11:47:17.207882 94616 certs.go:309] copying /home/owen/.minikube/profiles/wireguardians/apiserver.key.dd3b5fb2 -> /home/owen/.minikube/profiles/wireguardians/apiserver.key
I0624 11:47:17.207918 94616 certs.go:294] generating aggregator signed cert: /home/owen/.minikube/profiles/wireguardians/proxy-client.key
I0624 11:47:17.207925 94616 crypto.go:69] Generating cert /home/owen/.minikube/profiles/wireguardians/proxy-client.crt with IP's: []
I0624 11:47:17.639012 94616 crypto.go:157] Writing cert to /home/owen/.minikube/profiles/wireguardians/proxy-client.crt ...
I0624 11:47:17.639034 94616 lock.go:36] WriteFile acquiring /home/owen/.minikube/profiles/wireguardians/proxy-client.crt: {Name:mk6957cafd393623911abfccfa8e6de09c317d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0624 11:47:17.639163 94616 crypto.go:165] Writing key to /home/owen/.minikube/profiles/wireguardians/proxy-client.key ...
I0624 11:47:17.639172 94616 lock.go:36] WriteFile acquiring /home/owen/.minikube/profiles/wireguardians/proxy-client.key: {Name:mk2bb3cda71f89b81d8ea4bd19153989239b77a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0624 11:47:17.639217 94616 vm_assets.go:98] NewFileAsset: /home/owen/.minikube/profiles/wireguardians/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0624 11:47:17.639229 94616 vm_assets.go:98] NewFileAsset: /home/owen/.minikube/profiles/wireguardians/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0624 11:47:17.639237 94616 vm_assets.go:98] NewFileAsset: /home/owen/.minikube/profiles/wireguardians/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0624 11:47:17.639244 94616 vm_assets.go:98] NewFileAsset: /home/owen/.minikube/profiles/wireguardians/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0624 11:47:17.639251 94616 vm_assets.go:98] NewFileAsset: /home/owen/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0624 11:47:17.639259 94616 vm_assets.go:98] NewFileAsset: /home/owen/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0624 11:47:17.639267 94616 vm_assets.go:98] NewFileAsset: /home/owen/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0624 11:47:17.639278 94616 vm_assets.go:98] NewFileAsset: /home/owen/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0624 11:47:17.639311 94616 certs.go:369] found cert: /home/owen/.minikube/certs/home/owen/.minikube/certs/ca-key.pem (1679 bytes)
I0624 11:47:17.639339 94616 certs.go:369] found cert: /home/owen/.minikube/certs/home/owen/.minikube/certs/ca.pem (1070 bytes)
I0624 11:47:17.639358 94616 certs.go:369] found cert: /home/owen/.minikube/certs/home/owen/.minikube/certs/cert.pem (1115 bytes)
I0624 11:47:17.639379 94616 certs.go:369] found cert: /home/owen/.minikube/certs/home/owen/.minikube/certs/key.pem (1675 bytes)
I0624 11:47:17.639400 94616 vm_assets.go:98] NewFileAsset: /home/owen/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0624 11:47:17.640042 94616 ssh_runner.go:316] scp /home/owen/.minikube/profiles/wireguardians/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0624 11:47:17.652942 94616 ssh_runner.go:316] scp /home/owen/.minikube/profiles/wireguardians/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0624 11:47:17.667353 94616 ssh_runner.go:316] scp /home/owen/.minikube/profiles/wireguardians/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0624 11:47:17.681001 94616 ssh_runner.go:316] scp /home/owen/.minikube/profiles/wireguardians/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0624 11:47:17.694729 94616 ssh_runner.go:316] scp /home/owen/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0624 11:47:17.708312 94616 ssh_runner.go:316] scp /home/owen/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0624 11:47:17.722259 94616 ssh_runner.go:316] scp /home/owen/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0624 11:47:17.736241 94616 ssh_runner.go:316] scp /home/owen/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0624 11:47:17.750351 94616 ssh_runner.go:316] scp /home/owen/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0624 11:47:17.763864 94616 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0624 11:47:17.773960 94616 ssh_runner.go:149] Run: openssl version
I0624 11:47:17.777776 94616 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0624 11:47:17.783864 94616 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0624 11:47:17.786195 94616 certs.go:410] hashing: -rw-r--r-- 1 root root 1111 Oct 27 2020 /usr/share/ca-certificates/minikubeCA.pem
I0624 11:47:17.786246 94616 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0624 11:47:17.789638 94616 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
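Note: the b5213941.0 symlink above is OpenSSL's subject-hash lookup name for minikube's CA; the hash can be reproduced from the CA file the log already copied around:

# prints the subject-name hash OpenSSL uses to find the CA under /etc/ssl/certs
openssl x509 -hash -noout -in /home/owen/.minikube/ca.crt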
I0624 11:47:17.795347 94616 kubeadm.go:390] StartCluster: {Name:wireguardians KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:3900 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.7 ClusterName:wireguardians Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0624 11:47:17.795479 94616 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*(kube-system) --format={{.ID}}
I0624 11:47:17.821854 94616 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0624 11:47:17.827200 94616 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0624 11:47:17.832708 94616 kubeadm.go:220] ignoring SystemVerification for kubeadm because of podman driver
I0624 11:47:17.832765 94616 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0624 11:47:17.837970 94616 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0624 11:47:17.837999 94616 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.7:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0624 11:47:18.411930 94616 out.go:197] ▪ Generating certificates and keys ...
💢 initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.7:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.7
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost wireguardians] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost wireguardians] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
stderr:
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
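Note: every kubelet-check line above is the same connection refused on the kubelet's healthz port, meaning the kubelet process never came up at all; the retry below fails the same way. A few probes that usually narrow this down (profile name from this run; ausearch assumes the audit tooling is installed on the Fedora host):

# the exact probe kubeadm keeps retrying, run from inside the node
minikube ssh -p wireguardians -- curl -sS http://localhost:10248/healthz
# the kubelet's own logs usually say why it refuses to start
minikube ssh -p wireguardians -- sudo journalctl -u kubelet --no-pager -n 50
# on Fedora, look for SELinux denials on the host around the failure window
sudo ausearch -m avc -ts recent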
I0624 11:49:15.657127 94616 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.7:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0624 11:49:16.065194 94616 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
I0624 11:49:16.073058 94616 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*(kube-system) --format={{.ID}}
I0624 11:49:16.099843 94616 kubeadm.go:220] ignoring SystemVerification for kubeadm because of podman driver
I0624 11:49:16.099905 94616 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0624 11:49:16.105471 94616 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0624 11:49:16.105505 94616 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.7:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0624 11:49:16.771267 94616 out.go:197] ▪ Generating certificates and keys ...
I0624 11:49:17.417888 94616 out.go:197] ▪ Booting up control plane ...
I0624 11:51:12.433731 94616 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0624 11:51:12.460943 94616 logs.go:270] 0 containers: []
W0624 11:51:12.460963 94616 logs.go:272] No container was found matching "kube-apiserver"
I0624 11:51:12.461020 94616 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0624 11:51:12.488362 94616 logs.go:270] 0 containers: []
W0624 11:51:12.488390 94616 logs.go:272] No container was found matching "etcd"
I0624 11:51:12.488444 94616 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0624 11:51:12.514976 94616 logs.go:270] 0 containers: []
W0624 11:51:12.514995 94616 logs.go:272] No container was found matching "coredns"
I0624 11:51:12.515056 94616 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0624 11:51:12.542139 94616 logs.go:270] 0 containers: []
W0624 11:51:12.542160 94616 logs.go:272] No container was found matching "kube-scheduler"
I0624 11:51:12.542217 94616 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0624 11:51:12.569174 94616 logs.go:270] 0 containers: []
W0624 11:51:12.569191 94616 logs.go:272] No container was found matching "kube-proxy"
I0624 11:51:12.569246 94616 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0624 11:51:12.596426 94616 logs.go:270] 0 containers: []
W0624 11:51:12.596442 94616 logs.go:272] No container was found matching "kubernetes-dashboard"
I0624 11:51:12.596504 94616 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0624 11:51:12.624036 94616 logs.go:270] 0 containers: []
W0624 11:51:12.624056 94616 logs.go:272] No container was found matching "storage-provisioner"
I0624 11:51:12.624121 94616 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0624 11:51:12.651233 94616 logs.go:270] 0 containers: []
W0624 11:51:12.651252 94616 logs.go:272] No container was found matching "kube-controller-manager"
I0624 11:51:12.651262 94616 logs.go:123] Gathering logs for kubelet ...
I0624 11:51:12.651272 94616 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0624 11:51:12.695359 94616 logs.go:123] Gathering logs for dmesg ...
I0624 11:51:12.695384 94616 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0624 11:51:12.706209 94616 logs.go:123] Gathering logs for describe nodes ...
I0624 11:51:12.706233 94616 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.7/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0624 11:51:12.752228 94616 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.7/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.7/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0624 11:51:12.752246 94616 logs.go:123] Gathering logs for Docker ...
I0624 11:51:12.752256 94616 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0624 11:51:12.763792 94616 logs.go:123] Gathering logs for container status ...
I0624 11:51:12.763815 94616 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W0624 11:51:12.787614 94616 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.7:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.7
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
stderr:
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0624 11:51:12.787682 94616 out.go:235]
W0624 11:51:12.787849 94616 out.go:235] 💣 Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.7:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.7
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
stderr:
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
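For reference, the failing probe in the kubelet-check loop above can be reproduced by hand from inside the node, which narrows down whether the kubelet ever came up at all. A minimal sketch — `minikube ssh` opens a shell in the node container; the unit name, port, and paths below are the standard defaults and are assumptions on my part, not taken from this log:

minikube ssh                                      # shell into the node container
sudo systemctl status kubelet --no-pager          # did the kubelet unit start at all?
curl -sSL http://localhost:10248/healthz          # same probe kubeadm's kubelet-check runs
sudo journalctl -u kubelet --no-pager | tail -50  # recent kubelet errors, e.g. a cgroup-driver mismatch

If `systemctl status` shows the unit crash-looping, the journalctl tail usually names the reason directly.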
W0624 11:51:12.787961 94616 out.go:235]
W0624 11:51:12.788677 94616 out.go:235] ╭────────────────────────────────────────────────────────────────────╮
W0624 11:51:12.788690 94616 out.go:235] │ │
W0624 11:51:12.788697 94616 out.go:235] │ 😿 If the above advice does not help, please let us know: │
W0624 11:51:12.788702 94616 out.go:235] │ 👉 https://github.com/kubernetes/minikube/issues/new/choose │
W0624 11:51:12.788707 94616 out.go:235] │ │
W0624 11:51:12.788713 94616 out.go:235] │ Please attach the following file to the GitHub issue: │
W0624 11:51:12.788718 94616 out.go:235] │ - /home/owen/.minikube/logs/lastStart.txt │
W0624 11:51:12.788723 94616 out.go:235] │ │
W0624 11:51:12.788728 94616 out.go:235] ╰────────────────────────────────────────────────────────────────────╯
W0624 11:51:12.788734 94616 out.go:235]
I0624 11:51:12.793631 94616 out.go:170]
W0624 11:51:12.793763 94616 out.go:235] ❌ Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.7:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.7
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
stderr:
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
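Separately, the stderr warnings above point at two quick fixes worth clearing before anything else; a rough sketch, under the assumption that the warnings originate inside the node container (so the commands are passed through `minikube ssh --`):

minikube ssh -- sudo swapoff -a                        # addresses the Swap warning for the current boot
minikube ssh -- sudo systemctl enable kubelet.service  # addresses the Service-Kubelet warning

Neither warning is necessarily fatal on its own, but clearing them makes the remaining kubelet failure easier to isolate.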
W0624 11:51:12.793937 94616 out.go:235] 💡 Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0624 11:51:12.793997 94616 out.go:235] 🍿 Related issue: #4172
I0624 11:51:12.802972 94616 out.go:170]
make: *** [Makefile:91: minikube_start] Error 109
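Turning the log's own 💡 suggestion into a concrete retry, this is roughly what I'd run next — the cgroup-driver value comes straight from the suggestion above, `--driver=podman` matches the driver this issue is about, and the initial `minikube delete` is only there to avoid stale certs/manifests from the failed init masking the result:

minikube delete
minikube start --driver=podman --extra-config=kubelet.cgroup-driver=systemd

If that still times out at wait-control-plane, the `journalctl -xeu kubelet` output the suggestion mentions would be the next thing to attach here.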