minikube v1.18.1 not finishing on Fedora 33 using Podman and-or CRI-O #10737

Closed
FilBot3 opened this issue Mar 6, 2021 · 14 comments
Labels: co/podman-driver, co/runtime/crio, kind/bug, lifecycle/rotten, os/linux, priority/awaiting-more-evidence

Comments


FilBot3 commented Mar 6, 2021

Description

Steps to reproduce the issue:

  1. sudo dnf module list cri-o
  2. sudo dnf module enable cri-o:1.20
  3. sudo dnf install cri-o
  4. sudo dnf install conntrack
  5. minikube config set driver podman
  6. sudo visudo --file=/etc/sudoers.d/podman   (rule contents sketched below, after step 10)
  7. minikube start --driver=podman --container-runtime=cri-o --extra-config=kubelet.cgroup-driver=systemd --alsologtostderr -v=7 2>&1 | tee minikube_2021-03-06_Fedroa-33_podman_on_cri-o.log
  8. sudo journalctl -xeu kubelet --no-pager
  9. minikube stop
  10. minikube delete
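
For reference, step 6 creates a sudoers drop-in that lets minikube call Podman through sudo without a password prompt. The file itself is not included in this report; a minimal sketch, based on the rule minikube suggests later in the log (the username filbot and the /usr/bin/podman path are taken from that suggestion), would be:

    # /etc/sudoers.d/podman: minimal sketch; substitute your own username
    filbot ALL=(ALL) NOPASSWD: /usr/bin/podman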

Full output of failed command:

➜  ~ neofetch
          /:-------------:\          filbot@oryx-fedora 
       :-------------------::        ------------------ 
     :-----------/shhOHbmp---:\      OS: Fedora 33 (KDE Plasma) x86_64 
   /-----------omMMMNNNMMD  ---:     Host: Oryx Pro oryp6 
  :-----------sMMMMNMNMP.    ---:    Kernel: 5.10.19-200.fc33.x86_64 
 :-----------:MMMdP-------    ---\   Uptime: 31 mins 
,------------:MMMd--------    ---:   Packages: 2364 (rpm), 24 (flatpak) 
:------------:MMMd-------    .---:   Shell: zsh 5.8 
:----    oNMMMMMMMMMNho     .----:   Resolution: 1920x1080 
:--     .+shhhMMMmhhy++   .------/   DE: Plasma 5.20.5 
:-    -------:MMMd--------------:    WM: KWin 
:-   --------/MMMd-------------;     WM Theme: plastik 
:-    ------/hMMMy------------:      Theme: Breeze Dark [Plasma], Adwaita [GTK2] 
:-- :dMNdhhdNMMNo------------;       Icons: breeze-dark [Plasma], breeze-dark [GTK2/3] 
:---:sdNMMMMNds:------------:        Terminal: konsole 
:------:://:-------------::          CPU: Intel i7-10875H (16) @ 5.100GHz 
:---------------------://            GPU: Intel CometLake-H GT2 [UHD Graphics] 
                                     Memory: 2902MiB / 31977MiB 

➜  ~ podman version
Version:      3.0.1
API Version:  3.0.0
Go Version:   go1.15.8
Built:        Fri Feb 19 10:56:17 2021
OS/Arch:      linux/amd64

➜  ~ sudo dnf module list cri-o 
Last metadata expiration check: 0:50:09 ago on Sat 06 Mar 2021 12:55:40 PM CST.
Fedora Modular 33 - x86_64
Name                                        Stream                                        Profiles                                      Summary                                                                                             
cri-o                                       nightly                                       default                                       Kubernetes Container Runtime Interface for OCI-based containers                                     
cri-o                                       1.14                                          default                                       Kubernetes Container Runtime Interface for OCI-based containers                                     
cri-o                                       1.15                                          default                                       Kubernetes Container Runtime Interface for OCI-based containers                                     
cri-o                                       1.16                                          default                                       Kubernetes Container Runtime Interface for OCI-based containers                                     
cri-o                                       1.17                                          default                                       Kubernetes Container Runtime Interface for OCI-based containers                                     
cri-o                                       1.18                                          default                                       Kubernetes Container Runtime Interface for OCI-based containers                                     

Fedora Modular 33 - x86_64 - Updates
Name                                        Stream                                        Profiles                                      Summary                                                                                             
cri-o                                       nightly                                       default                                       Kubernetes Container Runtime Interface for OCI-based containers                                     
cri-o                                       1.14                                          default                                       Kubernetes Container Runtime Interface for OCI-based containers                                     
cri-o                                       1.15                                          default                                       Kubernetes Container Runtime Interface for OCI-based containers                                     
cri-o                                       1.16                                          default                                       Kubernetes Container Runtime Interface for OCI-based containers                                     
cri-o                                       1.17                                          default                                       Kubernetes Container Runtime Interface for OCI-based containers                                     
cri-o                                       1.18                                          default                                       Kubernetes Container Runtime Interface for OCI-based containers                                     
cri-o                                       1.19                                          default                                       Kubernetes Container Runtime Interface for OCI-based containers                                     
cri-o                                       1.20                                          default                                       Kubernetes Container Runtime Interface for OCI-based containers                                     

Hint: [d]efault, [e]nabled, [x]disabled, [i]nstalled
➜  ~ sudo dnf module enable cri-o:1.20
Last metadata expiration check: 0:50:36 ago on Sat 06 Mar 2021 12:55:40 PM CST.
Dependencies resolved.
============================================================================================================================================================================================================================================
 Package                                                  Architecture                                            Version                                                    Repository                                                Size
============================================================================================================================================================================================================================================
Enabling module streams:
 cri-o                                                                                                            1.20                                                                                                                     

Transaction Summary
============================================================================================================================================================================================================================================

Is this ok [y/N]: y
Complete!
➜  ~ sudo dnf install cri-o
Last metadata expiration check: 0:50:44 ago on Sat 06 Mar 2021 12:55:40 PM CST.
Dependencies resolved.
============================================================================================================================================================================================================================================
 Package                                        Architecture                                    Version                                                                      Repository                                                Size
============================================================================================================================================================================================================================================
Installing:
 cri-o                                          x86_64                                          1.20.0-1.module_f33+10488+8050703d                                           updates-modular                                           24 M
Installing weak dependencies:
 runc                                           x86_64                                          2:1.0.0-279.dev.gitdedadbf.fc33                                              fedora                                                   3.1 M

Transaction Summary
============================================================================================================================================================================================================================================
Install  2 Packages

Total download size: 27 M
Installed size: 118 M
Is this ok [y/N]: y
Downloading Packages:
[MIRROR] runc-1.0.0-279.dev.gitdedadbf.fc33.x86_64.rpm: Status code: 404 for https://mirror.genesisadaptive.com/fedora/releases/33/Everything/x86_64/os/Packages/r/runc-1.0.0-279.dev.gitdedadbf.fc33.x86_64.rpm (IP: 64.250.112.70)       
[MIRROR] runc-1.0.0-279.dev.gitdedadbf.fc33.x86_64.rpm: Status code: 404 for http://mirror.genesisadaptive.com/fedora/releases/33/Everything/x86_64/os/Packages/r/runc-1.0.0-279.dev.gitdedadbf.fc33.x86_64.rpm (IP: 64.250.112.70)        
(1/2): runc-1.0.0-279.dev.gitdedadbf.fc33.x86_64.rpm                                                                                                                                                        3.7 MB/s | 3.1 MB     00:00    
(2/2): cri-o-1.20.0-1.module_f33+10488+8050703d.x86_64.rpm                                                                                                                                                   16 MB/s |  24 MB     00:01    
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                                                                                        14 MB/s |  27 MB     00:01     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                                                                                                                    1/1 
  Installing       : runc-2:1.0.0-279.dev.gitdedadbf.fc33.x86_64                                                                                                                                                                        1/2 
  Installing       : cri-o-1.20.0-1.module_f33+10488+8050703d.x86_64                                                                                                                                                                    2/2 
  Running scriptlet: cri-o-1.20.0-1.module_f33+10488+8050703d.x86_64                                                                                                                                                                    2/2 
ln: failed to create symbolic link '{_unitdir}/cri-o.service': No such file or directory

  Verifying        : cri-o-1.20.0-1.module_f33+10488+8050703d.x86_64                                                                                                                                                                    1/2 
  Verifying        : runc-2:1.0.0-279.dev.gitdedadbf.fc33.x86_64                                                                                                                                                                        2/2 

Installed:
  cri-o-1.20.0-1.module_f33+10488+8050703d.x86_64                                                                        runc-2:1.0.0-279.dev.gitdedadbf.fc33.x86_64                                                                       

Complete!
➜  ~ sudo systemctl start cri-o
➜  ~ sudo systemctl status cri-o
● crio.service - Container Runtime Interface for OCI (CRI-O)
     Loaded: loaded (/usr/lib/systemd/system/crio.service; disabled; vendor preset: disabled)
     Active: active (running) since Sat 2021-03-06 13:46:43 CST; 5s ago
       Docs: https://github.com/cri-o/cri-o
   Main PID: 3195 (crio)
      Tasks: 18
     Memory: 48.8M
        CPU: 369ms
     CGroup: /system.slice/crio.service
             └─3195 /usr/bin/crio

Mar 06 13:46:43 oryx-fedora crio[3195]: time="2021-03-06 13:46:43.469005070-06:00" level=info msg="Conmon does support the --sync option"
Mar 06 13:46:43 oryx-fedora crio[3195]: time="2021-03-06 13:46:43.469207269-06:00" level=info msg="No seccomp profile specified, using the internal default"
Mar 06 13:46:43 oryx-fedora crio[3195]: time="2021-03-06 13:46:43.469238676-06:00" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
Mar 06 13:46:43 oryx-fedora crio[3195]: time="2021-03-06 13:46:43.492751346-06:00" level=info msg="Found CNI network crio (type=bridge) at /etc/cni/net.d/100-crio-bridge.conf"
Mar 06 13:46:43 oryx-fedora crio[3195]: time="2021-03-06 13:46:43.510285042-06:00" level=info msg="Found CNI network 200-loopback.conf (type=loopback) at /etc/cni/net.d/200-loopback.conf"
Mar 06 13:46:43 oryx-fedora crio[3195]: time="2021-03-06 13:46:43.578769094-06:00" level=info msg="Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist"
Mar 06 13:46:43 oryx-fedora crio[3195]: time="2021-03-06 13:46:43.578849986-06:00" level=info msg="Update default CNI network name to crio"
Mar 06 13:46:43 oryx-fedora crio[3195]: W0306 13:46:43.595951    3195 hostport_manager.go:71] The binary conntrack is not installed, this can cause failures in network connection cleanup.
Mar 06 13:46:43 oryx-fedora crio[3195]: W0306 13:46:43.611977    3195 hostport_manager.go:71] The binary conntrack is not installed, this can cause failures in network connection cleanup.
Mar 06 13:46:43 oryx-fedora systemd[1]: Started Container Runtime Interface for OCI (CRI-O).
➜  ~ sudo dnf install conntrack
Last metadata expiration check: 0:51:29 ago on Sat 06 Mar 2021 12:55:40 PM CST.
Dependencies resolved.
============================================================================================================================================================================================================================================
 Package                                                            Architecture                                       Version                                                     Repository                                          Size
============================================================================================================================================================================================================================================
Installing:
 conntrack-tools                                                    x86_64                                             1.4.5-6.fc33                                                fedora                                             207 k
Installing dependencies:
 libnetfilter_cthelper                                              x86_64                                             1.0.0-18.fc33                                               fedora                                              22 k
 libnetfilter_cttimeout                                             x86_64                                             1.0.0-16.fc33                                               fedora                                              22 k
 libnetfilter_queue                                                 x86_64                                             1.0.2-16.fc33                                               fedora                                              27 k

Transaction Summary
============================================================================================================================================================================================================================================
Install  4 Packages

Total download size: 278 k
Installed size: 769 k
Is this ok [y/N]: y
Downloading Packages:
(1/4): libnetfilter_cthelper-1.0.0-18.fc33.x86_64.rpm                                                                                                                                                        82 kB/s |  22 kB     00:00    
(2/4): libnetfilter_cttimeout-1.0.0-16.fc33.x86_64.rpm                                                                                                                                                       82 kB/s |  22 kB     00:00    
(3/4): libnetfilter_queue-1.0.2-16.fc33.x86_64.rpm                                                                                                                                                          171 kB/s |  27 kB     00:00    
(4/4): conntrack-tools-1.4.5-6.fc33.x86_64.rpm                                                                                                                                                              321 kB/s | 207 kB     00:00    
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                                                                                       311 kB/s | 278 kB     00:00     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                                                                                                                    1/1 
  Installing       : libnetfilter_queue-1.0.2-16.fc33.x86_64                                                                                                                                                                            1/4 
  Installing       : libnetfilter_cttimeout-1.0.0-16.fc33.x86_64                                                                                                                                                                        2/4 
  Installing       : libnetfilter_cthelper-1.0.0-18.fc33.x86_64                                                                                                                                                                         3/4 
  Installing       : conntrack-tools-1.4.5-6.fc33.x86_64                                                                                                                                                                                4/4 
  Running scriptlet: conntrack-tools-1.4.5-6.fc33.x86_64                                                                                                                                                                                4/4 
  Verifying        : conntrack-tools-1.4.5-6.fc33.x86_64                                                                                                                                                                                1/4 
  Verifying        : libnetfilter_cthelper-1.0.0-18.fc33.x86_64                                                                                                                                                                         2/4 
  Verifying        : libnetfilter_cttimeout-1.0.0-16.fc33.x86_64                                                                                                                                                                        3/4 
  Verifying        : libnetfilter_queue-1.0.2-16.fc33.x86_64                                                                                                                                                                            4/4 

Installed:
  conntrack-tools-1.4.5-6.fc33.x86_64                   libnetfilter_cthelper-1.0.0-18.fc33.x86_64                   libnetfilter_cttimeout-1.0.0-16.fc33.x86_64                   libnetfilter_queue-1.0.2-16.fc33.x86_64                  

Complete!
➜  ~ sudo systemctl restart cri-o
➜  ~ sudo systemctl status cri-o 
● crio.service - Container Runtime Interface for OCI (CRI-O)
     Loaded: loaded (/usr/lib/systemd/system/crio.service; disabled; vendor preset: disabled)
     Active: active (running) since Sat 2021-03-06 13:47:21 CST; 2s ago
       Docs: https://github.com/cri-o/cri-o
   Main PID: 3485 (crio)
      Tasks: 18
     Memory: 27.6M
        CPU: 147ms
     CGroup: /system.slice/crio.service
             └─3485 /usr/bin/crio

Mar 06 13:47:21 oryx-fedora crio[3485]: time="2021-03-06 13:47:21.695464504-06:00" level=info msg="Node configuration value for systemd CollectMode is true"
Mar 06 13:47:21 oryx-fedora crio[3485]: time="2021-03-06 13:47:21.696090654-06:00" level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVI>
Mar 06 13:47:21 oryx-fedora crio[3485]: time="2021-03-06 13:47:21.699071879-06:00" level=info msg="Conmon does support the --sync option"
Mar 06 13:47:21 oryx-fedora crio[3485]: time="2021-03-06 13:47:21.699196873-06:00" level=info msg="No seccomp profile specified, using the internal default"
Mar 06 13:47:21 oryx-fedora crio[3485]: time="2021-03-06 13:47:21.699216659-06:00" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
Mar 06 13:47:21 oryx-fedora crio[3485]: time="2021-03-06 13:47:21.702900287-06:00" level=info msg="Found CNI network crio (type=bridge) at /etc/cni/net.d/100-crio-bridge.conf"
Mar 06 13:47:21 oryx-fedora crio[3485]: time="2021-03-06 13:47:21.706019794-06:00" level=info msg="Found CNI network 200-loopback.conf (type=loopback) at /etc/cni/net.d/200-loopback.conf"
Mar 06 13:47:21 oryx-fedora crio[3485]: time="2021-03-06 13:47:21.718154996-06:00" level=info msg="Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist"
Mar 06 13:47:21 oryx-fedora crio[3485]: time="2021-03-06 13:47:21.718212336-06:00" level=info msg="Update default CNI network name to crio"
Mar 06 13:47:21 oryx-fedora systemd[1]: Started Container Runtime Interface for OCI (CRI-O).
➜  ~ minikube config set driver podman
❗  These changes will take effect upon a minikube delete and then a minikube start
➜  ~ minikube start --driver=podman --container-runtime=cri-o

😄  minikube v1.18.1 on Fedora 33
✨  Using the podman driver based on user configuration

💣  Exiting due to PROVIDER_PODMAN_NOT_RUNNING: "sudo -k -n podman version --format " exit status 1: sudo: a password is required
💡  Suggestion: Add your user to the 'sudoers' file: 'filbot ALL=(ALL) NOPASSWD: /usr/bin/podman'
📘  Documentation: https://podman.io
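
A quick way to confirm the new rule is in effect is to re-run the passwordless probe that minikube itself uses (per the error above):

    # Should print the Podman version without prompting for a password.
    # -k discards any cached sudo credentials, -n refuses to prompt.
    sudo -k -n podman version

If this still fails with "a password is required", the sudoers entry is not being picked up.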

➜  ~ sudo visudo --file=/etc/sudoers.d/podman
➜  ~ minikube start --driver=podman --container-runtime=cri-o --extra-config=kubelet.cgroup-driver=systemd --alsologtostderr -v=7 | tee minikube_2021-03-06_Fedroa-33_podman_on_cri-o.log
I0306 14:02:52.936432   73865 out.go:239] Setting OutFile to fd 1 ...
I0306 14:02:52.936550   73865 out.go:291] isatty.IsTerminal(1) = false
I0306 14:02:52.936560   73865 out.go:252] Setting ErrFile to fd 2...
I0306 14:02:52.936567   73865 out.go:291] isatty.IsTerminal(2) = true
I0306 14:02:52.936682   73865 root.go:308] Updating PATH: /home/filbot/.minikube/bin
I0306 14:02:52.936963   73865 out.go:246] Setting JSON to false
I0306 14:02:52.950991   73865 start.go:108] hostinfo: {"hostname":"oryx-fedora","uptime":1133,"bootTime":1615059840,"procs":365,"os":"linux","platform":"fedora","platformFamily":"fedora","platformVersion":"33","kernelVersion":"5.10.19-200.fc33.x86_64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostId":"7872287a-f4bf-4f92-80bf-61cfbfe48c7e"}
I0306 14:02:52.951080   73865 start.go:118] virtualization: kvm host
I0306 14:02:52.952396   73865 out.go:129] 😄  minikube v1.18.1 on Fedora 33
😄  minikube v1.18.1 on Fedora 33
I0306 14:02:52.952609   73865 notify.go:126] Checking for updates...
I0306 14:02:52.952713   73865 driver.go:323] Setting default libvirt URI to qemu:///system
I0306 14:02:53.064319   73865 podman.go:120] podman version: 3.0.1
I0306 14:02:53.065312   73865 out.go:129] ✨  Using the podman driver based on user configuration
I0306 14:02:53.065334   73865 start.go:276] selected driver: podman
I0306 14:02:53.065342   73865 start.go:718] validating driver "podman" against <nil>
✨  Using the podman driver based on user configuration
I0306 14:02:53.065367   73865 start.go:729] status for podman: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I0306 14:02:53.065569   73865 cli_runner.go:115] Run: sudo -n podman system info --format json
I0306 14:02:53.236648   73865 info.go:273] podman info: {Host:{BuildahVersion:1.19.4 CgroupVersion:v2 Conmon:{Package:conmon-2.0.26-1.fc33.x86_64 Path:/usr/bin/conmon Version:conmon version 2.0.26, commit: 777074ecdb5e883b9bec233f3630c5e7fa37d521} Distribution:{Distribution:fedora Version:33} MemFree:26165518336 MemTotal:33530384384 OCIRuntime:{Name:crun Package:crun-0.18-1.fc33.x86_64 Path:/usr/bin/crun Version:crun version 0.18
commit: 808420efe3dc2b44d6db9f1a3fac8361dde42a95
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL} SwapFree:4294963200 SwapTotal:4294963200 Arch:amd64 Cpus:16 Eventlogger:journald Hostname:oryx-fedora Kernel:5.10.19-200.fc33.x86_64 Os:linux Rootless:false Uptime:18m 53.18s} Registries:{Search:[registry.fedoraproject.org registry.access.redhat.com registry.centos.org docker.io]} Store:{ConfigFile:/etc/containers/storage.conf ContainerStore:{Number:0} GraphDriverName:overlay GraphOptions:{} GraphRoot:/var/lib/containers/storage GraphStatus:{BackingFilesystem:btrfs NativeOverlayDiff:true SupportsDType:true UsingMetacopy:false} ImageStore:{Number:1} RunRoot:/run/containers/storage VolumePath:/var/lib/containers/storage/volumes}}
I0306 14:02:53.236804   73865 start_flags.go:251] no existing cluster config was found, will generate one from the flags 
I0306 14:02:53.239149   73865 start_flags.go:269] Using suggested 7900MB memory alloc based on sys=31977MB, container=31977MB
I0306 14:02:53.239399   73865 start_flags.go:696] Wait components to verify : map[apiserver:true system_pods:true]
I0306 14:02:53.239440   73865 cni.go:74] Creating CNI manager for ""
I0306 14:02:53.239459   73865 cni.go:121] "podman" driver + crio runtime found, recommending kindnet
I0306 14:02:53.239494   73865 start_flags.go:390] Found "CNI" CNI - setting NetworkPlugin=cni
I0306 14:02:53.239516   73865 start_flags.go:395] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:7900 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cgroup-driver Value:systemd}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Network: MultiNodeRequested:false}
I0306 14:02:53.263867   73865 out.go:129] 👍  Starting control plane node minikube in cluster minikube
I0306 14:02:53.263924   73865 cache.go:112] Driver isn't docker, skipping base image download
👍  Starting control plane node minikube in cluster minikube
I0306 14:02:53.263969   73865 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime crio
I0306 14:02:53.264023   73865 preload.go:105] Found local preload: /home/filbot/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v9-v1.20.2-cri-o-overlay-amd64.tar.lz4
I0306 14:02:53.264039   73865 cache.go:54] Caching tarball of preloaded images
I0306 14:02:53.264070   73865 preload.go:131] Found /home/filbot/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v9-v1.20.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I0306 14:02:53.264085   73865 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on crio
I0306 14:02:53.264704   73865 profile.go:148] Saving config to /home/filbot/.minikube/profiles/minikube/config.json ...
I0306 14:02:53.264754   73865 lock.go:36] WriteFile acquiring /home/filbot/.minikube/profiles/minikube/config.json: {Name:mkde552573cc4fe111badcbccdf8dc701af1839b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0306 14:02:53.265478   73865 cache.go:185] Successfully downloaded all kic artifacts
I0306 14:02:53.265551   73865 start.go:313] acquiring machines lock for minikube: {Name:mkb45fe8b1deef9886cfd2b84df49d68ad57ae09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0306 14:02:53.265664   73865 start.go:317] acquired machines lock for "minikube" in 84.64µs
I0306 14:02:53.265696   73865 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:7900 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cgroup-driver Value:systemd}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
I0306 14:02:53.265800   73865 start.go:126] createHost starting for "" (driver="podman")
I0306 14:02:53.267211   73865 out.go:150] 🔥  Creating podman container (CPUs=2, Memory=7900MB) ...
🔥  Creating podman container (CPUs=2, Memory=7900MB) ...
I0306 14:02:53.267550   73865 start.go:160] libmachine.API.Create for "minikube" (driver="podman")
I0306 14:02:53.267603   73865 client.go:168] LocalClient.Create starting
I0306 14:02:53.267687   73865 main.go:121] libmachine: Reading certificate data from /home/filbot/.minikube/certs/ca.pem
I0306 14:02:53.267742   73865 main.go:121] libmachine: Decoding PEM data...
I0306 14:02:53.267780   73865 main.go:121] libmachine: Parsing certificate...
I0306 14:02:53.268012   73865 main.go:121] libmachine: Reading certificate data from /home/filbot/.minikube/certs/cert.pem
I0306 14:02:53.268060   73865 main.go:121] libmachine: Decoding PEM data...
I0306 14:02:53.268097   73865 main.go:121] libmachine: Parsing certificate...
I0306 14:02:53.268840   73865 cli_runner.go:115] Run: sudo -n podman network inspect minikube --format "{{range .plugins}}{{if eq .type "bridge"}}{{(index (index .ipam.ranges 0) 0).subnet}},{{(index (index .ipam.ranges 0) 0).gateway}}{{end}}{{end}}"
I0306 14:02:53.407354   73865 network_create.go:64] Found existing network {name:minikube subnet:0xc0008599b0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:0}
I0306 14:02:53.407406   73865 kic.go:101] calculated static IP "192.168.49.2" for the "minikube" container
I0306 14:02:53.407483   73865 cli_runner.go:115] Run: sudo -n podman ps -a --format {{.Names}}
I0306 14:02:53.538731   73865 cli_runner.go:115] Run: sudo -n podman volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0306 14:02:53.689437   73865 oci.go:102] Successfully created a podman volume minikube
I0306 14:02:53.689577   73865 cli_runner.go:115] Run: sudo -n podman run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.18 -d /var/lib
I0306 14:02:54.521864   73865 oci.go:106] Successfully prepared a podman volume minikube
W0306 14:02:54.521959   73865 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0306 14:02:54.521967   73865 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime crio
W0306 14:02:54.521974   73865 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
W0306 14:02:54.522031   73865 oci.go:233] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
I0306 14:02:54.522165   73865 preload.go:105] Found local preload: /home/filbot/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v9-v1.20.2-cri-o-overlay-amd64.tar.lz4
I0306 14:02:54.522198   73865 kic.go:168] Starting extracting preloaded images to volume ...
I0306 14:02:54.522233   73865 cli_runner.go:115] Run: sudo -n podman info --format "'{{json .SecurityOptions}}'"
I0306 14:02:54.522313   73865 cli_runner.go:115] Run: sudo -n podman run --rm --entrypoint /usr/bin/tar --security-opt label=disable -v /home/filbot/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v9-v1.20.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18 -I lz4 -xf /preloaded.tar -C /extractDir
W0306 14:02:54.678707   73865 cli_runner.go:162] sudo -n podman info --format "'{{json .SecurityOptions}}'" returned with exit code 125
I0306 14:02:54.679001   73865 cli_runner.go:115] Run: sudo -n podman run --cgroup-manager cgroupfs -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var:exec -e container=podman --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18
I0306 14:02:55.486325   73865 cli_runner.go:115] Run: sudo -n podman container inspect minikube --format={{.State.Running}}
I0306 14:02:55.623683   73865 cli_runner.go:115] Run: sudo -n podman container inspect minikube --format={{.State.Status}}
I0306 14:02:55.751111   73865 cli_runner.go:115] Run: sudo -n podman exec minikube stat /var/lib/dpkg/alternatives/iptables
I0306 14:02:56.009619   73865 oci.go:278] the created container "minikube" has a running status.
I0306 14:02:56.009646   73865 kic.go:199] Creating ssh key for kic: /home/filbot/.minikube/machines/minikube/id_rsa...
I0306 14:02:56.266857   73865 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/machines/minikube/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0306 14:02:56.266956   73865 kic_runner.go:188] podman (temp): /home/filbot/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0306 14:02:56.267280   73865 kic_runner.go:252] Run: /usr/bin/sudo -n podman cp /tmp/tmpf-memory-asset793036691 minikube:/home/docker/.ssh/authorized_keys
I0306 14:02:56.843521   73865 cli_runner.go:115] Run: sudo -n podman container inspect minikube --format={{.State.Status}}
I0306 14:02:56.957517   73865 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0306 14:02:56.957574   73865 kic_runner.go:115] Args: [sudo -n podman exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0306 14:02:59.058589   73865 cli_runner.go:168] Completed: sudo -n podman run --rm --entrypoint /usr/bin/tar --security-opt label=disable -v /home/filbot/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v9-v1.20.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18 -I lz4 -xf /preloaded.tar -C /extractDir: (4.536184691s)
I0306 14:02:59.058621   73865 kic.go:177] duration metric: took 4.536421 seconds to extract preloaded images to volume
I0306 14:02:59.058921   73865 cli_runner.go:115] Run: sudo -n podman container inspect minikube --format={{.State.Status}}
I0306 14:02:59.219229   73865 machine.go:88] provisioning docker machine ...
I0306 14:02:59.219303   73865 ubuntu.go:169] provisioning hostname "minikube"
I0306 14:02:59.219521   73865 cli_runner.go:115] Run: sudo -n podman version --format {{.Version}}
I0306 14:02:59.359217   73865 cli_runner.go:115] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0306 14:02:59.480570   73865 main.go:121] libmachine: Using SSH client type: native
I0306 14:02:59.480785   73865 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x7fb7a0] 0x7fb760 <nil>  [] 0s} 127.0.0.1 37413 <nil> <nil>}
I0306 14:02:59.480813   73865 main.go:121] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0306 14:02:59.631688   73865 main.go:121] libmachine: SSH cmd err, output: <nil>: minikube

I0306 14:02:59.631841   73865 cli_runner.go:115] Run: sudo -n podman version --format {{.Version}}
I0306 14:02:59.773243   73865 cli_runner.go:115] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0306 14:02:59.909735   73865 main.go:121] libmachine: Using SSH client type: native
I0306 14:02:59.910057   73865 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x7fb7a0] 0x7fb760 <nil>  [] 0s} 127.0.0.1 37413 <nil> <nil>}
I0306 14:02:59.910096   73865 main.go:121] libmachine: About to run SSH command:

                if ! grep -xq '.*\sminikube' /etc/hosts; then
                        if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                                sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
                        else 
                                echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
                        fi
                fi
I0306 14:03:00.061690   73865 main.go:121] libmachine: SSH cmd err, output: <nil>: 
I0306 14:03:00.061750   73865 ubuntu.go:175] set auth options {CertDir:/home/filbot/.minikube CaCertPath:/home/filbot/.minikube/certs/ca.pem CaPrivateKeyPath:/home/filbot/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/filbot/.minikube/machines/server.pem ServerKeyPath:/home/filbot/.minikube/machines/server-key.pem ClientKeyPath:/home/filbot/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/filbot/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/filbot/.minikube}
I0306 14:03:00.061798   73865 ubuntu.go:177] setting up certificates
I0306 14:03:00.061820   73865 provision.go:83] configureAuth start
I0306 14:03:00.061944   73865 cli_runner.go:115] Run: sudo -n podman container inspect -f {{.NetworkSettings.IPAddress}} minikube
I0306 14:03:00.190656   73865 cli_runner.go:115] Run: sudo -n podman container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0306 14:03:00.311239   73865 provision.go:137] copyHostCerts
I0306 14:03:00.311282   73865 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/certs/ca.pem -> /home/filbot/.minikube/ca.pem
I0306 14:03:00.311314   73865 exec_runner.go:145] found /home/filbot/.minikube/ca.pem, removing ...
I0306 14:03:00.311328   73865 exec_runner.go:190] rm: /home/filbot/.minikube/ca.pem
I0306 14:03:00.311468   73865 exec_runner.go:152] cp: /home/filbot/.minikube/certs/ca.pem --> /home/filbot/.minikube/ca.pem (1078 bytes)
I0306 14:03:00.311592   73865 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/certs/cert.pem -> /home/filbot/.minikube/cert.pem
I0306 14:03:00.311623   73865 exec_runner.go:145] found /home/filbot/.minikube/cert.pem, removing ...
I0306 14:03:00.311634   73865 exec_runner.go:190] rm: /home/filbot/.minikube/cert.pem
I0306 14:03:00.311689   73865 exec_runner.go:152] cp: /home/filbot/.minikube/certs/cert.pem --> /home/filbot/.minikube/cert.pem (1123 bytes)
I0306 14:03:00.311770   73865 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/certs/key.pem -> /home/filbot/.minikube/key.pem
I0306 14:03:00.311801   73865 exec_runner.go:145] found /home/filbot/.minikube/key.pem, removing ...
I0306 14:03:00.311812   73865 exec_runner.go:190] rm: /home/filbot/.minikube/key.pem
I0306 14:03:00.311862   73865 exec_runner.go:152] cp: /home/filbot/.minikube/certs/key.pem --> /home/filbot/.minikube/key.pem (1679 bytes)
I0306 14:03:00.311939   73865 provision.go:111] generating server cert: /home/filbot/.minikube/machines/server.pem ca-key=/home/filbot/.minikube/certs/ca.pem private-key=/home/filbot/.minikube/certs/ca-key.pem org=filbot.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0306 14:03:00.472466   73865 provision.go:165] copyRemoteCerts
I0306 14:03:00.472523   73865 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0306 14:03:00.472556   73865 cli_runner.go:115] Run: sudo -n podman version --format {{.Version}}
I0306 14:03:00.600674   73865 cli_runner.go:115] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0306 14:03:00.751387   73865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37413 SSHKeyPath:/home/filbot/.minikube/machines/minikube/id_rsa Username:docker}
I0306 14:03:00.852249   73865 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0306 14:03:00.852353   73865 ssh_runner.go:316] scp /home/filbot/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0306 14:03:00.902934   73865 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/machines/server.pem -> /etc/docker/server.pem
I0306 14:03:00.903042   73865 ssh_runner.go:316] scp /home/filbot/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
I0306 14:03:00.950157   73865 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0306 14:03:00.950279   73865 ssh_runner.go:316] scp /home/filbot/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0306 14:03:00.996382   73865 provision.go:86] duration metric: configureAuth took 934.525735ms
I0306 14:03:00.996428   73865 ubuntu.go:193] setting minikube options for container-runtime
I0306 14:03:00.996961   73865 cli_runner.go:115] Run: sudo -n podman version --format {{.Version}}
I0306 14:03:01.123001   73865 cli_runner.go:115] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0306 14:03:01.252375   73865 main.go:121] libmachine: Using SSH client type: native
I0306 14:03:01.252662   73865 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x7fb7a0] 0x7fb760 <nil>  [] 0s} 127.0.0.1 37413 <nil> <nil>}
I0306 14:03:01.252706   73865 main.go:121] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube
I0306 14:03:01.438384   73865 main.go:121] libmachine: SSH cmd err, output: <nil>: 
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '

I0306 14:03:01.438432   73865 machine.go:91] provisioned docker machine in 2.219158989s
I0306 14:03:01.438459   73865 client.go:171] LocalClient.Create took 8.17084122s
I0306 14:03:01.438491   73865 start.go:168] duration metric: libmachine.API.Create for "minikube" took 8.17093873s
I0306 14:03:01.438525   73865 start.go:267] post-start starting for "minikube" (driver="podman")
I0306 14:03:01.438543   73865 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0306 14:03:01.438637   73865 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0306 14:03:01.438736   73865 cli_runner.go:115] Run: sudo -n podman version --format {{.Version}}
I0306 14:03:01.587931   73865 cli_runner.go:115] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0306 14:03:01.739894   73865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37413 SSHKeyPath:/home/filbot/.minikube/machines/minikube/id_rsa Username:docker}
I0306 14:03:01.829034   73865 ssh_runner.go:149] Run: cat /etc/os-release
I0306 14:03:01.831519   73865 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0306 14:03:01.831540   73865 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0306 14:03:01.831581   73865 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0306 14:03:01.831590   73865 info.go:137] Remote host: Ubuntu 20.04.1 LTS
I0306 14:03:01.831600   73865 filesync.go:118] Scanning /home/filbot/.minikube/addons for local assets ...
I0306 14:03:01.831660   73865 filesync.go:118] Scanning /home/filbot/.minikube/files for local assets ...
I0306 14:03:01.831705   73865 start.go:270] post-start completed in 393.165763ms
I0306 14:03:01.831990   73865 cli_runner.go:115] Run: sudo -n podman container inspect -f {{.NetworkSettings.IPAddress}} minikube
I0306 14:03:01.945013   73865 cli_runner.go:115] Run: sudo -n podman container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0306 14:03:02.101141   73865 profile.go:148] Saving config to /home/filbot/.minikube/profiles/minikube/config.json ...
I0306 14:03:02.101632   73865 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0306 14:03:02.101687   73865 cli_runner.go:115] Run: sudo -n podman version --format {{.Version}}
I0306 14:03:02.204456   73865 cli_runner.go:115] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0306 14:03:02.321956   73865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37413 SSHKeyPath:/home/filbot/.minikube/machines/minikube/id_rsa Username:docker}
I0306 14:03:02.411744   73865 start.go:129] duration metric: createHost completed in 9.145921567s
I0306 14:03:02.411795   73865 start.go:80] releasing machines lock for "minikube", held for 9.146108131s
I0306 14:03:02.411953   73865 cli_runner.go:115] Run: sudo -n podman container inspect -f {{.NetworkSettings.IPAddress}} minikube
I0306 14:03:02.596739   73865 cli_runner.go:115] Run: sudo -n podman container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0306 14:03:02.746483   73865 ssh_runner.go:149] Run: systemctl --version
I0306 14:03:02.746501   73865 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
I0306 14:03:02.746548   73865 cli_runner.go:115] Run: sudo -n podman version --format {{.Version}}
I0306 14:03:02.746576   73865 cli_runner.go:115] Run: sudo -n podman version --format {{.Version}}
I0306 14:03:02.870604   73865 cli_runner.go:115] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0306 14:03:02.872340   73865 cli_runner.go:115] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0306 14:03:02.997461   73865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37413 SSHKeyPath:/home/filbot/.minikube/machines/minikube/id_rsa Username:docker}
I0306 14:03:03.053572   73865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37413 SSHKeyPath:/home/filbot/.minikube/machines/minikube/id_rsa Username:docker}
I0306 14:03:03.240640   73865 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
I0306 14:03:03.267373   73865 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
I0306 14:03:03.317138   73865 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
I0306 14:03:03.343773   73865 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
I0306 14:03:03.370544   73865 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
image-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I0306 14:03:03.407403   73865 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.2"|' -i /etc/crio/crio.conf"
I0306 14:03:03.430558   73865 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0306 14:03:03.448192   73865 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0306 14:03:03.458522   73865 ssh_runner.go:149] Run: sudo systemctl daemon-reload
I0306 14:03:03.570641   73865 ssh_runner.go:149] Run: sudo systemctl start crio
I0306 14:03:03.856284   73865 start.go:316] Will wait 60s for socket path /var/run/crio/crio.sock
I0306 14:03:03.856395   73865 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
I0306 14:03:03.864715   73865 ssh_runner.go:149] Run: crio --version
I0306 14:03:03.941204   73865 out.go:129] 🎁  Preparing Kubernetes v1.20.2 on CRI-O 1.20.0 ...
🎁  Preparing Kubernetes v1.20.2 on CRI-O 1.20.0 ...
I0306 14:03:03.941348   73865 cli_runner.go:115] Run: sudo -n podman container inspect --format {{.NetworkSettings.Gateway}} minikube
I0306 14:03:04.058223   73865 ssh_runner.go:149] Run: grep <nil>        host.minikube.internal$ /etc/hosts
I0306 14:03:04.061566   73865 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "<nil>       host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0306 14:03:04.084629   73865 out.go:129]     ▪ kubelet.cgroup-driver=systemd
    ▪ kubelet.cgroup-driver=systemd
I0306 14:03:04.084712   73865 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime crio
I0306 14:03:04.084758   73865 preload.go:105] Found local preload: /home/filbot/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v9-v1.20.2-cri-o-overlay-amd64.tar.lz4
I0306 14:03:04.084805   73865 ssh_runner.go:149] Run: sudo crictl images --output json
I0306 14:03:04.141502   73865 crio.go:345] all images are preloaded for cri-o runtime.
I0306 14:03:04.141527   73865 crio.go:260] Images already preloaded, skipping extraction
I0306 14:03:04.141566   73865 ssh_runner.go:149] Run: sudo crictl images --output json
I0306 14:03:04.158909   73865 crio.go:345] all images are preloaded for cri-o runtime.
I0306 14:03:04.158935   73865 cache_images.go:73] Images are preloaded, skipping loading
I0306 14:03:04.158980   73865 ssh_runner.go:149] Run: crio config
I0306 14:03:04.237610   73865 cni.go:74] Creating CNI manager for ""
I0306 14:03:04.237632   73865 cni.go:121] "podman" driver + crio runtime found, recommending kindnet
I0306 14:03:04.237642   73865 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0306 14:03:04.237658   73865 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0306 14:03:04.237787   73865 kubeadm.go:154] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/crio/crio.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249

I0306 14:03:04.237940   73865 kubeadm.go:919] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=systemd --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=minikube --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m

[Install]
 config:
{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cgroup-driver Value:systemd}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0306 14:03:04.238002   73865 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
I0306 14:03:04.246506   73865 binaries.go:44] Found k8s binaries, skipping transfer
I0306 14:03:04.246557   73865 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0306 14:03:04.254728   73865 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (511 bytes)
I0306 14:03:04.270187   73865 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0306 14:03:04.286471   73865 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1838 bytes)
I0306 14:03:04.303698   73865 ssh_runner.go:149] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0306 14:03:04.307004   73865 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "192.168.49.2       control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0306 14:03:04.318568   73865 certs.go:52] Setting up /home/filbot/.minikube/profiles/minikube for IP: 192.168.49.2
I0306 14:03:04.318656   73865 certs.go:171] skipping minikubeCA CA generation: /home/filbot/.minikube/ca.key
I0306 14:03:04.318702   73865 certs.go:171] skipping proxyClientCA CA generation: /home/filbot/.minikube/proxy-client-ca.key
I0306 14:03:04.318762   73865 certs.go:279] generating minikube-user signed cert: /home/filbot/.minikube/profiles/minikube/client.key
I0306 14:03:04.318776   73865 crypto.go:69] Generating cert /home/filbot/.minikube/profiles/minikube/client.crt with IP's: []
I0306 14:03:04.516997   73865 crypto.go:157] Writing cert to /home/filbot/.minikube/profiles/minikube/client.crt ...
I0306 14:03:04.517035   73865 lock.go:36] WriteFile acquiring /home/filbot/.minikube/profiles/minikube/client.crt: {Name:mk38f2e53a26a660a8ca42427d273aa5beb3ccab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0306 14:03:04.517472   73865 crypto.go:165] Writing key to /home/filbot/.minikube/profiles/minikube/client.key ...
I0306 14:03:04.517481   73865 lock.go:36] WriteFile acquiring /home/filbot/.minikube/profiles/minikube/client.key: {Name:mkdf6546b3ad86d5f6c8bcb5a998110a20e341d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0306 14:03:04.517692   73865 certs.go:279] generating minikube signed cert: /home/filbot/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
I0306 14:03:04.517717   73865 crypto.go:69] Generating cert /home/filbot/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0306 14:03:04.599758   73865 crypto.go:157] Writing cert to /home/filbot/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ...
I0306 14:03:04.599776   73865 lock.go:36] WriteFile acquiring /home/filbot/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mkb9a4037bd1b0390d311aa3c944d4ed10024f42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0306 14:03:04.600059   73865 crypto.go:165] Writing key to /home/filbot/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ...
I0306 14:03:04.600068   73865 lock.go:36] WriteFile acquiring /home/filbot/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mk90feeb569bacbb36c9355394dc89da215946ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0306 14:03:04.600268   73865 certs.go:290] copying /home/filbot/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /home/filbot/.minikube/profiles/minikube/apiserver.crt
I0306 14:03:04.600398   73865 certs.go:294] copying /home/filbot/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /home/filbot/.minikube/profiles/minikube/apiserver.key
I0306 14:03:04.600536   73865 certs.go:279] generating aggregator signed cert: /home/filbot/.minikube/profiles/minikube/proxy-client.key
I0306 14:03:04.600544   73865 crypto.go:69] Generating cert /home/filbot/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0306 14:03:04.691313   73865 crypto.go:157] Writing cert to /home/filbot/.minikube/profiles/minikube/proxy-client.crt ...
I0306 14:03:04.691354   73865 lock.go:36] WriteFile acquiring /home/filbot/.minikube/profiles/minikube/proxy-client.crt: {Name:mka6499079057c8e41a72c98ff24037a0c7ae319 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0306 14:03:04.691534   73865 crypto.go:165] Writing key to /home/filbot/.minikube/profiles/minikube/proxy-client.key ...
I0306 14:03:04.691560   73865 lock.go:36] WriteFile acquiring /home/filbot/.minikube/profiles/minikube/proxy-client.key: {Name:mk2a89d42af381d951e18297cc690ed8fd50bb29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0306 14:03:04.691707   73865 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/profiles/minikube/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0306 14:03:04.691746   73865 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/profiles/minikube/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0306 14:03:04.691777   73865 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/profiles/minikube/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0306 14:03:04.691813   73865 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/profiles/minikube/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0306 14:03:04.691830   73865 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0306 14:03:04.691847   73865 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0306 14:03:04.691862   73865 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0306 14:03:04.691878   73865 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0306 14:03:04.691927   73865 certs.go:354] found cert: /home/filbot/.minikube/certs/home/filbot/.minikube/certs/ca-key.pem (1679 bytes)
I0306 14:03:04.691999   73865 certs.go:354] found cert: /home/filbot/.minikube/certs/home/filbot/.minikube/certs/ca.pem (1078 bytes)
I0306 14:03:04.692033   73865 certs.go:354] found cert: /home/filbot/.minikube/certs/home/filbot/.minikube/certs/cert.pem (1123 bytes)
I0306 14:03:04.692067   73865 certs.go:354] found cert: /home/filbot/.minikube/certs/home/filbot/.minikube/certs/key.pem (1679 bytes)
I0306 14:03:04.692103   73865 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0306 14:03:04.693052   73865 ssh_runner.go:316] scp /home/filbot/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0306 14:03:04.710850   73865 ssh_runner.go:316] scp /home/filbot/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0306 14:03:04.731636   73865 ssh_runner.go:316] scp /home/filbot/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0306 14:03:04.753553   73865 ssh_runner.go:316] scp /home/filbot/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0306 14:03:04.773398   73865 ssh_runner.go:316] scp /home/filbot/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0306 14:03:04.794553   73865 ssh_runner.go:316] scp /home/filbot/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0306 14:03:04.819466   73865 ssh_runner.go:316] scp /home/filbot/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0306 14:03:04.844293   73865 ssh_runner.go:316] scp /home/filbot/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0306 14:03:04.871311   73865 ssh_runner.go:316] scp /home/filbot/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0306 14:03:04.901954   73865 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0306 14:03:04.924264   73865 ssh_runner.go:149] Run: openssl version
I0306 14:03:04.932368   73865 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0306 14:03:04.945150   73865 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0306 14:03:04.950490   73865 certs.go:395] hashing: -rw-r--r--. 1 root root 1111 Mar  6 19:50 /usr/share/ca-certificates/minikubeCA.pem
I0306 14:03:04.950557   73865 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0306 14:03:04.959084   73865 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0306 14:03:04.971850   73865 kubeadm.go:385] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:7900 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cgroup-driver Value:systemd}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Network: MultiNodeRequested:false}
I0306 14:03:04.972019   73865 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I0306 14:03:04.972108   73865 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0306 14:03:04.992721   73865 cri.go:76] found id: ""
I0306 14:03:04.992785   73865 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0306 14:03:05.002006   73865 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0306 14:03:05.011015   73865 kubeadm.go:219] ignoring SystemVerification for kubeadm because of podman driver
I0306 14:03:05.011074   73865 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0306 14:03:05.019987   73865 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0306 14:03:05.020029   73865 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0306 14:07:07.220960   73865 out.go:150]     ▪ Generating certificates and keys ...
    ▪ Generating certificates and keys ...
I0306 14:07:07.287232   73865 out.go:150]     ▪ Booting up control plane ...
    ▪ Booting up control plane ...
W0306 14:07:07.296565   73865 out.go:191] 💢  initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
                - 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'


stderr:
        [WARNING Swap]: running with swap on is not supported. Please disable swap
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

💢  initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
                - 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'


stderr:
        [WARNING Swap]: running with swap on is not supported. Please disable swap
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

I0306 14:07:07.297008   73865 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
I0306 14:07:08.040525   73865 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
I0306 14:07:08.051002   73865 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
I0306 14:07:08.051077   73865 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0306 14:07:08.067975   73865 cri.go:76] found id: ""
I0306 14:07:08.068034   73865 kubeadm.go:219] ignoring SystemVerification for kubeadm because of podman driver
I0306 14:07:08.068086   73865 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0306 14:07:08.080734   73865 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0306 14:07:08.080803   73865 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0306 14:07:08.359108   73865 out.go:150]     ▪ Generating certificates and keys ...
    ▪ Generating certificates and keys ...
I0306 14:07:09.209987   73865 out.go:150]     ▪ Booting up control plane ...
    ▪ Booting up control plane ...
I0306 14:09:04.236911   73865 kubeadm.go:387] StartCluster complete in 5m59.265065271s
I0306 14:09:04.237005   73865 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
I0306 14:09:04.237107   73865 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0306 14:09:04.266655   73865 cri.go:76] found id: ""
I0306 14:09:04.266676   73865 logs.go:255] 0 containers: []
W0306 14:09:04.266685   73865 logs.go:257] No container was found matching "kube-apiserver"
I0306 14:09:04.266695   73865 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
I0306 14:09:04.266736   73865 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
I0306 14:09:04.282038   73865 cri.go:76] found id: ""
I0306 14:09:04.282066   73865 logs.go:255] 0 containers: []
W0306 14:09:04.282084   73865 logs.go:257] No container was found matching "etcd"
I0306 14:09:04.282098   73865 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
I0306 14:09:04.282153   73865 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
I0306 14:09:04.296980   73865 cri.go:76] found id: ""
I0306 14:09:04.297003   73865 logs.go:255] 0 containers: []
W0306 14:09:04.297015   73865 logs.go:257] No container was found matching "coredns"
I0306 14:09:04.297026   73865 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
I0306 14:09:04.297070   73865 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0306 14:09:04.312235   73865 cri.go:76] found id: ""
I0306 14:09:04.312255   73865 logs.go:255] 0 containers: []
W0306 14:09:04.312285   73865 logs.go:257] No container was found matching "kube-scheduler"
I0306 14:09:04.312311   73865 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
I0306 14:09:04.312347   73865 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0306 14:09:04.327155   73865 cri.go:76] found id: ""
I0306 14:09:04.327192   73865 logs.go:255] 0 containers: []
W0306 14:09:04.327227   73865 logs.go:257] No container was found matching "kube-proxy"
I0306 14:09:04.327235   73865 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
I0306 14:09:04.327324   73865 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0306 14:09:04.341634   73865 cri.go:76] found id: ""
I0306 14:09:04.341673   73865 logs.go:255] 0 containers: []
W0306 14:09:04.341684   73865 logs.go:257] No container was found matching "kubernetes-dashboard"
I0306 14:09:04.341693   73865 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
I0306 14:09:04.341733   73865 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0306 14:09:04.356730   73865 cri.go:76] found id: ""
I0306 14:09:04.356754   73865 logs.go:255] 0 containers: []
W0306 14:09:04.356765   73865 logs.go:257] No container was found matching "storage-provisioner"
I0306 14:09:04.356776   73865 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
I0306 14:09:04.356818   73865 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0306 14:09:04.371589   73865 cri.go:76] found id: ""
I0306 14:09:04.371610   73865 logs.go:255] 0 containers: []
W0306 14:09:04.371627   73865 logs.go:257] No container was found matching "kube-controller-manager"
I0306 14:09:04.371647   73865 logs.go:122] Gathering logs for CRI-O ...
I0306 14:09:04.371667   73865 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
I0306 14:09:04.491152   73865 logs.go:122] Gathering logs for container status ...
I0306 14:09:04.491244   73865 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0306 14:09:04.510562   73865 logs.go:122] Gathering logs for kubelet ...
I0306 14:09:04.510616   73865 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0306 14:09:04.577535   73865 logs.go:122] Gathering logs for dmesg ...
I0306 14:09:04.577570   73865 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0306 14:09:04.591172   73865 logs.go:122] Gathering logs for describe nodes ...
I0306 14:09:04.591209   73865 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0306 14:09:04.663807   73865 logs.go:129] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:

stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
 output: 
** stderr ** 
The connection to the server localhost:8443 was refused - did you specify the right host or port?

** /stderr **
W0306 14:09:04.663871   73865 out.go:312] Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
                - 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'


stderr:
        [WARNING Swap]: running with swap on is not supported. Please disable swap
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0306 14:09:04.664041   73865 out.go:191] 

W0306 14:09:04.664287   73865 out.go:191] 💣  Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
                - 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'


stderr:
        [WARNING Swap]: running with swap on is not supported. Please disable swap
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

💣  Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
                - 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'


stderr:
        [WARNING Swap]: running with swap on is not supported. Please disable swap
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

W0306 14:09:04.664586   73865 out.go:191] 

W0306 14:09:04.664633   73865 out.go:191] 😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
W0306 14:09:04.664684   73865 out.go:191] 👉  https://github.com/kubernetes/minikube/issues/new/choose
👉  https://github.com/kubernetes/minikube/issues/new/choose
I0306 14:09:04.683626   73865 out.go:129] 

W0306 14:09:04.684021   73865 out.go:191] ❌  Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
                - 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'


stderr:
        [WARNING Swap]: running with swap on is not supported. Please disable swap
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

❌  Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
                - 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'


stderr:
        [WARNING Swap]: running with swap on is not supported. Please disable swap
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

W0306 14:09:04.684413   73865 out.go:191] 💡  Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
💡  Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0306 14:09:04.684483   73865 out.go:191] 🍿  Related issue: https://github.com/kubernetes/minikube/issues/4172
🍿  Related issue: https://github.com/kubernetes/minikube/issues/4172
I0306 14:09:04.692381   73865 out.go:129] 

➜  ~ sudo journalctl -xeu kubelet --no-pager
[sudo] password for filbot: 
-- Logs begin at Tue 2021-03-02 21:26:42 CST, end at Sat 2021-03-06 14:09:45 CST. --
-- No entries --
➜  ~ minikube stop                                                                                                                                                                       
✋  Stopping node "minikube"  ...
🛑  Powering off "minikube" via SSH ...
🛑  1 nodes stopped.
➜  ~ minikube delete                                                                                                                                                                     
🔥  Deleting "minikube" in podman ...
🔥  Deleting container "minikube" ...
🔥  Removing /home/filbot/.minikube/machines/minikube ...
💀  Removed all traces of the "minikube" cluster.
➜  ~ 

Full output of minikube start command used, if not already included:

See above.
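Note: the host-level `sudo journalctl -xeu kubelet` above shows "No entries" because, with the podman driver, the kubelet runs inside the `minikube` node container rather than on the host. A minimal sketch of how the kubelet journal could be read from inside the node instead, assuming the rootful podman container created above is still running and is named `minikube` (as the start logs suggest):

# enter the node container over SSH and read the kubelet unit logs
minikube ssh -- sudo journalctl -u kubelet --no-pager

# or go through podman directly, since the driver created a rootful container named "minikube"
sudo podman exec -it minikube journalctl -u kubelet --no-pager

The `minikube logs` output below captures the same kubelet journal under its "==> kubelet <==" section.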

Minikube Logs

Optional: Full output of minikube logs command:

➜  ~ minikube logs 2>&1 | tee minikube_log_output.log
* ==> CRI-O <==
* -- Logs begin at Sat 2021-03-06 20:35:31 UTC, end at Sat 2021-03-06 20:46:48 UTC. --
* Mar 06 20:46:11 minikube crio[353]: time="2021-03-06 20:46:11.390094377Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=085cd399-2d42-44bf-99d1-7878321918ef name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:11 minikube crio[353]: time="2021-03-06 20:46:11.400315431Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=085cd399-2d42-44bf-99d1-7878321918ef name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:12 minikube crio[353]: time="2021-03-06 20:46:12.594737255Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=64780c8e-2924-4d4a-bec0-2edd0966d7de name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:12 minikube crio[353]: time="2021-03-06 20:46:12.597568098Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=64780c8e-2924-4d4a-bec0-2edd0966d7de name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:13 minikube crio[353]: time="2021-03-06 20:46:13.843047585Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=03699315-96f0-4c67-8e67-ba154a959ac8 name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:13 minikube crio[353]: time="2021-03-06 20:46:13.844767871Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=03699315-96f0-4c67-8e67-ba154a959ac8 name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:15 minikube crio[353]: time="2021-03-06 20:46:15.144365632Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=ae914578-3eab-4435-b75e-f66957e33821 name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:15 minikube crio[353]: time="2021-03-06 20:46:15.146376674Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=ae914578-3eab-4435-b75e-f66957e33821 name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:16 minikube crio[353]: time="2021-03-06 20:46:16.393811843Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=62106cf5-eb7e-4f7e-abf1-ba9accba7e79 name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:16 minikube crio[353]: time="2021-03-06 20:46:16.398304121Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=62106cf5-eb7e-4f7e-abf1-ba9accba7e79 name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:17 minikube crio[353]: time="2021-03-06 20:46:17.680009587Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=01ccf487-5220-4940-859d-29652d10ed76 name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:17 minikube crio[353]: time="2021-03-06 20:46:17.682456027Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=01ccf487-5220-4940-859d-29652d10ed76 name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:18 minikube crio[353]: time="2021-03-06 20:46:18.860973473Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=fa51c847-8b73-428c-8bca-4d0201806f70 name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:18 minikube crio[353]: time="2021-03-06 20:46:18.862670740Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=fa51c847-8b73-428c-8bca-4d0201806f70 name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:20 minikube crio[353]: time="2021-03-06 20:46:20.146657422Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=c662c29f-63aa-41fd-b1d4-0deea5bc927a name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:20 minikube crio[353]: time="2021-03-06 20:46:20.149175705Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=c662c29f-63aa-41fd-b1d4-0deea5bc927a name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:21 minikube crio[353]: time="2021-03-06 20:46:21.406792594Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=2bc503b0-0388-402f-ab1b-04cc51030d5c name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:21 minikube crio[353]: time="2021-03-06 20:46:21.411072907Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2bc503b0-0388-402f-ab1b-04cc51030d5c name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:22 minikube crio[353]: time="2021-03-06 20:46:22.653283289Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=fce6b2eb-7f00-4b9f-989f-3793b55fe117 name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:22 minikube crio[353]: time="2021-03-06 20:46:22.655416503Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=fce6b2eb-7f00-4b9f-989f-3793b55fe117 name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:23 minikube crio[353]: time="2021-03-06 20:46:23.879653898Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=ce007093-9951-4f16-bea3-780d101b8f20 name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:23 minikube crio[353]: time="2021-03-06 20:46:23.884460456Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=ce007093-9951-4f16-bea3-780d101b8f20 name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:25 minikube crio[353]: time="2021-03-06 20:46:25.220698311Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=044ee5b2-472f-40e0-a190-4c19699ebd5b name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:25 minikube crio[353]: time="2021-03-06 20:46:25.222421289Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=044ee5b2-472f-40e0-a190-4c19699ebd5b name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:26 minikube crio[353]: time="2021-03-06 20:46:26.398095595Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=749a7535-0406-4ae7-aa2e-1de655da7380 name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:26 minikube crio[353]: time="2021-03-06 20:46:26.404010859Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=749a7535-0406-4ae7-aa2e-1de655da7380 name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:27 minikube crio[353]: time="2021-03-06 20:46:27.681066206Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=0cbab769-bed8-4ee9-b049-18f27ae3ecf4 name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:27 minikube crio[353]: time="2021-03-06 20:46:27.683356042Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=0cbab769-bed8-4ee9-b049-18f27ae3ecf4 name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:28 minikube crio[353]: time="2021-03-06 20:46:28.921090438Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=a6d4f6f2-8c55-46a9-97fc-f503cbd0b518 name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:28 minikube crio[353]: time="2021-03-06 20:46:28.926846059Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=a6d4f6f2-8c55-46a9-97fc-f503cbd0b518 name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:30 minikube crio[353]: time="2021-03-06 20:46:30.144390106Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=8b45fab9-c7da-407d-a675-6c16c433762f name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:30 minikube crio[353]: time="2021-03-06 20:46:30.146975366Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=8b45fab9-c7da-407d-a675-6c16c433762f name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:31 minikube crio[353]: time="2021-03-06 20:46:31.435316256Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=15083d01-291a-48a5-93ec-2c02cb3d5f64 name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:31 minikube crio[353]: time="2021-03-06 20:46:31.438547666Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=15083d01-291a-48a5-93ec-2c02cb3d5f64 name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:32 minikube crio[353]: time="2021-03-06 20:46:32.684429524Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=d417d000-fc46-411d-9094-0320bace6e8f name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:32 minikube crio[353]: time="2021-03-06 20:46:32.686567442Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=d417d000-fc46-411d-9094-0320bace6e8f name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:33 minikube crio[353]: time="2021-03-06 20:46:33.919373171Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=7d1d904e-5822-4336-89e4-6cfc31765edb name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:33 minikube crio[353]: time="2021-03-06 20:46:33.922099510Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=7d1d904e-5822-4336-89e4-6cfc31765edb name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:35 minikube crio[353]: time="2021-03-06 20:46:35.168576907Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=ac7fda81-c97c-489f-98df-412c385e5215 name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:35 minikube crio[353]: time="2021-03-06 20:46:35.170229490Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=ac7fda81-c97c-489f-98df-412c385e5215 name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:36 minikube crio[353]: time="2021-03-06 20:46:36.392695515Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=0318795a-275f-45f7-a7af-dbf062de46c6 name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:36 minikube crio[353]: time="2021-03-06 20:46:36.394486467Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=0318795a-275f-45f7-a7af-dbf062de46c6 name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:37 minikube crio[353]: time="2021-03-06 20:46:37.590810566Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=39d50eb7-5890-4152-8c4a-b6f9a5fa1657 name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:37 minikube crio[353]: time="2021-03-06 20:46:37.595308230Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=39d50eb7-5890-4152-8c4a-b6f9a5fa1657 name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:38 minikube crio[353]: time="2021-03-06 20:46:38.957514131Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=0d5c8a56-c4f6-46b6-a55c-f5e588d163db name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:38 minikube crio[353]: time="2021-03-06 20:46:38.961430291Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=0d5c8a56-c4f6-46b6-a55c-f5e588d163db name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:40 minikube crio[353]: time="2021-03-06 20:46:40.138653724Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=d42f3977-dabe-4e74-a5c9-177166296715 name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:40 minikube crio[353]: time="2021-03-06 20:46:40.143164312Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=d42f3977-dabe-4e74-a5c9-177166296715 name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:41 minikube crio[353]: time="2021-03-06 20:46:41.439270316Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=e4471f29-83a9-42d1-8a54-ebe47aae8448 name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:41 minikube crio[353]: time="2021-03-06 20:46:41.441014649Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=e4471f29-83a9-42d1-8a54-ebe47aae8448 name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:42 minikube crio[353]: time="2021-03-06 20:46:42.620151807Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=1568c33f-84f6-4049-a493-f3885a6de18f name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:42 minikube crio[353]: time="2021-03-06 20:46:42.622076607Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=1568c33f-84f6-4049-a493-f3885a6de18f name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:43 minikube crio[353]: time="2021-03-06 20:46:43.942269606Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=71afd856-3148-47dc-a698-b395507facef name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:43 minikube crio[353]: time="2021-03-06 20:46:43.945067440Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=71afd856-3148-47dc-a698-b395507facef name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:45 minikube crio[353]: time="2021-03-06 20:46:45.159544081Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=38b66e55-24c6-47ee-be26-6900f33eae8d name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:45 minikube crio[353]: time="2021-03-06 20:46:45.161705136Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=38b66e55-24c6-47ee-be26-6900f33eae8d name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:46 minikube crio[353]: time="2021-03-06 20:46:46.363155725Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=e3810452-9d69-40ce-ab90-69c1a38eed1b name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:46 minikube crio[353]: time="2021-03-06 20:46:46.365333523Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=e3810452-9d69-40ce-ab90-69c1a38eed1b name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:47 minikube crio[353]: time="2021-03-06 20:46:47.630951630Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=8cb2f5cd-9463-4ba7-a5ea-28c5fd556895 name=/runtime.v1alpha2.ImageService/ImageStatus
* Mar 06 20:46:47 minikube crio[353]: time="2021-03-06 20:46:47.636271638Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=8cb2f5cd-9463-4ba7-a5ea-28c5fd556895 name=/runtime.v1alpha2.ImageService/ImageStatus
* 
* ==> container status <==
* CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID
* 
* ==> describe nodes <==
E0306 14:46:48.409207  353232 logs.go:183] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:

stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
* 
* ==> dmesg <==
* [Mar 6 19:44] systemd[1]: /usr/lib/systemd/system/plymouth-start.service:15: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
* [  +0.337294] i2c_hid i2c-PNP0C50:00: supply vdd not found, using dummy regulator
* [  +0.000042] i2c_hid i2c-PNP0C50:00: supply vddl not found, using dummy regulator
* [ +11.765571] kauditd_printk_skb: 18 callbacks suppressed
* [  +0.885197] systemd-sysv-generator[998]: SysV service '/etc/rc.d/init.d/livesys' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
* [  +0.000047] systemd-sysv-generator[998]: SysV service '/etc/rc.d/init.d/livesys-late' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
* [  +0.083602] systemd[1]: /usr/lib/systemd/system/plymouth-start.service:15: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
* [  +0.413857] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
* [  +0.002966] system76: loading out-of-tree module taints kernel.
* [  +0.140532] iwlwifi 0000:00:14.3: api flags index 2 larger than supported by driver
* [  +0.416551] thermal thermal_zone2: failed to read out thermal zone (-61)
* [Mar 6 19:46] systemd-sysv-generator[2987]: SysV service '/etc/rc.d/init.d/livesys' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
* [  +0.000034] systemd-sysv-generator[2987]: SysV service '/etc/rc.d/init.d/livesys-late' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
* [Mar 6 19:47] systemd-sysv-generator[3350]: SysV service '/etc/rc.d/init.d/livesys' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
* [  +0.000023] systemd-sysv-generator[3350]: SysV service '/etc/rc.d/init.d/livesys-late' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
* 
* ==> kernel <==
*  20:46:48 up  1:02,  0 users,  load average: 1.44, 1.40, 1.08
* Linux minikube 5.10.19-200.fc33.x86_64 #1 SMP Fri Feb 26 16:21:30 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
* PRETTY_NAME="Ubuntu 20.04.1 LTS"
* 
* ==> kubelet <==
* -- Logs begin at Sat 2021-03-06 20:35:31 UTC, end at Sat 2021-03-06 20:46:48 UTC. --
* Mar 06 20:46:47 minikube kubelet[78804]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
* Mar 06 20:46:47 minikube kubelet[78804]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc00023b180)
* Mar 06 20:46:47 minikube kubelet[78804]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
* Mar 06 20:46:47 minikube kubelet[78804]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
* Mar 06 20:46:47 minikube kubelet[78804]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
* Mar 06 20:46:47 minikube kubelet[78804]: goroutine 350 [syscall]:
* Mar 06 20:46:47 minikube kubelet[78804]: syscall.Syscall(0x0, 0x1a, 0xc000e4ff88, 0x10000, 0x0, 0x0, 0x0)
* Mar 06 20:46:47 minikube kubelet[78804]:         /usr/local/go/src/syscall/asm_linux_amd64.s:18 +0x5
* Mar 06 20:46:47 minikube kubelet[78804]: syscall.read(0x1a, 0xc000e4ff88, 0x10000, 0x10000, 0x0, 0x0, 0x0)
* Mar 06 20:46:47 minikube kubelet[78804]:         /usr/local/go/src/syscall/zsyscall_linux_amd64.go:686 +0x5a
* Mar 06 20:46:47 minikube kubelet[78804]: syscall.Read(...)
* Mar 06 20:46:47 minikube kubelet[78804]:         /usr/local/go/src/syscall/syscall_unix.go:187
* Mar 06 20:46:47 minikube kubelet[78804]: k8s.io/kubernetes/vendor/k8s.io/utils/inotify.(*Watcher).readEvents(0xc00136c980)
* Mar 06 20:46:47 minikube kubelet[78804]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/utils/inotify/inotify_linux.go:139 +0x37e
* Mar 06 20:46:47 minikube kubelet[78804]: created by k8s.io/kubernetes/vendor/k8s.io/utils/inotify.NewWatcher
* Mar 06 20:46:47 minikube kubelet[78804]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/utils/inotify/inotify_linux.go:55 +0x1de
* Mar 06 20:46:47 minikube kubelet[78804]: goroutine 351 [chan receive]:
* Mar 06 20:46:47 minikube kubelet[78804]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/utils/oomparser.(*OomParser).StreamOoms(0xc000687930, 0xc00137c480)
* Mar 06 20:46:47 minikube kubelet[78804]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/utils/oomparser/oomparser.go:121 +0xd3
* Mar 06 20:46:47 minikube kubelet[78804]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).watchForNewOoms
* Mar 06 20:46:47 minikube kubelet[78804]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:1209 +0xec
* Mar 06 20:46:47 minikube kubelet[78804]: goroutine 352 [chan receive]:
* Mar 06 20:46:47 minikube kubelet[78804]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).watchForNewOoms.func1(0xc00137c480, 0xc000f5e500)
* Mar 06 20:46:47 minikube kubelet[78804]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:1212 +0x59
* Mar 06 20:46:47 minikube kubelet[78804]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).watchForNewOoms
* Mar 06 20:46:47 minikube kubelet[78804]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:1211 +0x11b
* Mar 06 20:46:47 minikube kubelet[78804]: goroutine 353 [select]:
* Mar 06 20:46:47 minikube kubelet[78804]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*containerData).housekeepingTick(0xc00027a000, 0xc000815680, 0x5f5e100, 0xc000010900)
* Mar 06 20:46:47 minikube kubelet[78804]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/container.go:536 +0x127
* Mar 06 20:46:47 minikube kubelet[78804]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*containerData).housekeeping(0xc00027a000)
* Mar 06 20:46:47 minikube kubelet[78804]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/container.go:494 +0x25a
* Mar 06 20:46:47 minikube kubelet[78804]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*containerData).Start
* Mar 06 20:46:47 minikube kubelet[78804]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/container.go:114 +0x3f
* Mar 06 20:46:47 minikube kubelet[78804]: goroutine 402 [select]:
* Mar 06 20:46:47 minikube kubelet[78804]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*containerData).housekeepingTick(0xc00030d200, 0xc000f44720, 0x5f5e100, 0xc000c38000)
* Mar 06 20:46:47 minikube kubelet[78804]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/container.go:536 +0x127
* Mar 06 20:46:47 minikube kubelet[78804]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*containerData).housekeeping(0xc00030d200)
* Mar 06 20:46:47 minikube kubelet[78804]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/container.go:494 +0x25a
* Mar 06 20:46:47 minikube kubelet[78804]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*containerData).Start
* Mar 06 20:46:47 minikube kubelet[78804]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/container.go:114 +0x3f
* Mar 06 20:46:47 minikube kubelet[78804]: goroutine 577 [select]:
* Mar 06 20:46:47 minikube kubelet[78804]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/raw.(*rawContainerWatcher).Start.func1(0xc000267f60, 0xc000ce7020)
* Mar 06 20:46:47 minikube kubelet[78804]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/raw/watcher.go:91 +0x125
* Mar 06 20:46:47 minikube kubelet[78804]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/raw.(*rawContainerWatcher).Start
* Mar 06 20:46:47 minikube kubelet[78804]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/raw/watcher.go:89 +0x477
* Mar 06 20:46:47 minikube kubelet[78804]: goroutine 578 [select]:
* Mar 06 20:46:47 minikube kubelet[78804]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).watchForNewContainers.func1(0xc000f5e500, 0xc000d5e8b0, 0xc0000f5ec0)
* Mar 06 20:46:47 minikube kubelet[78804]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:1164 +0xe5
* Mar 06 20:46:47 minikube kubelet[78804]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).watchForNewContainers
* Mar 06 20:46:47 minikube kubelet[78804]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:1162 +0x21d
* Mar 06 20:46:47 minikube kubelet[78804]: goroutine 579 [select]:
* Mar 06 20:46:47 minikube kubelet[78804]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).globalHousekeeping(0xc000f5e500, 0xc0009ed6e0)
* Mar 06 20:46:47 minikube kubelet[78804]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:385 +0x145
* Mar 06 20:46:47 minikube kubelet[78804]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).Start
* Mar 06 20:46:47 minikube kubelet[78804]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:319 +0x585
* Mar 06 20:46:47 minikube kubelet[78804]: goroutine 580 [select]:
* Mar 06 20:46:47 minikube kubelet[78804]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).updateMachineInfo(0xc000f5e500, 0xc0009ed740)
* Mar 06 20:46:47 minikube kubelet[78804]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:357 +0xd4
* Mar 06 20:46:47 minikube kubelet[78804]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).Start
* Mar 06 20:46:47 minikube kubelet[78804]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:323 +0x608
* 
* ==> Audit <==
* |---------|-------------------|----------|--------|---------|-------------------------------|-------------------------------|
| Command |       Args        | Profile  |  User  | Version |          Start Time           |           End Time            |
|---------|-------------------|----------|--------|---------|-------------------------------|-------------------------------|
| config  | set driver podman | minikube | filbot | v1.18.1 | Sat, 06 Mar 2021 13:47:58 CST | Sat, 06 Mar 2021 13:47:58 CST |
| stop    |                   | minikube | filbot | v1.18.1 | Sat, 06 Mar 2021 13:59:33 CST | Sat, 06 Mar 2021 13:59:34 CST |
| delete  |                   | minikube | filbot | v1.18.1 | Sat, 06 Mar 2021 13:59:39 CST | Sat, 06 Mar 2021 13:59:42 CST |
| stop    |                   | minikube | filbot | v1.18.1 | Sat, 06 Mar 2021 14:12:02 CST | Sat, 06 Mar 2021 14:12:04 CST |
| delete  |                   | minikube | filbot | v1.18.1 | Sat, 06 Mar 2021 14:12:10 CST | Sat, 06 Mar 2021 14:12:12 CST |
| stop    |                   | minikube | filbot | v1.18.1 | Sat, 06 Mar 2021 14:20:54 CST | Sat, 06 Mar 2021 14:20:55 CST |
| delete  |                   | minikube | filbot | v1.18.1 | Sat, 06 Mar 2021 14:20:58 CST | Sat, 06 Mar 2021 14:21:00 CST |
| stop    |                   | minikube | filbot | v1.18.1 | Sat, 06 Mar 2021 14:28:59 CST | Sat, 06 Mar 2021 14:29:00 CST |
| delete  |                   | minikube | filbot | v1.18.1 | Sat, 06 Mar 2021 14:29:03 CST | Sat, 06 Mar 2021 14:29:05 CST |
|---------|-------------------|----------|--------|---------|-------------------------------|-------------------------------|

* 
* ==> Last Start <==
* Log file created at: 2021/03/06 14:35:29
* Running on machine: oryx-fedora
* Binary: Built with gc go1.16 for linux/amd64
* Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
* I0306 14:35:29.586380  269267 out.go:239] Setting OutFile to fd 1 ...
* I0306 14:35:29.586527  269267 out.go:291] isatty.IsTerminal(1) = false
* I0306 14:35:29.586533  269267 out.go:252] Setting ErrFile to fd 2...
* I0306 14:35:29.586537  269267 out.go:291] isatty.IsTerminal(2) = false
* I0306 14:35:29.586625  269267 root.go:308] Updating PATH: /home/filbot/.minikube/bin
* I0306 14:35:29.586869  269267 out.go:246] Setting JSON to false
* I0306 14:35:29.600982  269267 start.go:108] hostinfo: {"hostname":"oryx-fedora","uptime":3090,"bootTime":1615059840,"procs":363,"os":"linux","platform":"fedora","platformFamily":"fedora","platformVersion":"33","kernelVersion":"5.10.19-200.fc33.x86_64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostId":"7872287a-f4bf-4f92-80bf-61cfbfe48c7e"}
* I0306 14:35:29.601066  269267 start.go:118] virtualization: kvm host
* I0306 14:35:29.642193  269267 out.go:129] * minikube v1.18.1 on Fedora 33
* I0306 14:35:29.642532  269267 notify.go:126] Checking for updates...
* I0306 14:35:29.642794  269267 driver.go:323] Setting default libvirt URI to qemu:///system
* I0306 14:35:29.733816  269267 podman.go:120] podman version: 3.0.1
* I0306 14:35:29.734709  269267 out.go:129] * Using the podman driver based on user configuration
* I0306 14:35:29.734719  269267 start.go:276] selected driver: podman
* I0306 14:35:29.734722  269267 start.go:718] validating driver "podman" against <nil>
* I0306 14:35:29.734732  269267 start.go:729] status for podman: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
* I0306 14:35:29.734824  269267 cli_runner.go:115] Run: sudo -n podman system info --format json
* I0306 14:35:29.828352  269267 info.go:273] podman info: {Host:{BuildahVersion:1.19.4 CgroupVersion:v2 Conmon:{Package:conmon-2.0.26-1.fc33.x86_64 Path:/usr/bin/conmon Version:conmon version 2.0.26, commit: 777074ecdb5e883b9bec233f3630c5e7fa37d521} Distribution:{Distribution:fedora Version:33} MemFree:24518914048 MemTotal:33530384384 OCIRuntime:{Name:crun Package:crun-0.18-1.fc33.x86_64 Path:/usr/bin/crun Version:crun version 0.18
* commit: 808420efe3dc2b44d6db9f1a3fac8361dde42a95
* spec: 1.0.0
* +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL} SwapFree:4294963200 SwapTotal:4294963200 Arch:amd64 Cpus:16 Eventlogger:journald Hostname:oryx-fedora Kernel:5.10.19-200.fc33.x86_64 Os:linux Rootless:false Uptime:51m 29.78s} Registries:{Search:[registry.fedoraproject.org registry.access.redhat.com registry.centos.org docker.io]} Store:{ConfigFile:/etc/containers/storage.conf ContainerStore:{Number:0} GraphDriverName:overlay GraphOptions:{} GraphRoot:/var/lib/containers/storage GraphStatus:{BackingFilesystem:btrfs NativeOverlayDiff:true SupportsDType:true UsingMetacopy:false} ImageStore:{Number:1} RunRoot:/run/containers/storage VolumePath:/var/lib/containers/storage/volumes}}
* I0306 14:35:29.828428  269267 start_flags.go:251] no existing cluster config was found, will generate one from the flags 
* I0306 14:35:29.829504  269267 start_flags.go:269] Using suggested 7900MB memory alloc based on sys=31977MB, container=31977MB
* I0306 14:35:29.829629  269267 start_flags.go:696] Wait components to verify : map[apiserver:true system_pods:true]
* I0306 14:35:29.829648  269267 cni.go:74] Creating CNI manager for ""
* I0306 14:35:29.829654  269267 cni.go:121] "podman" driver + crio runtime found, recommending kindnet
* I0306 14:35:29.829665  269267 start_flags.go:390] Found "CNI" CNI - setting NetworkPlugin=cni
* I0306 14:35:29.829672  269267 start_flags.go:395] config:
* {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:7900 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cgroup-driver Value:systemd}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Network: MultiNodeRequested:false}
* I0306 14:35:29.841532  269267 out.go:129] * Starting control plane node minikube in cluster minikube
* I0306 14:35:29.841562  269267 cache.go:112] Driver isn't docker, skipping base image download
* I0306 14:35:29.841577  269267 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime crio
* I0306 14:35:29.841676  269267 preload.go:105] Found local preload: /home/filbot/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v9-v1.20.2-cri-o-overlay-amd64.tar.lz4
* I0306 14:35:29.841684  269267 cache.go:54] Caching tarball of preloaded images
* I0306 14:35:29.841698  269267 preload.go:131] Found /home/filbot/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v9-v1.20.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
* I0306 14:35:29.841702  269267 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on crio
* I0306 14:35:29.841973  269267 profile.go:148] Saving config to /home/filbot/.minikube/profiles/minikube/config.json ...
* I0306 14:35:29.841995  269267 lock.go:36] WriteFile acquiring /home/filbot/.minikube/profiles/minikube/config.json: {Name:mkde552573cc4fe111badcbccdf8dc701af1839b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
* I0306 14:35:29.842335  269267 cache.go:185] Successfully downloaded all kic artifacts
* I0306 14:35:29.842366  269267 start.go:313] acquiring machines lock for minikube: {Name:mkb45fe8b1deef9886cfd2b84df49d68ad57ae09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
* I0306 14:35:29.842415  269267 start.go:317] acquired machines lock for "minikube" in 38.549µs
* I0306 14:35:29.842429  269267 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:7900 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cgroup-driver Value:systemd}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
* I0306 14:35:29.842486  269267 start.go:126] createHost starting for "" (driver="podman")
* I0306 14:35:29.843991  269267 out.go:150] * Creating podman container (CPUs=2, Memory=7900MB) ...
* I0306 14:35:29.844169  269267 start.go:160] libmachine.API.Create for "minikube" (driver="podman")
* I0306 14:35:29.844191  269267 client.go:168] LocalClient.Create starting
* I0306 14:35:29.844239  269267 main.go:121] libmachine: Reading certificate data from /home/filbot/.minikube/certs/ca.pem
* I0306 14:35:29.844270  269267 main.go:121] libmachine: Decoding PEM data...
* I0306 14:35:29.844287  269267 main.go:121] libmachine: Parsing certificate...
* I0306 14:35:29.844393  269267 main.go:121] libmachine: Reading certificate data from /home/filbot/.minikube/certs/cert.pem
* I0306 14:35:29.844412  269267 main.go:121] libmachine: Decoding PEM data...
* I0306 14:35:29.844426  269267 main.go:121] libmachine: Parsing certificate...
* I0306 14:35:29.844761  269267 cli_runner.go:115] Run: sudo -n podman network inspect minikube --format ""
* I0306 14:35:29.940525  269267 network_create.go:64] Found existing network {name:minikube subnet:0xc0003fe060 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:0}
* I0306 14:35:29.940567  269267 kic.go:101] calculated static IP "192.168.49.2" for the "minikube" container
* I0306 14:35:29.940622  269267 cli_runner.go:115] Run: sudo -n podman ps -a --format 
* I0306 14:35:30.038386  269267 cli_runner.go:115] Run: sudo -n podman volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
* I0306 14:35:30.169796  269267 oci.go:102] Successfully created a podman volume minikube
* I0306 14:35:30.169862  269267 cli_runner.go:115] Run: sudo -n podman run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.18 -d /var/lib
* I0306 14:35:30.902820  269267 oci.go:106] Successfully prepared a podman volume minikube
* I0306 14:35:30.902867  269267 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime crio
* W0306 14:35:30.902891  269267 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
* W0306 14:35:30.902902  269267 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
* W0306 14:35:30.902909  269267 oci.go:233] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
* I0306 14:35:30.902933  269267 preload.go:105] Found local preload: /home/filbot/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v9-v1.20.2-cri-o-overlay-amd64.tar.lz4
* I0306 14:35:30.902939  269267 kic.go:168] Starting extracting preloaded images to volume ...
* I0306 14:35:30.903059  269267 cli_runner.go:115] Run: sudo -n podman info --format "'{{json .SecurityOptions}}'"
* I0306 14:35:30.903066  269267 cli_runner.go:115] Run: sudo -n podman run --rm --entrypoint /usr/bin/tar --security-opt label=disable -v /home/filbot/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v9-v1.20.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18 -I lz4 -xf /preloaded.tar -C /extractDir
* W0306 14:35:31.004049  269267 cli_runner.go:162] sudo -n podman info --format "'{{json .SecurityOptions}}'" returned with exit code 125
* I0306 14:35:31.004133  269267 cli_runner.go:115] Run: sudo -n podman run --cgroup-manager cgroupfs -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var:exec -e container=podman --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18
* I0306 14:35:31.557230  269267 cli_runner.go:115] Run: sudo -n podman container inspect minikube --format=
* I0306 14:35:31.634195  269267 cli_runner.go:115] Run: sudo -n podman container inspect minikube --format=
* I0306 14:35:31.717203  269267 cli_runner.go:115] Run: sudo -n podman exec minikube stat /var/lib/dpkg/alternatives/iptables
* I0306 14:35:31.884255  269267 oci.go:278] the created container "minikube" has a running status.
* I0306 14:35:31.884272  269267 kic.go:199] Creating ssh key for kic: /home/filbot/.minikube/machines/minikube/id_rsa...
* I0306 14:35:32.145727  269267 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/machines/minikube/id_rsa.pub -> /home/docker/.ssh/authorized_keys
* I0306 14:35:32.145754  269267 kic_runner.go:188] podman (temp): /home/filbot/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
* I0306 14:35:32.146000  269267 kic_runner.go:252] Run: /usr/bin/sudo -n podman cp /tmp/tmpf-memory-asset851216604 minikube:/home/docker/.ssh/authorized_keys
* I0306 14:35:32.482543  269267 cli_runner.go:115] Run: sudo -n podman container inspect minikube --format=
* I0306 14:35:32.551560  269267 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
* I0306 14:35:32.551576  269267 kic_runner.go:115] Args: [sudo -n podman exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
* I0306 14:35:34.209786  269267 cli_runner.go:168] Completed: sudo -n podman run --rm --entrypoint /usr/bin/tar --security-opt label=disable -v /home/filbot/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v9-v1.20.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.18 -I lz4 -xf /preloaded.tar -C /extractDir: (3.306698982s)
* I0306 14:35:34.209801  269267 kic.go:177] duration metric: took 3.306860 seconds to extract preloaded images to volume
* I0306 14:35:34.209842  269267 cli_runner.go:115] Run: sudo -n podman container inspect minikube --format=
* I0306 14:35:34.281838  269267 machine.go:88] provisioning docker machine ...
* I0306 14:35:34.281860  269267 ubuntu.go:169] provisioning hostname "minikube"
* I0306 14:35:34.281897  269267 cli_runner.go:115] Run: sudo -n podman version --format 
* I0306 14:35:34.345458  269267 cli_runner.go:115] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
* I0306 14:35:34.413955  269267 main.go:121] libmachine: Using SSH client type: native
* I0306 14:35:34.414120  269267 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x7fb7a0] 0x7fb760 <nil>  [] 0s} 127.0.0.1 40935 <nil> <nil>}
* I0306 14:35:34.414136  269267 main.go:121] libmachine: About to run SSH command:
* sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
* I0306 14:35:34.544786  269267 main.go:121] libmachine: SSH cmd err, output: <nil>: minikube
* 
* I0306 14:35:34.545448  269267 cli_runner.go:115] Run: sudo -n podman version --format 
* I0306 14:35:34.695977  269267 cli_runner.go:115] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
* I0306 14:35:34.853178  269267 main.go:121] libmachine: Using SSH client type: native
* I0306 14:35:34.853378  269267 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x7fb7a0] 0x7fb760 <nil>  [] 0s} 127.0.0.1 40935 <nil> <nil>}
* I0306 14:35:34.853411  269267 main.go:121] libmachine: About to run SSH command:
* 
* 		if ! grep -xq '.*\sminikube' /etc/hosts; then
* 			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
* 				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
* 			else 
* 				echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
* 			fi
* 		fi
* I0306 14:35:35.002755  269267 main.go:121] libmachine: SSH cmd err, output: <nil>: 
* I0306 14:35:35.002799  269267 ubuntu.go:175] set auth options {CertDir:/home/filbot/.minikube CaCertPath:/home/filbot/.minikube/certs/ca.pem CaPrivateKeyPath:/home/filbot/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/filbot/.minikube/machines/server.pem ServerKeyPath:/home/filbot/.minikube/machines/server-key.pem ClientKeyPath:/home/filbot/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/filbot/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/filbot/.minikube}
* I0306 14:35:35.002838  269267 ubuntu.go:177] setting up certificates
* I0306 14:35:35.002854  269267 provision.go:83] configureAuth start
* I0306 14:35:35.002961  269267 cli_runner.go:115] Run: sudo -n podman container inspect -f  minikube
* I0306 14:35:35.127295  269267 cli_runner.go:115] Run: sudo -n podman container inspect -f "" minikube
* I0306 14:35:35.201014  269267 provision.go:137] copyHostCerts
* I0306 14:35:35.201040  269267 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/certs/ca.pem -> /home/filbot/.minikube/ca.pem
* I0306 14:35:35.201060  269267 exec_runner.go:145] found /home/filbot/.minikube/ca.pem, removing ...
* I0306 14:35:35.201066  269267 exec_runner.go:190] rm: /home/filbot/.minikube/ca.pem
* I0306 14:35:35.201145  269267 exec_runner.go:152] cp: /home/filbot/.minikube/certs/ca.pem --> /home/filbot/.minikube/ca.pem (1078 bytes)
* I0306 14:35:35.201240  269267 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/certs/cert.pem -> /home/filbot/.minikube/cert.pem
* I0306 14:35:35.201256  269267 exec_runner.go:145] found /home/filbot/.minikube/cert.pem, removing ...
* I0306 14:35:35.201259  269267 exec_runner.go:190] rm: /home/filbot/.minikube/cert.pem
* I0306 14:35:35.201301  269267 exec_runner.go:152] cp: /home/filbot/.minikube/certs/cert.pem --> /home/filbot/.minikube/cert.pem (1123 bytes)
* I0306 14:35:35.201352  269267 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/certs/key.pem -> /home/filbot/.minikube/key.pem
* I0306 14:35:35.201369  269267 exec_runner.go:145] found /home/filbot/.minikube/key.pem, removing ...
* I0306 14:35:35.201373  269267 exec_runner.go:190] rm: /home/filbot/.minikube/key.pem
* I0306 14:35:35.201401  269267 exec_runner.go:152] cp: /home/filbot/.minikube/certs/key.pem --> /home/filbot/.minikube/key.pem (1679 bytes)
* I0306 14:35:35.201448  269267 provision.go:111] generating server cert: /home/filbot/.minikube/machines/server.pem ca-key=/home/filbot/.minikube/certs/ca.pem private-key=/home/filbot/.minikube/certs/ca-key.pem org=filbot.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
* I0306 14:35:35.390098  269267 provision.go:165] copyRemoteCerts
* I0306 14:35:35.390127  269267 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
* I0306 14:35:35.390149  269267 cli_runner.go:115] Run: sudo -n podman version --format 
* I0306 14:35:35.444350  269267 cli_runner.go:115] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
* I0306 14:35:35.512731  269267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40935 SSHKeyPath:/home/filbot/.minikube/machines/minikube/id_rsa Username:docker}
* I0306 14:35:35.603062  269267 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/certs/ca.pem -> /etc/docker/ca.pem
* I0306 14:35:35.603174  269267 ssh_runner.go:316] scp /home/filbot/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
* I0306 14:35:35.649217  269267 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/machines/server.pem -> /etc/docker/server.pem
* I0306 14:35:35.649308  269267 ssh_runner.go:316] scp /home/filbot/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
* I0306 14:35:35.694977  269267 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
* I0306 14:35:35.695069  269267 ssh_runner.go:316] scp /home/filbot/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
* I0306 14:35:35.741177  269267 provision.go:86] duration metric: configureAuth took 738.30455ms
* I0306 14:35:35.741216  269267 ubuntu.go:193] setting minikube options for container-runtime
* I0306 14:35:35.741842  269267 cli_runner.go:115] Run: sudo -n podman version --format 
* I0306 14:35:35.881984  269267 cli_runner.go:115] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
* I0306 14:35:35.992404  269267 main.go:121] libmachine: Using SSH client type: native
* I0306 14:35:35.992549  269267 main.go:121] libmachine: &{{{<nil> 0 [] [] []} docker [0x7fb7a0] 0x7fb760 <nil>  [] 0s} 127.0.0.1 40935 <nil> <nil>}
* I0306 14:35:35.992573  269267 main.go:121] libmachine: About to run SSH command:
* sudo mkdir -p /etc/sysconfig && printf %s "
* CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
* " | sudo tee /etc/sysconfig/crio.minikube
* I0306 14:35:36.162982  269267 main.go:121] libmachine: SSH cmd err, output: <nil>: 
* CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
* 
* I0306 14:35:36.163021  269267 machine.go:91] provisioned docker machine in 1.88116479s
* I0306 14:35:36.163036  269267 client.go:171] LocalClient.Create took 6.318838928s
* I0306 14:35:36.163088  269267 start.go:168] duration metric: libmachine.API.Create for "minikube" took 6.31889096s
* I0306 14:35:36.163104  269267 start.go:267] post-start starting for "minikube" (driver="podman")
* I0306 14:35:36.163114  269267 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
* I0306 14:35:36.163225  269267 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
* I0306 14:35:36.163314  269267 cli_runner.go:115] Run: sudo -n podman version --format 
* I0306 14:35:36.278885  269267 cli_runner.go:115] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
* I0306 14:35:36.418448  269267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40935 SSHKeyPath:/home/filbot/.minikube/machines/minikube/id_rsa Username:docker}
* I0306 14:35:36.525215  269267 ssh_runner.go:149] Run: cat /etc/os-release
* I0306 14:35:36.531814  269267 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
* I0306 14:35:36.531872  269267 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
* I0306 14:35:36.531901  269267 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
* I0306 14:35:36.531916  269267 info.go:137] Remote host: Ubuntu 20.04.1 LTS
* I0306 14:35:36.531935  269267 filesync.go:118] Scanning /home/filbot/.minikube/addons for local assets ...
* I0306 14:35:36.532034  269267 filesync.go:118] Scanning /home/filbot/.minikube/files for local assets ...
* I0306 14:35:36.532096  269267 start.go:270] post-start completed in 368.981023ms
* I0306 14:35:36.533923  269267 cli_runner.go:115] Run: sudo -n podman container inspect -f  minikube
* I0306 14:35:36.655784  269267 cli_runner.go:115] Run: sudo -n podman container inspect -f "" minikube
* I0306 14:35:36.726526  269267 profile.go:148] Saving config to /home/filbot/.minikube/profiles/minikube/config.json ...
* I0306 14:35:36.726764  269267 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
* I0306 14:35:36.726796  269267 cli_runner.go:115] Run: sudo -n podman version --format 
* I0306 14:35:36.806586  269267 cli_runner.go:115] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
* I0306 14:35:36.878744  269267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40935 SSHKeyPath:/home/filbot/.minikube/machines/minikube/id_rsa Username:docker}
* I0306 14:35:36.961983  269267 start.go:129] duration metric: createHost completed in 7.119482584s
* I0306 14:35:36.962019  269267 start.go:80] releasing machines lock for "minikube", held for 7.119592189s
* I0306 14:35:36.962224  269267 cli_runner.go:115] Run: sudo -n podman container inspect -f  minikube
* I0306 14:35:37.088530  269267 cli_runner.go:115] Run: sudo -n podman container inspect -f "" minikube
* I0306 14:35:37.155705  269267 ssh_runner.go:149] Run: systemctl --version
* I0306 14:35:37.155725  269267 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
* I0306 14:35:37.155738  269267 cli_runner.go:115] Run: sudo -n podman version --format 
* I0306 14:35:37.155755  269267 cli_runner.go:115] Run: sudo -n podman version --format 
* I0306 14:35:37.225255  269267 cli_runner.go:115] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
* I0306 14:35:37.275702  269267 cli_runner.go:115] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
* I0306 14:35:37.293157  269267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40935 SSHKeyPath:/home/filbot/.minikube/machines/minikube/id_rsa Username:docker}
* I0306 14:35:37.344228  269267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40935 SSHKeyPath:/home/filbot/.minikube/machines/minikube/id_rsa Username:docker}
* I0306 14:35:37.515543  269267 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
* I0306 14:35:37.542104  269267 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
* I0306 14:35:37.591875  269267 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
* I0306 14:35:37.617856  269267 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
* I0306 14:35:37.644567  269267 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
* image-endpoint: unix:///var/run/crio/crio.sock
* " | sudo tee /etc/crictl.yaml"
* I0306 14:35:37.681174  269267 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.2"|' -i /etc/crio/crio.conf"
* I0306 14:35:37.703364  269267 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
* I0306 14:35:37.719396  269267 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
* I0306 14:35:37.731645  269267 ssh_runner.go:149] Run: sudo systemctl daemon-reload
* I0306 14:35:37.830741  269267 ssh_runner.go:149] Run: sudo systemctl start crio
* I0306 14:35:38.011285  269267 start.go:316] Will wait 60s for socket path /var/run/crio/crio.sock
* I0306 14:35:38.011400  269267 ssh_runner.go:149] Run: stat /var/run/crio/crio.sock
* I0306 14:35:38.019713  269267 ssh_runner.go:149] Run: crio --version
* I0306 14:35:38.130564  269267 out.go:129] * Preparing Kubernetes v1.20.2 on CRI-O 1.20.0 ...
* I0306 14:35:38.130603  269267 cli_runner.go:115] Run: sudo -n podman container inspect --format  minikube
* I0306 14:35:38.203747  269267 ssh_runner.go:149] Run: grep <nil>	host.minikube.internal$ /etc/hosts
* I0306 14:35:38.205879  269267 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "<nil>	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
* I0306 14:35:38.216914  269267 out.go:129]   - kubelet.cgroup-driver=systemd
* I0306 14:35:38.216949  269267 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime crio
* I0306 14:35:38.216968  269267 preload.go:105] Found local preload: /home/filbot/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v9-v1.20.2-cri-o-overlay-amd64.tar.lz4
* I0306 14:35:38.216991  269267 ssh_runner.go:149] Run: sudo crictl images --output json
* I0306 14:35:38.255884  269267 crio.go:345] all images are preloaded for cri-o runtime.
* I0306 14:35:38.255901  269267 crio.go:260] Images already preloaded, skipping extraction
* I0306 14:35:38.255937  269267 ssh_runner.go:149] Run: sudo crictl images --output json
* I0306 14:35:38.267795  269267 crio.go:345] all images are preloaded for cri-o runtime.
* I0306 14:35:38.267813  269267 cache_images.go:73] Images are preloaded, skipping loading
* I0306 14:35:38.267851  269267 ssh_runner.go:149] Run: crio config
* I0306 14:35:38.329278  269267 cni.go:74] Creating CNI manager for ""
* I0306 14:35:38.329287  269267 cni.go:121] "podman" driver + crio runtime found, recommending kindnet
* I0306 14:35:38.329294  269267 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
* I0306 14:35:38.329304  269267 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
* I0306 14:35:38.329412  269267 kubeadm.go:154] kubeadm config:
* apiVersion: kubeadm.k8s.io/v1beta2
* kind: InitConfiguration
* localAPIEndpoint:
*   advertiseAddress: 192.168.49.2
*   bindPort: 8443
* bootstrapTokens:
*   - groups:
*       - system:bootstrappers:kubeadm:default-node-token
*     ttl: 24h0m0s
*     usages:
*       - signing
*       - authentication
* nodeRegistration:
*   criSocket: /var/run/crio/crio.sock
*   name: "minikube"
*   kubeletExtraArgs:
*     node-ip: 192.168.49.2
*   taints: []
* ---
* apiVersion: kubeadm.k8s.io/v1beta2
* kind: ClusterConfiguration
* apiServer:
*   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
*   extraArgs:
*     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
* controllerManager:
*   extraArgs:
*     allocate-node-cidrs: "true"
*     leader-elect: "false"
* scheduler:
*   extraArgs:
*     leader-elect: "false"
* certificatesDir: /var/lib/minikube/certs
* clusterName: mk
* controlPlaneEndpoint: control-plane.minikube.internal:8443
* dns:
*   type: CoreDNS
* etcd:
*   local:
*     dataDir: /var/lib/minikube/etcd
*     extraArgs:
*       proxy-refresh-interval: "70000"
* kubernetesVersion: v1.20.2
* networking:
*   dnsDomain: cluster.local
*   podSubnet: "10.244.0.0/16"
*   serviceSubnet: 10.96.0.0/12
* ---
* apiVersion: kubelet.config.k8s.io/v1beta1
* kind: KubeletConfiguration
* authentication:
*   x509:
*     clientCAFile: /var/lib/minikube/certs/ca.crt
* cgroupDriver: systemd
* clusterDomain: "cluster.local"
* # disable disk resource management by default
* imageGCHighThresholdPercent: 100
* evictionHard:
*   nodefs.available: "0%"
*   nodefs.inodesFree: "0%"
*   imagefs.available: "0%"
* failSwapOn: false
* staticPodPath: /etc/kubernetes/manifests
* ---
* apiVersion: kubeproxy.config.k8s.io/v1alpha1
* kind: KubeProxyConfiguration
* clusterCIDR: "10.244.0.0/16"
* metricsBindAddress: 0.0.0.0:10249
* 
* I0306 14:35:38.329498  269267 kubeadm.go:919] kubelet [Unit]
* Wants=docker.socket
* 
* [Service]
* ExecStart=
* ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=systemd --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=minikube --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
* 
* [Install]
*  config:
* {KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cgroup-driver Value:systemd}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
* I0306 14:35:38.329544  269267 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
* I0306 14:35:38.334395  269267 binaries.go:44] Found k8s binaries, skipping transfer
* I0306 14:35:38.334421  269267 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
* I0306 14:35:38.339116  269267 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (511 bytes)
* I0306 14:35:38.348026  269267 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
* I0306 14:35:38.357972  269267 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1838 bytes)
* I0306 14:35:38.370031  269267 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
* I0306 14:35:38.372687  269267 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
* I0306 14:35:38.381448  269267 certs.go:52] Setting up /home/filbot/.minikube/profiles/minikube for IP: 192.168.49.2
* I0306 14:35:38.381486  269267 certs.go:171] skipping minikubeCA CA generation: /home/filbot/.minikube/ca.key
* I0306 14:35:38.381498  269267 certs.go:171] skipping proxyClientCA CA generation: /home/filbot/.minikube/proxy-client-ca.key
* I0306 14:35:38.381538  269267 certs.go:279] generating minikube-user signed cert: /home/filbot/.minikube/profiles/minikube/client.key
* I0306 14:35:38.381546  269267 crypto.go:69] Generating cert /home/filbot/.minikube/profiles/minikube/client.crt with IP's: []
* I0306 14:35:38.576903  269267 crypto.go:157] Writing cert to /home/filbot/.minikube/profiles/minikube/client.crt ...
* I0306 14:35:38.576916  269267 lock.go:36] WriteFile acquiring /home/filbot/.minikube/profiles/minikube/client.crt: {Name:mk38f2e53a26a660a8ca42427d273aa5beb3ccab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
* I0306 14:35:38.577142  269267 crypto.go:165] Writing key to /home/filbot/.minikube/profiles/minikube/client.key ...
* I0306 14:35:38.577148  269267 lock.go:36] WriteFile acquiring /home/filbot/.minikube/profiles/minikube/client.key: {Name:mkdf6546b3ad86d5f6c8bcb5a998110a20e341d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
* I0306 14:35:38.577208  269267 certs.go:279] generating minikube signed cert: /home/filbot/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
* I0306 14:35:38.577213  269267 crypto.go:69] Generating cert /home/filbot/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
* I0306 14:35:38.915645  269267 crypto.go:157] Writing cert to /home/filbot/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ...
* I0306 14:35:38.915661  269267 lock.go:36] WriteFile acquiring /home/filbot/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mkb9a4037bd1b0390d311aa3c944d4ed10024f42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
* I0306 14:35:38.915770  269267 crypto.go:165] Writing key to /home/filbot/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ...
* I0306 14:35:38.915776  269267 lock.go:36] WriteFile acquiring /home/filbot/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mk90feeb569bacbb36c9355394dc89da215946ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
* I0306 14:35:38.915826  269267 certs.go:290] copying /home/filbot/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /home/filbot/.minikube/profiles/minikube/apiserver.crt
* I0306 14:35:38.915893  269267 certs.go:294] copying /home/filbot/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /home/filbot/.minikube/profiles/minikube/apiserver.key
* I0306 14:35:38.915936  269267 certs.go:279] generating aggregator signed cert: /home/filbot/.minikube/profiles/minikube/proxy-client.key
* I0306 14:35:38.915939  269267 crypto.go:69] Generating cert /home/filbot/.minikube/profiles/minikube/proxy-client.crt with IP's: []
* I0306 14:35:39.078863  269267 crypto.go:157] Writing cert to /home/filbot/.minikube/profiles/minikube/proxy-client.crt ...
* I0306 14:35:39.078877  269267 lock.go:36] WriteFile acquiring /home/filbot/.minikube/profiles/minikube/proxy-client.crt: {Name:mka6499079057c8e41a72c98ff24037a0c7ae319 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
* I0306 14:35:39.078975  269267 crypto.go:165] Writing key to /home/filbot/.minikube/profiles/minikube/proxy-client.key ...
* I0306 14:35:39.078981  269267 lock.go:36] WriteFile acquiring /home/filbot/.minikube/profiles/minikube/proxy-client.key: {Name:mk2a89d42af381d951e18297cc690ed8fd50bb29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
* I0306 14:35:39.079031  269267 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/profiles/minikube/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
* I0306 14:35:39.079041  269267 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/profiles/minikube/apiserver.key -> /var/lib/minikube/certs/apiserver.key
* I0306 14:35:39.079046  269267 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/profiles/minikube/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
* I0306 14:35:39.079051  269267 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/profiles/minikube/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
* I0306 14:35:39.079056  269267 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
* I0306 14:35:39.079060  269267 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
* I0306 14:35:39.079065  269267 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
* I0306 14:35:39.079070  269267 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
* I0306 14:35:39.079094  269267 certs.go:354] found cert: /home/filbot/.minikube/certs/home/filbot/.minikube/certs/ca-key.pem (1679 bytes)
* I0306 14:35:39.079111  269267 certs.go:354] found cert: /home/filbot/.minikube/certs/home/filbot/.minikube/certs/ca.pem (1078 bytes)
* I0306 14:35:39.079128  269267 certs.go:354] found cert: /home/filbot/.minikube/certs/home/filbot/.minikube/certs/cert.pem (1123 bytes)
* I0306 14:35:39.079140  269267 certs.go:354] found cert: /home/filbot/.minikube/certs/home/filbot/.minikube/certs/key.pem (1679 bytes)
* I0306 14:35:39.079155  269267 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
* I0306 14:35:39.079685  269267 ssh_runner.go:316] scp /home/filbot/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
* I0306 14:35:39.089795  269267 ssh_runner.go:316] scp /home/filbot/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
* I0306 14:35:39.099358  269267 ssh_runner.go:316] scp /home/filbot/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
* I0306 14:35:39.108843  269267 ssh_runner.go:316] scp /home/filbot/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
* I0306 14:35:39.118287  269267 ssh_runner.go:316] scp /home/filbot/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
* I0306 14:35:39.129439  269267 ssh_runner.go:316] scp /home/filbot/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
* I0306 14:35:39.143540  269267 ssh_runner.go:316] scp /home/filbot/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
* I0306 14:35:39.161589  269267 ssh_runner.go:316] scp /home/filbot/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
* I0306 14:35:39.179533  269267 ssh_runner.go:316] scp /home/filbot/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
* I0306 14:35:39.197705  269267 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
* I0306 14:35:39.210631  269267 ssh_runner.go:149] Run: openssl version
* I0306 14:35:39.215485  269267 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
* I0306 14:35:39.223207  269267 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
* I0306 14:35:39.226029  269267 certs.go:395] hashing: -rw-r--r--. 1 root root 1111 Mar  6 19:50 /usr/share/ca-certificates/minikubeCA.pem
* I0306 14:35:39.226061  269267 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
* I0306 14:35:39.230716  269267 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
* I0306 14:35:39.237566  269267 kubeadm.go:385] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:7900 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cgroup-driver Value:systemd}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Network: MultiNodeRequested:false}
* I0306 14:35:39.237629  269267 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
* I0306 14:35:39.237660  269267 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
* I0306 14:35:39.252503  269267 cri.go:76] found id: ""
* I0306 14:35:39.252555  269267 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
* I0306 14:35:39.260320  269267 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
* I0306 14:35:39.268402  269267 kubeadm.go:219] ignoring SystemVerification for kubeadm because of podman driver
* I0306 14:35:39.268447  269267 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
* I0306 14:35:39.276084  269267 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
* stdout:
* 
* stderr:
* ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
* ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
* ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
* ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
* I0306 14:35:39.276115  269267 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
* I0306 14:37:36.542256  269267 out.go:150]   - Generating certificates and keys ...
* I0306 14:37:36.590709  269267 out.go:150]   - Booting up control plane ...
* W0306 14:37:36.593706  269267 out.go:191] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
* stdout:
* [init] Using Kubernetes version: v1.20.2
* [preflight] Running pre-flight checks
* [preflight] Pulling images required for setting up a Kubernetes cluster
* [preflight] This might take a minute or two, depending on the speed of your internet connection
* [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
* [certs] Using certificateDir folder "/var/lib/minikube/certs"
* [certs] Using existing ca certificate authority
* [certs] Using existing apiserver certificate and key on disk
* [certs] Generating "apiserver-kubelet-client" certificate and key
* [certs] Generating "front-proxy-ca" certificate and key
* [certs] Generating "front-proxy-client" certificate and key
* [certs] Generating "etcd/ca" certificate and key
* [certs] Generating "etcd/server" certificate and key
* [certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
* [certs] Generating "etcd/peer" certificate and key
* [certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
* [certs] Generating "etcd/healthcheck-client" certificate and key
* [certs] Generating "apiserver-etcd-client" certificate and key
* [certs] Generating "sa" key and public key
* [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
* [kubeconfig] Writing "admin.conf" kubeconfig file
* [kubeconfig] Writing "kubelet.conf" kubeconfig file
* [kubeconfig] Writing "controller-manager.conf" kubeconfig file
* [kubeconfig] Writing "scheduler.conf" kubeconfig file
* [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
* [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
* [kubelet-start] Starting the kubelet
* [control-plane] Using manifest folder "/etc/kubernetes/manifests"
* [control-plane] Creating static Pod manifest for "kube-apiserver"
* [control-plane] Creating static Pod manifest for "kube-controller-manager"
* [control-plane] Creating static Pod manifest for "kube-scheduler"
* [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
* [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
* [kubelet-check] Initial timeout of 40s passed.
* [kubelet-check] It seems like the kubelet isn't running or healthy.
* [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
* [kubelet-check] It seems like the kubelet isn't running or healthy.
* [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
* [kubelet-check] It seems like the kubelet isn't running or healthy.
* [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
* [kubelet-check] It seems like the kubelet isn't running or healthy.
* [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
* [kubelet-check] It seems like the kubelet isn't running or healthy.
* [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
* 
* 	Unfortunately, an error has occurred:
* 		timed out waiting for the condition
* 
* 	This error is likely caused by:
* 		- The kubelet is not running
* 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
* 
* 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
* 		- 'systemctl status kubelet'
* 		- 'journalctl -xeu kubelet'
* 
* 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
* 	To troubleshoot, list all containers using your preferred container runtimes CLI.
* 
* 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
* 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
* 		Once you have found the failing container, you can inspect its logs with:
* 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
* 
* 
* stderr:
* 	[WARNING Swap]: running with swap on is not supported. Please disable swap
* 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
* error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
* To see the stack trace of this error execute with --v=5 or higher
* 
* I0306 14:37:36.593739  269267 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
* I0306 14:37:37.142929  269267 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
* I0306 14:37:37.166520  269267 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
* I0306 14:37:37.166571  269267 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
* I0306 14:37:37.178955  269267 cri.go:76] found id: ""
* I0306 14:37:37.178985  269267 kubeadm.go:219] ignoring SystemVerification for kubeadm because of podman driver
* I0306 14:37:37.179011  269267 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
* I0306 14:37:37.184102  269267 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
* stdout:
* 
* stderr:
* ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
* ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
* ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
* ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
* I0306 14:37:37.184128  269267 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
* I0306 14:37:37.371137  269267 out.go:150]   - Generating certificates and keys ...
* I0306 14:37:38.082242  269267 out.go:150]   - Booting up control plane ...
* I0306 14:39:33.104089  269267 kubeadm.go:387] StartCluster complete in 3m53.866518933s
* I0306 14:39:33.104178  269267 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
* I0306 14:39:33.104384  269267 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
* I0306 14:39:33.133382  269267 cri.go:76] found id: ""
* I0306 14:39:33.133399  269267 logs.go:255] 0 containers: []
* W0306 14:39:33.133407  269267 logs.go:257] No container was found matching "kube-apiserver"
* I0306 14:39:33.133413  269267 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
* I0306 14:39:33.133454  269267 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
* I0306 14:39:33.146986  269267 cri.go:76] found id: ""
* I0306 14:39:33.147002  269267 logs.go:255] 0 containers: []
* W0306 14:39:33.147009  269267 logs.go:257] No container was found matching "etcd"
* I0306 14:39:33.147014  269267 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
* I0306 14:39:33.147051  269267 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
* I0306 14:39:33.158951  269267 cri.go:76] found id: ""
* I0306 14:39:33.158966  269267 logs.go:255] 0 containers: []
* W0306 14:39:33.158972  269267 logs.go:257] No container was found matching "coredns"
* I0306 14:39:33.158976  269267 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
* I0306 14:39:33.159019  269267 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
* I0306 14:39:33.171184  269267 cri.go:76] found id: ""
* I0306 14:39:33.171200  269267 logs.go:255] 0 containers: []
* W0306 14:39:33.171206  269267 logs.go:257] No container was found matching "kube-scheduler"
* I0306 14:39:33.171213  269267 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
* I0306 14:39:33.171247  269267 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
* I0306 14:39:33.182670  269267 cri.go:76] found id: ""
* I0306 14:39:33.182699  269267 logs.go:255] 0 containers: []
* W0306 14:39:33.182707  269267 logs.go:257] No container was found matching "kube-proxy"
* I0306 14:39:33.182714  269267 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
* I0306 14:39:33.182767  269267 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
* I0306 14:39:33.193128  269267 cri.go:76] found id: ""
* I0306 14:39:33.193141  269267 logs.go:255] 0 containers: []
* W0306 14:39:33.193146  269267 logs.go:257] No container was found matching "kubernetes-dashboard"
* I0306 14:39:33.193150  269267 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
* I0306 14:39:33.193176  269267 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
* I0306 14:39:33.203332  269267 cri.go:76] found id: ""
* I0306 14:39:33.203347  269267 logs.go:255] 0 containers: []
* W0306 14:39:33.203351  269267 logs.go:257] No container was found matching "storage-provisioner"
* I0306 14:39:33.203356  269267 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
* I0306 14:39:33.203386  269267 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
* I0306 14:39:33.213818  269267 cri.go:76] found id: ""
* I0306 14:39:33.213831  269267 logs.go:255] 0 containers: []
* W0306 14:39:33.213859  269267 logs.go:257] No container was found matching "kube-controller-manager"
* I0306 14:39:33.213866  269267 logs.go:122] Gathering logs for dmesg ...
* I0306 14:39:33.213874  269267 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
* I0306 14:39:33.222075  269267 logs.go:122] Gathering logs for describe nodes ...
* I0306 14:39:33.222091  269267 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
* W0306 14:39:33.265772  269267 logs.go:129] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
* stdout:
* 
* stderr:
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
*  output: 
* ** stderr ** 
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* 
* ** /stderr **
* I0306 14:39:33.265783  269267 logs.go:122] Gathering logs for CRI-O ...
* I0306 14:39:33.265791  269267 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
* I0306 14:39:33.349823  269267 logs.go:122] Gathering logs for container status ...
* I0306 14:39:33.349840  269267 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
* I0306 14:39:33.360152  269267 logs.go:122] Gathering logs for kubelet ...
* I0306 14:39:33.360167  269267 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
* W0306 14:39:33.398330  269267 out.go:312] Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
* stdout:
* [init] Using Kubernetes version: v1.20.2
* [preflight] Running pre-flight checks
* [preflight] Pulling images required for setting up a Kubernetes cluster
* [preflight] This might take a minute or two, depending on the speed of your internet connection
* [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
* [certs] Using certificateDir folder "/var/lib/minikube/certs"
* [certs] Using existing ca certificate authority
* [certs] Using existing apiserver certificate and key on disk
* [certs] Using existing apiserver-kubelet-client certificate and key on disk
* [certs] Using existing front-proxy-ca certificate authority
* [certs] Using existing front-proxy-client certificate and key on disk
* [certs] Using existing etcd/ca certificate authority
* [certs] Using existing etcd/server certificate and key on disk
* [certs] Using existing etcd/peer certificate and key on disk
* [certs] Using existing etcd/healthcheck-client certificate and key on disk
* [certs] Using existing apiserver-etcd-client certificate and key on disk
* [certs] Using the existing "sa" key
* [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
* [kubeconfig] Writing "admin.conf" kubeconfig file
* [kubeconfig] Writing "kubelet.conf" kubeconfig file
* [kubeconfig] Writing "controller-manager.conf" kubeconfig file
* [kubeconfig] Writing "scheduler.conf" kubeconfig file
* [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
* [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
* [kubelet-start] Starting the kubelet
* [control-plane] Using manifest folder "/etc/kubernetes/manifests"
* [control-plane] Creating static Pod manifest for "kube-apiserver"
* [control-plane] Creating static Pod manifest for "kube-controller-manager"
* [control-plane] Creating static Pod manifest for "kube-scheduler"
* [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
* [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
* [kubelet-check] Initial timeout of 40s passed.
* [kubelet-check] It seems like the kubelet isn't running or healthy.
* [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
* [kubelet-check] It seems like the kubelet isn't running or healthy.
* [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
* [kubelet-check] It seems like the kubelet isn't running or healthy.
* [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
* [kubelet-check] It seems like the kubelet isn't running or healthy.
* [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
* [kubelet-check] It seems like the kubelet isn't running or healthy.
* [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
* 
* 	Unfortunately, an error has occurred:
* 		timed out waiting for the condition
* 
* 	This error is likely caused by:
* 		- The kubelet is not running
* 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
* 
* 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
* 		- 'systemctl status kubelet'
* 		- 'journalctl -xeu kubelet'
* 
* 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
* 	To troubleshoot, list all containers using your preferred container runtimes CLI.
* 
* 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
* 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
* 		Once you have found the failing container, you can inspect its logs with:
* 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
* 
* 
* stderr:
* 	[WARNING Swap]: running with swap on is not supported. Please disable swap
* 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
* error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
* To see the stack trace of this error execute with --v=5 or higher
* W0306 14:39:33.398387  269267 out.go:191] * 
* W0306 14:39:33.398521  269267 out.go:191] X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
* stdout:
* [init] Using Kubernetes version: v1.20.2
* [preflight] Running pre-flight checks
* [preflight] Pulling images required for setting up a Kubernetes cluster
* [preflight] This might take a minute or two, depending on the speed of your internet connection
* [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
* [certs] Using certificateDir folder "/var/lib/minikube/certs"
* [certs] Using existing ca certificate authority
* [certs] Using existing apiserver certificate and key on disk
* [certs] Using existing apiserver-kubelet-client certificate and key on disk
* [certs] Using existing front-proxy-ca certificate authority
* [certs] Using existing front-proxy-client certificate and key on disk
* [certs] Using existing etcd/ca certificate authority
* [certs] Using existing etcd/server certificate and key on disk
* [certs] Using existing etcd/peer certificate and key on disk
* [certs] Using existing etcd/healthcheck-client certificate and key on disk
* [certs] Using existing apiserver-etcd-client certificate and key on disk
* [certs] Using the existing "sa" key
* [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
* [kubeconfig] Writing "admin.conf" kubeconfig file
* [kubeconfig] Writing "kubelet.conf" kubeconfig file
* [kubeconfig] Writing "controller-manager.conf" kubeconfig file
* [kubeconfig] Writing "scheduler.conf" kubeconfig file
* [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
* [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
* [kubelet-start] Starting the kubelet
* [control-plane] Using manifest folder "/etc/kubernetes/manifests"
* [control-plane] Creating static Pod manifest for "kube-apiserver"
* [control-plane] Creating static Pod manifest for "kube-controller-manager"
* [control-plane] Creating static Pod manifest for "kube-scheduler"
* [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
* [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
* [kubelet-check] Initial timeout of 40s passed.
* [kubelet-check] It seems like the kubelet isn't running or healthy.
* [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
* [kubelet-check] It seems like the kubelet isn't running or healthy.
* [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
* [kubelet-check] It seems like the kubelet isn't running or healthy.
* [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
* [kubelet-check] It seems like the kubelet isn't running or healthy.
* [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
* [kubelet-check] It seems like the kubelet isn't running or healthy.
* [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
* 
* 	Unfortunately, an error has occurred:
* 		timed out waiting for the condition
* 
* 	This error is likely caused by:
* 		- The kubelet is not running
* 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
* 
* 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
* 		- 'systemctl status kubelet'
* 		- 'journalctl -xeu kubelet'
* 
* 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
* 	To troubleshoot, list all containers using your preferred container runtimes CLI.
* 
* 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
* 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
* 		Once you have found the failing container, you can inspect its logs with:
* 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
* 
* 
* stderr:
* 	[WARNING Swap]: running with swap on is not supported. Please disable swap
* 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
* error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
* To see the stack trace of this error execute with --v=5 or higher
* 
* W0306 14:39:33.398553  269267 out.go:191] * 
* W0306 14:39:33.398570  269267 out.go:191] * minikube is exiting due to an error. If the above message is not useful, open an issue:
* W0306 14:39:33.398591  269267 out.go:191]   - https://github.com/kubernetes/minikube/issues/new/choose
* I0306 14:39:33.409311  269267 out.go:129] 
* W0306 14:39:33.409465  269267 out.go:191] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
* stdout:
* [init] Using Kubernetes version: v1.20.2
* [preflight] Running pre-flight checks
* [preflight] Pulling images required for setting up a Kubernetes cluster
* [preflight] This might take a minute or two, depending on the speed of your internet connection
* [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
* [certs] Using certificateDir folder "/var/lib/minikube/certs"
* [certs] Using existing ca certificate authority
* [certs] Using existing apiserver certificate and key on disk
* [certs] Using existing apiserver-kubelet-client certificate and key on disk
* [certs] Using existing front-proxy-ca certificate authority
* [certs] Using existing front-proxy-client certificate and key on disk
* [certs] Using existing etcd/ca certificate authority
* [certs] Using existing etcd/server certificate and key on disk
* [certs] Using existing etcd/peer certificate and key on disk
* [certs] Using existing etcd/healthcheck-client certificate and key on disk
* [certs] Using existing apiserver-etcd-client certificate and key on disk
* [certs] Using the existing "sa" key
* [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
* [kubeconfig] Writing "admin.conf" kubeconfig file
* [kubeconfig] Writing "kubelet.conf" kubeconfig file
* [kubeconfig] Writing "controller-manager.conf" kubeconfig file
* [kubeconfig] Writing "scheduler.conf" kubeconfig file
* [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
* [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
* [kubelet-start] Starting the kubelet
* [control-plane] Using manifest folder "/etc/kubernetes/manifests"
* [control-plane] Creating static Pod manifest for "kube-apiserver"
* [control-plane] Creating static Pod manifest for "kube-controller-manager"
* [control-plane] Creating static Pod manifest for "kube-scheduler"
* [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
* [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
* [kubelet-check] Initial timeout of 40s passed.
* [kubelet-check] It seems like the kubelet isn't running or healthy.
* [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
* [kubelet-check] It seems like the kubelet isn't running or healthy.
* [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
* [kubelet-check] It seems like the kubelet isn't running or healthy.
* [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
* [kubelet-check] It seems like the kubelet isn't running or healthy.
* [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
* [kubelet-check] It seems like the kubelet isn't running or healthy.
* [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
* 
* 	Unfortunately, an error has occurred:
* 		timed out waiting for the condition
* 
* 	This error is likely caused by:
* 		- The kubelet is not running
* 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
* 
* 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
* 		- 'systemctl status kubelet'
* 		- 'journalctl -xeu kubelet'
* 
* 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
* 	To troubleshoot, list all containers using your preferred container runtimes CLI.
* 
* 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
* 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
* 		Once you have found the failing container, you can inspect its logs with:
* 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
* 
* 
* stderr:
* 	[WARNING Swap]: running with swap on is not supported. Please disable swap
* 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
* error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
* To see the stack trace of this error execute with --v=5 or higher
* 
* W0306 14:39:33.409544  269267 out.go:191] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
* W0306 14:39:33.409603  269267 out.go:191] * Related issue: https://github.com/kubernetes/minikube/issues/4172

! unable to fetch logs for: describe nodes
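The output above points at the standard kubelet troubleshooting steps. As a minimal sketch, assuming the minikube node container is still running, those checks can be run inside the node like this (the minikube ssh wrapper is just one way to get in):

# check whether the kubelet is running inside the minikube node
minikube ssh "sudo systemctl status kubelet"
# read the kubelet journal for the actual failure
minikube ssh "sudo journalctl -xeu kubelet --no-pager"
# list the Kubernetes containers CRI-O knows about
minikube ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a"
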
@afbjorklund
Collaborator

afbjorklund commented Mar 7, 2021

If you are using the "podman" driver, you don't have to install the software (cri-o, conntrack) on the host as well.

I think the actual issue is the same as #10649 - it broke in the podman3 upgrade and worked with podman2.
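
As a minimal sketch of that, assuming only podman itself is installed on the host (these are the same commands as in the reproduction steps, just without the host-side cri-o and conntrack installs):

# cri-o and conntrack run inside the minikube node, not on the host
minikube config set driver podman
minikube start --driver=podman --container-runtime=cri-o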

@afbjorklund afbjorklund added co/podman-driver podman driver issues os/linux kind/bug Categorizes issue or PR as related to a bug. priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. co/runtime/crio CRIO related issues labels Mar 7, 2021
@FilBot3
Author

FilBot3 commented Mar 8, 2021

I've updated the issue with my Podman version; you are correct. I'm on Podman v3. I thought I was still on Podman v2.

~ podman version
Version:      3.0.1
API Version:  3.0.0
Go Version:   go1.15.8
Built:        Fri Feb 19 10:56:17 2021
OS/Arch:      linux/amd64

@afbjorklund
Collaborator

Using --container-runtime=docker also seems to work, so it's something with the podman3 + cri-o combination.

@FilBot3
Author

FilBot3 commented Mar 8, 2021

My run of Docker-CE: #10754

No luck on my side.

@afbjorklund
Collaborator

afbjorklund commented Mar 8, 2021

Thanks for testing; it worked on Ubuntu 20.04 (with podman version 3.0.1).

But I think you misunderstood: I meant --driver=podman --container-runtime=docker
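
For clarity, a sketch of that combination (deleting the old profile first is my assumption, not part of the suggestion above):

# Podman drives the node container, Docker is the runtime inside it
minikube delete
minikube start --driver=podman --container-runtime=docker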

@afbjorklund
Collaborator

Otherwise, the safe bet is to use a VM (--vm=true); see #10237 (comment).

There are a lot of experimental things happening at once in Fedora.
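
A sketch of the VM fallback, assuming a working libvirt/KVM setup on the host (the kvm2 driver choice is an assumption; only --vm=true is suggested above):

# run the node as a VM instead of a podman container
minikube start --vm=true --driver=kvm2 --container-runtime=cri-o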

@FilBot3
Author

FilBot3 commented Mar 8, 2021

That's what I feared. I didn't want to run more VMs on my system, but I'm thinking Ubuntu may be what I try next to get this all working.

@afbjorklund
Collaborator

It is supposed to work to use the "alternative" stack of Fedora, Podman, and CRI-O too...

It is just less tested, and less stable, than using Ubuntu with Docker as both driver and runtime.

We could use some community support for local Kubernetes, especially for testing.

Red Hat is mostly focusing on OpenShift and CodeReady Containers (minishift4)

@FilBot3
Author

FilBot3 commented Mar 13, 2021

I'd be willing to help test; I just need to know what needs to be done and what we're looking for. I haven't yet had a plain Kubernetes install work on Fedora with CRI-O, and haven't had much luck getting any of that working apart from k3s using containerd shims.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 11, 2021
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 11, 2021
@ilya-zuyev ilya-zuyev removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jul 14, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 12, 2021
@sharifelgamal sharifelgamal removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 20, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 18, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 17, 2022
@FilBot3 FilBot3 closed this as completed Feb 21, 2022