minikube start fails with podman or kvm2 drivers on s390x #11658

Closed
vmorris opened this issue Jun 15, 2021 · 4 comments
Labels: kind/feature, priority/backlog
Milestone: 1.23.0

Comments

vmorris commented Jun 15, 2021

minikube claims to run on s390x, but I have not had success yet. I am on Fedora 33 here, so perhaps another distro would work better, but I've seen Ubuntu fail similarly using the kvm2 driver.

Steps to reproduce the issue:

  1. minikube config set driver podman && minikube delete
  2. minikube start

or

  1. minikube config set driver kvm2 && minikube delete
  2. minikube start

Full output of minikube logs command (for podman)

==> Last Start <==
Log file created at: 2021/06/15 14:42:08
Running on machine: minikube1
Binary: Built with gc go1.16.4 for linux/s390x
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0615 14:42:08.728959 30552 out.go:291] Setting OutFile to fd 1 ...
I0615 14:42:08.729059 30552 out.go:343] isatty.IsTerminal(1) = true
I0615 14:42:08.729067 30552 out.go:304] Setting ErrFile to fd 2...
I0615 14:42:08.729071 30552 out.go:343] isatty.IsTerminal(2) = true
I0615 14:42:08.729176 30552 root.go:316] Updating PATH: /home/fedora/.minikube/bin
I0615 14:42:08.729372 30552 out.go:298] Setting JSON to false
I0615 14:42:08.730027 30552 start.go:111] hostinfo: {"hostname":"minikube1.zdalisv.dfw.ibm.com","uptime":3644,"bootTime":1623764485,"procs":351,"os":"linux","platform":"fedora","platformFamily":"fedora","platformVersion":"33","kernelVersion":"5.12.10-200.fc33.s390x","kernelArch":"s390x","virtualizationSystem":"kvm","virtualizationRole":"host","hostId":"457c81ed-72d0-436d-a743-6a916ac685bb"}
I0615 14:42:08.730074 30552 start.go:121] virtualization: kvm host
I0615 14:42:08.732896 30552 out.go:170] 😄 minikube v1.21.0 on Fedora 33 (s390x)
I0615 14:42:08.733072 30552 notify.go:169] Checking for updates...
I0615 14:42:08.733074 30552 driver.go:335] Setting default libvirt URI to qemu:///system
I0615 14:42:08.893533 30552 podman.go:121] podman version: 3.1.2
I0615 14:42:08.894674 30552 out.go:170] ✨ Using the podman driver based on user configuration
I0615 14:42:08.894726 30552 start.go:279] selected driver: podman
I0615 14:42:08.894729 30552 start.go:752] validating driver "podman" against <nil>
I0615 14:42:08.894736 30552 start.go:763] status for podman: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I0615 14:42:08.894791 30552 cli_runner.go:115] Run: sudo -n podman system info --format json
I0615 14:42:09.043747 30552 info.go:281] podman info: {Host:{BuildahVersion:1.20.1 CgroupVersion:v2 Conmon:{Package:conmon-2.0.27-2.fc33.s390x Path:/usr/bin/conmon Version:conmon version 2.0.27, commit: } Distribution:{Distribution:fedora Version:33} MemFree:1246912512 MemTotal:4198801408 OCIRuntime:{Name:crun Package:crun-0.19.1-3.fc33.s390x Path:/usr/bin/crun Version:crun version 0.19.1 commit: 1535fedf0b83fb898d449f9680000f729ba719f5 spec: 1.0.0 +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL} SwapFree:0 SwapTotal:0 Arch:s390x Cpus:2 Eventlogger:journald Hostname:minikube1.zdalisv.dfw.ibm.com Kernel:5.12.10-200.fc33.s390x Os:linux Rootless:false Uptime:1h 0m 43.41s (Approximately 0.04 days)} Registries:{Search:[registry.fedoraproject.org registry.access.redhat.com docker.io]} Store:{ConfigFile:/etc/containers/storage.conf ContainerStore:{Number:0} GraphDriverName:overlay GraphOptions:{} GraphRoot:/var/lib/containers/storage GraphStatus:{BackingFilesystem:extfs NativeOverlayDiff:true SupportsDType:true UsingMetacopy:false} ImageStore:{Number:0} RunRoot:/run/containers/storage VolumePath:/var/lib/containers/storage/volumes}}
I0615 14:42:09.043798 30552 start_flags.go:259] no existing cluster config was found, will generate one from the flags
I0615 14:42:09.043940 30552 start_flags.go:311] Using suggested 2200MB memory alloc based on sys=4004MB, container=4004MB
I0615 14:42:09.044002 30552 start_flags.go:638] Wait components to verify : map[apiserver:true system_pods:true]
I0615 14:42:09.044013 30552 cni.go:93] Creating CNI manager for ""
I0615 14:42:09.044019 30552 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0615 14:42:09.044022 30552 start_flags.go:273] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.7 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0615 14:42:09.045663 30552 out.go:170] 👍 Starting control plane node minikube in cluster minikube
I0615 14:42:09.045679 30552 cache.go:115] Beginning downloading kic base image for podman with docker
I0615 14:42:09.046691 30552 out.go:170] 🚜 Pulling base image ...
I0615 14:42:09.046719 30552 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime docker
I0615 14:42:09.046784 30552 cache.go:134] Downloading gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 to local cache
I0615 14:42:09.046910 30552 image.go:58] Checking for gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 in local cache directory
I0615 14:42:09.046927 30552 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 to local cache
W0615 14:42:09.106869 30552 preload.go:140] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-s390x.tar.lz4 status code: 404
I0615 14:42:09.107061 30552 profile.go:148] Saving config to /home/fedora/.minikube/profiles/minikube/config.json ...
I0615 14:42:09.107073 30552 lock.go:36] WriteFile acquiring /home/fedora/.minikube/profiles/minikube/config.json: {Name:mkb8d756e7e807b96e4ac95488558156b33e3a18 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0615 14:42:09.107211 30552 cache.go:108] acquiring lock: {Name:mk848f9056a0b7e8deba61c4d9ce1ec9447fb24f Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0615 14:42:09.107243 30552 cache.go:116] /home/fedora/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
I0615 14:42:09.107250 30552 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/fedora/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 40.935µs
I0615 14:42:09.107256 30552 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/fedora/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
I0615 14:42:09.107265 30552 cache.go:108] acquiring lock: {Name:mke06d93f1117b77439f99f030a503df9447235e Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0615 14:42:09.107293 30552 cache.go:116] /home/fedora/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.7 exists
I0615 14:42:09.107299 30552 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.20.7" -> "/home/fedora/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.7" took 35.361µs
I0615 14:42:09.107304 30552 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.20.7 -> /home/fedora/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.7 succeeded
I0615 14:42:09.107312 30552 cache.go:108] acquiring lock: {Name:mk2f2a385b1cfc81a9ca9ce72bba6acd289ea36a Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0615 14:42:09.107339 30552 cache.go:116] /home/fedora/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.7 exists
I0615 14:42:09.107344 30552 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.20.7" -> "/home/fedora/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.7" took 33.518µs
I0615 14:42:09.107349 30552 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.20.7 -> /home/fedora/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.7 succeeded
I0615 14:42:09.107357 30552 cache.go:108] acquiring lock: {Name:mkbf90ca757cecf35c48d78ea90ab30ec4c88b47 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0615 14:42:09.107383 30552 cache.go:116] /home/fedora/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.7 exists
I0615 14:42:09.107388 30552 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.20.7" -> "/home/fedora/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.7" took 32.516µs
I0615 14:42:09.107393 30552 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.20.7 -> /home/fedora/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.7 succeeded
I0615 14:42:09.107400 30552 cache.go:108] acquiring lock: {Name:mk42495b17f7ffd0cee7bc305c566b5bc1271eab Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0615 14:42:09.107426 30552 cache.go:116] /home/fedora/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.7 exists
I0615 14:42:09.107432 30552 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.20.7" -> "/home/fedora/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.7" took 32.595µs
I0615 14:42:09.107437 30552 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.20.7 -> /home/fedora/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.7 succeeded
I0615 14:42:09.107445 30552 cache.go:108] acquiring lock: {Name:mk15a5e7d0f8daa4a4ee292d980fdb92a25be656 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0615 14:42:09.107470 30552 cache.go:116] /home/fedora/.minikube/cache/images/k8s.gcr.io/pause_3.2 exists
I0615 14:42:09.107476 30552 cache.go:97] cache image "k8s.gcr.io/pause:3.2" -> "/home/fedora/.minikube/cache/images/k8s.gcr.io/pause_3.2" took 32.821µs
I0615 14:42:09.107480 30552 cache.go:81] save to tar file k8s.gcr.io/pause:3.2 -> /home/fedora/.minikube/cache/images/k8s.gcr.io/pause_3.2 succeeded
I0615 14:42:09.107488 30552 cache.go:108] acquiring lock: {Name:mke705c7f34a678272a870eb3ee742e676627306 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0615 14:42:09.107513 30552 cache.go:116] /home/fedora/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0 exists
I0615 14:42:09.107519 30552 cache.go:97] cache image "k8s.gcr.io/etcd:3.4.13-0" -> "/home/fedora/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0" took 32.046µs
I0615 14:42:09.107523 30552 cache.go:81] save to tar file k8s.gcr.io/etcd:3.4.13-0 -> /home/fedora/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0 succeeded
I0615 14:42:09.107531 30552 cache.go:108] acquiring lock: {Name:mk58b9291a6bc97c1f1359b79ec84521b6f848fc Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0615 14:42:09.107557 30552 cache.go:116] /home/fedora/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0 exists
I0615 14:42:09.107563 30552 cache.go:97] cache image "k8s.gcr.io/coredns:1.7.0" -> "/home/fedora/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0" took 33.11µs
I0615 14:42:09.107567 30552 cache.go:81] save to tar file k8s.gcr.io/coredns:1.7.0 -> /home/fedora/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0 succeeded
I0615 14:42:09.107575 30552 cache.go:108] acquiring lock: {Name:mk92ab5604f143a7cec05887d019b5678bdb0226 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0615 14:42:09.107602 30552 cache.go:116] /home/fedora/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
I0615 14:42:09.107607 30552 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/fedora/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 33.748µs
I0615 14:42:09.107613 30552 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/fedora/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
I0615 14:42:09.107620 30552 cache.go:108] acquiring lock: {Name:mk9e35de248c46aa08df049e2c423012df463dc5 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0615 14:42:09.107645 30552 cache.go:116] /home/fedora/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
I0615 14:42:09.107651 30552 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/fedora/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 31.152µs
I0615 14:42:09.107655 30552 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/fedora/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
I0615 14:42:09.107658 30552 cache.go:88] Successfully saved all images to host disk.
E0615 14:42:09.283440 30552 cache.go:197] Error downloading kic artifacts: not yet implemented, see issue #8426
I0615 14:42:09.283450 30552 cache.go:202] Successfully downloaded all kic artifacts
I0615 14:42:09.283462 30552 start.go:313] acquiring machines lock for minikube: {Name:mkfbd64e670de175ef3ec6dd8be25ea1851f8d07 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0615 14:42:09.283495 30552 start.go:317] acquired machines lock for "minikube" in 27.196µs
I0615 14:42:09.283504 30552 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.7 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}
I0615 14:42:09.283536 30552 start.go:126] createHost starting for "" (driver="podman")
I0615 14:42:09.285059 30552 out.go:197] 🔥 Creating podman container (CPUs=2, Memory=2200MB) ...
I0615 14:42:09.285218 30552 start.go:160] libmachine.API.Create for "minikube" (driver="podman")
I0615 14:42:09.285230 30552 client.go:168] LocalClient.Create starting
I0615 14:42:09.285267 30552 main.go:128] libmachine: Reading certificate data from /home/fedora/.minikube/certs/ca.pem
I0615 14:42:09.285286 30552 main.go:128] libmachine: Decoding PEM data...
I0615 14:42:09.285297 30552 main.go:128] libmachine: Parsing certificate...
I0615 14:42:09.285368 30552 main.go:128] libmachine: Reading certificate data from /home/fedora/.minikube/certs/cert.pem
I0615 14:42:09.285384 30552 main.go:128] libmachine: Decoding PEM data...
I0615 14:42:09.285393 30552 main.go:128] libmachine: Parsing certificate...
I0615 14:42:09.285627 30552 cli_runner.go:115] Run: sudo -n podman network inspect minikube --format "{{range .plugins}}{{if eq .type "bridge"}}{{(index (index .ipam.ranges 0) 0).subnet}},{{(index (index .ipam.ranges 0) 0).gateway}}{{end}}{{end}}"
I0615 14:42:09.393427 30552 network_create.go:67] Found existing network {name:minikube subnet:0xc000b75920 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:0}
I0615 14:42:09.393438 30552 kic.go:106] calculated static IP "192.168.49.2" for the "minikube" container
I0615 14:42:09.393480 30552 cli_runner.go:115] Run: sudo -n podman ps -a --format {{.Names}}
I0615 14:42:09.523500 30552 cli_runner.go:115] Run: sudo -n podman volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0615 14:42:09.653396 30552 oci.go:102] Successfully created a podman volume minikube
I0615 14:42:09.653434 30552 cli_runner.go:115] Run: sudo -n podman run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.23 -d /var/lib
W0615 14:42:10.133503 30552 cli_runner.go:162] sudo -n podman run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.23 -d /var/lib returned with exit code 125
I0615 14:42:10.133523 30552 client.go:171] LocalClient.Create took 848.289535ms
I0615 14:42:12.134090 30552 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0615 14:42:12.134129 30552 cli_runner.go:115] Run: sudo -n podman version --format {{.Version}}
I0615 14:42:12.243575 30552 cli_runner.go:115] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0615 14:42:12.353526 30552 cli_runner.go:162] sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 125
I0615 14:42:12.353575 30552 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 125
stdout:

stderr:
Error: error inspecting object: no such container minikube
I0615 14:42:12.630208 30552 cli_runner.go:115] Run: sudo -n podman version --format {{.Version}}
I0615 14:42:12.733566 30552 cli_runner.go:115] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0615 14:42:12.843522 30552 cli_runner.go:162] sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 125
I0615 14:42:12.843573 30552 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 125
stdout:

stderr:
Error: error inspecting object: no such container minikube
I0615 14:42:13.384633 30552 cli_runner.go:115] Run: sudo -n podman version --format {{.Version}}
I0615 14:42:13.513535 30552 cli_runner.go:115] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0615 14:42:13.623653 30552 cli_runner.go:162] sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 125
I0615 14:42:13.623710 30552 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 125
stdout:

stderr:
Error: error inspecting object: no such container minikube
I0615 14:42:14.279839 30552 cli_runner.go:115] Run: sudo -n podman version --format {{.Version}}
I0615 14:42:14.423423 30552 cli_runner.go:115] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0615 14:42:14.543373 30552 cli_runner.go:162] sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 125
W0615 14:42:14.543426 30552 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 125
stdout:

stderr:
Error: error inspecting object: no such container minikube

W0615 14:42:14.543432 30552 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 125
stdout:

stderr:
Error: error inspecting object: no such container minikube
I0615 14:42:14.543436 30552 start.go:129] duration metric: createHost completed in 5.259896732s
I0615 14:42:14.543440 30552 start.go:80] releasing machines lock for "minikube", held for 5.259942055s
W0615 14:42:14.543449 30552 start.go:518] error starting host: creating host: create: creating: setting up container node: preparing volume for minikube container: sudo -n podman run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.23 -d /var/lib: exit status 125
stdout:

stderr:
Trying to pull gcr.io/k8s-minikube/kicbase:v0.0.23...
no image found in manifest list for architecture s390x, variant "", OS linux
Error: Error choosing an image from manifest list docker://gcr.io/k8s-minikube/kicbase:v0.0.23: no image found in manifest list for architecture s390x, variant "", OS linux
I0615 14:42:14.543773 30552 cli_runner.go:115] Run: sudo -n podman container inspect minikube --format={{.State.Status}}
W0615 14:42:14.683483 30552 cli_runner.go:162] sudo -n podman container inspect minikube --format={{.State.Status}} returned with exit code 125
I0615 14:42:14.683504 30552 delete.go:82] Unable to get host status for minikube, assuming it has already been deleted: state: unknown state "minikube": sudo -n podman container inspect minikube --format={{.State.Status}}: exit status 125
stdout:

stderr:
Error: error inspecting object: no such container minikube
W0615 14:42:14.683568 30552 out.go:235] 🤦 StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for minikube container: sudo -n podman run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.23 -d /var/lib: exit status 125
stdout:

stderr:
Trying to pull gcr.io/k8s-minikube/kicbase:v0.0.23...
no image found in manifest list for architecture s390x, variant "", OS linux
Error: Error choosing an image from manifest list docker://gcr.io/k8s-minikube/kicbase:v0.0.23: no image found in manifest list for architecture s390x, variant "", OS linux

I0615 14:42:14.683719 30552 start.go:533] Will try again in 5 seconds ...
I0615 14:42:19.684326 30552 start.go:313] acquiring machines lock for minikube: {Name:mkfbd64e670de175ef3ec6dd8be25ea1851f8d07 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0615 14:42:19.684380 30552 start.go:317] acquired machines lock for "minikube" in 42.164µs
I0615 14:42:19.684389 30552 start.go:93] Skipping create...Using existing machine configuration
I0615 14:42:19.684393 30552 fix.go:55] fixHost starting:
I0615 14:42:19.684563 30552 cli_runner.go:115] Run: sudo -n podman container inspect minikube --format={{.State.Status}}
W0615 14:42:19.813413 30552 cli_runner.go:162] sudo -n podman container inspect minikube --format={{.State.Status}} returned with exit code 125
I0615 14:42:19.813429 30552 fix.go:108] recreateIfNeeded on minikube: state= err=unknown state "minikube": sudo -n podman container inspect minikube --format={{.State.Status}}: exit status 125
stdout:

stderr:
Error: error inspecting object: no such container minikube
I0615 14:42:19.813439 30552 fix.go:113] machineExists: true. err=unknown state "minikube": sudo -n podman container inspect minikube --format={{.State.Status}}: exit status 125
stdout:

stderr:
Error: error inspecting object: no such container minikube
W0615 14:42:19.813444 30552 fix.go:134] unexpected machine state, will restart: unknown state "minikube": sudo -n podman container inspect minikube --format={{.State.Status}}: exit status 125
stdout:

stderr:
Error: error inspecting object: no such container minikube
I0615 14:42:19.815104 30552 out.go:170] 🔄 Restarting existing podman container for "minikube" ...
I0615 14:42:19.815145 30552 cli_runner.go:115] Run: sudo -n podman start --cgroup-manager cgroupfs minikube
W0615 14:42:19.953379 30552 cli_runner.go:162] sudo -n podman start --cgroup-manager cgroupfs minikube returned with exit code 125
I0615 14:42:19.953414 30552 cli_runner.go:115] Run: sudo -n podman inspect minikube
I0615 14:42:20.073749 30552 errors.go:84] Postmortem inspect ("sudo -n podman inspect minikube"): -- stdout --
[
    {
        "Name": "minikube",
        "Driver": "local",
        "Mountpoint": "/var/lib/containers/storage/volumes/minikube/_data",
        "CreatedAt": "2021-06-15T14:42:09.614053773Z",
        "Labels": {
            "created_by.minikube.sigs.k8s.io": "true",
            "name.minikube.sigs.k8s.io": "minikube"
        },
        "Scope": "local",
        "Options": {}
    }
]

-- /stdout --
I0615 14:42:20.073838 30552 cli_runner.go:115] Run: sudo -n podman logs --timestamps minikube
W0615 14:42:20.203437 30552 cli_runner.go:162] sudo -n podman logs --timestamps minikube returned with exit code 125
W0615 14:42:20.203447 30552 errors.go:89] Failed to get postmortem logs. sudo -n podman logs --timestamps minikube :sudo -n podman logs --timestamps minikube: exit status 125
stdout:

stderr:
Error: no container with name or ID "minikube" found: no such container
I0615 14:42:20.203478 30552 cli_runner.go:115] Run: sudo -n podman system info --format json
I0615 14:42:20.353543 30552 info.go:281] podman info: {Host:{BuildahVersion:1.20.1 CgroupVersion:v2 Conmon:{Package:conmon-2.0.27-2.fc33.s390x Path:/usr/bin/conmon Version:conmon version 2.0.27, commit: } Distribution:{Distribution:fedora Version:33} MemFree:1243226112 MemTotal:4198801408 OCIRuntime:{Name:crun Package:crun-0.19.1-3.fc33.s390x Path:/usr/bin/crun Version:crun version 0.19.1
commit: 1535fedf0b83fb898d449f9680000f729ba719f5
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL} SwapFree:0 SwapTotal:0 Arch:s390x Cpus:2 Eventlogger:journald Hostname:minikube1.zdalisv.dfw.ibm.com Kernel:5.12.10-200.fc33.s390x Os:linux Rootless:false Uptime:1h 0m 54.72s (Approximately 0.04 days)} Registries:{Search:[registry.fedoraproject.org registry.access.redhat.com docker.io]} Store:{ConfigFile:/etc/containers/storage.conf ContainerStore:{Number:0} GraphDriverName:overlay GraphOptions:{} GraphRoot:/var/lib/containers/storage GraphStatus:{BackingFilesystem:extfs NativeOverlayDiff:true SupportsDType:true UsingMetacopy:false} ImageStore:{Number:0} RunRoot:/run/containers/storage VolumePath:/var/lib/containers/storage/volumes}}
I0615 14:42:20.353562 30552 errors.go:106] postmortem podman info: {Host:{BuildahVersion:1.20.1 CgroupVersion:v2 Conmon:{Package:conmon-2.0.27-2.fc33.s390x Path:/usr/bin/conmon Version:conmon version 2.0.27, commit: } Distribution:{Distribution:fedora Version:33} MemFree:1243226112 MemTotal:4198801408 OCIRuntime:{Name:crun Package:crun-0.19.1-3.fc33.s390x Path:/usr/bin/crun Version:crun version 0.19.1
commit: 1535fedf0b83fb898d449f9680000f729ba719f5
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL} SwapFree:0 SwapTotal:0 Arch:s390x Cpus:2 Eventlogger:journald Hostname:minikube1.zdalisv.dfw.ibm.com Kernel:5.12.10-200.fc33.s390x Os:linux Rootless:false Uptime:1h 0m 54.72s (Approximately 0.04 days)} Registries:{Search:[registry.fedoraproject.org registry.access.redhat.com docker.io]} Store:{ConfigFile:/etc/containers/storage.conf ContainerStore:{Number:0} GraphDriverName:overlay GraphOptions:{} GraphRoot:/var/lib/containers/storage GraphStatus:{BackingFilesystem:extfs NativeOverlayDiff:true SupportsDType:true UsingMetacopy:false} ImageStore:{Number:0} RunRoot:/run/containers/storage VolumePath:/var/lib/containers/storage/volumes}}
I0615 14:42:20.353590 30552 network_create.go:255] running [podman network inspect minikube] to gather additional debugging logs...
I0615 14:42:20.353609 30552 cli_runner.go:115] Run: sudo -n podman network inspect minikube
I0615 14:42:20.483538 30552 network_create.go:260] output of [sudo -n podman network inspect minikube]: -- stdout --
[
    {
        "cniVersion": "0.4.0",
        "name": "minikube",
        "plugins": [
            {
                "bridge": "cni-podman1",
                "hairpinMode": true,
                "ipMasq": true,
                "ipam": {
                    "ranges": [
                        [
                            {
                                "gateway": "192.168.49.1",
                                "subnet": "192.168.49.0/24"
                            }
                        ]
                    ],
                    "routes": [
                        {
                            "dst": "0.0.0.0/0"
                        }
                    ],
                    "type": "host-local"
                },
                "isGateway": true,
                "type": "bridge"
            },
            {
                "capabilities": {
                    "portMappings": true
                },
                "type": "portmap"
            },
            {
                "backend": "",
                "type": "firewall"
            },
            {
                "type": "tuning"
            },
            {
                "capabilities": {
                    "aliases": true
                },
                "domainName": "dns.podman",
                "type": "dnsname"
            }
        ]
    }
]

-- /stdout --
I0615 14:42:20.483583 30552 cli_runner.go:115] Run: sudo -n podman system info --format json
I0615 14:42:20.643548 30552 info.go:281] podman info: {Host:{BuildahVersion:1.20.1 CgroupVersion:v2 Conmon:{Package:conmon-2.0.27-2.fc33.s390x Path:/usr/bin/conmon Version:conmon version 2.0.27, commit: } Distribution:{Distribution:fedora Version:33} MemFree:1243791360 MemTotal:4198801408 OCIRuntime:{Name:crun Package:crun-0.19.1-3.fc33.s390x Path:/usr/bin/crun Version:crun version 0.19.1
commit: 1535fedf0b83fb898d449f9680000f729ba719f5
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL} SwapFree:0 SwapTotal:0 Arch:s390x Cpus:2 Eventlogger:journald Hostname:minikube1.zdalisv.dfw.ibm.com Kernel:5.12.10-200.fc33.s390x Os:linux Rootless:false Uptime:1h 0m 55.01s (Approximately 0.04 days)} Registries:{Search:[registry.fedoraproject.org registry.access.redhat.com docker.io]} Store:{ConfigFile:/etc/containers/storage.conf ContainerStore:{Number:0} GraphDriverName:overlay GraphOptions:{} GraphRoot:/var/lib/containers/storage GraphStatus:{BackingFilesystem:extfs NativeOverlayDiff:true SupportsDType:true UsingMetacopy:false} ImageStore:{Number:0} RunRoot:/run/containers/storage VolumePath:/var/lib/containers/storage/volumes}}
I0615 14:42:20.643777 30552 cli_runner.go:115] Run: sudo -n podman container inspect -f {{.NetworkSettings.IPAddress}} minikube
W0615 14:42:20.793518 30552 cli_runner.go:162] sudo -n podman container inspect -f {{.NetworkSettings.IPAddress}} minikube returned with exit code 125
I0615 14:42:20.793559 30552 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0615 14:42:20.793592 30552 cli_runner.go:115] Run: sudo -n podman version --format {{.Version}}
I0615 14:42:20.943495 30552 cli_runner.go:115] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0615 14:42:21.083399 30552 cli_runner.go:162] sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 125
I0615 14:42:21.083457 30552 retry.go:31] will retry after 234.428547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 125
stdout:

stderr:
Error: error inspecting object: no such container minikube
I0615 14:42:21.318877 30552 cli_runner.go:115] Run: sudo -n podman version --format {{.Version}}
I0615 14:42:21.443589 30552 cli_runner.go:115] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0615 14:42:21.573394 30552 cli_runner.go:162] sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 125
I0615 14:42:21.573444 30552 retry.go:31] will retry after 346.739061ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 125
stdout:

stderr:
Error: error inspecting object: no such container minikube
I0615 14:42:21.920931 30552 cli_runner.go:115] Run: sudo -n podman version --format {{.Version}}
I0615 14:42:22.053534 30552 cli_runner.go:115] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0615 14:42:22.203360 30552 cli_runner.go:162] sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 125
I0615 14:42:22.203409 30552 retry.go:31] will retry after 667.587979ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 125
stdout:

stderr:
Error: error inspecting object: no such container minikube
I0615 14:42:22.871144 30552 cli_runner.go:115] Run: sudo -n podman version --format {{.Version}}
I0615 14:42:22.993505 30552 cli_runner.go:115] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0615 14:42:23.123357 30552 cli_runner.go:162] sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 125
W0615 14:42:23.123417 30552 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 125
stdout:

stderr:
Error: error inspecting object: no such container minikube

W0615 14:42:23.123423 30552 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 125
stdout:

stderr:
Error: error inspecting object: no such container minikube
I0615 14:42:23.123428 30552 fix.go:57] fixHost completed within 3.439034957s
I0615 14:42:23.123432 30552 start.go:80] releasing machines lock for "minikube", held for 3.43904856s
W0615 14:42:23.123519 30552 out.go:235] 😿 Failed to start podman container. Running "minikube delete" may fix it: podman inspect ip minikube: sudo -n podman container inspect -f {{.NetworkSettings.IPAddress}} minikube: exit status 125
stdout:

stderr:
Error: error inspecting object: no such container minikube

I0615 14:42:23.125747 30552 out.go:170]
W0615 14:42:23.125806 30552 out.go:235] ❌ Exiting due to GUEST_PROVISION: Failed to start host: podman inspect ip minikube: sudo -n podman container inspect -f {{.NetworkSettings.IPAddress}} minikube: exit status 125
stdout:

stderr:
Error: error inspecting object: no such container minikube

W0615 14:42:23.125913 30552 out.go:235]
W0615 14:42:23.126777 30552 out.go:235] ╭────────────────────────────────────────────────────────────────────╮
W0615 14:42:23.126788 30552 out.go:235] │                                                                    │
W0615 14:42:23.126807 30552 out.go:235] │    😿  If the above advice does not help, please let us know:      │
W0615 14:42:23.126827 30552 out.go:235] │    👉  https://github.com/kubernetes/minikube/issues/new/choose    │
W0615 14:42:23.126842 30552 out.go:235] │                                                                    │
W0615 14:42:23.126856 30552 out.go:235] │    Please attach the following file to the GitHub issue:           │
W0615 14:42:23.126867 30552 out.go:235] │    - /home/fedora/.minikube/logs/lastStart.txt                     │
W0615 14:42:23.126877 30552 out.go:235] │                                                                    │
W0615 14:42:23.126888 30552 out.go:235] ╰────────────────────────────────────────────────────────────────────╯
W0615 14:42:23.126900 30552 out.go:235]

❌ Exiting due to GUEST_STATUS: state: unknown state "minikube": sudo -n podman container inspect minikube --format={{.State.Status}}: exit status 125
stdout:

stderr:
Error: error inspecting object: no such container minikube

╭──────────────────────────────────────────────────────────────────────────╮
│                                                                          │
│    😿  If the above advice does not help, please let us know:            │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose          │
│                                                                          │
│    Please attach the following file to the GitHub issue:                 │
│    - /tmp/minikube_logs_f30b94c7b8be27a1785d74f9772c624a74c09c39_0.log   │
│                                                                          │
╰──────────────────────────────────────────────────────────────────────────╯

Full output of minikube logs command (for kvm2)

==> Audit <==
|---------|-------------------|----------|--------|---------|-------------------------------|-------------------------------|
| Command |       Args        | Profile  |  User  | Version |          Start Time           |           End Time            |
|---------|-------------------|----------|--------|---------|-------------------------------|-------------------------------|
| config  | set driver kvm2   | minikube | fedora | v1.21.0 | Tue, 15 Jun 2021 13:57:05 UTC | Tue, 15 Jun 2021 13:57:05 UTC |
| delete  |                   | minikube | fedora | v1.21.0 | Tue, 15 Jun 2021 13:57:13 UTC | Tue, 15 Jun 2021 13:57:13 UTC |
| start   | --help            | minikube | fedora | v1.21.0 | Tue, 15 Jun 2021 14:29:38 UTC | Tue, 15 Jun 2021 14:29:38 UTC |
| start   | --help            | minikube | fedora | v1.21.0 | Tue, 15 Jun 2021 14:29:49 UTC | Tue, 15 Jun 2021 14:29:49 UTC |
| config  | set driver podman | minikube | fedora | v1.21.0 | Tue, 15 Jun 2021 14:37:35 UTC | Tue, 15 Jun 2021 14:37:35 UTC |
| delete  |                   | minikube | fedora | v1.21.0 | Tue, 15 Jun 2021 14:37:40 UTC | Tue, 15 Jun 2021 14:37:40 UTC |
| config  | set driver podman | minikube | fedora | v1.21.0 | Tue, 15 Jun 2021 14:40:18 UTC | Tue, 15 Jun 2021 14:40:18 UTC |
| config  | set driver podman | minikube | fedora | v1.21.0 | Tue, 15 Jun 2021 14:40:22 UTC | Tue, 15 Jun 2021 14:40:22 UTC |
| delete  |                   | minikube | fedora | v1.21.0 | Tue, 15 Jun 2021 14:40:22 UTC | Tue, 15 Jun 2021 14:40:22 UTC |
| config  | set driver kvm2   | minikube | fedora | v1.21.0 | Tue, 15 Jun 2021 14:40:56 UTC | Tue, 15 Jun 2021 14:40:56 UTC |
| delete  |                   | minikube | fedora | v1.21.0 | Tue, 15 Jun 2021 14:40:56 UTC | Tue, 15 Jun 2021 14:40:56 UTC |
| config  | set driver podman | minikube | fedora | v1.21.0 | Tue, 15 Jun 2021 14:42:06 UTC | Tue, 15 Jun 2021 14:42:06 UTC |
| delete  |                   | minikube | fedora | v1.21.0 | Tue, 15 Jun 2021 14:42:06 UTC | Tue, 15 Jun 2021 14:42:07 UTC |
| config  | set driver kvm2   | minikube | fedora | v1.21.0 | Tue, 15 Jun 2021 14:45:05 UTC | Tue, 15 Jun 2021 14:45:05 UTC |
| delete  |                   | minikube | fedora | v1.21.0 | Tue, 15 Jun 2021 14:45:05 UTC | Tue, 15 Jun 2021 14:45:05 UTC |
|---------|-------------------|----------|--------|---------|-------------------------------|-------------------------------|

==> Last Start <==
Log file created at: 2021/06/15 14:45:09
Running on machine: minikube1
Binary: Built with gc go1.16.4 for linux/s390x
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0615 14:45:09.494355 35758 out.go:291] Setting OutFile to fd 1 ...
I0615 14:45:09.494462 35758 out.go:343] isatty.IsTerminal(1) = true
I0615 14:45:09.494465 35758 out.go:304] Setting ErrFile to fd 2...
I0615 14:45:09.494468 35758 out.go:343] isatty.IsTerminal(2) = true
I0615 14:45:09.494545 35758 root.go:316] Updating PATH: /home/fedora/.minikube/bin
I0615 14:45:09.494725 35758 out.go:298] Setting JSON to false
I0615 14:45:09.495146 35758 start.go:111] hostinfo: {"hostname":"minikube1.zdalisv.dfw.ibm.com","uptime":3824,"bootTime":1623764485,"procs":92,"os":"linux","platform":"fedora","platformFamily":"fedora","platformVersion":"33","kernelVersion":"5.12.10-200.fc33.s390x","kernelArch":"s390x","virtualizationSystem":"kvm","virtualizationRole":"host","hostId":"457c81ed-72d0-436d-a743-6a916ac685bb"}
I0615 14:45:09.495190 35758 start.go:121] virtualization: kvm host
I0615 14:45:09.496967 35758 out.go:170] 😄 minikube v1.21.0 on Fedora 33 (s390x)
I0615 14:45:09.497089 35758 notify.go:169] Checking for updates...
I0615 14:45:09.497486 35758 driver.go:335] Setting default libvirt URI to qemu:///system
I0615 14:45:09.498712 35758 out.go:170] ✨ Using the kvm2 driver based on user configuration
I0615 14:45:09.498722 35758 start.go:279] selected driver: kvm2
I0615 14:45:09.498725 35758 start.go:752] validating driver "kvm2" against
I0615 14:45:09.498731 35758 start.go:763] status for kvm2: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:libvirt group membership check failed:
error getting current user's GIDs: user: GroupIds requires cgo Reason:PR_KVM_USER_PERMISSION Fix:Check that libvirtd is properly installed and that you are a member of the appropriate libvirt group (remember to relogin for group changes to take effect!) Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/}
I0615 14:45:09.499768 35758 out.go:170]
W0615 14:45:09.499816 35758 out.go:235] 🚫 Exiting due to PR_KVM_USER_PERMISSION: libvirt group membership check failed:
error getting current user's GIDs: user: GroupIds requires cgo
W0615 14:45:09.499939 35758 out.go:235] 💡 Suggestion: Ensure that you are a member of the appropriate libvirt group (remember to relogin for group changes to take effect!)
W0615 14:45:09.499974 35758 out.go:235] 📘 Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/
W0615 14:45:09.499985 35758 out.go:235] 🍿 Related issues:
W0615 14:45:09.500018 35758 out.go:235] ▪ #5617
W0615 14:45:09.500049 35758 out.go:235] ▪ #10070

🤷 Profile "minikube" not found. Run "minikube profile list" to view all profiles.
👉 To start a cluster, run: "minikube start"

afbjorklund (Collaborator) commented Jun 15, 2021

minikube claims to run on s390x

Only the "none" driver, for now. I suppose the remote "ssh" driver could work as well.

Currently we are adding support for arm64 as the second architecture (beyond amd64).
The docker* driver should support both, and next up is making the iso available for both.

But there is no ETA on other kubernetes architectures, whether arm(v7) or s390x/ppc64le.
We do build the minikube client for all of them (and then some), but that's a small comfort...

* and podman, same image (KIC)


So kubectl should work on the client, and kubeadm and friends should work on the server.

The problem is that we don't have an OS distribution, not for the KIC image and not for the ISO.

You might want to try whether kicbase and Ubuntu work, by building the docker/podman image yourself?

But there is currently no way of testing these exotic architectures, so it will have to be contributed...

afbjorklund (Collaborator) commented Jun 15, 2021

I think I might have messed up the warnings, when I refactored SupportedDrivers

var (
	// SupportedArchitectures is the list of supported architectures
	SupportedArchitectures = [5]string{"amd64", "arm", "arm64", "ppc64le", "s390x"}
)

We need to filter the supportedDrivers as well, the same way as done for darwin
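A rough sketch of the kind of per-architecture filtering meant here (the map contents and helper name are illustrative assumptions, not minikube's actual code):

```go
package driver

import "runtime"

// SupportedArchitectures is the list of supported architectures.
var SupportedArchitectures = [5]string{"amd64", "arm", "arm64", "ppc64le", "s390x"}

// archDrivers maps an architecture to the drivers that have a working
// payload for it: a KIC image for docker/podman, an ISO for the VM
// drivers. Contents are assumptions for illustration; today only
// "ssh" and "none" are expected to work on s390x.
var archDrivers = map[string][]string{
	"amd64": {"virtualbox", "kvm2", "docker", "podman", "ssh", "none"},
	"arm64": {"docker", "podman", "ssh", "none"},
	// No KIC image or ISO exists for these yet, so host-based drivers only:
	"arm":     {"ssh", "none"},
	"ppc64le": {"ssh", "none"},
	"s390x":   {"ssh", "none"},
}

// supportedDrivers returns the drivers usable on the current platform,
// so that e.g. "podman" or "kvm2" is never suggested on s390x.
func supportedDrivers() []string {
	if drivers, ok := archDrivers[runtime.GOARCH]; ok {
		return drivers
	}
	return []string{"ssh", "none"}
}
```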

vmorris (Author) commented Jun 15, 2021

But there is currently no way of testing these exotic architectures, so it will have to be contributed...

I may be able to help with this...

afbjorklund (Collaborator) commented Jun 15, 2021

KIC (container)
You could start with sudo podman run -it ubuntu:focal-20210401, the basis for the kicbase image...
Apparently the instructions on how to build it (from the Dockerfile) are all broken at the moment, though.

ISO (hypervisor)
It seems "unlikely" that we will do a Buildroot distribution for IBM, but maybe with an Ubuntu ISO (#9992)?
If you get a VM running (Ubuntu 20.04), then using the "ssh" driver against it is supposed to work as well; see the example below.
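For instance (the host details here are hypothetical), something like: minikube start --driver=ssh --ssh-ip-address=<vm-ip> --ssh-user=ubuntu --ssh-key=~/.ssh/id_rsa. Those flags correspond to the SSHIPAddress/SSHUser/SSHKey/SSHPort fields visible in the config dump above.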


In general we could use some more instructions on how to install an environment for the none/ssh driver.

Previously I have been using Vagrant for such testing, so having those Vagrantfiles available might help a bit.

@spowelljr spowelljr added the kind/support Categorizes issue or PR as a support question. label Jun 15, 2021
@afbjorklund afbjorklund added kind/feature Categorizes issue or PR as related to a new feature. and removed kind/support Categorizes issue or PR as a support question. labels Jul 16, 2021
@sharifelgamal sharifelgamal added the priority/backlog Higher priority than priority/awaiting-more-evidence. label Jul 28, 2021
@afbjorklund afbjorklund added this to the 1.23.0 milestone Sep 3, 2021