
Exiting due to K8S_KUBELET_NOT_RUNNING #11932

Closed
rafacouto opened this issue Jul 8, 2021 · 17 comments · Fixed by #12990
Labels
co/docker-driver: Issues related to kubernetes in container
kind/support: Categorizes issue or PR as a support question.

Comments

@rafacouto

Steps to reproduce the issue:

  1. minikube start

Full output of minikube logs command:

  • ==> Docker <==

  • -- Logs begin at Thu 2021-07-08 09:23:44 UTC, end at Thu 2021-07-08 09:33:41 UTC. --
    Jul 08 09:23:44 minikube systemd[1]: Starting Docker Application Container Engine...
    Jul 08 09:23:44 minikube dockerd[148]: time="2021-07-08T09:23:44.998760636Z" level=info msg="Starting up"
    Jul 08 09:23:44 minikube dockerd[148]: time="2021-07-08T09:23:44.999619832Z" level=info msg="parsed scheme: "unix"" module=grpc
    Jul 08 09:23:44 minikube dockerd[148]: time="2021-07-08T09:23:44.999639457Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
    Jul 08 09:23:44 minikube dockerd[148]: time="2021-07-08T09:23:44.999659502Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
    Jul 08 09:23:44 minikube dockerd[148]: time="2021-07-08T09:23:44.999668581Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
    Jul 08 09:23:45 minikube dockerd[148]: time="2021-07-08T09:23:45.000894168Z" level=info msg="parsed scheme: "unix"" module=grpc
    Jul 08 09:23:45 minikube dockerd[148]: time="2021-07-08T09:23:45.000921896Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
    Jul 08 09:23:45 minikube dockerd[148]: time="2021-07-08T09:23:45.000941940Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
    Jul 08 09:23:45 minikube dockerd[148]: time="2021-07-08T09:23:45.000951439Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
    Jul 08 09:23:45 minikube dockerd[148]: time="2021-07-08T09:23:45.033814395Z" level=info msg="Loading containers: start."
    Jul 08 09:23:45 minikube dockerd[148]: time="2021-07-08T09:23:45.110522049Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
    Jul 08 09:23:45 minikube dockerd[148]: time="2021-07-08T09:23:45.185317850Z" level=info msg="Loading containers: done."
    Jul 08 09:23:45 minikube dockerd[148]: time="2021-07-08T09:23:45.204623430Z" level=info msg="Docker daemon" commit=b0f5bc3 graphdriver(s)=btrfs version=20.10.7
    Jul 08 09:23:45 minikube dockerd[148]: time="2021-07-08T09:23:45.204710383Z" level=info msg="Daemon has completed initialization"
    Jul 08 09:23:45 minikube systemd[1]: Started Docker Application Container Engine.
    Jul 08 09:23:45 minikube dockerd[148]: time="2021-07-08T09:23:45.240065021Z" level=info msg="API listen on /run/docker.sock"
    Jul 08 09:23:47 minikube systemd[1]: docker.service: Current command vanished from the unit file, execution of the command list won't be resumed.
    Jul 08 09:23:48 minikube systemd[1]: Stopping Docker Application Container Engine...
    Jul 08 09:23:48 minikube dockerd[148]: time="2021-07-08T09:23:48.056899649Z" level=info msg="Processing signal 'terminated'"
    Jul 08 09:23:48 minikube dockerd[148]: time="2021-07-08T09:23:48.058097719Z" level=info msg="stopping event stream following graceful shutdown" error="" module=libcontainerd namespace=moby
    Jul 08 09:23:48 minikube dockerd[148]: time="2021-07-08T09:23:48.058785872Z" level=info msg="Daemon shutdown complete"
    Jul 08 09:23:48 minikube systemd[1]: docker.service: Succeeded.
    Jul 08 09:23:48 minikube systemd[1]: Stopped Docker Application Container Engine.
    Jul 08 09:23:48 minikube systemd[1]: Starting Docker Application Container Engine...
    Jul 08 09:23:48 minikube dockerd[392]: time="2021-07-08T09:23:48.107522303Z" level=info msg="Starting up"
    Jul 08 09:23:48 minikube dockerd[392]: time="2021-07-08T09:23:48.108825275Z" level=info msg="parsed scheme: "unix"" module=grpc
    Jul 08 09:23:48 minikube dockerd[392]: time="2021-07-08T09:23:48.108855796Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
    Jul 08 09:23:48 minikube dockerd[392]: time="2021-07-08T09:23:48.108892952Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
    Jul 08 09:23:48 minikube dockerd[392]: time="2021-07-08T09:23:48.108922495Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
    Jul 08 09:23:48 minikube dockerd[392]: time="2021-07-08T09:23:48.109826390Z" level=info msg="parsed scheme: "unix"" module=grpc
    Jul 08 09:23:48 minikube dockerd[392]: time="2021-07-08T09:23:48.109852650Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
    Jul 08 09:23:48 minikube dockerd[392]: time="2021-07-08T09:23:48.109872765Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
    Jul 08 09:23:48 minikube dockerd[392]: time="2021-07-08T09:23:48.109884638Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
    Jul 08 09:23:48 minikube dockerd[392]: time="2021-07-08T09:23:48.123476795Z" level=info msg="Loading containers: start."
    Jul 08 09:23:48 minikube dockerd[392]: time="2021-07-08T09:23:48.303878277Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
    Jul 08 09:23:48 minikube dockerd[392]: time="2021-07-08T09:23:48.364591769Z" level=info msg="Loading containers: done."
    Jul 08 09:23:48 minikube dockerd[392]: time="2021-07-08T09:23:48.372909600Z" level=info msg="Docker daemon" commit=b0f5bc3 graphdriver(s)=btrfs version=20.10.7
    Jul 08 09:23:48 minikube dockerd[392]: time="2021-07-08T09:23:48.373007379Z" level=info msg="Daemon has completed initialization"
    Jul 08 09:23:48 minikube systemd[1]: Started Docker Application Container Engine.
    Jul 08 09:23:48 minikube dockerd[392]: time="2021-07-08T09:23:48.397703432Z" level=info msg="API listen on [::]:2376"
    Jul 08 09:23:48 minikube dockerd[392]: time="2021-07-08T09:23:48.400915954Z" level=info msg="API listen on /var/run/docker.sock"

  • ==> container status <==

  • CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID

  • ==> describe nodes <==

  • ==> dmesg <==

  • [ +0.016734] FAT-fs (sda1): Volume was not properly unmounted. Some data may be corrupt. Please run fsck.
    [Jul 4 19:47] NFSD: Using UMH upcall client tracking operations.
    [Jul 5 16:27] Chrome_ChildIOT invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=300
    [ +0.000004] CPU: 1 PID: 25301 Comm: Chrome_ChildIOT Not tainted 5.10.0-7-amd64 #1 Debian 5.10.40-1
    [ +0.000001] Hardware name: HUAWEI KLVL-WXX9/KLVL-WXX9-PCB, BIOS 1.06 09/14/2020
    [ +0.000000] Call Trace:
    [ +0.000007] dump_stack+0x6b/0x83
    [ +0.000002] dump_header+0x4a/0x1f0
    [ +0.000002] oom_kill_process.cold+0xb/0x10
    [ +0.000002] out_of_memory+0x1bd/0x500
    [ +0.000002] __alloc_pages_slowpath.constprop.0+0xb8c/0xc60
    [ +0.000002] __alloc_pages_nodemask+0x2da/0x310
    [ +0.000001] pagecache_get_page+0x16d/0x380
    [ +0.000002] filemap_fault+0x69e/0x900
    [ +0.000002] ? filemap_map_pages+0x223/0x410
    [ +0.000001] __do_fault+0x36/0x120
    [ +0.000002] handle_mm_fault+0x118e/0x1b80
    [ +0.000003] do_user_addr_fault+0x1bb/0x3f0
    [ +0.000002] ? _copy_to_user+0x1c/0x30
    [ +0.000002] exc_page_fault+0x7b/0x160
    [ +0.000002] ? asm_exc_page_fault+0x8/0x30
    [ +0.000001] asm_exc_page_fault+0x1e/0x30
    [ +0.000001] RIP: 0033:0x564776cdafff
    [ +0.000004] Code: Unable to access opcode bytes at RIP 0x564776cdafd5.
    [ +0.000001] RSP: 002b:00007f6de45b80d0 EFLAGS: 00010246
    [ +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000149f8a85dd78
    [ +0.000000] RDX: 00007f6de45b8180 RSI: 0000000000000000 RDI: 0000000000000000
    [ +0.000001] RBP: 00007f6de45b8170 R08: 0000149f8a85dd00 R09: 0000149f8a85dd00
    [ +0.000001] R10: 00007ffdff5df000 R11: 0000000000000286 R12: 0000149f8a85da80
    [ +0.000000] R13: 0000149f8bdc2628 R14: 0000000000000000 R15: 0000149f8a85dd68
    [ +0.000002] Mem-Info:
    [ +0.000004] active_anon:1581 inactive_anon:3639196 isolated_anon:0
    active_file:279 inactive_file:4213 isolated_file:376
    unevictable:3018 dirty:48 writeback:0
    slab_reclaimable:18688 slab_unreclaimable:52330
    mapped:116625 shmem:122041 pagetables:21825 bounce:0
    free:32819 free_pcp:6283 free_cma:0
    [ +0.000002] Node 0 active_anon:6324kB inactive_anon:14556784kB active_file:1116kB inactive_file:16852kB unevictable:12072kB isolated(anon):0kB isolated(file):1504kB mapped:466500kB dirty:192kB writeback:0kB shmem:488164kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 2007040kB writeback_tmp:0kB kernel_stack:25536kB all_unreclaimable? no
    [ +0.000001] Node 0 DMA free:15904kB min:68kB low:84kB high:100kB reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15904kB mlocked:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
    [ +0.000002] lowmem_reserve[]: 0 3052 15274 15274 15274
    [ +0.000002] Node 0 DMA32 free:58848kB min:13492kB low:16864kB high:20236kB reserved_highatomic:0KB active_anon:0kB inactive_anon:3014072kB active_file:800kB inactive_file:2032kB unevictable:0kB writepending:20kB present:3271116kB managed:3270552kB mlocked:0kB pagetables:1964kB bounce:0kB free_pcp:9572kB local_pcp:696kB free_cma:0kB
    [ +0.000002] lowmem_reserve[]: 0 0 12221 12221 12221
    [ +0.000002] Node 0 Normal free:56524kB min:175560kB low:189064kB high:202568kB reserved_highatomic:2048KB active_anon:6324kB inactive_anon:11542712kB active_file:1764kB inactive_file:14948kB unevictable:12072kB writepending:172kB present:12832000kB managed:12519996kB mlocked:12072kB pagetables:85336kB bounce:0kB free_pcp:15652kB local_pcp:1312kB free_cma:0kB
    [ +0.000002] lowmem_reserve[]: 0 0 0 0 0
    [ +0.000002] Node 0 DMA: 2*4kB (U) 1*8kB (U) 1*16kB (U) 2*32kB (U) 1*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB
    [ +0.000007] Node 0 DMA32: 35*4kB (UME) 38*8kB (UME) 41*16kB (UE) 183*32kB (UE) 190*64kB (UME) 134*128kB (UE) 82*256kB (UE) 2*512kB (M) 1*1024kB (M) 0*2048kB 0*4096kB = 59308kB
    [ +0.000007] Node 0 Normal: 798*4kB (UMEH) 1746*8kB (UMEH) 1489*16kB (UEH) 503*32kB (UEH) 4*64kB (UH) 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 57336kB
    [ +0.000008] 129421 total pagecache pages
    [ +0.000000] 0 pages in swap cache
    [ +0.000001] Swap cache stats: add 0, delete 0, find 0/0
    [ +0.000000] Free swap = 0kB
    [ +0.000001] Total swap = 0kB
    [ +0.000000] 4029777 pages RAM
    [ +0.000001] 0 pages HighMem/MovableOnly
    [ +0.000000] 78164 pages reserved
    [ +0.000000] 0 pages hwpoisoned
    [ +0.000271] Out of memory: Killed process 4917 (Web Content) total-vm:39932412kB, anon-rss:8848848kB, file-rss:0kB, shmem-rss:62100kB, UID:1000 pgtables:31076kB oom_score_adj:0
    [Jul 6 15:49] kauditd_printk_skb: 14 callbacks suppressed
    [Jul 6 15:51] kauditd_printk_skb: 8 callbacks suppressed
    [Jul 7 06:15] psi: inconsistent task state! task=610341:gnome-control-c cpu=8 psi_flags=0 clear=1 set=0

  • ==> kernel <==

  • 09:33:41 up 5 days, 22:43, 0 users, load average: 1.38, 0.65, 0.48
    Linux minikube 5.10.0-7-amd64 #1 SMP Debian 5.10.40-1 (2021-05-28) x86_64 x86_64 x86_64 GNU/Linux
    PRETTY_NAME="Ubuntu 20.04.2 LTS"

  • ==> kubelet <==

  • -- Logs begin at Thu 2021-07-08 09:23:44 UTC, end at Thu 2021-07-08 09:33:41 UTC. --
    Jul 08 09:33:38 minikube kubelet[33857]: I0708 09:33:38.344583 33857 kubelet_network_linux.go:56] "Initialized protocol iptables rules." protocol=IPv4
    Jul 08 09:33:38 minikube kubelet[33857]: I0708 09:33:38.354613 33857 kubelet_network_linux.go:56] "Initialized protocol iptables rules." protocol=IPv6
    Jul 08 09:33:38 minikube kubelet[33857]: I0708 09:33:38.354644 33857 status_manager.go:157] "Starting to sync pod status with apiserver"
    Jul 08 09:33:38 minikube kubelet[33857]: I0708 09:33:38.354668 33857 kubelet.go:1846] "Starting kubelet main sync loop"
    Jul 08 09:33:38 minikube kubelet[33857]: E0708 09:33:38.354719 33857 kubelet.go:1870] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
    Jul 08 09:33:38 minikube kubelet[33857]: E0708 09:33:38.355255 33857 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
    Jul 08 09:33:38 minikube kubelet[33857]: I0708 09:33:38.356369 33857 client.go:86] parsed scheme: "unix"
    Jul 08 09:33:38 minikube kubelet[33857]: I0708 09:33:38.356388 33857 client.go:86] scheme "unix" not registered, fallback to default scheme
    Jul 08 09:33:38 minikube kubelet[33857]: I0708 09:33:38.356414 33857 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }
    Jul 08 09:33:38 minikube kubelet[33857]: I0708 09:33:38.356425 33857 clientconn.go:948] ClientConn switching balancer to "pick_first"
    Jul 08 09:33:38 minikube kubelet[33857]: I0708 09:33:38.403071 33857 cpu_manager.go:199] "Starting CPU manager" policy="none"
    Jul 08 09:33:38 minikube kubelet[33857]: I0708 09:33:38.403088 33857 cpu_manager.go:200] "Reconciling" reconcilePeriod="10s"
    Jul 08 09:33:38 minikube kubelet[33857]: I0708 09:33:38.403104 33857 state_mem.go:36] "Initialized new in-memory state store"
    Jul 08 09:33:38 minikube kubelet[33857]: I0708 09:33:38.403196 33857 state_mem.go:88] "Updated default CPUSet" cpuSet=""
    Jul 08 09:33:38 minikube kubelet[33857]: I0708 09:33:38.403207 33857 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
    Jul 08 09:33:38 minikube kubelet[33857]: I0708 09:33:38.403214 33857 policy_none.go:44] "None policy: Start"
    Jul 08 09:33:38 minikube kubelet[33857]: W0708 09:33:38.403234 33857 fs.go:588] stat failed on /dev/mapper/nvme0n1p3_crypt with error: no such file or directory
    Jul 08 09:33:38 minikube kubelet[33857]: E0708 09:33:38.403252 33857 kubelet.go:1384] "Failed to start ContainerManager" err="failed to get rootfs info: failed to get device for dir "/var/lib/kubelet": could not find device with major: 0, minor: 27 in cached partitions map"
    Jul 08 09:33:38 minikube systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
    Jul 08 09:33:38 minikube systemd[1]: kubelet.service: Failed with result 'exit-code'.
    Jul 08 09:33:39 minikube systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 120.
    Jul 08 09:33:39 minikube systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
    Jul 08 09:33:39 minikube systemd[1]: Started kubelet: The Kubernetes Node Agent.
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.391576 34059 server.go:440] "Kubelet version" kubeletVersion="v1.21.2"
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.392133 34059 server.go:851] "Client rotation is on, will bootstrap in background"
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.396240 34059 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.398294 34059 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
    Jul 08 09:33:39 minikube kubelet[34059]: W0708 09:33:39.398449 34059 manager.go:159] Cannot detect current cgroup on cgroup v2
    Jul 08 09:33:39 minikube kubelet[34059]: W0708 09:33:39.469453 34059 fs.go:214] stat failed on /dev/mapper/nvme0n1p3_crypt with error: no such file or directory
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.516652 34059 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.516881 34059 container_manager_linux.go:278] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.516936 34059 container_manager_linux.go:283] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.516951 34059 topology_manager.go:120] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.516961 34059 container_manager_linux.go:314] "Initializing Topology Manager" policy="none" scope="container"
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.516969 34059 container_manager_linux.go:319] "Creating device plugin manager" devicePluginEnabled=true
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.517027 34059 kubelet.go:307] "Using dockershim is deprecated, please consider using a full-fledged CRI implementation"
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.517054 34059 client.go:78] "Connecting to docker on the dockerEndpoint" endpoint="unix:///var/run/docker.sock"
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.517066 34059 client.go:97] "Start docker client with request timeout" timeout="2m0s"
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.524936 34059 docker_service.go:566] "Hairpin mode is set but kubenet is not enabled, falling back to HairpinVeth" hairpinMode=promiscuous-bridge
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.524978 34059 docker_service.go:242] "Hairpin mode is set" hairpinMode=hairpin-veth
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.531424 34059 docker_service.go:257] "Docker cri networking managed by the network plugin" networkPluginName="kubernetes.io/no-op"
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.538875 34059 docker_service.go:264] "Docker Info" dockerInfo=&{ID:MFPM:VZAV:XQDQ:VCBD:DLJJ:VCKN:KBP5:4XYA:VUKJ:CNKR:Y6E4:7XEY Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:10 Driver:btrfs DriverStatus:[[Build Version Btrfs v5.4.1 ] [Library Version 102]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:25 OomKillDisable:false NGoroutines:35 SystemTime:2021-07-08T09:33:39.531799523Z LoggingDriver:json-file CgroupDriver:systemd CgroupVersion:2 NEventsListener:0 KernelVersion:5.10.0-7-amd64 OperatingSystem:Ubuntu 20.04.2 LTS OSVersion:20.04 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc00062e150 NCPU:12 MemTotal:16185806848 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy:control-plane.minikube.internal Name:minikube Labels:[provider=docker] ExperimentalBuild:false ServerVersion:20.10.7 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:} io.containerd.runtime.v1.linux:{Path:runc Args:[] Shim:} runc:{Path:runc Args:[] Shim:}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster: Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d71fcd7d8303cbf684402823e425e9dd2e99285d Expected:d71fcd7d8303cbf684402823e425e9dd2e99285d} RuncCommit:{ID:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7 Expected:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7} InitCommit:{ID:de40ad0 
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: DefaultAddressPools:[] Warnings:[]}
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.538897 34059 docker_service.go:277] "Setting cgroupDriver" cgroupDriver="systemd"
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.548983 34059 remote_runtime.go:62] parsed scheme: ""
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.549002 34059 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.549028 34059 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0 }] }
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.549038 34059 clientconn.go:948] ClientConn switching balancer to "pick_first"
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.549069 34059 remote_image.go:50] parsed scheme: ""
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.549075 34059 remote_image.go:50] scheme "" not registered, fallback to default scheme
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.549083 34059 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0 }] }
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.549089 34059 clientconn.go:948] ClientConn switching balancer to "pick_first"
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.549152 34059 kubelet.go:404] "Attempting to sync node with API server"
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.549165 34059 kubelet.go:272] "Adding static pod path" path="/etc/kubernetes/manifests"
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.549186 34059 kubelet.go:283] "Adding apiserver pod source"
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.549198 34059 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
    Jul 08 09:33:39 minikube kubelet[34059]: E0708 09:33:39.549796 34059 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)minikube&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
    Jul 08 09:33:39 minikube kubelet[34059]: E0708 09:33:39.549830 34059 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.557065 34059 kuberuntime_manager.go:222] "Container runtime initialized" containerRuntime="docker" version="20.10.7" apiVersion="1.41.0"
    Jul 08 09:33:40 minikube kubelet[34059]: E0708 09:33:40.643187 34059 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
    Jul 08 09:33:40 minikube kubelet[34059]: E0708 09:33:40.782927 34059 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)minikube&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused

@medyagh
Member

medyagh commented Jul 8, 2021

What version of minikube do you use?

@rafacouto
Author

rafacouto commented Jul 8, 2021

@medyagh

┌── caligari@d14 ~ 
└─$ minikube version
minikube version: v1.22.0
commit: a03fbcf166e6f74ef224d4a63be4277d017bb62e

┌── caligari@d14 ~ 
└─$ docker version
Client: Docker Engine - Community
 Version:           20.10.7
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        f0df350
 Built:             Wed Jun  2 11:57:01 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:   
  Version:          20.10.7
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       b0f5bc3
  Built:            Wed Jun  2 11:55:12 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.6
  GitCommit:        d71fcd7d8303cbf684402823e425e9dd2e99285d
 runc:
  Version:          1.0.0-rc95
  GitCommit:        b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

┌── caligari@d14 ~ 
└─$ cat /etc/os-release 
PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
NAME="Debian GNU/Linux"
VERSION_ID="11"
VERSION="11 (bullseye)"
VERSION_CODENAME=bullseye
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"

@spowelljr spowelljr added the kind/support Categorizes issue or PR as a support question. label Jul 13, 2021
@medyagh
Member

medyagh commented Sep 1, 2021

@rafacouto
Would you please attach (drag) the logs.txt file to this issue, which can be generated by running this command:

$ minikube logs --file=logs.txt

@medyagh
Member

medyagh commented Sep 1, 2021

/triage needs-information

@k8s-ci-robot k8s-ci-robot added the triage/needs-information Indicates an issue needs more information in order to work on it. label Sep 1, 2021
@rafacouto
Author

@medyagh Here it goes: minikube_debian11.log. This log was generated just now, so some details may differ from the opening report (e.g. Debian 11 is now the current stable).

Tip: the issue could be related to the docker driver, because minikube runs OK with the kvm2 driver.

@spowelljr spowelljr removed the triage/needs-information Indicates an issue needs more information in order to work on it. label Sep 22, 2021
@spowelljr
Member

Hi @rafacouto, in your logs I see the following output:

❗  docker is currently using the btrfs storage driver, consider switching to overlay2 for better performance

btrfs currently does not work nicely with minikube and we strongly recommend switching to overlay2.

Here's some documentation related to switching your storage driver in Docker.

https://docs.docker.com/storage/storagedriver/overlayfs-driver/
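As a concrete sketch of the switch: you can check the active driver with `docker info --format '{{.Driver}}'`, and overlay2 is selected in /etc/docker/daemon.json (note that overlay2 needs an ext4/xfs-backed /var/lib/docker on most kernels, so on a pure-btrfs host the Docker data root may need to move to a non-btrfs volume first, and existing images/containers become invisible after the driver change). The helper below (`make_daemon_json` is an illustrative name, not a Docker tool) just prints the minimal config:

```shell
# Hypothetical helper: print the daemon.json that selects overlay2.
# Write it to /etc/docker/daemon.json, then: sudo systemctl restart docker
make_daemon_json() {
  printf '{\n  "storage-driver": "overlay2"\n}\n'
}
make_daemon_json
```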

@spowelljr spowelljr added the co/docker-driver Issues related to kubernetes in container label Sep 22, 2021
@rafacouto
Author

I confirm that the btrfs storage driver is being used, because the filesystem is btrfs. I don't have any problems with the rest of the Docker ecosystem. However, it seems minikube requires overlay2 with docker for the image preload optimizations.

If so, this should be flagged during the start process with a check and a fatal message. Currently, only this warning message is shown:

❗  docker is currently using the btrfs storage driver, consider switching to overlay2 for better performance

@spowelljr
Member

@rafacouto You are correct, we should probably exit the program if the user is using btrfs. What I'll probably do is add the fatal error, but also add a --force-btrfs flag in case the user really wants to try btrfs, and advertise the flag in the fatal message.
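The proposed check could look roughly like the sketch below (function and message are ours, not minikube's actual code): fail fast on btrfs unless the user explicitly opts in.

```shell
# Sketch of the proposed start-time check (names are illustrative).
# $1: detected Docker storage driver, $2: "true" if --force-btrfs was passed
check_storage_driver() {
  driver="$1"; force="$2"
  if [ "$driver" = "btrfs" ] && [ "$force" != "true" ]; then
    echo 'Exiting: the btrfs storage driver is unsupported; pass --force-btrfs to override' >&2
    return 1
  fi
  return 0
}
```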

@rafacouto
Author

That flag would be especially useful if it also disabled the image preload optimizations, in order to allow minikube with docker + btrfs 🚀

@sharifelgamal
Collaborator

You can always pass --preload=false to minikube start to skip any preload download and usage.
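Putting the thread's suggestions together, a sketch of picking start flags based on the host's storage driver (the `extra_flags` helper is ours, not a minikube feature):

```shell
# Emit extra `minikube start` flags for btrfs hosts, nothing otherwise.
# Combines the workarounds suggested in this thread.
extra_flags() {
  driver="$1"
  if [ "$driver" = "btrfs" ]; then
    echo '--preload=false --feature-gates=LocalStorageCapacityIsolation=false'
  else
    echo ''
  fi
}

# Usage (assumes a running Docker daemon):
#   minikube start $(extra_flags "$(docker info --format '{{.Driver}}')")
```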

@medyagh
Member

medyagh commented Dec 1, 2021

@rafacouto I am curious, does adding this option fix the issue?

minikube delete --all
minikube start --feature-gates="LocalStorageCapacityIsolation=false"

there is a PR #12990 that could fix this

@rafacouto
Author

@medyagh Yes! It does 👍

┌── caligari@d14
└─$ minikube delete --all
🔥  Successfully deleted all profiles
┌── caligari@d14 
└─$ minikube start --feature-gates="LocalStorageCapacityIsolation=false"
😄  minikube v1.24.0 on Debian 11.1
✨  Automatically selected the docker driver. Other choices: kvm2, ssh
❗  docker is currently using the btrfs storage driver, consider switching to overlay2 for better performance
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
    > gcr.io/k8s-minikube/kicbase: 355.78 MiB / 355.78 MiB  100.00% 12.95 MiB p
🔥  Creating docker container (CPUs=2, Memory=3800MB) ...
    > kubectl.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s
    > kubeadm: 43.71 MiB / 43.71 MiB [-------------] 100.00% 28.35 MiB p/s 1.7s
    > kubelet: 115.57 MiB / 115.57 MiB [-----------] 100.00% 20.18 MiB p/s 5.9s
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

@spowelljr
Member

spowelljr commented Dec 2, 2021

Hi @rafacouto, could you try using this binary from the PR @medyagh mentioned above, running minikube delete --all and then minikube start without the --feature-gates="LocalStorageCapacityIsolation=false" flag? It should auto-apply the feature gate, thanks!

https://storage.googleapis.com/minikube-builds/12990/minikube-linux-amd64

@rafacouto
Author

@spowelljr It fails at "Preparing Kubernetes v1.22.3 on Docker 20.10.8"; see console output.

The --feature-gates flag indicated by @medyagh really helps.

@spowelljr
Member

@rafacouto Just double-checking: you used this binary https://storage.googleapis.com/minikube-builds/12990/minikube-linux-amd64 and not your existing minikube binary?

@rafacouto
Author

@spowelljr You are right: I double-checked, and that build works. Good job!

(screenshot attached)

@spowelljr
Member

@rafacouto Perfect, thanks for testing!
