
Following the "Using Multi-Node Clusters" tutorial, not able to access the worker node #11669

Closed
charleech opened this issue Jun 16, 2021 · 4 comments
Labels
co/multinode: Issues related to multinode clusters
kind/support: Categorizes issue or PR as a support question.

Comments

@charleech

  • minikube version
minikube version: v1.21.0
commit: 76d74191d82c47883dc7e1319ef7cebd3e00ee11
  • minikube start --nodes 2 --docker-opt bip=172.18.0.1/16
* minikube v1.21.0 on Centos 7.7.1908
  - MINIKUBE_HOME=/opt/minikube.home
* Automatically selected the docker driver
* Starting control plane node minikube in cluster minikube
* Pulling base image ...
* Downloading Kubernetes v1.20.7 preload ...
    > preloaded-images-k8s-v11-v1...: 492.20 MiB / 492.20 MiB  100.00% 29.16 Mi
    > gcr.io/k8s-minikube/kicbase...: 359.09 MiB / 359.09 MiB  100.00% 10.08 Mi
* Creating docker container (CPUs=2, Memory=4000MB) ...
* Preparing Kubernetes v1.20.7 on Docker 20.10.7 ...
  - opt bip=172.18.0.1/16
  - Generating certificates and keys ...
  - Booting up control plane ...
  - Configuring RBAC rules ...
* Configuring CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass

* Starting node minikube-m02 in cluster minikube
* Pulling base image ...
* Creating docker container (CPUs=2, Memory=4000MB) ...
* Found network options:
  - NO_PROXY=192.168.49.2
* Preparing Kubernetes v1.20.7 on Docker 20.10.7 ...
  - opt bip=172.18.0.1/16
  - env NO_PROXY=192.168.49.2
* Verifying Kubernetes components...
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
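For reference, the `--docker-opt bip=172.18.0.1/16` flag is passed through to the dockerd running inside each node container. If needed, that can be confirmed per node (a rough sketch, not from the original report; it assumes `minikube ssh -n <node>` selects the node, as in recent minikube releases):

# The --bip flag should appear in the dockerd unit on the worker node ...
minikube ssh -n minikube-m02 "systemctl cat docker.service | grep -- --bip"
# ... and docker0 inside that node should carry an address from 172.18.0.0/16.
minikube ssh -n minikube-m02 "ip -4 addr show docker0"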

I followed the example from https://minikube.sigs.k8s.io/docs/tutorials/multi_node/, creating hello-svc.yaml and hello-deployment.yaml as follows:

#
# hello-svc.yaml
#
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  type: NodePort
  selector:
    app: hello
  ports:
    - protocol: TCP
      nodePort: 31000
      port: 80
      targetPort: http
#
# hello-deployment.yaml
#
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 100%
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions: [{ key: app, operator: In, values: [hello] }]
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: hello-from
        image: pbitty/hello-from:latest
        ports:
          - name: http
            containerPort: 80
      terminationGracePeriodSeconds: 1
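The podAntiAffinity above is meant to force the two replicas onto different nodes, so the NodePort should be answered by a pod on either node. A quick way to confirm both pods are actually registered behind the Service (just a sketch, not part of the tutorial itself):

# Both pod IPs (one per node) should show up as endpoints of the "hello" Service.
kubectl get endpoints hello
kubectl get pods -l app=hello -o wide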

Steps to reproduce the issue:

  1. minikube delete --all --purge
  2. minikube start --nodes 2 --docker-opt bip=172.18.0.1/16
  3. kubectl get nodes
NAME           STATUS   ROLES                  AGE    VERSION
minikube       Ready    control-plane,master   113s   v1.20.7
minikube-m02   Ready    <none>                 72s    v1.20.7
  4. minikube status
minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

minikube-m02
type: Worker
host: Running
kubelet: Running
  5. kubectl apply -f hello-deployment.yaml
deployment.apps/hello created
  6. kubectl rollout status deployment/hello
Waiting for deployment "hello" rollout to finish: 0 of 2 updated replicas are available...
Waiting for deployment "hello" rollout to finish: 1 of 2 updated replicas are available...
deployment "hello" successfully rolled out
  7. kubectl apply -f hello-svc.yaml
service/hello created
  8. kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE   READINESS GATES
hello-695c67cf9c-64b5z   1/1     Running   0          34s   10.244.1.2   minikube-m02   <none>           <none>
hello-695c67cf9c-q7dfx   1/1     Running   0          34s   10.244.0.3   minikube       <none>           <none>
  9. minikube service list
|-------------|------------|--------------|---------------------------|
|  NAMESPACE  |    NAME    | TARGET PORT  |            URL            |
|-------------|------------|--------------|---------------------------|
| default     | hello      |           80 | http://192.168.49.2:31000 |
| default     | kubernetes | No node port |
| kube-system | kube-dns   | No node port |
|-------------|------------|--------------|---------------------------|
  10. for i in `seq 1 10`; do curl http://192.168.49.2:31000; echo; done
Hello from hello-695c67cf9c-q7dfx (10.244.0.3) # <---- only reaches the pod on node `minikube`
Hello from hello-695c67cf9c-q7dfx (10.244.0.3)
curl: (28) Failed to connect to 192.168.49.2 port 31000: Connection timed out # <---- fails when routed to the pod on node `minikube-m02`

Hello from hello-695c67cf9c-q7dfx (10.244.0.3)
Hello from hello-695c67cf9c-q7dfx (10.244.0.3)
curl: (28) Failed to connect to 192.168.49.2 port 31000: Connection timed out # <---- fails when routed to the pod on node `minikube-m02`

curl: (28) Failed to connect to 192.168.49.2 port 31000: Connection timed out # <---- fails when routed to the pod on node `minikube-m02`

Hello from hello-695c67cf9c-q7dfx (10.244.0.3)
Hello from hello-695c67cf9c-q7dfx (10.244.0.3)
Hello from hello-695c67cf9c-q7dfx (10.244.0.3)
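To narrow down whether only the path through the worker node fails, one possible check (a sketch; the node addresses come from kubectl and nothing here is from the original report) is to hit the NodePort on every node's InternalIP directly and to confirm the per-node networking pods are healthy:

# Try the NodePort on each node's InternalIP; with the default externalTrafficPolicy (Cluster),
# every node is expected to forward to either pod.
for node_ip in $(kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'); do
  echo "--- ${node_ip}:31000"
  curl -s -m 3 "http://${node_ip}:31000" || echo "connection failed / timed out"
  echo
done

# Confirm kube-proxy and the kindnet CNI pods are Running on both nodes.
kubectl get pods -n kube-system -o wide | grep -E 'kube-proxy|kindnet'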

Full output of minikube logs command:
N/A

Full output of failed command:
N/A

@charleech
Author

Full output of minikube logs command:

  • ==> Audit <==

|---------|--------------------------------|----------|-------|---------|-------------------------------|-------------------------------|
| Command |              Args              | Profile  | User  | Version |          Start Time           |           End Time            |
|---------|--------------------------------|----------|-------|---------|-------------------------------|-------------------------------|
| start   | --nodes 2 --docker-opt         | minikube | admin | v1.21.0 | Wed, 16 Jun 2021 13:32:41 +07 | Wed, 16 Jun 2021 13:34:37 +07 |
|         | bip=172.18.0.1/16              |          |       |         |                               |                               |
| service | list                           | minikube | admin | v1.21.0 | Wed, 16 Jun 2021 13:36:35 +07 | Wed, 16 Jun 2021 13:36:35 +07 |
|---------|--------------------------------|----------|-------|---------|-------------------------------|-------------------------------|
  • ==> Last Start <==
  • Log file created at: 2021/06/16 13:32:41
    Running on machine: aspg3
    Binary: Built with gc go1.16.4 for linux/amd64
    Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    I0616 13:32:41.404479 3489548 out.go:291] Setting OutFile to fd 1 ...
    I0616 13:32:41.405213 3489548 out.go:338] TERM=xterm,COLORTERM=, which probably does not support color
    I0616 13:32:41.405218 3489548 out.go:304] Setting ErrFile to fd 2...
    I0616 13:32:41.405223 3489548 out.go:338] TERM=xterm,COLORTERM=, which probably does not support color
    I0616 13:32:41.405435 3489548 root.go:316] Updating PATH: /opt/minikube.home/.minikube/bin
    W0616 13:32:41.405650 3489548 root.go:291] Error reading config file at /opt/minikube.home/.minikube/config/config.json: open /opt/minikube.home/.minikube/config/config.json: no such file or directory
    I0616 13:32:41.406369 3489548 out.go:298] Setting JSON to false
    I0616 13:32:41.413419 3489548 start.go:111] hostinfo: {"hostname":"aspg3","uptime":5376274,"bootTime":1618448887,"procs":704,"os":"linux","platform":"centos","platformFamily":"rhel","platformVersion":"7.7.1908","kernelVersion":"3.10.0-1062.el7.x86_64","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"ec7c5b1c-91bd-43ae-8085-2fdb0a16ec8e"}
    I0616 13:32:41.413554 3489548 start.go:121] virtualization:
    I0616 13:32:41.415067 3489548 out.go:170] * minikube v1.21.0 on Centos 7.7.1908
    I0616 13:32:41.415651 3489548 out.go:170] - MINIKUBE_HOME=/opt/minikube.home
    I0616 13:32:41.415301 3489548 notify.go:169] Checking for updates...
    I0616 13:32:41.415939 3489548 driver.go:335] Setting default libvirt URI to qemu:///system
    I0616 13:32:41.415979 3489548 global.go:111] Querying for installed drivers using PATH=/opt/minikube.home/.minikube/bin:/opt/git/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/admin/.local/bin:/home/admin/bin
    I0616 13:32:41.483034 3489548 docker.go:132] docker version: linux-20.10.7
    I0616 13:32:41.483137 3489548 cli_runner.go:115] Run: docker system info --format "{{json .}}"
    I0616 13:32:41.604572 3489548 info.go:261] docker info: {ID:ZRET:RDUN:CVNM:U6BD:2FX4:IUCO:QK3Y:KHKH:OEMW:BAGZ:MYLF:ZHFO Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:30 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:37 SystemTime:2021-06-16 13:32:41.52516184 +0700 +07 LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:3.10.0-1062.el7.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:32 MemTotal:33562738688 GenericResources: DockerRootDir:/opt/docker-lib-image HTTPProxy: HTTPSProxy: NoProxy: Name:aspg3 Labels:[] ExperimentalBuild:false ServerVersion:20.10.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d71fcd7d8303cbf684402823e425e9dd2e99285d Expected:d71fcd7d8303cbf684402823e425e9dd2e99285d} RuncCommit:{ID:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7 Expected:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:}}
    I0616 13:32:41.604962 3489548 docker.go:244] overlay module found
    I0616 13:32:41.604973 3489548 global.go:119] docker default: true priority: 9, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
    I0616 13:32:41.605023 3489548 global.go:119] kvm2 default: true priority: 8, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "virsh": executable file not found in $PATH Reason: Fix:Install libvirt Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/}
    I0616 13:32:41.622688 3489548 global.go:119] none default: false priority: 4, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:running the 'none' driver as a regular user requires sudo permissions Reason: Fix: Doc:}
    I0616 13:32:41.622748 3489548 global.go:119] podman default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "podman": executable file not found in $PATH Reason: Fix:Install Podman Doc:https://minikube.sigs.k8s.io/docs/drivers/podman/}
    I0616 13:32:41.622765 3489548 global.go:119] ssh default: false priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
    I0616 13:32:41.622818 3489548 global.go:119] virtualbox default: true priority: 6, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:unable to find VBoxManage in $PATH Reason: Fix:Install VirtualBox Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/virtualbox/}
    I0616 13:32:41.622845 3489548 global.go:119] vmware default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "docker-machine-driver-vmware": executable file not found in $PATH Reason: Fix:Install docker-machine-driver-vmware Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/}
    I0616 13:32:41.622867 3489548 driver.go:270] not recommending "ssh" due to default: false
    I0616 13:32:41.622883 3489548 driver.go:305] Picked: docker
    I0616 13:32:41.622909 3489548 driver.go:306] Alternatives: [ssh]
    I0616 13:32:41.622913 3489548 driver.go:307] Rejects: [kvm2 none podman virtualbox vmware]
    I0616 13:32:41.623941 3489548 out.go:170] * Automatically selected the docker driver
    I0616 13:32:41.624329 3489548 start.go:279] selected driver: docker
    I0616 13:32:41.624339 3489548 start.go:752] validating driver "docker" against
    I0616 13:32:41.624357 3489548 start.go:763] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
    I0616 13:32:41.624430 3489548 cli_runner.go:115] Run: docker system info --format "{{json .}}"
    I0616 13:32:41.745513 3489548 info.go:261] docker info: {ID:ZRET:RDUN:CVNM:U6BD:2FX4:IUCO:QK3Y:KHKH:OEMW:BAGZ:MYLF:ZHFO Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:30 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:37 SystemTime:2021-06-16 13:32:41.669781651 +0700 +07 LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:3.10.0-1062.el7.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:32 MemTotal:33562738688 GenericResources: DockerRootDir:/opt/docker-lib-image HTTPProxy: HTTPSProxy: NoProxy: Name:aspg3 Labels:[] ExperimentalBuild:false ServerVersion:20.10.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d71fcd7d8303cbf684402823e425e9dd2e99285d Expected:d71fcd7d8303cbf684402823e425e9dd2e99285d} RuncCommit:{ID:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7 Expected:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:}}
    I0616 13:32:41.745656 3489548 start_flags.go:259] no existing cluster config was found, will generate one from the flags
    I0616 13:32:41.749575 3489548 start_flags.go:311] Using suggested 4000MB memory alloc based on sys=32007MB, container=32007MB
    I0616 13:32:41.749708 3489548 start_flags.go:638] Wait components to verify : map[apiserver:true system_pods:true]
    I0616 13:32:41.749736 3489548 cni.go:93] Creating CNI manager for ""
    I0616 13:32:41.749743 3489548 cni.go:154] 0 nodes found, recommending kindnet
    I0616 13:32:41.749769 3489548 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
    I0616 13:32:41.749774 3489548 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
    I0616 13:32:41.749780 3489548 start_flags.go:268] Found "CNI" CNI - setting NetworkPlugin=cni
    I0616 13:32:41.749792 3489548 start_flags.go:273] config:
    {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[bip=172.18.0.1/16] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.7 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true}
    I0616 13:32:41.750966 3489548 out.go:170] * Starting control plane node minikube in cluster minikube
    I0616 13:32:41.751011 3489548 cache.go:115] Beginning downloading kic base image for docker with docker
    I0616 13:32:41.751372 3489548 out.go:170] * Pulling base image ...
    I0616 13:32:41.751415 3489548 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime docker
    I0616 13:32:41.751512 3489548 cache.go:134] Downloading gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 to local cache
    I0616 13:32:41.752422 3489548 image.go:58] Checking for gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 in local cache directory
    I0616 13:32:41.752572 3489548 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 to local cache
    I0616 13:32:41.994031 3489548 preload.go:145] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4
    I0616 13:32:41.994051 3489548 cache.go:54] Caching tarball of preloaded images
    I0616 13:32:41.994228 3489548 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime docker
    I0616 13:32:41.995127 3489548 out.go:170] * Downloading Kubernetes v1.20.7 preload ...
    I0616 13:32:41.995156 3489548 preload.go:230] getting checksum for preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4 ...
    I0616 13:32:42.310476 3489548 download.go:86] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4?checksum=md5:f41702d59ddd4fa1749fa672343212b9 -> /opt/minikube.home/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4
    I0616 13:33:01.284539 3489548 preload.go:240] saving checksum for preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4 ...
    I0616 13:33:01.284677 3489548 preload.go:247] verifying checksumm of /opt/minikube.home/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4 ...
    I0616 13:33:02.657856 3489548 cache.go:57] Finished verifying existence of preloaded tar for v1.20.7 on docker
    I0616 13:33:02.658355 3489548 profile.go:148] Saving config to /opt/minikube.home/.minikube/profiles/minikube/config.json ...
    I0616 13:33:02.658386 3489548 lock.go:36] WriteFile acquiring /opt/minikube.home/.minikube/profiles/minikube/config.json: {Name:mk8185e657d9242de217d74c9490811391973ba3 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
    I0616 13:33:19.307751 3489548 cache.go:137] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 as a tarball
    I0616 13:33:19.307763 3489548 image.go:74] Checking for gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 in local docker daemon
    I0616 13:33:19.379380 3489548 image.go:78] Found gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 in local docker daemon, skipping pull
    I0616 13:33:19.379400 3489548 cache.go:146] gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 exists in daemon, skipping load
    I0616 13:33:19.379454 3489548 cache.go:202] Successfully downloaded all kic artifacts
    I0616 13:33:19.379513 3489548 start.go:313] acquiring machines lock for minikube: {Name:mk7594c7330b695a1380bd61a1b33aabb766a4e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
    I0616 13:33:19.379682 3489548 start.go:317] acquired machines lock for "minikube" in 150.219µs
    I0616 13:33:19.379737 3489548 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[bip=172.18.0.1/16] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.7 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true} &{Name: IP: Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}
    I0616 13:33:19.379825 3489548 start.go:126] createHost starting for "" (driver="docker")
    I0616 13:33:19.457096 3489548 out.go:197] * Creating docker container (CPUs=2, Memory=4000MB) ...
    I0616 13:33:19.457506 3489548 start.go:160] libmachine.API.Create for "minikube" (driver="docker")
    I0616 13:33:19.457555 3489548 client.go:168] LocalClient.Create starting
    I0616 13:33:19.457763 3489548 main.go:128] libmachine: Creating CA: /opt/minikube.home/.minikube/certs/ca.pem
    I0616 13:33:19.584567 3489548 main.go:128] libmachine: Creating client certificate: /opt/minikube.home/.minikube/certs/cert.pem
    I0616 13:33:19.760127 3489548 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
    W0616 13:33:19.809080 3489548 cli_runner.go:162] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
    I0616 13:33:19.809175 3489548 network_create.go:255] running [docker network inspect minikube] to gather additional debugging logs...
    I0616 13:33:19.809198 3489548 cli_runner.go:115] Run: docker network inspect minikube
    W0616 13:33:19.856509 3489548 cli_runner.go:162] docker network inspect minikube returned with exit code 1
    I0616 13:33:19.856541 3489548 network_create.go:258] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
    stdout:
    []

stderr:
Error: No such network: minikube
I0616 13:33:19.856556 3489548 network_create.go:260] output of [docker network inspect minikube]: -- stdout --
[]

-- /stdout --
** stderr **
Error: No such network: minikube

** /stderr **
I0616 13:33:19.856607 3489548 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0616 13:33:19.906289 3489548 network.go:263] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000b9a528] misses:0}
I0616 13:33:19.906368 3489548 network.go:210] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0616 13:33:19.906390 3489548 network_create.go:106] attempt to create docker network minikube 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0616 13:33:19.906450 3489548 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true minikube
I0616 13:33:20.155847 3489548 network_create.go:90] docker network minikube 192.168.49.0/24 created
I0616 13:33:20.155875 3489548 kic.go:106] calculated static IP "192.168.49.2" for the "minikube" container
I0616 13:33:20.155950 3489548 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
I0616 13:33:20.204770 3489548 cli_runner.go:115] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0616 13:33:20.264702 3489548 oci.go:102] Successfully created a docker volume minikube
I0616 13:33:20.264769 3489548 cli_runner.go:115] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 -d /var/lib
I0616 13:33:21.828888 3489548 cli_runner.go:168] Completed: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 -d /var/lib: (1.564043834s)
I0616 13:33:21.828930 3489548 oci.go:106] Successfully prepared a docker volume minikube
I0616 13:33:21.829049 3489548 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime docker
I0616 13:33:21.829076 3489548 kic.go:179] Starting extracting preloaded images to volume ...
W0616 13:33:21.829434 3489548 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0616 13:33:21.829448 3489548 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
I0616 13:33:21.829930 3489548 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /opt/minikube.home/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 -I lz4 -xf /preloaded.tar -C /extractDir
I0616 13:33:21.829993 3489548 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
I0616 13:33:21.957280 3489548 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45
I0616 13:33:22.743967 3489548 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Running}}
I0616 13:33:22.795280 3489548 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0616 13:33:22.843473 3489548 cli_runner.go:115] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables
I0616 13:33:22.958013 3489548 oci.go:278] the created container "minikube" has a running status.
I0616 13:33:22.958035 3489548 kic.go:210] Creating ssh key for kic: /opt/minikube.home/.minikube/machines/minikube/id_rsa...
I0616 13:33:23.166997 3489548 kic_runner.go:188] docker (temp): /opt/minikube.home/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0616 13:33:23.433634 3489548 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0616 13:33:23.481862 3489548 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0616 13:33:23.481881 3489548 kic_runner.go:115] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0616 13:33:36.761416 3489548 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /opt/minikube.home/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 -I lz4 -xf /preloaded.tar -C /extractDir: (14.931450017s)
I0616 13:33:36.761448 3489548 kic.go:188] duration metric: took 14.932357 seconds to extract preloaded images to volume
I0616 13:33:36.762141 3489548 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0616 13:33:36.814488 3489548 machine.go:88] provisioning docker machine ...
I0616 13:33:36.814532 3489548 ubuntu.go:169] provisioning hostname "minikube"
I0616 13:33:36.814588 3489548 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0616 13:33:36.864931 3489548 main.go:128] libmachine: Using SSH client type: native
I0616 13:33:36.865476 3489548 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802f80] 0x802f40 [] 0s} 127.0.0.1 49532 }
I0616 13:33:36.865487 3489548 main.go:128] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0616 13:33:37.003957 3489548 main.go:128] libmachine: SSH cmd err, output: : minikube

I0616 13:33:37.004230 3489548 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0616 13:33:37.053506 3489548 main.go:128] libmachine: Using SSH client type: native
I0616 13:33:37.053682 3489548 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802f80] 0x802f40 [] 0s} 127.0.0.1 49532 }
I0616 13:33:37.053697 3489548 main.go:128] libmachine: About to run SSH command:

            if ! grep -xq '.*\sminikube' /etc/hosts; then
                    if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                            sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
                    else
                            echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
                    fi
            fi

I0616 13:33:37.177897 3489548 main.go:128] libmachine: SSH cmd err, output: :
I0616 13:33:37.177926 3489548 ubuntu.go:175] set auth options {CertDir:/opt/minikube.home/.minikube CaCertPath:/opt/minikube.home/.minikube/certs/ca.pem CaPrivateKeyPath:/opt/minikube.home/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/opt/minikube.home/.minikube/machines/server.pem ServerKeyPath:/opt/minikube.home/.minikube/machines/server-key.pem ClientKeyPath:/opt/minikube.home/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/opt/minikube.home/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/opt/minikube.home/.minikube}
I0616 13:33:37.177943 3489548 ubuntu.go:177] setting up certificates
I0616 13:33:37.177952 3489548 provision.go:83] configureAuth start
I0616 13:33:37.178011 3489548 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0616 13:33:37.225668 3489548 provision.go:137] copyHostCerts
I0616 13:33:37.226272 3489548 exec_runner.go:152] cp: /opt/minikube.home/.minikube/certs/key.pem --> /opt/minikube.home/.minikube/key.pem (1679 bytes)
I0616 13:33:37.226447 3489548 exec_runner.go:152] cp: /opt/minikube.home/.minikube/certs/ca.pem --> /opt/minikube.home/.minikube/ca.pem (1074 bytes)
I0616 13:33:37.226517 3489548 exec_runner.go:152] cp: /opt/minikube.home/.minikube/certs/cert.pem --> /opt/minikube.home/.minikube/cert.pem (1119 bytes)
I0616 13:33:37.226577 3489548 provision.go:111] generating server cert: /opt/minikube.home/.minikube/machines/server.pem ca-key=/opt/minikube.home/.minikube/certs/ca.pem private-key=/opt/minikube.home/.minikube/certs/ca-key.pem org=admin.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0616 13:33:37.399349 3489548 provision.go:171] copyRemoteCerts
I0616 13:33:37.399412 3489548 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0616 13:33:37.399453 3489548 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0616 13:33:37.448515 3489548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49532 SSHKeyPath:/opt/minikube.home/.minikube/machines/minikube/id_rsa Username:docker}
I0616 13:33:37.539493 3489548 ssh_runner.go:316] scp /opt/minikube.home/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1074 bytes)
I0616 13:33:37.562265 3489548 ssh_runner.go:316] scp /opt/minikube.home/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
I0616 13:33:37.583857 3489548 ssh_runner.go:316] scp /opt/minikube.home/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0616 13:33:37.605245 3489548 provision.go:86] duration metric: configureAuth took 427.274886ms
I0616 13:33:37.605263 3489548 ubuntu.go:193] setting minikube options for container-runtime
I0616 13:33:37.605488 3489548 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0616 13:33:37.655164 3489548 main.go:128] libmachine: Using SSH client type: native
I0616 13:33:37.655375 3489548 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802f80] 0x802f40 [] 0s} 127.0.0.1 49532 }
I0616 13:33:37.655385 3489548 main.go:128] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0616 13:33:37.780226 3489548 main.go:128] libmachine: SSH cmd err, output: : overlay

I0616 13:33:37.780244 3489548 ubuntu.go:71] root file system type: overlay
I0616 13:33:37.780460 3489548 provision.go:308] Updating docker unit: /lib/systemd/system/docker.service ...
I0616 13:33:37.780517 3489548 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0616 13:33:37.830047 3489548 main.go:128] libmachine: Using SSH client type: native
I0616 13:33:37.830257 3489548 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802f80] 0x802f40 [] 0s} 127.0.0.1 49532 }
I0616 13:33:37.830339 3489548 main.go:128] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.

ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 --bip=172.18.0.1/16
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.

LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.

TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0616 13:33:37.967107 3489548 main.go:128] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.

ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 --bip=172.18.0.1/16
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.

LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.

TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0616 13:33:37.967178 3489548 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0616 13:33:38.015734 3489548 main.go:128] libmachine: Using SSH client type: native
I0616 13:33:38.015925 3489548 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802f80] 0x802f40 [] 0s} 127.0.0.1 49532 }
I0616 13:33:38.015942 3489548 main.go:128] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0616 13:33:39.014476 3489548 main.go:128] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2021-06-02 11:54:50.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2021-06-16 06:33:37.964403287 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
+BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60

[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always

-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure

-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 --bip=172.18.0.1/16
+ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.

@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity

-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers

Delegate=yes

# kill only the docker process, not all processes in the cgroup

KillMode=process
-OOMScoreAdjust=-500

[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker

I0616 13:33:39.014506 3489548 machine.go:91] provisioned docker machine in 2.200002749s
I0616 13:33:39.014519 3489548 client.go:171] LocalClient.Create took 19.556959082s
I0616 13:33:39.014539 3489548 start.go:168] duration metric: libmachine.API.Create for "minikube" took 19.557035623s
I0616 13:33:39.014552 3489548 start.go:267] post-start starting for "minikube" (driver="docker")
I0616 13:33:39.014556 3489548 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0616 13:33:39.014623 3489548 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0616 13:33:39.014663 3489548 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0616 13:33:39.066078 3489548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49532 SSHKeyPath:/opt/minikube.home/.minikube/machines/minikube/id_rsa Username:docker}
I0616 13:33:39.156620 3489548 ssh_runner.go:149] Run: cat /etc/os-release
I0616 13:33:39.160319 3489548 main.go:128] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0616 13:33:39.160335 3489548 main.go:128] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0616 13:33:39.160343 3489548 main.go:128] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0616 13:33:39.160362 3489548 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0616 13:33:39.160380 3489548 filesync.go:126] Scanning /opt/minikube.home/.minikube/addons for local assets ...
I0616 13:33:39.160441 3489548 filesync.go:126] Scanning /opt/minikube.home/.minikube/files for local assets ...
I0616 13:33:39.160465 3489548 start.go:270] post-start completed in 145.908161ms
I0616 13:33:39.160980 3489548 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0616 13:33:39.208082 3489548 profile.go:148] Saving config to /opt/minikube.home/.minikube/profiles/minikube/config.json ...
I0616 13:33:39.209055 3489548 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0616 13:33:39.209096 3489548 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0616 13:33:39.258968 3489548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49532 SSHKeyPath:/opt/minikube.home/.minikube/machines/minikube/id_rsa Username:docker}
I0616 13:33:39.345089 3489548 start.go:129] duration metric: createHost completed in 19.96524946s
I0616 13:33:39.345106 3489548 start.go:80] releasing machines lock for "minikube", held for 19.965417146s
I0616 13:33:39.345196 3489548 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0616 13:33:39.393992 3489548 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
I0616 13:33:39.394050 3489548 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0616 13:33:39.394152 3489548 ssh_runner.go:149] Run: systemctl --version
I0616 13:33:39.394197 3489548 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0616 13:33:39.443096 3489548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49532 SSHKeyPath:/opt/minikube.home/.minikube/machines/minikube/id_rsa Username:docker}
I0616 13:33:39.443253 3489548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49532 SSHKeyPath:/opt/minikube.home/.minikube/machines/minikube/id_rsa Username:docker}
I0616 13:33:39.770428 3489548 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
I0616 13:33:39.784527 3489548 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0616 13:33:39.795972 3489548 cruntime.go:225] skipping containerd shutdown because we are bound to it
I0616 13:33:39.796089 3489548 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
I0616 13:33:39.807424 3489548 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0616 13:33:39.823832 3489548 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
I0616 13:33:39.899061 3489548 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
I0616 13:33:39.984288 3489548 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0616 13:33:39.996824 3489548 ssh_runner.go:149] Run: sudo systemctl daemon-reload
I0616 13:33:40.072419 3489548 ssh_runner.go:149] Run: sudo systemctl start docker
I0616 13:33:40.084694 3489548 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
I0616 13:33:40.151423 3489548 out.go:197] * Preparing Kubernetes v1.20.7 on Docker 20.10.7 ...
I0616 13:33:40.151970 3489548 out.go:170] - opt bip=172.18.0.1/16
I0616 13:33:40.152072 3489548 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0616 13:33:40.202619 3489548 ssh_runner.go:149] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0616 13:33:40.208097 3489548 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0616 13:33:40.221329 3489548 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime docker
I0616 13:33:40.221396 3489548 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0616 13:33:40.274704 3489548 docker.go:535] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.20.7
k8s.gcr.io/kube-controller-manager:v1.20.7
k8s.gcr.io/kube-apiserver:v1.20.7
k8s.gcr.io/kube-scheduler:v1.20.7
gcr.io/k8s-minikube/storage-provisioner:v5
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2

-- /stdout --
I0616 13:33:40.274717 3489548 docker.go:466] Images already preloaded, skipping extraction
I0616 13:33:40.274781 3489548 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0616 13:33:40.327927 3489548 docker.go:535] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.20.7
k8s.gcr.io/kube-apiserver:v1.20.7
k8s.gcr.io/kube-controller-manager:v1.20.7
k8s.gcr.io/kube-scheduler:v1.20.7
gcr.io/k8s-minikube/storage-provisioner:v5
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2

-- /stdout --
I0616 13:33:40.327942 3489548 cache_images.go:74] Images are preloaded, skipping loading
I0616 13:33:40.328008 3489548 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
I0616 13:33:40.441253 3489548 cni.go:93] Creating CNI manager for ""
I0616 13:33:40.441272 3489548 cni.go:154] 1 nodes found, recommending kindnet
I0616 13:33:40.441288 3489548 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0616 13:33:40.441332 3489548 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.20.7 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0616 13:33:40.441481 3489548 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.7
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
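
(For reference, the rendered config above is what minikube later copies to /var/tmp/minikube/kubeadm.yaml on the control-plane node; if needed it can be read back for comparison, assuming the default profile name:)

minikube ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml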

I0616 13:33:40.441632 3489548 kubeadm.go:909] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.7/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2

[Install]
config:
{KubernetesVersion:v1.20.7 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0616 13:33:40.441687 3489548 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.7
I0616 13:33:40.451131 3489548 binaries.go:44] Found k8s binaries, skipping transfer
I0616 13:33:40.451182 3489548 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0616 13:33:40.460188 3489548 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (386 bytes)
I0616 13:33:40.476336 3489548 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0616 13:33:40.492389 3489548 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1867 bytes)
I0616 13:33:40.508543 3489548 ssh_runner.go:149] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0616 13:33:40.512207 3489548 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0616 13:33:40.523996 3489548 certs.go:52] Setting up /opt/minikube.home/.minikube/profiles/minikube for IP: 192.168.49.2
I0616 13:33:40.524041 3489548 certs.go:183] generating minikubeCA CA: /opt/minikube.home/.minikube/ca.key
I0616 13:33:40.783423 3489548 crypto.go:157] Writing cert to /opt/minikube.home/.minikube/ca.crt ...
I0616 13:33:40.783442 3489548 lock.go:36] WriteFile acquiring /opt/minikube.home/.minikube/ca.crt: {Name:mkdb7524228cad6b8d8eedbaff727ad4a211eabe Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0616 13:33:40.784646 3489548 crypto.go:165] Writing key to /opt/minikube.home/.minikube/ca.key ...
I0616 13:33:40.784655 3489548 lock.go:36] WriteFile acquiring /opt/minikube.home/.minikube/ca.key: {Name:mk10a06730793d15868f29aaf11855b4cec88a40 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0616 13:33:40.784781 3489548 certs.go:183] generating proxyClientCA CA: /opt/minikube.home/.minikube/proxy-client-ca.key
I0616 13:33:40.857745 3489548 crypto.go:157] Writing cert to /opt/minikube.home/.minikube/proxy-client-ca.crt ...
I0616 13:33:40.857765 3489548 lock.go:36] WriteFile acquiring /opt/minikube.home/.minikube/proxy-client-ca.crt: {Name:mkaa3721f0e2e3f4d5548f1666e1c19549c36b0e Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0616 13:33:40.858054 3489548 crypto.go:165] Writing key to /opt/minikube.home/.minikube/proxy-client-ca.key ...
I0616 13:33:40.858061 3489548 lock.go:36] WriteFile acquiring /opt/minikube.home/.minikube/proxy-client-ca.key: {Name:mk866131df199268dc42da148ca20d15f585d1d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0616 13:33:40.858222 3489548 certs.go:294] generating minikube-user signed cert: /opt/minikube.home/.minikube/profiles/minikube/client.key
I0616 13:33:40.858231 3489548 crypto.go:69] Generating cert /opt/minikube.home/.minikube/profiles/minikube/client.crt with IP's: []
I0616 13:33:40.987343 3489548 crypto.go:157] Writing cert to /opt/minikube.home/.minikube/profiles/minikube/client.crt ...
I0616 13:33:40.987363 3489548 lock.go:36] WriteFile acquiring /opt/minikube.home/.minikube/profiles/minikube/client.crt: {Name:mk36beef73c60d867b27ae0a9264fd2395ca1c91 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0616 13:33:40.987654 3489548 crypto.go:165] Writing key to /opt/minikube.home/.minikube/profiles/minikube/client.key ...
I0616 13:33:40.987661 3489548 lock.go:36] WriteFile acquiring /opt/minikube.home/.minikube/profiles/minikube/client.key: {Name:mkcfa2c71f86c6cbf5dd74004742d903b053ff25 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0616 13:33:40.987773 3489548 certs.go:294] generating minikube signed cert: /opt/minikube.home/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
I0616 13:33:40.987780 3489548 crypto.go:69] Generating cert /opt/minikube.home/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0616 13:33:41.100166 3489548 crypto.go:157] Writing cert to /opt/minikube.home/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ...
I0616 13:33:41.100187 3489548 lock.go:36] WriteFile acquiring /opt/minikube.home/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mk2fc285bb881de72cd80cf5132ab46f54755c14 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0616 13:33:41.100509 3489548 crypto.go:165] Writing key to /opt/minikube.home/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ...
I0616 13:33:41.100521 3489548 lock.go:36] WriteFile acquiring /opt/minikube.home/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mk71906a4e9ae0e2d3ebe5df5db958874762c8bc Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0616 13:33:41.100639 3489548 certs.go:305] copying /opt/minikube.home/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /opt/minikube.home/.minikube/profiles/minikube/apiserver.crt
I0616 13:33:41.100734 3489548 certs.go:309] copying /opt/minikube.home/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /opt/minikube.home/.minikube/profiles/minikube/apiserver.key
I0616 13:33:41.100801 3489548 certs.go:294] generating aggregator signed cert: /opt/minikube.home/.minikube/profiles/minikube/proxy-client.key
I0616 13:33:41.100806 3489548 crypto.go:69] Generating cert /opt/minikube.home/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0616 13:33:41.240413 3489548 crypto.go:157] Writing cert to /opt/minikube.home/.minikube/profiles/minikube/proxy-client.crt ...
I0616 13:33:41.240436 3489548 lock.go:36] WriteFile acquiring /opt/minikube.home/.minikube/profiles/minikube/proxy-client.crt: {Name:mkbbd75090ea667772078f4c8a343ac4da4de3ef Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0616 13:33:41.240737 3489548 crypto.go:165] Writing key to /opt/minikube.home/.minikube/profiles/minikube/proxy-client.key ...
I0616 13:33:41.240744 3489548 lock.go:36] WriteFile acquiring /opt/minikube.home/.minikube/profiles/minikube/proxy-client.key: {Name:mka4903937d4cbcddd0cd50737086e8ca0c10b14 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0616 13:33:41.240963 3489548 certs.go:369] found cert: /opt/minikube.home/.minikube/certs/opt/minikube.home/.minikube/certs/ca-key.pem (1675 bytes)
I0616 13:33:41.241003 3489548 certs.go:369] found cert: /opt/minikube.home/.minikube/certs/opt/minikube.home/.minikube/certs/ca.pem (1074 bytes)
I0616 13:33:41.241027 3489548 certs.go:369] found cert: /opt/minikube.home/.minikube/certs/opt/minikube.home/.minikube/certs/cert.pem (1119 bytes)
I0616 13:33:41.241053 3489548 certs.go:369] found cert: /opt/minikube.home/.minikube/certs/opt/minikube.home/.minikube/certs/key.pem (1679 bytes)
I0616 13:33:41.242403 3489548 ssh_runner.go:316] scp /opt/minikube.home/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0616 13:33:41.265267 3489548 ssh_runner.go:316] scp /opt/minikube.home/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0616 13:33:41.286766 3489548 ssh_runner.go:316] scp /opt/minikube.home/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0616 13:33:41.308246 3489548 ssh_runner.go:316] scp /opt/minikube.home/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0616 13:33:41.330266 3489548 ssh_runner.go:316] scp /opt/minikube.home/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0616 13:33:41.352259 3489548 ssh_runner.go:316] scp /opt/minikube.home/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0616 13:33:41.373466 3489548 ssh_runner.go:316] scp /opt/minikube.home/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0616 13:33:41.394852 3489548 ssh_runner.go:316] scp /opt/minikube.home/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0616 13:33:41.417957 3489548 ssh_runner.go:316] scp /opt/minikube.home/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0616 13:33:41.440226 3489548 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0616 13:33:41.456579 3489548 ssh_runner.go:149] Run: openssl version
I0616 13:33:41.462742 3489548 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0616 13:33:41.472652 3489548 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0616 13:33:41.476618 3489548 certs.go:410] hashing: -rw-r--r--. 1 root root 1111 Jun 16 06:33 /usr/share/ca-certificates/minikubeCA.pem
I0616 13:33:41.476650 3489548 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0616 13:33:41.482625 3489548 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0616 13:33:41.491887 3489548 kubeadm.go:390] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[bip=172.18.0.1/16] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.7 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true}
I0616 13:33:41.491984 3489548 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*(kube-system) --format={{.ID}}
I0616 13:33:41.539864 3489548 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0616 13:33:41.549404 3489548 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0616 13:33:41.558736 3489548 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
I0616 13:33:41.558786 3489548 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0616 13:33:41.567484 3489548 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0616 13:33:41.567518 3489548 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.7:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0616 13:34:04.281920 3489548 out.go:197] - Generating certificates and keys ...
I0616 13:34:04.283975 3489548 out.go:197] - Booting up control plane ...
I0616 13:34:04.285578 3489548 out.go:197] - Configuring RBAC rules ...
I0616 13:34:04.287951 3489548 cni.go:93] Creating CNI manager for ""
I0616 13:34:04.287962 3489548 cni.go:154] 1 nodes found, recommending kindnet
I0616 13:34:04.288642 3489548 out.go:170] * Configuring CNI (Container Networking Interface) ...
I0616 13:34:04.289164 3489548 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
I0616 13:34:04.333369 3489548 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.20.7/kubectl ...
I0616 13:34:04.333384 3489548 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
I0616 13:34:04.351301 3489548 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0616 13:34:04.894158 3489548 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0616 13:34:04.894252 3489548 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl label nodes minikube.k8s.io/version=v1.21.0 minikube.k8s.io/commit=76d74191d82c47883dc7e1319ef7cebd3e00ee11 minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2021_06_16T13_34_04_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0616 13:34:04.894252 3489548 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0616 13:34:04.913319 3489548 ops.go:34] apiserver oom_adj: -16
I0616 13:34:05.062971 3489548 kubeadm.go:985] duration metric: took 168.791688ms to wait for elevateKubeSystemPrivileges.
I0616 13:34:05.062994 3489548 kubeadm.go:392] StartCluster complete in 23.571114352s
I0616 13:34:05.063015 3489548 settings.go:142] acquiring lock: {Name:mk4cec9904761f5b9117b94abac2ea2e700cd7ee Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0616 13:34:05.063181 3489548 settings.go:150] Updating kubeconfig: /home/admin/.kube/config
I0616 13:34:05.065012 3489548 lock.go:36] WriteFile acquiring /home/admin/.kube/config: {Name:mke67f3fca0d83c9582d62529c22503d3ab0ffe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0616 13:34:05.584161 3489548 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "minikube" rescaled to 1
I0616 13:34:05.584214 3489548 start.go:214] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}
I0616 13:34:05.584947 3489548 out.go:170] * Verifying Kubernetes components...
I0616 13:34:05.584296 3489548 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0616 13:34:05.584381 3489548 addons.go:342] enableAddons start: toEnable=map[], additional=[]
I0616 13:34:05.585032 3489548 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0616 13:34:05.585053 3489548 addons.go:59] Setting storage-provisioner=true in profile "minikube"
I0616 13:34:05.585054 3489548 addons.go:59] Setting default-storageclass=true in profile "minikube"
I0616 13:34:05.585072 3489548 addons.go:135] Setting addon storage-provisioner=true in "minikube"
W0616 13:34:05.585078 3489548 addons.go:147] addon storage-provisioner should already be in state true
I0616 13:34:05.585080 3489548 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0616 13:34:05.585106 3489548 host.go:66] Checking if "minikube" exists ...
I0616 13:34:05.585434 3489548 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0616 13:34:05.585594 3489548 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0616 13:34:05.638353 3489548 out.go:170] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0616 13:34:05.638500 3489548 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0616 13:34:05.638509 3489548 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0616 13:34:05.638562 3489548 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0616 13:34:05.643658 3489548 addons.go:135] Setting addon default-storageclass=true in "minikube"
W0616 13:34:05.643668 3489548 addons.go:147] addon default-storageclass should already be in state true
I0616 13:34:05.643692 3489548 host.go:66] Checking if "minikube" exists ...
I0616 13:34:05.644056 3489548 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0616 13:34:05.675111 3489548 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . /etc/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0616 13:34:05.677643 3489548 api_server.go:50] waiting for apiserver process to appear ...
I0616 13:34:05.677682 3489548 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0616 13:34:05.688802 3489548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49532 SSHKeyPath:/opt/minikube.home/.minikube/machines/minikube/id_rsa Username:docker}
I0616 13:34:05.704292 3489548 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
I0616 13:34:05.704317 3489548 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0616 13:34:05.704371 3489548 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0616 13:34:05.755320 3489548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49532 SSHKeyPath:/opt/minikube.home/.minikube/machines/minikube/id_rsa Username:docker}
I0616 13:34:05.850222 3489548 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.7/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0616 13:34:05.947999 3489548 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.7/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0616 13:34:06.157335 3489548 start.go:725] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
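(The injected host.minikube.internal record can also be checked from the host against the same configmap the log rewrites, assuming kubectl is already pointed at the minikube context:)

kubectl -n kube-system get configmap coredns -o yaml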
I0616 13:34:06.157389 3489548 api_server.go:70] duration metric: took 573.143265ms to wait for apiserver process to appear ...
I0616 13:34:06.157405 3489548 api_server.go:86] waiting for apiserver healthz status ...
I0616 13:34:06.157425 3489548 api_server.go:223] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0616 13:34:06.162843 3489548 api_server.go:249] https://192.168.49.2:8443/healthz returned 200:
ok
I0616 13:34:06.163927 3489548 api_server.go:139] control plane version: v1.20.7
I0616 13:34:06.163944 3489548 api_server.go:129] duration metric: took 6.533171ms to wait for apiserver health ...
I0616 13:34:06.163960 3489548 system_pods.go:43] waiting for kube-system pods to appear ...
I0616 13:34:06.172368 3489548 system_pods.go:59] 0 kube-system pods found
I0616 13:34:06.172384 3489548 retry.go:31] will retry after 263.082536ms: only 0 pod(s) have shown up
I0616 13:34:06.366380 3489548 out.go:170] * Enabled addons: storage-provisioner, default-storageclass
I0616 13:34:06.366411 3489548 addons.go:344] enableAddons completed in 782.080378ms
I0616 13:34:06.438624 3489548 system_pods.go:59] 1 kube-system pods found
I0616 13:34:06.438655 3489548 system_pods.go:61] "storage-provisioner" [f5998c12-2265-4d3d-aa55-4ed55c73a87c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0616 13:34:06.438664 3489548 retry.go:31] will retry after 381.329545ms: only 1 pod(s) have shown up
I0616 13:34:06.824195 3489548 system_pods.go:59] 1 kube-system pods found
I0616 13:34:06.824216 3489548 system_pods.go:61] "storage-provisioner" [f5998c12-2265-4d3d-aa55-4ed55c73a87c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0616 13:34:06.824225 3489548 retry.go:31] will retry after 422.765636ms: only 1 pod(s) have shown up
I0616 13:34:07.250549 3489548 system_pods.go:59] 1 kube-system pods found
I0616 13:34:07.250571 3489548 system_pods.go:61] "storage-provisioner" [f5998c12-2265-4d3d-aa55-4ed55c73a87c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0616 13:34:07.250581 3489548 retry.go:31] will retry after 473.074753ms: only 1 pod(s) have shown up
I0616 13:34:07.727032 3489548 system_pods.go:59] 1 kube-system pods found
I0616 13:34:07.727054 3489548 system_pods.go:61] "storage-provisioner" [f5998c12-2265-4d3d-aa55-4ed55c73a87c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0616 13:34:07.727063 3489548 retry.go:31] will retry after 587.352751ms: only 1 pod(s) have shown up
I0616 13:34:08.317738 3489548 system_pods.go:59] 1 kube-system pods found
I0616 13:34:08.317763 3489548 system_pods.go:61] "storage-provisioner" [f5998c12-2265-4d3d-aa55-4ed55c73a87c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0616 13:34:08.317775 3489548 retry.go:31] will retry after 834.206799ms: only 1 pod(s) have shown up
I0616 13:34:09.155920 3489548 system_pods.go:59] 1 kube-system pods found
I0616 13:34:09.155948 3489548 system_pods.go:61] "storage-provisioner" [f5998c12-2265-4d3d-aa55-4ed55c73a87c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0616 13:34:09.155958 3489548 retry.go:31] will retry after 746.553905ms: only 1 pod(s) have shown up
I0616 13:34:09.905892 3489548 system_pods.go:59] 1 kube-system pods found
I0616 13:34:09.905916 3489548 system_pods.go:61] "storage-provisioner" [f5998c12-2265-4d3d-aa55-4ed55c73a87c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0616 13:34:09.905926 3489548 retry.go:31] will retry after 987.362415ms: only 1 pod(s) have shown up
I0616 13:34:10.897613 3489548 system_pods.go:59] 1 kube-system pods found
I0616 13:34:10.897635 3489548 system_pods.go:61] "storage-provisioner" [f5998c12-2265-4d3d-aa55-4ed55c73a87c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0616 13:34:10.897644 3489548 retry.go:31] will retry after 1.189835008s: only 1 pod(s) have shown up
I0616 13:34:12.092569 3489548 system_pods.go:59] 5 kube-system pods found
I0616 13:34:12.092591 3489548 system_pods.go:61] "etcd-minikube" [fcdbb2b2-5c42-4573-b108-e46f3ef14a13] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0616 13:34:12.092595 3489548 system_pods.go:61] "kube-apiserver-minikube" [33961b19-bb08-47e7-b40c-ebb3bc941b6e] Pending
I0616 13:34:12.092600 3489548 system_pods.go:61] "kube-controller-manager-minikube" [58f14622-a579-468f-bd3c-54e9f5639803] Pending
I0616 13:34:12.092605 3489548 system_pods.go:61] "kube-scheduler-minikube" [8d4e1ac6-d8e3-4382-889e-87995f4d559b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0616 13:34:12.092609 3489548 system_pods.go:61] "storage-provisioner" [f5998c12-2265-4d3d-aa55-4ed55c73a87c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0616 13:34:12.092621 3489548 system_pods.go:74] duration metric: took 5.928655653s to wait for pod list to return data ...
I0616 13:34:12.092634 3489548 kubeadm.go:547] duration metric: took 6.508390669s to wait for : map[apiserver:true system_pods:true] ...
I0616 13:34:12.092649 3489548 node_conditions.go:102] verifying NodePressure condition ...
I0616 13:34:12.095805 3489548 node_conditions.go:122] node storage ephemeral capacity is 91723496Ki
I0616 13:34:12.095827 3489548 node_conditions.go:123] node cpu capacity is 32
I0616 13:34:12.095844 3489548 node_conditions.go:105] duration metric: took 3.191154ms to run NodePressure ...
I0616 13:34:12.095863 3489548 start.go:219] waiting for startup goroutines ...
I0616 13:34:12.096778 3489548 out.go:170]
I0616 13:34:12.097054 3489548 profile.go:148] Saving config to /opt/minikube.home/.minikube/profiles/minikube/config.json ...
I0616 13:34:12.098055 3489548 out.go:170] * Starting node minikube-m02 in cluster minikube
I0616 13:34:12.098077 3489548 cache.go:115] Beginning downloading kic base image for docker with docker
I0616 13:34:12.098524 3489548 out.go:170] * Pulling base image ...
I0616 13:34:12.098550 3489548 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime docker
I0616 13:34:12.098560 3489548 cache.go:54] Caching tarball of preloaded images
I0616 13:34:12.098633 3489548 cache.go:134] Downloading gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 to local cache
I0616 13:34:12.099133 3489548 image.go:58] Checking for gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 in local cache directory
I0616 13:34:12.099145 3489548 image.go:61] Found gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 in local cache directory, skipping pull
I0616 13:34:12.099148 3489548 image.go:102] gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 exists in cache, skipping pull
I0616 13:34:12.099153 3489548 preload.go:166] Found /opt/minikube.home/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0616 13:34:12.099168 3489548 cache.go:137] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 as a tarball
I0616 13:34:12.099171 3489548 image.go:74] Checking for gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 in local docker daemon
I0616 13:34:12.099179 3489548 cache.go:57] Finished verifying existence of preloaded tar for v1.20.7 on docker
I0616 13:34:12.099292 3489548 profile.go:148] Saving config to /opt/minikube.home/.minikube/profiles/minikube/config.json ...
I0616 13:34:12.174170 3489548 image.go:78] Found gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 in local docker daemon, skipping pull
I0616 13:34:12.174188 3489548 cache.go:146] gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 exists in daemon, skipping load
I0616 13:34:12.174203 3489548 cache.go:202] Successfully downloaded all kic artifacts
I0616 13:34:12.174236 3489548 start.go:313] acquiring machines lock for minikube-m02: {Name:mkc5af9a4d9ae5f0bdb581ab035b1998c2038850 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0616 13:34:12.174393 3489548 start.go:317] acquired machines lock for "minikube-m02" in 139.138µs
I0616 13:34:12.174421 3489548 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[bip=172.18.0.1/16] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.7 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.20.7 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true} &{Name:m02 IP: Port:0 KubernetesVersion:v1.20.7 ControlPlane:false Worker:true}
I0616 13:34:12.174494 3489548 start.go:126] createHost starting for "m02" (driver="docker")
I0616 13:34:12.175424 3489548 out.go:197] * Creating docker container (CPUs=2, Memory=4000MB) ...
I0616 13:34:12.175519 3489548 start.go:160] libmachine.API.Create for "minikube" (driver="docker")
I0616 13:34:12.175543 3489548 client.go:168] LocalClient.Create starting
I0616 13:34:12.175767 3489548 main.go:128] libmachine: Reading certificate data from /opt/minikube.home/.minikube/certs/ca.pem
I0616 13:34:12.175796 3489548 main.go:128] libmachine: Decoding PEM data...
I0616 13:34:12.175819 3489548 main.go:128] libmachine: Parsing certificate...
I0616 13:34:12.175953 3489548 main.go:128] libmachine: Reading certificate data from /opt/minikube.home/.minikube/certs/cert.pem
I0616 13:34:12.175968 3489548 main.go:128] libmachine: Decoding PEM data...
I0616 13:34:12.175985 3489548 main.go:128] libmachine: Parsing certificate...
I0616 13:34:12.176284 3489548 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0616 13:34:12.232923 3489548 network_create.go:67] Found existing network {name:minikube subnet:0xc021f391a0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
I0616 13:34:12.232974 3489548 kic.go:106] calculated static IP "192.168.49.3" for the "minikube-m02" container
I0616 13:34:12.233033 3489548 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
I0616 13:34:12.288903 3489548 cli_runner.go:115] Run: docker volume create minikube-m02 --label name.minikube.sigs.k8s.io=minikube-m02 --label created_by.minikube.sigs.k8s.io=true
I0616 13:34:12.356964 3489548 oci.go:102] Successfully created a docker volume minikube-m02
I0616 13:34:12.357031 3489548 cli_runner.go:115] Run: docker run --rm --name minikube-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube-m02 --entrypoint /usr/bin/test -v minikube-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 -d /var/lib
I0616 13:34:13.322160 3489548 oci.go:106] Successfully prepared a docker volume minikube-m02
I0616 13:34:13.322353 3489548 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime docker
I0616 13:34:13.322379 3489548 kic.go:179] Starting extracting preloaded images to volume ...
W0616 13:34:13.322597 3489548 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0616 13:34:13.322612 3489548 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
I0616 13:34:13.323064 3489548 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /opt/minikube.home/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 -I lz4 -xf /preloaded.tar -C /extractDir
I0616 13:34:13.323190 3489548 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
I0616 13:34:13.447976 3489548 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube-m02 --name minikube-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube-m02 --network minikube --ip 192.168.49.3 --volume minikube-m02:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45
I0616 13:34:14.252691 3489548 cli_runner.go:115] Run: docker container inspect minikube-m02 --format={{.State.Running}}
I0616 13:34:14.304098 3489548 cli_runner.go:115] Run: docker container inspect minikube-m02 --format={{.State.Status}}
I0616 13:34:14.354790 3489548 cli_runner.go:115] Run: docker exec minikube-m02 stat /var/lib/dpkg/alternatives/iptables
I0616 13:34:14.458723 3489548 oci.go:278] the created container "minikube-m02" has a running status.
I0616 13:34:14.458750 3489548 kic.go:210] Creating ssh key for kic: /opt/minikube.home/.minikube/machines/minikube-m02/id_rsa...
I0616 13:34:14.668546 3489548 kic_runner.go:188] docker (temp): /opt/minikube.home/.minikube/machines/minikube-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0616 13:34:14.989555 3489548 cli_runner.go:115] Run: docker container inspect minikube-m02 --format={{.State.Status}}
I0616 13:34:15.039233 3489548 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0616 13:34:15.039248 3489548 kic_runner.go:115] Args: [docker exec --privileged minikube-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
I0616 13:34:23.887056 3489548 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /opt/minikube.home/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 -I lz4 -xf /preloaded.tar -C /extractDir: (10.563950945s)
I0616 13:34:23.887078 3489548 kic.go:188] duration metric: took 10.564695 seconds to extract preloaded images to volume
I0616 13:34:23.887483 3489548 cli_runner.go:115] Run: docker container inspect minikube-m02 --format={{.State.Status}}
I0616 13:34:23.942462 3489548 machine.go:88] provisioning docker machine ...
I0616 13:34:23.942492 3489548 ubuntu.go:169] provisioning hostname "minikube-m02"
I0616 13:34:23.942547 3489548 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m02
I0616 13:34:23.993209 3489548 main.go:128] libmachine: Using SSH client type: native
I0616 13:34:23.993420 3489548 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802f80] 0x802f40 [] 0s} 127.0.0.1 49537 }
I0616 13:34:23.993430 3489548 main.go:128] libmachine: About to run SSH command:
sudo hostname minikube-m02 && echo "minikube-m02" | sudo tee /etc/hostname
I0616 13:34:24.131738 3489548 main.go:128] libmachine: SSH cmd err, output: : minikube-m02

I0616 13:34:24.131807 3489548 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m02
I0616 13:34:24.182552 3489548 main.go:128] libmachine: Using SSH client type: native
I0616 13:34:24.182711 3489548 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802f80] 0x802f40 [] 0s} 127.0.0.1 49537 }
I0616 13:34:24.182725 3489548 main.go:128] libmachine: About to run SSH command:

            if ! grep -xq '.*\sminikube-m02' /etc/hosts; then
                    if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                            sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube-m02/g' /etc/hosts;
                    else
                            echo '127.0.1.1 minikube-m02' | sudo tee -a /etc/hosts;
                    fi
            fi

I0616 13:34:24.308591 3489548 main.go:128] libmachine: SSH cmd err, output: :
I0616 13:34:24.308610 3489548 ubuntu.go:175] set auth options {CertDir:/opt/minikube.home/.minikube CaCertPath:/opt/minikube.home/.minikube/certs/ca.pem CaPrivateKeyPath:/opt/minikube.home/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/opt/minikube.home/.minikube/machines/server.pem ServerKeyPath:/opt/minikube.home/.minikube/machines/server-key.pem ClientKeyPath:/opt/minikube.home/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/opt/minikube.home/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/opt/minikube.home/.minikube}
I0616 13:34:24.308624 3489548 ubuntu.go:177] setting up certificates
I0616 13:34:24.308632 3489548 provision.go:83] configureAuth start
I0616 13:34:24.308688 3489548 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube-m02
I0616 13:34:24.357791 3489548 provision.go:137] copyHostCerts
I0616 13:34:24.358398 3489548 exec_runner.go:145] found /opt/minikube.home/.minikube/key.pem, removing ...
I0616 13:34:24.358406 3489548 exec_runner.go:190] rm: /opt/minikube.home/.minikube/key.pem
I0616 13:34:24.358466 3489548 exec_runner.go:152] cp: /opt/minikube.home/.minikube/certs/key.pem --> /opt/minikube.home/.minikube/key.pem (1679 bytes)
I0616 13:34:24.358677 3489548 exec_runner.go:145] found /opt/minikube.home/.minikube/ca.pem, removing ...
I0616 13:34:24.358682 3489548 exec_runner.go:190] rm: /opt/minikube.home/.minikube/ca.pem
I0616 13:34:24.358707 3489548 exec_runner.go:152] cp: /opt/minikube.home/.minikube/certs/ca.pem --> /opt/minikube.home/.minikube/ca.pem (1074 bytes)
I0616 13:34:24.358795 3489548 exec_runner.go:145] found /opt/minikube.home/.minikube/cert.pem, removing ...
I0616 13:34:24.358798 3489548 exec_runner.go:190] rm: /opt/minikube.home/.minikube/cert.pem
I0616 13:34:24.358819 3489548 exec_runner.go:152] cp: /opt/minikube.home/.minikube/certs/cert.pem --> /opt/minikube.home/.minikube/cert.pem (1119 bytes)
I0616 13:34:24.358893 3489548 provision.go:111] generating server cert: /opt/minikube.home/.minikube/machines/server.pem ca-key=/opt/minikube.home/.minikube/certs/ca.pem private-key=/opt/minikube.home/.minikube/certs/ca-key.pem org=admin.minikube-m02 san=[192.168.49.3 127.0.0.1 localhost 127.0.0.1 minikube minikube-m02]
I0616 13:34:24.496366 3489548 provision.go:171] copyRemoteCerts
I0616 13:34:24.496808 3489548 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0616 13:34:24.496849 3489548 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m02
I0616 13:34:24.546907 3489548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49537 SSHKeyPath:/opt/minikube.home/.minikube/machines/minikube-m02/id_rsa Username:docker}
I0616 13:34:24.638716 3489548 ssh_runner.go:316] scp /opt/minikube.home/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1074 bytes)
I0616 13:34:24.661135 3489548 ssh_runner.go:316] scp /opt/minikube.home/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0616 13:34:24.682786 3489548 ssh_runner.go:316] scp /opt/minikube.home/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0616 13:34:24.706068 3489548 provision.go:86] duration metric: configureAuth took 397.424937ms
I0616 13:34:24.706089 3489548 ubuntu.go:193] setting minikube options for container-runtime
I0616 13:34:24.706321 3489548 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m02
I0616 13:34:24.758507 3489548 main.go:128] libmachine: Using SSH client type: native
I0616 13:34:24.758711 3489548 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802f80] 0x802f40 [] 0s} 127.0.0.1 49537 }
I0616 13:34:24.758719 3489548 main.go:128] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0616 13:34:24.884413 3489548 main.go:128] libmachine: SSH cmd err, output: : overlay

I0616 13:34:24.884430 3489548 ubuntu.go:71] root file system type: overlay
I0616 13:34:24.884636 3489548 provision.go:308] Updating docker unit: /lib/systemd/system/docker.service ...
I0616 13:34:24.884706 3489548 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m02
I0616 13:34:24.934953 3489548 main.go:128] libmachine: Using SSH client type: native
I0616 13:34:24.935142 3489548 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802f80] 0x802f40 [] 0s} 127.0.0.1 49537 }
I0616 13:34:24.935223 3489548 main.go:128] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

Environment="NO_PROXY=192.168.49.2"

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 --bip=172.18.0.1/16
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0616 13:34:25.072893 3489548 main.go:128] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

Environment=NO_PROXY=192.168.49.2

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 --bip=172.18.0.1/16
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0616 13:34:25.072975 3489548 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m02
I0616 13:34:25.123447 3489548 main.go:128] libmachine: Using SSH client type: native
I0616 13:34:25.123629 3489548 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x802f80] 0x802f40 [] 0s} 127.0.0.1 49537 }
I0616 13:34:25.123645 3489548 main.go:128] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0616 13:34:26.234512 3489548 main.go:128] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2021-06-02 11:54:50.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2021-06-16 06:34:25.070406145 +0000
@@ -1,30 +1,33 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
+BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60

[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always

-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure

-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+Environment=NO_PROXY=192.168.49.2
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 --bip=172.18.0.1/16
+ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.

@@ -32,16 +35,16 @@
LimitNPROC=infinity
LimitCORE=infinity

-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500

[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker

I0616 13:34:26.234538 3489548 machine.go:91] provisioned docker machine in 2.292062476s
I0616 13:34:26.234548 3489548 client.go:171] LocalClient.Create took 14.059000613s
I0616 13:34:26.234564 3489548 start.go:168] duration metric: libmachine.API.Create for "minikube" took 14.05904423s
I0616 13:34:26.234571 3489548 start.go:267] post-start starting for "minikube-m02" (driver="docker")
I0616 13:34:26.234575 3489548 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0616 13:34:26.234633 3489548 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0616 13:34:26.234670 3489548 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m02
I0616 13:34:26.285066 3489548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49537 SSHKeyPath:/opt/minikube.home/.minikube/machines/minikube-m02/id_rsa Username:docker}
I0616 13:34:26.375649 3489548 ssh_runner.go:149] Run: cat /etc/os-release
I0616 13:34:26.379072 3489548 main.go:128] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0616 13:34:26.379091 3489548 main.go:128] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0616 13:34:26.379100 3489548 main.go:128] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0616 13:34:26.379106 3489548 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0616 13:34:26.379115 3489548 filesync.go:126] Scanning /opt/minikube.home/.minikube/addons for local assets ...
I0616 13:34:26.379166 3489548 filesync.go:126] Scanning /opt/minikube.home/.minikube/files for local assets ...
I0616 13:34:26.379188 3489548 start.go:270] post-start completed in 144.612775ms
I0616 13:34:26.379653 3489548 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube-m02
I0616 13:34:26.427866 3489548 profile.go:148] Saving config to /opt/minikube.home/.minikube/profiles/minikube/config.json ...
I0616 13:34:26.428841 3489548 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0616 13:34:26.428886 3489548 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m02
I0616 13:34:26.480013 3489548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49537 SSHKeyPath:/opt/minikube.home/.minikube/machines/minikube-m02/id_rsa Username:docker}
I0616 13:34:26.566906 3489548 start.go:129] duration metric: createHost completed in 14.39239401s
I0616 13:34:26.566923 3489548 start.go:80] releasing machines lock for "minikube-m02", held for 14.392522395s
I0616 13:34:26.567007 3489548 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube-m02
I0616 13:34:26.617611 3489548 out.go:170] * Found network options:
I0616 13:34:26.618232 3489548 out.go:170] - NO_PROXY=192.168.49.2
W0616 13:34:26.618285 3489548 proxy.go:118] fail to check proxy env: Error ip not in block
W0616 13:34:26.618336 3489548 proxy.go:118] fail to check proxy env: Error ip not in block
I0616 13:34:26.618435 3489548 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
I0616 13:34:26.618463 3489548 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
I0616 13:34:26.618482 3489548 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m02
I0616 13:34:26.618522 3489548 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m02
I0616 13:34:26.667215 3489548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49537 SSHKeyPath:/opt/minikube.home/.minikube/machines/minikube-m02/id_rsa Username:docker}
I0616 13:34:26.670478 3489548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49537 SSHKeyPath:/opt/minikube.home/.minikube/machines/minikube-m02/id_rsa Username:docker}
I0616 13:34:26.912668 3489548 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0616 13:34:26.925192 3489548 cruntime.go:225] skipping containerd shutdown because we are bound to it
I0616 13:34:26.925248 3489548 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
I0616 13:34:26.936619 3489548 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0616 13:34:26.953957 3489548 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
I0616 13:34:27.027015 3489548 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
I0616 13:34:27.098967 3489548 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0616 13:34:27.110473 3489548 ssh_runner.go:149] Run: sudo systemctl daemon-reload
I0616 13:34:27.189382 3489548 ssh_runner.go:149] Run: sudo systemctl start docker
I0616 13:34:27.201798 3489548 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
I0616 13:34:27.310689 3489548 out.go:197] * Preparing Kubernetes v1.20.7 on Docker 20.10.7 ...
I0616 13:34:27.311393 3489548 out.go:170] - opt bip=172.18.0.1/16
I0616 13:34:27.311778 3489548 out.go:170] - env NO_PROXY=192.168.49.2
I0616 13:34:27.311853 3489548 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0616 13:34:27.362453 3489548 ssh_runner.go:149] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0616 13:34:27.366735 3489548 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0616 13:34:27.379098 3489548 certs.go:52] Setting up /opt/minikube.home/.minikube/profiles/minikube for IP: 192.168.49.3
I0616 13:34:27.379139 3489548 certs.go:179] skipping minikubeCA CA generation: /opt/minikube.home/.minikube/ca.key
I0616 13:34:27.379154 3489548 certs.go:179] skipping proxyClientCA CA generation: /opt/minikube.home/.minikube/proxy-client-ca.key
I0616 13:34:27.379476 3489548 certs.go:369] found cert: /opt/minikube.home/.minikube/certs/opt/minikube.home/.minikube/certs/ca-key.pem (1675 bytes)
I0616 13:34:27.379516 3489548 certs.go:369] found cert: /opt/minikube.home/.minikube/certs/opt/minikube.home/.minikube/certs/ca.pem (1074 bytes)
I0616 13:34:27.379540 3489548 certs.go:369] found cert: /opt/minikube.home/.minikube/certs/opt/minikube.home/.minikube/certs/cert.pem (1119 bytes)
I0616 13:34:27.379561 3489548 certs.go:369] found cert: /opt/minikube.home/.minikube/certs/opt/minikube.home/.minikube/certs/key.pem (1679 bytes)
I0616 13:34:27.380078 3489548 ssh_runner.go:316] scp /opt/minikube.home/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0616 13:34:27.402590 3489548 ssh_runner.go:316] scp /opt/minikube.home/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0616 13:34:27.424891 3489548 ssh_runner.go:316] scp /opt/minikube.home/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0616 13:34:27.447010 3489548 ssh_runner.go:316] scp /opt/minikube.home/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0616 13:34:27.468967 3489548 ssh_runner.go:316] scp /opt/minikube.home/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0616 13:34:27.491230 3489548 ssh_runner.go:149] Run: openssl version
I0616 13:34:27.498437 3489548 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0616 13:34:27.508381 3489548 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0616 13:34:27.512587 3489548 certs.go:410] hashing: -rw-r--r--. 1 root root 1111 Jun 16 06:33 /usr/share/ca-certificates/minikubeCA.pem
I0616 13:34:27.512622 3489548 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0616 13:34:27.519011 3489548 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0616 13:34:27.528375 3489548 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
I0616 13:34:27.646176 3489548 cni.go:93] Creating CNI manager for ""
I0616 13:34:27.646188 3489548 cni.go:154] 2 nodes found, recommending kindnet
I0616 13:34:27.646200 3489548 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0616 13:34:27.646215 3489548 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.3 APIServerPort:8443 KubernetesVersion:v1.20.7 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube-m02 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0616 13:34:27.646363 3489548 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.3
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube-m02"
  kubeletExtraArgs:
    node-ip: 192.168.49.3
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.7
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0

I0616 13:34:27.646444 3489548 kubeadm.go:909] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.7/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.3

[Install]
config:
{KubernetesVersion:v1.20.7 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0616 13:34:27.646503 3489548 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.7
I0616 13:34:27.655894 3489548 binaries.go:44] Found k8s binaries, skipping transfer
I0616 13:34:27.655945 3489548 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
I0616 13:34:27.665203 3489548 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (390 bytes)
I0616 13:34:27.681371 3489548 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0616 13:34:27.697169 3489548 ssh_runner.go:149] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0616 13:34:27.700851 3489548 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0616 13:34:27.713119 3489548 host.go:66] Checking if "minikube" exists ...
I0616 13:34:27.713386 3489548 start.go:229] JoinCluster: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[bip=172.18.0.1/16] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.7 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.20.7 ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true}
I0616 13:34:27.713471 3489548 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.7:$PATH kubeadm token create --print-join-command --ttl=0"
I0616 13:34:27.713512 3489548 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0616 13:34:27.762871 3489548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49532 SSHKeyPath:/opt/minikube.home/.minikube/machines/minikube/id_rsa Username:docker}
I0616 13:34:27.930864 3489548 start.go:250] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.20.7 ControlPlane:false Worker:true}
I0616 13:34:27.930894 3489548 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.7:$PATH kubeadm join control-plane.minikube.internal:8443 --token tpranw.bu8b0gdktzff13b3 --discovery-token-ca-cert-hash sha256:e8ae470cf9ad5a769e812af8f0f4b8dc55ec14b720af1516071628b5d1787c13 --ignore-preflight-errors=all --cri-socket /var/run/dockershim.sock --node-name=minikube-m02"
I0616 13:34:36.136486 3489548 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.7:$PATH kubeadm join control-plane.minikube.internal:8443 --token tpranw.bu8b0gdktzff13b3 --discovery-token-ca-cert-hash sha256:e8ae470cf9ad5a769e812af8f0f4b8dc55ec14b720af1516071628b5d1787c13 --ignore-preflight-errors=all --cri-socket /var/run/dockershim.sock --node-name=minikube-m02": (8.205571776s)
I0616 13:34:36.136510 3489548 ssh_runner.go:149] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
I0616 13:34:36.350212 3489548 start.go:231] JoinCluster complete in 8.636820109s
I0616 13:34:36.433001 3489548 cni.go:93] Creating CNI manager for ""
I0616 13:34:36.433015 3489548 cni.go:154] 2 nodes found, recommending kindnet
I0616 13:34:36.433077 3489548 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
I0616 13:34:36.438278 3489548 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.20.7/kubectl ...
I0616 13:34:36.438288 3489548 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
I0616 13:34:36.455814 3489548 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0616 13:34:36.969078 3489548 start.go:214] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.20.7 ControlPlane:false Worker:true}
I0616 13:34:36.969729 3489548 out.go:170] * Verifying Kubernetes components...
I0616 13:34:36.969793 3489548 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0616 13:34:36.985676 3489548 kubeadm.go:547] duration metric: took 16.561958ms to wait for : map[apiserver:true system_pods:true] ...
I0616 13:34:36.985694 3489548 node_conditions.go:102] verifying NodePressure condition ...
I0616 13:34:36.989052 3489548 node_conditions.go:122] node storage ephemeral capacity is 91723496Ki
I0616 13:34:36.989066 3489548 node_conditions.go:123] node cpu capacity is 32
I0616 13:34:36.989075 3489548 node_conditions.go:122] node storage ephemeral capacity is 91723496Ki
I0616 13:34:36.989079 3489548 node_conditions.go:123] node cpu capacity is 32
I0616 13:34:36.989087 3489548 node_conditions.go:105] duration metric: took 3.384928ms to run NodePressure ...
I0616 13:34:36.989095 3489548 start.go:219] waiting for startup goroutines ...
I0616 13:34:37.057534 3489548 start.go:463] kubectl: 1.21.1, cluster: 1.20.7 (minor skew: 1)
I0616 13:34:37.069214 3489548 out.go:170] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
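
Up to this point everything looks healthy: both nodes join and report Ready. The problem only shows up when hitting the NodePort. A minimal way to reproduce the access failure from the title, assuming the hello service above (nodePort 31000) and the node IPs reported in this log (192.168.49.2 control plane, 192.168.49.3 minikube-m02):

# NodePort via the control-plane node:
curl http://192.168.49.2:31000
# NodePort via the worker node (this is the access that fails):
curl http://192.168.49.3:31000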

==> Docker <==

    -- Logs begin at Wed 2021-06-16 06:33:23 UTC, end at Wed 2021-06-16 06:37:21 UTC. --
    Jun 16 06:33:23 minikube systemd[1]: Starting Docker Application Container Engine...
    Jun 16 06:33:23 minikube dockerd[222]: time="2021-06-16T06:33:23.678552835Z" level=info msg="Starting up"
    Jun 16 06:33:23 minikube dockerd[222]: time="2021-06-16T06:33:23.680477505Z" level=info msg="parsed scheme: "unix"" module=grpc
    Jun 16 06:33:23 minikube dockerd[222]: time="2021-06-16T06:33:23.680508085Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
    Jun 16 06:33:23 minikube dockerd[222]: time="2021-06-16T06:33:23.680535268Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
    Jun 16 06:33:23 minikube dockerd[222]: time="2021-06-16T06:33:23.680551635Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
    Jun 16 06:33:23 minikube dockerd[222]: time="2021-06-16T06:33:23.682217682Z" level=info msg="parsed scheme: "unix"" module=grpc
    Jun 16 06:33:23 minikube dockerd[222]: time="2021-06-16T06:33:23.682245613Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
    Jun 16 06:33:23 minikube dockerd[222]: time="2021-06-16T06:33:23.682261916Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
    Jun 16 06:33:23 minikube dockerd[222]: time="2021-06-16T06:33:23.682271244Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
    Jun 16 06:33:27 minikube dockerd[222]: time="2021-06-16T06:33:27.150738229Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
    Jun 16 06:33:29 minikube dockerd[222]: time="2021-06-16T06:33:29.572399709Z" level=info msg="Loading containers: start."
    Jun 16 06:33:36 minikube dockerd[222]: time="2021-06-16T06:33:36.326108248Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
    Jun 16 06:33:36 minikube dockerd[222]: time="2021-06-16T06:33:36.525082925Z" level=info msg="Loading containers: done."
    Jun 16 06:33:36 minikube dockerd[222]: time="2021-06-16T06:33:36.921858195Z" level=info msg="Docker daemon" commit=b0f5bc3 graphdriver(s)=overlay2 version=20.10.7
    Jun 16 06:33:36 minikube dockerd[222]: time="2021-06-16T06:33:36.922059593Z" level=info msg="Daemon has completed initialization"
    Jun 16 06:33:36 minikube systemd[1]: Started Docker Application Container Engine.
    Jun 16 06:33:36 minikube dockerd[222]: time="2021-06-16T06:33:36.944398163Z" level=info msg="API listen on /run/docker.sock"
    Jun 16 06:33:38 minikube systemd[1]: docker.service: Current command vanished from the unit file, execution of the command list won't be resumed.
    Jun 16 06:33:38 minikube systemd[1]: Stopping Docker Application Container Engine...
    Jun 16 06:33:38 minikube dockerd[222]: time="2021-06-16T06:33:38.603059814Z" level=info msg="Processing signal 'terminated'"
    Jun 16 06:33:38 minikube dockerd[222]: time="2021-06-16T06:33:38.604506360Z" level=info msg="stopping event stream following graceful shutdown" error="" module=libcontainerd namespace=moby
    Jun 16 06:33:38 minikube dockerd[222]: time="2021-06-16T06:33:38.605288102Z" level=info msg="Daemon shutdown complete"
    Jun 16 06:33:38 minikube systemd[1]: docker.service: Succeeded.
    Jun 16 06:33:38 minikube systemd[1]: Stopped Docker Application Container Engine.
    Jun 16 06:33:38 minikube systemd[1]: Starting Docker Application Container Engine...
    Jun 16 06:33:38 minikube dockerd[478]: time="2021-06-16T06:33:38.691997664Z" level=info msg="Starting up"
    Jun 16 06:33:38 minikube dockerd[478]: time="2021-06-16T06:33:38.694381084Z" level=info msg="parsed scheme: "unix"" module=grpc
    Jun 16 06:33:38 minikube dockerd[478]: time="2021-06-16T06:33:38.694410974Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
    Jun 16 06:33:38 minikube dockerd[478]: time="2021-06-16T06:33:38.694435977Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
    Jun 16 06:33:38 minikube dockerd[478]: time="2021-06-16T06:33:38.694455050Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
    Jun 16 06:33:38 minikube dockerd[478]: time="2021-06-16T06:33:38.695673803Z" level=info msg="parsed scheme: "unix"" module=grpc
    Jun 16 06:33:38 minikube dockerd[478]: time="2021-06-16T06:33:38.695695425Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
    Jun 16 06:33:38 minikube dockerd[478]: time="2021-06-16T06:33:38.695714258Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
    Jun 16 06:33:38 minikube dockerd[478]: time="2021-06-16T06:33:38.695724161Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
    Jun 16 06:33:38 minikube dockerd[478]: time="2021-06-16T06:33:38.808691939Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
    Jun 16 06:33:38 minikube dockerd[478]: time="2021-06-16T06:33:38.816903864Z" level=info msg="Loading containers: start."
    Jun 16 06:33:38 minikube dockerd[478]: time="2021-06-16T06:33:38.936227240Z" level=info msg="Loading containers: done."
    Jun 16 06:33:39 minikube dockerd[478]: time="2021-06-16T06:33:39.002102703Z" level=info msg="Docker daemon" commit=b0f5bc3 graphdriver(s)=overlay2 version=20.10.7
    Jun 16 06:33:39 minikube dockerd[478]: time="2021-06-16T06:33:39.002181554Z" level=info msg="Daemon has completed initialization"
    Jun 16 06:33:39 minikube systemd[1]: Started Docker Application Container Engine.
    Jun 16 06:33:39 minikube dockerd[478]: time="2021-06-16T06:33:39.017022015Z" level=info msg="API listen on [::]:2376"
    Jun 16 06:33:39 minikube dockerd[478]: time="2021-06-16T06:33:39.024241785Z" level=info msg="API listen on /var/run/docker.sock"
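
Note that the "Default bridge (docker0) is assigned with an IP address 172.17.0.0/16" message above comes from the first dockerd instance, before minikube rewrites the unit file and restarts the daemon at 06:33:38, so it does not by itself show whether bip=172.18.0.1/16 took effect. A quick check of what docker0 actually got on each node (a sketch, assuming the docker driver's container names and that iproute2 is present in the kicbase image):

docker exec minikube ip -4 addr show docker0
docker exec minikube-m02 ip -4 addr show docker0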

==> container status <==

    CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
    9c1dea389f50d pbitty/hello-from@sha256:815b60bcc226e5e8c43f5d97f778238cd96937e1e0b34da00881b3881cbfbd08 About a minute ago Running hello-from 0 ba706e8622474
    9eb5aa7db9161 bfe3a36ebd252 2 minutes ago Running coredns 0 bdb5df88cb094
    4c856e5c8e827 6e38f40d628db 2 minutes ago Running storage-provisioner 0 8b2781e440855
    6470e13e7f5c5 kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c 2 minutes ago Running kindnet-cni 0 3fa939d61fe5e
    8b570b95dbdc1 ff54c88b8ecfa 2 minutes ago Running kube-proxy 0 28bde7cc6ec32
    e8096459b1ecd 22d1a2072ec7b 3 minutes ago Running kube-controller-manager 0 0c4f6b6e179d3
    e39bcfc293787 0369cf4303ffd 3 minutes ago Running etcd 0 05f17721fa05b
    e19380f3aa2f2 034671b24f0f1 3 minutes ago Running kube-apiserver 0 8f6d387c09a3d
    1731ddaeda2f5 38f903b540101 3 minutes ago Running kube-scheduler 0 a3b88d6868194

==> coredns [9eb5aa7db916] <==

    .:53
    [INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
    CoreDNS-1.7.0
    linux/amd64, go1.14.4, f59c03d

==> describe nodes <==

    Name: minikube
    Roles: control-plane,master
    Labels: beta.kubernetes.io/arch=amd64
    beta.kubernetes.io/os=linux
    kubernetes.io/arch=amd64
    kubernetes.io/hostname=minikube
    kubernetes.io/os=linux
    minikube.k8s.io/commit=76d74191d82c47883dc7e1319ef7cebd3e00ee11
    minikube.k8s.io/name=minikube
    minikube.k8s.io/updated_at=2021_06_16T13_34_04_0700
    minikube.k8s.io/version=v1.21.0
    node-role.kubernetes.io/control-plane=
    node-role.kubernetes.io/master=
    Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
    node.alpha.kubernetes.io/ttl: 0
    volumes.kubernetes.io/controller-managed-attach-detach: true
    CreationTimestamp: Wed, 16 Jun 2021 06:34:01 +0000
    Taints: <none>
    Unschedulable: false
    Lease:
    HolderIdentity: minikube
    AcquireTime: <unset>
    RenewTime: Wed, 16 Jun 2021 06:37:13 +0000
    Conditions:
    Type Status LastHeartbeatTime LastTransitionTime Reason Message


    MemoryPressure False Wed, 16 Jun 2021 06:36:13 +0000 Wed, 16 Jun 2021 06:33:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
    DiskPressure False Wed, 16 Jun 2021 06:36:13 +0000 Wed, 16 Jun 2021 06:33:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
    PIDPressure False Wed, 16 Jun 2021 06:36:13 +0000 Wed, 16 Jun 2021 06:33:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
    Ready True Wed, 16 Jun 2021 06:36:13 +0000 Wed, 16 Jun 2021 06:34:43 +0000 KubeletReady kubelet is posting ready status
    Addresses:
    InternalIP: 192.168.49.2
    Hostname: minikube
    Capacity:
    cpu: 32
    ephemeral-storage: 91723496Ki
    hugepages-1Gi: 0
    hugepages-2Mi: 0
    memory: 32776112Ki
    pods: 110
    Allocatable:
    cpu: 32
    ephemeral-storage: 91723496Ki
    hugepages-1Gi: 0
    hugepages-2Mi: 0
    memory: 32776112Ki
    pods: 110
    System Info:
    Machine ID: b77ec962e3734760b1e756ffc5e83152
    System UUID: 0531346f-2988-4960-916f-9883dc59aa9b
    Boot ID: fde6f6b1-b9dc-44ab-9293-b4035ea3aedd
    Kernel Version: 3.10.0-1062.el7.x86_64
    OS Image: Ubuntu 20.04.2 LTS
    Operating System: linux
    Architecture: amd64
    Container Runtime Version: docker://20.10.7
    Kubelet Version: v1.20.7
    Kube-Proxy Version: v1.20.7
    PodCIDR: 10.244.0.0/24
    PodCIDRs: 10.244.0.0/24
    Non-terminated Pods: (9 in total)
    Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE


    default hello-695c67cf9c-q7dfx 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 91s
    kube-system coredns-74ff55c5b-6fs46 100m (0%!)(MISSING) 0 (0%!)(MISSING) 70Mi (0%!)(MISSING) 170Mi (0%!)(MISSING) 2m57s
    kube-system etcd-minikube 100m (0%!)(MISSING) 0 (0%!)(MISSING) 100Mi (0%!)(MISSING) 0 (0%!)(MISSING) 3m10s
    kube-system kindnet-d9g4h 100m (0%!)(MISSING) 100m (0%!)(MISSING) 50Mi (0%!)(MISSING) 50Mi (0%!)(MISSING) 2m57s
    kube-system kube-apiserver-minikube 250m (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 3m10s
    kube-system kube-controller-manager-minikube 200m (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 3m10s
    kube-system kube-proxy-vwqrw 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 2m57s
    kube-system kube-scheduler-minikube 100m (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 3m10s
    kube-system storage-provisioner 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 3m15s
    Allocated resources:
    (Total limits may be over 100 percent, i.e., overcommitted.)
    Resource Requests Limits


    cpu 850m (2%!)(MISSING) 100m (0%!)(MISSING)
    memory 220Mi (0%!)(MISSING) 220Mi (0%!)(MISSING)
    ephemeral-storage 100Mi (0%!)(MISSING) 0 (0%!)(MISSING)
    hugepages-1Gi 0 (0%!)(MISSING) 0 (0%!)(MISSING)
    hugepages-2Mi 0 (0%!)(MISSING) 0 (0%!)(MISSING)
    Events:
    Type Reason Age From Message


    Normal NodeHasSufficientMemory 3m29s (x5 over 3m29s) kubelet Node minikube status is now: NodeHasSufficientMemory
    Normal NodeHasNoDiskPressure 3m29s (x4 over 3m29s) kubelet Node minikube status is now: NodeHasNoDiskPressure
    Normal NodeHasSufficientPID 3m29s (x4 over 3m29s) kubelet Node minikube status is now: NodeHasSufficientPID
    Normal Starting 3m11s kubelet Starting kubelet.
    Normal NodeHasSufficientMemory 3m10s kubelet Node minikube status is now: NodeHasSufficientMemory
    Normal NodeHasNoDiskPressure 3m10s kubelet Node minikube status is now: NodeHasNoDiskPressure
    Normal NodeHasSufficientPID 3m10s kubelet Node minikube status is now: NodeHasSufficientPID
    Normal NodeAllocatableEnforced 3m10s kubelet Updated Node Allocatable limit across pods
    Normal Starting 2m56s kube-proxy Starting kube-proxy.
    Normal NodeReady 2m38s kubelet Node minikube status is now: NodeReady

Name: minikube-m02
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=minikube-m02
kubernetes.io/os=linux
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 16 Jun 2021 06:34:35 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: minikube-m02
AcquireTime: <unset>
RenewTime: Wed, 16 Jun 2021 06:37:15 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message


MemoryPressure False Wed, 16 Jun 2021 06:36:06 +0000 Wed, 16 Jun 2021 06:34:35 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 16 Jun 2021 06:36:06 +0000 Wed, 16 Jun 2021 06:34:35 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 16 Jun 2021 06:36:06 +0000 Wed, 16 Jun 2021 06:34:35 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 16 Jun 2021 06:36:06 +0000 Wed, 16 Jun 2021 06:34:55 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.3
Hostname: minikube-m02
Capacity:
cpu: 32
ephemeral-storage: 91723496Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32776112Ki
pods: 110
Allocatable:
cpu: 32
ephemeral-storage: 91723496Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32776112Ki
pods: 110
System Info:
Machine ID: b77ec962e3734760b1e756ffc5e83152
System UUID: 27195c5a-8004-4fce-af9e-09e2014635f9
Boot ID: fde6f6b1-b9dc-44ab-9293-b4035ea3aedd
Kernel Version: 3.10.0-1062.el7.x86_64
OS Image: Ubuntu 20.04.2 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.7
Kubelet Version: v1.20.7
Kube-Proxy Version: v1.20.7
PodCIDR: 10.244.1.0/24
PodCIDRs: 10.244.1.0/24
Non-terminated Pods: (3 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE


default hello-695c67cf9c-64b5z 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 91s
kube-system kindnet-pr2zj 100m (0%!)(MISSING) 100m (0%!)(MISSING) 50Mi (0%!)(MISSING) 50Mi (0%!)(MISSING) 2m46s
kube-system kube-proxy-8vvw7 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 2m46s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits


cpu 100m (0%!)(MISSING) 100m (0%!)(MISSING)
memory 50Mi (0%!)(MISSING) 50Mi (0%!)(MISSING)
ephemeral-storage 0 (0%!)(MISSING) 0 (0%!)(MISSING)
hugepages-1Gi 0 (0%!)(MISSING) 0 (0%!)(MISSING)
hugepages-2Mi 0 (0%!)(MISSING) 0 (0%!)(MISSING)
Events:
Type Reason Age From Message


Normal Starting 2m46s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 2m46s kubelet Node minikube-m02 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m46s kubelet Node minikube-m02 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m46s kubelet Node minikube-m02 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 2m46s kubelet Updated Node Allocatable limit across pods
Normal Starting 2m44s kube-proxy Starting kube-proxy.
Normal NodeReady 2m26s kubelet Node minikube-m02 status is now: NodeReady
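
The two hello replicas are split across the nodes as intended (hello-695c67cf9c-q7dfx on minikube, hello-695c67cf9c-64b5z on minikube-m02). Pod placement and pod IPs can also be confirmed in one shot, assuming the app=hello label from the deployment:

kubectl get pods -l app=hello -o wide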

==> dmesg <==

    [Apr15 01:08] ACPI: RSDP 00000000000f6a10 00024 (v02 PTLTD )
    [ +0.000000] ACPI: XSDT 00000000bfeee9f5 0005C (v01 INTEL 440BX 06040000 VMW 01324272)
    [ +0.000000] ACPI: FACP 00000000bfefee73 000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
    [ +0.000000] ACPI: DSDT 00000000bfeef139 0FD3A (v01 PTLTD Custom 06040000 MSFT 03000001)
    [ +0.000000] ACPI: FACS 00000000bfefffc0 00040
    [ +0.000000] ACPI: BOOT 00000000bfeef111 00028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
    [ +0.000000] ACPI: APIC 00000000bfeeedfd 00202 (v01 PTLTD ? APIC 06040000 LTP 00000000)
    [ +0.000000] ACPI: MCFG 00000000bfeeedc1 0003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
    [ +0.000000] ACPI: SRAT 00000000bfeeeaf1 002D0 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
    [ +0.000000] ACPI: HPET 00000000bfeeeab9 00038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
    [ +0.000000] ACPI: WAET 00000000bfeeea91 00028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
    [ +0.000000] Zone ranges:
    [ +0.000000] DMA [mem 0x00001000-0x00ffffff]
    [ +0.000000] DMA32 [mem 0x01000000-0xffffffff]
    [ +0.000000] Normal [mem 0x100000000-0x83fffffff]
    [ +0.000000] Movable zone start for each node
    [ +0.000000] Early memory node ranges
    [ +0.000000] node 0: [mem 0x00001000-0x0009efff]
    [ +0.000000] node 0: [mem 0x00100000-0xbfedffff]
    [ +0.000000] node 0: [mem 0xbff00000-0xbfffffff]
    [ +0.000000] node 0: [mem 0x100000000-0x43fffffff]
    [ +0.000000] node 1: [mem 0x440000000-0x83fffffff]
    [ +0.000000] Built 2 zonelists in Zone order, mobility grouping on. Total pages: 8257385
    [ +0.000000] Policy zone: Normal
    [ +0.000000] ACPI: All ACPI Tables successfully acquired
    [ +0.051766] core: CPUID marked event: 'cpu cycles' unavailable
    [ +0.000001] core: CPUID marked event: 'instructions' unavailable
    [ +0.000001] core: CPUID marked event: 'bus cycles' unavailable
    [ +0.000001] core: CPUID marked event: 'cache references' unavailable
    [ +0.000001] core: CPUID marked event: 'cache misses' unavailable
    [ +0.000001] core: CPUID marked event: 'branch instructions' unavailable
    [ +0.000001] core: CPUID marked event: 'branch misses' unavailable
    [ +0.001645] NMI watchdog: disabled (cpu0): hardware events not enabled
    [ +0.144036] pmd_set_huge: Cannot satisfy [mem 0xf0000000-0xf0200000] with a huge-page mapping due to MTRR override.
    [ +0.025957] ACPI: Enabled 4 GPEs in block 00 to 0F
    [ +0.720446] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
    [ +0.104833] systemd[1]: [/run/systemd/generator/dev-mapper-centos\x2droot.device.d/timeout.conf:3] Unknown lvalue 'JobRunningTimeoutSec' in section 'Unit'
    [ +0.287606] sd 0:0:0:0: [sda] Assuming drive cache: write through
    [ +0.000003] sd 0:0:1:0: [sdb] Assuming drive cache: write through
    [ +3.477169] piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled!
    [Apr15 01:35] TECH PREVIEW: Overlay filesystem may not be fully supported.
    Please review provided documentation for limitations.
    [May22 04:22] sched: RT throttling activated

==> etcd [e39bcfc29378] <==

    2021-06-16 06:33:54.842158 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
    raft2021/06/16 06:33:54 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
    2021-06-16 06:33:54.842740 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
    2021-06-16 06:33:54.844301 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
    2021-06-16 06:33:54.844388 I | embed: listening for peers on 192.168.49.2:2380
    2021-06-16 06:33:54.844626 I | embed: listening for metrics on http://127.0.0.1:2381
    raft2021/06/16 06:33:55 INFO: aec36adc501070cc is starting a new election at term 1
    raft2021/06/16 06:33:55 INFO: aec36adc501070cc became candidate at term 2
    raft2021/06/16 06:33:55 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
    raft2021/06/16 06:33:55 INFO: aec36adc501070cc became leader at term 2
    raft2021/06/16 06:33:55 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
    2021-06-16 06:33:55.439847 I | etcdserver: published {Name:minikube ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
    2021-06-16 06:33:55.439873 I | embed: ready to serve client requests
    2021-06-16 06:33:55.440103 I | embed: ready to serve client requests
    2021-06-16 06:33:55.440249 I | etcdserver: setting up the initial cluster version to 3.4
    2021-06-16 06:33:55.441641 N | etcdserver/membership: set the initial cluster version to 3.4
    2021-06-16 06:33:55.441740 I | etcdserver/api: enabled capabilities for version 3.4
    2021-06-16 06:33:55.443111 I | embed: serving client requests on 192.168.49.2:2379
    2021-06-16 06:33:55.443168 I | embed: serving client requests on 127.0.0.1:2379
    2021-06-16 06:34:15.485228 W | etcdserver: request "header:<ID:8128005651780311358 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/secrets/kube-system/service-account-controller-token-5prjf" mod_revision:0 > success:<request_put:<key:"/registry/secrets/kube-system/service-account-controller-token-5prjf" value_size:2732 >> failure:<>>" with result "size:16" took too long (127.06512ms) to execute
    2021-06-16 06:34:15.734572 W | etcdserver: read-only range request "key:"/registry/serviceaccounts/kube-system/expand-controller" " with result "range_response_count:1 size:201" took too long (180.519451ms) to execute
    2021-06-16 06:34:16.104983 W | etcdserver: read-only range request "key:"/registry/serviceaccounts/kube-system/persistent-volume-binder" " with result "range_response_count:0 size:5" took too long (252.797523ms) to execute
    2021-06-16 06:34:16.586454 W | etcdserver: read-only range request "key:"/registry/serviceaccounts/kube-system/bootstrap-signer" " with result "range_response_count:0 size:5" took too long (334.621242ms) to execute
    2021-06-16 06:34:17.028118 W | etcdserver: read-only range request "key:"/registry/serviceaccounts/kube-system/certificate-controller" " with result "range_response_count:1 size:263" took too long (415.579481ms) to execute
    2021-06-16 06:34:17.337234 W | etcdserver: read-only range request "key:"/registry/serviceaccounts/kube-system/ttl-controller" " with result "range_response_count:1 size:195" took too long (290.825785ms) to execute
    2021-06-16 06:34:18.385426 W | wal: sync duration of 1.042072824s, expected less than 1s
    2021-06-16 06:34:18.457525 W | etcdserver: read-only range request "key:"/registry/serviceaccounts/kube-system/pod-garbage-collector" " with result "range_response_count:0 size:5" took too long (1.110202297s) to execute
    2021-06-16 06:34:18.457924 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-16 06:34:19.383958 W | etcdserver: read-only range request "key:"/registry/health" " with result "range_response_count:0 size:5" took too long (480.314134ms) to execute
    2021-06-16 06:34:19.384071 W | etcdserver: read-only range request "key:"/registry/serviceaccounts/kube-system/namespace-controller" " with result "range_response_count:1 size:207" took too long (857.120055ms) to execute
    2021-06-16 06:34:20.003182 W | etcdserver: read-only range request "key:"/registry/serviceaccounts/kube-system/node-controller" " with result "range_response_count:1 size:242" took too long (600.154514ms) to execute
    2021-06-16 06:34:22.315594 W | wal: sync duration of 1.326262064s, expected less than 1s
    2021-06-16 06:34:22.861335 W | etcdserver: read-only range request "key:"/registry/health" " with result "range_response_count:0 size:5" took too long (2.593396999s) to execute
    2021-06-16 06:34:22.861458 W | etcdserver: read-only range request "key:"/registry/serviceaccounts/kube-system/endpointslicemirroring-controller" " with result "range_response_count:1 size:234" took too long (2.803991213s) to execute
    2021-06-16 06:34:22.861674 W | etcdserver: request "header:<ID:8128005651780311441 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/leases/kube-node-lease/minikube" mod_revision:299 > success:<request_put:<key:"/registry/leases/kube-node-lease/minikube" value_size:536 >> failure:<request_range:<key:"/registry/leases/kube-node-lease/minikube" > >>" with result "size:16" took too long (1.872256693s) to execute
    2021-06-16 06:34:22.862467 W | etcdserver: read-only range request "key:"/registry/health" " with result "range_response_count:0 size:5" took too long (588.025493ms) to execute
    2021-06-16 06:34:22.862492 W | etcdserver: read-only range request "key:"/registry/namespaces/default" " with result "range_response_count:1 size:257" took too long (219.240153ms) to execute
    2021-06-16 06:34:23.322865 W | etcdserver: read-only range request "key:"/registry/pods/kube-system/kube-apiserver-minikube" " with result "range_response_count:1 size:7360" took too long (454.724971ms) to execute
    2021-06-16 06:34:23.322935 W | etcdserver: request "header:<ID:8128005651780311450 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/masterleases/192.168.49.2" mod_revision:312 > success:<request_put:<key:"/registry/masterleases/192.168.49.2" value_size:67 lease:8128005651780311448 >> failure:<request_range:<key:"/registry/masterleases/192.168.49.2" > >>" with result "size:16" took too long (140.963549ms) to execute
    2021-06-16 06:34:23.622821 W | etcdserver: read-only range request "key:"/registry/serviceaccounts/kube-system/resourcequota-controller" " with result "range_response_count:0 size:5" took too long (293.222953ms) to execute
    2021-06-16 06:34:23.623111 W | etcdserver: read-only range request "key:"/registry/services/endpoints/default/kubernetes" " with result "range_response_count:1 size:420" took too long (298.278631ms) to execute
    2021-06-16 06:34:27.441235 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-16 06:34:37.441403 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-16 06:34:47.441493 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-16 06:34:51.701408 W | etcdserver: read-only range request "key:"/registry/health" " with result "range_response_count:0 size:5" took too long (433.520882ms) to execute
    2021-06-16 06:34:57.441441 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-16 06:35:07.441386 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-16 06:35:17.441429 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-16 06:35:27.441234 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-16 06:35:37.441245 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-16 06:35:47.441554 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-16 06:35:57.441332 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-16 06:36:07.441328 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-16 06:36:17.441285 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-16 06:36:27.441232 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-16 06:36:37.441228 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-16 06:36:47.441557 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-16 06:36:57.441513 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-16 06:37:07.441386 I | etcdserver/api/etcdhttp: /health OK (status code 200)
    2021-06-16 06:37:17.441431 I | etcdserver/api/etcdhttp: /health OK (status code 200)

==> kernel <==

    06:37:21 up 62 days, 5:29, 0 users, load average: 0.17, 0.45, 0.38
    Linux minikube 3.10.0-1062.el7.x86_64 #1 SMP Wed Aug 7 18:08:02 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
    PRETTY_NAME="Ubuntu 20.04.2 LTS"

==> kube-apiserver [e19380f3aa2f] <==

    Trace[1763014438]: ---"Transaction committed" 1112ms (06:34:00.458)
    Trace[1763014438]: [1.113022175s] [1.113022175s] END
    I0616 06:34:18.458683 1 trace.go:205] Trace[1279573877]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/pod-garbage-collector,user-agent:kube-controller-manager/v1.20.7 (linux/amd64) kubernetes/132a687/kube-controller-manager,client:192.168.49.2 (16-Jun-2021 06:34:17.347) (total time: 1111ms):
    Trace[1279573877]: [1.111615257s] [1.111615257s] END
    I0616 06:34:18.458693 1 trace.go:205] Trace[114148878]: "Update" url:/api/v1/namespaces/kube-system/serviceaccounts/ttl-controller,user-agent:kube-controller-manager/v1.20.7 (linux/amd64) kubernetes/132a687/tokens-controller,client:192.168.49.2 (16-Jun-2021 06:34:17.345) (total time: 1113ms):
    Trace[114148878]: ---"Object stored in database" 1113ms (06:34:00.458)
    Trace[114148878]: [1.113314019s] [1.113314019s] END
    I0616 06:34:19.384709 1 trace.go:205] Trace[792977307]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/namespace-controller,user-agent:kube-controller-manager/v1.20.7 (linux/amd64) kubernetes/132a687/tokens-controller,client:192.168.49.2 (16-Jun-2021 06:34:18.526) (total time: 858ms):
    Trace[792977307]: ---"About to write a response" 858ms (06:34:00.384)
    Trace[792977307]: [858.087575ms] [858.087575ms] END
    I0616 06:34:20.003945 1 trace.go:205] Trace[572261259]: "GuaranteedUpdate etcd3" type:*core.ServiceAccount (16-Jun-2021 06:34:19.395) (total time: 608ms):
    Trace[572261259]: ---"Transaction committed" 608ms (06:34:00.003)
    Trace[572261259]: [608.460828ms] [608.460828ms] END
    I0616 06:34:20.003994 1 trace.go:205] Trace[1848935072]: "GuaranteedUpdate etcd3" type:*core.Pod (16-Jun-2021 06:34:19.391) (total time: 612ms):
    Trace[1848935072]: ---"Transaction committed" 609ms (06:34:00.003)
    Trace[1848935072]: [612.338272ms] [612.338272ms] END
    I0616 06:34:20.004095 1 trace.go:205] Trace[29263614]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/node-controller,user-agent:kube-controller-manager/v1.20.7 (linux/amd64) kubernetes/132a687/kube-controller-manager,client:192.168.49.2 (16-Jun-2021 06:34:19.402) (total time: 601ms):
    Trace[29263614]: ---"About to write a response" 601ms (06:34:00.004)
    Trace[29263614]: [601.375292ms] [601.375292ms] END
    I0616 06:34:20.004143 1 trace.go:205] Trace[953352250]: "Update" url:/api/v1/namespaces/kube-system/serviceaccounts/namespace-controller,user-agent:kube-controller-manager/v1.20.7 (linux/amd64) kubernetes/132a687/tokens-controller,client:192.168.49.2 (16-Jun-2021 06:34:19.395) (total time: 608ms):
    Trace[953352250]: ---"Object stored in database" 608ms (06:34:00.003)
    Trace[953352250]: [608.746637ms] [608.746637ms] END
    I0616 06:34:20.004386 1 trace.go:205] Trace[1973152312]: "Patch" url:/api/v1/namespaces/kube-system/pods/kube-apiserver-minikube/status,user-agent:kubelet/v1.20.7 (linux/amd64) kubernetes/132a687,client:192.168.49.2 (16-Jun-2021 06:34:19.391) (total time: 612ms):
    Trace[1973152312]: ---"Object stored in database" 609ms (06:34:00.004)
    Trace[1973152312]: [612.874444ms] [612.874444ms] END
    I0616 06:34:22.862064 1 trace.go:205] Trace[162576069]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/endpointslicemirroring-controller,user-agent:kube-controller-manager/v1.20.7 (linux/amd64) kubernetes/132a687/tokens-controller,client:192.168.49.2 (16-Jun-2021 06:34:20.057) (total time: 2804ms):
    Trace[162576069]: ---"About to write a response" 2804ms (06:34:00.861)
    Trace[162576069]: [2.80490604s] [2.80490604s] END
    I0616 06:34:22.862968 1 trace.go:205] Trace[1351053616]: "GuaranteedUpdate etcd3" type:*coordination.Lease (16-Jun-2021 06:34:20.988) (total time: 1874ms):
    Trace[1351053616]: ---"Transaction committed" 1874ms (06:34:00.862)
    Trace[1351053616]: [1.874833801s] [1.874833801s] END
    I0616 06:34:22.863119 1 trace.go:205] Trace[639082149]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube,user-agent:kubelet/v1.20.7 (linux/amd64) kubernetes/132a687,client:192.168.49.2 (16-Jun-2021 06:34:20.987) (total time: 1875ms):
    Trace[639082149]: ---"Object stored in database" 1874ms (06:34:00.862)
    Trace[639082149]: [1.875152818s] [1.875152818s] END
    I0616 06:34:22.862971 1 trace.go:205] Trace[1659717606]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.20.7 (linux/amd64) kubernetes/132a687,client:192.168.49.2 (16-Jun-2021 06:34:22.272) (total time: 590ms):
    Trace[1659717606]: ---"Object stored in database" 590ms (06:34:00.862)
    Trace[1659717606]: [590.362791ms] [590.362791ms] END
    I0616 06:34:22.863118 1 trace.go:205] Trace[1668264363]: "GuaranteedUpdate etcd3" type:*core.Node (16-Jun-2021 06:34:21.310) (total time: 1552ms):
    Trace[1668264363]: ---"Transaction committed" 1551ms (06:34:00.862)
    Trace[1668264363]: [1.552999863s] [1.552999863s] END
    I0616 06:34:22.864007 1 trace.go:205] Trace[1224418505]: "Patch" url:/api/v1/nodes/minikube/status,user-agent:kubelet/v1.20.7 (linux/amd64) kubernetes/132a687,client:192.168.49.2 (16-Jun-2021 06:34:21.309) (total time: 1554ms):
    Trace[1224418505]: ---"Object stored in database" 1552ms (06:34:00.863)
    Trace[1224418505]: [1.554040176s] [1.554040176s] END
    I0616 06:34:24.372241 1 controller.go:609] quota admission added evaluator for: replicasets.apps
    I0616 06:34:24.442529 1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
    I0616 06:34:30.018357 1 client.go:360] parsed scheme: "passthrough"
    I0616 06:34:30.018416 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
    I0616 06:34:30.018426 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
    I0616 06:35:10.811705 1 client.go:360] parsed scheme: "passthrough"
    I0616 06:35:10.811762 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
    I0616 06:35:10.811772 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
    I0616 06:35:53.264154 1 client.go:360] parsed scheme: "passthrough"
    I0616 06:35:53.264221 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
    I0616 06:35:53.264231 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
    I0616 06:36:25.478283 1 client.go:360] parsed scheme: "passthrough"
    I0616 06:36:25.478363 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
    I0616 06:36:25.478374 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
    I0616 06:36:58.911095 1 client.go:360] parsed scheme: "passthrough"
    I0616 06:36:58.911150 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
    I0616 06:36:58.911159 1 clientconn.go:948] ClientConn switching balancer to "pick_first"

==> kube-controller-manager [e8096459b1ec] <==

    I0616 06:34:24.137318 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving
    I0616 06:34:24.137345 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client
    I0616 06:34:24.137374 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown
    I0616 06:34:24.137476 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client
    I0616 06:34:24.146640 1 shared_informer.go:247] Caches are synced for TTL
    I0616 06:34:24.146654 1 shared_informer.go:247] Caches are synced for expand
    I0616 06:34:24.146680 1 shared_informer.go:247] Caches are synced for crt configmap
    E0616 06:34:24.147839 1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
    I0616 06:34:24.150247 1 shared_informer.go:247] Caches are synced for node
    I0616 06:34:24.150277 1 range_allocator.go:172] Starting range CIDR allocator
    I0616 06:34:24.150282 1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
    I0616 06:34:24.150286 1 shared_informer.go:247] Caches are synced for cidrallocator
    I0616 06:34:24.153480 1 shared_informer.go:247] Caches are synced for PV protection
    I0616 06:34:24.157368 1 range_allocator.go:373] Set node minikube PodCIDR to [10.244.0.0/24]
    I0616 06:34:24.189278 1 shared_informer.go:247] Caches are synced for service account
    I0616 06:34:24.345696 1 shared_informer.go:247] Caches are synced for endpoint_slice
    I0616 06:34:24.357254 1 shared_informer.go:247] Caches are synced for ReplicationController
    I0616 06:34:24.369147 1 shared_informer.go:247] Caches are synced for deployment
    I0616 06:34:24.374880 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 1"
    I0616 06:34:24.382269 1 shared_informer.go:247] Caches are synced for GC
    I0616 06:34:24.386715 1 shared_informer.go:247] Caches are synced for endpoint
    I0616 06:34:24.389888 1 shared_informer.go:247] Caches are synced for resource quota
    I0616 06:34:24.395141 1 shared_informer.go:247] Caches are synced for job
    I0616 06:34:24.395752 1 shared_informer.go:247] Caches are synced for ReplicaSet
    I0616 06:34:24.400068 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-6fs46"
    I0616 06:34:24.403119 1 shared_informer.go:247] Caches are synced for taint
    I0616 06:34:24.403204 1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone:
    W0616 06:34:24.403261 1 node_lifecycle_controller.go:1044] Missing timestamp for Node minikube. Assuming now as a timestamp.
    I0616 06:34:24.403324 1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
    I0616 06:34:24.403537 1 shared_informer.go:247] Caches are synced for stateful set
    I0616 06:34:24.403566 1 taint_manager.go:187] Starting NoExecuteTaintManager
    I0616 06:34:24.403729 1 event.go:291] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller"
    I0616 06:34:24.407412 1 shared_informer.go:247] Caches are synced for disruption
    I0616 06:34:24.407435 1 disruption.go:339] Sending events to api server.
    I0616 06:34:24.409863 1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-minikube" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
    I0616 06:34:24.420512 1 shared_informer.go:247] Caches are synced for attach detach
    I0616 06:34:24.422100 1 shared_informer.go:247] Caches are synced for HPA
    I0616 06:34:24.432909 1 shared_informer.go:247] Caches are synced for persistent volume
    I0616 06:34:24.433340 1 shared_informer.go:247] Caches are synced for PVC protection
    I0616 06:34:24.438532 1 shared_informer.go:247] Caches are synced for daemon sets
    I0616 06:34:24.448254 1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-d9g4h"
    I0616 06:34:24.448759 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-vwqrw"
    I0616 06:34:24.544682 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
    I0616 06:34:24.804917 1 shared_informer.go:247] Caches are synced for garbage collector
    I0616 06:34:24.804939 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
    I0616 06:34:24.844815 1 shared_informer.go:247] Caches are synced for garbage collector
    I0616 06:34:25.237442 1 request.go:655] Throttling request took 1.048351938s, request: GET:https://192.168.49.2:8443/apis/extensions/v1beta1?timeout=32s
    I0616 06:34:26.038997 1 shared_informer.go:240] Waiting for caches to sync for resource quota
    I0616 06:34:26.039051 1 shared_informer.go:247] Caches are synced for resource quota
    W0616 06:34:35.688547 1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube-m02" does not exist
    I0616 06:34:35.695720 1 range_allocator.go:373] Set node minikube-m02 PodCIDR to [10.244.1.0/24]
    I0616 06:34:35.696679 1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-pr2zj"
    I0616 06:34:35.697951 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-8vvw7"
    E0616 06:34:35.707355 1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"4f0db0ff-3fc1-41e8-96fd-f0c0c2b05894", ResourceVersion:"437", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63759422044, loc:(*time.Location)(0x6f9a440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{},"labels":{"app":"kindnet","k8s-app":"kindnet","tier":"node"},"name":"kindnet","namespace":"kube-system"},"spec":{"selector":{"matchLabels":{"app":"kindnet"}},"template":{"metadata":{"labels":{"app":"kindnet","k8s-app":"kindnet","tier":"node"}},"spec":{"containers":[{"env":[{"name":"HOST_IP","valueFrom":{"fieldRef":{"fieldPath":"status.hostIP"}}},{"name":"POD_IP","valueFrom":{"fieldRef":{"fieldPath":"status.podIP"}}},{"name":"POD_SUBNET","value":"10.244.0.0/16"}],"image":"kindest/kindnetd:v20210326-1e038dc5","name":"kindnet-cni","resources":{"limits":{"cpu":"100m","memory":"50Mi"},"requests":{"cpu":"100m","memory":"50Mi"}},"securityContext":{"capabilities":{"add":["NET_RAW","NET_ADMIN"]},"privileged":false},"volumeMounts":[{"mountPath":"/etc/cni/net.d","name":"cni-cfg"},{"mountPath":"/run/xtables.lock","name":"xtables-lock","readOnly":false},{"mountPath":"/lib/modules","name":"lib-modules","readOnly":true}]}],"hostNetwork":true,"serviceAccountName":"kindnet","tolerations":[{"effect":"NoSchedule","operator":"Exists"}],"volumes":[{"hostPath":{"path":"/etc/cni/net.mk","type":"DirectoryOrCreate"},"name":"cni-cfg"},{"hostPath":{"path":"/run/xtables.lock","type":"FileOrCreate"},"name":"xtables-lock"},{"hostPath":{"path":"/lib/modules"},"name":"lib-modules"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000886f60), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000886fc0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000887020), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000887080)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000887160), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0008871c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), 
GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000887220), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000887280), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:v20210326-1e038dc5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000887400)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000887b80)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc000303c20), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002186218), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000c53c00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, 
UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00084c4a0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc002186260)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:1, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:1, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
    I0616 06:34:39.432894 1 event.go:291] "Event occurred" object="minikube-m02" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube-m02 event: Registered Node minikube-m02 in Controller"
    W0616 06:34:39.432942 1 node_lifecycle_controller.go:1044] Missing timestamp for Node minikube-m02. Assuming now as a timestamp.
    I0616 06:34:44.433332 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
    I0616 06:35:50.631610 1 event.go:291] "Event occurred" object="default/hello" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-695c67cf9c to 2"
    I0616 06:35:50.636095 1 event.go:291] "Event occurred" object="default/hello-695c67cf9c" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-695c67cf9c-64b5z"
    I0616 06:35:50.638908 1 event.go:291] "Event occurred" object="default/hello-695c67cf9c" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-695c67cf9c-q7dfx"

  • ==> kube-proxy [8b570b95dbdc] <==

  • I0616 06:34:25.544836 1 node.go:172] Successfully retrieved node IP: 192.168.49.2
    I0616 06:34:25.544909 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.49.2), assume IPv4 operation
    W0616 06:34:25.558215 1 server_others.go:584] Unknown proxy mode "", assuming iptables proxy
    I0616 06:34:25.558352 1 server_others.go:185] Using iptables Proxier.
    I0616 06:34:25.558597 1 server.go:650] Version: v1.20.7
    I0616 06:34:25.559004 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
    I0616 06:34:25.559058 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
    I0616 06:34:25.559241 1 config.go:315] Starting service config controller
    I0616 06:34:25.559255 1 shared_informer.go:240] Waiting for caches to sync for service config
    I0616 06:34:25.559274 1 config.go:224] Starting endpoint slice config controller
    I0616 06:34:25.559278 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
    I0616 06:34:25.659443 1 shared_informer.go:247] Caches are synced for endpoint slice config
    I0616 06:34:25.659461 1 shared_informer.go:247] Caches are synced for service config

  • ==> kube-scheduler [1731ddaeda2f] <==

  • I0616 06:33:55.741107 1 serving.go:331] Generated self-signed cert in-memory
    W0616 06:34:01.234375 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
    W0616 06:34:01.234405 1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
    W0616 06:34:01.234451 1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
    W0616 06:34:01.234457 1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
    I0616 06:34:01.335343 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
    I0616 06:34:01.336005 1 tlsconfig.go:240] Starting DynamicServingCertificateController
    I0616 06:34:01.336750 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
    I0616 06:34:01.337035 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
    E0616 06:34:01.340610 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
    E0616 06:34:01.340625 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
    E0616 06:34:01.340716 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
    E0616 06:34:01.340729 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
    E0616 06:34:01.340896 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
    E0616 06:34:01.340905 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
    E0616 06:34:01.341079 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
    E0616 06:34:01.341122 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
    E0616 06:34:01.341178 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
    E0616 06:34:01.341399 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
    E0616 06:34:01.341464 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
    E0616 06:34:01.341502 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
    E0616 06:34:02.150853 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
    E0616 06:34:02.260216 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
    E0616 06:34:02.300718 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
    I0616 06:34:04.037625 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file

  • ==> kubelet <==

  • -- Logs begin at Wed 2021-06-16 06:33:23 UTC, end at Wed 2021-06-16 06:37:22 UTC. --
    Jun 16 06:34:11 minikube kubelet[2534]: I0616 06:34:11.384248 2534 state_mem.go:88] [cpumanager] updated default cpuset: ""
    Jun 16 06:34:11 minikube kubelet[2534]: I0616 06:34:11.384255 2534 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
    Jun 16 06:34:11 minikube kubelet[2534]: I0616 06:34:11.384268 2534 policy_none.go:43] [cpumanager] none policy: Start
    Jun 16 06:34:11 minikube kubelet[2534]: W0616 06:34:11.385680 2534 manager.go:594] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
    Jun 16 06:34:11 minikube kubelet[2534]: I0616 06:34:11.386001 2534 plugin_manager.go:114] Starting Kubelet Plugin Manager
    Jun 16 06:34:11 minikube kubelet[2534]: I0616 06:34:11.690911 2534 topology_manager.go:187] [topologymanager] Topology Admit Handler
    Jun 16 06:34:11 minikube kubelet[2534]: I0616 06:34:11.691120 2534 topology_manager.go:187] [topologymanager] Topology Admit Handler
    Jun 16 06:34:11 minikube kubelet[2534]: I0616 06:34:11.691171 2534 topology_manager.go:187] [topologymanager] Topology Admit Handler
    Jun 16 06:34:11 minikube kubelet[2534]: I0616 06:34:11.691212 2534 topology_manager.go:187] [topologymanager] Topology Admit Handler
    Jun 16 06:34:11 minikube kubelet[2534]: I0616 06:34:11.766184 2534 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/c7b8fa13668654de8887eea36ddd7b5b-kubeconfig") pod "kube-controller-manager-minikube" (UID: "c7b8fa13668654de8887eea36ddd7b5b")
    Jun 16 06:34:11 minikube kubelet[2534]: I0616 06:34:11.766232 2534 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/c7b8fa13668654de8887eea36ddd7b5b-usr-local-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "c7b8fa13668654de8887eea36ddd7b5b")
    Jun 16 06:34:11 minikube kubelet[2534]: I0616 06:34:11.766258 2534 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/c31fe6a5afdd142cf3450ac972274b36-etcd-data") pod "etcd-minikube" (UID: "c31fe6a5afdd142cf3450ac972274b36")
    Jun 16 06:34:11 minikube kubelet[2534]: I0616 06:34:11.766277 2534 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/01d7e312da0f9c4176daa8464d4d1a50-ca-certs") pod "kube-apiserver-minikube" (UID: "01d7e312da0f9c4176daa8464d4d1a50")
    Jun 16 06:34:11 minikube kubelet[2534]: I0616 06:34:11.766295 2534 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/c7b8fa13668654de8887eea36ddd7b5b-flexvolume-dir") pod "kube-controller-manager-minikube" (UID: "c7b8fa13668654de8887eea36ddd7b5b")
    Jun 16 06:34:11 minikube kubelet[2534]: I0616 06:34:11.766327 2534 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/c7b8fa13668654de8887eea36ddd7b5b-k8s-certs") pod "kube-controller-manager-minikube" (UID: "c7b8fa13668654de8887eea36ddd7b5b")
    Jun 16 06:34:11 minikube kubelet[2534]: I0616 06:34:11.766399 2534 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/c7b8fa13668654de8887eea36ddd7b5b-ca-certs") pod "kube-controller-manager-minikube" (UID: "c7b8fa13668654de8887eea36ddd7b5b")
    Jun 16 06:34:11 minikube kubelet[2534]: I0616 06:34:11.766537 2534 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/c7b8fa13668654de8887eea36ddd7b5b-usr-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "c7b8fa13668654de8887eea36ddd7b5b")
    Jun 16 06:34:11 minikube kubelet[2534]: I0616 06:34:11.766619 2534 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/82ed17c7f4a56a29330619386941d47e-kubeconfig") pod "kube-scheduler-minikube" (UID: "82ed17c7f4a56a29330619386941d47e")
    Jun 16 06:34:11 minikube kubelet[2534]: I0616 06:34:11.766657 2534 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/c31fe6a5afdd142cf3450ac972274b36-etcd-certs") pod "etcd-minikube" (UID: "c31fe6a5afdd142cf3450ac972274b36")
    Jun 16 06:34:11 minikube kubelet[2534]: I0616 06:34:11.766744 2534 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/01d7e312da0f9c4176daa8464d4d1a50-usr-local-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "01d7e312da0f9c4176daa8464d4d1a50")
    Jun 16 06:34:11 minikube kubelet[2534]: I0616 06:34:11.766783 2534 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/01d7e312da0f9c4176daa8464d4d1a50-usr-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "01d7e312da0f9c4176daa8464d4d1a50")
    Jun 16 06:34:11 minikube kubelet[2534]: I0616 06:34:11.766807 2534 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/01d7e312da0f9c4176daa8464d4d1a50-etc-ca-certificates") pod "kube-apiserver-minikube" (UID: "01d7e312da0f9c4176daa8464d4d1a50")
    Jun 16 06:34:11 minikube kubelet[2534]: I0616 06:34:11.766830 2534 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/01d7e312da0f9c4176daa8464d4d1a50-k8s-certs") pod "kube-apiserver-minikube" (UID: "01d7e312da0f9c4176daa8464d4d1a50")
    Jun 16 06:34:11 minikube kubelet[2534]: I0616 06:34:11.766856 2534 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/c7b8fa13668654de8887eea36ddd7b5b-etc-ca-certificates") pod "kube-controller-manager-minikube" (UID: "c7b8fa13668654de8887eea36ddd7b5b")
    Jun 16 06:34:11 minikube kubelet[2534]: I0616 06:34:11.766872 2534 reconciler.go:157] Reconciler: start to sync state
    Jun 16 06:34:14 minikube kubelet[2534]: W0616 06:34:14.578078 2534 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.mk
    Jun 16 06:34:16 minikube kubelet[2534]: E0616 06:34:16.400191 2534 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
    Jun 16 06:34:19 minikube kubelet[2534]: W0616 06:34:19.578262 2534 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.mk
    Jun 16 06:34:21 minikube kubelet[2534]: E0616 06:34:21.413601 2534 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
    Jun 16 06:34:24 minikube kubelet[2534]: I0616 06:34:24.185051 2534 kuberuntime_manager.go:1006] updating runtime config through cri with podcidr 10.244.0.0/24
    Jun 16 06:34:24 minikube kubelet[2534]: I0616 06:34:24.185373 2534 docker_service.go:358] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}
    Jun 16 06:34:24 minikube kubelet[2534]: I0616 06:34:24.185539 2534 kubelet_network.go:77] Setting Pod CIDR: -> 10.244.0.0/24
    Jun 16 06:34:24 minikube kubelet[2534]: E0616 06:34:24.234054 2534 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
    Jun 16 06:34:24 minikube kubelet[2534]: I0616 06:34:24.451878 2534 topology_manager.go:187] [topologymanager] Topology Admit Handler
    Jun 16 06:34:24 minikube kubelet[2534]: I0616 06:34:24.456503 2534 topology_manager.go:187] [topologymanager] Topology Admit Handler
    Jun 16 06:34:24 minikube kubelet[2534]: W0616 06:34:24.578488 2534 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.mk
    Jun 16 06:34:24 minikube kubelet[2534]: I0616 06:34:24.588657 2534 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-cfg" (UniqueName: "kubernetes.io/host-path/ad9083bc-6a4f-4b5c-96e8-963e2af4c369-cni-cfg") pod "kindnet-d9g4h" (UID: "ad9083bc-6a4f-4b5c-96e8-963e2af4c369")
    Jun 16 06:34:24 minikube kubelet[2534]: I0616 06:34:24.588687 2534 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/2c30af86-599c-4585-a126-19ac0f0fc982-kube-proxy") pod "kube-proxy-vwqrw" (UID: "2c30af86-599c-4585-a126-19ac0f0fc982")
    Jun 16 06:34:24 minikube kubelet[2534]: I0616 06:34:24.588709 2534 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/ad9083bc-6a4f-4b5c-96e8-963e2af4c369-lib-modules") pod "kindnet-d9g4h" (UID: "ad9083bc-6a4f-4b5c-96e8-963e2af4c369")
    Jun 16 06:34:24 minikube kubelet[2534]: I0616 06:34:24.588814 2534 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kindnet-token-jxr2s" (UniqueName: "kubernetes.io/secret/ad9083bc-6a4f-4b5c-96e8-963e2af4c369-kindnet-token-jxr2s") pod "kindnet-d9g4h" (UID: "ad9083bc-6a4f-4b5c-96e8-963e2af4c369")
    Jun 16 06:34:24 minikube kubelet[2534]: I0616 06:34:24.589110 2534 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/2c30af86-599c-4585-a126-19ac0f0fc982-lib-modules") pod "kube-proxy-vwqrw" (UID: "2c30af86-599c-4585-a126-19ac0f0fc982")
    Jun 16 06:34:24 minikube kubelet[2534]: I0616 06:34:24.589164 2534 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/ad9083bc-6a4f-4b5c-96e8-963e2af4c369-xtables-lock") pod "kindnet-d9g4h" (UID: "ad9083bc-6a4f-4b5c-96e8-963e2af4c369")
    Jun 16 06:34:24 minikube kubelet[2534]: I0616 06:34:24.589182 2534 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/2c30af86-599c-4585-a126-19ac0f0fc982-xtables-lock") pod "kube-proxy-vwqrw" (UID: "2c30af86-599c-4585-a126-19ac0f0fc982")
    Jun 16 06:34:24 minikube kubelet[2534]: I0616 06:34:24.589203 2534 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-hns7b" (UniqueName: "kubernetes.io/secret/2c30af86-599c-4585-a126-19ac0f0fc982-kube-proxy-token-hns7b") pod "kube-proxy-vwqrw" (UID: "2c30af86-599c-4585-a126-19ac0f0fc982")
    Jun 16 06:34:25 minikube kubelet[2534]: W0616 06:34:25.645197 2534 pod_container_deletor.go:79] Container "3fa939d61fe5e39ae365a352843a0be01b940227f25d81975b8182b3f5b766a5" not found in pod's containers
    Jun 16 06:34:25 minikube kubelet[2534]: W0616 06:34:25.649712 2534 pod_container_deletor.go:79] Container "28bde7cc6ec32773fb6735e86609c5fd472a42903490320614f05cdc8fbc4c9b" not found in pod's containers
    Jun 16 06:34:26 minikube kubelet[2534]: E0616 06:34:26.426832 2534 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
    Jun 16 06:34:29 minikube kubelet[2534]: W0616 06:34:29.578776 2534 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.mk
    Jun 16 06:34:31 minikube kubelet[2534]: E0616 06:34:31.441238 2534 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
    Jun 16 06:34:34 minikube kubelet[2534]: W0616 06:34:34.579041 2534 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.mk
    Jun 16 06:34:36 minikube kubelet[2534]: E0616 06:34:36.454287 2534 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
    Jun 16 06:34:47 minikube kubelet[2534]: I0616 06:34:47.969855 2534 topology_manager.go:187] [topologymanager] Topology Admit Handler
    Jun 16 06:34:47 minikube kubelet[2534]: I0616 06:34:47.970095 2534 topology_manager.go:187] [topologymanager] Topology Admit Handler
    Jun 16 06:34:48 minikube kubelet[2534]: I0616 06:34:48.158378 2534 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-vdfsw" (UniqueName: "kubernetes.io/secret/4f96a848-cabf-4b9b-ae96-65d6e9d7f518-coredns-token-vdfsw") pod "coredns-74ff55c5b-6fs46" (UID: "4f96a848-cabf-4b9b-ae96-65d6e9d7f518")
    Jun 16 06:34:48 minikube kubelet[2534]: I0616 06:34:48.158421 2534 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/f5998c12-2265-4d3d-aa55-4ed55c73a87c-tmp") pod "storage-provisioner" (UID: "f5998c12-2265-4d3d-aa55-4ed55c73a87c")
    Jun 16 06:34:48 minikube kubelet[2534]: I0616 06:34:48.158452 2534 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-7j9kx" (UniqueName: "kubernetes.io/secret/f5998c12-2265-4d3d-aa55-4ed55c73a87c-storage-provisioner-token-7j9kx") pod "storage-provisioner" (UID: "f5998c12-2265-4d3d-aa55-4ed55c73a87c")
    Jun 16 06:34:48 minikube kubelet[2534]: I0616 06:34:48.158579 2534 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4f96a848-cabf-4b9b-ae96-65d6e9d7f518-config-volume") pod "coredns-74ff55c5b-6fs46" (UID: "4f96a848-cabf-4b9b-ae96-65d6e9d7f518")
    Jun 16 06:34:48 minikube kubelet[2534]: E0616 06:34:48.795240 2534 kuberuntime_manager.go:965] PodSandboxStatus of sandbox "bdb5df88cb094e67dbdbce6bdc4ee586dd2e38bd4adead449381d52df40f1cab" for pod "coredns-74ff55c5b-6fs46_kube-system(4f96a848-cabf-4b9b-ae96-65d6e9d7f518)" error: rpc error: code = Unknown desc = Error: No such container: bdb5df88cb094e67dbdbce6bdc4ee586dd2e38bd4adead449381d52df40f1cab
    Jun 16 06:35:50 minikube kubelet[2534]: I0616 06:35:50.641860 2534 topology_manager.go:187] [topologymanager] Topology Admit Handler
    Jun 16 06:35:50 minikube kubelet[2534]: I0616 06:35:50.812439 2534 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-s9b7z" (UniqueName: "kubernetes.io/secret/98b2628e-aa2c-4573-94b5-23fdb628c6de-default-token-s9b7z") pod "hello-695c67cf9c-q7dfx" (UID: "98b2628e-aa2c-4573-94b5-23fdb628c6de")

  • ==> storage-provisioner [4c856e5c8e82] <==

  • I0616 06:34:49.600006 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
    I0616 06:34:49.638076 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
    I0616 06:34:49.638128 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
    I0616 06:34:49.657373 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
    I0616 06:34:49.657522 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_minikube_15011124-013a-47c1-906d-1ac7c14dfcb4!
    I0616 06:34:49.657505 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dbcddbcc-4c5c-4755-95b7-78130b5d4953", APIVersion:"v1", ResourceVersion:"519", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_15011124-013a-47c1-906d-1ac7c14dfcb4 became leader
    I0616 06:34:49.758492 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_minikube_15011124-013a-47c1-906d-1ac7c14dfcb4!

@spowelljr added the co/multinode and kind/support labels on Jun 16, 2021
charleech (Author) commented on Jun 21, 2021

After digging in over the weekend, I've found that the problem is the Pod network IP address range, 10.244.0.0/16, which apparently conflicts with something in my internal network. I then looked for a way to change it and found #9364 and #9838.
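
For anyone checking for the same thing, a quick way to spot such an overlap before picking a pod CIDR could be something like this (a minimal sketch: the 10.244 pattern is just the default kindnet pod subnet, and the minikube network name assumes the docker driver):

# Show host routes and flag anything inside the default pod network range.
ip route | grep '10\.244\.'

# Show the subnet of the Docker network that the minikube node containers use.
docker network inspect minikube --format '{{range .IPAM.Config}}{{.Subnet}} {{end}}'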

I've changed the startup command to the following, and everything works like a charm.

minikube start --nodes 2 \
  --docker-opt bip=172.18.0.1/16 \
  --extra-config=kubeadm.pod-network-cidr=12.244.0.0/16

for i in `seq 1 10`; do curl http://192.168.49.2:31000; echo; done

Hello from hello-695c67cf9c-5rxxf (12.244.0.2)
Hello from hello-695c67cf9c-hc4hn (12.244.1.3)
Hello from hello-695c67cf9c-5rxxf (12.244.0.2)
Hello from hello-695c67cf9c-5rxxf (12.244.0.2)
Hello from hello-695c67cf9c-hc4hn (12.244.1.3)
Hello from hello-695c67cf9c-5rxxf (12.244.0.2)
Hello from hello-695c67cf9c-5rxxf (12.244.0.2)
Hello from hello-695c67cf9c-5rxxf (12.244.0.2)
Hello from hello-695c67cf9c-5rxxf (12.244.0.2)
Hello from hello-695c67cf9c-hc4hn (12.244.1.3)
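
As a sanity check that the two replicas really landed on different nodes before testing the NodePort, something like this should do (a sketch: the app=hello label and the hello service name come from the tutorial manifests above):

# Confirm one hello pod is scheduled on each node (check the NODE column).
kubectl get pods -l app=hello -o wide

# Ask minikube for a reachable URL of the NodePort service.
minikube service hello --url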

charleech (Author) commented:

I'm not sure whether there is any documentation that mentions --docker-opt bip=172.18.0.1/16, --extra-config=kubeadm.pod-network-cidr=12.244.0.0/16, or other special additional parameters like them. Could you please advise further?

charleech (Author) commented:

Could you please point me in the right direction?
