none driver: not able to use kubectl as normal user #8673

Open
Lavie526 opened this issue Jul 8, 2020 · 20 comments
Labels
co/none-driver, kind/bug, lifecycle/frozen, priority/backlog, triage/duplicate

Comments


Lavie526 commented Jul 8, 2020

Steps to reproduce the issue:

  1. sudo minikube start --vm-driver=none --docker-opt="default-ulimit=core=-1" --alsologtostderr --extra-config=kubelet.cgroups-per-qos=false --extra-config=kubelet.enforce-node-allocatable=""
  2. There was a proxy issue while starting minikube (similar to "how to skip download kubeadm & kubelet. because I download these in $PATH" #3846), so I tried to download those binaries manually. However, I found the cache folder is under /root. I manually copied kubectl/kubelet into the default cache folder (/root/.minikube/cache/linux/v1.18.3), and it finally started successfully.
  3. I am able to run commands like sudo kubectl get nodes. However, when I run kubectl get nodes as a normal user, it shows an error like:

W0708 01:55:39.604315 22957 loader.go:223] Config not found: /scratch/jiekong/.kube/config
The connection to the server localhost:8080 was refused - did you specify the right host or port?

I have already set export CHANGE_MINIKUBE_NONE_USER=true when starting minikube.
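As a side note, plain sudo drops most exported environment variables, so a minimal sketch of how that variable is usually made visible to the root process that actually runs minikube (the flags are copied from the command above; MINIKUBE_HOME pointing at /scratch/$USER is an assumption based on the later runs in this thread):

export MINIKUBE_HOME=/scratch/$USER
export CHANGE_MINIKUBE_NONE_USER=true
sudo -E minikube start --vm-driver=none --docker-opt="default-ulimit=core=-1" \
  --extra-config=kubelet.cgroups-per-qos=false --extra-config=kubelet.enforce-node-allocatable=""
# -E keeps the exported variables visible to the root process; without it they are never seen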


Also, there are some messages notifying me like this:

❗ The 'none' driver is designed for experts who need to integrate with an existing VM
💡 Most users should use the newer 'docker' driver instead, which does not require root!
📘 For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/

❗ kubectl and minikube configuration will be stored in /root
❗ To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:

▪ sudo mv /root/.kube /root/.minikube $HOME
▪ sudo chown -R $USER $HOME/.kube $HOME/.minikube

If I run the above two commands and then try again, it is still not able to run.
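For reference, a minimal sketch of the relocation that warning describes, with the extra step of pointing kubectl at the moved config (paths are the defaults; adjust if MINIKUBE_HOME points somewhere like /scratch/$USER):

sudo mv /root/.kube /root/.minikube $HOME
sudo chown -R $USER $HOME/.kube $HOME/.minikube
# kubectl reads $HOME/.kube/config unless KUBECONFIG says otherwise
export KUBECONFIG=$HOME/.kube/config
kubectl get nodes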


Lavie526 commented Jul 8, 2020

Anything wrong? How should I configure this so that I can run minikube without sudo?
It installed everything to /root/.kube and /root/.minikube, but not
/scratch/$USER/.minikube and /scratch/$USER/.kube. After I manually move them, they are not usable.
I remember previous minikube versions didn't have this issue; is it new in the latest version?


Lavie526 commented Jul 8, 2020

Some updates:
While using:
sudo -E minikube start --vm-driver=none --docker-opt="default-ulimit=core=-1" --alsologtostderr --extra-config=kubelet.cgroups-per-qos=false --extra-config=kubelet.enforce-node-allocatable=""

It is able to start under /scratch/$USER/, and it shows that minikube started successfully.

However, when I try to run kubectl get nodes, with or without sudo, it shows:
Unable to connect to the server: net/http: TLS handshake timeout

What's the problem?
Why am I not able to use kubectl after starting minikube?
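A couple of quick checks that might narrow this down (only a sketch, not verified against this environment): confirm which server kubectl is pointed at, and whether the apiserver answers at all.

kubectl config view --minify                 # which server URL and cert files kubectl is using
curl -k https://10.88.105.73:8443/healthz    # apiserver address taken from the start log below; a healthy server prints "ok"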


Lavie526 commented Jul 8, 2020

-- /stdout --
I0708 04:51:02.063069 23833 docker.go:384] kubernetesui/dashboard:v2.0.0 wasn't preloaded
I0708 04:51:02.063135 23833 exec_runner.go:49] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0708 04:51:02.173554 23833 store.go:62] repositories.json doesn't exist: sudo cat /var/lib/docker/image/overlay2/repositories.json: exit status 1
stdout:

stderr:
cat: /var/lib/docker/image/overlay2/repositories.json: No such file or directory
I0708 04:51:02.174013 23833 exec_runner.go:49] Run: which lz4
I0708 04:51:02.175049 23833 vm_assets.go:95] NewFileAsset: /scratch/jiekong/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
I0708 04:51:02.175169 23833 kubeadm.go:719] prelaoding failed, will try to load cached images: getting file asset: open: open /scratch/jiekong/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v3-v1.18.3-docker-overlay2-amd64.tar.lz4: no such file or directory
I0708 04:51:02.175335 23833 kubeadm.go:124] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet: AdvertiseAddress:10.88.105.73 APIServerPort:8443 KubernetesVersion:v1.18.3 EtcdDataDir:/var/lib/minikube/etcd ClusterName:minikube NodeName:den03fyu DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.88.105.73"]]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:10.88.105.73 ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0708 04:51:02.175623 23833 kubeadm.go:128] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.88.105.73
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "den03fyu"
  kubeletExtraArgs:
    node-ip: 10.88.105.73
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "10.88.105.73"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.18.3
networking:
  dnsDomain: cluster.local
  podSubnet: ""
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: ""
metricsBindAddress: 10.88.105.73:10249

I0708 04:51:02.176712 23833 exec_runner.go:49] Run: docker info --format {{.CgroupDriver}}
I0708 04:51:02.306243 23833 kubeadm.go:755] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.3/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-domain=cluster.local --config=/var/lib/kubelet/config.yaml --container-runtime=docker --fail-swap-on=false --hostname-override=den03fyu --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.88.105.73 --pod-manifest-path=/etc/kubernetes/manifests

[Install]
config:
{KubernetesVersion:v1.18.3 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:}
I0708 04:51:02.306964 23833 exec_runner.go:49] Run: sudo ls /var/lib/minikube/binaries/v1.18.3
I0708 04:51:02.420730 23833 binaries.go:46] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.18.3: exit status 2
stdout:

stderr:
ls: cannot access /var/lib/minikube/binaries/v1.18.3: No such file or directory

Initiating transfer...
I0708 04:51:02.421292 23833 exec_runner.go:49] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.18.3
I0708 04:51:02.529645 23833 binary.go:56] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.18.3/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.18.3/bin/linux/amd64/kubectl.sha256
I0708 04:51:02.529868 23833 vm_assets.go:95] NewFileAsset: /scratch/jiekong/.minikube/cache/linux/v1.18.3/kubectl -> /var/lib/minikube/binaries/v1.18.3/kubectl
I0708 04:51:02.530034 23833 exec_runner.go:98] cp: /scratch/jiekong/.minikube/cache/linux/v1.18.3/kubectl --> /var/lib/minikube/binaries/v1.18.3/kubectl (44032000 bytes)
I0708 04:51:02.529702 23833 binary.go:56] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.18.3/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.18.3/bin/linux/amd64/kubelet.sha256
I0708 04:51:02.530323 23833 exec_runner.go:49] Run: sudo systemctl is-active --quiet service kubelet
I0708 04:51:02.529724 23833 binary.go:56] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.18.3/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.18.3/bin/linux/amd64/kubeadm.sha256
I0708 04:51:02.530660 23833 vm_assets.go:95] NewFileAsset: /scratch/jiekong/.minikube/cache/linux/v1.18.3/kubeadm -> /var/lib/minikube/binaries/v1.18.3/kubeadm
I0708 04:51:02.530727 23833 exec_runner.go:98] cp: /scratch/jiekong/.minikube/cache/linux/v1.18.3/kubeadm --> /var/lib/minikube/binaries/v1.18.3/kubeadm (39813120 bytes)
I0708 04:51:02.657218 23833 vm_assets.go:95] NewFileAsset: /scratch/jiekong/.minikube/cache/linux/v1.18.3/kubelet -> /var/lib/minikube/binaries/v1.18.3/kubelet
I0708 04:51:02.657524 23833 exec_runner.go:98] cp: /scratch/jiekong/.minikube/cache/linux/v1.18.3/kubelet --> /var/lib/minikube/binaries/v1.18.3/kubelet (113283800 bytes)
I0708 04:51:02.918152 23833 exec_runner.go:49] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0708 04:51:03.031935 23833 exec_runner.go:91] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
I0708 04:51:03.032202 23833 exec_runner.go:98] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (534 bytes)
I0708 04:51:03.032429 23833 exec_runner.go:91] found /lib/systemd/system/kubelet.service, removing ...
I0708 04:51:03.032605 23833 exec_runner.go:98] cp: memory --> /lib/systemd/system/kubelet.service (349 bytes)
I0708 04:51:03.032782 23833 exec_runner.go:98] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (1440 bytes)
I0708 04:51:03.032963 23833 start.go:268] checking
I0708 04:51:03.033126 23833 exec_runner.go:49] Run: grep 10.88.105.73 control-plane.minikube.internal$ /etc/hosts
I0708 04:51:03.034879 23833 exec_runner.go:49] Run: sudo systemctl daemon-reload
I0708 04:51:03.208210 23833 exec_runner.go:49] Run: sudo systemctl start kubelet
I0708 04:51:03.368188 23833 certs.go:52] Setting up /scratch/jiekong/.minikube/profiles/minikube for IP: 10.88.105.73
I0708 04:51:03.368411 23833 certs.go:169] skipping minikubeCA CA generation: /scratch/jiekong/.minikube/ca.key
I0708 04:51:03.368509 23833 certs.go:169] skipping proxyClientCA CA generation: /scratch/jiekong/.minikube/proxy-client-ca.key
I0708 04:51:03.368709 23833 certs.go:273] generating minikube-user signed cert: /scratch/jiekong/.minikube/profiles/minikube/client.key
I0708 04:51:03.368794 23833 crypto.go:69] Generating cert /scratch/jiekong/.minikube/profiles/minikube/client.crt with IP's: []
I0708 04:51:03.568353 23833 crypto.go:157] Writing cert to /scratch/jiekong/.minikube/profiles/minikube/client.crt ...
I0708 04:51:03.568520 23833 lock.go:35] WriteFile acquiring /scratch/jiekong/.minikube/profiles/minikube/client.crt: {Name:mk102f7d86706185740d9bc9a57fc1d55716aadc Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0708 04:51:03.568955 23833 crypto.go:165] Writing key to /scratch/jiekong/.minikube/profiles/minikube/client.key ...
I0708 04:51:03.569063 23833 lock.go:35] WriteFile acquiring /scratch/jiekong/.minikube/profiles/minikube/client.key: {Name:mkef0a0f26fc07209d23f79940d16c45455b63f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0708 04:51:03.569317 23833 certs.go:273] generating minikube signed cert: /scratch/jiekong/.minikube/profiles/minikube/apiserver.key.32d4771a
I0708 04:51:03.569414 23833 crypto.go:69] Generating cert /scratch/jiekong/.minikube/profiles/minikube/apiserver.crt.32d4771a with IP's: [10.88.105.73 10.96.0.1 127.0.0.1 10.0.0.1]
I0708 04:51:03.736959 23833 crypto.go:157] Writing cert to /scratch/jiekong/.minikube/profiles/minikube/apiserver.crt.32d4771a ...
I0708 04:51:03.737194 23833 lock.go:35] WriteFile acquiring /scratch/jiekong/.minikube/profiles/minikube/apiserver.crt.32d4771a: {Name:mk19df70614448c14ee6429d417342b7419a8c28 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0708 04:51:03.737692 23833 crypto.go:165] Writing key to /scratch/jiekong/.minikube/profiles/minikube/apiserver.key.32d4771a ...
I0708 04:51:03.737835 23833 lock.go:35] WriteFile acquiring /scratch/jiekong/.minikube/profiles/minikube/apiserver.key.32d4771a: {Name:mk9a84f03fab0f9ac49212061d52f713117f834c Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0708 04:51:03.738100 23833 certs.go:284] copying /scratch/jiekong/.minikube/profiles/minikube/apiserver.crt.32d4771a -> /scratch/jiekong/.minikube/profiles/minikube/apiserver.crt
I0708 04:51:03.738361 23833 certs.go:288] copying /scratch/jiekong/.minikube/profiles/minikube/apiserver.key.32d4771a -> /scratch/jiekong/.minikube/profiles/minikube/apiserver.key
I0708 04:51:03.738576 23833 certs.go:273] generating aggregator signed cert: /scratch/jiekong/.minikube/profiles/minikube/proxy-client.key
I0708 04:51:03.738679 23833 crypto.go:69] Generating cert /scratch/jiekong/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0708 04:51:04.190088 23833 crypto.go:157] Writing cert to /scratch/jiekong/.minikube/profiles/minikube/proxy-client.crt ...
I0708 04:51:04.190175 23833 lock.go:35] WriteFile acquiring /scratch/jiekong/.minikube/profiles/minikube/proxy-client.crt: {Name:mkd86cf3f7172f909cc9174e9befa523ad3f3568 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0708 04:51:04.190493 23833 crypto.go:165] Writing key to /scratch/jiekong/.minikube/profiles/minikube/proxy-client.key ...
I0708 04:51:04.190577 23833 lock.go:35] WriteFile acquiring /scratch/jiekong/.minikube/profiles/minikube/proxy-client.key: {Name:mk86f427bfbc5f46a12e1a6ff48f5514472dcc9b Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0708 04:51:04.190800 23833 vm_assets.go:95] NewFileAsset: /scratch/jiekong/.minikube/profiles/minikube/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0708 04:51:04.190881 23833 vm_assets.go:95] NewFileAsset: /scratch/jiekong/.minikube/profiles/minikube/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0708 04:51:04.190952 23833 vm_assets.go:95] NewFileAsset: /scratch/jiekong/.minikube/profiles/minikube/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0708 04:51:04.190988 23833 vm_assets.go:95] NewFileAsset: /scratch/jiekong/.minikube/profiles/minikube/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0708 04:51:04.191013 23833 vm_assets.go:95] NewFileAsset: /scratch/jiekong/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0708 04:51:04.191036 23833 vm_assets.go:95] NewFileAsset: /scratch/jiekong/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0708 04:51:04.191086 23833 vm_assets.go:95] NewFileAsset: /scratch/jiekong/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0708 04:51:04.191132 23833 vm_assets.go:95] NewFileAsset: /scratch/jiekong/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0708 04:51:04.191215 23833 certs.go:348] found cert: /scratch/jiekong/.minikube/certs/scratch/jiekong/.minikube/certs/ca-key.pem (1679 bytes)
I0708 04:51:04.191286 23833 certs.go:348] found cert: /scratch/jiekong/.minikube/certs/scratch/jiekong/.minikube/certs/ca.pem (1029 bytes)
I0708 04:51:04.191349 23833 certs.go:348] found cert: /scratch/jiekong/.minikube/certs/scratch/jiekong/.minikube/certs/cert.pem (1070 bytes)
I0708 04:51:04.191390 23833 certs.go:348] found cert: /scratch/jiekong/.minikube/certs/scratch/jiekong/.minikube/certs/key.pem (1675 bytes)
I0708 04:51:04.191438 23833 vm_assets.go:95] NewFileAsset: /scratch/jiekong/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0708 04:51:04.192656 23833 exec_runner.go:98] cp: /scratch/jiekong/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1350 bytes)
I0708 04:51:04.192824 23833 exec_runner.go:98] cp: /scratch/jiekong/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0708 04:51:04.192973 23833 exec_runner.go:98] cp: /scratch/jiekong/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1103 bytes)
I0708 04:51:04.193079 23833 exec_runner.go:98] cp: /scratch/jiekong/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0708 04:51:04.193202 23833 exec_runner.go:98] cp: /scratch/jiekong/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1066 bytes)
I0708 04:51:04.193285 23833 exec_runner.go:98] cp: /scratch/jiekong/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0708 04:51:04.193435 23833 exec_runner.go:98] cp: /scratch/jiekong/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1074 bytes)
I0708 04:51:04.193566 23833 exec_runner.go:98] cp: /scratch/jiekong/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0708 04:51:04.193681 23833 exec_runner.go:91] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
I0708 04:51:04.193770 23833 exec_runner.go:98] cp: /scratch/jiekong/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1066 bytes)
I0708 04:51:04.193852 23833 exec_runner.go:98] cp: memory --> /var/lib/minikube/kubeconfig (398 bytes)
I0708 04:51:04.194000 23833 exec_runner.go:49] Run: openssl version
I0708 04:51:04.198708 23833 exec_runner.go:49] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0708 04:51:04.313826 23833 exec_runner.go:49] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0708 04:51:04.316710 23833 certs.go:389] hashing: -rw-r--r-- 1 root root 1066 Jul 8 04:51 /usr/share/ca-certificates/minikubeCA.pem
I0708 04:51:04.316881 23833 exec_runner.go:49] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0708 04:51:04.329279 23833 exec_runner.go:49] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0708 04:51:04.444713 23833 kubeadm.go:293] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:8192 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.3 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:10.88.105.73 Port:8443 KubernetesVersion:v1.18.3 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]}
I0708 04:51:04.445014 23833 exec_runner.go:49] Run: docker ps --filter status=paused --filter=name=k8s_.*(kube-system) --format={{.ID}}
I0708 04:51:04.502050 23833 exec_runner.go:49] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0708 04:51:04.616553 23833 exec_runner.go:49] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0708 04:51:04.729849 23833 exec_runner.go:49] Run: docker version --format {{.Server.Version}}
I0708 04:51:04.789067 23833 exec_runner.go:49] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0708 04:51:04.906194 23833 kubeadm.go:147] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
stdout:

stderr:
ls: cannot access /etc/kubernetes/admin.conf: No such file or directory
ls: cannot access /etc/kubernetes/kubelet.conf: No such file or directory
ls: cannot access /etc/kubernetes/controller-manager.conf: No such file or directory
ls: cannot access /etc/kubernetes/scheduler.conf: No such file or directory
I0708 04:51:04.906999 23833 exec_runner.go:49] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap"
I0708 04:51:24.990980 23833 exec_runner.go:78] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": (20.083826345s)
I0708 04:51:24.991236 23833 exec_runner.go:49] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0708 04:51:24.991362 23833 exec_runner.go:49] Run: sudo /var/lib/minikube/binaries/v1.18.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0708 04:51:24.991467 23833 exec_runner.go:49] Run: sudo /var/lib/minikube/binaries/v1.18.3/kubectl label nodes minikube.k8s.io/version=v1.11.0 minikube.k8s.io/commit=57e2f55f47effe9ce396cea42a1e0eb4f611ebbd minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2020_07_08T04_51_24_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0708 04:51:25.009728 23833 ops.go:35] apiserver oom_adj: -16
I0708 04:51:25.305621 23833 kubeadm.go:890] duration metric: took 314.347286ms to wait for elevateKubeSystemPrivileges.
I0708 04:51:25.308559 23833 kubeadm.go:295] StartCluster complete in 20.863855543s
I0708 04:51:25.308636 23833 settings.go:123] acquiring lock: {Name:mk6f220c874ab31ad6cc0cf9a6c90f7ab17dd518 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0708 04:51:25.308850 23833 settings.go:131] Updating kubeconfig: /scratch/jiekong/.kube/config
I0708 04:51:25.310299 23833 lock.go:35] WriteFile acquiring /scratch/jiekong/.kube/config: {Name:mk262b9661e6e96133150ac3387d626503976a42 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
🤹 Configuring local host environment ...

❗ The 'none' driver is designed for experts who need to integrate with an existing VM
💡 Most users should use the newer 'docker' driver instead, which does not require root!
📘 For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/

I0708 04:51:25.311027 23833 addons.go:320] enableAddons start: toEnable=map[], additional=[]
I0708 04:51:25.311925 23833 addons.go:50] Setting storage-provisioner=true in profile "minikube"
I0708 04:51:25.311964 23833 addons.go:126] Setting addon storage-provisioner=true in "minikube"
W0708 04:51:25.311982 23833 addons.go:135] addon storage-provisioner should already be in state true
I0708 04:51:25.312003 23833 host.go:65] Checking if "minikube" exists ...
🔎 Verifying Kubernetes components...
I0708 04:51:25.312675 23833 kubeconfig.go:93] found "minikube" server: "https://10.88.105.73:8443"
I0708 04:51:25.313868 23833 api_server.go:145] Checking apiserver status ...
I0708 04:51:25.313937 23833 exec_runner.go:49] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0708 04:51:25.312693 23833 addons.go:50] Setting default-storageclass=true in profile "minikube"
I0708 04:51:25.314281 23833 kapi.go:58] client config for minikube: &rest.Config{Host:"https://10.88.105.73:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/scratch/jiekong/.minikube/profiles/minikube/client.crt", KeyFile:"/scratch/jiekong/.minikube/profiles/minikube/client.key", CAFile:"/scratch/jiekong/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1612b60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil)}
I0708 04:51:25.314364 23833 addons.go:266] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0708 04:51:25.315000 23833 kubeconfig.go:93] found "minikube" server: "https://10.88.105.73:8443"
I0708 04:51:25.315131 23833 api_server.go:145] Checking apiserver status ...
I0708 04:51:25.315261 23833 exec_runner.go:49] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0708 04:51:25.316834 23833 api_server.go:47] waiting for apiserver process to appear ...
I0708 04:51:25.316951 23833 exec_runner.go:49] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0708 04:51:25.445681 23833 api_server.go:67] duration metric: took 133.146191ms to wait for apiserver process to appear ...
I0708 04:51:25.445833 23833 api_server.go:83] waiting for apiserver healthz status ...
I0708 04:51:25.445948 23833 api_server.go:193] Checking apiserver healthz at https://10.88.105.73:8443/healthz ...
I0708 04:51:25.445817 23833 exec_runner.go:49] Run: sudo egrep ^[0-9]+:freezer: /proc/24788/cgroup
I0708 04:51:25.448590 23833 exec_runner.go:49] Run: sudo egrep ^[0-9]+:freezer: /proc/24788/cgroup
I0708 04:51:25.453144 23833 api_server.go:213] https://10.88.105.73:8443/healthz returned 200:
ok
I0708 04:51:25.462488 23833 api_server.go:136] control plane version: v1.18.3
I0708 04:51:25.462555 23833 api_server.go:126] duration metric: took 16.616282ms to wait for apiserver health ...
I0708 04:51:25.462576 23833 system_pods.go:43] waiting for kube-system pods to appear ...
I0708 04:51:25.472194 23833 system_pods.go:61] 0 kube-system pods found
I0708 04:51:25.567619 23833 api_server.go:161] apiserver freezer: "7:freezer:/kubepods/burstable/pod8b2fecff29862725b4be383097508ab5/f5716b847a1ee3517e3d5a773b495a1db40598459decbe10d0957153e4c2c5f9"
I0708 04:51:25.567846 23833 exec_runner.go:49] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod8b2fecff29862725b4be383097508ab5/f5716b847a1ee3517e3d5a773b495a1db40598459decbe10d0957153e4c2c5f9/freezer.state
I0708 04:51:25.568945 23833 api_server.go:161] apiserver freezer: "7:freezer:/kubepods/burstable/pod8b2fecff29862725b4be383097508ab5/f5716b847a1ee3517e3d5a773b495a1db40598459decbe10d0957153e4c2c5f9"
I0708 04:51:25.569079 23833 exec_runner.go:49] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod8b2fecff29862725b4be383097508ab5/f5716b847a1ee3517e3d5a773b495a1db40598459decbe10d0957153e4c2c5f9/freezer.state
I0708 04:51:25.677309 23833 api_server.go:183] freezer state: "THAWED"
I0708 04:51:25.677357 23833 api_server.go:193] Checking apiserver healthz at https://10.88.105.73:8443/healthz ...
I0708 04:51:25.684256 23833 api_server.go:213] https://10.88.105.73:8443/healthz returned 200:
ok
I0708 04:51:25.685210 23833 kapi.go:58] client config for minikube: &rest.Config{Host:"https://10.88.105.73:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/scratch/jiekong/.minikube/profiles/minikube/client.crt", KeyFile:"/scratch/jiekong/.minikube/profiles/minikube/client.key", CAFile:"/scratch/jiekong/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1612b60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil)}
I0708 04:51:25.685698 23833 api_server.go:183] freezer state: "THAWED"
I0708 04:51:25.685841 23833 api_server.go:193] Checking apiserver healthz at https://10.88.105.73:8443/healthz ...
I0708 04:51:25.696305 23833 api_server.go:213] https://10.88.105.73:8443/healthz returned 200:
ok
I0708 04:51:25.696413 23833 addons.go:233] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0708 04:51:25.696434 23833 exec_runner.go:91] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
I0708 04:51:25.696494 23833 exec_runner.go:98] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (1709 bytes)
I0708 04:51:25.696655 23833 exec_runner.go:49] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0708 04:51:25.697318 23833 addons.go:126] Setting addon default-storageclass=true in "minikube"
W0708 04:51:25.697365 23833 addons.go:135] addon default-storageclass should already be in state true
I0708 04:51:25.697391 23833 host.go:65] Checking if "minikube" exists ...
I0708 04:51:25.698042 23833 kubeconfig.go:93] found "minikube" server: "https://10.88.105.73:8443"
I0708 04:51:25.698096 23833 api_server.go:145] Checking apiserver status ...
I0708 04:51:25.698141 23833 exec_runner.go:49] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0708 04:51:25.835513 23833 exec_runner.go:49] Run: sudo egrep ^[0-9]+:freezer: /proc/24788/cgroup
I0708 04:51:25.954509 23833 api_server.go:161] apiserver freezer: "7:freezer:/kubepods/burstable/pod8b2fecff29862725b4be383097508ab5/f5716b847a1ee3517e3d5a773b495a1db40598459decbe10d0957153e4c2c5f9"
I0708 04:51:25.954773 23833 exec_runner.go:49] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod8b2fecff29862725b4be383097508ab5/f5716b847a1ee3517e3d5a773b495a1db40598459decbe10d0957153e4c2c5f9/freezer.state
I0708 04:51:25.975637 23833 system_pods.go:61] 0 kube-system pods found
I0708 04:51:26.082816 23833 api_server.go:183] freezer state: "THAWED"
I0708 04:51:26.082877 23833 api_server.go:193] Checking apiserver healthz at https://10.88.105.73:8443/healthz ...
I0708 04:51:26.088933 23833 api_server.go:213] https://10.88.105.73:8443/healthz returned 200:
ok
I0708 04:51:26.089147 23833 addons.go:233] installing /etc/kubernetes/addons/storageclass.yaml
I0708 04:51:26.089256 23833 exec_runner.go:91] found /etc/kubernetes/addons/storageclass.yaml, removing ...
I0708 04:51:26.089416 23833 exec_runner.go:98] cp: deploy/addons/storageclass/storageclass.yaml.tmpl --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0708 04:51:26.089793 23833 exec_runner.go:49] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
🌟 Enabled addons: default-storageclass, storage-provisioner
I0708 04:51:26.376795 23833 addons.go:322] enableAddons completed in 1.065764106s
I0708 04:51:26.475663 23833 system_pods.go:61] 1 kube-system pods found
I0708 04:51:26.475871 23833 system_pods.go:63] "storage-provisioner" [af8c20ed-4a68-4f8f-899f-733c7b19ecbc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0708 04:51:26.975162 23833 system_pods.go:61] 1 kube-system pods found
I0708 04:51:26.975384 23833 system_pods.go:63] "storage-provisioner" [af8c20ed-4a68-4f8f-899f-733c7b19ecbc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0708 04:51:27.475106 23833 system_pods.go:61] 1 kube-system pods found
I0708 04:51:27.475161 23833 system_pods.go:63] "storage-provisioner" [af8c20ed-4a68-4f8f-899f-733c7b19ecbc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0708 04:51:27.975099 23833 system_pods.go:61] 1 kube-system pods found
I0708 04:51:27.975138 23833 system_pods.go:63] "storage-provisioner" [af8c20ed-4a68-4f8f-899f-733c7b19ecbc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0708 04:51:28.475038 23833 system_pods.go:61] 1 kube-system pods found
I0708 04:51:28.475086 23833 system_pods.go:63] "storage-provisioner" [af8c20ed-4a68-4f8f-899f-733c7b19ecbc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0708 04:51:28.974897 23833 system_pods.go:61] 1 kube-system pods found
I0708 04:51:28.975125 23833 system_pods.go:63] "storage-provisioner" [af8c20ed-4a68-4f8f-899f-733c7b19ecbc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0708 04:51:29.474969 23833 system_pods.go:61] 1 kube-system pods found
I0708 04:51:29.475023 23833 system_pods.go:63] "storage-provisioner" [af8c20ed-4a68-4f8f-899f-733c7b19ecbc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0708 04:51:29.976265 23833 system_pods.go:61] 1 kube-system pods found
I0708 04:51:29.976329 23833 system_pods.go:63] "storage-provisioner" [af8c20ed-4a68-4f8f-899f-733c7b19ecbc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0708 04:51:30.478012 23833 system_pods.go:61] 1 kube-system pods found
I0708 04:51:30.478248 23833 system_pods.go:63] "storage-provisioner" [af8c20ed-4a68-4f8f-899f-733c7b19ecbc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0708 04:51:30.975085 23833 system_pods.go:61] 1 kube-system pods found
I0708 04:51:30.975134 23833 system_pods.go:63] "storage-provisioner" [af8c20ed-4a68-4f8f-899f-733c7b19ecbc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0708 04:51:31.475057 23833 system_pods.go:61] 1 kube-system pods found
I0708 04:51:31.475119 23833 system_pods.go:63] "storage-provisioner" [af8c20ed-4a68-4f8f-899f-733c7b19ecbc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0708 04:51:31.976585 23833 system_pods.go:61] 4 kube-system pods found
I0708 04:51:31.976640 23833 system_pods.go:63] "coredns-66bff467f8-7tfkv" [19c5ca58-63f0-4726-8c22-66e5b3beb41c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0708 04:51:31.976654 23833 system_pods.go:63] "coredns-66bff467f8-8pxx8" [6bb588d6-6149-416c-9bdf-40a3506efd17] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0708 04:51:31.976664 23833 system_pods.go:63] "kube-proxy-lzp2j" [c7def367-ba25-4bcd-9f97-a509b89110a5] Pending
I0708 04:51:31.976674 23833 system_pods.go:63] "storage-provisioner" [af8c20ed-4a68-4f8f-899f-733c7b19ecbc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0708 04:51:31.976685 23833 system_pods.go:74] duration metric: took 6.514094256s to wait for pod list to return data ...
I0708 04:51:31.976697 23833 kubeadm.go:449] duration metric: took 6.664173023s to wait for : map[apiserver:true system_pods:true] ...
I0708 04:51:31.976716 23833 node_conditions.go:99] verifying NodePressure condition ...
I0708 04:51:31.981135 23833 node_conditions.go:111] node storage ephemeral capacity is 51474912Ki
I0708 04:51:31.981299 23833 node_conditions.go:112] node cpu capacity is 16
I0708 04:51:31.981441 23833 node_conditions.go:102] duration metric: took 4.712013ms to run NodePressure ...
🏄 Done! kubectl is now configured to use "minikube"
I0708 04:51:32.059074 23833 start.go:395] kubectl: 1.18.5, cluster: 1.18.3 (minor skew: 0)


Lavie526 commented Jul 8, 2020

kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.5", GitCommit:"e6503f8d8f769ace2f338794c914a96fc335df0f", GitTreeState:"clean", BuildDate:"2020-06-26T03:47:41Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Unable to connect to the server: net/http: TLS handshake timeout


mazzystr commented Jul 8, 2020

Doesn't --driver=none require root? In that case you would have to transfer the config and certs to the normal user.

I know this is terrible, but I do the following:

tar -cf client.tar .kube/config .minikube/profiles/minikube .minikube/ca.* .minikube/cert*
scp root@host:/root/client.tar .
tar -xf client.tar

OR configure client via kubectl config
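For the kubectl config route, a rough sketch of what that might look like, assuming the CA and client cert/key were copied over as in the tar above and now live under ~/.minikube (the "minikube" cluster/user/context names are arbitrary and <apiserver-ip> is a placeholder):

kubectl config set-cluster minikube --server=https://<apiserver-ip>:8443 \
  --certificate-authority=$HOME/.minikube/ca.crt --embed-certs=true
kubectl config set-credentials minikube \
  --client-certificate=$HOME/.minikube/profiles/minikube/client.crt \
  --client-key=$HOME/.minikube/profiles/minikube/client.key --embed-certs=true
kubectl config set-context minikube --cluster=minikube --user=minikube
kubectl config use-context minikube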


Lavie526 commented Jul 9, 2020

@mazzystr --driver=none requires root, so I use sudo to start minikube.
The current issue is that with "sudo -E minikube start" it installs to the /scratch/$USER/ folder, and minikube appears to start successfully in the log I pasted above.
However, when I use kubectl get nodes, it shows me:

Unable to connect to the server: net/http: TLS handshake timeout

I guess maybe the start was not a real success; there is some information in the output log:

🌟 Enabled addons: default-storageclass, storage-provisioner
I0708 04:51:26.376795 23833 addons.go:322] enableAddons completed in 1.065764106s
I0708 04:51:26.475663 23833 system_pods.go:61] 1 kube-system pods found
I0708 04:51:26.475871 23833 system_pods.go:63] "storage-provisioner" [af8c20ed-4a68-4f8f-899f-733c7b19ecbc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0708 04:51:26.975162 23833 system_pods.go:61] 1 kube-system pods found
I0708 04:51:26.975384 23833 system_pods.go:63] "storage-provisioner" [af8c20ed-4a68-4f8f-899f-733c7b19ecbc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0708 04:51:27.475106 23833 system_pods.go:61] 1 kube-system pods found
I0708 04:51:27.475161 23833 system_pods.go:63] "storage-provisioner" [af8c20ed-4a68-4f8f-899f-733c7b19ecbc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0708 04:51:27.975099 23833 system_pods.go:61] 1 kube-system pods found
I0708 04:51:27.975138 23833 system_pods.go:63] "storage-provisioner" [af8c20ed-4a68-4f8f-899f-733c7b19ecbc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0708 04:51:28.475038 23833 system_pods.go:61] 1 kube-system pods found
I0708 04:51:28.475086 23833 system_pods.go:63] "storage-provisioner" [af8c20ed-4a68-4f8f-899f-733c7b19ecbc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0708 04:51:28.974897 23833 system_pods.go:61] 1 kube-system pods found
I0708 04:51:28.975125 23833 system_pods.go:63] "storage-provisioner" [af8c20ed-4a68-4f8f-899f-733c7b19ecbc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0708 04:51:29.474969 23833 system_pods.go:61] 1 kube-system pods found
I0708 04:51:29.475023 23833 system_pods.go:63] "storage-provisioner" [af8c20ed-4a68-4f8f-899f-733c7b19ecbc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0708 04:51:29.976265 23833 system_pods.go:61] 1 kube-system pods found
I0708 04:51:29.976329 23833 system_pods.go:63] "storage-provisioner" [af8c20ed-4a68-4f8f-899f-733c7b19ecbc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0708 04:51:30.478012 23833 system_pods.go:61] 1 kube-system pods found
I0708 04:51:30.478248 23833 system_pods.go:63] "storage-provisioner" [af8c20ed-4a68-4f8f-899f-733c7b19ecbc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0708 04:51:30.975085 23833 system_pods.go:61] 1 kube-system pods found
I0708 04:51:30.975134 23833 system_pods.go:63] "storage-provisioner" [af8c20ed-4a68-4f8f-899f-733c7b19ecbc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0708 04:51:31.475057 23833 system_pods.go:61] 1 kube-system pods found
I0708 04:51:31.475119 23833 system_pods.go:63] "storage-provisioner" [af8c20ed-4a68-4f8f-899f-733c7b19ecbc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0708 04:51:31.976585 23833 system_pods.go:61] 4 kube-system pods found
I0708 04:51:31.976640 23833 system_pods.go:63] "coredns-66bff467f8-7tfkv" [19c5ca58-63f0-4726-8c22-66e5b3beb41c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0708 04:51:31.976654 23833 system_pods.go:63] "coredns-66bff467f8-8pxx8" [6bb588d6-6149-416c-9bdf-40a3506efd17] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0708 04:51:31.976664 23833 system_pods.go:63] "kube-proxy-lzp2j" [c7def367-ba25-4bcd-9f97-a509b89110a5] Pending
I0708 04:51:31.976674 23833 system_pods.go:63] "storage-provisioner" [af8c20ed-4a68-4f8f-899f-733c7b19ecbc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I0708 04:51:31.976685 23833 system_pods.go:74] duration metric: took 6.514094256s to wait for pod list to return data ...
I0708 04:51:31.976697 23833 kubeadm.go:449] duration metric: took 6.664173023s to wait for : map[apiserver:true system_pods:true] ...

Not sure if this is why I can't use kubectl commands after start?


Lavie526 commented Jul 9, 2020

I tried to do the transfer. After the transfer, it only works with sudo, like "sudo kubectl get nodes"; without sudo, "kubectl get nodes" reports:
Unable to connect to the server: net/http: TLS handshake timeout


mazzystr commented Jul 9, 2020

...and that makes sense. Everything is going to be owned by root...all the config, all the certs, all the images, everything in ~/.minikube. There's a lot of junk packed in that directory. If you try to run kubectl as a normal user it will fail.

Try running minikube start directly as root. See if that works any better.

Or sudo chown -R user ~/.minikube


mazzystr commented Jul 9, 2020

Yup, just as suspected. Ensure crio is installed and running. Ensure kubelet is installed and running.

Then run minikube start directly as root... (I add a couple extra parameters to make my env usable)

# minikube start --driver=none --container-runtime=cri-o --disk-size=50g --memory=8096m --apiserver-ips=10.88.0.1,10.88.0.2,10.88.0.3,10.88.0.4,10.88.0.5,10.88.0.6,10.88.0.7,10.88.0.8 --apiserver-name=k8s.octacube.co --apiserver-names=k8s.octacube.co
😄  minikube v1.11.0 on Fedora 32
✨  Using the none driver based on user configuration
❗  The 'none' driver does not respect the --memory flag
❗  Using the 'cri-o' runtime with the 'none' driver is an untested configuration!
👍  Starting control plane node minikube in cluster minikube
🤹  Running on localhost (CPUs=4, Memory=15886MB, Disk=102350MB) ...
ℹ️  OS release is Fedora 32 (Thirty Two)
🎁  Preparing Kubernetes v1.18.3 on CRI-O  ...
    > kubelet.sha256: 65 B / 65 B [--------------------------] 100.00% ? p/s 0s
    > kubectl.sha256: 65 B / 65 B [--------------------------] 100.00% ? p/s 0s
    > kubeadm.sha256: 65 B / 65 B [--------------------------] 100.00% ? p/s 0s
    > kubectl: 41.99 MiB / 41.99 MiB [---------------] 100.00% 50.52 MiB p/s 1s
    > kubelet: 108.04 MiB / 108.04 MiB [-------------] 100.00% 65.79 MiB p/s 2s
    > kubeadm: 37.97 MiB / 37.97 MiB [---------------] 100.00% 22.37 MiB p/s 2s
🤹  Configuring local host environment ...

❗  The 'none' driver is designed for experts who need to integrate with an existing VM
💡  Most users should use the newer 'docker' driver instead, which does not require root!
📘  For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/

❗  kubectl and minikube configuration will be stored in /root
❗  To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:

    ▪ sudo mv /root/.kube /root/.minikube $HOME
    ▪ sudo chown -R $USER $HOME/.kube $HOME/.minikube

💡  This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"

[root@cube0 ~]# kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   28s

[root@cube0 ~]# kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
cube0   Ready    master   51s   v1.18.3

@Lavie526

@mazzystr However, my expectation is to run kubectl as a normal user, not as root. How should I make it work?

@mazzystr

Documentation is perfectly clear on the root requirement for --driver=none. Link is here


medyagh commented Jul 10, 2020

@Lavie526 the none driver is only supported as root; however, I recommend using our newest driver, the Docker driver.

I recommend deleting the other cluster:
sudo minikube delete --all
then switch to your normal user and run:

minikube start --driver=docker

@Lavie526 does that solve your problem?

Meanwhile, we do have an issue to implement the none driver as non-root, but it is not a priority, since the docker driver is our preferred new driver.

@medyagh medyagh added long-term-support Long-term support issues that can't be fixed in code kind/support Categorizes issue or PR as a support question. labels Jul 10, 2020
@medyagh medyagh changed the title Not able to use kubectl as normal user none driver: not able to use kubectl as normal user Jul 10, 2020
@priyawadhwa priyawadhwa added the priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. label Jul 13, 2020
@Lavie526

@medyagh I tried to use the docker driver ("minikube start --vm-driver=docker --docker-opt="default-ulimit=core=-1" --alsologtostderr --extra-config=kubelet.cgroups-per-qos=false --extra-config=kubelet.enforce-node-allocatable="" --extra-config=kubelet.cgroup-driver=systemd") as you suggested above; however, there are still issues starting up:

🌐 Found network options:
▪ NO_PROXY=localhost,127.0.0.1,172.17.0.3,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24
▪ http_proxy=http://www-proxy-brmdc.us.*.com:80/
▪ https_proxy=http://www-proxy-brmdc.us.*.com:80/
▪ no_proxy=10.88.105.73,localhost,127.0.0.1,172.17.0.3
I0719 19:10:15.272657 82881 ssh_runner.go:148] Run: systemctl --version
I0719 19:10:15.272719 82881 ssh_runner.go:148] Run: curl -sS -m 2 https://k8s.gcr.io/
I0719 19:10:15.273027 82881 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0719 19:10:15.272746 82881 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0719 19:10:15.334467 82881 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:9031 SSHKeyPath:/scratch/jiekong/.minikube/machines/minikube/id_rsa Username:docker}
I0719 19:10:15.338242 82881 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:9031 SSHKeyPath:/scratch/jiekong/.minikube/machines/minikube/id_rsa Username:docker}
I0719 19:10:15.417613 82881 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service containerd
I0719 19:10:16.630977 82881 ssh_runner.go:188] Completed: sudo systemctl is-active --quiet service containerd: (1.21305259s)
I0719 19:10:16.631374 82881 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I0719 19:10:16.631196 82881 ssh_runner.go:188] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.358313967s)
W0719 19:10:16.631736 82881 start.go:504] [curl -sS -m 2 https://k8s.gcr.io/] failed: curl -sS -m 2 https://k8s.gcr.io/: Process exited with status 7
stdout:

stderr:
curl: (7) Failed to connect to k8s.gcr.io port 443: Connection timed out
❗ This container is having trouble accessing https://k8s.gcr.io
💡 To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
I0719 19:10:16.645337 82881 cruntime.go:192] skipping containerd shutdown because we are bound to it
I0719 19:10:16.645418 82881 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service crio
I0719 19:10:16.659966 82881 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I0719 19:10:16.672358 82881 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I0719 19:10:16.732206 82881 ssh_runner.go:148] Run: sudo systemctl start docker
I0719 19:10:16.744449 82881 ssh_runner.go:148] Run: docker version --format {{.Server.Version}}
🐳 Preparing Kubernetes v1.18.3 on Docker 19.03.2 ...
▪ opt default-ulimit=core=-1
▪ env NO_PROXY=localhost,127.0.0.1,172.17.0.3,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24
▪ env HTTP_PROXY=http://www-proxy-brmdc.us.*.com:80/
▪ env HTTPS_PROXY=http://www-proxy-brmdc.us.*.com:80/
▪ env NO_PROXY=10.88.105.73,localhost,127.0.0.1,172.17.0.3
I0719 19:10:16.816869 82881 cli_runner.go:109] Run: docker network ls --filter name=bridge --format {{.ID}}
I0719 19:10:16.868461 82881 cli_runner.go:109] Run: docker network inspect --format "{{(index .IPAM.Config 0).Gateway}}" f58908572170
b5c831a6ae16
E0719 19:10:16.920665 82881 start.go:96] Unable to get host IP: inspect IP bridge network "f58908572170\nb5c831a6ae16".: docker network inspect --format "{{(index .IPAM.Config 0).Gateway}}" f58908572170
b5c831a6ae16: exit status 1
stdout:

stderr:
Error: No such network: f58908572170
b5c831a6ae16
I0719 19:10:16.921025 82881 exit.go:58] WithError(failed to start node)=startup failed: Failed to setup kubeconfig: inspect IP bridge network "f58908572170\nb5c831a6ae16".: docker network inspect --format "{{(index .IPAM.Config 0).Gateway}}" f58908572170
b5c831a6ae16: exit status 1
stdout:

stderr:
Error: No such network: f58908572170
b5c831a6ae16
called from:
goroutine 1 [running]:
runtime/debug.Stack(0x0, 0x0, 0xc0002c0480)
/usr/local/go/src/runtime/debug/stack.go:24 +0x9d
k8s.io/minikube/pkg/minikube/exit.WithError(0x1baebdd, 0x14, 0x1ea7cc0, 0xc000fc1de0)
/app/pkg/minikube/exit/exit.go:58 +0x34
k8s.io/minikube/cmd/minikube/cmd.runStart(0x2c85020, 0xc000836300, 0x0, 0x6)
/app/cmd/minikube/cmd/start.go:198 +0x40f
github.com/spf13/cobra.(*Command).execute(0x2c85020, 0xc0008362a0, 0x6, 0x6, 0x2c85020, 0xc0008362a0)
/go/pkg/mod/github.com/spf13/[email protected]/command.go:846 +0x2aa
github.com/spf13/cobra.(*Command).ExecuteC(0x2c84060, 0x0, 0x1, 0xc0006a5d20)
/go/pkg/mod/github.com/spf13/[email protected]/command.go:950 +0x349
github.com/spf13/cobra.(*Command).Execute(...)
/go/pkg/mod/github.com/spf13/[email protected]/command.go:887
k8s.io/minikube/cmd/minikube/cmd.Execute()
/app/cmd/minikube/cmd/root.go:106 +0x747
main.main()
/app/cmd/minikube/main.go:71 +0x143
W0719 19:10:16.921363 82881 out.go:201] failed to start node: startup failed: Failed to setup kubeconfig: inspect IP bridge network "f58908572170\nb5c831a6ae16".: docker network inspect --format "{{(index .IPAM.Config 0).Gateway}}" f58908572170
b5c831a6ae16: exit status 1
stdout:

stderr:
Error: No such network: f58908572170
b5c831a6ae16

💣 failed to start node: startup failed: Failed to setup kubeconfig: inspect IP bridge network "f58908572170\nb5c831a6ae16".: docker network inspect --format "{{(index .IPAM.Config 0).Gateway}}" f58908572170
b5c831a6ae16: exit status 1
stdout:

stderr:
Error: No such network: f58908572170
b5c831a6ae16

😿 minikube is exiting due to an error. If the above message is not useful, open an issue:
👉 https://github.com/kubernetes/minikube/issues/new/choose
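From the output above, it looks like docker network ls --filter name=bridge returned two IDs (f58908572170 and b5c831a6ae16) where minikube expected one, and the follow-up inspect failed because both IDs were passed as a single argument. A quick way to check that on the host (a sketch only, nothing confirmed here):

docker network ls                     # look for more than one network whose name contains "bridge"
docker network inspect f58908572170   # inspect each returned ID separately
docker network inspect b5c831a6ae16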



Lavie526 commented Jul 20, 2020

If I use "minikube start --vm-driver=docker":
[jiekong@den03fyu ~]$ minikube start --vm-driver=docker
😄 minikube v1.12.0 on Oracle 7.4 (xen/amd64)
▪ KUBECONFIG=/scratch/jiekong/.kube/config
▪ MINIKUBE_HOME=/scratch/jiekong
✨ Using the docker driver based on user configuration
👍 Starting control plane node minikube in cluster minikube
🔥 Creating docker container (CPUs=2, Memory=14600MB) ...
🌐 Found network options:
▪ NO_PROXY=localhost,127.0.0.1,172.17.0.3,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24
▪ http_proxy=http://www-proxy-brmdc.us.*.com:80/
▪ https_proxy=http://www-proxy-brmdc.us.*.com:80/
▪ no_proxy=10.88.105.73,localhost,127.0.0.1,172.17.0.3
❗ This container is having trouble accessing https://k8s.gcr.io
💡 To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
🐳 Preparing Kubernetes v1.18.3 on Docker 19.03.2 ...
▪ env NO_PROXY=localhost,127.0.0.1,172.17.0.3,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24
▪ env HTTP_PROXY=http://www-proxy-brmdc.us.*.com:80/
▪ env HTTPS_PROXY=http://www-proxy-brmdc.us.*.com:80/
▪ env NO_PROXY=10.88.105.73,localhost,127.0.0.1,172.17.0.3
🔎 Verifying Kubernetes components...
🌟 Enabled addons: default-storageclass, storage-provisioner
🏄 Done! kubectl is now configured to use "minikube"
/////////////////////////////////////////////////////////////////////////////////////////
It seems to start successfully; however, when I run kubectl get nodes, the node status is "NotReady":
kubectl get nodes
NAME STATUS ROLES AGE VERSION
minikube NotReady master 2m35s v1.18.3

Any suggestions?
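
Not part of the original report, just a minimal sketch of the usual checks for a NotReady node with the docker driver (standard kubectl/minikube commands; the node name minikube and the kube-system namespace are the defaults shown above):

# show the Ready condition's Reason/Message and recent events for the node
kubectl describe node minikube

# check whether the CNI, kube-proxy and coredns pods are running or stuck
kubectl get pods -n kube-system -o wide

# collect the kubelet and container-runtime logs that minikube gathers
minikube logs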

@Lavie526

Lavie526 commented Jul 21, 2020

@medyagh Do you have any suggestions about the docker-driver failure above?
Also, are there any prerequisites for using this driver? I have already installed the latest Docker and configured the cgroup driver to systemd. Please also refer to the log output below.
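
For reference, a quick sketch (not from this thread) of commands that could confirm those prerequisites before another attempt; the format strings are standard docker ones:

# verify the host Docker version and that it uses the systemd cgroup driver
docker version --format '{{.Server.Version}}'
docker info --format '{{.CgroupDriver}}'

# start from a clean slate before retrying with the docker driver
minikube delete
minikube start --driver=docker --alsologtostderr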

@Lavie526

[jiekong@den03fyu tmp]$ minikube logs
==> Docker <==
-- Logs begin at Tue 2020-07-21 09:10:55 UTC, end at Tue 2020-07-21 09:14:34 UTC. --
Jul 21 09:11:02 minikube dockerd[80]: time="2020-07-21T09:11:02.765700925Z" level=info msg="Daemon shutdown complete"
Jul 21 09:11:02 minikube systemd[1]: docker.service: Succeeded.
Jul 21 09:11:02 minikube systemd[1]: Stopped Docker Application Container Engine.
Jul 21 09:11:02 minikube systemd[1]: Starting Docker Application Container Engine...
Jul 21 09:11:02 minikube dockerd[309]: time="2020-07-21T09:11:02.846036110Z" level=info msg="Starting up"
Jul 21 09:11:02 minikube dockerd[309]: time="2020-07-21T09:11:02.848246031Z" level=info msg="parsed scheme: "unix"" module=grpc
Jul 21 09:11:02 minikube dockerd[309]: time="2020-07-21T09:11:02.848284744Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
Jul 21 09:11:02 minikube dockerd[309]: time="2020-07-21T09:11:02.848309277Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Jul 21 09:11:02 minikube dockerd[309]: time="2020-07-21T09:11:02.848324546Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
Jul 21 09:11:02 minikube dockerd[309]: time="2020-07-21T09:11:02.848421438Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00048e400, CONNECTING" module=grpc
Jul 21 09:11:02 minikube dockerd[309]: time="2020-07-21T09:11:02.849236631Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00048e400, READY" module=grpc
Jul 21 09:11:02 minikube dockerd[309]: time="2020-07-21T09:11:02.850195218Z" level=info msg="parsed scheme: "unix"" module=grpc
Jul 21 09:11:02 minikube dockerd[309]: time="2020-07-21T09:11:02.850222047Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
Jul 21 09:11:02 minikube dockerd[309]: time="2020-07-21T09:11:02.850239114Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Jul 21 09:11:02 minikube dockerd[309]: time="2020-07-21T09:11:02.850269039Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
Jul 21 09:11:02 minikube dockerd[309]: time="2020-07-21T09:11:02.850313716Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0006c25b0, CONNECTING" module=grpc
Jul 21 09:11:02 minikube dockerd[309]: time="2020-07-21T09:11:02.850980124Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0006c25b0, READY" module=grpc
Jul 21 09:11:02 minikube dockerd[309]: time="2020-07-21T09:11:02.854172332Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Jul 21 09:11:02 minikube dockerd[309]: time="2020-07-21T09:11:02.862073697Z" level=warning msg="mountpoint for pids not found"
Jul 21 09:11:02 minikube dockerd[309]: time="2020-07-21T09:11:02.862271000Z" level=info msg="Loading containers: start."
Jul 21 09:11:02 minikube dockerd[309]: time="2020-07-21T09:11:02.968734904Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jul 21 09:11:03 minikube dockerd[309]: time="2020-07-21T09:11:03.008134706Z" level=info msg="Loading containers: done."
Jul 21 09:11:03 minikube dockerd[309]: time="2020-07-21T09:11:03.029558686Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: opaque flag erroneously copied up, consider update to kernel 4.8 or later to fix" storage-driver=overlay2
Jul 21 09:11:03 minikube dockerd[309]: time="2020-07-21T09:11:03.029888979Z" level=info msg="Docker daemon" commit=6a30dfca03 graphdriver(s)=overlay2 version=19.03.2
Jul 21 09:11:03 minikube dockerd[309]: time="2020-07-21T09:11:03.029945532Z" level=info msg="Daemon has completed initialization"
Jul 21 09:11:03 minikube dockerd[309]: time="2020-07-21T09:11:03.056393007Z" level=info msg="API listen on /var/run/docker.sock"
Jul 21 09:11:03 minikube dockerd[309]: time="2020-07-21T09:11:03.056424378Z" level=info msg="API listen on [::]:2376"
Jul 21 09:11:03 minikube systemd[1]: Started Docker Application Container Engine.
Jul 21 09:11:03 minikube systemd[1]: Stopping Docker Application Container Engine...
Jul 21 09:11:03 minikube dockerd[309]: time="2020-07-21T09:11:03.841762829Z" level=info msg="Processing signal 'terminated'"
Jul 21 09:11:03 minikube dockerd[309]: time="2020-07-21T09:11:03.843116958Z" level=info msg="stopping event stream following graceful shutdown" error="" module=libcontainerd namespace=moby
Jul 21 09:11:03 minikube dockerd[309]: time="2020-07-21T09:11:03.843986935Z" level=info msg="Daemon shutdown complete"
Jul 21 09:11:03 minikube systemd[1]: docker.service: Succeeded.
Jul 21 09:11:03 minikube systemd[1]: Stopped Docker Application Container Engine.
Jul 21 09:11:04 minikube systemd[1]: Starting Docker Application Container Engine...
Jul 21 09:11:04 minikube dockerd[520]: time="2020-07-21T09:11:04.772374915Z" level=info msg="Starting up"
Jul 21 09:11:04 minikube dockerd[520]: time="2020-07-21T09:11:04.774596980Z" level=info msg="parsed scheme: "unix"" module=grpc
Jul 21 09:11:04 minikube dockerd[520]: time="2020-07-21T09:11:04.774621360Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
Jul 21 09:11:04 minikube dockerd[520]: time="2020-07-21T09:11:04.774644981Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Jul 21 09:11:04 minikube dockerd[520]: time="2020-07-21T09:11:04.774673340Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
Jul 21 09:11:04 minikube dockerd[520]: time="2020-07-21T09:11:04.774802481Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0005c6800, CONNECTING" module=grpc
Jul 21 09:11:04 minikube dockerd[520]: time="2020-07-21T09:11:04.774826049Z" level=info msg="blockingPicker: the picked transport is not ready, loop back to repick" module=grpc
Jul 21 09:11:04 minikube dockerd[520]: time="2020-07-21T09:11:04.784331919Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0005c6800, READY" module=grpc
Jul 21 09:11:04 minikube dockerd[520]: time="2020-07-21T09:11:04.785529079Z" level=info msg="parsed scheme: "unix"" module=grpc
Jul 21 09:11:04 minikube dockerd[520]: time="2020-07-21T09:11:04.785588957Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
Jul 21 09:11:04 minikube dockerd[520]: time="2020-07-21T09:11:04.785620260Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Jul 21 09:11:04 minikube dockerd[520]: time="2020-07-21T09:11:04.785637204Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
Jul 21 09:11:04 minikube dockerd[520]: time="2020-07-21T09:11:04.785704488Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00087eff0, CONNECTING" module=grpc
Jul 21 09:11:04 minikube dockerd[520]: time="2020-07-21T09:11:04.786190053Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00087eff0, READY" module=grpc
Jul 21 09:11:04 minikube dockerd[520]: time="2020-07-21T09:11:04.789122543Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Jul 21 09:11:04 minikube dockerd[520]: time="2020-07-21T09:11:04.796375105Z" level=warning msg="mountpoint for pids not found"
Jul 21 09:11:04 minikube dockerd[520]: time="2020-07-21T09:11:04.796725314Z" level=info msg="Loading containers: start."
Jul 21 09:11:04 minikube dockerd[520]: time="2020-07-21T09:11:04.922948971Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jul 21 09:11:04 minikube dockerd[520]: time="2020-07-21T09:11:04.972600673Z" level=info msg="Loading containers: done."
Jul 21 09:11:05 minikube dockerd[520]: time="2020-07-21T09:11:05.003410234Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: opaque flag erroneously copied up, consider update to kernel 4.8 or later to fix" storage-driver=overlay2
Jul 21 09:11:05 minikube dockerd[520]: time="2020-07-21T09:11:05.003871594Z" level=info msg="Docker daemon" commit=6a30dfca03 graphdriver(s)=overlay2 version=19.03.2
Jul 21 09:11:05 minikube dockerd[520]: time="2020-07-21T09:11:05.003955262Z" level=info msg="Daemon has completed initialization"
Jul 21 09:11:05 minikube systemd[1]: Started Docker Application Container Engine.
Jul 21 09:11:05 minikube dockerd[520]: time="2020-07-21T09:11:05.034732172Z" level=info msg="API listen on [::]:2376"
Jul 21 09:11:05 minikube dockerd[520]: time="2020-07-21T09:11:05.034757544Z" level=info msg="API listen on /var/run/docker.sock"

==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
b765c7383ac2e 74060cea7f704 3 minutes ago Running kube-apiserver 0 d964074fa72ba
0d753c127dc63 303ce5db0e90d 3 minutes ago Running etcd 0 c98ab429dcd05
924b96c9a517c a31f78c7c8ce1 3 minutes ago Running kube-scheduler 0 0b78bfadca933
bc1da187d9749 d3e55153f52fb 3 minutes ago Running kube-controller-manager 0 dfe824c5264d0

==> describe nodes <==
Name: minikube
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=minikube
kubernetes.io/os=linux
minikube.k8s.io/commit=d8747aec7ebf8332ddae276d5f8fb42d3152b5a1
minikube.k8s.io/name=minikube
minikube.k8s.io/updated_at=2020_07_21T02_11_36_0700
minikube.k8s.io/version=v1.9.1
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 21 Jul 2020 09:11:32 +0000
Taints: node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: minikube
AcquireTime:
RenewTime: Tue, 21 Jul 2020 09:14:34 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message


MemoryPressure False Tue, 21 Jul 2020 09:14:34 +0000 Tue, 21 Jul 2020 09:11:27 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 21 Jul 2020 09:14:34 +0000 Tue, 21 Jul 2020 09:11:27 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 21 Jul 2020 09:14:34 +0000 Tue, 21 Jul 2020 09:11:27 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Tue, 21 Jul 2020 09:14:34 +0000 Tue, 21 Jul 2020 09:11:27 +0000 KubeletNotReady container runtime status check may not have completed yet
Addresses:
InternalIP: 172.17.0.2
Hostname: minikube
Capacity:
cpu: 16
ephemeral-storage: 804139352Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 60111844Ki
pods: 110
Allocatable:
cpu: 16
ephemeral-storage: 804139352Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 60111844Ki
pods: 110
System Info:
Machine ID: e83acec14442432b86b3e77b6bbcfe03
System UUID: c4d95ffe-70c0-4660-806f-a43891c87d6b
Boot ID: 55a28076-973d-4fd3-9b32-b25e77bad388
Kernel Version: 4.1.12-124.39.5.1.el7uek.x86_64
OS Image: Ubuntu 19.10
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://19.3.2
Kubelet Version: v1.18.0
Kube-Proxy Version: v1.18.0
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (2 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE


kube-system kindnet-cv7kb 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 2m45s
kube-system kube-proxy-vjnqz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m45s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits


cpu 100m (0%) 100m (0%)
memory 50Mi (0%) 50Mi (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message


Normal NodeHasSufficientMemory 3m10s (x5 over 3m10s) kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 3m10s (x5 over 3m10s) kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 3m10s (x4 over 3m10s) kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Warning FailedNodeAllocatableEnforcement 3m10s kubelet, minikube Failed to update Node Allocatable Limits ["kubepods"]: failed to set supported cgroup subsystems for cgroup [kubepods]: failed to find subsystem mount for required subsystem: pids
Normal Starting 2m54s kubelet, minikube Starting kubelet.
Normal NodeHasSufficientMemory 2m54s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m54s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m54s kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal Starting 2m46s kubelet, minikube Starting kubelet.
Normal NodeHasSufficientMemory 2m46s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m46s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m46s kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal NodeHasSufficientPID 2m39s kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal NodeHasSufficientMemory 2m39s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m39s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal Starting 2m39s kubelet, minikube Starting kubelet.
Normal Starting 2m32s kubelet, minikube Starting kubelet.
Normal NodeHasSufficientMemory 2m32s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m32s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m32s kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal Starting 2m25s kubelet, minikube Starting kubelet.
Normal NodeHasSufficientMemory 2m25s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m25s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m25s kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal NodeHasSufficientPID 2m17s kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal Starting 2m17s kubelet, minikube Starting kubelet.
Normal NodeHasNoDiskPressure 2m17s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientMemory 2m17s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal Starting 2m10s kubelet, minikube Starting kubelet.
Normal NodeHasSufficientMemory 2m10s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m10s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m10s kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal Starting 2m2s kubelet, minikube Starting kubelet.
Normal NodeHasSufficientMemory 2m2s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m2s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m2s kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal Starting 115s kubelet, minikube Starting kubelet.
Normal NodeHasSufficientMemory 115s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasSufficientPID 115s kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal NodeHasNoDiskPressure 115s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal Starting 107s kubelet, minikube Starting kubelet.
Normal NodeHasSufficientMemory 107s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 107s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 107s kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal Starting 100s kubelet, minikube Starting kubelet.
Normal NodeHasSufficientMemory 100s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 100s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 100s kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal Starting 92s kubelet, minikube Starting kubelet.
Normal NodeHasSufficientMemory 92s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasSufficientPID 92s kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal NodeHasNoDiskPressure 92s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal Starting 85s kubelet, minikube Starting kubelet.
Normal NodeHasSufficientMemory 85s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 85s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 85s kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal Starting 77s kubelet, minikube Starting kubelet.
Normal NodeHasSufficientMemory 77s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 77s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 77s kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal Starting 70s kubelet, minikube Starting kubelet.
Normal NodeHasSufficientMemory 70s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 70s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 70s kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal NodeHasSufficientPID 62s kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal NodeHasSufficientMemory 62s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 62s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal Starting 62s kubelet, minikube Starting kubelet.
Normal Starting 55s kubelet, minikube Starting kubelet.
Normal NodeHasSufficientMemory 55s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 55s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 55s kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal Starting 47s kubelet, minikube Starting kubelet.
Normal NodeHasSufficientMemory 47s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 47s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 47s kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal NodeHasSufficientPID 40s kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal NodeHasSufficientMemory 40s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 40s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal Starting 40s kubelet, minikube Starting kubelet.
Normal Starting 32s kubelet, minikube Starting kubelet.
Normal NodeHasSufficientMemory 32s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 32s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 32s kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal Starting 25s kubelet, minikube Starting kubelet.
Normal NodeHasSufficientMemory 25s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 25s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 25s kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal Starting 17s kubelet, minikube Starting kubelet.
Normal NodeHasSufficientMemory 17s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasSufficientPID 17s kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal NodeHasNoDiskPressure 17s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientMemory 10s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal Starting 10s kubelet, minikube Starting kubelet.
Normal NodeHasNoDiskPressure 10s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 10s kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal Starting 2s kubelet, minikube Starting kubelet.
Normal NodeHasSufficientMemory 2s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2s kubelet, minikube Node minikube status is now: NodeHasSufficientPID

==> dmesg <==
[Jul18 05:37] systemd-fstab-generator[52440]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul18 11:37] systemd-fstab-generator[50154]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul18 17:37] systemd-fstab-generator[44990]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul18 23:37] systemd-fstab-generator[40881]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul19 05:37] systemd-fstab-generator[36344]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul19 11:37] systemd-fstab-generator[34144]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul19 17:37] systemd-fstab-generator[28939]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul19 23:37] systemd-fstab-generator[24785]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul20 01:35] systemd-fstab-generator[65022]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul20 05:22] systemd-fstab-generator[110191]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul20 05:23] systemd-fstab-generator[110364]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul20 05:24] systemd-fstab-generator[110483]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[ +2.796808] systemd-fstab-generator[110870]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[ +18.140954] systemd-fstab-generator[112205]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul20 05:26] systemd-fstab-generator[117323]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul20 05:37] systemd-fstab-generator[123141]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul20 11:37] systemd-fstab-generator[4023]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul20 17:37] systemd-fstab-generator[63123]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul20 23:37] systemd-fstab-generator[31942]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 05:37] systemd-fstab-generator[5139]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 07:28] systemd-fstab-generator[84894]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 07:30] systemd-fstab-generator[85161]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[ +13.594292] systemd-fstab-generator[85297]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[ +6.794363] systemd-fstab-generator[85367]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[ +1.429949] systemd-fstab-generator[85572]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[ +2.556154] systemd-fstab-generator[85950]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 07:32] systemd-fstab-generator[89986]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 07:35] systemd-fstab-generator[95248]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[ +7.413234] systemd-fstab-generator[95589]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[ +1.795417] systemd-fstab-generator[95786]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[ +2.981647] systemd-fstab-generator[96146]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 07:37] systemd-fstab-generator[100059]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 07:41] systemd-fstab-generator[106619]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[ +27.338319] systemd-fstab-generator[107758]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[ +15.088659] systemd-fstab-generator[108197]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 07:42] systemd-fstab-generator[108448]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 07:44] systemd-fstab-generator[111004]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 07:46] systemd-fstab-generator[112673]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 07:48] systemd-fstab-generator[115320]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 07:56] systemd-fstab-generator[122877]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 07:57] systemd-fstab-generator[123507]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 07:58] systemd-fstab-generator[127690]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 08:07] systemd-fstab-generator[30225]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 08:09] systemd-fstab-generator[30698]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[ +4.108791] systemd-fstab-generator[31109]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[ +19.365822] systemd-fstab-generator[31768]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 08:16] systemd-fstab-generator[38093]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 08:19] systemd-fstab-generator[39833]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[ +3.431086] systemd-fstab-generator[40246]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 08:21] systemd-fstab-generator[42489]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 08:25] systemd-fstab-generator[46138]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 08:27] systemd-fstab-generator[48231]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 08:49] systemd-fstab-generator[67085]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 08:50] systemd-fstab-generator[73846]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 08:51] systemd-fstab-generator[75593]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[ +54.567049] systemd-fstab-generator[81482]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 08:52] systemd-fstab-generator[81819]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[ +3.618974] systemd-fstab-generator[82220]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 08:54] systemd-fstab-generator[86445]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?
[Jul21 08:57] systemd-fstab-generator[92167]: Failed to create mount unit file /run/systemd/generator/var-lib-docker.mount, as it already exists. Duplicate entry in /etc/fstab?

==> etcd [0d753c127dc6] <==
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-07-21 09:11:27.518533 I | etcdmain: etcd Version: 3.4.3
2020-07-21 09:11:27.518604 I | etcdmain: Git SHA: 3cf2f69b5
2020-07-21 09:11:27.518611 I | etcdmain: Go Version: go1.12.12
2020-07-21 09:11:27.518634 I | etcdmain: Go OS/Arch: linux/amd64
2020-07-21 09:11:27.518641 I | etcdmain: setting maximum number of CPUs to 16, total number of available CPUs is 16
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-07-21 09:11:27.518751 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-07-21 09:11:27.520080 I | embed: name = minikube
2020-07-21 09:11:27.520099 I | embed: data dir = /var/lib/minikube/etcd
2020-07-21 09:11:27.520106 I | embed: member dir = /var/lib/minikube/etcd/member
2020-07-21 09:11:27.520112 I | embed: heartbeat = 100ms
2020-07-21 09:11:27.520117 I | embed: election = 1000ms
2020-07-21 09:11:27.520123 I | embed: snapshot count = 10000
2020-07-21 09:11:27.520133 I | embed: advertise client URLs = https://172.17.0.2:2379
2020-07-21 09:11:27.587400 I | etcdserver: starting member b8e14bda2255bc24 in cluster 38b0e74a458e7a1f
raft2020/07/21 09:11:27 INFO: b8e14bda2255bc24 switched to configuration voters=()
raft2020/07/21 09:11:27 INFO: b8e14bda2255bc24 became follower at term 0
raft2020/07/21 09:11:27 INFO: newRaft b8e14bda2255bc24 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
raft2020/07/21 09:11:27 INFO: b8e14bda2255bc24 became follower at term 1
raft2020/07/21 09:11:27 INFO: b8e14bda2255bc24 switched to configuration voters=(13322012572989635620)
2020-07-21 09:11:27.591343 W | auth: simple token is not cryptographically signed
2020-07-21 09:11:27.593987 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
2020-07-21 09:11:27.594135 I | etcdserver: b8e14bda2255bc24 as single-node; fast-forwarding 9 ticks (election ticks 10)
raft2020/07/21 09:11:27 INFO: b8e14bda2255bc24 switched to configuration voters=(13322012572989635620)
2020-07-21 09:11:27.594800 I | etcdserver/membership: added member b8e14bda2255bc24 [https://172.17.0.2:2380] to cluster 38b0e74a458e7a1f
2020-07-21 09:11:27.596285 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-07-21 09:11:27.596379 I | embed: listening for peers on 172.17.0.2:2380
2020-07-21 09:11:27.596674 I | embed: listening for metrics on http://127.0.0.1:2381
raft2020/07/21 09:11:27 INFO: b8e14bda2255bc24 is starting a new election at term 1
raft2020/07/21 09:11:27 INFO: b8e14bda2255bc24 became candidate at term 2
raft2020/07/21 09:11:27 INFO: b8e14bda2255bc24 received MsgVoteResp from b8e14bda2255bc24 at term 2
raft2020/07/21 09:11:27 INFO: b8e14bda2255bc24 became leader at term 2
raft2020/07/21 09:11:27 INFO: raft.node: b8e14bda2255bc24 elected leader b8e14bda2255bc24 at term 2
2020-07-21 09:11:27.788950 I | etcdserver: published {Name:minikube ClientURLs:[https://172.17.0.2:2379]} to cluster 38b0e74a458e7a1f
2020-07-21 09:11:27.788966 I | embed: ready to serve client requests
2020-07-21 09:11:27.790561 I | embed: serving client requests on 127.0.0.1:2379
2020-07-21 09:11:27.790663 I | etcdserver: setting up the initial cluster version to 3.4
2020-07-21 09:11:27.797692 N | etcdserver/membership: set the initial cluster version to 3.4
2020-07-21 09:11:27.797784 I | etcdserver/api: enabled capabilities for version 3.4
2020-07-21 09:11:27.797809 I | embed: ready to serve client requests
2020-07-21 09:11:27.799240 I | embed: serving client requests on 172.17.0.2:2379

==> kernel <==
09:14:37 up 12 days, 1:08, 0 users, load average: 0.11, 0.22, 0.23
Linux minikube 4.1.12-124.39.5.1.el7uek.x86_64 #2 SMP Tue Jun 9 20:03:37 PDT 2020 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 19.10"

==> kube-apiserver [b765c7383ac2] <==
W0721 09:11:30.299644 1 genericapiserver.go:409] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W0721 09:11:30.310507 1 genericapiserver.go:409] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0721 09:11:30.327597 1 genericapiserver.go:409] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0721 09:11:30.331227 1 genericapiserver.go:409] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0721 09:11:30.348568 1 genericapiserver.go:409] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0721 09:11:30.371735 1 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources.
W0721 09:11:30.371757 1 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources.
I0721 09:11:30.383362 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0721 09:11:30.383383 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0721 09:11:30.385380 1 client.go:361] parsed scheme: "endpoint"
I0721 09:11:30.385443 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0721 09:11:30.395207 1 client.go:361] parsed scheme: "endpoint"
I0721 09:11:30.395243 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0721 09:11:32.806409 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0721 09:11:32.806763 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0721 09:11:32.806826 1 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
I0721 09:11:32.807530 1 secure_serving.go:178] Serving securely on [::]:8443
I0721 09:11:32.807687 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0721 09:11:32.807739 1 controller.go:81] Starting OpenAPI AggregationController
I0721 09:11:32.807863 1 autoregister_controller.go:141] Starting autoregister controller
I0721 09:11:32.807891 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0721 09:11:32.820757 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0721 09:11:32.820776 1 shared_informer.go:223] Waiting for caches to sync for cluster_authentication_trust_controller
I0721 09:11:32.821368 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0721 09:11:32.821380 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0721 09:11:32.821403 1 available_controller.go:387] Starting AvailableConditionController
I0721 09:11:32.821429 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0721 09:11:32.821469 1 crd_finalizer.go:266] Starting CRDFinalizer
I0721 09:11:32.821489 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0721 09:11:32.821496 1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
I0721 09:11:32.823059 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0721 09:11:32.823108 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0721 09:11:32.823410 1 controller.go:86] Starting OpenAPI controller
I0721 09:11:32.823431 1 customresource_discovery_controller.go:209] Starting DiscoveryController
I0721 09:11:32.823459 1 naming_controller.go:291] Starting NamingConditionController
I0721 09:11:32.823475 1 establishing_controller.go:76] Starting EstablishingController
I0721 09:11:32.823491 1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
I0721 09:11:32.823514 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
E0721 09:11:32.829426 1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.17.0.2, ResourceVersion: 0, AdditionalErrorMsg:
I0721 09:11:32.908046 1 cache.go:39] Caches are synced for autoregister controller
I0721 09:11:32.920925 1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller
I0721 09:11:32.921561 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0721 09:11:32.921609 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0721 09:11:32.923228 1 shared_informer.go:230] Caches are synced for crd-autoregister
I0721 09:11:33.806270 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0721 09:11:33.806321 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0721 09:11:33.826503 1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
I0721 09:11:33.830419 1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
I0721 09:11:33.830642 1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
I0721 09:11:34.195186 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0721 09:11:34.235108 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0721 09:11:34.338863 1 lease.go:224] Resetting endpoints for master service "kubernetes" to [172.17.0.2]
I0721 09:11:34.339832 1 controller.go:606] quota admission added evaluator for: endpoints
I0721 09:11:34.349356 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0721 09:11:34.943218 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0721 09:11:35.213722 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0721 09:11:35.990292 1 controller.go:606] quota admission added evaluator for: deployments.apps
I0721 09:11:36.201323 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0721 09:11:50.830084 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0721 09:11:51.264533 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps

==> kube-controller-manager [bc1da187d974] <==
I0721 09:11:50.725530 1 shared_informer.go:223] Waiting for caches to sync for deployment
I0721 09:11:50.744247 1 controllermanager.go:533] Started "cronjob"
I0721 09:11:50.744379 1 cronjob_controller.go:97] Starting CronJob Manager
E0721 09:11:50.765983 1 core.go:89] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0721 09:11:50.766211 1 controllermanager.go:525] Skipping "service"
I0721 09:11:50.784315 1 controllermanager.go:533] Started "endpoint"
I0721 09:11:50.785227 1 endpoints_controller.go:182] Starting endpoint controller
I0721 09:11:50.785402 1 shared_informer.go:223] Waiting for caches to sync for endpoint
I0721 09:11:50.785729 1 shared_informer.go:223] Waiting for caches to sync for garbage collector
I0721 09:11:50.809299 1 shared_informer.go:230] Caches are synced for ReplicaSet
I0721 09:11:50.809382 1 shared_informer.go:230] Caches are synced for service account
W0721 09:11:50.811036 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0721 09:11:50.811888 1 shared_informer.go:230] Caches are synced for PV protection
I0721 09:11:50.812001 1 shared_informer.go:230] Caches are synced for HPA
I0721 09:11:50.812533 1 shared_informer.go:230] Caches are synced for node
I0721 09:11:50.812557 1 range_allocator.go:172] Starting range CIDR allocator
I0721 09:11:50.812564 1 shared_informer.go:223] Waiting for caches to sync for cidrallocator
I0721 09:11:50.812572 1 shared_informer.go:230] Caches are synced for cidrallocator
I0721 09:11:50.822224 1 shared_informer.go:230] Caches are synced for certificate-csrsigning
I0721 09:11:50.825613 1 shared_informer.go:230] Caches are synced for deployment
I0721 09:11:50.827843 1 range_allocator.go:373] Set node minikube PodCIDR to [10.244.0.0/24]
I0721 09:11:50.838063 1 shared_informer.go:230] Caches are synced for bootstrap_signer
I0721 09:11:50.838155 1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator
I0721 09:11:50.845151 1 shared_informer.go:230] Caches are synced for ReplicationController
I0721 09:11:50.855142 1 shared_informer.go:230] Caches are synced for namespace
I0721 09:11:50.860092 1 shared_informer.go:230] Caches are synced for GC
I0721 09:11:50.882726 1 shared_informer.go:230] Caches are synced for certificate-csrapproving
I0721 09:11:50.882726 1 shared_informer.go:230] Caches are synced for TTL
I0721 09:11:50.883030 1 shared_informer.go:230] Caches are synced for endpoint_slice
I0721 09:11:50.883389 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"f6782f7d-c5d3-46a3-a878-7f252702ed61", APIVersion:"apps/v1", ResourceVersion:"228", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 2
I0721 09:11:50.885735 1 shared_informer.go:230] Caches are synced for endpoint
I0721 09:11:50.902659 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"dc13fb0c-fe12-4a69-b66a-2ba00467016d", APIVersion:"apps/v1", ResourceVersion:"364", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-ztb7p
E0721 09:11:50.905734 1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
I0721 09:11:50.914200 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"dc13fb0c-fe12-4a69-b66a-2ba00467016d", APIVersion:"apps/v1", ResourceVersion:"364", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-7fb5p
I0721 09:11:50.917628 1 event.go:278] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"", Name:"kube-dns", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FailedToCreateEndpoint' Failed to create endpoint for service kube-system/kube-dns: endpoints "kube-dns" already exists
I0721 09:11:50.999159 1 shared_informer.go:230] Caches are synced for job
I0721 09:11:51.259215 1 shared_informer.go:230] Caches are synced for daemon sets
I0721 09:11:51.276935 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"a871297b-34f9-4b0f-931c-c1ed00ecf3e0", APIVersion:"apps/v1", ResourceVersion:"252", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-cv7kb
I0721 09:11:51.277343 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"f67beda3-2d3c-4448-bd68-d638fc6b96cd", APIVersion:"apps/v1", ResourceVersion:"238", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-vjnqz
I0721 09:11:51.286349 1 shared_informer.go:230] Caches are synced for taint
I0721 09:11:51.286438 1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone:
I0721 09:11:51.286589 1 taint_manager.go:187] Starting NoExecuteTaintManager
I0721 09:11:51.286881 1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"87e6749e-dec9-4a3d-98cd-00aa8b21f727", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
W0721 09:11:51.293415 1 node_lifecycle_controller.go:1048] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0721 09:11:51.293519 1 node_lifecycle_controller.go:1199] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
E0721 09:11:51.296157 1 daemon_controller.go:292] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"f67beda3-2d3c-4448-bd68-d638fc6b96cd", ResourceVersion:"238", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63730919496, loc:(*time.Location)(0x6d021e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001800a80), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001800aa0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001800ae0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001ebdc40), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001800b40), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), 
FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001800b60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001800ba0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001857cc0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), 
RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001fa03d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0004056c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0013ea980)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001fa0428)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
I0721 09:11:51.309097 1 shared_informer.go:230] Caches are synced for disruption
I0721 09:11:51.309121 1 disruption.go:339] Sending events to api server.
I0721 09:11:51.402259 1 shared_informer.go:230] Caches are synced for persistent volume
I0721 09:11:51.408459 1 shared_informer.go:230] Caches are synced for expand
I0721 09:11:51.408478 1 shared_informer.go:230] Caches are synced for resource quota
I0721 09:11:51.409709 1 shared_informer.go:230] Caches are synced for stateful set
I0721 09:11:51.413972 1 shared_informer.go:230] Caches are synced for garbage collector
I0721 09:11:51.413992 1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0721 09:11:51.424852 1 shared_informer.go:230] Caches are synced for PVC protection
I0721 09:11:51.460570 1 shared_informer.go:230] Caches are synced for attach detach
I0721 09:11:51.485948 1 shared_informer.go:230] Caches are synced for garbage collector
I0721 09:11:51.855834 1 request.go:621] Throttling request took 1.039122077s, request: GET:https://172.17.0.2:8443/apis/authorization.k8s.io/v1?timeout=32s
I0721 09:11:52.457014 1 shared_informer.go:223] Waiting for caches to sync for resource quota
I0721 09:11:52.457068 1 shared_informer.go:230] Caches are synced for resource quota

==> kube-scheduler [924b96c9a517] <==
I0721 09:11:27.719775 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0721 09:11:27.719857 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0721 09:11:28.408220 1 serving.go:313] Generated self-signed cert in-memory
W0721 09:11:32.892639 1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0721 09:11:32.892670 1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0721 09:11:32.892681 1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
W0721 09:11:32.892690 1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0721 09:11:32.907030 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0721 09:11:32.907076 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
W0721 09:11:32.908724 1 authorization.go:47] Authorization is disabled
W0721 09:11:32.908741 1 authentication.go:40] Authentication is disabled
I0721 09:11:32.908757 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0721 09:11:32.910447 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0721 09:11:32.910521 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0721 09:11:32.911630 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0721 09:11:32.911742 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0721 09:11:32.914113 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0721 09:11:32.914281 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0721 09:11:32.918825 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0721 09:11:32.920267 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0721 09:11:32.920974 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0721 09:11:32.921497 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0721 09:11:32.921521 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0721 09:11:32.921765 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0721 09:11:32.921802 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0721 09:11:32.921918 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0721 09:11:32.923390 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0721 09:11:32.923711 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0721 09:11:32.924804 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0721 09:11:32.987774 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0721 09:11:32.987887 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0721 09:11:32.988038 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0721 09:11:32.988140 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0721 09:11:32.989048 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
I0721 09:11:35.010771 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0721 09:11:35.812647 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler...
I0721 09:11:35.822540 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler
E0721 09:11:39.347157 1 factory.go:503] pod: kube-system/storage-provisioner is already present in unschedulable queue
E0721 09:13:05.852131 1 scheduler.go:385] Error updating the condition of the pod kube-system/storage-provisioner: Operation cannot be fulfilled on pods "storage-provisioner": the object has been modified; please apply your changes to the latest version and try again

==> kubelet <==
-- Logs begin at Tue 2020-07-21 09:10:55 UTC, end at Tue 2020-07-21 09:14:38 UTC. --
Jul 21 09:14:34 minikube kubelet[9938]: I0721 09:14:34.203505 9938 docker_service.go:353] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}
Jul 21 09:14:34 minikube kubelet[9938]: I0721 09:14:34.203695 9938 kubelet_network.go:77] Setting Pod CIDR: -> 10.244.0.0/24
Jul 21 09:14:34 minikube kubelet[9938]: I0721 09:14:34.238373 9938 kubelet_node_status.go:70] Attempting to register node minikube
Jul 21 09:14:34 minikube kubelet[9938]: E0721 09:14:34.250009 9938 kubelet.go:1845] skipping pod synchronization - container runtime status check may not have completed yet
Jul 21 09:14:34 minikube kubelet[9938]: I0721 09:14:34.256390 9938 kubelet_node_status.go:112] Node minikube was previously registered
Jul 21 09:14:34 minikube kubelet[9938]: I0721 09:14:34.256474 9938 kubelet_node_status.go:73] Successfully registered node minikube
Jul 21 09:14:34 minikube kubelet[9938]: E0721 09:14:34.450219 9938 kubelet.go:1845] skipping pod synchronization - container runtime status check may not have completed yet
Jul 21 09:14:34 minikube kubelet[9938]: I0721 09:14:34.454795 9938 cpu_manager.go:184] [cpumanager] starting with none policy
Jul 21 09:14:34 minikube kubelet[9938]: I0721 09:14:34.454830 9938 cpu_manager.go:185] [cpumanager] reconciling every 10s
Jul 21 09:14:34 minikube kubelet[9938]: I0721 09:14:34.454853 9938 state_mem.go:36] [cpumanager] initializing new in-memory state store
Jul 21 09:14:34 minikube kubelet[9938]: I0721 09:14:34.455065 9938 state_mem.go:88] [cpumanager] updated default cpuset: ""
Jul 21 09:14:34 minikube kubelet[9938]: I0721 09:14:34.455079 9938 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
Jul 21 09:14:34 minikube kubelet[9938]: I0721 09:14:34.455095 9938 policy_none.go:43] [cpumanager] none policy: Start
Jul 21 09:14:34 minikube kubelet[9938]: F0721 09:14:34.456245 9938 kubelet.go:1383] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: failed to find subsystem mount for required subsystem: pids
Jul 21 09:14:34 minikube systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
Jul 21 09:14:34 minikube systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 21 09:14:35 minikube systemd[1]: kubelet.service: Service RestartSec=600ms expired, scheduling restart.
Jul 21 09:14:35 minikube systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 24.
Jul 21 09:14:35 minikube systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Jul 21 09:14:35 minikube systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jul 21 09:14:35 minikube kubelet[10414]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 21 09:14:35 minikube kubelet[10414]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 21 09:14:35 minikube kubelet[10414]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 21 09:14:35 minikube kubelet[10414]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 21 09:14:35 minikube kubelet[10414]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 21 09:14:35 minikube kubelet[10414]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 21 09:14:35 minikube kubelet[10414]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 21 09:14:35 minikube kubelet[10414]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 21 09:14:35 minikube kubelet[10414]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 21 09:14:35 minikube kubelet[10414]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 21 09:14:35 minikube kubelet[10414]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 21 09:14:35 minikube kubelet[10414]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 21 09:14:35 minikube kubelet[10414]: I0721 09:14:35.207188 10414 server.go:417] Version: v1.18.0
Jul 21 09:14:35 minikube kubelet[10414]: I0721 09:14:35.207531 10414 plugins.go:100] No cloud provider specified.
Jul 21 09:14:35 minikube kubelet[10414]: I0721 09:14:35.207601 10414 server.go:837] Client rotation is on, will bootstrap in background
Jul 21 09:14:35 minikube kubelet[10414]: I0721 09:14:35.210224 10414 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 21 09:14:35 minikube kubelet[10414]: I0721 09:14:35.291879 10414 server.go:646] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Jul 21 09:14:35 minikube kubelet[10414]: I0721 09:14:35.294423 10414 container_manager_linux.go:266] container manager verified user specified cgroup-root exists: []
Jul 21 09:14:35 minikube kubelet[10414]: I0721 09:14:35.294456 10414 container_manager_linux.go:271] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
Jul 21 09:14:35 minikube kubelet[10414]: I0721 09:14:35.294527 10414 topology_manager.go:126] [topologymanager] Creating topology manager with none policy
Jul 21 09:14:35 minikube kubelet[10414]: I0721 09:14:35.294563 10414 container_manager_linux.go:301] [topologymanager] Initializing Topology Manager with none policy
Jul 21 09:14:35 minikube kubelet[10414]: I0721 09:14:35.294573 10414 container_manager_linux.go:306] Creating device plugin manager: true
Jul 21 09:14:35 minikube kubelet[10414]: I0721 09:14:35.294648 10414 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Jul 21 09:14:35 minikube kubelet[10414]: I0721 09:14:35.294671 10414 client.go:92] Start docker client with request timeout=2m0s
Jul 21 09:14:35 minikube kubelet[10414]: W0721 09:14:35.302281 10414 docker_service.go:561] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Jul 21 09:14:35 minikube kubelet[10414]: I0721 09:14:35.302325 10414 docker_service.go:238] Hairpin mode set to "hairpin-veth"
Jul 21 09:14:35 minikube kubelet[10414]: W0721 09:14:35.309729 10414 plugins.go:193] can't set sysctl net/bridge/bridge-nf-call-iptables: open /proc/sys/net/bridge/bridge-nf-call-iptables: no such file or directory
Jul 21 09:14:35 minikube kubelet[10414]: I0721 09:14:35.309815 10414 docker_service.go:253] Docker cri networking managed by kubernetes.io/no-op
Jul 21 09:14:35 minikube kubelet[10414]: I0721 09:14:35.317122 10414 docker_service.go:258] Docker Info: &{ID:KX6N:MJFK:QB5C:TQXV:R2SR:HYOP:2TNZ:BOVD:KVVT:L2FL:OVN7:FTK3 Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:false IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6tables:false Debug:false NFd:65 OomKillDisable:true NGoroutines:69 SystemTime:2020-07-21T09:14:35.310786544Z LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.1.12-124.39.5.1.el7uek.x86_64 OperatingSystem:Ubuntu 19.10 (containerized) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0001f0f50 NCPU:16 MemTotal:61554528256 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy:http://www-proxy-brmdc.us.oracle.com:80/ HTTPSProxy:http://www-proxy-brmdc.us.oracle.com:80/ NoProxy:10.88.105.73,localhost,127.0.0.1,.us.oracle.com,.oraclecorp.com,172.17.0.3 Name:minikube Labels:[provider=docker] ExperimentalBuild:false ServerVersion:19.03.2 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster: Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ff48f57fc83a8c44cf4ad5d672424a98ba37ded6 Expected:ff48f57fc83a8c44cf4ad5d672424a98ba37ded6} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled]}
Jul 21 09:14:35 minikube kubelet[10414]: I0721 09:14:35.317235 10414 docker_service.go:271] Setting cgroupDriver to cgroupfs
Jul 21 09:14:35 minikube kubelet[10414]: I0721 09:14:35.334533 10414 remote_runtime.go:59] parsed scheme: ""
Jul 21 09:14:35 minikube kubelet[10414]: I0721 09:14:35.334577 10414 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
Jul 21 09:14:35 minikube kubelet[10414]: I0721 09:14:35.334615 10414 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0 }] }
Jul 21 09:14:35 minikube kubelet[10414]: I0721 09:14:35.334628 10414 clientconn.go:933] ClientConn switching balancer to "pick_first"
Jul 21 09:14:35 minikube kubelet[10414]: I0721 09:14:35.334684 10414 remote_image.go:50] parsed scheme: ""
Jul 21 09:14:35 minikube kubelet[10414]: I0721 09:14:35.334694 10414 remote_image.go:50] scheme "" not registered, fallback to default scheme
Jul 21 09:14:35 minikube kubelet[10414]: I0721 09:14:35.334721 10414 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0 }] }
Jul 21 09:14:35 minikube kubelet[10414]: I0721 09:14:35.334730 10414 clientconn.go:933] ClientConn switching balancer to "pick_first"
Jul 21 09:14:35 minikube kubelet[10414]: I0721 09:14:35.334764 10414 kubelet.go:292] Adding pod path: /etc/kubernetes/manifests
Jul 21 09:14:35 minikube kubelet[10414]: I0721 09:14:35.334787 10414 kubelet.go:317] Watching apiserver

@priyawadhwa

Hey @Lavie526 -- could you please upgrade to minikube v1.12.2 and then run the following:

minikube delete
minikube start --driver docker

If that fails, please provide the output of the following commands (a combined sketch of the whole sequence follows the list):

  • kubectl get po -A
  • docker network ls
  • docker ps -a
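
Put together, the suggested sequence looks roughly like this (a sketch only, using the driver flag and diagnostics requested above):

# upgrade the minikube binary to v1.12.2 or newer, then recreate the cluster
minikube delete
minikube start --driver=docker

# if the start still fails, collect these outputs and attach them to the issue
kubectl get po -A
docker network ls
docker ps -a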

@priyawadhwa priyawadhwa added the triage/needs-information label Aug 12, 2020
@afbjorklund afbjorklund added the co/none-driver and triage/duplicate labels and removed the long-term-support, triage/needs-information, and kind/support labels Sep 13, 2020
@afbjorklund
Collaborator

This sounds like a duplicate of #3760

Using sudo kubectl is correct (for now)
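
For quick checks, kubectl can also be pointed at the root-owned configuration explicitly; a minimal sketch, assuming the cluster config ended up under /root/.kube/config (adjust the path if minikube wrote it elsewhere):

sudo kubectl --kubeconfig=/root/.kube/config get nodes

This leaves the root-owned files where the none driver wrote them and still works from a normal shell, at the cost of keeping sudo in the loop.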

@afbjorklund afbjorklund added the kind/bug label Sep 13, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Dec 12, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Jan 11, 2021
@priyawadhwa priyawadhwa added the lifecycle/frozen label and removed the lifecycle/rotten label Jan 27, 2021
@spowelljr spowelljr added the priority/backlog label and removed the priority/important-longterm label May 26, 2021