
tunnel on windows: "sudo": executable file not found in %PATH% #9078

Closed

smacdav opened this issue Aug 25, 2020 · 16 comments · Fixed by #9753
Assignees
Labels
good first issue: Denotes an issue ready for a new contributor, according to the "help wanted" guidelines.
help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
kind/bug: Categorizes issue or PR as related to a bug.
os/windows
priority/important-soon: Must be staffed and worked on either currently, or very soon, ideally in time for the next release.
Milestone

Comments


smacdav commented Aug 25, 2020

Steps to reproduce the issue:

  1. Start minikube: minikube start --driver=docker
  2. Install Contour: kubectl apply -f https://projectcontour.io/quickstart/contour.yaml
  3. In another cmd prompt, run minikube tunnel, which produces the following output:
! The service envoy requires privileged ports to be exposed: [80 443]
* sudo permission will be asked for it.
* Starting tunnel for service envoy.
E0825 08:21:37.881647   14188 ssh_tunnel.go:113] error starting ssh tunnel: exec: "sudo": executable file not found in %PATH%
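
For context on where that E-line comes from: minikube shells out to sudo before opening the tunnel for the privileged ports 80/443, and that lookup can never succeed on Windows because no sudo executable exists there. Below is a minimal Go sketch of the failing pattern together with a hypothetical platform guard; this is an illustration of the mechanism, not minikube's actual ssh_tunnel.go, and the function name and arguments are made up for the example.

package main

import (
	"fmt"
	"os/exec"
	"runtime"
)

// buildTunnelCommand sketches the pattern behind the error above: on Unix
// hosts the ssh command is prefixed with sudo so the tunnel can bind
// privileged ports (< 1024), but Windows has no sudo, so looking it up
// fails with `executable file not found in %PATH%`.
func buildTunnelCommand(sshArgs ...string) (*exec.Cmd, error) {
	if runtime.GOOS == "windows" {
		// Hypothetical guard: skip the sudo prefix on Windows, where
		// escalation does not work via a sudo-style wrapper at all.
		return exec.Command("ssh", sshArgs...), nil
	}
	if _, err := exec.LookPath("sudo"); err != nil {
		return nil, fmt.Errorf("sudo is required to expose privileged ports: %w", err)
	}
	return exec.Command("sudo", append([]string{"ssh"}, sshArgs...)...), nil
}

func main() {
	cmd, err := buildTunnelCommand("-N", "-L", "80:127.0.0.1:80")
	if err != nil {
		fmt.Println("error starting ssh tunnel:", err)
		return
	}
	fmt.Println("would run:", cmd.Args)
}

The point of the sketch is that the sudo prefix is a Unix-only assumption, so the code path needs an explicit platform check before it ever calls LookPath.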

Full output of failed command:

I0825 08:21:34.195504   14188 mustload.go:64] Loading cluster: minikube
I0825 08:21:34.252507   14188 cli_runner.go:109] Run: docker container inspect minikube --format={{.State.Status}}
I0825 08:21:34.376524   14188 host.go:65] Checking if "minikube" exists ...
I0825 08:21:34.401504   14188 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0825 08:21:34.532508   14188 api_server.go:146] Checking apiserver status ...
I0825 08:21:34.576508   14188 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0825 08:21:34.657504   14188 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0825 08:21:34.814523   14188 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:C:\Users\smacd\.minikube\machines\minikube\id_rsa Username:docker}
I0825 08:21:35.914761   14188 ssh_runner.go:188] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.3052279s)
I0825 08:21:35.944767   14188 ssh_runner.go:148] Run: sudo egrep ^[0-9]+:freezer: /proc/2111/cgroup
I0825 08:21:36.794713   14188 api_server.go:162] apiserver freezer: "20:freezer:/docker/686dfdac2f84b49e8b35d29de4306b63f276d8dc79237b1b87b216abfb467578/kubepods/burstable/pod6ff2e3bf96dbdcdd33879625130d5ccc/404c01b15f4715329080a176ded5e49382f1b081f85714bcf570f2086ffa2f18"
I0825 08:21:36.821731   14188 ssh_runner.go:148] Run: sudo cat /sys/fs/cgroup/freezer/docker/686dfdac2f84b49e8b35d29de4306b63f276d8dc79237b1b87b216abfb467578/kubepods/burstable/pod6ff2e3bf96dbdcdd33879625130d5ccc/404c01b15f4715329080a176ded5e49382f1b081f85714bcf570f2086ffa2f18/freezer.state
I0825 08:21:37.660470   14188 api_server.go:184] freezer state: "THAWED"
I0825 08:21:37.661468   14188 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:32780/healthz ...
I0825 08:21:37.672472   14188 api_server.go:241] https://127.0.0.1:32780/healthz returned 200:
ok
I0825 08:21:37.672472   14188 tunnel.go:56] Checking for tunnels to cleanup...
I0825 08:21:37.714501   14188 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
! The service envoy requires privileged ports to be exposed: [80 443]
* sudo permission will be asked for it.
* Starting tunnel for service envoy.
E0825 08:21:37.881647   14188 ssh_tunnel.go:113] error starting ssh tunnel: exec: "sudo": executable file not found in %PATH%
I0825 08:21:37.896677   14188 loadbalancer_patcher.go:121] Patched envoy with IP 127.0.0.1

Full output of minikube start command used, if not already included:

C:\Users\smacd>minikube start --driver=docker
* minikube v1.12.2 on Microsoft Windows 10 Pro 10.0.19041 Build 19041
* Using the docker driver based on user configuration
* Starting control plane node minikube in cluster minikube
* Creating docker container (CPUs=2, Memory=4000MB) ...
* Preparing Kubernetes v1.18.3 on Docker 19.03.8 ...
* Verifying Kubernetes components...
* Enabled addons: default-storageclass, storage-provisioner
* Done! kubectl is now configured to use "minikube"

! C:\Program Files\Docker\Docker\resources\bin\kubectl.exe is version 1.16.6-beta.0, which may be incompatible with Kubernetes 1.18.3.
* You can also use 'minikube kubectl -- get pods' to invoke a matching version
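
For the version-skew warning above, the suggested wrapper can be used in place of the Docker Desktop kubectl; it runs a client matching the cluster version. For example (anything after the -- is passed straight through to kubectl; the -A flag here is just an illustration):

C:\Users\smacd>minikube kubectl -- get pods -A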

Optional: Full output of minikube logs command:

* ==> Docker <==
* -- Logs begin at Tue 2020-08-25 14:53:37 UTC, end at Tue 2020-08-25 15:30:00 UTC. --
* Aug 25 14:53:37 minikube systemd[1]: Starting Docker Application Container Engine...
* Aug 25 14:53:37 minikube dockerd[155]: time="2020-08-25T14:53:37.748286700Z" level=info msg="Starting up"
* Aug 25 14:53:37 minikube dockerd[155]: time="2020-08-25T14:53:37.750958300Z" level=info msg="parsed scheme: \"unix\"" module=grpc
* Aug 25 14:53:37 minikube dockerd[155]: time="2020-08-25T14:53:37.751015300Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
* Aug 25 14:53:37 minikube dockerd[155]: time="2020-08-25T14:53:37.751044900Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
* Aug 25 14:53:37 minikube dockerd[155]: time="2020-08-25T14:53:37.751059000Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
* Aug 25 14:53:37 minikube dockerd[155]: time="2020-08-25T14:53:37.759434200Z" level=info msg="parsed scheme: \"unix\"" module=grpc
* Aug 25 14:53:37 minikube dockerd[155]: time="2020-08-25T14:53:37.759553900Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
* Aug 25 14:53:37 minikube dockerd[155]: time="2020-08-25T14:53:37.759581300Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
* Aug 25 14:53:37 minikube dockerd[155]: time="2020-08-25T14:53:37.759686200Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
* Aug 25 14:53:38 minikube dockerd[155]: time="2020-08-25T14:53:38.143400800Z" level=warning msg="Your kernel does not support cgroup blkio weight"
* Aug 25 14:53:38 minikube dockerd[155]: time="2020-08-25T14:53:38.143463100Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
* Aug 25 14:53:38 minikube dockerd[155]: time="2020-08-25T14:53:38.143479200Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
* Aug 25 14:53:38 minikube dockerd[155]: time="2020-08-25T14:53:38.143487600Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
* Aug 25 14:53:38 minikube dockerd[155]: time="2020-08-25T14:53:38.143495100Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
* Aug 25 14:53:38 minikube dockerd[155]: time="2020-08-25T14:53:38.143512700Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
* Aug 25 14:53:38 minikube dockerd[155]: time="2020-08-25T14:53:38.143772400Z" level=info msg="Loading containers: start."
* Aug 25 14:53:38 minikube dockerd[155]: time="2020-08-25T14:53:38.146070400Z" level=warning msg="Running modprobe bridge br_netfilter failed with message: modprobe: WARNING: Module bridge not found in directory /lib/modules/4.19.104-microsoft-standard\nmodprobe: WARNING: Module br_netfilter not found in directory /lib/modules/4.19.104-microsoft-standard\n, error: exit status 1"
* Aug 25 14:53:38 minikube dockerd[155]: time="2020-08-25T14:53:38.285538900Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
* Aug 25 14:53:38 minikube dockerd[155]: time="2020-08-25T14:53:38.374970500Z" level=info msg="Loading containers: done."
* Aug 25 14:53:38 minikube dockerd[155]: time="2020-08-25T14:53:38.472197400Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8
* Aug 25 14:53:38 minikube dockerd[155]: time="2020-08-25T14:53:38.472388500Z" level=info msg="Daemon has completed initialization"
* Aug 25 14:53:38 minikube dockerd[155]: time="2020-08-25T14:53:38.521104500Z" level=info msg="API listen on /run/docker.sock"
* Aug 25 14:53:38 minikube systemd[1]: Started Docker Application Container Engine.
* Aug 25 14:54:05 minikube systemd[1]: docker.service: Current command vanished from the unit file, execution of the command list won't be resumed.
* Aug 25 14:54:07 minikube systemd[1]: Stopping Docker Application Container Engine...
* Aug 25 14:54:07 minikube dockerd[155]: time="2020-08-25T14:54:07.311309400Z" level=info msg="Processing signal 'terminated'"
* Aug 25 14:54:07 minikube dockerd[155]: time="2020-08-25T14:54:07.312473200Z" level=info msg="Daemon shutdown complete"
* Aug 25 14:54:07 minikube systemd[1]: docker.service: Succeeded.
* Aug 25 14:54:07 minikube systemd[1]: Stopped Docker Application Container Engine.
* Aug 25 14:54:07 minikube systemd[1]: Starting Docker Application Container Engine...
* Aug 25 14:54:07 minikube dockerd[379]: time="2020-08-25T14:54:07.374680700Z" level=info msg="Starting up"
* Aug 25 14:54:07 minikube dockerd[379]: time="2020-08-25T14:54:07.376902400Z" level=info msg="parsed scheme: \"unix\"" module=grpc
* Aug 25 14:54:07 minikube dockerd[379]: time="2020-08-25T14:54:07.376940300Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
* Aug 25 14:54:07 minikube dockerd[379]: time="2020-08-25T14:54:07.376958700Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
* Aug 25 14:54:07 minikube dockerd[379]: time="2020-08-25T14:54:07.376967700Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
* Aug 25 14:54:07 minikube dockerd[379]: time="2020-08-25T14:54:07.378149200Z" level=info msg="parsed scheme: \"unix\"" module=grpc
* Aug 25 14:54:07 minikube dockerd[379]: time="2020-08-25T14:54:07.378188900Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
* Aug 25 14:54:07 minikube dockerd[379]: time="2020-08-25T14:54:07.378211200Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
* Aug 25 14:54:07 minikube dockerd[379]: time="2020-08-25T14:54:07.378222300Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
* Aug 25 14:54:07 minikube dockerd[379]: time="2020-08-25T14:54:07.390167300Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
* Aug 25 14:54:07 minikube dockerd[379]: time="2020-08-25T14:54:07.408914800Z" level=warning msg="Your kernel does not support cgroup blkio weight"
* Aug 25 14:54:07 minikube dockerd[379]: time="2020-08-25T14:54:07.408953600Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
* Aug 25 14:54:07 minikube dockerd[379]: time="2020-08-25T14:54:07.408962000Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
* Aug 25 14:54:07 minikube dockerd[379]: time="2020-08-25T14:54:07.408966900Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
* Aug 25 14:54:07 minikube dockerd[379]: time="2020-08-25T14:54:07.408971700Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
* Aug 25 14:54:07 minikube dockerd[379]: time="2020-08-25T14:54:07.408976200Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
* Aug 25 14:54:07 minikube dockerd[379]: time="2020-08-25T14:54:07.409143200Z" level=info msg="Loading containers: start."
* Aug 25 14:54:07 minikube dockerd[379]: time="2020-08-25T14:54:07.410698800Z" level=warning msg="Running modprobe bridge br_netfilter failed with message: modprobe: WARNING: Module bridge not found in directory /lib/modules/4.19.104-microsoft-standard\nmodprobe: WARNING: Module br_netfilter not found in directory /lib/modules/4.19.104-microsoft-standard\n, error: exit status 1"
* Aug 25 14:54:07 minikube dockerd[379]: time="2020-08-25T14:54:07.504051000Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
* Aug 25 14:54:07 minikube dockerd[379]: time="2020-08-25T14:54:07.555446800Z" level=info msg="Loading containers: done."
* Aug 25 14:54:07 minikube dockerd[379]: time="2020-08-25T14:54:07.595143800Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8
* Aug 25 14:54:07 minikube dockerd[379]: time="2020-08-25T14:54:07.595329100Z" level=info msg="Daemon has completed initialization"
* Aug 25 14:54:07 minikube dockerd[379]: time="2020-08-25T14:54:07.626752300Z" level=info msg="API listen on /var/run/docker.sock"
* Aug 25 14:54:07 minikube dockerd[379]: time="2020-08-25T14:54:07.626754000Z" level=info msg="API listen on [::]:2376"
* Aug 25 14:54:07 minikube systemd[1]: Started Docker Application Container Engine.
* Aug 25 15:01:45 minikube dockerd[379]: time="2020-08-25T15:01:45.095060300Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Aug 25 15:01:45 minikube dockerd[379]: time="2020-08-25T15:01:45.673864700Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* Aug 25 15:01:52 minikube dockerd[379]: time="2020-08-25T15:01:52.458234900Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
* e1a46d64284dd envoyproxy/envoy@sha256:f455fd7b45c2fe5d2862e19a3858284bbffc9c72359e0012641b59859d5c68db 27 minutes ago Running envoy 0 97586ff137a82
* edb28c809e5b7 d66222aa76f72 28 minutes ago Running shutdown-manager 0 97586ff137a82
* eef40fcaff8c8 projectcontour/contour@sha256:0fc1166bfc9973fbdf6cc95fe216963be6d44c113546b5247de23ed1337f19f9 28 minutes ago Exited envoy-initconfig 0 97586ff137a82
* aa219a9e766f4 projectcontour/contour@sha256:0fc1166bfc9973fbdf6cc95fe216963be6d44c113546b5247de23ed1337f19f9 28 minutes ago Running contour 0 a659a35a2ba66
* 0982b8c8bdbc7 projectcontour/contour@sha256:0fc1166bfc9973fbdf6cc95fe216963be6d44c113546b5247de23ed1337f19f9 28 minutes ago Running contour 0 7fac7039d0505
* d30d6d7deaea7 projectcontour/contour@sha256:0fc1166bfc9973fbdf6cc95fe216963be6d44c113546b5247de23ed1337f19f9 28 minutes ago Exited contour 0 01413466254b2
* 9da258754898b 67da37a9a360e 34 minutes ago Running coredns 0 ff16717f82385
* a988b16635665 9c3ca9f065bb1 34 minutes ago Running storage-provisioner 0 7267a9bb4240b
* 97524a5fe22eb 3439b7546f29b 34 minutes ago Running kube-proxy 0 f4becee944426
* 329a9c61a4f94 303ce5db0e90d 34 minutes ago Running etcd 0 c7ec97204a4ba
* 2c7a6abcc048a 76216c34ed0c7 34 minutes ago Running kube-scheduler 0 2cd2fc26caf0b
* cc2c9b78a8119 da26705ccb4b5 34 minutes ago Running kube-controller-manager 0 8cf72281323c3
* 404c01b15f471 7e28efa976bd1 34 minutes ago Running kube-apiserver 0 a5aad892a1625
*
* ==> coredns [9da258754898] <==
* .:53
* [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
* CoreDNS-1.6.7
* linux/amd64, go1.13.6, da7f65b
*
* ==> describe nodes <==
* Name: minikube
* Roles: master
* Labels: beta.kubernetes.io/arch=amd64
* beta.kubernetes.io/os=linux
* kubernetes.io/arch=amd64
* kubernetes.io/hostname=minikube
* kubernetes.io/os=linux
* minikube.k8s.io/commit=be7c19d391302656d27f1f213657d925c4e1cfc2-dirty
* minikube.k8s.io/name=minikube
* minikube.k8s.io/updated_at=2020_08_25T07_55_16_0700
* minikube.k8s.io/version=v1.12.2
* node-role.kubernetes.io/master=
* Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
* node.alpha.kubernetes.io/ttl: 0
* volumes.kubernetes.io/controller-managed-attach-detach: true
* CreationTimestamp: Tue, 25 Aug 2020 14:55:12 +0000
* Taints:
* Unschedulable: false
* Lease:
* HolderIdentity: minikube
* AcquireTime:
* RenewTime: Tue, 25 Aug 2020 15:29:57 +0000
* Conditions:
* Type Status LastHeartbeatTime LastTransitionTime Reason Message
* ---- ------ ----------------- ------------------ ------ -------
* MemoryPressure False Tue, 25 Aug 2020 15:27:24 +0000 Tue, 25 Aug 2020 14:55:06 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
* DiskPressure False Tue, 25 Aug 2020 15:27:24 +0000 Tue, 25 Aug 2020 14:55:06 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
* PIDPressure False Tue, 25 Aug 2020 15:27:24 +0000 Tue, 25 Aug 2020 14:55:06 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
* Ready True Tue, 25 Aug 2020 15:27:24 +0000 Tue, 25 Aug 2020 14:55:26 +0000 KubeletReady kubelet is posting ready status
* Addresses:
* InternalIP: 172.17.0.3
* Hostname: minikube
* Capacity:
* cpu: 4
* ephemeral-storage: 263174212Ki
* hugepages-2Mi: 0
* memory: 12996636Ki
* pods: 110
* Allocatable:
* cpu: 4
* ephemeral-storage: 263174212Ki
* hugepages-2Mi: 0
* memory: 12996636Ki
* pods: 110
* System Info:
* Machine ID: c9ef89de68a342edacd5e0f74937a721
* System UUID: c9ef89de68a342edacd5e0f74937a721
* Boot ID: f49b2670-64c8-495f-850e-285f62f86fed
* Kernel Version: 4.19.104-microsoft-standard
* OS Image: Ubuntu 20.04 LTS
* Operating System: linux
* Architecture: amd64
* Container Runtime Version: docker://19.3.8
* Kubelet Version: v1.18.3
* Kube-Proxy Version: v1.18.3
* Non-terminated Pods: (10 in total)
* Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
* --------- ---- ------------ ---------- --------------- ------------- ---
* kube-system coredns-66bff467f8-ms7d6 100m (2%) 0 (0%) 70Mi (0%) 170Mi (1%) 34m
* kube-system etcd-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 34m
* kube-system kube-apiserver-minikube 250m (6%) 0 (0%) 0 (0%) 0 (0%) 34m
* kube-system kube-controller-manager-minikube 200m (5%) 0 (0%) 0 (0%) 0 (0%) 34m
* kube-system kube-proxy-md8tx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 34m
* kube-system kube-scheduler-minikube 100m (2%) 0 (0%) 0 (0%) 0 (0%) 34m
* kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 34m
* projectcontour contour-d857b9789-b5hbs 0 (0%) 0 (0%) 0 (0%) 0 (0%) 28m
* projectcontour contour-d857b9789-qgb22 0 (0%) 0 (0%) 0 (0%) 0 (0%) 28m
* projectcontour envoy-5dxwh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 28m
* Allocated resources:
* (Total limits may be over 100 percent, i.e., overcommitted.)
* Resource Requests Limits
* -------- -------- ------
* cpu 650m (16%) 0 (0%)
* memory 70Mi (0%) 170Mi (1%)
* ephemeral-storage 0 (0%) 0 (0%)
* hugepages-2Mi 0 (0%) 0 (0%)
* Events:
* Type Reason Age From Message
* ---- ------ ---- ---- -------
* Normal NodeHasSufficientMemory 34m (x5 over 34m) kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
* Normal NodeHasNoDiskPressure 34m (x5 over 34m) kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
* Normal NodeHasSufficientPID 34m (x5 over 34m) kubelet, minikube Node minikube status is now: NodeHasSufficientPID
* Normal Starting 34m kubelet, minikube Starting kubelet.
* Normal NodeHasSufficientMemory 34m kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
* Normal NodeHasNoDiskPressure 34m kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
* Normal NodeHasSufficientPID 34m kubelet, minikube Node minikube status is now: NodeHasSufficientPID
* Normal NodeAllocatableEnforced 34m kubelet, minikube Updated Node Allocatable limit across pods
* Normal Starting 34m kube-proxy, minikube Starting kube-proxy.
* Normal NodeReady 34m kubelet, minikube Node minikube status is now: NodeReady
*
* ==> dmesg <==
* [Aug25 14:18] WSL2: Performing memory compaction.
* [Aug25 14:19] WSL2: Performing memory compaction.
* [Aug25 14:20] WSL2: Performing memory compaction.
* [Aug25 14:22] WSL2: Performing memory compaction.
* [Aug25 14:23] WSL2: Performing memory compaction.
* [Aug25 14:24] WSL2: Performing memory compaction.
* [Aug25 14:26] WSL2: Performing memory compaction.
* [Aug25 14:27] WSL2: Performing memory compaction.
* [Aug25 14:28] WSL2: Performing memory compaction.
* [Aug25 14:29] WSL2: Performing memory compaction.
* [Aug25 14:31] WSL2: Performing memory compaction.
* [Aug25 14:32] WSL2: Performing memory compaction.
* [Aug25 14:33] WSL2: Performing memory compaction.
* [Aug25 14:34] WSL2: Performing memory compaction.
* [Aug25 14:35] WSL2: Performing memory compaction.
* [Aug25 14:36] WSL2: Performing memory compaction.
* [Aug25 14:37] WSL2: Performing memory compaction.
* [Aug25 14:38] WSL2: Performing memory compaction.
* [Aug25 14:39] WSL2: Performing memory compaction.
* [Aug25 14:40] WSL2: Performing memory compaction.
* [Aug25 14:41] WSL2: Performing memory compaction.
* [Aug25 14:42] WSL2: Performing memory compaction.
* [Aug25 14:43] WSL2: Performing memory compaction.
* [Aug25 14:44] WSL2: Performing memory compaction.
* [Aug25 14:45] WSL2: Performing memory compaction.
* [Aug25 14:46] WSL2: Performing memory compaction.
* [Aug25 14:48] WSL2: Performing memory compaction.
* [Aug25 14:49] WSL2: Performing memory compaction.
* [Aug25 14:52] WSL2: Performing memory compaction.
* [Aug25 14:53] WSL2: Performing memory compaction.
* [Aug25 14:55] WSL2: Performing memory compaction.
* [Aug25 14:56] WSL2: Performing memory compaction.
* [Aug25 14:57] WSL2: Performing memory compaction.
* [Aug25 14:58] WSL2: Performing memory compaction.
* [Aug25 14:59] WSL2: Performing memory compaction.
* [Aug25 15:00] WSL2: Performing memory compaction.
* [Aug25 15:02] WSL2: Performing memory compaction.
* [Aug25 15:03] WSL2: Performing memory compaction.
* [Aug25 15:04] WSL2: Performing memory compaction.
* [Aug25 15:05] WSL2: Performing memory compaction.
* [Aug25 15:06] WSL2: Performing memory compaction.
* [Aug25 15:07] WSL2: Performing memory compaction.
* [Aug25 15:08] WSL2: Performing memory compaction.
* [Aug25 15:09] WSL2: Performing memory compaction.
* [Aug25 15:10] WSL2: Performing memory compaction.
* [Aug25 15:11] WSL2: Performing memory compaction.
* [Aug25 15:13] WSL2: Performing memory compaction.
* [Aug25 15:14] WSL2: Performing memory compaction.
* [Aug25 15:15] WSL2: Performing memory compaction.
* [Aug25 15:17] WSL2: Performing memory compaction.
* [Aug25 15:18] WSL2: Performing memory compaction.
* [Aug25 15:19] WSL2: Performing memory compaction.
* [Aug25 15:20] WSL2: Performing memory compaction.
* [Aug25 15:21] WSL2: Performing memory compaction.
* [Aug25 15:23] WSL2: Performing memory compaction.
* [Aug25 15:24] WSL2: Performing memory compaction.
* [Aug25 15:25] WSL2: Performing memory compaction.
* [Aug25 15:26] WSL2: Performing memory compaction.
* [Aug25 15:27] WSL2: Performing memory compaction.
* [Aug25 15:29] WSL2: Performing memory compaction.
*
* ==> etcd [329a9c61a4f9] <==
* [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
* 2020-08-25 14:55:06.425958 I | etcdmain: etcd Version: 3.4.3
* 2020-08-25 14:55:06.425991 I | etcdmain: Git SHA: 3cf2f69b5
* 2020-08-25 14:55:06.425994 I | etcdmain: Go Version: go1.12.12
* 2020-08-25 14:55:06.425998 I | etcdmain: Go OS/Arch: linux/amd64
* 2020-08-25 14:55:06.426002 I | etcdmain: setting maximum number of CPUs to 4, total number of available CPUs is 4
* [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
* 2020-08-25 14:55:06.426067 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
* 2020-08-25 14:55:06.427809 I | embed: name = minikube
* 2020-08-25 14:55:06.427820 I | embed: data dir = /var/lib/minikube/etcd
* 2020-08-25 14:55:06.427824 I | embed: member dir = /var/lib/minikube/etcd/member
* 2020-08-25 14:55:06.427828 I | embed: heartbeat = 100ms
* 2020-08-25 14:55:06.427831 I | embed: election = 1000ms
* 2020-08-25 14:55:06.427834 I | embed: snapshot count = 10000
* 2020-08-25 14:55:06.427843 I | embed: advertise client URLs = https://172.17.0.3:2379
* 2020-08-25 14:55:06.523231 I | etcdserver: starting member b273bc7741bcb020 in cluster 86482fea2286a1d2
* raft2020/08/25 14:55:06 INFO: b273bc7741bcb020 switched to configuration voters=()
* raft2020/08/25 14:55:06 INFO: b273bc7741bcb020 became follower at term 0
* raft2020/08/25 14:55:06 INFO: newRaft b273bc7741bcb020 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
* raft2020/08/25 14:55:06 INFO: b273bc7741bcb020 became follower at term 1
* raft2020/08/25 14:55:06 INFO: b273bc7741bcb020 switched to configuration voters=(12858828581462913056)
* 2020-08-25 14:55:06.572268 W | auth: simple token is not cryptographically signed
* 2020-08-25 14:55:06.637918 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
* 2020-08-25 14:55:06.638560 I | etcdserver: b273bc7741bcb020 as single-node; fast-forwarding 9 ticks (election ticks 10)
* raft2020/08/25 14:55:06 INFO: b273bc7741bcb020 switched to configuration voters=(12858828581462913056)
* 2020-08-25 14:55:06.643683 I | etcdserver/membership: added member b273bc7741bcb020 [https://172.17.0.3:2380] to cluster 86482fea2286a1d2
* 2020-08-25 14:55:06.649441 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
* 2020-08-25 14:55:06.649895 I | embed: listening for peers on 172.17.0.3:2380
* 2020-08-25 14:55:06.650344 I | embed: listening for metrics on http://127.0.0.1:2381
* raft2020/08/25 14:55:07 INFO: b273bc7741bcb020 is starting a new election at term 1
* raft2020/08/25 14:55:07 INFO: b273bc7741bcb020 became candidate at term 2
* raft2020/08/25 14:55:07 INFO: b273bc7741bcb020 received MsgVoteResp from b273bc7741bcb020 at term 2
* raft2020/08/25 14:55:07 INFO: b273bc7741bcb020 became leader at term 2
* raft2020/08/25 14:55:07 INFO: raft.node: b273bc7741bcb020 elected leader b273bc7741bcb020 at term 2
* 2020-08-25 14:55:07.338705 I | etcdserver: setting up the initial cluster version to 3.4
* 2020-08-25 14:55:07.343549 N | etcdserver/membership: set the initial cluster version to 3.4
* 2020-08-25 14:55:07.344319 I | etcdserver: published {Name:minikube ClientURLs:[https://172.17.0.3:2379]} to cluster 86482fea2286a1d2
* 2020-08-25 14:55:07.344442 I | embed: ready to serve client requests
* 2020-08-25 14:55:07.344720 I | embed: ready to serve client requests
* 2020-08-25 14:55:07.349193 I | embed: serving client requests on 127.0.0.1:2379
* 2020-08-25 14:55:07.353485 I | embed: serving client requests on 172.17.0.3:2379
* 2020-08-25 14:55:07.353846 I | etcdserver/api: enabled capabilities for version 3.4
* 2020-08-25 15:00:04.552138 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:286" took too long (127.0832ms) to execute
* 2020-08-25 15:05:08.667233 I | mvcc: store.index: compact 594
* 2020-08-25 15:05:08.668640 I | mvcc: finished scheduled compaction at 594 (took 687.6µs)
* 2020-08-25 15:10:08.700475 I | mvcc: store.index: compact 1052
* 2020-08-25 15:10:08.703455 I | mvcc: finished scheduled compaction at 1052 (took 2.0394ms)
* 2020-08-25 15:15:08.713062 I | mvcc: store.index: compact 1415
* 2020-08-25 15:15:08.714205 I | mvcc: finished scheduled compaction at 1415 (took 827.8µs)
* 2020-08-25 15:20:08.731612 I | mvcc: store.index: compact 1775
* 2020-08-25 15:20:08.733841 I | mvcc: finished scheduled compaction at 1775 (took 1.8743ms)
* 2020-08-25 15:25:08.744211 I | mvcc: store.index: compact 2135
* 2020-08-25 15:25:08.745285 I | mvcc: finished scheduled compaction at 2135 (took 747.3µs)
*
* ==> kernel <==
* 15:30:04 up 1 day, 7:02, 0 users, load average: 0.05, 0.28, 0.43
* Linux minikube 4.19.104-microsoft-standard #1 SMP Wed Feb 19 06:37:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
* PRETTY_NAME="Ubuntu 20.04 LTS"
*
* ==> kube-apiserver [404c01b15f47] <==
* I0825 14:55:10.803206 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
* I0825 14:55:10.803248 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
* I0825 14:55:10.804586 1 client.go:361] parsed scheme: "endpoint"
* I0825 14:55:10.804624 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
* I0825 14:55:10.810583 1 client.go:361] parsed scheme: "endpoint"
* I0825 14:55:10.810634 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
* I0825 14:55:12.478244 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
* I0825 14:55:12.478343 1 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
* I0825 14:55:12.478264 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
* I0825 14:55:12.479146 1 secure_serving.go:178] Serving securely on [::]:8443
* I0825 14:55:12.479200 1 tlsconfig.go:240] Starting DynamicServingCertificateController
* I0825 14:55:12.479335 1 available_controller.go:387] Starting AvailableConditionController
* I0825 14:55:12.479753 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
* I0825 14:55:12.480270 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
* I0825 14:55:12.480298 1 shared_informer.go:223] Waiting for caches to sync for cluster_authentication_trust_controller
* I0825 14:55:12.480327 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
* I0825 14:55:12.480359 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
* I0825 14:55:12.480939 1 crd_finalizer.go:266] Starting CRDFinalizer
* I0825 14:55:12.482930 1 controller.go:86] Starting OpenAPI controller
* I0825 14:55:12.483040 1 customresource_discovery_controller.go:209] Starting DiscoveryController
* I0825 14:55:12.483058 1 naming_controller.go:291] Starting NamingConditionController
* I0825 14:55:12.483085 1 establishing_controller.go:76] Starting EstablishingController
* I0825 14:55:12.483098 1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
* I0825 14:55:12.483110 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
* I0825 14:55:12.483780 1 autoregister_controller.go:141] Starting autoregister controller
* I0825 14:55:12.483789 1 cache.go:32] Waiting for caches to sync for autoregister controller
* I0825 14:55:12.484093 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
* I0825 14:55:12.484104 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
* I0825 14:55:12.484121 1 controller.go:81] Starting OpenAPI AggregationController
* I0825 14:55:12.488907 1 crdregistration_controller.go:111] Starting crd-autoregister controller
* I0825 14:55:12.488961 1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
* E0825 14:55:12.500221 1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.17.0.3, ResourceVersion: 0, AdditionalErrorMsg:
* I0825 14:55:12.623200 1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller
* I0825 14:55:12.623484 1 shared_informer.go:230] Caches are synced for crd-autoregister
* I0825 14:55:12.623501 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
* I0825 14:55:12.627493 1 cache.go:39] Caches are synced for AvailableConditionController controller
* I0825 14:55:12.627570 1 cache.go:39] Caches are synced for autoregister controller
* I0825 14:55:13.478995 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
* I0825 14:55:13.479167 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
* I0825 14:55:13.499887 1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
* I0825 14:55:13.515576 1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
* I0825 14:55:13.515716 1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
* I0825 14:55:14.234378 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
* I0825 14:55:14.291617 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
* W0825 14:55:14.428733 1 lease.go:224] Resetting endpoints for master service "kubernetes" to [172.17.0.3]
* I0825 14:55:14.429806 1 controller.go:606] quota admission added evaluator for: endpoints
* I0825 14:55:14.434669 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
* I0825 14:55:14.796263 1 controller.go:606] quota admission added evaluator for: serviceaccounts
* I0825 14:55:15.960607 1 controller.go:606] quota admission added evaluator for: deployments.apps
* I0825 14:55:16.114275 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
* I0825 14:55:16.517584 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
* I0825 14:55:21.594052 1 controller.go:606] quota admission added evaluator for: replicasets.apps
* I0825 14:55:21.619942 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
* I0825 15:01:39.228110 1 controller.go:606] quota admission added evaluator for: jobs.batch
* I0825 15:01:49.845045 1 client.go:361] parsed scheme: "endpoint"
* I0825 15:01:49.845104 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
* I0825 15:01:49.883230 1 client.go:361] parsed scheme: "endpoint"
* I0825 15:01:49.883289 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
* W0825 15:11:02.609975 1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted
* W0825 15:29:48.832829 1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted
*
* ==> kube-controller-manager [cc2c9b78a811] <==
* I0825 14:55:21.239899 1 shared_informer.go:223] Waiting for caches to sync for ReplicationController
* I0825 14:55:21.489443 1 controllermanager.go:533] Started "serviceaccount"
* I0825 14:55:21.490066 1 shared_informer.go:223] Waiting for caches to sync for garbage collector
* I0825 14:55:21.490228 1 serviceaccounts_controller.go:117] Starting service account controller
* I0825 14:55:21.490237 1 shared_informer.go:223] Waiting for caches to sync for service account
* I0825 14:55:21.492281 1 shared_informer.go:223] Waiting for caches to sync for resource quota
* W0825 14:55:21.503754 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
* I0825 14:55:21.511278 1 shared_informer.go:230] Caches are synced for stateful set
* I0825 14:55:21.532822 1 shared_informer.go:230] Caches are synced for GC
* I0825 14:55:21.539622 1 shared_informer.go:230] Caches are synced for job
* I0825 14:55:21.540314 1 shared_informer.go:230] Caches are synced for ReplicationController
* I0825 14:55:21.540382 1 shared_informer.go:230] Caches are synced for endpoint_slice
* I0825 14:55:21.540545 1 shared_informer.go:230] Caches are synced for TTL
* I0825 14:55:21.541233 1 shared_informer.go:230] Caches are synced for attach detach
* I0825 14:55:21.551508 1 shared_informer.go:230] Caches are synced for HPA
* I0825 14:55:21.588910 1 shared_informer.go:230] Caches are synced for disruption
* I0825 14:55:21.589053 1 disruption.go:339] Sending events to api server.
* I0825 14:55:21.589353 1 shared_informer.go:230] Caches are synced for PVC protection
* I0825 14:55:21.589755 1 shared_informer.go:230] Caches are synced for PV protection
* I0825 14:55:21.589940 1 shared_informer.go:230] Caches are synced for ReplicaSet
* I0825 14:55:21.589994 1 shared_informer.go:230] Caches are synced for bootstrap_signer
* I0825 14:55:21.591464 1 shared_informer.go:230] Caches are synced for deployment
* I0825 14:55:21.601550 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"9a2745dc-01dc-4cca-9049-8a96815046a9", APIVersion:"apps/v1", ResourceVersion:"271", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 1
* I0825 14:55:21.614835 1 shared_informer.go:230] Caches are synced for daemon sets
* I0825 14:55:21.617948 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"a01ebf6d-c910-4596-8cfb-f30b3e5dfadc", APIVersion:"apps/v1", ResourceVersion:"303", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-ms7d6
* I0825 14:55:21.640442 1 shared_informer.go:230] Caches are synced for taint
* I0825 14:55:21.640961 1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone:
* W0825 14:55:21.641131 1 node_lifecycle_controller.go:1048] Missing timestamp for Node minikube. Assuming now as a timestamp.
* I0825 14:55:21.641197 1 node_lifecycle_controller.go:1199] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
* I0825 14:55:21.641274 1 taint_manager.go:187] Starting NoExecuteTaintManager
* I0825 14:55:21.642223 1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"cb64f0ab-5a14-4494-bc60-3d6032e225a8", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
* I0825 14:55:21.667096 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"241f098b-1118-43fc-8b11-dfd1012ea75d", APIVersion:"apps/v1", ResourceVersion:"218", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-md8tx
* I0825 14:55:21.723604 1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator
* E0825 14:55:21.737999 1 daemon_controller.go:292] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"241f098b-1118-43fc-8b11-dfd1012ea75d", ResourceVersion:"218", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63733964116, loc:(*time.Location)(0x6d09200)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001336d40), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001336d60)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001336d80), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc000b50200), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001336da0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001336dc0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.3", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001336e00)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc000f7e410), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000bde418), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0002be000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00041e028)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000bde478)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
* E0825 14:55:21.760283 1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
* I0825 14:55:21.839896 1 shared_informer.go:230] Caches are synced for endpoint
* I0825 14:55:21.850450 1 shared_informer.go:230] Caches are synced for certificate-csrapproving
* I0825 14:55:21.867338 1 shared_informer.go:230] Caches are synced for certificate-csrsigning
* I0825 14:55:21.992993 1 shared_informer.go:230] Caches are synced for resource quota
* I0825 14:55:21.992993 1 shared_informer.go:230] Caches are synced for resource quota
* I0825 14:55:21.999322 1 shared_informer.go:230] Caches are synced for namespace
* I0825 14:55:22.090728 1 shared_informer.go:230] Caches are synced for service account
* I0825 14:55:22.099851 1 shared_informer.go:230] Caches are synced for garbage collector
* I0825 14:55:22.099892 1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
* I0825 14:55:22.123694 1 shared_informer.go:230] Caches are synced for expand
* I0825 14:55:22.140514 1 shared_informer.go:230] Caches are synced for persistent volume
* I0825 14:55:22.190396 1 shared_informer.go:230] Caches are synced for garbage collector
* I0825 14:55:31.642343 1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
* I0825 15:01:39.286051 1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"projectcontour", Name:"contour-certgen-v1.7.0", UID:"5664b95f-cedf-40d3-ba57-33db53980f55", APIVersion:"batch/v1", ResourceVersion:"682", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: contour-certgen-v1.7.0-4mmws
* I0825 15:01:39.539659 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"projectcontour", Name:"contour", UID:"49c46516-1bb1-4af5-983c-8794cf3e3a4f", APIVersion:"apps/v1", ResourceVersion:"704", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set contour-d857b9789 to 2
* I0825 15:01:39.559214 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"projectcontour", Name:"contour-d857b9789", UID:"882c439f-0af7-44c2-8a60-931d3485a7a7", APIVersion:"apps/v1", ResourceVersion:"705", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: contour-d857b9789-b5hbs
* I0825 15:01:39.577643 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"projectcontour", Name:"contour-d857b9789", UID:"882c439f-0af7-44c2-8a60-931d3485a7a7", APIVersion:"apps/v1", ResourceVersion:"705", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: contour-d857b9789-qgb22
* I0825 15:01:39.744203 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"projectcontour", Name:"envoy", UID:"95ccbfbc-2a08-426e-8966-224653b84a2e", APIVersion:"apps/v1", ResourceVersion:"716", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: envoy-5dxwh
* I0825 15:01:45.593119 1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"projectcontour", Name:"contour-certgen-v1.7.0", UID:"5664b95f-cedf-40d3-ba57-33db53980f55", APIVersion:"batch/v1", ResourceVersion:"689", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
* I0825 15:01:55.318570 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for tlscertificatedelegations.projectcontour.io
* I0825 15:01:55.318656 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for httpproxies.projectcontour.io
* I0825 15:01:55.318689 1 shared_informer.go:223] Waiting for caches to sync for resource quota
* I0825 15:01:55.418973 1 shared_informer.go:230] Caches are synced for resource quota
* I0825 15:01:55.820439 1 shared_informer.go:223] Waiting for caches to sync for garbage collector
* I0825 15:01:55.820514 1 shared_informer.go:230] Caches are synced for garbage collector
*
* ==> kube-proxy [97524a5fe22e] <==
* W0825 14:55:22.764086 1 proxier.go:625] Failed to read file /lib/modules/4.19.104-microsoft-standard/modules.builtin with error open /lib/modules/4.19.104-microsoft-standard/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
* W0825 14:55:22.767246 1 proxier.go:635] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
* W0825 14:55:22.770020 1 proxier.go:635] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
* W0825 14:55:22.772635 1 proxier.go:635] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
* W0825 14:55:22.774683 1 proxier.go:635] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
* W0825 14:55:22.776333 1 proxier.go:635] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
* W0825 14:55:22.778445 1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
* I0825 14:55:22.844365 1 node.go:136] Successfully retrieved node IP: 172.17.0.3
* I0825 14:55:22.844468 1 server_others.go:186] Using iptables Proxier.
* W0825 14:55:22.844483 1 server_others.go:436] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
* I0825 14:55:22.844493 1 server_others.go:447] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
* I0825 14:55:22.845770 1 server.go:583] Version: v1.18.3
* I0825 14:55:22.853737 1 conntrack.go:52] Setting nf_conntrack_max to 131072
* I0825 14:55:22.853916 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
* I0825 14:55:22.854018 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
* I0825 14:55:22.854315 1 config.go:133] Starting endpoints config controller
* I0825 14:55:22.855514 1 shared_informer.go:223] Waiting for caches to sync for endpoints config
* I0825 14:55:22.855464 1 config.go:315] Starting service config controller
* I0825 14:55:22.855804 1 shared_informer.go:223] Waiting for caches to sync for service config
* I0825 14:55:22.955957 1 shared_informer.go:230] Caches are synced for endpoints config
* I0825 14:55:22.956039 1 shared_informer.go:230] Caches are synced for service config
*
* ==> kube-scheduler [2c7a6abcc048] <==
* I0825 14:55:06.438984 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* I0825 14:55:06.439074 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* I0825 14:55:09.250624 1 serving.go:313] Generated self-signed cert in-memory
* W0825 14:55:12.535561 1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
* W0825 14:55:12.535594 1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
* W0825 14:55:12.535603 1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
* W0825 14:55:12.535610 1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
* I0825 14:55:12.570698 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* I0825 14:55:12.570907 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* W0825 14:55:12.573465 1 authorization.go:47] Authorization is disabled
* W0825 14:55:12.573679 1 authentication.go:40] Authentication is disabled
* I0825 14:55:12.573710 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
* I0825 14:55:12.638771 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
* I0825 14:55:12.639665 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0825 14:55:12.641870 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0825 14:55:12.639702 1 tlsconfig.go:240] Starting DynamicServingCertificateController
* E0825 14:55:12.646926 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* E0825 14:55:12.648479 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
* E0825 14:55:12.648993 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
* E0825 14:55:12.649388 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
* E0825 14:55:12.649558 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E0825 14:55:12.652378 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E0825 14:55:12.652820 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
* E0825 14:55:12.652940 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* E0825 14:55:12.653238 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
* E0825 14:55:13.625806 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
* E0825 14:55:13.677784 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* E0825 14:55:13.690712 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* E0825 14:55:13.794093 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E0825 14:55:13.912675 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
* I0825 14:55:16.542100 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* E0825 14:55:21.689587 1 factory.go:503] pod kube-system/coredns-66bff467f8-ms7d6 is already present in the backoff queue
* E0825 14:55:23.032898 1 factory.go:503] pod: kube-system/storage-provisioner is already present in unschedulable queue
*
* ==> kubelet <==
* -- Logs begin at Tue 2020-08-25 14:53:37 UTC, end at Tue 2020-08-25 15:30:06 UTC. --
* Aug 25 14:55:32 minikube kubelet[2522]: W0825 14:55:32.857768 2522 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-ms7d6 through plugin: invalid network status for
* Aug 25 15:01:39 minikube kubelet[2522]: I0825 15:01:39.292782 2522 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Aug 25 15:01:39 minikube kubelet[2522]: I0825 15:01:39.422960 2522 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "contour-certgen-token-tvjd8" (UniqueName: "kubernetes.io/secret/783ff92a-9cf1-4dc4-933b-739503e343f3-contour-certgen-token-tvjd8") pod "contour-certgen-v1.7.0-4mmws" (UID: "783ff92a-9cf1-4dc4-933b-739503e343f3")
* Aug 25 15:01:39 minikube kubelet[2522]: I0825 15:01:39.588333 2522 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Aug 25 15:01:39 minikube kubelet[2522]: I0825 15:01:39.633501 2522 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Aug 25 15:01:39 minikube kubelet[2522]: I0825 15:01:39.724412 2522 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "contour-config" (UniqueName: "kubernetes.io/configmap/ae6d4df2-93ce-4434-a174-d6ab6e77c629-contour-config") pod "contour-d857b9789-b5hbs" (UID: "ae6d4df2-93ce-4434-a174-d6ab6e77c629")
* Aug 25 15:01:39 minikube kubelet[2522]: I0825 15:01:39.724668 2522 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "contour-token-gs7wl" (UniqueName: "kubernetes.io/secret/ae6d4df2-93ce-4434-a174-d6ab6e77c629-contour-token-gs7wl") pod "contour-d857b9789-b5hbs" (UID: "ae6d4df2-93ce-4434-a174-d6ab6e77c629")
* Aug 25 15:01:39 minikube kubelet[2522]: I0825 15:01:39.725046 2522 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "contourcert" (UniqueName: "kubernetes.io/secret/ae6d4df2-93ce-4434-a174-d6ab6e77c629-contourcert") pod "contour-d857b9789-b5hbs" (UID: "ae6d4df2-93ce-4434-a174-d6ab6e77c629")
* Aug 25 15:01:39 minikube kubelet[2522]: I0825 15:01:39.772017 2522 topology_manager.go:233] [topologymanager] Topology Admit Handler
* Aug 25 15:01:39 minikube kubelet[2522]: I0825 15:01:39.829113 2522 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "contourcert" (UniqueName: "kubernetes.io/secret/d8f03acf-d6df-4f33-b901-c79044b1cd17-contourcert") pod "contour-d857b9789-qgb22" (UID: "d8f03acf-d6df-4f33-b901-c79044b1cd17")
* Aug 25 15:01:39 minikube kubelet[2522]: I0825 15:01:39.829957 2522 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "contour-config" (UniqueName: "kubernetes.io/configmap/d8f03acf-d6df-4f33-b901-c79044b1cd17-contour-config") pod "contour-d857b9789-qgb22" (UID: "d8f03acf-d6df-4f33-b901-c79044b1cd17")
* Aug 25 15:01:39 minikube kubelet[2522]: E0825 15:01:39.829590 2522 secret.go:195] Couldn't get secret projectcontour/contourcert: secret "contourcert" not found
* Aug 25 15:01:39 minikube kubelet[2522]: E0825 15:01:39.830513 2522 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/ae6d4df2-93ce-4434-a174-d6ab6e77c629-contourcert podName:ae6d4df2-93ce-4434-a174-d6ab6e77c629 nodeName:}" failed. No retries permitted until 2020-08-25 15:01:40.3304683 +0000 UTC m=+384.453617201 (durationBeforeRetry 500ms).
Error: "MountVolume.SetUp failed for volume \"contourcert\" (UniqueName: \"kubernetes.io/secret/ae6d4df2-93ce-4434-a174-d6ab6e77c629-contourcert\") pod \"contour-d857b9789-b5hbs\" (UID: \"ae6d4df2-93ce-4434-a174-d6ab6e77c629\") : secret \"contourcert\" not found" * Aug 25 15:01:39 minikube kubelet[2522]: I0825 15:01:39.830959 2522 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "contour-token-gs7wl" (UniqueName: "kubernetes.io/secret/d8f03acf-d6df-4f33-b901-c79044b1cd17-contour-token-gs7wl") pod "contour-d857b9789-qgb22" (UID: "d8f03acf-d6df-4f33-b901-c79044b1cd17") * Aug 25 15:01:39 minikube kubelet[2522]: I0825 15:01:39.931751 2522 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "envoycert" (UniqueName: "kubernetes.io/secret/fc6354e9-880b-4dd8-8b79-137a6411a5b4-envoycert") pod "envoy-5dxwh" (UID: "fc6354e9-880b-4dd8-8b79-137a6411a5b4") * Aug 25 15:01:39 minikube kubelet[2522]: I0825 15:01:39.931986 2522 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "envoy-config" (UniqueName: "kubernetes.io/empty-dir/fc6354e9-880b-4dd8-8b79-137a6411a5b4-envoy-config") pod "envoy-5dxwh" (UID: "fc6354e9-880b-4dd8-8b79-137a6411a5b4") * Aug 25 15:01:39 minikube kubelet[2522]: E0825 15:01:39.932226 2522 secret.go:195] Couldn't get secret projectcontour/contourcert: secret "contourcert" not found * Aug 25 15:01:39 minikube kubelet[2522]: E0825 15:01:39.932617 2522 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/d8f03acf-d6df-4f33-b901-c79044b1cd17-contourcert podName:d8f03acf-d6df-4f33-b901-c79044b1cd17 nodeName:}" failed. No retries permitted until 2020-08-25 15:01:40.4325834 +0000 UTC m=+384.555732301 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"contourcert\" (UniqueName: \"kubernetes.io/secret/d8f03acf-d6df-4f33-b901-c79044b1cd17-contourcert\") pod \"contour-d857b9789-qgb22\" (UID: \"d8f03acf-d6df-4f33-b901-c79044b1cd17\") : secret \"contourcert\" not found" * Aug 25 15:01:40 minikube kubelet[2522]: E0825 15:01:40.039137 2522 secret.go:195] Couldn't get secret projectcontour/envoycert: secret "envoycert" not found * Aug 25 15:01:40 minikube kubelet[2522]: E0825 15:01:40.040367 2522 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/fc6354e9-880b-4dd8-8b79-137a6411a5b4-envoycert podName:fc6354e9-880b-4dd8-8b79-137a6411a5b4 nodeName:}" failed. No retries permitted until 2020-08-25 15:01:40.5393844 +0000 UTC m=+384.662533501 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"envoycert\" (UniqueName: \"kubernetes.io/secret/fc6354e9-880b-4dd8-8b79-137a6411a5b4-envoycert\") pod \"envoy-5dxwh\" (UID: \"fc6354e9-880b-4dd8-8b79-137a6411a5b4\") : secret \"envoycert\" not found" * Aug 25 15:01:40 minikube kubelet[2522]: E0825 15:01:40.342435 2522 secret.go:195] Couldn't get secret projectcontour/contourcert: secret "contourcert" not found * Aug 25 15:01:40 minikube kubelet[2522]: E0825 15:01:40.342617 2522 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/ae6d4df2-93ce-4434-a174-d6ab6e77c629-contourcert podName:ae6d4df2-93ce-4434-a174-d6ab6e77c629 nodeName:}" failed. No retries permitted until 2020-08-25 15:01:41.3425643 +0000 UTC m=+385.465713301 (durationBeforeRetry 1s). 
Error: "MountVolume.SetUp failed for volume \"contourcert\" (UniqueName: \"kubernetes.io/secret/ae6d4df2-93ce-4434-a174-d6ab6e77c629-contourcert\") pod \"contour-d857b9789-b5hbs\" (UID: \"ae6d4df2-93ce-4434-a174-d6ab6e77c629\") : secret \"contourcert\" not found" * Aug 25 15:01:40 minikube kubelet[2522]: E0825 15:01:40.442723 2522 secret.go:195] Couldn't get secret projectcontour/contourcert: secret "contourcert" not found * Aug 25 15:01:40 minikube kubelet[2522]: E0825 15:01:40.442851 2522 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/d8f03acf-d6df-4f33-b901-c79044b1cd17-contourcert podName:d8f03acf-d6df-4f33-b901-c79044b1cd17 nodeName:}" failed. No retries permitted until 2020-08-25 15:01:41.4428247 +0000 UTC m=+385.565973601 (durationBeforeRetry 1s). Error: "MountVolume.SetUp failed for volume \"contourcert\" (UniqueName: \"kubernetes.io/secret/d8f03acf-d6df-4f33-b901-c79044b1cd17-contourcert\") pod \"contour-d857b9789-qgb22\" (UID: \"d8f03acf-d6df-4f33-b901-c79044b1cd17\") : secret \"contourcert\" not found" * Aug 25 15:01:40 minikube kubelet[2522]: W0825 15:01:40.528472 2522 pod_container_deletor.go:77] Container "01413466254b2a5eb0ff5fa87a3296c289b4652ea1e0dca76c2aa25076739b73" not found in pod's containers * Aug 25 15:01:40 minikube kubelet[2522]: W0825 15:01:40.531523 2522 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for projectcontour/contour-certgen-v1.7.0-4mmws through plugin: invalid network status for * Aug 25 15:01:40 minikube kubelet[2522]: E0825 15:01:40.543413 2522 secret.go:195] Couldn't get secret projectcontour/envoycert: secret "envoycert" not found * Aug 25 15:01:40 minikube kubelet[2522]: E0825 15:01:40.543552 2522 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/fc6354e9-880b-4dd8-8b79-137a6411a5b4-envoycert podName:fc6354e9-880b-4dd8-8b79-137a6411a5b4 nodeName:}" failed. No retries permitted until 2020-08-25 15:01:41.5435241 +0000 UTC m=+385.666673001 (durationBeforeRetry 1s). Error: "MountVolume.SetUp failed for volume \"envoycert\" (UniqueName: \"kubernetes.io/secret/fc6354e9-880b-4dd8-8b79-137a6411a5b4-envoycert\") pod \"envoy-5dxwh\" (UID: \"fc6354e9-880b-4dd8-8b79-137a6411a5b4\") : secret \"envoycert\" not found" * Aug 25 15:01:41 minikube kubelet[2522]: E0825 15:01:41.347061 2522 secret.go:195] Couldn't get secret projectcontour/contourcert: secret "contourcert" not found * Aug 25 15:01:41 minikube kubelet[2522]: E0825 15:01:41.347163 2522 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/ae6d4df2-93ce-4434-a174-d6ab6e77c629-contourcert podName:ae6d4df2-93ce-4434-a174-d6ab6e77c629 nodeName:}" failed. No retries permitted until 2020-08-25 15:01:43.3471384 +0000 UTC m=+387.470287301 (durationBeforeRetry 2s). Error: "MountVolume.SetUp failed for volume \"contourcert\" (UniqueName: \"kubernetes.io/secret/ae6d4df2-93ce-4434-a174-d6ab6e77c629-contourcert\") pod \"contour-d857b9789-b5hbs\" (UID: \"ae6d4df2-93ce-4434-a174-d6ab6e77c629\") : secret \"contourcert\" not found" * Aug 25 15:01:41 minikube kubelet[2522]: E0825 15:01:41.447570 2522 secret.go:195] Couldn't get secret projectcontour/contourcert: secret "contourcert" not found * Aug 25 15:01:41 minikube kubelet[2522]: E0825 15:01:41.447792 2522 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/d8f03acf-d6df-4f33-b901-c79044b1cd17-contourcert podName:d8f03acf-d6df-4f33-b901-c79044b1cd17 nodeName:}" failed. 
No retries permitted until 2020-08-25 15:01:43.4477466 +0000 UTC m=+387.570895501 (durationBeforeRetry 2s). Error: "MountVolume.SetUp failed for volume \"contourcert\" (UniqueName: \"kubernetes.io/secret/d8f03acf-d6df-4f33-b901-c79044b1cd17-contourcert\") pod \"contour-d857b9789-qgb22\" (UID: \"d8f03acf-d6df-4f33-b901-c79044b1cd17\") : secret \"contourcert\" not found" * Aug 25 15:01:41 minikube kubelet[2522]: W0825 15:01:41.535686 2522 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for projectcontour/contour-certgen-v1.7.0-4mmws through plugin: invalid network status for * Aug 25 15:01:41 minikube kubelet[2522]: E0825 15:01:41.548060 2522 secret.go:195] Couldn't get secret projectcontour/envoycert: secret "envoycert" not found * Aug 25 15:01:41 minikube kubelet[2522]: E0825 15:01:41.548180 2522 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/fc6354e9-880b-4dd8-8b79-137a6411a5b4-envoycert podName:fc6354e9-880b-4dd8-8b79-137a6411a5b4 nodeName:}" failed. No retries permitted until 2020-08-25 15:01:43.5481506 +0000 UTC m=+387.671299501 (durationBeforeRetry 2s). Error: "MountVolume.SetUp failed for volume \"envoycert\" (UniqueName: \"kubernetes.io/secret/fc6354e9-880b-4dd8-8b79-137a6411a5b4-envoycert\") pod \"envoy-5dxwh\" (UID: \"fc6354e9-880b-4dd8-8b79-137a6411a5b4\") : secret \"envoycert\" not found" * Aug 25 15:01:43 minikube kubelet[2522]: E0825 15:01:43.366874 2522 secret.go:195] Couldn't get secret projectcontour/contourcert: secret "contourcert" not found * Aug 25 15:01:43 minikube kubelet[2522]: E0825 15:01:43.366983 2522 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/ae6d4df2-93ce-4434-a174-d6ab6e77c629-contourcert podName:ae6d4df2-93ce-4434-a174-d6ab6e77c629 nodeName:}" failed. No retries permitted until 2020-08-25 15:01:47.3669582 +0000 UTC m=+391.490107101 (durationBeforeRetry 4s). Error: "MountVolume.SetUp failed for volume \"contourcert\" (UniqueName: \"kubernetes.io/secret/ae6d4df2-93ce-4434-a174-d6ab6e77c629-contourcert\") pod \"contour-d857b9789-b5hbs\" (UID: \"ae6d4df2-93ce-4434-a174-d6ab6e77c629\") : secret \"contourcert\" not found" * Aug 25 15:01:43 minikube kubelet[2522]: E0825 15:01:43.467349 2522 secret.go:195] Couldn't get secret projectcontour/contourcert: secret "contourcert" not found * Aug 25 15:01:43 minikube kubelet[2522]: E0825 15:01:43.467480 2522 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/d8f03acf-d6df-4f33-b901-c79044b1cd17-contourcert podName:d8f03acf-d6df-4f33-b901-c79044b1cd17 nodeName:}" failed. No retries permitted until 2020-08-25 15:01:47.4674379 +0000 UTC m=+391.590586901 (durationBeforeRetry 4s). Error: "MountVolume.SetUp failed for volume \"contourcert\" (UniqueName: \"kubernetes.io/secret/d8f03acf-d6df-4f33-b901-c79044b1cd17-contourcert\") pod \"contour-d857b9789-qgb22\" (UID: \"d8f03acf-d6df-4f33-b901-c79044b1cd17\") : secret \"contourcert\" not found" * Aug 25 15:01:43 minikube kubelet[2522]: E0825 15:01:43.568167 2522 secret.go:195] Couldn't get secret projectcontour/envoycert: secret "envoycert" not found * Aug 25 15:01:43 minikube kubelet[2522]: E0825 15:01:43.568260 2522 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/fc6354e9-880b-4dd8-8b79-137a6411a5b4-envoycert podName:fc6354e9-880b-4dd8-8b79-137a6411a5b4 nodeName:}" failed. No retries permitted until 2020-08-25 15:01:47.5682295 +0000 UTC m=+391.691378401 (durationBeforeRetry 4s). 
Error: "MountVolume.SetUp failed for volume \"envoycert\" (UniqueName: \"kubernetes.io/secret/fc6354e9-880b-4dd8-8b79-137a6411a5b4-envoycert\") pod \"envoy-5dxwh\" (UID: \"fc6354e9-880b-4dd8-8b79-137a6411a5b4\") : secret \"envoycert\" not found" * Aug 25 15:01:44 minikube kubelet[2522]: W0825 15:01:44.568121 2522 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for projectcontour/contour-certgen-v1.7.0-4mmws through plugin: invalid network status for * Aug 25 15:01:45 minikube kubelet[2522]: W0825 15:01:45.578659 2522 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for projectcontour/contour-certgen-v1.7.0-4mmws through plugin: invalid network status for * Aug 25 15:01:45 minikube kubelet[2522]: I0825 15:01:45.582985 2522 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: d30d6d7deaea730a38e0c50212057e5906908f5ae5f13285caa3b2365941874b * Aug 25 15:01:45 minikube kubelet[2522]: I0825 15:01:45.678470 2522 reconciler.go:196] operationExecutor.UnmountVolume started for volume "contour-certgen-token-tvjd8" (UniqueName: "kubernetes.io/secret/783ff92a-9cf1-4dc4-933b-739503e343f3-contour-certgen-token-tvjd8") pod "783ff92a-9cf1-4dc4-933b-739503e343f3" (UID: "783ff92a-9cf1-4dc4-933b-739503e343f3") * Aug 25 15:01:45 minikube kubelet[2522]: I0825 15:01:45.680759 2522 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/783ff92a-9cf1-4dc4-933b-739503e343f3-contour-certgen-token-tvjd8" (OuterVolumeSpecName: "contour-certgen-token-tvjd8") pod "783ff92a-9cf1-4dc4-933b-739503e343f3" (UID: "783ff92a-9cf1-4dc4-933b-739503e343f3"). InnerVolumeSpecName "contour-certgen-token-tvjd8". PluginName "kubernetes.io/secret", VolumeGidValue "" * Aug 25 15:01:45 minikube kubelet[2522]: I0825 15:01:45.778931 2522 reconciler.go:319] Volume detached for volume "contour-certgen-token-tvjd8" (UniqueName: "kubernetes.io/secret/783ff92a-9cf1-4dc4-933b-739503e343f3-contour-certgen-token-tvjd8") on node "minikube" DevicePath "" * Aug 25 15:01:46 minikube kubelet[2522]: W0825 15:01:46.596960 2522 pod_container_deletor.go:77] Container "01413466254b2a5eb0ff5fa87a3296c289b4652ea1e0dca76c2aa25076739b73" not found in pod's containers * Aug 25 15:01:47 minikube kubelet[2522]: W0825 15:01:47.809308 2522 pod_container_deletor.go:77] Container "7fac7039d0505e34d518a63228930cb2cac65829712e79b2d1feaeba726e8ed1" not found in pod's containers * Aug 25 15:01:47 minikube kubelet[2522]: W0825 15:01:47.809358 2522 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for projectcontour/contour-d857b9789-b5hbs through plugin: invalid network status for * Aug 25 15:01:48 minikube kubelet[2522]: W0825 15:01:48.398241 2522 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for projectcontour/contour-d857b9789-qgb22 through plugin: invalid network status for * Aug 25 15:01:48 minikube kubelet[2522]: W0825 15:01:48.483419 2522 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for projectcontour/envoy-5dxwh through plugin: invalid network status for * Aug 25 15:01:48 minikube kubelet[2522]: W0825 15:01:48.833316 2522 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for projectcontour/contour-d857b9789-b5hbs through plugin: invalid network status for * Aug 25 15:01:48 minikube kubelet[2522]: W0825 15:01:48.845070 2522 
docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for projectcontour/envoy-5dxwh through plugin: invalid network status for * Aug 25 15:01:48 minikube kubelet[2522]: W0825 15:01:48.857241 2522 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for projectcontour/contour-d857b9789-qgb22 through plugin: invalid network status for * Aug 25 15:01:49 minikube kubelet[2522]: W0825 15:01:49.867266 2522 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for projectcontour/contour-d857b9789-b5hbs through plugin: invalid network status for * Aug 25 15:01:51 minikube kubelet[2522]: W0825 15:01:51.899124 2522 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for projectcontour/contour-d857b9789-qgb22 through plugin: invalid network status for * Aug 25 15:01:52 minikube kubelet[2522]: W0825 15:01:52.918953 2522 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for projectcontour/envoy-5dxwh through plugin: invalid network status for * Aug 25 15:01:53 minikube kubelet[2522]: W0825 15:01:53.956290 2522 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for projectcontour/envoy-5dxwh through plugin: invalid network status for * Aug 25 15:02:04 minikube kubelet[2522]: W0825 15:02:04.119228 2522 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for projectcontour/envoy-5dxwh through plugin: invalid network status for * * ==> storage-provisioner [a988b1663566] <== * I0825 14:55:29.828108 1 leaderelection.go:242] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath... * I0825 14:55:29.836661 1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath * I0825 14:55:29.837322 1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_minikube_934626e7-6467-4827-a682-92ea1d49dd82! * I0825 14:55:29.837373 1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d4bc1e65-7a06-4e31-b702-76e1cf8fc0d0", APIVersion:"v1", ResourceVersion:"382", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_934626e7-6467-4827-a682-92ea1d49dd82 became leader * I0825 14:55:29.938022 1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_minikube_934626e7-6467-4827-a682-92ea1d49dd82!
@priyawadhwa priyawadhwa added the kind/support Categorizes issue or PR as a support question. label Aug 26, 2020
@priyawadhwa

Hey @smacdav thanks for opening this issue. I'm just curious, does this work if you don't enable contour?

I'd also suggest upgrading to the latest version of minikube, v1.13.0.

@priyawadhwa priyawadhwa added the triage/needs-information Indicates an issue needs more information in order to work on it. label Sep 8, 2020
@smacdav
Author

smacdav commented Sep 9, 2020

It does work if I don't enable contour. Interesting. Not useful to me, but interesting.

I went ahead and upgraded the system I was doing this on to v1.13.0. I have another system, however, on which v1.13.0 doesn't seem to work: it fails to connect to the controller-manager and scheduler, so they fail their health-checks. I rolled that system back to 1.12.2. It would be nice to be able to upgrade, though.

@smacdav
Author

smacdav commented Sep 9, 2020

The problem I had with 1.13.0 appears only to be an issue when I start "fresh" and therefore run Kubernetes v1.19.0. Then I get:

$ minikube start
😄  minikube v1.13.0 on Microsoft Windows 10 Pro 10.0.19041 Build 19041
✨  Automatically selected the docker driver
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
💾  Downloading Kubernetes v1.19.0 preload ...
    > preloaded-images-k8s-v6-v1.19.0-docker-overlay2-amd64.tar.lz4: 486.28 MiB
🔥  Creating docker container (CPUs=2, Memory=4000MB) ...
🐳  Preparing Kubernetes v1.19.0 on Docker 19.03.8 ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner

❗  C:\Program Files\Docker\Docker\resources\bin\kubectl.exe is version 1.16.6-beta.0, which may have incompatibilites with Kubernetes 1.19.0.
💡  Want kubectl v1.19.0? Try 'minikube kubectl -- get pods -A'
🏄  Done! kubectl is now configured to use "minikube" by default
$ kubectl get componentstatuses
NAME                 STATUS      MESSAGE                                                                                       ERROR
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
etcd-0               Healthy     {"health":"true"}

If I run an existing Kubernetes v1.18.3 image, it runs just fine. I should probably file a separate ticket for that, huh?
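
(Editor's note: the Unhealthy rows above are a known red herring rather than a real failure. kubeadm-based v1.19 clusters, minikube included, start kube-scheduler and kube-controller-manager with --port=0, which disables the insecure endpoints on 10251/10252 that the deprecated componentstatuses check still probes. A minimal Go sketch of checking the scheduler's secure healthz endpoint directly instead; port 10259 matches the "Serving securely on 127.0.0.1:10259" line in the scheduler logs above, while running from inside the node, skipping TLS verification, and anonymous access to /healthz are all assumptions for local debugging only:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

// Probe the kube-scheduler's secure healthz endpoint directly instead of
// relying on the deprecated componentstatuses API. Assumptions: this runs
// inside the minikube node (e.g. via `minikube ssh`), and the scheduler's
// default authorizer permits anonymous requests to /healthz.
func main() {
	client := &http.Client{
		Transport: &http.Transport{
			// Local debugging probe only: skip certificate verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://127.0.0.1:10259/healthz")
	if err != nil {
		fmt.Println("scheduler healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("scheduler healthz:", resp.Status) // expect "200 OK" when healthy
}

A 200 here while componentstatuses still reports Unhealthy would confirm the component is fine and only the legacy check is broken.)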

@eobermuhlner

Same issue on v1.13.0

Running on Windows using Git Bash.

$ minikube start
* minikube v1.13.0 on Microsoft Windows 10 Pro 10.0.20190 Build 20190
  - KUBECONFIG=C:\Users\EricObermuhlner\Kube\admin.conf
* Using the docker driver based on existing profile
* Starting control plane node minikube in cluster minikube
* Restarting existing docker container for "minikube" ...
* Preparing Kubernetes v1.19.0 on Docker 19.03.8 ...
* Verifying Kubernetes components...
* Enabled addons: dashboard, default-storageclass, storage-provisioner
* Done! kubectl is now configured to use "minikube" by default
$ minikube tunnel
! The service istio-ingressgateway requires privileged ports to be exposed: [80 443]
* sudo permission will be asked for it.
* Starting tunnel for service istio-ingressgateway.
E0909 08:46:43.683722   12972 ssh_tunnel.go:113] error starting ssh tunnel: exec: "sudo": executable file not found in %PATH%

@tstromberg
Contributor

Yeah, I'd say we really screwed this up for Windows users when we added this forwarding feature. "sudo" isn't necessary there for binding to ports <1024.
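
(For context: the shape of the eventual fix is to skip the sudo prefix entirely on Windows. A hypothetical Go helper sketching that idea; this is an illustration only, not the exact code that landed in #9753:

package tunnel

import (
	"os/exec"
	"runtime"
)

// sshTunnelCommand is a hypothetical helper: only prefix the ssh invocation
// with sudo on platforms that require elevation to bind ports below 1024.
// Windows has no sudo and no such restriction, so the bare ssh command is
// enough there.
func sshTunnelCommand(sshArgs ...string) *exec.Cmd {
	if runtime.GOOS == "windows" {
		return exec.Command("ssh", sshArgs...)
	}
	return exec.Command("sudo", append([]string{"ssh"}, sshArgs...)...)
}

Branching on runtime.GOOS keeps the privileged-port handling on Linux and macOS unchanged while making the Windows path stop looking for an executable that does not exist.)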

@tstromberg tstromberg changed the title minikube tunnel fails on windows: error starting ssh tunnel: exec: "sudo": executable file not found in %PATH% tunnel on windows: "sudo": executable file not found in %PATH% Sep 26, 2020
@tstromberg tstromberg added this to the v1.14.0 milestone Sep 26, 2020
@tstromberg tstromberg added kind/bug Categorizes issue or PR as related to a bug. os/windows priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. and removed triage/needs-information Indicates an issue needs more information in order to work on it. kind/support Categorizes issue or PR as a support question. labels Sep 26, 2020
@tstromberg
Contributor

I suspected #6833 at first, but that had already been merged back in February.

@tstromberg tstromberg added good first issue Denotes an issue ready for a new contributor, according to the "help wanted" guidelines. help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. labels Sep 26, 2020
@sebanazarian

Hi, any solution for this error?

ssh_tunnel.go:113] error starting ssh tunnel: exec: "sudo": executable file not found in %PATH%

@medyagh medyagh modified the milestones: v1.14.0, v1.15.0-candidate Oct 12, 2020
@blueelvis
Contributor

I will pick this up.

/assign

@rashmilengade

rashmilengade commented Nov 2, 2020

I am facing the same issue on Windows with WSL 2.

ssh_tunnel.go:113] error starting ssh tunnel: exec: "sudo": executable file not found in %PATH%

Hi, is there any solution for this error yet?

@anuanju89

I am also looking for a solution. Please suggest one if there is any.

@blueelvis
Contributor

blueelvis commented Nov 3, 2020 via email

@V4A001

V4A001 commented Nov 26, 2020

Same issue here. Painful flaw, as it makes minikube useless on a Windows machine.

@yogitubadzin

The same problem, any solution?

@V4A001

V4A001 commented Dec 11, 2020

Not from me. I use my own AKS cluster instead of minikube; Docker Compose and Docker Desktop (which also includes Kubernetes) might be other options?

@3wolf

3wolf commented Jan 14, 2021

We're at v1.16.0 now... but this problem is still not solved. 💢

@azurebose

In Git Bash it works for me and I can access the service.
