hyperv dashboard: [SSH_TCP_FAILURE] Error dialing tcp via ssh client #4320

Closed
gouravbansal11 opened this issue May 22, 2019 · 2 comments
Labels
co/dashboard (dashboard related issues) · co/hyperv (HyperV related issues) · co/sshd (ssh related issues) · priority/awaiting-more-evidence (Lowest priority. Possibly useful, but not yet enough support to actually get it done.) · triage/needs-information (Indicates an issue needs more information in order to work on it.)

Comments

@gouravbansal11

**minikube dashboard**:

PS C:\Users\gouravba> minikube dashboard

  • Enabling dashboard ...

! Unable to enable dashboard
X Error: [SSH_TCP_FAILURE] [command runner: getting ssh client for bootstrapper: Error dialing tcp via ssh client: dial tcp [fe80::215:5dff:fe80:7309]:22: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.]
i Advice: Your host is failing to route packets to the minikube VM. If you have VPN software, try turning it off or configuring it so that it does not re-route traffic to the VM IP. If not, check your VM environment routing options.

  • If the above advice does not help, please let us know:
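
A quick way to act on the advice above is to confirm from the host that the VM is reachable at all. The error shows minikube dialing the VM's link-local IPv6 address on port 22; the sketch below (run from an elevated PowerShell, assuming the stock Hyper-V cmdlets and the default VM name minikube) lists the addresses Hyper-V knows for the VM and probes the SSH port on the IPv4 address that minikube reports later in this log:

```powershell
# List the addresses Hyper-V has learned for the minikube VM
Get-VMNetworkAdapter -VMName minikube | Select-Object -ExpandProperty IPAddresses

# Probe the SSH port minikube is trying to dial (192.168.1.12 per the log below)
Test-NetConnection -ComputerName 192.168.1.12 -Port 22
```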

Windows 10:

Below are the logs from `minikube start` and the operations that followed. I am not connected to any VPN.

PS C:\Users\gouravba> minikube delete

  • Powering off "minikube" via SSH ...
    x Deleting "minikube" from hyperv ...
  • The "minikube" cluster has been deleted.
    PS C:\Users\gouravba> minikube start --vm-driver=hyperv --hyperv-virtual-switch=MinikubeSwitch --logtostderr
    o minikube v1.0.1 on windows (amd64)
    I0522 22:17:24.686418 35080 downloader.go:60] Not caching ISO, using https://storage.googleapis.com/minikube/iso/minikube-v1.0.1.iso
    $ Downloading Kubernetes v1.14.1 images in the background ...
    I0522 22:17:24.687433 35080 start.go:652] Saving config:
    {
    "MachineConfig": {
    "MinikubeISO": "https://storage.googleapis.com/minikube/iso/minikube-v1.0.1.iso",
    "Memory": 2048,
    "CPUs": 2,
    "DiskSize": 20000,
    "VMDriver": "hyperv",
    "ContainerRuntime": "docker",
    "HyperkitVpnKitSock": "",
    "HyperkitVSockPorts": [],
    "XhyveDiskDriver": "ahci-hd",
    "DockerEnv": null,
    "InsecureRegistry": null,
    "RegistryMirror": null,
    "HostOnlyCIDR": "192.168.99.1/24",
    "HypervVirtualSwitch": "MinikubeSwitch",
    "KvmNetwork": "default",
    "DockerOpt": null,
    "DisableDriverMounts": false,
    "NFSShare": [],
    "NFSSharesRoot": "/nfsshares",
    "UUID": "",
    "GPU": false,
    "Hidden": false,
    "NoVTXCheck": false
    },
    "KubernetesConfig": {
    "KubernetesVersion": "v1.14.1",
    "NodeIP": "",
    "NodePort": 8443,
    "NodeName": "minikube",
    "APIServerName": "minikubeCA",
    "APIServerNames": null,
    "APIServerIPs": null,
    "DNSDomain": "cluster.local",
    "ContainerRuntime": "docker",
    "CRISocket": "",
    "NetworkPlugin": "",
    "FeatureGates": "",
    "ServiceCIDR": "10.96.0.0/12",
    "ImageRepository": "",
    "ExtraOptions": null,
    "ShouldLoadCachedImages": true,
    "EnableDefaultCNI": false
    }
    }
    I0522 22:17:24.687433 35080 cache_images.go:309] Attempting to cache image: gcr.io/k8s-minikube/storage-provisioner:v1.8.1 at C:\Users\gouravba.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v1.8.1
    I0522 22:17:24.687433 35080 cache_images.go:309] Attempting to cache image: k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13 at C:\Users\gouravba.minikube\cache\images\k8s.gcr.io\k8s-dns-sidecar-amd64_1.14.13
    I0522 22:17:24.687433 35080 cache_images.go:309] Attempting to cache image: k8s.gcr.io/kube-scheduler:v1.14.1 at C:\Users\gouravba.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.14.1

I0522 22:17:24.687433 35080 cache_images.go:309] Attempting to cache image: k8s.gcr.io/kube-controller-manager:v1.14.1 at C:\Users\gouravba.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.14.1
I0522 22:17:24.687433 35080 cache_images.go:309] Attempting to cache image: k8s.gcr.io/kube-proxy:v1.14.1 at C:\Users\gouravba.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.14.1
I0522 22:17:24.687433 35080 cache_images.go:309] Attempting to cache image: k8s.gcr.io/etcd:3.3.10 at C:\Users\gouravba.minikube\cache\images\k8s.gcr.io\etcd_3.3.10
I0522 22:17:24.687433 35080 cache_images.go:309] Attempting to cache image: k8s.gcr.io/kube-apiserver:v1.14.1 at C:\Users\gouravba.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.14.1
I0522 22:17:24.687433 35080 cache_images.go:309] Attempting to cache image: k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13 at C:\Users\gouravba.minikube\cache\images\k8s.gcr.io\k8s-dns-kube-dns-amd64_1.14.13
I0522 22:17:24.688418 35080 cache_images.go:309] Attempting to cache image: k8s.gcr.io/pause:3.1 at C:\Users\gouravba.minikube\cache\images\k8s.gcr.io\pause_3.1
I0522 22:17:24.688418 35080 cache_images.go:309] Attempting to cache image: k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13 at C:\Users\gouravba.minikube\cache\images\k8s.gcr.io\k8s-dns-dnsmasq-nanny-amd64_1.14.13
I0522 22:17:24.688418 35080 cache_images.go:309] Attempting to cache image: k8s.gcr.io/coredns:1.3.1 at C:\Users\gouravba.minikube\cache\images\k8s.gcr.io\coredns_1.3.1
I0522 22:17:24.688418 35080 cache_images.go:309] Attempting to cache image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1 at C:\Users\gouravba.minikube\cache\images\k8s.gcr.io\kubernetes-dashboard-amd64_v1.10.1
I0522 22:17:24.688418 35080 cache_images.go:309] Attempting to cache image: k8s.gcr.io/kube-addon-manager:v9.0 at C:\Users\gouravba.minikube\cache\images\k8s.gcr.io\kube-addon-manager_v9.0
I0522 22:17:24.716421 35080 cluster.go:78] Machine does not exist... provisioning new machine
I0522 22:17:24.761418 35080 cluster.go:79] Provisioning machine with config: {MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.0.1.iso Memory:2048 CPUs:2 DiskSize:20000 VMDriver:hyperv ContainerRuntime:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] XhyveDiskDriver:ahci-hd DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch:MinikubeSwitch KvmNetwork:default Downloader:{} DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: GPU:false Hidden:false NoVTXCheck:false}

Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
I0522 22:17:35.278508 35080 cache_images.go:306]
I0522 22:17:35.278508 35080 cache_images.go:306]
I0522 22:17:35.279508 35080 cache_images.go:306]
I0522 22:17:35.279508 35080 cache_images.go:306]
I0522 22:17:35.279508 35080 cache_images.go:306]
I0522 22:17:35.279508 35080 cache_images.go:306]
I0522 22:17:35.279508 35080 cache_images.go:306]
I0522 22:17:35.279508 35080 cache_images.go:306]
I0522 22:17:35.279508 35080 cache_images.go:306]
I0522 22:17:35.279508 35080 cache_images.go:306]
I0522 22:17:35.279508 35080 cache_images.go:306]
I0522 22:17:35.279508 35080 cache_images.go:306]
I0522 22:17:35.279508 35080 cache_images.go:306]
I0522 22:17:35.357358 35080 cache_images.go:86] Successfully cached all images.
I0522 22:19:25.360845 35080 ssh_runner.go:101] SSH: sudo rm -f /etc/docker/ca.pem
I0522 22:19:25.418718 35080 ssh_runner.go:101] SSH: sudo mkdir -p /etc/docker
I0522 22:19:25.442382 35080 ssh_runner.go:101] SSH: sudo rm -f /etc/docker/server.pem
I0522 22:19:25.454751 35080 ssh_runner.go:101] SSH: sudo mkdir -p /etc/docker
I0522 22:19:25.477663 35080 ssh_runner.go:101] SSH: sudo rm -f /etc/docker/server-key.pem
I0522 22:19:25.496386 35080 ssh_runner.go:101] SSH: sudo mkdir -p /etc/docker

  • "minikube" IP address is 192.168.1.12
    I0522 22:19:50.339341 35080 start.go:652] Saving config:
    {
    "MachineConfig": {
    "MinikubeISO": "https://storage.googleapis.com/minikube/iso/minikube-v1.0.1.iso",
    "Memory": 2048,
    "CPUs": 2,
    "DiskSize": 20000,
    "VMDriver": "hyperv",
    "ContainerRuntime": "docker",
    "HyperkitVpnKitSock": "",
    "HyperkitVSockPorts": [],
    "XhyveDiskDriver": "ahci-hd",
    "DockerEnv": null,
    "InsecureRegistry": null,
    "RegistryMirror": null,
    "HostOnlyCIDR": "192.168.99.1/24",
    "HypervVirtualSwitch": "MinikubeSwitch",
    "KvmNetwork": "default",
    "DockerOpt": null,
    "DisableDriverMounts": false,
    "NFSShare": [],
    "NFSSharesRoot": "/nfsshares",
    "UUID": "",
    "GPU": false,
    "Hidden": false,
    "NoVTXCheck": false
    },
    "KubernetesConfig": {
    "KubernetesVersion": "v1.14.1",
    "NodeIP": "192.168.1.12",
    "NodePort": 8443,
    "NodeName": "minikube",
    "APIServerName": "minikubeCA",
    "APIServerNames": null,
    "APIServerIPs": null,
    "DNSDomain": "cluster.local",
    "ContainerRuntime": "docker",
    "CRISocket": "",
    "NetworkPlugin": "",
    "FeatureGates": "",
    "ServiceCIDR": "10.96.0.0/12",
    "ImageRepository": "",
    "ExtraOptions": null,
    "ShouldLoadCachedImages": true,
    "EnableDefaultCNI": false
    }
    }
  • Configuring Docker as the container runtime ...
    I0522 22:19:55.290845 35080 ssh_runner.go:101] SSH: systemctl is-active --quiet service containerd
    I0522 22:19:55.351607 35080 ssh_runner.go:101] SSH: systemctl is-active --quiet service crio
    I0522 22:19:55.364506 35080 ssh_runner.go:101] SSH: sudo systemctl stop crio
    I0522 22:19:55.417991 35080 ssh_runner.go:101] SSH: systemctl is-active --quiet service crio
    I0522 22:19:55.436410 35080 ssh_runner.go:101] SSH: sudo systemctl start docker
    I0522 22:19:56.060160 35080 ssh_runner.go:137] Run with output: docker version --format '{{.Server.Version}}'
    I0522 22:19:56.194967 35080 utils.go:240] > 18.06.3-ce
  • Version of container runtime is 18.06.3-ce
    : Waiting for image downloads to complete ...
  • Preparing Kubernetes environment ...
    I0522 22:20:01.125937 35080 cache_images.go:207] Loading image from cache: C:\Users\gouravba.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v1.8.1
    I0522 22:20:01.125937 35080 cache_images.go:207] Loading image from cache: C:\Users\gouravba.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.14.1
    I0522 22:20:01.125937 35080 cache_images.go:207] Loading image from cache: C:\Users\gouravba.minikube\cache\images\k8s.gcr.io\k8s-dns-kube-dns-amd64_1.14.13
    I0522 22:20:01.125937 35080 cache_images.go:207] Loading image from cache: C:\Users\gouravba.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.14.1
    I0522 22:20:01.125937 35080 cache_images.go:207] Loading image from cache: C:\Users\gouravba.minikube\cache\images\k8s.gcr.io\pause_3.1
    I0522 22:20:01.125937 35080 cache_images.go:207] Loading image from cache: C:\Users\gouravba.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.14.1
    I0522 22:20:01.125937 35080 cache_images.go:207] Loading image from cache: C:\Users\gouravba.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.14.1
    I0522 22:20:01.125937 35080 cache_images.go:207] Loading image from cache: C:\Users\gouravba.minikube\cache\images\k8s.gcr.io\etcd_3.3.10
    I0522 22:20:01.125937 35080 cache_images.go:207] Loading image from cache: C:\Users\gouravba.minikube\cache\images\k8s.gcr.io\kubernetes-dashboard-amd64_v1.10.1
    I0522 22:20:01.212942 35080 ssh_runner.go:101] SSH: sudo rm -f /tmp/storage-provisioner_v1.8.1
    I0522 22:20:01.125937 35080 cache_images.go:207] Loading image from cache: C:\Users\gouravba.minikube\cache\images\k8s.gcr.io\k8s-dns-sidecar-amd64_1.14.13
    I0522 22:20:01.125937 35080 cache_images.go:207] Loading image from cache: C:\Users\gouravba.minikube\cache\images\k8s.gcr.io\coredns_1.3.1
    I0522 22:20:01.125937 35080 cache_images.go:207] Loading image from cache: C:\Users\gouravba.minikube\cache\images\k8s.gcr.io\k8s-dns-dnsmasq-nanny-amd64_1.14.13
    I0522 22:20:01.125937 35080 cache_images.go:207] Loading image from cache: C:\Users\gouravba.minikube\cache\images\k8s.gcr.io\kube-addon-manager_v9.0
    I0522 22:20:01.273952 35080 ssh_runner.go:101] SSH: sudo mkdir -p /tmp
    I0522 22:20:01.341725 35080 ssh_runner.go:101] SSH: sudo rm -f /tmp/kube-controller-manager_v1.14.1
    I0522 22:20:01.359730 35080 ssh_runner.go:101] SSH: sudo rm -f /tmp/pause_3.1
    I0522 22:20:01.361723 35080 ssh_runner.go:101] SSH: sudo rm -f /tmp/kube-scheduler_v1.14.1
    I0522 22:20:01.362725 35080 ssh_runner.go:101] SSH: sudo rm -f /tmp/kube-apiserver_v1.14.1
    I0522 22:20:01.362725 35080 ssh_runner.go:101] SSH: sudo rm -f /tmp/etcd_3.3.10
    I0522 22:20:01.367727 35080 ssh_runner.go:101] SSH: sudo rm -f /tmp/kubernetes-dashboard-amd64_v1.10.1
    I0522 22:20:01.368730 35080 ssh_runner.go:101] SSH: sudo rm -f /tmp/k8s-dns-sidecar-amd64_1.14.13
    I0522 22:20:01.368730 35080 ssh_runner.go:101] SSH: sudo rm -f /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.13
    I0522 22:20:01.368730 35080 ssh_runner.go:101] SSH: sudo rm -f /tmp/coredns_1.3.1
    I0522 22:20:01.369766 35080 ssh_runner.go:101] SSH: sudo rm -f /tmp/kube-addon-manager_v9.0
    I0522 22:20:01.382733 35080 ssh_runner.go:101] SSH: sudo rm -f /tmp/k8s-dns-kube-dns-amd64_1.14.13
    I0522 22:20:01.410739 35080 ssh_runner.go:101] SSH: sudo mkdir -p /tmp
    I0522 22:20:01.411723 35080 ssh_runner.go:101] SSH: sudo rm -f /tmp/kube-proxy_v1.14.1
    I0522 22:20:01.416728 35080 ssh_runner.go:101] SSH: sudo mkdir -p /tmp
    I0522 22:20:01.465735 35080 ssh_runner.go:101] SSH: sudo mkdir -p /tmp
    I0522 22:20:01.493744 35080 ssh_runner.go:101] SSH: sudo mkdir -p /tmp
    I0522 22:20:01.507730 35080 ssh_runner.go:101] SSH: sudo mkdir -p /tmp
    I0522 22:20:01.519729 35080 ssh_runner.go:101] SSH: sudo mkdir -p /tmp
    I0522 22:20:01.537723 35080 ssh_runner.go:101] SSH: sudo mkdir -p /tmp
    I0522 22:20:01.554745 35080 ssh_runner.go:101] SSH: sudo mkdir -p /tmp
    I0522 22:20:01.592736 35080 ssh_runner.go:101] SSH: sudo mkdir -p /tmp
    I0522 22:20:01.598733 35080 ssh_runner.go:101] SSH: sudo mkdir -p /tmp
    I0522 22:20:01.675151 35080 ssh_runner.go:101] SSH: sudo mkdir -p /tmp
    I0522 22:20:01.702611 35080 ssh_runner.go:101] SSH: sudo mkdir -p /tmp
    I0522 22:20:01.928263 35080 docker.go:86] Loading image: /tmp/pause_3.1
    I0522 22:20:01.928263 35080 ssh_runner.go:101] SSH: docker load -i /tmp/pause_3.1
    I0522 22:20:02.625716 35080 utils.go:240] > Loaded image: k8s.gcr.io/pause:3.1
    I0522 22:20:02.647718 35080 ssh_runner.go:101] SSH: sudo rm -rf /tmp/pause_3.1
    I0522 22:20:02.739433 35080 cache_images.go:236] Successfully loaded image C:\Users\gouravba.minikube\cache\images\k8s.gcr.io\pause_3.1 from cache
    I0522 22:20:05.137867 35080 docker.go:86] Loading image: /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.13
    I0522 22:20:05.137867 35080 ssh_runner.go:101] SSH: docker load -i /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.13
    I0522 22:20:09.513151 35080 utils.go:240] > Loaded image: k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13
    I0522 22:20:09.522147 35080 ssh_runner.go:101] SSH: sudo rm -rf /tmp/k8s-dns-dnsmasq-nanny-amd64_1.14.13
    I0522 22:20:09.576147 35080 cache_images.go:236] Successfully loaded image C:\Users\gouravba.minikube\cache\images\k8s.gcr.io\k8s-dns-dnsmasq-nanny-amd64_1.14.13 from cache
    I0522 22:20:09.580147 35080 docker.go:86] Loading image: /tmp/storage-provisioner_v1.8.1
    I0522 22:20:09.591152 35080 ssh_runner.go:101] SSH: docker load -i /tmp/storage-provisioner_v1.8.1
    I0522 22:20:11.970272 35080 utils.go:240] > Loaded image: gcr.io/k8s-minikube/storage-provisioner:v1.8.1
    I0522 22:20:11.977424 35080 ssh_runner.go:101] SSH: sudo rm -rf /tmp/storage-provisioner_v1.8.1
    I0522 22:20:11.992348 35080 cache_images.go:236] Successfully loaded image C:\Users\gouravba.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v1.8.1 from cache
    I0522 22:20:11.993342 35080 docker.go:86] Loading image: /tmp/k8s-dns-sidecar-amd64_1.14.13
    I0522 22:20:12.006511 35080 ssh_runner.go:101] SSH: docker load -i /tmp/k8s-dns-sidecar-amd64_1.14.13
    I0522 22:20:13.497860 35080 utils.go:240] > Loaded image: k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13
    I0522 22:20:13.503492 35080 ssh_runner.go:101] SSH: sudo rm -rf /tmp/k8s-dns-sidecar-amd64_1.14.13
    I0522 22:20:13.521570 35080 cache_images.go:236] Successfully loaded image C:\Users\gouravba.minikube\cache\images\k8s.gcr.io\k8s-dns-sidecar-amd64_1.14.13 from cache
    I0522 22:20:13.521570 35080 docker.go:86] Loading image: /tmp/coredns_1.3.1
    I0522 22:20:13.523573 35080 ssh_runner.go:101] SSH: docker load -i /tmp/coredns_1.3.1
    I0522 22:20:14.559684 35080 utils.go:240] > Loaded image: k8s.gcr.io/coredns:1.3.1
    I0522 22:20:14.567170 35080 ssh_runner.go:101] SSH: sudo rm -rf /tmp/coredns_1.3.1
    I0522 22:20:14.587554 35080 cache_images.go:236] Successfully loaded image C:\Users\gouravba.minikube\cache\images\k8s.gcr.io\coredns_1.3.1 from cache
    I0522 22:20:14.587554 35080 docker.go:86] Loading image: /tmp/k8s-dns-kube-dns-amd64_1.14.13
    I0522 22:20:14.592561 35080 ssh_runner.go:101] SSH: docker load -i /tmp/k8s-dns-kube-dns-amd64_1.14.13
    I0522 22:20:16.127528 35080 utils.go:240] > Loaded image: k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13
    I0522 22:20:16.138061 35080 ssh_runner.go:101] SSH: sudo rm -rf /tmp/k8s-dns-kube-dns-amd64_1.14.13
    I0522 22:20:16.164604 35080 cache_images.go:236] Successfully loaded image C:\Users\gouravba.minikube\cache\images\k8s.gcr.io\k8s-dns-kube-dns-amd64_1.14.13 from cache
    I0522 22:20:16.166408 35080 docker.go:86] Loading image: /tmp/kube-scheduler_v1.14.1
    I0522 22:20:16.170408 35080 ssh_runner.go:101] SSH: docker load -i /tmp/kube-scheduler_v1.14.1
    I0522 22:20:18.874006 35080 utils.go:240] > Loaded image: k8s.gcr.io/kube-scheduler:v1.14.1
    I0522 22:20:18.886155 35080 ssh_runner.go:101] SSH: sudo rm -rf /tmp/kube-scheduler_v1.14.1
    I0522 22:20:18.899152 35080 cache_images.go:236] Successfully loaded image C:\Users\gouravba.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.14.1 from cache
    I0522 22:20:18.901152 35080 docker.go:86] Loading image: /tmp/kube-proxy_v1.14.1
    I0522 22:20:18.904168 35080 ssh_runner.go:101] SSH: docker load -i /tmp/kube-proxy_v1.14.1
    I0522 22:20:19.958148 35080 utils.go:240] > Loaded image: k8s.gcr.io/kube-proxy:v1.14.1
    I0522 22:20:19.968677 35080 ssh_runner.go:101] SSH: sudo rm -rf /tmp/kube-proxy_v1.14.1
    I0522 22:20:19.987057 35080 cache_images.go:236] Successfully loaded image C:\Users\gouravba.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.14.1 from cache
    I0522 22:20:19.990055 35080 docker.go:86] Loading image: /tmp/kube-addon-manager_v9.0
    I0522 22:20:19.997136 35080 ssh_runner.go:101] SSH: docker load -i /tmp/kube-addon-manager_v9.0
    I0522 22:20:23.642093 35080 utils.go:240] > Loaded image: k8s.gcr.io/kube-addon-manager:v9.0
    I0522 22:20:23.644816 35080 ssh_runner.go:101] SSH: sudo rm -rf /tmp/kube-addon-manager_v9.0
    I0522 22:20:23.660434 35080 cache_images.go:236] Successfully loaded image C:\Users\gouravba.minikube\cache\images\k8s.gcr.io\kube-addon-manager_v9.0 from cache
    I0522 22:20:23.662428 35080 docker.go:86] Loading image: /tmp/kubernetes-dashboard-amd64_v1.10.1
    I0522 22:20:23.664445 35080 ssh_runner.go:101] SSH: docker load -i /tmp/kubernetes-dashboard-amd64_v1.10.1
    I0522 22:20:26.769780 35080 utils.go:240] > Loaded image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
    I0522 22:20:26.770923 35080 ssh_runner.go:101] SSH: sudo rm -rf /tmp/kubernetes-dashboard-amd64_v1.10.1
    I0522 22:20:26.801616 35080 cache_images.go:236] Successfully loaded image C:\Users\gouravba.minikube\cache\images\k8s.gcr.io\kubernetes-dashboard-amd64_v1.10.1 from cache
    I0522 22:20:26.802611 35080 docker.go:86] Loading image: /tmp/kube-controller-manager_v1.14.1
    I0522 22:20:26.808608 35080 ssh_runner.go:101] SSH: docker load -i /tmp/kube-controller-manager_v1.14.1
    I0522 22:20:30.169429 35080 utils.go:240] > Loaded image: k8s.gcr.io/kube-controller-manager:v1.14.1
    I0522 22:20:30.181562 35080 ssh_runner.go:101] SSH: sudo rm -rf /tmp/kube-controller-manager_v1.14.1
    I0522 22:20:30.195718 35080 cache_images.go:236] Successfully loaded image C:\Users\gouravba.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.14.1 from cache
    I0522 22:20:30.197726 35080 docker.go:86] Loading image: /tmp/kube-apiserver_v1.14.1
    I0522 22:20:30.203742 35080 ssh_runner.go:101] SSH: docker load -i /tmp/kube-apiserver_v1.14.1
    I0522 22:20:34.001259 35080 utils.go:240] > Loaded image: k8s.gcr.io/kube-apiserver:v1.14.1
    I0522 22:20:34.029107 35080 ssh_runner.go:101] SSH: sudo rm -rf /tmp/kube-apiserver_v1.14.1
    I0522 22:20:34.058033 35080 cache_images.go:236] Successfully loaded image C:\Users\gouravba.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.14.1 from cache
    I0522 22:20:34.059032 35080 docker.go:86] Loading image: /tmp/etcd_3.3.10
    I0522 22:20:34.071188 35080 ssh_runner.go:101] SSH: docker load -i /tmp/etcd_3.3.10
    I0522 22:20:40.407633 35080 utils.go:240] > Loaded image: k8s.gcr.io/etcd:3.3.10
    I0522 22:20:40.423266 35080 ssh_runner.go:101] SSH: sudo rm -rf /tmp/etcd_3.3.10
    I0522 22:20:40.439490 35080 cache_images.go:236] Successfully loaded image C:\Users\gouravba.minikube\cache\images\k8s.gcr.io\etcd_3.3.10 from cache
    I0522 22:20:40.440505 35080 cache_images.go:113] Successfully loaded all cached images.
    I0522 22:20:40.449494 35080 kubeadm.go:447] kubelet v1.14.1 config:

[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/usr/bin/kubelet --allow-privileged=true --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests

[Install]
I0522 22:20:40.460577 35080 cache_binaries.go:61] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.14.1/bin/linux/amd64/kubeadm
I0522 22:20:40.460577 35080 cache_binaries.go:61] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.14.1/bin/linux/amd64/kubelet
I0522 22:20:40.481557 35080 ssh_runner.go:101] SSH: sudo rm -f /usr/bin/kubeadm
I0522 22:20:40.489776 35080 ssh_runner.go:101] SSH: sudo rm -f /usr/bin/kubelet
I0522 22:20:40.502889 35080 ssh_runner.go:101] SSH: sudo mkdir -p /usr/bin
I0522 22:20:40.513788 35080 ssh_runner.go:101] SSH: sudo mkdir -p /usr/bin
I0522 22:20:43.717991 35080 ssh_runner.go:101] SSH: sudo rm -f /lib/systemd/system/kubelet.service
I0522 22:20:43.741133 35080 ssh_runner.go:101] SSH: sudo mkdir -p /lib/systemd/system
I0522 22:20:43.786157 35080 ssh_runner.go:101] SSH: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0522 22:20:43.803410 35080 ssh_runner.go:101] SSH: sudo mkdir -p /etc/systemd/system/kubelet.service.d
I0522 22:20:43.841985 35080 ssh_runner.go:101] SSH: sudo rm -f /var/lib/kubeadm.yaml
I0522 22:20:43.858348 35080 ssh_runner.go:101] SSH: sudo mkdir -p /var/lib
I0522 22:20:43.880767 35080 ssh_runner.go:101] SSH: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
I0522 22:20:43.906714 35080 ssh_runner.go:101] SSH: sudo mkdir -p /etc/kubernetes/addons
I0522 22:20:43.940673 35080 ssh_runner.go:101] SSH: sudo rm -f /etc/kubernetes/manifests/addon-manager.yaml
I0522 22:20:43.956123 35080 ssh_runner.go:101] SSH: sudo mkdir -p /etc/kubernetes/manifests/
I0522 22:20:43.994767 35080 ssh_runner.go:101] SSH: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
I0522 22:20:44.014177 35080 ssh_runner.go:101] SSH: sudo mkdir -p /etc/kubernetes/addons
I0522 22:20:44.056394 35080 ssh_runner.go:101] SSH:
sudo systemctl daemon-reload &&
sudo systemctl start kubelet
I0522 22:20:44.213843 35080 certs.go:46] Setting up certificates for IP: 192.168.1.12
I0522 22:20:44.297845 35080 ssh_runner.go:101] SSH: sudo rm -f /var/lib/minikube/certs/ca.crt
I0522 22:20:44.326901 35080 ssh_runner.go:101] SSH: sudo mkdir -p /var/lib/minikube/certs/
I0522 22:20:44.375708 35080 ssh_runner.go:101] SSH: sudo rm -f /var/lib/minikube/certs/ca.key
I0522 22:20:44.406006 35080 ssh_runner.go:101] SSH: sudo mkdir -p /var/lib/minikube/certs/
I0522 22:20:44.442696 35080 ssh_runner.go:101] SSH: sudo rm -f /var/lib/minikube/certs/apiserver.crt
I0522 22:20:44.474860 35080 ssh_runner.go:101] SSH: sudo mkdir -p /var/lib/minikube/certs/
I0522 22:20:44.535925 35080 ssh_runner.go:101] SSH: sudo rm -f /var/lib/minikube/certs/apiserver.key
I0522 22:20:44.552471 35080 ssh_runner.go:101] SSH: sudo mkdir -p /var/lib/minikube/certs/
I0522 22:20:44.612635 35080 ssh_runner.go:101] SSH: sudo rm -f /var/lib/minikube/certs/proxy-client-ca.crt
I0522 22:20:44.652882 35080 ssh_runner.go:101] SSH: sudo mkdir -p /var/lib/minikube/certs/
I0522 22:20:44.721040 35080 ssh_runner.go:101] SSH: sudo rm -f /var/lib/minikube/certs/proxy-client-ca.key
I0522 22:20:44.760474 35080 ssh_runner.go:101] SSH: sudo mkdir -p /var/lib/minikube/certs/
I0522 22:20:44.846351 35080 ssh_runner.go:101] SSH: sudo rm -f /var/lib/minikube/certs/proxy-client.crt
I0522 22:20:44.875449 35080 ssh_runner.go:101] SSH: sudo mkdir -p /var/lib/minikube/certs/
I0522 22:20:44.979768 35080 ssh_runner.go:101] SSH: sudo rm -f /var/lib/minikube/certs/proxy-client.key
I0522 22:20:44.997440 35080 ssh_runner.go:101] SSH: sudo mkdir -p /var/lib/minikube/certs/
I0522 22:20:45.037998 35080 ssh_runner.go:101] SSH: sudo rm -f /var/lib/minikube/kubeconfig
I0522 22:20:45.069836 35080 ssh_runner.go:101] SSH: sudo mkdir -p /var/lib/minikube
I0522 22:20:50.106007 35080 kubeconfig.go:127] Using kubeconfig: C:\Users\gouravba/.kube/config

  • Pulling images required by Kubernetes v1.14.1 ...
    I0522 22:20:50.121435 35080 ssh_runner.go:101] SSH: sudo kubeadm config images pull --config /var/lib/kubeadm.yaml
    I0522 22:20:55.435355 35080 utils.go:240] > [config/images] Pulled k8s.gcr.io/kube-apiserver:v1.14.1
    I0522 22:21:01.436713 35080 utils.go:240] > [config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.14.1
    I0522 22:21:06.462994 35080 utils.go:240] > [config/images] Pulled k8s.gcr.io/kube-scheduler:v1.14.1
    I0522 22:21:11.121463 35080 utils.go:240] > [config/images] Pulled k8s.gcr.io/kube-proxy:v1.14.1
    I0522 22:21:16.123360 35080 utils.go:240] > [config/images] Pulled k8s.gcr.io/pause:3.1
    I0522 22:21:20.179886 35080 utils.go:240] > [config/images] Pulled k8s.gcr.io/etcd:3.3.10
    I0522 22:21:27.767412 35080 utils.go:240] > [config/images] Pulled k8s.gcr.io/coredns:1.3.1
  • Launching Kubernetes v1.14.1 using kubeadm ...
    I0522 22:21:27.772004 35080 ssh_runner.go:137] Run with output:
    sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI

I0522 22:21:27.832202 35080 utils.go:240] > [init] Using Kubernetes version: v1.14.1
I0522 22:21:27.832202 35080 utils.go:240] > [preflight] Running pre-flight checks
I0522 22:21:27.955423 35080 utils.go:240] ! [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
I0522 22:21:28.064327 35080 utils.go:240] ! [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0522 22:21:28.178020 35080 utils.go:240] ! [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0522 22:21:28.178020 35080 utils.go:240] > [preflight] Pulling images required for setting up a Kubernetes cluster
I0522 22:21:28.181035 35080 utils.go:240] > [preflight] This might take a minute or two, depending on the speed of your internet connection
I0522 22:21:28.183018 35080 utils.go:240] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0522 22:21:28.737620 35080 utils.go:240] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0522 22:21:28.742911 35080 utils.go:240] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0522 22:21:28.748965 35080 utils.go:240] > [kubelet-start] Activating the kubelet service
I0522 22:21:28.869834 35080 utils.go:240] > [certs] Using certificateDir folder "/var/lib/minikube/certs/"
I0522 22:21:28.869834 35080 utils.go:240] > [certs] Using existing ca certificate authority
I0522 22:21:28.872833 35080 utils.go:240] > [certs] Using existing apiserver certificate and key on disk
I0522 22:21:29.164492 35080 utils.go:240] > [certs] Generating "apiserver-kubelet-client" certificate and key
I0522 22:21:29.874746 35080 utils.go:240] > [certs] Generating "front-proxy-ca" certificate and key
I0522 22:21:30.068804 35080 utils.go:240] > [certs] Generating "front-proxy-client" certificate and key
I0522 22:21:30.201769 35080 utils.go:240] > [certs] Generating "etcd/ca" certificate and key
I0522 22:21:30.380749 35080 utils.go:240] > [certs] Generating "etcd/server" certificate and key
I0522 22:21:30.380749 35080 utils.go:240] > [certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [192.168.1.12 127.0.0.1 ::1]
I0522 22:21:30.691876 35080 utils.go:240] > [certs] Generating "etcd/peer" certificate and key
I0522 22:21:30.692871 35080 utils.go:240] > [certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [192.168.1.12 127.0.0.1 ::1]
I0522 22:21:30.813783 35080 utils.go:240] > [certs] Generating "etcd/healthcheck-client" certificate and key
I0522 22:21:31.140674 35080 utils.go:240] > [certs] Generating "apiserver-etcd-client" certificate and key
I0522 22:21:31.202300 35080 utils.go:240] > [certs] Generating "sa" key and public key
I0522 22:21:31.203298 35080 utils.go:240] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0522 22:21:31.432169 35080 utils.go:240] > [kubeconfig] Writing "admin.conf" kubeconfig file
I0522 22:21:31.599342 35080 utils.go:240] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0522 22:21:31.913717 35080 utils.go:240] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0522 22:21:32.111663 35080 utils.go:240] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0522 22:21:32.114966 35080 utils.go:240] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0522 22:21:32.115977 35080 utils.go:240] > [control-plane] Creating static Pod manifest for "kube-apiserver"
I0522 22:21:32.131468 35080 utils.go:240] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0522 22:21:32.137289 35080 utils.go:240] > [control-plane] Creating static Pod manifest for "kube-scheduler"
I0522 22:21:32.144772 35080 utils.go:240] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0522 22:21:32.153579 35080 utils.go:240] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0522 22:21:51.702017 35080 utils.go:240] > [apiclient] All control plane components are healthy after 19.548336 seconds
I0522 22:21:51.708018 35080 utils.go:240] > [upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0522 22:21:51.850656 35080 utils.go:240] > [kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
I0522 22:21:52.460549 35080 utils.go:240] > [upload-certs] Skipping phase. Please see --experimental-upload-certs
I0522 22:21:52.460549 35080 utils.go:240] > [mark-control-plane] Marking the node minikube as control-plane by adding the label "node-role.kubernetes.io/master=''"
I0522 22:21:52.979470 35080 utils.go:240] > [bootstrap-token] Using token: l33l9x.o7ucdgpljshmj7pq
I0522 22:21:52.979470 35080 utils.go:240] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0522 22:21:52.997726 35080 utils.go:240] > [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0522 22:21:53.013223 35080 utils.go:240] > [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0522 22:21:53.029062 35080 utils.go:240] > [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0522 22:21:53.045974 35080 utils.go:240] > [bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0522 22:21:53.220784 35080 utils.go:240] > [addons] Applied essential addon: CoreDNS
I0522 22:21:53.425464 35080 utils.go:240] > [addons] Applied essential addon: kube-proxy
I0522 22:21:53.429298 35080 utils.go:240] > Your Kubernetes control-plane has initialized successfully!
I0522 22:21:53.430324 35080 utils.go:240] > To start using your cluster, you need to run the following as a regular user:
I0522 22:21:53.431297 35080 utils.go:240] > mkdir -p $HOME/.kube
I0522 22:21:53.435307 35080 utils.go:240] > sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0522 22:21:53.441300 35080 utils.go:240] > sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0522 22:21:53.445345 35080 utils.go:240] > You should now deploy a pod network to the cluster.
I0522 22:21:53.452316 35080 utils.go:240] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0522 22:21:53.456310 35080 utils.go:240] > https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0522 22:21:53.458293 35080 utils.go:240] > You can now join any number of control-plane nodes by copying certificate authorities
I0522 22:21:53.464316 35080 utils.go:240] > and service account keys on each node and then running the following as root:
I0522 22:21:53.476763 35080 utils.go:240] > kubeadm join localhost:8443 --token l33l9x.o7ucdgpljshmj7pq
I0522 22:21:53.479775 35080 utils.go:240] > --discovery-token-ca-cert-hash sha256:3e69edb3eb612698a9cf96f7ce1aa661537d378c0346ea6b28ba94a8470f29fa
I0522 22:21:53.487781 35080 utils.go:240] > --experimental-control-plane
I0522 22:21:53.492766 35080 utils.go:240] > Then you can join any number of worker nodes by running the following on each as root:
I0522 22:21:53.498776 35080 utils.go:240] > kubeadm join localhost:8443 --token l33l9x.o7ucdgpljshmj7pq
I0522 22:21:53.501763 35080 utils.go:240] > --discovery-token-ca-cert-hash sha256:3e69edb3eb612698a9cf96f7ce1aa661537d378c0346ea6b28ba94a8470f29fa
: Waiting for pods: apiserverI0522 22:21:53.610763 35080 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-apiserver" ...
I0522 22:21:53.663760 35080 kubernetes.go:134] Found 0 Pods for label selector component=kube-apiserver
I0522 22:22:46.675526 35080 kubernetes.go:134] Found 1 Pods for label selector component=kube-apiserver
proxyI0522 22:22:49.676332 35080 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "k8s-app=kube-proxy" ...
I0522 22:22:49.686189 35080 kubernetes.go:134] Found 1 Pods for label selector k8s-app=kube-proxy
etcdI0522 22:22:49.690186 35080 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=etcd" ...
I0522 22:22:49.703804 35080 kubernetes.go:134] Found 0 Pods for label selector component=etcd
I0522 22:23:11.711556 35080 kubernetes.go:134] Found 1 Pods for label selector component=etcd
schedulerI0522 22:23:19.712690 35080 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-scheduler" ...
I0522 22:23:19.719476 35080 kubernetes.go:134] Found 1 Pods for label selector component=kube-scheduler
controllerI0522 22:23:19.722471 35080 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-controller-manager" ...
I0522 22:23:19.730474 35080 kubernetes.go:134] Found 1 Pods for label selector component=kube-controller-manager
dnsI0522 22:23:19.748147 35080 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "k8s-app=kube-dns" ...
I0522 22:23:19.761357 35080 kubernetes.go:134] Found 2 Pods for label selector k8s-app=kube-dns

  • Configuring cluster permissions ...
    I0522 22:23:19.953349 35080 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-apiserver" ...
    I0522 22:23:19.965421 35080 kubernetes.go:134] Found 1 Pods for label selector component=kube-apiserver
    I0522 22:23:19.965421 35080 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "k8s-app=kube-proxy" ...
    I0522 22:23:19.971418 35080 kubernetes.go:134] Found 1 Pods for label selector k8s-app=kube-proxy
    I0522 22:23:19.971418 35080 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=etcd" ...
    I0522 22:23:19.998758 35080 kubernetes.go:134] Found 1 Pods for label selector component=etcd
    I0522 22:23:19.998758 35080 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-scheduler" ...
    I0522 22:23:20.023834 35080 kubernetes.go:134] Found 1 Pods for label selector component=kube-scheduler
    I0522 22:23:20.023834 35080 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "component=kube-controller-manager" ...
    I0522 22:23:20.036022 35080 kubernetes.go:134] Found 1 Pods for label selector component=kube-controller-manager
    I0522 22:23:20.037020 35080 kubernetes.go:123] Waiting for pod with label "kube-system" in ns "k8s-app=kube-dns" ...
    I0522 22:23:20.061047 35080 kubernetes.go:134] Found 2 Pods for label selector k8s-app=kube-dns
  • Verifying component health ...I0522 22:23:20.063048 35080 ssh_runner.go:137] Run with output: sudo systemctl is-active kubelet
    I0522 22:23:20.080981 35080 utils.go:240] > active
    .I0522 22:23:20.100056 35080 kubeadm.go:129] https://192.168.1.12:8443/healthz response: &{Status:200 OK StatusCode:200 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff] Date:[Wed, 22 May 2019 16:53:20 GMT] Content-Length:[2]] Body:0xc0007bc540 ContentLength:2 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00019e700 TLS:0xc00000e580}
    .
  • kubectl is now configured to use "minikube"
    = Done! Thank you for using minikube!
    PS C:\Users\gouravba>
    PS C:\Users\gouravba>
    PS C:\Users\gouravba>
    PS C:\Users\gouravba> kubectl get pods -n kube-system
    Unable to connect to the server: dial tcp 192.168.1.12:8443: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
    PS C:\Users\gouravba> kubectl get pods -n kube-system
    Unable to connect to the server: dial tcp 192.168.1.12:8443: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
    PS C:\Users\gouravba>
    PS C:\Users\gouravba>
    PS C:\Users\gouravba> minikube dashboard
  • Enabling dashboard ...

! Unable to enable dashboard
X Error: [SSH_TCP_FAILURE] [command runner: getting ssh client for bootstrapper: Error dialing tcp via ssh client: dial tcp [fe80::215:5dff:fe80:7309]:22: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.]
i Advice: Your host is failing to route packets to the minikube VM. If you have VPN software, try turning it off or configuring it so that it does not re-route traffic to the VM IP. If not, check your VM environment routing options.

  • If the above advice does not help, please let us know:
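
The same routing check can be pointed at the apiserver port that kubectl fails to reach, and at the virtual switch the VM is attached to. This is only a diagnostic sketch using the stock Hyper-V cmdlets; MinikubeSwitch is the switch passed to --hyperv-virtual-switch in the start command above:

```powershell
# Probe the Kubernetes apiserver port that kubectl could not reach
Test-NetConnection -ComputerName 192.168.1.12 -Port 8443

# Confirm which switch the VM is attached to and which addresses it was given
Get-VMSwitch -Name MinikubeSwitch
Get-VMNetworkAdapter -VMName minikube | Select-Object SwitchName, IPAddresses
```
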
@tstromberg changed the title from "[SSH_TCP_FAILURE] [command runner: getting ssh client for bootstrapper: Error dialing tcp via ssh client:" to "hyperv dashboard: [SSH_TCP_FAILURE] Error dialing tcp via ssh client" on May 22, 2019
@tstromberg
Contributor

Interesting. Do you mind sharing the output of:

minikube logs
minikube status

This seems like a hyperv network configuration issue, but it could also be something worse, like memory pressure (hyperv defaults to Dynamic Memory Management: #1776).
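
If Dynamic Memory is the suspect, one way to rule it out is to pin the VM to fixed memory with the standard Hyper-V cmdlet. This is only a sketch; it assumes the VM is named minikube, is stopped before the memory settings are changed, and keeps the 2 GB allocation from the config above:

```powershell
# Stop the VM, disable Dynamic Memory, pin it to a fixed 2 GB, and start again
minikube stop
Set-VMMemory -VMName minikube -DynamicMemoryEnabled $false -StartupBytes 2GB
minikube start --vm-driver=hyperv --hyperv-virtual-switch=MinikubeSwitch
```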

@tstromberg added the labels co/dashboard (dashboard related issues), co/hyperv (HyperV related issues), co/sshd (ssh related issues), priority/awaiting-more-evidence (Lowest priority. Possibly useful, but not yet enough support to actually get it done.) and triage/needs-information (Indicates an issue needs more information in order to work on it.) on May 22, 2019
@tstromberg
Contributor

Thank you for sharing your experience!

This issue appears to be a duplicate of #2414, so I will close this one in favor of that issue so that we may centralize the information around it. If you feel that this is not in fact a duplicate, please feel free to re-open this issue.
