
Kube cluster shuts down after a few minutes, Minikube VM still running #2326

Closed
JockDaRock opened this issue Dec 15, 2017 · 6 comments
Labels: co/hyperv, kind/bug, os/windows

Comments

@JockDaRock

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT

Please provide the following details:

Environment:
  • Windows 10 / Surface Pro 4, 16 GB RAM, i7 processor

  • Minikube version (use minikube version): v0.24.1

  • OS (e.g. from /etc/os-release): Windows 10

  • VM Driver (e.g. cat ~/.minikube/machines/minikube/config.json | grep DriverName): hyper-v

  • ISO version (e.g. cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): v0.23.6

  • Install tools: chocolatey

  • Others:
    The above can be generated in one go with the following commands (can be copied and pasted directly into your terminal):

minikube version
echo ""
echo "OS:"
cat /etc/os-release
echo ""
echo "VM driver:"
grep DriverName ~/.minikube/machines/minikube/config.json
echo ""
echo "ISO version:"
grep -i ISO ~/.minikube/machines/minikube/config.json
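
On a Windows host, where cat and grep are unavailable in a stock shell, a rough PowerShell equivalent (a sketch assuming the default .minikube location) is:

minikube version
Get-Content "$env:USERPROFILE\.minikube\machines\minikube\config.json" | Select-String DriverName
Get-Content "$env:USERPROFILE\.minikube\machines\minikube\config.json" | Select-String ISO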

What happened:

I created a virtual switch in Hyper-V Manager on Windows to use with minikube. To do this, I opened Hyper-V Manager, clicked the Virtual Switch Manager setting on the right, added a new virtual switch called "Primary Virtual Switch", and selected External as the switch type.
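
For reference, the same switch can be created from an elevated PowerShell prompt; a minimal sketch, assuming the physical adapter is named "Ethernet" (substitute your own adapter name):

New-VMSwitch -Name "Primary Virtual Switch" -NetAdapterName "Ethernet" -AllowManagementOS $true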

I then executed the following command in an administrator CLI:
minikube start --vm-driver hyperv --hyperv-virtual-switch "Primary Virtual Switch"

waited for my cluster to start...

After the cluster starts running, I deploy an app for testing.

Everything starts up just fine and is working as expected... until it's NOT working.

A few minutes later I am unable to access any part of the application I deployed. The cluster shuts down: minikube status reports the VM as running but the cluster as stopped.

c:\Users\random\faas-netes>minikube status
minikube: Running
cluster: Stopped
kubectl: Correctly Configured: pointing to minikube-vm at 172.16.0.125
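
To confirm that the VM was up while the cluster was not, checks along these lines can be run (a hedged sketch; localkube is the systemd unit visible in the logs below):

kubectl get nodes
minikube ssh "systemctl status localkube"
minikube logs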

What you expected to happen:
I expect the kube cluster to remain running...

How to reproduce it (as minimally and precisely as possible):
See the above description of what happened.

The logs below were all I was able to get.
Output of minikube logs (if applicable):

Dec 15 16:17:29 minikube systemd[1]: Starting Localkube...
Dec 15 16:17:29 minikube localkube[3279]: listening for peers on http://localhost:2380
Dec 15 16:17:29 minikube localkube[3279]: listening for client requests on localhost:2379
Dec 15 16:17:29 minikube localkube[3279]: name = default
Dec 15 16:17:29 minikube localkube[3279]: data dir = /var/lib/localkube/etcd
Dec 15 16:17:29 minikube localkube[3279]: member dir = /var/lib/localkube/etcd/member
Dec 15 16:17:29 minikube localkube[3279]: heartbeat = 100ms
Dec 15 16:17:29 minikube localkube[3279]: election = 1000ms
Dec 15 16:17:29 minikube localkube[3279]: snapshot count = 10000
Dec 15 16:17:29 minikube localkube[3279]: advertise client URLs = http://localhost:2379
Dec 15 16:17:29 minikube localkube[3279]: initial advertise peer URLs = http://localhost:2380
Dec 15 16:17:29 minikube localkube[3279]: initial cluster = default=http://localhost:2380
Dec 15 16:17:29 minikube localkube[3279]: starting member 8e9e05c52164694d in cluster cdf818194e3a8c32
Dec 15 16:17:29 minikube localkube[3279]: 8e9e05c52164694d became follower at term 0
Dec 15 16:17:29 minikube localkube[3279]: newRaft 8e9e05c52164694d [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
Dec 15 16:17:29 minikube localkube[3279]: 8e9e05c52164694d became follower at term 1
Dec 15 16:17:29 minikube localkube[3279]: starting server... [version: 3.1.10, cluster version: to_be_decided]
Dec 15 16:17:29 minikube localkube[3279]: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32
Dec 15 16:17:29 minikube localkube[3279]: 8e9e05c52164694d is starting a new election at term 1
Dec 15 16:17:29 minikube localkube[3279]: 8e9e05c52164694d became candidate at term 2
Dec 15 16:17:29 minikube localkube[3279]: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 2
Dec 15 16:17:29 minikube localkube[3279]: 8e9e05c52164694d became leader at term 2
Dec 15 16:17:29 minikube localkube[3279]: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 2
Dec 15 16:17:29 minikube localkube[3279]: setting up the initial cluster version to 3.1
Dec 15 16:17:29 minikube localkube[3279]: set the initial cluster version to 3.1
Dec 15 16:17:29 minikube localkube[3279]: I1215 16:17:29.827575    3279 etcd.go:58] Etcd server is ready
Dec 15 16:17:29 minikube localkube[3279]: localkube host ip address: 172.16.0.125
Dec 15 16:17:29 minikube localkube[3279]: enabled capabilities for version 3.1
Dec 15 16:17:29 minikube localkube[3279]: Starting apiserver...
Dec 15 16:17:29 minikube localkube[3279]: Waiting for apiserver to be healthy...
Dec 15 16:17:29 minikube localkube[3279]: I1215 16:17:29.828286    3279 server.go:114] Version: v1.8.0
Dec 15 16:17:29 minikube localkube[3279]: W1215 16:17:29.828594    3279 authentication.go:380] AnonymousAuth is not allowed with the AllowAll authorizer.  Resetting AnonymousAuth to false. You should use a different authorizer
Dec 15 16:17:29 minikube localkube[3279]: published {Name:default ClientURLs:[http://localhost:2379]} to cluster cdf818194e3a8c32
Dec 15 16:17:29 minikube localkube[3279]: ready to serve client requests
Dec 15 16:17:29 minikube localkube[3279]: I1215 16:17:29.829105    3279 plugins.go:101] No cloud provider specified.
Dec 15 16:17:29 minikube localkube[3279]: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
Dec 15 16:17:30 minikube localkube[3279]: [restful] 2017/12/15 16:17:30 log.go:33: [restful/swagger] listing is available at https://172.16.0.125:8443/swaggerapi
Dec 15 16:17:30 minikube localkube[3279]: [restful] 2017/12/15 16:17:30 log.go:33: [restful/swagger] https://172.16.0.125:8443/swaggerui/ is mapped to folder /swagger-ui/
Dec 15 16:17:30 minikube localkube[3279]: [restful] 2017/12/15 16:17:30 log.go:33: [restful/swagger] listing is available at https://172.16.0.125:8443/swaggerapi
Dec 15 16:17:30 minikube localkube[3279]: [restful] 2017/12/15 16:17:30 log.go:33: [restful/swagger] https://172.16.0.125:8443/swaggerui/ is mapped to folder /swagger-ui/
Dec 15 16:17:30 minikube localkube[3279]: I1215 16:17:30.828358    3279 ready.go:30] Performing healthcheck on https://localhost:8443/healthz
Dec 15 16:17:30 minikube localkube[3279]: E1215 16:17:30.829264    3279 ready.go:40] Error performing healthcheck: Get https://localhost:8443/healthz: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Dec 15 16:17:31 minikube localkube[3279]: I1215 16:17:31.828457    3279 ready.go:30] Performing healthcheck on https://localhost:8443/healthz
Dec 15 16:17:31 minikube localkube[3279]: E1215 16:17:31.829910    3279 ready.go:40] Error performing healthcheck: Get https://localhost:8443/healthz: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Dec 15 16:17:32 minikube localkube[3279]: I1215 16:17:32.828409    3279 ready.go:30] Performing healthcheck on https://localhost:8443/healthz
Dec 15 16:17:32 minikube localkube[3279]: E1215 16:17:32.829581    3279 ready.go:40] Error performing healthcheck: Get https://localhost:8443/healthz: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Dec 15 16:17:33 minikube localkube[3279]: I1215 16:17:33.006276    3279 aggregator.go:138] Skipping APIService creation for scheduling.k8s.io/v1alpha1
Dec 15 16:17:33 minikube localkube[3279]: I1215 16:17:33.007358    3279 serve.go:85] Serving securely on 0.0.0.0:8443
Dec 15 16:17:33 minikube localkube[3279]: I1215 16:17:33.007611    3279 controller.go:84] Starting OpenAPI AggregationController
Dec 15 16:17:33 minikube localkube[3279]: I1215 16:17:33.008429    3279 available_controller.go:192] Starting AvailableConditionController
Dec 15 16:17:33 minikube localkube[3279]: I1215 16:17:33.008607    3279 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
Dec 15 16:17:33 minikube systemd[1]: Started Localkube.
Dec 15 16:17:33 minikube localkube[3279]: I1215 16:17:33.013500    3279 crd_finalizer.go:242] Starting CRDFinalizer
Dec 15 16:17:33 minikube localkube[3279]: I1215 16:17:33.014173    3279 apiservice_controller.go:112] Starting APIServiceRegistrationController
Dec 15 16:17:33 minikube localkube[3279]: I1215 16:17:33.014196    3279 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
Dec 15 16:17:33 minikube localkube[3279]: I1215 16:17:33.014214    3279 crdregistration_controller.go:112] Starting crd-autoregister controller
Dec 15 16:17:33 minikube localkube[3279]: I1215 16:17:33.014219    3279 controller_utils.go:1041] Waiting for caches to sync for crd-autoregister controller
Dec 15 16:17:33 minikube localkube[3279]: I1215 16:17:33.014243    3279 customresource_discovery_controller.go:152] Starting DiscoveryController
Dec 15 16:17:33 minikube localkube[3279]: I1215 16:17:33.014254    3279 naming_controller.go:277] Starting NamingConditionController
Dec 15 16:17:33 minikube localkube[3279]: I1215 16:17:33.109552    3279 cache.go:39] Caches are synced for AvailableConditionController controller
Dec 15 16:17:33 minikube localkube[3279]: I1215 16:17:33.115284    3279 cache.go:39] Caches are synced for APIServiceRegistrationController controller
Dec 15 16:17:33 minikube localkube[3279]: I1215 16:17:33.115318    3279 controller_utils.go:1048] Caches are synced for crd-autoregister controller
Dec 15 16:17:33 minikube localkube[3279]: I1215 16:17:33.115401    3279 autoregister_controller.go:136] Starting autoregister controller
Dec 15 16:17:33 minikube localkube[3279]: I1215 16:17:33.115407    3279 cache.go:32] Waiting for caches to sync for autoregister controller
Dec 15 16:17:33 minikube localkube[3279]: I1215 16:17:33.216031    3279 cache.go:39] Caches are synced for autoregister controller
Dec 15 16:17:33 minikube localkube[3279]: I1215 16:17:33.828455    3279 ready.go:30] Performing healthcheck on https://localhost:8443/healthz
Dec 15 16:17:33 minikube localkube[3279]: I1215 16:17:33.837314    3279 ready.go:49] Got healthcheck response: [+]ping ok
Dec 15 16:17:33 minikube localkube[3279]: [+]etcd ok
Dec 15 16:17:33 minikube localkube[3279]: [+]poststarthook/generic-apiserver-start-informers ok
Dec 15 16:17:33 minikube localkube[3279]: [+]poststarthook/start-apiextensions-informers ok
Dec 15 16:17:33 minikube localkube[3279]: [+]poststarthook/start-apiextensions-controllers ok
Dec 15 16:17:33 minikube localkube[3279]: [+]poststarthook/bootstrap-controller ok
Dec 15 16:17:33 minikube localkube[3279]: [-]poststarthook/ca-registration failed: reason withheld
Dec 15 16:17:33 minikube localkube[3279]: [+]poststarthook/start-kube-apiserver-informers ok
Dec 15 16:17:33 minikube localkube[3279]: [+]poststarthook/start-kube-aggregator-informers ok
Dec 15 16:17:33 minikube localkube[3279]: [+]poststarthook/apiservice-registration-controller ok
Dec 15 16:17:33 minikube localkube[3279]: [+]poststarthook/apiservice-status-available-controller ok
Dec 15 16:17:33 minikube localkube[3279]: [+]poststarthook/apiservice-openapi-controller ok
Dec 15 16:17:33 minikube localkube[3279]: [+]poststarthook/kube-apiserver-autoregistration ok
Dec 15 16:17:33 minikube localkube[3279]: [-]autoregister-completion failed: reason withheld
Dec 15 16:17:33 minikube localkube[3279]: healthz check failed
Dec 15 16:17:34 minikube localkube[3279]: I1215 16:17:34.828812    3279 ready.go:30] Performing healthcheck on https://localhost:8443/healthz
Dec 15 16:17:34 minikube localkube[3279]: I1215 16:17:34.838407    3279 ready.go:49] Got healthcheck response: ok
Dec 15 16:17:34 minikube localkube[3279]: apiserver is ready!
Dec 15 16:17:34 minikube localkube[3279]: Starting controller-manager...
Dec 15 16:17:34 minikube localkube[3279]: Waiting for controller-manager to be healthy...
Dec 15 16:17:34 minikube localkube[3279]: I1215 16:17:34.838455    3279 controllermanager.go:109] Version: v1.8.0
Dec 15 16:17:34 minikube localkube[3279]: I1215 16:17:34.842296    3279 leaderelection.go:174] attempting to acquire leader lease...
Dec 15 16:17:34 minikube localkube[3279]: I1215 16:17:34.853430    3279 leaderelection.go:184] successfully acquired lease kube-system/kube-controller-manager
Dec 15 16:17:34 minikube localkube[3279]: I1215 16:17:34.855646    3279 event.go:218] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"75b8b30c-e1b3-11e7-bc34-00155df31b2e", APIVersion:"v1", ResourceVersion:"35", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube became leader
Dec 15 16:17:34 minikube localkube[3279]: I1215 16:17:34.867422    3279 plugins.go:101] No cloud provider specified.
Dec 15 16:17:34 minikube localkube[3279]: I1215 16:17:34.869733    3279 controller_utils.go:1041] Waiting for caches to sync for tokens controller
Dec 15 16:17:34 minikube localkube[3279]: I1215 16:17:34.869760    3279 controllermanager.go:487] Started "daemonset"
Dec 15 16:17:34 minikube localkube[3279]: I1215 16:17:34.871190    3279 controllermanager.go:487] Started "deployment"
Dec 15 16:17:34 minikube localkube[3279]: I1215 16:17:34.869778    3279 daemon_controller.go:230] Starting daemon sets controller
Dec 15 16:17:34 minikube localkube[3279]: I1215 16:17:34.872412    3279 controller_utils.go:1041] Waiting for caches to sync for daemon sets controller
Dec 15 16:17:34 minikube localkube[3279]: I1215 16:17:34.872457    3279 deployment_controller.go:151] Starting deployment controller
Dec 15 16:17:34 minikube localkube[3279]: I1215 16:17:34.872461    3279 controller_utils.go:1041] Waiting for caches to sync for deployment controller
Dec 15 16:17:34 minikube localkube[3279]: I1215 16:17:34.872529    3279 replica_set.go:156] Starting replica set controller
Dec 15 16:17:34 minikube localkube[3279]: I1215 16:17:34.872534    3279 controller_utils.go:1041] Waiting for caches to sync for replica set controller
Dec 15 16:17:34 minikube localkube[3279]: I1215 16:17:34.872999    3279 controllermanager.go:487] Started "replicaset"
Dec 15 16:17:34 minikube localkube[3279]: E1215 16:17:34.874079    3279 certificates.go:48] Failed to start certificate controller: error reading CA cert file "/etc/kubernetes/ca/ca.pem": open /etc/kubernetes/ca/ca.pem: no such file or directory
Dec 15 16:17:34 minikube localkube[3279]: W1215 16:17:34.874211    3279 controllermanager.go:484] Skipping "csrsigning"
Dec 15 16:17:34 minikube localkube[3279]: I1215 16:17:34.875600    3279 controllermanager.go:487] Started "csrapproving"
Dec 15 16:17:34 minikube localkube[3279]: I1215 16:17:34.875766    3279 certificate_controller.go:109] Starting certificate controller
Dec 15 16:17:34 minikube localkube[3279]: I1215 16:17:34.875929    3279 controller_utils.go:1041] Waiting for caches to sync for certificate controller
Dec 15 16:17:34 minikube localkube[3279]: W1215 16:17:34.877058    3279 probe.go:215] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 15 16:17:34 minikube localkube[3279]: I1215 16:17:34.877526    3279 controllermanager.go:487] Started "attachdetach"
Dec 15 16:17:34 minikube localkube[3279]: I1215 16:17:34.878698    3279 controllermanager.go:487] Started "replicationcontroller"
Dec 15 16:17:34 minikube localkube[3279]: I1215 16:17:34.879965    3279 attach_detach_controller.go:255] Starting attach detach controller
Dec 15 16:17:34 minikube localkube[3279]: I1215 16:17:34.880134    3279 controller_utils.go:1041] Waiting for caches to sync for attach detach controller
Dec 15 16:17:34 minikube localkube[3279]: I1215 16:17:34.880312    3279 replication_controller.go:151] Starting RC controller
Dec 15 16:17:34 minikube localkube[3279]: I1215 16:17:34.880725    3279 controller_utils.go:1041] Waiting for caches to sync for RC controller
Dec 15 16:17:34 minikube localkube[3279]: I1215 16:17:34.905672    3279 controllermanager.go:487] Started "namespace"
Dec 15 16:17:34 minikube localkube[3279]: I1215 16:17:34.906625    3279 namespace_controller.go:186] Starting namespace controller
Dec 15 16:17:34 minikube localkube[3279]: I1215 16:17:34.906642    3279 controller_utils.go:1041] Waiting for caches to sync for namespace controller
Dec 15 16:17:34 minikube localkube[3279]: I1215 16:17:34.970430    3279 controller_utils.go:1048] Caches are synced for tokens controller
Dec 15 16:17:35 minikube localkube[3279]: controller-manager is ready!
Dec 15 16:17:35 minikube localkube[3279]: Starting scheduler...
Dec 15 16:17:35 minikube localkube[3279]: Waiting for scheduler to be healthy...
Dec 15 16:17:35 minikube localkube[3279]: E1215 16:17:35.840824    3279 server.go:173] unable to register configz: register config "componentconfig" twice
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.108275    3279 controllermanager.go:487] Started "garbagecollector"
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.109307    3279 garbagecollector.go:136] Starting garbage collector controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.109328    3279 controller_utils.go:1041] Waiting for caches to sync for garbage collector controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.109346    3279 graph_builder.go:321] GraphBuilder running
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.110265    3279 controllermanager.go:487] Started "job"
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.110507    3279 job_controller.go:138] Starting job controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.110625    3279 controller_utils.go:1041] Waiting for caches to sync for job controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.111759    3279 controllermanager.go:487] Started "cronjob"
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.111989    3279 cronjob_controller.go:98] Starting CronJob Manager
Dec 15 16:17:36 minikube localkube[3279]: W1215 16:17:36.111914    3279 core.go:128] Unsuccessful parsing of cluster CIDR : invalid CIDR address:
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.112430    3279 core.go:131] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
Dec 15 16:17:36 minikube localkube[3279]: W1215 16:17:36.112535    3279 controllermanager.go:484] Skipping "route"
Dec 15 16:17:36 minikube localkube[3279]: W1215 16:17:36.112681    3279 controllermanager.go:484] Skipping "persistentvolume-expander"
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.113697    3279 controllermanager.go:487] Started "endpoint"
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.113985    3279 endpoints_controller.go:153] Starting endpoint controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.114006    3279 controller_utils.go:1041] Waiting for caches to sync for endpoint controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.114919    3279 controllermanager.go:487] Started "podgc"
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.115136    3279 gc_controller.go:76] Starting GC controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.115238    3279 controller_utils.go:1041] Waiting for caches to sync for GC controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.116334    3279 controllermanager.go:487] Started "resourcequota"
Dec 15 16:17:36 minikube localkube[3279]: W1215 16:17:36.116504    3279 controllermanager.go:471] "bootstrapsigner" is disabled
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.116621    3279 resource_quota_controller.go:238] Starting resource quota controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.116788    3279 controller_utils.go:1041] Waiting for caches to sync for resource quota controller
Dec 15 16:17:36 minikube localkube[3279]: E1215 16:17:36.117732    3279 core.go:70] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail.
Dec 15 16:17:36 minikube localkube[3279]: W1215 16:17:36.117863    3279 controllermanager.go:484] Skipping "service"
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.119018    3279 controllermanager.go:487] Started "persistentvolume-binder"
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.119211    3279 pv_controller_base.go:259] Starting persistent volume controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.119388    3279 controller_utils.go:1041] Waiting for caches to sync for persistent volume controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.120348    3279 node_controller.go:249] Sending events to api server.
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.120591    3279 taint_controller.go:158] Sending events to api server.
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.120734    3279 controllermanager.go:487] Started "node"
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.120897    3279 node_controller.go:516] Starting node controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.120990    3279 controller_utils.go:1041] Waiting for caches to sync for node controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.121879    3279 controllermanager.go:487] Started "serviceaccount"
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.122120    3279 serviceaccounts_controller.go:113] Starting service account controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.122361    3279 controller_utils.go:1041] Waiting for caches to sync for service account controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.124962    3279 controllermanager.go:487] Started "horizontalpodautoscaling"
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.125188    3279 horizontal.go:145] Starting HPA controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.125343    3279 controller_utils.go:1041] Waiting for caches to sync for HPA controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.126892    3279 controllermanager.go:487] Started "disruption"
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.127083    3279 disruption.go:288] Starting disruption controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.127316    3279 controller_utils.go:1041] Waiting for caches to sync for disruption controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.128757    3279 controllermanager.go:487] Started "statefulset"
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.128919    3279 stateful_set.go:146] Starting stateful set controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.129090    3279 controller_utils.go:1041] Waiting for caches to sync for stateful set controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.130058    3279 controllermanager.go:487] Started "ttl"
Dec 15 16:17:36 minikube localkube[3279]: W1215 16:17:36.130267    3279 controllermanager.go:471] "tokencleaner" is disabled
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.130224    3279 ttl_controller.go:116] Starting TTL controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.131002    3279 controller_utils.go:1041] Waiting for caches to sync for TTL controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.172660    3279 controller_utils.go:1048] Caches are synced for replica set controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.173020    3279 controller_utils.go:1048] Caches are synced for deployment controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.176296    3279 controller_utils.go:1048] Caches are synced for certificate controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.180975    3279 controller_utils.go:1048] Caches are synced for attach detach controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.206725    3279 controller_utils.go:1048] Caches are synced for namespace controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.210984    3279 controller_utils.go:1048] Caches are synced for job controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.214550    3279 controller_utils.go:1048] Caches are synced for endpoint controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.215686    3279 controller_utils.go:1048] Caches are synced for GC controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.221696    3279 controller_utils.go:1048] Caches are synced for node controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.221829    3279 taint_controller.go:181] Starting NoExecuteTaintManager
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.223108    3279 controller_utils.go:1048] Caches are synced for service account controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.226047    3279 controller_utils.go:1048] Caches are synced for HPA controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.229736    3279 controller_utils.go:1048] Caches are synced for stateful set controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.231336    3279 controller_utils.go:1048] Caches are synced for TTL controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.372555    3279 controller_utils.go:1048] Caches are synced for daemon sets controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.381001    3279 controller_utils.go:1048] Caches are synced for RC controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.409913    3279 controller_utils.go:1048] Caches are synced for garbage collector controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.410173    3279 garbagecollector.go:145] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.417107    3279 controller_utils.go:1048] Caches are synced for resource quota controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.419952    3279 controller_utils.go:1048] Caches are synced for persistent volume controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.427921    3279 controller_utils.go:1048] Caches are synced for disruption controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.428229    3279 disruption.go:296] Sending events to api server.
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.552199    3279 controller_utils.go:1041] Waiting for caches to sync for scheduler controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.653054    3279 controller_utils.go:1048] Caches are synced for scheduler controller
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.653166    3279 leaderelection.go:174] attempting to acquire leader lease...
Dec 15 16:17:36 minikube localkube[3279]: scheduler is ready!
Dec 15 16:17:36 minikube localkube[3279]: Starting kubelet...
Dec 15 16:17:36 minikube localkube[3279]: Waiting for kubelet to be healthy...
Dec 15 16:17:36 minikube localkube[3279]: I1215 16:17:36.840283    3279 feature_gate.go:156] feature gates: map[]
Dec 15 16:17:36 minikube localkube[3279]: W1215 16:17:36.840524    3279 server.go:276] --require-kubeconfig is deprecated. Set --kubeconfig without using --require-kubeconfig.
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.190608    3279 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.190641    3279 client.go:95] Start docker client with request timeout=2m0s
Dec 15 16:17:37 minikube localkube[3279]: W1215 16:17:37.195362    3279 server.go:289] --cloud-provider=auto-detect is deprecated. The desired cloud provider should be set explicitly
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.236126    3279 manager.go:149] cAdvisor running in container: "/sys/fs/cgroup/cpu,cpuacct/system.slice/localkube.service"
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.264416    3279 fs.go:139] Filesystem UUIDs: map[2017-10-19-17-24-41-00:/dev/sr0 9bcb670f-38e8-4fc9-aa80-67b1d4dc8673:/dev/sda2 d4f2c107-0c9e-4078-97f0-e479f7cef966:/dev/sda1]
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.264670    3279 fs.go:140] Filesystem partitions: map[tmpfs:{mountpoint:/dev/shm major:0 minor:17 fsType:tmpfs blockSize:0} /dev/sda1:{mountpoint:/mnt/sda1 major:8 minor:1 fsType:ext4 blockSize:0}]
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.266044    3279 manager.go:216] Machine: {NumCores:2 CpuFrequency:2207922 MemoryCapacity:2088075264 HugePages:[{PageSize:2048 NumPages:0}] MachineID:08199f0354f04e4fb198b7249f47bb91 SystemUUID:5DBF36CD-D8D7-2943-80AC-B9FA51153DF7 BootID:bddab358-daa0-4d42-a3b1-5342d8686fe0 Filesystems:[{Device:tmpfs DeviceMajor:0 DeviceMinor:17 Capacity:1044037632 Type:vfs Inodes:254892 HasInodes:true} {Device:/dev/sda1 DeviceMajor:8 DeviceMinor:1 Capacity:17293533184 Type:vfs Inodes:9732096 HasInodes:true} {Device:rootfs DeviceMajor:0 DeviceMinor:1 Capacity:0 Type:vfs Inodes:0 HasInodes:true}] DiskMap:map[8:0:{Name:sda Major:8 Minor:0 Size:20971520000 Scheduler:cfq}] NetworkDevices:[{Name:eth0 MacAddress:00:15:5d:f3:1b:2e Speed:300 Mtu:1500} {Name:sit0 MacAddress:00:00:00:00 Speed:0 Mtu:1480}] Topology:[{Id:0 Memory:2088075264 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2} {Size:4194304 Type:Unified Level:3}]} {Id:1 Threads:[1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2} {Size:4194304 Type:Unified Level:3}]}] Caches:[]}] CloudProvider:Azure InstanceType:Unknown InstanceID:5DBF36CD-D8D7-2943-80AC-B9FA51153DF7}
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.269204    3279 manager.go:222] Version: {KernelVersion:4.9.13 ContainerOsVersion:Buildroot 2017.02 DockerVersion:17.06.0-ce DockerAPIVersion:1.30 CadvisorVersion: CadvisorRevision:}
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.269773    3279 server.go:422] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.271111    3279 container_manager_linux.go:252] container manager verified user specified cgroup-root exists: /
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.271263    3279 container_manager_linux.go:257] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} ExperimentalQOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s}
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.272518    3279 container_manager_linux.go:288] Creating device plugin handler: false
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.272835    3279 kubelet.go:273] Adding manifest file: /etc/kubernetes/manifests
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.273106    3279 kubelet.go:283] Watching apiserver
Dec 15 16:17:37 minikube localkube[3279]: W1215 16:17:37.281728    3279 kubelet_network.go:69] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.281760    3279 kubelet.go:517] Hairpin mode set to "hairpin-veth"
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.286272    3279 docker_service.go:207] Docker cri networking managed by kubernetes.io/no-op
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.290793    3279 docker_service.go:224] Setting cgroupDriver to cgroupfs
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.296483    3279 remote_runtime.go:43] Connecting to runtime service unix:///var/run/dockershim.sock
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.297850    3279 kuberuntime_manager.go:174] Container runtime docker initialized, version: 17.06.0-ce, apiVersion: 1.30.0
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.297964    3279 kuberuntime_manager.go:898] updating runtime config through cri with podcidr 10.180.1.0/24
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.298049    3279 docker_service.go:306] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.180.1.0/24,},}
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.298147    3279 kubelet_network.go:276] Setting Pod CIDR:  -> 10.180.1.0/24
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.299712    3279 server.go:718] Started kubelet v1.8.0
Dec 15 16:17:37 minikube localkube[3279]: E1215 16:17:37.300148    3279 kubelet.go:1234] Image garbage collection failed once. Stats initialization may not have completed yet: unable to find data for container /
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.300696    3279 kubelet_node_status.go:276] Setting node annotation to enable volume controller attach/detach
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.301599    3279 server.go:128] Starting to listen on 0.0.0.0:10250
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.302183    3279 server.go:296] Adding debug handlers to kubelet server.
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.319604    3279 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.319652    3279 status_manager.go:140] Starting to sync pod status with apiserver
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.319662    3279 kubelet.go:1768] Starting kubelet main sync loop.
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.319678    3279 kubelet.go:1779] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
Dec 15 16:17:37 minikube localkube[3279]: E1215 16:17:37.320014    3279 container_manager_linux.go:603] [ContainerManager]: Fail to get rootfs information unable to find data for container /
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.320032    3279 volume_manager.go:246] Starting Kubelet Volume Manager
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.328672    3279 factory.go:355] Registering Docker factory
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.329456    3279 factory.go:89] Registering Rkt factory
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.331101    3279 factory.go:157] Registering CRI-O factory
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.331123    3279 factory.go:54] Registering systemd factory
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.331237    3279 factory.go:86] Registering Raw factory
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.331328    3279 manager.go:1140] Started watching for new ooms in manager
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.331674    3279 manager.go:311] Starting recovery of all containers
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.358960    3279 manager.go:316] Recovery completed
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.365054    3279 rkt.go:56] starting detectRktContainers thread
Dec 15 16:17:37 minikube localkube[3279]: E1215 16:17:37.394722    3279 eviction_manager.go:238] eviction manager: unexpected err: failed to get node info: node 'minikube' not found
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.420809    3279 kubelet_node_status.go:276] Setting node annotation to enable volume controller attach/detach
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.425710    3279 kubelet_node_status.go:83] Attempting to register node minikube
Dec 15 16:17:37 minikube localkube[3279]: kubelet is ready!
Dec 15 16:17:37 minikube localkube[3279]: Starting proxy...
Dec 15 16:17:37 minikube localkube[3279]: Waiting for proxy to be healthy...
Dec 15 16:17:37 minikube localkube[3279]: W1215 16:17:37.841341    3279 server_others.go:63] unable to register configz: register config "componentconfig" twice
Dec 15 16:17:37 minikube localkube[3279]: I1215 16:17:37.849742    3279 server_others.go:117] Using iptables Proxier.
Dec 15 16:17:38 minikube localkube[3279]: proxy is ready!
Dec 15 16:17:39 minikube localkube[3279]: sync duration of 4.2857495s, expected less than 1s
Dec 15 16:17:42 minikube localkube[3279]: I1215 16:17:42.320270    3279 kubelet_node_status.go:276] Setting node annotation to enable volume controller attach/detach
Dec 15 16:17:42 minikube localkube[3279]: I1215 16:17:42.322303    3279 kubelet_node_status.go:276] Setting node annotation to enable volume controller attach/detach
Dec 15 16:17:42 minikube localkube[3279]: E1215 16:17:42.324974    3279 helpers.go:468] PercpuUsage had 0 cpus, but the actual number is 2; ignoring extra CPUs
Dec 15 16:17:42 minikube localkube[3279]: I1215 16:17:42.420245    3279 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "addons" (UniqueName: "kubernetes.io/host-path/7b19c3ba446df5355649563d32723e4f-addons") pod "kube-addon-manager-minikube" (UID: "7b19c3ba446df5355649563d32723e4f")
Dec 15 16:17:42 minikube localkube[3279]: I1215 16:17:42.420662    3279 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/7b19c3ba446df5355649563d32723e4f-kubeconfig") pod "kube-addon-manager-minikube" (UID: "7b19c3ba446df5355649563d32723e4f")
Dec 15 16:17:43 minikube localkube[3279]: E1215 16:17:43.226238    3279 status.go:62] apiserver received an error that is not an metav1.Status: etcdserver: request timed out
Dec 15 16:17:43 minikube localkube[3279]: I1215 16:17:43.226855    3279 trace.go:76] Trace[116564143]: "Create /api/v1/namespaces/default/serviceaccounts" (started: 2017-12-15 16:17:36.2238551 +0000 UTC m=+6.745252400) (total time: 7.0029379s):
Dec 15 16:17:43 minikube localkube[3279]: Trace[116564143]: [7.0029379s] [7.002778s] END
Dec 15 16:17:43 minikube localkube[3279]: E1215 16:17:43.227495    3279 serviceaccounts_controller.go:176] default failed with : etcdserver: request timed out
Dec 15 16:17:43 minikube localkube[3279]: apply entries took too long [4.1745401s for 4 entries]
Dec 15 16:17:43 minikube localkube[3279]: avoid queries with large range/delete range!
Dec 15 16:17:43 minikube localkube[3279]: I1215 16:17:43.314113    3279 trace.go:76] Trace[1947558679]: "List /api/v1/namespaces/kube-system/limitranges" (started: 2017-12-15 16:17:42.329613 +0000 UTC m=+12.850984400) (total time: 984.4749ms):
Dec 15 16:17:43 minikube localkube[3279]: Trace[1947558679]: [984.3909ms] [984.3863ms] Listing from storage done
Dec 15 16:17:43 minikube localkube[3279]: I1215 16:17:43.315528    3279 trace.go:76] Trace[1706214133]: "List /apis/batch/v1/jobs" (started: 2017-12-15 16:17:36.1126791 +0000 UTC m=+6.634076400) (total time: 7.202813s):
Dec 15 16:17:43 minikube localkube[3279]: Trace[1706214133]: [7.2027756s] [7.2027688s] Listing from storage done
Dec 15 16:17:43 minikube localkube[3279]: I1215 16:17:43.314402    3279 trace.go:76] Trace[1159643257]: "Get /api/v1/namespaces/kube-system/pods/kube-addon-manager-minikube" (started: 2017-12-15 16:17:42.324651 +0000 UTC m=+12.846022400) (total time: 989.7355ms):
Dec 15 16:17:43 minikube localkube[3279]: Trace[1159643257]: [989.7355ms] [989.725ms] END
Dec 15 16:17:43 minikube localkube[3279]: I1215 16:17:43.314600    3279 trace.go:76] Trace[1154176478]: "List /api/v1/nodes" (started: 2017-12-15 16:17:36.2164854 +0000 UTC m=+6.737882700) (total time: 7.0980767s):
Dec 15 16:17:43 minikube localkube[3279]: Trace[1154176478]: [7.0980115s] [7.0980037s] Listing from storage done
Dec 15 16:17:43 minikube localkube[3279]: I1215 16:17:43.314695    3279 trace.go:76] Trace[1154335662]: "Get /api/v1/namespaces/kube-system/endpoints/kube-controller-manager" (started: 2017-12-15 16:17:36.8541596 +0000 UTC m=+7.375556900) (total time: 6.4605s):
Dec 15 16:17:43 minikube localkube[3279]: Trace[1154335662]: [6.460482s] [6.4604764s] About to write a response
Dec 15 16:17:43 minikube localkube[3279]: I1215 16:17:43.314743    3279 trace.go:76] Trace[997829740]: "Get /api/v1/nodes/minikube" (started: 2017-12-15 16:17:37.8506911 +0000 UTC m=+8.372088400) (total time: 5.464019s):
Dec 15 16:17:43 minikube localkube[3279]: Trace[997829740]: [5.464019s] [5.464012s] END
Dec 15 16:17:43 minikube localkube[3279]: I1215 16:17:43.315126    3279 trace.go:76] Trace[793487571]: "Create /api/v1/nodes" (started: 2017-12-15 16:17:37.4277445 +0000 UTC m=+7.949142000) (total time: 5.8873441s):
Dec 15 16:17:43 minikube localkube[3279]: Trace[793487571]: [5.8873136s] [5.8871198s] Object stored in database
Dec 15 16:17:43 minikube localkube[3279]: I1215 16:17:43.315163    3279 trace.go:76] Trace[1487976637]: "Get /api/v1/namespaces/kube-system/endpoints/kube-scheduler" (started: 2017-12-15 16:17:36.6538941 +0000 UTC m=+7.175291400) (total time: 6.6612352s):
Dec 15 16:17:43 minikube localkube[3279]: Trace[1487976637]: [6.6612352s] [6.6612293s] END
Dec 15 16:17:43 minikube localkube[3279]: E1215 16:17:43.316381    3279 actual_state_of_world.go:483] Failed to set statusUpdateNeeded to needed true because nodeName="minikube"  does not exist
Dec 15 16:17:43 minikube localkube[3279]: E1215 16:17:43.323479    3279 actual_state_of_world.go:497] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true because nodeName="minikube"  does not exist
Dec 15 16:17:43 minikube localkube[3279]: W1215 16:17:43.325645    3279 server.go:580] Failed to retrieve node info: nodes "minikube" not found
Dec 15 16:17:43 minikube localkube[3279]: W1215 16:17:43.325766    3279 proxier.go:468] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP
Dec 15 16:17:43 minikube localkube[3279]: W1215 16:17:43.325795    3279 proxier.go:473] clusterCIDR not specified, unable to distinguish between internal and external traffic
Dec 15 16:17:43 minikube localkube[3279]: I1215 16:17:43.325940    3279 kubelet_node_status.go:86] Successfully registered node minikube
Dec 15 16:17:43 minikube localkube[3279]: I1215 16:17:43.326742    3279 server_others.go:152] Tearing down inactive rules.
Dec 15 16:17:43 minikube localkube[3279]: I1215 16:17:43.344841    3279 trace.go:76] Trace[164840250]: "Create /api/v1/namespaces/default/events" (started: 2017-12-15 16:17:37.3050524 +0000 UTC m=+7.826449700) (total time: 6.0397367s):
Dec 15 16:17:43 minikube localkube[3279]: Trace[164840250]: [6.0396091s] [6.0395461s] Object stored in database
Dec 15 16:17:43 minikube localkube[3279]: I1215 16:17:43.346127    3279 trace.go:76] Trace[1088091709]: "Create /api/v1/namespaces/kube-system/events" (started: 2017-12-15 16:17:34.8567411 +0000 UTC m=+5.378073700) (total time: 8.4894067s):
Dec 15 16:17:43 minikube localkube[3279]: Trace[1088091709]: [8.4893777s] [8.4893166s] Object stored in database
Dec 15 16:17:43 minikube localkube[3279]: I1215 16:17:43.365942    3279 trace.go:76] Trace[1520253545]: "Create /api/v1/namespaces/kube-system/pods" (started: 2017-12-15 16:17:42.3289626 +0000 UTC m=+12.850334000) (total time: 1.0369595s):
Dec 15 16:17:43 minikube localkube[3279]: Trace[1520253545]: [1.0102384s] [1.0099861s] About to store object in database
Dec 15 16:17:43 minikube localkube[3279]: E1215 16:17:43.367414    3279 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Dec 15 16:17:43 minikube localkube[3279]: I1215 16:17:43.367636    3279 config.go:202] Starting service config controller
Dec 15 16:17:43 minikube localkube[3279]: I1215 16:17:43.367760    3279 controller_utils.go:1041] Waiting for caches to sync for service config controller
Dec 15 16:17:43 minikube localkube[3279]: I1215 16:17:43.367908    3279 config.go:102] Starting endpoints config controller
Dec 15 16:17:43 minikube localkube[3279]: I1215 16:17:43.368013    3279 controller_utils.go:1041] Waiting for caches to sync for endpoints config controller
Dec 15 16:17:43 minikube localkube[3279]: I1215 16:17:43.368845    3279 kuberuntime_manager.go:898] updating runtime config through cri with podcidr
Dec 15 16:17:43 minikube localkube[3279]: I1215 16:17:43.370533    3279 leaderelection.go:184] successfully acquired lease kube-system/kube-scheduler
Dec 15 16:17:43 minikube localkube[3279]: I1215 16:17:43.371274    3279 event.go:218] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-scheduler", UID:"7ac7f114-e1b3-11e7-bc34-00155df31b2e", APIVersion:"v1", ResourceVersion:"45", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube became leader
Dec 15 16:17:43 minikube localkube[3279]: I1215 16:17:43.372076    3279 docker_service.go:306] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}
Dec 15 16:17:43 minikube localkube[3279]: I1215 16:17:43.374300    3279 kubelet_network.go:276] Setting Pod CIDR: 10.180.1.0/24 ->
Dec 15 16:17:43 minikube localkube[3279]: I1215 16:17:43.468041    3279 controller_utils.go:1048] Caches are synced for service config controller
Dec 15 16:17:43 minikube localkube[3279]: I1215 16:17:43.469498    3279 controller_utils.go:1048] Caches are synced for endpoints config controller
Dec 15 16:17:46 minikube localkube[3279]: I1215 16:17:46.222724    3279 node_controller.go:563] Initializing eviction metric for zone:
Dec 15 16:17:46 minikube localkube[3279]: W1215 16:17:46.222785    3279 node_controller.go:916] Missing timestamp for Node minikube. Assuming now as a timestamp.
Dec 15 16:17:46 minikube localkube[3279]: I1215 16:17:46.222809    3279 node_controller.go:832] Controller detected that zone  is now in state Normal.
Dec 15 16:17:46 minikube localkube[3279]: I1215 16:17:46.223140    3279 event.go:218] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"7741b461-e1b3-11e7-bc34-00155df31b2e", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
Dec 15 16:17:55 minikube localkube[3279]: sync duration of 1.7809599s, expected less than 1s
Dec 15 16:17:55 minikube localkube[3279]: I1215 16:17:55.178387    3279 trace.go:76] Trace[1996813726]: "GuaranteedUpdate etcd3: *api.Endpoints" (started: 2017-12-15 16:17:53.4006174 +0000 UTC m=+23.921949700) (total time: 1.7777804s):
Dec 15 16:17:55 minikube localkube[3279]: Trace[1996813726]: [1.777751s] [1.7774259s] Transaction committed
Dec 15 16:17:55 minikube localkube[3279]: I1215 16:17:55.178875    3279 trace.go:76] Trace[689898712]: "Update /api/v1/namespaces/kube-system/endpoints/kube-scheduler" (started: 2017-12-15 16:17:53.4004279 +0000 UTC m=+23.921760100) (total time: 1.7784599s):
Dec 15 16:17:55 minikube localkube[3279]: Trace[689898712]: [1.7783541s] [1.7782034s] Object stored in database
Dec 15 16:17:55 minikube localkube[3279]: I1215 16:17:55.179797    3279 trace.go:76] Trace[1130415310]: "GuaranteedUpdate etcd3: *api.Node" (started: 2017-12-15 16:17:53.4008914 +0000 UTC m=+23.922223700) (total time: 1.7789198s):
Dec 15 16:17:55 minikube localkube[3279]: Trace[1130415310]: [1.778901s] [1.777946s] Transaction committed
Dec 15 16:17:57 minikube localkube[3279]: I1215 16:17:57.089030    3279 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"storage-provisioner", UID:"82f7fbaa-e1b3-11e7-bc34-00155df31b2e", APIVersion:"v1", ResourceVersion:"92", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned storage-provisioner to minikube
Dec 15 16:17:57 minikube localkube[3279]: E1215 16:17:57.091019    3279 helpers.go:468] PercpuUsage had 0 cpus, but the actual number is 2; ignoring extra CPUs
Dec 15 16:17:57 minikube localkube[3279]: I1215 16:17:57.094486    3279 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-z55jx" (UniqueName: "kubernetes.io/secret/82f7fbaa-e1b3-11e7-bc34-00155df31b2e-default-token-z55jx") pod "storage-provisioner" (UID: "82f7fbaa-e1b3-11e7-bc34-00155df31b2e")
Dec 15 16:17:57 minikube localkube[3279]: I1215 16:17:57.759529    3279 event.go:218] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"kube-system", Name:"kubernetes-dashboard", UID:"835e9909-e1b3-11e7-bc34-00155df31b2e", APIVersion:"v1", ResourceVersion:"100", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-qbpn4
Dec 15 16:17:57 minikube localkube[3279]: I1215 16:17:57.769226    3279 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kubernetes-dashboard-qbpn4", UID:"835f386e-e1b3-11e7-bc34-00155df31b2e", APIVersion:"v1", ResourceVersion:"101", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned kubernetes-dashboard-qbpn4 to minikube
Dec 15 16:17:57 minikube localkube[3279]: E1215 16:17:57.780638    3279 helpers.go:468] PercpuUsage had 0 cpus, but the actual number is 2; ignoring extra CPUs
Dec 15 16:17:57 minikube localkube[3279]: I1215 16:17:57.797602    3279 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-z55jx" (UniqueName: "kubernetes.io/secret/835f386e-e1b3-11e7-bc34-00155df31b2e-default-token-z55jx") pod "kubernetes-dashboard-qbpn4" (UID: "835f386e-e1b3-11e7-bc34-00155df31b2e")
Dec 15 16:17:57 minikube localkube[3279]: I1215 16:17:57.955339    3279 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"kube-dns", UID:"837c1a17-e1b3-11e7-bc34-00155df31b2e", APIVersion:"extensions", ResourceVersion:"113", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kube-dns-86f6f55dd5 to 1
Dec 15 16:17:57 minikube localkube[3279]: I1215 16:17:57.972734    3279 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"kube-dns-86f6f55dd5", UID:"837cdb5e-e1b3-11e7-bc34-00155df31b2e", APIVersion:"extensions", ResourceVersion:"114", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-dns-86f6f55dd5-d662m
Dec 15 16:17:57 minikube localkube[3279]: I1215 16:17:57.994221    3279 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-dns-86f6f55dd5-d662m", UID:"837e5862-e1b3-11e7-bc34-00155df31b2e", APIVersion:"v1", ResourceVersion:"116", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned kube-dns-86f6f55dd5-d662m to minikube
Dec 15 16:17:58 minikube localkube[3279]: I1215 16:17:58.101044    3279 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-dns-config" (UniqueName: "kubernetes.io/configmap/837e5862-e1b3-11e7-bc34-00155df31b2e-kube-dns-config") pod "kube-dns-86f6f55dd5-d662m" (UID: "837e5862-e1b3-11e7-bc34-00155df31b2e")
Dec 15 16:17:58 minikube localkube[3279]: I1215 16:17:58.101093    3279 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-z55jx" (UniqueName: "kubernetes.io/secret/837e5862-e1b3-11e7-bc34-00155df31b2e-default-token-z55jx") pod "kube-dns-86f6f55dd5-d662m" (UID: "837e5862-e1b3-11e7-bc34-00155df31b2e")
Dec 15 16:18:06 minikube localkube[3279]: sync duration of 1.1772227s, expected less than 1s
Dec 15 16:18:06 minikube localkube[3279]: I1215 16:18:06.376821    3279 trace.go:76] Trace[374277382]: "Get /api/v1/namespaces/kube-system/endpoints/kube-scheduler" (started: 2017-12-15 16:18:05.2261817 +0000 UTC m=+35.747550300) (total time: 1.1505736s):
Dec 15 16:18:06 minikube localkube[3279]: Trace[374277382]: [1.1504783s] [1.1504719s] About to write a response
Dec 15 16:18:06 minikube localkube[3279]: I1215 16:18:06.381334    3279 trace.go:76] Trace[1536028818]: "Get /api/v1/namespaces/kube-system/endpoints/kube-controller-manager" (started: 2017-12-15 16:18:05.4593381 +0000 UTC m=+35.980706700) (total time: 921.9582ms):
Dec 15 16:18:06 minikube localkube[3279]: Trace[1536028818]: [921.7754ms] [921.7651ms] About to write a response
Dec 15 16:18:07 minikube localkube[3279]: W1215 16:18:07.436574    3279 conversion.go:110] Could not get instant cpu stats: different number of cpus
Dec 15 16:18:17 minikube localkube[3279]: W1215 16:18:17.439535    3279 conversion.go:110] Could not get instant cpu stats: different number of cpus
Dec 15 16:18:17 minikube localkube[3279]: W1215 16:18:17.443583    3279 conversion.go:110] Could not get instant cpu stats: different number of cpus
Dec 15 16:18:26 minikube localkube[3279]: sync duration of 6.2234753s, expected less than 1s
Dec 15 16:18:26 minikube localkube[3279]: I1215 16:18:26.648178    3279 trace.go:76] Trace[1699685276]: "GuaranteedUpdate etcd3: *api.Endpoints" (started: 2017-12-15 16:18:22.4257938 +0000 UTC m=+52.947119200) (total time: 4.2223584s):
Dec 15 16:18:26 minikube localkube[3279]: Trace[1699685276]: [4.2223446s] [4.2222506s] Transaction committed
Dec 15 16:18:26 minikube localkube[3279]: I1215 16:18:26.648269    3279 trace.go:76] Trace[1989793625]: "Update /api/v1/namespaces/kube-system/endpoints/kube-controller-manager" (started: 2017-12-15 16:18:22.4257495 +0000 UTC m=+52.947075000) (total time: 4.2225072s):
Dec 15 16:18:26 minikube localkube[3279]: Trace[1989793625]: [4.222461s] [4.2224366s] Object stored in database
Dec 15 16:18:26 minikube localkube[3279]: I1215 16:18:26.648458    3279 trace.go:76] Trace[23824871]: "GuaranteedUpdate etcd3: *api.Endpoints" (started: 2017-12-15 16:18:20.7654542 +0000 UTC m=+51.286779600) (total time: 5.8829938s):
Dec 15 16:18:26 minikube localkube[3279]: Trace[23824871]: [5.882983s] [5.8828685s] Transaction committed
Dec 15 16:18:26 minikube localkube[3279]: I1215 16:18:26.648547    3279 trace.go:76] Trace[554599373]: "Update /api/v1/namespaces/kube-system/endpoints/kube-scheduler" (started: 2017-12-15 16:18:20.7653875 +0000 UTC m=+51.286712900) (total time: 5.8831478s):
Dec 15 16:18:26 minikube localkube[3279]: Trace[554599373]: [5.8830863s] [5.883055s] Object stored in database
Dec 15 16:18:26 minikube localkube[3279]: I1215 16:18:26.648928    3279 trace.go:76] Trace[2079583850]: "List /api/v1/nodes" (started: 2017-12-15 16:18:23.3279957 +0000 UTC m=+53.849321200) (total time: 3.3209192s):
Dec 15 16:18:26 minikube localkube[3279]: Trace[2079583850]: [3.3208448s] [3.3208382s] Listing from storage done
Dec 15 16:18:26 minikube localkube[3279]: I1215 16:18:26.649052    3279 trace.go:76] Trace[1854484373]: "GuaranteedUpdate etcd3: *api.Node" (started: 2017-12-15 16:18:25.2064597 +0000 UTC m=+55.727789000) (total time: 1.4425776s):
Dec 15 16:18:26 minikube localkube[3279]: Trace[1854484373]: [1.4425469s] [1.4419384s] Transaction committed
Dec 15 16:18:26 minikube localkube[3279]: I1215 16:18:26.656878    3279 trace.go:76] Trace[1767403372]: "List /apis/batch/v1/jobs" (started: 2017-12-15 16:18:23.3893663 +0000 UTC m=+53.910691700) (total time: 3.267489s):
Dec 15 16:18:26 minikube localkube[3279]: Trace[1767403372]: [3.2673213s] [3.2673153s] Listing from storage done
Dec 15 16:18:27 minikube localkube[3279]: W1215 16:18:27.457713    3279 conversion.go:110] Could not get instant cpu stats: different number of cpus
Dec 15 16:18:28 minikube localkube[3279]: I1215 16:18:28.123455    3279 kuberuntime_manager.go:499] Container {Name:kubernetes-dashboard Image:gcr.io/google_containers/kubernetes-dashboard-amd64:v1.8.0 Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:9090 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-z55jx ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:9090,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Dec 15 16:18:28 minikube localkube[3279]: I1215 16:18:28.123587    3279 kuberuntime_manager.go:738] checking backoff for container "kubernetes-dashboard" in pod "kubernetes-dashboard-qbpn4_kube-system(835f386e-e1b3-11e7-bc34-00155df31b2e)"
Dec 15 16:18:43 minikube localkube[3279]: E1215 16:18:43.367915    3279 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Dec 15 16:18:43 minikube localkube[3279]: sync duration of 3.0715611s, expected less than 1s
Dec 15 16:18:43 minikube localkube[3279]: I1215 16:18:43.962983    3279 trace.go:76] Trace[1119745409]: "Get /api/v1/namespaces/default" (started: 2017-12-15 16:18:43.4100968 +0000 UTC m=+73.931417900) (total time: 552.8508ms):
Dec 15 16:18:43 minikube localkube[3279]: Trace[1119745409]: [552.8035ms] [552.7972ms] About to write a response
Dec 15 16:18:43 minikube localkube[3279]: I1215 16:18:43.964904    3279 trace.go:76] Trace[161757314]: "Get /api/v1/namespaces/kube-system/endpoints/kube-controller-manager" (started: 2017-12-15 16:18:42.8780432 +0000 UTC m=+73.399364400) (total time: 1.0868268s):
Dec 15 16:18:43 minikube localkube[3279]: Trace[161757314]: [1.0866546s] [1.0866458s] About to write a response
Dec 15 16:18:43 minikube localkube[3279]: I1215 16:18:43.965780    3279 trace.go:76] Trace[872039460]: "GuaranteedUpdate etcd3: *api.Endpoints" (started: 2017-12-15 16:18:40.8801912 +0000 UTC m=+71.401512600) (total time: 3.0855689s):
Dec 15 16:18:43 minikube localkube[3279]: Trace[872039460]: [3.0855441s] [3.0853861s] Transaction committed
Dec 15 16:18:43 minikube localkube[3279]: I1215 16:18:43.965850    3279 trace.go:76] Trace[977788330]: "Update /api/v1/namespaces/kube-system/endpoints/kube-scheduler" (started: 2017-12-15 16:18:40.8800198 +0000 UTC m=+71.401341000) (total time: 3.0858182s):
Dec 15 16:18:43 minikube localkube[3279]: Trace[977788330]: [3.0857861s] [3.0857143s] Object stored in database
Dec 15 16:18:43 minikube localkube[3279]: I1215 16:18:43.975112    3279 trace.go:76] Trace[1070891396]: "GuaranteedUpdate etcd3: *api.Event" (started: 2017-12-15 16:18:43.368878 +0000 UTC m=+73.890199100) (total time: 606.2069ms):
Dec 15 16:18:43 minikube localkube[3279]: Trace[1070891396]: [593.7614ms] [593.7614ms] initial value restored
Dec 15 16:18:46 minikube localkube[3279]: sync duration of 2.2729291s, expected less than 1s
Dec 15 16:18:46 minikube localkube[3279]: W1215 16:18:46.346461    3279 kuberuntime_container.go:191] Non-root verification doesn't support non-numeric user (nobody)
Dec 15 16:18:55 minikube localkube[3279]: E1215 16:18:55.359799    3279 proxier.go:1621] Failed to delete stale service IP 10.96.0.10 connections, error: error deleting connection tracking state for UDP service IP: 10.96.0.10, error: error looking for path of conntrack: exec: "conntrack": executable file not found in $PATH
Dec 15 16:19:13 minikube localkube[3279]: I1215 16:19:13.804615    3279 trace.go:76] Trace[504274602]: "Get /api/v1/namespaces/kube-system/endpoints/kube-scheduler" (started: 2017-12-15 16:19:12.3418488 +0000 UTC m=+102.863167300) (total time: 1.4627398s):
Dec 15 16:19:13 minikube localkube[3279]: Trace[504274602]: [1.4626795s] [1.462673s] About to write a response
Dec 15 16:19:13 minikube localkube[3279]: I1215 16:19:13.805098    3279 trace.go:76] Trace[1824030387]: "Get /api/v1/namespaces/kube-system/endpoints/kube-controller-manager" (started: 2017-12-15 16:19:12.3347891 +0000 UTC m=+102.856107600) (total time: 1.4702917s):
Dec 15 16:19:13 minikube localkube[3279]: Trace[1824030387]: [1.4702666s] [1.4702598s] About to write a response
Dec 15 16:19:43 minikube localkube[3279]: E1215 16:19:43.368309    3279 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Dec 15 16:20:43 minikube localkube[3279]: E1215 16:20:43.368672    3279 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Dec 15 16:21:43 minikube localkube[3279]: E1215 16:21:43.369375    3279 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Dec 15 16:22:43 minikube localkube[3279]: E1215 16:22:43.369600    3279 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Dec 15 16:23:43 minikube localkube[3279]: E1215 16:23:43.370137    3279 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Dec 15 16:24:43 minikube localkube[3279]: E1215 16:24:43.370610    3279 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Dec 15 16:25:43 minikube localkube[3279]: E1215 16:25:43.370873    3279 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Dec 15 16:26:43 minikube localkube[3279]: E1215 16:26:43.371875    3279 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Dec 15 16:27:30 minikube localkube[3279]: store.index: compact 458
Dec 15 16:27:30 minikube localkube[3279]: finished scheduled compaction at 458 (took 410.6µs)
Dec 15 16:27:43 minikube localkube[3279]: E1215 16:27:43.372364    3279 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Dec 15 16:28:43 minikube localkube[3279]: E1215 16:28:43.372692    3279 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Dec 15 16:29:43 minikube localkube[3279]: E1215 16:29:43.373924    3279 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Dec 15 16:30:43 minikube localkube[3279]: E1215 16:30:43.374550    3279 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Dec 15 16:31:43 minikube localkube[3279]: E1215 16:31:43.374880    3279 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Dec 15 16:32:30 minikube localkube[3279]: store.index: compact 794
Dec 15 16:32:30 minikube localkube[3279]: finished scheduled compaction at 794 (took 430.1µs)
Dec 15 16:32:43 minikube localkube[3279]: E1215 16:32:43.375722    3279 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Dec 15 16:33:43 minikube localkube[3279]: E1215 16:33:43.376261    3279 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Dec 15 16:34:43 minikube localkube[3279]: E1215 16:34:43.378042    3279 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Dec 15 16:35:43 minikube localkube[3279]: E1215 16:35:43.378995    3279 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Dec 15 16:35:49 minikube localkube[3279]: I1215 16:35:49.029913    3279 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openfaas", Name:"alertmanager", UID:"01e4272b-e1b6-11e7-bc34-00155df31b2e", APIVersion:"extensions", ResourceVersion:"1364", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set alertmanager-7b858df4c5 to 1
Dec 15 16:35:49 minikube localkube[3279]: I1215 16:35:49.051108    3279 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openfaas", Name:"alertmanager-7b858df4c5", UID:"01e65f2c-e1b6-11e7-bc34-00155df31b2e", APIVersion:"extensions", ResourceVersion:"1365", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: alertmanager-7b858df4c5-g62h2
Dec 15 16:35:49 minikube localkube[3279]: I1215 16:35:49.070539    3279 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openfaas", Name:"alertmanager-7b858df4c5-g62h2", UID:"01e77352-e1b6-11e7-bc34-00155df31b2e", APIVersion:"v1", ResourceVersion:"1368", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned alertmanager-7b858df4c5-g62h2 to minikube
Dec 15 16:35:49 minikube localkube[3279]: I1215 16:35:49.078049    3279 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "alertmanager-config" (UniqueName: "kubernetes.io/configmap/01e77352-e1b6-11e7-bc34-00155df31b2e-alertmanager-config") pod "alertmanager-7b858df4c5-g62h2" (UID: "01e77352-e1b6-11e7-bc34-00155df31b2e")
Dec 15 16:35:49 minikube localkube[3279]: I1215 16:35:49.078086    3279 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-tvk4t" (UniqueName: "kubernetes.io/secret/01e77352-e1b6-11e7-bc34-00155df31b2e-default-token-tvk4t") pod "alertmanager-7b858df4c5-g62h2" (UID: "01e77352-e1b6-11e7-bc34-00155df31b2e")
Dec 15 16:35:49 minikube localkube[3279]: I1215 16:35:49.138530    3279 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openfaas", Name:"faas-netesd", UID:"01f67e4a-e1b6-11e7-bc34-00155df31b2e", APIVersion:"extensions", ResourceVersion:"1381", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set faas-netesd-677f767644 to 1
Dec 15 16:35:49 minikube localkube[3279]: I1215 16:35:49.142202    3279 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openfaas", Name:"faas-netesd-677f767644", UID:"01f721d7-e1b6-11e7-bc34-00155df31b2e", APIVersion:"extensions", ResourceVersion:"1382", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "faas-netesd-677f767644-" is forbidden: service account openfaas/faas-controller was not found, retry after the service account is created
Dec 15 16:35:49 minikube localkube[3279]: E1215 16:35:49.162278    3279 replica_set.go:424] Sync "openfaas/faas-netesd-677f767644" failed with pods "faas-netesd-677f767644-" is forbidden: service account openfaas/faas-controller was not found, retry after the service account is created
Dec 15 16:35:49 minikube localkube[3279]: E1215 16:35:49.169375    3279 replica_set.go:424] Sync "openfaas/faas-netesd-677f767644" failed with pods "faas-netesd-677f767644-" is forbidden: service account openfaas/faas-controller was not found, retry after the service account is created
Dec 15 16:35:49 minikube localkube[3279]: I1215 16:35:49.169980    3279 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openfaas", Name:"faas-netesd-677f767644", UID:"01f721d7-e1b6-11e7-bc34-00155df31b2e", APIVersion:"extensions", ResourceVersion:"1384", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "faas-netesd-677f767644-" is forbidden: service account openfaas/faas-controller was not found, retry after the service account is created
Dec 15 16:35:49 minikube localkube[3279]: E1215 16:35:49.171469    3279 replica_set.go:424] Sync "openfaas/faas-netesd-677f767644" failed with pods "faas-netesd-677f767644-" is forbidden: service account openfaas/faas-controller was not found, retry after the service account is created
Dec 15 16:35:49 minikube localkube[3279]: I1215 16:35:49.171700    3279 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openfaas", Name:"faas-netesd-677f767644", UID:"01f721d7-e1b6-11e7-bc34-00155df31b2e", APIVersion:"extensions", ResourceVersion:"1384", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "faas-netesd-677f767644-" is forbidden: service account openfaas/faas-controller was not found, retry after the service account is created
Dec 15 16:35:49 minikube localkube[3279]: I1215 16:35:49.221066    3279 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openfaas", Name:"gateway", UID:"01fa6bd8-e1b6-11e7-bc34-00155df31b2e", APIVersion:"extensions", ResourceVersion:"1387", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set gateway-74794fc599 to 1
Dec 15 16:35:49 minikube localkube[3279]: W1215 16:35:49.222990    3279 container.go:354] Failed to create summary reader for "/system.slice/run-rf087873e73d742f380eaf68ecca98212.scope": none of the resources are being tracked.
Dec 15 16:35:49 minikube localkube[3279]: E1215 16:35:49.225144    3279 replica_set.go:424] Sync "openfaas/faas-netesd-677f767644" failed with pods "faas-netesd-677f767644-" is forbidden: service account openfaas/faas-controller was not found, retry after the service account is created
Dec 15 16:35:49 minikube localkube[3279]: I1215 16:35:49.225247    3279 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openfaas", Name:"faas-netesd-677f767644", UID:"01f721d7-e1b6-11e7-bc34-00155df31b2e", APIVersion:"extensions", ResourceVersion:"1384", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "faas-netesd-677f767644-" is forbidden: service account openfaas/faas-controller was not found, retry after the service account is created
Dec 15 16:35:49 minikube localkube[3279]: I1215 16:35:49.246966    3279 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openfaas", Name:"gateway-74794fc599", UID:"01fdd6f9-e1b6-11e7-bc34-00155df31b2e", APIVersion:"extensions", ResourceVersion:"1390", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: gateway-74794fc599-w2w5x
Dec 15 16:35:49 minikube localkube[3279]: I1215 16:35:49.268558    3279 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openfaas", Name:"gateway-74794fc599-w2w5x", UID:"02048317-e1b6-11e7-bc34-00155df31b2e", APIVersion:"v1", ResourceVersion:"1394", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned gateway-74794fc599-w2w5x to minikube
Dec 15 16:35:49 minikube localkube[3279]: E1215 16:35:49.282749    3279 replica_set.go:424] Sync "openfaas/faas-netesd-677f767644" failed with pods "faas-netesd-677f767644-" is forbidden: service account openfaas/faas-controller was not found, retry after the service account is created
Dec 15 16:35:49 minikube localkube[3279]: I1215 16:35:49.282823    3279 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openfaas", Name:"faas-netesd-677f767644", UID:"01f721d7-e1b6-11e7-bc34-00155df31b2e", APIVersion:"extensions", ResourceVersion:"1384", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "faas-netesd-677f767644-" is forbidden: service account openfaas/faas-controller was not found, retry after the service account is created
Dec 15 16:35:49 minikube localkube[3279]: E1215 16:35:49.369531    3279 replica_set.go:424] Sync "openfaas/faas-netesd-677f767644" failed with pods "faas-netesd-677f767644-" is forbidden: service account openfaas/faas-controller was not found, retry after the service account is created
Dec 15 16:35:49 minikube localkube[3279]: I1215 16:35:49.369918    3279 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openfaas", Name:"faas-netesd-677f767644", UID:"01f721d7-e1b6-11e7-bc34-00155df31b2e", APIVersion:"extensions", ResourceVersion:"1384", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "faas-netesd-677f767644-" is forbidden: service account openfaas/faas-controller was not found, retry after the service account is created
Dec 15 16:35:49 minikube localkube[3279]: I1215 16:35:49.386756    3279 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-tvk4t" (UniqueName: "kubernetes.io/secret/02048317-e1b6-11e7-bc34-00155df31b2e-default-token-tvk4t") pod "gateway-74794fc599-w2w5x" (UID: "02048317-e1b6-11e7-bc34-00155df31b2e")
Dec 15 16:35:49 minikube localkube[3279]: I1215 16:35:49.408677    3279 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openfaas", Name:"nats", UID:"021a32b3-e1b6-11e7-bc34-00155df31b2e", APIVersion:"extensions", ResourceVersion:"1415", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nats-768978b499 to 1
Dec 15 16:35:49 minikube localkube[3279]: I1215 16:35:49.427819    3279 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openfaas", Name:"nats-768978b499", UID:"021efb15-e1b6-11e7-bc34-00155df31b2e", APIVersion:"extensions", ResourceVersion:"1418", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nats-768978b499-l4thv
Dec 15 16:35:49 minikube localkube[3279]: I1215 16:35:49.448208    3279 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openfaas", Name:"nats-768978b499-l4thv", UID:"02210a6f-e1b6-11e7-bc34-00155df31b2e", APIVersion:"v1", ResourceVersion:"1421", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned nats-768978b499-l4thv to minikube
Dec 15 16:35:49 minikube localkube[3279]: I1215 16:35:49.522246    3279 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openfaas", Name:"prometheus", UID:"022eb2cf-e1b6-11e7-bc34-00155df31b2e", APIVersion:"extensions", ResourceVersion:"1433", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set prometheus-545c84bb9b to 1
Dec 15 16:35:49 minikube localkube[3279]: I1215 16:35:49.526497    3279 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openfaas", Name:"prometheus-545c84bb9b", UID:"02319445-e1b6-11e7-bc34-00155df31b2e", APIVersion:"extensions", ResourceVersion:"1434", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: prometheus-545c84bb9b-g4gx6
Dec 15 16:35:49 minikube localkube[3279]: E1215 16:35:49.541849    3279 replica_set.go:424] Sync "openfaas/faas-netesd-677f767644" failed with pods "faas-netesd-677f767644-" is forbidden: service account openfaas/faas-controller was not found, retry after the service account is created
Dec 15 16:35:49 minikube localkube[3279]: I1215 16:35:49.542239    3279 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openfaas", Name:"faas-netesd-677f767644", UID:"01f721d7-e1b6-11e7-bc34-00155df31b2e", APIVersion:"extensions", ResourceVersion:"1384", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "faas-netesd-677f767644-" is forbidden: service account openfaas/faas-controller was not found, retry after the service account is created
Dec 15 16:35:49 minikube localkube[3279]: I1215 16:35:49.544362    3279 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openfaas", Name:"prometheus-545c84bb9b-g4gx6", UID:"023226ac-e1b6-11e7-bc34-00155df31b2e", APIVersion:"v1", ResourceVersion:"1436", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned prometheus-545c84bb9b-g4gx6 to minikube
Dec 15 16:35:49 minikube localkube[3279]: E1215 16:35:49.553305    3279 helpers.go:468] PercpuUsage had 0 cpus, but the actual number is 2; ignoring extra CPUs
Dec 15 16:35:49 minikube localkube[3279]: I1215 16:35:49.591593    3279 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-tvk4t" (UniqueName: "kubernetes.io/secret/02210a6f-e1b6-11e7-bc34-00155df31b2e-default-token-tvk4t") pod "nats-768978b499-l4thv" (UID: "02210a6f-e1b6-11e7-bc34-00155df31b2e")
Dec 15 16:35:49 minikube localkube[3279]: I1215 16:35:49.610328    3279 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openfaas", Name:"queue-worker", UID:"023a97e7-e1b6-11e7-bc34-00155df31b2e", APIVersion:"extensions", ResourceVersion:"1449", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set queue-worker-75667b7d59 to 1
Dec 15 16:35:49 minikube localkube[3279]: I1215 16:35:49.649403    3279 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openfaas", Name:"queue-worker-75667b7d59", UID:"023f28b0-e1b6-11e7-bc34-00155df31b2e", APIVersion:"extensions", ResourceVersion:"1450", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: queue-worker-75667b7d59-499wh
Dec 15 16:35:49 minikube localkube[3279]: I1215 16:35:49.673234    3279 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openfaas", Name:"queue-worker-75667b7d59-499wh", UID:"023fbe1f-e1b6-11e7-bc34-00155df31b2e", APIVersion:"v1", ResourceVersion:"1451", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned queue-worker-75667b7d59-499wh to minikube
Dec 15 16:35:49 minikube localkube[3279]: I1215 16:35:49.696789    3279 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "prometheus-config" (UniqueName: "kubernetes.io/configmap/023226ac-e1b6-11e7-bc34-00155df31b2e-prometheus-config") pod "prometheus-545c84bb9b-g4gx6" (UID: "023226ac-e1b6-11e7-bc34-00155df31b2e")
Dec 15 16:35:49 minikube localkube[3279]: I1215 16:35:49.696832    3279 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-tvk4t" (UniqueName: "kubernetes.io/secret/023226ac-e1b6-11e7-bc34-00155df31b2e-default-token-tvk4t") pod "prometheus-545c84bb9b-g4gx6" (UID: "023226ac-e1b6-11e7-bc34-00155df31b2e")
Dec 15 16:35:49 minikube localkube[3279]: I1215 16:35:49.797105    3279 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-tvk4t" (UniqueName: "kubernetes.io/secret/023fbe1f-e1b6-11e7-bc34-00155df31b2e-default-token-tvk4t") pod "queue-worker-75667b7d59-499wh" (UID: "023fbe1f-e1b6-11e7-bc34-00155df31b2e")
Dec 15 16:35:49 minikube localkube[3279]: I1215 16:35:49.876954    3279 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openfaas", Name:"faas-netesd-677f767644", UID:"01f721d7-e1b6-11e7-bc34-00155df31b2e", APIVersion:"extensions", ResourceVersion:"1384", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: faas-netesd-677f767644-bv5gf
Dec 15 16:35:49 minikube localkube[3279]: I1215 16:35:49.908574    3279 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openfaas", Name:"faas-netesd-677f767644-bv5gf", UID:"02661428-e1b6-11e7-bc34-00155df31b2e", APIVersion:"v1", ResourceVersion:"1468", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned faas-netesd-677f767644-bv5gf to minikube
Dec 15 16:35:49 minikube localkube[3279]: W1215 16:35:49.950262    3279 container.go:354] Failed to create summary reader for "/system.slice/run-rf50ef82be9b64d079d1fd9eb492ba0b2.scope": none of the resources are being tracked.
Dec 15 16:35:49 minikube localkube[3279]: W1215 16:35:49.950422    3279 container.go:354] Failed to create summary reader for "/system.slice/run-r11fc8a451528489685d5944304ca12ba.scope": none of the resources are being tracked.
Dec 15 16:35:49 minikube localkube[3279]: W1215 16:35:49.950729    3279 container.go:354] Failed to create summary reader for "/system.slice/run-r3b01899fe0b84b96a71c74a7847a4dd6.scope": none of the resources are being tracked.
Dec 15 16:35:49 minikube localkube[3279]: E1215 16:35:49.950789    3279 helpers.go:468] PercpuUsage had 0 cpus, but the actual number is 2; ignoring extra CPUs
Dec 15 16:35:50 minikube localkube[3279]: I1215 16:35:50.000576    3279 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "faas-controller-token-c7r6r" (UniqueName: "kubernetes.io/secret/02661428-e1b6-11e7-bc34-00155df31b2e-faas-controller-token-c7r6r") pod "faas-netesd-677f767644-bv5gf" (UID: "02661428-e1b6-11e7-bc34-00155df31b2e")
Dec 15 16:35:50 minikube localkube[3279]: W1215 16:35:50.129403    3279 container.go:354] Failed to create summary reader for "/system.slice/run-r34c16ce6e9d94defa4a35fbe16972422.scope": none of the resources are being tracked.
Dec 15 16:35:59 minikube localkube[3279]: I1215 16:35:59.628832    3279 kuberuntime_manager.go:499] Container {Name:gateway Image:functions/gateway:0.6.14 Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:8080 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:functions_provider_url Value:http://faas-netesd.openfaas.svc.cluster.local:8080/ ValueFrom:nil} {Name:faas_nats_address Value:nats.openfaas ValueFrom:nil} {Name:faas_nats_port Value:4222 ValueFrom:nil} {Name:read_timeout Value:10 ValueFrom:nil} {Name:write_timeout Value:10 ValueFrom:nil}] Resources:{Limits:map[memory:{i:{value:52428800 scale:0} d:{Dec:<nil>} s:50Mi Format:BinarySI}] Requests:map[memory:{i:{value:52428800 scale:0} d:{Dec:<nil>} s:50Mi Format:BinarySI}]} VolumeMounts:[{Name:default-token-tvk4t ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Dec 15 16:35:59 minikube localkube[3279]: I1215 16:35:59.630319    3279 kuberuntime_manager.go:738] checking backoff for container "gateway" in pod "gateway-74794fc599-w2w5x_openfaas(02048317-e1b6-11e7-bc34-00155df31b2e)"
Dec 15 16:36:08 minikube localkube[3279]: W1215 16:36:08.833858    3279 conversion.go:110] Could not get instant cpu stats: different number of cpus
Dec 15 16:36:15 minikube localkube[3279]: sync duration of 3.3370113s, expected less than 1s
Dec 15 16:36:15 minikube localkube[3279]: I1215 16:36:15.865231    3279 trace.go:76] Trace[191628839]: "GuaranteedUpdate etcd3: *api.Endpoints" (started: 2017-12-15 16:36:14.5317378 +0000 UTC m=+1125.052900000) (total time: 1.333466s):
Dec 15 16:36:15 minikube localkube[3279]: Trace[191628839]: [1.3334493s] [1.3333401s] Transaction committed
Dec 15 16:36:15 minikube localkube[3279]: I1215 16:36:15.865396    3279 trace.go:76] Trace[1858626996]: "Update /api/v1/namespaces/kube-system/endpoints/kube-scheduler" (started: 2017-12-15 16:36:14.5316914 +0000 UTC m=+1125.052853600) (total time: 1.333692s):
Dec 15 16:36:15 minikube localkube[3279]: Trace[1858626996]: [1.333582s] [1.3335552s] Object stored in database
Dec 15 16:36:15 minikube localkube[3279]: I1215 16:36:15.865863    3279 trace.go:76] Trace[950645743]: "GuaranteedUpdate etcd3: *api.Endpoints" (started: 2017-12-15 16:36:14.3536904 +0000 UTC m=+1124.874852600) (total time: 1.512151s):
Dec 15 16:36:15 minikube localkube[3279]: Trace[950645743]: [1.5114488s] [1.511256s] Transaction committed
Dec 15 16:36:15 minikube localkube[3279]: I1215 16:36:15.865930    3279 trace.go:76] Trace[445206999]: "Update /api/v1/namespaces/kube-system/endpoints/kube-controller-manager" (started: 2017-12-15 16:36:14.3536021 +0000 UTC m=+1124.874764400) (total time: 1.5123181s):
Dec 15 16:36:15 minikube localkube[3279]: Trace[445206999]: [1.5122789s] [1.5122289s] Object stored in database
Dec 15 16:36:39 minikube localkube[3279]: I1215 16:36:39.181303    3279 trace.go:76] Trace[21483734]: "Get /api/v1/namespaces/kube-system/endpoints/kube-controller-manager" (started: 2017-12-15 16:36:37.9421075 +0000 UTC m=+1148.463266500) (total time: 1.2391663s):
Dec 15 16:36:39 minikube localkube[3279]: Trace[21483734]: [1.2391162s] [1.2391079s] About to write a response
Dec 15 16:36:39 minikube localkube[3279]: I1215 16:36:39.181303    3279 trace.go:76] Trace[988334678]: "Get /api/v1/namespaces/kube-system/endpoints/kube-scheduler" (started: 2017-12-15 16:36:37.9662042 +0000 UTC m=+1148.487363200) (total time: 1.2150711s):
Dec 15 16:36:39 minikube localkube[3279]: Trace[988334678]: [1.2150429s] [1.2150356s] About to write a response
Dec 15 16:36:43 minikube localkube[3279]: E1215 16:36:43.379327    3279 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Dec 15 16:36:49 minikube localkube[3279]: sync duration of 2.3276859s, expected less than 1s
Dec 15 16:36:49 minikube localkube[3279]: I1215 16:36:49.942586    3279 trace.go:76] Trace[1442995397]: "List /apis/extensions/v1beta1/namespaces/openfaas-fn/deployments" (started: 2017-12-15 16:36:49.2832813 +0000 UTC m=+1159.804474200) (total time: 659.278ms):
Dec 15 16:36:49 minikube localkube[3279]: Trace[1442995397]: [659.0549ms] [658.9877ms] Listing from storage done
Dec 15 16:37:30 minikube localkube[3279]: store.index: compact 1130
Dec 15 16:37:30 minikube localkube[3279]: finished scheduled compaction at 1130 (took 651.6µs)
Dec 15 16:37:43 minikube localkube[3279]: E1215 16:37:43.380297    3279 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Dec 15 16:38:07 minikube localkube[3279]: I1215 16:38:07.063442    3279 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openfaas-fn", Name:"qrcode-go", UID:"541f77e1-e1b6-11e7-bc34-00155df31b2e", APIVersion:"extensions", ResourceVersion:"1691", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set qrcode-go-6c96bf7b48 to 1
Dec 15 16:38:07 minikube localkube[3279]: I1215 16:38:07.129999    3279 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openfaas-fn", Name:"qrcode-go-6c96bf7b48", UID:"5421a215-e1b6-11e7-bc34-00155df31b2e", APIVersion:"extensions", ResourceVersion:"1692", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: qrcode-go-6c96bf7b48-4mcft
Dec 15 16:38:07 minikube localkube[3279]: I1215 16:38:07.159391    3279 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openfaas-fn", Name:"qrcode-go-6c96bf7b48-4mcft", UID:"5431b76d-e1b6-11e7-bc34-00155df31b2e", APIVersion:"v1", ResourceVersion:"1695", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned qrcode-go-6c96bf7b48-4mcft to minikube
Dec 15 16:38:07 minikube localkube[3279]: I1215 16:38:07.277661    3279 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-jfzkl" (UniqueName: "kubernetes.io/secret/5431b76d-e1b6-11e7-bc34-00155df31b2e-default-token-jfzkl") pod "qrcode-go-6c96bf7b48-4mcft" (UID: "5431b76d-e1b6-11e7-bc34-00155df31b2e")
Dec 15 16:38:07 minikube localkube[3279]: W1215 16:38:07.450331    3279 container.go:367] Failed to get RecentStats("/system.slice/run-r26e60d43443f4790a7b4e48d229c7b8f.scope") while determining the next housekeeping: unable to find data for container /system.slice/run-r26e60d43443f4790a7b4e48d229c7b8f.scope
Dec 15 16:38:43 minikube localkube[3279]: E1215 16:38:43.380864    3279 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address

Anything else we need to know:
Not sure, I think that is about everything...

@JockDaRock
Author

It seems like a way to work around this issue is to stop the VM in Hyper-V Manager and disable Dynamic Memory.

Seems a bit hacky to do that, but it appears to work.

The problem is that if you use minikube delete to tear down minikube, you will have to turn off Dynamic Memory again in Hyper-V Manager on the new minikube VM to get it to work.

Could a temporary fix be to disable Dynamic Memory while minikube is provisioning the VM in Hyper-V?
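
For reference, a minimal PowerShell sketch of that workaround (run from an elevated prompt), assuming the VM keeps the default name "minikube" and picking 2GB as an arbitrary fixed allocation:

Stop-VM -Name minikube        # memory settings can only be changed while the VM is off
Set-VMMemory -VMName minikube -DynamicMemoryEnabled $false -StartupBytes 2GB   # pin a fixed allocation instead of Dynamic Memory
Start-VM -Name minikube       # bring the VM back up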

@JockDaRock
Author

Completing this request should fix the bug: #1766

@r2d4 added the kind/bug and more-info-needed labels on Mar 5, 2018
@npagare

npagare commented Apr 30, 2018

Hi @JockDaRock, thank you for your post on Medium.
Has anyone from the Kubernetes team communicated with you on an ETA for addressing this issue?

I followed the steps from your post on Medium; my console output has been stuck here for some time...
[screenshot]
Checking the kube status in a separate PowerShell window indicates the cluster is up and running:
[screenshot]

Because of the hang, I deleted the minikube VM from Hyper-V and tried to run minikube again.
But now I am not able to start the VM:
[screenshot]

Any thoughts on this?

Thanks,

@JockDaRock
Author

Hey @IoTFier, thank you for your response and for giving it a try. Sadly, I have not heard anything back from the Kubernetes team on this issue (going on almost five months).

Anyway, I am not familiar with your particular problem, but I can try to reproduce it this week and see what happens. My first instinct would be a complete reinstall of minikube.

@mcoakley

@IoTFier just as a heads up, I have seen errors like that before. I'm sure there is a way to fix it, either through the VM (i.e. fixing services), by reconfiguring parts of the minikube environment (I haven't read through all of the setup code yet), or by reconfiguring kubectl. Remember, minikube's job is to make it easy to set up a single-node Kubernetes cluster for testing; that doesn't mean you can't set one up by hand on Windows. Meaning, if minikube start fails, you can perform all of the steps it would have performed by hand... but then why use minikube.

However, it is pretty easy to just clean up a bad install and then try again.

To clean up a bad install, stop the minikube VM in Hyper-V. (You may have to stop the service if it is really hung - which has happened to me, but I feel that may be more my system than a general problem.) Once the VM is stopped, delete it using the Hyper-V manager. Then open your Windows home directory in Windows Explorer (C:\Users\{your username}). In there you will see a .minikube folder. Assuming you have just set things up and the setup failed (since you were just issuing the minikube start command), delete the .minikube folder. This will let the minikube start command redo the setup and configuration. A sketch of these steps appears below.
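
As a rough PowerShell sketch of that cleanup, assuming the default VM name "minikube" and the default home-directory layout:

Stop-VM -Name minikube -Force        # add -TurnOff instead if the guest is completely hung
Remove-VM -Name minikube -Force      # delete the VM registration from Hyper-V
Remove-Item -Recurse -Force "$env:USERPROFILE\.minikube"   # wipe minikube state so the next start reprovisions from scratch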

If you plan on changing any of the defaults that name the minikube VM or change where things are stored, you should also delete the .kube folder in your home directory. It contains the kubectl (and other kubeXXX apps) configuration and will not point to the correct information. Of course, you could correct a lot of that (I am assuming) by issuing minikube update-context.
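
A sketch of that step, under the same assumptions about paths:

Remove-Item -Recurse -Force "$env:USERPROFILE\.kube"   # drop the kubectl configuration along with minikube's state
# ...or, if the cluster still exists and only the kubeconfig is stale:
minikube update-context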

I will also admit that you can probably remove just a portion of that folder to have it reinstall while keeping the cached ISO download, but since the ISO is only about 160 MB it isn't a big deal.

BTW... I'm using the latest minikube, 0.28.1, and I haven't experienced this bug with Dynamic Memory.

@tstromberg
Contributor

Closing open localkube issues, as localkube was long deprecated and removed from the last two minikube releases. I hope you were able to find another solution that worked out for you - if not, please open a new PR.
