
multinode: minikube cluster NotReady after restart #8600

Closed
eachirei opened this issue Jun 29, 2020 · 5 comments · Fixed by #8698
Labels: co/multinode, kind/bug, priority/important-soon

eachirei commented Jun 29, 2020

Steps to reproduce the issue:

  1. minikube start --driver=virtualbox -n 3 --extra-config=apiserver.service-node-port-range=1-65535
  2. minikube stop
  3. minikube start --driver=virtualbox -n 3 --extra-config=apiserver.service-node-port-range=1-65535

Full output of failed command:
The command does not fail. However, the primary node's kubelet config gets replaced by the one for my third node (minikube-m03).
I have deleted and recreated the cluster multiple times and the behavior is consistent.
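Roughly how the symptom shows up (a sketch of the checks I run; the exact systemctl output format and flag layout may vary between minikube versions):

    kubectl get nodes
    # -> the primary "minikube" node reports NotReady after the restart

    minikube ssh -n minikube "sudo systemctl status kubelet --no-pager"
    # -> the kubelet command line shows --hostname-override=minikube-m03
    #    and the IP assigned to minikube-m03 instead of the primary node's values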

Full output of minikube start command used, if not already included:
😄 minikube v1.11.0 on Darwin 10.15.5
▪ KUBECONFIG=******
✨ Using the virtualbox driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🔄 Restarting existing virtualbox VM for "minikube" ...
🐳 Preparing Kubernetes v1.18.3 on Docker 19.03.8 ...
▪ apiserver.service-node-port-range=1-65535
🔎 Verifying Kubernetes components...
🌟 Enabled addons: default-storageclass, storage-provisioner
❗ The cluster minikube already exists which means the --nodes parameter will be ignored. Use "minikube node add" to add nodes to an existing cluster.
👍 Starting node minikube-m02 in cluster minikube
🔄 Restarting existing virtualbox VM for "minikube-m02" ...
🌐 Found network options:
▪ NO_PROXY=192.168.99.116
🐳 Preparing Kubernetes v1.18.3 on Docker 19.03.8 ...
▪ env NO_PROXY=192.168.99.116
👍 Starting node minikube-m03 in cluster minikube
🔄 Restarting existing virtualbox VM for "minikube-m03" ...
🌐 Found network options:
▪ NO_PROXY=192.168.99.116,192.168.99.117
🐳 Preparing Kubernetes v1.18.3 on Docker 19.03.8 ...
▪ env NO_PROXY=192.168.99.116
▪ env NO_PROXY=192.168.99.116,192.168.99.117
🏄 Done! kubectl is now configured to use "minikube"

Optional: Full output of minikube logs command:

medyagh added the co/multinode label Jul 7, 2020
medyagh (Member) commented Jul 7, 2020

@sharifelgamal do you mind verifying if this issue is still going on?

medyagh added the triage/needs-information and kind/support labels Jul 7, 2020
medyagh changed the title from "minikube cluster NotReady after restart" to "multinode: minikube cluster NotReady after restart" Jul 7, 2020
sharifelgamal (Collaborator) commented

So where exactly are you seeing the issue? Is it with kubectl get nodes?

eachirei (Author) commented Jul 7, 2020

Yes, kubectl get nodes shows the master node as NotReady. After accessing the node with minikube ssh -n minikube and running systemctl status kubelet, the kubelet turns out to be running with the hostname override minikube-m03 and the host IP assigned to minikube-m03. After correcting the configuration, I restart the docker and kubelet services and delete the DNS and networking pods (the ones managed by deployments). I found I had to delete those pods, especially the ones scheduled on the master, because DNS no longer worked properly otherwise (and the networking one just to be safe). Things then seem to behave normally, although I still had some unexplained issues with pods residing on the master node. I have to follow this procedure after each restart...
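For reference, the manual recovery looks roughly like this (a sketch, not exact commands; the kubelet drop-in path and the way the networking pods are identified are assumptions and may differ between minikube/Kubernetes versions):

    # fix the kubelet flags on the primary node
    minikube ssh -n minikube
    # inside the VM (the drop-in path below is an assumption; it may live elsewhere):
    sudo vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    #   set --hostname-override=minikube and --node-ip back to the primary's IP
    sudo systemctl daemon-reload
    sudo systemctl restart docker
    sudo systemctl restart kubelet
    exit

    # recreate the DNS pods (coredns is labelled k8s-app=kube-dns) and delete the
    # networking pods that landed on the master, so they pick up the corrected node
    kubectl -n kube-system delete pod -l k8s-app=kube-dns
    kubectl -n kube-system get pods -o wide   # then delete the CNI pods on the master by name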

sharifelgamal (Collaborator) commented

I have reproduced the issue with both hyperkit and virtualbox. We'll look more closely at it soon.

sharifelgamal added the kind/bug and priority/important-soon labels and removed the triage/needs-information and kind/support labels Jul 10, 2020
sharifelgamal (Collaborator) commented

It seems to be specific to VM drivers for whatever reason.
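For anyone wanting to cross-check, a quick comparison (a sketch; assumes the docker driver is available on the host):

    # same restart sequence with the docker driver; per the observation above,
    # the primary node should stay Ready here, while the VM drivers
    # (virtualbox, hyperkit) reproduce the NotReady state
    minikube start --driver=docker -n 3
    minikube stop
    minikube start --driver=docker -n 3
    kubectl get nodes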
