v0.35.0 hangs waiting for DNS pods before CNI is deployed #3852
This sounds like a bug with how we handle CNI. Surprisingly, we have no integration tests for this! Question: shouldn't CNI come up before DNS? Alternatively, should we just not block on DNS if CNI is provided?
Yeah, I think CNI should come up before DNS, hence we should not block if the user chooses to bring their own. It should be clear in the documentation / usage text which flag combination opts the user out of the default CNI. It becomes a little fuzzy if an alternate container runtime is chosen; if the user wants a custom CNI, maybe they must set the relevant flags explicitly.

I'm personally of the opinion that CNI should become the default instead of kubenet, and then the above becomes very different, but I realize that's not entirely relevant here 😅

I'm happy to help contribute the above once a consensus is reached 😄
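The "don't block if the user brings their own CNI" proposal can be sketched as a small decision function. This is a hypothetical illustration, not actual minikube code; the function name and parameters are made up for the example:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the proposed logic: only block on the DNS pods
# when minikube itself deploys a CNI, because a user-supplied CNI may not
# exist yet when `minikube start` runs.
should_wait_for_dns() {
  local network_plugin="$1"   # e.g. the value of --network-plugin
  local default_cni="$2"      # "true" if minikube deploys its own CNI config
  if [ "$network_plugin" = "cni" ] && [ "$default_cni" != "true" ]; then
    echo "no"   # user brings their own CNI; coredns stays Pending until then
  else
    echo "yes"  # kubenet or built-in CNI: DNS pods can schedule immediately
  fi
}

should_wait_for_dns cni false      # user-supplied CNI
should_wait_for_dns kubenet true   # defaults
```

Under this sketch, the first call prints `no` (skip the wait) and the second prints `yes` (safe to block).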
Note: I believe this was addressed by #3896, and will be part of the v1.0 release.
Just to confirm that 1.0 will address it and that it's covered by our integration tests, what command line should I use to replicate this issue?
@tstromberg I think on minikube v0.35.0, just this much should replicate it:

```shell
minikube start \
  --network-plugin=cni \
  --extra-config=kubelet.network-plugin=cni
```

(Not exactly sure if that `--extra-config` flag is needed.)

The specific command that failed for me on v0.35.0 was this, but based on testing it reproduced with fewer flags:

```shell
minikube start \
  --kubernetes-version=v1.13.4 \
  --vm-driver=virtualbox \
  --network-plugin=cni \
  --extra-config=kubelet.network-plugin=cni \
  --container-runtime=containerd
```
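While `minikube start` hangs on the DNS wait, the deadlock can be broken by deploying a CNI from a second terminal; the Pending coredns pods then schedule and start finishes. This is a sketch of that workaround, not an endorsed fix — the flannel manifest URL is only an illustrative example, use whichever CNI you actually intend to run:

```shell
# From a second terminal, while `minikube start` is still waiting on DNS:
# deploy your CNI of choice. The flannel manifest below is illustrative.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# Watch the coredns pods leave Pending once the CNI is up
# (coredns pods in kube-system carry the k8s-app=kube-dns label).
kubectl -n kube-system get pods -l k8s-app=kube-dns -w
```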
@anitgandhi - thanks! I've sent out a PR to add an example CNI test so that we don't break this feature again in the future.
On v0.35.0, I'm seeing an issue where `minikube start` waits for the core pods to come up, but hangs on `dns`. Looking at `kubectl get pods --all-namespaces`, I can see that the 2 coredns pods are Pending, similar to #3567. Of course, this makes sense since there isn't a CNI deployed yet. It also happens even if I don't set `--container-runtime` and let it fall back to the default of Docker.

This issue didn't seem to present itself on v0.34.1, using the same command. I believe this is because of the behavior introduced in #3774. On v0.34.1 and prior, minikube doesn't block waiting for the DNS pods to come up; instead, it continues and eventually exits cleanly, at which point my immediate next step of a CNI deployment would take over, and after that, the DNS pods would start successfully.
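The Pending state described above is easy to spot in `kubectl get pods --all-namespaces` output. The snippet below uses a made-up sample of that output (pod names and ages are illustrative) just to show the symptom and a quick way to count the stuck pods:

```shell
# Illustrative sample of `kubectl get pods --all-namespaces` output while
# the cluster has no CNI deployed (pod names/ages are made up):
sample_output='NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-abcde   0/1     Pending   0          5m
kube-system   coredns-86c58d9df4-fghij   0/1     Pending   0          5m
kube-system   kube-apiserver-minikube    1/1     Running   0          5m'

# Count coredns pods stuck in Pending -- without a CNI they never schedule.
pending=$(printf '%s\n' "$sample_output" | grep -c 'coredns.*Pending')
echo "coredns pods pending: $pending"
```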
When I removed the CNI-related lines from my `minikube start` command, everything worked fine. This makes sense: once I remove those lines, the behavior introduced in #3617 takes effect, enabling the default `mybridge` CNI. The issue with this is that the coredns pods will be managed by that CNI instead of the one I deployed (unless I restart the coredns pods).

Perhaps this is expected behavior, in which case this can be closed. But in my opinion, when users want to bring their own CNI, there should be an option for that.
I guess the tl;dr is that the change to wait for DNS pods to come up before a CNI is up (when not using the default CNI), instead of continuing, is what's causing this issue for me 😅
Logs
OS: Ubuntu 18.04 with VirtualBox 6.0
`minikube logs` output (and a snippet showing there was no default CNI config deployed):