
none on Ubuntu should automatically set --extra-config=kubelet.resolv-conf #3511

Closed
tlkh opened this issue Jan 8, 2019 · 9 comments
Assignees
Labels
co/coredns (CoreDNS related issues), co/none-driver, ev/CrashLoopBackOff (Crash Loop Backoff events), help wanted (Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.), kind/bug (Categorizes issue or PR as related to a bug.), priority/important-soon (Must be staffed and worked on either currently, or very soon, ideally in time for the next release.), r/2019q2 (Issue was last reviewed 2019q2)
Milestone

Comments

tlkh commented Jan 8, 2019

Is this a BUG REPORT or FEATURE REQUEST? (choose one):

BUG REPORT

Please provide the following details:

Environment: Ubuntu 18.04, fresh install

Minikube version (use minikube version): v0.32.0

  • OS (e.g. from /etc/os-release): Ubuntu 18.04.1 LTS
  • VM Driver (e.g. cat ~/.minikube/machines/minikube/config.json | grep DriverName): None
  • ISO version (e.g. cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): ?
  • Install tools: ?
  • Others: NIL
    The above can be generated in one go with the following commands (can be copied and pasted directly into your terminal):
minikube version
echo "";
echo "OS:";
cat /etc/os-release
echo "";
echo "VM driver:"; 
grep DriverName ~/.minikube/machines/minikube/config.json
echo "";
echo "ISO version";
grep -i ISO ~/.minikube/machines/minikube/config.json

What happened:

CoreDNS CrashLoopBackOff. Log shows:

[FATAL] plugin/loop: Seen "HINFO IN xxxxxxxxx." more than twice, loop detected

Related issue in CoreDNS: coredns/coredns#2087
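The loop mechanism can be illustrated with a toy shell sketch (not CoreDNS code; the recursion stands in for the forwarded query). With the none driver, CoreDNS inherits the host's resolv.conf; if that file names systemd-resolved's stub at 127.0.0.53, and CoreDNS in the pod listens on all interfaces including loopback, every forward goes straight back to CoreDNS itself:

```shell
# Toy model of the loop plugin's check (not actual CoreDNS code):
# forwarding to a loopback upstream sends the probe back to ourselves,
# and after seeing it more than twice we declare a loop.
seen=0
forward_probe() {
  seen=$((seen + 1))
  if [ "$seen" -gt 2 ]; then
    echo 'plugin/loop: probe seen more than twice, loop detected'
    return 1
  fi
  forward_probe "$1"  # the "upstream" is ourselves, so the query loops back
}
forward_probe "HINFO probe" || true
```

In the real plugin, the probe is a randomly named HINFO query, which is why the log shows a long random label.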

What you expected to happen:

Expected it to work!

How to reproduce it (as minimally and precisely as possible):

Deploy minikube on Ubuntu 18.04 with "None" driver

Output of minikube logs (if applicable):

Anything else we need to know:

Solution:

Add instructions to disable systemd-resolved and use dnsmasq. This worked for me.

sudo apt-get install dnsmasq
sudo systemctl stop systemd-resolved
sudo systemctl disable systemd-resolved

sudo nano /etc/NetworkManager/NetworkManager.conf
# add under [main]
# dns=dnsmasq

sudo cp /etc/resolv.conf /etc/resolv.conf.bak
sudo rm /etc/resolv.conf; sudo ln -s /var/run/NetworkManager/resolv.conf /etc/resolv.conf

sudo systemctl start dnsmasq
sudo systemctl restart NetworkManager
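For the /etc/NetworkManager/NetworkManager.conf step above, the edit can also be scripted instead of done in nano. A sketch, run here against a sample copy of the file so nothing on the host is touched:

```shell
# Sketch: insert "dns=dnsmasq" under [main] non-interactively.
# Operates on a sample copy; point it at the real file at your own risk.
conf=/tmp/NetworkManager.conf
printf '[main]\nplugins=ifupdown,keyfile\n' > "$conf"

# GNU sed: append a line directly after the [main] section header.
sed -i '/^\[main\]$/a dns=dnsmasq' "$conf"

cat "$conf"
```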
@tstromberg tstromberg added kind/bug Categorizes issue or PR as related to a bug. area/dns DNS issues labels Jan 8, 2019
@tstromberg tstromberg changed the title CoreDNS CrashLoopBackoff CoreDNS CrashLoopBackoff with none driver: Seen "HINFO IN xxxxxxxxx." more than twice, loop detected Jan 23, 2019
@tstromberg tstromberg changed the title CoreDNS CrashLoopBackoff with none driver: Seen "HINFO IN xxxxxxxxx." more than twice, loop detected none driver: CoreDNS detects resolver loop, goes into CrashloopBackoff Jan 23, 2019
@tstromberg tstromberg added help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. labels Jan 23, 2019
tstromberg (Contributor) commented:
Thanks for the info! coredns/coredns#2087 and https://stackoverflow.com/questions/53075796/coredns-pods-have-crashloopbackoff-or-error-state/53414041#53414041 were very helpful in understanding this issue. The basic problem, as I understand it, is that your machine is already running a DNS server, and it's causing a feedback loop with CoreDNS.

I'm still not sure about the best way to resolve this issue. You mention that you are on Ubuntu 18.04, which means this answer could be applicable:

https://stackoverflow.com/questions/53075796/coredns-pods-have-crashloopbackoff-or-error-state?answertab=votes#tab-top

Do you mind sharing the output of:

ps -afe | grep kubelet

and:

systemctl list-unit-files | grep enabled | egrep -i 'resolv|dns'

At a minimum, minikube should be able to detect this awkward configuration and warn about it instead of generating a confusing error.
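One possible shape for such a check (a hypothetical sketch, not minikube code): look at whether the resolv.conf kubelet will hand to pods names a loopback resolver, which is the configuration that produces the loop. Demonstrated on a sample file so the commands are reproducible:

```shell
# Hypothetical detection sketch (not actual minikube code): warn when
# resolv.conf names a loopback resolver such as systemd-resolved's stub.
points_at_loopback() {
  grep -Eq '^nameserver[[:space:]]+127\.' "$1"
}

# Sample file shaped like Ubuntu 18.04's /etc/resolv.conf stub.
cat > /tmp/sample-resolv.conf <<'EOF'
nameserver 127.0.0.53
options edns0
EOF

if points_at_loopback /tmp/sample-resolv.conf; then
  echo 'Warning: host resolv.conf points at a loopback resolver;'
  echo 'consider --extra-config=kubelet.resolv-conf=/run/systemd/resolve/resolv.conf'
fi
```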

tlkh commented Jan 23, 2019

@tstromberg I have added the solution that worked for me to the bottom of my issue. I believe Ubuntu 18.04 runs systemd-resolved by default, which causes the issue and can be disabled. A simple note somewhere would probably suffice.

Here is the output from my current setup (which works after disabling systemd-resolved). When I attempt another clean setup, I will post the output from the "broken" clean install.

>> ps -afe | grep kubelet
root      1444     1  7 Jan08 ?        1-03:11:26 /usr/bin/kubelet --authorization-mode=Webhook --client-ca-file=/var/lib/minikube/certs/ca.crt --fail-swap-on=false --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --cgroup-driver=cgroupfs --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --hostname-override=minikube --feature-gates=DevicePlugins=true
kubeflow 21728 21655  0 22:45 pts/0    00:00:00 grep --color=auto kubelet
root     24881 24861  4 Jan18 ?        06:26:16 kube-apiserver --authorization-mode=Node,RBAC --enable-admission-plugins=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota --feature-gates=DevicePlugins=true --advertise-address=172.17.37.244 --allow-privileged=true --client-ca-file=/var/lib/minikube/certs/ca.crt --enable-bootstrap-token-auth=true --etcd-cafile=/var/lib/minikube/certs/etcd/ca.crt --etcd-certfile=/var/lib/minikube/certs/apiserver-etcd-client.crt --etcd-keyfile=/var/lib/minikube/certs/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/var/lib/minikube/certs/apiserver-kubelet-client.crt --kubelet-client-key=/var/lib/minikube/certs/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/var/lib/minikube/certs/front-proxy-client.crt --proxy-client-key-file=/var/lib/minikube/certs/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=8443 --service-account-key-file=/var/lib/minikube/certs/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/var/lib/minikube/certs/apiserver.crt --tls-private-key-file=/var/lib/minikube/certs/apiserver.key
>> systemctl list-unit-files | grep enabled | egrep -i 'resolv|dns'
dns-clean.service                          enabled        
dnsmasq.service                            enabled        
pppd-dns.service                           enabled

DanyC97 commented Feb 8, 2019

I just started playing with minikube and bumped into this while running on 18.10. Should we at least start documenting these bits so people know what to do?

Maybe a troubleshooting or FAQ section?

andrewjjenkins commented:
On my Ubuntu 18.04, starting minikube with a kubelet.resolv-conf option fixed this. This is basically porting the fix from coredns/coredns#2087 to minikube config.

minikube --vm-driver=none start \
  --extra-config=kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
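To confirm the flag actually reached kubelet, you can pull --resolv-conf out of the running kubelet's command line (from ps -afe | grep kubelet, as requested earlier in this thread). A small sketch with a hypothetical helper, demonstrated on a sample command line rather than a live process:

```shell
# Hypothetical helper: extract the --resolv-conf value from a kubelet
# command line. Shown on a sample string, not a live process.
kubelet_resolv_conf() {
  # $1: kubelet command line; prints the --resolv-conf value, if any
  printf '%s\n' "$1" | tr ' ' '\n' | sed -n 's/^--resolv-conf=//p'
}

sample='/usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --resolv-conf=/run/systemd/resolve/resolv.conf'
kubelet_resolv_conf "$sample"   # prints /run/systemd/resolve/resolv.conf
```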

cdennison commented:
That worked for me - same symptoms as above. Running Ubuntu 18.04.1.

@tstromberg tstromberg added priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. needs-solution-message Issues where offering a solution for an error would be helpful and removed priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. labels Mar 26, 2019
@tstromberg tstromberg changed the title none driver: CoreDNS detects resolver loop, goes into CrashloopBackoff none on Ubuntu should automatically set --extra-config=kubelet.resolv-conf Apr 11, 2019
@tstromberg tstromberg added this to the v1.0.1 milestone Apr 11, 2019
@tstromberg tstromberg modified the milestones: v1.0.1, v1.1.0-candidate Apr 30, 2019
@tstromberg tstromberg self-assigned this May 1, 2019
@tstromberg tstromberg removed the needs-solution-message Issues where offering a solution for an error would be helpful label May 2, 2019
@tstromberg tstromberg removed their assignment May 16, 2019
@medyagh medyagh self-assigned this May 16, 2019
@tstromberg tstromberg modified the milestones: v1.1.0, v1.2.0-candidate May 22, 2019
@tstromberg tstromberg added the r/2019q2 Issue was last reviewed 2019q2 label May 22, 2019
schollii commented Jun 8, 2019

Note that minikube will allow you to do a "soft" restart, i.e. without a stop, but this did not solve the problem for me. The reason is that a soft restart does not actually restart the kubelet when vm-driver=none.

The problem was fixed after I did minikube stop and then minikube start (with the extra-config arg described in another reply). Interestingly, none of the kube-system namespace pods get restarted by a minikube stop/start cycle. This makes sense for vm-driver=none, because all the pods are actually processes running directly on the host.

medyagh commented Jun 13, 2019

Closing this, as the issue should have been solved by PR #4465.

Please reopen if the issue is still there.

@medyagh medyagh closed this as completed Jun 13, 2019
harpratap commented Jul 29, 2019

@medyagh I am still facing the same issue.
Minikube version: 1.2.0
Ubuntu 18.04.2 LTS
sudo minikube --vm-driver=none start --cpus 8 --memory 8048

kubectl logs coredns-7559cdd6f8-tg9c5
.:53
2019/07/29 02:21:59 [INFO] CoreDNS-1.2.2
2019/07/29 02:21:59 [INFO] linux/amd64, go1.11, eb51e8b
CoreDNS-1.2.2
linux/amd64, go1.11, eb51e8b
2019/07/29 02:21:59 [INFO] plugin/reload: Running configuration MD5 = 5d5369fbc12f985709b924e721217843
2019/07/29 02:21:59 [FATAL] plugin/loop: Seen "HINFO IN 400104270526716248.6901523470932003903." more than twice, loop detected

Edit: It seems like something is wrong in my setup. I tried this on a clean Ubuntu VM and it works fine. I will need to debug my setup now.
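When debugging a setup like this, one thing worth comparing is the stub resolv.conf against the real one systemd-resolved maintains; kubelet should be pointed at whichever file lists a real upstream server. A sketch on sample files so the commands are reproducible:

```shell
# Sketch: compare a stub resolv.conf with the "real" one. Sample files
# stand in for /etc/resolv.conf and /run/systemd/resolve/resolv.conf
# on a live host.
stub=/tmp/stub-resolv.conf
real=/tmp/real-resolv.conf
printf 'nameserver 127.0.0.53\n' > "$stub"
printf 'nameserver 8.8.8.8\n' > "$real"

# The stub points at loopback; the real file has an upstream server,
# so kubelet should be given the real one via kubelet.resolv-conf.
grep '^nameserver' "$stub" "$real"
```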

gattytto commented:
For running minikube inside an LXC container with Ubuntu 20.04 (focal):
minikube start --vm-driver=none --extra-config kubeadm.ignore-preflight-errors=SystemVerification --extra-config=kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
