[Feature Request] Enable host DNS resolution in virtualbox driver by default #3451

Closed
clocklear opened this issue Dec 13, 2018 · 10 comments
Labels
area/dns, co/virtualbox, good first issue, help wanted, kind/feature, lifecycle/rotten, priority/backlog, r/2019q2

Comments

@clocklear
Contributor

The default settings for the VirtualBox machine enable DNSProxy and disable HostDNSResolver. I'd like to propose reversing these settings. If the host is used for DNS resolution, the kubernetes cluster can resolve registry URLs like registry.kube-system.svc.cluster.local:80, assuming the host has a properly configured cluster.local resolver that routes traffic to the minikube network.
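
For reference, these are the VirtualBox NAT settings in question. A minimal sketch of toggling them by hand with VBoxManage on an existing VM follows; the VM name "minikube" and adapter index 1 are assumptions and may not match what the driver actually configures:

# Power the VM off, flip the NAT DNS settings on adapter 1, then start it again.
# (VM name and adapter index are assumed for illustration.)
VBoxManage controlvm minikube poweroff
VBoxManage modifyvm minikube --natdnshostresolver1 on   # resolve via the host's resolver
VBoxManage modifyvm minikube --natdnsproxy1 off         # don't merely proxy DNS through NAT
VBoxManage startvm minikube --type headless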

Backstory

My local development workflow is something like this:

## Start our minikube and enable the local registry
minikube start --extra-config=apiserver.service-cluster-ip-range=10.96.0.0/24
minikube addons enable registry

## Set up local routing to the minikube
MINIKUBEIP=`minikube ip`
KUBEDNS=`kubectl get svc -o json kube-dns --namespace=kube-system | jq -r '.spec.clusterIP'`
sudo route -n add 10.96.0.0/24 $MINIKUBEIP

cat << EOF | sudo tee /etc/resolver/cluster.local
nameserver $KUBEDNS
domain cluster.local
search_order 1
EOF

This allows for a rich local development experience, in which I can resolve cluster services by DNS without having to port-forward individual services (e.g. http://kubernetes-dashboard.kube-system.svc.cluster.local). The problem comes when I push a new manifest to my kube installation that references containers I have pushed to my local registry. By default, since the VirtualBox VM doesn't have HostDNSResolver enabled, it doesn't know how to resolve cluster.local names, so the image pull ends up 404'ing. If we enable HostDNSResolver, the VM will forward these resolution requests to my local machine, which will service them properly.
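
To make the failure mode concrete, here is a hedged sketch of the kind of workflow that breaks: pushing to the addon registry by its cluster DNS name and then referencing that same name in a manifest. The image name "myapp" is a made-up placeholder, and the port assumes the registry addon's default service port 80:

# Tag and push a locally built image to the addon registry by its in-cluster name,
# then deploy a manifest that pulls it by the same name ("myapp" is a placeholder).
docker tag myapp:latest registry.kube-system.svc.cluster.local:80/myapp:latest
docker push registry.kube-system.svc.cluster.local:80/myapp:latest

cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: registry.kube-system.svc.cluster.local:80/myapp:latest
EOF

With the default settings, the pull referenced in that manifest is what fails from inside the VM.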

This should not cause any backwards-compatibility issues (as far as I can tell) and should be a drop-in replacement for what folks are already using. I am, of course, open to putting this behind some sort of runtime configuration flag.

I've made this change in my own fork of minikube and it works great for my use case. I'd be happy to submit a PR.

@ceason
Contributor

ceason commented Dec 13, 2018

The core problem here seems to be that minikube addons enable registry creates a registry that minikube can't actually use (because the docker daemon inside the VM isn't connected to kube-dns and therefore can't resolve the registry's hostname).
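
A quick way to confirm this from the VM side (a sketch only; nslookup may not be present in every guest image, and the exact output will vary):

# Run a lookup inside the minikube VM; with the default settings this is
# expected to fail, since the VM's resolver knows nothing about cluster.local.
minikube ssh "nslookup registry.kube-system.svc.cluster.local"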

@balopat
Contributor

balopat commented Dec 13, 2018

This is very close to what I'll be demoing at KubeCon shortly, using hyperkit; I believe host resolution works there by default.

Host DNS resolution in VirtualBox: I don't see any fundamental issues with enabling it. If you'd raise a PR, that would be awesome, and we can go from there.

Network route creation: minikube tunnel does exactly that, so that part is done: #3015

Accessing the registry from within minikube (@ceason): pull/push should work once the --insecure-registry flag is set for the Docker daemon. For containerd it's a bit trickier (#3444), and I haven't tried it with CRI-O yet. Resolution works as soon as host-based DNS resolution is in place and minikube tunnel is up (or you can create the network routes manually). It's a bit backwards, but from within the VM, name resolution goes out to the host, which points to 10.96.0.10, which resolves via kube-dns and hands back a 10.x.x.x service IP, and it does work.
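
For reference, a hedged sketch of passing that flag at cluster creation time (the CIDR mirrors the service range used earlier in this thread and is an assumption about your setup; the flag only takes effect when the VM is first created):

# Create the cluster with the service CIDR marked as an insecure registry
# for the Docker daemon inside the VM (CIDR assumed from the earlier example).
minikube start \
  --extra-config=apiserver.service-cluster-ip-range=10.96.0.0/24 \
  --insecure-registry "10.96.0.0/24"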

@tstromberg added the kind/feature, co/virtualbox, and good first issue labels on Dec 18, 2018
@tstromberg added the priority/backlog, area/dns, and help wanted labels on Jan 24, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Apr 29, 2019
@woodcockjosh
Contributor

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label on May 11, 2019
@tstromberg added the r/2019q2 label on May 22, 2019
@erickalmeidaptc

I tried to use minikube tunnel on Ubuntu 18.10 and DNS resolution of services did not work.

It only worked after I made these changes:

1. Created the file /etc/systemd/resolved.conf.d/99.minikube.conf:

[Resolve]
DNS=10.96.0.10
Domains=svc.cluster.local
Cache=yes

2. Altered /etc/nsswitch.conf

from:

hosts:          files mdns4_minimal [NOTFOUND=return] dns myhostname

to:

hosts:          files mdns4_minimal dns [NOTFOUND=return] myhostname

3. Ran sudo systemctl restart systemd-resolved, after which resolution works:

$ dig +short kubernetes-dashboard.kube-system.svc.cluster.local
10.100.252.203

But after some time the CoreDNS process inside the VM pins the CPU. I found an issue that may be related: coredns/coredns#2083

Does anyone know how I can solve this?

@tstromberg
Contributor

Since #3453 was merged - what is remaining here?

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Feb 6, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Mar 7, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
