Allow macOS to resolve service FQDNs during 'minikube tunnel' #3464
Conversation
Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA. It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Can one of the admins verify this patch?
I signed it
…l' (MacOS only for now)
@minikube-bot OK to test
This is nice, I've been doing the exact same thing manually.
@minikube-bot OK to test
Will do. I'm traveling for the holidays but will get to this first week of the new year.
@balopat I've updated the integration test to include the DNS resolver config. There is one test failure at …
@minikube-bot OK to test
@minikube-bot OK to test
Looks great - thank you for contributing this! Minor nits:
```
@@ -34,6 +36,9 @@ func (router *osRouter) EnsureRouteIsAdded(route *Route) error {
	if exists {
		return nil
	}
	if err := writeResolverFile(route); err != nil {
		return fmt.Errorf("could not write /etc/resolver/{cluster_domain} file: %s", err)
```
Don't include the path name in the message, as `EnsureRouteIsAdded` doesn't authoritatively know what it is. My suggestion is: `return fmt.Errorf("write resolver file: %v", err)`
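For context on what the new `writeResolverFile` call is doing, here is a minimal sketch of such a helper, assuming the route carries the cluster domain and the ClusterIP of the cluster's DNS service; the trimmed-down `Route` type, the `ClusterDNSIP` field name, and the exact file contents are assumptions, not necessarily what this PR implements:

```go
package tunnel

import (
	"fmt"
	"io/ioutil"
	"path/filepath"
)

// Route is a trimmed-down stand-in for the PR's route type; only the fields
// used below are shown, and ClusterDNSIP is an assumed field name.
type Route struct {
	ClusterDomain string
	ClusterDNSIP  string
}

// writeResolverFile writes /etc/resolver/<clusterDomain> so that macOS sends
// DNS queries for the cluster domain to the cluster's DNS service. Writing
// under /etc/resolver normally requires elevated privileges.
func writeResolverFile(route *Route) error {
	resolverPath := filepath.Join("/etc/resolver", route.ClusterDomain)
	// resolver(5) on macOS understands keys such as "domain", "nameserver"
	// and "search_order"; a single nameserver line is enough here.
	content := fmt.Sprintf("domain %s\nnameserver %s\nsearch_order 1\n",
		route.ClusterDomain, route.ClusterDNSIP)
	return ioutil.WriteFile(resolverPath, []byte(content), 0644)
}
```

On macOS, any file under /etc/resolver named after a domain tells the system resolver to use the listed nameserver for that domain, which is what makes service FQDNs resolvable once `minikube tunnel` has added the route.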
```
@@ -162,5 +167,37 @@ func (router *osRouter) Cleanup(route *Route) error {
	if !re.MatchString(message) {
		return fmt.Errorf("error deleting route: %s, %d", message, len(strings.Split(message, "\n")))
	}
	// idempotent removal of cluster domain dns
	resolverFile := fmt.Sprintf("/etc/resolver/%s", route.ClusterDomain)
```
Make `/etc/resolver` a constant to share between functions, and use `filepath.Join` to create paths from it.
Also, I prefer that the variable name uses path instead of file, since the latter references something specific in Go (`os.File`).
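A minimal sketch of the suggested change, using illustrative names (`resolverDir`, `resolverPath`) that may differ from what the PR ended up using:

```go
package tunnel

import "path/filepath"

// resolverDir is the macOS directory holding per-domain resolver files.
const resolverDir = "/etc/resolver"

// resolverPath builds the path of the per-cluster-domain resolver file,
// e.g. /etc/resolver/cluster.local for the default cluster domain.
func resolverPath(clusterDomain string) string {
	return filepath.Join(resolverDir, clusterDomain)
}
```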
```go
command := exec.Command("sudo", "route", "-n", "delete", cidr)
_, err := command.CombinedOutput()
if err != nil {
	t.Logf("cleanup error (should be ok): %s", err)
}
command = exec.Command("sudo", "rm", "-f", fmt.Sprintf("/etc/resolver/%s", r.ClusterDomain))
_, err = command.CombinedOutput()
```
Since we're not using the output:

```go
if err := command.Run(); err != nil {
	return fmt.Errorf("rm: %v", err)
}
```
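Applied to the cleanup snippet above, the combined suggestions might look roughly like this; `cleanupResolverFile` is a hypothetical helper name used only for illustration:

```go
package tunnel

import (
	"os/exec"
	"path/filepath"
	"testing"
)

// cleanupResolverFile sketches the test cleanup with the review comments
// applied: the command output is unused, so Run replaces CombinedOutput,
// and the path is built with filepath.Join instead of fmt.Sprintf.
func cleanupResolverFile(t *testing.T, clusterDomain string) {
	cmd := exec.Command("sudo", "rm", "-f", filepath.Join("/etc/resolver", clusterDomain))
	if err := cmd.Run(); err != nil {
		t.Logf("cleanup error (should be ok): %s", err)
	}
}
```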
May I suggest an alternative? By using the ingress addon (or e.g. Traefik) you can leverage external-dns to populate CoreDNS and use that to resolve any ingresses, just like you would in a production setup. Right now I've got it running with a CoreDNS & etcd deployment that are separate from kube-system, but it should be possible to simply reuse the kube-system etcd with a bit of extra config, meaning only external-dns would need to be added.

I have all my ingresses running on a single dedicated domain. In order to get the host to query the right resolver for that domain, I point macOS at it with an /etc/resolver entry, much like this PR does; the gist linked below has the full setup.

Though I'm not quite sure how to achieve the same for Windows or Linux. Linux I'm sure is trivial, Windows maybe not so much. Check out the gist: https://gist.github.com/andsens/2c8c67cf72346c2c0df02614d6386d0a
I don't think this is so much an alternative as it is a really nice addition. I use the exact same commands as you, but I point it at the kube-system CoreDNS server rather than a separate one managed by external-dns. Glancing at this PR, it does the exact same. This gives me the experience of being "inside" the cluster during development, and allows me to access services from the host machine that are usually only accessible from containers running inside the cluster.

However, the kube-system CoreDNS server doesn't contain DNS entries for ingresses, so combining this with external-dns seems like a good addition. I currently put ingress DNS entries in …
Marking as WIP as there are still outstanding review comments to be addressed.
I just wanted to give a heads up regarding the mDNS option. I have done a bit of research on this before I arrived at my resolver setup, and there is one big mean hurdle: mDNS is made for link-local addresses. So either the broadcaster needs to sit directly on the tunnel interface (i.e. in the virtual machine), or you set up a reflector in its place and move the discovery mechanism into a container; avahi has an option for this.

That said, mDNS is a really nice cross-platform way to do this. The pain it takes to get it working properly is repaid a thousandfold once everything "just works", on every machine, with zero configuration.
Interesting test failure on the last run:
@minikube-bot OK to test
@minikube-bot OK to test
Merging! Thank you for dealing with this PR being open extraordinarily long.
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: ceason

If they are not already assigned, you can assign the PR to them by writing `/assign` in a comment. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files: …

Approvers can indicate their approval by writing `/approve` in a comment.
`minikube tunnel` is an excellent command, but it still has a particularly rough edge: users must manually look up a service's ClusterIP and refer to that IP directly, rather than referencing the service's DNS name (i.e. `<service>.<namespace>.svc.<clusterDomain>`).

This PR adds the ability to resolve service FQDNs from the host during `minikube tunnel`. Currently this is only implemented for macOS (via its resolver, which allows easily overriding nameservers for a particular domain). Some potential implementations for other platforms are:

- Linux: resolve `*.svc.<clusterDomain>` by querying the cluster's DNS service
- Windows: …

Does the macOS implementation seem reasonable? If so, does adding support for macOS now and Windows/Linux later seem reasonable?
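To make "querying the cluster's DNS service" concrete, here is a small sketch in Go that resolves a service FQDN directly against the cluster DNS; the ClusterIP 10.96.0.10 and the FQDN used here are placeholder values, and the lookup only succeeds while `minikube tunnel` provides a route to that IP:

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	clusterDNS := "10.96.0.10:53" // assumed kube-dns/CoreDNS ClusterIP

	resolver := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			// Ignore the resolver the OS would normally use and send every
			// query straight to the cluster's DNS service.
			return d.DialContext(ctx, network, clusterDNS)
		},
	}

	ips, err := resolver.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved to:", ips)
}
```

This is essentially what the /etc/resolver entry asks macOS itself to do for every name under the cluster domain; a Linux implementation could achieve the same by pointing a per-domain stub resolver (dnsmasq, systemd-resolved, etc.) at the cluster DNS IP.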