kubeadm: Create control plane with ClusterFirstWithHostNet dns policy #68890
Conversation
Force-pushed the branch from 8f0c8b9 to 8f6ec98
/test pull-kubernetes-integration
/test pull-kubernetes-e2e-kops-aws
/assign @fabriziopandini
/test pull-kubernetes-e2e-kops-aws
@andrewrynhard /kind feature
this might not get reviews for a while due to the 1.12 release closing in. for reference: the addition makes sense, except that we need to verify that there are no unwanted side effects.
/cc @kubernetes/sig-cluster-lifecycle-pr-reviews
@neolit123: GitHub didn't allow me to request PR reviews from the following users: chrisohaver. Note that only kubernetes members and repo collaborators can review this PR, and authors cannot review their own PRs. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@andrewrynhard thanks
@andrewrynhard thanks for this PR! I think this is a valuable change. Since this change touches all the control plane components, I left lgtm pending for a second opinion.
/test pull-kubernetes-e2e-kops-aws
I'm a bit confused on what the current behavior is and why it fails:

// DNSClusterFirst indicates that the pod should use cluster DNS
// first unless hostNetwork is true, if it is available, then
// fall back on the default (as determined by kubelet) DNS settings.
DNSClusterFirst DNSPolicy = "ClusterFirst"

Even if kube-dns is not up yet, won't the default kubelet settings just get used? Is there an issue that describes this in more detail?
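For reference, the policy this PR switches to is defined alongside that constant in the core v1 API types; this is quoted from memory, so the comment wording may differ slightly from the exact source:

// DNSClusterFirstWithHostNet indicates that the pod should use cluster DNS
// first, if it is available, then fall back on the default
// (as determined by kubelet) DNS settings.
DNSClusterFirstWithHostNet DNSPolicy = "ClusterFirstWithHostNet"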
@stealthybox it is a bit confusing. And I'm not sure I understand either. I just know that service DNS did not resolve until I switched to self-hosted, and that uses ClusterFirstWithHostNet.
EDIT: per the comment quoted above, cluster DNS is used unless hostNetwork is true, which is true in the case of the control plane. The default policy in this case is described here.
Thanks for figuring that out @andrewrynhard. Looks like this modification is sufficient to cover etcd, the apiserver, KCM, and the scheduler: +1
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: andrewrynhard, fabriziopandini, stealthybox. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
The actual functionality for these options is here: kubernetes/pkg/kubelet/network/dns/dns.go Lines 274 to 284 in e9fe3f7
I think this broke my funky docker-in-docker local cluster setup, trying to figure out why :^)
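For anyone following along, here is a simplified sketch of the policy-selection logic at that reference; it is a paraphrase of the kubelet's dns.go, not a verbatim excerpt of those lines:

// Paraphrased sketch of the kubelet's DNS policy selection (not verbatim).
type podDNSType int

const (
	podDNSCluster podDNSType = iota // use the cluster DNS servers (kube-dns/CoreDNS)
	podDNSHost                      // use the host's resolv.conf
)

func getPodDNSType(pod *v1.Pod) podDNSType {
	switch pod.Spec.DNSPolicy {
	case v1.DNSClusterFirstWithHostNet:
		// Cluster DNS is used even for hostNetwork pods.
		return podDNSCluster
	case v1.DNSClusterFirst:
		if !pod.Spec.HostNetwork {
			return podDNSCluster
		}
		// hostNetwork + ClusterFirst falls back to the host's DNS,
		// which is why the static control plane pods never saw kube-dns.
		return podDNSHost
	default: // DNSDefault
		return podDNSHost
	}
}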
AFAICT this is basically equivalent to setting these pods to only use the in-cluster DNS: kubernetes/pkg/kubelet/network/dns/dns.go Lines 346 to 360 in e9fe3f7
Not sure how this is supposed to work for bootstrapping?
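Roughly, once a pod is classified as podDNSCluster, its DNS config is built from the cluster DNS servers only; again a paraphrase of the referenced lines rather than the exact code, with a local struct standing in for the runtime DNS config type:

// DNSConfig mirrors the shape the kubelet hands to the runtime
// (nameservers, search domains, resolver options).
type DNSConfig struct {
	Servers  []string
	Searches []string
	Options  []string
}

func clusterDNSConfig(clusterDNS []net.IP, clusterSearches []string) DNSConfig {
	cfg := DNSConfig{Options: []string{"ndots:5"}}
	for _, ip := range clusterDNS {
		// The cluster DNS service IPs become the *only* nameservers;
		// the host's resolv.conf entries are not appended, so resolution
		// depends entirely on kube-dns/CoreDNS being reachable.
		cfg.Servers = append(cfg.Servers, ip.String())
	}
	cfg.Searches = clusterSearches
	return cfg
}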
Does kubeadm always deploy kube-proxy on the master? To be able to send DNS queries to kube-dns (coredns), kube-dns's service VIP needs to be routable beforehand.
yes, it does. the CI tests for master are green (they run nodes over GCE), but Ben is seeing local failures:
I don't think we start kube-proxy during control plane bringup.
Issues are attributable back to #69195, meaning we cannot resolve
The apiserver can't use kube-dns as its only resolver, as that's a circular dependency. The apiserver will come up before the kube-dns pods are present and needs to resolve things like etcd to be able to work.
Right. With #69195 we fixed /etc/hosts not being respected as a fallback in some cases, which was sufficient for my case. For others this might still be problematic.
the resolv.conf for when ClusterFirstWithHostNet is set:
This is likely green because the apiserver manifest has the following set:
With these settings the tests would probably stay green because kube-apiserver will be able to resolve what it needs to by using the IP addresses. If the user has an external etcd cluster this will break.
I don't have external etcd, just some fun networking (https://sigs.k8s.io/kind), but I suspect this would break external etcd unless the nodes had a hosts entry. cc @MrHohn
What this PR does / why we need it:
Currently, the static pods created by kubeadm use the ClusterFirst dns policy. This prevents doing things like setting the oidc-issuer-url flag of the API Server to a Kubernetes service. By using the ClusterFirstWithHostNet policy the API Server will be able to resolve the service DNS.
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Special notes for your reviewer:
Release note:
/cc sig-cluster-lifecycle
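For context, a minimal sketch of the kind of change described above, assuming the pod spec is built the way kubeadm's static pod helpers build it; the function name here is illustrative, while the field names come from the core v1 API:

import v1 "k8s.io/api/core/v1"

// componentPodSketch shows the gist of the change: control plane static
// pods keep hostNetwork, but switch their DNS policy so the cluster DNS
// service is consulted first instead of only the host's resolv.conf.
func componentPodSketch(container v1.Container) v1.Pod {
	return v1.Pod{
		Spec: v1.PodSpec{
			Containers:  []v1.Container{container},
			HostNetwork: true,
			// Before: the default (ClusterFirst), which for hostNetwork pods
			// behaves like Default. After: ClusterFirstWithHostNet.
			DNSPolicy: v1.DNSClusterFirstWithHostNet,
		},
	}
}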