Masters should not be excluded from service load balancers #65618
Possibly related: #33884.
@ljani This is by design; load balancers do not include master nodes in the available pool of backend servers. You should remove the node-role.kubernetes.io/master label from the node. I'm going to close this issue as it's not a bug. If you would like to see the ability to use masters as available backends for LBs, then please create another ticket with a feature request. /close
@jhorwit2 Thanks for the response. So, it's okay to remove the node-role.kubernetes.io/master label? Searching for masterless setups does not really yield any related results. Anyhow, if that's a supported scenario, then I'm very happy with it. Otherwise I should open the ticket.
It all depends on how the cluster is set up and whether anything depends on the labels. Single-node clusters will probably always have some quirks.
I've been following the Creating a single master cluster with kubeadm guide, and thus I'm scheduling workloads on the master as well.
The docs are probably incorrect when it comes to this because it wasn't tested. They should be updated. /reopen
Please mind that the kubeadm documentation will stop adding cloud-provider-specific documentation in the future and will instead link to external resources. /cc @kubernetes/sig-cluster-lifecycle-bugs
My comment in the other issue: I think excluding masters from service load balancers is a bug and needs a more nuanced design. I'm going to bump the priority here because I've got users trying to run pods on masters with service LBs pointing at them, and those pods can't be reached through the service LB.
Changing title to be more accurate.
I think the correct fix here was the service LB health-check support for targeting the serving pool at the nodes that hold the pod; now that we have that, we don't need this hack.
To be clear, this would only apply for Services with externalTrafficPolicy: Local?
I'm sure we could add logic in the service controller to include masters if externalTrafficPolicy == Local and exclude them otherwise, but that doesn't seem like an elegant solution 🤔
For masters, it seems likely. I mean, in general I expect externalTrafficPolicy: Local to be the correct setting for the vast majority of LB services. Is there a reason I would not want that policy in general use?
In general I would agree that LBs would use externalTrafficPolicy: Local.
@smarterclayton this should perhaps be merged with #65013. The service controller today does some filtering based on masters in interesting ways. Historically, it seems that masters were marked unschedulable to be excluded from LB services. Then, once labels for roles became popular, that got added as well.
This gets a little hacky with how backends are updated today. We would need to keep track of two different backend sets while updating each service.
We grew this in c22d042 (docs/user/aws/install_upi: Add 'sed' call to zero compute replicas, 2019-05-02, openshift#1649) to set the stage for changing the 'replicas: 0' semantics from "we'll make you some dummy MachineSets" to "we won't make you MachineSets". But that hasn't happened yet, and since 64f96df (scheduler: Use schedulable masters if no compute hosts defined, 2019-07-16, openshift#2004) 'replicas: 0' for compute has also meant "add the 'worker' role to control-plane nodes".

That leads to racy problems when ingress comes through a load balancer, because Kubernetes load balancers exclude control-plane nodes from their target set [1,2] (although this may get relaxed soonish [3]). If the router pods get scheduled on the control-plane machines due to the 'worker' role, they are not reachable from the load balancer and ingress routing breaks [4]. Seth says:

> pod nodeSelectors are not like taints/tolerations. They only have effect at scheduling time. They are not continually enforced.

which means that attempting to address this issue as a day-2 operation would mean removing the 'worker' role from the control-plane nodes and then manually evicting the router pods to force rescheduling. So until we get the changes from [3], it's easier to just drop this section and keep the 'worker' role off the control-plane machines entirely.

[1]: https://bugzilla.redhat.com/show_bug.cgi?id=1671136#c1
[2]: kubernetes/kubernetes#65618
[3]: https://bugzilla.redhat.com/show_bug.cgi?id=1744370#c6
[4]: https://bugzilla.redhat.com/show_bug.cgi?id=1755073
We grew replicas-zeroing in c22d042 (docs/user/aws/install_upi: Add 'sed' call to zero compute replicas, 2019-05-02, openshift#1649) to set the stage for changing the 'replicas: 0' semantics from "we'll make you some dummy MachineSets" to "we won't make you MachineSets". But that hasn't happened yet, and since 64f96df (scheduler: Use schedulable masters if no compute hosts defined, 2019-07-16, openshift#2004) 'replicas: 0' for compute has also meant "add the 'worker' role to control-plane nodes".

That leads to racy problems when ingress comes through a load balancer, because Kubernetes load balancers exclude control-plane nodes from their target set [1,2] (although this may get relaxed soonish [3]). If the router pods get scheduled on the control-plane machines due to the 'worker' role, they are not reachable from the load balancer and ingress routing breaks [4]. Seth says:

> pod nodeSelectors are not like taints/tolerations. They only have effect at scheduling time. They are not continually enforced.

which means that attempting to address this issue as a day-2 operation would mean removing the 'worker' role from the control-plane nodes and then manually evicting the router pods to force rescheduling. So until we get the changes from [3], we can either drop the zeroing [5] or adjust the scheduler configuration to remove the effect of the zeroing. In both cases, this is a change we'll want to revert later once we bump Kubernetes to pick up a fix for the service load-balancer targets.

[1]: https://bugzilla.redhat.com/show_bug.cgi?id=1671136#c1
[2]: kubernetes/kubernetes#65618
[3]: https://bugzilla.redhat.com/show_bug.cgi?id=1744370#c6
[4]: https://bugzilla.redhat.com/show_bug.cgi?id=1755073
[5]: openshift#2402
This was addressed via kubernetes/enhancements#1144 and #90126.
There should no longer be any issues running router pods on control plane nodes (i.e. kubernetes/kubernetes#65618 which was resolved in kubernetes/enhancements#1144). Remove this limitation from the docs. Signed-off-by: Stephen Finucane <[email protected]>
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
I'm running a single-node cluster on AWS EC2, i.e. scheduling work on the master as well. When I tried to add an ELB for my service, the EC2 instance was not associated with the ELB: the ELB gets created, but there are 0 EC2 instances associated with it. (I'm using the ELB for SSL termination, if you wonder why I'd load balance a single-node cluster.)
The service controller logs this message:
What you expected to happen:
The EC2 instance is associated with the ELB.
How to reproduce it (as minimally and precisely as possible):
1. Create a single-node cluster with kubeadm and --cloud-provider=aws, with workloads allowed to schedule on the master.
2. Create a Service of type LoadBalancer.
3. Observe that the ELB is created, but the AWS console shows Status: 0 of 0 instances in service.
Anything else we need to know?:
The reason for this seems to be the node-role.kubernetes.io/master label: it blocks associating a load balancer with the node. On the other hand, this changed what is included, because includeNodeFromNodeList did not check whether a node is a master. I'm not sure what the correct fix would be. I could try to submit a PR if you guide me on how this should behave. Is my scenario even a supported one? I think this bug should be reproducible on other clouds as well.
Environment:
- Kubernetes version (use kubectl version):
- Cloud provider or hardware configuration: AWS EC2
- Kernel (e.g. uname -a):
- Install tools: kubeadm