Support specifying kube-proxy tolerations #699
I can confirm this.
Related: #609. @luxas Both issues are currently the biggest problems we have using kubeadm in a prod/HA cluster. I can help fix them if you can tell me how you'd like to approach these things. I can also bring this up somewhere more suitable for discussion; I just don't know where that would be.
Any ideas yet on how to approach this? This has hit us multiple times now and I'd be happy to help fix it. For something as fundamental as kube-proxy, maybe just having a generic NoSchedule toleration would be fine?
/assign @fabriziopandini
@discordianfish - we're going to address this in the 1.11 cycle, but the backlog is pretty huge at the moment.
@timothysc That's why I'm offering my help :)
So I think there are two options:

1. Make kube-proxy tolerate all NoSchedule taints out of the box.
2. Make the kube-proxy tolerations configurable, e.g. via the kubeadm config.

I'd just go with 1., at least for now. I think it's very rare that somebody wants to prevent kube-proxy from running on their nodes. It's arguably much more likely that somebody tainting their nodes wants kube-proxy on them than not. Since it's so trivial, I'll just submit a PR for this where the approach can be discussed (see the sketch below).
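For reference, a minimal sketch of what option 1 could look like in the kube-proxy DaemonSet pod spec. This is an illustration of the idea, not the patch that was eventually merged:

```yaml
# Sketch of a blanket NoSchedule toleration: an entry with no key and
# operator: Exists matches every taint key, and limiting the effect to
# NoSchedule leaves NoExecute taints (evictions) untouched.
tolerations:
- operator: Exists
  effect: NoSchedule
```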
As an essential core component, kube-proxy should generally run on all nodes, even if the cluster operator taints nodes for special purposes. This fixes kubernetes/kubeadm#699
This change was reverted/broken by the next commit to the manifest: kubernetes/kubernetes@d194926#diff-e3ad35b550d4fcbf99d00903a91c787e
^ @dixudx WDYT?
@mxey Yeah 😕
@kubernetes/sig-cluster-lifecycle-bugs - who is actively working on re-fixing this one? |
/assign @neolit123 |
Automatic merge from submit-queue (batch tested with PRs 65931, 65705, 66033). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

kubeadm: run kube-proxy on non-master tainted nodes

**What this PR does / why we need it**: kube-proxy should be able to run on all nodes, independent of the taints on those nodes. This restriction was previously removed in bb28449 but was then brought back in d194926.

/cc @kubernetes/sig-cluster-lifecycle-pr-reviews
/cc @luxas @detiber @dixudx @discordianfish @mxey
/kind bug
/area kube-proxy
/area kubeadm

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*: Fixes kubernetes/kubeadm#699

**Special notes for your reviewer**: we are removing the requirement again, but please have a look at all the implications here. Hopefully we won't have to bring it back.

**Release note**:

```release-note
kubeadm: run kube-proxy on non-master tainted nodes
```
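For context, a minimal sketch of what tolerating all taints looks like on a DaemonSet pod spec. This assumes the merged manifest uses a bare `operator: Exists` entry; it is a reconstruction, not a quote from the PR:

```yaml
# Reconstruction (not quoted from the merged manifest): a toleration
# with only operator: Exists matches every taint regardless of key,
# value, or effect, so kube-proxy schedules onto any tainted node.
tolerations:
- operator: Exists
```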
FEATURE REQUEST
kubeadm deploys kube-proxy with these tolerations:
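The quoted toleration block was not captured in this copy of the issue; the snippet below is a reconstruction of what kubeadm's kube-proxy DaemonSet carried at the time and may not match the original quote exactly:

```yaml
# Reconstructed example (not the issue's exact quote): the DaemonSet
# only tolerated the master taint, so any other NoSchedule taint kept
# kube-proxy pods off the node.
tolerations:
- key: node-role.kubernetes.io/master
  effect: NoSchedule
```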
This allows kube-proxy to run on masters but not on any other tainted hosts. In my case, I want to taint a group of hosts to reserve them for special workloads, but they still need to run kube-proxy. I could just edit the DaemonSet directly, but I would prefer a declarative way (e.g. the kubeadm config), and I'm worried that kubeadm might revert this change on upgrades or when replacing masters.
Alternatively, the toleration could just be made to apply to all NoSchedule taints, which would solve my problem without introducing a new flag/config.