UsePolicyConfigMap for kube-scheduler #3581

Conversation
Hi @whs. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Looked at this and I am wondering if we can make the API cooler, meaning being able to tweak the scheduler so a user can set the configmap. The other option, which may be easier, is to expose an addons API for the cluster, which I am planning on doing; then they can override the configmap. @justinsb thoughts?
/ok-to-test Let's get this out of the way
An idea: another way to supply scheduler configuration is by giving it a file. We're running our kube-up based production cluster using this option, by uploading the scheduler policy to the master node and bind-mounting it into the container. This might be achievable using fileAssets. Another plus is that after changing fileAssets the user probably wants to rolling-update their cluster, so the scheduler will be restarted. I'm not sure how hard this implementation is.
```go
// PolicyConfigMap is the name of the configmap to use for scheduler policy
PolicyConfigMap string `json:"policyConfigMap,omitempty" flag:"policy-configmap"`
// PolicyConfigMapNamespace is the namespace containing the configmap
PolicyConfigMapNamespace string `json:"policyConfigMapNamespace,omitempty" flag:"policy-configmap-namespace"`
```
Is there any reason that this is not kube-system? From a best-practice standpoint, would we ever put it somewhere else?
The default value of --policy-configmap-namespace is already kube-system, so I removed the configuration option.
Externalizing the addons API will do all of this and more :) https://github.com/kubernetes/kops/blob/master/channels/pkg/api/channel.go is channels. Channels runs inside of protokube and applies addons with kubectl apply. The workflow would be
We would take this PR, and build upon it.
@whs PR needs rebase

@whs can we get a rebase, and then I think we are good to go.
```go
return &api.Cluster{
	Spec: api.ClusterSpec{
		CloudProvider:     "aws",
		KubernetesVersion: "v1.4.0",
		KubeScheduler: &api.KubeSchedulerConfig{
			PolicyConfigMap: "scheduler-config",
		},
	},
}
```
Any reason for removing PolicyConfigMap? It seems the name of the ConfigMap changes from scheduler-config -> scheduler-policy between the 2 versions.
Sorry if I'm missing some context, reading through the other PR now :)
Yeah, so after reading through the PRs, my only question is around why we chose to change the ConfigMap name here; it seems it can break backwards compatibility or leave dangling resources. Does the scheduler create the ConfigMap scheduler-policy by default if it doesn't exist?
That's an oversight on my part. I'll rebase and change the name back. Thanks for the catch.
Actually I just realized. The name "scheduler-config" is not actually in use anywhere except as test data. The actual config name in the cluster is configurable by this PolicyConfigMap field (which has no default value).
ahh, missed that it's just test data, thanks!
I've rebased. I think we should be safe with changing the name, as in the old PR the user can choose any name and namespace anyway (and we don't have a default value). The old documentation also suggested the name "scheduler-policy".
```diff
@@ -39,8 +39,8 @@ func (b *KubeSchedulerOptionsBuilder) BuildOptions(o interface{}) error {

 	config := clusterSpec.KubeScheduler

-	if config.PolicyConfigMap != "" && b.IsKubernetesLT("v1.7.0") {
+	if config.UsePolicyConfigMap != nil && b.IsKubernetesLT("v1.7.0") {
 		return fmt.Errorf("policyConfigMap is only supported in Kubernetes 1.7.0 or later")
```
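For context on what a gate like `IsKubernetesLT("v1.7.0")` checks, here is a minimal sketch of a strictly-less-than version comparison. The parser and `isKubernetesLT` helper below are illustrative assumptions for this thread; kops's real builder uses a proper semver library rather than this hand-rolled compare.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseVersion splits "v1.6.2" into numeric parts. Deliberately minimal:
// no pre-release or build-metadata handling.
func parseVersion(v string) []int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	nums := make([]int, len(parts))
	for i, p := range parts {
		nums[i], _ = strconv.Atoi(p)
	}
	return nums
}

// isKubernetesLT reports whether version a is strictly older than b,
// mirroring the gate in the diff above: reject the option when the
// cluster runs a Kubernetes release before the feature existed.
func isKubernetesLT(a, b string) bool {
	av, bv := parseVersion(a), parseVersion(b)
	for i := 0; i < len(av) && i < len(bv); i++ {
		if av[i] != bv[i] {
			return av[i] < bv[i]
		}
	}
	return len(av) < len(bv)
}

func main() {
	fmt.Println(isKubernetesLT("v1.6.2", "v1.7.0")) // true: option rejected
	fmt.Println(isKubernetesLT("v1.7.0", "v1.7.0")) // false: 1.7.0 is allowed
}
```

Note the comparison is numeric per component, so v1.10.0 correctly sorts after v1.7.0, which a plain string compare would get wrong.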
Should this be Kubernetes version greater than?
IsKubernetesGTE *
I'm not sure I understood your question right. This policyConfigMap feature was added in Kubernetes 1.7.0, so I added a check that throws an error if the Kubernetes version is under that.
You are correct, it's getting late for me :P
lgtm

I think you missed a slash?

/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: andrewsykim

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these OWNERS files:

You can indicate your approval by writing
/test all [submit-queue is verifying that this PR is safe to merge]

Automatic merge from submit-queue.
Continued from #3546

In this version, a single option usePolicyConfigMap is added that will install scheduler.addons.k8s.io, which contains a default configmap.