Refactor cloud provider creation options #8583
Conversation
This is an alternate solution to #8531.
Refactored slightly to incorporate suggestions.

Force-pushed 651e7ab to ea046a0
Review thread on cluster-autoscaler/processors/scaledowncandidates/scale_down_candidates_processor.go (outdated, resolved)
This change helps prevent circular dependencies between the core and builder packages as we start to pass the AutoscalerOptions to the cloud provider builder functions.
This changes the options input to the cloud provider builder function so that the full autoscaler options are passed. This is proposed so that cloud providers have new options for injecting behavior into the core parts of the autoscaler.
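To make the shape of this change concrete, here is a minimal Go sketch. All type and function names below are simplified stand-ins, not the actual cluster-autoscaler API; the real structs carry many more fields.

```go
package main

import "fmt"

// AutoscalingOptions is a stand-in for the flag-level options struct.
type AutoscalingOptions struct {
	CloudProviderName string
}

// AutoscalerOptions wraps the flag-level AutoscalingOptions together with the
// rest of the runtime wiring the core prepares (processors, clients, etc.).
type AutoscalerOptions struct {
	AutoscalingOptions AutoscalingOptions
}

// CloudProvider is a minimal stand-in for the provider interface.
type CloudProvider interface {
	Name() string
}

type stubProvider struct{ name string }

func (p stubProvider) Name() string { return p.name }

// Before: a builder only saw the narrow flag-level options.
func newCloudProviderOld(opts AutoscalingOptions) CloudProvider {
	return stubProvider{name: opts.CloudProviderName}
}

// After: a builder receives the full AutoscalerOptions, so a provider can
// also reach the core wiring (e.g. to register custom processors).
func newCloudProviderNew(opts *AutoscalerOptions) CloudProvider {
	return stubProvider{name: opts.AutoscalingOptions.CloudProviderName}
}

func main() {
	opts := &AutoscalerOptions{
		AutoscalingOptions: AutoscalingOptions{CloudProviderName: "clusterapi"},
	}
	fmt.Println(newCloudProviderOld(opts.AutoscalingOptions).Name(), newCloudProviderNew(opts).Name())
}
```

Both builders produce the same provider here; the point is only the widened input type.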
Adds a utility function to help cloud providers add additional combined scale-down processors.
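A minimal sketch of what such a combined-processor helper might look like. The interface signature here is hypothetical and much narrower than the real processor interface, which takes an autoscaling context and returns errors:

```go
package main

import "fmt"

// Node is a minimal stand-in for the Kubernetes node type.
type Node struct {
	Name string
}

// ScaleDownNodeProcessor filters scale-down candidates; simplified signature.
type ScaleDownNodeProcessor interface {
	GetScaleDownCandidates(nodes []Node) []Node
}

// CombinedScaleDownNodeProcessor chains processors so each one narrows the
// candidate list produced by the previous one.
type CombinedScaleDownNodeProcessor struct {
	processors []ScaleDownNodeProcessor
}

// Register is the kind of utility this commit describes: it lets a cloud
// provider append its own processor to the combined chain.
func (c *CombinedScaleDownNodeProcessor) Register(p ScaleDownNodeProcessor) {
	c.processors = append(c.processors, p)
}

func (c *CombinedScaleDownNodeProcessor) GetScaleDownCandidates(nodes []Node) []Node {
	for _, p := range c.processors {
		nodes = p.GetScaleDownCandidates(nodes)
	}
	return nodes
}

// dropByName is a toy processor used only to demonstrate the chaining.
type dropByName struct{ name string }

func (d dropByName) GetScaleDownCandidates(nodes []Node) []Node {
	out := make([]Node, 0, len(nodes))
	for _, n := range nodes {
		if n.Name != d.name {
			out = append(out, n)
		}
	}
	return out
}

func main() {
	combined := &CombinedScaleDownNodeProcessor{}
	combined.Register(dropByName{name: "node-b"})
	fmt.Println(combined.GetScaleDownCandidates([]Node{{"node-a"}, {"node-b"}}))
}
```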
This change adds a custom scale-down node processor for the clusterapi provider to reject nodes that are undergoing an upgrade.
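The filtering logic can be sketched as below. The annotation key is purely hypothetical; the real clusterapi processor would determine upgrade state from Cluster API machine resources, not from this marker:

```go
package main

import "fmt"

// Node is a minimal stand-in for the Kubernetes node type.
type Node struct {
	Name        string
	Annotations map[string]string
}

// upgradingAnnotation is a hypothetical marker used only for this sketch.
const upgradingAnnotation = "example.io/upgrading"

// rejectUpgradingNodes removes nodes flagged as upgrading from the scale-down
// candidate list, so the autoscaler does not delete an instance that the
// upgrade is already in the process of replacing.
func rejectUpgradingNodes(nodes []Node) []Node {
	out := make([]Node, 0, len(nodes))
	for _, n := range nodes {
		if n.Annotations[upgradingAnnotation] == "true" {
			continue // still upgrading: not a valid scale-down candidate
		}
		out = append(out, n)
	}
	return out
}

func main() {
	nodes := []Node{
		{Name: "worker-1"},
		{Name: "worker-2", Annotations: map[string]string{upgradingAnnotation: "true"}},
	}
	for _, n := range rejectUpgradingNodes(nodes) {
		fmt.Println(n.Name)
	}
}
```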
This change moves the cloud provider initialization to the end of the initializeDefaultOptions function to ensure that all other options are prepared before the cloud provider is built. Because the cloud provider now receives the full AutoscalerOptions struct, all of that data must be available first.
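The ordering constraint can be sketched as follows; field names and the builder are illustrative stand-ins, not the real initializeDefaultOptions internals:

```go
package main

import "fmt"

// AutoscalerOptions is a simplified stand-in; field names are illustrative.
type AutoscalerOptions struct {
	Processors    string
	CloudProvider string
}

// buildCloudProvider stands in for the provider builder that now receives
// the full options struct and can read any field on it.
func buildCloudProvider(opts *AutoscalerOptions) string {
	return "provider(sees " + opts.Processors + ")"
}

// initializeDefaultOptions mirrors the ordering described above: every other
// field is populated first, and the cloud provider is built last so its
// builder sees a fully prepared AutoscalerOptions struct.
func initializeDefaultOptions(opts *AutoscalerOptions) {
	opts.Processors = "default-processors"
	// ... all other defaults would be set here ...
	opts.CloudProvider = buildCloudProvider(opts) // last step
}

func main() {
	opts := &AutoscalerOptions{}
	initializeDefaultOptions(opts)
	fmt.Println(opts.CloudProvider)
}
```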
This change removes the import of the gce module in favor of using the string value directly.
ea046a0
to
51a0514
Compare
Updated to revert the public scoping on the combined mixed-node scale-down processor.
/lgtm
/assign @towca
@elmiko: The following test failed.
@jackfrancis is the Azure test failure something that I introduced?
Thanks for incorporating my feedback, LGTM! Leaving the hold so that you can confirm the e2e test before submitting; feel free to unhold.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: elmiko, jackfrancis, towca.
E2E failures are not related to this change: #8681
/hold cancel
Thx @elmiko and everyone contributing to this!! Really appreciate it
/kind bug
/cherry-pick cluster-autoscaler-release-1.34
@jackfrancis: #8583 failed to apply on top of branch "cluster-autoscaler-release-1.34":
What type of PR is this?
/kind cleanup
/kind api-change
What this PR does / why we need it:
This patch series changes an argument to the NewCloudProvider function to use an AutoscalerOptions struct instead of AutoscalingOptions. This change allows cloud providers to have more control over the core functionality of the cluster autoscaler.

Specifically, this patch series also adds a method named RegisterScaleDownNodeProcessor to the AutoscalerOptions so that cloud providers can inject a custom scale-down processor.

Lastly, this change adds a custom scale-down processor to the clusterapi provider to help it avoid removing the wrong instance during scale-down operations that occur during a cluster upgrade.
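Putting the pieces together, a provider builder that receives the full options can register its processor at build time. Everything below is a hedged sketch with hypothetical names; only RegisterScaleDownNodeProcessor is named in the description above, and its real signature may differ:

```go
package main

import "fmt"

// Node is a minimal stand-in for the Kubernetes node type.
type Node struct{ Name string }

// ScaleDownNodeProcessor filters scale-down candidates; simplified signature.
type ScaleDownNodeProcessor interface {
	GetScaleDownCandidates(nodes []Node) []Node
}

// AutoscalerOptions sketches the struct described above, including the new
// RegisterScaleDownNodeProcessor hook.
type AutoscalerOptions struct {
	scaleDownProcessors []ScaleDownNodeProcessor
}

// RegisterScaleDownNodeProcessor lets a cloud provider inject a custom
// scale-down processor into the core.
func (o *AutoscalerOptions) RegisterScaleDownNodeProcessor(p ScaleDownNodeProcessor) {
	o.scaleDownProcessors = append(o.scaleDownProcessors, p)
}

// upgradeAwareProcessor is a toy stand-in for the clusterapi processor.
type upgradeAwareProcessor struct{}

func (upgradeAwareProcessor) GetScaleDownCandidates(nodes []Node) []Node {
	return nodes // the real logic would filter out upgrading nodes
}

// newClusterAPIProvider sketches a builder that, because it now receives the
// full AutoscalerOptions, registers its processor during provider creation.
func newClusterAPIProvider(opts *AutoscalerOptions) string {
	opts.RegisterScaleDownNodeProcessor(upgradeAwareProcessor{})
	return "clusterapi"
}

func main() {
	opts := &AutoscalerOptions{}
	name := newClusterAPIProvider(opts)
	fmt.Println(name, len(opts.scaleDownProcessors))
}
```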
Which issue(s) this PR fixes:
Fixes #8494
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: