Add capacity buffers scale up events #8621
Conversation
Force-pushed from 1b2a75d to 7dee2b6
```diff
 // relevant events for pods depending on their post scale-up status.
 func (p *EventingScaleUpStatusProcessor) Process(context *context.AutoscalingContext, status *ScaleUpStatus) {
-	consideredNodeGroupsMap := nodeGroupListToMapById(status.ConsideredNodeGroups)
+	consideredNodeGroupsMap := NodeGroupListToMapById(status.ConsideredNodeGroups)
```
Merge conflict pro tip here:
(your change here didn't actually conflict with the `autoscalingCtx *ca_context.AutoscalingContext` change, but FYI)
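For context, a minimal sketch of what the now-exported helper plausibly looks like, assuming the autoscaler's `cloudprovider.NodeGroup` interface; this is an illustration, not the PR's exact code:

```go
import "k8s.io/autoscaler/cluster-autoscaler/cloudprovider"

// NodeGroupListToMapById indexes the considered node groups by ID so that
// scale-up results can be attributed back to individual pods in O(1).
func NodeGroupListToMapById(nodeGroups []cloudprovider.NodeGroup) map[string]cloudprovider.NodeGroup {
	result := make(map[string]cloudprovider.NodeGroup, len(nodeGroups))
	for _, nodeGroup := range nodeGroups {
		result[nodeGroup.Id()] = nodeGroup
	}
	return result
}
```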
```go
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
v1alpha1 "k8s.io/autoscaler/cluster-autoscaler/apis/capacitybuffer/autoscaling.x-k8s.io/v1alpha1"
"k8s.io/autoscaler/cluster-autoscaler/context"
"k8s.io/autoscaler/cluster-autoscaler/processors/nodegroupset"
```
Merge conflict resolution pro tip: git doesn't know how to resolve the surrounding import additions on top of the change to the `"k8s.io/autoscaler/cluster-autoscaler/context"` import alias.
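A hedged sketch of the manual resolution: keep the aliased import from the other change alongside the imports this PR adds (only the `ca_context` alias comes from the comment above; the grouping is illustrative):

```go
import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	v1alpha1 "k8s.io/autoscaler/cluster-autoscaler/apis/capacitybuffer/autoscaling.x-k8s.io/v1alpha1"
	// The alias below replaced the plain context import on the other branch;
	// git cannot auto-merge new imports added next to a line that the other
	// branch rewrote, so this hunk has to be resolved by hand.
	ca_context "k8s.io/autoscaler/cluster-autoscaler/context"
	"k8s.io/autoscaler/cluster-autoscaler/processors/nodegroupset"
)
```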
Force-pushed from 7dee2b6 to 469e2c9
```go
	},
	PodsAwaitEvaluation: []*apiv1.Pod{},
},
expectedTriggeredScaleUp: 2,
```
WDYT about changing this from int to a slice of error strings so that it is possible to validate the message content as well?
I think comparing the whole event string wouldn't add much value for testing this specific processor, and it might make it a bit overcomplicated and harder to add other tests. In particular, it would add the complexity of comparing/validating the reasonMessages list, which is already covered by the EventingScaleUpStatusProcessor tests.
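For concreteness, a sketch of the two assertion styles being weighed; only `expectedTriggeredScaleUp` appears in the PR, the message-based field is hypothetical:

```go
// Hypothetical test-case shape illustrating the trade-off discussed above.
type testCase struct {
	name string
	// Current approach: assert only how many TriggeredScaleUp events fired.
	expectedTriggeredScaleUp int
	// Suggested alternative: assert the event messages as well. Declined here
	// because message content is already covered by the
	// EventingScaleUpStatusProcessor tests.
	expectedTriggeredScaleUpEvents []string
}
```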
Force-pushed from 469e2c9 to becf9cf
/lgtm
/approve
/hold
Just a tiny comment, feel free to cancel the hold and merge as is if you disagree.
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: abdelrahman882, x13n

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
Force-pushed from becf9cf to d6a0b77
/lgtm
What type of PR is this?
/kind feature
What this PR does / why we need it:
Add events to capacity buffer objects when a buffer causes a scale-up or fails to do so.
Which issue(s) this PR fixes:
None
Special notes for your reviewer:
This change mainly caches the mapping from fake pods to buffer objects in step 1; then in step 2 we use that mapping to look up the buffer object for each fake pod and create scale-up events for the buffers.
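A minimal sketch of that two-step flow, assuming the buffer kind is `v1alpha1.CapacityBuffer` and using a plain pod-UID map; names and signatures are illustrative, not the PR's actual code:

```go
import (
	apiv1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
	v1alpha1 "k8s.io/autoscaler/cluster-autoscaler/apis/capacitybuffer/autoscaling.x-k8s.io/v1alpha1"
	"k8s.io/client-go/tools/record"
)

// Step 1: while injecting fake pods for buffers, remember which buffer
// produced each pod.
func cacheFakePodsToBuffers(buffers []*v1alpha1.CapacityBuffer, fakePodsFor func(*v1alpha1.CapacityBuffer) []*apiv1.Pod) map[types.UID]*v1alpha1.CapacityBuffer {
	podsToBuffers := make(map[types.UID]*v1alpha1.CapacityBuffer)
	for _, buffer := range buffers {
		for _, pod := range fakePodsFor(buffer) {
			podsToBuffers[pod.UID] = buffer
		}
	}
	return podsToBuffers
}

// Step 2: after the scale-up attempt, translate each fake pod's outcome back
// into an event on the buffer that produced it.
func emitBufferEvents(recorder record.EventRecorder, podsToBuffers map[types.UID]*v1alpha1.CapacityBuffer, scaledUpPods, failedPods []*apiv1.Pod) {
	for _, pod := range scaledUpPods {
		if buffer, ok := podsToBuffers[pod.UID]; ok {
			recorder.Event(buffer, apiv1.EventTypeNormal, "TriggeredScaleUp", "capacity buffer triggered a scale-up")
		}
	}
	for _, pod := range failedPods {
		if buffer, ok := podsToBuffers[pod.UID]; ok {
			recorder.Event(buffer, apiv1.EventTypeWarning, "FailedScaleUp", "capacity buffer failed to trigger a scale-up")
		}
	}
}
```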
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:
NONE