Service initial scale implementation for initialScale 0 use case #8613
Changes from all commits
```diff
@@ -44,6 +44,8 @@ import (
 	corev1listers "k8s.io/client-go/listers/core/v1"
 )
 
+const noPrivateServiceName = "No Private Service Name"
+
 // podCounts keeps record of various numbers of pods
 // for each revision.
 type podCounts struct {
@@ -85,7 +87,7 @@ func (c *Reconciler) ReconcileKind(ctx context.Context, pa *pav1alpha1.PodAutosc
 	if _, err = c.ReconcileSKS(ctx, pa, nv1alpha1.SKSOperationModeServe, 0 /*numActivators == all*/); err != nil {
 		return fmt.Errorf("error reconciling SKS: %w", err)
 	}
-	pa.Status.MarkSKSNotReady("No Private Service Name") // In both cases this is true.
+	pa.Status.MarkSKSNotReady(noPrivateServiceName) // In both cases this is true.
 	return computeStatus(ctx, pa, podCounts{want: scaleUnknown}, logger)
 }
```
```diff
@@ -219,24 +221,30 @@ func reportMetrics(pa *pav1alpha1.PodAutoscaler, pc podCounts) error {
 }
 
 // computeActiveCondition updates the status of a PA given the current scale (got), desired scale (want)
-// and the current status, as per the following table:
+// active threshold (min), and the current status, as per the following table:
 //
-// | Want | Got    | Status     | New status |
-// |    0 | <any>  | <any>      | inactive   |
-// |   >0 | < min  | <any>      | activating |
-// |   >0 | >= min | <any>      | active     |
-// |   -1 | < min  | inactive   | inactive   |
-// |   -1 | < min  | activating | activating |
-// |   -1 | < min  | active     | activating |
-// |   -1 | >= min | inactive   | inactive   |
-// |   -1 | >= min | activating | active     |
-// |   -1 | >= min | active     | active     |
+// | Want | Got    | min   | Status     | New status |
+// |    0 | <any>  | <any> | <any>      | inactive   |
+// |   >0 | < min  | <any> | <any>      | activating |
+// |   >0 | >= min | <any> | <any>      | active     |
+// |   -1 | < min  | <any> | inactive   | inactive   |
+// |   -1 | < min  | <any> | activating | activating |
+// |   -1 | < min  | <any> | active     | activating |
+// |   -1 | >= min | <any> | inactive   | inactive   |
+// |   -1 | >= min | 0     | activating | inactive   |
+// |   -1 | >= min | 0     | active     | inactive   | <-- this case technically is impossible.
+// |   -1 | >= min | >0    | activating | active     |
+// |   -1 | >= min | >0    | active     | active     |
 func computeActiveCondition(ctx context.Context, pa *pav1alpha1.PodAutoscaler, pc podCounts) {
 	minReady := activeThreshold(ctx, pa)
+	if pc.ready >= minReady {
+		pa.Status.MarkScaleTargetInitialized()
+	}
 
 	switch {
-	case pc.want == 0:
-		if pa.Status.IsActivating() {
+	// Need to check for minReady = 0 because in the initialScale 0 case, pc.want will be -1.
+	case pc.want == 0 || minReady == 0:
+		if pa.Status.IsActivating() && minReady > 0 {
```
**Contributor:** do we need to do

**Author:** We aren't overriding -1 with 0, so we are still hitting the `pc.want = -1, minReady = 0` case.

**Contributor:** hm, I had that state somewhere in my head, but I guess we got rid of that :-)

**Author:** yeah, originally we were overriding -1 with 0, but with Markus' simplification we were able to get rid of it.
```diff
 			// We only ever scale to zero while activating if we fail to activate within the progress deadline.
 			pa.Status.MarkInactive("TimedOut", "The target could not be activated.")
 		} else {
```
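The decision table in the diff above can be approximated as a pure function. This is an illustrative sketch only, with hypothetical names: the real reconciler mutates `pa.Status` via `MarkActive`/`MarkActivating`/`MarkInactive` rather than returning a string.

```go
package main

import "fmt"

// newStatus is a simplified, side-effect-free rendering of the decision table
// in computeActiveCondition: want is the desired scale (-1 = unknown), got is
// the number of ready pods, min is the active threshold, and cur is the
// current status. The case order mirrors the patched switch: want == 0 or
// min == 0 (the initialScale 0 case) always wins.
func newStatus(want, got, min int, cur string) string {
	switch {
	case want == 0 || min == 0:
		return "inactive"
	case got < min:
		// With unknown scale, an inactive PA stays inactive; otherwise activating.
		if want == -1 && cur == "inactive" {
			return "inactive"
		}
		return "activating"
	default: // got >= min
		if want == -1 && cur == "inactive" {
			return "inactive"
		}
		return "active"
	}
}

func main() {
	fmt.Println(newStatus(0, 3, 1, "active"))      // want 0 -> inactive
	fmt.Println(newStatus(-1, 2, 0, "activating")) // initialScale 0 -> inactive
	fmt.Println(newStatus(-1, 2, 1, "activating")) // got >= min -> active
}
```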
```diff
@@ -247,12 +255,19 @@ func computeActiveCondition(ctx context.Context, pa *pav1alpha1.PodAutoscaler, p
 		if pc.want > 0 || !pa.Status.IsInactive() {
 			pa.Status.MarkActivating(
 				"Queued", "Requests to the target are being buffered as resources are provisioned.")
+		} else {
+			// This is for the initialScale 0 case. In the first iteration, minReady is 0,
+			// but for the following iterations, minReady is 1. pc.want will continue being
+			// -1 until we start receiving metrics, so we will end up here.
+			// Even though PA is already been marked as inactive in the first iteration, we
+			// still need to set it again. Otherwise reconciliation will fail with NewObservedGenFailure
+			// because we cannot go through one iteration of reconciliation without setting
+			// some status.
+			pa.Status.MarkInactive("NoTraffic", "The target is not receiving traffic.")
```
**Comment on lines +258 to +266**

**Contributor:** The way I read it, this is redundant.

**Author:** Correct, we are marking Inactive with Inactive again here. This is still needed because otherwise we will get the NewObservedGenFailure during reconciliation post-processing, because we cannot go through one iteration of reconciliation without setting some sort of status.

**Contributor:** Hm, so just updating the status with the same value updates ObsGen?

**Author:** @whaught I'm not sure if my understanding is correct. I think it's because the PA has been updated during the reconciliation, therefore there's a difference between ObservedGeneration and Generation, but the status is not updated, which causes the failure: https://github.com/knative/pkg/blob/deb6b33d2a6c114f596f52630e85c475bf43abce/reconciler/reconcile_common.go#L50-L53

**Contributor:** When Spec changes, we reset Ready to Unknown with a dummy message that the reconciler didn't set anything for a new spec before calling ReconcileKind. The reconciler is expected to set something upon observation of a new generation (or is left with the default message as a warning).

**Contributor:** I wonder what the spec change in PA is 🤔
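The failure mode discussed in this thread can be illustrated with a paraphrased sketch of the check linked above. All types and names here are hypothetical simplifications; the real logic lives in knative/pkg's `reconcile_common.go`.

```go
package main

import "fmt"

// Paraphrased sketch of the ObservedGeneration check discussed above: if a
// reconcile pass observed a new generation but never touched any status
// condition, the framework flags it. This is why MarkInactive must be called
// again even when the status value is unchanged.
type status struct {
	observedGeneration int64
	conditionsSet      bool // true if the reconciler set any condition this pass
}

func checkReconcile(generation int64, st status) error {
	if st.observedGeneration != generation && !st.conditionsSet {
		return fmt.Errorf("NewObservedGenFailure: generation %d observed without status update", generation)
	}
	return nil
}

func main() {
	// Spec changed (generation bumped) but reconciler set no status: failure.
	fmt.Println(checkReconcile(2, status{observedGeneration: 1, conditionsSet: false}))
	// Reconciler re-set the same Inactive condition: passes.
	fmt.Println(checkReconcile(2, status{observedGeneration: 1, conditionsSet: true}))
}
```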
```diff
 		}
 
 	case pc.ready >= minReady:
 		if pc.want > 0 || !pa.Status.IsInactive() {
-			pa.Status.MarkScaleTargetInitialized()
 			// SKS should already be active.
 			pa.Status.MarkActive()
 		}
 	}
```
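The multi-iteration initialScale 0 flow described in the new comment block can be sketched as a toy simulation. The `minReady`/`want` values follow the comment in the diff; the function and its return strings are illustrative, not Knative API.

```go
package main

import "fmt"

// Toy simulation of the initialScale 0 flow: on the first reconcile pass the
// active threshold (minReady) is 0, so the PA is marked Inactive immediately;
// on later passes minReady becomes 1 while the desired scale (want) stays -1
// until metrics arrive, so the reconciler keeps re-marking Inactive.
func markFor(pass, want, ready int) string {
	minReady := 1
	if pass == 1 {
		minReady = 0 // initialScale 0: nothing needs to become ready
	}
	switch {
	case want == 0 || minReady == 0:
		return "Inactive (NoTraffic)"
	case ready >= minReady:
		return "Active"
	case want > 0:
		return "Activating (Queued)"
	default: // want == -1, ready < minReady, previously inactive
		return "Inactive (NoTraffic)" // re-set so ObservedGeneration can advance
	}
}

func main() {
	for pass := 1; pass <= 3; pass++ {
		fmt.Printf("pass %d: %s\n", pass, markFor(pass, -1, 0))
	}
}
```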
**Contributor:** I think this isn't actually needed after the latest refactor, but I guess it's a reasonable method to have anyway 🤷🏼