Add divisor to resource_field_ref #1063
Conversation
```diff
@@ -436,6 +436,10 @@ func waitForDeploymentReplicasFunc(ctx context.Context, conn *kubernetes.Clients
 		return resource.RetryableError(fmt.Errorf("Waiting for rollout to finish: %d old replicas are pending termination...", dply.Status.Replicas-dply.Status.UpdatedReplicas))
 	}
+
+	if dply.Status.Replicas > dply.Status.ReadyReplicas {
+		return resource.RetryableError(fmt.Errorf("Waiting for rollout to finish: %d replicas wanted; %d replicas Ready", dply.Status.Replicas, dply.Status.ReadyReplicas))
```
This piece of logic was originally inside an acceptance test, and it would fail periodically because the deployment wasn't finished rolling out. So I added it into `wait_for_rollout`, and now it waits until the deployment says all pods are ready.
```hcl
    level = "s0:c123,c789"
  }
  run_as_group = 200
  run_as_user  = 201
}
```
This test had been failing for a long time, due to some permissions issues that I think were related to the container image. This change gives us a working container and still tests the `run_as_group`.
```hcl
value_from {
  resource_field_ref {
    container_name = "containername"
    resource       = "requests.memory"
  }
}
```
I added this section to the regression test to ensure that it wasn't going to overwrite existing deployments that use `resource_field_ref`. (`divisor` is intentionally omitted, because it doesn't exist in the released version of the Kubernetes provider, so I can't test it here yet.)
Looks good but needs to be rebased.
Add `divisor` field to resource_field_ref. Add acceptance tests for resource_field_ref. This is a cherry-pick of hashicorp#538 Co-authored-by: Joshua Hoblitt <[email protected]>
Force-pushed from d4d70c1 to 5f9a81c
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!
Description
This PR adds a field that was missing from our implementation of `resourceFieldRef` in the `Container` spec. This is a cherry-pick of work done by Joshua Hoblitt in #538.
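For illustration, a configuration like the following would exercise the new field once it ships (the resource and container names here are hypothetical; `divisor` follows the Kubernetes `resourceFieldRef` semantics of dividing the exposed quantity before it is reported):

```hcl
env {
  name = "MEMORY_REQUEST_MB"

  value_from {
    resource_field_ref {
      container_name = "containername"
      resource       = "requests.memory"

      # divisor scales the exposed value, e.g. report the
      # memory request in mebibytes instead of bytes
      divisor = "1Mi"
    }
  }
}
```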
Acceptance tests
Output from acceptance testing:
Release Note
Release note for CHANGELOG:
References
#538
Community Note