
Add divisor to resource_field_ref #1063

Merged (3 commits) on Nov 18, 2020

Conversation

@dak1n1 (Contributor) commented Nov 12, 2020

Description

This PR adds a field that was missing from our implementation of `resourceFieldRef` in the Container spec.
This is a cherry-pick of work done by Joshua Hoblitt in #538.
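For context, a sketch of how the new field could appear in a container's `env` block (the variable name, container name, and divisor value here are illustrative, not taken from this PR):

```hcl
env {
  name = "MEM_REQUEST"

  value_from {
    resource_field_ref {
      container_name = "containername"
      resource       = "requests.memory"
      # New in this PR: divide the raw quantity before exposing it,
      # e.g. report the memory request in mebibytes.
      divisor = "1Mi"
    }
  }
}
```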

Acceptance tests

  • Have you added an acceptance test for the functionality being added?
  • Have you run the acceptance tests on this branch?

Output from acceptance testing:

=== RUN   TestAccKubernetesDeployment_basic
--- PASS: TestAccKubernetesDeployment_basic (28.53s)
=== RUN   TestAccKubernetesDeployment_regression
--- PASS: TestAccKubernetesDeployment_regression (85.69s)
=== RUN   TestAccKubernetesDeployment_with_resource_field_selector
--- PASS: TestAccKubernetesDeployment_with_resource_field_selector (16.01s)

Release Note

Release note for CHANGELOG:

* Add divisor to resource_field_ref.
* Update Deployment `wait_for` to wait for Ready pods.

References

#538

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

@@ -436,6 +436,10 @@ func waitForDeploymentReplicasFunc(ctx context.Context, conn *kubernetes.Clients
	return resource.RetryableError(fmt.Errorf("Waiting for rollout to finish: %d old replicas are pending termination...", dply.Status.Replicas-dply.Status.UpdatedReplicas))
}

if dply.Status.Replicas > dply.Status.ReadyReplicas {
	return resource.RetryableError(fmt.Errorf("Waiting for rollout to finish: %d replicas wanted; %d replicas Ready", dply.Status.Replicas, dply.Status.ReadyReplicas))
}
@dak1n1 (Contributor Author) commented:

This piece of logic was originally inside an acceptance test, and it would fail periodically because the deployment wasn't finished rolling out. So I moved it into `wait_for_rollout`, which now waits until the Deployment reports that all pods are Ready.
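In configuration terms, this behavior sits behind the resource's `wait_for_rollout` flag; a minimal sketch under that assumption (resource name illustrative, body elided):

```hcl
resource "kubernetes_deployment" "example" {
  # Default is true: `terraform apply` blocks until the Deployment's
  # ReadyReplicas count matches Replicas, per the retry logic above.
  wait_for_rollout = true

  # metadata and spec omitted for brevity
}
```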

    level = "s0:c123,c789"
  }

  run_as_group = 200
  run_as_user  = 201
}
@dak1n1 (Contributor Author) commented:

This test had been failing for a long time due to permissions issues that I think were related to the container image. This change gives us a working container and still tests `run_as_group`.

value_from {
  resource_field_ref {
    container_name = "containername"
    resource       = "requests.memory"
@dak1n1 (Contributor Author) commented Nov 12, 2020:

I added this section to the regression test to ensure that it wasn't going to overwrite existing deployments that use `resource_field_ref`. (`divisor` is intentionally omitted because it doesn't exist in the released version of the Kubernetes provider, so I can't test it here yet.)

@jrhouston (Collaborator) commented:
Looks good but needs to be rebased.

dak1n1 and others added 3 commits November 18, 2020 15:01
Add `divisor` field to resource_field_ref.
Add acceptance tests for resource_field_ref.

This is a cherry-pick of
hashicorp#538

Co-authored-by: Joshua Hoblitt <[email protected]>
Fix permissions errors in test
TestAccKubernetesDeployment_with_container_security_context_run_as_group
ghost commented Dec 19, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

@ghost ghost locked as resolved and limited conversation to collaborators Dec 19, 2020