
Add multi container changes to deploy PodSpec#7801

Merged
knative-prow-robot merged 15 commits into knative:master from savitaashture:multi-deploy
Jun 3, 2020

Conversation

@savitaashture
Contributor

Fixes:
Parts of #5822 #3384

Proposed Changes

  • Add changes to PodSpec in order to support multiple containers

Release Note

@googlebot googlebot added the cla: yes Indicates the PR's author has signed the CLA. label Apr 30, 2020
@knative-prow-robot knative-prow-robot added size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. area/API API objects and controllers area/test-and-release It flags unit/e2e/conformance/perf test issues for product features labels Apr 30, 2020
Contributor

@knative-prow-robot knative-prow-robot left a comment

@savitaashture: 0 warnings.

Details

In response to this:

Fixes:
Parts of #5822 #3384

Proposed Changes

  • Add changes to PodSpec in order to support multiple containers

Release Note

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@savitaashture
Contributor Author

/retest

}

digests := make(chan digestData, len(rev.Spec.Containers))
d := digestData{}
Contributor

it seems what you really want here is:

  1. an error variable (you don't really need the whole digest object)
  2. sync.Once to ensure the error is set only once. Right now the last error wins, but any of them would do.

digestGrp.Wait()
close(digests)
// If there is a digest error no need to create a containerStatus object
if d.digestError != nil {
Contributor

This is a little weird indeed. One would expect this to be the use case of the errgroup, but I get that you need more state than just the error.

What about this:

  1. Remove the redundant error check here.
  2. Build ContainerStatuses below outside of the struct.
  3. As soon as the loop below hits an error, exit early.
  4. If the for loop below passes just fine, assign the ContainerStatuses to the struct.

Pseudo-code:

var (
  servingDigest string
  containerStatuses = make([]v1.ContainerStatus, 0, len(digests))
)
for v := range digests {
  if v.digestError != nil {
    rev.Status.MarkContainerHealthyFalse(v1.ReasonContainerMissing,
      v1.RevisionContainerMissingMessage(
        v.image, v.digestError.Error()))
    return v.digestError
  }
  if v.isServingContainer {
    servingDigest = v.digestValue
  }
  containerStatuses = append(containerStatuses, v1.ContainerStatus{
    Name:        v.containerName,
    ImageDigest: v.digestValue,
  })
}
// If we reached here, no errors happened.
rev.Status.DeprecatedImageDigest = servingDigest
rev.Status.ContainerStatuses = containerStatuses

That should allow the same semantics without the extra variable and mutex above.

Member

yeah also errgroup.Wait() returns an error

Contributor Author

@markusthoemmes Thank you, updated the PR.

@dprotaso the errgroup returns nil, so that's why I'm not verifying the error from errgroup.Wait().

Contributor

@markusthoemmes markusthoemmes left a comment

}{{
name: "user-defined user port, queue proxy have PORT env",
rev: revision("bar", "foo",
withPodSpec([]corev1.Container{{
Member

These fixtures have become hard to understand with the addition of withPodSpec, since further down there's another revisionOption that modifies the container you've built here.

}
}

func withPodSpec(containers []corev1.Container) revisionOption {
Member

Since this doesn't really take any podspec properties, maybe we should call this withContainers.
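Under the functional-option style these test fixtures use, the rename is mechanical; a simplified, self-contained sketch (Revision and revisionOption reduced here for illustration, not the actual test types):

```go
package main

import "fmt"

// Simplified stand-ins for the test fixtures' types.
type Container struct{ Name string }
type Revision struct{ Containers []Container }

type revisionOption func(*Revision)

// withContainers sets the revision's container list, mirroring the
// suggested rename of withPodSpec.
func withContainers(containers []Container) revisionOption {
	return func(r *Revision) {
		r.Containers = containers
	}
}

// revision builds a Revision and applies each option in order.
func revision(opts ...revisionOption) *Revision {
	r := &Revision{}
	for _, opt := range opts {
		opt(r)
	}
	return r
}

func main() {
	r := revision(withContainers([]Container{
		{Name: "user-container"},
		{Name: "sidecar"},
	}))
	fmt.Println(len(r.Containers)) // prints 2
}
```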

} else {
container = makeContainer(rev.Spec.PodSpec.Containers[i], rev)
}
updateImage(rev, &container)
Contributor

After #7866 is merged, we won't need to iterate through ContainerStatuses anymore, since the same container should have the same index as rev.Spec.PodSpec.Containers.
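With statuses index-aligned to the spec's containers, the lookup becomes positional instead of a search; a simplified sketch (types reduced for illustration, not the PR's code):

```go
package main

import "fmt"

// ContainerStatus is a reduced stand-in for the revision status entry.
type ContainerStatus struct {
	Name        string
	ImageDigest string
}

// digestFor assumes statuses[i] describes the i-th container in
// rev.Spec.PodSpec.Containers, as guaranteed after the index-alignment
// change, so no iteration over names is needed.
func digestFor(statuses []ContainerStatus, i int) string {
	return statuses[i].ImageDigest
}

func main() {
	statuses := []ContainerStatus{
		{Name: "user-container", ImageDigest: "sha256:aaa"},
		{Name: "sidecar", ImageDigest: "sha256:bbb"},
	}
	fmt.Println(digestFor(statuses, 1)) // prints sha256:bbb
}
```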

Contributor

Should prolly ditch the entire function then.

Contributor Author

Updated the changes based on PR #7866.

}
)

defaultRevision = &v1.Revision{
Contributor

Is there a reason why we are removing the serving container from the default revision and instead adding them in the test cases?

Contributor Author

@savitaashture savitaashture May 9, 2020

The existing defaultRevision supports a single container, and adding test cases for multiple containers would require another function differing only in the podSpec.

So, to avoid code duplication, I moved the container section into another function and called it in the test cases.

Member

An option could be to create a RevisionOption func to add the extra container to the existing revision.

i.e.

func WithSidecar(c v1.Container) RevisionOption {
  return func(r *v1.Revision) {
    r.Spec.Containers = append(r.Spec.Containers, c)
    // potential validation
  }
}
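A runnable version of that suggestion might look like this (types simplified for illustration; note the pointer receiver, so the append is visible to the caller):

```go
package main

import "fmt"

// Simplified stand-ins for the real API types.
type Container struct{ Name string }
type Revision struct{ Containers []Container }

type RevisionOption func(*Revision)

// WithSidecar appends an extra container to the revision, leaving the
// existing serving container in place.
func WithSidecar(c Container) RevisionOption {
	return func(r *Revision) {
		r.Containers = append(r.Containers, c)
		// potential validation could go here
	}
}

func main() {
	r := &Revision{Containers: []Container{{Name: "user-container"}}}
	WithSidecar(Container{Name: "sidecar"})(r)
	fmt.Println(len(r.Containers)) // prints 2
}
```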

Contributor Author

Added func withContainers(containers []corev1.Container) RevisionOption, as it covers containers for both the serving and non-serving cases.

Contributor

@markusthoemmes markusthoemmes left a comment

The tests are quite beasty to review, sorry for the delay 🙈

Member

@dprotaso dprotaso left a comment

looks great - one last thing

@savitaashture
Contributor Author

/test pull-knative-serving-upgrade-tests

@markusthoemmes
Contributor

@savitaashture this needs a rebase.

@knative-prow-robot knative-prow-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jun 3, 2020
@knative-prow-robot knative-prow-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jun 3, 2020
@knative-metrics-robot

The following is the coverage report on the affected files.
Say /test pull-knative-serving-go-coverage to re-run this coverage report

File                                             Old Coverage  New Coverage  Delta
pkg/apis/serving/v1alpha1/revision_lifecycle.go  89.6%         90.0%         0.4
pkg/webhook/podspec_dryrun.go                    91.7%         91.3%         -0.4

@dprotaso
Member

dprotaso commented Jun 3, 2020

/lgtm
/approve

@knative-prow-robot knative-prow-robot added the lgtm Indicates that a PR is ready to be merged. label Jun 3, 2020
@knative-prow-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: dprotaso, savitaashture

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Details: Needs approval from an approver in each of these files.

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@knative-prow-robot knative-prow-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jun 3, 2020
@vagababov
Contributor

🎉

@dprotaso
Member

dprotaso commented Jun 3, 2020

thanks for all the work @savitaashture !

@knative-prow-robot knative-prow-robot merged commit d74d622 into knative:master Jun 3, 2020

Labels

approved Indicates a PR has been approved by an approver from all required OWNERS files.
area/API API objects and controllers
area/test-and-release It flags unit/e2e/conformance/perf test issues for product features
cla: yes Indicates the PR's author has signed the CLA.
lgtm Indicates that a PR is ready to be merged.
size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files.

Projects

None yet

Development

Successfully merging this pull request may close these issues.

10 participants