Add multi container changes to deploy PodSpec #7801
knative-prow-robot merged 15 commits into knative:master
Conversation
knative-prow-robot
left a comment
@savitaashture: 0 warnings.
In response to this:
Proposed Changes
- Add changes to PodSpec in order to support multiple containers
Release Note
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
ec63e8b to 30d6c65
/retest
pkg/reconciler/revision/revision.go (Outdated)

	}

	digests := make(chan digestData, len(rev.Spec.Containers))
	d := digestData{}
it seems what you really want here is:
- an error variable (you don't really need the whole digest object)
- sync.Once to ensure the error is only set once. Right now the last error wins, but any of them would do.
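A sketch of the suggested pattern, using only the standard library. The names `resolve`, `resolveAll`, and the fake image list are illustrative assumptions, not the actual reconciler code:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// resolve is a stand-in for digest resolution; it fails for one image.
func resolve(img string) error {
	if img == "bad" {
		return errors.New("digest lookup failed for " + img)
	}
	return nil
}

// resolveAll records only the first error seen across goroutines.
// sync.Once guarantees firstErr is written exactly once, so concurrent
// failures don't race, and "any error wins" rather than "the last one".
func resolveAll(images []string) error {
	var (
		wg       sync.WaitGroup
		once     sync.Once
		firstErr error
	)
	for _, img := range images {
		wg.Add(1)
		go func(img string) {
			defer wg.Done()
			if err := resolve(img); err != nil {
				once.Do(func() { firstErr = err })
			}
		}(img)
	}
	wg.Wait()
	return firstErr
}

func main() {
	fmt.Println(resolveAll([]string{"good", "bad", "good"}) != nil) // true: one failure surfaced
	fmt.Println(resolveAll([]string{"good", "good"}) == nil)        // true: no error
}
```

This keeps only an error variable instead of carrying a whole digest object, which is exactly the reviewer's point.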
pkg/reconciler/revision/revision.go (Outdated)

	digestGrp.Wait()
	close(digests)
	// If there is a digest error no need to create a containerStatus object
	if d.digestError != nil {
This is a little weird indeed. One would expect this to be the use case for the errgroup, but I get that you need more state than just the error.
What about this:
- Remove the redundant error check here.
- Build ContainerStatuses below outside of the struct.
- As soon as the loop below hits an error, exit early.
- If the for loop below passes just fine, assign the ContainerStatuses to the struct.
Pseudo-code:

var (
	servingDigest     string
	containerStatuses = make([]v1.ContainerStatus, 0, len(rev.Spec.Containers))
)
for v := range digests {
	if v.digestError != nil {
		rev.Status.MarkContainerHealthyFalse(v1.ReasonContainerMissing,
			v1.RevisionContainerMissingMessage(
				v.image, v.digestError.Error()))
		return v.digestError
	}
	if v.isServingContainer {
		servingDigest = v.digestValue
	}
	containerStatuses = append(containerStatuses, v1.ContainerStatus{
		Name:        v.containerName,
		ImageDigest: v.digestValue,
	})
}
// If we reached here, no errors happened.
rev.Status.DeprecatedImageDigest = servingDigest
rev.Status.ContainerStatuses = containerStatuses

That should allow the same semantics without the extra variable and mutex above.
yeah also errgroup.Wait() returns an error
@markusthoemmes Thank you, updated the PR.
@dprotaso the errgroup goroutines always return nil, which is why the error from errgroup.Wait() isn't checked.
markusthoemmes
left a comment
/assign @markusthoemmes @dprotaso
	}{{
		name: "user-defined user port, queue proxy have PORT env",
		rev: revision("bar", "foo",
			withPodSpec([]corev1.Container{{
These fixtures have become hard to understand with the addition of withPodSpec, since further down there's another revisionOption that modifies the container you've built here.
	}
	}

	func withPodSpec(containers []corev1.Container) revisionOption {
Since this doesn't really take any other PodSpec properties, maybe we should call it withContainers.
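What the suggested rename might look like in practice. This is a minimal self-contained sketch: Container, Revision, and revisionOption here are simplified stand-ins for the corev1/serving types, not the real ones:

```go
package main

import "fmt"

// Simplified stand-ins for the corev1 / serving v1 types.
type Container struct{ Name, Image string }
type PodSpec struct{ Containers []Container }
type RevisionSpec struct{ PodSpec }
type Revision struct{ Spec RevisionSpec }

type revisionOption func(*Revision)

// withContainers sets the revision's containers outright; the name
// reflects that it only touches Containers, not other PodSpec fields.
func withContainers(containers []Container) revisionOption {
	return func(r *Revision) {
		r.Spec.Containers = containers
	}
}

// revision builds a test fixture by applying the given options.
func revision(opts ...revisionOption) *Revision {
	r := &Revision{}
	for _, o := range opts {
		o(r)
	}
	return r
}

func main() {
	r := revision(withContainers([]Container{
		{Name: "user-container", Image: "busybox"},
		{Name: "sidecar", Image: "envoy"},
	}))
	fmt.Println(len(r.Spec.Containers)) // 2
}
```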
d78568c to fcf2444
	} else {
		container = makeContainer(rev.Spec.PodSpec.Containers[i], rev)
	}
	updateImage(rev, &container)
After #7866 is merged, we won't need to iterate through ContainerStatuses anymore, since the same container should have the same index in rev.Spec.PodSpec.Containers.
Should prolly ditch the entire function then.
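The index-based update the comment describes could be sketched like this. Types and names are illustrative stand-ins, assuming (per the comment) that statuses[i] lines up with containers[i] once #7866 lands:

```go
package main

import "fmt"

// Illustrative stand-ins; not the actual knative types.
type Container struct{ Name, Image string }
type ContainerStatus struct{ Name, ImageDigest string }

// updateImagesByIndex replaces each container image with its resolved
// digest, relying on statuses sharing the containers' ordering, so no
// name-based lookup through ContainerStatuses is needed.
func updateImagesByIndex(containers []Container, statuses []ContainerStatus) {
	for i := range containers {
		if i < len(statuses) && statuses[i].ImageDigest != "" {
			containers[i].Image = statuses[i].ImageDigest
		}
	}
}

func main() {
	cs := []Container{{Name: "user", Image: "busybox"}, {Name: "sidecar", Image: "envoy"}}
	st := []ContainerStatus{
		{Name: "user", ImageDigest: "busybox@sha256:abc"},
		{Name: "sidecar", ImageDigest: "envoy@sha256:def"},
	}
	updateImagesByIndex(cs, st)
	fmt.Println(cs[0].Image) // busybox@sha256:abc
}
```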
	}
	)

	defaultRevision = &v1.Revision{
Is there a reason why we are removing the serving container from the default revision and instead adding them in the test cases?
The existing defaultRevision supports a single container, and adding test cases for multiple containers would have required another function differing only in the PodSpec.
So, in order to avoid code duplication, I moved the container section into another function and call it from the test cases.
An option could be to create a RevisionOption func to add the extra container to the existing revision.
ie.

func WithSidecar(c corev1.Container) RevisionOption {
	return func(r *v1.Revision) {
		r.Spec.Containers = append(r.Spec.Containers, c)
		// potential validation
	}
}
Added func withContainers(containers []corev1.Container) RevisionOption instead, as that covers the containers for both the serving and non-serving cases.
807838b to 569fec7
23511e3 to 9910567
markusthoemmes
left a comment
The tests are quite beasty to review, sorry for the delay 🙈
ba41468 to
45fc323
Compare
dprotaso
left a comment
looks great - one last thing
/test pull-knative-serving-upgrade-tests
@savitaashture this needs a rebase.
a465642 to 85d51c1
The following is the coverage report on the affected files.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: dprotaso, savitaashture. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
🎉
thanks for all the work @savitaashture !
Fixes:
Parts of #5822 #3384