pkg/controller: fix container runtime registries generation #489
Conversation
I guess we could also just update that annotation to the new version and avoid a whole new generation with the same content, but let's keep it this way for now
What's the difference here? It looks like we have mc.ObjectMeta.Annotations in another place in this code.
none, but ObjectMeta is an embedded field, and Go lets you access an embedded field's fields directly on the top-level struct... nothing changes functionality-wise
I can, of course, change it back... it's just my OCD kicking in lol
ok. moved this back for consistency...
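The field promotion being discussed can be sketched with a minimal, self-contained example (the struct definitions here are simplified stand-ins, not the actual Kubernetes API types):

```go
package main

import "fmt"

// ObjectMeta is a simplified stand-in for the Kubernetes metadata struct.
type ObjectMeta struct {
	Annotations map[string]string
}

// MachineConfig embeds ObjectMeta; the embedded struct's fields are
// "promoted" and become accessible directly on the outer struct.
type MachineConfig struct {
	ObjectMeta
}

func main() {
	mc := MachineConfig{ObjectMeta: ObjectMeta{
		Annotations: map[string]string{"k": "v"},
	}}
	// Both spellings refer to the exact same field, which is why the
	// choice between them is purely a style/consistency question.
	fmt.Println(mc.Annotations["k"] == mc.ObjectMeta.Annotations["k"])
}
```

Since both forms compile to an access of the same field, picking `mc.ObjectMeta.Annotations` everywhere is only about keeping the codebase consistent.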
This looks sane to me, but today we don't have any tests that cover this. Did you test this by hand? I think we could add something to e2e-aws-op that builds another release payload from the same code and at least verifies that a no-op upgrade works. (Don't have to do it in this PR, just brainstorming)

I'm testing this right now on a cluster; I'm unlikely to finish all the testing today, but that's the plan

do we have
Force-pushed from 7bc4f6d to c2ef2c1
@umohnani8 ptal

Good to see that CI is now cooperating again...

/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: cgwalters, runcom

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Details: Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing
just FYI, I'm spinning up two clusters, one to reproduce using a CI payload (broken) and another one to upgrade to my own payload built from this PR. The first should fail with #487, the second should hopefully go through (if no other errors like the daemon SIGTERM kick in?)

oh gosh, this was fast lol. Will report back anyway I guess

as a side note, the bug in the linked issue isn't 100% reproducible in my testing; I've upgraded to the very same version as Colin's and did not get the error

never mind, I was actually using the very same version, so it was a no-op

I was able to upgrade to my custom payload just fine 👍 it works

ok, properly tested this one on 2 clusters: the first cluster correctly fails on upgrade without this patch, while the second one goes through and successfully upgrades. First cluster's MCs are broken (notice the registries configs being on

The second cluster deployed just fine as expected. Notice the MCs for registries got generated by the new controller (without

And the cluster finishes the upgrade just normally.

@cgwalters I completely agree our CI needs smoke upgrade testing (be it with
The CRC (container runtime config) controller recently added a check to
avoid resyncing and recreating the very same registries config if
nothing has changed on the image crd side [1]. While that's correct,
during an upgrade, the controllers need to generate the MC fragments
using the controller version they're at. Since we weren't checking the
versions of the controller that generated the registries config, we
wrongly assumed the configurations were equal and never generated a new
one with the new controller.
This patch fixes that by adding a version check before skipping a
regeneration on equal content in the registries configs.
Fixes: #487
[1] #461
Signed-off-by: Antonio Murdaca [email protected]
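The fix the commit message describes can be sketched roughly as follows. All names below are illustrative, not the MCO's actual API; the annotation key mirrors the MCO's generated-by-controller-version convention but should be treated as an assumption:

```go
package main

import (
	"bytes"
	"fmt"
)

// Hypothetical annotation key, modeled on the MCO's
// generated-by-controller-version annotation on MachineConfigs.
const ctrlVersionKey = "machineconfiguration.openshift.io/generated-by-controller-version"

// MachineConfig is a simplified stand-in for the real MC type.
type MachineConfig struct {
	Annotations map[string]string
	Registries  []byte // rendered registries config fragment
}

// needsRegeneration sketches the fix: even when the registries payload is
// byte-for-byte identical, regenerate if the existing MC was produced by
// a different (older) controller version.
func needsRegeneration(existing, generated *MachineConfig, ctrlVersion string) bool {
	if existing.Annotations[ctrlVersionKey] != ctrlVersion {
		return true // stale controller version: must regenerate
	}
	// Same controller version: safe to skip only if content is unchanged.
	return !bytes.Equal(existing.Registries, generated.Registries)
}

func main() {
	old := &MachineConfig{
		Annotations: map[string]string{ctrlVersionKey: "v1"},
		Registries:  []byte("same content"),
	}
	fresh := &MachineConfig{Registries: []byte("same content")}

	// Same content but generated by an older controller: regenerate.
	fmt.Println(needsRegeneration(old, fresh, "v2"))
	// Same content and same controller version: skip.
	fmt.Println(needsRegeneration(old, fresh, "v1"))
}
```

The key point is the ordering: the version check runs before the content-equality shortcut introduced in #461, so a no-op-looking upgrade still stamps the MC with the new controller version.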