ccruntime: Set INSTALL_OFFICIAL_CONTAINERD to false #274
Conversation
LGTM. Thanks!
I hit an error testing this out locally:
# kubectl logs pod/cc-operator-pre-install-daemon-8xwt9 -n confidential-containers-system
/opt/confidential-containers-pre-install-artifacts/scripts/reqs-deploy.sh: line 208: INSTALL_COCO_CONTAINERD: unbound variable
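The failure above is the usual `set -u` (nounset) symptom: the script dereferences a variable that was never set. A minimal sketch of the guard that avoids it — the variable name comes from the log; using `false` as the fallback is an assumption, and the real reqs-deploy.sh may handle defaults differently:

```shell
#!/usr/bin/env bash
# Under `set -u`, referencing an unset variable aborts with
# "INSTALL_COCO_CONTAINERD: unbound variable". A ${VAR:-default}
# parameter expansion keeps the script alive when the variable is absent.
set -euo pipefail

echo "INSTALL_COCO_CONTAINERD: ${INSTALL_COCO_CONTAINERD:-false}"
```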
Force-pushed from 9418a21 to 0573d19
Are you sure you've been testing the latest version of this PR? Asking because, looking at the reqs script, I don't easily see how this could happen, as we set those variables at the beginning of the file.
I'm not sure, but given all the changes you've made since then, I've re-tested in a 'clean' environment.
I wasn't able to test this properly in the original environment where I found the issue, as I hit problems when trying to rebuild and deploy a new operator container image. Instead I created a new VM and ran the code that the e2e tests use, and it looks like it's working for me now:
# kubectl logs ds/cc-operator-pre-install-daemon -n confidential-containers-system
INSTALL_COCO_CONTAINERD: false
INSTALL_OFFICIAL_CONTAINERD: false
INSTALL_VFIO_GPU_CONTAINERD: false
INSTALL_NYDUS_SNAPSHOTTER: true
Copying nydus-snapshotter artifacts onto host
Created symlink /etc/systemd/system/containerd.service.requires/nydus-snapshotter.service → /etc/systemd/system/nydus-snapshotter.service.
configure nydus snapshotter for containerd
Create /etc/containerd/config.toml.d
Drop-in the nydus configuration
[proxy_plugins]
  [proxy_plugins.nydus]
    type = "snapshot"
    address = "/run/containerd-nydus/containerd-nydus-grpc.sock"
Restarting containerd
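The drop-in steps in the log above can be sketched roughly as follows. The directory and TOML body follow the log output; `CONF_DIR` defaults to a temp directory here so the sketch runs unprivileged (on a real host it would be /etc/containerd/config.toml.d), and the actual reqs-deploy.sh logic may differ:

```shell
#!/usr/bin/env bash
set -euo pipefail

# On a real host this would be /etc/containerd/config.toml.d; a temp
# directory is used by default so the sketch can run without root.
CONF_DIR="${CONF_DIR:-$(mktemp -d)}"
mkdir -p "$CONF_DIR"

# Drop-in the nydus proxy-plugin configuration shown in the log.
cat > "$CONF_DIR/nydus-snapshotter.toml" <<'EOF'
[proxy_plugins]
  [proxy_plugins.nydus]
    type = "snapshot"
    address = "/run/containerd-nydus/containerd-nydus-grpc.sock"
EOF

# On a real host the daemon would then be restarted:
#   systemctl restart containerd
echo "wrote $CONF_DIR/nydus-snapshotter.toml"
```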
@wainersm, I need your help here to change the tests.
Ok, looking at it.
Force-pushed from 0573d19 to 34ef4d4
@fidencio I'm thinking of tackling CI like this:
Does that sound like a good plan?
CI passes with that change: http://jenkins.katacontainers.io/job/confidential-containers-operator-main-ubuntu-20.04-x86_64-containerd_kata-qemu-PR/328/console
Here we have a problem... the CI scripts leverage docker to build images and serve them with a local registry, but docker depends on its
Managed to fix the execution on Ubuntu 22.04. @fidencio you will need to pick my two commits from #276. Here is the final coverage:

a) Jobs running on Ubuntu 20.04, containerd 1.6, INSTALL_OFFICIAL_CONTAINERD=true: http://jenkins.katacontainers.io/job/confidential-containers-operator-main-ubuntu-20.04-x86_64-containerd_kata-clh-PR
If I'm not mistaken, SEV/SNP is Ubuntu 20.04 and TDX is CentOS Stream 8, so they will fall into this case.

b) Jobs running on Ubuntu 22.04, containerd 1.7, INSTALL_OFFICIAL_CONTAINERD=false (default behavior): http://jenkins.katacontainers.io/job/confidential-containers-operator-main-ubuntu-22.04-x86_64-containerd_kata-clh-PR
/test
Right now we're just ignoring the global env vars passed, which is not the expected behaviour, especially as we're duplicating a bunch of env vars passed to the preInstall / postUninstall payloads. Let's ensure we actually read the global ones and take them into consideration as well. Signed-off-by: Fabiano Fidêncio <[email protected]>
Otherwise we may end up in situations where users simply try to add / replace an environment variable via kustomize and end up with a broken operator, as this env var would simply vanish. :-/ Signed-off-by: Fabiano Fidêncio <[email protected]>
Otherwise we may hit issues in cases where folks use kustomize to add / replace one of the values, as kustomize just overrides the whole set of env vars. We need to keep those in sync with whatever is set in the configs, but this shouldn't be as much of a burden as leaving it for the user to do so. Signed-off-by: Fabiano Fidêncio <[email protected]>
In theory those could very well be set only for the pre-install / post-uninstall, but that doesn't make much sense, as the user would then have to set them twice. With the user experience in mind, let's just move them to the global env vars so the user sets / kustomizes them only once. Signed-off-by: Fabiano Fidêncio <[email protected]>
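The kustomize behaviour the commits above describe — a patch replacing the entire env var list — stems from patching a list field on a custom resource, where no strategic-merge keys are defined. A hypothetical patch illustrating the pitfall (the kind, apiVersion, and field names below are illustrative, not the actual CRD schema):

```yaml
# Hypothetical kustomize strategic-merge patch against a ccruntime
# custom resource. Because CRDs carry no merge keys for list fields,
# this patch REPLACES the whole env var list: every variable that
# should survive must be repeated here, not just the one being changed.
apiVersion: confidentialcontainers.org/v1beta1   # illustrative
kind: CcRuntime                                  # illustrative
metadata:
  name: ccruntime-sample
spec:
  config:
    environmentVariables:                        # illustrative field name
      - name: "INSTALL_OFFICIAL_CONTAINERD"
        value: "true"
      # ...any other env vars omitted here would be dropped entirely
```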
Force-pushed from 34ef4d4 to c71cba4
/test
TDX CI is failing due to:
SEV CI is failing due to:
Force-pushed from c71cba4 to 22e7e3c
/test
# containerd v1.7+ running. If you know for sure that's not
# the case, please, set this variable to `true`
- name: "INSTALL_OFFICIAL_CONTAINERD"
  value: "false"
Related to my previous question about setting INSTALL_OFFICIAL_CONTAINERD to false in reqs-deploy.sh: would it make sense to set the default there, and comment out this patch for users to uncomment if they need containerd installed?
Users can comment this out to install containerd if needed, IIRC.
Ok. Would it make sense to document these variables?
It can be done in a separate PR as well.
Yes, it would, and I will do it in a separate PR before the release.
This should also be mentioned in the release notes.
The majority of the managed kubernetes solutions are already relying on a new enough (v1.7+) version of containerd. Fixes: confidential-containers#272 Signed-off-by: Fabiano Fidêncio <[email protected]>
With the new requirement of containerd 1.7+, operator.sh was adapted so the operator installs containerd in case the system's installed version is 1.6 or older. Signed-off-by: Wainer dos Santos Moschetta <[email protected]>
On Ubuntu 22.04, install docker from the distro and, consequently, containerd version 1.7. This way we cover the default installation case where INSTALL_OFFICIAL_CONTAINERD=false. Signed-off-by: Wainer dos Santos Moschetta <[email protected]>
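The version gate the commit message above describes can be sketched with `sort -V`. The function name and the exact threshold handling below are illustrative; the real operator.sh may implement the check differently:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Returns success when the given containerd version is older than 1.7,
# i.e. when the operator would need to install its own containerd.
containerd_needs_install() {
    local ver="$1"   # e.g. "1.6.21"
    # `sort -V` orders version strings numerically; if the smallest of
    # the pair is not 1.7.0, then $ver sorts before it and is older.
    [ "$(printf '%s\n' "$ver" "1.7.0" | sort -V | head -n1)" != "1.7.0" ]
}

if containerd_needs_install "1.6.21"; then
    echo "host containerd too old: operator installs containerd"
else
    echo "host containerd is new enough"
fi
```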
Force-pushed from 22e7e3c to 812289b
/test
/test-s390x
/lgtm
Thanks Fabiano
/cc @wainersm, we'd need to adapt CI for this, and I'd need your help to do so.