CNTRLPLANE-2241: Conditionally add the KMS plugin volume mount to the kube-apiserver container #2015
Conversation
Walkthrough: Update adds a new featureGateAccessor parameter and field to the target config controller APIs and propagates it through callers for KMS plugin volume/mount integration; it also updates dependency versions in go.mod and promotes gomega to a direct require.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
/payload-job periodic-ci-openshift-release-master-nightly-4.22-e2e-vsphere-ovn-techpreview

@bertinatto: trigger 2 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command.
See details on https://pr-payload-tests.ci.openshift.org/runs/ci/ba812410-f890-11f0-8937-792694951cf7-0
```go
	return fmt.Errorf("failed to get apiserver/cluster: %w", err)
}

if apiServer.Spec.Encryption.Type != configv1.EncryptionTypeKMS {
```
This decision needs to be based on the revision we're currently rendering, not the current state. We also need to keep mounting the socket path when we're migrating from a revision where KMS was previously enabled.
Maybe the encryption controllers should be able to return any host path(s) they need for a given revision? This is subtle, and we will want to keep implementations in sync across the different apiserver operators.
> This decision needs to be based on the revision we're currently rendering, not the current state. We also need to keep mounting the socket path when we're migrating from a revision where KMS was previously enabled.
I agree with @benluddy.
Let's say the APIServer CR is configured like this:
```yaml
apiVersion: config.openshift.io/v1
kind: APIServer
metadata:
  name: cluster
spec:
  encryption:
    type: KMS
```

The encryption controllers generate:
```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - kms:
          name: myKmsProvider
          endpoint: unix:///var/run/kms-plugin/socket.sock
          cachesize: 100
          timeout: 3s
      - identity: {}
```

After that, the cluster admin moves to aescbc:
```yaml
apiVersion: config.openshift.io/v1
kind: APIServer
metadata:
  name: cluster
spec:
  encryption:
    type: aescbc
```

The EncryptionConfiguration is then in this format:
```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: key
      - kms: # <---- KMS is still in here
          name: myKmsProvider
          endpoint: unix:///var/run/kms-plugin/socket.sock
          cachesize: 100
          timeout: 3s
      - identity: {}
```

In order to migrate resources from KMS to aescbc, the KMS plugin must still be accessible.
In my opinion, if the feature gate is enabled, we can always mount this hostPath. In v2, if we adopt the sidecar approach, the KMS plugin will be accessible within the pod network and we won't need the hostPath.
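A rough sketch of that v1 idea (mount the hostPath whenever the gate is on, regardless of the currently configured encryption type). The types here are simplified stand-ins for `k8s.io/api/core/v1` — the helper name `addKMSPluginVolume`, the volume name, and the `kmsSocketDir` path are illustrative assumptions, not the actual implementation:

```go
package main

import "fmt"

// Simplified stand-ins for corev1.Volume, corev1.VolumeMount, corev1.Container
// and corev1.PodSpec, used only to illustrate the shape of the change.
type Volume struct {
	Name     string
	HostPath string
}

type VolumeMount struct {
	Name      string
	MountPath string
}

type Container struct {
	Name         string
	VolumeMounts []VolumeMount
}

type PodSpec struct {
	Volumes    []Volume
	Containers []Container
}

// kmsSocketDir is an assumed location, derived from the
// unix:///var/run/kms-plugin/socket.sock endpoint used in the discussion.
const kmsSocketDir = "/var/run/kms-plugin"

// addKMSPluginVolume mounts the KMS plugin socket directory into the named
// container whenever the (hypothetical) KMSEncryption feature gate is enabled.
// Because the mount does not depend on the current encryption type, it stays
// in place while migrating away from KMS, keeping the plugin reachable.
func addKMSPluginVolume(spec *PodSpec, containerName string, gateEnabled bool) {
	if !gateEnabled {
		return
	}
	spec.Volumes = append(spec.Volumes, Volume{Name: "kms-plugin", HostPath: kmsSocketDir})
	for i := range spec.Containers {
		if spec.Containers[i].Name == containerName {
			spec.Containers[i].VolumeMounts = append(spec.Containers[i].VolumeMounts,
				VolumeMount{Name: "kms-plugin", MountPath: kmsSocketDir})
		}
	}
}

func main() {
	spec := &PodSpec{Containers: []Container{{Name: "kube-apiserver"}}}
	addKMSPluginVolume(spec, "kube-apiserver", true)
	fmt.Println(len(spec.Volumes), spec.Containers[0].VolumeMounts[0].MountPath)
	// → 1 /var/run/kms-plugin
}
```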
> Maybe the encryption controllers should be able to return any host path(s) they need for a given revision? This is subtle, and we will want to keep implementations in sync across the different apiserver operators.

Encryption controllers are provider-agnostic, so they don't know these details. But I believe we'll need plugin lifecycle controllers that can return this information.
Let’s also add a comment that this solution is considered temporary and is needed for v1.
In the future, we will probably come up with a different solution.
I also think that for v1, if the FG is enabled, we could simply use hostPath; let's keep it simple for now.
It looks like the new FG will be KMSEncryption
Switched to use the feature gate instead. Also, added a comment stating this is a temporary solution.
Force-pushed: 05ca276 to 0f9be05
/payload-job periodic-ci-openshift-release-master-nightly-4.22-e2e-vsphere-ovn-techpreview

@bertinatto: trigger 2 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command.
See details on https://pr-payload-tests.ci.openshift.org/runs/ci/86364ca0-fadc-11f0-8791-89ff27c17445-0

/hold

/payload-job periodic-ci-openshift-release-master-nightly-4.22-e2e-vsphere-ovn-techpreview

@bertinatto: trigger 2 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command.
See details on https://pr-payload-tests.ci.openshift.org/runs/ci/b6582390-faf0-11f0-9d5b-cd80cedec70e-0

/payload-job periodic-ci-openshift-release-master-nightly-4.22-e2e-vsphere-ovn-techpreview

@bertinatto: trigger 2 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command.
See details on https://pr-payload-tests.ci.openshift.org/runs/ci/845ac2b0-fb88-11f0-86f7-a63e297741b3-0

/payload-job periodic-ci-openshift-release-master-nightly-4.22-e2e-vsphere-ovn-techpreview

@bertinatto: trigger 2 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command.
See details on https://pr-payload-tests.ci.openshift.org/runs/ci/6c0f51e0-fc72-11f0-95f5-0f998cdda80a-0

/retest

/test k8s-e2e-gcp

/retest
go.mod (outdated):

```
	sigs.k8s.io/kube-storage-version-migrator v0.0.6-0.20230721195810-5c8923c5ff96
)

require github.com/onsi/gomega v1.35.1
```
Also, could it be merged with the existing require block?
No, go.mod isn't supposed to be edited manually; `go mod` should be used instead. Something in the chain of the library-go bump caused this.
Force-pushed: a7f5eb0 to f59d118

Force-pushed: e640ca0 to f3e3574
Actionable comments posted: 1
In `go.mod`:
- Around lines 40-41: the standalone `require github.com/onsi/gomega v1.35.1 // indirect` should be moved into the existing indirect require block so that all indirect dependencies are grouped together; remove the separate line and insert that require entry into the existing indirect block (near the other `// indirect` entries), preserving alphabetical order among the package paths.
```go
}

if err := encryptionkms.AddKMSPluginVolumeAndMountToPodSpec(&required.Spec, "kube-apiserver", featureGateAccessor); err != nil {
	return nil, false, fmt.Errorf("failed to add KMS encryption volumes: %v", err)
```
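The snippet above passes the featureGateAccessor down so the helper can decide whether to add the plugin volume at all. A self-contained sketch of what such a helper might look like — the `FeatureGateAccess` interface here is a simplified stand-in for library-go's feature gate accessor, and the `KMSEncryption` gate name, the `mounts` representation, and the socket path are assumptions for illustration:

```go
package main

import (
	"errors"
	"fmt"
)

// FeatureGateAccess is a simplified stand-in for library-go's feature gate
// accessor; the real code queries the current feature gates of the cluster.
type FeatureGateAccess interface {
	Enabled(gate string) (bool, error)
}

// staticGates is a trivial FeatureGateAccess backed by a fixed map.
type staticGates map[string]bool

func (s staticGates) Enabled(gate string) (bool, error) {
	v, ok := s[gate]
	if !ok {
		return false, fmt.Errorf("unknown feature gate %q", gate)
	}
	return v, nil
}

// addKMSPluginVolumeAndMountToPodSpec sketches the helper invoked above. It is
// a no-op when the (assumed) KMSEncryption gate is off; when on, it records the
// mount. The real implementation mutates a corev1.PodSpec instead of a slice.
func addKMSPluginVolumeAndMountToPodSpec(mounts *[]string, containerName string, access FeatureGateAccess) error {
	if access == nil {
		return errors.New("feature gate accessor is required")
	}
	enabled, err := access.Enabled("KMSEncryption")
	if err != nil {
		return fmt.Errorf("failed to determine KMSEncryption gate: %w", err)
	}
	if !enabled {
		// Temporary v1 behavior discussed above: mount only when the gate is on.
		return nil
	}
	*mounts = append(*mounts, containerName+":/var/run/kms-plugin")
	return nil
}

func main() {
	var mounts []string
	gates := staticGates{"KMSEncryption": true}
	if err := addKMSPluginVolumeAndMountToPodSpec(&mounts, "kube-apiserver", gates); err != nil {
		panic(err)
	}
	fmt.Println(mounts)
	// → [kube-apiserver:/var/run/kms-plugin]
}
```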
/lgtm

/retitle CNTRLPLANE-2241: Conditionally add the KMS plugin volume mount to the kube-apiserver container

@bertinatto: This pull request references CNTRLPLANE-2241, which is a valid Jira issue. Warning: the referenced Jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.22.0" version, but no target version was set.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: bertinatto, p0lyn0mial. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Approvers can indicate their approval by writing
/retest-required

@bertinatto: The following tests failed, say
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

/verified by @bertinatto
@bertinatto: This PR has been marked as verified by
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Merged fecf00c into openshift:main
/cherry-pick release-4.21

@bertinatto: #2015 failed to apply on top of branch "release-4.21":
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
In any case I will have to bump apiservers with the changes in library-go. Maybe it is better to get the changes in that PR.
/assign @ardaguclu @benluddy @p0lyn0mial