Eliminate dedicated ConfigMap target cache #187
Signed-off-by: Erik Godding Boye <[email protected]>
[APPROVALNOTIFIER] This PR is NOT APPROVED.

This pull-request needs approval from an approver in each of these files. The full list of commands accepted by this bot can be found here.
/retest

1 similar comment

/retest
@erikgb: The following test failed.

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
```go
},
&corev1.ConfigMap{}: {
	// Watch full config maps across all namespaces.
	// TODO: create a seperate cache for targets and sources and only
```
I'm surprised that this comment is still present in the codebase, since it has been fixed and a separate cache was added.
```go
ByObject: map[client.Object]cache.ByObject{
	// Cache metadata for ConfigMaps cluster-wide
	&metav1.PartialObjectMetadata{
		TypeMeta: metav1.TypeMeta{
```
I fear that controller-runtime does not differentiate between the partial metadata object and the typed object, so there will be a config collision.
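To illustrate the concern, here is a minimal sketch (assuming controller-runtime's `cache.Options`/`ByObject` API; this is not code from the PR) of the two map keys in question. Both keys ultimately resolve to the same GroupVersionKind, `v1/ConfigMap`, so if the cache were to key informers by GVK alone, the two entries would collide:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	"sigs.k8s.io/controller-runtime/pkg/cache"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// cacheOptions registers two ByObject entries for the same GVK:
// a typed informer and a metadata-only informer.
func cacheOptions() cache.Options {
	return cache.Options{
		ByObject: map[client.Object]cache.ByObject{
			// Typed informer: full ConfigMap objects.
			&corev1.ConfigMap{}: {},
			// Metadata-only informer for the same GVK (v1/ConfigMap).
			// The feared collision: if informers are keyed by GVK only,
			// this entry and the typed one above cannot coexist.
			&metav1.PartialObjectMetadata{
				TypeMeta: metav1.TypeMeta{
					APIVersion: "v1",
					Kind:       "ConfigMap",
				},
			}: {},
		},
	}
}
```

Whether this actually collides depends on the controller-runtime version; newer releases track metadata-only informers separately from typed ones.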
See #172
I propose we keep the dedicated cache until c/r supports differentiating between partial metadata objects and full objects (see #187 (comment)).
However, I would like to remove the // TODO: comment already; I think this is something we missed in #89 (it might have been lost while rebasing).
@inteon agreed! Sorry for not testing this better. It got late yesterday. 😉 I'll update the PR.
Replaced by #188 - ref. comment above.
This PR eliminates the dedicated target cache and fixes the TODO related to source/target ConfigMap caches. The cache should now only cache full ConfigMaps in the source namespace, and cache ConfigMap metadata cluster-wide.
I wanted to simplify this a bit before anyone tries to add Secrets as targets.
I am pretty sure that ConfigMap and PartialObjectMetadata (for ConfigMaps) should be treated independently in caches now, but feel free to check this, @inteon, if you think the tests do not cover this area well enough. I asked about this in the controller-runtime channel on Slack: https://kubernetes.slack.com/archives/C02MRBMN00Z/p1695564370173869, and also had a look at kubernetes-sigs/controller-runtime#1174. So if this isn't working as expected, I think we should create an issue in controller-runtime.
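The split described above could look roughly like the sketch below. This assumes controller-runtime v0.16+ option names (`ByObject.Namespaces`); the `sourceNamespace` parameter is a placeholder for illustration, not code from this PR:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	"sigs.k8s.io/controller-runtime/pkg/cache"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// newCacheOptions caches full ConfigMaps only in the source namespace,
// while caching ConfigMap metadata cluster-wide for target lookups.
func newCacheOptions(sourceNamespace string) cache.Options {
	return cache.Options{
		ByObject: map[client.Object]cache.ByObject{
			// Full ConfigMap objects, restricted to the source namespace.
			&corev1.ConfigMap{}: {
				Namespaces: map[string]cache.Config{
					sourceNamespace: {},
				},
			},
			// Metadata-only ConfigMaps, cluster-wide
			// (no namespace restriction).
			&metav1.PartialObjectMetadata{
				TypeMeta: metav1.TypeMeta{
					APIVersion: "v1",
					Kind:       "ConfigMap",
				},
			}: {},
		},
	}
}
```

This relies on the cache treating the typed and metadata-only entries as independent informers, which is exactly the behavior discussed in this thread.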
/cc @inteon