GC issues with kustomize-based workflows #242
There is a KEP for garbage collection. Once this feature is available, those unused resources will be garbage collected.
Thanks for the pointer! Maybe it's worth linking from the kustomize docs?
Sounds good, I'll add it there.
Thanks!
Is this feature available in K8s cluster version 1.14?
@Liujingfang1 the linked KEP is closed, as they all moved to https://github.com/kubernetes/enhancements, but I can't find the corresponding KEP there. Do you happen to know if there's still something relevant open somewhere? Thanks
The trail is cold here as far as I can see. Any update @Liujingfang1? Do generated objects persist forever at this point?
Would also like to know of any solution/workaround for this.
I think the KEP never reached any consensus and died. There's no easy way to do this now. So far I see two approaches:
I think perhaps a more ideal solution would be for there to be a (custom) resource that kustomize creates which remembers what it created before. When you apply a new configuration to that resource, it can do a diff and delete anything that was removed.
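A rough client-side sketch of that inventory idea, using plain files instead of a custom resource (this assumes `yq` v4 is available and that a `previous.txt` inventory from the last deploy is kept around; every name here is hypothetical):

```sh
#!/bin/sh
# List every object in the new build as kind/namespace/name.
kustomize build . > current.yaml
yq e -N '[.kind, .metadata.namespace // "default", .metadata.name] | join("/")' \
  current.yaml | sort > current.txt

# Anything in the previous inventory but missing from the current build is stale.
comm -23 previous.txt current.txt | while IFS=/ read -r kind ns name; do
  kubectl delete "$kind" "$name" -n "$ns"
done

kubectl apply -f current.yaml
mv current.txt previous.txt   # becomes the inventory for the next run
```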
I found
Should the documentation be updated? This part is pretty confusing since it's not working this way.
Can this issue be reopened? It's a big gap in kustomize for practical usage. I also found prune to be very limited and not really production-usable.
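For context, the prune mentioned here is kubectl's allow-list based pruning, the closest built-in to what this thread asks for. A minimal sketch (the `app=my-app` label is a placeholder, and every generated object must carry it, e.g. via `commonLabels` in the kustomization):

```sh
# Applies the build output, then deletes live objects that match the
# selector but are absent from the applied manifests. --prune is alpha
# and has sharp edges, which matches the experience described above.
kustomize build overlays/production | kubectl apply -f - --prune -l app=my-app
```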
Has there been any resolution to this? Does the garbage collection work correctly, or what is the recommended best practice?
You can use ArgoCD or something like that which prunes old resources; it's a good idea in general, so maybe worth the trouble in the long term. Or cobble together a solution of your own to clean them up. Or just ignore them.
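For reference, a minimal sketch of what that looks like in Argo CD terms. The field names follow Argo CD's Application API; the repo URL, paths, and names are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/config.git   # placeholder config repo
    targetRevision: main
    path: overlays/production                 # the kustomize overlay to build
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true   # delete cluster objects that were removed from the repo
```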
Ah, gotcha. Thank you for the quick response!
There's an issue with unused k8s objects accumulating when using kustomize for a GitOps kind of workflow (but not limited to such cases, I think).
In examples/combineConfigs.md, there's the following phrase:
This is not quite true. For example, I see this on my cluster where I use kustomize to deploy some stuff:
ConfigMaps/Secrets with hashed names are one example; another is objects that are no longer used in a new version of some app and thus were removed from the config repo.
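To make the hashed-name case concrete, a generator like the following (file and names are placeholders) emits something like `app-config-g9f2h7b2tm`; editing the file changes the hash, and nothing ever deletes the previously applied `app-config-<oldhash>`:

```yaml
# kustomization.yaml
configMapGenerator:
- name: app-config
  files:
  - config.properties
```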
What really needs to be done for k8s GC to work is setting `ownerReferences` in the objects' metadata, for example (see here for more info):
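A representative sketch, with placeholder names and a placeholder uid:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-g9f2h7b2tm   # placeholder hashed name
  ownerReferences:
  - apiVersion: apps/v1
    kind: Deployment
    name: app                   # the object whose deletion should cascade here
    uid: d9607e19-f88f-11e6-a518-42010a800195   # must be the live owner's uid
```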
There's an obvious problem with this approach (kubernetes/kubernetes#66068): you can't set a fixed uid for a k8s object, yet you must know the uid of the owner to attach the to-be-GC'd objects to it. This is not compatible with the `kustomize` approach of first generating YAML via `kustomize build` and then applying it without getting any info from the cluster, so some compromises need to be made.

I frankly can't think of a pretty approach right now. Some of the workarounds people use for this kind of problem:
I'm not sure such a cleanup helper can be fully implemented within the bounds of `kustomize`, but some assistance on the `kustomize` side is definitely required, such as making it possible to inject `ownerReference`s, or at least just `uid`s, easily.

Thoughts?
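As a sketch of what such injection could look like with kustomize's patch mechanism (this assumes the owner's uid has already been fetched from the live cluster, which is exactly the chicken-and-egg problem above; the syntax shown is the current `patches` field, and all names and the uid are placeholders):

```yaml
# kustomization.yaml
resources:
- app.yaml
patches:
- target:
    kind: ConfigMap
  patch: |-
    - op: add
      path: /metadata/ownerReferences
      value:
      - apiVersion: apps/v1
        kind: Deployment
        name: app
        uid: d9607e19-f88f-11e6-a518-42010a800195   # fetched beforehand via kubectl get
```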