GC issues with kustomize-based workflows #242

Closed
ivan4th opened this issue Aug 9, 2018 · 15 comments

@ivan4th
Contributor

ivan4th commented Aug 9, 2018

There's an issue with unused k8s objects accumulating when using kustomize for a GitOps kind of workflow (though I don't think it's limited to such cases).
In examples/combineConfigs.md, there's the following phrase:

A GC process in the k8s master eventually deletes unused configMaps.

This is not quite true. For example, I see this on my cluster where I use kustomize to deploy some stuff:

sc-scraper-5t97th7t2k             1         16d
sc-scraper-6h7g546ckt             1         19d
sc-scraper-744687ccfc             1         15d
sc-scraper-88kmmkfdk6             1         16d
sc-scraper-fh44t8758h             1         19d
sc-scraper-h72429g4d6             1         7d
sc-scraper-hbck7hdtht             1         19d
sc-scraper-hg446dfb4m             1         15d
sc-scraper-kbh8kbdbf6             1         14d
sc-scraper-kd98tc426k             1         15d

ConfigMaps and Secrets with hashed names are one example; another is objects that are no longer
used in a new version of an app and have therefore been removed from the config repo.
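
For reference, the hashed names in the listing above are what kustomize's configMapGenerator produces; roughly (the file name here is just illustrative):

# kustomization.yaml (minimal sketch)
configMapGenerator:
- name: sc-scraper
  files:
  - scraper.properties

Every time the generated content changes, kustomize build emits a ConfigMap with a new name suffix and rewrites the references to it, but nothing ever deletes the previously applied sc-scraper-<hash> objects from the cluster.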

What really needs to be done for k8s GC to work is setting ownerReferences in the objects' metadata, for example (see here for more info):

apiVersion: v1
kind: Service
metadata:
  name: wiki
  ownerReferences:
  - apiVersion: apps/v1
    kind: Deployment
    name: wiki
    uid: f491b0f0-2522-4b62-8f81-bde62999f825
spec:
  ports:
  - port: 80
    name: web
  selector:
    app: wiki

There's an obvious problem with this approach (kubernetes/kubernetes#66068): you can't set a fixed uid for a k8s object, yet you must know the uid of the owner to attach the objects to be GC'd to it. This is incompatible with the kustomize approach of first generating YAML via kustomize build and then applying it without getting any info from the cluster, so some compromises need to be made.
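
For illustration, the only way to get that uid is to ask the live cluster, e.g. for the wiki Deployment from the example above (a sketch, assuming it already exists in the default namespace):

kubectl get deployment wiki -n default -o jsonpath='{.metadata.uid}'

The resulting value would then have to be substituted into the rendered YAML before kubectl apply, which is exactly the extra round trip that a plain kustomize build | kubectl apply -f - pipeline doesn't have.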

I frankly can't think of a pretty approach right now. Some of the workarounds people use for this kind of problem:

  • referencing a per-app CRD from each object (see here and here)
  • referencing a pre-created namespace

I'm not sure such a cleanup helper can be fully implemented within the bounds of kustomize, but some assistance on the kustomize side is definitely required, such as making it possible to easily inject ownerReferences, or at least just uids.
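
To illustrate what that assistance could look like: once the owner's uid is known (e.g. from the kubectl lookup above), a JSON patch can splice the ownerReference in at build time. A rough sketch, assuming a reasonably recent kustomize that supports inline patches and a hypothetical wiki-service.yaml holding the Service from the earlier example:

# kustomization.yaml (sketch only; the uid still has to be looked up from the cluster first)
resources:
- wiki-service.yaml
patches:
- target:
    kind: Service
    name: wiki
  patch: |-
    - op: add
      path: /metadata/ownerReferences
      value:
      - apiVersion: apps/v1
        kind: Deployment
        name: wiki
        uid: f491b0f0-2522-4b62-8f81-bde62999f825

The awkward part remains: the uid is cluster-specific, so a patch like this can't live in the config repo as-is.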

Thoughts?

@Liujingfang1
Contributor

There is a KEP for garbage collection. Once this feature is available, those unused resources will be garbage collected.

@ivan4th
Contributor Author

ivan4th commented Aug 15, 2018

Thanks for the pointer! Maybe it's worth linking from the kustomize docs?

@Liujingfang1
Contributor

Sounds good, I'll add it there.

@ivan4th
Contributor Author

ivan4th commented Aug 16, 2018

Thanks!

@mr-karan
Contributor

mr-karan commented Oct 7, 2019

Is this feature available in K8s cluster version 1.14?

@michaelbannister

@Liujingfang1 the linked KEP is closed as they all moved to https://github.com/kubernetes/enhancements, but I can't find the corresponding KEP there. Do you happen to know if there's still something relevant open somewhere?

Thanks

@afirth
Contributor

afirth commented Mar 23, 2020

The trail is cold here as far as I can see. Any update @Liujingfang1 ? Do generated objects persist forever at this point?

@guiguan

guiguan commented Apr 17, 2020

Would also like to know of any solution/workaround for this.

@dobesv

dobesv commented Apr 22, 2020

I think the KEP never reached any consensus and died. There's no easy way to do this now.

So far I see two approaches:

  1. Manually delete the extra ConfigMaps once in a while.
  2. If your kustomization covers everything in a namespace, you can use kubectl apply --prune, and resources not listed will be deleted (see the sketch just below). But of course you have to be careful with this one, since it will delete anything not in the file, which is a bit unfortunate.
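
A rough sketch of option 2, assuming every rendered object carries a common label and the whole app lives in one namespace (the path, namespace and label here are placeholders):

kustomize build overlays/prod \
  | kubectl apply -n scrapers --prune -l app=sc-scraper -f -

Objects in that namespace that match the selector but are missing from the rendered output become candidates for deletion, so a forgotten label or an overly broad selector can take out resources you still want.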

I think a more ideal solution would be a (custom) resource that kustomize creates which remembers what it created before. When you apply a new configuration, it can diff against that resource and delete anything that was removed.

@guiguan

guiguan commented Apr 25, 2020

I found kubectl apply --prune quite limited and error-prone. I am using kapp now, which does a really nice job of k8s app deployment and management.
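
For anyone landing here later, the kapp flow looks roughly like this (the app name is just a placeholder): kapp records what it applied under that name and deletes whatever disappears from the input on the next deploy.

kustomize build . > rendered.yaml
kapp deploy -a sc-scraper -f rendered.yaml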

@ivishnevs

ivishnevs commented Apr 29, 2020

Should the documentation be updated?

The older configMap, when no longer referenced by any other resource, is eventually garbage collected.

This part is pretty confusing since it's not working this way.

@jsravn

jsravn commented Jun 25, 2020

Can this issue be reopened? It's a big gap in kustomize for practical usage. I also found prune to be very limited and not really usable in production.

@alexandros-genies

Has there been any resolution to this? Does the garbage collection work correctly or what is the recommended best practice?

@dobesv

dobesv commented Aug 26, 2022

You can use ArgoCD or something like that, which prunes old resources; it's a good idea in general, so it may be worth the trouble in the long term. Or cobble together a solution of your own to clean them up. Or just ignore them.
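
A minimal sketch of the Argo CD route (the repo URL, paths and names below are all placeholders): an Application that points at the kustomization and has automated pruning enabled, so resources that drop out of the rendered output are deleted on sync.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sc-scraper
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/config-repo
    targetRevision: main
    path: overlays/prod
  destination:
    server: https://kubernetes.default.svc
    namespace: scrapers
  syncPolicy:
    automated:
      prune: true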

@alexandros-genies

Ah, gotcha, thank you for the quick response.
