
Helm delete deletes only the helm entry but not the deployment. #1033

Closed
omersi opened this issue Feb 17, 2021 · 23 comments
Labels
bug Something isn't working

Comments

@omersi

omersi commented Feb 17, 2021




Describe the bug
When trying to delete a release from the helm screen (using ctrl+d), k9s deletes only the release entry and not the pods/deployments etc.

To Reproduce
Steps to reproduce the behavior:

  1. Create a release using helm.
  2. Delete the release from the helm screen.
  3. Check pods/deployments: you'll see that all the other components are still there.
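The steps above can be sketched from the command line (chart, release, and namespace names here are placeholders; step 2 is the interactive ctrl+d on k9s's helm screen):

```shell
# 1. Create a release (any chart will do; names are placeholders):
helm install demo bitnami/nginx --namespace demo --create-namespace

# 2. Delete the "demo" release from k9s's helm screen with ctrl+d.

# 3. The release record is gone, but the chart's workloads survive.
#    Counting Helm-managed objects in the namespace shows the leftovers:
helm list --namespace demo          # release no longer listed
kubectl get all --namespace demo \
  -l app.kubernetes.io/managed-by=Helm --no-headers | wc -l   # non-zero
```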

Expected behavior
Delete all resources associated with the release I just deleted

Screenshots
If applicable, add screenshots to help explain your problem.
list of the available helm releases
01xxxx pods are still running

Versions (please complete the following information):

  • OS: Ubuntu 20.04.2 LTS
  • K9s Rev: v0.24.2
  • K8s Rev: v1.16.15-gke.6000
@derailed
Owner

derailed commented Mar 7, 2021

@omersi Thank you for reporting this! This is a bit strange as deleting the chart should delete all related artifacts. Are you sure you don't have another deployment holding these resources? Just tried a sample pg chart and deploy/pods/sec/etc... are all getting deleted as expected. Please send more details if this is not the case. Tx!!

@ele18081

ele18081 commented Jul 8, 2021

Getting the same behavior on my side.

I think it used to work as expected in the beginning, but now it deletes only the chart and not the k8s objects.

@madnezzm

Same issue.
K9s Rev: v0.24.15
K8s Rev: v1.20.7

@FabianSperrle

Same issue here. Normal helm delete removes all pods, services, etc, but ctrl+d from k9s somehow removes the helm deployment while keeping all resources.

K9s Rev: v0.24.15
K8s Rev: v1.21.5

Helm: version.BuildInfo{Version:"v3.6.3", GitCommit:"d506314abfb5d21419df8c7e7e68012379db2354", GitTreeState:"clean", GoVersion:"go1.16.5"} -- Does k9s interface with my local helm install, or does it come with its own packaged version of helm?

The k9s log does not contain any entries relating to helm but is spammed with E1007 09:53:26.272662 163 reflector.go:138] k8s.io/[email protected]/tools/cache/reflector.go:167: Failed to watch *unstructured.Unstructured: failed to list *unstructured.Unstructured: the server could not find the requested resource.

@derailed I just started using k9s and am not sure what other information might be useful for debugging this. Please let me know if/how I can help and where to start.

@felpasl

felpasl commented Feb 25, 2022

+1. Maybe add a command to uninstall the helm package, not only delete the reference.

@jdimmerman

@derailed Note that I am seeing this after deleting via the helm CLI, outside the context of k9s. This GitHub issue came up when I was poking around, so I thought I'd post here.

helm version
version.BuildInfo{Version:"v3.8.2", GitCommit:"6e3701edea09e5d55a8ca2aae03a68917630e91b", GitTreeState:"clean", GoVersion:"go1.18.1"}

Azure Kubernetes Cluster running 1.22.6.

@mike-code

@derailed I'd like to report this issue too. I guess this issue should be reopened.

@derailed
Owner

Thank you all for piping in!! I am not able to repro this using the latest k9s v0.26.0. I installed a redis chart, deleted it, and all associated resources were uninstalled as expected. Please add more details here so we can find a good repro. Thank you!

@derailed derailed reopened this Jul 27, 2022
@derailed derailed added the bug Something isn't working label Jul 27, 2022
@slimus
Collaborator

slimus commented Jul 27, 2022

Hi there! Do you have "helm.sh/resource-policy": keep in your helm charts?
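A quick way to check this (release and namespace names are placeholders) is to grep the rendered manifests of the installed release for the annotation:

```shell
# Print any manifest lines in the deployed release that set the keep
# resource policy; no output means the annotation is not the culprit.
helm get manifest my-release --namespace my-ns \
  | grep -n 'helm.sh/resource-policy'
```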

@mike-code

I can confirm that in my case it was the resource policy issue.

@slimus
Collaborator

slimus commented Aug 17, 2022

@mike-code thank you for reporting back! I think we can close this issue, but maybe we should add this information to the README or somewhere? @derailed what do you think?

@flixx

flixx commented Sep 1, 2022

It is not resolved for me yet.
I don't have "helm.sh/resource-policy": keep.

When I do a helm uninstall --namespace kube-system release-name (helm 3) from console, all resources get wiped.
When I use k9s delete, then helm managed resources stay.

I use this command to check:

kubectl get all --namespace kube-system -l app.kubernetes.io/managed-by=Helm

I observed this behaviour with different charts.
One of them is the sentry helm chart.

I am using k9s v0.26.3

Maybe this has something to do with namespaces?

@muffl0n

muffl0n commented Oct 18, 2022

I think this one could be related to #1558. If your default namespace is different from the namespace selected when deleting the helm release, the release is deleted but the resources are not removed — at least not from the namespace you selected; they are removed from the default namespace instead.
It seems k9s does not handle the selected namespace correctly on helm release deletion.
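If that theory applies, a hedged workaround (release and namespace names are placeholders) is to compare the kubeconfig context's default namespace against the one the release actually lives in, and fall back to the helm CLI with an explicit --namespace:

```shell
# Show the namespace your current kubeconfig context defaults to
# (empty output means "default"):
kubectl config view --minify --output 'jsonpath={..namespace}'; echo

# Uninstall through the helm CLI, naming the namespace explicitly so the
# resources are removed from the right place:
helm uninstall release-name --namespace the-release-namespace
```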

@laran

laran commented Dec 3, 2022

I have the same issue.

@hsarazin

I worked around it by deploying the exact same release again and then uninstalling it with helm instead of deleting it from k9s.

@FreZZZeR

The same issue.
k9s v0.27.4
K8s v1.24.11

@tuxillo

tuxillo commented Jun 14, 2023

Same.

@corinz

corinz commented Aug 1, 2023

+1

@JDenaro

JDenaro commented Oct 3, 2023

Still not working.
K9s Rev: v0.27.4
K8s Rev: v1.26.7

@jlarwig

jlarwig commented Jan 29, 2024

We are facing the same issue. Has someone already implemented a solution in a PR that could be referenced / tracked in this issue?

@briankandersen

Pretty please, get this fixed!

@thorbenbelow
Contributor

thorbenbelow commented Jan 31, 2024

Setting the namespace explicitly for the KubeClient seems to fix this issue (PR).
Not sure if this behaviour is intended by the helm package so I also opened an issue over there.

derailed added a commit that referenced this issue Feb 7, 2024
derailed added a commit that referenced this issue Feb 7, 2024
@foreignmeloman

foreignmeloman commented Mar 5, 2024

@derailed FYI, it seems this bug still persists when the current context selected in ~/.kube/config differs from the one selected within k9s.

placintaalexandru pushed a commit to placintaalexandru/k9s that referenced this issue Apr 3, 2024
* [Maint] Fix race condition issue

* [Bug] Fix derailed#2501

* [Maint] Allow reference to resource aliases for plugins

* [Feat] Intro cp namespace command + misc cleanup

* [Maint] Rev k8s v0.29.1

* [Bug] Fix derailed#1033, derailed#1558

* [Bug] Fix derailed#2527

* [Bug] Fix derailed#2520

* rel v0.31.8