
API Resources "carry over" between contexts, causing errors if they share shortnames #2492

Closed
djpenka opened this issue Jan 18, 2024 · 2 comments · Fixed by #2493

Comments


djpenka commented Jan 18, 2024

Describe the bug

TLDR
When you switch between clusters, k9s "remembers" the aliases/api-resources from the previous cluster. This is a problem because an alias carried over from the previous cluster can shadow the correct alias in the current one.

I ran into this with two API versions of the same custom resource, one older and one newer, but it happens with any API resources/aliases.

For example, Flux uses the newer `kustomize.toolkit.fluxcd.io/v1/kustomizations` resource with the shortname `ks` on one cluster. If you switch to another cluster that serves `kustomize.toolkit.fluxcd.io/v1beta2/kustomizations`, `:ks` will sometimes try to find `kustomize.toolkit.fluxcd.io/v1` and fail with the message `no resource meta defined for "kustomize.toolkit.fluxcd.io/v1/kustomizations"`.

To Reproduce
Steps to reproduce the behavior:

  1. Install a CRD on cluster 1
  2. Do not install it on cluster 2
  3. Open k9s on cluster 1
  4. Change to cluster 2
  5. The command to show cluster 1's CRD is still visible, even though the CRD does not exist on cluster 2

Reproduce my specific issue:

  1. Install old kustomize controller CRDs on cluster 1
  2. Install new kustomize controller CRDs on cluster 2
  3. Open k9s on cluster 1
  4. Change to cluster 2
  5. The `:ks` command only works on one of the clusters, and which cluster it works on appears to be inconsistent
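As a diagnostic aid (not part of the original report), you can confirm which API version of the kustomization CRD each context actually serves before opening k9s; the context names below are placeholders for your own:

```shell
# Show which kustomize.toolkit.fluxcd.io resources each context serves.
kubectl --context cluster-1 api-resources --api-group=kustomize.toolkit.fluxcd.io
kubectl --context cluster-2 api-resources --api-group=kustomize.toolkit.fluxcd.io

# Or read the served versions straight off the CRD itself.
kubectl --context cluster-1 get crd kustomizations.kustomize.toolkit.fluxcd.io \
  -o jsonpath='{.spec.versions[*].name}'
```

If the two contexts report different served versions, any shared shortname like `ks` is ambiguous across them, which is exactly the situation that triggers this bug.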

Historical Documents

`k9s.log` excerpt from a session where k9s decided that `ks` mapped to `kustomize.toolkit.fluxcd.io/v1beta1/kustomizations`, and where I could still attempt to look up `kuma.io/v1alpha1/zoneegresses` from my previous cluster context.

10:24AM INF 🐶 K9s starting up...
10:24AM INF ✅ Kubernetes connectivity
10:24AM WRN CustomView watcher failed error="lstat /Users/dylanpenka/Library/Application Support/k9s/views.yaml: no such file or directory"
10:24AM WRN CustomView watcher failed error="lstat /Users/dylanpenka/Library/Application Support/k9s/views.yaml: no such file or directory"
10:24AM ERR Retry failed error="context canceled"
10:24AM ERR Component init failed for "" error="no resource meta defined for \"kustomize.toolkit.fluxcd.io/v1beta1/kustomizations\""
10:24AM ERR Watcher failed for kustomize.toolkit.fluxcd.io/v1/kustomizations -- the server could not find the requested resource (get kustomizations.kustomize.toolkit.fluxcd.io) error="the server could not find the requested resource (get kustomizations.kustomize.toolkit.fluxcd.io)"
10:24AM WRN CustomView watcher failed error="lstat /Users/dylanpenka/Library/Application Support/k9s/views.yaml: no such file or directory"
10:24AM ERR Component init failed for "" error="no resource meta defined for \"kustomize.toolkit.fluxcd.io/v1/kustomizations\""
10:26AM ERR Component init failed for "" error="no resource meta defined for \"clusterconfig.azure.com/v1alpha1/fluxconfigs\""
10:29AM WRN CustomView watcher failed error="lstat /Users/dylanpenka/Library/Application Support/k9s/views.yaml: no such file or directory"
10:29AM ERR Component init failed for "" error="no resource meta defined for \"kuma.io/v1alpha1/zoneegresses\""

Expected behavior
Only aliases defined for the context I'm currently using should be active. Aliases should not "carry over" from other contexts.
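The expected behavior can be sketched as an alias table keyed by context rather than shared globally. This is a hypothetical illustration of the idea, not k9s's actual implementation; all type and function names here are invented:

```go
package main

import "fmt"

// AliasRegistry scopes shortname -> resource mappings per kube context,
// so an alias defined while viewing one cluster can never shadow the
// same shortname on another cluster.
type AliasRegistry struct {
	byContext map[string]map[string]string // context -> shortname -> group/version/resource
}

func NewAliasRegistry() *AliasRegistry {
	return &AliasRegistry{byContext: make(map[string]map[string]string)}
}

// Define records an alias for one context only.
func (r *AliasRegistry) Define(ctx, short, gvr string) {
	if r.byContext[ctx] == nil {
		r.byContext[ctx] = make(map[string]string)
	}
	r.byContext[ctx][short] = gvr
}

// Resolve looks up an alias in the active context; aliases belonging to
// other contexts are invisible, so nothing "carries over".
func (r *AliasRegistry) Resolve(ctx, short string) (string, bool) {
	gvr, ok := r.byContext[ctx][short]
	return gvr, ok
}

func main() {
	reg := NewAliasRegistry()
	reg.Define("cluster-1", "ks", "kustomize.toolkit.fluxcd.io/v1/kustomizations")
	reg.Define("cluster-2", "ks", "kustomize.toolkit.fluxcd.io/v1beta2/kustomizations")

	v1, _ := reg.Resolve("cluster-1", "ks")
	v2, _ := reg.Resolve("cluster-2", "ks")
	fmt.Println(v1)
	fmt.Println(v2)
}
```

With a single shared map (the buggy behavior), whichever cluster registered `ks` last would win for both contexts, producing exactly the `no resource meta defined` errors shown in the log above.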

Screenshots
Before changing to the second cluster:
[screenshot]

After changing to the second cluster
[screenshot]

Changing back to the first cluster (note that `ks` maps to v1 rather than the correct v1beta2)
[screenshot]

Error message from the first cluster
[screenshot]

Versions (please complete the following information):

  • OS: macOS 14.2.1
  • K9s: v0.31.6 and v0.31.5
  • K8s: v1.26.10
derailed added a commit that referenced this issue Jan 18, 2024
@derailed (Owner) commented:
@djpenka Great catch! Thank you for this awesome bug report Dylan.

thejoeejoee pushed a commit to thejoeejoee/k9s that referenced this issue Feb 23, 2024
placintaalexandru pushed a commit to placintaalexandru/k9s that referenced this issue Apr 3, 2024
@justinharringa commented:
Hey there, first of all, thanks for k9s!

I'm seeing this behavior in version 0.32.5. Has there been a regression?
