k9s running very slowly when opening namespace with 13k pods #1462
Comments
A very similar phenomenon happens when viewing a very large list of namespaces in a cluster (18k+ namespaces). The UI becomes unresponsive, to the point of being unusable.
Seems this could possibly be solved with some pagination?
Similar issue here as well, even though our cluster is much smaller (~50 namespaces, ~600 pods, ~300 Helm installations). It is especially slow when doing Helm operations.
+10086. Is there a way to optimize this to use multiple cores?
Does anyone have any short-term solutions for this? I'm running with 10K jobs, each with a pod, and it's getting to the point where other tools are faster to use. I would be very happy to have the data refreshed much less frequently (e.g. on request, or every few minutes), but still be able to search for jobs & pods quickly.
@cesher Thank you all for piping in! This should be improved with the latest k9s rev v0.29.1. Please add details here if that's not the case. Thank you!
Hi @derailed, I've been having this issue for a while now. Anything I can do to help, please let me know!
@GMartinez-Sisti Very kind! Thank you Gabriel!
I was wondering if just having a lot of objects would be enough to trigger it, and I was able to reproduce it locally:

```sh
kind create cluster --name test
for i in {1..300}; do kubectl create ns "ns-$i"; done
```

If you try this and open the namespace view, the slowdown is noticeable. Hope it helps 😄

PS: happy to test dev builds! Just let me know.
@GMartinez-Sisti Very kind! Thank you Gabriel! Dang! You are correct!
Hey @derailed! Thanks for being so responsive to these comments thus far. While looking through the source code, I noticed that the ListOptions we provide whenever we try to get all objects tend to be pretty sparse. For example (and correct me if I'm wrong here), I think this is the line we hit most commonly when we get the list of objects within a namespace, while we use this for namespace collection. In both of these cases, we provide an "empty" ListOptions for the call. Lists are typically pretty expensive for Kubernetes to perform, since it has to do a significant amount of scanning against etcd to collect all the data before passing it back up. Based on my understanding, there are a few ways we could make this more responsive.
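To make that concrete, here is a minimal sketch of one such option: paging through the list with ListOptions.Limit/Continue, so the API server never has to return all 13k pods in a single response. This is hypothetical code, not taken from k9s; the namespace name and page size are made up.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Standard client-go setup from the local kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Fetch at most 500 pods per request instead of the whole namespace at once.
	opts := metav1.ListOptions{Limit: 500}
	total := 0
	for {
		pods, err := client.CoreV1().Pods("my-namespace").List(context.TODO(), opts)
		if err != nil {
			panic(err)
		}
		total += len(pods.Items)
		// The Continue token is set when more results remain; empty means done.
		if pods.Continue == "" {
			break
		}
		opts.Continue = pods.Continue
	}
	fmt.Printf("listed %d pods in pages of %d\n", total, opts.Limit)
}
```

Setting ResourceVersion: "0" on the options is another lever: the API server can then answer from its watch cache instead of etcd, at the cost of slightly stale data, though historically it doesn't combine with Limit-based pagination.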
@alexnovak Very kind! Thank you so much for this research Alex!! You are absolutely correct. I've spent a lot of time this cycle working on improving perf for the next drop, and accumulated additional gray hairs in the process... This is indeed tricky with the current state of affairs, since all filtering/parsing is done mostly client side and thus expects full lists. Tho I don't think k9s will ever accommodate the 13k-pods-in-a-single-ns scenario, with everyone's help we can make it more bearable. Let's see if we're a bit happier with the next drop and we'll dive into individual use cases thereafter...
Hi @derailed! Thank you for the great explanation.
I can add my opinion on this: with so many resources, and assuming they are pods, my goal is usually to check whatever is not healthy; otherwise it is indeed unmanageable. So maybe we could have an option to filter for “except Ready” when there are more than x pods.
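As a rough illustration of what such a filter could look like if pushed to the server side (hypothetical, not part of k9s; the function name is made up, and `status.phase!=Running` only approximates "not Ready", since readiness isn't exposed as a field selector):

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listNotRunning asks the API server only for pods whose phase is not Running,
// so a "show me what's unhealthy" view never has to pull thousands of healthy
// pods. The clientset is built the usual way (see the pagination sketch above).
func listNotRunning(ctx context.Context, client kubernetes.Interface, ns string) (*corev1.PodList, error) {
	return client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{
		// status.phase is one of the few pod fields the API server can
		// filter on server-side.
		FieldSelector: "status.phase!=Running",
	})
}
```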
@GMartinez-Sisti Thank you for the feedback Gabriel! Let's see if we're happier with v0.32.0... then we can track perf issues in separate tickets.
Describe the bug
We have a cluster with 13k pods in a single namespace. When I start k9s in that namespace, it can take minutes for the pods to load and for the terminal to become responsive again, and even then there is a lot of latency/stutter. I understand that 13k pods are not very common, and it would be totally fine to say that this is outside the scope of what k9s is designed to handle. But I saw this other ticket that seemed very similar, but for secrets. I wonder if we could do something like that? Only load metadata to list the pods and get the details on demand?
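For reference, a minimal sketch of that metadata-only idea using client-go's metadata client (a hypothetical helper, not something k9s does today; the function name is made up):

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/metadata"
	"k8s.io/client-go/rest"
)

// listPodMetadata lists only object metadata (names, labels, timestamps) for
// the pods in a namespace. The API server returns PartialObjectMetadata
// instead of full pod specs, which is far cheaper for a 13k-pod namespace;
// full details could then be fetched on demand for the selected pod.
func listPodMetadata(ctx context.Context, cfg *rest.Config, ns string) (*metav1.PartialObjectMetadataList, error) {
	md, err := metadata.NewForConfig(cfg)
	if err != nil {
		return nil, err
	}
	pods := schema.GroupVersionResource{Version: "v1", Resource: "pods"}
	return md.Resource(pods).Namespace(ns).List(ctx, metav1.ListOptions{})
}
```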
To Reproduce
Steps to reproduce the behavior:
`/<pod-name-filter>`
Expected behavior
K9s does not slow down.
Screenshots
It's my employer's cluster, so I doubt I could post a picture here. Let me know if there is something else I can provide.
Versions (please complete the following information):
Additional context
The cluster is running on AWS EKS