Slowness due to client-side throttling in v0.32.0 #2582
Comments
@Ubiquitine Thank you for this report! Yikes, not what I was expecting after a perf-improvement pass ;(
Hi, More info:
This is my config.yaml
UPD: more logs
Also experiencing pretty significant lag/slowness on 0.32.1. My k9s.log doesn't show any client-side throttling errors, but I'm wondering if I'm looking at the right log? Happy to attach logs/configs to help debug.
Versions (please complete the following information): OS: macOS 13.4
I've seen this slowness across the board. I work at a SaaS company and we have clusters in all three major clouds, with namespaces in the range of 20-200+ and pod counts from 200-5000+.
config.yaml
views.yaml
E: add version info
Just upgraded from 0.2x to 0.32.1 today, and everything is extremely slow. I work for a telco company remotely through a VPN. k9s is installed on Windows 10 Enterprise under WSL. There are no relevant entries in the k9s.* log files.
Let's see if we're happier on v0.32.2?
I have gotten the same issue |
I can confirm that the issue seems to be fixed in v0.32.2. At least in my case.
Same, using version 0.32.4 |
Describe the bug
After upgrading to v0.32.0, k9s started responding really slowly when switching resources after a few minutes of running.
To Reproduce
Steps to reproduce the behavior:
The INFO log shows lines like this during the issue:
I0304 18:39:43.076819 114467 request.go:697] Waited for 18.03660912s due to client-side throttling, not priority and fairness, request: GET:https://API_URL/apis/RESOURCES_PATHS
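For context on where a wait like that can come from: that log line is emitted by client-go's built-in client-side rate limiter, whose defaults are 5 requests/second with a burst of 10. The sketch below is a back-of-the-envelope model (my assumption about the mechanism, not k9s source code) showing how a batch of API discovery GETs against those defaults produces waits on the order of the 18 s seen above.

```python
# Toy model of client-go's client-side throttling (token bucket).
# Assumed defaults: qps=5, burst=10 (client-go's rest.Config defaults).
# If a client fires off a batch of N requests at once, each request
# beyond the burst has to wait for tokens to refill at `qps` per second.
def throttle_wait(n, qps=5.0, burst=10):
    """Approximate wait (seconds) before the n-th request of a batch is sent."""
    return max(0.0, (n - burst) / qps)

# e.g. ~100 resource-group discovery GETs (plausible on a cluster with
# many CRDs and 120+ namespaces): the last request waits about 18 s.
print(throttle_wait(100))  # -> 18.0
```

Under this model, the more API groups/CRDs the cluster exposes, the more discovery requests pile up behind the limiter, which would explain why large clusters are hit hardest.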
Expected behavior
Switching namespaces and resources should not take more than a couple of seconds (at least it didn't in previous versions).
Versions (please complete the following information):
Additional context
I also tried to clean ~/.kube/cache directory, but the issue keeps coming back.
To be fair, I have 120+ namespaces with deployments and jobs in each, but I guess something changed in the client behavior in 0.32.0 that introduced this throttling and causes huge delays.