Call the offending get_client_config only when required. #5583
Conversation
Awesome; thanks! It is pretty similar to what was happening in kubeapps-apis: generating the client is an expensive call that requires plenty of calls, and we were creating it again and again for each request.
Yeah - this particular call was a lot less expensive (it's not doing lots of network calls, unlike creating the REST mapper in kubeapps-apis), but when called concurrently 50 times, it was certainly starving the CPU on my system :)
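To make the concurrency point above concrete, here is a self-contained, hypothetical sketch (not pinniped-proxy code; the `expensive_client_config` stand-in and its cost are made up) contrasting ~50 concurrent requests that each pay for an expensive call with the same requests sharing a once-only evaluation:

```rust
use std::sync::OnceLock;
use std::thread;
use std::time::Instant;

// Stand-in for an expensive, CPU-bound per-request call (the real call
// discussed in this thread cost roughly 4-100ms of CPU per request).
fn expensive_client_config() -> u64 {
    (0..5_000_000u64).fold(0, |acc, i| acc.wrapping_add(i.wrapping_mul(31)))
}

// Shared result so the expensive work is done at most once.
static CACHED: OnceLock<u64> = OnceLock::new();

fn main() {
    // 50 concurrent "requests", each paying the full cost: CPU is starved.
    let start = Instant::now();
    let handles: Vec<_> = (0..50)
        .map(|_| thread::spawn(expensive_client_config))
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    println!("every request pays the cost: {:?}", start.elapsed());

    // Same 50 "requests", but the expensive work is evaluated at most once.
    let start = Instant::now();
    let handles: Vec<_> = (0..50)
        .map(|_| thread::spawn(|| *CACHED.get_or_init(expensive_client_config)))
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    println!("evaluated at most once: {:?}", start.elapsed());
}
```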
Signed-off-by: Michael Nelson [email protected]
Reduces the time to proxy the ~50 API calls (issued when the Go kubeapps-apis initialises its REST mapper) from 4-6s down to a consistent 1.3-1.6s (on my single computer running a cluster including pinniped-proxy).
Description of the change
After more profiling, the remaining offender was the call to create a kube::Client, which ranged from 4-100ms (of CPU + interrupts - not sure if there was any I/O) for every single request. So when 50 requests are issued concurrently, it was starving the available CPU (on my single laptop running a cluster etc.).

The call to create the kube::Client depends only on the target k8s API server, so it's easy to cache, as the number of different results is equal to the number of clusters targeted by Kubeapps.

EDIT: Actually, I just realised we may not even need to cache it, since we can simply move where it's evaluated so we only evaluate it when the token was not cached. I'll try that before marking ready.

Benefits
Pinniped-proxy handles the 50 concurrent requests/responses in a more reasonable time.
Possible drawbacks
Applicable issues
Additional information
Without this change, the main cause of CPU usage is the call to get_client_config, which delays the start of all the proxy requests. After the change, the cost of the call is effectively zero.
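For illustration only, here is a minimal sketch of the "only when required" idea described above. It is not the actual pinniped-proxy code: the `TokenCache` and `exchange_token` names and shapes are assumptions, and strings stand in for the real config and token types. The point is simply that the expensive get_client_config call is moved inside the cache-miss branch, so it is only evaluated when the token was not already cached:

```rust
use std::collections::HashMap;

/// Hypothetical token cache keyed by (cluster, incoming credential).
/// In a real proxy this would be shared and thread-safe (e.g. behind a
/// RwLock); a plain HashMap keeps the sketch short.
struct TokenCache {
    tokens: HashMap<String, String>,
}

impl TokenCache {
    fn get(&self, key: &str) -> Option<String> {
        self.tokens.get(key).cloned()
    }
    fn insert(&mut self, key: String, token: String) {
        self.tokens.insert(key, token);
    }
}

/// Stand-in for the expensive call profiled in this PR (builds the client
/// configuration for the target API server; costs CPU on every invocation).
fn get_client_config(k8s_api_server_url: &str) -> Result<String, String> {
    Ok(format!("client-config-for-{}", k8s_api_server_url))
}

/// Stand-in for the credential exchange that actually needs the client config.
fn exchange_token(_client_config: &str, incoming_token: &str) -> Result<String, String> {
    Ok(format!("exchanged-{}", incoming_token))
}

/// Before the change, get_client_config() was evaluated on every request,
/// even when the exchanged token was already cached. After the change it is
/// only evaluated on a cache miss.
fn token_for_request(
    cache: &mut TokenCache,
    k8s_api_server_url: &str,
    incoming_token: &str,
) -> Result<String, String> {
    let key = format!("{}:{}", k8s_api_server_url, incoming_token);
    if let Some(token) = cache.get(&key) {
        // Cache hit: no client config is needed at all.
        return Ok(token);
    }
    // Cache miss: only now pay for the expensive config creation.
    let client_config = get_client_config(k8s_api_server_url)?;
    let token = exchange_token(&client_config, incoming_token)?;
    cache.insert(key, token.clone());
    Ok(token)
}

fn main() {
    let mut cache = TokenCache { tokens: HashMap::new() };
    // The first request pays for get_client_config(); repeats are served from the cache.
    let t1 = token_for_request(&mut cache, "https://cluster.example", "abc").unwrap();
    let t2 = token_for_request(&mut cache, "https://cluster.example", "abc").unwrap();
    assert_eq!(t1, t2);
}
```

If caching the client itself were still needed, the same structure would extend naturally to a per-cluster map of client configs, since (as noted in the description) the config depends only on the target API server.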