
Reuse the http request object for http probes #115939

Open
SergeyKanzhelev opened this issue Feb 21, 2023 · 24 comments · May be fixed by #126266
Assignees
Labels
area/kubelet good first issue Denotes an issue ready for a new contributor, according to the "help wanted" guidelines. help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. sig/node Categorizes an issue or PR as relevant to SIG Node.

Comments

@SergeyKanzhelev
Member

Today we construct a new request object for every HTTP request made by the HTTP probes. This involves concatenating strings, setting headers, etc.

Probes are simple HTTP GET requests that are executed synchronously (for a single probe) and have no request body.

The optimization would be to reuse the request object across all probe executions. This would likely save some CPU and memory for the kubelet process.

Some code references:

  • DoProbe is executed today on each probe execution.
  • A new request is created for each execution.
  • A new object will need to be stored in the worker that, for HTTP probes, will hold the HTTP request object. This object will need to encapsulate the logic of the prober's runProbe method.

I'm marking this issue as a good first issue, as it doesn't require deep knowledge of Kubernetes. However, it is not an easy issue: a few parts of the code will be affected, and the change needs to be made very carefully.

Please review the code and make sure you understand the task before self-assigning the issue.

Implementation notes

  1. If you can share small benchmark results for probes with and without this optimization, that will be a good starting point. Note that for the benchmark, each probe can run at most once a second. But the number of probes is virtually unlimited, since we do not define a limit on the number of containers per Pod. Realistically we are talking about 110 pods (max) with maybe 2 containers each and 3 probes each (660 probes).
  2. The code change may be big, so make sure to help reviewers review it. One idea is to split the implementation into multiple commits to simplify the review: first, create the object encapsulating the logic of runProbe; second, store and initialize this object in the worker; and third, actually reuse the request object.

/sig node
/good-first-issue
/help-wanted
/area kubelet

@k8s-ci-robot
Contributor

@SergeyKanzhelev:
This request has been marked as suitable for new contributors.

Guidelines

Please ensure that the issue body includes answers to the following questions:

  • Why are we solving this issue?
  • To address this issue, are there any code changes? If there are code changes, what needs to be done in the code and what places can the assignee treat as reference points?
  • Does this issue have a zero-to-low barrier of entry?
  • How can the assignee reach out to you for help?

For more details on the requirements of such an issue, please see here and ensure that they are met.

If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-good-first-issue command.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added sig/node Categorizes an issue or PR as relevant to SIG Node. good first issue Denotes an issue ready for a new contributor, according to the "help wanted" guidelines. area/kubelet help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Feb 21, 2023
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.


@XDRAGON2002
Member

Hi @SergeyKanzhelev!
I'm a new contributor and would like to contribute to the project.

Since this is a good first issue, I suppose this would be a good starting point for contributing?

I would like to take this up.

Could you help me out in getting started with this if so?

@VishalPraneeth

Hello @SergeyKanzhelev @thockin
I'm a beginner to open-source and want to work on this good-first-issue
Can you please guide me on how to proceed with this
I've gone through the Contributing.md guidelines and have basic programming knowledge in Golang

@SergeyKanzhelev
Member Author

@XDRAGON2002, @VishalPraneeth, thank you for your interest. Please look at the code references I posted in the original issue for good starting points to understand the task.

@Sajiyah-Salat

I completely understand the issue, and you have explained it very well. So far I have contributed 4 PRs to Kubernetes, of which 2 were merged. I know Golang, but I am still not sure how to get started. Can you please help me with that?

@rayowang
Member

/assign

@rayowang
Member

rayowang commented Feb 23, 2023

Just recently I was checking kubelet-related issues and saw this code as well. I will use an object pool to implement the reuse of the related objects. @SergeyKanzhelev

@aroradaman
Member

Hi @SergeyKanzhelev
I tweaked the prober, did some CPU and memory profiling, and compared the results: overall it saved 32 MB of RAM and 2.27 s of CPU time when running 500 probers concurrently for 2 minutes, probing every second.

Benchmarking function: https://github.com/daman1807/kubernetes/blob/11f6565bf092128deaf9e3b080e4e9e4b92dc07b/pkg/probe/http/benchmark_test.go

Modification in prober for this benchmarking:
https://github.com/daman1807/kubernetes/blob/11f6565bf092128deaf9e3b080e4e9e4b92dc07b/pkg/probe/http/http.go#L65-L93

[attached: cpu_diff and mem_diff profiling screenshots]

@SergeyKanzhelev
Member Author

overall it saved 32 MB of RAM and 2.27 s of CPU time by running 500 probers concurrently for 2 minutes and probing every second

Very nice. I think it's reasonable to go ahead with this implementation, then. A 65 KB per-probe saving with reasonably straightforward refactoring is a very good saving.

@thockin
Member

thockin commented Feb 25, 2023 via email

@SergeyKanzhelev
Member Author

@aojea also did some work here and had notes about HTTP in particular.

This issue is only about reusing Go objects; no connection keep-alive or anything like that.

@aojea
Member

aojea commented Feb 26, 2023

Hi @SergeyKanzhelev I tweaked the prober and did some CPU and memory profiling and compared them; overall it saved 32 MB of RAM and 2.27 s of CPU time by running 500 probers concurrently for 2 minutes and probing every second.

@daman1807 can you publish the difference of the benchmarks before and after your change using benchstat https://pkg.go.dev/golang.org/x/perf/cmd/benchstat

EDIT

I see, the benchmark is not using the Go benchmark tooling.

And I understand Sergey's point now: it seems that NewRequestForHTTPGetAction does a lot of things, and caching its result can indeed save cycles.

@aroradaman
Member

@aojea I tried benchstat to compare performance, but the variance between samples is too high to judge anything. I ran the tests for 2 minutes with 100 and 500 pods doing probes every second. The results also look pretty weird to me; maybe the GC didn't run during the test 🤯

goos: darwin
goarch: amd64
pkg: k8s.io/kubernetes/pkg/kubelet/prober
cpu: VirtualApple @ 2.50GHz
                  │  old.txt   │             new.txt              │
                  │   sec/op   │   sec/op    vs base              │
Prober/pods_100-8   120.0 ± 0%   120.0 ± 0%       ~ (p=1.000 n=6)
Prober/pods_500-8   120.0 ± 0%   120.0 ± 0%  +0.00% (p=0.041 n=6)
geomean             120.0        120.0       +0.00%

                  │      old.txt       │                new.txt                │
                  │        B/op        │     B/op       vs base                │
Prober/pods_100-8   10973330.2Ki ± 17%   367.9Ki ± 44%  -100.00% (p=0.002 n=6)
Prober/pods_500-8    53956.745Mi ±  6%   1.812Mi ± 24%  -100.00% (p=0.002 n=6)
geomean                  23.48Gi         826.3Ki        -100.00%

                  │      old.txt      │               new.txt                │
                  │     allocs/op     │  allocs/op    vs base                │
Prober/pods_100-8   140414.314k ± 17%   3.126k ± 34%  -100.00% (p=0.002 n=6)
Prober/pods_500-8    707079.79k ±  6%   16.00k ±  8%  -100.00% (p=0.002 n=6)
geomean                  315.1M         7.072k        -100.00%

@ashutosh887

Let me work on this issue now, please.
/assign

@pacoxu
Member

pacoxu commented Apr 26, 2023

/assign @aman1807

let me work on this issue now please

as #116058 is already in progress.

@k8s-ci-robot
Contributor

@pacoxu: GitHub didn't allow me to assign the following users: aman1807.

Note that only kubernetes members with read permissions, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time.
For more information please see the contributor guide



@aroradaman
Member

/assign

@ashutosh887

What needs to be done here, @SergeyKanzhelev?

@igh9410

igh9410 commented Apr 28, 2024

Hi @SergeyKanzhelev!
I'm new to Kubernetes and want to contribute to the project.
It looks like there have been several PRs made already for this issue, but they haven't been closed yet. I was wondering if I could assign myself to this issue and work on it?
I noticed that this issue has been labeled as "maybe fixed by other PR."
Thank you!

@Taoup Taoup linked a pull request Jul 22, 2024 that will close this issue
@Taoup

Taoup commented Jul 22, 2024

/assign

@BeElectronicSakshi

Hi team, can you confirm whether this issue is still open?

@BeElectronicSakshi

/assign

@DevEmilio96

/assign

@DevEmilio96 DevEmilio96 removed their assignment Sep 24, 2024
@aroradaman aroradaman removed their assignment Nov 28, 2024