List pods to find those relevant for runtime detection instead of caching the workload objects #2325
Conversation
if err != nil {
	return reconcile.Result{}, err
var selectedPods []corev1.Pod
for _, pod := range podList.Items {
Consider iterating by index and accessing the elements directly to avoid copying large structs.
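For reference, a minimal sketch of that suggestion applied to a loop like the one in the diff, iterating by index instead of copying each corev1.Pod into a loop variable. The isRelevant filter below is a hypothetical stand-in for the real runtime-detection check and is not part of this PR:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isRelevant is a hypothetical placeholder for the actual per-pod
// runtime-detection filter; it is not part of the PR.
func isRelevant(pod *corev1.Pod) bool {
	return pod.Status.Phase == corev1.PodRunning
}

// selectPods iterates by index so each corev1.Pod (a large struct) is not
// copied into a loop variable on every iteration.
func selectPods(podList *corev1.PodList) []corev1.Pod {
	selected := make([]corev1.Pod, 0, len(podList.Items))
	for i := range podList.Items {
		if isRelevant(&podList.Items[i]) {
			selected = append(selected, podList.Items[i])
		}
	}
	return selected
}

func main() {
	pods := &corev1.PodList{Items: []corev1.Pod{{Status: corev1.PodStatus{Phase: corev1.PodRunning}}}}
	fmt.Println(len(selectPods(pods)))
}

The append still copies the Pods that are selected, but the per-iteration copy of every Pod in the list is avoided.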
package main

import (
	"testing"
)

type SubStruct struct {
	X int
	Y float64
	Z string
}

type BigStruct struct {
	ID       int
	Name     string
	Values   [10]float64  // Fixed-size array
	Numbers  []int        // Slice
	Sub      *SubStruct   // Pointer to another struct
	SubList  []*SubStruct // Slice of pointers to structs
	ByteData []byte       // Large slice for simulating memory usage
}

func createBigStructSlice(n int) []BigStruct {
	slice := make([]BigStruct, n)
	for i := range slice {
		sub := &SubStruct{X: i, Y: float64(i) * 1.1, Z: "sub"}
		slice[i] = BigStruct{
			ID:       i,
			Name:     "BigStruct",
			Values:   [10]float64{1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0},
			Numbers:  []int{1, 2, 3, 4, 5},
			Sub:      sub,
			SubList:  []*SubStruct{sub, sub, sub, sub, sub, sub, sub, sub, sub, sub, sub, sub},
			ByteData: make([]byte, 4096), // Simulating some memory-heavy data
		}
	}
	return slice
}

func BenchmarkRangeValue(b *testing.B) {
	data := createBigStructSlice(10)
	var out []BigStruct
	b.ResetTimer()
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		for _, elem := range data {
			elem.ID += 1
			out = append(out, elem)
		}
	}
}

func BenchmarkRangeIndex(b *testing.B) {
	data := createBigStructSlice(10)
	var out []BigStruct
	b.ResetTimer()
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		for j := range data {
			data[j].ID += 1
			out = append(out, data[j])
		}
	}
}
go test -bench=. -benchmem
goos: darwin
goarch: arm64
pkg: main/slice_benchmark
cpu: Apple M2 Pro
BenchmarkRangeValue-12 573656 2646 ns/op 10742 B/op 0 allocs/op
BenchmarkRangeIndex-12 877153 2397 ns/op 10981 B/op 0 allocs/op
From the benchmark it looks like it doesn't make a noticeable difference when we have allocations inside the loop. A difference is noticeable if the loop does not allocate:
func BenchmarkRangeValue(b *testing.B) {
	data := createBigStructSlice(10)
	b.ResetTimer()
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		for _, elem := range data {
			elem.ID += 1
		}
	}
}

func BenchmarkRangeIndex(b *testing.B) {
	data := createBigStructSlice(10)
	b.ResetTimer()
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		for j := range data {
			data[j].ID += 1
		}
	}
}
go test -bench=. -benchmem
goos: darwin
goarch: arm64
pkg: main/slice_benchmark
cpu: Apple M2 Pro
BenchmarkRangeValue-12 36781131 32.02 ns/op 0 B/op 0 allocs/op
BenchmarkRangeIndex-12 287294347 4.148 ns/op 0 B/op 0 allocs/op
PASS
This PR refactors the runtime detection controllers to avoid Get or List operations on workload objects (Deployments, StatefulSets and DaemonSets). This is done by modifying the instrumentationconfig reconciler to list the Pods associated with the reconciled InstrumentationConfig. The Odiglet cache is already configured to only include Pods within the same node, hence this list operation will only return Pods on the same node and in the same namespace as the InstrumentationConfig.
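For context, a minimal sketch of what such a list operation can look like with controller-runtime, assuming selection by namespace and a label selector; the actual selectors used in the PR and the node-scoped cache configuration in Odiglet may differ:

package runtimedetection

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// listWorkloadPods lists the Pods in the given namespace that match the
// given labels. With a client whose cache is restricted to the current node,
// the result only contains Pods scheduled on that node.
func listWorkloadPods(ctx context.Context, c client.Client, namespace string, labels map[string]string) ([]corev1.Pod, error) {
	var podList corev1.PodList
	if err := c.List(ctx, &podList,
		client.InNamespace(namespace),
		client.MatchingLabels(labels),
	); err != nil {
		return nil, err
	}
	return podList.Items, nil
}

Because the list is served from the node-scoped informer cache rather than the API server, it avoids cluster-wide Get/List calls on the workload objects themselves.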