feat: profiling kv cache index implementations (#108) #175
Conversation
Pull request overview
This PR introduces a profiling tool to benchmark and compare the performance of three KVIndex implementations: Cost Aware, Redis, and InMemory. The tool measures Add and Lookup operation times over multiple trials and reports averaged results, enabling developers to make informed decisions about which index implementation to use based on performance characteristics.
Key Changes:
- Adds a standalone profiling tool at tests/profiling/kv_cache_index/main.go that benchmarks Add and Lookup operations
- Implements configurable trials and key counts via command-line flags (a rough sketch of this wiring follows the list)
- Generates random workload keys and measures averaged performance across multiple runs
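As a rough illustration of that flag wiring: the sketch below is a guess at the driver's setup, not the PR's actual code. Only the variable names (numTrials, numKeys) and the profileCostIndex call are taken from the excerpts further down; the flag names and defaults are hypothetical.

package main

import (
	"flag"
	"fmt"
)

func main() {
	// Hypothetical flag names and defaults; only the variable names come from the PR.
	numTrials := flag.Int("num-trials", 5, "number of trials to average over")
	numKeys := flag.Int("num-keys", 10000, "number of keys to add and look up per trial")
	flag.Parse()

	result, err := profileCostIndex(*numTrials, *numKeys)
	if err != nil {
		fmt.Printf("Failed to profile cost index: %v\n", err)
	} else {
		fmt.Printf("[Cost Aware] Add: %v Lookup: %v\n", result.AddTime, result.LookupTime)
	}
}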
	if err != nil {
		fmt.Printf("Failed to profile cost index: %v\n", err)
	} else {
		fmt.Printf("[Cost Aware] Add: %v Lookup %v \n", result.AddTime, result.LookupTime)
Copilot AI · Nov 22, 2025
Missing colon after "Lookup" in the output format string. The output format should be consistent with lines 169 and 176, which have a colon after "Lookup".
| fmt.Printf("[Cost Aware] Add: %v Lookup %v \n", result.AddTime, result.LookupTime) | |
| fmt.Printf("[Cost Aware] Add: %v Lookup: %v \n", result.AddTime, result.LookupTime) |
	if err != nil {
		fmt.Printf("Failed to profile redis index: %v\n", err)
	} else {
		fmt.Printf("[Redis] Add: %v Lookup %v \n", result.AddTime, result.LookupTime)
Copilot AI · Nov 22, 2025
Missing colon after "Lookup" in the output format string. The output format should be consistent with line 176, which has a colon after "Lookup".
| fmt.Printf("[Redis] Add: %v Lookup %v \n", result.AddTime, result.LookupTime) | |
| fmt.Printf("[Redis] Add: %v Lookup: %v \n", result.AddTime, result.LookupTime) |
| fmt.Printf("[Cost Aware] Add: %v Lookup %v \n", result.AddTime, result.LookupTime) | ||
| } | ||
|
|
||
| result, err = profileRedisIndex(*numTrials, *numKeys) | ||
| if err != nil { | ||
| fmt.Printf("Failed to profile redis index: %v\n", err) | ||
| } else { | ||
| fmt.Printf("[Redis] Add: %v Lookup %v \n", result.AddTime, result.LookupTime) | ||
| } | ||
|
|
||
| result, err = profileInMemoryIndex(*numTrials, *numKeys) | ||
| if err != nil { | ||
| fmt.Printf("Failed to profile in memory index: %v\n", err) | ||
| } else { | ||
| fmt.Printf("[InMemory] Add: %v Lookup %v \n", result.AddTime, result.LookupTime) |
Copilot AI · Nov 22, 2025
Extra space before the newline. The format strings should end with "\n", not " \n", for consistency.
| fmt.Printf("[Cost Aware] Add: %v Lookup %v \n", result.AddTime, result.LookupTime) | |
| } | |
| result, err = profileRedisIndex(*numTrials, *numKeys) | |
| if err != nil { | |
| fmt.Printf("Failed to profile redis index: %v\n", err) | |
| } else { | |
| fmt.Printf("[Redis] Add: %v Lookup %v \n", result.AddTime, result.LookupTime) | |
| } | |
| result, err = profileInMemoryIndex(*numTrials, *numKeys) | |
| if err != nil { | |
| fmt.Printf("Failed to profile in memory index: %v\n", err) | |
| } else { | |
| fmt.Printf("[InMemory] Add: %v Lookup %v \n", result.AddTime, result.LookupTime) | |
| fmt.Printf("[Cost Aware] Add: %v Lookup %v\n", result.AddTime, result.LookupTime) | |
| } | |
| result, err = profileRedisIndex(*numTrials, *numKeys) | |
| if err != nil { | |
| fmt.Printf("Failed to profile redis index: %v\n", err) | |
| } else { | |
| fmt.Printf("[Redis] Add: %v Lookup %v\n", result.AddTime, result.LookupTime) | |
| } | |
| result, err = profileInMemoryIndex(*numTrials, *numKeys) | |
| if err != nil { | |
| fmt.Printf("Failed to profile in memory index: %v\n", err) | |
| } else { | |
| fmt.Printf("[InMemory] Add: %v Lookup %v\n", result.AddTime, result.LookupTime) |
// TODO @kartikx: Use a more realistic workload if possible.
func generateWorkloadKeys(numKeys int) []kvblock.Key {
	// Uses time as seed to ensure that different profiling runs get different keys.
	randGen := rand.New(rand.NewPCG(42, uint64(time.Now().UnixNano())))
Copilot AI · Nov 22, 2025
The random number generator is seeded with a fixed seed (42) combined with the current time, which contradicts the comment on line 39. The fixed seed of 42 means that if the same time value is obtained (or if profiling is run very quickly in succession), the same random keys could be generated. Consider removing the fixed seed and using only the time-based seed: rand.New(rand.NewPCG(uint64(time.Now().UnixNano()), uint64(time.Now().UnixNano()))) or simply use two different time-based values.
Suggested change:
-	randGen := rand.New(rand.NewPCG(42, uint64(time.Now().UnixNano())))
+	randGen := rand.New(rand.NewPCG(uint64(time.Now().UnixNano()), uint64(time.Now().UnixNano())))
	for i := range numTrials {
		ctx := context.Background()

		indexConfig := createConfig()
		index, err := kvblock.NewIndex(ctx, indexConfig)
		if err != nil {
			return IndexProfileResult{}, fmt.Errorf("failed to create index: %w", err)
		}

		result, err := measureIndexRun(ctx, index, "pod1", numKeys)
		if err != nil {
			return IndexProfileResult{}, fmt.Errorf("failed to profile index: %w", err)
		}
Copilot AI · Nov 22, 2025
[nitpick] Creating a new index instance for each trial may skew profiling results due to initialization overhead. Consider clarifying whether this is intentional (to measure cold-start performance) or if the index should be created once outside the loop for warmed-up performance measurement. The current approach profiles both initialization and operation costs together.
Suggested change:
-	for i := range numTrials {
-		ctx := context.Background()
-		indexConfig := createConfig()
-		index, err := kvblock.NewIndex(ctx, indexConfig)
-		if err != nil {
-			return IndexProfileResult{}, fmt.Errorf("failed to create index: %w", err)
-		}
-		result, err := measureIndexRun(ctx, index, "pod1", numKeys)
-		if err != nil {
-			return IndexProfileResult{}, fmt.Errorf("failed to profile index: %w", err)
-		}
+	ctx := context.Background()
+	indexConfig := createConfig()
+	index, err := kvblock.NewIndex(ctx, indexConfig)
+	if err != nil {
+		return IndexProfileResult{}, fmt.Errorf("failed to create index: %w", err)
+	}
+	for i := range numTrials {
+		result, err := measureIndexRun(ctx, index, "pod1", numKeys)
+		if err != nil {
+			return IndexProfileResult{}, fmt.Errorf("failed to profile index: %w", err)
+		}
func measureIndexRun(ctx context.Context, index kvblock.Index, podName string, numKeys int) (IndexProfileResult, error) {
	keys := generateWorkloadKeys(numKeys)

	podEntries := []kvblock.PodEntry{{PodIdentifier: podName, DeviceTier: "gpu"}}
Copilot AI · Nov 22, 2025
[nitpick] The empty podIdentifierSet means all pods will be returned during lookup (as documented in the Index interface). Consider adding a comment to clarify this is intentional, as it may be confusing why an empty set is used instead of including "pod1" to measure filtered lookup performance.
Suggested change:
 	podEntries := []kvblock.PodEntry{{PodIdentifier: podName, DeviceTier: "gpu"}}
+	// Intentionally use an empty podIdentifierSet to return all pods during lookup,
+	// as documented in the Index interface. This measures unfiltered lookup performance.
sagiahrac left a comment
Thanks for the contribution @kartikx!
Maybe we can refactor this into a standard Go benchmark using the testing package. This approach is better for project stability as it automatically adjusts b.N (the number of keys) to achieve stable timing, which allows us to remove the manual numTrials and easily integrate with CI. It also handles warmups automatically and dynamically.
You can split Add and Lookup into 2 separate benchmarks for simplicity, or reset the timer after the Add phase.
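A minimal sketch of the timer-reset variant, reusing the createConfig and generateWorkloadKeys helpers from this PR. The index.Add and index.Lookup call shapes and the import path are placeholders for the real kvblock.Index API, not verified signatures:

package main // would sit alongside the PR's main.go and reuse its helpers

import (
	"context"
	"testing"

	"github.com/llm-d/llm-d-kv-cache-manager/pkg/kvcache/kvblock" // import path assumed
)

func BenchmarkIndexLookup(b *testing.B) {
	ctx := context.Background()
	index, err := kvblock.NewIndex(ctx, createConfig())
	if err != nil {
		b.Fatalf("failed to create index: %v", err)
	}

	// b.N plays the role of numKeys; the testing package grows it until timings stabilize.
	keys := generateWorkloadKeys(b.N)
	podEntries := []kvblock.PodEntry{{PodIdentifier: "pod1", DeviceTier: "gpu"}}

	// Populate the index; this time is discarded by ResetTimer below.
	for _, key := range keys {
		if err := index.Add(ctx, key, podEntries); err != nil { // placeholder call shape
			b.Fatalf("add failed: %v", err)
		}
	}

	// Only the Lookup phase below is measured. A separate BenchmarkIndexAdd
	// (without the reset) would cover the Add phase.
	b.ResetTimer()
	for _, key := range keys {
		if _, err := index.Lookup(ctx, key, nil); err != nil { // placeholder call shape
			b.Fatalf("lookup failed: %v", err)
		}
	}
}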
// TODO @kartikx: Use a more realistic workload if possible.
func generateWorkloadKeys(numKeys int) []kvblock.Key {
	// Uses time as seed to ensure that different profiling runs get different keys.
	randGen := rand.New(rand.NewPCG(42, uint64(time.Now().UnixNano())))
Can you generate the same workload keys for all profiling sessions? That way, the only difference will be the indexer implementation.
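One way to do that, as a sketch assuming the function keeps its current shape, is to drop the time-based seed so every session draws the identical key sequence. The ChunkHash field used below is an assumption about kvblock.Key, not taken from the PR:

// Fixed PCG seeds: every profiling session generates the same keys, so the
// only variable between runs is the index implementation under test.
func generateWorkloadKeys(numKeys int) []kvblock.Key {
	randGen := rand.New(rand.NewPCG(42, 1024))
	keys := make([]kvblock.Key, 0, numKeys)
	for range numKeys {
		// ChunkHash is assumed to be the hash-bearing field of kvblock.Key.
		keys = append(keys, kvblock.Key{ChunkHash: randGen.Uint64()})
	}
	return keys
}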
This PR adds a profiling test to compare the various KVIndex implementations. It adds and looks up keys for each implementation and reports the average over a specified number of trials.
Requested Feedback:
Issue: #108