chore(reexecute/c): add back scraping of multiple metrics #4673
base: master
Conversation
Pull request overview
This PR reintroduces the scraping of multiple metrics for the re-execution benchmark test, leveraging the merged PR #4640 that enables registering multiple benchmark results. Instead of reporting only the average mGas/s, the code now reports granular MeterVM sub-metrics (block parse, verify, and accept times) to provide more precise performance assessment.
Key Changes:
- Refactored metrics collection to support both counter and gauge metric types
- Added reporting of MeterVM-specific metrics (block_parse, block_verify, block_accept)
- Introduced ms/ggas (milliseconds per gigagas) as an additional performance metric
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| tests/reexecute/c/vm_reexecute.go | Removed original metric collection functions that only supported counter metrics |
| tests/reexecute/c/metrics.go | Added new metrics file with enhanced metric collection supporting multiple metric types and MeterVM-specific metrics |
```go
	"github.com/ava-labs/avalanchego/tests"
)

type metricKind uint
```
Copilot AI, Dec 16, 2025:
The type name 'metricKind' should follow Go conventions and be renamed to 'MetricKind' (exported) or use a more descriptive name like 'metricType' to better convey that it represents the type of Prometheus metric being queried.
```go
	mgas    float64 = 1_000_000
	ggas    float64 = 1_000_000_000
	nsPerMs float64 = 1_000_000
)

mgasPerSecond := (totalGas / mgas) / elapsed.Seconds()
tool.addResult(mgasPerSecond, "mgas/s")

totalGGas := totalGas / ggas
msPerGGas := (float64(elapsed) / nsPerMs) / totalGGas
tool.addResult(msPerGGas, "ms/ggas")

for _, metric := range meterVMMetrics {
	metricVal, err := getMetricValue(registry, metric)
	r.NoError(err)

	metricValMS := (metricVal / nsPerMs) / totalGGas
```
Copilot AI, Dec 16, 2025:
The variable 'nsPerMs' uses an ambiguous abbreviation. Consider renaming to 'nanosecondsPerMillisecond' for better clarity, as the conversion constant's purpose is not immediately obvious.
Suggested change, before:

```go
	mgas    float64 = 1_000_000
	ggas    float64 = 1_000_000_000
	nsPerMs float64 = 1_000_000
)
mgasPerSecond := (totalGas / mgas) / elapsed.Seconds()
tool.addResult(mgasPerSecond, "mgas/s")
totalGGas := totalGas / ggas
msPerGGas := (float64(elapsed) / nsPerMs) / totalGGas
tool.addResult(msPerGGas, "ms/ggas")
for _, metric := range meterVMMetrics {
	metricVal, err := getMetricValue(registry, metric)
	r.NoError(err)
	metricValMS := (metricVal / nsPerMs) / totalGGas
```

After:

```go
	mgas                      float64 = 1_000_000
	ggas                      float64 = 1_000_000_000
	nanosecondsPerMillisecond float64 = 1_000_000
)
mgasPerSecond := (totalGas / mgas) / elapsed.Seconds()
tool.addResult(mgasPerSecond, "mgas/s")
totalGGas := totalGas / ggas
msPerGGas := (float64(elapsed) / nanosecondsPerMillisecond) / totalGGas
tool.addResult(msPerGGas, "ms/ggas")
for _, metric := range meterVMMetrics {
	metricVal, err := getMetricValue(registry, metric)
	r.NoError(err)
	metricValMS := (metricVal / nanosecondsPerMillisecond) / totalGGas
```
Why this should be merged
With #4640 merged, we can now register multiple benchmark results without corrupting the github-action-benchmark dashboard. This PR therefore re-registers the MeterVM metrics, which gives us a more precise assessment of the re-execution test (by reporting sub-metrics) rather than just the average mGas/s for the entire benchmark.
How this works
This PR is basically a copy-and-paste of #4369.
How this was tested
CI
Need to be documented in RELEASES.md?
No