
[subsystem-benchmarks] Save results to json #3829

Merged (12 commits, Mar 26, 2024)
Conversation

@AndreiEres (Contributor) commented Mar 25, 2024

Here we add the ability to save subsystem benchmark results in JSON format so that they can be displayed as graphs.

To draw the graphs, the CI team will use [github-action-benchmark](https://github.com/benchmark-action/github-action-benchmark). Since we are using custom benchmarks, we need to prepare [a specific data type](https://github.com/benchmark-action/github-action-benchmark?tab=readme-ov-file#examples):

```json
[
    {
        "name": "CPU Load",
        "unit": "Percent",
        "value": 50
    }
]
```

Then we'll get graphs like this:

![example](https://raw.githubusercontent.com/rhysd/ss/master/github-action-benchmark/main.png)

[A live page with graphs](https://benchmark-action.github.io/github-action-benchmark/dev/bench/)
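As a rough sketch, the required shape could be produced in Rust with only the standard library. Note this is illustrative: the PR itself derives `serde::Serialize` for its `ChartItem` struct, and the `to_chart_json` function and field choices here are assumptions, not the actual polkadot-sdk API.

```rust
// Hypothetical sketch: hand-rolled JSON in the github-action-benchmark
// shape. The real code uses a serde Serialize derive instead.
struct ChartItem {
    name: String,
    unit: String,
    value: f64,
}

fn to_chart_json(items: &[ChartItem]) -> String {
    let body: Vec<String> = items
        .iter()
        .map(|i| {
            format!(
                "{{\"name\":\"{}\",\"unit\":\"{}\",\"value\":{}}}",
                i.name, i.unit, i.value
            )
        })
        .collect();
    format!("[{}]", body.join(","))
}

fn main() {
    let items = vec![ChartItem {
        name: "CPU Load".into(),
        unit: "Percent".into(),
        value: 50.0,
    }];
    // An f64 of 50.0 formats as "50", matching the example above.
    println!("{}", to_chart_json(&items));
}
```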

@AndreiEres AndreiEres changed the title Save subsystem-benchmark results to json [subsystem-benchmarks] Save results to json Mar 25, 2024
@AndreiEres AndreiEres added R0-silent Changes should not be mentioned in any release notes T12-benchmarks This PR/Issue is related to benchmarking and weights. labels Mar 25, 2024
@AndreiEres AndreiEres requested a review from ggwpez March 25, 2024 17:32
@@ -82,6 +82,25 @@ impl BenchmarkUsage {
_ => None,
}
}

pub fn to_json(&self) -> color_eyre::eyre::Result<String> {
Contributor: This is more like `to_chart_items`, because you also have Serialize/Deserialize derived for this structure, which can produce JSON.
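The reviewer's suggested split could look roughly like this. Everything except the `ChartItem` field shape is a hypothetical stand-in for the real `BenchmarkUsage` API, and the unit string is invented for illustration:

```rust
// Sketch of the reviewer's suggestion: a pure `to_chart_items`
// conversion, leaving JSON encoding to the derived Serialize impl.
#[derive(Debug, Clone, PartialEq)]
pub struct ChartItem {
    pub name: String,
    pub unit: String,
    pub value: f64,
}

/// Hypothetical stand-in for one measured resource.
pub struct ResourceUsage {
    pub resource_name: String,
    pub total: f64,
}

/// Hypothetical stand-in for the benchmark summary.
pub struct BenchmarkUsage {
    pub cpu_usage: Vec<ResourceUsage>,
}

impl BenchmarkUsage {
    /// Data conversion only; callers can serialize the result with serde.
    pub fn to_chart_items(&self) -> Vec<ChartItem> {
        self.cpu_usage
            .iter()
            .map(|r| ChartItem {
                name: r.resource_name.clone(),
                unit: "seconds".to_string(), // illustrative unit
                value: r.total,
            })
            .collect()
    }
}
```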

@@ -151,3 +170,10 @@ impl ResourceUsage {
}

type ResourceUsageCheck<'a> = (&'a str, f64, f64);

#[derive(Debug, Serialize)]
pub struct ChartItem {
Contributor: Is this a format our other tooling is expecting?

AndreiEres (Author): Yes, added to the PR description.

@sandreim (Contributor) left a comment:

What is missing is documentation about this charting feature and how to use it.

@sandreim (Contributor):

Also, would be great to share some screenshots of it in action 😁

@AndreiEres (Author):

> Also, would be great to share some screenshots of it in action 😁

Added to the PR description.

@paritytech-cicd-pr:

The CI pipeline was cancelled due to the failure of one of the required jobs.
Job name: cargo-clippy
Logs: https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/5660584

@AndreiEres AndreiEres added this pull request to the merge queue Mar 26, 2024
Merged via the queue into master with commit fd79b3b Mar 26, 2024
124 of 132 checks passed
@AndreiEres AndreiEres deleted the AndreiEres/sb-charts branch March 26, 2024 16:18
dharjeezy pushed a commit to dharjeezy/polkadot-sdk that referenced this pull request Apr 9, 2024
Co-authored-by: ordian <[email protected]>