
Re-add benchmarking as required workflow before PR merges #1013

Open
jak-pan opened this issue Feb 2, 2025 · 2 comments

Comments

@jak-pan
Contributor

jak-pan commented Feb 2, 2025

Currently there is no automated or semi-automated way to run benchmarks in the repo; they are run manually.

We need to be able to automate benchmarking on the reference machine before PRs are merged to master.
Correct benchmarks are crucial to the security of the network; forgetting to re-run them, or missing an important change, can cause a chain stall.

TL;DR: Compare the old and new benchmark results; if the new results differ in the number of reads/writes, or by more than some threshold percentage (let's say 1%), force an update of the benchmarks (see the sketch below).

We had a bench wizard before, can we revisit this? Why did it stop working?
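
A minimal sketch of the comparison step described in the TL;DR above, assuming the output of two benchmark runs has already been parsed into a per-extrinsic summary; the `Bench` struct, its field names, the 1% threshold, and the example extrinsic are illustrative assumptions, not the repo's actual types or benchmark output format:

```rust
use std::collections::BTreeMap;

/// Per-extrinsic benchmark summary (hypothetical: assumes the generated
/// weights have already been parsed into this shape).
#[derive(Debug, Clone, Copy)]
struct Bench {
    ref_time: u64,
    reads: u64,
    writes: u64,
}

/// Flag extrinsics whose new results differ from the old ones by more than
/// `threshold_pct` in ref_time, or by any amount in reads/writes.
fn diff_benches(
    old: &BTreeMap<String, Bench>,
    new: &BTreeMap<String, Bench>,
    threshold_pct: f64,
) -> Vec<String> {
    let mut flagged = Vec::new();
    for (name, n) in new {
        match old.get(name) {
            None => flagged.push(format!("{name}: new extrinsic, no previous benchmark")),
            Some(o) => {
                let delta_pct = (n.ref_time as f64 - o.ref_time as f64).abs()
                    / (o.ref_time.max(1) as f64)
                    * 100.0;
                if delta_pct > threshold_pct || n.reads != o.reads || n.writes != o.writes {
                    flagged.push(format!(
                        "{name}: ref_time {} -> {} ({delta_pct:.2}%), reads {} -> {}, writes {} -> {}",
                        o.ref_time, n.ref_time, o.reads, n.reads, o.writes, n.writes
                    ));
                }
            }
        }
    }
    flagged
}

fn main() {
    // Toy data standing in for the parsed output of two benchmark runs.
    let old = BTreeMap::from([(
        "transfer".to_string(),
        Bench { ref_time: 50_000_000, reads: 3, writes: 2 },
    )]);
    let new = BTreeMap::from([(
        "transfer".to_string(),
        Bench { ref_time: 51_000_000, reads: 4, writes: 2 },
    )]);

    let flagged = diff_benches(&old, &new, 1.0);
    if flagged.is_empty() {
        println!("Benchmarks unchanged within threshold.");
    } else {
        for line in &flagged {
            eprintln!("BENCHMARK CHANGED: {line}");
        }
        // A non-zero exit code fails the CI step, forcing a re-benchmark (or a
        // reviewed commit of the new weights) before the PR can be merged.
        std::process::exit(1);
    }
}
```

Run in CI against the reference machine's output, the non-zero exit would block the merge until the weights are refreshed or the change is explicitly acknowledged.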

@dmoka
Contributor

dmoka commented Feb 3, 2025

One benefit I see in updating them manually is that it gives us a chance to review the bench results locally before committing them, potentially spotting some issues early.

If they are updated automatically, we might easily overlook some results. A threshold can also help spot big changes (for example, an extrinsic becoming 10-20% cheaper by accident or because of a wrong benchmark function); the question is what the right threshold is, since sometimes a 10-20% change is also acceptable.

I still like the idea, we just need to make it bulletproof.
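
One way the threshold concern above could be handled is with per-extrinsic overrides for reviewed exceptions, sketched below; the names, numbers, and the override mechanism itself are illustrative assumptions, not existing project configuration:

```rust
use std::collections::HashMap;

/// Threshold applied when an extrinsic has no reviewed exception on record.
const DEFAULT_THRESHOLD_PCT: f64 = 1.0;

/// Look up the allowed change for an extrinsic; a reviewed exception (e.g. an
/// intentional storage-layout change) can temporarily permit a larger swing.
fn allowed_threshold_pct(name: &str, overrides: &HashMap<&str, f64>) -> f64 {
    overrides.get(name).copied().unwrap_or(DEFAULT_THRESHOLD_PCT)
}

fn main() {
    // Hypothetical reviewed exception, acknowledged for this PR only.
    let overrides = HashMap::from([("router_sell", 20.0)]);

    assert_eq!(allowed_threshold_pct("router_sell", &overrides), 20.0);
    assert_eq!(allowed_threshold_pct("transfer", &overrides), 1.0);
    println!("per-extrinsic thresholds resolved");
}
```

Keeping the exceptions in a reviewed file in the repo would leave an audit trail for every case where a large change was deliberately accepted.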

@jak-pan
Contributor Author

jak-pan commented Feb 9, 2025

> If they are updated automatically, we might easily overlook some results.

To be clear: I don't want to push the changes automatically. I want to check whether the benchmarks changed and sound the alarm if they did. I would say that's the opposite of overlooking them.

We should then decide whether we made a mistake, whether something should be improved, or whether to simply commit the re-benchmarked results.
