
stylus_benchmark #2827

Open · diegoximenes wants to merge 57 commits into master
Conversation

@diegoximenes (Contributor) commented Dec 11, 2024

Resolves NIT-2757

This PR adds the stylus_benchmark binary.
It will mainly be used to fine-tune the ink prices charged today.

It deterministically creates .wat programs with lots of instructions to be benchmarked.

If this PR is approved, more .wat program generators, benchmarking other instructions, will be added in a future PR.

This PR introduces two new WASM instructions to jit, start_benchmark and end_benchmark.
Code blocks between start_benchmark and end_benchmark instructions will be benchmarked.

stylus_benchmark uses jit as a library.
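
For illustration, a minimal sketch of the kind of generator described above (the "env" import names, module shape, and function body are assumptions for illustration, not the PR's actual code):

use std::{fs, path::PathBuf};

// Sketch: emit `number_of_ops` i32.add instructions, with the measured
// region delimited by the new start_benchmark/end_benchmark instructions.
fn generate_add_i32_wat(out_path: PathBuf) -> eyre::Result<()> {
    let number_of_ops = 20_000_000;

    let mut wat = String::new();
    wat.push_str("(module\n");
    wat.push_str("  (import \"env\" \"start_benchmark\" (func $start_benchmark))\n");
    wat.push_str("  (import \"env\" \"end_benchmark\" (func $end_benchmark))\n");
    wat.push_str("  (func $main\n");
    wat.push_str("    call $start_benchmark\n");
    wat.push_str("    i32.const 0\n");
    for _ in 0..number_of_ops {
        wat.push_str("    i32.const 1\n    i32.add\n");
    }
    wat.push_str("    drop\n");
    wat.push_str("    call $end_benchmark)\n");
    wat.push_str("  (start $main))\n");

    fs::write(out_path, wat)?;
    Ok(())
}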

@diegoximenes diegoximenes requested a review from tsahee December 11, 2024 19:27
@tsahee (Contributor) commented Dec 17, 2024

General approach seems great.
Would like to get more reviews, but I think it's already interesting to add a few benchmarks and start looking at results.

use std::path::PathBuf;

fn generate_add_i32_wat(mut out_path: PathBuf) -> eyre::Result<()> {
    let number_of_ops = 20_000_000;
A reviewer (Contributor) commented on this diff:

I'm skeptical of the faithfulness of this benchmark when compared to actual execution. Stylus programs are usually tiny, given the contract size limitation (24kb), so real programs will live in the CPU cache. Generating a big program that doesn't fit the CPU cache might shadow the processing time with the memory transfer overhead between RAM and the cache. My theory is that the memory transfer will become the bottleneck, and the measurement of instructions won't be precise. For instance, the addition instruction might appear much more expensive than in real usage because of the CPU caching problem.

Instead of creating a program with millions of instructions, I suggest you create a program with a few thousand instructions inside a loop. Since you have a few thousand instructions, the overhead of the loop increment and branch instructions should be minimal.
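
As a rough illustration of this suggestion (the function name and counts are illustrative, not from the PR), the generator would unroll a few thousand adds inside a wasm loop, keeping the code cache-resident while the executed instruction count stays in the millions:

// Sketch: a small unrolled block repeated by a loop counter, instead of
// millions of straight-line instructions.
fn generate_looped_add_i32_wat(ops_per_iteration: usize, iterations: u32) -> String {
    let unrolled = "      i32.const 1\n      i32.add\n".repeat(ops_per_iteration);
    format!(
        r#"(module
  (func $main
    (local $i i32)
    (local.set $i (i32.const {iterations}))
    (loop $repeat
      i32.const 0
{unrolled}      drop
      (local.tee $i (i32.sub (local.get $i) (i32.const 1)))
      br_if $repeat))
  (start $main))
"#
    )
}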

@diegoximenes (Contributor, Author) replied:

You are right :), the size of the program was impacting the benchmark performance a lot.
The strategy that you mentioned is around 5 times faster than not using a loop at all.

Instead of using toggle_benchmark to specify a single code block to be benchmarked, I added start_benchmark and end_benchmark instructions, which make it possible to specify multiple execution blocks to be benchmarked.
I tried to benchmark only the execution block inside the loop, but calling start_benchmark/end_benchmark multiple times introduces a performance overhead that is not worthwhile compared to having a single start_benchmark/end_benchmark block that includes the loop.

Thanks!!!
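
To make that trade-off concrete, here is a sketch of where the markers end up under this approach (the "env" import names and counts are assumptions): a single start_benchmark/end_benchmark pair wraps the whole loop, rather than one pair per iteration.

// Sketch of the benchmarked program's shape: the marker calls execute
// exactly once each, so their overhead stays out of the measurement.
const BENCHMARKED_SHAPE: &str = r#"(module
  (import "env" "start_benchmark" (func $start_benchmark))
  (import "env" "end_benchmark" (func $end_benchmark))
  (func $main
    (local $i i32)
    call $start_benchmark              ;; single start marker
    (local.set $i (i32.const 10000))
    (loop $repeat
      ;; unrolled instruction block under test goes here
      (local.tee $i (i32.sub (local.get $i) (i32.const 1)))
      br_if $repeat)
    call $end_benchmark)               ;; single end marker
  (start $main))"#;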

(Resolved review threads: arbitrator/arbutil/src/benchmark.rs, arbitrator/tools/stylus_benchmark/src/benchmark.rs)

let exec = &mut WasmEnv::default();

let module = exec_program(
A reviewer (Contributor) commented on this diff:

Is this code benchmarking only the JIT execution? I'm worried about the differences between the JIT and the interpreter, so it would be nice to benchmark non-JIT execution as well.

@diegoximenes (Contributor, Author) replied:

That is a good point.
By interpreter, do you mean execution through the prover's binary?

@tsahee, any takes on that?

@diegoximenes (Contributor, Author) commented Dec 23, 2024

> General approach seems great. Would like to get more reviews, but I think it's already interesting to add a few benchmarks and start looking at results.

I intend to add more scenarios to be benchmarked in a future PR, together with a more detailed look at results.

But here is the output of the benchmarks implemented in this PR, run on an Apple M3 Pro:

No scenario specified, benchmarking all scenarios

Benchmarking add_i32
Run 0, duration: 120.735917ms, ink_spent: Ink(29298411152)
Run 1, duration: 108.255875ms, ink_spent: Ink(29298411152)
Run 2, duration: 110.482833ms, ink_spent: Ink(29298411152)
Run 3, duration: 95.918417ms, ink_spent: Ink(29298411152)
Run 4, duration: 95.604125ms, ink_spent: Ink(29298411152)
Run 5, duration: 95.353792ms, ink_spent: Ink(29298411152)
Run 6, duration: 95.531375ms, ink_spent: Ink(29298411152)
After discarding top and bottom runs:
avg_duration: 99.926139ms, avg_ink_spent_per_micro_second: 293201
Benchmarking xor_i32
Run 0, duration: 96.054583ms, ink_spent: Ink(29298411152)
Run 1, duration: 96.208625ms, ink_spent: Ink(29298411152)
Run 2, duration: 97.552791ms, ink_spent: Ink(29298411152)
Run 3, duration: 96.658ms, ink_spent: Ink(29298411152)
Run 4, duration: 96.164417ms, ink_spent: Ink(29298411152)
Run 5, duration: 96.1905ms, ink_spent: Ink(29298411152)
Run 6, duration: 96.149083ms, ink_spent: Ink(29298411152)
After discarding top and bottom runs:
avg_duration: 96.187847ms, avg_ink_spent_per_micro_second: 304598
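
For reference on how these numbers relate, the reported metric appears to be total ink divided by the average duration in microseconds; this is an assumption inferred from the output (it matches add_i32 exactly, and xor_i32 to within a few units, possibly due to per-run averaging):

// Assumed formula behind avg_ink_spent_per_micro_second.
fn ink_per_microsecond(ink_spent: u64, avg_duration_ms: f64) -> u64 {
    (ink_spent as f64 / (avg_duration_ms * 1_000.0)).round() as u64
}

fn main() {
    // add_i32 above: 29298411152 ink over 99.926139 ms ≈ 293201 ink/µs
    assert_eq!(ink_per_microsecond(29_298_411_152, 99.926139), 293_201);
}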
