ci: transformer benchmark measure transformer itself only (#5193)
The transformer benchmark now measures only the transformer itself. Parsing, building `Semantic`, and dropping `Semantic` all happen outside the measured section.

This should:

1. Give us greater visibility into how changes to the transformer affect its performance.
2. Reduce variance in transformer benchmarks, since they will no longer include the variance introduced by `SemanticBuilder`.

Not ready to merge yet. We should first add an "end to end" benchmark testing the entire compilation process (parse - semantic - transform - minify - codegen).

This PR depends on Boshen/criterion2.rs#49. For now, it points Oxc's `criterion2` dependency at that PR's branch as a git dependency; that can be changed once the upstream PR is merged.
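
For readers unfamiliar with the new API, here is a minimal sketch of how `iter_with_setup_wrapper` (the method added in Boshen/criterion2.rs#49 and used in the diff below) keeps per-iteration setup and teardown out of the measured section. The `criterion` import path, the `bench_sketch` name, and the toy workload are assumptions for illustration only, not part of this commit:

```rust
use criterion::{criterion_group, criterion_main, Criterion};

fn bench_sketch(criterion: &mut Criterion) {
    criterion.bench_function("sketch", |b| {
        b.iter_with_setup_wrapper(|runner| {
            // Setup runs on every iteration, but outside the measured section
            // (in the real benchmark: parse + build `Semantic`).
            let input: Vec<u64> = (0..10_000).collect();

            // Only the closure passed to `runner.run` is timed. Its return value
            // is dropped outside the measured section (in the real benchmark:
            // `TransformerReturn`, whose `ScopeTree`/`SymbolTable` are costly to drop).
            runner.run(|| input.iter().map(|n| n * 2).collect::<Vec<u64>>());

            // Teardown (dropping `input`) also stays outside the measured section.
        });
    });
}

criterion_group!(benches, bench_sketch);
criterion_main!(benches);
```

Because setup, teardown, and drops never enter the timed region, the reported numbers should track only the work inside `runner.run` — in the real benchmark, the transformer itself.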
overlookmotel committed Aug 27, 2024
1 parent a17cf33 commit b479afd
Showing 3 changed files with 40 additions and 19 deletions.
4 changes: 2 additions & 2 deletions Cargo.lock


2 changes: 1 addition & 1 deletion Cargo.toml
@@ -118,7 +118,7 @@ cfg-if = "1.0.0"
 compact_str = "0.8.0"
 console = "0.15.8"
 convert_case = "0.6.0"
-criterion2 = { version = "1.0.0", default-features = false }
+criterion2 = { version = "1.1.0", default-features = false }
 daachorse = { version = "1.0.0" }
 dashmap = "6.0.1"
 encoding_rs = "0.8.34"
53 changes: 37 additions & 16 deletions tasks/benchmark/benches/transformer.rs
@@ -14,30 +14,51 @@ fn bench_transformer(criterion: &mut Criterion) {
     for file in TestFiles::complicated().files() {
         let id = BenchmarkId::from_parameter(&file.file_name);
         let source_type = SourceType::from_path(&file.file_name).unwrap();
+        let source_text = file.source_text.as_str();

-        group.bench_with_input(id, &file.source_text, |b, source_text| {
-            // The whole transformation process needs to be benched otherwise it will end up with
-            // transforming an already transformed AST.
-            b.iter_with_large_drop(|| {
-                let allocator = Allocator::default();
+        // Create `Allocator` outside of `bench_function`, so same allocator is used for
+        // both the warmup and measurement phases
+        let mut allocator = Allocator::default();
+
+        group.bench_function(id, |b| {
+            b.iter_with_setup_wrapper(|runner| {
+                // Reset allocator at start of each iteration
+                allocator.reset();
+
+                // Create fresh AST + semantic data for each iteration
                 let ParserReturn { trivias, program, .. } =
                     Parser::new(&allocator, source_text, source_type).parse();
-                let transform_options = TransformOptions::default();
                 let program = allocator.alloc(program);
                 let (symbols, scopes) = SemanticBuilder::new(source_text, source_type)
                     .build(program)
                     .semantic
                     .into_symbol_table_and_scope_tree();
-                Transformer::new(
-                    &allocator,
-                    Path::new(&file.file_name),
-                    source_type,
-                    source_text,
-                    trivias,
-                    transform_options,
-                )
-                .build_with_symbols_and_scopes(symbols, scopes, program);
-                allocator
+
+                // Clone `trivias` (which is an `Arc`). We keep a 2nd copy, so the value is not dropped
+                // when `Transformer` is dropped inside the measured section.
+                // We clone `trivias` here rather than in `routine` to avoid the cloning being included
+                // in measure.
+                let trivias_copy = trivias.clone();
+
+                runner.run(|| {
+                    let transform_options = TransformOptions::default();
+                    let ret = Transformer::new(
+                        &allocator,
+                        Path::new(&file.file_name),
+                        source_type,
+                        source_text,
+                        trivias,
+                        transform_options,
+                    )
+                    .build_with_symbols_and_scopes(symbols, scopes, program);
+
+                    // Return the `TransformerReturn`, so it's dropped outside of the measured section.
+                    // `TransformerReturn` contains `ScopeTree` and `SymbolTable` which are costly to drop.
+                    // That's not central to transformer, so we don't want it included in this measure.
+                    ret
+                });
+
+                drop(trivias_copy);
             });
         });
     }