feat(ssa): constant_folding with revisits #10013

Closed

aakoshh wants to merge 20 commits into master from af/9963-const-fold-loop

Conversation


@aakoshh aakoshh commented Sep 26, 2025

Description

Problem*

Resolves #9963

Summary*

Changes constant_folding to schedule revisits of blocks under the following circumstances:

  • Say we have a CFG like b0 -> [b1, b2]. When in b2 we find a duplicate of an instruction from b1 and hoist it into the common dominator b0, schedule a revisit of b1 and then of b2, so that the instruction in b1 can be deduplicated against the one now in b0, and so that we can look for further instructions that can now be hoisted from b1 or b2 into b0. (It could potentially be enough to revisit only b1, since that can trigger hoisting from b2; but if that in turn schedules a revisit of b2, the originally scheduled visit would be skipped.)
  • Say we have a CFG like b0 -> [b1 -> b2 -> b3, b4], where we replaced an instruction in b2 with the results of its duplicate in b1, and then found another duplicate of the b1 instruction in b4. We hoisted the instruction from b4 into b0 and revisited b1, where it was replaced by the instruction now in b0. In that case, revisit b2 and b3 as well, as descendants of b1, so that the instructions in them can be re-resolved.
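The revisit scheduling described in these bullets can be pictured as a small worklist. The sketch below is purely illustrative (the real pass operates on SSA basic blocks; `BlockQueue` and `schedule_revisit` are hypothetical names, not the PR's actual API):

```rust
use std::collections::{HashSet, VecDeque};

/// Illustrative worklist: blocks are visited once, unless a revisit is
/// explicitly scheduled, e.g. after hoisting into a common dominator.
struct BlockQueue {
    queue: VecDeque<u32>,
    visited: HashSet<u32>,
}

impl BlockQueue {
    fn new(blocks: impl IntoIterator<Item = u32>) -> Self {
        Self { queue: blocks.into_iter().collect(), visited: HashSet::new() }
    }

    /// Pop the next unvisited (or revisit-scheduled) block.
    fn next(&mut self) -> Option<u32> {
        while let Some(block) = self.queue.pop_front() {
            if self.visited.insert(block) {
                return Some(block);
            }
        }
        None
    }

    /// Clear the visited flag and re-queue the block; any earlier pending
    /// entry for the same block is skipped in favour of this later one.
    fn schedule_revisit(&mut self, block: u32) {
        self.visited.remove(&block);
        self.queue.push_back(block);
    }
}

fn main() {
    // b0 -> [b1, b2]: processing b2 hoists into b0, so revisit b1 then b2.
    let mut queue = BlockQueue::new([0, 1, 2]);
    assert_eq!(queue.next(), Some(0));
    assert_eq!(queue.next(), Some(1));
    assert_eq!(queue.next(), Some(2)); // hoisting detected here
    queue.schedule_revisit(1);
    queue.schedule_revisit(2);
    assert_eq!(queue.next(), Some(1));
    assert_eq!(queue.next(), Some(2));
    assert_eq!(queue.next(), None);
    println!("ok");
}
```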

Other changes:

  • Fixes simplify to not insert a new result if the instruction simplified to itself (this can happen with Binary, which returns SimplifiedToInstruction(<maybe-self>) instead of None, and with Constrain, which can simplify to SimplifiedToInstructionMultiple(vec![<maybe-self>])).
  • When we cache an instruction that was already in the cache, replace the entry with the new value if the block it's in dominates the previous one. For example, if we find a duplicate of something from b1 in b2 and hoist it into b0, then the result in b0 immediately becomes active in the cache, so that a later occurrence in b3 is replaced rather than hoisted.
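The dominance-aware cache replacement can be sketched as follows. This is a hedged simplification: `dominates` here is a toy relation for the diamond CFG b0 -> [b1, b2] -> b3 (in the real pass it would come from the dominator tree), and the type names are illustrative, not the PR's:

```rust
use std::collections::HashMap;

type Block = u32;
type ValueId = u32;

/// Toy dominator relation for b0 -> [b1, b2] -> b3: b0 dominates every
/// block, and each block dominates itself.
fn dominates(a: Block, b: Block) -> bool {
    a == 0 || a == b
}

/// Maps an instruction (keyed by some hash of opcode and operands) to the
/// block defining the cached result.
struct InstructionCache {
    entries: HashMap<u64, (Block, ValueId)>,
}

impl InstructionCache {
    fn new() -> Self {
        Self { entries: HashMap::new() }
    }

    /// Keep the entry from the dominating block, so a result hoisted into
    /// b0 immediately shadows the per-branch copies.
    fn insert(&mut self, instruction: u64, block: Block, result: ValueId) {
        match self.entries.get(&instruction) {
            Some(&(old_block, _)) if !dominates(block, old_block) => {
                // The existing entry is not dominated by the new block; keep it.
            }
            _ => {
                self.entries.insert(instruction, (block, result));
            }
        }
    }

    fn get(&self, instruction: u64) -> Option<ValueId> {
        self.entries.get(&instruction).map(|&(_, result)| result)
    }
}

fn main() {
    let mut cache = InstructionCache::new();
    cache.insert(42, 1, 10); // first seen in b1
    cache.insert(42, 0, 20); // hoisted into b0: replaces the b1 entry
    assert_eq!(cache.get(42), Some(20));
    cache.insert(42, 2, 30); // b2 does not dominate b0: ignored
    assert_eq!(cache.get(42), Some(20));
    println!("ok");
}
```

With this replacement rule, a later occurrence of the instruction in b3 resolves straight to the b0 result instead of triggering another hoist.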

TODO:

  • Add a test that checks whether indirect dependents need to be revisited.
  • Investigate the long compilation time involving Poseidon hashes in regression_5252 and fold_2_to_17.
  • Think of some limit to how many times we can trigger revisits.

Additional Context

Documentation*

Check one:

  • No documentation needed.
  • Documentation included in this PR.
  • [For Experimental Features] Documentation to be submitted in a separate PR.

PR Checklist*

  • I have tested the changes locally.
  • I have formatted the changes with Prettier and/or cargo fmt on default settings.


github-actions bot commented Sep 26, 2025

Changes to Brillig bytecode sizes

Generated at commit: 35f5d7696174440c36d4012f229ba68bd0827951, compared to commit: 8ca4af784ce805900a8d5472830c9c28e92562b8

🧾 Summary (10% most significant diffs)

Program Brillig opcodes (+/-) %
lambda_from_array_inliner_max -240 ✅ -5.81%
lambda_from_array_inliner_min -240 ✅ -5.81%
lambda_from_array_inliner_zero -240 ✅ -5.81%
binary_operator_overloading_inliner_max -39 ✅ -11.27%
regression_9455_inliner_max -25 ✅ -53.19%
regression_9455_inliner_min -25 ✅ -53.19%
regression_9455_inliner_zero -25 ✅ -53.19%

Full diff report 👇
Program Brillig opcodes (+/-) %
tuple_inputs_inliner_min 288 (+1) +0.35%
tuple_inputs_inliner_zero 288 (+1) +0.35%
tuple_inputs_inliner_max 306 (+1) +0.33%
hashmap_inliner_zero 7,977 (-1) -0.01%
uhashmap_inliner_min 7,186 (-2) -0.03%
uhashmap_inliner_zero 6,922 (-2) -0.03%
hashmap_inliner_min 8,895 (-3) -0.03%
poseidon_bn254_hash_width_3_inliner_max 4,852 (-2) -0.04%
poseidon_bn254_hash_width_3_inliner_zero 4,423 (-2) -0.05%
regression_5252_inliner_max 4,019 (-2) -0.05%
poseidonsponge_x5_254_inliner_max 3,699 (-2) -0.05%
slices_inliner_max 1,684 (-4) -0.24%
nested_array_dynamic_inliner_max 1,601 (-10) -0.62%
regression_11294_inliner_min 290 (-2) -0.68%
nested_array_dynamic_inliner_min 1,327 (-10) -0.75%
nested_array_dynamic_inliner_zero 1,327 (-10) -0.75%
regression_9538_inliner_max 93 (-1) -1.06%
regression_9538_inliner_min 93 (-1) -1.06%
regression_9538_inliner_zero 93 (-1) -1.06%
array_oob_regression_7965_inliner_max 269 (-5) -1.82%
array_oob_regression_7965_inliner_min 269 (-5) -1.82%
array_oob_regression_7965_inliner_zero 269 (-5) -1.82%
regression_9496_inliner_max 534 (-12) -2.20%
regression_9496_inliner_min 534 (-12) -2.20%
regression_9496_inliner_zero 534 (-12) -2.20%
higher_order_functions_inliner_zero 721 (-29) -3.87%
lambda_from_array_inliner_max 3,888 (-240) -5.81%
lambda_from_array_inliner_min 3,888 (-240) -5.81%
lambda_from_array_inliner_zero 3,888 (-240) -5.81%
binary_operator_overloading_inliner_max 307 (-39) -11.27%
regression_9455_inliner_max 22 (-25) -53.19%
regression_9455_inliner_min 22 (-25) -53.19%
regression_9455_inliner_zero 22 (-25) -53.19%


github-actions bot commented Sep 26, 2025

Changes to number of Brillig opcodes executed

Generated at commit: 35f5d7696174440c36d4012f229ba68bd0827951, compared to commit: 8ca4af784ce805900a8d5472830c9c28e92562b8

🧾 Summary (10% most significant diffs)

Program Brillig opcodes (+/-) %
regression_9455_inliner_max -22 ✅ -52.38%
regression_9455_inliner_min -22 ✅ -52.38%
regression_9455_inliner_zero -22 ✅ -52.38%
lambda_from_array_inliner_max -1,566 ✅ -57.24%
lambda_from_array_inliner_min -1,566 ✅ -57.24%
lambda_from_array_inliner_zero -1,566 ✅ -57.24%

Full diff report 👇
Program Brillig opcodes (+/-) %
higher_order_functions_inliner_zero 1,308 (+42) +3.32%
slices_inliner_max 2,526 (+59) +2.39%
regression_9496_inliner_max 508 (+9) +1.80%
regression_9496_inliner_min 508 (+9) +1.80%
regression_9496_inliner_zero 508 (+9) +1.80%
tuple_inputs_inliner_max 530 (+4) +0.76%
tuple_inputs_inliner_min 572 (+4) +0.70%
tuple_inputs_inliner_zero 572 (+4) +0.70%
poseidon_bn254_hash_width_3_inliner_zero 164,435 (-471) -0.29%
nested_array_dynamic_inliner_min 2,725 (-8) -0.29%
nested_array_dynamic_inliner_zero 2,725 (-8) -0.29%
array_oob_regression_7965_inliner_max 321 (-1) -0.31%
array_oob_regression_7965_inliner_min 321 (-1) -0.31%
array_oob_regression_7965_inliner_zero 321 (-1) -0.31%
nested_array_dynamic_inliner_max 2,523 (-8) -0.32%
poseidon_bn254_hash_width_3_inliner_max 136,400 (-471) -0.34%
poseidonsponge_x5_254_inliner_max 151,457 (-600) -0.39%
regression_5252_inliner_max 752,848 (-3,000) -0.40%
regression_9538_inliner_max 154 (-1) -0.65%
regression_9538_inliner_min 154 (-1) -0.65%
regression_9538_inliner_zero 154 (-1) -0.65%
binary_operator_overloading_inliner_max 183 (-22) -10.73%
regression_9455_inliner_max 20 (-22) -52.38%
regression_9455_inliner_min 20 (-22) -52.38%
regression_9455_inliner_zero 20 (-22) -52.38%
lambda_from_array_inliner_max 1,170 (-1,566) -57.24%
lambda_from_array_inliner_min 1,170 (-1,566) -57.24%
lambda_from_array_inliner_zero 1,170 (-1,566) -57.24%

@github-actions github-actions bot left a comment

⚠️ Performance Alert ⚠️

Possible performance regression was detected for benchmark 'Execution Time'.
Benchmark result of this commit is worse than the previous benchmark result exceeding threshold 1.20.

Benchmark suite Current: d73fed9 Previous: 8ca4af7 Ratio
rollup-block-root-single-tx 0.004 s 0.002 s 2
semaphore-depth-10 0.047 s 0.036 s 1.31

This comment was automatically generated by workflow using github-action-benchmark.

CC: @TomAFrench

@github-actions github-actions bot left a comment

⚠️ Performance Alert ⚠️

Possible performance regression was detected for benchmark 'Execution Memory'.
Benchmark result of this commit is worse than the previous benchmark result exceeding threshold 1.20.

Benchmark suite Current: b445198 Previous: 075a31b Ratio
rollup-checkpoint-root-single-block 9680 MB 1030 MB 9.40
rollup-checkpoint-root 9690 MB 1030 MB 9.41

This comment was automatically generated by workflow using github-action-benchmark.

CC: @TomAFrench

@vezenovm vezenovm added the bench-show Display benchmark results on PR label Sep 26, 2025
@github-actions github-actions bot left a comment

⚠️ Performance Alert ⚠️

Possible performance regression was detected for benchmark 'Test Suite Duration'.
Benchmark result of this commit is worse than the previous benchmark result exceeding threshold 1.20.

Benchmark suite Current: d73fed9 Previous: 8ca4af7 Ratio
test_report_zkpassport_noir-ecdsa_ 2 s 1 s 2

This comment was automatically generated by workflow using github-action-benchmark.

CC: @TomAFrench


    // Because we removed some instruction from this block, we will have to (re)visit its successors.
    if origin != block {
        origins.reused.insert(origin);
    }
Contributor

I think these cascading optimizations only occur in the NeedToHoistToCommonBlock case. Looking at #9963 (comment), b1 does not dominate b3 (since not every path from the entry to b3 goes through b1). If b1 were to dominate b3, we would simply remove the instruction in b3 and be left with the single instruction in b1. The issue is that upon hoisting we are not actually folding the instruction that triggered the hoisting in the first place.

Contributor Author

You are right, and the revisits are only triggered by hoisting. This line is here for a different reason: it prevents the "missing instruction" failure in avoid_unmapped_instructions_during_revisit.

There is an explanation in that test. The problem is that if between b1 and b3 we were to find a block b2 which b1 dominates, then in b2 we would just reuse the instruction results from b1. Great, but then b3 causes the instruction that created those results to be hoisted from b3 into b0, and then be replaced in b1 by the new results of the hoisted instruction. If we don't revisit b2, then it will have some instruction ID in its block that doesn't exist any more, so we need to go and do the values_to_replace chore on it.
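A hedged sketch of that values_to_replace chore (all names here are hypothetical, for illustration only): after the defining instruction moves, stale result ids in dependent blocks are remapped, following chains of replacements to the final live value.

```rust
use std::collections::HashMap;

type ValueId = u32;

/// Follow a chain of replacements (e.g. b1's old result -> the hoisted
/// result in b0 -> a later simplification) to the final live value.
fn resolve(replacements: &HashMap<ValueId, ValueId>, mut value: ValueId) -> ValueId {
    while let Some(&next) = replacements.get(&value) {
        if next == value {
            break;
        }
        value = next;
    }
    value
}

/// Rewrite every operand in a revisited block so that it no longer refers
/// to instruction results that were removed by hoisting.
fn remap_operands(replacements: &HashMap<ValueId, ValueId>, operands: &mut [ValueId]) {
    for operand in operands.iter_mut() {
        *operand = resolve(replacements, *operand);
    }
}

fn main() {
    // b1's result v5 was replaced by the hoisted v9, which later became v12.
    let replacements = HashMap::from([(5, 9), (9, 12)]);
    let mut operands = [5, 7, 9];
    remap_operands(&replacements, &mut operands);
    assert_eq!(operands, [12, 7, 12]);
    println!("ok");
}
```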

Contributor

Got it. I only skimmed the PR, but it is a little difficult to keep track of these revisits. I think it may be simpler to do a fixed-point loop with some informed starting point, as outlined in #9963 (comment). Rather than trying to be so granular in a single pass, we can just track the first origin for each NeedToHoistToCommonBlock case. After the entire constant_folding pass is completed, we kick-start another pass at the first origin in our RPO. We then keep looping until no more instructions are folded.
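That fixed-point alternative could look roughly like this. Purely illustrative: `run_pass` stands in for a full constant_folding pass and is assumed to report how many instructions it folded, with a cap so a pathological case cannot loop forever.

```rust
/// Rerun the pass until it folds nothing more, or the iteration cap is
/// reached. Returns the total number of instructions folded.
fn fold_to_fixed_point(mut run_pass: impl FnMut() -> usize, max_iterations: usize) -> usize {
    let mut total = 0;
    for _ in 0..max_iterations {
        let folded = run_pass();
        total += folded;
        if folded == 0 {
            break;
        }
    }
    total
}

fn main() {
    // Simulated pass: folds 3 instructions, then 1, then reaches a fixed point.
    let mut folds_per_pass = vec![3usize, 1, 0].into_iter();
    let total = fold_to_fixed_point(|| folds_per_pass.next().unwrap_or(0), 10);
    assert_eq!(total, 4);
    println!("ok");
}
```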

Contributor Author

That would also give us a chance to clear out the values_to_replace cache, although I think it would still keep the transient instructions in the DFG.

A slightly less granular option would be to just clear the "visited" flags from the block_queue to allow all descendants to be reprocessed, not just the dependent ones.

Contributor

We could also experiment with not even trying to start at the first origin of the RPO and just go through all of the origins until instructions are no longer being folded.

Contributor

It looks like the performance alert is just mislabeled. We can see the compilation memory degradation here #10013 (review)

Contributor Author

@aakoshh aakoshh Sep 26, 2025

If that's the degradation, it's all within ±1%, which would be great.

I just can't find the following on the list at #10013 (review):
[Screenshot 2025-09-26 at 16:09:02]

Which one do you think is mislabelled?

Contributor

Sorry, I was looking at the wrong thing 🤦. If we look at the comment history for #10013 (review), we see the circuits we alerted on in the first comment but not in the later ones. Possibly a bug in the reports.

Contributor Author

Yes, I suspected those could have been the ones to bring fresh data.

Some tests are much slower as well; in particular, execution_success/slices takes ages, while it's fast on master. I'll try to see if there is some simple limit I could add to stop it from spinning its wheels.


aakoshh commented Sep 26, 2025

Investigate the long compilation time involving Poseidon hashes in regression_5252 and fold_2_to_17

These turn out to be slow on master as well:

cargo nextest run --cargo-quiet -p nargo_cli --test execute 5252
────────────
 Nextest run ID 3375c065-65fe-474f-a30b-2524979cf728 with nextest profile: default
    Starting 14 tests across 1 binary (5581 tests skipped)
        PASS [   2.820s] nargo_cli::execute tests::interpret_execution_success::test_regression_5252::forcebrillig_true_inliner_i64_max_expects
        PASS [   2.979s] nargo_cli::execute tests::interpret_execution_success::test_regression_5252::forcebrillig_true_inliner_0_expects
        PASS [   3.055s] nargo_cli::execute tests::interpret_execution_success::test_regression_5252::forcebrillig_false_inliner_i64_min_expects
        PASS [   3.056s] nargo_cli::execute tests::interpret_execution_success::test_regression_5252::forcebrillig_false_inliner_0_expects
        PASS [   3.062s] nargo_cli::execute tests::interpret_execution_success::test_regression_5252::forcebrillig_false_inliner_i64_max_expects
        PASS [   3.069s] nargo_cli::execute tests::interpret_execution_success::test_regression_5252::forcebrillig_true_inliner_i64_min_expects
        PASS [  18.579s] nargo_cli::execute tests::execution_success::test_regression_5252::forcebrillig_false_inliner_i64_min_expects
        PASS [  18.707s] nargo_cli::execute tests::execution_success::test_regression_5252::forcebrillig_true_inliner_0_expects
        PASS [  21.785s] nargo_cli::execute tests::execution_success::test_regression_5252::forcebrillig_false_inliner_i64_max_expects
        PASS [  21.917s] nargo_cli::execute tests::execution_success::test_regression_5252::forcebrillig_true_inliner_i64_min_expects
        PASS [  22.064s] nargo_cli::execute tests::minimal_execution_success::test_regression_5252::forcebrillig_false_inliner_0_expects
        PASS [  25.064s] nargo_cli::execute tests::execution_success::test_regression_5252::forcebrillig_false_inliner_0_expects
        PASS [  25.209s] nargo_cli::execute tests::execution_success::test_regression_5252::forcebrillig_true_inliner_i64_max_expects
        PASS [  34.463s] nargo_cli::execute tests::nargo_expand_execution_success::test_regression_5252
────────────
     Summary [  34.522s] 14 tests run: 14 passed, 5581 skipped

cargo nextest run --cargo-quiet -p nargo_cli --test execute fold_2_to_17

────────────
 Nextest run ID 8eefa93a-0e63-40c0-aa29-10fc243422bf with nextest profile: default
    Starting 14 tests across 1 binary (5581 tests skipped)
        PASS [   8.060s] nargo_cli::execute tests::interpret_execution_success::test_fold_2_to_17::forcebrillig_false_inliner_i64_min_expects
        PASS [  15.213s] nargo_cli::execute tests::interpret_execution_success::test_fold_2_to_17::forcebrillig_false_inliner_0_expects
        PASS [  20.649s] nargo_cli::execute tests::interpret_execution_success::test_fold_2_to_17::forcebrillig_true_inliner_i64_min_expects
        PASS [  23.991s] nargo_cli::execute tests::interpret_execution_success::test_fold_2_to_17::forcebrillig_true_inliner_i64_max_expects
        PASS [  27.638s] nargo_cli::execute tests::interpret_execution_success::test_fold_2_to_17::forcebrillig_true_inliner_0_expects
        PASS [  33.247s] nargo_cli::execute tests::interpret_execution_success::test_fold_2_to_17::forcebrillig_false_inliner_i64_max_expects
        PASS [  37.328s] nargo_cli::execute tests::execution_success::test_fold_2_to_17::forcebrillig_true_inliner_0_expects
        PASS [  37.462s] nargo_cli::execute tests::minimal_execution_success::test_fold_2_to_17::forcebrillig_false_inliner_0_expects
        PASS [  39.481s] nargo_cli::execute tests::execution_success::test_fold_2_to_17::forcebrillig_false_inliner_i64_max_expects
        PASS [  41.508s] nargo_cli::execute tests::execution_success::test_fold_2_to_17::forcebrillig_false_inliner_i64_min_expects
        PASS [  41.548s] nargo_cli::execute tests::nargo_expand_execution_success::test_fold_2_to_17
        PASS [  41.672s] nargo_cli::execute tests::execution_success::test_fold_2_to_17::forcebrillig_true_inliner_i64_max_expects
        PASS [  43.480s] nargo_cli::execute tests::execution_success::test_fold_2_to_17::forcebrillig_false_inliner_0_expects
        PASS [  43.636s] nargo_cli::execute tests::execution_success::test_fold_2_to_17::forcebrillig_true_inliner_i64_min_expects
────────────
     Summary [  43.679s] 14 tests run: 14 passed, 5581 skipped

git rev-parse HEAD
6425a803f3045687174ce97650b88f4adc286787

@TomAFrench
Member

This seems like something that can be delayed until post 1.0.


aakoshh commented Sep 26, 2025

This seems like something that can be delayed until post 1.0.

I agree it's speculative, but I'd lift the two items under "Other changes:" out of it, as they have a benefit on their own and require no looping, just doing less work.

@github-actions github-actions bot left a comment

ACVM Benchmarks

Details
Benchmark suite Current: d73fed9 Previous: 8ca4af7 Ratio
purely_sequential_opcodes 251117 ns/iter (± 535) 251566 ns/iter (± 1107) 1.00
perfectly_parallel_opcodes 221914 ns/iter (± 7275) 222720 ns/iter (± 1455) 1.00
perfectly_parallel_batch_inversion_opcodes 2778277 ns/iter (± 1279) 2781636 ns/iter (± 2135) 1.00

This comment was automatically generated by workflow using github-action-benchmark.

@github-actions github-actions bot left a comment

Compilation Time

Details
Benchmark suite Current: d73fed9 Previous: 8ca4af7 Ratio
private-kernel-inner 1.802 s 1.806 s 1.00
private-kernel-reset 7.746 s 8.17 s 0.95
private-kernel-tail 1.452 s 1.372 s 1.06
rollup-block-root-first-empty-tx 1.434 s 1.452 s 0.99
rollup-block-root-single-tx 1.34 s 1.38 s 0.97
rollup-block-root 1.48 s 1.42 s 1.04
rollup-checkpoint-merge 1.492 s 1.502 s 0.99
rollup-checkpoint-root-single-block 193 s 192 s 1.01
rollup-checkpoint-root 195 s 202 s 0.97
rollup-root 1.472 s 1.488 s 0.99
rollup-tx-base-private 18.7 s 18.82 s 0.99
rollup-tx-base-public 77.32 s 76.54 s 1.01
rollup-tx-merge 1.416 s 1.436 s 0.99
semaphore-depth-10 0.763 s 0.776 s 0.98
sha512-100-bytes 1.75 s 1.576 s 1.11

This comment was automatically generated by workflow using github-action-benchmark.

@github-actions github-actions bot left a comment

Execution Time

Details
Benchmark suite Current: d73fed9 Previous: 8ca4af7 Ratio
private-kernel-inner 0.014 s 0.013 s 1.08
private-kernel-reset 0.161 s 0.161 s 1
private-kernel-tail 0.01 s 0.01 s 1
rollup-block-root-first-empty-tx 0.003 s 0.003 s 1
rollup-block-root-single-tx 0.004 s 0.002 s 2
rollup-block-root 0.004 s 0.004 s 1
rollup-checkpoint-merge 0.003 s 0.004 s 0.75
rollup-checkpoint-root-single-block 14.4 s 14.1 s 1.02
rollup-checkpoint-root 12.6 s 12.7 s 0.99
rollup-root 0.004 s 0.004 s 1
rollup-tx-base-private 0.308 s 0.31 s 0.99
rollup-tx-base-public 0.247 s 0.252 s 0.98
rollup-tx-merge 0.002 s 0.002 s 1
semaphore-depth-10 0.047 s 0.036 s 1.31
sha512-100-bytes 0.061 s 0.052 s 1.17

This comment was automatically generated by workflow using github-action-benchmark.

@github-actions github-actions bot left a comment

Artifact Size

Details
Benchmark suite Current: d73fed9 Previous: 8ca4af7 Ratio
private-kernel-inner 740.3 KB 740.3 KB 1
private-kernel-reset 2087.3 KB 2087.3 KB 1
private-kernel-tail 543.8 KB 543.7 KB 1.00
rollup-block-root-first-empty-tx 172.2 KB 172.2 KB 1
rollup-block-root-single-tx 170.9 KB 170.9 KB 1
rollup-block-root 234 KB 234 KB 1
rollup-checkpoint-merge 386.4 KB 386.4 KB 1
rollup-checkpoint-root-single-block 27907 KB 27899.6 KB 1.00
rollup-checkpoint-root 27939.7 KB 27936.7 KB 1.00
rollup-root 415.1 KB 415.1 KB 1
rollup-tx-base-private 5120.6 KB 5120.6 KB 1
rollup-tx-base-public 5108.5 KB 5108.5 KB 1
rollup-tx-merge 186.9 KB 186.9 KB 1
semaphore-depth-10 570.7 KB 570.7 KB 1
sha512-100-bytes 506.4 KB 506.4 KB 1

This comment was automatically generated by workflow using github-action-benchmark.

@github-actions github-actions bot left a comment

Opcode count

Details
Benchmark suite Current: d73fed9 Previous: 8ca4af7 Ratio
private-kernel-inner 15937 opcodes 15937 opcodes 1
private-kernel-reset 76858 opcodes 76858 opcodes 1
private-kernel-tail 11710 opcodes 11710 opcodes 1
rollup-block-root-first-empty-tx 1351 opcodes 1351 opcodes 1
rollup-block-root-single-tx 1047 opcodes 1047 opcodes 1
rollup-block-root 2340 opcodes 2340 opcodes 1
rollup-checkpoint-merge 2334 opcodes 2334 opcodes 1
rollup-checkpoint-root-single-block 962971 opcodes 962971 opcodes 1
rollup-checkpoint-root 964281 opcodes 964281 opcodes 1
rollup-root 2835 opcodes 2835 opcodes 1
rollup-tx-base-private 270240 opcodes 270240 opcodes 1
rollup-tx-base-public 273204 opcodes 273204 opcodes 1
rollup-tx-merge 1426 opcodes 1426 opcodes 1
semaphore-depth-10 5700 opcodes 5700 opcodes 1
sha512-100-bytes 13173 opcodes 13173 opcodes 1

This comment was automatically generated by workflow using github-action-benchmark.

@github-actions github-actions bot left a comment

Execution Memory

Details
Benchmark suite Current: d73fed9 Previous: 8ca4af7 Ratio
private-kernel-inner 255.51 MB 255.51 MB 1
private-kernel-reset 292.82 MB 292.82 MB 1
private-kernel-tail 243.73 MB 243.73 MB 1
rollup-block-root 336.49 MB 336.49 MB 1
rollup-checkpoint-merge 335.47 MB 335.47 MB 1
rollup-checkpoint-root-single-block 1030 MB 1030 MB 1
rollup-checkpoint-root 1030 MB 1030 MB 1
rollup-root 336.67 MB 336.67 MB 1
rollup-tx-base-private 456.44 MB 456.44 MB 1
rollup-tx-base-public 540.08 MB 540.08 MB 1
rollup-tx-merge 334.77 MB 334.77 MB 1
semaphore_depth_10 73.48 MB 73.48 MB 1
sha512_100_bytes 71.73 MB 71.73 MB 1

This comment was automatically generated by workflow using github-action-benchmark.

@github-actions github-actions bot left a comment

Compilation Memory

Details
Benchmark suite Current: d73fed9 Previous: 8ca4af7 Ratio
private-kernel-inner 286.48 MB 286.46 MB 1.00
private-kernel-reset 562.6 MB 585.23 MB 0.96
private-kernel-tail 257.13 MB 258.72 MB 0.99
rollup-block-root-first-empty-tx 340.95 MB 341.03 MB 1.00
rollup-block-root-single-tx 338.18 MB 338.17 MB 1.00
rollup-block-root 341.99 MB 341.99 MB 1
rollup-checkpoint-merge 342.07 MB 342.09 MB 1.00
rollup-checkpoint-root-single-block 9680 MB 9680 MB 1
rollup-checkpoint-root 9690 MB 9690 MB 1
rollup-root 344.63 MB 347.38 MB 0.99
rollup-tx-base-private 1470 MB 1490 MB 0.99
rollup-tx-base-public 6960 MB 6960 MB 1
rollup-tx-merge 339.44 MB 339.44 MB 1
semaphore_depth_10 96.82 MB 107.46 MB 0.90
sha512_100_bytes 251.26 MB 251.19 MB 1.00

This comment was automatically generated by workflow using github-action-benchmark.

@github-actions github-actions bot left a comment

Test Suite Duration

Details
Benchmark suite Current: d73fed9 Previous: 8ca4af7 Ratio
test_report_AztecProtocol_aztec-packages_noir-projects_aztec-nr 116 s 119 s 0.97
test_report_AztecProtocol_aztec-packages_noir-projects_noir-protocol-circuits_crates_blob 256 s 244 s 1.05
test_report_AztecProtocol_aztec-packages_noir-projects_noir-protocol-circuits_crates_private-kernel-lib 211 s 237 s 0.89
test_report_AztecProtocol_aztec-packages_noir-projects_noir-protocol-circuits_crates_reset-kernel-lib 35 s 35 s 1
test_report_AztecProtocol_aztec-packages_noir-projects_noir-protocol-circuits_crates_types 133 s 127 s 1.05
test_report_noir-lang_noir-bignum_ 159 s 164 s 0.97
test_report_noir-lang_sha256_ 15 s 16 s 0.94
test_report_noir-lang_sha512_ 12 s 14 s 0.86
test_report_zkpassport_noir-ecdsa_ 2 s 1 s 2
test_report_zkpassport_noir_rsa_ 1 s 1 s 1

This comment was automatically generated by workflow using github-action-benchmark.

@aakoshh
Copy link
Contributor Author

aakoshh commented Sep 26, 2025

Closing in favour of #10019, which at least doesn't fail to compile noir-contracts.

@aakoshh aakoshh closed this Sep 26, 2025

Labels

bench-show Display benchmark results on PR

Development

Successfully merging this pull request may close these issues.

Constant folding: applying in a loop until fix point
