feat(perf): Cache commutative instructions in constant folding #5832
Conversation
```rust
if let Instruction::Binary(binary) = &instruction {
    match binary.operator {
        BinaryOp::Add
        | BinaryOp::Mul
        | BinaryOp::Eq
        | BinaryOp::And
        | BinaryOp::Or
        | BinaryOp::Xor => {
            let commutative_instruction = Instruction::Binary(Binary {
                lhs: binary.rhs,
                rhs: binary.lhs,
                operator: binary.operator,
            });
```
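The diff above builds an operand-swapped copy of each commutative binary instruction so the constant-folding cache can be probed under both operand orders. A minimal self-contained sketch of that double lookup, using simplified stand-in types (`ValueId`, `BinaryOp`, `Binary`, and `cached_result` here are illustrative, not the compiler's actual SSA definitions):

```rust
use std::collections::HashMap;

// Simplified stand-ins for the compiler's SSA types (hypothetical, for illustration).
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct ValueId(u32);

#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
enum BinaryOp {
    Add,
    Mul,
    Sub,
}

#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct Binary {
    operator: BinaryOp,
    lhs: ValueId,
    rhs: ValueId,
}

impl BinaryOp {
    fn is_commutative(self) -> bool {
        matches!(self, BinaryOp::Add | BinaryOp::Mul)
    }
}

/// Look up a previously computed result for `binary`, also trying the
/// operand-swapped form when the operator is commutative.
fn cached_result(cache: &HashMap<Binary, ValueId>, binary: Binary) -> Option<ValueId> {
    cache.get(&binary).copied().or_else(|| {
        if binary.operator.is_commutative() {
            let swapped = Binary { lhs: binary.rhs, rhs: binary.lhs, ..binary };
            cache.get(&swapped).copied()
        } else {
            None
        }
    })
}

fn main() {
    let mut cache = HashMap::new();
    let (a, b, result) = (ValueId(1), ValueId(2), ValueId(3));
    cache.insert(Binary { operator: BinaryOp::Add, lhs: a, rhs: b }, result);

    // `b + a` hits the cache entry stored for `a + b`.
    assert_eq!(
        cached_result(&cache, Binary { operator: BinaryOp::Add, lhs: b, rhs: a }),
        Some(result)
    );
    // `b - a` misses, since Sub is not commutative.
    assert_eq!(
        cached_result(&cache, Binary { operator: BinaryOp::Sub, lhs: b, rhs: a }),
        None
    );
}
```

The cost of this approach is an extra instruction construction and hash lookup on every cache miss, which is what the reviewer's normalization suggestion avoids.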
Instead of this hard check here, we should sort the operands when creating a commutative binary op. E.g. lower ValueIds are always the lhs.
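The suggested normalization could look like the following sketch, which canonicalizes operand order at construction time so equivalent commutative instructions compare and hash as equal with no extra lookup. The types and the `Binary::new` constructor here are simplified stand-ins, not the compiler's actual SSA definitions:

```rust
// Simplified stand-ins for the compiler's SSA types (hypothetical, for illustration).
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
struct ValueId(u32);

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum BinaryOp {
    Add,
    Mul,
    Eq,
    And,
    Or,
    Xor,
    Sub,
    Div,
}

#[derive(PartialEq, Eq, Debug)]
struct Binary {
    operator: BinaryOp,
    lhs: ValueId,
    rhs: ValueId,
}

impl BinaryOp {
    fn is_commutative(self) -> bool {
        matches!(
            self,
            BinaryOp::Add | BinaryOp::Mul | BinaryOp::Eq | BinaryOp::And | BinaryOp::Or | BinaryOp::Xor
        )
    }
}

impl Binary {
    /// Canonicalize at construction time: for commutative operators, the
    /// lower ValueId always becomes the lhs, so `a * b` and `b * a`
    /// produce identical instructions for the constant-folding cache.
    fn new(operator: BinaryOp, lhs: ValueId, rhs: ValueId) -> Self {
        if operator.is_commutative() && rhs < lhs {
            Binary { operator, lhs: rhs, rhs: lhs }
        } else {
            Binary { operator, lhs, rhs }
        }
    }
}

fn main() {
    let a = ValueId(1);
    let b = ValueId(2);
    // a * b and b * a normalize to the same instruction.
    assert_eq!(Binary::new(BinaryOp::Mul, a, b), Binary::new(BinaryOp::Mul, b, a));
    // Non-commutative operators keep their operand order.
    assert_ne!(Binary::new(BinaryOp::Sub, a, b), Binary::new(BinaryOp::Sub, b, a));
}
```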
Done. However, we seem to be getting similar results to before.
Changes to Brillig bytecode sizes
🧾 Summary (10% most significant diffs)
Full diff report 👇
Changes to circuit sizes
🧾 Summary (10% most significant diffs)
Full diff report 👇
I don't understand the regression_5252 size increase.
…d check inserting extra instructions
Yeah, I don't either. I guess we can close this PR, as it doesn't seem to be providing many benefits.
```rust
// Sort operands of commutative instructions
if let Instruction::Binary(binary) = &instruction {
    match binary.operator {
```
Oh, I meant sorting when the instructions are first created. Not during constant folding. Probably won't remove the size increase in the tests though.
Going to close this PR, as the benefits to Brillig code size seem minimal and it actually causes regressions for ACIR.
Description
Problem*
Resolves
Summary*
Keeping a draft as I want to see how it runs in the code size benchmarks
Additional Context
Documentation*
Check one:
PR Checklist*
`cargo fmt` on default settings.