Conversation
⚠️ Performance Alert ⚠️
Possible performance regression was detected for benchmark 'Test Suite Duration'.
Benchmark result of this commit is worse than the previous benchmark result exceeding threshold 1.20.
| Benchmark suite | Current: c4d7ef5 | Previous: 62a4432 | Ratio |
|---|---|---|---|
| `test_report_zkpassport_noir_rsa_` | 1 s | 0 s | +∞ |
This comment was automatically generated by workflow using github-action-benchmark.
CC: @TomAFrench
⚠️ Performance Alert ⚠️
Possible performance regression was detected for benchmark 'Execution Time'.
Benchmark result of this commit is worse than the previous benchmark result exceeding threshold 1.20.
| Benchmark suite | Current: 6380504 | Previous: 1b1985e | Ratio |
|---|---|---|---|
| `rollup-block-root-single-tx` | 0.003 s | 0.002 s | 1.50 |
⚠️ Performance Alert ⚠️
Possible performance regression was detected for benchmark 'Compilation Time'.
Benchmark result of this commit is worse than the previous benchmark result exceeding threshold 1.20.
| Benchmark suite | Current: 52d5716 | Previous: fe0bfa0 | Ratio |
|---|---|---|---|
| `rollup-checkpoint-root` | 482 s | 391 s | 1.23 |
⚠️ Performance Alert ⚠️
Possible performance regression was detected for benchmark 'ACVM Benchmarks'.
Benchmark result of this commit is worse than the previous benchmark result exceeding threshold 1.20.
| Benchmark suite | Current: c4d7ef5 | Previous: 62a4432 | Ratio |
|---|---|---|---|
| `perfectly_parallel_batch_inversion_opcodes` | 2782829 ns/iter (± 1692) | 2264195 ns/iter (± 1976) | 1.23 |
It took me a while to realise that the

They fail because this test requires the

I thought about adding it, but the

Even if we passed that package somehow, we aren't passing packages of the dependencies. The dependency setup in Wasm works by building up an in-memory source map and dependency map. Packages would at that point be redundant.

Perhaps it's okay if Wasm doesn't have access to unstable features for now.
Automated pull of nightly from the [noir](https://github.com/noir-lang/noir) programming language, a dependency of Aztec.

BEGIN_COMMIT_OVERRIDE
chore: remove `local_annotations` from flattening (noir-lang/noir#10483)
chore: better error recovery for multiple mut in pattern (noir-lang/noir#10490)
chore(frontend): Tuple pattern tests and remove confusing arity error (noir-lang/noir#10480)
chore: monomorphizer public fields (noir-lang/noir#9979)
chore: remove a bunch of dummy definitions (noir-lang/noir#10482)
feat(ssa): Limit the number of steps executed by the SSA interpreter during constant folding (noir-lang/noir#10481)
fix: remove saturation from loop bound increments (noir-lang/noir#10479)
fix(print): Print enums (noir-lang/noir#10472)
fix(frontend): No negative overflow when quoting signed integer (noir-lang/noir#10331)
chore: green light Brillig for audit (noir-lang/noir#10376)
END_COMMIT_OVERRIDE
Description
Problem
Resolves #9294
Summary
- Changed `decode_printable_value` to consume the correct number of fields when an enum is encountered.
- Changed `to_string` to format enums, rather than return `None`.
- Added `PrintableValue::Enum` to communicate the `tag` from the decoder to the formatter.
- Changed the `match` in `to_string` so it can force us to implement handling for any new printable type.

Additional Context
Previously, `decode_printable_value` looked at the `tag` and picked the corresponding list of fields from the variants, then consumed those fields. This assumes that we will see a varying number of fields, depending on which variant was serialised.

I can't remember now what kind of repercussions this would have on slices and arrays, which generally assume that each element has a fixed size when we calculate the flattened size as `length * element_size`. It would have worked in the decoder, but it was suspicious.

It turns out that Noir in fact passes a fixed number of fields regardless of the variant. I thought maybe this would be a number of `Field`s to accommodate the largest variant, but instead it is the concatenation of all variants, with all but one being defaults.

For example the following program:
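As a sketch of the shape in question, the following Rust model mirrors such an enum and the fixed-size tuple it lowers to: the tag comes first, then every variant's fields concatenated, with the fields of inactive variants defaulted. All names here are hypothetical, and Noir's `Field` is modeled as `u64` for illustration.

```rust
// Hypothetical mirror of a Noir enum: One(Field), Two(u32, u64), Three.
// `Field` is modeled as `u64` purely for illustration.
enum Example {
    One(u64),
    Two(u32, u64),
    Three,
}

// The fixed-size lowering: (tag, One's field, Two's two fields).
// Inactive variants contribute default (zero) values; Three has no fields.
fn flatten(e: &Example) -> (u64, u64, u32, u64) {
    match e {
        Example::One(f) => (0, *f, 0, 0),
        Example::Two(a, b) => (1, 0, *a, *b),
        Example::Three => (2, 0, 0, 0),
    }
}

fn main() {
    // Every variant produces the same shape, regardless of which is active.
    assert_eq!(flatten(&Example::Two(10, 20)), (1, 0, 10, 20));
    assert_eq!(flatten(&Example::Three), (2, 0, 0, 0));
}
```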
turns into the following monomorphized AST:
It clearly shows that the `One` and `Two` constructors return identical data structures with `(<tag>, (<One fields>,), (<Two fields>,), (<Three has no fields>))`.

In SSA these are further flattened:
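Given this flattened layout, the decoder's task can be sketched as follows. This is a hypothetical helper, not the actual `decode_printable_value` implementation: it walks the concatenated per-variant field lists, consuming every variant's slots but keeping only the fields of the variant selected by the tag.

```rust
/// Hypothetical sketch: given a flattened enum encoding (tag first, then the
/// fields of every variant concatenated, with inactive variants defaulted)
/// and the field count of each variant, return the active variant's index
/// and its fields.
fn decode_enum(flattened: &[u64], variant_sizes: &[usize]) -> (usize, Vec<u64>) {
    let tag = flattened[0] as usize;
    let mut offset = 1; // skip the tag
    let mut active = Vec::new();
    for (i, &size) in variant_sizes.iter().enumerate() {
        if i == tag {
            active = flattened[offset..offset + size].to_vec();
        }
        offset += size; // consume this variant's slots either way
    }
    (tag, active)
}

fn main() {
    // Layout for One(Field), Two(u32, u64), Three:
    // [tag, One.0, Two.0, Two.1] — Three contributes no fields.
    let encoded = [1u64, 0, 10, 20]; // tag = 1 selects Two(10, 20)
    assert_eq!(decode_enum(&encoded, &[1, 2, 0]), (1, vec![10, 20]));
}
```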
The array `[(Field, Field, u32, u64); 3]` has the `tag`, followed by the single `Field` in `One`, then two fields in `Two`:

User Documentation
Check one:
PR Checklist
`cargo fmt` on default settings.