```
; .gitrepo (noir subrepo)
subrepo:
  subdir: "noir"
  merged: "c44ef1484"
upstream:
  origin: "https://github.com/noir-lang/noir"
  branch: "master"
  commit: "c44ef1484"
git-subrepo:
  version: "0.4.6"
  origin: "???"
  commit: "???"
```
```diff
 /// As a convention, we use 64 bits. This means that we assume that
 /// memory has 2^64 memory slots.
-pub(crate) const BRILLIG_MEMORY_ADDRESSING_BIT_SIZE: u32 = 32;
+pub(crate) const BRILLIG_MEMORY_ADDRESSING_BIT_SIZE: u32 = 64; // This did not match the comment
```
Note that AVM has 32-bit memory address space. Not sure if that matters, but maybe Brillig should have a 32-bit space too?
The reason I changed it is that the SSA bit size was 64 bits, which meant that when Brillig took a constant from SSA and a constant from Brillig, the tag check would fail on operations between them.
I can go deeper into the compiler and see what happens if I switch SSA to assign constants as 32 bits rather than 64, but it will take a bit more work.
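To illustrate the failure mode described above, here is a minimal TypeScript sketch of a tag-checked binary operation (the names and shapes are hypothetical, not the actual Brillig VM API): if SSA emits constants tagged as 64-bit while Brillig-local values carry 32-bit tags, the operation rejects the operands even though the numeric values are fine.

```typescript
// Hypothetical tag-checked addition, mirroring how a VM might reject
// operands whose bit-size tags disagree.
type Tag = 'u32' | 'u64' | 'field';

interface TaggedValue {
  tag: Tag;
  value: bigint;
}

function add(a: TaggedValue, b: TaggedValue): TaggedValue {
  // A u64 constant from SSA combined with a u32 value from Brillig
  // fails here, even though both numbers fit in 32 bits.
  if (a.tag !== b.tag) {
    throw new Error(`tag mismatch: ${a.tag} vs ${b.tag}`);
  }
  return { tag: a.tag, value: a.value + b.value };
}

// Mismatched tags, as in the SSA/Brillig misalignment described above:
const ssaConst: TaggedValue = { tag: 'u64', value: 5n };
const brilligVal: TaggedValue = { tag: 'u32', value: 7n };
```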
Benchmark results

Metrics with a significant change: (see detailed results). This benchmark source data is available in JSON format on S3. Values are compared against data from master.

- L2 block published to L1: each column represents the number of txs on an L2 block published to L1.
- L2 chain processing: each column represents the number of blocks on the L2 chain, where each block has 16 txs.
- Circuits stats: stats on running time and I/O sizes collected for every circuit run across all benchmarks.
- Tree insertion stats: the duration to insert a fixed batch of leaves into each tree type.
- Miscellaneous: transaction sizes based on how many contracts are deployed in the tx; transaction processing duration by data writes.
```diff
 case AddressingMode.INDIRECT:
-  mem.checkTag(TypeTag.UINT32, offset);
+  mem.checkTag(TypeTag.UINT64, offset); // brillig word size is 64 bits
   resolved[i] = Number(mem.get(offset).toBigInt());
```
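For context, the diff above sits inside indirect operand resolution. A minimal sketch of that mechanism (hypothetical names, simplified memory model): for an INDIRECT operand, the cell at the given offset holds the real offset, and that cell's tag is what the changed check validates.

```typescript
enum AddressingMode {
  DIRECT,
  INDIRECT,
}

// Sketch: resolve each operand offset. For INDIRECT operands, the memory
// cell at `offsets[i]` contains the actual offset to use; a real VM would
// first check that cell's tag is a valid memory-offset type.
function resolveOperands(
  mem: Map<number, bigint>,
  modes: AddressingMode[],
  offsets: number[],
): number[] {
  const resolved: number[] = [];
  for (let i = 0; i < offsets.length; i++) {
    if (modes[i] === AddressingMode.INDIRECT) {
      // Missing cells default to 0 here for brevity.
      resolved[i] = Number(mem.get(offsets[i]) ?? 0n);
    } else {
      resolved[i] = offsets[i];
    }
  }
  return resolved;
}
```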
🤔 we shouldn't have to do this since AVM memory offsets are all supposed to be 32 bits
This is a consequence of the comment above; the address spaces of Brillig and SSA were misaligned. As long as no one is trying to use the AVM this week, I can wait for that to be resolved and just stack on this PR.
dbanks12
left a comment
LGTM! The only weirdness is the 64-bit addressing in Brillig and the fact that that's forcing us to expect 64-bit words as AVM memory offsets.
fcarreiro
left a comment
Please link the GitHub issue for this PR in the description (but don't close it until we drop the arrays :))
```typescript
  }
}

public checkTagLessThanEqual(tag: TypeTag, offset: number) {
```
Consider having checkIsValidMemoryOffsetTag() or something like that, which we can then just use everywhere we need to check a memory offset's tag.
In place of this, or in addition?
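To make the suggestion concrete, here is one possible shape for such a helper (a sketch only; the TypeTag ordering and checkTagLessThanEqual semantics are assumed from this PR's diff, and the class below is a simplified stand-in for the real tagged memory):

```typescript
// Numeric enum gives an ordering: UINT8 < UINT16 < UINT32 < UINT64 < FIELD.
enum TypeTag {
  UINT8,
  UINT16,
  UINT32,
  UINT64,
  FIELD,
}

class TaggedMemory {
  private tags = new Map<number, TypeTag>();

  setTag(offset: number, tag: TypeTag) {
    this.tags.set(offset, tag);
  }

  checkTagLessThanEqual(maxTag: TypeTag, offset: number) {
    const tag = this.tags.get(offset);
    if (tag === undefined || tag > maxTag) {
      throw new Error(`tag at offset ${offset} is not <= ${TypeTag[maxTag]}`);
    }
  }

  // The proposed wrapper: one place that encodes "what counts as a valid
  // memory offset" (currently anything up to UINT64, per this PR).
  checkIsValidMemoryOffsetTag(offset: number) {
    this.checkTagLessThanEqual(TypeTag.UINT64, offset);
  }
}
```

One benefit of the wrapper: if the address space later shrinks back to 32 bits, only this one method needs to change.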
```diff
-public write(storageAddress: Fr, key: Fr, value: Fr) {
-  this.cache.write(storageAddress, key, value);
+public write(storageAddress: Fr, key: Fr, values: Fr[]) {
+  for (const [index, value] of Object.entries(values)) {
```
You don't need to do this here. You can do the loop in the opcode (opcodes/storage.ts) and write your journal/trace accesses/tests the way you want them to be in the final version. Then, when you can do the loop in Noir, you just change the opcode.
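A sketch of that suggestion (hypothetical journal and opcode shapes, not the actual aztec-packages API): the journal keeps its per-slot write, and the array is unrolled at the opcode level, so once Noir can emit per-element writes only the opcode changes.

```typescript
// Hypothetical minimal journal: one value per slot key.
class Journal {
  public writes: Array<{ key: bigint; value: bigint }> = [];

  write(key: bigint, value: bigint) {
    this.writes.push({ key, value });
  }
}

// Opcode-level loop: the journal and its traces/tests stay in their final,
// per-slot form; only this opcode would change later.
function sstoreArray(journal: Journal, baseKey: bigint, values: bigint[]) {
  for (const [index, value] of values.entries()) {
    journal.write(baseKey + BigInt(index), value);
  }
}
```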
```diff
@@ -64,6 +64,24 @@ describe('Hashing Opcodes', () => {
   const result = context.machineState.memory.get(dstOffset);
```
I'm not against more tests, but unrelated to this PR? :)
## Overview

Works around the Brillig blowup issue by altering the read and write opcodes to take in arrays of data. This is potentially just a short-term fix.

- Reading and writing to storage now take in arrays; code will not compile without this change, due to an SSA issue -> [ISSUE](AztecProtocol/aztec-packages#4979)
- Tag checks on memory now just make sure the tag is LESS than uint64, rather than asserting that the memory tag is uint32; this should be fine.
- We had to blow up the memory space of the AVM to u64, as the entire Noir compiler works with u64s. Arrays will not work unless we either:
  - Make the AVM 64-bit addressable (probably fine)
  - Make Noir 32-bit addressable (requires a lot of buy-in)

AztecProtocol/aztec-packages#4814

Co-authored-by: sirasistant <sirasistant@gmail.com>
Reverts #4673 due to an uncaught issue with end-to-end tests
