feat: WIP integrate batched blobs into l1 contracts + ts #14329
```diff
@@ -27,7 +27,7 @@ struct SubmitEpochRootProofArgs {
   uint256 end; // inclusive
   PublicInputArgs args;
   bytes32[] fees;
-  bytes blobPublicInputs;
+  bytes blobInputs;
   bytes proof;
 }
```
```diff
@@ -93,18 +93,18 @@ struct RollupConfig {
   uint256 version;
 }

-// The below blobPublicInputsHashes are filled when proposing a block, then used to verify an epoch proof.
-// TODO(#8955): When implementing batched kzg proofs, store one instance per epoch rather than block
 struct RollupStore {
   ChainTips tips; // put first such that the struct slot structure is easy to follow for cheatcodes
   mapping(uint256 blockNumber => BlockLog log) blocks;
-  mapping(uint256 blockNumber => bytes32) blobPublicInputsHashes;
   mapping(address => uint256) sequencerRewards;
   mapping(Epoch => EpochRewards) epochRewards;
   // @todo Below can be optimised with a bitmap as we can benefit from provers likely proving for epochs close
   // to one another.
   mapping(address prover => mapping(Epoch epoch => bool claimed)) proverClaimed;
   RollupConfig config;
+  // TODO(#14646): We only ever need to store AZTEC_MAX_EPOCH_DURATION values below => make fixed length and overwrite once we start a new epoch
```
> **Reviewer:** I'm guessing the […] However, it is not sufficient to just store for one epoch, as a prune can remove more than that; I think the value you are probably interested in is the submission window. Regardless, something that could potentially be of interest (not now, I would expect) is keeping a number of running hashes in there instead, essentially allowing you, at the time of proof submission, to read just one storage variable instead of one for each block. Separately, why is this value in here, and not part of the […]?
>
> **Author:** Ah ok, I didn't realise this. This was the reason why the value is here and not […]. I had assumed this wouldn't work because of pruning, but thinking about it now, it would work: we only need the 'current' value of the epoch, then iterate and update that value, unless it's the first block of an epoch, where we initialise the value. I'll add an issue for this, thanks!
>
> **Author:** Fixed here: 32bf652
```diff
+  // Requires us to clear values on successful proven epoch and check when a block starts a new epoch.
+  mapping(uint256 blockNumber => bytes32) blobCommitmentsHash; // = H(...H(H(commitment_0), commitment_1).... commitment_n) - used to validate we are using the same blob commitments on L1 and in the rollup circuit
 }
```
```diff
@@ -184,7 +184,7 @@ interface IRollup is IRollupCore {
   function validateBlobs(bytes calldata _blobsInputs)
     external
     view
-    returns (bytes32[] memory, bytes32, bytes32);
+    returns (bytes32[] memory, bytes32, bytes[] memory);

   function getManaBaseFeeComponentsAt(Timestamp _timestamp, bool _inFeeAsset)
     external
@@ -203,7 +203,8 @@ interface IRollup is IRollupCore {
   function getPendingBlockNumber() external view returns (uint256);
   function getBlock(uint256 _blockNumber) external view returns (BlockLog memory);
   function getFeeHeader(uint256 _blockNumber) external view returns (FeeHeader memory);
-  function getBlobPublicInputsHash(uint256 _blockNumber) external view returns (bytes32);
+  function getBlobCommitmentsHash(uint256 _blockNumber) external view returns (bytes32);
+  function getCurrentBlobCommitmentsHash() external view returns (bytes32);

   function getSequencerRewards(address _sequencer) external view returns (uint256);
   function getCollectiveProverRewardsForEpoch(Epoch _epoch) external view returns (uint256);
```
```diff
@@ -29,74 +29,134 @@ library BlobLib {
   * @notice Validate an L2 block's blobs and return the hashed blobHashes and public inputs.
   * Input bytes:
   * input[:1] - num blobs in block
-  * input[1:] - 192 * num blobs of the above _blobInput
-  * @param _blobsInput - The above bytes to verify a blob
+  * input[1:] - blob commitments (48 bytes * num blobs in block)
+  * @param _blobsInput - The above bytes to verify our input blob commitments match real blobs
   * @param _checkBlob - Whether to skip blob related checks. Hardcoded to true (See RollupCore.sol -> checkBlob), exists only to be overriden in tests.
   */
```
> **Reviewer:** Please add […]
>
> **Reviewer:** Also probably good to add an assumption about there not being any non-aztec blobs in the tx.
>
> **Author:** Would it be worth adding a loop from […]?
>
> **Author:** Added here: b90b979
```diff
   function validateBlobs(bytes calldata _blobsInput, bool _checkBlob)
     internal
     view
     returns (
-      // All of the blob hashes included in this blob
+      // All of the blob hashes included in this block
       bytes32[] memory blobHashes,
       bytes32 blobsHashesCommitment,
-      bytes32 blobPublicInputsHash
+      bytes[] memory blobCommitments
     )
   {
     // We cannot input the incorrect number of blobs below, as the blobsHash
     // and epoch proof verification will fail.
     uint8 numBlobs = uint8(_blobsInput[0]);
     blobHashes = new bytes32[](numBlobs);
-    bytes memory blobPublicInputs;
+    blobCommitments = new bytes[](numBlobs);
+    bytes32 blobHash;
+    // Add 1 for the numBlobs prefix
+    uint256 blobInputStart = 1;
     for (uint256 i = 0; i < numBlobs; i++) {
-      // Add 1 for the numBlobs prefix
-      uint256 blobInputStart = i * 192 + 1;
-      // Since an invalid blob hash here would fail the consensus checks of
-      // the header, the `blobInput` is implicitly accepted by consensus as well.
-      blobHashes[i] = validateBlob(_blobsInput[blobInputStart:blobInputStart + 192], i, _checkBlob);
-      // We want to extract the 112 bytes we use for public inputs:
-      // * input[32:64] - z
-      // * input[64:96] - y
-      // * input[96:144] - commitment C
-      // Out of 192 bytes per blob.
-      blobPublicInputs = abi.encodePacked(
-        blobPublicInputs,
-        _blobsInput[blobInputStart + 32:blobInputStart + 32 + Constants.BLOB_PUBLIC_INPUTS_BYTES]
-      );
+      // Commitments = arrays of bytes48 compressed points
+      blobCommitments[i] = abi.encodePacked(
+        _blobsInput[blobInputStart:blobInputStart + Constants.BLS12_POINT_COMPRESSED_BYTES]
+      );
+      blobInputStart += Constants.BLS12_POINT_COMPRESSED_BYTES;
+
+      // TODO(#14646): Use kzg_to_versioned_hash & VERSIONED_HASH_VERSION_KZG
+      // Using bytes32 array to force bytes into memory
```
> **Reviewer:** Why do you want to force it into memory vs on the stack here? Personally, I think it is slightly simpler to just read it as some binary operations on the stack, e.g.:
>
> ```solidity
> bytes32 blobHashCheck = bytes32(
>   (
>     uint256(sha256(blobCommitments[i]))
>       & 0x00FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
>   ) | 0x0100000000000000000000000000000000000000000000000000000000000000
> );
> ```
>
> When seeing anything with arrays and memory, my mind will first be thinking "oh, is it updating the length of the array?", as that is what would have happened if it was not a fixed size.
>
> **Author:** Purely because I thought it was a little clearer to see the 'overwriting' happen with […]
>
> **Author:** Fixed here: b90b979
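Both variants compute the same EIP-4844-style versioned hash: a version byte written over the top byte of `sha256(commitment)`. A quick Python check of that equivalence (toy 48-byte commitment; not the contract's code):

```python
import hashlib

# Toy 48-byte "commitment"; real ones are compressed BLS12-381 G1 points.
commitment = bytes(range(48))
digest = hashlib.sha256(commitment).digest()

# mstore8-style: overwrite the first byte with VERSIONED_HASH_VERSION_KZG (0x01)
overwrite = bytes([0x01]) + digest[1:]

# mask-and-or style (the stack version): clear the top byte, then OR in the version
as_int = int.from_bytes(digest, "big")
masked = ((as_int & ((1 << 248) - 1)) | (0x01 << 248)).to_bytes(32, "big")

assert overwrite == masked
```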
```diff
+      bytes32[1] memory blobHashCheck = [sha256(blobCommitments[i])];
+      // Until we use an external kzg_to_versioned_hash(), calculating it here:
+      // EIP-4844 spec blobhash is 32 bytes: [version, ...sha256(commitment)[1:32]]
+      // The version = VERSIONED_HASH_VERSION_KZG, currently 0x01.
+      assembly {
+        mstore8(blobHashCheck, 0x01)
+      }
+      if (_checkBlob) {
+        assembly {
+          blobHash := blobhash(i)
+        }
+        // The below check ensures that our injected blobCommitments indeed match the real
+        // blobs submitted with this block. They are then used in the blobCommitmentsHash (see below).
+        require(
+          blobHash == blobHashCheck[0], Errors.Rollup__InvalidBlobHash(blobHash, blobHashCheck[0])
+        );
+      } else {
+        blobHash = blobHashCheck[0];
+      }
+      blobHashes[i] = blobHash;
```
> **Reviewer:** Can't highlight it, because it's not a change in this PR, but […]
>
> **Author:** I agree - I aimed to remove […]
```diff
     }
-    // Return the hash of all z, y, and Cs, so we can use them in proof verification later
-    blobPublicInputsHash = sha256(blobPublicInputs);
     // Hash the EVM blob hashes for the block header
+    // TODO(#13430): The below blobsHashesCommitment known as blobsHash elsewhere in the code. The name blobsHashesCommitment is confusingly similar to blobCommitmentsHash
+    // which are different values:
+    // - blobsHash := sha256([blobhash_0, ..., blobhash_m]) = a hash of all blob hashes in a block with m+1 blobs inserted into the header, exists so a user can cross check blobs.
+    // - blobCommitmentsHash := sha256( ...sha256(sha256(C_0), C_1) ... C_n) = iteratively calculated hash of all blob commitments in an epoch with n+1 blobs (see calculateBlobCommitmentsHash()),
+    //   exists so we can validate injected commitments to the rollup circuits correspond to the correct real blobs.
+    // We may be able to combine these values e.g. blobCommitmentsHash := sha256( ...sha256(sha256(blobshash_0), blobshash_1) ... blobshash_l) for an epoch with l+1 blocks.
     blobsHashesCommitment = Hash.sha256ToField(abi.encodePacked(blobHashes));
   }
```
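The reworked input layout (a 1-byte blob count followed by 48-byte compressed commitments) and the in-loop versioned-hash derivation can be sketched in Python; the function and constant names below are hypothetical stand-ins, not the contract's API:

```python
import hashlib

COMPRESSED_POINT_BYTES = 48  # mirrors Constants.BLS12_POINT_COMPRESSED_BYTES
VERSIONED_HASH_VERSION_KZG = 0x01

def parse_blobs_input(blobs_input: bytes):
    """Split a `_blobsInput`-style byte string into per-blob commitments
    and derive each EIP-4844 versioned hash (kzg_to_versioned_hash)."""
    num_blobs = blobs_input[0]
    assert len(blobs_input) == 1 + num_blobs * COMPRESSED_POINT_BYTES
    commitments, versioned_hashes = [], []
    offset = 1  # skip the numBlobs prefix
    for _ in range(num_blobs):
        c = blobs_input[offset:offset + COMPRESSED_POINT_BYTES]
        offset += COMPRESSED_POINT_BYTES
        commitments.append(c)
        # versioned hash = version byte, then sha256(commitment)[1:32]
        h = hashlib.sha256(c).digest()
        versioned_hashes.append(bytes([VERSIONED_HASH_VERSION_KZG]) + h[1:])
    return commitments, versioned_hashes
```

On L1, each derived versioned hash is compared against `blobhash(i)`; here there is no EVM, so the parse simply returns both lists.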
```diff
   /**
-   * @notice Validate a blob.
+   * @notice Validate a batched blob.
    * Input bytes:
-   * input[:32] - versioned_hash
-   * input[32:64] - z
-   * input[64:96] - y
-   * input[96:144] - commitment C
-   * input[144:192] - proof (a commitment to the quotient polynomial q(X))
-   * - This can be relaxed to happen at the time of `submitProof` instead
-   * @notice Apparently there is no guarantee that the blobs will be processed in the order sent
-   * so the use of blobhash(_blobNumber) may fail in production
-   * @param _blobInput - The above bytes to verify a blob
+   * input[:32] - versioned_hash - NB for a batched blob, this is simply the versioned hash of the batched commitment
+   * input[32:64] - z = poseidon2( ...poseidon2(poseidon2(z_0, z_1), z_2) ... z_n)
+   * input[64:96] - y = y_0 + gamma * y_1 + gamma^2 * y_2 + ... + gamma^n * y_n
+   * input[96:144] - commitment C = C_0 + gamma * C_1 + gamma^2 * C_2 + ... + gamma^n * C_n
+   * input[144:192] - proof (a commitment to the quotient polynomial q(X)) = Q_0 + gamma * Q_1 + gamma^2 * Q_2 + ... + gamma^n * Q_n
+   * @param _blobInput - The above bytes to verify a batched blob
+   *
+   * If this function passes where the values of z, y, and C are valid public inputs to the final epoch root proof, then
+   * we know that the data in each blob of the epoch corresponds to the tx effects of all our proven txs in the epoch.
+   *
+   * The rollup circuits calculate each z_i and y_i as above, so if this function passes but they do not match the values from the
+   * circuit, then proof verification will fail.
+   *
+   * Each commitment C_i is injected into the circuits and their correctness is validated using the blobCommitmentsHash, as
+   * explained below in calculateBlobCommitmentsHash().
+   */
-  function validateBlob(bytes calldata _blobInput, uint256 _blobNumber, bool _checkBlob)
-    internal
-    view
-    returns (bytes32 blobHash)
-  {
-    if (!_checkBlob) {
-      return bytes32(_blobInput[0:32]);
-    }
+  function validateBatchedBlob(bytes calldata _blobInput) internal view returns (bool success) {
+    // Staticcall the point eval precompile https://eips.ethereum.org/EIPS/eip-4844#point-evaluation-precompile :
+    (success,) = address(0x0a).staticcall(_blobInput);
+    require(success, Errors.Rollup__InvalidBlobProof(bytes32(_blobInput[0:32])));
+  }
```
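The gamma-power aggregation in the comment above, and the 192-byte precompile input layout, can be sketched in Python. This is a toy model: real `y_i` are BLS12-381 scalar-field elements and `C`/`proof` are compressed G1 points, and `batch_scalars`/`point_eval_input` are hypothetical helper names, not anything from this PR:

```python
# BLS12-381 scalar field modulus, the field blob evaluations live in (EIP-4844).
BLS_MODULUS = 0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001

def batch_scalars(values, gamma):
    # y = y_0 + gamma*y_1 + ... + gamma^n*y_n (mod r), evaluated Horner-style.
    acc = 0
    for v in reversed(values):
        acc = (acc * gamma + v) % BLS_MODULUS
    return acc

def point_eval_input(versioned_hash: bytes, z: bytes, y: bytes,
                     commitment: bytes, proof: bytes) -> bytes:
    # The 192-byte layout fed to the 0x0a point evaluation precompile:
    # [0:32] versioned_hash | [32:64] z | [64:96] y | [96:144] C | [144:192] proof
    assert len(versioned_hash) == len(z) == len(y) == 32
    assert len(commitment) == len(proof) == 48
    return versioned_hash + z + y + commitment + proof
```

The commitments `C_i` and proofs `Q_i` are aggregated with the same gamma powers, but as elliptic-curve points rather than scalars, which this sketch does not model.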
```diff
   /**
+   * @notice Calculate the current state of the blobCommitmentsHash. Called for each new proposed block.
+   * @param _previousblobCommitmentsHash - The previous block's blobCommitmentsHash.
+   * @param _blobCommitments - The commitments corresponding to this block's blobs.
+   * @param _isFirstBlockOfEpoch - Whether this block is the first of an epoch (see below).
+   *
+   * The blobCommitmentsHash is an accumulated value calculated in the rollup circuits as:
+   *   blobCommitmentsHash_i := sha256(blobCommitmentsHash_(i - 1), C_i)
+   * for each blob commitment C_i in an epoch. For the first blob in the epoch (i = 0):
+   *   blobCommitmentsHash_i := sha256(C_0)
+   * which is why we require _isFirstBlockOfEpoch here.
+   *
+   * Each blob commitment is injected into the rollup circuits and we rely on the L1 contracts to validate
+   * these commitments correspond to real blobs. The input _blobCommitments below come from validateBlobs(),
+   * so we know they are valid commitments here.
+   *
+   * We recalculate the same blobCommitmentsHash (which encompasses all claimed blobs in the epoch)
+   * as in the rollup circuits, then use the final value as a public input to the root rollup proof
+   * verification in EpochProofLib.sol.
+   *
+   * If the proof verifies, we know that the injected commitments used in the rollup circuits match
+   * the real commitments to L1 blobs.
+   */
+  function calculateBlobCommitmentsHash(
+    bytes32 _previousblobCommitmentsHash,
+    bytes[] memory _blobCommitments,
+    bool _isFirstBlockOfEpoch
+  ) internal pure returns (bytes32 currentblobCommitmentsHash) {
+    uint256 i = 0;
+    currentblobCommitmentsHash = _previousblobCommitmentsHash;
+    // If we are at the first block of an epoch, we reinitialise the blobCommitmentsHash.
+    // Blob commitments are collected and proven per root rollup proof => per epoch.
+    if (_isFirstBlockOfEpoch) {
+      // Initialise the blobCommitmentsHash
+      currentblobCommitmentsHash = Hash.sha256ToField(abi.encodePacked(_blobCommitments[i++]));
+    }
-    assembly {
-      blobHash := blobhash(_blobNumber)
-    }
-    require(blobHash == bytes32(_blobInput[0:32]), Errors.Rollup__InvalidBlobHash(blobHash));
-
-    // Staticcall the point eval precompile https://eips.ethereum.org/EIPS/eip-4844#point-evaluation-precompile :
-    (bool success,) = address(0x0a).staticcall(_blobInput);
-    require(success, Errors.Rollup__InvalidBlobProof(blobHash));
-  }
+    for (i; i < _blobCommitments.length; i++) {
+      currentblobCommitmentsHash =
+        Hash.sha256ToField(abi.encodePacked(currentblobCommitmentsHash, _blobCommitments[i]));
+    }
+  }
 }
```
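A minimal Python model of the accumulation performed by `calculateBlobCommitmentsHash`. Plain `sha256` stands in for the contracts' `Hash.sha256ToField` (an assumption: the real helper additionally reduces the digest into a field element, so actual on-chain values differ):

```python
import hashlib

def h(data: bytes) -> bytes:
    # Stand-in for Hash.sha256ToField; the real one reduces into the field.
    return hashlib.sha256(data).digest()

def blob_commitments_hash(prev: bytes, commitments: list,
                          is_first_block_of_epoch: bool) -> bytes:
    acc = prev
    start = 0
    if is_first_block_of_epoch:
        # blobCommitmentsHash_0 := H(C_0) - the accumulator restarts each epoch
        acc = h(commitments[0])
        start = 1
    for c in commitments[start:]:
        # blobCommitmentsHash_i := H(blobCommitmentsHash_(i-1), C_i)
        acc = h(acc + c)
    return acc
```

Running this per block, feeding each block's result in as `prev` for the next, reproduces the epoch-wide chain that the root rollup proof exposes as a public input.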
> **Author:** Could you explain this a bit? I think I'm getting confused! We assign the stored `blobCommitmentsHash[blockNumber]` when proposing `blockNumber`, where `blockNumber = rollupStore.tips.pendingBlockNumber`. So won't this function return the newly calculated `blobCommitmentsHash`?
>
> **Reviewer:** At the start of a new epoch, this value will reflect the blobCommitmentsHash as at the end of the previous epoch. If the previous epoch was not proven, this value will reflect the blobCommitmentsHash of the last-proven epoch. So all my comment is highlighting is that the name "get current blobCommitmentsHash" might not do what the caller expects, depending on when the caller calls this.
>
> **Author:** I think I'm missing something - at the start of a new epoch, the `blobCommitmentsHash` stored at the pending block number would be 0 if no blocks had been proposed, or the current `blobCommitmentsHash` if they had, right? Or am I getting this shifted by 1 in my brain?
>
> **Reviewer:** Oh, it's me getting confused. I had in my head that this `rollupStore.blobCommitmentsHash[block_number]` could be recycled every epoch, as a way of saving gas. But that's not what's happening. Pure confusion from me. E.g. I was thinking that for block_number `n`, we could store against a number `m = n % num_blocks_per_epoch`: `rollupStore.blobCommitmentsHash[m]`. Then it's only 5k gas to overwrite this with each proposal, instead of 20k gas. (There's some complexity in that a rollback could be larger than `num_blocks_per_epoch`, so maybe the cyclic window needs to be larger.) Something that can be considered later.
>
> **Author:** We can! This is already documented as a TODO in `RollupStore`. The difficulty is in pruning due to rollbacks and ensuring values are correct for varying epoch sizes (I think we chatted about this during the onsite?).