Merged
141 commits
97e2a01
feat: blob batching methods - nr only
MirandaWood Apr 15, 2025
52801e6
chore: fmt, more tests, rearranging
MirandaWood Apr 16, 2025
35e8be2
feat: BLS12 field, curve methods, blob batching methods, ts only
MirandaWood Apr 16, 2025
db03339
chore: lint, cleanup
MirandaWood Apr 16, 2025
68be71e
chore: remove trusted setup file + test using it (size issues)
MirandaWood Apr 16, 2025
1fa5d49
Revert "chore: remove trusted setup file + test using it (size issues)"
MirandaWood Apr 17, 2025
33d62a7
chore: cleanup packages + increase playground size
MirandaWood Apr 17, 2025
46b8866
feat: address some comments, cleanup
MirandaWood Apr 17, 2025
346ca9a
chore: update some comments
MirandaWood Apr 21, 2025
f655bf5
Merge remote-tracking branch 'origin/mw/blob-batching' into mw/blob-b…
MirandaWood Apr 21, 2025
8d57216
Merge remote-tracking branch 'origin/mw/blob-batching-bls-utils' into…
MirandaWood Apr 21, 2025
9490422
chore: renaming, cleanup
MirandaWood Apr 22, 2025
c300c77
chore: renaming, cleanup
MirandaWood Apr 22, 2025
63ffe96
chore: add issue nums (hopefully force ci cache reset)
MirandaWood Apr 22, 2025
606a942
feat: as isNegative to F, rename proof -> Q
MirandaWood Apr 22, 2025
75d6d35
Merge remote-tracking branch 'origin/mw/blob-batching' into mw/blob-b…
MirandaWood Apr 22, 2025
40ffd8b
Merge remote-tracking branch 'origin/mw/blob-batching-bls-utils' into…
MirandaWood Apr 22, 2025
951d329
chore: bumped vite kb limit 1700 -> 1720
MirandaWood Apr 22, 2025
307cb09
chore: bumped vite kb limit 1700 -> 1750
MirandaWood Apr 22, 2025
5251cd1
feat: adding helpers, constants, docs, etc. for integration
MirandaWood Apr 24, 2025
de8ec1e
Merge remote-tracking branch 'origin/mw/blob-batching-bls-utils' into…
MirandaWood Apr 24, 2025
5f2fe64
feat: WIP batch blobs and validate in rollup
MirandaWood Apr 24, 2025
070cbd2
feat: add more tests, cleanup, remove some warnings
MirandaWood Apr 25, 2025
f9e5364
chore: update to fixed noir version
TomAFrench Apr 29, 2025
201711e
feat: add final accumulator pub inputs for root
MirandaWood May 1, 2025
ca0da9d
Merge remote-tracking branch 'origin/mw/blob-batching' into mw/blob-b…
MirandaWood May 7, 2025
d5f11d4
Merge remote-tracking branch 'origin/mw/blob-batching-bls-utils' into…
MirandaWood May 7, 2025
34e9e1a
Merge remote-tracking branch 'origin/mw/blob-batching-bls-utils-ts' i…
MirandaWood May 7, 2025
d3ac058
chore: rename v -> blob_commitments_hash, move noir ref further up stack
MirandaWood May 13, 2025
6750476
Merge remote-tracking branch 'origin/mw/blob-batching-bls-utils' into…
MirandaWood May 13, 2025
8e968ac
chore: rename v to blobCommitmentsHash
MirandaWood May 13, 2025
7cd9b97
Merge remote-tracking branch 'origin/mw/blob-batching-bls-utils-ts' i…
MirandaWood May 13, 2025
8a2570f
chore: renaming, reduce root pub inputs, cleanup
MirandaWood May 13, 2025
95ee6bf
chore: add back testing fixtures (too much effort to remove rn)
MirandaWood May 14, 2025
6051d14
chore: add back more testing fixtures
MirandaWood May 14, 2025
b1771c8
feat: WIP integrate batched blobs into l1 contracts + ts
MirandaWood May 15, 2025
1fc2c0d
Merge remote-tracking branch 'origin/mw/blob-batching' into mw/blob-b…
MirandaWood May 15, 2025
cda163a
chore: use updated methods from bignum, remove warnings
MirandaWood May 15, 2025
4806435
Merge remote-tracking branch 'origin/mw/blob-batching-bls-utils' into…
MirandaWood May 15, 2025
508a11f
Merge remote-tracking branch 'origin/mw/blob-batching-bls-utils-ts' i…
MirandaWood May 15, 2025
f92a3d5
chore: update methods to bignum 0.7, cleanup
MirandaWood May 15, 2025
0e45923
Merge remote-tracking branch 'origin/mw/blob-batching-integration' in…
MirandaWood May 15, 2025
cfb8a32
chore: fmt, move to bigcurve mw/bump to avoid private module warnings
MirandaWood May 15, 2025
7986bc4
feat: use branch for visibility warnings, handle blobcommitmentshash …
MirandaWood May 15, 2025
49c7be3
chore: switch bigcurve branch to remove visibility warnings
MirandaWood May 16, 2025
ee900fe
Merge remote-tracking branch 'origin/mw/blob-batching' into mw/blob-b…
MirandaWood May 16, 2025
2216519
Merge remote-tracking branch 'origin/mw/blob-batching-bls-utils' into…
MirandaWood May 16, 2025
05cffba
Merge remote-tracking branch 'origin/mw/blob-batching-bls-utils-ts' i…
MirandaWood May 16, 2025
d32742e
chore: cleanup, remove warnings
MirandaWood May 16, 2025
cdb2136
Merge remote-tracking branch 'origin/mw/blob-batching-integration' in…
MirandaWood May 16, 2025
dffe9b4
chore: remove logs oops
MirandaWood May 16, 2025
563e765
chore: oops remove more accidentally pushed console logs
MirandaWood May 16, 2025
6e830c7
chore: import fixes and hacks
MirandaWood May 16, 2025
b2f808d
fix: properly handle epoch boundary in l1 pub test
MirandaWood May 16, 2025
2578442
chore: update prover.tomls (i hate past me)
MirandaWood May 16, 2025
a01dc36
chore: cleanup + dummy changes to force n-p recompile for ci
MirandaWood May 19, 2025
c2637c3
chore: account for faster proving in epochs_empty test
MirandaWood May 20, 2025
f662ee3
chore: oops remove logs
MirandaWood May 20, 2025
55bc974
Merge remote-tracking branch 'origin/mw/blob-batching' into mw/blob-b…
MirandaWood May 20, 2025
df48994
Merge remote-tracking branch 'origin/mw/blob-batching-bls-utils' into…
MirandaWood May 20, 2025
c1e149b
Merge remote-tracking branch 'origin/mw/blob-batching-bls-utils-ts' i…
MirandaWood May 20, 2025
df7665b
Merge remote-tracking branch 'origin/mw/blob-batching-integration' in…
MirandaWood May 20, 2025
594fb15
chore: update constants, fixtures, fmt
MirandaWood May 20, 2025
dc1bd18
chore: fmt
MirandaWood May 20, 2025
7a2a1e7
Merge remote-tracking branch 'origin/mw/blob-batching-bls-utils-ts' i…
MirandaWood May 20, 2025
af7c56e
Merge remote-tracking branch 'origin/mw/blob-batching-integration' in…
MirandaWood May 20, 2025
6b31327
chore: fmt
MirandaWood May 20, 2025
cb146ae
fix: include is_inf in all serialization so recursion works
MirandaWood May 20, 2025
18db30a
Merge remote-tracking branch 'origin/mw/blob-batching-bls-utils' into…
MirandaWood May 20, 2025
1e6391e
Merge remote-tracking branch 'origin/mw/blob-batching-bls-utils-ts' i…
MirandaWood May 20, 2025
dbf32e7
Merge remote-tracking branch 'origin/mw/blob-batching-integration' in…
MirandaWood May 20, 2025
98eacb4
chore: update constants
MirandaWood May 20, 2025
3398526
chore: update import
MirandaWood May 20, 2025
e0f687a
Merge remote-tracking branch 'origin/mw/blob-batching-bls-utils' into…
MirandaWood May 20, 2025
c8ec933
Merge remote-tracking branch 'origin/mw/blob-batching-bls-utils-ts' i…
MirandaWood May 20, 2025
341a96c
Merge remote-tracking branch 'origin/mw/blob-batching-integration' in…
MirandaWood May 20, 2025
5f02ae1
feat: add point compression unit test
MirandaWood May 20, 2025
480b8de
Merge remote-tracking branch 'origin/mw/blob-batching-bls-utils' into…
MirandaWood May 20, 2025
51899bd
chore: add fixture test for point compression, bring down new bls met…
MirandaWood May 21, 2025
f5bc35e
Merge remote-tracking branch 'origin/mw/blob-batching-bls-utils-ts' i…
MirandaWood May 21, 2025
3bda135
Merge remote-tracking branch 'origin/mw/blob-batching-integration' in…
MirandaWood May 21, 2025
11b0ec9
chore: cleanup, remove new node lint warnings
MirandaWood May 21, 2025
ea162d0
feat: give up on omitting is_inf field for now, cleanup
MirandaWood May 21, 2025
4c5c437
Merge remote-tracking branch 'origin/mw/blob-batching' into mw/blob-b…
MirandaWood May 22, 2025
1b7fbf0
Merge remote-tracking branch 'origin/mw/blob-batching-bls-utils' into…
MirandaWood May 22, 2025
c9b39f0
Merge remote-tracking branch 'origin/mw/blob-batching-bls-utils-ts' i…
MirandaWood May 22, 2025
9c075ed
Merge remote-tracking branch 'origin/mw/blob-batching-integration' in…
MirandaWood May 22, 2025
e44d07d
chore: update fixtures
MirandaWood May 22, 2025
f392979
chore: update (seemingly unused?) epoch proof fixture
MirandaWood May 22, 2025
8667c5c
Merge remote-tracking branch 'origin/mw/blob-batching' into mw/blob-b…
MirandaWood May 28, 2025
ff52662
chore: bump bignum
MirandaWood May 28, 2025
0c91085
Merge remote-tracking branch 'origin/mw/blob-batching-bls-utils' into…
MirandaWood May 28, 2025
09e638a
Merge remote-tracking branch 'origin/mw/blob-batching-bls-utils-ts' i…
MirandaWood May 28, 2025
b96c499
Merge remote-tracking branch 'origin/mw/blob-batching-integration' in…
MirandaWood May 28, 2025
b2c9adb
chore: post merge fixes, bring down small changes
MirandaWood May 28, 2025
6b79af7
Merge remote-tracking branch 'origin/mw/blob-batching-integration' in…
MirandaWood May 28, 2025
76c4a68
chore: remake constants, fixtures, & merge fixes
MirandaWood May 28, 2025
2452b5c
feat: remove old constants, increase timeout
MirandaWood May 28, 2025
3064028
feat: address some comments
MirandaWood May 28, 2025
cef7c07
Merge remote-tracking branch 'origin/mw/blob-batching-bls-utils-ts' i…
MirandaWood May 28, 2025
198c2b1
Merge remote-tracking branch 'origin/mw/blob-batching-integration' in…
MirandaWood May 28, 2025
0de32bc
chore: test using toEqual in jest
MirandaWood May 29, 2025
ab5461c
Revert "chore: test using toEqual in jest"
MirandaWood May 29, 2025
b358b3e
chore: test using toEqual in jest
MirandaWood May 29, 2025
fb8e45a
feat: init bigint and buffer, remove static compress
MirandaWood May 29, 2025
be91807
Merge remote-tracking branch 'origin/mw/blob-batching-bls-utils-ts' i…
MirandaWood May 29, 2025
5f56855
feat: rename, add constant
MirandaWood May 29, 2025
ff1a70e
Merge remote-tracking branch 'origin/mw/blob-batching-integration' in…
MirandaWood May 29, 2025
5b7d7bd
feat: address comments, remove old notes, update constants
MirandaWood May 29, 2025
28b81d7
feat: check epoch duration, clear old comments + unnec. checks
MirandaWood May 30, 2025
18e3721
feat: rework orch blob accumulation
MirandaWood May 30, 2025
bb69627
chore: import blob-lib into prover-node
MirandaWood May 30, 2025
740b952
chore: revert blobcommitment mess (now exists in cleanup PR)
MirandaWood May 30, 2025
c75232b
feat: replace empty blob assumption
MirandaWood May 30, 2025
9e80ff2
feat: address some comments
MirandaWood May 30, 2025
2d1f35e
Merge remote-tracking branch 'origin/mw/blob-batching-bls-utils' into…
MirandaWood May 30, 2025
df4c693
Merge remote-tracking branch 'origin/mw/blob-batching-bls-utils-ts' i…
MirandaWood May 30, 2025
542e851
Merge remote-tracking branch 'origin/mw/blob-batching-integration' in…
MirandaWood May 30, 2025
b991ad8
feat: integrate bls12 point constant
MirandaWood May 30, 2025
ea19acd
chore: add extra check before blob acc init
MirandaWood Jun 2, 2025
e89fd4a
Merge remote-tracking branch 'origin/mw/blob-batching-bls-utils' into…
MirandaWood Jun 2, 2025
92f6e5f
chore: renaming, bring down changes from integration branch, cleanup
MirandaWood Jun 2, 2025
88b4b28
Merge remote-tracking branch 'origin/mw/blob-batching-bls-utils' into…
MirandaWood Jun 2, 2025
0ee11fd
chore: cleanup, bring down changes from other PRs
MirandaWood Jun 2, 2025
7429aac
Merge remote-tracking branch 'origin/mw/blob-batching-bls-utils-ts' i…
MirandaWood Jun 2, 2025
bbaa96f
Merge remote-tracking branch 'origin/mw/blob-batching-integration' in…
MirandaWood Jun 2, 2025
ac78bf3
chore: renaming from methods branch
MirandaWood Jun 2, 2025
0716329
Merge remote-tracking branch 'origin/mw/blob-batching-integration' in…
MirandaWood Jun 2, 2025
0d4ebdd
feat: address comments, docs, renaming
MirandaWood Jun 2, 2025
3ae3a8b
chore: update fixtures
MirandaWood Jun 2, 2025
4a6ee02
Merge remote-tracking branch 'origin/mw/blob-batching' into mw/blob-b…
MirandaWood Jun 3, 2025
4c0f1a9
chore: bump bignum post merge
MirandaWood Jun 3, 2025
3c81612
Merge remote-tracking branch 'origin/mw/blob-batching-integration' in…
MirandaWood Jun 3, 2025
1a16d7a
chore: revert fix to epochs test (no longer req)
MirandaWood Jun 3, 2025
42230da
Merge remote-tracking branch 'origin/mw/blob-batching' into mw/blob-b…
MirandaWood Jun 3, 2025
1f912b7
Merge remote-tracking branch 'origin/mw/blob-batching-integration' in…
MirandaWood Jun 3, 2025
4f2320f
chore: bring down next fix from #14722
MirandaWood Jun 3, 2025
5783929
chore: bring down next fix again
MirandaWood Jun 3, 2025
295c59d
Merge remote-tracking branch 'origin/mw/blob-batching' into mw/blob-b…
MirandaWood Jun 4, 2025
39ff55d
chore: clarify docs, add issue numbers to todos
MirandaWood Jun 4, 2025
83e7d3a
fix: bad merge
MirandaWood Jun 4, 2025
11 changes: 8 additions & 3 deletions l1-contracts/src/core/Rollup.sol
@@ -316,7 +316,7 @@ contract Rollup is IStaking, IValidatorSelection, IRollup, RollupCore {
external
view
override(IRollup)
returns (bytes32[] memory, bytes32, bytes32)
returns (bytes32[] memory, bytes32, bytes[] memory)
{
return ExtRollupLib.validateBlobs(_blobsInput, checkBlob);
}
@@ -357,13 +357,18 @@ contract Rollup is IStaking, IValidatorSelection, IRollup, RollupCore {
return FeeHeaderLib.decompress(FeeLib.getStorage().feeHeaders[_blockNumber]);
}

function getBlobPublicInputsHash(uint256 _blockNumber)
function getBlobCommitmentsHash(uint256 _blockNumber)
external
view
override(IRollup)
returns (bytes32)
{
return STFLib.getStorage().blobPublicInputsHashes[_blockNumber];
return STFLib.getStorage().blobCommitmentsHash[_blockNumber];
}

function getCurrentBlobCommitmentsHash() external view override(IRollup) returns (bytes32) {
Contributor

Suggested change
function getCurrentBlobCommitmentsHash() external view override(IRollup) returns (bytes32) {
// Warning: At the start of a new epoch, the returned value will be that of the last proven epoch.
// The caller of this function must handle this.
function getCurrentBlobCommitmentsHash() external view override(IRollup) returns (bytes32) {

Contributor Author

Could you explain this a bit? I think I'm getting confused! We assign the stored blobCommitmentsHash[blockNumber] when proposing blockNumber, where blockNumber = rollupStore.tips.pendingBlockNumber. So won't this function return the newly calculated blobCommitmentsHash?

Contributor

At the start of a new epoch, this value will reflect the blobCommitmentsHash as at the end of the previous epoch.
If the previous epoch was not proven, this value will reflect the blobCommitmentsHash of the last-proven epoch.

So all my comment is highlighting is that the name "get current blobCommitmentsHash" might not do what the caller expects, depending on when the caller calls this.

Contributor Author

At the start of a new epoch, this value will reflect the blobCommitmentsHash as at the end of the previous epoch.

I think I'm missing something - at the start of a new epoch, the blobCommitmentsHash stored at the pending block number would be 0 if no blocks had been proposed, or the current blobCommitmentsHash if they had, right? Or I am getting this shifted by 1 in my brain?

Contributor

Oh, it's me getting confused.
I had in my head that this rollupStore.blobCommitmentsHash[block_number] could be recycled every epoch, as a way of saving gas. But that's not what's happening. Pure confusion from me.

E.g. I was thinking for block_number n, we could store against a number m = n % num_blocks_per_epoch.

rollupStore.blobCommitmentsHash[m]. Then it's only 5k gas to overwrite this with each proposal, instead of 20k gas.

(There's some complexity in that a rollback could be larger than num_blocks_per_epoch, so maybe the cyclic window needs to be larger.)

Something that can be considered later.

Contributor Author (@MirandaWood, Jun 4, 2025)

E.g. I was thinking for block_number n, we could store against a number m = n % num_blocks_per_epoch.

We can! This is already documented as a TODO in RollupStore. The difficulty is in pruning due to rollbacks and ensuring values are correct for varying epoch sizes (I think we chatted about this during the onsite?).

RollupStore storage rollupStore = STFLib.getStorage();
return rollupStore.blobCommitmentsHash[rollupStore.tips.pendingBlockNumber];
}

function getConfig(address _attester)
13 changes: 7 additions & 6 deletions l1-contracts/src/core/interfaces/IRollup.sol
@@ -27,7 +27,7 @@ struct SubmitEpochRootProofArgs {
uint256 end; // inclusive
PublicInputArgs args;
bytes32[] fees;
bytes blobPublicInputs;
bytes blobInputs;
bytes proof;
}

@@ -93,18 +93,18 @@ struct RollupConfig {
uint256 version;
}

// The below blobPublicInputsHashes are filled when proposing a block, then used to verify an epoch proof.
// TODO(#8955): When implementing batched kzg proofs, store one instance per epoch rather than block
struct RollupStore {
ChainTips tips; // put first such that the struct slot structure is easy to follow for cheatcodes
mapping(uint256 blockNumber => BlockLog log) blocks;
mapping(uint256 blockNumber => bytes32) blobPublicInputsHashes;
mapping(address => uint256) sequencerRewards;
mapping(Epoch => EpochRewards) epochRewards;
// @todo Below can be optimised with a bitmap as we can benefit from provers likely proving for epochs close
// to one another.
mapping(address prover => mapping(Epoch epoch => bool claimed)) proverClaimed;
RollupConfig config;
// TODO(#14646): We only ever need to store AZTEC_MAX_EPOCH_DURATION values below => make fixed length and overwrite once we start a new epoch
Contributor

I'm guessing the AZTEC_MAX_EPOCH_DURATION is here to keep some flexibility for the circuits, so they don't necessarily have to follow the actual epoch duration.

However, it is not sufficient to store just one epoch's worth, as a prune can remove more than that; I think the value you are actually interested in is the submission window.

Regardless, something that could potentially be of interest (not now, I would expect) is keeping a number of running hashes in there instead, essentially allowing you, at the time of proof submission, to read just one storage variable instead of one per block.

Separately, why is this value in here and not part of the BlockLog? It seems more convenient to keep them close (in BlockLog), as it is much clearer when doing optimisations what can be grouped. For example, storing a hash of the BlockLog content reduces it to a single store, and it is just simpler to reason about.

Contributor Author

as prune can remove more than that, I think that the value you are probably interested in is the submission window.

Ah ok. I didn't realise this. This was the reason why the value is here and not BlockLog - I wanted it to be separate so it would be possible to change later to a fixed length thing and just re use the slots once an epoch is proven. But that was based on a wrong assumption, so I'll add the value to BlockLog for now.

Regardless, something that could potentially be of interest (not now, I would expect) is keeping a number of running hashes in there instead, essentially allowing you, at the time of proof submission, to read just one storage variable instead of one per block.

I had assumed this wouldn't work because of pruning, but thinking about it now this would work. We only need the 'current' value of the epoch, then iterate and update that value, unless it's the first block of an epoch where we initialise the value. I'll add an issue for this, thanks!

Contributor Author

Fixed here: 32bf652
(Didn't have time to test yet, may fail)

// Requires us to clear values on successful proven epoch and check when a block starts a new epoch.
mapping(uint256 blockNumber => bytes32) blobCommitmentsHash; // = H(...H(H(commitment_0), commitment_1).... commitment_n) - used to validate we are using the same blob commitments on L1 and in the rollup circuit
}
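The comment above defines blobCommitmentsHash as a running hash: H(...H(H(commitment_0), commitment_1)... commitment_n). A minimal Python sketch of that accumulation pattern follows; this is illustrative only, not the contract's actual implementation (the Solidity version lives in calculateBlobCommitmentsHash() and may reduce or pad values differently):

```python
import hashlib


def blob_commitments_hash(commitments: list[bytes]) -> bytes:
    """Running sha256 over blob commitments:
    acc = sha256(C_0), then acc = sha256(acc || C_i) for each later C_i."""
    acc = hashlib.sha256(commitments[0]).digest()
    for c in commitments[1:]:
        acc = hashlib.sha256(acc + c).digest()
    return acc
```

Because the hash is iterative, the contract only needs to keep the latest accumulator value per block to extend it on the next proposal.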

interface ITestRollup {
@@ -184,7 +184,7 @@ interface IRollup is IRollupCore {
function validateBlobs(bytes calldata _blobsInputs)
external
view
returns (bytes32[] memory, bytes32, bytes32);
returns (bytes32[] memory, bytes32, bytes[] memory);

function getManaBaseFeeComponentsAt(Timestamp _timestamp, bool _inFeeAsset)
external
@@ -203,7 +203,8 @@ interface IRollup is IRollupCore {
function getPendingBlockNumber() external view returns (uint256);
function getBlock(uint256 _blockNumber) external view returns (BlockLog memory);
function getFeeHeader(uint256 _blockNumber) external view returns (FeeHeader memory);
function getBlobPublicInputsHash(uint256 _blockNumber) external view returns (bytes32);
function getBlobCommitmentsHash(uint256 _blockNumber) external view returns (bytes32);
function getCurrentBlobCommitmentsHash() external view returns (bytes32);

function getSequencerRewards(address _sequencer) external view returns (uint256);
function getCollectiveProverRewardsForEpoch(Epoch _epoch) external view returns (uint256);
5 changes: 2 additions & 3 deletions l1-contracts/src/core/libraries/ConstantsGen.sol
@@ -23,10 +23,9 @@ library Constants {
uint256 internal constant GENESIS_ARCHIVE_ROOT =
1002640778211850180189505934749257244705296832326768971348723156503780793518;
uint256 internal constant FEE_JUICE_ADDRESS = 5;
uint256 internal constant BLOB_PUBLIC_INPUTS = 6;
uint256 internal constant BLOB_PUBLIC_INPUTS_BYTES = 112;
uint256 internal constant BLS12_POINT_COMPRESSED_BYTES = 48;
uint256 internal constant PROPOSED_BLOCK_HEADER_LENGTH_BYTES = 348;
uint256 internal constant ROOT_ROLLUP_PUBLIC_INPUTS_LENGTH = 1015;
uint256 internal constant ROOT_ROLLUP_PUBLIC_INPUTS_LENGTH = 158;
uint256 internal constant NUM_MSGS_PER_BASE_PARITY = 4;
uint256 internal constant NUM_BASE_PARITY_PER_ROOT_PARITY = 4;
}
10 changes: 5 additions & 5 deletions l1-contracts/src/core/libraries/Errors.sol
@@ -57,9 +57,8 @@ library Errors {
error Rollup__InvalidProof(); // 0xa5b2ba17
error Rollup__InvalidProposedArchive(bytes32 expected, bytes32 actual); // 0x32532e73
error Rollup__InvalidTimestamp(Timestamp expected, Timestamp actual); // 0x3132e895
error Rollup__InvalidBlobHash(bytes32 blobHash); // 0xc4a168c6
error Rollup__InvalidBlobHash(bytes32 expected, bytes32 actual); // 0x13031e6a
error Rollup__InvalidBlobProof(bytes32 blobHash); // 0x5ca17bef
error Rollup__InvalidBlobPublicInputsHash(bytes32 expected, bytes32 actual); // 0xfe6b4994
error Rollup__NoEpochToProve(); // 0xcbaa3951
error Rollup__NonSequentialProving(); // 0x1e5be132
error Rollup__NothingToPrune(); // 0x850defd3
@@ -71,9 +70,10 @@ library Errors {
error Rollup__NonZeroDaFee(); // 0xd9c75f52
error Rollup__InvalidBasisPointFee(uint256 basisPointFee); // 0x4292d136
error Rollup__InvalidManaBaseFee(uint256 expected, uint256 actual); // 0x73b6d896
error Rollup__StartAndEndNotSameEpoch(Epoch start, Epoch end);
error Rollup__StartIsNotFirstBlockOfEpoch();
error Rollup__StartIsNotBuildingOnProven();
error Rollup__StartAndEndNotSameEpoch(Epoch start, Epoch end); // 0xb64ec33e
error Rollup__StartIsNotFirstBlockOfEpoch(); // 0x4ef11e0d
error Rollup__StartIsNotBuildingOnProven(); // 0x4a59f42e
error Rollup__TooManyBlocksInEpoch(uint256 expected, uint256 actual); // 0x7d5b1408
error Rollup__AlreadyClaimed(address prover, Epoch epoch);
error Rollup__NotPastDeadline(Slot deadline, Slot currentSlot);
error Rollup__PastDeadline(Slot deadline, Slot currentSlot);
148 changes: 104 additions & 44 deletions l1-contracts/src/core/libraries/rollup/BlobLib.sol
@@ -29,74 +29,134 @@ library BlobLib {
* @notice Validate an L2 block's blobs and return the hashed blobHashes and public inputs.
* Input bytes:
* input[:1] - num blobs in block
* input[1:] - 192 * num blobs of the above _blobInput
* @param _blobsInput - The above bytes to verify a blob
* input[1:] - blob commitments (48 bytes * num blobs in block)
* @param _blobsInput - The above bytes to verify our input blob commitments match real blobs
* @param _checkBlob - Whether to skip blob related checks. Hardcoded to true (See RollupCore.sol -> checkBlob), exists only to be overriden in tests.
*/
Contributor

Please add @return for the different values here to make it a bit simpler to follow.

Contributor

Also probably good to add an assumption about there not being any non-aztec blobs in the tx.

Contributor Author

Would it be worth adding a loop from numBlobs to whatever the max is and ensuring the blobHash is 0 for those?

Contributor Author

Added here: b90b979

function validateBlobs(bytes calldata _blobsInput, bool _checkBlob)
internal
view
returns (
// All of the blob hashes included in this blob
// All of the blob hashes included in this block
bytes32[] memory blobHashes,
bytes32 blobsHashesCommitment,
bytes32 blobPublicInputsHash
bytes[] memory blobCommitments
)
{
// We cannot input the incorrect number of blobs below, as the blobsHash
// and epoch proof verification will fail.
uint8 numBlobs = uint8(_blobsInput[0]);
blobHashes = new bytes32[](numBlobs);
bytes memory blobPublicInputs;
blobCommitments = new bytes[](numBlobs);
bytes32 blobHash;
// Add 1 for the numBlobs prefix
uint256 blobInputStart = 1;
for (uint256 i = 0; i < numBlobs; i++) {
// Add 1 for the numBlobs prefix
uint256 blobInputStart = i * 192 + 1;
// Since an invalid blob hash here would fail the consensus checks of
// the header, the `blobInput` is implicitly accepted by consensus as well.
blobHashes[i] = validateBlob(_blobsInput[blobInputStart:blobInputStart + 192], i, _checkBlob);
// We want to extract the 112 bytes we use for public inputs:
// * input[32:64] - z
// * input[64:96] - y
// * input[96:144] - commitment C
// Out of 192 bytes per blob.
blobPublicInputs = abi.encodePacked(
blobPublicInputs,
_blobsInput[blobInputStart + 32:blobInputStart + 32 + Constants.BLOB_PUBLIC_INPUTS_BYTES]
// Commitments = arrays of bytes48 compressed points
blobCommitments[i] = abi.encodePacked(
_blobsInput[blobInputStart:blobInputStart + Constants.BLS12_POINT_COMPRESSED_BYTES]
);
blobInputStart += Constants.BLS12_POINT_COMPRESSED_BYTES;

// TODO(#14646): Use kzg_to_versioned_hash & VERSIONED_HASH_VERSION_KZG
// Using bytes32 array to force bytes into memory
Contributor

Why do you want to force it into memory vs. keeping it on the stack here?

Personally, I think it is slightly simpler to just read as some binary operations on the stack, e.g., as seen below. When seeing anything with arrays and memory, my mind will first be thinking "oh is it updating the length of the array" as that is what would have happened if it was not a fixed size.

bytes32 blobHashCheck = bytes32(
  (
    uint256(sha256(blobCommitments[i]))
      & 0x00FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
  ) | 0x0100000000000000000000000000000000000000000000000000000000000000
);

Contributor Author

Purely because I thought it was a little clearer to see the 'overwriting' happen with mstore but you raise a good point. I will use your suggestion, thanks

Contributor Author

Fixed here: b90b979

bytes32[1] memory blobHashCheck = [sha256(blobCommitments[i])];
// Until we use an external kzg_to_versioned_hash(), calculating it here:
// EIP-4844 spec blobhash is 32 bytes: [version, ...sha256(commitment)[1:32]]
// The version = VERSIONED_HASH_VERSION_KZG, currently 0x01.
assembly {
mstore8(blobHashCheck, 0x01)
}
if (_checkBlob) {
assembly {
blobHash := blobhash(i)
}
// The below check ensures that our injected blobCommitments indeed match the real
// blobs submitted with this block. They are then used in the blobCommitmentsHash (see below).
require(
blobHash == blobHashCheck[0], Errors.Rollup__InvalidBlobHash(blobHash, blobHashCheck[0])
);
} else {
blobHash = blobHashCheck[0];
}
blobHashes[i] = blobHash;
Contributor

Can't highlight it, because it's not a change in this PR, but blobHashesCommitment is confusingly similar to blobCommitmentsHash. Perhaps it should be blobHashesHash and blobCommitmentsHash? And perhaps even a comment that this is subtly different from the blobCommitmentsHash.

Contributor Author

I agree - I aimed to remove blobHashesCommitment (I also named it blobsHash but not sure what happened there, or whether I changed it after a review) since it's not protocol crucial. But I don't think we have time now, so I'll add a comment

}
// Return the hash of all z, y, and Cs, so we can use them in proof verification later
blobPublicInputsHash = sha256(blobPublicInputs);
// Hash the EVM blob hashes for the block header
// TODO(#13430): The below blobsHashesCommitment known as blobsHash elsewhere in the code. The name blobsHashesCommitment is confusingly similar to blobCommitmentsHash
// which are different values:
// - blobsHash := sha256([blobhash_0, ..., blobhash_m]) = a hash of all blob hashes in a block with m+1 blobs inserted into the header, exists so a user can cross check blobs.
// - blobCommitmentsHash := sha256( ...sha256(sha256(C_0), C_1) ... C_n) = iteratively calculated hash of all blob commitments in an epoch with n+1 blobs (see calculateBlobCommitmentsHash()),
// exists so we can validate injected commitments to the rollup circuits correspond to the correct real blobs.
// We may be able to combine these values e.g. blobCommitmentsHash := sha256( ...sha256(sha256(blobshash_0), blobshash_1) ... blobshash_l) for an epoch with l+1 blocks.
blobsHashesCommitment = Hash.sha256ToField(abi.encodePacked(blobHashes));
}
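The distinction drawn in the TODO above can be sketched in Python. This is a toy model: the contract's `Hash.sha256ToField` also reduces the digest into a field element, which is omitted here.

```python
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

# blobsHash: a single hash over the concatenation of all EVM blob hashes in
# one block, inserted into the header so a user can cross-check the blobs.
def blobs_hash(blob_hashes: list[bytes]) -> bytes:
    return sha256(b"".join(blob_hashes))

# blobCommitmentsHash: an iteratively folded hash over all blob commitments
# in an epoch: sha256( ...sha256(sha256(C_0), C_1) ... C_n).
def blob_commitments_hash(commitments: list[bytes]) -> bytes:
    acc = sha256(commitments[0])
    for c in commitments[1:]:
        acc = sha256(acc + c)
    return acc
```

The first is flat (per block), the second is chained (per epoch), so equal inputs in different orders or groupings produce different accumulator values.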

/**
 * @notice Validate a batched blob.
 * Input bytes:
 * input[:32] - versioned_hash - NB for a batched blob, this is simply the versioned hash of the batched commitment
 * input[32:64] - z = poseidon2( ...poseidon2(poseidon2(z_0, z_1), z_2) ... z_n)
 * input[64:96] - y = y_0 + gamma * y_1 + gamma^2 * y_2 + ... + gamma^n * y_n
 * input[96:144] - commitment C = C_0 + gamma * C_1 + gamma^2 * C_2 + ... + gamma^n * C_n
 * input[144:192] - proof (a commitment to the quotient polynomial q(X)) = Q_0 + gamma * Q_1 + gamma^2 * Q_2 + ... + gamma^n * Q_n
 * @param _blobInput - The above bytes to verify a batched blob
 *
 * If this function passes, and the values of z, y, and C are valid public inputs to the final epoch root proof, then
 * we know that the data in each blob of the epoch corresponds to the tx effects of all our proven txs in the epoch.
 *
 * The rollup circuits calculate each z_i and y_i as above, so if this function passes but they do not match the values from the
 * circuits, then proof verification will fail.
 *
 * Each commitment C_i is injected into the circuits and their correctness is validated using the blobCommitmentsHash, as
 * explained below in calculateBlobCommitmentsHash().
 */
function validateBatchedBlob(bytes calldata _blobInput) internal view returns (bool success) {
  // Staticcall the point eval precompile https://eips.ethereum.org/EIPS/eip-4844#point-evaluation-precompile :
  (success,) = address(0x0a).staticcall(_blobInput);
  require(success, Errors.Rollup__InvalidBlobProof(bytes32(_blobInput[0:32])));
}
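The gamma-weighted sums in the comment above can be illustrated over a toy prime field. This is only a sketch: the real y_i live in the BLS12-381 scalar field, C_i and Q_i are curve points combined with group arithmetic, and gamma is a Fiat-Shamir challenge; here plain integers modulo a small prime stand in for all of them.

```python
# Toy prime modulus standing in for the BLS12-381 scalar field.
P = 0xFFFFFFFB  # 2^32 - 5, prime

def batch_openings(values: list[int], gamma: int) -> int:
    """Fold values as v_0 + gamma*v_1 + gamma^2*v_2 + ... (mod P),
    mirroring how y (and, with group arithmetic, C and Q) are batched."""
    acc = 0
    for i, v in enumerate(values):
        acc = (acc + pow(gamma, i, P) * v) % P
    return acc
```

Because the same challenge gamma weights y, C, and Q identically, a single point-evaluation precompile call over the folded triple checks all of the underlying per-blob openings at once.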

/**
* @notice Calculate the current state of the blobCommitmentsHash. Called for each new proposed block.
* @param _previousblobCommitmentsHash - The previous block's blobCommitmentsHash.
* @param _blobCommitments - The commitments corresponding to this block's blobs.
* @param _isFirstBlockOfEpoch - Whether this block is the first of an epoch (see below).
*
* The blobCommitmentsHash is an accumulated value calculated in the rollup circuits as:
* blobCommitmentsHash_i := sha256(blobCommitmentsHash_(i - 1), C_i)
* for each blob commitment C_i in an epoch. For the first blob in the epoch (i = 0):
* blobCommitmentsHash_i := sha256(C_0)
* which is why we require _isFirstBlockOfEpoch here.
*
* Each blob commitment is injected into the rollup circuits and we rely on the L1 contracts to validate
* these commitments correspond to real blobs. The input _blobCommitments below come from validateBlobs()
* so we know they are valid commitments here.
*
* We recalculate the same blobCommitmentsHash (which encompasses all claimed blobs in the epoch)
* as in the rollup circuits, then use the final value as a public input to the root rollup proof
* verification in EpochProofLib.sol.
*
* If the proof verifies, we know that the injected commitments used in the rollup circuits match
* the real commitments to L1 blobs.
*
*/
function calculateBlobCommitmentsHash(
bytes32 _previousblobCommitmentsHash,
bytes[] memory _blobCommitments,
bool _isFirstBlockOfEpoch
) internal pure returns (bytes32 currentblobCommitmentsHash) {
uint256 i = 0;
currentblobCommitmentsHash = _previousblobCommitmentsHash;
// If we are at the first block of an epoch, we reinitialise the blobCommitmentsHash.
// Blob commitments are collected and proven per root rollup proof => per epoch.
if (_isFirstBlockOfEpoch) {
// Initialise the blobCommitmentsHash
currentblobCommitmentsHash = Hash.sha256ToField(abi.encodePacked(_blobCommitments[i++]));
}
  for (i; i < _blobCommitments.length; i++) {
    currentblobCommitmentsHash =
      Hash.sha256ToField(abi.encodePacked(currentblobCommitmentsHash, _blobCommitments[i]));
  }
}
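A minimal Python mirror of the accumulation above, assuming plain sha256 in place of `Hash.sha256ToField` (i.e. ignoring the reduction to a field element):

```python
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def calculate_blob_commitments_hash(
    previous: bytes, commitments: list[bytes], is_first_block_of_epoch: bool
) -> bytes:
    i = 0
    acc = previous
    # The accumulator is reinitialised at the start of each epoch, since
    # blob commitments are collected per root rollup proof => per epoch.
    if is_first_block_of_epoch:
        acc = sha256(commitments[0])
        i = 1
    while i < len(commitments):
        acc = sha256(acc + commitments[i])
        i += 1
    return acc
```

Chaining two blocks of the same epoch reproduces sha256( ...sha256(sha256(C_0), C_1) ... C_n) over all of the epoch's commitments, matching the value the rollup circuits expose as a public input.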