
feat: blob batching methods (ts) #13606

Merged: MirandaWood merged 59 commits into mw/blob-batching from mw/blob-batching-bls-utils-ts, Jun 3, 2025

Conversation

@MirandaWood (Contributor) commented on Apr 16, 2025:

TS-only blob batching methods plus tests. Points to the parent methods PR: #13583.

TODOs (Marked in files as TODO(MW)):

  • Remove the large trusted setup file? Not sure if it's required, but it is currently the only way I demonstrate in tests that our BLS12 methods match those in c-kzg.
  • Add an nr fixture where we can use updateInlineTestData for point compression.

Other TODOs must wait until we actually integrate batching; otherwise I will break the repo.

NB: The files bls12_fields.ts and bls12_point.ts and their tests are essentially copies of ./fields.ts and ./point.ts. When reviewing, please keep that in mind and double-check the original file if you see an issue before commenting (@iAmMichaelConnor ;) ).


PR Stack

@MirandaWood MirandaWood requested a review from Thunkar as a code owner April 17, 2025 13:53
```diff
  // }),
  bundlesize({
-   limits: [{ name: 'assets/index-*', limit: '1600kB' }],
+   limits: [{ name: 'assets/index-*', limit: '1650kB' }],
```
MirandaWood (author):
@Thunkar I was hitting the size limit here after adding a package to foundation. Not sure of the best way to solve it (since the package is needed), so I bumped this; any advice appreciated!

Contributor:
It's been bumped to 1680kB due to bb.js changes; would that work?

It's OK as long as it's justified (and this is!); I just want to make sure it doesn't get completely out of control again!

MirandaWood (author):
Shouldn't be an issue! When I merge master, if I need to add a lot more to get it working, I will get in touch. Thank you!

MirandaWood (author) commented on Apr 22, 2025:
Had to bump from 1700kB -> 1750kB just now. Running CI to see if it passes, but happy to take a look and find an alternative if you feel 1750kB is pushing the limit!

@iAmMichaelConnor (Contributor) left a review:
Amazing! Thanks! Only some really minor comments from me.

@iAmMichaelConnor (Contributor) commented:
(Oh, I haven't reviewed the nr files that got merged into this PR. Would you like me to? Or should I wait?)

```ts
const anotherString = '1000a000';
const anotherStringPrepended = '0x1000a000';

const expectedValueOfAnotherHexString = 268476416n;
```
Contributor:
Maybe define it (and the others) as BigInt(0x1000a000), so we know it isn't just taken from the value being compared below, where we could end up comparing two wrong values.
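
For illustration, a minimal sketch of the suggested pattern; `fromHexString` here is a stand-in for the helper under test, not the actual foundation API:

```ts
// Sketch of the suggested pattern. The expectation is derived from a numeric
// literal, so it cannot silently inherit a bug from the hex parser under test.
// `fromHexString` is a stand-in for the helper being tested.
const fromHexString = (s: string): bigint => BigInt(s.startsWith('0x') ? s : '0x' + s);

const aString = '1000a000';
const aStringPrepended = '0x1000a000';

// Independent derivation of the expected value:
const expected = BigInt(0x1000a000); // 268476416n

console.assert(fromHexString(aString) === expected);
console.assert(fromHexString(aStringPrepended) === expected);
```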

MirandaWood (author):
This was just copied over from fields.test.ts; I can change both?

Comment on lines +26 to +28:

```ts
 * Conversions from Buffer to BigInt and vice-versa are not cheap.
 * We allow construction with either form and lazily convert to other as needed.
 * We only check we are within the field modulus when initializing with bigint.
```
MirandaWood (author) commented:
This was copied over from fields.ts, though in testing I found no discernible slowdown when constructing a field with both forms assigned (computing the buffer from a bigint input and vice versa). So I have since changed the constructor to assign both the bigint and buffer forms.
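
A minimal sketch of that eager constructor, with illustrative names rather than the actual bls12_fields.ts code; the constant is the BLS12-381 scalar field modulus:

```ts
// Illustrative sketch, not the actual bls12_fields.ts implementation.
// Both representations are assigned eagerly, since the conversion showed no
// discernible slowdown in testing.
const BLS12_FR_MODULUS = 0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001n;

class Bls12FrSketch {
  readonly asBigInt: bigint;
  readonly asBuffer: Buffer;

  constructor(value: bigint | Buffer) {
    if (typeof value === 'bigint') {
      // Per the original comment, only the bigint path is range-checked here.
      if (value < 0n || value >= BLS12_FR_MODULUS) {
        throw new Error('Value out of range for BLS12-381 Fr');
      }
      this.asBigInt = value;
      this.asBuffer = Buffer.from(value.toString(16).padStart(64, '0'), 'hex');
    } else {
      this.asBuffer = value;
      this.asBigInt = value.length ? BigInt('0x' + value.toString('hex')) : 0n;
    }
  }
}
```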

@LeilaWang (Contributor) left a review:

🎉

@MirandaWood MirandaWood mentioned this pull request May 30, 2025
@iAmMichaelConnor (Contributor) left a review:
🚀

Base automatically changed from mw/blob-batching-bls-utils to mw/blob-batching June 3, 2025 08:54
@MirandaWood MirandaWood merged commit 2c45397 into mw/blob-batching Jun 3, 2025
3 of 4 checks passed
@MirandaWood MirandaWood deleted the mw/blob-batching-bls-utils-ts branch June 3, 2025 09:19
MirandaWood added a commit that referenced this pull request Jun 4, 2025
WIP

TODOs

- [ ] Compress BLS12 fq and fr values to fewer native fields to reduce the number of public inputs (somewhat blocked by #13608, since that dictates how large the bls12fr value gamma is)
- [ ] Delete old `blob.nr` files and remove `pub`s w/o batching (will do
this later so it's easier to review)
- [x] Rework `RootRollupPublicInputs` so it doesn't contain unnecessary
values not needed for L1 verification

---

## PR Stack

- [ ] `mw/blob-batching` <- main feature
- [ ] ^ `mw/blob-batching-bls-utils` <- BLS12-381 bigcurve and bignum
utils (noir) (#13583)
- [ ] ^ `mw/blob-batching-bls-utils-ts` <- BLS12-381 bigcurve and bignum
utils (ts) (#13606)
- [x] ^ `mw/blob-batching-integration` <- Integrate batching into noir
protocol circuits (#13817)
- [ ] ^ `mw/blob-batching-integration-ts-sol` <- Integrate batching into
ts and solidity (#14329)

---------

Co-authored-by: Tom French <15848336+TomAFrench@users.noreply.github.com>
MirandaWood added a commit that referenced this pull request Jun 4, 2025
## Finalises integration of batched blobs

`mw/blob-batching-integration` adds batching to the rollup .nr circuits only (=> will not run in the repo). This PR brings those changes downstream to the TypeScript and L1 contracts. Main changes:

- L1 Contracts:
- No longer calls the point evaluation precompile on `propose`; instead injects the blob commitments, checks they correspond to the broadcast blobs, and stores them in the `blobCommitmentsHash`
- Does not store any blob public inputs apart from the `blobCommitmentsHash` (no longer required)
- Calls the point evaluation precompile once on `submitEpochRootProof` for ALL blobs in the epoch
- Uses the same precompile inputs as public inputs to the root proof verification along with the `blobCommitmentsHash` to link the circuit batched blob, real L1 blobs, and the batched blob verified on L1
- Refactors mock blob oracle
- Injects the final blob challenges used on each blob into all block
building methods in `orchestrator`
- Accumulates blobs in ts when building blocks and uses them as inputs to each rollup circuit
- Returns the blob inputs required for `submitEpochRootProof` on
`finaliseEpoch()`
- Updates nr structs in ts plus fixtures and tests


## TODOs/Current issues

- ~When using real proofs (e.g. `yarn-project/prover-client/src/test/bb_prover_full_rollup.test.ts`), the root rollup proof is generated correctly but fails verification checks in `bb` due to an incorrect number of public inputs. Changing the number correctly updates vks and all constants elsewhere, but `bb` does not change.~ EDIT: solved - must include the `is_inf` point member for now (see below TODO)
- ~The `Prover.toml` for block-root is not executing. The error
manifests in the same way as that in
#12540 (but may be
different).~ EDIT: temporarily fixed - details in this repro (#14381)
and noir issue (noir-lang/noir#8563).
- BLS points in noir take up 9 fields (4 for each coordinate as a limbed bignum, 1 for the `is_inf` flag) but can be compressed to only 2. For recursive verification in block root and above, would it be worth the gates to compress these? It depends whether the gate cost of compression is more or less than the gate cost of recursively verifying 7 more public inputs (the packing is sketched below).
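
For concreteness, a hedged sketch of what the 2-field packing could look like. Coordinate sizes are per BLS12-381; the bit layout and limb split are illustrative assumptions, and the open question above concerns the gate cost of doing this (plus y-recovery on decompression) in-circuit:

```ts
// Hedged sketch of the packing only. A BLS12-381 point is (x, y) with 381-bit
// coordinates; compression keeps x plus a y-parity bit and the is_inf flag
// (383 bits total), which fits in two native BN254 fields.
const LIMB_BITS = 192n; // each limb stays well below the ~254-bit field size

function compressPoint(x: bigint, y: bigint, isInf: boolean): [bigint, bigint] {
  // Layout (illustrative): [ x (381 bits) | yParity (1 bit) | isInf (1 bit) ].
  const packed = (x << 2n) | ((y & 1n) << 1n) | (isInf ? 1n : 0n);
  const lo = packed & ((1n << LIMB_BITS) - 1n); // low 192 bits
  const hi = packed >> LIMB_BITS; // remaining 191 bits
  return [lo, hi]; // 2 public inputs instead of 9
}
```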

## PR Stack

- [ ] `mw/blob-batching` <- main feature
- [ ] ^ `mw/blob-batching-bls-utils` <- BLS12-381 bigcurve and bignum
utils (noir) (#13583)
- [ ] ^ `mw/blob-batching-bls-utils-ts` <- BLS12-381 bigcurve and bignum
utils (ts) (#13606)
- [ ] ^ `mw/blob-batching-integration` <- Integrate batching into noir
protocol circuits (#13817)
- [x] ^ `mw/blob-batching-integration-ts-sol` <- Integrate batching into
ts and solidity (#14329)

---------

Co-authored-by: Tom French <15848336+TomAFrench@users.noreply.github.com>
github-merge-queue bot pushed a commit that referenced this pull request Jun 9, 2025
## The blobs are back in town.

This PR reworks blobs so that instead of calling the point evaluation
precompile for each blob (currently up to 3 per block => up to 96 (?)
calls per epoch), we call it once per epoch by batching blobs to a
single kzg commitment, opening, challenge, and proof.

How we can be sure that this one pairing check is equivalent to a check
per blob is covered in the maths by @iAmMichaelConnor
[here](https://hackmd.io/WUtNusQxS5KAw-af3gxycA?view) 🎉

## Overview

Instead of pushing to a long array of `BlobPublicInputs`, which are then individually checked on L1, we batch the blobs together into a single set of `BlobAccumulatorPublicInputs`. The `start` accumulator state is fed
into each block root circuit, where the block's blobs are accumulated
and the `end` state is set. Each block merge circuit checks that the
state follows on correctly and, finally, the root circuit checks that
the very `start` state was empty and finalises the last `end` state.

This last `end` state makes up the set of inputs for the point
evaluation precompile. If the pairing check in that precompile passes,
we know that all blobs for all blocks in the epoch are valid and contain
only the tx effects validated by the rollup.
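
Mechanically, the accumulation can be pictured as a running random linear combination. A simplified, hedged TS sketch follows; the type names and method signatures are assumptions standing in for the PR's BLS12 helpers, and the real challenge derivation is covered in the linked maths note:

```ts
// Assumed curve/field helpers, standing in for the PR's bls12_point.ts /
// bls12_fields.ts types.
interface Bls12Fr { add(o: Bls12Fr): Bls12Fr; mul(o: Bls12Fr): Bls12Fr; }
interface Bls12Point { add(o: Bls12Point): Bls12Point; mul(s: Bls12Fr): Bls12Point; }

// Running accumulator: each blob i contributes its commitment C_i and its
// evaluation y_i at the batch challenge, weighted by gamma^i.
interface BlobAccumulator {
  commitment: Bls12Point; // sum of gamma^i * C_i so far
  evaluation: Bls12Fr;    // sum of gamma^i * y_i so far
  gammaPow: Bls12Fr;      // gamma^i, the weight for the next blob
}

function accumulateBlob(
  acc: BlobAccumulator,
  c: Bls12Point,
  y: Bls12Fr,
  gamma: Bls12Fr,
): BlobAccumulator {
  return {
    commitment: acc.commitment.add(c.mul(acc.gammaPow)),
    evaluation: acc.evaluation.add(y.mul(acc.gammaPow)),
    gammaPow: acc.gammaPow.mul(gamma),
  };
}
```

One pairing check on the final (commitment, evaluation) pair then stands in for one check per blob, which is exactly the equivalence argued in the linked note.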

### Circuits

Key changes:
- Integrate BLS12-381 curve operations with `bignum` and `bigcurve`
libraries, plus tests.
- Rework the `blob` package to batch blobs and store in reworked
structs, plus tests.
- Rework the rollup circuits from `block_root` above to handle blob
accumulation state rather than a list of individual blob inputs, plus
(you guessed it) tests.

### Contracts

The contracts:
- No longer call the point evaluation precompile on `propose`; instead inject the blob commitments, check they correspond to the broadcast blobs, and store them in the `blobCommitmentsHash`.
- Do not store any blob public inputs apart from the `blobCommitmentsHash`.
- Call the point evaluation precompile once on `submitEpochRootProof` for ALL blobs in the epoch.
- Use the same precompile inputs as public inputs to the root proof verification along with the `blobCommitmentsHash` to link the circuit batched blob, real L1 blobs, and the batched blob verified on L1.
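
The linking works because both `propose` and `submitEpochRootProof` can reconstruct the same running hash over the injected commitments. A hedged TS sketch of the idea (the real logic lives in the Solidity contracts; the hash function and chaining order here are assumptions for illustration):

```ts
import { createHash } from 'crypto';

// Hedged sketch of the blobCommitmentsHash idea. On each propose, the block's
// blob commitments are folded into a running hash; at submitEpochRootProof the
// circuit's claimed hash must match it. sha256 and the chaining order are
// assumptions here.
function foldCommitment(prevHash: Buffer, commitment: Buffer): Buffer {
  return createHash('sha256').update(Buffer.concat([prevHash, commitment])).digest();
}

// Usage: start from a zero hash and fold every broadcast blob commitment of
// the epoch in order.
declare const epochBlobCommitments: Buffer[]; // hypothetical: 48-byte KZG commitments
let blobCommitmentsHash = Buffer.alloc(32);
for (const commitment of epochBlobCommitments) {
  blobCommitmentsHash = foldCommitment(blobCommitmentsHash, commitment);
}
```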

### TypeScript

Key changes:
- Edit all the structs and methods reliant on the circuits/contracts to
match the above changes.
- Inject the final blob challenges used on each blob into all block
building methods in `orchestrator`.
- Accumulate blobs in ts when building blocks and use as inputs to each
rollup circuit, plus tests.
- Return the blob inputs required for `submitEpochRootProof` on
`finaliseEpoch()`.

### TODOs/Related Issues

- Choose field for hashing challenge:
#13608
- Instead of exponentiating `gamma` (expensive!), hash it for each
iteration: #13740
- Number of public inputs: BLS points in noir take up 9 fields (4 for each coordinate as a limbed bignum, 1 for the `is_inf` flag) but can be compressed to only 2. For recursive verification in block root and above, would it be worth the gates to compress these? It depends whether the gate cost of compression is more or less than the gate cost of recursively verifying 7 more public inputs.
- Remove the large trusted setup file from
`yarn-project/blob-lib/src/trusted_setup_bit_reversed.json`? Used in
testing, but may not be worth keeping (see code comments).
- Cleanup old, unused blob stuff in #14637.

## PR Stack

- [x] `mw/blob-batching` <- main feature
- [x] ^ `mw/blob-batching-bls-utils` <- BLS12-381 bigcurve and bignum
utils (noir) (#13583)
- [x] ^ `mw/blob-batching-bls-utils-ts` <- BLS12-381 bigcurve and bignum
utils (ts) (#13606)
- [x] ^ `mw/blob-batching-integration` <- Integrate batching into noir
protocol circuits (#13817)
- [x] ^ `mw/blob-batching-integration-ts-sol` <- Integrate batching into
ts and solidity (#14329)
- [ ] ^ `mw/blob-batching-cleanup` <- Remove old blob code

---------

Co-authored-by: Tom French <15848336+TomAFrench@users.noreply.github.com>
danielntmd pushed a commit to danielntmd/aztec-packages that referenced this pull request Jul 16, 2025