
feat: multi blob #935

Open
chengwenxi wants to merge 32 commits into main from feat/multi_batch

Conversation

@chengwenxi
Collaborator

@chengwenxi chengwenxi commented Apr 17, 2026

Summary by CodeRabbit

  • New Features

    • Multi-blob batch support with header versioning (V2) and aggregated blob-hash semantics; tooling updated end-to-end to produce/consume multi-blob payloads.
  • Configuration

    • New CLI/env flags to configure max blob count and V2 upgrade time.
    • Batching interval/timeout now read from governance settings.
  • Improvements

    • Enhanced batch packing, capacity/sealing calculations, progress logging, compression and blob chunking, and fee estimation.
  • Tests

    • Added unit and integration tests covering V2 and multi-blob behavior.

Kukoomomo and others added 6 commits April 15, 2026 15:04
- Add isBatchV2Upgraded hook to BatchCache so V2 header is always
  generated once the upgrade is activated, regardless of blob count.
  Previously the code fell back to V1 for single-blob batches, which
  is incompatible with the V2 public_input_hash (keccak(hash[0]) ≠ hash[0]).

- Remove the MAX_BLOB_PER_BLOCK = 6 constant from Rollup.sol and rely
  solely on blobhash(i) == bytes32(0) to terminate the blob-count loop.
  Per spec §9 design decision, blob count limits should be controlled
  by tx-submitter MaxBlobCount config, not a hardcoded contract constant,
  so Ethereum protocol upgrades (e.g. EIP-7691) require no contract change.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
… challenge handler

- Change ExecutorInput.blob_info to blob_infos (Vec<BlobInfo>) with batch_version field
- Add BlobVerifier::verify_blobs for multi-blob KZG verification
- Add BatchInfo::public_input_hash_v2 using keccak256(hash[0]||...||hash[N-1])
- Add multi-blob encoding (encode_multi_blob, encode_blob_from_bytes) in host blob.rs
- Route verify() on batch_version: V2 uses aggregated blob hashes, V0/V1 unchanged
- Update shadow_rollup calc_batch_pi to parse V2 header with blob_count at offset 257
- Add blob_count and extra_blob_hashes to challenge handler BatchInfo and encode

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
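
As a reference for the aggregated-hash semantics above, here is a minimal Go sketch (the helper name and the use of go-ethereum's crypto.Keccak256 are illustrative, not the PR's actual code):

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/crypto"
)

// aggregateBlobHashes mirrors the V2 semantics described above:
// keccak256(hash[0] || ... || hash[N-1]) over the versioned blob hashes.
func aggregateBlobHashes(hashes []common.Hash) common.Hash {
	buf := make([]byte, 0, len(hashes)*32)
	for _, h := range hashes {
		buf = append(buf, h.Bytes()...)
	}
	return common.BytesToHash(crypto.Keccak256(buf))
}

func main() {
	h0 := common.HexToHash("0x01")
	// Even for a single blob, keccak(h0) != h0, which is why falling back
	// to the V1 header for one-blob batches broke the V2 public_input_hash.
	fmt.Println(aggregateBlobHashes([]common.Hash{h0}))
}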
… entrypoints

- Add batch_version to ProveRequest and thread through gen_client_input/execute_batch
- Use verify_blobs (all blobs) instead of verify (first blob only) in server queue
- Compute blobHashesHash for V2 in server batch_header_ex; pass individual hashes for fill_ext
- fill_ext parses V2 blob_count + per-blob hashes from extended batch_header_ex
- Add batch_version param to try_execute_batch; callers extract version from batch_header[0]
- Add --batch-version CLI arg to host binary
- Add blob_count param to execute_batch for correct PI hash routing

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…V0/V1 compat fields

Replace blob_versioned_hash + blob_count + extra_blob_hashes with a single
blob_hashes: Vec<[u8; 32]>. fill_ext parses all hashes from batch_header_ex,
encode writes blob_hashes[0] at offset 57 and appends count + remaining hashes
for V2. No backward-compatibility shims needed since prover components upgrade together.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@coderabbitai
Contributor

coderabbitai Bot commented Apr 17, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 1104cfac-d54b-49d1-9e6a-9212f47e7d24

📥 Commits

Reviewing files that changed from the base of the PR and between 245656f and d61284e.

📒 Files selected for processing (7)
  • contracts/src/deploy-config/holesky.ts
  • contracts/src/deploy-config/hoodi.ts
  • contracts/src/deploy-config/l1.ts
  • contracts/src/deploy-config/qanetl1.ts
  • contracts/src/deploy-config/sepolia.ts
  • contracts/src/deploy-config/testnetl1.ts
  • prover/bin/client/elf/verifier-client

📝 Walkthrough

Walkthrough

Adds BatchHeader V2 and end-to-end multi-blob support: blob utilities and configs, cache packing/sealing and V2 headers, ordered blob verification/parsing, prover/executor V2 inputs and hashing, Rollup V2 commit handling, tx-submitter and gas-oracle plumbing, tests and infra updates.

Changes

Batch V2 header, interfaces, and shared blob/encoding utilities

Layer / File(s) Summary
Interfaces & data contracts
common/batch/interfaces.go, common/batch/batch_header.go
Defines SealedBatchKV, L1/L2/rollup/gov interfaces; adds BatchHeader V2 and version-gated blob accessors.
Encoding helpers and blob payload/sidecar utilities
common/batch/encoding.go, common/batch/blob.go, common/blob/payload.go
Adds big-endian/height helpers; implements multi-blob canonicalization, commitments, sidecar build, compression, retrieval.
Common blob fee config and helpers
common/blob/fee.go
Provides chain configs, fee denominator, blob hashes/proofs, sidecar version utilities.
Common module metadata
common/go.mod
Introduces module and dependencies.

Batch cache V2 enablement and tests

Layer / File(s) Summary
Cache construction and config via L2Gov
common/batch/batch_cache.go
Constructor adds V2 predicate and maxBlobCount; wires L2Gov and storage; reads timing from L2Gov.
Capacity calculation and sealing with multi-blob
common/batch/batch_cache.go
Uses effectiveMaxBlobCount, blob-cap thresholds, and builds sidecars with cap; enhanced logging; l2Gov in sealing.
Packing progress tracking and sealed-batch logging
common/batch/batch_cache.go
Adds throttled progress logs; extended sealed-batch metadata logs.
Header building with aggregated blob hashes (V2)
common/batch/batch_cache.go
Aggregates blob hashes and sets version 2 in header.
Tests, genesis init, restart, and helper KV
common/batch/*_test.go, common/batch/helpers_test.go
Adds test KV and loop; genesis init test; restart test via L2Gov; commit tests updated.
Batch data hashing and compression limits
common/batch/batch_data.go
Local height/encoding; limit APIs accept maxBlobCount; shared compressor.
Batch storage generalized to SealedBatchKV
common/batch/batch_storage.go
Abstracts DB and unifies not-found handling.

Solidity Rollup V2 and tests

Layer / File(s) Summary
Rollup v2 commit/verify and header write
contracts/contracts/l1/rollup/Rollup.sol
Aggregates blobhash(i) for V2; accepts versions ≤2; refactors header writing; verify uses unified blob input.
Contract tests for V2 and invalid versions
contracts/contracts/test/Rollup.t.sol
Invalid version now 3; V2 requires blob; adds comprehensive V2 header and aggregation tests.

Node DA verification and batch parsing updates

Layer / File(s) Summary
Beacon blob verification and ordered assembly
node/derivation/beacon.go, node/derivation/derivation.go
Adds verifyBlob; builds sidecar by expected-hash order; proofs omitted.
ParseBatch multi-blob handling and tests
node/derivation/batch_info.go, node/derivation/batch_info_test.go
Concatenates blob bodies and decompresses once; uses common batch header/tx decode; adds tests.
Deprecated duplicate node header type
node/types/batch_header.go
Marks duplicate; adds V2 constant and accessors.
Oracle parent header decoding switch
oracle/oracle/batch.go
Uses common batch header for parent decoding.

Prover multi-blob input, hashing, and verification

Layer / File(s) Summary
Executor input and BatchInfo V2 hash
prover/crates/executor/client/src/types/*
ExecutorInput holds blob_infos and batch_version; adds V2 public-input hashing; blob scalar decoding helper.
BlobVerifier multi-blob APIs
prover/crates/executor/client/src/verifier/blob_verifier.rs
Adds verify_blobs/verify_raw; separates verify_kzg; concatenated scalars then single decompress.
Host multi-blob encoding and execute API
prover/crates/executor/host/src/blob.rs, prover/bin/host/src/execute.rs
Encodes multi-blob payloads; execute_batch accepts batch_version and returns blob_infos.
Queue/shadow pass batch_version and aggregate blob input
prover/bin/server/*, prover/bin/shadow-prove/*
Adds batch_version; aggregates hashes for V2; shadow derives version from header.
Challenge handler stores blob_hashes vector
prover/bin/challenge/src/handler.rs
Stores blob_hashes Vec and uses first for header encoding.

Gas oracle multi-blob payload and scalar

Layer / File(s) Summary
Blob payload extraction and zstd detection
gas-oracle/app/src/da_scalar/blob.rs
Adds get_payload_bytes; detect/check accept num_blobs and validate capacity.
Combined payload and scalar computation
gas-oracle/app/src/da_scalar/{calculate,l1_scalar}.rs
Concatenates payloads, detects/decompresses once; returns (batch_size, num_blobs, txn_count); blob scalar uses multi-blob capacity.

Tx-submitter refactor, config, and infra

Layer / File(s) Summary
Compat wrappers for blob helpers and configs
tx-submitter/types/*
Removes old blob files; adds blob_compat.go re-exporting common/blob.
Rollup service, client Len, converters, and L2Caller alias
tx-submitter/services/rollup.go, iface/client.go, types/{converter,l2Caller}.go
Blob-aware EstimateGas and fees; client Len(); converters use common/batch; L2Caller aliases L2Gov.
Flags and runtime config
tx-submitter/{flags,utils}/*.go
Adds MaxBlobCount and BatchV2UpgradeTime flags and config fields.
Docker/env deploy configs and submodule
ops/docker/*, ops/l2-genesis/*, go-ethereum, tx-submitter/go.mod, tx-submitter/batch/batch_storage_test.go
Enable priority/seal/V2; profiles; increase sampling/custody/min epochs; batch interval; update submodule; prune indirects; remove a test.

Sequence Diagram(s)

sequenceDiagram
  participant Cache
  participant L2Gov
  participant BlobUtils
  participant TxSubmitter
  participant Rollup
  participant Beacon
  participant Prover

  Cache->>L2Gov: BatchBlockInterval/BatchTimeout
  Cache->>BlobUtils: CompressBatchBytes -> MakeBlobTxSidecar
  Cache->>TxSubmitter: Emit V2 header (aggregated blob hashes)
  TxSubmitter->>Rollup: commitBatch(V<=2, sidecar)
  Rollup-->>TxSubmitter: committed
  TxSubmitter->>Beacon: fetch sidecars by tx
  Beacon-->>TxSubmitter: blobs+commitments
  Prover->>Prover: verify_blobs -> versioned_hashes
  Prover->>Prover: public_input_hash_v2(aggregated hash)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

  • morph-l2/morph#753: Related changes to blob-matching logic in node/derivation/derivation.go.
  • morph-l2/morph#798: Related blob fee configuration and ChainConfigMap plumbing used by tx-submitter.
  • morph-l2/morph#911: Related Rollup contract blob handling and versioned-blob computation.

Suggested reviewers

  • Kukoomomo
  • FletcherMan

I stack my blobs in twos,
then fours—compressed hues.
Headers hum “V2!”
Roots align, proofs accrue;
Beacon sings, the prover chews.


chengwenxi and others added 7 commits April 17, 2026 10:35
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…mpress once

Split get_origin_batch into unpack_blob (field-element unpack) and decompress_batch
(zstd decompress). verify_blobs now KZG-verifies each blob independently, unpacks
all compressed chunks, concatenates them, then calls decompress_batch once.
Previously each blob was decompressed independently, which fails for N>1 since the
zstd frame spans all chunks.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
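
A sketch of that flow in Go, under the assumption of a standard zstd reader (klauspost/compress); names like assembleBatch are illustrative, not the PR's Rust helpers:

package main

import (
	"bytes"
	"io"

	"github.com/klauspost/compress/zstd"
)

// decompressBatch inflates the batch payload in one pass. Decompressing
// each blob's chunk separately fails for N>1 blobs, because the single
// zstd frame spans all chunks.
func decompressBatch(compressed []byte) ([]byte, error) {
	r, err := zstd.NewReader(bytes.NewReader(compressed))
	if err != nil {
		return nil, err
	}
	defer r.Close()
	return io.ReadAll(r)
}

// assembleBatch unpacks every blob's field elements first, concatenates
// the chunks in blob order, and only then decompresses once.
func assembleBatch(chunks [][]byte) ([]byte, error) {
	var joined []byte
	for _, c := range chunks {
		joined = append(joined, c...)
	}
	return decompressBatch(joined)
}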
…fset 57

V2 headers now use the same 257-byte format as V1 with the aggregated
blob hash (keccak256 of all blob hashes) at offset 57. This eliminates
BatchHeaderCodecV2, simplifies contracts/prover/submitter, and fixes the
multi-blob decompression bug in blob_verifier.

- Delete BatchHeaderCodecV2.sol; V2 commitBatch computes aggregated hash inline
- Unify _verifyProof and _loadBatchHeader for all versions
- Remove BatchHeaderV2 struct in Go; V2 uses V1 format + version override
- Simplify Rust challenge handler, queue, shadow_rollup (uniform 96-byte batch_header_ex)
- Fix verify_blobs: decode BLS scalars per blob, concatenate, decompress once

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
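
A hedged Go sketch of reading that field, assuming the 257-byte V1/V2 layout with the (possibly aggregated) blob hash at offset 57 as described above; the helper name is hypothetical:

package batchheader

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
)

const headerLen = 257 // shared V1/V2 length per the commit message above

// blobHashField returns the 32-byte field at offset 57. For V0/V1 this is
// the single versioned blob hash; for V2 it is keccak256 of all blob
// hashes, so parsing stays identical across versions.
func blobHashField(header []byte) (common.Hash, error) {
	if len(header) < headerLen {
		return common.Hash{}, fmt.Errorf("short header: %d bytes", len(header))
	}
	return common.BytesToHash(header[57 : 57+32]), nil
}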
V2 should store the aggregated blob hash (keccak256 of all blob hashes)
in batchBlobVersionedHashes, consistent with the header offset 57 value,
instead of blobhash(0).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…struction

Move blobVersionedHash computation out of _commitBatchWithBatchData into
callers via a new _computeBlobVersionedHash(version) helper:
- V0/V1: blobhash(0) or ZERO_VERSIONED_HASH
- V2: keccak256(blobhash(0)||...||blobhash(N-1)), requires >=1 blob

_commitBatchWithBatchData now has a single unified header construction
path for all versions — no more V2/V0V1 branch split.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Remove the V2 restriction in commitState — with the simplified V2 header
format (aggregated hash at offset 57), the stored batchBlobVersionedHashes
value is sufficient to recommit without a blob, same as V0/V1.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@Kukoomomo Kukoomomo requested a review from dylanCai9 April 23, 2026 02:32
chengwenxi and others added 15 commits April 23, 2026 17:46
…rsionedHash

- Move `require(_blobCount > 0)` before the keccak256 assembly block to
  avoid computing keccak256 on empty input when no blobs are attached
- Add multi-blob aggregated hash unit tests to RollupCommitBatchV2Test:
  single/two/three blob correctness, order-sensitivity, and _verifyProof
  publicInputHash alignment with V2 aggregated hash semantics
- Add Rust unit tests for public_input_hash_v2 in prover-executor-client:
  single/two/three blob correctness, order-sensitivity, and structural
  difference from V1 (V2 uses keccak(h0||...||hN-1), not raw hash)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: corey <corey.zhang@bitget.com>
Co-authored-by: corey <corey.zhang@bitget.com>
Co-authored-by: Cursor <cursoragent@cursor.com>
Resolve conflicts after PR #939 (remove sequencer batch generation):
- node/types/batch_test.go: keep HEAD (multi_batch) version; it carries
  TestBatchHeaderV2 / TestBlobHashesHashUnavailableForLegacy which still
  exercise BatchHeaderV0/V1/V2 helpers retained on this branch.
- go.work.sum: union both sides (toolchain entries from x/exp, x/mod,
  x/tools); identical go directive on both branches.
- go-ethereum submodule: take main's 045be0fd (v2.2.2, includes the
  matching "Remove sequencer batch write paths" change for #939) over
  multi_batch's older 62952ec7.
- common/go.mod: bump tendermint replace from v0.3.4 -> v0.3.7 to align
  with node/tx-submitter/contracts/oracle/ops modules; common itself
  has no tendermint imports, so the stale directive was the cause of
  "conflicting replacements for github.com/tendermint/tendermint" in
  the workspace build.

Co-authored-by: Cursor <cursoragent@cursor.com>
…/batch (#945)

Co-authored-by: corey <corey.zhang@bitget.com>
Co-authored-by: Cursor <cursoragent@cursor.com>
@Kukoomomo Kukoomomo marked this pull request as ready for review May 8, 2026 05:49
@Kukoomomo Kukoomomo requested a review from a team as a code owner May 8, 2026 05:49
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 11

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (5)
ops/docker/docker-compose-4nodes.yml (1)

485-497: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Use one batch-V2 cutover config for every submitter.

This compose file now enables V2 batching only on tx-submitter-0: it hardcodes TX_SUBMITTER_BATCH_V2_UPGRADE_TIME and TX_SUBMITTER_SEAL_BATCH, while the multi-submitter instances can be started without either flag. In the same file the nodes read ${BATCH_UPGRADE_TIME}, so enabling that profile can put submitters and nodes on different batch/header versions.

Suggested config alignment
   tx-submitter-0:
     environment:
-      - TX_SUBMITTER_BATCH_V2_UPGRADE_TIME=1777533291
+      - TX_SUBMITTER_BATCH_V2_UPGRADE_TIME=${BATCH_UPGRADE_TIME}
       - TX_SUBMITTER_SEAL_BATCH=true

   tx-submitter-1:
     environment:
+      - TX_SUBMITTER_SEAL_BATCH=true
+      - TX_SUBMITTER_BATCH_V2_UPGRADE_TIME=${BATCH_UPGRADE_TIME}

   tx-submitter-2:
     environment:
+      - TX_SUBMITTER_SEAL_BATCH=true
+      - TX_SUBMITTER_BATCH_V2_UPGRADE_TIME=${BATCH_UPGRADE_TIME}

   tx-submitter-3:
     environment:
+      - TX_SUBMITTER_SEAL_BATCH=true
+      - TX_SUBMITTER_BATCH_V2_UPGRADE_TIME=${BATCH_UPGRADE_TIME}

Also applies to: 499-541, 542-584, 585-626

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@ops/docker/docker-compose-4nodes.yml` around lines 485 - 497, The compose
file enables batch-V2 only for tx-submitter-0 by hardcoding
TX_SUBMITTER_BATCH_V2_UPGRADE_TIME and TX_SUBMITTER_SEAL_BATCH while other
submitters (multi-submitter) lack them; change the submitter service environment
blocks so all submitter instances use the same batch cutover vars (reference
TX_SUBMITTER_BATCH_V2_UPGRADE_TIME and TX_SUBMITTER_SEAL_BATCH) and/or point
them to the shared ${BATCH_UPGRADE_TIME} variable used by the nodes; update the
tx-submitter-0 and multi-submitter service env lists so they consistently
include TX_SUBMITTER_BATCH_V2_UPGRADE_TIME (or ${BATCH_UPGRADE_TIME}) and
TX_SUBMITTER_SEAL_BATCH to ensure all submitters flip to batch-V2 together.
common/batch/batch_cache_test.go (1)

28-59: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Don't block a test on manual SIGINT.

TestBatchCacheInitServer never returns unless someone sends os.Interrupt, so go test can hang indefinitely in CI and local runs. Please gate this as a manual/integration test or give it a bounded timeout and stop the signal handler on exit.

Suggested fix
 func TestBatchCacheInitServer(t *testing.T) {
+	if os.Getenv("RUN_BATCH_CACHE_INIT_SERVER_TEST") == "" {
+		t.Skip("manual integration test; set RUN_BATCH_CACHE_INIT_SERVER_TEST=1 to run")
+	}
+
 	testDB := openTestKV(t)
 	cache := NewBatchCache(nil, nil, 2, l1Client, &SingleL2Client{C: l2Client}, rollupContract, l2Gov, testDB)
@@
 	interrupt := make(chan os.Signal, 1)
 	signal.Notify(interrupt, os.Interrupt)
-	<-interrupt
+	defer signal.Stop(interrupt)
+
+	select {
+	case <-interrupt:
+	case <-time.After(30 * time.Second):
+		t.Fatal("timed out waiting for TestBatchCacheInitServer to finish")
+	}
 }
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@common/batch/batch_cache_test.go` around lines 28 - 59,
TestBatchCacheInitServer currently blocks waiting for os.Interrupt and can hang
CI; change it to be bounded or skipped: either (A) skip in normal runs by adding
at the top if testing.Short() { t.Skip("manual/integration test") } or (B)
enforce a timeout by replacing the blocking interrupt wait with a
context.WithTimeout(ctx, <duration>) and select on ctx.Done() instead of
<-interrupt, and ensure you stop the signal handler by calling
signal.Stop(interrupt) and closing the channel when the test exits. Update
references to TestBatchCacheInitServer, interrupt, InitAndSyncFromDatabase, and
AssembleCurrentBatchHeader accordingly so the goroutines exit when the test
context cancels.
node/derivation/batch_info.go (1)

127-145: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Validate block-number ordering before computing blockCount.

Both subtraction paths can underflow if a malformed batch reports a LastBlockNumber smaller than the decoded start/parent block. That turns blockCount into a huge value and later blows up allocation/slicing in this parser. Reject the batch before subtracting when the numbers go backwards.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@node/derivation/batch_info.go` around lines 127 - 145, Before computing
blockCount, validate that batch.LastBlockNumber is not less than the reference
block to avoid unsigned underflow: in the parentVersion == 0 branch, after
decoding startBlock (BlockContext.Decode on batchBytes[:60]) check that
batch.LastBlockNumber >= startBlock.Number and return a clear error if not; in
the else branch, after obtaining parentBatchBlock from
parentBatchHeader.LastBlockNumber(), check that batch.LastBlockNumber >=
parentBatchBlock and return an error if not. Keep the existing blockCount
formulas (batch.LastBlockNumber - startBlock.Number + 1 and
batch.LastBlockNumber - parentBatchBlock) but only perform them after these
validations.
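
A minimal Go sketch of the guard this comment asks for (the function name is hypothetical; it only illustrates checking order before unsigned subtraction):

package derivation

import "fmt"

// blockCountFromRange rejects batches whose numbers run backwards before
// doing unsigned subtraction, which would otherwise wrap to a huge count.
func blockCountFromRange(last, start uint64) (uint64, error) {
	if last < start {
		return 0, fmt.Errorf("last block %d precedes start block %d", last, start)
	}
	return last - start + 1, nil
}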
tx-submitter/services/rollup.go (1)

1260-1261: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Potential truncation when converting gas caps to uint256.

tip.Int64() and gasFeeCap.Int64() truncate values exceeding math.MaxInt64 (~9.2 ETH in wei). While unlikely under normal conditions, extreme gas spikes could cause silent data loss. Pass the *big.Int directly to uint256.MustFromBig.

Proposed fix
 	return ethtypes.NewTx(&ethtypes.BlobTx{
 		ChainID:    uint256.MustFromBig(r.chainId),
 		Nonce:      nonce,
-		GasTipCap:  uint256.MustFromBig(big.NewInt(tip.Int64())),
-		GasFeeCap:  uint256.MustFromBig(big.NewInt(gasFeeCap.Int64())),
+		GasTipCap:  uint256.MustFromBig(tip),
+		GasFeeCap:  uint256.MustFromBig(gasFeeCap),
 		Gas:        gas,
 		To:         r.rollupAddr,
 		Data:       calldata,
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@tx-submitter/services/rollup.go` around lines 1260 - 1261, Replace the
truncating Int64 conversions for gas caps by passing the original *big.Int
values into uint256.MustFromBig: locate the struct/assignment where GasTipCap
and GasFeeCap are set (the lines using GasTipCap, GasFeeCap,
uint256.MustFromBig, and the tip and gasFeeCap variables) and change the
arguments from big.NewInt(tip.Int64()) and big.NewInt(gasFeeCap.Int64()) to pass
tip and gasFeeCap directly to uint256.MustFromBig so no data is silently
truncated.
prover/crates/executor/client/src/verifier/blob_verifier.rs (1)

48-50: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Replace unwrap() calls with proper error handling in lines 48-50.

The function returns Result<B256, anyhow::Error>, and the immediately following operation (line 54) correctly uses .map_err(...)? for error propagation. The from_slice() calls on lines 48-50 should follow the same pattern. Using unwrap() on potentially invalid input will panic instead of returning a recoverable error, which is problematic in a ZK proof verification context.

Proposed fix
-        let blob = KzgRsBlob::from_slice(&blob_info.blob_data).unwrap();
-        let commitment = Bytes48::from_slice(&blob_info.commitment).unwrap();
-        let proof = Bytes48::from_slice(&blob_info.proof).unwrap();
+        let blob = KzgRsBlob::from_slice(&blob_info.blob_data)
+            .map_err(|e| anyhow!("invalid blob data: {e:?}"))?;
+        let commitment = Bytes48::from_slice(&blob_info.commitment)
+            .map_err(|e| anyhow!("invalid commitment: {e:?}"))?;
+        let proof = Bytes48::from_slice(&blob_info.proof)
+            .map_err(|e| anyhow!("invalid proof: {e:?}"))?;
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@prover/crates/executor/client/src/verifier/blob_verifier.rs` around lines 48
- 50, Replace the three unwrap() calls so malformed input yields a propagated
error instead of panicking: call
KzgRsBlob::from_slice(&blob_info.blob_data).map_err(|e| anyhow::anyhow!("invalid
blob_data: {}", e))? and do the same pattern for
Bytes48::from_slice(&blob_info.commitment) and
Bytes48::from_slice(&blob_info.proof), using descriptive messages (e.g. "invalid
commitment" / "invalid proof") and the ? operator to return Result<B256,
anyhow::Error>, consistent with the existing .map_err(...)? style used later in
this function.
🧹 Nitpick comments (1)
gas-oracle/app/src/da_scalar/blob.rs (1)

202-206: ⚡ Quick win

Add at least one true multi-blob decode test.

These assertions still exercise num_blobs = 1, so they won't catch boundary mistakes in concatenation/trimming across blob joins—the core behavior added by this PR. Please add a fixture or generated case whose compressed payload spans at least two blobs.

Also applies to: 250-254

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@gas-oracle/app/src/da_scalar/blob.rs` around lines 202 - 206, Add a true
multi-blob decode unit test that constructs at least two Blob instances whose
concatenated payload contains a single compressed batch split across the blob
boundary (so the compressed bytes start in blob A and continue in blob B), then
call Blob::get_payload_bytes on each blob and run Blob::detect_zstd_compressed
and the downstream decompression path (decompress_batch or whatever consumer is
used) to assert the full batch decodes correctly; target the same test module
that currently uses get_payload_bytes and Blob::detect_zstd_compressed, create
fixtures or generate a zstd-compressed payload long enough to span blobs, split
it across two Blob objects, and assert the decoded records match the original
uncompressed data.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@common/batch/batch_cache_genesis_header_test.go`:
- Around line 127-141: The test blocks on waiting for an OS interrupt and leaves
the background go testLoop running; make it self-terminating by replacing the
manual interrupt with a cancellable/timeout approach: either call
cache.AssembleCurrentBatchHeader() synchronously once (removing go testLoop and
the interrupt channel), or run go testLoop with a context.WithTimeout/WithCancel
and call cancel after the timeout (or use time.After) so the loop exits and the
test continues; update references to batchCacheSyncMu, go testLoop and
AssembleCurrentBatchHeader to ensure the lock is held around the single call or
is used inside the timed loop and remove the blocking <-interrupt.

In `@common/batch/commit_test.go`:
- Around line 26-29: The test currently defines pk = "" which causes a panic
when code slices pk[2:] later; update the test to either supply a valid
non-empty test key or guard the test with an explicit skip when the key is
unset: check the package-level variable pk before any slicing (the code that
uses pk[2:] or functions that derive from pk) and call t.Skipf with a clear
message if pk == "" so the test won't panic, or replace the empty pk with a real
test key value sourced from a test config/env variable used by the test helpers.

In `@common/blob/fee.go`:
- Around line 241-248: The loop in BlobHashes drives iteration by blobs but the
output slice and inputs are commitments, causing panics or zeroed tails; update
BlobHashes to iterate over commitments (e.g., for i := range commitments) and
compute h[i] = kzg4844.CalcBlobHashV1(hasher, &commitments[i]) so the loop index
aligns with the sized result slice and the commitment used to compute each hash.
- Around line 251-261: MakeBlobProof currently indexes commitment[i] without
validating that commitment and blobs have matching lengths, which can panic when
commitment is shorter; add a preflight check at the start of MakeBlobProof that
validates len(commitment) == len(blobs) (or at least len(commitment) >=
len(blobs)) and return a clear error if not, before entering the loop that calls
kzg4844.ComputeBlobProof; this prevents panics and surfaces a normal error to
the caller.

In `@common/blob/payload.go`:
- Around line 43-51: In RetrieveBlobBytes, the validation error prints
data[i*32] (the zeroed output buffer) so it always reports 0x00; change the
logged byte to the source blob byte (use blob[i*32]) in the fmt.Errorf call so
the error shows the actual offending byte value; keep the rest of the loop and
the copy from blob to data intact and ensure you reference the same i and
indexing used in the copy when building the error message.

In `@contracts/contracts/l1/rollup/Rollup.sol`:
- Around line 921-942: In _computeBlobVersionedHash, for the legacy branch (when
_version < 2) add a guard to reject extraneous blobs by requiring blobhash(1) ==
bytes32(0) before accepting the batch; update the else branch that computes
_blobVersionedHash (which currently uses blobhash(0) and ZERO_VERSIONED_HASH) to
first require blobhash(1) == bytes32(0) so any non-zero extra blob causes a
revert, referencing the existing blobhash and ZERO_VERSIONED_HASH symbols.

In `@gas-oracle/app/src/da_scalar/l1_scalar.rs`:
- Around line 301-303: The function currently returns Ok((0,0,0)) when
indexed_hashes.is_empty(), which later hits the l2_data_len == 0 =>
MAX_BLOB_SCALAR branch in calculate_from_rollup and forces a max blob scalar;
instead return a sentinel that means “no blobs, skip blob-scalar update” (e.g.
change the return type to Result<Option<(u64,u64,u64)>, _> or return a distinct
enum variant) from the blob-processing function where indexed_hashes and tx_hash
are available, and update calculate_from_rollup to detect that sentinel and skip
updating the blob scalar rather than treating it as zero-length calldata; ensure
callers of the blob function (names: indexed_hashes, calculate_from_rollup) are
updated to handle the Option/enum and preserve existing behavior for real
zero-length calldata vs blobless batches.

In `@node/derivation/batch_info.go`:
- Around line 150-155: The code reads the 2-byte block count prefix without
checking length, causing a panic when batch.BlockContexts is non-nil but shorter
than 2; before calling binary.BigEndian.Uint16(batch.BlockContexts[:2]) (and
before computing blockCount), add a guard that len(batch.BlockContexts) >= 2 and
return a clear error if not; keep the subsequent existing length check
(len(batch.BlockContexts) < 2+60*blockCount) intact and reference
batch.BlockContexts and blockCount when updating the error messages.

In `@prover/crates/executor/client/src/lib.rs`:
- Line 13: BlobVerifier::verify_blobs currently returns versioned_hashes and
batch_data_from_blob but the caller blindly indexes versioned_hashes[0] for
V0/V1 and treats any >=2 as V2; update the caller to explicitly validate the
blob version and the exact number of hashes before computing the public-input
(PI) hash: check the version value in versioned_hashes (only accept the explicit
supported variants, e.g., 0,1,2), assert that the hash cardinality matches the
expected count for that version (e.g., V0/V1 => exactly 1 hash, V2 => expected 2
hashes), and return an Err when the version is unknown or the count mismatches
instead of proceeding to compute the PI hash (affects the code that uses
versioned_hashes and batch_data_from_blob, including the logic around lines
37-41).

In `@prover/crates/executor/client/src/types/blob.rs`:
- Around line 17-29: The function decode_blob_scalars assumes blob_data has
BLOB_WIDTH*32 bytes and will panic on short inputs; add an explicit length check
at the start of decode_blob_scalars to validate blob_data.len() is at least (or
exactly) BLOB_WIDTH * 32 and return an Err (anyhow::Error) if not, so you avoid
indexing panics and clarify behavior for longer inputs (either reject lengths !=
BLOB_WIDTH*32 or document/handle > by only processing the first BLOB_WIDTH*32
bytes); reference the function name decode_blob_scalars and the constant
BLOB_WIDTH to locate where to add this guard.

In `@prover/crates/executor/client/src/types/input.rs`:
- Around line 72-74: The batch_version field currently accepts any u8; add
validation at the input boundary to reject unsupported values (allow only 0, 1,
2) by returning an error on deserialization or conversion. Implement this by
either adding a custom Deserialize for the input struct in types/input.rs or
implementing TryFrom<RawInput> (or a validate() method invoked immediately after
deserialization) that checks the batch_version field and returns an Err for any
value not in {0,1,2}; ensure the error propagates back to the caller so
malformed inputs are rejected before any execution or hash-version branching
occurs.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 4edcdb64-4766-4507-8797-0cbdffcef1ad

📥 Commits

Reviewing files that changed from the base of the PR and between 4fb95dc and 7535858.

⛔ Files ignored due to path filters (4)
  • common/go.sum is excluded by !**/*.sum
  • go.work is excluded by !**/*.work
  • go.work.sum is excluded by !**/*.sum
  • tx-submitter/go.sum is excluded by !**/*.sum
📒 Files selected for processing (58)
  • common/batch/batch_cache.go
  • common/batch/batch_cache_genesis_header_test.go
  • common/batch/batch_cache_test.go
  • common/batch/batch_data.go
  • common/batch/batch_header.go
  • common/batch/batch_header_test.go
  • common/batch/batch_query.go
  • common/batch/batch_restart_test.go
  • common/batch/batch_storage.go
  • common/batch/blob.go
  • common/batch/commit_test.go
  • common/batch/encoding.go
  • common/batch/helpers_test.go
  • common/batch/interfaces.go
  • common/batch/l2_gov.go
  • common/blob/fee.go
  • common/blob/payload.go
  • common/go.mod
  • contracts/contracts/l1/rollup/Rollup.sol
  • contracts/contracts/test/Rollup.t.sol
  • gas-oracle/app/src/da_scalar/blob.rs
  • gas-oracle/app/src/da_scalar/calculate.rs
  • gas-oracle/app/src/da_scalar/l1_scalar.rs
  • go-ethereum
  • node/derivation/batch_info.go
  • node/derivation/batch_info_test.go
  • node/derivation/beacon.go
  • node/derivation/derivation.go
  • node/types/batch_header.go
  • ops/docker/docker-compose-4nodes.yml
  • ops/docker/layer1/configs/values.env.template
  • ops/l2-genesis/deploy-config/devnet-deploy-config.json
  • oracle/oracle/batch.go
  • prover/bin/challenge/src/handler.rs
  • prover/bin/host/src/execute.rs
  • prover/bin/host/src/main.rs
  • prover/bin/server/src/queue.rs
  • prover/bin/shadow-prove/src/execute.rs
  • prover/bin/shadow-prove/src/main.rs
  • prover/bin/shadow-prove/src/shadow_rollup.rs
  • prover/crates/executor/client/src/lib.rs
  • prover/crates/executor/client/src/types/batch.rs
  • prover/crates/executor/client/src/types/blob.rs
  • prover/crates/executor/client/src/types/input.rs
  • prover/crates/executor/client/src/verifier/blob_verifier.rs
  • prover/crates/executor/host/src/blob.rs
  • tx-submitter/batch/batch_storage_test.go
  • tx-submitter/batch/blob.go
  • tx-submitter/flags/flags.go
  • tx-submitter/go.mod
  • tx-submitter/iface/client.go
  • tx-submitter/services/rollup.go
  • tx-submitter/types/blob.go
  • tx-submitter/types/blob_compat.go
  • tx-submitter/types/blob_params.go
  • tx-submitter/types/converter.go
  • tx-submitter/types/l2Caller.go
  • tx-submitter/utils/config.go
💤 Files with no reviewable changes (5)
  • tx-submitter/batch/batch_storage_test.go
  • tx-submitter/go.mod
  • tx-submitter/types/blob_params.go
  • tx-submitter/types/blob.go
  • tx-submitter/batch/blob.go

Comment on lines +127 to +141
	go testLoop(cache.ctx, 5*time.Second, func() {
		batchCacheSyncMu.Lock()
		defer batchCacheSyncMu.Unlock()
		err := cache.AssembleCurrentBatchHeader()
		if err != nil {
			log.Error("Assemble current batch failed, wait for the next try", "error", err)
		}
	})

	// Catch CTRL-C to ensure a graceful shutdown.
	interrupt := make(chan os.Signal, 1)
	signal.Notify(interrupt, os.Interrupt)

	// Wait until the interrupt signal is received from an OS signal.
	<-interrupt

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Make this test self-terminating.

Line 141 blocks until a manual os.Interrupt arrives, so go test never finishes on its own here. The background loop also keeps running after the assertions pass. If the intent is just to exercise AssembleCurrentBatchHeader, call it once synchronously or run the loop under a cancellable context with a deadline.

🛠️ Finite test shape
-	go testLoop(cache.ctx, 5*time.Second, func() {
-		batchCacheSyncMu.Lock()
-		defer batchCacheSyncMu.Unlock()
-		err := cache.AssembleCurrentBatchHeader()
-		if err != nil {
-			log.Error("Assemble current batch failed, wait for the next try", "error", err)
-		}
-	})
-
-	// Catch CTRL-C to ensure a graceful shutdown.
-	interrupt := make(chan os.Signal, 1)
-	signal.Notify(interrupt, os.Interrupt)
-
-	// Wait until the interrupt signal is received from an OS signal.
-	<-interrupt
+	batchCacheSyncMu.Lock()
+	err = cache.AssembleCurrentBatchHeader()
+	batchCacheSyncMu.Unlock()
+	require.NoError(t, err)
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@common/batch/batch_cache_genesis_header_test.go` around lines 127 - 141, The
test blocks on waiting for an OS interrupt and leaves the background go testLoop
running; make it self-terminating by replacing the manual interrupt with a
cancellable/timeout approach: either call cache.AssembleCurrentBatchHeader()
synchronously once (removing go testLoop and the interrupt channel), or run go
testLoop with a context.WithTimeout/WithCancel and call cancel after the timeout
(or use time.After) so the loop exits and the test continues; update references
to batchCacheSyncMu, go testLoop and AssembleCurrentBatchHeader to ensure the
lock is held around the single call or is used inside the timed loop and remove
the blocking <-interrupt.

Comment on lines +26 to +29
	var (
		pk                = ""
		errExceedFeeLimit = errors.New("exceed fee limit")
	)

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Make the test key non-empty or skip when it is unset.

With pk = "", the later pk[2:] slice panics before the test reaches any assertions. This needs a real test key source or an explicit skip path when the secret is not configured.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@common/batch/commit_test.go` around lines 26 - 29, The test currently defines
pk = "" which causes a panic when code slices pk[2:] later; update the test to
either supply a valid non-empty test key or guard the test with an explicit skip
when the key is unset: check the package-level variable pk before any slicing
(the code that uses pk[2:] or functions that derive from pk) and call t.Skipf
with a clear message if pk == "" so the test won't panic, or replace the empty
pk with a real test key value sourced from a test config/env variable used by
the test helpers.
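
A sketch of the skip path in Go's testing idiom (the env var name and test body are hypothetical, not this repo's actual test):

package batch

import (
	"os"
	"strings"
	"testing"
)

func TestCommitBatch(t *testing.T) {
	pk := os.Getenv("COMMIT_TEST_PRIVATE_KEY") // hypothetical key source
	if pk == "" {
		t.Skip("no test key configured; skipping commit test")
	}
	// Only reached with a non-empty key, so stripping the prefix is safe.
	key := strings.TrimPrefix(pk, "0x")
	_ = key
}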

Comment thread common/blob/fee.go
Comment on lines +241 to +248
// BlobHashes computes the blob hashes of the given blobs.
func BlobHashes(blobs []kzg4844.Blob, commitments []kzg4844.Commitment) []common.Hash {
	hasher := sha256.New()
	h := make([]common.Hash, len(commitments))
	for i := range blobs {
		h[i] = kzg4844.CalcBlobHashV1(hasher, &commitments[i])
	}
	return h

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Iterate over commitments in BlobHashes.

The result slice is sized from commitments, but the loop uses range blobs. If len(blobs) > len(commitments) this panics on commitments[i]; if len(commitments) > len(blobs), the tail of the returned hashes stays zeroed. The hash here is commitment-derived, so this loop should be driven by commitments.

Suggested fix
 func BlobHashes(blobs []kzg4844.Blob, commitments []kzg4844.Commitment) []common.Hash {
 	hasher := sha256.New()
 	h := make([]common.Hash, len(commitments))
-	for i := range blobs {
+	for i := range commitments {
 		h[i] = kzg4844.CalcBlobHashV1(hasher, &commitments[i])
 	}
 	return h
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
// BlobHashes computes the blob hashes of the given blobs.
func BlobHashes(blobs []kzg4844.Blob, commitments []kzg4844.Commitment) []common.Hash {
	hasher := sha256.New()
	h := make([]common.Hash, len(commitments))
	for i := range commitments {
		h[i] = kzg4844.CalcBlobHashV1(hasher, &commitments[i])
	}
	return h
}
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@common/blob/fee.go` around lines 241 - 248, The loop in BlobHashes drives
iteration by blobs but the output slice and inputs are commitments, causing
panics or zeroed tails; update BlobHashes to iterate over commitments (e.g., for
i := range commitments) and compute h[i] = kzg4844.CalcBlobHashV1(hasher,
&commitments[i]) so the loop index aligns with the sized result slice and the
commitment used to compute each hash.

Comment thread common/blob/fee.go
Comment on lines +251 to +261
// MakeBlobProof builds KZG proofs for blob transactions (sidecar v0).
func MakeBlobProof(blobs []kzg4844.Blob, commitment []kzg4844.Commitment) ([]kzg4844.Proof, error) {
	proofs := make([]kzg4844.Proof, len(blobs))
	for i := range blobs {
		proof, err := kzg4844.ComputeBlobProof(&blobs[i], commitment[i])
		if err != nil {
			return nil, err
		}
		proofs[i] = proof
	}
	return proofs, nil

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Reject mismatched proof inputs before indexing commitment[i].

MakeBlobProof already returns an error, but today a short commitment slice crashes the caller instead. This should fail fast with a normal error before the loop.

Suggested fix
 func MakeBlobProof(blobs []kzg4844.Blob, commitment []kzg4844.Commitment) ([]kzg4844.Proof, error) {
+	if len(blobs) != len(commitment) {
+		return nil, fmt.Errorf("blob/commitment length mismatch: %d != %d", len(blobs), len(commitment))
+	}
 	proofs := make([]kzg4844.Proof, len(blobs))
 	for i := range blobs {
 		proof, err := kzg4844.ComputeBlobProof(&blobs[i], commitment[i])
 		if err != nil {
 			return nil, err
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@common/blob/fee.go` around lines 251 - 261, MakeBlobProof currently indexes
commitment[i] without validating that commitment and blobs have matching
lengths, which can panic when commitment is shorter; add a preflight check at
the start of MakeBlobProof that validates len(commitment) == len(blobs) (or at
least len(commitment) >= len(blobs)) and return a clear error if not, before
entering the loop that calls kzg4844.ComputeBlobProof; this prevents panics and
surfaces a normal error to the caller.

Comment thread common/blob/payload.go
Comment on lines +43 to +51
func RetrieveBlobBytes(blob *kzg4844.Blob) ([]byte, error) {
	data := make([]byte, MaxBlobBytesSize)
	for i := 0; i < 4096; i++ {
		if blob[i*32] != 0 {
			return nil, fmt.Errorf("invalid blob, found non-zero high order byte %x of field element %d", data[i*32], i)
		}
		copy(data[i*31:i*31+31], blob[i*32+1:i*32+32])
	}
	return data, nil

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Report the offending blob byte in the validation error.

On Line 47 the error prints data[i*32], but data is the zeroed output buffer, so malformed blobs are always reported as 0x00. Use the source blob byte there.

Suggested fix
 	for i := 0; i < 4096; i++ {
 		if blob[i*32] != 0 {
-			return nil, fmt.Errorf("invalid blob, found non-zero high order byte %x of field element %d", data[i*32], i)
+			return nil, fmt.Errorf("invalid blob, found non-zero high order byte %x of field element %d", blob[i*32], i)
 		}
 		copy(data[i*31:i*31+31], blob[i*32+1:i*32+32])
 	}
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@common/blob/payload.go` around lines 43 - 51, In RetrieveBlobBytes, the
validation error prints data[i*32] (the zeroed output buffer) so it always
reports 0x00; change the logged byte to the source blob byte (use blob[i*32]) in
the fmt.Errorf call so the error shows the actual offending byte value; keep the
rest of the loop and the copy from blob to data intact and ensure you reference
the same i and indexing used in the copy when building the error message.

Comment on lines 301 to +303
 if indexed_hashes.is_empty() {
     log::info!("no blob in this batch, batch_tx_hash: {:#?}", tx_hash);
-    return Ok((0, 0));
+    return Ok((0, 0, 0));

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Skip blob-scalar recalculation for blobless batches.

Returning (0, 0, 0) here falls through to the existing l2_data_len == 0 => MAX_BLOB_SCALAR path in calculate_from_rollup, so a calldata-only / blobless commit batch can push the oracle to the max blob scalar on the next update. Please propagate a sentinel for “no blobs” and skip the blob-scalar update in that case.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@gas-oracle/app/src/da_scalar/l1_scalar.rs` around lines 301 - 303, The
function currently returns Ok((0,0,0)) when indexed_hashes.is_empty(), which
later hits the l2_data_len == 0 => MAX_BLOB_SCALAR branch in
calculate_from_rollup and forces a max blob scalar; instead return a sentinel
that means “no blobs, skip blob-scalar update” (e.g. change the return type to
Result<Option<(u64,u64,u64)>, _> or return a distinct enum variant) from the
blob-processing function where indexed_hashes and tx_hash are available, and
update calculate_from_rollup to detect that sentinel and skip updating the blob
scalar rather than treating it as zero-length calldata; ensure callers of the
blob function (names: indexed_hashes, calculate_from_rollup) are updated to
handle the Option/enum and preserve existing behavior for real zero-length
calldata vs blobless batches.
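
The suggested sentinel, sketched in Go rather than the PR's Rust (types and names here are hypothetical): a nil result means "blobless batch, skip the blob-scalar update".

package dascalar

type blobStats struct {
	BatchSize uint64
	NumBlobs  uint64
	TxnCount  uint64
}

// statsFromBlobs returns (nil, nil) for a blobless batch so the caller can
// skip the blob-scalar update instead of treating it as zero-length data
// that would force MAX_BLOB_SCALAR.
func statsFromBlobs(indexedHashes [][32]byte) (*blobStats, error) {
	if len(indexedHashes) == 0 {
		return nil, nil // sentinel: no blobs in this batch
	}
	// ... real extraction elided ...
	return &blobStats{NumBlobs: uint64(len(indexedHashes))}, nil
}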

Comment on lines 150 to +155
	if batch.BlockContexts != nil {
		// Block contexts come from calldata; the entire decompressed stream
		// is tx payload data.
		blockCount = uint64(binary.BigEndian.Uint16(batch.BlockContexts[:2]))
		if uint64(len(batch.BlockContexts)) < 2+60*blockCount {
			return fmt.Errorf("calldata block contexts too short: have %d, need %d", len(batch.BlockContexts), 2+60*blockCount)

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Check the calldata prefix length before slicing [:2].

binary.BigEndian.Uint16(batch.BlockContexts[:2]) panics when BlockContexts is non-nil but shorter than 2 bytes. Add a len(batch.BlockContexts) < 2 guard before reading the block count prefix.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@node/derivation/batch_info.go` around lines 150 - 155, The code reads the
2-byte block count prefix without checking length, causing a panic when
batch.BlockContexts is non-nil but shorter than 2; before calling
binary.BigEndian.Uint16(batch.BlockContexts[:2]) (and before computing
blockCount), add a guard that len(batch.BlockContexts) >= 2 and return a clear
error if not; keep the subsequent existing length check
(len(batch.BlockContexts) < 2+60*blockCount) intact and reference
batch.BlockContexts and blockCount when updating the error messages.
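
A Go sketch of the prefix guard (identifiers follow the quoted code; the surrounding parser is assumed):

package derivation

import (
	"encoding/binary"
	"fmt"
)

// blockCountPrefix guards the 2-byte big-endian count before slicing [:2].
func blockCountPrefix(blockContexts []byte) (uint64, error) {
	if len(blockContexts) < 2 {
		return 0, fmt.Errorf("calldata block contexts too short: have %d, need at least 2", len(blockContexts))
	}
	return uint64(binary.BigEndian.Uint16(blockContexts[:2])), nil
}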

 pub fn verify(input: ExecutorInput) -> Result<B256, anyhow::Error> {
     // Verify DA
-    let (versioned_hash, batch_data_from_blob) = BlobVerifier::verify(&input.blob_info)?;
+    let (versioned_hashes, batch_data_from_blob) = BlobVerifier::verify_blobs(&input.blob_infos)?;

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Validate batch version and blob-hash count before computing the PI hash.

The current dispatch fails open: V0/V1 blindly index versioned_hashes[0] and ignore any additional hashes, while >= 2 treats unknown future versions as V2. That can panic on malformed input or compute a public-input hash that does not match the DA commitment. Return an error unless the version is explicitly supported and the blob-hash cardinality matches it.

Suggested fix
-    let public_input_hash = if input.batch_version >= 2 {
-        batch_info.public_input_hash_v2(&versioned_hashes)
-    } else {
-        batch_info.public_input_hash(&versioned_hashes[0])
-    };
+    let public_input_hash = match (input.batch_version, versioned_hashes.as_slice()) {
+        (0 | 1, [hash]) => batch_info.public_input_hash(hash),
+        (2, hashes) if !hashes.is_empty() => batch_info.public_input_hash_v2(hashes),
+        (0 | 1, hashes) => {
+            return Err(anyhow::anyhow!(
+                "batch version {} requires exactly 1 blob hash, got {}",
+                input.batch_version,
+                hashes.len()
+            ));
+        }
+        (2, _) => return Err(anyhow::anyhow!("batch version 2 requires at least 1 blob hash")),
+        (v, _) => return Err(anyhow::anyhow!("unsupported batch version: {v}")),
+    };

Also applies to: 37-41

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@prover/crates/executor/client/src/lib.rs` at line 13,
BlobVerifier::verify_blobs currently returns versioned_hashes and
batch_data_from_blob but the caller blindly indexes versioned_hashes[0] for
V0/V1 and treats any >=2 as V2; update the caller to explicitly validate the
blob version and the exact number of hashes before computing the public-input
(PI) hash: check the version value in versioned_hashes (only accept the explicit
supported variants, e.g., 0,1,2), assert that the hash cardinality matches the
expected count for that version (e.g., V0/V1 => exactly 1 hash, V2 => expected 2
hashes), and return an Err when the version is unknown or the count mismatches
instead of proceeding to compute the PI hash (affects the code that uses
versioned_hashes and batch_data_from_blob, including the logic around lines
37-41).

Comment on lines +17 to +29
 pub fn decode_blob_scalars(blob_data: &[u8]) -> Result<Vec<u8>, anyhow::Error> {
     let mut chunk = vec![0u8; BLOB_WIDTH * 31];
     for i in 0..BLOB_WIDTH {
         if blob_data[i * 32] != 0 {
-            return Err(anyhow!(format!(
+            return Err(anyhow!(
                 "Invalid blob, found non-zero high order byte {:x} of field element {}",
                 blob_data[i * 32],
                 i
-            )));
+            ));
         }
-        batch_data[i * 31..i * 31 + 31].copy_from_slice(&blob_data[i * 32 + 1..i * 32 + 32]);
+        chunk[i * 31..i * 31 + 31].copy_from_slice(&blob_data[i * 32 + 1..i * 32 + 32]);
     }
-    decompress_batch(&batch_data)
+    Ok(chunk)

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Validate the blob length before indexing into it.

Line 17 assumes blob_data is exactly BLOB_WIDTH * 32 bytes. Short inputs will panic, and longer inputs are silently ignored past the first blob-width window.

🛡️ Suggested guard
 pub fn decode_blob_scalars(blob_data: &[u8]) -> Result<Vec<u8>, anyhow::Error> {
+    let expected_len = BLOB_WIDTH * 32;
+    if blob_data.len() != expected_len {
+        return Err(anyhow!(
+            "invalid blob length: got {}, want {}",
+            blob_data.len(),
+            expected_len
+        ));
+    }
+
     let mut chunk = vec![0u8; BLOB_WIDTH * 31];
     for i in 0..BLOB_WIDTH {
         if blob_data[i * 32] != 0 {
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
pub fn decode_blob_scalars(blob_data: &[u8]) -> Result<Vec<u8>, anyhow::Error> {
    let expected_len = BLOB_WIDTH * 32;
    if blob_data.len() != expected_len {
        return Err(anyhow!(
            "invalid blob length: got {}, want {}",
            blob_data.len(),
            expected_len
        ));
    }
    let mut chunk = vec![0u8; BLOB_WIDTH * 31];
    for i in 0..BLOB_WIDTH {
        if blob_data[i * 32] != 0 {
            return Err(anyhow!(
                "Invalid blob, found non-zero high order byte {:x} of field element {}",
                blob_data[i * 32],
                i
            ));
        }
        chunk[i * 31..i * 31 + 31].copy_from_slice(&blob_data[i * 32 + 1..i * 32 + 32]);
    }
    Ok(chunk)
}
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@prover/crates/executor/client/src/types/blob.rs` around lines 17 - 29, The
function decode_blob_scalars assumes blob_data has BLOB_WIDTH*32 bytes and will
panic on short inputs; add an explicit length check at the start of
decode_blob_scalars to validate blob_data.len() is at least (or exactly)
BLOB_WIDTH * 32 and return an Err (anyhow::Error) if not, so you avoid indexing
panics and clarify behavior for longer inputs (either reject lengths !=
BLOB_WIDTH*32 or document/handle > by only processing the first BLOB_WIDTH*32
bytes); reference the function name decode_blob_scalars and the constant
BLOB_WIDTH to locate where to add this guard.

Comment on lines +72 to +74
pub blob_infos: Vec<BlobInfo>,
#[serde(default)]
pub batch_version: u8,

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Reject unsupported batch_version values at input boundary.

Line 74 accepts any u8; malformed values can pass deserialization and drive invalid execution/hash-version branching. Please validate allowed versions (e.g., 0/1/2) before processing.

Suggested fix pattern
 pub struct ExecutorInput {
     pub block_inputs: Vec<BlockInput>,
     pub blob_infos: Vec<BlobInfo>,
     #[serde(default)]
     pub batch_version: u8,
 }
+
+impl ExecutorInput {
+    pub fn validate(&self) -> Result<(), ClientError> {
+        match self.batch_version {
+            0 | 1 | 2 => Ok(()),
+            v => Err(ClientError::InvalidBatchVersion(v)),
+        }
+    }
+}
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@prover/crates/executor/client/src/types/input.rs` around lines 72 - 74, The
batch_version field currently accepts any u8; add validation at the input
boundary to reject unsupported values (allow only 0, 1, 2) by returning an error
on deserialization or conversion. Implement this by either adding a custom
Deserialize for the input struct in types/input.rs or implementing
TryFrom<RawInput> (or a validate() method invoked immediately after
deserialization) that checks the batch_version field and returns an Err for any
value not in {0,1,2}; ensure the error propagates back to the caller so
malformed inputs are rejected before any execution or hash-version branching
occurs.

anylots and others added 2 commits May 8, 2026 18:30
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
