feat: multi blob #935
Conversation
- Add isBatchV2Upgraded hook to BatchCache so the V2 header is always generated once the upgrade is activated, regardless of blob count. Previously the code fell back to V1 for single-blob batches, which is incompatible with the V2 public_input_hash (keccak(hash[0]) ≠ hash[0]).
- Remove the MAX_BLOB_PER_BLOCK = 6 constant from Rollup.sol and rely solely on blobhash(i) == bytes32(0) to terminate the blob-count loop. Per the spec §9 design decision, blob-count limits should be controlled by the tx-submitter MaxBlobCount config, not a hardcoded contract constant, so Ethereum protocol upgrades (e.g. EIP-7691) require no contract change.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
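The loop-termination rule in the second bullet can be sketched outside the EVM. This is a minimal Go illustration, with `blobhash` simulated as a callback (the real opcode lives in the contract; names here are illustrative): counting stops at the first zero hash instead of at a hardcoded `MAX_BLOB_PER_BLOCK`.

```go
package main

import "fmt"

var zeroHash [32]byte

// countBlobs mirrors the contract loop: scan blobhash(i) until it
// returns bytes32(0). No MAX_BLOB_PER_BLOCK cap is consulted, so a
// protocol change to the per-block blob limit needs no contract edit.
func countBlobs(blobhash func(i int) [32]byte) int {
	n := 0
	for blobhash(n) != zeroHash {
		n++
	}
	return n
}

func main() {
	// Simulate a transaction carrying three blobs.
	hashes := [][32]byte{{0x01}, {0x02}, {0x03}}
	lookup := func(i int) [32]byte {
		if i < len(hashes) {
			return hashes[i]
		}
		return zeroHash // the EVM returns zero past the last blob
	}
	fmt.Println(countBlobs(lookup)) // 3
}
```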
… challenge handler
- Change ExecutorInput.blob_info to blob_infos (Vec<BlobInfo>) with a batch_version field
- Add BlobVerifier::verify_blobs for multi-blob KZG verification
- Add BatchInfo::public_input_hash_v2 using keccak256(hash[0]||...||hash[N-1])
- Add multi-blob encoding (encode_multi_blob, encode_blob_from_bytes) in host blob.rs
- Route verify() on batch_version: V2 uses aggregated blob hashes, V0/V1 unchanged
- Update shadow_rollup calc_batch_pi to parse the V2 header with blob_count at offset 257
- Add blob_count and extra_blob_hashes to challenge handler BatchInfo and encode

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
… entrypoints
- Add batch_version to ProveRequest and thread through gen_client_input/execute_batch
- Use verify_blobs (all blobs) instead of verify (first blob only) in server queue
- Compute blobHashesHash for V2 in server batch_header_ex; pass individual hashes for fill_ext
- fill_ext parses V2 blob_count + per-blob hashes from extended batch_header_ex
- Add batch_version param to try_execute_batch; callers extract version from batch_header[0]
- Add --batch-version CLI arg to host binary
- Add blob_count param to execute_batch for correct PI hash routing

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…V0/V1 compat fields
Replace blob_versioned_hash + blob_count + extra_blob_hashes with a single blob_hashes: Vec<[u8; 32]>. fill_ext parses all hashes from batch_header_ex; encode writes blob_hashes[0] at offset 57 and appends count + remaining hashes for V2. No backward-compatibility shims needed since prover components upgrade together.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
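A rough Go sketch of the encoding this commit describes. The 257-byte base size and offset 57 come from the messages above; the width of the appended count field is an assumption (one byte here), and all names are illustrative rather than the repo's.

```go
package main

import "fmt"

const (
	headerLen      = 257 // assumed V1/V2 base header size
	blobHashOffset = 57  // first blob hash location per the commit message
)

// encodeHeader writes blobHashes[0] at offset 57 of the base header,
// then (for V2) appends a count plus the remaining hashes, matching
// "blob_count at offset 257" from the earlier commit. Count width is
// an assumption in this sketch.
func encodeHeader(version byte, blobHashes [][32]byte) []byte {
	out := make([]byte, headerLen)
	out[0] = version
	copy(out[blobHashOffset:blobHashOffset+32], blobHashes[0][:])
	if version >= 2 {
		out = append(out, byte(len(blobHashes))) // count right after the base header
		for _, h := range blobHashes[1:] {
			out = append(out, h[:]...) // remaining hashes follow the count
		}
	}
	return out
}

func main() {
	h := encodeHeader(2, [][32]byte{{0xaa}, {0xbb}})
	fmt.Println(len(h), h[257]) // 290 2
}
```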
No actionable comments were generated in the recent review. 🎉
ℹ️ Recent review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (7)
📝 Walkthrough
Adds BatchHeader V2 and end-to-end multi-blob support: blob utilities and configs, cache packing/sealing and V2 headers, ordered blob verification/parsing, prover/executor V2 inputs and hashing, Rollup V2 commit handling, tx-submitter and gas-oracle plumbing, tests and infra updates.
Changes
Batch V2 header, interfaces, and shared blob/encoding utilities
Batch cache V2 enablement and tests
Solidity Rollup V2 and tests
Node DA verification and batch parsing updates
Prover multi-blob input, hashing, and verification
Gas oracle multi-blob payload and scalar
Tx-submitter refactor, config, and infra
Sequence Diagram(s)
sequenceDiagram
participant Cache
participant L2Gov
participant BlobUtils
participant TxSubmitter
participant Rollup
participant Beacon
participant Prover
Cache->>L2Gov: BatchBlockInterval/BatchTimeout
Cache->>BlobUtils: CompressBatchBytes -> MakeBlobTxSidecar
Cache->>TxSubmitter: Emit V2 header (aggregated blob hashes)
TxSubmitter->>Rollup: commitBatch(V<=2, sidecar)
Rollup-->>TxSubmitter: committed
TxSubmitter->>Beacon: fetch sidecars by tx
Beacon-->>TxSubmitter: blobs+commitments
Prover->>Prover: verify_blobs -> versioned_hashes
Prover->>Prover: public_input_hash_v2(aggregated hash)
Estimated code review effort
🎯 4 (Complex) | ⏱️ ~60 minutes
Possibly related PRs
Suggested reviewers
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…mpress once
Split get_origin_batch into unpack_blob (field-element unpack) and decompress_batch (zstd decompress). verify_blobs now KZG-verifies each blob independently, unpacks all compressed chunks, concatenates them, then calls decompress_batch once. Previously each blob was independently decompressed, which fails for N>1 since the zstd frame spans all chunks.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…fset 57
V2 headers now use the same 257-byte format as V1 with the aggregated blob hash (keccak256 of all blob hashes) at offset 57. This eliminates BatchHeaderCodecV2, simplifies contracts/prover/submitter, and fixes the multi-blob decompression bug in blob_verifier.
- Delete BatchHeaderCodecV2.sol; V2 commitBatch computes the aggregated hash inline
- Unify _verifyProof and _loadBatchHeader for all versions
- Remove the BatchHeaderV2 struct in Go; V2 uses the V1 format + version override
- Simplify the Rust challenge handler, queue, shadow_rollup (uniform 96-byte batch_header_ex)
- Fix verify_blobs: decode BLS scalars per blob, concatenate, decompress once

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
V2 should store the aggregated blob hash (keccak256 of all blob hashes) in batchBlobVersionedHashes, consistent with the header offset 57 value, instead of blobhash(0). Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…struction
Move blobVersionedHash computation out of _commitBatchWithBatchData into callers via a new _computeBlobVersionedHash(version) helper:
- V0/V1: blobhash(0) or ZERO_VERSIONED_HASH
- V2: keccak256(blobhash(0)||...||blobhash(N-1)), requires >= 1 blob

_commitBatchWithBatchData now has a single unified header construction path for all versions — no more V2/V0V1 branch split.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
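The version routing in that helper can be sketched in Go. Note the loud assumptions: sha256 stands in for keccak256 (which Go's standard library lacks), and the function and variable names are illustrative, not the contract's.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

var zeroVersionedHash [32]byte // stand-in for ZERO_VERSIONED_HASH

// computeBlobVersionedHash mirrors the routing described above:
// V0/V1 store a single hash (or the zero sentinel), V2 stores the
// hash of the concatenation of all blob hashes. sha256 stands in
// for keccak256 in this sketch.
func computeBlobVersionedHash(version int, blobHashes [][32]byte) ([32]byte, error) {
	if version < 2 {
		if len(blobHashes) == 0 {
			return zeroVersionedHash, nil
		}
		return blobHashes[0], nil // blobhash(0)
	}
	if len(blobHashes) == 0 {
		return [32]byte{}, fmt.Errorf("v2 batch requires >= 1 blob")
	}
	h := sha256.New()
	for _, bh := range blobHashes {
		h.Write(bh[:]) // order-sensitive concatenation
	}
	var out [32]byte
	copy(out[:], h.Sum(nil))
	return out, nil
}

func main() {
	raw := [32]byte{0x01}
	agg, _ := computeBlobVersionedHash(2, [][32]byte{raw})
	// Even with a single blob, V2 stores H(h0), never h0 itself.
	fmt.Println(agg != raw) // true
}
```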
Remove the V2 restriction in commitState — with the simplified V2 header format (aggregated hash at offset 57), the stored batchBlobVersionedHashes value is sufficient to recommit without a blob, same as V0/V1. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…rsionedHash
- Move `require(_blobCount > 0)` before the keccak256 assembly block to avoid computing keccak256 on empty input when no blobs are attached
- Add multi-blob aggregated hash unit tests to RollupCommitBatchV2Test: single/two/three blob correctness, order-sensitivity, and _verifyProof publicInputHash alignment with V2 aggregated hash semantics
- Add Rust unit tests for public_input_hash_v2 in prover-executor-client: single/two/three blob correctness, order-sensitivity, and structural difference from V1 (V2 uses keccak(h0||...||hN-1), not the raw hash)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: corey <corey.zhang@bitget.com>
Co-authored-by: corey <corey.zhang@bitget.com> Co-authored-by: Cursor <cursoragent@cursor.com>
Resolve conflicts after PR #939 (remove sequencer batch generation):
- node/types/batch_test.go: keep the HEAD (multi_batch) version; it carries TestBatchHeaderV2 / TestBlobHashesHashUnavailableForLegacy, which still exercise the BatchHeaderV0/V1/V2 helpers retained on this branch.
- go.work.sum: union both sides (toolchain entries from x/exp, x/mod, x/tools); identical go directive on both branches.
- go-ethereum submodule: take main's 045be0fd (v2.2.2, includes the matching "Remove sequencer batch write paths" change for #939) over multi_batch's older 62952ec7.
- common/go.mod: bump the tendermint replace from v0.3.4 -> v0.3.7 to align with the node/tx-submitter/contracts/oracle/ops modules; common itself has no tendermint imports, so the directive was stale and was the cause of "conflicting replacements for github.com/tendermint/tendermint" in the workspace build.

Co-authored-by: Cursor <cursoragent@cursor.com>
…/batch (#945) Co-authored-by: corey <corey.zhang@bitget.com> Co-authored-by: Cursor <cursoragent@cursor.com>
Actionable comments posted: 11
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
ops/docker/docker-compose-4nodes.yml (1)
485-497: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Use one batch-V2 cutover config for every submitter.
This compose file now enables V2 batching only on `tx-submitter-0`: it hardcodes `TX_SUBMITTER_BATCH_V2_UPGRADE_TIME` and `TX_SUBMITTER_SEAL_BATCH`, while the `multi-submitter` instances can be started without either flag. In the same file the nodes read `${BATCH_UPGRADE_TIME}`, so enabling that profile can put submitters and nodes on different batch/header versions.
Suggested config alignment
 tx-submitter-0:
   environment:
-    - TX_SUBMITTER_BATCH_V2_UPGRADE_TIME=1777533291
+    - TX_SUBMITTER_BATCH_V2_UPGRADE_TIME=${BATCH_UPGRADE_TIME}
     - TX_SUBMITTER_SEAL_BATCH=true
 tx-submitter-1:
   environment:
+    - TX_SUBMITTER_SEAL_BATCH=true
+    - TX_SUBMITTER_BATCH_V2_UPGRADE_TIME=${BATCH_UPGRADE_TIME}
 tx-submitter-2:
   environment:
+    - TX_SUBMITTER_SEAL_BATCH=true
+    - TX_SUBMITTER_BATCH_V2_UPGRADE_TIME=${BATCH_UPGRADE_TIME}
 tx-submitter-3:
   environment:
+    - TX_SUBMITTER_SEAL_BATCH=true
+    - TX_SUBMITTER_BATCH_V2_UPGRADE_TIME=${BATCH_UPGRADE_TIME}

Also applies to: 499-541, 542-584, 585-626
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@ops/docker/docker-compose-4nodes.yml` around lines 485 - 497, The compose file enables batch-V2 only for tx-submitter-0 by hardcoding TX_SUBMITTER_BATCH_V2_UPGRADE_TIME and TX_SUBMITTER_SEAL_BATCH while other submitters (multi-submitter) lack them; change the submitter service environment blocks so all submitter instances use the same batch cutover vars (reference TX_SUBMITTER_BATCH_V2_UPGRADE_TIME and TX_SUBMITTER_SEAL_BATCH) and/or point them to the shared ${BATCH_UPGRADE_TIME} variable used by the nodes; update the tx-submitter-0 and multi-submitter service env lists so they consistently include TX_SUBMITTER_BATCH_V2_UPGRADE_TIME (or ${BATCH_UPGRADE_TIME}) and TX_SUBMITTER_SEAL_BATCH to ensure all submitters flip to batch-V2 together.
common/batch/batch_cache_test.go (1)
28-59: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Don't block a test on manual SIGINT.
`TestBatchCacheInitServer` never returns unless someone sends `os.Interrupt`, so `go test` can hang indefinitely in CI and local runs. Please gate this as a manual/integration test or give it a bounded timeout and stop the signal handler on exit.
Suggested fix
 func TestBatchCacheInitServer(t *testing.T) {
+	if os.Getenv("RUN_BATCH_CACHE_INIT_SERVER_TEST") == "" {
+		t.Skip("manual integration test; set RUN_BATCH_CACHE_INIT_SERVER_TEST=1 to run")
+	}
+
 	testDB := openTestKV(t)
 	cache := NewBatchCache(nil, nil, 2, l1Client, &SingleL2Client{C: l2Client}, rollupContract, l2Gov, testDB)
@@
 	interrupt := make(chan os.Signal, 1)
 	signal.Notify(interrupt, os.Interrupt)
-	<-interrupt
+	defer signal.Stop(interrupt)
+
+	select {
+	case <-interrupt:
+	case <-time.After(30 * time.Second):
+		t.Fatal("timed out waiting for TestBatchCacheInitServer to finish")
+	}
 }
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@common/batch/batch_cache_test.go` around lines 28 - 59, TestBatchCacheInitServer currently blocks waiting for os.Interrupt and can hang CI; change it to be bounded or skipped: either (A) skip in normal runs by adding at the top if testing.Short() { t.Skip("manual/integration test") } or (B) enforce a timeout by replacing the blocking interrupt wait with a context.WithTimeout(ctx, <duration>) and select on ctx.Done() instead of <-interrupt, and ensure you stop the signal handler by calling signal.Stop(interrupt) and closing the channel when the test exits. Update references to TestBatchCacheInitServer, interrupt, InitAndSyncFromDatabase, and AssembleCurrentBatchHeader accordingly so the goroutines exit when the test context cancels.
node/derivation/batch_info.go (1)
127-145: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Validate block-number ordering before computing `blockCount`.
Both subtraction paths can underflow if a malformed batch reports a `LastBlockNumber` smaller than the decoded start/parent block. That turns `blockCount` into a huge value and later blows up allocation/slicing in this parser. Reject the batch before subtracting when the numbers go backwards.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@node/derivation/batch_info.go` around lines 127 - 145, Before computing blockCount, validate that batch.LastBlockNumber is not less than the reference block to avoid unsigned underflow: in the parentVersion == 0 branch, after decoding startBlock (BlockContext.Decode on batchBytes[:60]) check that batch.LastBlockNumber >= startBlock.Number and return a clear error if not; in the else branch, after obtaining parentBatchBlock from parentBatchHeader.LastBlockNumber(), check that batch.LastBlockNumber >= parentBatchBlock and return an error if not. Keep the existing blockCount formulas (batch.LastBlockNumber - startBlock.Number + 1 and batch.LastBlockNumber - parentBatchBlock) but only perform them after these validations.
tx-submitter/services/rollup.go (1)
1260-1261: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win
Potential truncation when converting gas caps to uint256.
`tip.Int64()` and `gasFeeCap.Int64()` truncate values exceeding `math.MaxInt64` (~9.2 ETH in wei). While unlikely under normal conditions, extreme gas spikes could cause silent data loss. Pass the `*big.Int` directly to `uint256.MustFromBig`.
Proposed fix
 return ethtypes.NewTx(&ethtypes.BlobTx{
 	ChainID:   uint256.MustFromBig(r.chainId),
 	Nonce:     nonce,
-	GasTipCap: uint256.MustFromBig(big.NewInt(tip.Int64())),
-	GasFeeCap: uint256.MustFromBig(big.NewInt(gasFeeCap.Int64())),
+	GasTipCap: uint256.MustFromBig(tip),
+	GasFeeCap: uint256.MustFromBig(gasFeeCap),
 	Gas:       gas,
 	To:        r.rollupAddr,
 	Data:      calldata,
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@tx-submitter/services/rollup.go` around lines 1260 - 1261, Replace the truncating Int64 conversions for gas caps by passing the original *big.Int values into uint256.MustFromBig: locate the struct/assignment where GasTipCap and GasFeeCap are set (the lines using GasTipCap, GasFeeCap, uint256.MustFromBig, and the tip and gasFeeCap variables) and change the arguments from big.NewInt(tip.Int64()) and big.NewInt(gasFeeCap.Int64()) to pass tip and gasFeeCap directly to uint256.MustFromBig so no data is silently truncated.
prover/crates/executor/client/src/verifier/blob_verifier.rs (1)
48-50: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Replace `unwrap()` calls with proper error handling in lines 48-50.
The function returns `Result<B256, anyhow::Error>`, and the immediately following operation (line 54) correctly uses `.map_err(...)?` for error propagation. The `from_slice()` calls on lines 48-50 should follow the same pattern. Using `unwrap()` on potentially invalid input will panic instead of returning a recoverable error, which is problematic in a ZK proof verification context.
Proposed fix
-        let blob = KzgRsBlob::from_slice(&blob_info.blob_data).unwrap();
-        let commitment = Bytes48::from_slice(&blob_info.commitment).unwrap();
-        let proof = Bytes48::from_slice(&blob_info.proof).unwrap();
+        let blob = KzgRsBlob::from_slice(&blob_info.blob_data)
+            .map_err(|e| anyhow!("invalid blob data: {e:?}"))?;
+        let commitment = Bytes48::from_slice(&blob_info.commitment)
+            .map_err(|e| anyhow!("invalid commitment: {e:?}"))?;
+        let proof = Bytes48::from_slice(&blob_info.proof)
+            .map_err(|e| anyhow!("invalid proof: {e:?}"))?;
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@prover/crates/executor/client/src/verifier/blob_verifier.rs` around lines 48 - 50, Replace the three unwrap() calls so malformed input yields a propagated error instead of panicking: call KzgRsBlob::from_slice(&blob_info.blob_data).map_err(|e| anyhow::anyhow!("invalid blob_data: {}", e))? and do the same pattern for Bytes48::from_slice(&blob_info.commitment) and Bytes48::from_slice(&blob_info.proof), using descriptive messages (e.g. "invalid commitment" / "invalid proof") and the ? operator to return Result<B256, anyhow::Error>, consistent with the existing .map_err(...)? style used later in this function.
🧹 Nitpick comments (1)
gas-oracle/app/src/da_scalar/blob.rs (1)
202-206: ⚡ Quick win
Add at least one true multi-blob decode test.
These assertions still exercise `num_blobs = 1`, so they won't catch boundary mistakes in concatenation/trimming across blob joins—the core behavior added by this PR. Please add a fixture or generated case whose compressed payload spans at least two blobs.
Also applies to: 250-254
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@gas-oracle/app/src/da_scalar/blob.rs` around lines 202 - 206, Add a true multi-blob decode unit test that constructs at least two Blob instances whose concatenated payload contains a single compressed batch split across the blob boundary (so the compressed bytes start in blob A and continue in blob B), then call Blob::get_payload_bytes on each blob and run Blob::detect_zstd_compressed and the downstream decompression path (decompress_batch or whatever consumer is used) to assert the full batch decodes correctly; target the same test module that currently uses get_payload_bytes and Blob::detect_zstd_compressed, create fixtures or generate a zstd-compressed payload long enough to span blobs, split it across two Blob objects, and assert the decoded records match the original uncompressed data.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@common/batch/batch_cache_genesis_header_test.go`:
- Around line 127-141: The test blocks on waiting for an OS interrupt and leaves
the background go testLoop running; make it self-terminating by replacing the
manual interrupt with a cancellable/timeout approach: either call
cache.AssembleCurrentBatchHeader() synchronously once (removing go testLoop and
the interrupt channel), or run go testLoop with a context.WithTimeout/WithCancel
and call cancel after the timeout (or use time.After) so the loop exits and the
test continues; update references to batchCacheSyncMu, go testLoop and
AssembleCurrentBatchHeader to ensure the lock is held around the single call or
is used inside the timed loop and remove the blocking <-interrupt.
In `@common/batch/commit_test.go`:
- Around line 26-29: The test currently defines pk = "" which causes a panic
when code slices pk[2:] later; update the test to either supply a valid
non-empty test key or guard the test with an explicit skip when the key is
unset: check the package-level variable pk before any slicing (the code that
uses pk[2:] or functions that derive from pk) and call t.Skipf with a clear
message if pk == "" so the test won't panic, or replace the empty pk with a real
test key value sourced from a test config/env variable used by the test helpers.
In `@common/blob/fee.go`:
- Around line 241-248: The loop in BlobHashes drives iteration by blobs but the
output slice and inputs are commitments, causing panics or zeroed tails; update
BlobHashes to iterate over commitments (e.g., for i := range commitments) and
compute h[i] = kzg4844.CalcBlobHashV1(hasher, &commitments[i]) so the loop index
aligns with the sized result slice and the commitment used to compute each hash.
- Around line 251-261: MakeBlobProof currently indexes commitment[i] without
validating that commitment and blobs have matching lengths, which can panic when
commitment is shorter; add a preflight check at the start of MakeBlobProof that
validates len(commitment) == len(blobs) (or at least len(commitment) >=
len(blobs)) and return a clear error if not, before entering the loop that calls
kzg4844.ComputeBlobProof; this prevents panics and surfaces a normal error to
the caller.
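For context on the `BlobHashes` fix above: per EIP-4844, a blob versioned hash is sha256 of the KZG commitment with the first byte replaced by the version 0x01, so the corrected loop shape can be sketched with only the standard library. The 48-byte dummy commitments and the `blobHashes` name are illustrative; the repo uses `kzg4844.CalcBlobHashV1`.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

type Commitment [48]byte

// blobHashes iterates over commitments (not blobs), so the loop index
// always lines up with the sized result slice and the commitment that
// each hash is derived from.
func blobHashes(commitments []Commitment) [][32]byte {
	hashes := make([][32]byte, len(commitments))
	for i := range commitments {
		h := sha256.Sum256(commitments[i][:])
		h[0] = 0x01 // EIP-4844 VERSIONED_HASH_VERSION_KZG
		hashes[i] = h
	}
	return hashes
}

func main() {
	cs := []Commitment{{0xaa}, {0xbb}, {0xcc}}
	hs := blobHashes(cs)
	fmt.Println(len(hs), hs[0][0] == 0x01) // 3 true
}
```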
In `@common/blob/payload.go`:
- Around line 43-51: In RetrieveBlobBytes, the validation error prints
data[i*32] (the zeroed output buffer) so it always reports 0x00; change the
logged byte to the source blob byte (use blob[i*32]) in the fmt.Errorf call so
the error shows the actual offending byte value; keep the rest of the loop and
the copy from blob to data intact and ensure you reference the same i and
indexing used in the copy when building the error message.
In `@contracts/contracts/l1/rollup/Rollup.sol`:
- Around line 921-942: In _computeBlobVersionedHash, for the legacy branch (when
_version < 2) add a guard to reject extraneous blobs by requiring blobhash(1) ==
bytes32(0) before accepting the batch; update the else branch that computes
_blobVersionedHash (which currently uses blobhash(0) and ZERO_VERSIONED_HASH) to
first require blobhash(1) == bytes32(0) so any non-zero extra blob causes a
revert, referencing the existing blobhash and ZERO_VERSIONED_HASH symbols.
In `@gas-oracle/app/src/da_scalar/l1_scalar.rs`:
- Around line 301-303: The function currently returns Ok((0,0,0)) when
indexed_hashes.is_empty(), which later hits the l2_data_len == 0 =>
MAX_BLOB_SCALAR branch in calculate_from_rollup and forces a max blob scalar;
instead return a sentinel that means “no blobs, skip blob-scalar update” (e.g.
change the return type to Result<Option<(u64,u64,u64)>, _> or return a distinct
enum variant) from the blob-processing function where indexed_hashes and tx_hash
are available, and update calculate_from_rollup to detect that sentinel and skip
updating the blob scalar rather than treating it as zero-length calldata; ensure
callers of the blob function (names: indexed_hashes, calculate_from_rollup) are
updated to handle the Option/enum and preserve existing behavior for real
zero-length calldata vs blobless batches.
In `@node/derivation/batch_info.go`:
- Around line 150-155: The code reads the 2-byte block count prefix without
checking length, causing a panic when batch.BlockContexts is non-nil but shorter
than 2; before calling binary.BigEndian.Uint16(batch.BlockContexts[:2]) (and
before computing blockCount), add a guard that len(batch.BlockContexts) >= 2 and
return a clear error if not; keep the subsequent existing length check
(len(batch.BlockContexts) < 2+60*blockCount) intact and reference
batch.BlockContexts and blockCount when updating the error messages.
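The guard described in the batch_info.go item above can be sketched as follows; the function name and the 60-byte per-context size are taken from the comment, everything else is hypothetical.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// parseBlockContexts reads the 2-byte big-endian block count only
// after checking the slice is long enough, then validates the body
// length (60 bytes per block context, per the review comment).
func parseBlockContexts(blockContexts []byte) (int, error) {
	if len(blockContexts) < 2 {
		return 0, fmt.Errorf("block contexts too short: %d bytes, need >= 2", len(blockContexts))
	}
	blockCount := int(binary.BigEndian.Uint16(blockContexts[:2]))
	if len(blockContexts) < 2+60*blockCount {
		return 0, fmt.Errorf("block contexts truncated: have %d, need %d",
			len(blockContexts), 2+60*blockCount)
	}
	return blockCount, nil
}

func main() {
	// One byte: would panic in binary.BigEndian.Uint16 without the guard.
	_, err := parseBlockContexts([]byte{0x01})
	fmt.Println(err != nil) // true

	n, _ := parseBlockContexts(append([]byte{0x00, 0x02}, make([]byte, 120)...))
	fmt.Println(n) // 2
}
```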
In `@prover/crates/executor/client/src/lib.rs`:
- Line 13: BlobVerifier::verify_blobs currently returns versioned_hashes and
batch_data_from_blob but the caller blindly indexes versioned_hashes[0] for
V0/V1 and treats any >=2 as V2; update the caller to explicitly validate the
blob version and the exact number of hashes before computing the public-input
(PI) hash: check the version value in versioned_hashes (only accept the explicit
supported variants, e.g., 0,1,2), assert that the hash cardinality matches the
expected count for that version (e.g., V0/V1 => exactly 1 hash, V2 => expected 2
hashes), and return an Err when the version is unknown or the count mismatches
instead of proceeding to compute the PI hash (affects the code that uses
versioned_hashes and batch_data_from_blob, including the logic around lines
37-41).
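The boundary validation suggested for lib.rs can be sketched as a small Go table check. The per-version count rule here is an assumption drawn from this review thread (V0/V1 carry exactly one hash, V2 at least one), not from the repo.

```go
package main

import "fmt"

// validateBlobHashes rejects unknown versions and hash-count
// mismatches before any public-input hashing happens, instead of
// blindly indexing versioned_hashes[0].
func validateBlobHashes(version byte, hashes [][32]byte) error {
	switch version {
	case 0, 1:
		if len(hashes) != 1 {
			return fmt.Errorf("version %d expects exactly 1 blob hash, got %d", version, len(hashes))
		}
	case 2:
		if len(hashes) == 0 {
			return fmt.Errorf("version 2 expects at least 1 blob hash")
		}
	default:
		return fmt.Errorf("unsupported batch version %d", version)
	}
	return nil
}

func main() {
	fmt.Println(validateBlobHashes(1, [][32]byte{{}, {}}) != nil) // true: too many for V1
	fmt.Println(validateBlobHashes(3, nil) != nil)                // true: unknown version
	fmt.Println(validateBlobHashes(2, [][32]byte{{}}) == nil)     // true: valid V2
}
```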
In `@prover/crates/executor/client/src/types/blob.rs`:
- Around line 17-29: The function decode_blob_scalars assumes blob_data has
BLOB_WIDTH*32 bytes and will panic on short inputs; add an explicit length check
at the start of decode_blob_scalars to validate blob_data.len() is at least (or
exactly) BLOB_WIDTH * 32 and return an Err (anyhow::Error) if not, so you avoid
indexing panics and clarify behavior for longer inputs (either reject lengths !=
BLOB_WIDTH*32 or document/handle > by only processing the first BLOB_WIDTH*32
bytes); reference the function name decode_blob_scalars and the constant
BLOB_WIDTH to locate where to add this guard.
In `@prover/crates/executor/client/src/types/input.rs`:
- Around line 72-74: The batch_version field currently accepts any u8; add
validation at the input boundary to reject unsupported values (allow only 0, 1,
2) by returning an error on deserialization or conversion. Implement this by
either adding a custom Deserialize for the input struct in types/input.rs or
implementing TryFrom<RawInput> (or a validate() method invoked immediately after
deserialization) that checks the batch_version field and returns an Err for any
value not in {0,1,2}; ensure the error propagates back to the caller so
malformed inputs are rejected before any execution or hash-version branching
occurs.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 4edcdb64-4766-4507-8797-0cbdffcef1ad
⛔ Files ignored due to path filters (4)
- `common/go.sum` is excluded by `!**/*.sum`
- `go.work` is excluded by `!**/*.work`
- `go.work.sum` is excluded by `!**/*.sum`
- `tx-submitter/go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (58)
- common/batch/batch_cache.go
- common/batch/batch_cache_genesis_header_test.go
- common/batch/batch_cache_test.go
- common/batch/batch_data.go
- common/batch/batch_header.go
- common/batch/batch_header_test.go
- common/batch/batch_query.go
- common/batch/batch_restart_test.go
- common/batch/batch_storage.go
- common/batch/blob.go
- common/batch/commit_test.go
- common/batch/encoding.go
- common/batch/helpers_test.go
- common/batch/interfaces.go
- common/batch/l2_gov.go
- common/blob/fee.go
- common/blob/payload.go
- common/go.mod
- contracts/contracts/l1/rollup/Rollup.sol
- contracts/contracts/test/Rollup.t.sol
- gas-oracle/app/src/da_scalar/blob.rs
- gas-oracle/app/src/da_scalar/calculate.rs
- gas-oracle/app/src/da_scalar/l1_scalar.rs
- go-ethereum
- node/derivation/batch_info.go
- node/derivation/batch_info_test.go
- node/derivation/beacon.go
- node/derivation/derivation.go
- node/types/batch_header.go
- ops/docker/docker-compose-4nodes.yml
- ops/docker/layer1/configs/values.env.template
- ops/l2-genesis/deploy-config/devnet-deploy-config.json
- oracle/oracle/batch.go
- prover/bin/challenge/src/handler.rs
- prover/bin/host/src/execute.rs
- prover/bin/host/src/main.rs
- prover/bin/server/src/queue.rs
- prover/bin/shadow-prove/src/execute.rs
- prover/bin/shadow-prove/src/main.rs
- prover/bin/shadow-prove/src/shadow_rollup.rs
- prover/crates/executor/client/src/lib.rs
- prover/crates/executor/client/src/types/batch.rs
- prover/crates/executor/client/src/types/blob.rs
- prover/crates/executor/client/src/types/input.rs
- prover/crates/executor/client/src/verifier/blob_verifier.rs
- prover/crates/executor/host/src/blob.rs
- tx-submitter/batch/batch_storage_test.go
- tx-submitter/batch/blob.go
- tx-submitter/flags/flags.go
- tx-submitter/go.mod
- tx-submitter/iface/client.go
- tx-submitter/services/rollup.go
- tx-submitter/types/blob.go
- tx-submitter/types/blob_compat.go
- tx-submitter/types/blob_params.go
- tx-submitter/types/converter.go
- tx-submitter/types/l2Caller.go
- tx-submitter/utils/config.go
💤 Files with no reviewable changes (5)
- tx-submitter/batch/batch_storage_test.go
- tx-submitter/go.mod
- tx-submitter/types/blob_params.go
- tx-submitter/types/blob.go
- tx-submitter/batch/blob.go
go testLoop(cache.ctx, 5*time.Second, func() {
	batchCacheSyncMu.Lock()
	defer batchCacheSyncMu.Unlock()
	err := cache.AssembleCurrentBatchHeader()
	if err != nil {
		log.Error("Assemble current batch failed, wait for the next try", "error", err)
	}
})

// Catch CTRL-C to ensure a graceful shutdown.
interrupt := make(chan os.Signal, 1)
signal.Notify(interrupt, os.Interrupt)

// Wait until the interrupt signal is received from an OS signal.
<-interrupt
Make this test self-terminating.
Line 141 blocks until a manual os.Interrupt arrives, so go test never finishes on its own here. The background loop also keeps running after the assertions pass. If the intent is just to exercise AssembleCurrentBatchHeader, call it once synchronously or run the loop under a cancellable context with a deadline.
🛠️ Finite test shape
```diff
-	go testLoop(cache.ctx, 5*time.Second, func() {
-		batchCacheSyncMu.Lock()
-		defer batchCacheSyncMu.Unlock()
-		err := cache.AssembleCurrentBatchHeader()
-		if err != nil {
-			log.Error("Assemble current batch failed, wait for the next try", "error", err)
-		}
-	})
-
-	// Catch CTRL-C to ensure a graceful shutdown.
-	interrupt := make(chan os.Signal, 1)
-	signal.Notify(interrupt, os.Interrupt)
-
-	// Wait until the interrupt signal is received from an OS signal.
-	<-interrupt
+	batchCacheSyncMu.Lock()
+	err = cache.AssembleCurrentBatchHeader()
+	batchCacheSyncMu.Unlock()
+	require.NoError(t, err)
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@common/batch/batch_cache_genesis_header_test.go` around lines 127 - 141, The
test blocks on waiting for an OS interrupt and leaves the background go testLoop
running; make it self-terminating by replacing the manual interrupt with a
cancellable/timeout approach: either call cache.AssembleCurrentBatchHeader()
synchronously once (removing go testLoop and the interrupt channel), or run go
testLoop with a context.WithTimeout/WithCancel and call cancel after the timeout
(or use time.After) so the loop exits and the test continues; update references
to batchCacheSyncMu, go testLoop and AssembleCurrentBatchHeader to ensure the
lock is held around the single call or is used inside the timed loop and remove
the blocking <-interrupt.
```go
var (
	pk                = ""
	errExceedFeeLimit = errors.New("exceed fee limit")
)
```
Make the test key non-empty or skip when it is unset.
With pk = "", the later pk[2:] slice panics before the test reaches any assertions. This needs a real test key source or an explicit skip path when the secret is not configured.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@common/batch/commit_test.go` around lines 26 - 29, The test currently defines
pk = "" which causes a panic when code slices pk[2:] later; update the test to
either supply a valid non-empty test key or guard the test with an explicit skip
when the key is unset: check the package-level variable pk before any slicing
(the code that uses pk[2:] or functions that derive from pk) and call t.Skipf
with a clear message if pk == "" so the test won't panic, or replace the empty
pk with a real test key value sourced from a test config/env variable used by
the test helpers.
```go
// BlobHashes computes the blob hashes of the given blobs.
func BlobHashes(blobs []kzg4844.Blob, commitments []kzg4844.Commitment) []common.Hash {
	hasher := sha256.New()
	h := make([]common.Hash, len(commitments))
	for i := range blobs {
		h[i] = kzg4844.CalcBlobHashV1(hasher, &commitments[i])
	}
	return h
}
```
Iterate over commitments in BlobHashes.
The result slice is sized from commitments, but the loop uses range blobs. If len(blobs) > len(commitments) this panics on commitments[i]; if len(commitments) > len(blobs), the tail of the returned hashes stays zeroed. The hash here is commitment-derived, so this loop should be driven by commitments.
Suggested fix

```diff
 func BlobHashes(blobs []kzg4844.Blob, commitments []kzg4844.Commitment) []common.Hash {
 	hasher := sha256.New()
 	h := make([]common.Hash, len(commitments))
-	for i := range blobs {
+	for i := range commitments {
 		h[i] = kzg4844.CalcBlobHashV1(hasher, &commitments[i])
 	}
 	return h
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
// BlobHashes computes the blob hashes of the given blobs.
func BlobHashes(blobs []kzg4844.Blob, commitments []kzg4844.Commitment) []common.Hash {
	hasher := sha256.New()
	h := make([]common.Hash, len(commitments))
	for i := range commitments {
		h[i] = kzg4844.CalcBlobHashV1(hasher, &commitments[i])
	}
	return h
}
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@common/blob/fee.go` around lines 241 - 248, The loop in BlobHashes drives
iteration by blobs but the output slice and inputs are commitments, causing
panics or zeroed tails; update BlobHashes to iterate over commitments (e.g., for
i := range commitments) and compute h[i] = kzg4844.CalcBlobHashV1(hasher,
&commitments[i]) so the loop index aligns with the sized result slice and the
commitment used to compute each hash.
```go
// MakeBlobProof builds KZG proofs for blob transactions (sidecar v0).
func MakeBlobProof(blobs []kzg4844.Blob, commitment []kzg4844.Commitment) ([]kzg4844.Proof, error) {
	proofs := make([]kzg4844.Proof, len(blobs))
	for i := range blobs {
		proof, err := kzg4844.ComputeBlobProof(&blobs[i], commitment[i])
		if err != nil {
			return nil, err
		}
		proofs[i] = proof
	}
	return proofs, nil
}
```
Reject mismatched proof inputs before indexing commitment[i].
MakeBlobProof already returns an error, but today a short commitment slice crashes the caller instead. This should fail fast with a normal error before the loop.
Suggested fix

```diff
 func MakeBlobProof(blobs []kzg4844.Blob, commitment []kzg4844.Commitment) ([]kzg4844.Proof, error) {
+	if len(blobs) != len(commitment) {
+		return nil, fmt.Errorf("blob/commitment length mismatch: %d != %d", len(blobs), len(commitment))
+	}
 	proofs := make([]kzg4844.Proof, len(blobs))
 	for i := range blobs {
 		proof, err := kzg4844.ComputeBlobProof(&blobs[i], commitment[i])
 		if err != nil {
 			return nil, err
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@common/blob/fee.go` around lines 251 - 261, MakeBlobProof currently indexes
commitment[i] without validating that commitment and blobs have matching
lengths, which can panic when commitment is shorter; add a preflight check at
the start of MakeBlobProof that validates len(commitment) == len(blobs) (or at
least len(commitment) >= len(blobs)) and return a clear error if not, before
entering the loop that calls kzg4844.ComputeBlobProof; this prevents panics and
surfaces a normal error to the caller.
```go
func RetrieveBlobBytes(blob *kzg4844.Blob) ([]byte, error) {
	data := make([]byte, MaxBlobBytesSize)
	for i := 0; i < 4096; i++ {
		if blob[i*32] != 0 {
			return nil, fmt.Errorf("invalid blob, found non-zero high order byte %x of field element %d", data[i*32], i)
		}
		copy(data[i*31:i*31+31], blob[i*32+1:i*32+32])
	}
	return data, nil
}
```
Report the offending blob byte in the validation error.
On Line 47 the error prints data[i*32], but data is the zeroed output buffer, so malformed blobs are always reported as 0x00. Use the source blob byte there.
Suggested fix

```diff
 	for i := 0; i < 4096; i++ {
 		if blob[i*32] != 0 {
-			return nil, fmt.Errorf("invalid blob, found non-zero high order byte %x of field element %d", data[i*32], i)
+			return nil, fmt.Errorf("invalid blob, found non-zero high order byte %x of field element %d", blob[i*32], i)
 		}
 		copy(data[i*31:i*31+31], blob[i*32+1:i*32+32])
 	}
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@common/blob/payload.go` around lines 43 - 51, In RetrieveBlobBytes, the
validation error prints data[i*32] (the zeroed output buffer) so it always
reports 0x00; change the logged byte to the source blob byte (use blob[i*32]) in
the fmt.Errorf call so the error shows the actual offending byte value; keep the
rest of the loop and the copy from blob to data intact and ensure you reference
the same i and indexing used in the copy when building the error message.
```diff
 if indexed_hashes.is_empty() {
     log::info!("no blob in this batch, batch_tx_hash: {:#?}", tx_hash);
-    return Ok((0, 0));
+    return Ok((0, 0, 0));
```
Skip blob-scalar recalculation for blobless batches.
Returning (0, 0, 0) here falls through to the existing l2_data_len == 0 => MAX_BLOB_SCALAR path in calculate_from_rollup, so a calldata-only / blobless commit batch can push the oracle to the max blob scalar on the next update. Please propagate a sentinel for “no blobs” and skip the blob-scalar update in that case.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@gas-oracle/app/src/da_scalar/l1_scalar.rs` around lines 301 - 303, The
function currently returns Ok((0,0,0)) when indexed_hashes.is_empty(), which
later hits the l2_data_len == 0 => MAX_BLOB_SCALAR branch in
calculate_from_rollup and forces a max blob scalar; instead return a sentinel
that means “no blobs, skip blob-scalar update” (e.g. change the return type to
Result<Option<(u64,u64,u64)>, _> or return a distinct enum variant) from the
blob-processing function where indexed_hashes and tx_hash are available, and
update calculate_from_rollup to detect that sentinel and skip updating the blob
scalar rather than treating it as zero-length calldata; ensure callers of the
blob function (names: indexed_hashes, calculate_from_rollup) are updated to
handle the Option/enum and preserve existing behavior for real zero-length
calldata vs blobless batches.
```go
if batch.BlockContexts != nil {
	// Block contexts come from calldata; the entire decompressed stream
	// is tx payload data.
	blockCount = uint64(binary.BigEndian.Uint16(batch.BlockContexts[:2]))
	if uint64(len(batch.BlockContexts)) < 2+60*blockCount {
		return fmt.Errorf("calldata block contexts too short: have %d, need %d", len(batch.BlockContexts), 2+60*blockCount)
```
Check the calldata prefix length before slicing [:2].
binary.BigEndian.Uint16(batch.BlockContexts[:2]) panics when BlockContexts is non-nil but shorter than 2 bytes. Add a len(batch.BlockContexts) < 2 guard before reading the block count prefix.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@node/derivation/batch_info.go` around lines 150 - 155, The code reads the
2-byte block count prefix without checking length, causing a panic when
batch.BlockContexts is non-nil but shorter than 2; before calling
binary.BigEndian.Uint16(batch.BlockContexts[:2]) (and before computing
blockCount), add a guard that len(batch.BlockContexts) >= 2 and return a clear
error if not; keep the subsequent existing length check
(len(batch.BlockContexts) < 2+60*blockCount) intact and reference
batch.BlockContexts and blockCount when updating the error messages.
```diff
 pub fn verify(input: ExecutorInput) -> Result<B256, anyhow::Error> {
     // Verify DA
-    let (versioned_hash, batch_data_from_blob) = BlobVerifier::verify(&input.blob_info)?;
+    let (versioned_hashes, batch_data_from_blob) = BlobVerifier::verify_blobs(&input.blob_infos)?;
```
Validate batch version and blob-hash count before computing the PI hash.
The current dispatch fails open: V0/V1 blindly index versioned_hashes[0] and ignore any additional hashes, while >= 2 treats unknown future versions as V2. That can panic on malformed input or compute a public-input hash that does not match the DA commitment. Return an error unless the version is explicitly supported and the blob-hash cardinality matches it.
Suggested fix

```diff
-    let public_input_hash = if input.batch_version >= 2 {
-        batch_info.public_input_hash_v2(&versioned_hashes)
-    } else {
-        batch_info.public_input_hash(&versioned_hashes[0])
-    };
+    let public_input_hash = match (input.batch_version, versioned_hashes.as_slice()) {
+        (0 | 1, [hash]) => batch_info.public_input_hash(hash),
+        (2, hashes) if !hashes.is_empty() => batch_info.public_input_hash_v2(hashes),
+        (0 | 1, hashes) => {
+            return Err(anyhow::anyhow!(
+                "batch version {} requires exactly 1 blob hash, got {}",
+                input.batch_version,
+                hashes.len()
+            ));
+        }
+        (2, _) => return Err(anyhow::anyhow!("batch version 2 requires at least 1 blob hash")),
+        (v, _) => return Err(anyhow::anyhow!("unsupported batch version: {v}")),
+    };
```

Also applies to: 37-41
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@prover/crates/executor/client/src/lib.rs` at line 13,
BlobVerifier::verify_blobs currently returns versioned_hashes and
batch_data_from_blob but the caller blindly indexes versioned_hashes[0] for
V0/V1 and treats any >=2 as V2; update the caller to explicitly validate the
blob version and the exact number of hashes before computing the public-input
(PI) hash: check the version value in versioned_hashes (only accept the explicit
supported variants, e.g., 0,1,2), assert that the hash cardinality matches the
expected count for that version (e.g., V0/V1 => exactly 1 hash, V2 => expected 2
hashes), and return an Err when the version is unknown or the count mismatches
instead of proceeding to compute the PI hash (affects the code that uses
versioned_hashes and batch_data_from_blob, including the logic around lines
37-41).
```diff
 pub fn decode_blob_scalars(blob_data: &[u8]) -> Result<Vec<u8>, anyhow::Error> {
     let mut chunk = vec![0u8; BLOB_WIDTH * 31];
     for i in 0..BLOB_WIDTH {
         if blob_data[i * 32] != 0 {
-            return Err(anyhow!(format!(
+            return Err(anyhow!(
                 "Invalid blob, found non-zero high order byte {:x} of field element {}",
                 blob_data[i * 32],
                 i
-            )));
+            ));
         }
-        batch_data[i * 31..i * 31 + 31].copy_from_slice(&blob_data[i * 32 + 1..i * 32 + 32]);
+        chunk[i * 31..i * 31 + 31].copy_from_slice(&blob_data[i * 32 + 1..i * 32 + 32]);
     }
-    decompress_batch(&batch_data)
+    Ok(chunk)
```
Validate the blob length before indexing into it.
Line 17 assumes blob_data is exactly BLOB_WIDTH * 32 bytes. Short inputs will panic, and longer inputs are silently ignored past the first blob-width window.
🛡️ Suggested guard

```diff
 pub fn decode_blob_scalars(blob_data: &[u8]) -> Result<Vec<u8>, anyhow::Error> {
+    let expected_len = BLOB_WIDTH * 32;
+    if blob_data.len() != expected_len {
+        return Err(anyhow!(
+            "invalid blob length: got {}, want {}",
+            blob_data.len(),
+            expected_len
+        ));
+    }
+
     let mut chunk = vec![0u8; BLOB_WIDTH * 31];
     for i in 0..BLOB_WIDTH {
         if blob_data[i * 32] != 0 {
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
pub fn decode_blob_scalars(blob_data: &[u8]) -> Result<Vec<u8>, anyhow::Error> {
    let expected_len = BLOB_WIDTH * 32;
    if blob_data.len() != expected_len {
        return Err(anyhow!(
            "invalid blob length: got {}, want {}",
            blob_data.len(),
            expected_len
        ));
    }

    let mut chunk = vec![0u8; BLOB_WIDTH * 31];
    for i in 0..BLOB_WIDTH {
        if blob_data[i * 32] != 0 {
            return Err(anyhow!(
                "Invalid blob, found non-zero high order byte {:x} of field element {}",
                blob_data[i * 32],
                i
            ));
        }
        chunk[i * 31..i * 31 + 31].copy_from_slice(&blob_data[i * 32 + 1..i * 32 + 32]);
    }
    Ok(chunk)
}
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@prover/crates/executor/client/src/types/blob.rs` around lines 17 - 29, The
function decode_blob_scalars assumes blob_data has BLOB_WIDTH*32 bytes and will
panic on short inputs; add an explicit length check at the start of
decode_blob_scalars to validate blob_data.len() is at least (or exactly)
BLOB_WIDTH * 32 and return an Err (anyhow::Error) if not, so you avoid indexing
panics and clarify behavior for longer inputs (either reject lengths !=
BLOB_WIDTH*32 or document/handle > by only processing the first BLOB_WIDTH*32
bytes); reference the function name decode_blob_scalars and the constant
BLOB_WIDTH to locate where to add this guard.
```rust
    pub blob_infos: Vec<BlobInfo>,
    #[serde(default)]
    pub batch_version: u8,
```
Reject unsupported batch_version values at input boundary.
Line 74 accepts any u8; malformed values can pass deserialization and drive invalid execution/hash-version branching. Please validate allowed versions (e.g., 0/1/2) before processing.
Suggested fix pattern

```diff
 pub struct ExecutorInput {
     pub block_inputs: Vec<BlockInput>,
     pub blob_infos: Vec<BlobInfo>,
     #[serde(default)]
     pub batch_version: u8,
 }
+
+impl ExecutorInput {
+    pub fn validate(&self) -> Result<(), ClientError> {
+        match self.batch_version {
+            0 | 1 | 2 => Ok(()),
+            v => Err(ClientError::InvalidBatchVersion(v)),
+        }
+    }
+}
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@prover/crates/executor/client/src/types/input.rs` around lines 72 - 74, The
batch_version field currently accepts any u8; add validation at the input
boundary to reject unsupported values (allow only 0, 1, 2) by returning an error
on deserialization or conversion. Implement this by either adding a custom
Deserialize for the input struct in types/input.rs or implementing
TryFrom<RawInput> (or a validate() method invoked immediately after
deserialization) that checks the batch_version field and returns an Err for any
value not in {0,1,2}; ensure the error propagates back to the caller so
malformed inputs are rejected before any execution or hash-version branching
occurs.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>