
feat: support private fork releases via ci-release#21778

Merged
ludamad merged 4 commits into merge-train/spartan from ad/ci-private-releases on Mar 19, 2026

Conversation

ludamad (Collaborator) commented Mar 19, 2026

When REF_NAME has a 'private' semver prerelease (e.g. v5.0.0-private.20260318), ci-release fetches the matching tag from aztec-packages-private, creates a worktree, and runs the release from there. Cache uploads are suppressed to avoid leaking private source artifacts.

Also locks down ci3-external.yml to use github.token instead of AZTEC_BOT_GITHUB_TOKEN.
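
The tag check described above amounts to a semver prerelease match; a minimal TypeScript sketch (the real ci-release logic is a shell script, and `isPrivateRelease` is a name invented here):

```typescript
// Detects tags like v5.0.0-private.20260318, whose semver prerelease
// identifier is "private". Illustrative only; the real check lives in
// the ci-release shell script.
function isPrivateRelease(refName: string): boolean {
  // vMAJOR.MINOR.PATCH-<identifier>.<timestamp>
  const match = refName.match(/^v\d+\.\d+\.\d+-([a-z]+)\.\d+$/);
  return match !== null && match[1] === "private";
}
```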

@ludamad ludamad requested a review from charlielye as a code owner March 19, 2026 01:17
@ludamad ludamad changed the base branch from next to merge-train/spartan March 19, 2026 14:42
@ludamad ludamad merged commit f913737 into merge-train/spartan Mar 19, 2026
11 of 12 checks passed
@ludamad ludamad deleted the ad/ci-private-releases branch March 19, 2026 17:58
github-merge-queue bot pushed a commit that referenced this pull request Mar 20, 2026
BEGIN_COMMIT_OVERRIDE
feat(p2p): add tx validator for contract instance deployment addresses
(#21771)
fix: always deploy IRM for testnet (#21755)
fix: avoid mutating caller's array via splice in snapshot sync (A-718)
(#21759)
chore: update network logs skill (#21785)
feat(archiver): validate contract instance addresses before storing
(#21787)
fix: ensure no division by 0 (#21786)
feat: support private fork releases via ci-release (#21778)
fix: restrict scenario deployments to only nightly (#21798)
fix(stdlib): zero-pad bufferFromFields when declared length exceeds
payload (#21802)
test(protocol-contracts): verify max-size bytecode fits in contract
class log (#21818)
fix: wire BOT_DA_GAS_LIMIT through helm/terraform for staging-public
(#21809)
fix: remove jest-mock-extended from worker processes + fix
parallelize_strict silent failures (#21821)
fix(archiver): throw on duplicate contract class or instance additions
(#21799)
chore: remove broadcasted function events (#21805)
fix: sync dateProvider from anvil stdout on every mined block (#21829)
fix(sequencer): use wall-clock time instead of L1 block timestamp for
slot estimation (#21769)
fix: use correct EthCheatCodes method name in epochs_missed_l1_slot test
(#21848)
feat(p2p): add tx validator for contract class id verification (#21788)
feat: publisher funding (#21631)
feat: batch chonk verifier TS integration (#21823)
fix(sequencer): remove l1 block timestamp check (#21853)
fix: use local IVC inputs for batch_verifier bench test (#21857)
fix(p2p): centralize gossipsub penalization and fix inconsistencies
(#21863)
chore: publish GitHub releases to AztecProtocol/barretenberg (#21775)
END_COMMIT_OVERRIDE
jfecher pushed a commit that referenced this pull request Mar 23, 2026
The nightly barretenberg debug build was only compiling (`bootstrap.sh
build`) but never running tests. This meant debug-only assertions (like
the `uint256::slice` bounds check that caught an incorrect parameter,
discussed below) were never exercised.

- Change `ci-barretenberg-debug` bootstrap target from `build` to `ci`
(build + test)
- Bump EC2 resources: 16 → 32 CPUs, add 120min shutdown time (debug
tests are ~2x slower)

**Manual follow-up needed**: Update
`.github/workflows/barretenberg-nightly-debug-build.yml` timeout from
120 to 240 minutes (workflow files are blocked from this session).

#21723 — Federico
discovered that `uint256::slice` only asserts `start >= end` in debug
mode. Tests passed in release CI despite incorrect parameters due to
silent wraparound. Running debug tests nightly will catch these kinds of
issues.

ClaudeBox log: https://claudebox.work/s/99cbf067abf65f72?run=1

---------

Co-authored-by: ludamad <adam.domurad@gmail.com>

chore: sha tests for mixed constant-witness inputs (#21123)

Adds some tests of the sha256 hash gadget under various mixed
constant-witness cases.

Also removes buggy but unused logic from the `sparse_value` constructor
related to constant values

chore: combine group+curve and specify "latest commit on next" (#21815)

.

chore: delete unused multisig files from schnorr (#21800)

See title

chore: route backport CI failure notifications to #backports channel (#21779)

Moves the "CI3 failed on backport PR" Slack notification from
`#team-alpha` to `#backports`, which is where all other backport-related
notifications already go.

- Updated `.github/workflows/ci3.yml`: changed the Slack channel for
backport CI failure notifications from `#team-alpha` to `#backports`

ClaudeBox log: https://claudebox.work/s/d16f655779423037?run=1

chore: Update Noir to nightly-2026-03-19

chore(docs): cut new aztec and bb docs version for tag v5.0.0-nightly.20260320

chore!: dyadic circuit size constants update (#21762)

Updates two key constants:
`CONST_PROOF_SIZE_LOG_N = 28` --> `25` (Actual max allowed by SRS)
- Decreases "padding" for Ultra flavors, results in modest decrease in
gate counts for rollup circuits etc

`CONST_FOLDING_LOG_N = 21` --> `24`
- Increases max size of circuit that can be processed by Chonk.
Increases kernel sizes modestly. Could consider leaving at 22/23 with
the argument that 24 is obscenely large. (24 is the max allowed by the
SRS assuming batch-2 multiPCS)

chore: minor pcs fixes (#21727)

fix minor issues discovered by claudebox

fix: add canonical checks for fr/fq in U256Codec::deserialize_from_fields (#21811)

Reject non-canonical field elements (>= modulus) in
`U256Codec::deserialize_from_fields`.

- Split the `bb::fr`/`fq` deserialization from
`uint32_t`/`uint64_t`/`uint256_t` to add `BB_ASSERT_LT` canonical checks
- Matches existing behavior of `FrCodec` and the Solidity verifier
(`require(v < MODULUS)`)
- Added tests verifying acceptance of canonical values and rejection of
non-canonical values
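
The shape of the canonicity check, sketched in TypeScript with a caller-supplied modulus (the real code is C++ and uses `BB_ASSERT_LT` against the bn254 field moduli; this function name is illustrative):

```typescript
// Reject non-canonical encodings: a field element must satisfy v < MODULUS.
// The modulus is a parameter here for illustration; the real checks compare
// against the bn254 Fr/Fq moduli.
function deserializeCanonical(value: bigint, modulus: bigint): bigint {
  if (value >= modulus) {
    throw new Error(`non-canonical field element: ${value} >= ${modulus}`);
  }
  return value;
}
```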

chore: honk verifier opt + fix review

This has been submitted for audit already, so I've attempted to make the changes / optimizations as small as possible. I came across these while reading through it again for the zk verifier.

Commits are squashed to please CI; however, I've pushed them to a mirror so they still exist within the repository:
- removing a scalar mul which was being multiplied by 1
    - 699c7e2
- LAGRANGE_FIRST is always (1,2), constant_term_acc is always multiplied by (1,2) so add the scalars together
    - 6f2c350
- the remaining are batching all inversions into the same modexp
    - 05217b8
    - f1a5830

This PR will be squashed - see [md/honk-golf-all-commits](https://github.com/AztecProtocol/aztec-packages/tree/md/honk-golf-all-commits) for a commit-by-commit breakdown

Co-authored-by: Alejo Amiras <alejo.amiras@gmail.com>
Co-authored-by: Alex Gherghisan <alexghr@users.noreply.github.com>
Co-authored-by: Ary Borenszweig <asterite@gmail.com>
Co-authored-by: Charlie Lye <5764343+charlielye@users.noreply.github.com>
Co-authored-by: David Banks <47112877+dbanks12@users.noreply.github.com>
Co-authored-by: Esau <esau@aztecprotocol.com>
Co-authored-by: Facundo <fcarreiro@users.noreply.github.com>
Co-authored-by: IlyasRidhuan <ilyasridhuan@gmail.com>
Co-authored-by: Jean M <132435771+jeanmon@users.noreply.github.com>
Co-authored-by: Jonathan Hao <jonathan@aztec-labs.com>
Co-authored-by: Jonathan Hao <jonathanpohsianghao@gmail.com>
Co-authored-by: Josh Crites <jc@joshcrites.com>
Co-authored-by: José Pedro Sousa <jose@aztecprotocol.com>
Co-authored-by: José Pedro Sousa <outgoing@zpedro.dev>
Co-authored-by: Khashayar Barooti <khashayar@aztecprotocol.com>
Co-authored-by: LHerskind <16536249+LHerskind@users.noreply.github.com>
Co-authored-by: Lasse Herskind <16536249+LHerskind@users.noreply.github.com>
Co-authored-by: Lucas Xia <lucasxia01@gmail.com>
Co-authored-by: MirandaWood <miranda@aztecprotocol.com>
Co-authored-by: Mitch <mitchell@aztecprotocol.com>
Co-authored-by: Mitchell Tracy <mitchellftracy@gmail.com>
Co-authored-by: Nicolás Venturo <nicolas.venturo@gmail.com>
Co-authored-by: PhilWindle <60546371+PhilWindle@users.noreply.github.com>
Co-authored-by: PhilWindle <philip.windle@gmail.com>
Co-authored-by: Raju Krishnamoorthy <krishnamoorthy@gmail.com>
Co-authored-by: Rumata888 <isennovskiy@gmail.com>
Co-authored-by: Santiago Palladino <santiago@aztec-labs.com>
Co-authored-by: Sarkoxed <sarkoxed2013@yandex.ru>
Co-authored-by: Savio <72797635+Savio-Sou@users.noreply.github.com>
Co-authored-by: StoneMac65 <StoneMac65@gmail.com>
Co-authored-by: Tom French <15848336+TomAFrench@users.noreply.github.com>
Co-authored-by: benesjan <13470840+benesjan@users.noreply.github.com>
Co-authored-by: benesjan <janbenes1234@gmail.com>
Co-authored-by: critesjosh <18372439+critesjosh@users.noreply.github.com>
Co-authored-by: danielntmd <danielntmd@nethermind.io>
Co-authored-by: dbanks12 <david@aztec-labs.com>
Co-authored-by: fcarreiro <facundo@aztecprotocol.com>
Co-authored-by: federicobarbacovi <171914500+federicobarbacovi@users.noreply.github.com>
Co-authored-by: guipublic <47281315+guipublic@users.noreply.github.com>
Co-authored-by: iAmMichaelConnor <mike@aztecprotocol.com>
Co-authored-by: jeanmon <jean@aztec-labs.com>
Co-authored-by: jewelofchaos9 <jewelofchaos9@gmail.com>
Co-authored-by: josh crites <jc@joshcrites.com>
Co-authored-by: ledwards2225 <98505400+ledwards2225@users.noreply.github.com>
Co-authored-by: ledwards2225 <l.edwards.d@gmail.com>
Co-authored-by: lucasxia01 <lucasxia01@gmail.com>
Co-authored-by: ludamad <163993+ludamad@users.noreply.github.com>
Co-authored-by: ludamad <adam.domurad@gmail.com>
Co-authored-by: ludamad <domuradical@gmail.com>
Co-authored-by: mralj <nikola.mratinic@gmail.com>
Co-authored-by: nventuro <2530770+nventuro@users.noreply.github.com>
Co-authored-by: sergei iakovenko <105737703+iakovenkos@users.noreply.github.com>
Co-authored-by: sirasistant <sirasistant@gmail.com>
Co-authored-by: thunkar <gregojquiros@gmail.com>

fix: cancel SSM commands on signal to prevent orphaned in-progress commands

When a CI job is interrupted, the SSM command could remain InProgress
on the SSM service even after the EC2 instance is terminated. Add a
SIGINT/SIGTERM trap in ssm_send_command to cancel the command server-side,
and suppress output from aws_terminate_instance in the cleanup trap.

chore: update Chonk README and audit scopes for batched hiding+translator flow (#21695)

Updates Chonk documentation to reflect the actual prover/verifier flow
with batched MegaZK + Translator sumcheck and PCS.

**Chonk README.md:**
- Rewrote Proof Structure section: old `mega_proof` + `GoblinProof` →
actual 5-segment structure (`hiding_oink_proof`, `merge_proof`,
`eccvm_proof`, `ipa_proof`, `joint_proof`)
- Rewrote Verification Architecture: old 4 pairing point sets with
`MegaZKVerifier::reduce_to_pairing_check` → actual
`BatchedHonkTranslatorVerifier` two-phase API with 3 pairing point sets
(PI, Merge, Batched PCS)
- Added Components subsection for Batched Honk + Translator with link to
its README
- Updated Zero-Knowledge section: separate MegaZK/Translator ZK → joint
ZK via joint Libra masking
- Updated Transcript Sharing, verification code snippets, and Appendix
references

**Audit scope files updates**

ClaudeBox log: https://claudebox.work/s/6cbc1f828c73c0be?run=1

---------

Co-authored-by: sergei iakovenko <105737703+iakovenkos@users.noreply.github.com>
Co-authored-by: iakovenkos <sergey.s.yakovenko@gmail.com>

chore: Update Noir to nightly-2026-03-20

fix(avm): keccak pre-audit (#21486)

Mostly docs update for keccak.

fix(avm)!: calldata - internal audit (#21380)

Linear issue:
[AVM-70](https://linear.app/aztec-labs/issue/AVM-70/calldata)

---------

Co-authored-by: Miranda Wood <miranda@aztecprotocol.com>

chore: fix alu comment (#21804)

Comment mentions wrong op

chore: misnamed copy function in public db (#21810)

It's not a deep copy! It's a shallow copy, or more accurately a fork.

fix(avm)!: add std multi-row computation constraint (#21718)

Adds the standard multi-row constraint to prevent valid computation from
being maliciously terminated midway.

Note that there is no vulnerability that this was fixing.
`[WRITE_TO_SLICE]` is a permutation requiring `round == 24` so a
malicious prover could not invalidly terminate the keccak computation.
This is just to standardise multi-row constraints.

fix(avm): fuzzer compilation issue related to calldata (#21838)

.

fix(avm)!: reduce keccakf cols (#21641)

Removes 29 columns and subrelations by optimising the way we constrain
bitwise rotations.

chore: fix ordering of external call arg comments (#21806)

Arg tag comment ordering mismatch

add params

e2e test

implementation stage 1

unit tests

fix todos & expand e2e test

fix self funding

fix build

fix build

update comments

use multicall

add e2e test

cap publishers

funding cooldown

fix suggestions

increase timeouts

feat(p2p): add tx validator for contract instance deployment addresses

Validates that contract instance deployment logs contain correct addresses
by recomputing the address from the event fields and rejecting mismatches.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

fix: always deploy IRM for testnet (#21755)

fix: avoid mutating caller's array via splice in snapshot sync (A-718)

Use chunk + asyncPool to replace splice-based batching in
BrokerCircuitProverFacade. Both getAllCompletedJobs and job retrieval
now use non-mutating patterns with bounded concurrency.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
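
The non-mutating batching can be sketched with a minimal `chunk` helper (a stand-in, not the actual implementation in BrokerCircuitProverFacade):

```typescript
// Minimal stand-in for the splice-free batching described above: chunk()
// slices a copy of the input, leaving the caller's array untouched,
// unlike `array.splice(0, batchSize)` which mutates it in place.
function chunk<T>(items: readonly T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}
```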

chore: update network logs skill (#21785)

.

feat(archiver): validate contract instance addresses before storing

Adds a defense-in-depth check in the archiver's data store updater that
recomputes contract instance addresses from their fields before storing
them, filtering out any instances with mismatched addresses.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

fix: ensure no division by 0 (#21786)

Fixes #21513

feat: support private fork releases via ci-release (#21778)

When REF_NAME has a 'private' semver prerelease (e.g.
v5.0.0-private.20260318), ci-release fetches the matching tag from
aztec-packages-private, creates a worktree, and runs the release from
there. Cache uploads are suppressed to avoid leaking private source
artifacts.

Also locks down ci3-external.yml to use github.token instead of
AZTEC_BOT_GITHUB_TOKEN.

fix: restrict scenario deployments to only nightly

fix(stdlib): zero-pad bufferFromFields when declared length exceeds payload (#21802)

Ensures that `bufferFromFields` always returns a buffer with the length
requested in the first field of the array.

This protects against this method being called with a truncated array,
which could cause a wrong public bytecode commitment to be computed.
Note that this is currently not the case, since this function always
gets called with an array that's exactly
`CONTRACT_CLASS_LOG_SIZE_IN_FIELDS` long, which is greater than the
`MAX_PACKED_PUBLIC_BYTECODE_SIZE_IN_FIELDS`.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
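
The fix can be sketched as follows, under a simplified model in which the first field declares the output byte length and each subsequent field carries a fixed number of payload bytes (`BYTES_PER_FIELD` and `bufferFromFieldsSketch` are illustrative names, not the real API):

```typescript
// Simplified model of the zero-padding fix: the first field declares the
// byte length of the output; if the remaining fields supply fewer bytes
// (a truncated array), the result is zero-padded to the declared length
// instead of coming back short. Field encoding here is illustrative.
const BYTES_PER_FIELD = 31; // payload bytes carried per field (assumed)

function bufferFromFieldsSketch(declaredLength: number, payloadFields: Uint8Array[]): Uint8Array {
  const out = new Uint8Array(declaredLength); // zero-initialized
  let offset = 0;
  for (const field of payloadFields) {
    const chunk = field.subarray(0, Math.min(BYTES_PER_FIELD, declaredLength - offset));
    out.set(chunk, offset);
    offset += chunk.length;
    if (offset >= declaredLength) break;
  }
  return out; // always exactly declaredLength bytes
}
```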

test(protocol-contracts): verify max-size bytecode fits in contract class log (#21818)

Verify that `CONTRACT_CLASS_LOG_SIZE_IN_FIELDS` is large enough to hold
a max-size public bytecode alongside the `ContractClassPublishedEvent`
header fields. If these constants drift, contract class registration
could silently break.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

fix: wire BOT_DA_GAS_LIMIT through helm/terraform for staging-public (#21809)

Bot transactions on staging-public have been skipped for 4+ hours
because the gas estimator returns a DA gas limit of 196,608, which
exceeds the per-block DA gas limit of 117,965. This causes every tx to
be rejected by the sequencer with "Skipping processing of tx due to
block gas limit".

This PR wires the existing `BOT_DA_GAS_LIMIT` and `BOT_L2_GAS_LIMIT` env
vars (already supported by the bot code) through the full deployment
pipeline:

- Helm chart values + configmap template (conditionally rendered)
- Terraform variables (shared across all bot types)
- staging-public.env: sets `BOT_DA_GAS_LIMIT=100000` and
`BOT_L2_GAS_LIMIT=6540000`

The DA gas limit of 100,000 fits within the computed per-block limit of
117,965 (derived from `MAX_PROCESSABLE_DA_GAS_PER_CHECKPOINT=786432` / 8
blocks * 1.2 multiplier). The L2 gas limit of 6,540,000 matches the
protocol constant `MAX_PROCESSABLE_L2_GAS`.

Evidence from sequencer logs:
```
maxBlockGas: {'daGas': 117965, 'l2Gas': 30000000}
txGasLimit:  {'daGas': 196608, 'l2Gas': 6540000}   ← exceeds block DA limit
```

ClaudeBox log: https://claudebox.work/s/8c485ac9c95c297f?run=5
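
The quoted per-block limit follows from the stated constants; a quick check of the arithmetic (the node's exact rounding mode is assumed here):

```typescript
// Per-block DA gas limit derivation from the commit message:
// MAX_PROCESSABLE_DA_GAS_PER_CHECKPOINT / blocks-per-checkpoint * multiplier.
const MAX_PROCESSABLE_DA_GAS_PER_CHECKPOINT = 786432;
const BLOCKS_PER_CHECKPOINT = 8;
const MULTIPLIER = 1.2;

const perBlockDaLimit = Math.round(
  (MAX_PROCESSABLE_DA_GAS_PER_CHECKPOINT / BLOCKS_PER_CHECKPOINT) * MULTIPLIER,
); // 786432 / 8 = 98304; 98304 * 1.2 = 117964.8 -> 117965
```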

fix: remove jest-mock-extended from worker processes + fix parallelize_strict silent failures (#21821)

Fixes two bugs:

1. **Bench test workers crash on startup**:
`p2p_client_testbench_worker.ts` and `proposal_tx_collector_worker.ts`
import `jest-mock-extended`, which depends on `@jest/globals`. Since
workers run as plain Node processes (forked via `child_process.fork()`),
not inside Jest, `@jest/globals` throws immediately and all workers
SIGSEGV. Fix: replace `mock<EpochCache>()` with the existing
`createMockEpochCache()` from test-helpers.

2. **`parallelize_strict` silently swallows failures**: `run_tests | tee
$output` runs `run_tests` in a subshell due to the pipe. Background jobs
started inside that subshell are invisible to the parent shell's `wait
-n`. When a job fails, `exit 1` kills only the subshell; the parent sees
no jobs and exits 0. Fix: use process substitution (`> >(tee)`) so
`run_tests` runs in the current shell and failures propagate.

- Bench test `p2p_client.proposal_tx_collector.bench.test.ts` should no
longer crash on worker startup
- `parallelize_strict` should now properly fail when a benchmark fails

ClaudeBox log: https://claudebox.work/s/6ba298b4e9b52aa5?run=3

fix(archiver): throw on duplicate contract class or instance additions

Replace silent overwrites with explicit errors when adding a contract
class or instance that already exists in the store. This catches
unexpected double-adds that could lead to data loss on rollback.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

fix: skip protocol contract registration if already present

Makes registerProtocolContracts idempotent by checking if the contract
class already exists before adding. This handles node restarts against
a persisted LMDB store.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

refactor: remove broadcasted function events and membership proofs

The PrivateFunctionBroadcastedEvent and UtilityFunctionBroadcastedEvent
were removed from the ContractClassRegistry contract, so all supporting
TypeScript infrastructure is now dead code. This removes the event types,
membership proof creation/validation, archiver data flows, store methods,
and the privateFunctions/utilityFunctions fields from ContractClassPublic.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

fix(archiver): bump db schema version for contract class format change

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

fix: update schema tests to omit privateFunctions from ContractClassPublic assertions

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

fix: sync dateProvider from anvil stdout on every mined block

Anvil logs block timestamps to stdout on each mined block. Parse these
and update the TestDateProvider so it stays in lockstep with the L1
chain, eliminating drift between wall clock and anvil chain time.

Also rename cheatCodes.timestamp() to lastBlockTimestamp() to clarify
it returns the latest block's discrete timestamp, not the current time.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

fix: do not use time from L1

Fixes A-707

fix: floor next L1 slot timestamp with latest block to prevent early proposals

When the dateProvider wall clock is ahead of the latest mined L1 block,
getNextL1SlotTimestamp can return a timestamp in a future L2 slot. The
canProposeAt simulation passes (with time override) but the actual tx
lands in an L1 block still in the previous slot, causing silent rejection.

Floor the computed timestamp with latestBlock.timestamp + slotDuration
to ensure we never target a slot beyond what the next L1 block can reach.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
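
A minimal sketch of the flooring, with names and units assumed rather than taken from the codebase:

```typescript
// Clamp the computed next-slot timestamp so it never targets a slot
// beyond what the next L1 block can actually reach. Names/units assumed.
function floorNextSlotTimestamp(
  computedTimestamp: bigint,    // from the wall-clock dateProvider
  latestBlockTimestamp: bigint, // timestamp of the latest mined L1 block
  slotDuration: bigint,
): bigint {
  const ceiling = latestBlockTimestamp + slotDuration;
  return computedTimestamp < ceiling ? computedTimestamp : ceiling;
}
```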

fix: use correct EthCheatCodes method name in epochs_missed_l1_slot test (#21848)

Fix TypeScript compilation error in the new
`epochs_missed_l1_slot.test.ts` introduced by #21769.

The test calls `eth.timestamp()` but the method on `EthCheatCodes` is
actually `lastBlockTimestamp()`. This caused `yarn tsgo -b
--emitDeclarationOnly` to fail with:
```
error TS2339: Property 'timestamp' does not exist on type 'EthCheatCodes'.
```

Changed `eth.timestamp()` → `eth.lastBlockTimestamp()` on line 104 of
the test file.

ClaudeBox log: https://claudebox.work/s/e62ab0794047aa8c?run=1

feat(p2p): add tx validator for contract class id verification (#21788)

Contract class registration events contain a class ID alongside the
fields needed to recompute it (artifactHash, privateFunctionsRoot,
packed bytecode). This adds a validation to avoid a malicious tx from
registering a class with a mismatched ID if it found a bug in the
registry contract, poisoning the archiver's contract data.

Made `toContractClassPublic()` a simple synchronous conversion with no
validation (symmetric with how `toContractInstance()` works for contract
instances). Validation is done explicitly at each call site that needs
it: the `DataTxValidator` and the archiver's
`updatePublishedContractClasses`.

- **protocol-contracts**: `toContractClassPublic()` is now sync and
returns `ContractClassPublic` without validation or bytecode commitment;
`toContractClassPublicWithBytecodeCommitment()` adds the commitment but
also does not validate
- **stdlib**: Added `TX_ERROR_INCORRECT_CONTRACT_CLASS_ID` and
`TX_ERROR_MALFORMED_CONTRACT_CLASS_LOG` error constants; added elapsed
timing test for `computeContractClassId` with real artifacts
- **p2p**: Extended `DataTxValidator` with explicit contract class ID
verification (recomputes class ID from event fields and compares)
- **p2p (tests)**: Added contract class ID validation tests to
`data_validator.test.ts`; updated factory tests
- **archiver**: `updatePublishedContractClasses` explicitly validates
class IDs and collects bytecode commitments in a single pass;
`addContractClasses` now takes `ContractClassPublicWithCommitment[]`
instead of separate arrays
- **simulator**: Simplified `addContractClassesFromEvents` and callers
from async to sync since `toContractClassPublic()` is no longer async

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
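
The recompute-and-compare pattern reduces to the following sketch; `computeId` is a toy stand-in for the real `computeContractClassId`, and the event shape is approximated:

```typescript
// Recompute-and-compare: rather than trusting the class ID carried in the
// registration event, recompute it from the event's own fields and reject
// on mismatch. computeId here is a toy stand-in for the real hash.
interface ClassRegistrationEvent {
  classId: string;
  artifactHash: string;
  privateFunctionsRoot: string;
  packedBytecodeHash: string;
}

const computeId = (e: ClassRegistrationEvent): string =>
  `id(${e.artifactHash},${e.privateFunctionsRoot},${e.packedBytecodeHash})`;

function validateClassId(event: ClassRegistrationEvent): boolean {
  return computeId(event) === event.classId;
}
```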

feat: batch chonk verifier TS integration

Adds TypeScript integration for the batch chonk verifier C++ service:

- BatchChonkVerifier: TS orchestrator managing bb subprocess, FIFO pipe,
  and proof lifecycle (peer path with batching)
- FifoFrameReader: length-delimited frame reader for named pipes
- TestCircuitVerifier used for both peer and RPC in fake-proof mode
- BBCircuitVerifier + QueuedIVCVerifier for RPC path (real proofs)
- Server splits proofVerifier into peerProofVerifier + rpcProofVerifier

Config changes:
- numConcurrentIVCVerifiers -> bbRpcVerifyConcurrency + bbPeerVerifyBatchSize
- bbIVCConcurrency -> bbBatchVerifyThreads
- QueuedIVCVerifier takes (verifier, concurrency) instead of (config, verifier)

Integration tests and queue robustness tests via bb.js bindings.

chore: set batch verifier concurrency default to 6, wire env vars through terraform

- BB_CHONK_VERIFY_BATCH_CONCURRENCY default 0 (auto) → 6 to leave cores
  for the rest of the node on recommended 8-core machines
- Wire BB_CHONK_VERIFY_MAX_BATCH and BB_CHONK_VERIFY_BATCH_CONCURRENCY
  through terraform variables.tf and main.tf to all 5 node types

fix(sequencer): remove l1 block timestamp check

Was needed due to clock drift between sequencer and anvil, but PR #21829
should now remove the need for that.

fix: use local IVC inputs for batch_verifier bench test (#21857)

The `batch_verifier.bench.test.ts` benchmark (added in #21823) was
downloading pinned IVC inputs from S3 at test runtime, but bench tests
run in Docker containers with `--net=none` (no network), causing all 7
tests to fail.

Instead of adding network access, this uses the pre-generated
`example-app-ivc-inputs-out` folder that's already built by
`end-to-end/bootstrap.sh build_bench` — the same pattern used by the IVC
flow benchmarks in `barretenberg/cpp/scripts/ci_benchmark_ivc_flows.sh`.

- `batch_verifier.bench.test.ts`: Replace S3 download with local
`../end-to-end/example-app-ivc-inputs-out` path
- `yarn-project/bootstrap.sh`: Keep original bench command (no `NET=1`
needed)

http://ci.aztec-labs.com/fafcdc0ea9a5d52b

fix(p2p): centralize gossipsub penalization and fix inconsistencies (#21863)

Gossipsub message validation had double-penalization paths: inner
validation functions called `penalizePeer` directly, and the outer
`validateReceivedMessage` wrapper could penalize again on errors.
Attestation cap exceeded was also inconsistently handled (ignored
instead of rejected like proposals).

Centralized all gossipsub penalization into `validateReceivedMessage` by
adding a `severity` field to the `Reject` variant of
`ReceivedMessageValidationResult`. Inner functions now return severity
instead of calling `penalizePeer` directly. Added `tryDeserialize`
helper for graceful deserialization failure handling.

- **p2p (libp2p_service)**: Centralized penalization in
`validateReceivedMessage`, removed direct `penalizePeer` calls from
`handleGossipedTx`, `validateAndStoreBlockProposal`,
`validateAndStoreCheckpointProposal`, and
`validateAndStoreCheckpointAttestation`. Changed attestation cap
exceeded from `Ignore` to `Reject` with `HighToleranceError`.

Fixes A-705

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
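
The `severity`-carrying result can be modeled as a discriminated union; names here approximate the commit message (only `HighToleranceError` is confirmed by it, the rest are assumptions):

```typescript
// Inner validators return a result describing what to do; only the outer
// validateReceivedMessage wrapper applies penalties, keyed off `severity`.
type PeerErrorSeverity = "LowToleranceError" | "MidToleranceError" | "HighToleranceError";

type ReceivedMessageValidationResult =
  | { result: "accept" }
  | { result: "ignore" }
  | { result: "reject"; severity: PeerErrorSeverity };

// Single choke point for penalization: returns the severity to apply, if any.
function penaltyFor(r: ReceivedMessageValidationResult): PeerErrorSeverity | undefined {
  return r.result === "reject" ? r.severity : undefined;
}
```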

chore: publish GitHub releases to AztecProtocol/barretenberg (#21775)

- Move automatic GitHub release creation from
`AztecProtocol/aztec-packages` to `AztecProtocol/barretenberg` — bb
artifacts are the only reason we need programmatic releases
- Update all bb artifact download URLs (`barretenberg-rs build.rs`, rust
bootstrap) to point to `AztecProtocol/barretenberg` releases
- `bbup` tries `AztecProtocol/barretenberg` first, falls back to
`aztec-packages` indefinitely
- Users can still create `aztec-packages` releases manually via the
GitHub "Create a release" button when needed

---------

Co-authored-by: ludamad <adam.domurad@gmail.com>

refactor: revert remove assert_bounded_vec_trimmed (#21758)

feat!: no_from (#21716)

Getting rid of the tyranny of the `Account` entrypoint, and the terrible
sentinel value of `AztecAddress.ZERO` in the `from` parameter.

Now, when a tx is specified as being sent `from: NO_FROM`, the wallet
completely bypasses the account contract. It doesn't even use the
`MulticallEntrypoint` contract, it'll just execute a single call using
`DefaultEntrypoint`. Since we can do wrapping at the app level, this
means ANYTHING goes. Want to wrap 10 calls on a special sauce multicall?
Go ahead. App-sponsored FPC with custom logic that must be the first on
the chain? Knock yourself out!

Even better news: this is thoroughly tested in our codebase thanks to
account contract self-deployments. They're a great example of "I don't
want to use an account contract as entrypoint" flow.

As an extra side effect, this completely deshrines the
`MulticallEntrypoint` protocol contract from the wallet (we only have
the convenience now of it being already registered in every single
wallet)

fix(bot): use NO_FROM instead of AztecAddress.ZERO in deploy simulate (#21402)

The `feat!: no_from` PR (#21716) changed how `AztecAddress.ZERO` is
handled in the embedded wallet's simulate path.
`BotFactory.setupAccountWithPrivateKey` was passing `AztecAddress.ZERO`
as the `from` address in the `simulate` call, which now causes the
wallet to attempt looking up account data for the zero address —
resulting in `"Account 0x000...000 does not exist on this wallet"`.

Fixed by using `NO_FROM` (already imported and used in the adjacent
`send` call) which is the correct API for opting out of account contract
mediation during simulation.

- `e2e_bot › bridge resume › reuses prior bridge claims`
- `e2e_bot › bridge resume › does not reuse prior bridge claims if
recipient address changes`
- `e2e_bot › setup via bridging funds cross-chain › creates bot after
inbox drift`

- All 10 e2e_bot tests pass locally (previously 3 were failing)

ClaudeBox log: https://claudebox.work/s/11a8e25f0aefa248?run=1

feat!: make isContractInitialized a tri-state enum (#21754)

feat: sync poseidon in the browser (#21833)

#20826 completely
tanks performance in the browser. PXE does a lot of hashing and the
`SharedArrayBuffer` comms overhead is way too much. This PR reverts to
the old behavior only in the browser.

fix(aztec-up): add sensible defaults to installer y/n prompts (#21824)

feat: sync poseidon browser (#21851)

Broadens the check to ensure the sync version is used across the
browser, service workers, web workers and extension contexts.

hotfix: release workflow location

feat(acir_formal_proofs): add claude skill to automate re-verification

formatting

fix(ci): notify on nightly bb debug build

chore: translator checker and builder cleanup (#21094)

Resolves AztecProtocol/barretenberg#1373 and
AztecProtocol/barretenberg#1367

Cleans up `TranslatorCircuitBuilder` and `TranslatorCircuitChecker`:

1. **Checker uses relations**: replaces the hand-rolled check() with the
same relation-based pattern used by ultra circuit checker, populates
`TranslatorFlavor::AllValues` per row and calls `accumulate` for
`OpcodeConstraint`, `AccumulatorTransfer`, `Decomposition`, and
`NonNativeField` relations. `PermutationRelation` and
`DeltaRangeConstraintRelation` are intentionally skipped (require
grand-product polynomials unavailable at circuit-builder level).
2. **Builder**: extracts 7 anonymous lambdas
(`split_limb_into_microlimbs`, `split_wide_limb_into_2_limbs`,
`uint512_to_limbs`, `check_binary_limbs_maximum_values`,
`check_micro_limbs_maximum_values`, `lay_limbs_in_row`,
`process_random_op`) into named private methods.
3. **Checker cleanup**: removes the intermediate `RelationInputs` struct
and `compute_relation_inputs_limbs` helper.

chore: better parallelisation of translator wire instantiation  (#21100)

resolves AztecProtocol/barretenberg#1383

While filling the wire polynomials in translator, we currently process
wires sequentially (and parallelise filling an individual wire).
However, since there are only $2^{13}$ values to be filled, it didn't make
sense to parallelise for a given wire.

Instead, we parallelise across wires: thread $i$ will process wire $i$.
This makes the proving key construction faster by ~30% (but overall
translator proving only improves by a mere ~2% because proving key
construction forms a small part of total proving.)

fix: suppress debug field assertions in high-degree attack tests

The HighDegreeAttackAccept and HighDegreeAttackReject tests in
shplemini.test.cpp intermittently crash in debug builds due to the
coarse-form field assertion (val < twice_modulus) firing during
intermediate arithmetic with deliberately oversized polynomials.

These tests simulate adversarial prover behavior with polynomials whose
degree exceeds what the Gemini folding protocol expects. During
processing, certain random input combinations produce intermediate field
values that violate the [0, 2p) coarse-form invariant checked by
assert_coarse_form(). Since field operations are noexcept, the thrown
exception triggers std::terminate.

Fix: use BB_DISABLE_ASSERTS() to downgrade assertions to warnings for
these adversarial tests, allowing verification to complete and properly
check that the attack is rejected (pairing check fails / IPA returns
false).

This was the root cause of the nightly barretenberg debug build failure
that started on 2026-03-20.

fix: remove hardcoded AVM_TRANSPILER_LIB from debug CMake preset

chore: setup HA cleanup task

fix(spartan): export AZTEC_ADMIN_API_KEY in benchmark functions

feat: auto-generate fuzzer manifest from CMake build metadata

Add CMake-driven fuzzer registration and manifest generation. Fuzzers
register via barretenberg_module() and a post-build step produces
per-preset fuzzer_manifest.json files. A merge script combines them
into a unified manifest for the container. The manifest is attached
to the container image as an OCI artifact via oras so the orchestrator
can read it without pulling or running the container.
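The merge step amounts to de-duplicating per-preset JSON manifests into one list. A minimal sketch, assuming a per-fuzzer record shape (the field names here are illustrative, not the actual `fuzzer_manifest.json` schema):

```typescript
// Hypothetical manifest merge: combine per-preset fuzzer_manifest.json
// contents into one unified list, de-duplicating entries that appear in
// more than one input. The entry shape below is an assumption.
interface FuzzerEntry {
  name: string;   // fuzzer target name
  preset: string; // CMake preset it was built under
  binary: string; // path to the fuzzer binary inside the container
}

function mergeManifests(manifests: FuzzerEntry[][]): FuzzerEntry[] {
  const byKey = new Map<string, FuzzerEntry>();
  for (const manifest of manifests) {
    for (const entry of manifest) {
      // Later occurrences of the same preset/name pair overwrite earlier ones.
      byKey.set(`${entry.preset}/${entry.name}`, entry);
    }
  }
  return [...byKey.values()];
}
```

The unified output is what gets attached to the image as an OCI artifact.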

fix: consolidate blob source test into single summary log with supernode detection

fix: don't lose precision when sorting bigints
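The classic failure mode behind a fix like this is a comparator that round-trips bigints through `Number`. A minimal sketch of the bug and a safe comparator (illustrative, not the repo's actual code):

```typescript
// Buggy comparator (illustrative): converting each bigint to Number loses
// precision above 2^53, so distinct large values compare as equal and a
// stable sort leaves them in their original (possibly wrong) order.
const buggyCompare = (a: bigint, b: bigint): number => Number(a) - Number(b);

// Safe comparator: compare the bigints directly and return only the sign.
const safeCompare = (a: bigint, b: bigint): number => (a < b ? -1 : a > b ? 1 : 0);

const xs = [2n ** 64n + 1n, 2n ** 64n, 1n];
const sorted = [...xs].sort(safeCompare);
// buggyCompare(2n ** 64n + 1n, 2n ** 64n) evaluates to 0: both values
// round to the same double, so the sort would never reorder them.
```

Returning the sign explicitly also sidesteps the fact that `Array.prototype.sort` expects a `number`, which a raw `a - b` on bigints would not produce.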

docs: add immutables via salt documentation for defi-wonderland macro

Add documentation for the aztec-immutables-macro library that encodes
immutable values into the contract deployment salt, eliminating the need
for initialization transactions. Include in both current developer docs
and v4.1.0-rc.2 versioned docs.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

chore(docs): fix broken anchor tags across versioned docs

Fix broken markdown anchor links in versioned docs (v4.0.0-devnet.2-patch.1,
v4.1.0-rc.2, v5.0.0-nightly.20260317) and source docs. Also adds missing
4.1.0-rc.2 version heading in migration notes.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

fix: pin typescript and harden lockfile check in docs examples CI

Two fixes for docs examples CI failures affecting all PRs:

1. Pin typescript to ^5.3.3 (matching yarn-project) instead of
   unpinned latest. TypeScript 6.0 changed JSON import type inference,
   making @ts-expect-error directives unused (TS2578 errors).

2. Change yarn.lock emptiness check to use git state (git show HEAD:...)
   instead of filesystem state (-s). When parallel kills a job mid-yarn
   add, the cleanup trap never runs, leaving lockfiles dirty on disk.
   The retry then hits a false positive. Now checks committed state and
   resets filesystem before validation.

fix: update backport skill for ClaudeBox environment

Fixes the issue where ClaudeBox backport PRs include unrelated commits (e.g., #21843).

**Root cause**: ClaudeBox starts at `origin/next`, but backport work needs to happen on the staging branch (`backport-to-$BRANCH-staging`). Without explicit checkout of the staging branch, `create_pr` pushes from the wrong HEAD, leaking unrelated commits.

Co-authored-by: ludamad <adam.domurad@gmail.com>

fix(docs): integrate recursive_verification into examples execution pipeline

- Fix broken import path in `generate_data.ts` (`../circuits/...` → `../../../circuits/...`)
- Add `setup` field support to `config.yaml` so the runner executes prerequisite scripts (e.g., proof generation) before `index.ts`
- Add `recursive_verification` to the default executed examples list in `run.sh`
- Use `AZTEC_NODE_URL` env var in `index.ts` for Docker Compose compatibility
- Include `scripts/**/*.ts` in tsconfig template for validation type-checking coverage
- Clean up generated `data.json` in cleanup step

- [ ] Verify `docs/examples/bootstrap.sh` validation step still passes (tsconfig now includes `scripts/`)
- [ ] Verify `recursive_verification` setup step runs `generate_data.ts` successfully in Docker Compose
- [ ] Verify `recursive_verification` example executes end-to-end against local network
- [ ] Confirm existing examples are unaffected (no `setup` field = no change in behavior)

> Note: proof generation in `generate_data.ts` is compute-intensive (UltraHonk via bb.js WASM). CI time impact should be validated.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Josh Crites <jc@joshcrites.com>