
chore: Accumulated backports to v4#20752

Merged
alexghr merged 32 commits into v4 from backport-to-v4-staging
Feb 25, 2026
Conversation

AztecBot (Collaborator) commented Feb 23, 2026

BEGIN_COMMIT_OVERRIDE
chore: fix worker wallet in jest (#20725)
fix: pxe native prover log level (#20724)
chore: encode LOG_LEVEL (#20678)
chore: fix worker wallet log level (#20754)
chore: Removed dead code (#20740)
fix: getVotes return empty instead of stale data (#20756)
fix: pass log level to AVM simulator (#20762)
chore: double Node.js libuv thread count (#20709)
fix: underflow in snapshot synch (#20780)
fix: separate rejected and aborted proving jobs (#20777)
fix: evicted transactions could reappear after a node restart (#20773)
fix: (A-575) prevent checkpoint event spam in L2BlockStream on restart (#20791)
fix: charge 3.6M for epoch verification (#20765)
fix(ethereum): remove viem NonceManager to fix nonce gap after failed sends (#20819)
fix: limit number of threads when verifying server-side proofs (#20818)
chore: Comment update (#20820)
feat: add support for signed integers on contract functions (#20784)
feat: expose blockheader getters (#20790)
docs: minor clarification (#20788)
fix: Use async poseidon (#20826)
chore: update rollup config for testnet and mainnet (#20827)
fix: Reduce tx hash conversions inside tx pool (#20829)
refactor: Consolidate transaction validators (#20631)
feat: use native crypto to compute p2p message ID (#20846)
fix(p2p): wait for GossipSub mesh formation before sending txs in e2e tests (#20626)
chore: send alert to slack on CI3 failures (#20860)
END_COMMIT_OVERRIDE

This PR removes the old `getTxsByHash` call from `P2PClient` as it is no longer used.
Fix `getVotes` in `TallySlashingProposer.sol` to return empty bytes when `_index >= voteCount` (preventing stale per-index reads from reused circular storage), add a regression test in `TallySlashingProposer.t.sol`, and clarify the related staleness comments.
Increases the libuv thread pool size to improve I/O performance in yarn-project. The AVM simulator and liblmdb can each occupy multiple threads, so the default count of 4 seems low.
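As a hedged sketch of what "doubling the libuv thread count" means (the exact mechanism in the PR may differ): libuv sizes its worker pool from the `UV_THREADPOOL_SIZE` environment variable, which defaults to 4 and must be set before the pool is first touched by `fs`, `dns`, `crypto`, or `zlib` work.

```typescript
// Illustrative only: double libuv's default worker pool. This must run
// before the process schedules any thread-pool work (fs, dns, crypto, zlib),
// because libuv reads UV_THREADPOOL_SIZE once, lazily, on first use.
const DEFAULT_LIBUV_POOL = 4;
process.env.UV_THREADPOOL_SIZE = String(2 * DEFAULT_LIBUV_POOL);
```

Setting the variable in the launch environment (rather than in-process) avoids any ordering concerns.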
This PR tracks rejected and aborted jobs as separate counters, enabling us to set up alerts when any proofs fail. It has to be backported; otherwise we'd also get alerts when jobs are merely cancelled.
## Summary

- Fix a bug in `TxPoolV2.addPendingTxs` where evicted transactions could reappear after a node restart ("come back to life")
- Move all throwable I/O (`buildTxMetaData`, `getMinedBlockId`, `validateMeta`) out of the `transactionAsync` callback into a pre-computation phase, so that if any I/O fails, no in-memory or DB mutations have occurred

## Background

`addPendingTxs` processes a batch of transactions inside a single LMDB `transactionAsync`. When tx N causes nullifier-conflict evictions of earlier pool txs and then tx N+1 triggers a throw (e.g. `getTxEffect` I/O failure or validator crash), the LMDB transaction rolls back — but in-memory mutations from the eviction persist. On restart the pool rehydrates from DB where the soft-delete marker was never committed, and the evicted tx loads back into the pool.

Other pool methods (`prepareForSlot`, `handlePrunedBlocks`, `handleFinalizedBlock`, etc.) are not affected because they either perform throwable I/O *before* any deletions, or wrap throwable operations in try/catch via the EvictionManager.

## Approach (TDD)

The fix was developed test-first. Two failing tests were written that reproduce the inconsistency — one for each throwable path (`getMinedBlockId` and `validateMeta`) — by verifying that `getTxStatus` returns the same value before and after a pool restart. With the bug present, the status diverges (`'deleted'` in memory vs `'pending'` after restart). The implementation was then applied to make both tests pass.

## Implementation

`addPendingTxs` is now split into two phases:

1. **Pre-computation** (outside the transaction): For each tx, compute `buildTxMetaData`, `getMinedBlockId`, and `validateMeta`. If any of these throw, the call fails before any mutations happen.
2. **Transaction** (inside `transactionAsync`): Uses only pre-computed results, in-memory index reads, and buffered DB writes.

Supporting changes:
- `#addTx` accepts an optional `precomputedMeta` parameter (backward-compatible — other call sites unchanged)
- `#tryAddRegularPendingTx` receives pre-computed metadata directly instead of calling `buildTxMetaData`/`validateMeta`; validation rejection is handled by the caller
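The two-phase split above can be sketched as follows. All names here (`Meta`, `buildMeta`, `runTxn`) are illustrative stand-ins, not the actual `TxPoolV2` API:

```typescript
// Hedged sketch of the two-phase pattern: throwable I/O first, mutations second.
type Meta = { hash: string; valid: boolean };

async function addPendingTxs(
  txs: string[],
  buildMeta: (tx: string) => Promise<Meta>, // may throw (stands in for buildTxMetaData etc.)
  runTxn: (fn: () => void) => Promise<void>, // stands in for transactionAsync
  pool: Map<string, Meta>, // stands in for in-memory + DB state
): Promise<void> {
  // Phase 1 (outside the transaction): all throwable I/O happens here.
  // If anything throws, neither memory nor DB has been mutated yet.
  const metas: Meta[] = [];
  for (const tx of txs) {
    metas.push(await buildMeta(tx));
  }

  // Phase 2 (inside the transaction): only pre-computed results are used,
  // so a rollback can no longer leave memory and DB out of sync.
  await runTxn(() => {
    for (const meta of metas) {
      if (meta.valid) pool.set(meta.hash, meta);
    }
  });
}
```

The key property: a throw during phase 1 leaves the pool exactly as it was, which is what makes the "evicted tx comes back to life" scenario impossible.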

## Test plan

- [x] Two new persistence consistency tests pass (were failing before the fix)
- [x] All 210 `tx_pool_v2.test.ts` tests pass
- [x] Full p2p test suite passes (1292 tests)
PhilWindle and others added 2 commits February 24, 2026 17:28
This PR simply updates a comment around contract update delays.
Closes #20746.

I added both Noir and TS tests to make sure this works as expected. The TS tests are currently failing because we cannot create an `Fr` for a negative number, but I don't know enough about how the encoding works to fix it myself. @sirasistant?

I only added tests for `i64`, as `i128` is not yet supported by Noir (noir-lang/noir#7591).
Just some clarifying notes that might help explain this.
This PR switches the poseidon hashing function to use the async barretenberg API. Profiling shows that generating a transaction hash takes ~22ms, so the node needs to delegate this work to barretenberg asynchronously.
## Summary

- **Mana target**: 75M for both testnet and mainnet (was 150M testnet, 0 mainnet)
- **Proving cost per mana**: 25M for both testnet and mainnet (was 100 testnet, 0 mainnet)
- **Committee size**: 48 for mainnet (was 24), testnet already 48
- **Local ejection threshold**: 1.62e23 for mainnet (was 1.96e23)
- **Slash amounts**: 100000e18 for testnet small/medium/large (was 10e18/20e18/50e18)
- **Duplicate proposal/attestation penalties**: 0 for both testnet and mainnet
- **Checkpoint reward**: 500 tokens (was 400)
- **Governance execution delay**: 30 days on mainnet (was 7 days)

## Test plan
- [ ] Verify generated configs match after `yarn generate`
- [ ] Confirm testnet deployment picks up new values
- [ ] Confirm mainnet deployment picks up new values
PhilWindle and others added 3 commits February 25, 2026 11:29
This PR additionally stores transaction hashes as `bigint` in
`TxMetaData` and in priority buckets, significantly reducing the number
of serde operations in the tx pool.
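A minimal sketch of the idea (these helpers are illustrative, not the actual `TxMetaData` code): keeping a 32-byte tx hash as a native `bigint` means map lookups and priority-bucket keys need no repeated buffer/hex serde.

```typescript
// Illustrative conversions between a 32-byte hash buffer and a bigint key.
function hashToBigInt(hash: Buffer): bigint {
  return BigInt('0x' + hash.toString('hex'));
}

function bigIntToHash(value: bigint): Buffer {
  // padStart keeps leading zero bytes so the round trip is lossless.
  return Buffer.from(value.toString(16).padStart(64, '0'), 'hex');
}
```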
- Split gossiped transaction validation into two stages with a pool
pre-check in between, so expensive proof verification is skipped for
transactions the pool would reject (duplicates, low priority, pool full)
- Consolidated all transaction validator factories into a single
well-documented `factory.ts` with clear entry points for each path:
gossip (two-stage), RPC, req/resp, block proposals, block building, and
pending pool migration
- Simplified `canAddPendingTx` by removing validation from it (now only
checks pool state, not tx validity), narrowing its return type from
`accepted | ignored | rejected` to `accepted | ignored`
- Replaced the monolithic `validatePropagatedTx` and
`createMessageValidators` methods with explicit
`createFirstStageMessageValidators` and
`createSecondStageMessageValidators`
- Added separate `validateTxsReceivedInBlockProposal` path using
well-formedness-only checks (metadata, size, data, proof) since block
proposal txs must be accepted for re-execution
- Documented the full validation strategy in a README with a coverage
matrix showing which validators run at each entry point

**Before:** Deserialize -> run ALL validators (including proof) -> add
to pool
**After:** Deserialize -> Stage 1 (fast checks) -> pool pre-check
(`canAddPendingTx`) -> Stage 2 (proof verification) -> add to pool

This avoids wasting resources on proof verification for transactions the
pool would reject anyway.
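The staged flow can be sketched like this; the function names are hypothetical, not the real `factory.ts` API:

```typescript
// Illustrative two-stage gossip validation: cheap checks and the pool
// pre-check run before expensive proof verification.
type PreCheck = 'accepted' | 'ignored';

async function validateGossipedTx(
  tx: Uint8Array,
  fastChecks: (tx: Uint8Array) => boolean, // stage 1: metadata, size, data
  poolPreCheck: (tx: Uint8Array) => PreCheck, // duplicate / priority / capacity
  verifyProof: (tx: Uint8Array) => Promise<boolean>, // stage 2: expensive
): Promise<'accept' | 'drop'> {
  if (!fastChecks(tx)) return 'drop'; // cheap rejects first
  if (poolPreCheck(tx) === 'ignored') return 'drop'; // proof work skipped entirely
  if (!(await verifyProof(tx))) return 'drop';
  return 'accept';
}
```

The point of the ordering is that `verifyProof` is never invoked for a tx the pool would ignore anyway.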

- Unit tests for all factory functions verify correct validator
composition and severity assignments
- New libp2p service tests cover: pool pre-check short-circuiting proof
verification, stage ordering guarantees, second-stage failure handling,
severity propagation, no peer penalty on pool ignore, and full happy
path
- Existing pool tests updated (removed `canAddPendingTx` rejected case
that no longer applies)
- All existing validator unit tests continue to pass
@alexghr alexghr enabled auto-merge (squash) February 25, 2026 11:55
Replace the aztec/foundation sha256 with native implementations. The native sha256 implementations are orders of magnitude faster than hash.js (which is not surprising). I used web-crypto's `subtle.digest` since it's async.

Fix A-583

<details>
  <summary> Benchmark run on mainframe</summary>

| Function | CONCURRENCY | Size (KB) | Avg (ms) | P50 (ms) | P99 (ms) | TOTAL (ms) |
|----------|---|-----------|----------|----------|----------|------------|
| hashJs.sha256 x20 | 1 | 1 | 2.65 | 1 | 34 | 39.544517 |
| crypto.createHash x20 | 1 | 1 | 1 | 1 | 1 | 0.520618 |
| globalThis.crypto.subtle.digest x20 | 1 | 1 | 1.75 | 1 | 16 | 18.246695 |
| hashJs.sha256 x20 | 4 | 1 | 1 | 1 | 1 | 1.176059 |
| crypto.createHash x20 | 4 | 1 | 1 | 1 | 1 | 0.261084 |
| globalThis.crypto.subtle.digest x20 | 4 | 1 | 1 | 1 | 1 | 0.734331 |
| hashJs.sha256 x20 | 1 | 64 | 2.25 | 2 | 3 | 56.111665 |
| crypto.createHash x20 | 1 | 64 | 1 | 1 | 1 | 0.982696 |
| globalThis.crypto.subtle.digest x20 | 1 | 64 | 1 | 1 | 1 | 1.911511 |
| hashJs.sha256 x20 | 4 | 64 | 2.25 | 2 | 7 | 58.449153 |
| crypto.createHash x20 | 4 | 64 | 1 | 1 | 1 | 1.031626 |
| globalThis.crypto.subtle.digest x20 | 4 | 64 | 1 | 1 | 1 | 0.994706 |
| hashJs.sha256 x20 | 1 | 512 | 22.7 | 21 | 27 | 464.974449 |
| crypto.createHash x20 | 1 | 512 | 1 | 1 | 1 | 6.18104 |
| globalThis.crypto.subtle.digest x20 | 1 | 512 | 1 | 1 | 1 | 7.551702 |
| hashJs.sha256 x20 | 4 | 512 | 22.5 | 21 | 34 | 462.867755 |
| crypto.createHash x20 | 4 | 512 | 1 | 1 | 1 | 6.185 |
| globalThis.crypto.subtle.digest x20 | 4 | 512 | 1 | 1 | 1 | 3.335444 |
| hashJs.sha256 x20 | 1 | 3072 | 162.55 | 153 | 254 | 3262.797195 |
| crypto.createHash x20 | 1 | 3072 | 1.05 | 1 | 2 | 37.506365 |
| globalThis.crypto.subtle.digest x20 | 1 | 3072 | 2.05 | 2 | 3 | 45.949082 |
| hashJs.sha256 x20 | 4 | 3072 | 158.65 | 153 | 255 | 3182.793907 |
| crypto.createHash x20 | 4 | 3072 | 1 | 1 | 1 | 35.97431 |
| globalThis.crypto.subtle.digest x20 | 4 | 3072 | 3.1 | 2 | 8 | 22.177998 |
| hashJs.sha256 x20 | 1 | 10240 | 581.35 | 561 | 684 | 11641.084243 |
| crypto.createHash x20 | 1 | 10240 | 5.15 | 5 | 6 | 119.489467 |
| globalThis.crypto.subtle.digest x20 | 1 | 10240 | 7.55 | 7 | 13 | 161.12074 |
| hashJs.sha256 x20 | 4 | 10240 | 578.5 | 561 | 655 | 11581.108637 |
| crypto.createHash x20 | 4 | 10240 | 5.1 | 5 | 6 | 119.847924 |
| globalThis.crypto.subtle.digest x20 | 4 | 10240 | 13.15 | 11 | 27 | 85.079143 |

</details>
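For reference, the async Web Crypto digest benchmarked above looks like this in Node:

```typescript
import { webcrypto } from 'node:crypto';

// subtle.digest returns a Promise, so large payloads don't block the event
// loop the way a synchronous pure-JS hash (hash.js) does.
async function sha256(data: Uint8Array): Promise<Uint8Array> {
  const digest = await webcrypto.subtle.digest('SHA-256', data);
  return new Uint8Array(digest);
}
```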
alexghr and others added 4 commits February 25, 2026 13:54
… tests (#20626)

- Fixes flaky `e2e_p2p_network > should rollup txs from all peers` test
by waiting for GossipSub mesh formation (not just peer connections)
before sending transactions
- Adds `getGossipMeshPeerCount` method through the P2P stack
(`P2PService` → `P2PClient` → test helpers)
- Enhances `waitForP2PMeshConnectivity` to wait for each node to have at
least 1 mesh peer on the tx topic

GossipSub requires heartbeat cycles (~700ms each) after peer connections
to form the actual message-routing mesh. The test was sending
transactions immediately after connections were established but before
the mesh formed. With `allowPublishToZeroTopicPeers: true`, the publish
silently succeeded with 0 recipients, and the tx expired from the
gossipsub cache before the mesh was ready.
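The mesh wait can be sketched as a simple poll loop; `getMeshPeerCount` here is an assumed accessor standing in for the new `getGossipMeshPeerCount`, not the actual API:

```typescript
// Hypothetical sketch: poll until the node reports at least minPeers mesh
// peers on the topic, since peer connections alone don't imply routability.
async function waitForMesh(
  getMeshPeerCount: () => number,
  minPeers = 1,
  timeoutMs = 30_000,
  pollMs = 500,
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (getMeshPeerCount() < minPeers) {
    if (Date.now() >= deadline) {
      throw new Error(`timed out waiting for ${minPeers} gossip mesh peer(s)`);
    }
    await new Promise(resolve => setTimeout(resolve, pollMs));
  }
}
```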

- Build passes (`yarn build`)
- The flaky test `e2e_p2p/gossip_network.test.ts` should no longer time
out waiting for transactions to propagate

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Warp to the next slot when there are pending txs so the sequencer can start building blocks with those txs.
This speeds up the starting time of the sandbox, which deploys some testing accounts by default. It used to take ~50s to deploy an account, and is reduced to ~5s with this change. Users' txs will also be mined faster.

Co-authored-by: Leila Wang <leizciw@gmail.com>
@alexghr alexghr merged commit fdd0d0d into v4 Feb 25, 2026
11 checks passed
@alexghr alexghr deleted the backport-to-v4-staging branch February 25, 2026 15:55