
feat(p2p): Integrate TxPoolV2 across codebase#20172

Merged
alexghr merged 15 commits into merge-train/spartan from pw/tx-pool-v2-integration on Feb 10, 2026

Conversation

@spalladino (Contributor)

Summary

Migrates all consumers from TxPool to TxPoolV2, the new event-driven transaction pool implementation.

Key API Changes

| Old Method | New Method | Notes |
|---|---|---|
| `addTxs` | `addPendingTxs` | Returns `AddTxsResult` with accepted/ignored/rejected |
| `markAsMined` | `handleMinedBlock` | Takes full `L2Block` |
| `markMinedAsPending` | `handlePrunedBlocks` | Takes `L2BlockId` |
| `markTxsAsNonEvictable` | `protectTxs` | Requires `BlockHeader` for slot-based protection |
| `clearNonEvictableTxs` | `prepareForSlot` | Slot-based protection expiry |
| `deleteTxs` | `handleFailedExecution` / `handleFinalizedBlock` | Context-specific deletion |
| — | `start()` | New lifecycle method, must be called before use |
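
The call-site change is mostly mechanical; as a hedged sketch, a migrated caller can now inspect the partitioned outcome instead of discarding it (the exact `AddTxsResult` field shapes beyond accepted/ignored/rejected are assumptions here, not the real p2p types):

```typescript
// Assumed shape of AddTxsResult; the real type lives in the p2p package
// and may carry richer rejection information.
interface AddTxsResult {
  accepted: string[];
  ignored: string[];
  rejected: { txHash: string; reason: string }[];
}

// Old consumers called `await txPool.addTxs(txs)` and discarded the result.
// New consumers can log or act on the partitioned outcome:
function summarizeAddResult(result: AddTxsResult): string {
  return `accepted=${result.accepted.length} ignored=${result.ignored.length} rejected=${result.rejected.length}`;
}
```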

Integration Points

P2P Client (p2p_client.ts)

  • Block stream handlers now use pool event methods:
    • handleLatestL2Blocks → handleMinedBlock per block
    • handleFinalizedL2Blocks → handleFinalizedBlock per block
    • handlePruneL2Blocks → handlePrunedBlocks with L2BlockId
  • markTxsAsNonEvictable now requires BlockHeader for slot-based protection
  • getTxStatus maps 'protected' → 'pending' for external API compatibility
  • getTxs('all') combines pending + mined hashes (no getAllTxs in V2)
  • Pool started/stopped with client lifecycle
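
As a rough sketch (interface names simplified; this is not the actual p2p_client.ts code), the handler mapping above amounts to fanning each block of the stream into the corresponding pool event:

```typescript
// Simplified stand-ins for the real block and pool types.
type L2BlockId = { number: number; hash: string };
interface Block { id: L2BlockId }
interface PoolEvents {
  handleMinedBlock(block: Block): Promise<void>;
  handleFinalizedBlock(block: Block): Promise<void>;
  handlePrunedBlocks(latestSurviving: L2BlockId): Promise<void>;
}

// handleLatestL2Blocks → one handleMinedBlock call per block, in stream order.
async function onLatestL2Blocks(pool: PoolEvents, blocks: Block[]): Promise<void> {
  for (const block of blocks) {
    await pool.handleMinedBlock(block);
  }
}
```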

Factory (factory.ts)

  • Creates AggregateTxValidator for pending tx validation (without proof verification)
  • Instantiates AztecKVTxPoolV2 with dependencies:
    • l2BlockSource (archiver)
    • worldStateSynchronizer
    • pendingTxValidator

Libp2p Service (libp2p_service.ts)

  • Block proposal handler: protectTxs(txHashes, block.blockHeader)
  • Checkpoint proposal handler: protectTxs(txHashes, checkpoint.lastBlock.blockHeader)

Services

  • TxProvider: addPendingTxs for proposal txs
  • TxCollectionSink: addPendingTxs for gossip txs
  • BlockTxsHandler: Type change only (query methods unchanged)

Sequencer (sequencer.ts)

  • TODO added for prepareForSlot at slot boundaries

TODOs for Follow-up

  • TODO(pw/tx-pool): Refactor validator creation into TxValidatorFactory
  • TODO(pw/tx-pool): Wire prepareForSlot calls at slot boundaries
  • TODO(pw/tx-pool): Add context on expected tx state when adding txs

🤖 Generated with Claude Code

@spalladino spalladino added the ci-no-fail-fast Sets NO_FAIL_FAST in the CI so the run is not aborted on the first failure label Feb 4, 2026
@spalladino (Contributor Author)

TxPoolV2Impl Refactoring Plan

Summary

Refactor TxPoolV2Impl (1265 lines) to reduce duplicate code, improve readability, remove low-value private methods, and extract index management into a separate class.


Codebase Learnings

Architecture Overview

The transaction pool consists of two main classes:

  • AztecKVTxPoolV2 (tx_pool_v2.ts, 210 lines): Thin wrapper that manages a SerialQueue for thread safety, telemetry/metrics, and lifecycle. Delegates all operations to TxPoolV2Impl.
  • TxPoolV2Impl (tx_pool_v2_impl.ts, 1265 lines): Contains all transaction pool logic including state management, validation, eviction, and persistence.

Transaction State Machine

Transactions move through states:

  • pending: Awaiting inclusion in a block (indexed for queries and conflict detection)
  • protected: Being considered for a block proposal (removed from pending indices)
  • mined: Included in a block (tracks minedL2BlockId)

State is derived from:

  • minedL2BlockId field on TxMetaData (mined if set)
  • #protectedTransactions map (protected if present and not mined)
  • Otherwise pending
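
That derivation can be written as a tiny pure function — a sketch over simplified types, since the real TxMetaData carries more fields:

```typescript
type TxState = 'pending' | 'protected' | 'mined';
interface TxMetaData { txHash: string; minedL2BlockId?: string }

// State is never stored directly: it falls out of the mined field and the
// protection map, with "mined" taking precedence over "protected".
function getTxState(meta: TxMetaData, protectedTxs: Map<string, bigint>): TxState {
  if (meta.minedL2BlockId !== undefined) return 'mined';
  if (protectedTxs.has(meta.txHash)) return 'protected';
  return 'pending';
}
```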

In-Memory Index Structure

Five private maps manage pool state:

#metadata: Map<string, TxMetaData>           // Primary store: txHash → metadata
#nullifierToTxHash: Map<string, string>      // Conflict detection (pending only)
#feePayerToTxHashes: Map<string, Set<string>> // Balance-based eviction (pending only)
#pendingByPriority: Map<bigint, Set<string>> // Priority ordering (pending only)
#protectedTransactions: Map<string, SlotNumber> // Protection tracking

Key invariant: Only pending txs appear in nullifier/feePayer/priority indices.

Eviction System

Two types of eviction rules:

  1. Pre-add rules (PreAddRule): Run during addPendingTxs to decide if incoming tx should be added

    • NullifierConflictRule: Higher-fee tx evicts lower-fee tx with same nullifier
    • FeePayerBalancePreAddRule: Rejects if fee payer has insufficient balance
    • LowPriorityPreAddRule: Rejects if pool full and incoming tx is lowest priority
  2. Post-event rules (EvictionRule): Run after events (block mined, chain pruned, txs added)

    • InvalidTxsAfterMiningRule: Evicts txs with now-invalid timestamps
    • InvalidTxsAfterReorgRule: Evicts txs with invalid anchor blocks
    • FeePayerBalanceEvictionRule: Evicts txs whose fee payer can't cover fees
    • LowPriorityEvictionRule: Enforces pool size limit
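
A hedged sketch of the pre-add shape — the real PreAddRule interface may differ, but LowPriorityPreAddRule boils down to a comparison against the pool's size limit and its lowest-priority pending tx:

```typescript
interface TxMetaData { txHash: string; priorityFee: bigint }
interface PreAddPoolView {
  pendingCount(): number;
  lowestPriorityPending(): TxMetaData | undefined;
}
type Verdict = { result: 'add'; evict: string[] } | { result: 'reject'; reason: string };

// If the pool is full, the incoming tx must outbid the current lowest
// priority pending tx, which is then evicted to make room.
function lowPriorityPreAdd(incoming: TxMetaData, pool: PreAddPoolView, maxSize: number): Verdict {
  if (pool.pendingCount() < maxSize) return { result: 'add', evict: [] };
  const lowest = pool.lowestPriorityPending();
  if (lowest && incoming.priorityFee > lowest.priorityFee) {
    return { result: 'add', evict: [lowest.txHash] };
  }
  return { result: 'reject', reason: 'pool full and incoming tx is lowest priority' };
}
```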

Persistence

  • Uses AztecAsyncKVStore with a single map txs: Map<string, Buffer>
  • Transactions serialized/deserialized via Tx.toBuffer()/Tx.fromBuffer()
  • TxMetaData is purely in-memory (rebuilt on startup via hydrateFromDatabase)
  • Archive system (TxArchive) stores finalized txs separately with FIFO eviction

Key Patterns

  1. Callbacks: TxPoolV2Impl notifies AztecKVTxPoolV2 of events via callbacks (not events), keeping metrics/telemetry in the wrapper.

  2. Pool access adapters: Two interfaces expose pool state to eviction rules:

    • PreAddPoolAccess: Read-only access for pre-add checks
    • PoolOperations: Read + delete for post-event eviction
  3. Transaction validation: All txs entering pending state must pass TxValidator.validateTx(). Protected/mined txs bypass validation.

  4. Hydration: On startup, all txs loaded from DB, mined status checked against archiver, non-mined validated, then added through pre-add rules to rebuild indices and resolve conflicts.
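
The hydration order above hinges on first partitioning the loaded txs by mined status; a minimal sketch of that step (entry and field names assumed):

```typescript
interface LoadedEntry { txHash: string; minedL2BlockId?: string }

// Mined entries skip validation and go straight into the mined indices;
// everything else is re-validated and re-admitted through the pre-add rules.
function partitionByMinedStatus(entries: LoadedEntry[]): { mined: LoadedEntry[]; nonMined: LoadedEntry[] } {
  const mined: LoadedEntry[] = [];
  const nonMined: LoadedEntry[] = [];
  for (const entry of entries) {
    (entry.minedL2BlockId !== undefined ? mined : nonMined).push(entry);
  }
  return { mined, nonMined };
}
```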


1. Extract Index Management to TxPoolIndices Class

New file: tx_pool_indices.ts

Extract the following private fields and their management into a dedicated class:

class TxPoolIndices {
  // Data
  #metadata: Map<string, TxMetaData>;
  #nullifierToTxHash: Map<string, string>;
  #feePayerToTxHashes: Map<string, Set<string>>;
  #pendingByPriority: Map<bigint, Set<string>>;
  #protectedTransactions: Map<string, SlotNumber>;

  // State queries
  getTxState(meta: TxMetaData): TxState;
  getMetadata(txHash: string): TxMetaData | undefined;
  has(txHash: string): boolean;

  // Iteration
  *iteratePendingByPriority(order: 'asc' | 'desc'): Generator<string>;

  // Index modifications
  addPending(meta: TxMetaData): void;
  addProtected(meta: TxMetaData, slot: SlotNumber): void;
  addMined(meta: TxMetaData, blockId: L2BlockId): void;
  markAsMined(txHash: string, blockId: L2BlockId): void;
  markAsUnmined(txHash: string): void;
  updateProtection(txHash: string, slot: SlotNumber): void;
  clearProtection(txHashes: string[]): void;
  remove(txHash: string): void;
  removeFromPendingIndices(meta: TxMetaData): void;

  // Queries for eviction rules
  getFeePayerPendingTxs(feePayer: string): TxMetaData[];
  getPendingTxCount(): number;
  getLowestPriorityPending(limit: number): string[];
  getPendingTxs(): TxMetaData[];
  getPendingFeePayers(): string[];

  // Find/filter operations
  findTxsMinedAfter(blockNumber: number): TxMetaData[];
  findTxsMinedAtOrBefore(blockNumber: number): string[];
  findExpiredProtectedTxs(slotNumber: SlotNumber): string[];
  filterUnprotected(txs: TxMetaData[]): TxMetaData[];
  filterRestorable(txHashes: string[]): TxMetaData[];
}
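
For instance, iteratePendingByPriority can be a thin generator over #pendingByPriority — a sketch assuming priorities are sorted on demand (a real implementation might maintain a sorted structure instead):

```typescript
// Yields pending tx hashes grouped by priority fee, lowest or highest first.
function* iteratePendingByPriority(
  pendingByPriority: Map<bigint, Set<string>>,
  order: 'asc' | 'desc',
): Generator<string> {
  const priorities = [...pendingByPriority.keys()].sort((a, b) => (a < b ? -1 : a > b ? 1 : 0));
  if (order === 'desc') priorities.reverse();
  for (const priority of priorities) {
    yield* pendingByPriority.get(priority)!;
  }
}
```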

Benefits:

  • Single Responsibility: TxPoolV2Impl focuses on business logic, TxPoolIndices on data structure management
  • Testable in isolation
  • Reduces TxPoolV2Impl by ~300 lines

2. Consolidate Duplicate Validation Methods

Current duplication (tx_pool_v2_impl.ts):

  • #validateForPending (lines 750-774)
  • #validateNonMinedTxs (lines 972-987)

Both validate transactions and partition into valid/invalid. The difference is input type (TxMetaData[] vs {tx, meta}[]).

Refactor: Single validation helper with callback for tx retrieval:

async #validateTxs<T extends TxMetaData>(
  items: T[],
  getTx: (item: T) => Promise<Tx | undefined>,
): Promise<{ valid: T[]; invalid: string[] }> {
  const valid: T[] = [];
  const invalid: string[] = [];

  for (const item of items) {
    const tx = await getTx(item);
    if (!tx) {
      invalid.push(item.txHash);
      continue;
    }
    const result = await this.#pendingTxValidator.validateTx(tx);
    if (result.result === 'valid') {
      valid.push(item);
    } else {
      this.#log.info(`Tx ${item.txHash} failed validation: ${result.reason?.join(', ')}`);
      invalid.push(item.txHash);
    }
  }
  return { valid, invalid };
}

Usage:

// For prepareForSlot/handlePrunedBlocks:
const { valid, invalid } = await this.#validateTxs(txs, meta => this.#txsDB.getAsync(meta.txHash).then(b => (b ? Tx.fromBuffer(b) : undefined)));

// For hydration (already have Tx objects):
const { valid, invalid } = await this.#validateTxs(nonMined, entry => Promise.resolve(entry.tx));

3. Consolidate Pool Access Adapters

Current duplication:

  • #createPoolOperations() (lines 1185-1220)
  • #createPreAddPoolAccess() (lines 1226-1264)

Both expose similar pool access functionality with overlapping methods.

Refactor: After extracting TxPoolIndices, both adapters delegate to the same indices class:

#createPoolOperations(): PoolOperations {
  return {
    getPendingTxs: () => this.#indices.getPendingTxs(),
    getPendingFeePayers: () => this.#indices.getPendingFeePayers(),
    getFeePayerPendingTxs: (fp) => this.#indices.getFeePayerPendingTxs(fp),
    getPendingTxCount: () => this.#indices.getPendingTxCount(),
    getLowestPriorityPending: (n) => this.#indices.getLowestPriorityPending(n),
    deleteTxs: (hashes) => this.#deleteTxsBatch(hashes),
  };
}

The adapters become thin wrappers, and the shared implementation lives in TxPoolIndices.


4. Inline/Remove Low-Value Private Methods

| Method | Lines | Action |
|---|---|---|
| `#isDuplicateTx(txHashStr)` | 1041-1043 | Inline: just `this.#indices.has(txHashStr)` |
| `#partitionByMinedStatus(txs)` | 953-969 | Inline: simple partition loop |
| `#populateMinedIndices(metas)` | 990-994 | Inline: trivial iteration |
| `#unmineTxs(txs)` | 816-821 | Inline: just `meta.minedL2BlockId = undefined` in caller |
| `#persistTx(txHashStr, tx)` | 878-881 | Keep: provides semantic clarity |

5. Consolidate Add Transaction Methods

Current methods (tx_pool_v2_impl.ts):

  • #addNewPendingTx (lines 1046-1059)
  • #addNewProtectedTx (lines 884-895)
  • #addNewMinedTx (lines 898-909)

All share: build metadata → persist tx → add to indices → log.

Refactor: Single #addTx with state parameter:

async #addTx(tx: Tx, state: 'pending' | { protected: SlotNumber } | { mined: L2BlockId }): Promise<TxMetaData> {
  const txHashStr = tx.getTxHash().toString();
  const meta = await buildTxMetaData(tx);

  await this.#txsDB.set(txHashStr, tx.toBuffer());

  if (state === 'pending') {
    this.#indices.addPending(meta);
  } else if ('protected' in state) {
    this.#indices.addProtected(meta, state.protected);
  } else {
    meta.minedL2BlockId = state.mined;
    this.#indices.addMined(meta, state.mined);
  }

  this.#log.verbose(`Added ${typeof state === 'string' ? state : Object.keys(state)[0]} tx ${txHashStr}`);
  return meta;
}

6. Bug Fixes

Bug 1: handleMinedBlock emits wrong txHashes (line 408)

Current: Emits all txHashes from block body
Fix: Only emit hashes that were actually in pool

// Change line 408 from:
this.#callbacks.onTxsRemoved(txHashes.map(h => h.toBigInt()));

// To:
this.#callbacks.onTxsRemoved(found.map(m => BigInt('0x' + m.txHash)));

Bug 2: Non-deterministic nullifier conflict resolution (checkNullifierConflict in tx_metadata.ts:148)

Current: Only compares priorityFee, not txHash tiebreaker
Risk: Equal-fee conflicts resolved by Map iteration order (non-deterministic)
Fix: Use comparePriority for consistent ordering

// Change line 148 from:
if (incomingMeta.priorityFee > conflictingMeta.priorityFee) {

// To:
if (comparePriority(incomingMeta, conflictingMeta) > 0) {
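
The fix presumes comparePriority imposes a total order; a sketch of the assumed shape — fee first, then a deterministic txHash tiebreaker so equal-fee conflicts never depend on Map iteration order (the tiebreaker direction here is an arbitrary choice, not the real implementation):

```typescript
interface TxMetaData { txHash: string; priorityFee: bigint }

// Returns >0 if a outranks b, <0 if b outranks a, 0 only for the same tx.
function comparePriority(a: TxMetaData, b: TxMetaData): number {
  if (a.priorityFee !== b.priorityFee) {
    return a.priorityFee > b.priorityFee ? 1 : -1;
  }
  // Deterministic tiebreaker: lexicographic txHash comparison.
  return a.txHash < b.txHash ? 1 : a.txHash > b.txHash ? -1 : 0;
}
```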

7. Files to Create/Modify

| File | Action |
|---|---|
| `tx_pool_indices.ts` | Create: new class for index management |
| `tx_pool_v2_impl.ts` | Modify: extract index mgmt, consolidate duplicates, inline trivial methods |
| `tx_metadata.ts` | Modify: fix `checkNullifierConflict` to use `comparePriority` |

8. Verification

  1. Build: yarn build from yarn-project root
  2. Unit tests: yarn workspace @aztec/p2p test src/mem_pools/tx_pool_v2/
  3. Compat tests: yarn workspace @aztec/p2p test src/mem_pools/tx_pool_v2/tx_pool_v2.compat.test.ts
  4. Benchmark: yarn workspace @aztec/p2p test src/mem_pools/tx_pool_v2/tx_pool_v2_bench.test.ts

Expected Results

  • TxPoolV2Impl: Reduced from ~1265 lines to ~800 lines
  • TxPoolIndices: New ~350 line class
  • Duplicate code: Eliminated validation duplication
  • Readability: Clearer separation of concerns between business logic and data structure management
  • Bug fixes: 2 correctness issues resolved

Migrates all consumers from TxPool to TxPoolV2, the new event-driven
transaction pool implementation.

## Key API Changes

- `addTxs` → `addPendingTxs` (returns AddTxsResult with accepted/ignored/rejected)
- `markAsMined` → `handleMinedBlock` (takes full L2Block)
- `markMinedAsPending` → `handlePrunedBlocks` (takes L2BlockId)
- `markTxsAsNonEvictable/clearNonEvictableTxs` → `protectTxs/prepareForSlot`
- `deleteTxs` → `handleFailedExecution` or `handleFinalizedBlock`
- New lifecycle: `start()` must be called before use

## Integration Points

### P2P Client (p2p_client.ts)
- Block stream handlers now use pool event methods:
  - `handleLatestL2Blocks` → `handleMinedBlock` per block
  - `handleFinalizedL2Blocks` → `handleFinalizedBlock` per block
  - `handlePruneL2Blocks` → `handlePrunedBlocks` with L2BlockId
- `markTxsAsNonEvictable` now requires BlockHeader for slot-based protection
- `getTxStatus` maps 'protected' → 'pending' for external API
- `getTxs('all')` combines pending + mined (no getAllTxs in V2)
- Pool started/stopped with client lifecycle

### Factory (factory.ts)
- Creates AggregateTxValidator for pending tx validation
- Instantiates AztecKVTxPoolV2 with dependencies:
  - l2BlockSource (archiver)
  - worldStateSynchronizer
  - pendingTxValidator

### Libp2p Service (libp2p_service.ts)
- Block proposal handler: `protectTxs(txHashes, block.blockHeader)`
- Checkpoint proposal handler: `protectTxs(txHashes, checkpoint.lastBlock.blockHeader)`

### Services
- TxProvider: `addPendingTxs` for proposal txs
- TxCollectionSink: `addPendingTxs` for gossip txs
- BlockTxsHandler: Type change only (query methods unchanged)

### Sequencer (sequencer.ts)
- TODO added for `prepareForSlot` at slot boundaries

## TODOs for Follow-up
- Refactor validator creation into TxValidatorFactory
- Wire `prepareForSlot` calls at slot boundaries
- Add context on expected tx state when adding txs

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@spalladino spalladino force-pushed the pw/tx-pool-v2-integration branch from 056f801 to f0fc884 on February 4, 2026 17:36
txPool:
deps.txPool ??
new AztecKVTxPool(store, archive, worldStateSynchronizer, telemetry, {
// TODO(pw/tx-pool): Refactor into a TxValidatorFactory that can be called whenever we need a validator for a block
Contributor Author

  • Implement the factory taking a block as input
  • Define exactly which validations we need here

@@ -359,58 +341,6 @@ describe('P2P Client', () => {
finalized: { block: { number: BlockNumber(50), hash: expect.any(String) }, checkpoint: anyCheckpoint },
});
});

it('deletes txs created from a pruned block', async () => {
Contributor Author

Double-check these tests are captured in the tx pool v2 unit tests

@@ -401,7 +402,7 @@ export class P2PClient<T extends P2PClientType = P2PClientType.Full>

const txs = txBatches.flat();
if (txs.length > 0) {
await this.txPool.addTxs(txs);
await this.txPool.addPendingTxs(txs);
Contributor Author

Review if this parent method is used at all and try to delete it

@@ -444,8 +445,10 @@ export class P2PClient<T extends P2PClientType = P2PClientType.Full>
let txHashes: TxHash[];

if (filter === 'all') {
Contributor Author

Is this filter used?

await this.txPool.deleteTxs(txHashes, { permanently: true });
await this.txPool.cleanupDeletedMinedTxs(lastBlockNum);
for (const block of blocks) {
await this.txPool.handleFinalizedBlock(block.header);
Contributor Author

Call with the last one, do not iterate


// TODO(pw/tx-pool): Figure out when to call!
Contributor Author

Review

@@ -607,7 +611,7 @@ export class P2PClient<T extends P2PClientType = P2PClientType.Full>
**/
public async deleteTxs(txHashes: TxHash[]): Promise<void> {
this.#assertIsReady();
await this.txPool.deleteTxs(txHashes);
await this.txPool.handleFailedExecution(txHashes);
Contributor Author

No. We should expose handleFailedExecution.

const txHash = tx.getTxHash();
txsToDelete.set(txHash.toString(), txHash);
}
const header = await this.l2BlockSource.getBlockHeader(latestBlock);
Contributor Author

Don't we get the block hash from the blockstream, so we don't need to go back to the archiver here?

public markTxsAsNonEvictable(txHashes: TxHash[]): Promise<void> {
return this.txPool.markTxsAsNonEvictable(txHashes);
public async markTxsAsNonEvictable(txHashes: TxHash[], blockHeader: BlockHeader): Promise<TxHash[]> {
return this.txPool.protectTxs(txHashes, blockHeader);
Contributor Author

Use the same function name in p2p client as in the tx pool, so then we can move the tx pool out of the p2p client

@@ -114,7 +114,8 @@ export class TxCollectionSink extends (EventEmitter as new () => TypedEventEmitt

// Add the txs to the tx pool (should not fail, but we catch it just in case)
try {
await this.txPool.addTxs(txs, { source: `tx-collection` });
// TODO(pw/tx-pool): Add context on the expected state on this tx
Contributor Author

Implement this TODO

@@ -227,6 +227,7 @@ export class TxProvider implements ITxProvider {
return;
}
await this.txValidator.validate(txs);
await this.txPool.addTxs(txs);
// TODO(pw/tx-pool): Add context on the expected state on this tx
Contributor Author

Implement too

@@ -376,6 +376,7 @@ export class Sequencer extends (EventEmitter as new () => TypedEventEmitter<Sequ
}

this.lastSlotForCheckpointProposalJob = slot;
// TODO(pw/tx-pool): Call txPool.prepareForSlot(slotNumber) here when transitioning to a new slot
Contributor Author

Implement as well

@AztecBot (Collaborator)

AztecBot commented Feb 9, 2026

Flakey Tests

🤖 says: This CI run detected 2 tests that failed, but were tolerated due to a .test_patterns.yml entry.

FLAKED (http://ci.aztec-labs.com/d2ad692f2664520f): yarn-project/end-to-end/scripts/run_test.sh simple src/e2e_epochs/epochs_mbps.parallel.test.ts "builds multiple blocks per slot with transactions anchored to proposed blocks" (230s) (code: 1) group:e2e-p2p-epoch-flakes
FLAKED (http://ci.aztec-labs.com/d5ad0b946d793b2e): yarn-project/end-to-end/scripts/run_test.sh ha src/composed/ha/e2e_ha_full.test.ts (129s) (code: 1)

await this.markTxsAsMinedFromBlocks(blocks);
await this.txPool.clearNonEvictableTxs();
await this.handleMinedBlocks(blocks);
await this.maybeCallPrepareForSlot();
Contributor

hm?

Comment on lines +149 to +150
// Ensure we always start with source 0 so we can test the fallback to source 1
jest.spyOn(Math, 'random').mockReturnValue(0);
Contributor

🥲

@alexghr alexghr merged commit 2737717 into merge-train/spartan Feb 10, 2026
11 checks passed
@alexghr alexghr deleted the pw/tx-pool-v2-integration branch February 10, 2026 17:20
github-merge-queue bot pushed a commit that referenced this pull request Feb 11, 2026
BEGIN_COMMIT_OVERRIDE
chore(ci3): add optional local cache for bootstrap artifacts (#20305)
fix: Fix p2p integration test (#20331)
chore: reduce fee log severity (#20336)
feat: restrict response sizes to expected sizes (#20287)
feat: retry web3signer connection (#20342)
feat(p2p): Integrate TxPoolV2 across codebase (#20172)
feat: review and optimize Claude configuration, agents, and skills
(#20270)
fix(prover): handle cross-chain messages when proving mbps (#20354)
chore: retry flakes. if retry pass, is a flake as we know it now. fail
both is hard fail (#19322)
chore(p2p): add mock reqresp layer for tests (#20370)
fix: (A-370) don't propagate on tx mempool add failure (#20374)
chore: Skip the HA test (#20376)
feat: Retain pruned transactions until pruned block is finalised
(#20237)
END_COMMIT_OVERRIDE

Labels

ci-full Run all master checks. ci-no-fail-fast Sets NO_FAIL_FAST in the CI so the run is not aborted on the first failure

4 participants