
feat: upstream pallet-psm#3

Open
lrazovic wants to merge 112 commits into master from leo/psm

Conversation


@lrazovic lrazovic commented Feb 13, 2026

Description

This PR introduces pallet-psm, a new FRAME pallet that implements a Peg Stability Module (PSM) for pUSD. The pallet enables 1:1 swaps between pUSD and approved external stablecoins (e.g. USDC/USDT), with configurable mint/redeem fees and per-asset circuit breakers.

The pallet enforces a three-tier debt ceiling model before minting:

  • System-wide cap from Vaults (MaximumIssuance)
  • Aggregate PSM cap (MaxPsmDebtOfTotal)
  • Per-asset normalized ceiling (AssetCeilingWeight)
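
The three tiers above amount to taking the minimum of three headrooms before allowing a mint. A minimal sketch in plain Rust, where the function name, parameters, and the parts-per-million representation of `Permill` are assumptions for illustration, not the pallet's actual API:

```rust
// Hypothetical sketch of the three-tier ceiling check; all names and the
// parts-per-million (Permill-like) representation are illustrative.
type Balance = u128;

const PPM: Balance = 1_000_000; // Permill denominator

/// Headroom left for a single PSM asset: the minimum of
/// 1. the system-wide Vaults ceiling, 2. the aggregate PSM cap,
/// 3. the per-asset normalized ceiling.
fn mint_headroom(
    maximum_issuance: Balance,          // system-wide cap from Vaults
    total_issued: Balance,              // pUSD already issued system-wide
    max_psm_debt_of_total_ppm: Balance, // aggregate PSM cap, parts-per-million
    total_psm_debt: Balance,            // sum of PsmDebt over all assets
    asset_ceiling_weight_ppm: Balance,  // this asset's share of the PSM cap
    asset_debt: Balance,                // PsmDebt[asset_id]
) -> Balance {
    let system_headroom = maximum_issuance.saturating_sub(total_issued);
    let psm_cap = maximum_issuance * max_psm_debt_of_total_ppm / PPM;
    let psm_headroom = psm_cap.saturating_sub(total_psm_debt);
    let asset_cap = psm_cap * asset_ceiling_weight_ppm / PPM;
    let asset_headroom = asset_cap.saturating_sub(asset_debt);
    system_headroom.min(psm_headroom).min(asset_headroom)
}

fn main() {
    // 10% of a 1_000_000 ceiling may come from the PSM; this asset gets 50% of that.
    let h = mint_headroom(1_000_000, 0, 100_000, 0, 500_000, 0);
    assert_eq!(h, 50_000);
    println!("headroom = {h}");
}
```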

It also adds cross-pallet interfaces in sp-pusd:

  • VaultsInterface (PSM -> Vaults): query system issuance ceiling
  • PsmInterface (Vaults/others -> PSM): query reserved PSM capacity

Integration

For Runtime Developers

To integrate pallet-psm into your runtime:

  1. Add the dependency to your runtime `Cargo.toml`:

```toml
pallet-psm = { version = "0.1.0", default-features = false }
```

  2. Implement the `Config` trait in your runtime:

```rust
impl pallet_psm::Config for Runtime {
    type Asset = Assets;                         // fungibles impl for pUSD + external assets
    type AssetId = u32;                          // asset identifier type
    type VaultsInterface = Vaults;               // must implement sp_pusd::VaultsInterface
    type ManagerOrigin = EnsurePsmManager;       // returns PsmManagerLevel (Full/Emergency)
    type WeightInfo = pallet_psm::weights::SubstrateWeight<Runtime>;
    #[cfg(feature = "runtime-benchmarks")]
    type BenchmarkHelper = PsmBenchmarkHelper;
    type StablecoinAssetId = StablecoinAssetId;  // pUSD asset id
    type InsuranceFund = InsuranceFundAccount;   // fee recipient
    type PalletId = PsmPalletId;                 // PSM reserve account derivation
    type MinSwapAmount = MinSwapAmount;          // minimum mint/redeem amount
}
```

  3. Add to `construct_runtime!`:

```rust
construct_runtime!(
    pub enum Runtime {
        // ... other pallets
        Psm: pallet_psm,
    }
);
```

  4. Ensure Vaults exposes the issuance ceiling to the PSM:

```rust
impl sp_pusd::VaultsInterface for Vaults {
    type Balance = Balance;
    fn get_maximum_issuance() -> Balance {
        // return system-wide pUSD ceiling
    }
}
```

  5. For existing chains, include the migration:

```rust
pub struct PsmInitialConfig;

impl pallet_psm::migrations::v1::InitialPsmConfig<Runtime> for PsmInitialConfig {
    fn max_psm_debt_of_total() -> Permill { Permill::from_percent(10) }
    fn external_asset_ids() -> Vec<AssetId> { vec![USDC_ASSET_ID, USDT_ASSET_ID] }
    fn asset_configs() -> BTreeMap<AssetId, (Permill, Permill, Permill)> {
        // asset -> (mint_fee, redeem_fee, ceiling_weight)
        [
            (USDC_ASSET_ID, (Permill::from_percent(1), Permill::from_percent(1), Permill::from_percent(50))),
            (USDT_ASSET_ID, (Permill::from_percent(1), Permill::from_percent(1), Permill::from_percent(50))),
        ].into_iter().collect()
    }
}

pub type Migrations = (
    pallet_psm::migrations::v1::MigrateToV1<Runtime, PsmInitialConfig>,
);
```

For Pallet Developers

Other pallets can query PSM-reserved issuance capacity via PsmInterface:

```rust
use sp_pusd::PsmInterface;

let reserved = <Psm as PsmInterface>::reserved_capacity();
```

This can be used to account for PSM-reserved issuance when computing vault minting headroom.
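
As a minimal illustration of that accounting, assuming hypothetical names (`vault_mint_headroom`, the parameter names) rather than the Vaults pallet's real API:

```rust
// Illustrative only: how a vaults pallet might subtract PSM-reserved capacity
// when computing its own minting headroom. `psm_reserved` corresponds to
// <Psm as sp_pusd::PsmInterface>::reserved_capacity(); other names are made up.
type Balance = u128;

fn vault_mint_headroom(
    maximum_issuance: Balance, // system-wide pUSD ceiling
    issued: Balance,           // pUSD already minted via vaults
    psm_reserved: Balance,     // capacity reserved by the PSM
) -> Balance {
    maximum_issuance
        .saturating_sub(issued)
        .saturating_sub(psm_reserved)
}

fn main() {
    assert_eq!(vault_mint_headroom(1_000, 600, 150), 250);
}
```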

Review Notes

Key Features

  • 1:1 swaps: mint (external -> pUSD) and redeem (pUSD -> external)
  • Multi-asset support with explicit approval list (add_external_asset / remove_external_asset)
  • Three-tier debt ceiling enforcement (system-wide, aggregate PSM, per-asset normalized)
  • Per-asset circuit breaker: AllEnabled -> MintingDisabled -> AllDisabled
  • Tiered governance origin:
    • Full: all parameter and asset-management operations
    • Emergency: can only set circuit breaker status
  • Fee model:
    • Mint fee: deducted from minted pUSD, fee minted to Insurance Fund
    • Redeem fee: deducted from output, fee transferred in pUSD to Insurance Fund
  • Safety invariant on redeem: limited by tracked PsmDebt (not just raw reserve), preventing withdrawal of donated reserves
  • Includes benchmarks and V0 -> V1 migration for post-genesis deployment

Swap Lifecycle

Mint (External -> pUSD):

  1. User calls mint(asset_id, external_amount)
  2. Checks: approved asset, circuit breaker, min amount
  3. Enforces ceilings in order: system-wide -> aggregate PSM -> per-asset
  4. Transfers external asset into PSM account
  5. Mints pUSD to user minus fee
  6. Mints fee amount as pUSD to Insurance Fund
  7. Increases PsmDebt[asset_id]
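
The steps above can be sketched as follows. This is a simplified reconstruction, not the pallet's code: the circuit breaker is reduced to a boolean, the three ceilings are collapsed into a precomputed `headroom`, and `Permill` is represented as parts-per-million.

```rust
// Hypothetical sketch of the mint path; all names and structure are illustrative.
use std::collections::BTreeMap;

type Balance = u128;
type AssetId = u32;
const PPM: Balance = 1_000_000;

#[derive(Debug, PartialEq)]
enum PsmError { AssetNotApproved, MintingDisabled, BelowMinimum, CeilingExceeded }

struct Psm {
    approved: BTreeMap<AssetId, bool>, // asset -> minting enabled (circuit breaker)
    mint_fee_ppm: Balance,
    min_swap: Balance,
    headroom: Balance,                 // combined three-tier headroom
    debt: BTreeMap<AssetId, Balance>,  // PsmDebt
    insurance_fund: Balance,
}

impl Psm {
    /// Returns (pUSD minted to user, pUSD fee minted to the Insurance Fund).
    fn mint(&mut self, asset: AssetId, amount: Balance) -> Result<(Balance, Balance), PsmError> {
        let enabled = *self.approved.get(&asset).ok_or(PsmError::AssetNotApproved)?;
        if !enabled { return Err(PsmError::MintingDisabled); }
        if amount < self.min_swap { return Err(PsmError::BelowMinimum); }
        if amount > self.headroom { return Err(PsmError::CeilingExceeded); }
        // 1:1 swap: the fee is carved out of the minted pUSD.
        let fee = amount * self.mint_fee_ppm / PPM;
        let to_user = amount - fee;
        self.insurance_fund += fee;                 // fee minted to Insurance Fund
        *self.debt.entry(asset).or_default() += amount; // PsmDebt[asset_id] grows
        self.headroom -= amount;
        Ok((to_user, fee))
    }
}

fn main() {
    let mut psm = Psm {
        approved: BTreeMap::from([(1, true)]),
        mint_fee_ppm: 10_000, // 1%
        min_swap: 10,
        headroom: 1_000,
        debt: BTreeMap::new(),
        insurance_fund: 0,
    };
    let (to_user, fee) = psm.mint(1, 100).unwrap();
    assert_eq!((to_user, fee), (99, 1));
    assert_eq!(psm.debt[&1], 100);
}
```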

Redeem (pUSD -> External):

  1. User calls redeem(asset_id, pusd_amount)
  2. Checks: approved asset, circuit breaker, min amount
  3. Calculates fee and external output amount
  4. Verifies tracked debt and reserve are sufficient
  5. Burns pUSD principal portion from user
  6. Transfers pUSD fee from user to Insurance Fund
  7. Transfers external asset from PSM account to user
  8. Decreases PsmDebt[asset_id]
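
The redeem path, and in particular the debt-bounded safety invariant, can be sketched as a pure function. Again an illustrative reconstruction under assumed names, not the pallet's actual code:

```rust
// Sketch of the redeem path, emphasizing the PsmDebt invariant: output is
// bounded by tracked debt, so donated reserves cannot be withdrawn.
type Balance = u128;
const PPM: Balance = 1_000_000;

#[derive(Debug, PartialEq)]
enum PsmError { InsufficientDebt, InsufficientReserve }

/// Returns (external asset paid out, pUSD fee sent to the Insurance Fund,
/// pUSD principal burned) for a 1:1 redeem of `pusd_amount`.
fn redeem(
    pusd_amount: Balance,
    redeem_fee_ppm: Balance,
    tracked_debt: Balance, // PsmDebt[asset_id]
    reserve: Balance,      // external asset held by the PSM account
) -> Result<(Balance, Balance, Balance), PsmError> {
    let fee = pusd_amount * redeem_fee_ppm / PPM; // fee stays in pUSD
    let principal = pusd_amount - fee;            // burned from the user
    // Safety invariant: cap by tracked debt, not the raw reserve, so pUSD
    // donated to the reserve account cannot be drained.
    if principal > tracked_debt { return Err(PsmError::InsufficientDebt); }
    if principal > reserve { return Err(PsmError::InsufficientReserve); }
    Ok((principal, fee, principal))
}

fn main() {
    // 1% redeem fee: user burns 99 pUSD principal, 1 pUSD goes to the fund.
    assert_eq!(redeem(100, 10_000, 1_000, 1_000), Ok((99, 1, 99)));
    // A donated reserve (reserve > debt) does not raise the redeemable amount.
    assert_eq!(redeem(100, 0, 50, 1_000), Err(PsmError::InsufficientDebt));
}
```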

Governance/Operations

  • set_minting_fee
  • set_redemption_fee
  • set_max_psm_debt
  • set_asset_ceiling_weight
  • set_asset_status
  • add_external_asset
  • remove_external_asset (requires zero debt; cleans up config storage)

Testing

The pallet includes comprehensive coverage for:

  • Mint/redeem success paths and failure modes
  • Fee edge cases (0%, non-zero, 100%)
  • Three-tier ceiling enforcement and boundary conditions
  • Per-asset ceiling redistribution when weight is set to 0%
  • Circuit breaker behavior per asset
  • Full vs emergency governance permissions
  • Asset onboarding/offboarding invariants and cleanup
  • Reserve-vs-debt safety (donated reserve cannot be redeemed)
  • Long-running mint/redeem cycles and accounting invariants
  • Migration tests (v0 -> v1 and skip-when-already-v1)

marian-radu and others added 2 commits March 20, 2026 10:45
…tech#11153)

### Resumable block sync
- New `block_sync` module syncs backward from the latest finalized block
to the first EVM block, with restart-safe checkpoint tracking via a
`sync_state` SQLite table.
- On restart, fills only the top gap (new blocks) and bottom gap
(remaining backfill) without re-syncing completed ranges.
- Auto-discovers and persists `first_evm_block` — the lowest block with
EVM support on the chain.
- Chain identity verification: stores genesis hash in `sync_state` and
validates on startup; detects stale boundaries after reorgs.
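
The restart behaviour can be sketched as a pure gap computation over the persisted checkpoint. `Synced`, `gaps`, and the field names below are hypothetical, not the `block_sync` module's actual API:

```rust
// Illustrative sketch: given a persisted checkpoint of the already-synced
// range, only the "top gap" (new finalized blocks) and the "bottom gap"
// (remaining backfill) are fetched on restart.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Synced { lowest: u64, highest: u64 } // as persisted in sync_state

/// Returns (top_gap, bottom_gap) as inclusive ranges still to sync.
fn gaps(state: Option<Synced>, first_evm_block: u64, latest_finalized: u64)
    -> (Option<(u64, u64)>, Option<(u64, u64)>)
{
    match state {
        // Fresh DB: everything from the first EVM block to the tip is one gap.
        None => (Some((first_evm_block, latest_finalized)), None),
        Some(s) => {
            let top = (s.highest < latest_finalized).then(|| (s.highest + 1, latest_finalized));
            let bottom = (first_evm_block < s.lowest).then(|| (first_evm_block, s.lowest - 1));
            (top, bottom)
        }
    }
}

fn main() {
    // Previously synced 100..=200; chain advanced to 250, backfill target is 10.
    let (top, bottom) = gaps(Some(Synced { lowest: 100, highest: 200 }), 10, 250);
    assert_eq!(top, Some((201, 250)));
    assert_eq!(bottom, Some((10, 99)));
}
```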

### CLI rework
New `--eth-pruning` flag replaces `--database-url`, `--cache-size`,
`--index-last-n-blocks`, and `--earliest-receipt-block`:
- `--eth-pruning archive` (default): persistent on-disk DB with backward
historical sync.
- `--eth-pruning <N>`: in-memory DB keeping the latest N blocks.

### CLI migration guide

| Previous flag | Replacement | Notes |
|---|---|---|
| `--cache-size N` | `--eth-pruning N` | In-memory DB, keeps latest N blocks |
| `--database-url sqlite::memory:` | `--eth-pruning N` | `--eth-pruning N` always uses an in-memory DB |
| `--database-url /path/to/db.sqlite` | `--base-path /path/to/dir` | Persistent DB stored as `eth-rpc.db` inside the directory |
| `--index-last-n-blocks N` | `--eth-pruning archive` | Syncs all finalized blocks down to the first EVM block |
| `--earliest-receipt-block N` | _(removed)_ | Replaced by auto-discovered `first_evm_block` |

> **Note:** `--dev` automatically uses a temporary directory with an
on-disk DB, which is deleted on exit.

> **Note:** When `--base-path` is omitted, the DB is stored in the
default OS data directory:
> - macOS: `~/Library/Application Support/eth-rpc/`
> - Linux: `~/.local/share/eth-rpc/`
> - Windows: `%APPDATA%\eth-rpc\`

### Examples

```bash
# Local dev node (on-disk DB in a temporary directory, deleted on exit)
eth-rpc --dev

# In-memory DB, keep only the latest 512 blocks
eth-rpc --node-rpc-url wss://example.com:443 --eth-pruning 512

# Persistent DB with historical sync (default, --eth-pruning archive is implicit)
eth-rpc --node-rpc-url wss://example.com:443

# Persistent DB with historical sync at a custom path
eth-rpc --node-rpc-url wss://example.com:443 --base-path /data/eth-rpc

# Explicit archive mode
eth-rpc --node-rpc-url wss://example.com:443 --eth-pruning archive
```

paritytech/contract-issues#271

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
@lrazovic lrazovic changed the base branch from leo/auctions to master March 20, 2026 11:42
lrazovic and others added 24 commits March 20, 2026 11:55
Also:

- more relay_parent -> scheduling_parent fixes.
- Some candidate validation cleanup (a bit less spaghetti)
- Make sure we don't use the relay parent to fetch state anywhere -
state might not be available now
- Make sure usages of scheduling parent in disputes for fetching state
are sound
- Via fixing 11272: Simpler backing pipeline - no threading of node
features everywhere
# Description

This PR implements binary search for the gas estimation logic in the
eth-rpc: gas estimations are no longer simple dry runs; instead, binary
search is used to find the smallest gas limit at which the transaction
would run.

This PR closes paritytech/contract-issues#217
and also _kind of_ fixes
paritytech/contract-issues#259 or at least
makes it harder to trigger the case in which we observe it, but the
underlying issue still exists.

The binary search algorithm implemented in this PR is as close as
possible to the one used in Geth.
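
The estimator described above amounts to a lower-bound binary search over the gas limit. A minimal sketch, assuming a `dry_run` predicate that reports whether execution succeeds at a given limit; the name and the monotonicity assumption are mine, not the eth-rpc API:

```rust
// Find the smallest gas limit at which a dry run succeeds, assuming success
// is monotone in the limit. `dry_run` stands in for the real eth-rpc call.
fn estimate_gas(mut dry_run: impl FnMut(u64) -> bool, mut lo: u64, mut hi: u64) -> Option<u64> {
    if !dry_run(hi) {
        return None; // cannot succeed even at the cap
    }
    // Invariant: dry_run(hi) succeeds; the answer lies in lo..=hi.
    while lo < hi {
        let mid = lo + (hi - lo) / 2;
        if dry_run(mid) {
            hi = mid; // succeeds: the answer is mid or lower
        } else {
            lo = mid + 1; // fails: the answer is above mid
        }
    }
    Some(lo)
}

fn main() {
    // A call that needs exactly 21_000 gas.
    let est = estimate_gas(|limit| limit >= 21_000, 1, 30_000_000);
    assert_eq!(est, Some(21_000));
}
```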

# Note

This PR **does not** fix
paritytech/contract-issues#259, where the dry
run can fail but the submission succeeds. It makes that case harder to
trigger, but the underlying issue causing
paritytech/contract-issues#259 is still there:
it is caused by the overflows and saturations that happen in the gas
-> fee -> weight computations.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
# Description

Implemented `eth_subscribe` in the eth-rpc. The subscription kinds
implemented are `newHeads` and `logs`.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…ate` (paritytech#11417)

While testing the collator protocol revamp on westend I noticed
"Inconsistency while adding a leaf to the `ClaimQueueState`. Expected on
session change." pops up a lot at regular intervals.

Long story short, when writing this code I assumed that the CQ never
changes, but from a `ClaimQueueState` point of view this is not true. ~~On
session change the validators in the active set are reshuffled and end
up in different backing groups.~~ On group rotation the validators are
assigned to a new core. In this case we start fetching the claim queue
for the newly assigned core, and the future assignments in
`ClaimQueueState` are no longer valid, so overwriting them is the right
thing to do.

We could also implement logic that detects the assignment change,
notifies the claim queue and cleans it up, but that is additional
complexity that doesn't add any benefit.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
paritytech#11460)

## Summary

- Set `CallbackHandle =
(pallet_assets_precompiles::ForeignAssetId<Runtime, Instance1>,)`
  in `pallet_assets::Config<Instance1>` for the kitchensink runtime.
- Asset creation (`create`, `force_create`) now automatically populates
a sequential
  foreign asset index mapping. Asset destruction cleans it up.

## Test plan

- [x] Run [end-to-end
tests](paritytech/evm-test-suite#142) (requires
substrate-node, eth-rpc, node, cast)
- [x] Revert CallbackHandle to `()` and confirm end-to-end tests fail


Alternatively run this bash script for testing:
https://gist.github.com/0xRVE/99bbc5ec7fcabeb54e3b797bd4cc97c8

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
We should tell the collator protocol when we reject a candidate, so the
collator can be punished accordingly.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Add a check that the workspace compiles before publishing crates, so
that the --no-verify flag can be used during publishing.
This PR introduces the try_state hook to pallet-authorship to verify a
key storage invariant.

closes part of paritytech#239

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
### Summary
1. Resolve the earliest block tag to the first known EVM block across
RPC methods (eth_getBlockByNumber, eth_call, eth_getLogs, etc.)
2. Add a known_first_evm_block_for_chain() lookup for Polkadot, Kusama,
Paseo, and Westend Asset Hubs so earliest works without historical sync
3. Fix tracing_block to propagate errors and handle genesis (no parent)

Fixes paritytech#11383

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…ch#11306)

Adds CandidateDescriptorV3 support to the experimental validator-side
collator protocol.

Fixes: paritytech#11084

---------

Signed-off-by: Alexandru Cihodaru <alexandru.cihodaru@parity.io>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Right now the RPC is using the same thread pool as the rest of the node.
When there is high usage and the node is running out of threads for
blocking futures, RPC calls start to take a very long time. This may
also result in problems with other node functionality that would also be
blocked waiting for new threads. This pull request assigns the RPC
server its own thread pool with the same number of threads as
`max_connections`. These threads are only started on demand, but should
allow any RPC connection to have at least one thread to run blocking
tasks.

In a next step we should finally look into the performance metering of
RPC calls and ensure that we have some proper rate limit in place to
give every connection a fair share.


Hopefully helps with:
paritytech#10719

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Migrate _misc_ tests:

- parityDb
- malus
changes to fix flaky tests.

---------

Co-authored-by: Bastian Köcher <git@kchr.de>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…tytech#11441)

When a user calls `refund` with `allow_burn = true`, their token balance
is destroyed, but the asset's total supply was never updated. This
caused `total_issuance()` to overcount. The fix decrements the supply and
emits a `Burned` event, consistent with how every other burn path works.

In production, the burning path is rarely triggered. The fungibles trait
interface always passes `allow_burn = false`, so only users manually
submitting the refund extrinsic with the burn flag would hit it.

Follow-up issue for migrating the discrepancy (observed on Westend):
paritytech#11443.

Fixes paritytech#10412

---------

Co-authored-by: clangenb <clangenb@users.noreply.github.com>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
clangenb and others added 29 commits April 2, 2026 20:55
…ritytech#11381)

When a parachain collator starts with `--authoring=slot-based` and
performs warp sync, the `slot-based-block-builder` essential task
immediately calls `slot_duration()` which requires
`AuraApi_slot_duration`. During warp sync the runtime isn't ready, so
this fails and the task returns, shutting down the node.

The lookahead collator avoids this by calling `wait_for_aura()` before
starting. This PR adds an equivalent guard to the slot-based collator.

### Manual test
Before the fix, the collator panicked after the relay chain warp sync
with `AuraApi_slot_duration` not available; this no longer occurs.
```
./target/release/polkadot-parachain \
    --chain asset-hub-polkadot \
    --sync warp \
    --authoring=slot-based \
    --tmp -- --sync warp
```
Closes paritytech#11072.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: clangenb <clangenb@users.noreply.github.com>
…ytech#11594)

The benchmark failed depending on `MaxAutoRebagPerBlock` (e.g. it passes
with 10, as configured in the Westend, Polkadot and Kusama AH runtimes,
but failed with 5, as it was configured before; see the [runtime
PR](polkadot-fellows/runtimes#1065)).

Replace the bulk `on_idle` benchmark with a per-item `on_idle_rebag`
benchmark that measures the worst-case cost of a single rebag. `on_idle`
now consumes weight per iteration via `WeightMeter` instead of reserving
a single bulk weight upfront.
This decouples the benchmark from `MaxAutoRebagPerBlock`. Changing the
config no longer requires re-running benchmarks.
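
The per-item metering described above can be sketched as follows. `WeightMeter` here is a toy stand-in for `frame_support`'s type, and all other names are illustrative:

```rust
// Toy WeightMeter: tracks a remaining weight budget.
struct WeightMeter { remaining: u64 }

impl WeightMeter {
    fn try_consume(&mut self, w: u64) -> Result<(), ()> {
        if self.remaining >= w { self.remaining -= w; Ok(()) } else { Err(()) }
    }
}

/// Processes rebags until either the weight budget or the per-block item cap
/// is exhausted; returns how many items were handled. Instead of reserving one
/// bulk weight for MaxAutoRebagPerBlock items upfront, the benchmarked
/// per-rebag cost is consumed from the meter on every iteration.
fn on_idle_rebag(meter: &mut WeightMeter, per_item_weight: u64, max_per_block: u32) -> u32 {
    let mut done = 0;
    while done < max_per_block && meter.try_consume(per_item_weight).is_ok() {
        // ... rebag one account here ...
        done += 1;
    }
    done
}

fn main() {
    let mut meter = WeightMeter { remaining: 35 };
    // Budget allows 3 items of weight 10; the cap of 5 is not the binding limit.
    assert_eq!(on_idle_rebag(&mut meter, 10, 5), 3);
    assert_eq!(meter.remaining, 5);
}
```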

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…ics (paritytech#11600)

The current implementation of the `collation_fetch_latency` metric
contains a critical observability blindspot due to insufficient
histogram bucket resolution.


Currently, the collation fetch latency is capped at an upper bound of 5
seconds. This effectively creates a black box for investigating latency
events. Any fetch operation exceeding the 5s threshold is aggregated
into the final bucket, regardless of whether the fetch took 30s or 1h.
This obscures the true distribution of network delays and prevents
accurate performance profiling for high-latency scenarios.

The discrepancy was identified with
https://github.com/lexnv/block-confidence-monitor and confirmed via
manual analysis of the logs. Without granular visibility into these
outliers, we cannot effectively measure the success of our block
confidence work or debug bottlenecks in validator-side protocols.

While at it, I have increased the granularity of other buckets that
might be relevant.

Part of the block confidence work:
- paritytech#11377


cc @sandreim @skunert

---------

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…ch#11052)

# Description

Updates bounty and child-bounty account derivation in
`pallet-multi-asset-bounties` to use the sub-account prefixes `"mbt"`
(multi-asset bounty) and `"mcb"` (multi-asset child bounty) instead of
`"bt"` and `"cb"`, so that the new multi-asset bounties pallet and the
old bounties pallet do not derive the same bounty accounts.

## Integration

The pallet is only deployed on Westend (not Kusama or Polkadot), so no
production downstream depends on the old derivation; the change is
limited to the testnet, as the KAH and PAH runtimes are already
configured with the new prefix.

## Review Notes
- Doc comments were added at module level (account derivation
subsection) and on both structs to document the prefixes.
- **Version bump:** Major

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…tech#11576)

The current implementation of `prune_for_para` gives an edge to new
peers because it uses only the timestamp of the last bump when evicting
peers from the DB. As a result, a high-score collator that has been
inactive for a while can easily be evicted by new peers with minimal
score.

To fix this we now calculate the `score / time_since_last_bump` ratio
for each peer and evict the one with the minimum value.
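
A minimal sketch of that eviction rule, with hypothetical `Peer` fields; the real implementation's types differ:

```rust
// Evict the peer with the smallest score / time_since_last_bump ratio,
// instead of simply the least-recently-bumped one.
#[derive(Debug, Clone, PartialEq)]
struct Peer { id: u32, score: f64, secs_since_last_bump: f64 }

fn pick_eviction_victim(peers: &[Peer]) -> Option<u32> {
    peers
        .iter()
        .min_by(|a, b| {
            // Guard against division by ~zero for freshly bumped peers.
            let ra = a.score / a.secs_since_last_bump.max(1.0);
            let rb = b.score / b.secs_since_last_bump.max(1.0);
            ra.total_cmp(&rb)
        })
        .map(|p| p.id)
}

fn main() {
    let peers = [
        // High-score collator, inactive for a while: ratio 1000/500 = 2.0.
        Peer { id: 1, score: 1000.0, secs_since_last_bump: 500.0 },
        // Fresh peer with minimal score: ratio 1/10 = 0.1 -> evicted.
        Peer { id: 2, score: 1.0, secs_since_last_bump: 10.0 },
    ];
    assert_eq!(pick_eviction_victim(&peers), Some(2));
}
```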

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…h#11628)

Fix a regression introduced by paritytech#11381, where we wrapped the slot-based
collator launch in an async task that first calls `wait_for_aura`, then
spawns the actual long-running collator tasks via `slot_based::run()`.
The wrapper was spawned with `spawn_essential_handle()`.

Essential tasks shut down the node when they complete. The init wrapper
completes immediately after spawning, the TaskManager sees an essential
task exit, and the node shuts down.

This only affects parachain collators started with
`--authoring=slot-based`.

Fix: use `spawn_handle()` for the short-lived init wrapper. The child
tasks inside `slot_based::run()` remain correctly marked as essential.

An easy way to reproduce (same setup used by staking-miner nightly test
- which in fact started to fail after paritytech#11381 got merged e.g.
[here](https://github.com/paritytech/polkadot-staking-miner/actions/runs/23928039324/job/69807526676)
): spawn a Zombienet network with a 2-validator relay chain and a single
slot-based parachain collator. The collator process starts but shuts
down immediately.
For example in your SDK repo:
```
cd substrate/frame/staking-async/runtimes/papi-tests
just setup
just run fake-dev 
```
which launches zombienet spawning
  - alice (relay validator, port 9944) — polkadot
  - bob (relay validator, port 9945) — polkadot
- charlie (parachain collator, port 9946) — polkadot-parachain
--collator --authoring=slot-based

Port 9946 never comes up.

I have also verified that the fix coming from paritytech#11381 still works,
running manually `./target/release/polkadot-parachain --chain
asset-hub-polkadot --sync warp --authoring=slot-based --tmp -- --sync
warp`.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…when parent bounty is not Active (paritytech#11612)

# Description

Fix an authorization bypass in `pallet-multi-asset-bounties` where any
signed account could
forcibly unassign an active child bounty's curator when the parent
bounty was not in `Active` state
(e.g., `CuratorUnassigned`). This also caused the child curator's native
balance hold (deposit) to
be permanently leaked — removed from pallet storage but never released
or burned on-chain.

**Root cause:** In `unassign_curator`, the `BountyStatus::Active`
branch's catch-all `Some(sender)`
arm used `if let Some(parent_curator) = parent_curator { ... }` with no
`else` clause. When
`parent_curator` was `None` (parent bounty not Active), the block was
silently skipped and execution
fell through to the state transition — no `BadOrigin` error was
returned.

## Integration

No integration changes required for downstream projects. This is a fix
internal to
`pallet-multi-asset-bounties` with no public API changes. The extrinsic
signature and behavior for
authorized callers remain identical.

## Review Notes

The fix restructures the `BountyStatus::Active` arm in
`unassign_curator` with two changes:

### 1. Authorization before storage mutation

Previously, `CuratorDeposit::take()` was called unconditionally at the
top of the `Active` arm
(before verifying the caller). Now it is called inside each `match
maybe_sender` arm, only after the
caller is confirmed to be authorized. This prevents the deposit from
being removed from storage on
an unauthorized (and reverted) call path.

```diff
 BountyStatus::Active { ref curator, .. } => {
-    let maybe_curator_deposit =
-        CuratorDeposit::<T, I>::take(parent_bounty_id, child_bounty_id);
     match maybe_sender {
         None => {
-            if let Some(curator_deposit) = maybe_curator_deposit {
+            if let Some(curator_deposit) =
+                CuratorDeposit::<T, I>::take(parent_bounty_id, child_bounty_id)
+            {
                 T::Consideration::burn(curator_deposit, curator);
             }
         },
```

### 2. Explicit rejection when `parent_curator` is `None`

The catch-all `Some(sender)` arm now uses
`parent_curator.ok_or(BadOrigin)?` followed by an
`ensure!`. When `parent_curator` is `None`, the call is immediately
rejected with `BadOrigin`.

```diff
         Some(sender) => {
-            if let Some(parent_curator) = parent_curator {
-                if sender == parent_curator && *curator != parent_curator {
-                    if let Some(curator_deposit) = maybe_curator_deposit {
-                        T::Consideration::burn(curator_deposit, curator);
-                    }
-                } else {
-                    return Err(BadOrigin.into());
-                }
+            let parent_curator = parent_curator.ok_or(BadOrigin)?;
+            ensure!(
+                sender == parent_curator && *curator != parent_curator,
+                BadOrigin
+            );
+            if let Some(curator_deposit) =
+                CuratorDeposit::<T, I>::take(parent_bounty_id, child_bounty_id)
+            {
+                T::Consideration::burn(curator_deposit, curator);
             }
         },
```

### Regression test

A comprehensive test
(`unprivileged_caller_cannot_unassign_active_child_curator_when_parent_not_active`)
is added that:

1. Creates an active child bounty with a separate child curator.
2. Has the parent curator voluntarily unassign (putting parent into
`CuratorUnassigned`).
3. Asserts that an unprivileged attacker is rejected with `BadOrigin`.
4. Verifies the child bounty remains `Active`, the curator deposit stays
in storage, and the
   balance hold is intact.
5. Confirms the child curator can still voluntarily unassign themselves
and that the deposit is
   properly released.

# Checklist

* [x] My PR includes a detailed description as outlined in the
"Description" and its two subsections above.
* [x] My PR follows the [labeling requirements]
* [x] I have made corresponding changes to the documentation (if
applicable)
* [x] I have added tests that prove my fix is effective or that my
feature works (if applicable)

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Implicit views can return paths longer than the lookahead, which leads
to `is_slot_available` skipping the loop
[here](https://github.com/paritytech/polkadot-sdk/blob/a103769c2e2cb739f721d0446acabe97a2c0df08/polkadot/node/network/collator-protocol/src/validator_side/mod.rs#L1598),
because `ancestor_valid_len` is 0, leading to rejections of candidates.

I also noticed that `paths_via_relay_parents` can be massively
simplified, which also removes the above bug. `ImplicitView` can likely
be simplified further, as I no longer see any need to keep
`block_info_storage` at all.

Fixes: paritytech#11625
…tytech#11107)

# Description
This PR implements optional metadata-based detection to automatically
determine the correct Aura authority ID type from runtime metadata.

The library currently assumes that the Aura authority ID type is
`ed25519` for `asset-hub-polkadot`/`statemint` and `sr25519` for all
other chains. This PR adds the ability to detect the correct type from
runtime metadata when available.

Further implementation of paritytech#11026

## Integration
**No integration changes required.** This is a non-breaking enhancement
that improves detection logic. Behavior remains backward compatible with
existing fallback mechanisms.

## Review Notes

### Implementation Overview
This PR implements optional metadata detection for Aura authority IDs:
- **Optional metadata detection**: Adds support to read the Aura
authority ID type from runtime metadata when available

### Changes Made

**File: `cumulus/polkadot-omni-node/lib/src/common/runtime.rs`**
1. **Added `aura_consensus_id()` method to `MetadataInspector`**:
- Scans runtime metadata types for
`sp_consensus_aura::sr25519::AuthorityId` or
`sp_consensus_aura::ed25519::AuthorityId`
   - Returns `Some(AuraConsensusId)` if found, `None` otherwise
   - Only checks if Aura pallet exists in metadata

2. **Updated `DefaultRuntimeResolver::runtime()`**:
- Calls `metadata_inspector.aura_consensus_id()` for metadata-based
detection
   - Uses detected type immediately when available
- Falls back to chain spec ID check when metadata detection returns
`None`

3. **Added test coverage**:
- Test verifies `aura_consensus_id()` correctly detects `sr25519` from
test runtime metadata

### Example Behavior
**Metadata detection workflow:**
- If metadata detection succeeds → uses detected type (`sr25519` or
`ed25519`)
- If metadata unavailable or detection fails → uses chain spec ID
heuristics (`ed25519` for asset-hub-polkadot/statemint, `sr25519` for
others)

### Code Example
```diff
+ fn aura_consensus_id(&self) -> Option<AuraConsensusId> {
+     if !self.pallet_exists(DEFAULT_AURA_PALLET_NAME) {
+         return None;
+     }
+ 
+     for portable_type in self.0.types().types() {
+         let path = &portable_type.ty.path;
+         let segments = path.segments();
+ 
+         if segments.len() >= 3 {
+             let last_three = &segments[segments.len() - 3..];
+             match last_three {
+                 ["sp_consensus_aura", "sr25519", "AuthorityId"] =>
+                     return Some(AuraConsensusId::Sr25519),
+                 ["sp_consensus_aura", "ed25519", "AuthorityId"] =>
+                     return Some(AuraConsensusId::Ed25519),
+                 _ => continue,
+             }
+         }
+     }
+     None
+ }
```

### Testing

- Added unit test `test_aura_consensus_id()` that verifies metadata
detection works correctly with the test runtime (which uses `sr25519`)

### Notes

- The fallback logic preserves existing behavior while making
assumptions explicit
- Metadata detection is optional and gracefully falls back when metadata
is unavailable or doesn't contain the required information


# Checklist

* [x] My PR includes a detailed description as outlined in the
"Description" and its two subsections above.
* [ ] My PR follows the [labeling requirements](

https://github.com/paritytech/polkadot-sdk/blob/master/docs/contributor/CONTRIBUTING.md#Process
) of this project (at minimum one label for `T` required)
    * External contributors: Use `/cmd label <label-name>` to add labels
    * Maintainers can also add labels manually
* [ ] I have made corresponding changes to the documentation (if
applicable)
* [x] I have added tests that prove my fix is effective or that my
feature works (if applicable)

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Iulian Barbu <14218860+iulianbarbu@users.noreply.github.com>
This PR extends the existing flow that prepares binaries from custom
revisions and adds a docker build to it.
At the moment the following bins and docker images can be built and
pushed to `paritypr`:
- polkadot + workers - polkadot-debug image
- polkadot-parachain - polkadot-parachain-debug image
- polkadot-omni-node - polkadot-omni-node image
- chain-spec-builder - chain-spec-builder image
Addresses: paritytech/release-engineering#293
…ech#11643)

## Description

Fix bridges **integration test failure** for the Westend network by
using the slot-based collator for asset-hub-westend nodes.

The `asset-hub-westend` runtime configures `RelayParentOffset = 1` (at
`cumulus/parachains/runtimes/assets/asset-hub-westend/src/lib.rs:138`)
in this
[commit](paritytech@8e911a6)
, which requires relay parent descendant headers in the parachain
inherent data. However, the **zombienet** test was launching the
collators with the default look ahead authoring policy, which passes an
empty `relay_parent_descendants` vec. This caused a runtime panic on
every block build attempt:

```
  Unable to verify provided relay parent descendants.
  expected_rp_descendants_num: 1
  error: InvalidNumberOfDescendants { expected: 2, received: 0 }
```

The parachain remained stuck at block #0, failing the test assertion:
`asset-hub-westend-collator1: reports block height is at least 10 within
180 seconds`.

Only the slot-based collator
(`collators/slot_based/block_builder_task.rs`) calls
`create_inherent_data_with_rp_offset()` with the required descendant
data. The look ahead and basic collators call `create_inherent_data()`
which passes `None`.

## Integration

This change only affects a zombienet test TOML configuration file. No
crate changes.

## Review Notes

The fix adds `"--authoring", "slot-based"` to both
`asset-hub-westend-collator1` and `asset-hub-westend-collator2` in
`bridges/testing/environments/rococo-westend/bridge_hub_westend_local_network.toml`.

This is only needed for the Westend side because:
- asset-hub-westend has RelayParentOffset = ConstU32<1> — requires
descendant headers
- asset-hub-rococo has RelayParentOffset = ConstU32<0> — no descendant
verification, works with any collator

The Rococo TOML (bridge_hub_rococo_local_network.toml) is unchanged
since its asset-hub runtime doesn't require relay parent descendants.
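As a sketch of what that change looks like in the network definition (the node names match the text above, but the surrounding keys and other arguments are illustrative, not the verbatim file):

```toml
# bridges/testing/environments/rococo-westend/bridge_hub_westend_local_network.toml
# Sketch: pass the slot-based authoring flag to each asset-hub-westend
# collator so it supplies the relay parent descendant headers that
# RelayParentOffset = 1 requires. Repeat for collator2.
[[parachains.collators]]
name = "asset-hub-westend-collator1"
args = ["-lparachain=debug", "--authoring", "slot-based"]
```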
…ates release activity (paritytech#11652)

This PR adds a new step to the Publish Crates flow that restores the
crates' Cargo.toml files to their pre-release state, keeping only the
bumped versions. This simplifies the post-release crates activity we
used to run and aligns with the pipeline checks.

partially address:
paritytech/release-engineering#291
# Description
Implement unit tests for "Channel replacement: verify only
higher-priority statements replace existing entries, corner cases of
replacement logic." paritytech#11534

## Summary
- `channel_replacement_only_higher_priority_succeeds` -> verifies
lower/equal priority rejected with `ChannelPriorityTooLow`, higher
priority replaces, one-per-channel invariant preserved
 
- `channel_replacement_with_size_increase_evicts_others` -> verifies
that replacing a channel message with a larger one triggers additional
eviction of lowest-priority non-channel statements to satisfy size
constraints
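The replacement invariant the first test exercises can be modeled in a minimal standalone sketch. All names here (`Channel`, `InsertOutcome`, `insert_channel_statement`) are illustrative stand-ins for the real statement-store types; only `ChannelPriorityTooLow` semantics from the description are mirrored (as `PriorityTooLow`):

```rust
use std::collections::HashMap;

type Channel = u64;

#[derive(Debug, PartialEq)]
enum InsertOutcome {
    Inserted,
    Replaced,
    PriorityTooLow,
}

/// One statement per channel: a new statement replaces the existing one
/// only if its priority is strictly higher; lower or equal is rejected.
fn insert_channel_statement(
    store: &mut HashMap<Channel, u32>, // channel -> priority of kept statement
    channel: Channel,
    priority: u32,
) -> InsertOutcome {
    match store.get(&channel).copied() {
        // Equal priority is rejected too: only strictly higher wins.
        Some(existing) if priority <= existing => InsertOutcome::PriorityTooLow,
        Some(_) => {
            store.insert(channel, priority);
            InsertOutcome::Replaced
        }
        None => {
            store.insert(channel, priority);
            InsertOutcome::Inserted
        }
    }
}

fn main() {
    let mut store = HashMap::new();
    assert_eq!(insert_channel_statement(&mut store, 1, 10), InsertOutcome::Inserted);
    assert_eq!(insert_channel_statement(&mut store, 1, 10), InsertOutcome::PriorityTooLow);
    assert_eq!(insert_channel_statement(&mut store, 1, 5), InsertOutcome::PriorityTooLow);
    assert_eq!(insert_channel_statement(&mut store, 1, 20), InsertOutcome::Replaced);
    // One-per-channel invariant preserved.
    assert_eq!(store.len(), 1);
}
```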

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…itytech#11611)

# Description

Implement unit tests for "Propagation under normal load" paritytech#11534 

## Summary
- Add 3 multi-peer propagation tests for the network handler (all-peers
delivery, known-statement filtering, same-statement deduplication)
- Refactor 3 duplicate test builder functions into a single canonical
`build_handler_multi_peers(n)` with thin wrappers

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Andrei Eres <eresav@me.com>
# Description
Implement unit tests for "Eviction: verify the lowest-priority
statements are evicted first, corner cases of priority ordering" paritytech#11534
## Summary
- Extend the existing `constraints()` test with eviction priority
ordering corner cases:
- Verify that equal priority statements are rejected with `AccountFull`
when the account is full
- Verify that specific evicted statement hashes appear in the expired
map
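The ordering property under test ("lowest-priority statements are evicted first, and their hashes land in the expired map") can be sketched standalone; the types and `evict_to_fit` helper here are hypothetical, not the real statement-store code:

```rust
/// Evict lowest-priority entries until total size fits the budget,
/// returning the kept entries and the hashes of evicted ones.
fn evict_to_fit(
    mut entries: Vec<(u32, u64, usize)>, // (priority, statement hash, size)
    max_size: usize,
) -> (Vec<(u32, u64, usize)>, Vec<u64>) {
    // Sort ascending by priority, so eviction pops from the front.
    entries.sort_by_key(|&(priority, _, _)| priority);
    let mut expired = Vec::new();
    let mut total: usize = entries.iter().map(|e| e.2).sum();
    while total > max_size && !entries.is_empty() {
        let (_, hash, size) = entries.remove(0);
        expired.push(hash);
        total -= size;
    }
    (entries, expired)
}

fn main() {
    let entries = vec![(5, 0xaa, 40), (1, 0xbb, 40), (3, 0xcc, 40)];
    let (kept, expired) = evict_to_fit(entries, 80);
    // The single lowest-priority statement (hash 0xbb) is evicted first.
    assert_eq!(expired, vec![0xbb]);
    assert_eq!(kept.iter().map(|e| e.1).collect::<Vec<_>>(), vec![0xcc, 0xaa]);
}
```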

---------

Co-authored-by: Andrei Eres <eresav@me.com>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
… collations (paritytech#11629)

This PR enhances the observability of our logs when validators refuse to
fetch advertised collations for certain para IDs.

The validator keeps track of advertised collations from collators.
Then, the validator checks if it has sufficient slots
(`is_slot_available`) and if the collation can be seconded
(`can_second`).
If either of the mentioned functions fails, the validator silently
ignores the collation.

This PR aims to surface this behavior and help investigate further
issues.
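The decision flow described above can be sketched as follows. This is illustrative only: the function names mirror the description (`is_slot_available`, `can_second`), the signature is invented, and plain `eprintln!` stands in for the subsystem's actual logging:

```rust
type ParaId = u32;

/// Decide whether to fetch an advertised collation, logging the reason
/// when it is declined instead of silently ignoring it.
fn should_fetch(para: ParaId, slot_available: bool, can_second: bool) -> bool {
    if !slot_available {
        // Previously this advertisement was dropped without a trace.
        eprintln!("declining collation for para {para}: no slot available");
        return false;
    }
    if !can_second {
        eprintln!("declining collation for para {para}: cannot be seconded");
        return false;
    }
    true
}

fn main() {
    assert!(!should_fetch(2000, false, true));
    assert!(!should_fetch(2000, true, false));
    assert!(should_fetch(2000, true, true));
}
```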

This PR is already deployed on validators and has surfaced:
- paritytech#11625

Part of:
- paritytech#11377

---------

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Tsvetomir Dimitrov <tsvetomir@parity.io>
…h#11641)

Test runtimes grew too large; we now need compression to fit within the
limits. This fixes the runtime-upgrade zombienet tests. In addition, I
added logging so that errors like this will be easier to find next time.

Fixes: paritytech#11568

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
# Description

Part of paritytech#11534: covers
tests for malformed input

Primitives crate:
- `decode_checks_fields`: extended with more duplicate/out-of-order field
cases
- `decode_rejects_malformed_bytes`: new test covering corrupted encoded
bytes (empty input, invalid discriminants, truncated payloads, invalid
proof variants, inflated field counts)
- `sign_and_verify`: extended with Ed25519 and ECDSA invalid-signature
cases

Client crate:
- `submit_rejects_malformed_statements`: new test covering statements
with no proof, corrupted signatures (all 3 key types), and a wrong
signer
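The shape of these tests can be illustrated with a self-contained analogue: a toy length-prefixed decoder that must fail cleanly on empty input, truncated payloads, and length mismatches rather than panicking. The real tests exercise the SCALE-encoded statement format; this only sketches the test pattern:

```rust
/// Toy decoder: first byte is the payload length, rest is the payload.
/// Malformed input must yield an error, never a panic.
fn decode_payload(bytes: &[u8]) -> Result<Vec<u8>, &'static str> {
    let (&len, rest) = bytes.split_first().ok_or("empty input")?;
    if rest.len() != len as usize {
        return Err("length prefix does not match payload");
    }
    Ok(rest.to_vec())
}

fn main() {
    assert!(decode_payload(&[]).is_err());           // empty input
    assert!(decode_payload(&[3, 1, 2]).is_err());    // truncated: prefix claims 3 bytes, got 2
    assert!(decode_payload(&[2, 1, 2, 3]).is_err()); // trailing bytes beyond the declared length
    assert_eq!(decode_payload(&[2, 9, 9]), Ok(vec![9, 9]));
}
```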


## Integration

No integration needed
…itytech#11632)

paritytech/release-engineering#291

TODO:
- handle warnings on all workflows regarding deprecated Node version 20
I would suggest encoding our intention directly instead of working
around a symptom. This makes it much easier to reason about the code, in
my opinion, and should have fewer edge cases. In particular, this change
will also wait for the current relay parent mid-parachain-slot, which is
useful: otherwise we would build on an older-than-expected relay parent,
which could in turn affect block confidence, as the relay parent might
already be out of scope before the collation can land on chain.

Also worth mentioning: as the original PR already shows, we have the
implicit assumption that the current relay parent has arrived by 1s into
the relay chain slot. This seems to be the case most of the time, but
not always, which triggers the issue this PR is fixing. For best
performance we should consider bumping the slot offset some more. If I
understand correctly, the error case we found was caused by a relay
parent arriving late by only a couple of milliseconds; thus 1.5s might
already be plenty for the needed wait to almost never happen, but
ideally we should pick a good value based on data from production.

---------

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
Co-authored-by: Alexandru Vasile <alexandru.vasile@parity.io>
This PR removes the Meta Transaction from the Westend Relay Chain.
With this removal, the Verify Signature pallet is removed as well.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>