
Add flow to create old tag #98

Merged
EgorPopelyaev merged 19 commits into master from ep-add-flow-to-creat-old-tag
Nov 18, 2025

Conversation

@EgorPopelyaev
Owner

No description provided.

EgorPopelyaev and others added 19 commits November 14, 2025 14:28
This PR adds a couple of improvements to the Check semver job for the
stable branches:
1. The `validate: false` option can now be set not only on `major`
bumps but on `minor` and `patch` bumps as well. This is useful for
backport cases when the desired bump does not match the one that the
`parity-publish` semver check predicted (like
[here](https://github.com/paritytech/polkadot-sdk/actions/runs/19135068993/job/54685184577?pr=10221))
2. The possibility to skip the check when it is really not needed but still
fails (like on the post crates release
[prs](https://github.com/paritytech/polkadot-sdk/actions/runs/18311557391/job/52141285274?pr=9951))

closes: paritytech/release-engineering#274
…10305)

When running a single collator (most commonly on testnets), the block
builder task is always able to claim a slot, so we're never triggering
the pre-connect mechanism which happens for slots owned by other
authors.
Additionally, I fixed some tests.

---------

Signed-off-by: Andrei Sandu <andrei-mihail@parity.io>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…no_std` (paritytech#10321)

This fixes `cargo test -p cumulus-pallet-parachain-system --features
runtime-benchmarks`
…aritytech#10329)

Fixes paritytech#10185

This PR adds support for the `paginationStartKey` parameter in the
`archive_v1_storage` JSON-RPC API for the query types `descendantsValues`
and `descendantsHashes`, per [the latest
specs](https://paritytech.github.io/json-rpc-interface-spec/api/archive_v1_storage.html).
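As a rough illustration of the resumption semantics (not the actual implementation), a paginated descendants query can be sketched over a sorted key space, resuming strictly after the supplied start key. The function and parameter names here are hypothetical; a `BTreeMap` stands in for the sorted trie keys:

```rust
use std::collections::BTreeMap;
use std::ops::Bound;

// Hypothetical sketch: return one page of descendants of `prefix`,
// resuming strictly after `pagination_start_key` when it is given.
fn descendants_page(
    storage: &BTreeMap<String, u32>,
    prefix: &str,
    pagination_start_key: Option<&str>,
    page_size: usize,
) -> Vec<(String, u32)> {
    let lower = match pagination_start_key {
        // Resume after the last key of the previous page, excluding it.
        Some(start) => Bound::Excluded(start.to_string()),
        None => Bound::Unbounded,
    };
    storage
        .range((lower, Bound::Unbounded))
        .filter(|(k, _)| k.starts_with(prefix))
        .take(page_size)
        .map(|(k, v)| (k.clone(), *v))
        .collect()
}
```

A client would pass the last key of the previous page as the next `paginationStartKey` to walk through all descendants page by page.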

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Bastian Köcher <git@kchr.de>
This renames the `SlotSchedule` runtime api to `TargetBlockRate`. It
also changes the signature to only return the target block rate. As
discussed at the retreat, we don't need the block time returned as part
of this runtime api.
# Description
paritytech#9724

---------

Co-authored-by: Javier Viola <363911+pepoviola@users.noreply.github.com>
)

Fixes: paritytech#9977

On our Kusama Canary chain, YAP-3392 shows the log entry:
```
Collation wasn't advertised because it was built on a relay chain block that is now part of an old session
```
[showing up 400+ times (2025-10-03 --
2025-10-10)](https://grafana.teleport.parity.io/goto/spoPcDeHR?orgId=1).

# Changes
Changed `offset_relay_parent_find_descendants` to return `None` if the
`relay_best_hash` or any of its ancestors contains an epoch change.
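The core of that change can be sketched as an ancestor walk that bails out as soon as it crosses an epoch boundary. This is an illustrative sketch only: the types, names, and `u32` block ids are stand-ins, not the actual polkadot-sdk API:

```rust
use std::collections::HashMap;

// Stand-in for a block's header info; names are hypothetical.
struct BlockInfo {
    parent: Option<u32>,
    has_epoch_change: bool,
}

// Walk back `offset` ancestors from `relay_best_hash`. If any block on the
// path contains an epoch change, the session may differ, so return `None`.
fn find_ancestors_unless_epoch_change(
    chain: &HashMap<u32, BlockInfo>,
    relay_best_hash: u32,
    offset: usize,
) -> Option<Vec<u32>> {
    let mut out = Vec::new();
    let mut current = relay_best_hash;
    for i in 0..=offset {
        let info = chain.get(&current)?;
        if info.has_epoch_change {
            return None;
        }
        out.push(current);
        if i < offset {
            current = info.parent?;
        }
    }
    Some(out)
}
```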

---------

Co-authored-by: Sebastian Kunert <skunert49@gmail.com>
…actional extensions (paritytech#9930)

The `ProofSizeExt` extension is used to serve the proof size to the
runtime. It uses the proof recorder to request the current proof size.
The `RecordingProofProvider` extension can record the calls to the proof
size function. Later the `ReplayProofSizeProvider` can be used to replay
these recorded proof sizes. So, the proof recorder is not required
anymore.

Extensions are now also hooked into the transactional system. This means
they are called when a new transaction is created and informed when a
transaction is committed or reverted.
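The record/replay idea can be sketched in miniature: one component records every proof-size value served to the runtime, and a second replays them in order, so the real proof recorder is no longer needed during replay. The struct names mirror the ones mentioned above, but the shapes are illustrative assumptions, not the actual extension API:

```rust
use std::collections::VecDeque;

// Records every proof size served, so it can later be replayed.
struct RecordingProofSizeProvider<F: FnMut() -> usize> {
    inner: F,
    recorded: Vec<usize>,
}

impl<F: FnMut() -> usize> RecordingProofSizeProvider<F> {
    fn proof_size(&mut self) -> usize {
        let size = (self.inner)();
        self.recorded.push(size);
        size
    }
}

// Replays the recorded proof sizes without needing a proof recorder.
struct ReplayProofSizeProvider {
    recorded: VecDeque<usize>,
}

impl ReplayProofSizeProvider {
    fn proof_size(&mut self) -> usize {
        // Serve the sizes in the order they were recorded.
        self.recorded.pop_front().expect("replay exhausted")
    }
}
```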

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
fix paritytech/contract-issues#220

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…ing (paritytech#10333)

We had reports in the past about `polkadot-parachain` consuming a lot of
memory during syncing. I spent some time investigating this again.

This graph shows memory consumption during sync process:
<img width="1256" height="302" alt="image"
src="https://github.com/user-attachments/assets/eec1b510-1aa8-446e-8088-5ff0daab6252"
/>

We see a rise up to 50 GB, then a release of a lot of memory, and the
node stabilizes at around 20 GB. While I still find that relatively high, I
found that the large reduction in memory towards the end was caused by
finality notifications. I tracked down the culprit to be
`parachain-consensus`: it performs long-blocking finalization operations
and keeps finality notifications around while doing so.

In this PR I introduce a new task that fetches the included block and
then immediately releases the finality notifications of the relay chain.
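The pattern behind that fix can be sketched generically: copy out the small piece of data the slow work needs, then drop the notification (and the memory it pins) before doing anything long-running. The types and channel-based shape below are illustrative assumptions, not the actual polkadot-sdk task:

```rust
use std::sync::mpsc;
use std::thread;

// Stand-in for a relay chain finality notification; the payload represents
// the memory the notification keeps alive while it is held.
struct FinalityNotification {
    included_block: u64,
    _heavy_payload: Vec<u8>,
}

fn spawn_finalizer(rx: mpsc::Receiver<FinalityNotification>) -> thread::JoinHandle<Vec<u64>> {
    thread::spawn(move || {
        let mut finalized = Vec::new();
        for notification in rx {
            // Copy out only the data the slow operation needs...
            let included = notification.included_block;
            // ...and release the notification (and its payload) right away.
            drop(notification);
            // The potentially long-blocking finalization now runs without
            // pinning the notification's memory.
            finalized.push(included);
        }
        finalized
    })
}
```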

Memory is now more bounded at around ~12 GB:
<img width="1248" height="308" alt="image"
src="https://github.com/user-attachments/assets/5a8be3bb-02a2-400f-9d0d-87ec298ce09f"
/>

closes paritytech#1662

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
# Description

This is a small PR that allows for the differential testing job to be
manually triggered instead of _only_ being triggered by PRs.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…d polkadot-omni-node builds (paritytech#10343)

This PR changes the RC build flow so that the large GitHub runners are
used only for the `polkadot-parachain` and `polkadot-omni-node`
builds, as the other binary builds run fine on the standard runners,
which also helps to save some costs and resources.

closes: paritytech/release-engineering#279
Related to paritytech#9693

In the transaction pool, transactions are identified by the tag they
provide.

For tasks the provided tag is simply the hash of the encoded task.

Nothing in the docs says that implementers should be careful that there
are not too many tasks for a single operation. What I mean is: if a task is
`migrate_keys(limit)`, with `limit` valid from 1 to 10_000, then all tasks
`migrate_keys(1)`, `migrate_keys(2)` ... `migrate_keys(10_000)` are
valid and effectively do the same operation: they all migrate part of
the keys.
In this case a malicious person can submit all those tasks at once and
spam the transaction pool with 10_000 transactions.

I see multiple solutions:
* (1) We are careful when we implement tasks and make the docs clear, but
the API is error-prone. (In my example above we would implement just
`migrate_keys`, and inside the call we would migrate a fixed batch of
1000 keys in bulk.)
* (2) We have a new returned value that is the provided tag for the
task. Or we use the task index as the provided tag.
* (3) We only accept local tasks. <-- implemented in this PR

Maybe (2) is a better API if we want external submission of tasks.
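The spam vector can be demonstrated in miniature: when the provided tag is just a hash of the encoded task, every parameter value yields a distinct tag, so each `migrate_keys(limit)` variant is accepted by the pool as an independent transaction. This sketch uses `std`'s `DefaultHasher` as a stand-in for the real encoding and hashing, and `migrate_keys` is the hypothetical task from the text:

```rust
use std::collections::HashSet;
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for "hash of the encoded task": hash the task name and its
// parameter together.
fn provided_tag(task_name: &str, param: u32) -> u64 {
    let mut hasher = DefaultHasher::new();
    (task_name, param).hash(&mut hasher);
    hasher.finish()
}

// Count how many distinct pool entries the variants of one task produce.
fn distinct_pool_entries(limits: impl Iterator<Item = u32>) -> usize {
    limits
        .map(|l| provided_tag("migrate_keys", l))
        .collect::<HashSet<_>>()
        .len()
}
```

Since every `limit` value hashes to its own tag, nothing in the pool deduplicates the variants, which is exactly the spam opportunity described above.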

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
This PR backports regular version bumps and prdoc reordering from the
release branch back to master
# Description

This PR bumps the commit hash of the revive-differential-tests framework
to a version that contains a fix for the `CodeNotFound` issue we've been
seeing with PolkaVM. The framework now uploads the code of all the
contracts prior to running the tests.

When CI runs for this PR we should observe either no more
`CodeNotFound` errors in PolkaVM tests, or a greatly reduced number of them.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
# Description

Small PR that changes the DT CI to not require a PR for uploading the
report to the CI job.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
EgorPopelyaev merged commit cfc609b into master Nov 18, 2025
8 of 9 checks passed