Add retry mechanics to pallet-scheduler #3060

Merged · 44 commits · Feb 16, 2024
Commits
328d0cd Add option to retry task in scheduler (georgepisaltu, Jan 23, 2024)
6f56d9f Add unit tests for retry scheduler (georgepisaltu, Jan 23, 2024)
22800c5 Add benchmarks for retry scheduler (georgepisaltu, Jan 24, 2024)
9315c2e Add some docs to retry functions (georgepisaltu, Jan 25, 2024)
5bbd31e Remove redundant clone (georgepisaltu, Jan 25, 2024)
3eb8016 Add real weights to scheduler pallet (georgepisaltu, Jan 25, 2024)
9ecae46 Merge remote-tracking branch 'upstream/master' into retry-schedule (georgepisaltu, Jan 25, 2024)
fcff15c ".git/.scripts/commands/bench-all/bench-all.sh" --pallet=pallet_sched… (Jan 25, 2024)
a8fd732 ".git/.scripts/commands/bench-all/bench-all.sh" --pallet=pallet_sched… (Jan 25, 2024)
3e6e77e Merge branch 'master' of https://github.com/paritytech/polkadot-sdk i… (Jan 25, 2024)
9120d60 ".git/.scripts/commands/bench-all/bench-all.sh" --pallet=pallet_sched… (Jan 25, 2024)
4167d6f ".git/.scripts/commands/bench-all/bench-all.sh" --pallet=pallet_sched… (Jan 25, 2024)
816906a Use `TaskAddress` in `set_retry` (georgepisaltu, Jan 26, 2024)
8a28ea5 Merge remote-tracking branch 'upstream/master' into retry-schedule (georgepisaltu, Jan 26, 2024)
900d2d5 Add prdoc (georgepisaltu, Jan 26, 2024)
05cbdfc Refactor agenda query in `set_retry` (georgepisaltu, Jan 26, 2024)
5fc62f4 Minor renames and fixes (georgepisaltu, Jan 26, 2024)
5e305e1 Refactor `schedule_retry` return type (georgepisaltu, Jan 26, 2024)
5943dcc Implement `ensure_privilege` (georgepisaltu, Jan 26, 2024)
690f2b8 Add event for setting retry config (georgepisaltu, Jan 26, 2024)
4a81417 Make retry fail if insufficient weight (georgepisaltu, Jan 29, 2024)
438effb Remove redundant weight parameter in `set_retry` (georgepisaltu, Jan 29, 2024)
d048760 Merge remote-tracking branch 'upstream/master' into retry-schedule (georgepisaltu, Jan 29, 2024)
3c2b540 Add test for dropping insufficient weight retry (georgepisaltu, Jan 29, 2024)
7a39a69 Merge remote-tracking branch 'upstream/master' into retry-schedule (georgepisaltu, Jan 29, 2024)
2e8f954 Clean up retry config on cancel (georgepisaltu, Feb 1, 2024)
7dfb517 Small refactor (georgepisaltu, Feb 1, 2024)
2e26707 Merge remote-tracking branch 'upstream/master' into retry-schedule (georgepisaltu, Feb 1, 2024)
40b567d Add docs to retry config map (georgepisaltu, Feb 1, 2024)
2b35465 Add retry count to `RetrySet` event (georgepisaltu, Feb 2, 2024)
a43df52 Merge remote-tracking branch 'upstream/master' into retry-schedule (georgepisaltu, Feb 2, 2024)
9da58c6 Make retries independent of periodic runs (georgepisaltu, Feb 7, 2024)
2976dac Merge remote-tracking branch 'upstream/master' into retry-schedule (georgepisaltu, Feb 7, 2024)
8ce66f7 Small refactoring (georgepisaltu, Feb 13, 2024)
dc9ef2e Add `cancel_retry` extrinsics (georgepisaltu, Feb 13, 2024)
945a095 Merge remote-tracking branch 'upstream/master' into retry-schedule (georgepisaltu, Feb 13, 2024)
39eb209 Add e2e unit test for retry schedule (georgepisaltu, Feb 14, 2024)
2290240 Merge remote-tracking branch 'upstream/master' into retry-schedule (georgepisaltu, Feb 14, 2024)
50a2010 Simplify `schedule_retry` (georgepisaltu, Feb 15, 2024)
e9cc27e Add docs for `as_retry` (georgepisaltu, Feb 15, 2024)
497100c Merge remote-tracking branch 'upstream/master' into retry-schedule (georgepisaltu, Feb 15, 2024)
863bec7 Update doc comments for `set_retry` (georgepisaltu, Feb 16, 2024)
7a16648 Merge remote-tracking branch 'upstream/master' into retry-schedule (georgepisaltu, Feb 16, 2024)
b72da0c Move common logic under `do_cancel_retry` (georgepisaltu, Feb 16, 2024)
Files changed

cumulus/parachains/runtimes/collectives/collectives-westend/src/weights/pallet_scheduler.rs
@@ -1,42 +1,41 @@
// Copyright (C) Parity Technologies (UK) Ltd.
-// SPDX-License-Identifier: Apache-2.0
// This file is part of Cumulus.

-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
+// Cumulus is free software: you can redistribute it and/or modify
+// it under the terms of the GNU General Public License as published by
+// the Free Software Foundation, either version 3 of the License, or
+// (at your option) any later version.
+
+// Cumulus is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+// GNU General Public License for more details.
+
+// You should have received a copy of the GNU General Public License
+// along with Cumulus. If not, see <http://www.gnu.org/licenses/>.

//! Autogenerated weights for `pallet_scheduler`
//!
//! THIS FILE WAS AUTO-GENERATED USING THE SUBSTRATE BENCHMARK CLI VERSION 4.0.0-dev
-//! DATE: 2023-07-31, STEPS: `50`, REPEAT: `20`, LOW RANGE: `[]`, HIGH RANGE: `[]`
+//! DATE: 2024-01-25, STEPS: `50`, REPEAT: `20`, LOW RANGE: `[]`, HIGH RANGE: `[]`
//! WORST CASE MAP SIZE: `1000000`
-//! HOSTNAME: `runner-ynta1nyy-project-238-concurrent-0`, CPU: `Intel(R) Xeon(R) CPU @ 2.60GHz`
-//! EXECUTION: ``, WASM-EXECUTION: `Compiled`, CHAIN: `Some("collectives-polkadot-dev")`, DB CACHE: 1024
+//! HOSTNAME: `runner-grjcggob-project-674-concurrent-0`, CPU: `Intel(R) Xeon(R) CPU @ 2.60GHz`
+//! WASM-EXECUTION: `Compiled`, CHAIN: `Some("collectives-westend-dev")`, DB CACHE: 1024

// Executed Command:
-// ./target/production/polkadot-parachain
+// target/production/polkadot-parachain
// benchmark
// pallet
-// --chain=collectives-polkadot-dev
-// --wasm-execution=compiled
-// --pallet=pallet_scheduler
-// --no-storage-info
-// --no-median-slopes
-// --no-min-squares
-// --extrinsic=*
// --steps=50
// --repeat=20
-// --json
-// --header=./file_header.txt
-// --output=./parachains/runtimes/collectives/collectives-polkadot/src/weights/
+// --extrinsic=*
+// --wasm-execution=compiled
+// --heap-pages=4096
+// --json-file=/builds/parity/mirrors/polkadot-sdk/.git/.artifacts/bench.json
+// --pallet=pallet_scheduler
+// --chain=collectives-westend-dev
+// --header=./cumulus/file_header.txt
+// --output=./cumulus/parachains/runtimes/collectives/collectives-westend/src/weights/

#![cfg_attr(rustfmt, rustfmt_skip)]
#![allow(unused_parens)]
@@ -55,8 +54,8 @@ impl<T: frame_system::Config> pallet_scheduler::WeightInfo for WeightInfo<T> {
// Proof Size summary in bytes:
// Measured: `31`
// Estimated: `1489`
-// Minimum execution time: 3_441_000 picoseconds.
-Weight::from_parts(3_604_000, 0)
+// Minimum execution time: 2_475_000 picoseconds.
+Weight::from_parts(2_644_000, 0)
.saturating_add(Weight::from_parts(0, 1489))
.saturating_add(T::DbWeight::get().reads(1))
.saturating_add(T::DbWeight::get().writes(1))
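
Read each generated formula the same way: `Weight::from_parts(t, p)` carries a ref-time component `t` in picoseconds and a proof-size component `p` in bytes, and the `T::DbWeight::get().reads(r)` / `.writes(w)` terms charge the runtime's configured cost per storage access. A minimal sketch of the pattern, using the post-change numbers from the function above (illustrative only; `base_weight` is not a helper in this file, and the runtime's `DbWeight` constants are left as parameters):

use frame_support::weights::Weight;

// Recompute the formula above by hand: ref time, proof size, then DB costs.
fn base_weight(db_read: Weight, db_write: Weight) -> Weight {
    Weight::from_parts(2_644_000, 0)                 // measured ref time (ps)
        .saturating_add(Weight::from_parts(0, 1489)) // proof-size bound (bytes)
        .saturating_add(db_read)                     // 1 storage read
        .saturating_add(db_write)                    // 1 storage write
}
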
@@ -68,37 +67,39 @@ impl<T: frame_system::Config> pallet_scheduler::WeightInfo for WeightInfo<T> {
// Proof Size summary in bytes:
// Measured: `77 + s * (177 ±0)`
// Estimated: `159279`
-// Minimum execution time: 2_879_000 picoseconds.
-Weight::from_parts(2_963_000, 0)
+// Minimum execution time: 2_898_000 picoseconds.
+Weight::from_parts(1_532_342, 0)
.saturating_add(Weight::from_parts(0, 159279))
-// Standard Error: 3_764
-.saturating_add(Weight::from_parts(909_557, 0).saturating_mul(s.into()))
+// Standard Error: 4_736
+.saturating_add(Weight::from_parts(412_374, 0).saturating_mul(s.into()))
.saturating_add(T::DbWeight::get().reads(1))
.saturating_add(T::DbWeight::get().writes(1))
}
fn service_task_base() -> Weight {
// Proof Size summary in bytes:
// Measured: `0`
// Estimated: `0`
-// Minimum execution time: 5_172_000 picoseconds.
-Weight::from_parts(5_294_000, 0)
+// Minimum execution time: 3_171_000 picoseconds.
+Weight::from_parts(3_349_000, 0)
.saturating_add(Weight::from_parts(0, 0))
}
/// Storage: `Preimage::PreimageFor` (r:1 w:1)
/// Proof: `Preimage::PreimageFor` (`max_values`: None, `max_size`: Some(4194344), added: 4196819, mode: `Measured`)
-/// Storage: `Preimage::StatusFor` (r:1 w:1)
+/// Storage: `Preimage::StatusFor` (r:1 w:0)
/// Proof: `Preimage::StatusFor` (`max_values`: None, `max_size`: Some(91), added: 2566, mode: `MaxEncodedLen`)
+/// Storage: `Preimage::RequestStatusFor` (r:1 w:1)
+/// Proof: `Preimage::RequestStatusFor` (`max_values`: None, `max_size`: Some(91), added: 2566, mode: `MaxEncodedLen`)
/// The range of component `s` is `[128, 4194304]`.
fn service_task_fetched(s: u32, ) -> Weight {
// Proof Size summary in bytes:
-// Measured: `213 + s * (1 ±0)`
-// Estimated: `3678 + s * (1 ±0)`
-// Minimum execution time: 19_704_000 picoseconds.
-Weight::from_parts(19_903_000, 0)
-.saturating_add(Weight::from_parts(0, 3678))
-// Standard Error: 5
-.saturating_add(Weight::from_parts(1_394, 0).saturating_mul(s.into()))
-.saturating_add(T::DbWeight::get().reads(2))
+// Measured: `246 + s * (1 ±0)`
+// Estimated: `3711 + s * (1 ±0)`
+// Minimum execution time: 17_329_000 picoseconds.
+Weight::from_parts(17_604_000, 0)
+.saturating_add(Weight::from_parts(0, 3711))
+// Standard Error: 1
+.saturating_add(Weight::from_parts(1_256, 0).saturating_mul(s.into()))
+.saturating_add(T::DbWeight::get().reads(3))
.saturating_add(T::DbWeight::get().writes(2))
.saturating_add(Weight::from_parts(0, 1).saturating_mul(s.into()))
}
@@ -108,33 +109,33 @@ impl<T: frame_system::Config> pallet_scheduler::WeightInfo for WeightInfo<T> {
// Proof Size summary in bytes:
// Measured: `0`
// Estimated: `0`
-// Minimum execution time: 6_359_000 picoseconds.
-Weight::from_parts(6_599_000, 0)
+// Minimum execution time: 4_503_000 picoseconds.
+Weight::from_parts(4_677_000, 0)
.saturating_add(Weight::from_parts(0, 0))
.saturating_add(T::DbWeight::get().writes(1))
}
fn service_task_periodic() -> Weight {
// Proof Size summary in bytes:
// Measured: `0`
// Estimated: `0`
-// Minimum execution time: 5_217_000 picoseconds.
-Weight::from_parts(5_333_000, 0)
+// Minimum execution time: 3_145_000 picoseconds.
+Weight::from_parts(3_252_000, 0)
.saturating_add(Weight::from_parts(0, 0))
}
fn execute_dispatch_signed() -> Weight {
// Proof Size summary in bytes:
// Measured: `0`
// Estimated: `0`
-// Minimum execution time: 2_406_000 picoseconds.
-Weight::from_parts(2_541_000, 0)
+// Minimum execution time: 1_804_000 picoseconds.
+Weight::from_parts(1_891_000, 0)
.saturating_add(Weight::from_parts(0, 0))
}
fn execute_dispatch_unsigned() -> Weight {
// Proof Size summary in bytes:
// Measured: `0`
// Estimated: `0`
-// Minimum execution time: 2_370_000 picoseconds.
-Weight::from_parts(2_561_000, 0)
+// Minimum execution time: 1_706_000 picoseconds.
+Weight::from_parts(1_776_000, 0)
.saturating_add(Weight::from_parts(0, 0))
}
/// Storage: `Scheduler::Agenda` (r:1 w:1)
@@ -144,11 +145,11 @@ impl<T: frame_system::Config> pallet_scheduler::WeightInfo for WeightInfo<T> {
// Proof Size summary in bytes:
// Measured: `77 + s * (177 ±0)`
// Estimated: `159279`
-// Minimum execution time: 11_784_000 picoseconds.
-Weight::from_parts(5_574_404, 0)
+// Minimum execution time: 8_629_000 picoseconds.
+Weight::from_parts(6_707_232, 0)
.saturating_add(Weight::from_parts(0, 159279))
-// Standard Error: 7_217
-.saturating_add(Weight::from_parts(1_035_248, 0).saturating_mul(s.into()))
+// Standard Error: 5_580
+.saturating_add(Weight::from_parts(471_827, 0).saturating_mul(s.into()))
.saturating_add(T::DbWeight::get().reads(1))
.saturating_add(T::DbWeight::get().writes(1))
}
@@ -161,11 +162,11 @@ impl<T: frame_system::Config> pallet_scheduler::WeightInfo for WeightInfo<T> {
// Proof Size summary in bytes:
// Measured: `77 + s * (177 ±0)`
// Estimated: `159279`
-// Minimum execution time: 16_373_000 picoseconds.
-Weight::from_parts(3_088_135, 0)
+// Minimum execution time: 12_675_000 picoseconds.
+Weight::from_parts(7_791_682, 0)
.saturating_add(Weight::from_parts(0, 159279))
-// Standard Error: 7_095
-.saturating_add(Weight::from_parts(1_745_270, 0).saturating_mul(s.into()))
+// Standard Error: 5_381
+.saturating_add(Weight::from_parts(653_023, 0).saturating_mul(s.into()))
.saturating_add(T::DbWeight::get().reads(1))
.saturating_add(T::DbWeight::get().writes(2))
}
@@ -178,11 +179,11 @@ impl<T: frame_system::Config> pallet_scheduler::WeightInfo for WeightInfo<T> {
// Proof Size summary in bytes:
// Measured: `468 + s * (179 ±0)`
// Estimated: `159279`
-// Minimum execution time: 14_822_000 picoseconds.
-Weight::from_parts(9_591_402, 0)
+// Minimum execution time: 11_908_000 picoseconds.
+Weight::from_parts(11_833_059, 0)
.saturating_add(Weight::from_parts(0, 159279))
-// Standard Error: 7_151
-.saturating_add(Weight::from_parts(1_058_408, 0).saturating_mul(s.into()))
+// Standard Error: 5_662
+.saturating_add(Weight::from_parts(482_816, 0).saturating_mul(s.into()))
.saturating_add(T::DbWeight::get().reads(2))
.saturating_add(T::DbWeight::get().writes(2))
}
@@ -195,12 +196,91 @@ impl<T: frame_system::Config> pallet_scheduler::WeightInfo for WeightInfo<T> {
// Proof Size summary in bytes:
// Measured: `509 + s * (179 ±0)`
// Estimated: `159279`
-// Minimum execution time: 18_541_000 picoseconds.
-Weight::from_parts(6_522_239, 0)
+// Minimum execution time: 15_506_000 picoseconds.
+Weight::from_parts(11_372_975, 0)
.saturating_add(Weight::from_parts(0, 159279))
-// Standard Error: 8_349
-.saturating_add(Weight::from_parts(1_760_431, 0).saturating_mul(s.into()))
+// Standard Error: 5_765
+.saturating_add(Weight::from_parts(656_322, 0).saturating_mul(s.into()))
.saturating_add(T::DbWeight::get().reads(2))
.saturating_add(T::DbWeight::get().writes(2))
}
+/// Storage: `Scheduler::Retries` (r:1 w:2)
+/// Proof: `Scheduler::Retries` (`max_values`: None, `max_size`: Some(30), added: 2505, mode: `MaxEncodedLen`)
+/// Storage: `Scheduler::Agenda` (r:1 w:1)
+/// Proof: `Scheduler::Agenda` (`max_values`: None, `max_size`: Some(155814), added: 158289, mode: `MaxEncodedLen`)
+/// Storage: `Scheduler::Lookup` (r:0 w:1)
+/// Proof: `Scheduler::Lookup` (`max_values`: None, `max_size`: Some(48), added: 2523, mode: `MaxEncodedLen`)
+/// The range of component `s` is `[1, 200]`.
+fn schedule_retry(s: u32, ) -> Weight {
+// Proof Size summary in bytes:
+// Measured: `159`
+// Estimated: `159279`
+// Minimum execution time: 14_069_000 picoseconds.
+Weight::from_parts(14_868_345, 0)
+.saturating_add(Weight::from_parts(0, 159279))
+// Standard Error: 425
+.saturating_add(Weight::from_parts(33_468, 0).saturating_mul(s.into()))
+.saturating_add(T::DbWeight::get().reads(2))
+.saturating_add(T::DbWeight::get().writes(4))
+}
+/// Storage: `Scheduler::Agenda` (r:1 w:0)
+/// Proof: `Scheduler::Agenda` (`max_values`: None, `max_size`: Some(155814), added: 158289, mode: `MaxEncodedLen`)
+/// Storage: `Scheduler::Retries` (r:0 w:1)
+/// Proof: `Scheduler::Retries` (`max_values`: None, `max_size`: Some(30), added: 2505, mode: `MaxEncodedLen`)
+fn set_retry() -> Weight {
+// Proof Size summary in bytes:
+// Measured: `77 + s * (177 ±0)`
+// Estimated: `159279`
+// Minimum execution time: 7_550_000 picoseconds.
+Weight::from_parts(6_735_955, 0)
+.saturating_add(Weight::from_parts(0, 159279))
+.saturating_add(T::DbWeight::get().reads(1))
+.saturating_add(T::DbWeight::get().writes(1))
+}
+/// Storage: `Scheduler::Lookup` (r:1 w:0)
+/// Proof: `Scheduler::Lookup` (`max_values`: None, `max_size`: Some(48), added: 2523, mode: `MaxEncodedLen`)
+/// Storage: `Scheduler::Agenda` (r:1 w:0)
+/// Proof: `Scheduler::Agenda` (`max_values`: None, `max_size`: Some(155814), added: 158289, mode: `MaxEncodedLen`)
+/// Storage: `Scheduler::Retries` (r:0 w:1)
+/// Proof: `Scheduler::Retries` (`max_values`: None, `max_size`: Some(30), added: 2505, mode: `MaxEncodedLen`)
+fn set_retry_named() -> Weight {
+// Proof Size summary in bytes:
+// Measured: `513 + s * (179 ±0)`
+// Estimated: `159279`
+// Minimum execution time: 11_017_000 picoseconds.
+Weight::from_parts(11_749_385, 0)
+.saturating_add(Weight::from_parts(0, 159279))
+.saturating_add(T::DbWeight::get().reads(2))
+.saturating_add(T::DbWeight::get().writes(1))
+}
+/// Storage: `Scheduler::Agenda` (r:1 w:0)
+/// Proof: `Scheduler::Agenda` (`max_values`: None, `max_size`: Some(155814), added: 158289, mode: `MaxEncodedLen`)
+/// Storage: `Scheduler::Retries` (r:0 w:1)
+/// Proof: `Scheduler::Retries` (`max_values`: None, `max_size`: Some(30), added: 2505, mode: `MaxEncodedLen`)
+fn cancel_retry() -> Weight {
+// Proof Size summary in bytes:
+// Measured: `77 + s * (177 ±0)`
+// Estimated: `159279`
+// Minimum execution time: 7_550_000 picoseconds.
+Weight::from_parts(6_735_955, 0)
+.saturating_add(Weight::from_parts(0, 159279))
+.saturating_add(T::DbWeight::get().reads(1))
+.saturating_add(T::DbWeight::get().writes(1))
+}
+/// Storage: `Scheduler::Lookup` (r:1 w:0)
+/// Proof: `Scheduler::Lookup` (`max_values`: None, `max_size`: Some(48), added: 2523, mode: `MaxEncodedLen`)
+/// Storage: `Scheduler::Agenda` (r:1 w:0)
+/// Proof: `Scheduler::Agenda` (`max_values`: None, `max_size`: Some(155814), added: 158289, mode: `MaxEncodedLen`)
+/// Storage: `Scheduler::Retries` (r:0 w:1)
+/// Proof: `Scheduler::Retries` (`max_values`: None, `max_size`: Some(30), added: 2505, mode: `MaxEncodedLen`)
+fn cancel_retry_named() -> Weight {
+// Proof Size summary in bytes:
+// Measured: `513 + s * (179 ±0)`
+// Estimated: `159279`
+// Minimum execution time: 11_017_000 picoseconds.
+Weight::from_parts(11_749_385, 0)
+.saturating_add(Weight::from_parts(0, 159279))
+.saturating_add(T::DbWeight::get().reads(2))
+.saturating_add(T::DbWeight::get().writes(1))
+}
}
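
For scale, evaluating the new `schedule_retry` formula at the top of its benchmarked range (s = 200) gives 14_868_345 + 33_468 * 200 = 21_561_945 picoseconds of ref time, plus the 159_279-byte proof-size bound, two database reads, and four writes. A small self-contained check of that arithmetic (illustrative only; `schedule_retry_ref_time` is not part of the generated file):

// Hand evaluation of the `schedule_retry` ref-time formula above.
fn schedule_retry_ref_time(s: u64) -> u64 {
    14_868_345 + 33_468 * s
}

fn main() {
    // Worst benchmarked case: s = 200 agenda entries.
    assert_eq!(schedule_retry_ref_time(200), 21_561_945); // picoseconds
}

A runtime consumes these functions through the pallet's `WeightInfo` associated type (typically `type WeightInfo = weights::pallet_scheduler::WeightInfo<Runtime>;` in its scheduler configuration), so the new retry extrinsics are charged according to the formulas above.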