Conversation
I think this has an overlap with #8230, which adds several similar metrics and logs many of the same events.

Yes, there is indeed, but it seems to focus on other metrics. @sistemd thanks for the PR, please also add an unincluded segment size metric. You could also check if we already have another metric for block authorship duration; if we don't, we should add that metric. We already have the log for it.

Ah, I was not aware of the overlap. However, the scope seems slightly different. In the parachain informant we can also give a bit more detailed info since we decode the header. Otherwise this is already looking pretty good, some points:
iulianbarbu
left a comment
I don't have much context over this, but I am catching up on the go. Nothing outstanding from me, just a few comments. Looking forward to the rest of the changes.
@sandreim Does this make sense? I'm not sure if those are the right metrics, but hopefully it's on the right track.

Sorry for re-requesting your review @skunert, I haven't addressed your comments above yet.

@skunert Should be ready now 👍
cumulus/client/service/src/lib.rs
loop {
    select! {
        (backed_block, relay_block) = backed_blocks.select_next_some() => {
            match &last_backed_block {
What if there is a new included block but nothing backed? This can happen as well. In that case we should still print the included update.
I did not know this was possible. In this case I think it's better to separate the included and backed logs again. Because what do you log if you get to included block N at relay block 15 and after that you get backed block 0 at relay block 1 for example? How are you supposed to log both at the same time in this case? It starts to get a bit confusing, I think separating them is the simplest option. 24747e7
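The separation argument above can be sketched in a few lines. This is a hypothetical illustration, not the PR's actual code (the `Update` enum and `log_line` helper are made up for the example): with independent log lines per event, an included block at a high relay block can still be reported even when the latest backed block is far behind, so no single combined message has to reconcile the two.

```rust
// Hypothetical sketch: backed and included updates are logged separately,
// so each event yields its own line regardless of the other stream.
enum Update {
    Backed { relay_number: u32, block_number: u32 },
    Included { relay_number: u32, block_number: u32 },
}

fn log_line(update: &Update) -> String {
    match update {
        Update::Backed { relay_number, block_number } =>
            format!("New backed block at relay block #{relay_number}: #{block_number}"),
        Update::Included { relay_number, block_number } =>
            format!("New included block at relay block #{relay_number}: #{block_number}"),
    }
}

fn main() {
    // The confusing ordering from the discussion: included block at relay
    // block 15 arrives before backed block 0 at relay block 1.
    let updates = [
        Update::Included { relay_number: 15, block_number: 42 },
        Update::Backed { relay_number: 1, block_number: 0 },
    ];
    for update in &updates {
        println!("{}", log_line(update));
    }
}
```

With a combined message, the second event would force an awkward "included #42 at relay #15, backed #0 at relay #1" line; separate lines avoid that entirely.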
iulianbarbu
left a comment
Looks pretty nice! I am wondering if you tried running a parachain with zombienet and checking how the logs and prometheus metrics show up while relaychain & parachain blocks get produced and finalized. I would just test it at the end to wrap everything up, once open comments are resolved. LMK what you think.
cumulus/client/service/src/lib.rs
log::info!(
    "New backed block at relay block #{} ({}): #{} ({})",
    relay_block.number(),
    relay_block.hash(),
    backed_block.number(),
    backed_block.hash(),
);
why not use tracing? I think that even if it is not structured, it should still work
This crate uses log everywhere, that's why I used log as well. Maybe the whole crate should change? BTW, why is this not standard across the entire codebase? Why would you ever use log instead of tracing anyway?
I think this codebase started long ago, when maybe tracing wasn't stabilized or wasn't as popular (just a guess). Currently the binaries we have use a tracing-subscriber that is configured to capture logs coming from both tracing & log. I noticed the initiative is to use more of tracing (some places had recent changes from log to tracing, e.g. transaction-pool). All in all, I think the tracing ecosystem is a better fit for what we do here. I haven't used log that much in the past, but it seems more suited to small projects like simple tools that aren't oriented toward running in a production environment (although that's certainly not the case here, and it can still be used in production).
This crate uses log everywhere, that's why I used log as well. Maybe the whole crate should change?
Looks like it pulls log via sc_telemetry. However, its purpose is not to provide logging utilities for consumer crates; it is meant to consume tracing-based events via the workers it defines, as far as I understand from the public docs. I think we should use sc_tracing, even if our public docs mention the crate is unstable. So yeah, maybe the whole crate needs to change, but of course that should be separate work. I guess we can live with the log crate for a while here.
I can resolve this once a new issue is created that tracks this (we can also add a getting started label to the issue since it should be trivial - it can be done by you or anyone else who has time).
Yeah, looks good, thx
1b595dd to
2fa4062
/cmd prdoc --audience runtime_dev --audience node_operator --bump patch
skunert
left a comment
Last nits, then it's ready to merge IMO.
Please add it to the parachain template as @iulianbarbu suggested here:
Nice work!
Hmm, I am not fully convinced that we can remove it. For sure usage should be discouraged if one can get away with omni-node. But in the end we still need a way to start a fresh parachain, and the simple node-template will not be a good starting point. People would need to look at omni-node-lib and copy their service from there.
I think
Upsy, I think you are right. I looked at it wrong and thought we were adding the task to omni-node only, but now I see it is in cumulus/client/service. We should be fine.
Nice work 💪 ! Just a few more nits.
LE: it would be good to include @skunert's suggestions before merging; CI needs some love too.
cumulus/parachains/runtimes/testing/yet-another-parachain/Cargo.toml
130d6ef to
7713c0a
All GitHub workflows were cancelled due to the failure of one of the required jobs.
Closes #8216.

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Upgrade all polkadot-sdk git dependencies from branch `stable2503` to `stable2506`. Upgrade all Frontier EVM dependencies from branch `stable2503` to tag `frontier-stable2506`. Fix all compilation errors and test failures caused by upstream breaking API changes in polkadot-sdk stable2506 (134 PRs) and 9 patch releases.

Upstream breaking changes addressed:
- Remove local `AccountIdFor` alias, now provided by `frame_system::pallet_prelude` (paritytech/polkadot-sdk#7229)
- Add `RelayParentOffset = ConstU32<0>` to the parachain system config (paritytech/polkadot-sdk#8299)
- Fix `Outcome` enum pattern matching: `Error` is now a tuple variant wrapping `InstructionError` (paritytech/polkadot-sdk#8535)
- Remove `RuntimeEvent` from `pallet_evm::Config` and `pallet_ethereum::Config` (frontier stable2506)
- Add `RuntimeEvent` to `pallet_session::historical::Config`
- Remove `PassByInner` import, use `.0` for `H256` inner access (paritytech/polkadot-sdk#7375)
- Destructure 4 return values from `build_relay_chain_interface` (paritytech/polkadot-sdk#8072)
- Add `metrics` field to `BuildNetworkParams` (paritytech/polkadot-sdk#8332)
- Add `prometheus_registry` to `StartRelayChainTasksParams` (paritytech/polkadot-sdk#8332)

Release: https://github.com/paritytech/polkadot-sdk/releases/tag/polkadot-stable2506

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
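The `RelayParentOffset` item from paritytech/polkadot-sdk#8299 is a one-line config change. A minimal sketch of where it lands, assuming a typical runtime (the surrounding `impl` block and `Runtime` type are illustrative; only the new associated type is taken from the list above):

```rust
// Illustrative fragment, not this repository's actual runtime config:
// after the stable2506 upgrade, cumulus_pallet_parachain_system::Config
// requires a RelayParentOffset associated type. A value of zero
// presumably preserves the pre-upgrade behavior (no offset from the
// latest relay parent).
impl cumulus_pallet_parachain_system::Config for Runtime {
    // ...existing associated types elided...
    type RelayParentOffset = ConstU32<0>;
}
```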