staking miner: spawn separate task for each block #4716
Conversation
submit_and_watch xt in separate task.
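For illustration, a minimal tokio sketch of the shape named by the commit message: move the submit-and-watch call off the block-processing path. The function names here are hypothetical stand-ins, not the miner's actual API.

```rust
use tokio::task::JoinHandle;

// Hypothetical stand-in for the real call: submit the extrinsic and watch
// its transaction-progress events until it is included in a block.
async fn submit_and_watch(xt: Vec<u8>) {
	let _ = xt;
}

// Instead of awaiting the submission inline (which would stall the loop
// that processes incoming block headers), spawn it as its own task so the
// miner can move on to the next block immediately.
fn submit_in_background(xt: Vec<u8>) -> JoinHandle<()> {
	tokio::spawn(submit_and_watch(xt))
}
```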
```rust
	vec![],
).await?;

// grab an externalities without staking, just the election snapshot.
let mut ext = match crate::create_election_ext::<Runtime, Block>(
```
Beyond the scope of this PR, but wondering if it makes sense to parallelize check_versions & ensure_signed_phase & create_election_ext, so we can start creating the election_ext immediately, and if it's not the correct version or not the signed phase, send a message to the create_election_ext task to stop?
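A sketch of that idea, assuming tokio and hypothetical stub signatures (the real functions take an RPC client, runtime types, etc.):

```rust
use tokio::task::JoinHandle;

// Hypothetical stubs for the three calls discussed above.
async fn check_versions() -> Result<(), ()> { Ok(()) }
async fn ensure_signed_phase() -> Result<(), ()> { Ok(()) }
async fn create_election_ext() -> Result<String, ()> { Ok("ext".into()) }

async fn prepare() -> Result<String, ()> {
	// Kick off the expensive snapshot fetch immediately, in its own task.
	let ext_task: JoinHandle<Result<String, ()>> = tokio::spawn(create_election_ext());

	// Run the cheap guards concurrently while the snapshot downloads.
	let (versions, phase) = tokio::join!(check_versions(), ensure_signed_phase());

	if versions.is_err() || phase.is_err() {
		// Wrong runtime version or not in the signed phase: the snapshot
		// is useless, so cancel the task instead of letting it finish.
		ext_task.abort();
		return Err(());
	}

	ext_task.await.map_err(|_| ())?
}
```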
Should be possible, but the lowest-hanging fruit is to re-use the existing WS connection and pass Arc<WsClient> into remote ext instead of creating a new connection for each create_election_ext, I think.
My hunch is that it takes a couple of seconds to establish a new ws connection.
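A sketch of the connection-reuse idea, assuming the jsonrpsee WS client of that era; the create_election_ext signature here is a hypothetical simplification:

```rust
use std::sync::Arc;
use jsonrpsee::ws_client::{WsClient, WsClientBuilder};

// Build the WS connection once at startup...
async fn connect(url: &str) -> Result<Arc<WsClient>, jsonrpsee::core::Error> {
	let client = WsClientBuilder::default().build(url).await?;
	Ok(Arc::new(client))
}

// ...then hand out cheap `Arc` clones instead of paying the connection
// handshake on every snapshot build.
async fn create_election_ext(client: Arc<WsClient>) {
	// the storage RPC calls for the snapshot would go through `client` here
	let _ = client;
}
```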
Yeah, makes sense. What you are describing might be a good issue for interviews as well. cc @kianenigma
```rust
if ensure_no_previous_solution::<Runtime, Block>(&mut ext, &signer.account).await.is_err() {
	log::debug!(target: LOG_TARGET, "We already have a solution in this phase, skipping.");
	return;
}
```
In the same vein as my note above, maybe ensure_no_previous_solution & mine_with & get_account_info could be parallelized too, so we start mining as soon as we have the ext and then stop if there already is a solution?
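A sketch of that shape with hypothetical stubs; note that real mining is CPU-bound, so a plain `abort()` only takes effect at an `.await` point, and a real version would likely need `spawn_blocking` plus a cancellation flag:

```rust
use tokio::task::JoinHandle;

// Hypothetical stubs for the calls named above.
async fn ensure_no_previous_solution() -> Result<(), ()> { Ok(()) }
async fn mine_solution() -> u64 { 42 }

async fn mine_unless_already_solved() -> Option<u64> {
	// Start mining as soon as the externalities are ready...
	let mining: JoinHandle<u64> = tokio::spawn(mine_solution());

	// ...and check for an existing solution concurrently.
	if ensure_no_previous_solution().await.is_err() {
		// Someone already submitted for this round: stop mining.
		mining.abort();
		return None;
	}

	mining.await.ok()
}
```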
```
staking-miner dry-run
```

### Test locally
kianenigma left a comment
Looks broadly good to me. If it's tested and works on Westend, then we can merge ahead. Thanks!
All in all, I would argue that for now we don't necessarily need to care too much about optimisations etc. At this point, my main concern would be for the code to be maintainable. The two main steps to achieve this are:
Yeah, you made that clear. Yes, I tested both on the local dev chain and on Westend, but I don't remember which commit of this I tested. I realized that the logs from
This PR makes a couple of optimizations:

- `submit_and_watch` for the extrinsic is spawned in a separate task for each block, so the miner doesn't block on watching the submission.
- `JsonRpseeError::RestartNeeded` is handled explicitly: if it occurs, the connection is closed and the client is useless after that, except for some buffered responses on subscriptions, but that doesn't help because we need the account state (via RPC calls), which will fail.

However, I haven't been able to quantify these optimizations against the current implementation, so it's just my hunch that it helps and that `headers` should never be missed because the subscription buffer gets full and messages are dropped.
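For illustration, a minimal sketch of the `RestartNeeded` handling described above, assuming a jsonrpsee version from that era (the variant's payload and module path differ across jsonrpsee releases):

```rust
use jsonrpsee::core::Error as JsonRpseeError;
use jsonrpsee::ws_client::{WsClient, WsClientBuilder};

// If a call fails with `RestartNeeded`, the background connection task has
// died: the old client can never recover (only already-buffered
// subscription messages remain), so build a fresh client instead of
// retrying on the dead one.
async fn reconnect_on_restart_needed(
	url: &str,
	err: JsonRpseeError,
) -> Result<WsClient, JsonRpseeError> {
	match err {
		JsonRpseeError::RestartNeeded(_) => {
			log::warn!("ws connection closed; reconnecting to {}", url);
			WsClientBuilder::default().build(url).await
		}
		other => Err(other),
	}
}
```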