diff --git a/node/core/backing/src/lib.rs b/node/core/backing/src/lib.rs
index 1d17827d0a37..e8dbe4319dd7 100644
--- a/node/core/backing/src/lib.rs
+++ b/node/core/backing/src/lib.rs
@@ -137,7 +137,7 @@ struct CandidateBackingJob {
 	issued_statements: HashSet<CandidateHash>,
 	/// These candidates are undergoing validation in the background.
 	awaiting_validation: HashSet<CandidateHash>,
-	/// `Some(h)` if this job has already issues `Seconded` statemt for some candidate with `h` hash.
+	/// `Some(h)` if this job has already issued a `Seconded` statement for some candidate with `h` hash.
 	seconded: Option<CandidateHash>,
 	/// The candidates that are includable, by hash. Each entry here indicates
 	/// that we've sent the provisioner the backed candidate.
diff --git a/node/overseer/src/lib.rs b/node/overseer/src/lib.rs
index 661ef79b0636..e65e0de8006d 100644
--- a/node/overseer/src/lib.rs
+++ b/node/overseer/src/lib.rs
@@ -17,7 +17,7 @@
 //! # Overseer
 //!
 //! `overseer` implements the Overseer architecture described in the
-//! [implementers-guide](https://github.com/paritytech/polkadot/blob/master/roadmap/implementers-guide/guide.md).
+//! [implementers-guide](https://w3f.github.io/parachain-implementers-guide/node/index.html).
 //! For the motivations behind implementing the overseer itself you should
 //! check out that guide, documentation in this crate will be mostly discussing
 //! technical stuff.
@@ -203,7 +203,7 @@ impl OverseerHandler {
 		self.send_and_log_error(Event::MsgToSubsystem(msg.into())).await
 	}
 
-	/// Inform the `Overseer` that that some block was finalized.
+	/// Inform the `Overseer` that some block was finalized.
 	#[tracing::instrument(level = "trace", skip(self), fields(subsystem = LOG_TARGET))]
 	pub async fn block_finalized(&mut self, block: BlockInfo) {
 		self.send_and_log_error(Event::BlockFinalized(block)).await
@@ -1002,7 +1002,7 @@ impl<S> Overseer<S>
 where
 	S: SpawnNamed,
 {
-	/// Create a new intance of the `Overseer` with a fixed set of [`Subsystem`]s.
+	/// Create a new instance of the `Overseer` with a fixed set of [`Subsystem`]s.
 	///
 	/// ```text
 	/// +------------------------------------+
diff --git a/roadmap/implementers-guide/src/node/README.md b/roadmap/implementers-guide/src/node/README.md
index 44eeb4bf977b..f20c970aff6c 100644
--- a/roadmap/implementers-guide/src/node/README.md
+++ b/roadmap/implementers-guide/src/node/README.md
@@ -10,7 +10,7 @@ The architecture of the node-side behavior aims to embody the Rust principles of
 
 Many operations that need to be carried out involve the network, which is asynchronous. This asynchrony affects all core subsystems that rely on the network as well. The approach of hierarchical state machines is well-suited to this kind of environment.
 
-We introduce 
+We introduce
 
 ## Components
 
@@ -26,6 +26,6 @@ The Node-side code comes with a set of assumptions that we build upon. These ass
 We assume the following constraints regarding provided basic functionality:
   * The underlying **consensus** algorithm, whether it is BABE or SASSAFRAS is implemented.
   * There is a **chain synchronization** protocol which will search for and download the longest available chains at all times.
-  * The **state** of all blocks at the head of the chain is available. There may be **state pruning** such that state of the last `k` blocks behind the last finalized block are is available, as well as the state of all their descendents. This assumption implies that the state of all active leaves and their last `k` ancestors are all available. The underlying implementation is expected to support `k` of a few hundred blocks, but we reduce this to a very conservative `k=5` for our purposes.
+  * The **state** of all blocks at the head of the chain is available. There may be **state pruning** such that the state of the last `k` blocks behind the last finalized block is available, as well as the state of all their descendants. This assumption implies that the state of all active leaves and their last `k` ancestors are all available. The underlying implementation is expected to support `k` of a few hundred blocks, but we reduce this to a very conservative `k=5` for our purposes.
   * There is an underlying **networking** framework which provides **peer discovery** services which will provide us with peers and will not create "loopback" connections to our own node. The number of peers we will have is assumed to be bounded at 1000.
   * There is a **transaction pool** and a **transaction propagation** mechanism which maintains a set of current transactions and distributes to connected peers. Current transactions are those which are not outdated relative to some "best" fork of the chain, which is part of the active heads, and have not been included in the best fork.