A short intro to what I am trying to accomplish:
Using Grandpa + Babe
Ideally, each block has a modified header that includes commitment metadata, so that light clients can query full nodes for data and check that they indeed have it.
A high-level algorithm looks like this:
- Get finalized block data somehow.
- Build metadata from the block's body bytes (a Kate commitment).
- Save that metadata on-chain, plus header data for light clients (it is probably not possible to modify the header after the block is finalized).
- Other, non-proposing nodes must be able to quickly verify that metadata for validity (this process is much faster).
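The build/verify asymmetry in the steps above can be sketched as follows. This is a toy model, not a real Kate (KZG) commitment: the chunk hashing below is a stand-in, and `build_metadata`/`verify_metadata` are hypothetical names, but it shows the flow the proposer and the non-proposing verifiers would each run.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy stand-in for a Kate commitment: in reality this would be a KZG
// polynomial commitment over the block body's chunks. Here we just hash
// fixed-size chunks so the build/verify flow is visible.
fn build_metadata(body: &[u8]) -> Vec<u64> {
    body.chunks(32)
        .map(|chunk| {
            let mut h = DefaultHasher::new();
            chunk.hash(&mut h);
            h.finish()
        })
        .collect()
}

// Non-proposing nodes re-derive the metadata and compare. With a real
// polynomial commitment, verification is much cheaper than construction;
// in this toy model the two cost the same.
fn verify_metadata(body: &[u8], claimed: &[u64]) -> bool {
    build_metadata(body) == *claimed
}

fn main() {
    let body = vec![0u8; 128]; // pretend block body bytes
    let meta = build_metadata(&body);
    assert!(verify_metadata(&body, &meta));
    assert!(!verify_metadata(&[1u8; 128], &meta));
    println!("metadata ok: {} chunks", meta.len());
}
```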
There are a few problems with the whole process:
- Metadata processing is slow: ~12 seconds on average.
- The result must be validated within consensus or afterward, and must trigger slashing if the validity check fails.
I have already looked into a few options based on the documentation:
Block import pipeline
This was the most obvious place to go: do it as the last step right before the Client. But it seems I can't access on-chain storage from within it. Assuming I can, would a 12-second process break anything?
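The wrapping pattern I have in mind looks roughly like this. It is a simplified, self-contained model, not Substrate's actual `sc_consensus::BlockImport` trait (whose methods take an `BlockImportParams` and are async); the types and the sum-based check are stand-ins. The point it illustrates is that the importer sits on the sync path, so only a cheap verification belongs here:

```rust
// Simplified stand-ins for a block and for Substrate's BlockImport trait.
struct Block {
    body: Vec<u8>,
    claimed_commitment: u64,
}

trait BlockImport {
    fn import_block(&mut self, block: Block) -> Result<(), String>;
}

// The "inner" importer that the wrapper delegates to (in Substrate this
// would ultimately reach the Client).
struct InnerImport;
impl BlockImport for InnerImport {
    fn import_block(&mut self, _block: Block) -> Result<(), String> {
        Ok(())
    }
}

// Custom importer: verify the metadata, then delegate.
struct MetadataVerifyingImport<I: BlockImport> {
    inner: I,
}

impl<I: BlockImport> BlockImport for MetadataVerifyingImport<I> {
    fn import_block(&mut self, block: Block) -> Result<(), String> {
        // Hypothetical fast check; a real implementation would verify the
        // Kate commitment against the block body. Because this runs on the
        // import path, a 12-second build step here would stall syncing --
        // only the quick verification side should live in the importer.
        let expected: u64 = block.body.iter().map(|&b| b as u64).sum();
        if expected != block.claimed_commitment {
            return Err("commitment mismatch".into());
        }
        self.inner.import_block(block)
    }
}

fn main() {
    let mut importer = MetadataVerifyingImport { inner: InnerImport };
    let good = Block { body: vec![1, 2, 3], claimed_commitment: 6 };
    assert!(importer.import_block(good).is_ok());
    let bad = Block { body: vec![1, 2, 3], claimed_commitment: 7 };
    assert!(importer.import_block(bad).is_err());
    println!("import checks passed");
}
```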
Offchain worker
This was my second choice, since it is automatically called after the block is finalized. But the hook itself only receives the block number, and I can't get quick access to the block data there. Using a localhost RPC call sounds a bit hacky; I would prefer direct access to on-chain data, if possible.
The other problem with the offchain worker would be implementing custom slashing, because the network must ensure that metadata is built for every block, with no exceptions.
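For reference, the offchain-worker flow I describe above can be modeled like this. Everything here is a self-contained toy: `fetch_body` stands in for getting the block body out of band (e.g. the localhost RPC call mentioned above), and `submit_unsigned_result` stands in for submitting an unsigned transaction that carries the verdict back on-chain, where a pallet could drive the slashing logic. None of these names are real Substrate APIs.

```rust
use std::collections::HashMap;

// Toy model of a node running the offchain-worker flow described above.
struct Node {
    bodies: HashMap<u32, Vec<u8>>,
    reports: Vec<(u32, bool)>,
}

impl Node {
    // Stand-in for fetching the block body out of band; the real worker
    // hook only receives the block number.
    fn fetch_body(&self, number: u32) -> Option<&Vec<u8>> {
        self.bodies.get(&number)
    }

    // Stand-in for submitting an unsigned transaction with the verdict,
    // which an on-chain pallet could use to trigger slashing.
    fn submit_unsigned_result(&mut self, number: u32, ok: bool) {
        self.reports.push((number, ok));
    }

    // The worker: fetch the body, re-check the (toy) commitment, report.
    fn offchain_worker(&mut self, number: u32, claimed: u64) {
        let ok = match self.fetch_body(number) {
            Some(body) => body.iter().map(|&b| b as u64).sum::<u64>() == claimed,
            None => return, // body unavailable; nothing to report
        };
        self.submit_unsigned_result(number, ok);
    }
}

fn main() {
    let mut node = Node { bodies: HashMap::new(), reports: Vec::new() };
    node.bodies.insert(7, vec![1, 2, 3]);
    node.offchain_worker(7, 6); // matches: 1 + 2 + 3 == 6
    node.offchain_worker(7, 9); // mismatch: should be reported as invalid
    assert_eq!(node.reports, vec![(7, true), (7, false)]);
    println!("reports recorded: {}", node.reports.len());
}
```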
Is it possible to do all of this using Substrate?