Conversation
Codecov Report

Patch coverage:

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main     #851      +/-   ##
==========================================
+ Coverage   64.44%   65.65%   +1.21%
==========================================
  Files          10       45      +35
  Lines         225     1581    +1356
  Branches       52       54       +2
==========================================
+ Hits          145     1038     +893
- Misses         60      523     +463
  Partials       20       20
==========================================
```
☔ View full report in Codecov by Sentry.
```diff
- bytes memory payload = NativeTokensTypes.Mint(token, dest, recipient, amount);
+ bytes memory payload = NativeTokensTypes.Mint(address(registry), token, dest, recipient, amount);
  outboundQueue.submit{ value: msg.value }(assetHubParaID, payload);
```
So the registry is now the dummy address that we will use to configure Statemine/Bridgehub?
Yes, indeed.
All assets transferred across the bridge to Asset Hub should have a Location of the form: `/GlobalConsensus(Ethereum)/AccountKey20(Registry)/AccountKey20(TokenContractAddress)`
* Add previous Merkle proof verifier
* Switch BeefyClient to previous MerkleProof verifier
* Switch ParachainClient to previous MerkleProof verifier
* Remove unused function
* Update polkadot to include BEEFY revert
* Remove note about not checking index
* Remove leading underscore
* Don't sort hash pairs when generating test data
* Print logs from forge test
* wat
* Fix library function signatures
* Remove test logs
* Use internal modifier in Bitfield
* Revert modifier so that BeefyClient test works
* Guard against short proofs
@doubledup @vgeddes But I cannot find the counterpart revert in this PR. Is it not required, or did we miss it here?
Thanks Ron, I definitely overlooked those changes. As the relayer is currently failing when it tries to call
The existing implementation was deficient in that it dropped messages if the maximum number of pending messages was breached. Due to the nature of our bridging design and related weight calculations, there is a hard limit on the number of outgoing messages which can be committed per block. Any messages beyond that limit need to be buffered somewhere, or dropped.
Dropping messages is bad for many reasons.
The new implementation takes advantage of a new generalised message queueing pallet in Substrate: MessageQueue. Its main responsibility is to store pending messages in a PoV-efficient manner, and coordinate their processing by other pallets. It never drops messages.
At a very high level, the messaging flow now looks like this:

1. A message is submitted to `OutboundQueue` via `OutboundQueue::submit`
2. `OutboundQueue` forwards the message to `MessageQueue`
3. `MessageQueue` hooks back into `OutboundQueue` to process the message when there is enough remaining weight in the block

Other improvements:
* The `snowbridge-query-events` tool is no longer needed, since the pallet stores the committed messages in a way that GSRPC can now query using normal storage queries.

Todo:
Cumulus companion PR: Snowfork/cumulus#36
Fixes: SNO-488, SNO-511, SNO-474