This repository was archived by the owner on Nov 15, 2023. It is now read-only.

Conversation

@tomusdrw
Contributor

  • Exposing storage changes via RPC pub-sub
  • Allows one to subscribe to particular storage keys

CC @jacogr

@tomusdrw tomusdrw added A0-please_review Pull request needs code review. M6-rpcapi labels Jul 31, 2018
#[pubsub(name = "chain_newHead")] {
/// New head subscription
#[rpc(name = "subscribe_newHead")]
#[rpc(name = "subscribe_newHead", alias = ["chain_subscribeNewHead", ])]
Contributor

Great. Maybe it is better to have the "old" as the alias and the "new" one as the name?

(e.g. chain_subscribeNewHead is actually probably the preferred one to match since it aligns with other RPCs, my gut tells me the preferred one should be the "default")

Contributor

@dvdplm dvdplm left a comment

Good stuff. I can't say I understand it all, but overall very readable and interesting.


/// Get storage changes event stream.
///
/// Passing `None` as keys subscribes to all possible keys
Contributor

typo: …as keys should be …as key

Contributor Author

Rephrased the whole sentence, hope it's clearer now.
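The `None`-means-everything convention under discussion can be sketched as follows (illustrative names and plain byte keys, not the actual Substrate API):

```rust
// Hypothetical sketch: passing `None` as the key filter subscribes to every key.
fn key_matches(filter: &Option<Vec<Vec<u8>>>, key: &[u8]) -> bool {
    match filter {
        // No filter supplied: wildcard subscription, every key matches.
        None => true,
        // Explicit filter: only the listed keys match.
        Some(keys) => keys.iter().any(|k| k.as_slice() == key),
    }
}

fn main() {
    // Wildcard subscription sees any key.
    assert!(key_matches(&None, b"balance"));
    // Filtered subscription sees only its own keys.
    let filter = Some(vec![b"balance".to_vec()]);
    assert!(key_matches(&filter, b"balance"));
    assert!(!key_matches(&filter, b"nonce"));
}
```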

if let Some(storage_update) = storage_update {
if let Some((storage_update, changes)) = storage_update {
transaction.update_storage(storage_update)?;
// TODO [ToDr] How to handle re-orgs? Should we re-emit all storage changes?
Contributor

Couldn't we emit a re-org event with some meta data about what changed and let interested subscribers re-fetch?

Contributor Author

Yes, that's possible, although it's not easy to get storage changes for blocks that have already been imported/executed. Re-fetching changes would mean re-executing the blocks, but I suppose that, based on the filter_keys, we could just return all the storage values in the re-orged blocks.

subscribers.extend(listeners.iter());
}

if has_wildcard || listeners.is_some() {
Contributor

Couldn't you check for !subscribers.is_empty() here? Or is it faster to do it this way?

Contributor Author

I'm actually interested only in subscribers for that particular key. So if there is a set of changes:

[(1, Some(2)), (2, None), (3, Some(4))]

but we have no wildcard_listeners and only a listener for key=1, the changes vector will only contain [(StorageKey(1), Some(StorageData(2)))]
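The filtering described above can be sketched like this (plain integers stand in for the actual StorageKey/StorageData types):

```rust
// Hypothetical sketch: only changes for subscribed keys end up in the
// `changes` vector that is sent out, unless a wildcard listener exists.
fn filter_changes(
    all: &[(u32, Option<u32>)],
    subscribed: &[u32],
    has_wildcard: bool,
) -> Vec<(u32, Option<u32>)> {
    all.iter()
        .filter(|(key, _)| has_wildcard || subscribed.contains(key))
        .cloned()
        .collect()
}

fn main() {
    let all = vec![(1, Some(2)), (2, None), (3, Some(4))];
    // A listener only for key=1 and no wildcard: one change survives.
    assert_eq!(filter_changes(&all, &[1], false), vec![(1, Some(2))]);
    // A wildcard listener receives every change.
    assert_eq!(filter_changes(&all, &[], true), all);
}
```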

filter: filter.clone(),
})).is_err()
},
None => false,
Contributor

If I read this right, if the subscriber is gone when we get here, then we assume they've been removed properly already, so we're returning false to avoid calling remove_subscriber() again for them? It's a bit unclear to me how they can still be in the subscribers collection though, can you elaborate on that?

Contributor Author

Indeed, I could actually .expect() here, since if the structure is consistent the subscribers should always be in self.sinks. The check here is superfluous.
I could refactor to:

let &(ref sink, ref filter) = self.sinks.get(&subscriber).expect("subscribers returned from self.listeners are always in self.sinks; qed");
let result = sink.unbounded_send((hash.clone(), StorageChangeSet {
    changes: changes.clone(),
    filter: filter.clone(),
 }));
if result.is_err() {
  self.remove_subscriber(subscriber);
}

or

if let Some(&(ref sink, ref filter)) = self.sinks.get(&subscriber) {
   let result = sink.unbounded_send((hash.clone(), StorageChangeSet {
      changes: changes.clone(),
      filter: filter.clone(),
   }));
   if result.is_err() {
     self.remove_subscriber(subscriber);
   }
}

Which one do you prefer?

Contributor Author

Oh, actually I can't since .get() borrows immutably, so remove_subscriber has to be outside of the scope.
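The borrow-checker constraint mentioned here can be illustrated with a minimal stand-in (a HashMap of flags rather than real sinks): the immutable borrow from .get() has to end before the mutable remove is allowed.

```rust
use std::collections::HashMap;

// Hypothetical sketch: `get()` borrows the map immutably, so the
// mutable `remove` must happen after that borrow's scope ends.
fn send_or_remove(sinks: &mut HashMap<u64, bool>, subscriber: u64) {
    // Scope the immutable borrow; `alive == false` stands in for a failed send.
    let failed = {
        let alive = sinks
            .get(&subscriber)
            .expect("subscribers returned from listeners are always in sinks; qed");
        !*alive
    };
    // The immutable borrow is gone here, so mutation is permitted.
    if failed {
        sinks.remove(&subscriber);
    }
}

fn main() {
    let mut sinks = HashMap::new();
    sinks.insert(7u64, false); // a sink whose send would fail
    send_or_remove(&mut sinks, 7);
    assert!(!sinks.contains_key(&7));
}
```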

assert_eq!(notifications.listeners.len(), 2);
assert_eq!(notifications.wildcard_listeners.len(), 1);
}

Contributor

The channels are closed here, correct?

Contributor Author

Yes, and since the receiving end is dropped, sending to such a channel will trigger an error.
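The same closed-channel behaviour can be demonstrated with std::sync::mpsc (the code under review uses futures unbounded channels, but the principle is identical):

```rust
use std::sync::mpsc;

fn main() {
    let (tx, rx) = mpsc::channel::<u32>();
    // Dropping the receiving end closes the channel...
    drop(rx);
    // ...so any further send returns an error, which the notifier
    // treats as the signal to remove that subscriber.
    assert!(tx.send(1).is_err());
}
```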

sink
.sink_map_err(|e| warn!("Error sending notifications: {:?}", e))
.send_all(stream)
// we ignore the resulting Stream (if the first stream is over we are unsubscribed)
Contributor

…is over…? Do you mean …is closed?

Contributor Author

is over as in is finished/is done, which means that the stream will not emit any more items.

/// Drain committed changes to an iterator.
///
/// Panics:
/// Will panic if there are any uncommitted prospective changes.
Contributor

Is "prospective" sort of like "pending"?

Contributor Author

Yes, those are changes that can still be easily discarded. You can see an example of usage inside the block builder:

  1. We run a transaction.
  2. It produces a set of prospective changes.
  3. If we detect that it's somehow invalid, we discard the prospective changes.
  4. If we accept the transaction, we commit the prospective changes.
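A minimal sketch of this prospective/committed lifecycle (illustrative names, not the actual OverlayedChanges API):

```rust
use std::collections::HashMap;

// Hypothetical overlay: writes land in `prospective` first and are either
// folded into `committed` or thrown away.
#[derive(Default)]
struct Overlay {
    prospective: HashMap<u32, u32>,
    committed: HashMap<u32, u32>,
}

impl Overlay {
    fn set(&mut self, key: u32, val: u32) {
        self.prospective.insert(key, val);
    }
    // Transaction accepted: fold prospective changes into committed.
    fn commit_prospective(&mut self) {
        let drained: Vec<_> = self.prospective.drain().collect();
        self.committed.extend(drained);
    }
    // Transaction invalid: discard the prospective changes.
    fn discard_prospective(&mut self) {
        self.prospective.clear();
    }
}

fn main() {
    let mut overlay = Overlay::default();
    overlay.set(1, 10);
    overlay.discard_prospective(); // invalid tx: nothing is committed
    assert!(overlay.committed.is_empty());

    overlay.set(2, 20);
    overlay.commit_prospective(); // accepted tx: change becomes committed
    assert_eq!(overlay.committed.get(&2), Some(&20));
    assert!(overlay.prospective.is_empty());
}
```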

if let Some((storage_update, changes)) = storage_update {
transaction.update_storage(storage_update)?;
// TODO [ToDr] How to handle re-orgs? Should we re-emit all storage changes?
self.storage_notifications.lock()
Contributor

Are you sure that it should be called before transaction is committed?

Contributor Author

Good point. Moved the notification after commit and also guarded by the same if as block import notification.

Member

@gavofyork gavofyork left a comment

Aside from the minor comment

@@ -0,0 +1,267 @@
// Copyright 2017 Parity Technologies (UK) Ltd.
// This file is part of Polkadot.
Member

Substrate, not Polkadot :)

Contributor Author

Fixed.

@gavofyork gavofyork removed the A0-please_review Pull request needs code review. label Aug 1, 2018
@tomusdrw tomusdrw force-pushed the td-storage-events branch from 3d6c0d9 to e5e7c4c Compare August 1, 2018 11:37
@svyatonik svyatonik merged commit 757a721 into master Aug 1, 2018
@svyatonik svyatonik deleted the td-storage-events branch August 1, 2018 12:29
dvdplm added a commit that referenced this pull request Aug 1, 2018
* master:
  Collator for the "adder" (formerly basic-add) parachain and various small fixes (#438)
  Storage changes subscription (#464)
  Wasm execution optimizations (#466)
  Fix the --key generation (#475)
  Fix typo in service.rs (#472)
  Fix session phase in early-exit (#453)
  Make ping unidirectional (#458)
  Update README.adoc
gavofyork pushed a commit that referenced this pull request Aug 10, 2018
* Initial implementation of storage events.

* Attaching storage events.

* Expose storage modification stream over RPC.

* Use FNV for hashing small keys.

* Fix and add tests.

* Swap alias and RPC name.

* Fix demo.

* Addressing review grumbles.

* Fix comment.
liuchengxu pushed a commit to autonomys/substrate that referenced this pull request Jun 3, 2022
helin6 pushed a commit to boolnetwork/substrate that referenced this pull request Jul 25, 2023
* Bump release version to v0.18.0

Signed-off-by: Alexandru Vasile <[email protected]>

* Update changelog

Signed-off-by: Alexandru Vasile <[email protected]>

* Update dependency version to v0.18.0

Signed-off-by: Alexandru Vasile <[email protected]>

* Modify changelog

Signed-off-by: Alexandru Vasile <[email protected]>

* Move changelog entries from added to changed