Generic request/response infrastructure for Polkadot#2352
Conversation
It is not protocol related at all; it is in fact only part of the subsystem communication, as it gets wrapped into the messages of each subsystem.
WIP: Does not compile.
The request multiplexer is moved to the bridge, as there the implementation is more straightforward: we can specialize on `AllMessages` for the multiplexing target. Sending of requests is mostly complete, apart from a few `From` instances. Receiving is also almost done; initialization needs to be fixed and the multiplexer needs to be invoked.
Subsystems are now able to receive and send requests and responses via the overseer.
- Start encoding at 0.
- Don't crash on zero protocols.
- Don't panic on not yet implemented request handling.
Use index 0 instead of 1. Co-authored-by: Andronik Ordian <write@reusable.software>
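As a rough illustration of the request/response flow described above (not the actual API added by this PR), the underlying pattern is a request that carries a oneshot sender for its response, while the requesting side awaits the paired receiver. All names below (`ExampleRequest`, `new_request`, `request_flow`) are made up for the sketch:

```rust
use futures::channel::oneshot;

// Illustrative request type: a payload plus the channel the responder uses to
// send back the answer.
struct ExampleRequest {
    payload: Vec<u8>,
    pending_response: oneshot::Sender<Vec<u8>>,
}

// Build a request together with the receiver the caller awaits for the response.
fn new_request(payload: Vec<u8>) -> (ExampleRequest, oneshot::Receiver<Vec<u8>>) {
    let (tx, rx) = oneshot::channel();
    (ExampleRequest { payload, pending_response: tx }, rx)
}

async fn request_flow() {
    let (request, response_rx) = new_request(b"ping".to_vec());

    // In the real code the request would be wrapped into a subsystem message and
    // sent via the overseer / network bridge; here we answer it locally to keep
    // the sketch self-contained.
    let ExampleRequest { payload, pending_response } = request;
    let _ = pending_response.send(payload);

    let response = response_rx.await.expect("responder dropped the sender");
    assert_eq!(response, b"ping".to_vec());
}

fn main() {
    futures::executor::block_on(request_flow());
}
```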
// Receiver is a fused stream, which allows for this simple handling of
// exhausted ones.
Poll::Ready(None) => {}
Is it expected that one of the receiver streams can end?
If not, maybe we should abort in this case by returning `Poll::Ready(None)` instead of silently ignoring it?
Also, is it safe to poll it if `is_terminated` is true? Not sure.
Receiver is a fused Stream, so it should be safe. Not sure about the first question; it seemed logical that the stream ends if all its sources are exhausted.
The only reason those streams should get exhausted is that we are shutting down; in that case it might or might not matter to deliver any messages still in the queues. It felt a bit safer to try to deliver them until everything is exhausted. On the other hand, receivers will hardly be able to respond anyway...
I guess that's an edge case which should not make much of a difference either way.
Receiver is a fused Stream, so it should be safe.
It can still panic:
https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=f1ac00093b376fc6cefb41033adff814
Oh boy, thanks a lot! I could have sworn that it is safe to poll a fused stream after exhaustion.
I think the safest thing would be to return early on the first closed stream, i.e. return `Ready(None)`.
I think the issue is slightly skewed: if one fuses the inner streams so they can be polled, it will work.
So what you want is: https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=691bfc7175278e2887340f5289df63d3
Ok, so I confused a Stream implementing FusedStream with a stream that got fused by means of `fuse()`. This definitely deserves a test. Thank you guys. Will provide a fix in a minute.
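For the record, here is a minimal sketch of the pattern the fix converges on, assuming the multiplexer drives a set of `futures::channel::mpsc` receivers (the `Multiplexer` type below is illustrative, not the actual PR code): check `is_terminated()` before polling each receiver, since an exhausted receiver can panic when polled again even though it implements `FusedStream`.

```rust
use std::pin::Pin;
use std::task::{Context, Poll};

use futures::channel::mpsc;
use futures::stream::{FusedStream, Stream};

// Illustrative multiplexer over several receivers of the same item type.
struct Multiplexer<T> {
    receivers: Vec<mpsc::Receiver<T>>,
}

impl<T> Stream for Multiplexer<T> {
    type Item = T;

    fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<T>> {
        let this = self.get_mut();
        let mut pending = false;
        for rx in this.receivers.iter_mut() {
            // `mpsc::Receiver` implements `FusedStream`, but polling it again
            // after it returned `Ready(None)` can panic, so skip exhausted
            // receivers explicitly instead of relying on fused semantics.
            if rx.is_terminated() {
                continue;
            }
            match Pin::new(rx).poll_next(cx) {
                Poll::Ready(Some(item)) => return Poll::Ready(Some(item)),
                // This receiver just finished; keep draining the others.
                Poll::Ready(None) => {}
                // If at least one stream is pending, we are not done yet.
                Poll::Pending => pending = true,
            }
        }
        if pending {
            Poll::Pending
        } else {
            // All receivers are exhausted: end the combined stream.
            Poll::Ready(None)
        }
    }
}
```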
match Pin::new(rx).poll_next(cx) {
// If at least one stream is pending, then we are not done yet (No
// Ready(None)).
// Ready(None)).
At some point I will get my editor to do this right. It works like 99% of the time, but then suddenly expandtab gets activated again for some reason. Argh.
Action::Nop => {}
Action::Abort => return Ok(()),
Action::Abort(reason) => match reason {
AbortReason::SubsystemError(err) => {
If AbortReason becomes an Error type, this could become trivial as well.
Ok, you have a point. I am already bending my definition of error.
Nah, I think it is good enough for now :-) I finally want to carry on with availability distribution.
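To make that suggestion concrete, here is a small, self-contained sketch of the idea (all names are illustrative stand-ins, not the actual overseer types): once `AbortReason` implements `std::error::Error`, the per-variant handling collapses into a single conversion.

```rust
use std::fmt;

#[derive(Debug)]
enum AbortReason {
    SubsystemError(String),
    Conclude,
}

impl fmt::Display for AbortReason {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            AbortReason::SubsystemError(e) => write!(f, "subsystem failed: {}", e),
            AbortReason::Conclude => write!(f, "received instruction to conclude"),
        }
    }
}

impl std::error::Error for AbortReason {}

enum Action {
    Nop,
    Abort(AbortReason),
}

fn handle(action: Action) -> Result<(), Box<dyn std::error::Error>> {
    match action {
        Action::Nop => Ok(()),
        // Shutting down on request is not an error.
        Action::Abort(AbortReason::Conclude) => Ok(()),
        // Every other abort reason converts straight into the error type.
        Action::Abort(reason) => Err(reason.into()),
    }
}

fn main() {
    assert!(handle(Action::Abort(AbortReason::SubsystemError("boom".into()))).is_err());
}
```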
drahnr left a comment:
A few very small nits; other than that, looks good to me 👍
Co-authored-by: Bernhard Schuster <bernhard@ahoi.io>
- Channel size is now determined by a function.
- Explicitly scope NetworkService::start_request.
bot merge
Trying merge.
Generic infrastructure for sending and receiving requests and responses from subsystems.
This PR depends on this Substrate PR, which just got merged, so this should be good to go.