Feature request: more transports (was: stdin/stdout transport for IPC) #38
I'll take a crack at this. It'll probably end up on the async branch, which will get merged into master soon (tm).
To the async branch (tikue/async) I added
Let me know if you're able to get it working with that. I haven't tested it at all yet.
Added an example demonstrating the usage of pipes.
Hey, thanks a lot for the quick reply. This looks good. One thing I'm unsure about is how to transform a
But it might be impossible to do with mio. Looking quickly at the
I spent a bit of time trying to figure that out last night too. I think the best thing to do would be to file an issue on either the rust or mio repo. I'm kind of the middle man here, unfortunately.
I've searched very briefly, and it doesn't look like there's an easy, cross-platform way of configuring stdio to be non-blocking. It behaves a lot like file I/O. We could write a polyfill that uses a background thread to simulate asynchronous I/O, or abandon the entire idea of using stdio.
@sbstp what about mio's support of it that I just linked? Edit: ah, it's
Looks like I didn't search properly. You can fcntl it like any other file descriptor. That fixes our issue, then! We only need a new version of mio. EDIT: It wouldn't work on Windows, because PipeWriter/PipeReader are only available on Unix. It's not a problem for my own uses, but it might be a bother for some.
@sbstp yeah, I just added |
This feature was lost in the tokio rewrite. I'm not sure how I feel about it now. I do like the simplicity of dealing strictly with TCP, but I realize that there's an opportunity here to be useful to a wider audience. I think I'll continue to leave this issue open, but I don't intend to implement it at this time unless there's significant demand.
Should tarpc be generic over the transport layer (basically #199)?

# New Crates

- Crate rpc contains the core client/server request-response framework, as well as a transport trait.
- Crate bincode-transport implements a transport that works almost exactly as tarpc works today (not to say it's wire-compatible).
- Crate trace has some foundational types for tracing. This isn't really fleshed out yet, but it's useful for in-process log tracing, at least.

All crates are now at the top level; e.g., tarpc-plugins is now tarpc/plugins rather than tarpc/src/plugins. tarpc itself is now a *very* small code surface, as most functionality has been moved into the other, more granular crates.

# New Features

- Deadlines: all requests specify a deadline, and a server will stop processing a response when past its deadline.
- Client cancellation propagation: when a client drops a request, the client sends a message to the server informing it to cancel its response. This means cancellations can propagate across multiple server hops.
- Trace context support, as mentioned above.
- More server configuration for total connection limits, per-connection request limits, etc.

# Removals

- No more shutdown handle. I left it out for now because of time and not being sure what the right solution is.
- All async now; no blocking stub or server interface. This helps with maintainability, and async/await makes async code much more usable. The service trait is accordingly renamed Service, and the client is renamed Client.
- No built-in transport. tarpc is now transport-agnostic (see bincode-transport for transitioning existing uses).
- Going along with the previous bullet, no preferred transport means no TLS support at this time. We could make a TLS transport or make bincode-transport compatible with TLS.
- A lot of examples were removed because I couldn't keep up with maintaining all of them. Hopefully the ones I kept are still illustrative.
- No more plugins!

# Open Questions

1. Should client.send() return `Future<Response>` or `Future<Future<Response>>`? The former appears more ergonomic, but it doesn't allow concurrent requests with a single client handle. The latter is less ergonomic but yields back control of the client once it has successfully sent out the request. Should we offer fns for both?
2. Should rpc `service!` fns take &mut self, &self, or self? The service needs to impl Clone anyway; technically we only need to clone it once per connection and can then leave it up to the user to decide whether to clone it per RPC. In practice, I think everyone doing nontrivial work will need to clone it per RPC.
3. Do the request/response structs look OK?
4. Is supporting server shutdown/lameduck important?

Fixes #178 #155 #124 #104 #83 #38
Obsoleted by #199. tarpc is now transport-agnostic.
The spawner process would act as the server and its children as clients. The child process's stdin and stdout can be treated like a socket, and the child process could initiate the connection in its main function.
I would use that to build a plugin system that allows live loading, reloading, and unloading of plugins.