Sharing socket between threads / non-blocking read #113
Hi, first, what […]. That's what […]. Probably the best way nowadays is […].
Thank you very much for your help! I'm still a little unsure, as I've never played with native networking like this; maybe you could give me a design recommendation?

I have two WebSocket connections to the same remote host: one I use for reading in thread A and one I use for writing in thread B. This feels like poor form, but it's what I'm doing now. I set up a couple of channels to communicate between threads. I spawn a thread with an infinite loop that tries to read new messages from the socket; if one is found, it parses it into a struct and sends it over channel A to an "agent" thread, which does something else with the information. Ping handling here is easy, as I'm already reading in an infinite loop. In thread B I check channel B for messages from the agent thread, which I serialize and send over the TX socket. Of course this doesn't work because […]. This might be the wrong approach, I don't know. I thought it would minimize the time from the packets arriving at my machine to the information being handled in the agent thread. Two things: […]
Hopefully you can give some advice; you seem to know what you're talking about here 👍
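The reader / agent / writer plumbing described in this comment can be sketched with plain std threads and channels. No sockets are involved, and the `Update` type and `parse` helper are made up for illustration:

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical message type standing in for a parsed websocket frame.
#[derive(Debug, PartialEq)]
struct Update(u32);

fn parse(raw: &str) -> Update {
    // Stand-in for deserializing a frame into a struct.
    Update(raw.trim().parse().unwrap_or(0))
}

fn main() {
    // Channel A: reader thread -> agent thread.
    let (tx_a, rx_a) = mpsc::channel::<Update>();
    // Channel B: agent thread -> writer thread.
    let (tx_b, rx_b) = mpsc::channel::<String>();

    // "Reader" thread: in the real program this loops on the RX socket.
    let reader = thread::spawn(move || {
        for raw in ["1", "2", "3"] {
            tx_a.send(parse(raw)).unwrap();
        }
    });

    // "Agent" thread: reacts to parsed messages, emits replies on channel B.
    let agent = thread::spawn(move || {
        while let Ok(update) = rx_a.recv() {
            tx_b.send(format!("ack {}", update.0)).unwrap();
        }
    });

    // "Writer" thread: in the real program this serializes to the TX socket.
    let writer = thread::spawn(move || {
        let mut out = Vec::new();
        while let Ok(msg) = rx_b.recv() {
            out.push(msg);
        }
        out
    });

    reader.join().unwrap();
    agent.join().unwrap();
    let sent = writer.join().unwrap();
    assert_eq!(sent, vec!["ack 1", "ack 2", "ack 3"]);
    println!("{:?}", sent);
}
```

Each loop ends cleanly because dropping the last `Sender` disconnects the matching `Receiver`; the blocking-read problem discussed in this thread is exactly the part this sketch leaves out.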
Hi, […]
Haha, I felt it was naive, but I'm glad to hear it's the least efficient way ever 😄.
Judging from the nice and detailed comments from @agalakhov, I may assume that this issue is solved and the question is answered, so I'll close the topic ;)
It seems the `websocket` crate has a `split` method: https://docs.rs/websocket/0.24.0/websocket/client/sync/struct.Client.html#method.split

Is there a reason why […]? Moving from a blocking API to an async API is a big step (both for the maintainers and for the users of the resulting API). Even if only the core that handles the WebSocket is async, that would still increase the complexity of the code a lot (and add many additional dependencies). So it would be good if […]. I'm going to use the […]
Tungstenite itself is neither blocking nor non-blocking. There is non-blocking support and splitting support in […]. The reason […]
Thanks, @agalakhov. So this fundamentally is not possible with […]?
Do you mean mpsc can be used for splitting in a blocking or in an async context? I don't understand what you mean by this, to be honest. Thanks for your help :)
Most likely you're reading the websocket in a loop in some thread. Switch the socket to non-blocking mode; then reading will result in a `WouldBlock` error whenever no data is available. As you can see, we need two external components for that: […]
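A minimal stdlib-only sketch of the non-blocking mode suggested here, using a loopback TCP connection in place of a websocket (the helper name `is_would_block` is my own):

```rust
use std::io::{ErrorKind, Read};
use std::net::{TcpListener, TcpStream};

// A non-blocking read with no data available reports ErrorKind::WouldBlock.
fn is_would_block(e: &std::io::Error) -> bool {
    e.kind() == ErrorKind::WouldBlock
}

fn main() {
    // Local listener so the example is self-contained.
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    let addr = listener.local_addr().unwrap();
    let mut stream = TcpStream::connect(addr).unwrap();
    let (_peer, _) = listener.accept().unwrap();

    // Switch the socket to non-blocking mode, as suggested above.
    stream.set_nonblocking(true).unwrap();

    let mut buf = [0u8; 1024];
    match stream.read(&mut buf) {
        // Nothing has been sent yet, so instead of blocking, the read
        // fails immediately and the loop is free to do other work.
        Err(e) if is_would_block(&e) => println!("no data yet, would block"),
        other => panic!("expected WouldBlock, got {:?}", other),
    }
}
```

In a real read loop you would treat `WouldBlock` as "try again later" and use a poller (such as mio) to know when the socket is readable again, instead of spinning.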
Okay, thanks again! Looks like I have to use […]. It would be great if there was an example of using […]. I think I will create an async "core" that handles the WebSocket and communicates with the rest of the program via messages; this way only the core needs to be async. I'd like to avoid this, but it seems to be the only option if I want to use […]
Hi @agalakhov, sorry to resurrect this thread, but I'm trying to use […]. Right now the issue I have is with the handshake when using the […]. Any chance you have a simple example using […]? Much thanks!
Ok, I think I got it. I'm sure this isn't the sexiest way to do it, but it works when tested against […]. Please don't judge me.

First, here's the test server:

```shell
{ while true; do date; sleep 1; done } | wscat -l 8088
```

And here is the main.rs:

```rust
use mio::net::TcpStream;
use mio::{Events, Interest, Poll, Token, Waker};
use std::net::SocketAddr;
use std::net::TcpStream as StdStream;
use std::sync::Arc;
use std::thread::sleep;
use std::thread::spawn;
use std::time::Duration;
use tungstenite::client::IntoClientRequest;
use tungstenite::handshake::client::ClientHandshake;
use tungstenite::protocol::Message;

fn main() {
    // Create a poll instance.
    let mut poll = Poll::new().unwrap();

    const STREAM_TOKEN: Token = Token(10);
    const WAKE_TOKEN: Token = Token(20);

    // Create storage for events.
    let mut events = Events::with_capacity(128);

    let socket: SocketAddr = "127.0.0.1:8088".parse().unwrap();
    let std_stream = StdStream::connect(socket).unwrap();
    // Note: mio's `from_std` expects the stream to be non-blocking; here the
    // handshake is done while it is still blocking, which works in practice.
    let stream = TcpStream::from_std(std_stream);

    let hs = ClientHandshake::start(
        stream,
        String::from("ws://example.com:8080")
            .into_client_request()
            .unwrap(),
        None,
    )
    .unwrap();
    let mut ws = hs.handshake().unwrap().0;

    ws.write_message(Message::from("HI!")).unwrap();
    ws.write_pending().unwrap();
    println!("{}", ws.read_message().unwrap());

    // Register the socket with `Poll`.
    poll.registry()
        .register(ws.get_mut(), STREAM_TOKEN, Interest::READABLE)
        .unwrap();
    let waker = Arc::new(Waker::new(poll.registry(), WAKE_TOKEN).unwrap());

    poll.poll(&mut events, Some(Duration::from_millis(100)))
        .unwrap();

    spawn(move || {
        sleep(Duration::from_secs(1));
        loop {
            poll.poll(&mut events, Some(Duration::from_millis(100)))
                .unwrap();
            for event in events.iter() {
                // We can use the token we previously provided to `register`
                // to determine which socket the event is for.
                match event.token() {
                    STREAM_TOKEN => {
                        println!("Received: {}", ws.read_message().unwrap());
                    }
                    WAKE_TOKEN => {
                        ws.write_message(Message::from("Yay!")).unwrap();
                    }
                    // We don't expect any events with tokens other than those we provided.
                    _ => unreachable!(),
                }
            }
        }
    });

    loop {
        sleep(Duration::from_secs(3));
        waker.wake().unwrap()
    }
}
```

From […]
And when running, […].
And as far as I can tell this doesn't take 100% CPU by looping too much, so success!
@mkeedlinger Out of interest: why do you prefer not to use tokio, so that you wouldn't need to implement this on top of `mio`?
@application-developer-DA Good question. First, the project I'm using […].

Also, my project has two streams we need to listen to, an HTTP server, and also a websocket client that is both getting and sending messages. In the examples where I've seen […].

Last, my project may have parts that are CPU blocking, but I guess the hope is that by using […]
Hello! You can achieve this by using multi-producer-single-consumer channels. Basically, when you create a WS connection, you create these channels and pass them to the shared server object, which owns them. Together with the WsConnection start point you spawn a new task that listens for incoming messages and then writes to the TCP stream on the WsConnection. The WS connection should hold the sender part from the server (which you can clone multiple times). If you want to read from multiple places, you can instead use a multi-producer-multi-consumer channel.

I put together a naive ASCII art for you: […]

I have created an example crate to show how you can achieve this, with fully working code. It's not fully compatible with the RFC spec yet and can probably be improved a lot, but it's focused on explaining the simpler concepts of multiple users in a server which handles connections etc.

Hope that makes sense in some way :o Please let me know if you have any further questions.
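The layout described above, one cloned sender per connection feeding a single consumer that owns the write side, can be sketched with std channels and threads (connection IDs and the message format are invented for illustration):

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // One multi-producer-single-consumer channel: every connection gets a
    // clone of the sender; a single task owns the receiver and does all writes.
    let (tx, rx) = mpsc::channel::<String>();

    let mut handles = Vec::new();
    for conn_id in 0..3 {
        let tx = tx.clone();
        handles.push(thread::spawn(move || {
            // Each "connection" produces messages independently.
            tx.send(format!("hello from connection {}", conn_id)).unwrap();
        }));
    }
    // Drop the original sender so the receiver loop can terminate.
    drop(tx);

    // The single consumer: in the real server this would be the task that
    // writes to the TCP stream.
    let mut received: Vec<String> = rx.iter().collect();
    received.sort();

    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(received.len(), 3);
    println!("{:?}", received);
}
```

The key property is that only one place ever touches the write half of the stream, so no locking around the socket is needed.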
In the end, my solution was to call […].
You can also set […].
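One setting that fits this situation on a plain std `TcpStream` is a read timeout, which makes a blocking read return periodically instead of hanging forever; whether that is what the elided suggestion above refers to is not recoverable from the thread, so treat this as an assumption:

```rust
use std::io::{ErrorKind, Read};
use std::net::{TcpListener, TcpStream};
use std::time::Duration;

// A timed-out read reports WouldBlock on Unix and TimedOut on Windows.
fn is_timeout(e: &std::io::Error) -> bool {
    matches!(e.kind(), ErrorKind::WouldBlock | ErrorKind::TimedOut)
}

fn main() {
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    let addr = listener.local_addr().unwrap();
    let mut stream = TcpStream::connect(addr).unwrap();
    let (_peer, _) = listener.accept().unwrap();

    // Reads now give up after 50 ms instead of blocking indefinitely,
    // so the loop gets a chance to check channels, send pings, etc.
    stream
        .set_read_timeout(Some(Duration::from_millis(50)))
        .unwrap();

    let mut buf = [0u8; 1024];
    match stream.read(&mut buf) {
        Err(e) if is_timeout(&e) => println!("read timed out, loop continues"),
        other => panic!("expected a timeout, got {:?}", other),
    }
}
```

Compared to full non-blocking mode, a timeout keeps the read loop simple at the cost of up to one timeout interval of extra latency on outgoing work.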
@mkeedlinger Thanks for your example. It didn't work on my end, although it gives me the idea and gets me closer to what I want, as I'm trying to understand what is […]. Yeah, so in short: […]
Edit: I think I got question 2 answered. I missed that […].
@haxpor Well, it's been a few years (and I've moved on to async Rust with Tokio) but I'll do my best. Websockets communicate over TCP. Tungstenite could create the TCP connection (i.e. stream) for you, but chooses not to in favor of you creating one and passing it in. Since you create the stream, you can set whatever flags you want on it (like seen here). Since the stream arg has the trait bounds […]
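To illustrate why `Read + Write` style bounds let you pass in any stream you configured yourself, here is a stdlib-only sketch; the `write_greeting` function is invented for illustration, and tungstenite's actual trait bounds may differ:

```rust
use std::io::{Cursor, Read, Write};

// Any type that can be read from and written to qualifies: a TcpStream,
// a TLS wrapper, or (as here) an in-memory buffer.
fn write_greeting<S: Read + Write>(stream: &mut S) -> std::io::Result<()> {
    stream.write_all(b"HI!")
}

fn main() {
    // Cursor<Vec<u8>> implements both Read and Write, so it can stand in
    // for a real socket in tests.
    let mut fake_socket = Cursor::new(Vec::new());
    write_greeting(&mut fake_socket).unwrap();
    assert_eq!(fake_socket.into_inner(), b"HI!".to_vec());
    println!("generic stream accepted");
}
```

This is why the flags (non-blocking mode, timeouts) live on the stream you construct, not on the websocket wrapper.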
Thank you @mkeedlinger. And to others, my apologies for resurrecting this thread. Based on your sample code, I did more research and came to the same conclusion you described. Thanks again.

For the record, I've attempted to do it in a non-async way, making […]. So in the end I cut out […]
Hello, I'm building a service which demands minimal latency. I need to be able to send a message as fast as possible (latency-wise), but still engage in potential ping/pong interactions. My problem is that `read_message` seems to block until a message arrives, which won't do.

I was thinking I could access the underlying stream, split it, then have two threads: one which blocks while waiting for new messages and then handles them, and the other which writes whenever it needs to according to its own independent logic. Is this possible? I've heard about using `mio` to make some of the components async, and I saw a `set_nonblocking` method mentioned in another issue regarding the blocking nature of `read_message`. I'm overall a bit confused, and can't find an example of how I would achieve an async read (or something equivalent) using `mio`.

Thanks so much!
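For the raw-TCP half of the splitting idea in the question, the standard library does allow it: `TcpStream::try_clone` returns a second handle to the same socket, so one thread can block on reads while another writes. This sketch stays below the websocket layer entirely (tungstenite's `WebSocket` owns its stream, so this is not a direct answer for it):

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

fn main() {
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    let addr = listener.local_addr().unwrap();

    // Echo peer: reads one chunk and writes it straight back.
    thread::spawn(move || {
        let (mut peer, _) = listener.accept().unwrap();
        let mut buf = [0u8; 16];
        let n = peer.read(&mut buf).unwrap();
        peer.write_all(&buf[..n]).unwrap();
    });

    let mut reader_half = TcpStream::connect(addr).unwrap();
    // try_clone: a second handle to the same underlying socket.
    let mut writer_half = reader_half.try_clone().unwrap();

    // Writer thread sends independently of the reader.
    let writer = thread::spawn(move || {
        writer_half.write_all(b"ping").unwrap();
    });

    // Reader (the main thread here) blocks until the echo comes back.
    let mut got = Vec::new();
    let mut buf = [0u8; 16];
    while got.len() < 4 {
        let n = reader_half.read(&mut buf).unwrap();
        if n == 0 {
            break;
        }
        got.extend_from_slice(&buf[..n]);
    }
    writer.join().unwrap();
    assert_eq!(got, b"ping".to_vec());
    println!("echoed: {}", String::from_utf8_lossy(&got));
}
```

The catch, as the rest of the thread explains, is that the websocket framing state cannot be split this way, which is why the discussion turns to non-blocking mode and pollers instead.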