
investigate modifying the kernel's receive buffer size #2255

Closed
marten-seemann opened this issue Dec 5, 2019 · 6 comments · Fixed by #2791

@marten-seemann
Member

It turns out that the Linux kernel's UDP receive buffer is too small by default (around 200k). In order to support higher bandwidths, we need a significantly higher value.

It would be good if we could set this on a per-connection basis. Not sure if Go allows us to do this.

@dswarbrick

https://golang.org/pkg/net/#UDPConn.SetReadBuffer should do the trick. It's implemented as a setsockopt SO_RCVBUF syscall under the hood.

@marten-seemann
Member Author

The buffer sizes can be queried by running

sysctl net.core.rmem_default
sysctl net.core.rmem_max

On the Linux system I'm testing on, both values are 212992. When querying the receive buffer from quic-go using

syscall.GetsockoptInt(int(fd.Fd()), syscall.SOL_SOCKET, syscall.SO_RCVBUF)

I get the same value. Using UDPConn.SetReadBuffer allows me to increase the buffer size to 425984 (which is exactly twice the value of 212992). I assume that this has something to do with the doubling described here: http://man7.org/linux/man-pages/man7/socket.7.html (see SO_RCVBUF).

While doubling this value is nice (and leads to speedup of ~2x in my test), I'd like to increase it even further. As far as I can see, we'd need to ask people to run

sysctl -w net.core.rmem_max=xxx

in order to increase the maximum value though.

@dswarbrick

Yup, the default Linux kernel UDP receive buffer sizes are fairly small. Applications can only set the receive buffer size up to the limit specified by the kernel. The app can read back the buffer size, however, and warn the user if the granted size is less than desired.

@K9A2

K9A2 commented Jun 10, 2020

Should net.core.rmem_max be set to the BDP of the current link?

@marten-seemann
Member Author

The BDP has nothing to do with this. The problem is how fast (and consistently) we’re able to read packets from the socket. We need to do that faster than the bandwidth of the connection; the RTT is completely irrelevant for this.

@K9A2

K9A2 commented Jun 13, 2020

The Linux kernel also provides the ability to configure udp_mem, udp_wmem_min, and udp_rmem_min. Could they affect the performance of the QUIC protocol?
