[Experiment] [feature] Add batch writes mode for Sender #1797
Adds a `batchWrites` option which wraps writes in `socket.cork()`/`.uncork()` calls in `Sender`. With this option enabled, we get better throughput for relatively small messages (around 1 KiB).

**Note.** This is an experiment and it's very unlikely to get into the library, so I didn't bother with proper documentation and tests. My intent is to demonstrate one possible approach (probably the simplest one) to batch writes in the library.
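For context, enabling the option might look like the sketch below. The exact placement of `batchWrites` here is my assumption, not something confirmed by this description; check the diff for the real API:

```js
const WebSocket = require('ws');

// Assumed placement of the option (illustrative only; the port and
// handler are arbitrary, and the diff may accept the option elsewhere):
const wss = new WebSocket.Server({ port: 8080, batchWrites: true });

wss.on('connection', (ws) => {
  ws.on('message', (data) => {
    // With batch writes enabled, Sender wraps the underlying socket
    // writes in cork()/uncork() so small frames sent close together
    // can be flushed to the kernel in one go.
    ws.send(data);
  });
});
```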
### More context
On *nix OSes, on each `socket.write()` Node tries to call libuv's `uv_try_write` function. For a ready-to-write TCP socket that function performs the write immediately, so for small messages this leads to noticeable overhead due to the large number of syscalls and other factors.

On the other hand, `socket.cork()`/`.uncork()` calls (and the underlying `_writev()` implementation) have a certain overhead of their own, which may sometimes hurt the latency of individual messages and, in general, makes little sense for larger messages. Enabling this option also makes little sense with a medium-to-large number of open WS connections where communication over each socket is infrequent.
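To illustrate the mechanics (this is not the PR's code, just a standalone demo of the cork/uncork pattern), the sketch below issues the same burst of small writes with and without corking; on Linux you can compare syscall counts with `strace -c -e trace=write,writev node demo.js`:

```js
const net = require('net');

const server = net.createServer((sock) => sock.resume());
server.listen(0, () => {
  const { port } = server.address();
  const client = net.connect(port, () => {
    const chunk = Buffer.alloc(64);

    // Without cork: each write() may become its own write(2) syscall
    // when the socket is ready to accept data.
    for (let i = 0; i < 1000; i++) client.write(chunk);

    // With cork: the queued chunks are coalesced and flushed together
    // via writev(2) when the socket is uncorked.
    client.cork();
    for (let i = 0; i < 1000; i++) client.write(chunk);
    // Uncorking on the next tick lets writes queued later in the same
    // tick join the batch before it is handed to the kernel.
    process.nextTick(() => {
      client.uncork();
      client.end(() => server.close());
    });
  });
});
```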
### Benchmark results

The existing benchmarks measured the latency of individual round trips, so I've added a new one which measures throughput at different levels of concurrency (a rough sketch of the idea follows below).
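The sketch keeps a fixed number of echo messages in flight and reports messages per second. It is only an illustration of the approach; the sizes, counts, port, and echo server here are my choices, and the actual benchmark added in this PR may be structured differently:

```js
const WebSocket = require('ws');

const SIZE = 1024;      // message size (1 KiB)
const CONCURRENCY = 64; // messages kept in flight
const TOTAL = 100000;   // total messages per run

const wss = new WebSocket.Server({ port: 8080 }, () => {
  const ws = new WebSocket('ws://localhost:8080');
  const payload = Buffer.alloc(SIZE);
  let received = 0;
  let start;

  ws.on('open', () => {
    start = process.hrtime.bigint();
    for (let i = 0; i < CONCURRENCY; i++) ws.send(payload);
  });

  ws.on('message', () => {
    received++;
    if (received + CONCURRENCY <= TOTAL) {
      ws.send(payload); // keep CONCURRENCY messages in flight
    }
    if (received === TOTAL) {
      const secs = Number(process.hrtime.bigint() - start) / 1e9;
      console.log(`${(TOTAL / secs).toFixed(0)} msg/s`);
      ws.close();
      wss.close();
    }
  });
});

// Echo every message back so the client measures round-trip throughput.
wss.on('connection', (sock) => {
  sock.on('message', (data) => sock.send(data));
});
```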
Here are the results (10 runs, Node.js v14.10.0):
*(Plot for 64 B messages)*
*(Plot for 1 KiB messages)*