
Test code optimization (测试代码优化) #2

Closed · lesismal opened this issue Apr 28, 2023 · 37 comments

@lesismal (Contributor)

Thanks for including nbio in the benchmark; I came over to return the visit and submitted a PR along the way:
#1
I submitted it via the web UI and slipped up while writing the commit message for the nbio part, submitting before it was finished, so let me add the explanation here.

Before reading the code I ran the benchmark first, and found that the numbers for net and nbio were rather low, both far behind gnet, which differs a lot from my own benchmark results:
lesismal/go-net-benchmark#1

Then I compared the code and found that the other frameworks write the read buffer back directly, while the nbio code appends the data into a new buffer first. That differs from the other frameworks and adds extra overhead; writing the read buffer back directly has no side effects, and it aligns the nbio test with the others.
The other issue is that a load generator like tcpkali is not a strict echo client (send, read the reply back, then send again in a loop); it keeps sending and receiving at the same time, so a large amount of data accumulates on the server's read side. In that kind of test the server's read buffer size matters a lot: a larger size reduces the number of Read syscalls, which is effectively batching and improves efficiency. So in the nbio code I also changed the read buffer size. One such buffer is attached to each poller, and the number of pollers is small, so even a fairly large buffer size doesn't cost much memory, while read efficiency improves a lot.

I looked at net as well: its read buffer size was only 4k, which is also quite small, so it has the same problem. To measure the real throughput I increased that buffer size too, and the resulting numbers were much higher.

Apart from gnet, the numbers for the other frameworks are also fairly low, but I only know those frameworks by name and haven't studied or used them, so I'm not sure whether they have similar issues.
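For clarity, here is a minimal sketch of the two changes described above, assuming nbio's Engine/OnData callback API (the constructor name may differ between nbio versions); the buffer size and address are illustrative, not the exact benchmark values:

```go
package main

import "github.com/lesismal/nbio"

func main() {
	// Larger per-poller read buffer: the buffer count scales with pollers,
	// not connections, so a bigger size mainly trades a little memory for
	// fewer Read syscalls under tcpkali-style load.
	engine := nbio.NewEngine(nbio.Config{
		Network:        "tcp",
		Addrs:          []string{":5005"},
		ReadBufferSize: 64 * 1024,
	})

	engine.OnData(func(c *nbio.Conn, data []byte) {
		// Write the read buffer back directly, as the other frameworks do.
		// The previous benchmark code appended data into a fresh buffer
		// first, which adds an allocation and a copy per read.
		c.Write(data)
	})

	if err := engine.Start(); err != nil {
		panic(err)
	}
	defer engine.Stop()
	engine.Wait()
}
```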

With the code as modified in the PR I re-ran the benchmark, and both net and nbio improved a lot (gevent-echo-server failed, and I didn't dig into it further):

--- BENCH ECHO START ---

--- GO STDLIB ---
2023/04/28 09:17:05 echo server started on port 5001
*** 50 connections, 10 seconds, 6 byte packets
Destination: [127.0.0.1]:5001
Interface lo address [127.0.0.1]:0
Using interface lo to connect to [127.0.0.1]:5001
Ramped up to 100 connections.
Total data sent:     24824.1 MiB (26029916160 bytes)
Total data received: 24726.1 MiB (25927217535 bytes)
Bandwidth per channel: 415.272⇅ Mbps (51908.9 kBps)
Aggregate bandwidth: 20722.534↓, 20804.617↑ Mbps
Packet rate estimate: 1897718.1↓, 1785674.6↑ (12↓, 45↑ TCP MSS/op)
Test duration: 10.0093 s.
--- DONE ---

--- EVIO ---
2023/04/28 09:17:17 echo server started on port 5002 (loops: 1)
*** 50 connections, 10 seconds, 6 byte packets
Destination: [127.0.0.1]:5002
Interface lo address [127.0.0.1]:0
Using interface lo to connect to [127.0.0.1]:5002
Ramped up to 100 connections.
Total data sent:     11378.7 MiB (11931418624 bytes)
Total data received: 11368.7 MiB (11920914802 bytes)
Bandwidth per channel: 190.646⇅ Mbps (23830.8 kBps)
Aggregate bandwidth: 9528.127↓, 9536.523↑ Mbps
Packet rate estimate: 872434.1↓, 818526.3↑ (12↓, 45↑ TCP MSS/op)
Test duration: 10.009 s.
--- DONE ---

--- EVIOP ---
2023/04/28 09:17:29 echo server started on port 5003 (loops: 1)
*** 50 connections, 10 seconds, 6 byte packets
Destination: [127.0.0.1]:5003
Interface lo address [127.0.0.1]:0
Using interface lo to connect to [127.0.0.1]:5003
Ramped up to 100 connections.
Total data sent:     17760.1 MiB (18622840832 bytes)
Total data received: 17749.2 MiB (18611415720 bytes)
Bandwidth per channel: 297.650⇅ Mbps (37206.2 kBps)
Aggregate bandwidth: 14877.917↓, 14887.050↑ Mbps
Packet rate estimate: 1362251.1↓, 1277765.7↑ (12↓, 45↑ TCP MSS/op)
Test duration: 10.0075 s.
--- DONE ---

--- GEV ---
*** 50 connections, 10 seconds, 6 byte packets
Destination: [127.0.0.1]:5004
Interface lo address [127.0.0.1]:0
Using interface lo to connect to [127.0.0.1]:5004
Ramped up to 100 connections.
Total data sent:     13843.2 MiB (14515634176 bytes)
Total data received: 13795.2 MiB (14465350368 bytes)
Bandwidth per channel: 231.688⇅ Mbps (28961.0 kBps)
Aggregate bandwidth: 11564.284↓, 11604.483↑ Mbps
Packet rate estimate: 1059204.8↓, 996020.8↑ (12↓, 45↑ TCP MSS/op)
Test duration: 10.0069 s.
--- DONE ---

--- NBIO ---
2023/04/28 09:17:53.941 [INF] NBIO[NB] start listen on: ["tcp@[::]:5005"]
*** 50 connections, 10 seconds, 6 byte packets
Destination: [127.0.0.1]:5005
Interface lo address [127.0.0.1]:0
Using interface lo to connect to [127.0.0.1]:5005
Ramped up to 100 connections.
Total data sent:     31431.1 MiB (32957923328 bytes)
Total data received: 31402.6 MiB (32927992552 bytes)
Bandwidth per channel: 526.804⇅ Mbps (65850.5 kBps)
Aggregate bandwidth: 26328.248↓, 26352.180↑ Mbps
Packet rate estimate: 2410642.4↓, 2261825.8↑ (12↓, 45↑ TCP MSS/op)
Test duration: 10.0054 s.
--- DONE ---

--- GNET ---
2023/04/28 09:18:06 Echo server is listening on :5006 (multi-cores: true, loops: 12)
*** 50 connections, 10 seconds, 6 byte packets
Destination: [127.0.0.1]:5006
Interface lo address [127.0.0.1]:0
Using interface lo to connect to [127.0.0.1]:5006
Ramped up to 100 connections.
Total data sent:     24213.1 MiB (25389301760 bytes)
Total data received: 24167.5 MiB (25341426400 bytes)
Bandwidth per channel: 405.469⇅ Mbps (50683.6 kBps)
Aggregate bandwidth: 20254.325↓, 20292.590↑ Mbps
Packet rate estimate: 1854584.7↓, 1741727.0↑ (12↓, 45↑ TCP MSS/op)
Test duration: 10.0093 s.
--- DONE ---

--- BENCH ECHO DONE ---
@lesismal (Contributor, Author)

One more note: raising the read buffer size for net and nbio has different memory-usage implications in real applications (see the sketch after this list):

  1. nbio's ReadBufferSize scales with the number of pollers, not with the number of online connections; even with millions of connections the buffer memory is only ReadBufferSize * PollerNum.
  2. With the net approach, each connection needs its own goroutine and read buffer, so memory usage scales with the number of online connections. In a real application with a huge number of online connections, a large size means excessive memory usage; if only a few connections move a lot of data it matters little. It has to be tuned to the actual workload.
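A minimal standard-library sketch of the per-connection model in point 2, with a placeholder port and buffer size; total buffer memory here grows roughly as bufSize times the number of live connections:

```go
package main

import (
	"log"
	"net"
)

func main() {
	ln, err := net.Listen("tcp", ":5001")
	if err != nil {
		log.Fatal(err)
	}
	// One goroutine and one read buffer per connection: with N online
	// connections, buffer memory is roughly N * bufSize, unlike nbio's
	// ReadBufferSize * PollerNum.
	const bufSize = 64 * 1024
	for {
		conn, err := ln.Accept()
		if err != nil {
			continue
		}
		go func(c net.Conn) {
			defer c.Close()
			buf := make([]byte, bufSize) // allocated per connection
			for {
				n, err := c.Read(buf)
				if err != nil {
					return
				}
				if _, err := c.Write(buf[:n]); err != nil {
					return
				}
			}
		}(conn)
	}
}
```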

@cexll (Contributor) commented Apr 29, 2023

Thanks for the extra details. I've only just started studying epoll-related things and don't have much experience, so thanks for the corrections.

@cexll (Contributor) commented Apr 29, 2023

Are you on Slack? I have a few questions I'd like to ask you.

@cexll (Contributor) commented Apr 29, 2023

When testing nbio I got this error: Could not create 100 connections in allotted time (10s)

Also, when testing inside the container, why did gnet's performance suddenly get lower?
[screenshot]

@cexll (Contributor) commented Apr 29, 2023

[screenshot]

@lesismal (Contributor, Author)

Is there any error output?

@lesismal (Contributor, Author) commented Apr 29, 2023

I ran Docker locally and the nbio part doesn't hit that error:

root@ubuntu:~/gevent-benchmark# docker build -t gevent_benchmark .

I've omitted the dependency-installation part of the docker build log; here is the benchmark output:

--- BENCH ECHO START ---

--- GO STDLIB ---
2023/04/29 07:55:51 echo server started on port 5001
*** 50 connections, 10 seconds, 6 byte packets
Destination: [127.0.0.1]:5001
Interface lo address [127.0.0.1]:0
Using interface lo to connect to [127.0.0.1]:5001
Ramped up to 100 connections.
Total data sent:     24783.4 MiB (25987293723 bytes)
Total data received: 24854.9 MiB (26062268784 bytes)
Bandwidth per channel: 416.224⇅ Mbps (52028.0 kBps)
Aggregate bandwidth: 20841.162↓, 20781.206↑ Mbps
Packet rate estimate: 1909153.7↓, 1783689.4↑ (12↓, 45↑ TCP MSS/op)
Test duration: 10.0042 s.
--- DONE ---

--- EVIO ---
2023/04/29 07:56:05 echo server started on port 5002 (loops: 1)
*** 50 connections, 10 seconds, 6 byte packets
Destination: [127.0.0.1]:5002
Interface lo address [127.0.0.1]:0
Using interface lo to connect to [127.0.0.1]:5002
Ramped up to 100 connections.
Total data sent:     12135.3 MiB (12724797440 bytes)
Total data received: 12126.5 MiB (12715575829 bytes)
Bandwidth per channel: 203.387⇅ Mbps (25423.3 kBps)
Aggregate bandwidth: 10165.646↓, 10173.018↑ Mbps
Packet rate estimate: 930726.1↓, 873157.1↑ (12↓, 45↑ TCP MSS/op)
Test duration: 10.0067 s.
--- DONE ---

--- EVIOP ---
2023/04/29 07:56:20 echo server started on port 5003 (loops: 1)
*** 50 connections, 10 seconds, 6 byte packets
Destination: [127.0.0.1]:5003
Interface lo address [127.0.0.1]:0
Using interface lo to connect to [127.0.0.1]:5003
Ramped up to 100 connections.
Total data sent:     20049.2 MiB (21023162368 bytes)
Total data received: 20045.0 MiB (21018729258 bytes)
Bandwidth per channel: 336.121⇅ Mbps (42015.1 kBps)
Aggregate bandwidth: 16804.258↓, 16807.802↑ Mbps
Packet rate estimate: 1538538.8↓, 1442625.2↑ (12↓, 45↑ TCP MSS/op)
Test duration: 10.0064 s.
--- DONE ---

--- GEV ---
*** 50 connections, 10 seconds, 6 byte packets
Destination: [127.0.0.1]:5004
Interface lo address [127.0.0.1]:0
Using interface lo to connect to [127.0.0.1]:5004
Ramped up to 100 connections.
Total data sent:     14828.5 MiB (15548809216 bytes)
Total data received: 14787.0 MiB (15505337472 bytes)
Bandwidth per channel: 248.282⇅ Mbps (31035.2 kBps)
Aggregate bandwidth: 12396.705↓, 12431.461↑ Mbps
Packet rate estimate: 1135140.9↓, 1067000.9↑ (12↓, 45↑ TCP MSS/op)
Test duration: 10.0061 s.
--- DONE ---

--- NBIO ---
2023/04/29 07:56:50.972 [INF] NBIO[NB] start listen on: ["tcp@[::]:5005"]
*** 50 connections, 10 seconds, 6 byte packets
Destination: [127.0.0.1]:5005
Interface lo address [127.0.0.1]:0
Using interface lo to connect to [127.0.0.1]:5005
Ramped up to 100 connections.
Total data sent:     27630.5 MiB (28972711724 bytes)
Total data received: 26478.1 MiB (27764260885 bytes)
Bandwidth per channel: 453.675⇅ Mbps (56709.4 kBps)
Aggregate bandwidth: 22200.612↓, 23166.903↑ Mbps
Packet rate estimate: 2033066.5↓, 1988438.0↑ (12↓, 45↑ TCP MSS/op)
Test duration: 10.0049 s.
--- DONE ---

--- GNET ---
2023/04/29 07:57:06 Echo server is listening on :5006 (multi-cores: true, loops: 12)
*** 50 connections, 10 seconds, 6 byte packets
Destination: [127.0.0.1]:5006
Interface lo address [127.0.0.1]:0
Using interface lo to connect to [127.0.0.1]:5006
Ramped up to 100 connections.
Total data sent:     24455.4 MiB (25643319296 bytes)
Total data received: 24380.3 MiB (25564633692 bytes)
Bandwidth per channel: 409.317⇅ Mbps (51164.6 kBps)
Aggregate bandwidth: 20434.388↓, 20497.283↑ Mbps
Packet rate estimate: 1872212.0↓, 1759295.9↑ (12↓, 45↑ TCP MSS/op)
Test duration: 10.0085 s.
--- DONE ---

--- GEVENT ---
# github.com/go-gevent/gevent/internal
/go/pkg/mod/github.com/go-gevent/[email protected]/internal/internal_linux.go:97:40: cannot use syscall.EPOLLIN | syscall.EPOLLET | syscall.EPOLLOUT (untyped int constant -2147483643) as uint32 value in argument to p.ctlEpoll (overflows)
/go/pkg/mod/github.com/go-gevent/[email protected]/internal/internal_linux.go:102:40: cannot use syscall.EPOLLIN | syscall.EPOLLET | syscall.EPOLLPRI (untyped int constant -2147483645) as uint32 value in argument to p.ctlEpoll (overflows)
/go/pkg/mod/github.com/go-gevent/[email protected]/internal/internal_linux.go:107:40: cannot use syscall.EPOLLIN | syscall.EPOLLET | syscall.EPOLLPRI (untyped int constant -2147483645) as uint32 value in argument to p.ctlEpoll (overflows)
/go/pkg/mod/github.com/go-gevent/[email protected]/internal/internal_linux.go:112:40: cannot use syscall.EPOLLIN | syscall.EPOLLOUT | syscall.EPOLLET (untyped int constant -2147483643) as uint32 value in argument to p.ctlEpoll (overflows)
--- BENCH ECHO DONE ---
./bench-echo.sh: line 14:   885 Killed                  GOMAXPROCS=1 $2 --port $4
Removing intermediate container 7be8a9908d17
 ---> b49a7ccc7f6f
Successfully built b49a7ccc7f6f
Successfully tagged gevent_benchmark:latest

@lesismal (Contributor, Author)

Could you try running it a few more times?
Just leave a message on Slack; I'll reply when I see it.

@cexll (Contributor) commented Apr 29, 2023

Why does that problem show up on your side?

# github.com/go-gevent/gevent/internal

@cexll (Contributor) commented Apr 29, 2023

Is there any error output?

No error output; it's just like the screenshot above: it runs to a certain point and then stops. By the way, I'm on a MacBook Pro with an M1 Pro.

@lesismal (Contributor, Author)

Try running it on Linux first; I don't have a Mac M-series environment here, so it's hard for me to track this down.
The go-gevent failure looks like a Linux-specific constant problem; on macOS you're on kqueue, so you don't hit it. Since you're switching to a Linux environment anyway, you might as well fix that gevent build error while you're at it.

@lesismal (Contributor, Author)

And if you get the chance, please help me fix this nbio-on-M1 problem too, haha :joy:. PRs welcome.

@cexll (Contributor) commented May 1, 2023

Yeah, I was running Docker on a Mac; I'll try a pure Linux box later.

@lesismal (Contributor, Author) commented May 1, 2023

Oh right, you're running Docker on a Mac, so that's a bit odd. The errors above are from Docker running in a Linux VM under VMware on Windows.

@lesismal (Contributor, Author) commented May 1, 2023

gevent kept failing to build in my environment, so I hadn't looked at it before. I just glanced at the test code: would removing the append here have any side effects? If it can go, wouldn't it be a bit faster:
https://github.com/go-gevent/gevent-benchmark/blob/main/gevent-echo-server/main.go#L44

Also, is the evio/gev/gnet style modeled on netty or something? Personally I find that style really hard to use.
As for the poller part itself, the performance difference shouldn't be large: the IO logic in the poller loop is tiny, and a bit more or less wrapping shouldn't matter much. The small performance gaps should mostly come down to parameter settings and usability trade-offs.

@cexll (Contributor) commented May 1, 2023

I'll be digging into this area soon. My code is based on evio and eviop, though these frameworks all work in similar ways. I added netpoll earlier too, but its performance was terribly low (maybe I was holding it wrong). I've read nbio and the design is very good; I didn't expect the performance to be that high.

@lesismal (Contributor, Author) commented May 1, 2023

I added netpoll earlier too, but its performance was terribly low (maybe I was holding it wrong)

I ran into the same problem. I opened an issue in their repo asking them to share their benchmark code, but they only pointed to the kitex benchmark code, which was puzzling, so I didn't ask further.

@lesismal (Contributor, Author) commented May 1, 2023

I didn't expect the performance to be that high

For a pure layer-4 echo benchmark like this, gnet and the rest all perform about the same; parameter settings such as buffer sizes and slight OS variance while each framework runs can produce some difference, but not a big one. In terms of usability, though, the other poller frameworks are still a bit too hard to use.

Also, unless you're targeting a massive-connection scenario, the standard library is the better choice. One thing about nbio still bothers me: a tool like tcpkali is not request-response (or a few requests, then more requests after the responses arrive); it just keeps sending data at the configured -r rate, so the server always has data to read.
A Go poller framework also can't follow the C/C++ pattern of a single logic thread: Go's raw instruction throughput isn't high enough, so a single logic goroutine performs far too poorly, and we still want to write sequential code in the logic goroutines, which will likely perform IO such as database or RPC calls; single-logic-goroutine performance is unacceptable. So you end up needing a logic goroutine pool, which means the IO goroutines and the logic goroutines are separated. In a tcpkali-style test the IO goroutines keep reading data and throwing it onto a queue for the logic goroutines to process, so a lot of data buffers up in the queue and memory usage can even exceed the standard library's. The standard library uses a blocking read interface, reading the next request only after the current one has been handled, so it doesn't have this problem of data read ahead and piling up in a queue waiting to be executed. tcpkali's behavior doesn't match normal HTTP/WebSocket scenarios, so nbio doesn't impose this limit by default; if a service needs to bound memory usage, it has to add its own per-connection rate limiting and close connections that send at unreasonable rates.
EPOLLET + EPOLLONESHOT can throttle how often data is read, but even then there is the async-parser problem: to save goroutines under massive connection counts the goroutine pool has a limited size, so per-connection reading and parsing should avoid blocking as much as possible. That forces the async parser to cache partial packets itself, and it can't reuse a single buffer across the read loop the way the standard library does (see the sketch below).
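A framework-agnostic sketch of that IO/logic separation; the queue size, worker count, and message type are made up for illustration and are not nbio internals:

```go
package main

import (
	"fmt"
	"sync"
)

// message stands in for whatever the async parser produces per frame.
type message struct{ data []byte }

func main() {
	// Queue between the IO goroutines and a fixed-size logic pool. Under a
	// tcpkali-style sender that never waits for responses, this is where
	// data piles up: the IO side keeps reading and enqueueing, so memory
	// can exceed the blocking stdlib model, which reads the next request
	// only after finishing the previous one.
	queue := make(chan message, 1<<16)

	var wg sync.WaitGroup
	const workers = 8 // sized for blocking work (DB/RPC), not for the ingest rate
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for msg := range queue {
				_ = msg // sequential business logic (DB, RPC, ...) would go here
			}
		}()
	}

	// The IO side (an OnData-style callback) enqueues copies of what it read,
	// since it must not block and cannot reuse one read buffer the way the
	// stdlib read loop can.
	for i := 0; i < 3; i++ {
		queue <- message{data: []byte(fmt.Sprintf("frame-%d", i))}
	}
	close(queue)
	wg.Wait()
}
```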

@lesismal (Contributor, Author) commented May 1, 2023

That said, a high tcpkali -r rate with huge traffic per connection is a CPU-bound kind of workload, or one where the connection count isn't large. If the connection count is small, the standard library is the better fit; if the connection count is large but the work is CPU-bound, then with nbio, for example, you can simply handle the message directly in the IO goroutine, which also avoids the memory problem caused by the IO-to-logic-pool queue.

cexll pinned this issue May 4, 2023
@cexll (Contributor) commented May 4, 2023

From my testing it looks like ReadBufferSize may not be taking effect.
[screenshot]

@cexll (Contributor) commented May 4, 2023

Also, the benchmark numbers swing high and low between runs, which is odd.
[screenshot]

@cexll (Contributor) commented May 4, 2023

Maybe it's because I'm on ARM rather than x86.

@lesismal (Contributor, Author) commented May 4, 2023

The ReadBuffer setting amounts to batched reading that reduces the number of reads, but it only helps under two conditions (see the sketch after this list):

  1. The sender really is pushing that much data, and that much has accumulated in the kernel queue before the receiver reads. If the kernel hasn't buffered that much in a given round, say only 8k, then a larger setting brings no improvement.
  2. The application-level ReadBufferSize works together with the kernel buffer. If the kernel buffer holds 1 MB of data but the application-level ReadBufferSize is only 1 KB, draining the currently buffered data still takes 1M/1K = 1024 read syscalls.

These settings have to be tuned together according to the actual workload.
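A minimal standard-library illustration of the two knobs above; the sizes are arbitrary assumptions, not recommendations:

```go
package echobuf

import "net"

// serveEcho shows how the kernel receive buffer and the application read
// buffer interact: one Read syscall drains at most len(appBuf) bytes, so
// with ~1 MiB queued in the kernel a 1 KiB appBuf needs ~1024 Read calls,
// while a 64 KiB appBuf needs ~16.
func serveEcho(c *net.TCPConn) {
	_ = c.SetReadBuffer(1 << 20)    // kernel-side receive buffer (SO_RCVBUF)
	appBuf := make([]byte, 64*1024) // application-side read buffer
	for {
		n, err := c.Read(appBuf)
		if err != nil {
			return
		}
		if _, err := c.Write(appBuf[:n]); err != nil {
			return
		}
	}
}
```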

@lesismal (Contributor, Author) commented May 4, 2023

Also, the benchmark numbers swing high and low between runs, which is odd.

Try leaving a longer gap between rounds, or switch to a different port each time, in case the kernel network stack still has some leftover load from the previous heavy run that hasn't been released yet.

@cexll (Contributor) commented May 4, 2023

Try leaving a longer gap between rounds, or switch to a different port each time, in case the kernel network stack still has some leftover load from the previous heavy run that hasn't been released yet.

I'll add a 30s gap between rounds; every server already uses a different port.

I've also added a GitHub Actions workflow, which is more convenient, and now all of the tests run to completion.
echo
[screenshot]

@lesismal (Contributor, Author) commented May 4, 2023

I tried SetReadBuffer/SetWriteBuffer: in my VM environment nbio only managed 70-odd Mbps with almost no CPU usage. After adding some logging I found that large single writes fail very easily, so there was no follow-up traffic and the numbers came out low. So I removed SetReadBuffer/SetWriteBuffer; don't set them in this benchmark either. I'm not yet sure of the exact cause and will look into it later.
I made no changes to the gevent code, and its numbers are also very low:

root@ubuntu:~/gevent-benchmark# ./bench.sh 

--- BENCH ECHO START ---

--- GO STDLIB ---
2023/05/04 08:05:10 echo server started on port 5001
*** 50 connections, 10 seconds, 6 byte packets
Destination: [127.0.0.1]:5001
Interface lo address [127.0.0.1]:0
Using interface lo to connect to [127.0.0.1]:5001
Ramped up to 100 connections.
Total data sent:     24875.8 MiB (26084187731 bytes)
Total data received: 24922.2 MiB (26132811415 bytes)
Bandwidth per channel: 417.324⇅ Mbps (52165.5 kBps)
Aggregate bandwidth: 20885.611↓, 20846.750↑ Mbps
Packet rate estimate: 1913315.4↓, 1789299.4↑ (12↓, 45↑ TCP MSS/op)
Test duration: 10.0099 s.
--- DONE ---

--- EVIO ---
2023/05/04 08:05:22 echo server started on port 5002 (loops: 1)
*** 50 connections, 10 seconds, 6 byte packets
Destination: [127.0.0.1]:5002
Interface lo address [127.0.0.1]:0
Using interface lo to connect to [127.0.0.1]:5002
Ramped up to 100 connections.
Total data sent:     11519.5 MiB (12079071232 bytes)
Total data received: 11508.9 MiB (12067942575 bytes)
Bandwidth per channel: 192.986⇅ Mbps (24123.2 kBps)
Aggregate bandwidth: 9644.828↓, 9653.722↑ Mbps
Packet rate estimate: 883082.6↓, 828585.6↑ (12↓, 45↑ TCP MSS/op)
Test duration: 10.0099 s.
--- DONE ---

--- EVIOP ---
2023/05/04 08:05:34 echo server started on port 5003 (loops: 1)
*** 50 connections, 10 seconds, 6 byte packets
Destination: [127.0.0.1]:5003
Interface lo address [127.0.0.1]:0
Using interface lo to connect to [127.0.0.1]:5003
Ramped up to 100 connections.
Total data sent:     17716.1 MiB (18576637952 bytes)
Total data received: 17708.3 MiB (18568490295 bytes)
Bandwidth per channel: 297.125⇅ Mbps (37140.7 kBps)
Aggregate bandwidth: 14853.005↓, 14859.523↑ Mbps
Packet rate estimate: 1359948.8↓, 1275403.1↑ (12↓, 45↑ TCP MSS/op)
Test duration: 10.0012 s.
--- DONE ---

--- GEV ---
*** 50 connections, 10 seconds, 6 byte packets
Destination: [127.0.0.1]:5004
Interface lo address [127.0.0.1]:0
Using interface lo to connect to [127.0.0.1]:5004
Ramped up to 100 connections.
Total data sent:     14289.8 MiB (14983954432 bytes)
Total data received: 14238.5 MiB (14930102592 bytes)
Bandwidth per channel: 239.137⇅ Mbps (29892.1 kBps)
Aggregate bandwidth: 11935.315↓, 11978.365↑ Mbps
Packet rate estimate: 1093145.9↓, 1028111.3↑ (12↓, 45↑ TCP MSS/op)
Test duration: 10.0073 s.
--- DONE ---

--- NBIO ---
2023/05/04 08:05:58.668 [INF] NBIO[NB] start listen on: ["tcp@[::]:5005"]
*** 50 connections, 10 seconds, 6 byte packets
Destination: [127.0.0.1]:5005
Interface lo address [127.0.0.1]:0
Using interface lo to connect to [127.0.0.1]:5005
Ramped up to 100 connections.
Total data sent:     26000.2 MiB (27263172608 bytes)
Total data received: 25954.2 MiB (27214982272 bytes)
Bandwidth per channel: 435.382⇅ Mbps (54422.8 kBps)
Aggregate bandwidth: 21749.860↓, 21788.373↑ Mbps
Packet rate estimate: 1991586.8↓, 1870111.1↑ (12↓, 45↑ TCP MSS/op)
Test duration: 10.0102 s.
--- DONE ---

--- GNET ---
2023/05/04 08:06:10 Echo server is listening on :5006 (multi-cores: true, loops: 12)
*** 50 connections, 10 seconds, 6 byte packets
Destination: [127.0.0.1]:5006
Interface lo address [127.0.0.1]:0
Using interface lo to connect to [127.0.0.1]:5006
Ramped up to 100 connections.
Total data sent:     24237.1 MiB (25414467584 bytes)
Total data received: 24196.9 MiB (25372257408 bytes)
Bandwidth per channel: 406.184⇅ Mbps (50772.9 kBps)
Aggregate bandwidth: 20292.298↓, 20326.057↑ Mbps
Packet rate estimate: 1858074.9↓, 1744599.5↑ (12↓, 45↑ TCP MSS/op)
Test duration: 10.0027 s.
--- DONE ---

--- GEVENT ---
go: downloading github.com/go-gevent/gevent v0.0.1
2023/05/04 08:06:27 echo server started on port 5007 (loops: 1)
*** 50 connections, 10 seconds, 6 byte packets
Destination: [127.0.0.1]:5007
Interface lo address [127.0.0.1]:0
Using interface lo to connect to [127.0.0.1]:5007
Ramped up to 100 connections.
Total data sent:     518.6 MiB (543752192 bytes)
Total data received: 199.0 MiB (208617472 bytes)
Bandwidth per channel: 7.514⇅ Mbps (939.2 kBps)
Aggregate bandwidth: 166.678↓, 434.438↑ Mbps
Packet rate estimate: 15263.4↓, 37288.1↑ (12↓, 45↑ TCP MSS/op)
Test duration: 10.013 s.
--- DONE ---

--- BENCH ECHO DONE ---

Here I set nbio's ReadBufferSize to 64k/128k. The [email protected] dependency's read buffer size also looks to be 64k, while the [email protected] dependency defaults to a 32k read buffer, which puts it at a disadvantage.
With both at 64k, across multiple runs nbio is a bit faster than gnet most of the time.

@cexll (Contributor) commented May 4, 2023

Hmm, strange.

@cexll (Contributor) commented May 4, 2023

@lesismal (Contributor, Author) commented May 4, 2023

Hmm, strange.

First fix the nbio test code: that code is most likely already getting a Write error, which is why there's no follow-up traffic. The latest version I see uses the default 32k read buffer and calls SetReadBuffer/SetWriteBuffer; remove that OnOpen handler and set ReadBufferSize to 64k so it matches gnet's size for now.

Take a look at this: https://github.com/go-gevent/gevent-benchmark/actions/runs/4880333689

There's a lot in this workflow I don't follow; where do the results show up?

@lesismal (Contributor, Author) commented May 4, 2023

I just checked: evio's and gev's read buffers are also 64k.

@cexll (Contributor) commented May 4, 2023

I'll change it right away.

Here is the output file:
[screenshot]

@cexll (Contributor) commented May 4, 2023

[screenshot]

Does it run on your side like this?

@lesismal (Contributor, Author) commented May 4, 2023

Does it run on your side like this?

It should; that's exactly what my earlier PR did, the only difference being that I set ReadBufferSize to 128k.

@cexll (Contributor) commented May 4, 2023

OK, I'll dig into it some more. It's the May Day holiday and I just got home.

@cexll (Contributor) commented May 5, 2023

I tested it yesterday and got the same result as you: nbio is the fastest, go-net is next, and gevent is the worst 🤨

@lesismal (Contributor, Author) commented May 5, 2023

For the standard library you can try increasing the read buffer size. If the goal is throughput for a single connection or a small number of connections, one goroutine per conn with a larger read buffer is no problem, and the standard library is plenty.

For evio and gev, I tried setting NumLoops to NumCPU yesterday to match the poller goroutine counts in the nbio and gnet test code (see the sketch below), and saw no obvious improvement. Their buffer sizes are also 64k, yet the gap is rather large; for a plain echo it normally shouldn't differ this much. I haven't read their code in detail, so it's a bit of a mystery.
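A minimal sketch of that NumLoops setting, assuming evio's Events API; the address and handler are illustrative rather than the exact benchmark code:

```go
package main

import (
	"runtime"

	"github.com/tidwall/evio"
)

func main() {
	var events evio.Events
	// Match the poller goroutine count used by the nbio/gnet test code.
	events.NumLoops = runtime.NumCPU()
	events.Data = func(c evio.Conn, in []byte) (out []byte, action evio.Action) {
		out = in // echo the input back directly, no extra buffering
		return
	}
	if err := evio.Serve(events, "tcp://127.0.0.1:5002"); err != nil {
		panic(err)
	}
}
```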

@lesismal (Contributor, Author) commented May 5, 2023

I'll close this for now; feel free to keep the discussion going any time.

lesismal closed this as completed May 5, 2023