Test code optimization #2
One more note: changing the read buffer size has different real-world memory-usage implications for net versus nbio:
Thanks for the additional details. I've only just started studying epoll and don't have much experience, so I appreciate the corrections.
Are you on Slack? I have a few questions I'd like to ask you.
Is there any error output?
I ran the Docker build locally and the nbio part didn't produce that error: root@ubuntu:~/gevent-benchmark# docker build -t gevent_benchmark . (I've omitted the dependency-installation portion of the docker build log; here is the benchmark output:) --- BENCH ECHO START ---
--- GO STDLIB ---
2023/04/29 07:55:51 echo server started on port 5001
*** 50 connections, 10 seconds, 6 byte packets
Destination: [127.0.0.1]:5001
Interface lo address [127.0.0.1]:0
Using interface lo to connect to [127.0.0.1]:5001
Ramped up to 100 connections.
Total data sent: 24783.4 MiB (25987293723 bytes)
Total data received: 24854.9 MiB (26062268784 bytes)
Bandwidth per channel: 416.224⇅ Mbps (52028.0 kBps)
Aggregate bandwidth: 20841.162↓, 20781.206↑ Mbps
Packet rate estimate: 1909153.7↓, 1783689.4↑ (12↓, 45↑ TCP MSS/op)
Test duration: 10.0042 s.
--- DONE ---
--- EVIO ---
2023/04/29 07:56:05 echo server started on port 5002 (loops: 1)
*** 50 connections, 10 seconds, 6 byte packets
Destination: [127.0.0.1]:5002
Interface lo address [127.0.0.1]:0
Using interface lo to connect to [127.0.0.1]:5002
Ramped up to 100 connections.
Total data sent: 12135.3 MiB (12724797440 bytes)
Total data received: 12126.5 MiB (12715575829 bytes)
Bandwidth per channel: 203.387⇅ Mbps (25423.3 kBps)
Aggregate bandwidth: 10165.646↓, 10173.018↑ Mbps
Packet rate estimate: 930726.1↓, 873157.1↑ (12↓, 45↑ TCP MSS/op)
Test duration: 10.0067 s.
--- DONE ---
--- EVIOP ---
2023/04/29 07:56:20 echo server started on port 5003 (loops: 1)
*** 50 connections, 10 seconds, 6 byte packets
Destination: [127.0.0.1]:5003
Interface lo address [127.0.0.1]:0
Using interface lo to connect to [127.0.0.1]:5003
Ramped up to 100 connections.
Total data sent: 20049.2 MiB (21023162368 bytes)
Total data received: 20045.0 MiB (21018729258 bytes)
Bandwidth per channel: 336.121⇅ Mbps (42015.1 kBps)
Aggregate bandwidth: 16804.258↓, 16807.802↑ Mbps
Packet rate estimate: 1538538.8↓, 1442625.2↑ (12↓, 45↑ TCP MSS/op)
Test duration: 10.0064 s.
--- DONE ---
--- GEV ---
*** 50 connections, 10 seconds, 6 byte packets
Destination: [127.0.0.1]:5004
Interface lo address [127.0.0.1]:0
Using interface lo to connect to [127.0.0.1]:5004
Ramped up to 100 connections.
Total data sent: 14828.5 MiB (15548809216 bytes)
Total data received: 14787.0 MiB (15505337472 bytes)
Bandwidth per channel: 248.282⇅ Mbps (31035.2 kBps)
Aggregate bandwidth: 12396.705↓, 12431.461↑ Mbps
Packet rate estimate: 1135140.9↓, 1067000.9↑ (12↓, 45↑ TCP MSS/op)
Test duration: 10.0061 s.
--- DONE ---
--- NBIO ---
2023/04/29 07:56:50.972 [INF] NBIO[NB] start listen on: ["tcp@[::]:5005"]
*** 50 connections, 10 seconds, 6 byte packets
Destination: [127.0.0.1]:5005
Interface lo address [127.0.0.1]:0
Using interface lo to connect to [127.0.0.1]:5005
Ramped up to 100 connections.
Total data sent: 27630.5 MiB (28972711724 bytes)
Total data received: 26478.1 MiB (27764260885 bytes)
Bandwidth per channel: 453.675⇅ Mbps (56709.4 kBps)
Aggregate bandwidth: 22200.612↓, 23166.903↑ Mbps
Packet rate estimate: 2033066.5↓, 1988438.0↑ (12↓, 45↑ TCP MSS/op)
Test duration: 10.0049 s.
--- DONE ---
--- GNET ---
2023/04/29 07:57:06 Echo server is listening on :5006 (multi-cores: true, loops: 12)
*** 50 connections, 10 seconds, 6 byte packets
Destination: [127.0.0.1]:5006
Interface lo address [127.0.0.1]:0
Using interface lo to connect to [127.0.0.1]:5006
Ramped up to 100 connections.
Total data sent: 24455.4 MiB (25643319296 bytes)
Total data received: 24380.3 MiB (25564633692 bytes)
Bandwidth per channel: 409.317⇅ Mbps (51164.6 kBps)
Aggregate bandwidth: 20434.388↓, 20497.283↑ Mbps
Packet rate estimate: 1872212.0↓, 1759295.9↑ (12↓, 45↑ TCP MSS/op)
Test duration: 10.0085 s.
--- DONE ---
--- GEVENT ---
# github.com/go-gevent/gevent/internal
/go/pkg/mod/github.com/go-gevent/[email protected]/internal/internal_linux.go:97:40: cannot use syscall.EPOLLIN | syscall.EPOLLET | syscall.EPOLLOUT (untyped int constant -2147483643) as uint32 value in argument to p.ctlEpoll (overflows)
/go/pkg/mod/github.com/go-gevent/[email protected]/internal/internal_linux.go:102:40: cannot use syscall.EPOLLIN | syscall.EPOLLET | syscall.EPOLLPRI (untyped int constant -2147483645) as uint32 value in argument to p.ctlEpoll (overflows)
/go/pkg/mod/github.com/go-gevent/[email protected]/internal/internal_linux.go:107:40: cannot use syscall.EPOLLIN | syscall.EPOLLET | syscall.EPOLLPRI (untyped int constant -2147483645) as uint32 value in argument to p.ctlEpoll (overflows)
/go/pkg/mod/github.com/go-gevent/[email protected]/internal/internal_linux.go:112:40: cannot use syscall.EPOLLIN | syscall.EPOLLOUT | syscall.EPOLLET (untyped int constant -2147483643) as uint32 value in argument to p.ctlEpoll (overflows)
--- BENCH ECHO DONE ---
./bench-echo.sh: line 14: 885 Killed GOMAXPROCS=1 $2 --port $4
Removing intermediate container 7be8a9908d17
---> b49a7ccc7f6f
Successfully built b49a7ccc7f6f
Successfully tagged gevent_benchmark:latest
Try running it a few more times?
Why would it fail on your side?
No error message; it's just like the screenshot above, it runs up to a certain point and then stops. By the way, I'm on a MacBook Pro with an M1 Pro.
Try running it on Linux first. I don't have an Apple M-series environment here, so it's hard for me to pinpoint the problem.
And if you can, please also fix this nbio issue on M1 for me, haha :joy:, PRs welcome!
Yep, I was running Docker on a Mac; I'll try a pure Linux environment later.
Oh right, you're on Mac Docker, that is a bit odd then. My output above is from Docker inside a Linux VM under VMware on Windows.
gevent kept failing in my environment before, so I hadn't looked at it. I just glanced at the test code: would removing the append here have any side effects? If it can be removed, wouldn't it be faster? Also, are evio/gev/gnet modeled after netty or something? Personally I find this style really hard to use.
I'll be digging into this area soon. Mine is based on evio and eviop, though these frameworks all work in very similar ways. I also added netpoll earlier, but its performance was terrible (not sure if I was using it wrong). I've read nbio's design and it's very good; I didn't expect its performance to be that high.
I ran into the same problem. I opened an issue in their repo earlier asking for the benchmark code, but the maintainers only provided the kitex load-test code, which was puzzling, so I didn't ask again.
For a pure layer-4 echo benchmark like this, the frameworks all perform roughly on par with gnet; parameter settings such as buffer sizes, and slight differences in OS stability while each framework runs, can cause some performance variation, but not much. In terms of usability, though, the other poller frameworks are still far too hard to use. Also, unless you're targeting a massive-connection scenario, the standard library is the better choice. One thing about nbio still bothers me: a tool like tcpkali is not request-response style (nor "send a few requests, wait for the responses, then continue"); it keeps sending data at the configured -r rate, so the server side always has data pending to read.
That said, the high-frequency tcpkali -r scenario, with huge data volume per connection, is a CPU-bound workload, or one where the connection count isn't large. If the connection count is small, the standard library is the better fit; if the connection count is large but the load is CPU-bound, then with nbio, for example, you can configure it to handle messages directly in the IO goroutines, which also avoids the memory issues caused by queueing between the IO and logic goroutine pools.
Maybe it's because I'm on ARM rather than x86.
The ReadBuffer setting amounts to batch reading, reducing the number of Read calls, but it has two preconditions:
These configuration options need to be tuned together according to the actual workload.
Try leaving a longer gap between benchmark rounds, or use a different port for each round, in case the kernel network stack still carries leftover load from the previous heavy run that hasn't been released yet.
I tried SetReadBuffer/SetWriteBuffer, and in my VM environment nbio only reached 70-odd MB with almost no CPU usage. After adding some logging I found that single writes were so large they failed easily, so there was no follow-up traffic and the numbers came out very low. So I removed SetReadBuffer/SetWriteBuffer; don't set them in this benchmark either. I'm not sure of the exact cause yet and will look into it later. root@ubuntu:~/gevent-benchmark# ./bench.sh
--- BENCH ECHO START ---
--- GO STDLIB ---
2023/05/04 08:05:10 echo server started on port 5001
*** 50 connections, 10 seconds, 6 byte packets
Destination: [127.0.0.1]:5001
Interface lo address [127.0.0.1]:0
Using interface lo to connect to [127.0.0.1]:5001
Ramped up to 100 connections.
Total data sent: 24875.8 MiB (26084187731 bytes)
Total data received: 24922.2 MiB (26132811415 bytes)
Bandwidth per channel: 417.324⇅ Mbps (52165.5 kBps)
Aggregate bandwidth: 20885.611↓, 20846.750↑ Mbps
Packet rate estimate: 1913315.4↓, 1789299.4↑ (12↓, 45↑ TCP MSS/op)
Test duration: 10.0099 s.
--- DONE ---
--- EVIO ---
2023/05/04 08:05:22 echo server started on port 5002 (loops: 1)
*** 50 connections, 10 seconds, 6 byte packets
Destination: [127.0.0.1]:5002
Interface lo address [127.0.0.1]:0
Using interface lo to connect to [127.0.0.1]:5002
Ramped up to 100 connections.
Total data sent: 11519.5 MiB (12079071232 bytes)
Total data received: 11508.9 MiB (12067942575 bytes)
Bandwidth per channel: 192.986⇅ Mbps (24123.2 kBps)
Aggregate bandwidth: 9644.828↓, 9653.722↑ Mbps
Packet rate estimate: 883082.6↓, 828585.6↑ (12↓, 45↑ TCP MSS/op)
Test duration: 10.0099 s.
--- DONE ---
--- EVIOP ---
2023/05/04 08:05:34 echo server started on port 5003 (loops: 1)
*** 50 connections, 10 seconds, 6 byte packets
Destination: [127.0.0.1]:5003
Interface lo address [127.0.0.1]:0
Using interface lo to connect to [127.0.0.1]:5003
Ramped up to 100 connections.
Total data sent: 17716.1 MiB (18576637952 bytes)
Total data received: 17708.3 MiB (18568490295 bytes)
Bandwidth per channel: 297.125⇅ Mbps (37140.7 kBps)
Aggregate bandwidth: 14853.005↓, 14859.523↑ Mbps
Packet rate estimate: 1359948.8↓, 1275403.1↑ (12↓, 45↑ TCP MSS/op)
Test duration: 10.0012 s.
--- DONE ---
--- GEV ---
*** 50 connections, 10 seconds, 6 byte packets
Destination: [127.0.0.1]:5004
Interface lo address [127.0.0.1]:0
Using interface lo to connect to [127.0.0.1]:5004
Ramped up to 100 connections.
Total data sent: 14289.8 MiB (14983954432 bytes)
Total data received: 14238.5 MiB (14930102592 bytes)
Bandwidth per channel: 239.137⇅ Mbps (29892.1 kBps)
Aggregate bandwidth: 11935.315↓, 11978.365↑ Mbps
Packet rate estimate: 1093145.9↓, 1028111.3↑ (12↓, 45↑ TCP MSS/op)
Test duration: 10.0073 s.
--- DONE ---
--- NBIO ---
2023/05/04 08:05:58.668 [INF] NBIO[NB] start listen on: ["tcp@[::]:5005"]
*** 50 connections, 10 seconds, 6 byte packets
Destination: [127.0.0.1]:5005
Interface lo address [127.0.0.1]:0
Using interface lo to connect to [127.0.0.1]:5005
Ramped up to 100 connections.
Total data sent: 26000.2 MiB (27263172608 bytes)
Total data received: 25954.2 MiB (27214982272 bytes)
Bandwidth per channel: 435.382⇅ Mbps (54422.8 kBps)
Aggregate bandwidth: 21749.860↓, 21788.373↑ Mbps
Packet rate estimate: 1991586.8↓, 1870111.1↑ (12↓, 45↑ TCP MSS/op)
Test duration: 10.0102 s.
--- DONE ---
--- GNET ---
2023/05/04 08:06:10 Echo server is listening on :5006 (multi-cores: true, loops: 12)
*** 50 connections, 10 seconds, 6 byte packets
Destination: [127.0.0.1]:5006
Interface lo address [127.0.0.1]:0
Using interface lo to connect to [127.0.0.1]:5006
Ramped up to 100 connections.
Total data sent: 24237.1 MiB (25414467584 bytes)
Total data received: 24196.9 MiB (25372257408 bytes)
Bandwidth per channel: 406.184⇅ Mbps (50772.9 kBps)
Aggregate bandwidth: 20292.298↓, 20326.057↑ Mbps
Packet rate estimate: 1858074.9↓, 1744599.5↑ (12↓, 45↑ TCP MSS/op)
Test duration: 10.0027 s.
--- DONE ---
--- GEVENT ---
go: downloading github.com/go-gevent/gevent v0.0.1
2023/05/04 08:06:27 echo server started on port 5007 (loops: 1)
*** 50 connections, 10 seconds, 6 byte packets
Destination: [127.0.0.1]:5007
Interface lo address [127.0.0.1]:0
Using interface lo to connect to [127.0.0.1]:5007
Ramped up to 100 connections.
Total data sent: 518.6 MiB (543752192 bytes)
Total data received: 199.0 MiB (208617472 bytes)
Bandwidth per channel: 7.514⇅ Mbps (939.2 kBps)
Aggregate bandwidth: 166.678↓, 434.438↑ Mbps
Packet rate estimate: 15263.4↓, 37288.1↑ (12↓, 45↑ TCP MSS/op)
Test duration: 10.013 s.
--- DONE ---
--- BENCH ECHO DONE --- Here I set nbio's ReadBufferSize to 64k/123k. I checked and the [email protected] dependency also seems to use a 64k read buffer size, while the [email protected] dependency defaults to a 32k read buffer, which puts it at a disadvantage.
Hmm, strange.
First fix the nbio test code; this code has most likely already hit a Write error, which is why there's no follow-up traffic. I see the latest version defaults to a 32k read buffer and uses SetReadBuffer/SetWriteBuffer; remove that OnOpen and set ReadBufferSize to 64k to match gnet's size first.
There's a lot of this workflow I don't follow; where do the results end up?
I just checked: evio's and gev's read buffers are also 64k.
It should work; my previous PR did exactly this, the only difference being that ReadBufferSize was set to 128k.
OK, I'll look into it some more. It's the May Day holiday and I just got home.
I tested yesterday and got the same results as you: nbio is the fastest, then go-net, and gevent is the worst 🤨
For the standard library, try increasing the read buffer size. If the goal is throughput for a single connection or a small number of connections, one goroutine per connection with a larger read buffer is fine, and the standard library is more than adequate. I tried setting evio's and gev's NumLoops to NumCPU yesterday, matching the poller goroutine count of the nbio and gnet test code, and didn't see a noticeable improvement. Their buffer size is also 64k, yet the gap is quite large; a plain echo test normally shouldn't differ this much. I haven't read their code in detail, so it's a bit puzzling.
I'll close this for now; whenever there's something to discuss, feel free to continue any time.
Thanks for including nbio in the benchmark. I'm dropping by in return, and took the chance to submit a PR:
#1
I submitted it through the web UI, and while writing the commit message for the nbio part my hand slipped and it went in unfinished, so let me complete the explanation here.
Before reading the code I ran the benchmark and found that the numbers for net and nbio were quite low, both far behind gnet, which differs a lot from my own benchmark results:
lesismal/go-net-benchmark#1
Then I compared the code and found that the other frameworks write the read buffer back directly, whereas the nbio code appends into a new buffer, which differs from the other frameworks and causes extra overhead. Writing back directly from the read buffer has no side effects, and doing so aligns the test with the other frameworks.
The other issue is that a load generator like tcpkali is not echo-style (send, read the reply back, send again, in a loop); it sends continuously while receiving continuously, so the server side accumulates a large amount of data in its read buffers. In this kind of benchmark the server-side read buffer size matters a lot: a larger size reduces the number of syscall Reads, effectively a batching optimization that improves efficiency. So I also changed this read buffer size in the nbio code. Since one such buffer hangs off each poller and the number of pollers is never large, even a fairly big buffer size doesn't cost much memory, while read efficiency improves a lot.
I also looked at net: its read buffer size was only 4k, which is quite small, so it had the same problem. To benchmark the real throughput I increased its buffer size too, and the resulting numbers came out much higher.
Apart from gnet, the benchmark numbers for the other frameworks are also fairly low, but I only know those frameworks by name and haven't studied or used them, so I can't say whether they have similar issues.
With the code changes from the PR, I reran the benchmark and both net and nbio improved a lot (gevent-echo-server failed and I didn't investigate further):