Long-lived connections cause server-side failures #490
Your error logs are actually not reported that frequently; there is at least one second between them, so I don't think they are related to your high CPU usage. But your logs do show several problems:
This indicates that other clients are trying to connect to your port with the wrong method or key. You should find out who.
You have too many connections alive. You should raise the file descriptor limit with
This problem does seem to exist (#428). Concretely: after a period with no connections, the first new connection cannot communicate with the server. And the error appears if and only if I initiate a connection.
Switch server to libev or client?
Server
On 13 Apr 2021, at 12:33, ty ***@***.***> wrote:
Because when this problem appeared, I immediately switched to libev with the same configuration.
Switch server to libev or client?
If you switch server implementations, the IV replay filter is not preserved, so replayed/abnormal packets will pass through "normally". I strongly advise against doing that. Keep in mind: after restarting a Shadowsocks server, you must change the password. 1. Which port does the server listen on? Could someone have scanned it with a tool? 2.
The server listens on 21030, and iptables only allows connections from the relay IP. Personally I think being scanned is unlikely, because the error appears only when I connect myself.
Looking at the source code, this error only occurs when the Address of a new connection cannot be read. If even the Address cannot be read, it is very likely a targeted replay attack, or a problem with the relay tunnel's transport. What is your relay setup? Do you use a SIP003 plugin such as simple-obfs?
I don't know the details of the internal tunnel; it is provided by the vendor. The exit -> landing hop is relayed by iptables, with no SIP003 plugin, and I minimized the configuration as much as possible while testing. Concretely, when measuring latency with tcping (clients: QX and Clash), if a connection is initiated after a long idle period, TCP retransmissions can occur (latencies in the thousands of milliseconds), and the error appears at the same time.
As we can see, the error is an unexpected EOF. As far as I know, libev won't print any logs about this error; maybe that's why you think everything works fine with libev. libev doesn't treat EOF as an error.
If you can be sure that there is no attacker trying to replay your connections, then an EOF during the handshake can safely be ignored.
Could you test again without it? I took a glance at the code:
diff --git a/crates/shadowsocks-service/src/server/tcprelay.rs b/crates/shadowsocks-service/src/server/tcprelay.rs
index f54aea8..9e7d79a 100644
--- a/crates/shadowsocks-service/src/server/tcprelay.rs
+++ b/crates/shadowsocks-service/src/server/tcprelay.rs
@@ -13,7 +13,7 @@ use shadowsocks::{
crypto::v1::CipherKind,
net::{AcceptOpts, TcpStream as OutboundTcpStream},
relay::{
- socks5::Address,
+ socks5::{Address, Error as Socks5Error},
tcprelay::{
utils::{copy_from_encrypted, copy_to_encrypted},
ProxyServerStream,
@@ -89,6 +89,13 @@ impl TcpServerClient {
async fn serve(mut self) -> io::Result<()> {
let target_addr = match Address::read_from(&mut self.stream).await {
Ok(a) => a,
+ Err(Socks5Error::IoError(ref err)) if err.kind() == ErrorKind::UnexpectedEof => {
+ debug!(
+ "handshake failed, received EOF before a complete target Address, peer: {}",
+ self.peer_addr
+ );
+ return Ok(());
+ }
Err(err) => {
// https://github.com/shadowsocks/shadowsocks-rust/issues/292
// Here is an easy solution to be compatible with
In the meantime, did you see any error logs?
- ref #490 - Enable clippy on github action
The logs are the same as before; there is no more information.
If you can, you could downgrade to verify whether the version is the cause.
System default settings.
The password is 13 characters of upper/lowercase letters, symbols, and digits.
As I said, you have to raise your fd limit, because of what you have already seen: it is obvious that you have already exhausted your fds. Just start the server with a higher limit and test with the latest v1.10.7.
If you can still reproduce it after increasing the fd limit, could you tell me how I can reproduce it in my local environment? What are your reproduction steps?
I don't have a quick way to reproduce it either. Usually I just open multiple tabs in Firefox playing YouTube videos and leave the tabs open for a long time.
@Flandoll Have you increased your fd limit? It is quite normal when you are running with the system's default fd limit.
About 11 hours ago I started it again. Although the fd count sometimes drops slightly, the overall trend is upward; if this keeps up, the fds will eventually be exhausted.
It shouldn't. 10240 should be enough for normal users. |
Did you see the same problem again? |
The connection failures have not recurred so far, but I've observed that the fd count keeps rising; as of now it has reached 2814.
Please try the nightly build. And I am finding a way to be compatible with the situation I just found above.
Here is another build https://github.com/shadowsocks/shadowsocks-rust/actions/runs/909993185 that tries to purge those half-open connections.
When the CLIENT kills itself without sending anything, the server is left waiting. But most TARGETs will close the connection if there is no data activity, so this should help to purge the half-open connections. The PROXY may be a router.
I noticed one problem while using it.
When using it,
The nightly build only contains that change. Did you compile with the latest master branch?
Yes, commit e32c869 on master.
It shouldn't be off. Maybe that is the root cause? Could you run it and check?
I just tested. With 1.11.0: … With 1.11.0: …
With d61d102: …
Yes. Does it fix your issue?
One interesting finding: Azure's firewall drops idle TCP connections with an RST after 4 minutes of no activity.
So, technically, even a healthy connection that goes idle will be killed there.
I noticed Azure's connection timeout earlier, but in fact those half-open connections were not cleaned up. I even suspected that the firewall simply cut the connection without sending an RST at all.
How was this problem solved in the end? I seem to have the same issue.
Just use v1.11.1. |
I've already updated to 1.11.1, using shadowsocks-v1.11.1.x86_64-unknown-linux-gnu.tar.xz.
As it said, wrong method or key. |
Doesn't Shadowsocks ignore RST? |
Since this project uses the network stack provided by the kernel, it cannot ignore RST.
Ah, OK. I think I saw somewhere that the original project did, but I couldn't find it...
Well, currently there is no way to make that possible. But in most cases, there is no reason to use the default configuration of system. |
@Flandoll Does v1.11.1/2 fix your issue? |
No abnormal fd count has been observed so far.
So it should be considered fixed. Great. |
- default timeout is 24 hours - ref shadowsocks#490 - refactored tunnel copy with copy_bidirectional to avoid unnecessary splits
- ref shadowsocks#490 - fixed bug, ssserver slient-drop doesn't work
After being connected for a while, the server's CPU usage rises abnormally and it becomes impossible to connect to the server.
In my environment, the easiest way to trigger this is to open multiple YouTube video pages, play some of the videos, and then leave the browser open for a long time (the computer may enter sleep mode in between).
This started after I updated the server to 1.9.0 and is still present in 1.10.1, which I'm using now; I don't know whether 1.8.x is affected.
2021-04-05T21:24:47.366094002+00:00 INFO shadowsocks 1.10.1
2021-04-05T21:24:47.367802311+00:00 INFO shadowsocks tcp server listening on *.*.*.*:*, inbound address *.*.*.*:*
2021-04-06T08:57:47.521403191+00:00 WARN handshake failed, maybe wrong method or key, or under reply attacks. peer: *.*.*.*:*, error: unexpected end of file
2021-04-06T08:57:47.770084864+00:00 WARN handshake failed, maybe wrong method or key, or under reply attacks. peer: *.*.*.*:*, error: unexpected end of file
2021-04-06T08:58:15.940106176+00:00 WARN handshake failed, maybe wrong method or key, or under reply attacks. peer: *.*.*.*:*, error: unexpected end of file
2021-04-07T19:00:25.211424806+00:00 WARN handshake failed, maybe wrong method or key, or under reply attacks. peer: *.*.*.*:*, error: unexpected end of file
2021-04-09T06:49:23.888023166+00:00 ERROR tcp server accept failed with error: Too many open files (os error 24)
2021-04-09T06:49:24.888402494+00:00 ERROR tcp server accept failed with error: Too many open files (os error 24)
2021-04-09T06:51:59.888296564+00:00 ERROR tcp server accept failed with error: Too many open files (os error 24)
2021-04-09T06:52:00.890276714+00:00 ERROR tcp server accept failed with error: Too many open files (os error 24)
~
~
~
2021-04-09T23:33:54.829875812+00:00 ERROR tcp server accept failed with error: Too many open files (os error 24)
2021-04-09T23:33:55.903900128+00:00 ERROR tcp server accept failed with error: Too many open files (os error 24)
2021-04-09T23:33:56.929962562+00:00 ERROR tcp server accept failed with error: Too many open files (os error 24)
2021-04-09T23:33:58.038965784+00:00 ERROR tcp server accept failed with error: Too many open files (os error 24)
2021-04-09T23:33:59.084612733+00:00 ERROR tcp server accept failed with error: Too many open files (os error 24)
This is the most recent error output. Apart from the timestamps, the lines that follow are identical, repeated for more than fifty thousand lines.