read ECONNRESET in @grpc/grpc-js but not in grpc package #1994
Comments
Any updates, please?
We have a similar issue that seems to appear only when our Node.js application is deployed on Kubernetes. Here is our stack:

We are getting this error so frequently that it cannot be due to sporadic connectivity issues.
Any updates?
We have a similar issue as well; it normally happens after more than 10 hours of idle time.
Something related to keepalive? With

```js
keepalive: {
  keepaliveTimeMs: ms('5m'),
},
```

I have not gotten a connection reset for several weeks.
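For reference, the `keepalive` object above looks like a wrapper-level setting rather than a raw @grpc/grpc-js option. A minimal sketch of the equivalent when using @grpc/grpc-js directly (the 5-minute value mirrors the snippet above; the option name is the standard gRPC channel argument):

```ts
import { ChannelOptions } from '@grpc/grpc-js';

// Equivalent of keepaliveTimeMs: ms('5m') expressed as a raw channel option.
// Values are plain milliseconds; no ms() helper is needed.
const channelOptions: ChannelOptions = {
  'grpc.keepalive_time_ms': 5 * 60 * 1000, // send a keepalive ping every 5 minutes
};
```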
Any updates?
@bangbang93 After changing the code, have you faced the issue again?
I have been rid of this issue for several months.
@bangbang93 We have the same issue. Aren't you afraid that performance could suffer after setting this option? Five minutes sounds like a lot. There is a comment about this in the source code; here is a link: grpc-node/packages/grpc-js/src/subchannel.ts, line 114 at commit 6764dcc.

Nevertheless, I just applied it to our services; let's see how it plays out.
grpc-node/packages/grpc-js/src/subchannel.ts, lines 109 to 112 at commit 6764dcc
@bangbang93 Sorry, I sent the wrong link. I tried both and I still get the error :( but thanks for helping.
This works for us:

```ts
import { ChannelOptions } from '@grpc/grpc-js';

const channelOptions: ChannelOptions = {
  // Send keepalive pings every 10 seconds; the default is 2 hours.
  'grpc.keepalive_time_ms': 10 * 1000,
  // Time out keepalive pings after 5 seconds; the default is 20 seconds.
  'grpc.keepalive_timeout_ms': 5 * 1000,
  // Allow keepalive pings even when there are no gRPC calls in flight.
  'grpc.keepalive_permit_without_calls': 1,
};
```

✌️
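For context, here is a sketch of where these options plug in on the client side, assuming a service loaded with @grpc/proto-loader; the proto path, package, and service names are hypothetical, and `channelOptions` is the object from the block above:

```ts
import * as grpc from '@grpc/grpc-js';
import * as protoLoader from '@grpc/proto-loader';

// Hypothetical proto file and service, for illustration only.
const packageDefinition = protoLoader.loadSync('example.proto');
const proto = grpc.loadPackageDefinition(packageDefinition) as any;

// The third constructor argument is where the keepalive channel options go.
const client = new proto.example.ExampleService(
  'localhost:50051',
  grpc.credentials.createInsecure(),
  channelOptions,
);
```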
Thank you @HofmannZ. Is that fix reliable for you, or does it just make the problem less evident?
Hey @logidelic, we ended up with the following config for the client:

```ts
// See: https://grpc.github.io/grpc/cpp/md_doc_keepalive.html
const channelOptions: ChannelOptions = {
  // Send keepalive pings every 6 minutes; the default is none.
  // Must be more than GRPC_ARG_HTTP2_MIN_RECV_PING_INTERVAL_WITHOUT_DATA_MS
  // on the server (5 minutes).
  'grpc.keepalive_time_ms': 6 * 60 * 1000,
  // Time out keepalive pings after 5 seconds; the default is 20 seconds.
  'grpc.keepalive_timeout_ms': 5 * 1000,
  // Allow keepalive pings even when there are no gRPC calls in flight.
  'grpc.keepalive_permit_without_calls': 1,
};
```

And the following config for the server:

```ts
// See: https://grpc.github.io/grpc/cpp/md_doc_keepalive.html
const channelOptions: ChannelOptions = {
  // Send keepalive pings every 10 seconds; the default is 2 hours.
  'grpc.keepalive_time_ms': 10 * 1000,
  // Time out keepalive pings after 5 seconds; the default is 20 seconds.
  'grpc.keepalive_timeout_ms': 5 * 1000,
  // Allow keepalive pings even when there are no gRPC calls in flight.
  'grpc.keepalive_permit_without_calls': 1,
};
```

We've been running it in production for a couple of months, and it works reliably.
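A sketch of where the server-side options are applied, assuming @grpc/grpc-js; service registration is omitted, the bind address is hypothetical, and `channelOptions` is the server config from above:

```ts
import * as grpc from '@grpc/grpc-js';

// Channel options can be passed directly to the Server constructor.
const server = new grpc.Server(channelOptions);

server.bindAsync(
  '0.0.0.0:50051',
  grpc.ServerCredentials.createInsecure(),
  (err, port) => {
    if (err) throw err;
    // Required on older @grpc/grpc-js versions; deprecated (and a no-op
    // you can drop) on newer ones where bindAsync alone is enough.
    server.start();
  },
);
```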
Description:
We were using the @grpc/grpc-js package in a Kubernetes cluster with an Alpine image, and recently we got the chance to test in production. Sporadically, we observe read ECONNRESET on the client side, with no logs on the server side. We switched to an older version of @grpc/grpc-js (1.2.4), but the error was still observed.
In one of the microservices we used the grpc package with NestJS; that service never threw read ECONNRESET. So we migrated all the microservices to the [email protected] package, and now we no longer see the read ECONNRESET error. The client takes a fairly long time to connect to the server, around 2 to 3 seconds, but no read ECONNRESET error is observed.
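Until the root cause is fixed, one mitigation we can sketch is retrying calls that fail with a transient status; in @grpc/grpc-js a reset connection typically surfaces as UNAVAILABLE (status code 14) wrapping the underlying read ECONNRESET. The `makeCall` parameter here is hypothetical: any function that performs one promisified unary call.

```ts
import * as grpc from '@grpc/grpc-js';

// Retry a promisified unary call a few times when the channel drops.
async function withRetry<T>(makeCall: () => Promise<T>, attempts = 3): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await makeCall();
    } catch (err: any) {
      lastError = err;
      // Only retry transient UNAVAILABLE errors (e.g. read ECONNRESET).
      if (err?.code !== grpc.status.UNAVAILABLE) throw err;
    }
  }
  throw lastError;
}
```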
Environment:
- @grpc/proto-loader: 0.5.6
- Earlier package: @grpc/grpc-js
- New package: [email protected]

Please let me know if any more details should be added.