Should v1 balancer handle TCP connections to old endpoint? #1831

Closed
gyuho opened this issue Jan 24, 2018 · 6 comments
Comments

@gyuho
Contributor

gyuho commented Jan 24, 2018

What version of gRPC are you using?

v1.7.5

What version of Go are you using (go version)?

1.9.2

What operating system (Linux, Windows, …) and version?

Linux

What did you do?

etcd-io/etcd#9212

What did you expect to see?

Clean TCP shutdown on endpoint switch.

What did you see instead?

New TCP session.

We are using v1 grpc.WithBalancer (planned to upgrade to new balancer grpc/proposal#30).

Does the v1 balancer expect users to manually close the old *grpc.ClientConn when it is disconnected? We get notified on disconnects via the func(error) returned from the Up function.
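For context, the v1 balancer wiring discussed here can be sketched as below. The Address and Balancer types are simplified, locally redefined stand-ins for the deprecated grpc-go v1 balancer API (not the real package), and simpleBalancer is a hypothetical failover balancer, so this compiles on its own:

```go
package main

import "fmt"

// Address and Balancer loosely mirror the deprecated grpc.Balancer v1 API;
// they are redefined here so the sketch builds without grpc-go.
type Address struct{ Addr string }

type Balancer interface {
	// Up is called by gRPC when a connection to addr is established.
	// The returned down func is invoked by gRPC when that connection is lost.
	Up(addr Address) (down func(error))
	// Notify returns a channel on which the balancer sends the full set
	// of addresses gRPC should be connected to.
	Notify() <-chan []Address
}

// simpleBalancer fails over by sending a new address set on Notify
// whenever the pinned address goes down.
type simpleBalancer struct {
	endpoints []string
	pinned    string
	notifyCh  chan []Address
}

func newSimpleBalancer(eps []string) *simpleBalancer {
	return &simpleBalancer{endpoints: eps, notifyCh: make(chan []Address, 1)}
}

func (b *simpleBalancer) Notify() <-chan []Address { return b.notifyCh }

func (b *simpleBalancer) Up(addr Address) func(error) {
	b.pinned = addr.Addr
	return func(err error) {
		// Connection lost: tell gRPC to connect to the remaining endpoints.
		var next []Address
		for _, ep := range b.endpoints {
			if ep != addr.Addr {
				next = append(next, Address{Addr: ep})
			}
		}
		b.notifyCh <- next
	}
}

func main() {
	b := newSimpleBalancer([]string{"server1:2379", "server2:2379"})
	down := b.Up(Address{Addr: "server1:2379"})
	down(fmt.Errorf("connection lost"))
	fmt.Println(<-b.notifyCh) // failover set no longer contains server1
}
```

The question in the thread is whether, after the down callback fires and a new set goes out on Notify, the caller must also close the top-level *grpc.ClientConn, or whether gRPC tears down the old transport itself.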

@menghanl
Contributor

I'm not sure I fully understand your question.

You should always close all ClientConns after use. Otherwise there might be leaked resources, including connections and goroutines.

Each ClientConn maintains all of its underlying TCP connections and will retry if any of them disconnect (essentially goroutines running infinite loops). If the ClientConn is not closed, these are never released.

@gyuho
Contributor Author

gyuho commented Jan 24, 2018

@menghanl We implement a custom grpc.Balancer, passing multiple endpoints via Notify for disconnect failover, and get the initial *grpc.ClientConn with grpc.DialContext(ctx, host, grpc.WithBalancer(b)).

My question is whether the *grpc.ClientConn returned from grpc.DialContext needs to be closed when the pinned address is down, or whether gRPC handles the connection switch through the balancer, i.e., by switching to other endpoints via Notify and Up.

We are asking because, in our use case, the v1 balancer's endpoint switch does not seem to close the previous connection (etcd-io/etcd#9212). We never manually close the *grpc.ClientConn when an endpoint switch happens on the gRPC side.

Thanks.

@menghanl
Contributor

If an address was returned by a previous Notify and removed in the following Notify, the connection will be gracefully closed.
For example, a Notify with [server1, server2] followed by a Notify with [server2] will cause the connection to server1 to be gracefully closed.
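The removal semantics described above can be illustrated with a small, self-contained sketch. The diff helper below is hypothetical (not grpc-go code); it just computes which addresses drop out between two consecutive Notify updates, which is the set of connections the v1 balancer gracefully closes:

```go
package main

import "fmt"

// diff returns the addresses present in old but absent from next. In the
// v1 balancer model, these are the connections gRPC tears down (gracefully,
// after in-flight RPCs finish) when a new address set arrives on Notify.
func diff(old, next []string) []string {
	keep := make(map[string]bool, len(next))
	for _, a := range next {
		keep[a] = true
	}
	var closed []string
	for _, a := range old {
		if !keep[a] {
			closed = append(closed, a)
		}
	}
	return closed
}

func main() {
	// A Notify with [server1, server2] followed by a Notify with [server2]:
	fmt.Println(diff([]string{"server1", "server2"}, []string{"server2"}))
	// server1's connection is the one that gets gracefully closed.
}
```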

Do you have pending RPCs on the old connection that you are trying to close? The connection will be kept open until all RPCs finish.

@gyuho
Contributor Author

gyuho commented Jan 24, 2018

Makes sense.

Do you have pending RPCs on the old connection that you are trying to close? The connection will be kept open until all RPCs finish.

We will double-check on etcd side, and get back to you shortly.

Thanks!

@dfawley
Member

dfawley commented Feb 1, 2018

Hi @gyuho, have you had a chance to look into this yet? Thanks!

@gyuho
Contributor Author

gyuho commented Feb 1, 2018

@dfawley We were busy with the new etcd release, which uses gRPC v1.7.5. And we are rewriting our client balancer against the new gRPC balancer interface anyway.

So we will double-check this with the new balancer implementation (in the next 2-3 months).

Thanks!

@gyuho gyuho closed this as completed Feb 1, 2018
lock bot locked as resolved and limited conversation to collaborators Sep 26, 2018