
Introduce pre-opening of connections in MultiplexedConnectionPool #3248

Closed

sbordet opened this issue Jan 9, 2019 · 9 comments
Labels
Stale For auto-closed stale issues and pull requests

Comments

@sbordet
Contributor

sbordet commented Jan 9, 2019

HTTP/2 requires an additional roundtrip to open a connection, where the client needs to send the preface and wait for the server preface (see #2672).

In case of a spike of requests, it may be possible that HttpClient.maxRequestsQueuedPerDestination is exceeded while the connections are being opened.

Pre-opening connections during startup may allow the client to tolerate the request spike without overflowing the request queue.

@gregw
Contributor

gregw commented Jan 9, 2019

Is this really necessary? Isn't it just that configurations with low values of maxConcurrentStreams will create lots of h2 connections and thus see this extra delay?
Perhaps it would be better to put the effort into h2 connections that assume h2, rather than going via an upgrade, to make connection establishment quicker?

@sbordet
Contributor Author

sbordet commented Jan 9, 2019

@gregw it's not just low maxConcurrentStreams.
Even with moderately high values of maxConcurrentStreams - say 100 - there can be a case where you hit the client with a bazillion requests at the same time, and the time to open the second connection is enough to overflow the queue.
Currently HttpClient does not support upgrade, so it's a straight h2c connection - yet it requires the HTTP/2 preface.

@gregw
Contributor

gregw commented Jan 9, 2019

@sbordet so how do you see this working:

  • a heuristic that works out how many connections to open whenever the connection pool is empty?
  • a configuration for the minimum number of connections to keep open?
  • a method to pre-open a number of connections, and then let load and idle timeout do what they may?

@sbordet
Contributor Author

sbordet commented Jan 10, 2019

My initial thought is to have something similar to ThreadPoolExecutor.prestartAllCoreThreads().

A configurable preOpenCount setter and a preOpen() method, perhaps returning a CompletableFuture to signal when done, or taking a Callback.
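For reference, the JDK analogy mentioned above can be shown with a small stdlib sketch (the pool sizes here are arbitrary; `prestartCoreThreads` is just a local wrapper around the real `ThreadPoolExecutor` API):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PrestartDemo {
    // Eagerly starts all core threads instead of creating them lazily
    // on first task submission; returns how many threads were started.
    public static int prestartCoreThreads(ThreadPoolExecutor executor) {
        return executor.prestartAllCoreThreads();
    }

    public static void main(String[] args) {
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                4, 8, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        int started = prestartCoreThreads(executor);
        System.out.println("Prestarted core threads: " + started);
        executor.shutdown();
    }
}
```

The proposal above is the connection-pool equivalent: pay the connection-opening roundtrip at startup, rather than on the first burst of requests.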

@gregw
Contributor

gregw commented Jan 11, 2019

So having a prestart call suggests that there will also be a minimum-connections configuration? Otherwise you will be able to prestart, then idle timeout will close all the connections, and a burst that arrives after a pause will still suffer. In fact the pool is going to have to actively close and refresh connections, because simply ignoring idle timeouts is not sufficient, as the other end will probably close the connection anyway. Thus we will need a minimum number of connections, and when an idle timeout fires we need to close a connection and create a new one. We then need a way to stagger such reconnections so they don't all happen at once.

@sbordet
Contributor Author

sbordet commented Feb 25, 2019

@gregw I'm not keen on having a "ping" request to keep the connections open. The idea would be that when one connection closes, another is opened, to keep a minimum number of connections open.

Randomizing the idle timeout on the client seems complicated and may lead to unexpected behavior for those that count on the idle timeout to be somehow precise.

Yes we may have an unlucky window where N connections that were never used all idle timeout and close at the same time, but it should not be that complicated to spread their openings.
In fact, if we spread the first opening of those connections, subsequent reopenings will keep the spread.

So perhaps we need minConnections and openSpreadPeriod.
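A minimal sketch of how such an openSpreadPeriod might work, using only the JDK (minConnections and openSpreadPeriod are the hypothetical knobs from this comment, and openConnection stands in for the real connection-opening logic):

```java
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SpreadOpener {
    // Schedules 'count' connection openings evenly across
    // 'spreadPeriodMillis', so that their idle timeouts (and thus any
    // subsequent reopenings) also stay staggered instead of firing at once.
    public static void openSpread(ScheduledExecutorService scheduler,
                                  int count, long spreadPeriodMillis,
                                  Runnable openConnection) {
        long step = count > 1 ? spreadPeriodMillis / (count - 1) : 0;
        for (int i = 0; i < count; i++) {
            scheduler.schedule(openConnection, i * step, TimeUnit.MILLISECONDS);
        }
    }
}
```

As the comment notes, spreading only the first openings should be enough: each reopening inherits the offset of the connection it replaces.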

Thoughts?

@stale

stale bot commented Feb 25, 2020

This issue has been automatically marked as stale because it has been a full year without activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the Stale For auto-closed stale issues and pull requests label Feb 25, 2020
@sbordet sbordet removed the Stale For auto-closed stale issues and pull requests label Feb 25, 2020
@sbordet
Contributor Author

sbordet commented Aug 13, 2020

ConnectionPool.preCreateConnections(int) has been introduced by #4975.

The use case would be to pre-create the connections because you know you're going to use them, but then they will undergo the usual logic for idle timeout and eventually the pool could go back to zero connections opened.
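The resulting contract can be illustrated with a self-contained sketch. Note that WarmableConnectionPool and CountingPool below are local stand-ins written for this illustration, mirroring the shape of Jetty's ConnectionPool.preCreateConnections(int) as described here; they are not the real Jetty types:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical local mirror of the preCreateConnections contract.
interface WarmableConnectionPool {
    CompletableFuture<Void> preCreateConnections(int connectionCount);
}

public class PreCreateExample {
    // Toy pool that just counts "opened" connections; a real pool
    // would open sockets and perform the HTTP/2 preface exchange.
    static class CountingPool implements WarmableConnectionPool {
        final AtomicInteger opened = new AtomicInteger();

        @Override
        public CompletableFuture<Void> preCreateConnections(int connectionCount) {
            for (int i = 0; i < connectionCount; i++)
                opened.incrementAndGet();
            return CompletableFuture.completedFuture(null);
        }
    }

    public static void main(String[] args) {
        CountingPool pool = new CountingPool();
        // Warm up before the traffic spike; afterwards the connections
        // follow the usual idle-timeout logic, so the pool may still
        // drain back to zero connections.
        pool.preCreateConnections(2)
            .thenRun(() -> System.out.println("Warmed up: " + pool.opened.get()));
    }
}
```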

I'm not sure we should provide more complicated behavior for now, such as minConnections etc.: it is always difficult to tune, and in any case it may never be respected, because idle timeouts can fire at the same time.

@gregw further thoughts?

@github-actions

github-actions bot commented Mar 28, 2022

This issue has been automatically marked as stale because it has been a full year without activity. It will be closed if no further activity occurs. Thank you for your contributions.

@github-actions github-actions bot added the Stale For auto-closed stale issues and pull requests label Mar 28, 2022
@sbordet sbordet closed this as completed Mar 28, 2022