
limiting request queue size #514

Closed
xpiwo opened this issue Sep 25, 2016 · 8 comments
xpiwo commented Sep 25, 2016

What do you think about adding a limit on the number of requests that can be queued (not just queueTimeout)?

The goal, in case of a network spike, is to avoid adding thousands of requests to the queue, most of which are already 'abandoned' because the user refreshed, adding yet another request to the queue.

A _connRequestQueue.length > x check should work.

Thanks.
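The check proposed above could be sketched roughly as follows. This is only an illustration: `_connRequestQueue` is an internal, undocumented pool property, `getConnectionWithLimit` is a hypothetical wrapper name, and the pool is mocked here so the behavior can be shown without a database.

```javascript
// Hypothetical sketch: reject new work when the pool's internal
// request queue grows past a limit, instead of queueing it.
function getConnectionWithLimit(pool, maxQueued) {
  if (pool._connRequestQueue.length > maxQueued) {
    return Promise.reject(new Error('Request queue is full'));
  }
  return pool.getConnection();
}

// Mock pool, used only to illustrate the behavior without a database.
const mockPool = {
  _connRequestQueue: new Array(1000),
  getConnection: () => Promise.resolve({ close: () => Promise.resolve() })
};

getConnectionWithLimit(mockPool, 500).catch(err => console.log(err.message));
// prints "Request queue is full"
```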


dmcghan commented Sep 25, 2016

@xpiwo Do you want to get an error when the max queue size is exceeded?

Thousands? How fast are your users refreshing? :) How does a queue limit solve the root issue (users refreshing, presumably due to long wait times)?

Have you thought about how returning an error might affect other users who are not refreshing a lot? I would think you'd want to implement this higher up in the stack to deal with abusers only.

Perhaps a more intelligent solution would be to bump the previous request from the queue and add the new request to the end (or provide an API for you to do this somehow). Of course, that would require more thought...


xpiwo commented Sep 26, 2016

Yes, any error will do: either the existing 'Cannot open further sessions' or maybe a new error such as 'Request queue is full'.

In our environment the network may in some cases be down or lagging for 5-10 minutes, and the queue easily gets flooded with such requests as some users refresh and others think it's a good idea to re-login.

Currently we disable the queue, everyone gets errors for 5 minutes, and then we resume operations.

The root cause is the unnecessary load/computation (since the accumulated queue will still be processed), which delays the perceived time at which the issue is resolved.

This is an edge case where the queue is 'full'; in normal circumstances people will not see any errors.

Canceling requests / bumping them out of the queue: I don't think that is common in a Node.js workflow. It would require mapping each query to a request, listening to some end/close event, and then deciding whether to cancel it.


cjbj commented Sep 26, 2016

@xpiwo If your net is down for such a long time, it sounds like you have bigger problems to look at first.

I'm with Dan that perhaps any throttling could be done above Node. For production apps this possibly could be handled in a load balancer.


xpiwo commented Sep 26, 2016

There is nothing to throttle: when the network is not acting up, this is an acceptable load.

Once in a while the network may lag, and the queue takes care of it.

On rarer occasions the network may lag a lot, and I want to avoid filling the queue with a bunch of useless requests that were already abandoned.

Anyhow, for now I guess I will patch my get_connection, since pool._connRequestQueue.length is exposed, and do the check myself.

Thanks.

@sagiegurari

Just a suggestion: maybe time out slow requests (they will be slow, since you are flooded with so many requests) using some Express middleware like https://github.com/expressjs/timeout. That way you might avoid going to Oracle for old requests.
Not sure if it will work around your problem, but it might help.
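The timeout idea above can be sketched without any middleware dependency: race the queued work against a deadline, which is roughly what a request-timeout middleware does for the whole HTTP request. The names `withDeadline` and `slowConnection` are illustrative, not part of any library.

```javascript
// Dependency-free sketch of the timeout idea: race a pending
// getConnection()-style promise against a deadline.
function withDeadline(promise, ms) {
  let timer;
  const deadline = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error('request timed out')), ms);
  });
  return Promise.race([promise, deadline]).finally(() => clearTimeout(timer));
}

// Simulate a connection that only becomes available after 100 ms.
const slowConnection = new Promise(res => setTimeout(res, 100, 'conn'));

withDeadline(slowConnection, 10).catch(err => console.log(err.message));
// prints "request timed out"
```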


xpiwo commented Sep 26, 2016

You are right that timing how long it took to get the connection could allow skipping the execution itself, but I believe queueTimeout provides similar functionality.

We use nginx and it kills/times out slow requests, so I think .on(end/close) would also work, but adding events per request (or somehow globally) may affect performance.

Another reason I prefer to avoid endless queues is that the application looks unresponsive: you make a request and nothing happens, versus getting an error telling you the queue is full.

Thanks.


cjbj commented Dec 8, 2016

@xpiwo since you are dealing with network outages, have you had a chance to look at 1.12.0-dev's poolPingInterval? Also see Connection Pool Pinging
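For reference, poolPingInterval is a pool creation setting (in seconds) that makes the pool ping a connection that has been idle longer than the interval before handing it out, so connections broken by a network outage are detected and replaced. A minimal config sketch (credentials and connect string are placeholders):

```javascript
// Sketch of the pool-pinging setting discussed above (node-oracledb
// 1.12.0+). Only the config object is shown; creating the pool
// requires a database.
const pingConfig = {
  user: 'hr',                   // placeholder credentials
  password: 'secret',
  connectString: 'localhost/XEPDB1',
  poolPingInterval: 60          // ping connections idle for >= 60 seconds
};
// const pool = await oracledb.createPool(pingConfig);
console.log(pingConfig.poolPingInterval); // prints 60
```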

cjbj added a commit that referenced this issue Mar 6, 2020

cjbj commented Mar 6, 2020

A pool.queueMax attribute was added to the master branch for the future node-oracledb 5.0.
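With that attribute, the limit requested in this issue becomes a pool setting: getConnection() requests beyond queueMax are rejected immediately instead of being queued. A minimal config sketch (credentials and connect string are placeholders):

```javascript
// Sketch of the node-oracledb 5.0 setting that resolves this issue.
// Only the config object is shown; creating the pool requires a database.
const poolConfig = {
  user: 'hr',                   // placeholder credentials
  password: 'secret',
  connectString: 'localhost/XEPDB1',
  poolMax: 10,
  queueTimeout: 60000,          // ms a queued request may wait
  queueMax: 500                 // max queued getConnection() requests
};
// const pool = await oracledb.createPool(poolConfig);
console.log(poolConfig.queueMax); // prints 500
```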

cjbj closed this as completed Mar 6, 2020