limiting request queue size #514
@xpiwo Do you want to get an error when the max queue size is exceeded? Thousands? How fast are your users refreshing? :) How does a queue limit solve the root issue (users refreshing, presumably due to long wait times)? Have you thought about how returning an error might affect other users who are not refreshing a lot? I would think you'd want to implement this higher up in the stack to deal with abusers only. Perhaps a more intelligent solution would be to bump the previous request from the queue and add the new request to the end (or provide an API for you to do this somehow). Of course, that would require more thought...
yes, any error will do: either the existing 'Cannot open further sessions' or maybe some new error like 'Request queue is full'. in our environment, the network may sometimes be down or lagging for 5-10 minutes, and the queue easily gets flooded with such requests: some users refresh, others think it's a good idea to re-login. currently we disable the queue, everyone gets errors for 5 minutes, and then we resume operations. the root cause is the unnecessary load/computation (since the accumulated queue will still be processed), which delays the perceived time at which the issue was resolved. this is an edge case where the queue is 'full'; under normal circumstances people will not see any errors. canceling requests / bumping them out of the queue: i don't think that is common in a node.js workflow. you would need to map/assign each query to a request, listen to some end/close event, and then decide whether to cancel it.
@xpiwo If your net is down for such a long time, it sounds like you have bigger problems to look at first. I'm with Dan that perhaps any throttling could be done above Node. For production apps this possibly could be handled in a load balancer.
there is nothing to throttle: when the network is not acting up, this is an acceptable load. once in a while the network may lag, and the queue will take care of it. on rarer occasions, the network may lag a lot, and i want to prevent filling the queue with a bunch of useless requests that were already abandoned. anyhow, for now i guess i will patch my get_connection. thanks.
just a suggestion: maybe time out slow requests (they will be slow since you are flooded with so many requests) by using some express middleware like: https://github.com/expressjs/timeout
you are right that timing how long it took to get the connection may allow skipping the execution itself, but i believe queueTimeout provides similar functionality. we use nginx and it kills/times out slow requests, so i think .on(end/close) will also work, but adding events per request, or somehow globally, may affect performance. another reason i prefer to avoid endless queues is that the application looks unresponsive: you make a request and nothing happens, vs. getting an error telling you the queue is full. thanks.
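The queueTimeout idea above can be sketched in plain Node without the driver. This is only an illustration, assuming hypothetical names (MAX_WAIT_MS, fakeAcquire, getConnectionWithTimeout); it is not the node-oracledb implementation, it just mimics the behavior: a queued connection request is rejected once it has waited longer than a configured limit.

```javascript
// Upper bound on how long a request may wait for a connection
// (hypothetical constant, standing in for the pool's queueTimeout).
const MAX_WAIT_MS = 100;

// Stand-in for a slow pool acquisition, e.g. when the pool is exhausted.
function fakeAcquire(delayMs) {
  return new Promise((resolve) => setTimeout(() => resolve('conn'), delayMs));
}

// Race the acquisition against a timer, the way queueTimeout would:
// whichever settles first wins.
function getConnectionWithTimeout(delayMs) {
  const timeout = new Promise((_, reject) =>
    setTimeout(() => reject(new Error('queue timeout')), MAX_WAIT_MS));
  return Promise.race([fakeAcquire(delayMs), timeout]);
}

async function demo() {
  const fast = await getConnectionWithTimeout(10); // resolves before the deadline
  let slowError = null;
  try {
    await getConnectionWithTimeout(500); // waits too long, gets rejected
  } catch (err) {
    slowError = err.message;
  }
  return { fast, slowError };
}

demo().then((r) => console.log(r.fast, r.slowError));
```

The key difference from a size limit is that the request is still queued; it just cannot wait forever, so an outage converts into errors after MAX_WAIT_MS rather than into an unbounded backlog.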
@xpiwo since you are dealing with network outages, have you had a chance to look at 1.12.0-dev's |
A |
what do you think about adding a limit to the number of requests that can be queued (and not just queueTimeout)?
the goal, in case of some network spike, is to avoid adding thousands of requests to the queue, most of which are 'abandoned' because the user already refreshed, adding yet another request to the queue.
a _connRequestQueue.length > x check should work.
thanks.
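The size check proposed above can be sketched as follows. This is a minimal illustration under stated assumptions: BoundedPool, MAX_QUEUE_LENGTH, and _dispatch are hypothetical names, and the _connRequestQueue field only mirrors the driver's internal queue name; none of this is actual node-oracledb code.

```javascript
// Cap on how many pending getConnection requests may wait
// (hypothetical constant, analogous to the proposed limit x).
const MAX_QUEUE_LENGTH = 3;

class BoundedPool {
  constructor() {
    this._connRequestQueue = []; // callbacks waiting for a free connection
  }

  getConnection(cb) {
    // Fail fast instead of letting the queue grow without bound.
    if (this._connRequestQueue.length >= MAX_QUEUE_LENGTH) {
      return process.nextTick(() => cb(new Error('Request queue is full')));
    }
    this._connRequestQueue.push(cb);
  }

  // Called when a connection is released and the next waiter can be served.
  _dispatch(conn) {
    const cb = this._connRequestQueue.shift();
    if (cb) cb(null, conn);
  }
}

// Usage: with a limit of 3, the 4th concurrent request is refused
// immediately rather than queued behind the others.
const pool = new BoundedPool();
const results = [];
for (let i = 0; i < 4; i++) {
  pool.getConnection((err) => results.push(err ? err.message : 'served'));
}
```

Rejecting at enqueue time is what makes the outage visible to users ("queue is full") instead of leaving the app looking unresponsive while abandoned requests pile up.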