Conversation
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a small, essential subset of tests runs to quickly catch errors. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge. 🚀
Summary of Changes
Hello @nzinov, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces a new mechanism to prevent the vLLM scheduler's waiting queue from becoming overloaded. It adds a configuration option to enable queue length limiting, which, when active, will reject new incoming requests if the number of waiting requests exceeds the number of currently running requests, returning an HTTP 503 error to the client. This helps maintain service stability under high load.
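Pieced together from the highlights below, the rejection path amounts to roughly the following (a minimal sketch, not the PR's actual code; the real logic lives in `vllm/v1/core/sched/scheduler.py` and `vllm/v1/engine/exceptions.py`, and the `waiting`/`running` containers here are simplified stand-ins):

```python
class SchedulerWaitingQueueFullError(Exception):
    """Raised when the scheduler's waiting queue is at capacity."""


class Scheduler:
    def __init__(self, limit_queue_length: bool = False) -> None:
        self.limit_queue_length = limit_queue_length
        self.waiting: list = []  # requests queued for scheduling
        self.running: list = []  # requests currently being served

    def add_request(self, request) -> None:
        # Reject once the waiting queue has caught up with the number of
        # running requests; note the >= comparison flagged in the review.
        if self.limit_queue_length and len(self.waiting) >= len(self.running):
            raise SchedulerWaitingQueueFullError(
                "Waiting queue is full, try again later")
        self.waiting.append(request)
```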
Highlights
- New Configuration Parameter: Introduced `limit_queue_length` (a boolean flag) to `SchedulerConfig` in `vllm/config.py`. When enabled, this parameter activates a mechanism to prevent the scheduler's waiting queue from growing excessively.
- Queue Limiting Logic: Implemented logic within the scheduler's `add_request` method in `vllm/v1/core/sched/scheduler.py`. If `limit_queue_length` is true, new requests are rejected by raising a `SchedulerWaitingQueueFullError` when the number of waiting requests is greater than or equal to the number of currently running requests.
- New Exception Type: Defined a new exception class, `SchedulerWaitingQueueFullError`, in `vllm/v1/engine/exceptions.py` to specifically signal that the scheduler's waiting queue capacity has been exceeded.
- API Error Handling: Integrated the new `SchedulerWaitingQueueFullError` into the OpenAI serving endpoints (`vllm/entrypoints/openai/serving_chat.py` and `vllm/entrypoints/openai/serving_completion.py`). When this exception is caught, the API returns an HTTP 503 (Service Unavailable) response to the client (see the sketch after this list).
- New CLI Argument (unused by this PR's core logic): Added a `--max-waiting-queue-length` command-line argument in `vllm/engine/arg_utils.py`. This argument does not appear to be utilized by the `limit_queue_length` flag's logic introduced in this pull request, which uses a dynamic comparison of waiting vs. running requests.
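On the serving side, the catch-and-convert step referenced in the API Error Handling highlight could look like this (a hedged sketch: `engine.add_request` is a hypothetical stand-in for handing the request to the engine, and the FastAPI `JSONResponse` here replaces vLLM's actual error-response helpers):

```python
from http import HTTPStatus

from fastapi.responses import JSONResponse


async def create_chat_completion(engine, request):
    # SchedulerWaitingQueueFullError is the exception sketched above.
    try:
        result = await engine.add_request(request)
    except SchedulerWaitingQueueFullError:
        # Map the scheduler-level rejection to HTTP 503 so clients can tell
        # "server overloaded" apart from "request invalid".
        return JSONResponse(
            content={"error": "The waiting queue is full, please retry later."},
            status_code=HTTPStatus.SERVICE_UNAVAILABLE,
        )
    return result
```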
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Footnotes
¹ Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution. ↩
Code Review
This pull request introduces a feature to limit the request queue length, which is a great addition for managing server load. The core logic seems sound, but there's an issue with the new command-line argument that could cause a runtime error. I've provided a fix for that. I also found a minor inaccuracy in a docstring and suggested a clarification. Once these issues are addressed, the PR should be in good shape.
```python
scheduler_group.add_argument(
    "--max-waiting-queue-length",
    **scheduler_kwargs["max_waiting_queue_length"])
```
The argument name `--max-waiting-queue-length` is misleading for a boolean flag. A more appropriate name would be `--limit-queue-length`, matching the configuration parameter `limit_queue_length`. Also, the code will raise a `KeyError` because `scheduler_kwargs` does not contain a key `"max_waiting_queue_length"`; it should be accessing `scheduler_kwargs["limit_queue_length"]`.
Suggested change:
```diff
-scheduler_group.add_argument(
-    "--max-waiting-queue-length",
-    **scheduler_kwargs["max_waiting_queue_length"])
+scheduler_group.add_argument(
+    "--limit-queue-length",
+    **scheduler_kwargs["limit_queue_length"])
```
```python
limit_queue_length: bool = False
"""If set, incoming requests will be rejected with HTTP 503
when number of waiting requests exceeds number of running requests."""
```
The docstring states that requests are rejected when the number of waiting requests exceeds the number of running requests. However, the implementation in `vllm/v1/core/sched/scheduler.py` uses the condition `len(self.waiting) >= len(self.running)`, which means rejection happens when the number is greater than or equal. To avoid confusion, update the docstring to accurately reflect the implementation.
Suggested change:
```diff
-when number of waiting requests exceeds number of running requests."""
+when the number of waiting requests is greater than or equal to
+the number of running requests."""
```
This is one of the desirable features we want in vLLM, btw! Some way to perform admission control/back pressure to control the queue depth.
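Since the back-pressure signal is just HTTP 503, clients can pair it with retry logic; a minimal sketch (the endpoint URL and payload shape are placeholders, and the `requests` dependency is an assumption):

```python
import time

import requests


def complete_with_backoff(url: str, payload: dict, max_retries: int = 5) -> dict:
    """POST to an OpenAI-compatible endpoint, backing off on HTTP 503."""
    delay = 0.5
    for _ in range(max_retries):
        resp = requests.post(url, json=payload, timeout=60)
        if resp.status_code != 503:
            resp.raise_for_status()
            return resp.json()
        # The server rejected the request due to queue pressure:
        # wait and retry with exponential backoff.
        time.sleep(delay)
        delay *= 2
    raise RuntimeError("Server still overloaded after retries")
```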
This is being added in #21352 |
No description provided.