
[Core][Feat] Add max-waiting-queue-length parameter to reject requests when waiting queue is full#27064

Open
chaunceyjiang wants to merge 33 commits into vllm-project:main from chaunceyjiang:reject

Conversation

@chaunceyjiang
Collaborator

@chaunceyjiang chaunceyjiang commented Oct 17, 2025

Purpose

Feature implementation for #18826.

Closes #18826
Closes #21352

Test Plan

vllm serve /home/jovyan/qwen3-8b  --no-enable-prefix-caching --max-waiting-queue-length 1
hey -n 1000 -c 50 -m POST \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Hello! What can you do?"} 
    ],
    "temperature": 0.7
  }' http://localhost:8000/v1/chat/completions
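With this feature, clients should expect occasional HTTP 503 responses when the waiting queue is full and retry with backoff. The sketch below shows one illustrative client-side pattern; it is not part of vLLM, and `send` is a placeholder for whatever callable actually POSTs to `/v1/chat/completions`.

```python
import time

def call_with_retry(send, retries=5, base_delay=0.05):
    """Retry send() while the server answers 503 (waiting queue full).

    send is any zero-argument callable returning (status_code, body); in
    practice it would POST to /v1/chat/completions. Illustrative only.
    """
    status, body = send()
    for attempt in range(retries):
        if status != 503:
            return status, body
        time.sleep(base_delay * (2 ** attempt))  # exponential backoff
        status, body = send()
    return status, body
```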

Test Result

(APIServer pid=18343) INFO:     127.0.0.1:52460 - "POST /v1/chat/completions HTTP/1.1" 200 OK
(APIServer pid=18343) INFO:     127.0.0.1:52122 - "POST /v1/chat/completions HTTP/1.1" 200 OK
(APIServer pid=18343) ERROR 10-17 02:39:34 [serving_engine.py:740] Request chatcmpl-ebe199d1fcc84afd89db45e086f532c1 was rejected by the vLLM model's safety system
(APIServer pid=18343) INFO:     127.0.0.1:52100 - "POST /v1/chat/completions HTTP/1.1" 503 Service Unavailable
(APIServer pid=18343) INFO:     127.0.0.1:52184 - "POST /v1/chat/completions HTTP/1.1" 200 OK
(APIServer pid=18343) INFO:     127.0.0.1:52304 - "POST /v1/chat/completions HTTP/1.1" 200 OK
(APIServer pid=18343) INFO:     127.0.0.1:52198 - "POST /v1/chat/completions HTTP/1.1" 200 OK
(APIServer pid=18343) INFO:     127.0.0.1:52394 - "POST /v1/chat/completions HTTP/1.1" 200 OK

TODO

  • End-to-end (e2e) tests
  • Unit tests (ut)
Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.



Note

Implements a hard cap on the scheduler waiting queue and surfaces rejections to clients.

  • Adds SchedulerConfig.max_waiting_queue_length with CLI --max-waiting-queue-length; plumbs through EngineArgs into engine config
  • Scheduler now rejects requests when the waiting queue reaches the limit, records REJECTED event, and returns outputs with finish_reason rejected
  • Extends API enums/constants: adds FinishReason.REJECTED ("rejected") and EngineCoreEventType.REJECTED
  • OpenAI serving maps finish_reason rejected to GenerationError with ServiceUnavailableError (HTTP 503) and preserves error type/status in responses

Written by Cursor Bugbot for commit c67d75018f29288602ca75f8438bf0f0e0d02aa1.
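The admission behavior summarized above can be sketched as a bounded waiting queue. All names below are hypothetical stand-ins, not vLLM's actual `Scheduler` API; `None` models the default unbounded behavior.

```python
from collections import deque

class BoundedWaitingQueue:
    """Minimal sketch of a capped waiting queue (hypothetical names)."""

    def __init__(self, max_waiting_queue_length=None):
        # None means unbounded, i.e. the behavior before this PR.
        self.max_waiting_queue_length = max_waiting_queue_length
        self.waiting = deque()

    def try_enqueue(self, request_id: str) -> bool:
        if (self.max_waiting_queue_length is not None
                and len(self.waiting) >= self.max_waiting_queue_length):
            # Caller would record a REJECTED event and finish the request
            # with finish_reason "rejected" (surfaced to clients as HTTP 503).
            return False
        self.waiting.append(request_id)
        return True
```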


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.


@chaunceyjiang
Collaborator Author

/cc @robertgshaw2-redhat @njhill @hmellor PTAL.

@WoutDeRijck

Nice feature; we needed this and had implemented it ourselves!

However, we found a bug when using LoRA adapters: removing the request from the running queue fails when the request has already been aborted.

Fix: Use discard() instead of remove() to avoid KeyError exceptions:

vllm/v1/metrics/stats.py


def finish_request(self, req_state: 'RequestState'):
    if req_state.lora_name is None:
        return
    lora_stats = self.lora_name_to_stats[req_state.lora_name]
    lora_stats.waiting_requests.discard(req_state.request_id)
    lora_stats.running_requests.discard(req_state.request_id)

The issue is that remove() raises a KeyError if the element doesn't exist, while discard() safely handles the case where the request_id may have already been removed or never added.
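The difference is easy to demonstrate with a plain Python set:

```python
running_requests = {"req-1"}

# discard() is a safe no-op when the element is missing:
running_requests.discard("req-2")

# remove() raises KeyError for a missing element:
try:
    running_requests.remove("req-2")
except KeyError:
    print("remove() raised KeyError for a missing request_id")
```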

…s when waiting queue is full

Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>
@chaunceyjiang
Collaborator Author

> Quick thought: Can we reject requests faster at a higher level, e.g. at AsyncLLM.add_request? This will allow us to avoid the input/output processing.

Hi @njhill @orozery

I’ve implemented another PR based on your suggestions. The new implementation avoids the input/output processing. Could you take another look?

#37413
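The "reject early" idea is that a full queue should be detected before any expensive input processing happens. The toy model below illustrates that ordering; every name is hypothetical and stands in for the real AsyncLLM front end.

```python
import asyncio

class QueueFullError(Exception):
    """Raised before any input processing when the waiting queue is full."""

class ToyFrontend:
    """Toy model of rejecting at admission time (hypothetical names)."""

    def __init__(self, max_waiting_queue_length: int):
        self.max_waiting_queue_length = max_waiting_queue_length
        self.waiting: list[str] = []
        self.processed = 0  # counts expensive input-processing calls

    async def add_request(self, prompt: str) -> None:
        if len(self.waiting) >= self.max_waiting_queue_length:
            # Reject before tokenization, so no work is wasted on it.
            raise QueueFullError("waiting queue full")
        self.processed += 1  # stands in for tokenization / input processing
        self.waiting.append(prompt)

async def demo() -> int:
    fe = ToyFrontend(max_waiting_queue_length=1)
    await fe.add_request("hello")
    try:
        await fe.add_request("world")  # rejected; skips input processing
    except QueueFullError:
        pass
    return fe.processed
```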



Development

Successfully merging this pull request may close these issues.

[RFC]: Controlling the maximum length of the waiting queue

10 participants