
[Responses][CI] Filter negative token IDs in schema fuzz test to avoid 500 errors#35231

Merged
DarkLight1337 merged 4 commits into vllm-project:main from ROCm:akaratza_entrypoints_openai_server
Feb 25, 2026

Conversation

@AndreasKaratzas (Collaborator) commented Feb 24, 2026

Adds server-side validation to reject negative token IDs in POST /v1/completions, which previously caused an unhandled 500 Internal Server Error.

Problem

When the prompt field contains negative token IDs (e.g. {"prompt": [[-1]]}), the server crashes with a 500 instead of returning a proper 400 validation error. This was discovered via schemathesis fuzz testing in test_openapi_stateless.

curl -X POST -H 'Content-Type: application/json' \
  -d '{"prompt": [[-1]]}' \
  http://localhost:8000/v1/completions
# Before: 500 Internal Server Error
# After:  400 Bad Request with descriptive error message

Fix

Added a model_validator to CompletionRequest (vllm/entrypoints/openai/completion/protocol.py) that checks that all token IDs in prompt are non-negative, consistent with how other fields on the same class, such as logprobs, prompt_logprobs, and cache_salt, are already validated.

Testing

  • The existing test_openapi_stateless schemathesis fuzz test now passes for POST /v1/completions without needing a test-side filter (the server returns a 400, which schemathesis treats as valid behavior).
  • Added explicit test cases in test_completion_error.py for both flat ([-1]) and nested ([[-1]]) negative token ID prompts, verifying the validator returns a proper validation error.
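The rejection logic exercised by these tests can be sketched as a small stand-alone predicate (a minimal sketch; `has_negative_token_id` is a hypothetical helper for illustration, not vLLM's actual code):

```python
def has_negative_token_id(prompt) -> bool:
    """Return True if any token ID in a flat or nested prompt is negative."""
    if isinstance(prompt, int):
        return prompt < 0
    if isinstance(prompt, list):
        # Recurse so both flat ([-1]) and nested ([[-1]]) prompts are covered.
        return any(has_negative_token_id(item) for item in prompt)
    return False  # strings and other prompt forms carry no token IDs


# Mirrors the explicit test cases: flat and nested negatives are rejected.
assert has_negative_token_id([-1]) is True
assert has_negative_token_id([[-1]]) is True
assert has_negative_token_id([[1, 2], [3]]) is False
assert has_negative_token_id("plain text prompt") is False
```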

Signed-off-by: Andreas Karatzas <akaratza@amd.com>
…d 500 errors

Signed-off-by: Andreas Karatzas <akaratza@amd.com>
Signed-off-by: Andreas Karatzas <akaratza@amd.com>
@gemini-code-assist bot (Contributor) left a comment


Code Review

This pull request adds server-side validation to reject negative token IDs in completion requests, which previously caused 500 errors. The change is implemented via a model_validator in CompletionRequest and is accompanied by new tests. The implementation is correct, but I've suggested a simplification to the validation logic to improve readability and maintainability by removing a redundant check and a nested loop.

Comment on lines +390 to +406
        if isinstance(prompt, list):
            for item in prompt:
                if isinstance(item, str):
                    continue
                if isinstance(item, int):
                    if item < 0:
                        raise VLLMValidationError(
                            "Token IDs in `prompt` must be non-negative.",
                            parameter="prompt",
                        )
                elif isinstance(item, list):
                    for token_id in item:
                        if isinstance(token_id, int) and token_id < 0:
                            raise VLLMValidationError(
                                "Token IDs in `prompt` must be non-negative.",
                                parameter="prompt",
                            )

Severity: high

The logic for validating prompt token IDs can be simplified to improve readability and maintainability. The current implementation has a redundant check for string types and a nested loop that can be replaced with a more concise any() expression. This also removes code duplication.

        if isinstance(prompt, list):
            for item in prompt:
                if isinstance(item, int):
                    if item < 0:
                        raise VLLMValidationError(
                            "Token IDs in `prompt` must be non-negative.",
                            parameter="prompt",
                        )
                elif isinstance(item, list):
                    if any(isinstance(x, int) and x < 0 for x in item):
                        raise VLLMValidationError(
                            "Token IDs in `prompt` must be non-negative.",
                            parameter="prompt",
                        )

@AndreasKaratzas (Collaborator, Author) replied
Replaced the custom validator with a Pydantic-native Field(ge=0) constraint on the prompt type annotation. No custom validation code needed. Also added explicit test cases for both flat and nested negative token ID prompts in test_completion_error.py.
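The Pydantic-native approach described here can be sketched as follows (a minimal illustration assuming Pydantic v2; `CompletionPromptSketch` and its field layout are hypothetical stand-ins, not the actual CompletionRequest definition):

```python
from typing import Annotated, Union

from pydantic import BaseModel, Field, ValidationError

# A non-negative token ID expressed as a type constraint rather than a
# custom validator; Field(ge=0) is enforced wherever the type is used.
TokenId = Annotated[int, Field(ge=0)]


class CompletionPromptSketch(BaseModel):
    # Hypothetical stand-in for CompletionRequest.prompt: a string,
    # a flat list of token IDs, or a batch of token-ID lists.
    prompt: Union[str, list[TokenId], list[list[TokenId]]]


# Valid prompts pass through unchanged.
ok = CompletionPromptSketch(prompt=[[1, 2], [3]])

# Negative token IDs are rejected by Pydantic itself, so the server can
# map the ValidationError to a 400 instead of crashing with a 500.
try:
    CompletionPromptSketch(prompt=[[-1]])
    rejected = False
except ValidationError:
    rejected = True
```

With the constraint on the type annotation, both flat and nested prompts are covered by the same declaration, which is why no hand-written loop is needed.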

Signed-off-by: Andreas Karatzas <akaratza@amd.com>
@DarkLight1337 (Member) left a comment

Thanks!

@DarkLight1337 DarkLight1337 enabled auto-merge (squash) February 25, 2026 03:41
@github-actions github-actions bot added the ready ONLY add when PR is ready to merge/full CI is needed label Feb 25, 2026
@DarkLight1337 DarkLight1337 merged commit 2ff3e43 into vllm-project:main Feb 25, 2026
50 checks passed
@AndreasKaratzas AndreasKaratzas deleted the akaratza_entrypoints_openai_server branch February 25, 2026 06:11
haanjack pushed a commit to haanjack/vllm that referenced this pull request Feb 26, 2026
…d 500 errors (vllm-project#35231)

Signed-off-by: Andreas Karatzas <akaratza@amd.com>
tom-zju pushed a commit to tom-zju/vllm that referenced this pull request Feb 26, 2026
…d 500 errors (vllm-project#35231)

Signed-off-by: Andreas Karatzas <akaratza@amd.com>
flutist pushed a commit to flutist/vllm_custom_dataset_img_support_base64 that referenced this pull request Feb 28, 2026
…d 500 errors (vllm-project#35231)

Signed-off-by: Andreas Karatzas <akaratza@amd.com>
Signed-off-by: xjx <493337577@qq.com>
llsj14 pushed a commit to llsj14/vllm that referenced this pull request Mar 1, 2026
…d 500 errors (vllm-project#35231)

Signed-off-by: Andreas Karatzas <akaratza@amd.com>
tunglinwood pushed a commit to tunglinwood/vllm that referenced this pull request Mar 4, 2026
…d 500 errors (vllm-project#35231)

Signed-off-by: Andreas Karatzas <akaratza@amd.com>
askliar pushed a commit to askliar/vllm that referenced this pull request Mar 9, 2026
…d 500 errors (vllm-project#35231)

Signed-off-by: Andreas Karatzas <akaratza@amd.com>
Signed-off-by: Andrii Skliar <askliar@nvidia.com>
Copilot AI pushed a commit to machov/vllm that referenced this pull request Mar 10, 2026
…d 500 errors (vllm-project#35231)

Signed-off-by: Andreas Karatzas <akaratza@amd.com>

Labels

frontend, ready (ONLY add when PR is ready to merge/full CI is needed)

2 participants