[Misc] Use VLLMValidationError consistently in completion and chat completion protocol #35666
Conversation
Code Review
This pull request does a good job of consistently using VLLMValidationError instead of ValueError for request validation, which improves error reporting for clients. I've identified a couple of instances where the new validation errors could be made even more informative by including the parameter argument, aligning them more closely with the other changes in this PR. My suggestions aim to enhance this consistency.
```python
raise VLLMValidationError(
    "Cannot set both `continue_final_message` "
    "and `add_generation_prompt` to True.",
)
```
For consistency with the other changes in this PR, it would be better to specify which parameter is causing the validation error. Since this is a conflict between two parameters, you can pick one to report. add_generation_prompt seems like a reasonable choice.
```diff
 raise VLLMValidationError(
     "Cannot set both `continue_final_message` "
     "and `add_generation_prompt` to True.",
+    parameter="add_generation_prompt",
 )
```
```python
raise VLLMValidationError(
    "Either prompt or prompt_embeds must be "
    "provided and non-empty.",
)
```
For consistency with the goal of this PR to provide structured error context, it's a good idea to include the parameter that failed validation. In this case, since either prompt or prompt_embeds is required, you can specify prompt as the parameter.
```diff
 raise VLLMValidationError(
     "Either prompt or prompt_embeds must be "
     "provided and non-empty.",
+    parameter="prompt",
 )
```
Hi @umut-polat, the pre-commit checks have failed. Please run:

```shell
uv pip install pre-commit
pre-commit install
pre-commit run --all-files
```

Then, commit the changes and push to your branch.
Force-pushed from 04f6ac4 to 0aaa659
How does this change impact user-facing status codes?
No change in status codes — VLLMValidationError extends ValueError, so existing error handling (and the HTTP status it produces) is unchanged.
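To illustrate why the status codes stay the same: any existing handler keyed on `ValueError` also catches its subclasses. A minimal sketch (the `VLLMValidationError` class and `handle` function here are stand-ins for illustration, not vLLM's actual implementation):

```python
class VLLMValidationError(ValueError):
    """Stand-in: a ValueError subclass carrying structured context."""

    def __init__(self, message, *, parameter=None, value=None):
        super().__init__(message)
        self.parameter = parameter
        self.value = value


def handle(exc):
    """An existing handler keyed on ValueError keeps working unchanged."""
    if isinstance(exc, ValueError):
        return 400  # same status code as before this PR
    return 500


status = handle(VLLMValidationError("bad request", parameter="prompt"))
print(status)  # 400
```

Because the subclass relationship is the only contract the handler relies on, swapping the raised exception type is invisible to clients except for the added context fields.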
Force-pushed from 098130e to b6b799a
…lidators

replace remaining ValueError raises with VLLMValidationError in model validators for CompletionRequest and ChatCompletionRequest, consistent with the pattern established in the responses protocol.

Signed-off-by: umut-polat <52835619+umut-polat@users.noreply.github.com>
Force-pushed from b6b799a to 29f4aa8
```diff
-raise ValueError("When using `tool_choice`, `tools` must be set.")
+raise VLLMValidationError(
+    "When using `tool_choice`, `tools` must be set.",
+    parameter="tool_choice",
```
The parameter here should be `tools`.
DarkLight1337 left a comment
Actually, this PR looks like a duplicate of #36254
Closing in favor of #36254, which covers the same files. Will rebase that one if needed.
Purpose

Replace remaining `ValueError` raises with `VLLMValidationError` in the model validators for `CompletionRequest` and `ChatCompletionRequest`.

This follows the same pattern established in the responses protocol and `SamplingParams`, where `VLLMValidationError` provides structured error context (`parameter`, `value`) for better client-side error handling.

Changes

`vllm/entrypoints/openai/completion/protocol.py`:
- `validate_prompt_and_prompt_embeds`: ValueError → VLLMValidationError
- `check_cache_salt_support`: ValueError → VLLMValidationError (with parameter/value)

`vllm/entrypoints/openai/chat_completion/protocol.py`:
- `check_structured_outputs_count`: 2× ValueError → VLLMValidationError
- `check_tool_usage`: 6× ValueError → VLLMValidationError (with parameter/value)
- `check_generation_prompt`: ValueError → VLLMValidationError
- `check_cache_salt_support`: ValueError → VLLMValidationError (with parameter/value)

Test Plan
Existing tests cover these validators. The change only affects the
exception type raised (VLLMValidationError extends ValueError), so all
existing error handling continues to work.
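The validator pattern described above can be sketched roughly as follows. This is an illustration only — the exact `VLLMValidationError` constructor signature is an assumption based on this PR's description, and the Pydantic model-validator plumbing around the real checks is omitted:

```python
class VLLMValidationError(ValueError):
    """Assumed shape: ValueError subclass with structured error context."""

    def __init__(self, message, *, parameter=None, value=None):
        super().__init__(message)
        self.parameter = parameter
        self.value = value


def check_tool_usage(tool_choice, tools):
    """Standalone version of the tool-usage check discussed in this PR."""
    if tool_choice is not None and not tools:
        raise VLLMValidationError(
            "When using `tool_choice`, `tools` must be set.",
            parameter="tools",
        )


try:
    check_tool_usage(tool_choice="auto", tools=None)
except VLLMValidationError as e:
    # Clients can surface which parameter failed validation.
    print(e.parameter)  # tools
```

Because the subclass still is-a `ValueError`, existing `except ValueError` blocks in tests and server error handlers keep working while clients gain access to the `parameter` and `value` fields.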