fix(frontend): always include usage, when configured to do so #20983

NickLucche merged 3 commits into vllm-project:main
Conversation
Summary of Changes
Hello @max-wittig, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request implements a fix to ensure that usage statistics are consistently included in the completion API's responses. It addresses a specific scenario where these statistics were omitted when stream_options were not explicitly provided, aligning the behavior with the intended functionality of always reporting usage. This is a follow-up to a previous change to cover this edge case.
Highlights
- Completion API: Ensured that usage statistics are always included in the completion response, even when `stream_options` are not specified, by leveraging the `enable_force_include_usage` flag.
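The fixed behavior can be sketched in plain Python. This is a hedged illustration only, not vLLM's actual code; the `StreamOptions` class and `should_include_usage` function are invented for the example:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class StreamOptions:
    # Mirrors the client-side knob: ask for usage in the stream.
    include_usage: bool = False


def should_include_usage(
    stream_options: Optional[StreamOptions],
    enable_force_include_usage: bool,
) -> bool:
    """Include usage when the client asks via stream_options, or when the
    server is configured to force it on for every response."""
    if stream_options is not None and stream_options.include_usage:
        return True
    # The edge case this PR fixes: previously, a missing stream_options
    # meant usage was dropped even with the force flag enabled.
    return enable_force_include_usage


# No stream_options from the client, but the server forces usage on:
print(should_include_usage(None, True))  # True
```

The key case is the last line: `stream_options is None` no longer suppresses usage when the server-side flag is set.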
Code Review
This pull request correctly ensures that usage information is included in completion streams when stream_options is not specified, by respecting the enable_force_include_usage flag. The change is simple, well-contained, and directly addresses the issue described. The code is clear and I see no issues with it.
Force-pushed from 658b824 to 9b42f26
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a limited subset of checks runs. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
NickLucche
left a comment
I think you need to edit at least `serving_chat` and `serving_transcription` too, which could make for an opportunity to group the logic into a util function.
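One way such a shared helper could look. This is a hypothetical sketch; the names `StreamOptions` and `resolve_stream_options` are invented here and are not vLLM's API:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class StreamOptions:
    include_usage: bool = False


def resolve_stream_options(
    stream_options: Optional[StreamOptions],
    enable_force_include_usage: bool,
) -> Optional[StreamOptions]:
    """Normalize stream options in one place, so every serving endpoint
    (chat, completion, transcription) behaves identically."""
    if not enable_force_include_usage:
        return stream_options
    # Force flag set: create stream_options if the client omitted them,
    # then turn usage reporting on.
    if stream_options is None:
        stream_options = StreamOptions()
    stream_options.include_usage = True
    return stream_options


opts = resolve_stream_options(None, True)
print(opts.include_usage)  # True
```

Each endpoint would call the helper once on the incoming request instead of duplicating the `if stream_options ...` branch.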
@NickLucche Thanks for the review. I will check it!
Force-pushed from 9b42f26 to 6f81ce5
This pull request has merge conflicts that must be resolved before it can be merged.
Force-pushed from 6f81ce5 to c6c5ff4
Force-pushed from c1e0090 to 88021ff
Force-pushed from 0f35c00 to afcfb09
Force-pushed from b48218c to 1012a74
Force-pushed from 1012a74 to 3e4ce83
Seems like bc-lint passes now after the rebase. Ready for re-review @NickLucche
Re-running tests; we might have some issues on CI, apologies.
Checks all passed 🟩 |
Force-pushed from 6ec1908 to 8f928e7
Rebased again. Could somebody please review this one? @NickLucche
NickLucche
left a comment
I would suggest spinning up a different server with a light model to test out that single feature for now.
Switching the fixture to "function" scope would make the whole test run slower.
```python
def server(request):
    if marker := request.node.get_closest_marker("extra_server_args"):
        SERVER_ARGS.append(marker.args[0])
```
Sorry about the delay, I hadn't checked the tests thoroughly; I don't think this is actually working as intended.
As the fixture is per-module, it is going to be created once and then re-used for all other tests.
What's happening here is that it's using the first test's args to spawn the server, so it's dependent on the order of tests.
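The order dependence can be demonstrated without pytest. This minimal simulation (invented names, not the real test harness) caches one "server" per module the way a module-scoped fixture would:

```python
# Stand-ins for the real test module's globals.
SERVER_ARGS = ["--model", "tiny-model"]
_server_cache = {}


def get_server(extra_args=None):
    """Simulate a module-scoped fixture: the server is built once, from
    whatever args the *first* caller requested; later callers get it as-is."""
    if "module" not in _server_cache:
        _server_cache["module"] = tuple(SERVER_ARGS + (extra_args or []))
    return _server_cache["module"]


first = get_server(["--enable-force-include-usage"])
second = get_server()  # wanted a vanilla server, gets the cached one instead
print(first == second)  # True: the second test silently inherits the flag
```

This is exactly the problem flagged in the review: whichever test runs first decides the server configuration for every test in the module.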
Hi, thanks for the review! I've tried to add a separate server now, but it seems it cannot start, as the primary one is taking all the resources. Any way around this?
```
(EngineCore_DP0 pid=31416)   File "/vllm/vllm/worker/worker_base.py", line 611, in init_device
(EngineCore_DP0 pid=31416)     self.worker.init_device()  # type: ignore
(EngineCore_DP0 pid=31416)     ^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=31416)   File "/vllm/vllm/v1/worker/gpu_worker.py", line 181, in init_device
(EngineCore_DP0 pid=31416)     raise ValueError(
(EngineCore_DP0 pid=31416) ValueError: Free memory on device (13.3/139.8 GiB) on startup is less than desired GPU memory utilization (0.9, 125.82 GiB). Decrease GPU memory utilization or reduce GPU memory used by other processes.
```
@max-wittig let's try to spawn the module server, and particularly the newly added function fixture, with minimal `--gpu-memory-utilization` and args.
This pull request has merge conflicts that must be resolved before it can be merged.
NickLucche
left a comment
Left a few suggestions to try and get these tests running
```python
"--max-model-len",
"2048",
"--enforce-eager",
"--max-num-seqs",
"128",
```
We're only testing usage, so let's spin up the most lightweight server we can. We can try:
`--max-model-len 128 --max-num-seqs 1 --gpu-memory-utilization 0.2`
I got it working now, but I think I will still have to limit the `--gpu-memory-utilization` of the main server.
@NickLucche Hi! Seems like I can't start a second server without running into resource limits. Could you give me any pointers here?
@max-wittig how about we try `@pytest.fixture(scope="class")` and then group the tests into 2 classes, one requiring the vanilla server, and the other one requiring the server with the flag added in this PR?
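A rough sketch of the class-grouping idea, simulated without pytest so it stays self-contained. The `class_scoped` decorator below merely stands in for `@pytest.fixture(scope="class")`, and all names are invented for illustration:

```python
import functools


def class_scoped(fixture_fn):
    """Cache one fixture value per class, mimicking scope="class":
    each test class gets its own server, torn down independently."""
    cache = {}

    @functools.wraps(fixture_fn)
    def wrapper(cls):
        if cls not in cache:
            cache[cls] = fixture_fn(cls)
        return cache[cls]

    return wrapper


@class_scoped
def server(cls):
    # Build the "server" from args declared on the requesting class.
    return {"args": list(getattr(cls, "EXTRA_ARGS", []))}


class TestVanilla:
    EXTRA_ARGS = []


class TestForceUsage:
    EXTRA_ARGS = ["--enable-force-include-usage"]


print(server(TestVanilla)["args"])     # []
print(server(TestForceUsage)["args"])  # ['--enable-force-include-usage']
```

With real pytest, the two classes would each get their own class-scoped server fixture, so the vanilla and force-usage servers never have to fit on the GPU at the same time.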
I've tried it with scope=module now and by putting it in a separate file. Seems cleaner that way in my opinion.
This pull request has merge conflicts that must be resolved before it can be merged.
I think global format changes have affected this approved PR too: https://vllm-dev.slack.com/archives/C07R5Q1Q2BB/p1759663228844749
@max-wittig I appreciate your efforts on this!
@NickLucche Much nicer formatting now! I rebased again.
This pull request has merge conflicts that must be resolved before it can be merged.
Even when stream_options is not specified.

Signed-off-by: Max Wittig <max.wittig@siemens.com>
Signed-off-by: Antoine Auger <antoineauger@users.noreply.github.com>
@NickLucche This is ready and passing again! |
Thanks for your patience @max-wittig ! |
@NickLucche Thanks for the reviews and the merge! |
Even when stream_options is not specified.

This is a follow-up to https://github.com/vllm-project/vllm/pull/19695/files where I didn't consider the fact that we also want to include the usage when no stream options are specified.

Essential Elements of an Effective PR Description Checklist
- An update to `supported_models.md` and `examples` for a new model.

Purpose

Test Plan
- `--enable-force-include-usage`

Test Result

(Optional) Documentation Update
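As a hedged illustration of what the test plan verifies: with `--enable-force-include-usage` set, the final chunk of a streamed completion should carry a usage object even when the client sent no `stream_options`. The helper and the sample SSE payloads below are invented for this example, not taken from vLLM's test suite:

```python
import json


def last_usage_from_sse(lines):
    """Return the usage dict from the last SSE data chunk that carries one,
    or None if no chunk reported usage."""
    usage = None
    for line in lines:
        if not line.startswith("data: ") or line == "data: [DONE]":
            continue
        chunk = json.loads(line[len("data: "):])
        if chunk.get("usage") is not None:
            usage = chunk["usage"]
    return usage


# Mocked stream: content chunks report usage as null, the final chunk
# carries the totals, then the [DONE] sentinel closes the stream.
stream = [
    'data: {"choices": [{"text": "Hi"}], "usage": null}',
    'data: {"choices": [], "usage": {"prompt_tokens": 3, '
    '"completion_tokens": 1, "total_tokens": 4}}',
    "data: [DONE]",
]
print(last_usage_from_sse(stream)["total_tokens"])  # 4
```

A test along these lines would assert the result is not None against the force-usage server, and None against the vanilla one.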