
fix(frontend): always include usage, when configured to do so #20983

Merged
NickLucche merged 3 commits into vllm-project:main from siemens:fix/stream-options-always-usage
Oct 14, 2025

Conversation

@max-wittig
Contributor

@max-wittig max-wittig commented Jul 15, 2025

Even when stream_options is not specified

This is a follow-up to https://github.com/vllm-project/vllm/pull/19695/files, where I did not consider that we also want to include the usage when no stream options are specified.

Essential Elements of an Effective PR Description Checklist

  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.

Purpose

Test Plan

  1. Run vllm with --enable-force-include-usage
  2. Send a normal streaming request to vLLM and observe the usage being included at the end
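With the flag active, the last data chunk before `[DONE]` should carry a usage object roughly like the following (a sketch of the OpenAI-compatible shape; the token counts and id are made up for illustration):

```python
import json

# Illustrative final SSE chunk from a stream served with
# --enable-force-include-usage; the token counts are invented.
last_chunk = (
    'data: {"id":"cmpl-1","object":"text_completion","choices":[],'
    '"usage":{"prompt_tokens":5,"completion_tokens":7,"total_tokens":12}}'
)

payload = json.loads(last_chunk.removeprefix("data: "))
usage = payload["usage"]
assert usage["total_tokens"] == usage["prompt_tokens"] + usage["completion_tokens"]
print(usage)
```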

Test Result

(Optional) Documentation Update

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @max-wittig, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request implements a fix to ensure that usage statistics are consistently included in the completion API's responses. It addresses a specific scenario where these statistics were omitted when stream_options were not explicitly provided, aligning the behavior with the intended functionality of always reporting usage. This is a follow-up to a previous change to cover this edge case.

Highlights

  • Completion API: Ensured that usage statistics are always included in the completion response, even when stream_options are not specified, by leveraging the enable_force_include_usage flag.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request via creating an issue comment (i.e. comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

Feature Command Description
Code Review /gemini review Performs a code review for the current pull request in its current state.
Pull Request Summary /gemini summary Provides a summary of the current pull request in its current state.
Comment @gemini-code-assist Responds in comments when explicitly tagged, both in issue comments and review comments.
Help /gemini help Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@mergify mergify bot added the frontend label Jul 15, 2025
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request correctly ensures that usage information is included in completion streams when stream_options is not specified, by respecting the enable_force_include_usage flag. The change is simple, well-contained, and directly addresses the issue described. The code is clear and I see no issues with it.

@max-wittig max-wittig force-pushed the fix/stream-options-always-usage branch from 658b824 to 9b42f26 Compare July 15, 2025 12:14
@github-actions

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only fastcheck CI runs, covering a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge.

🚀

@max-wittig max-wittig marked this pull request as ready for review July 15, 2025 14:21
@max-wittig max-wittig requested a review from aarnphm as a code owner July 15, 2025 14:21
@max-wittig
Contributor Author

@aarnphm maybe you could take a look. This is a fix for the contribution that I made in #19695. Thank you!

Collaborator

@NickLucche NickLucche left a comment


I think you need to edit at least serving_chat and serving_transcriptions too, which could be an opportunity to group the logic into a util function.

@max-wittig
Contributor Author

@NickLucche Thanks for the review. I will check it!

@max-wittig max-wittig force-pushed the fix/stream-options-always-usage branch from 9b42f26 to 6f81ce5 Compare July 18, 2025 10:37
@mergify

mergify bot commented Jul 18, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @max-wittig.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Jul 18, 2025
@max-wittig max-wittig force-pushed the fix/stream-options-always-usage branch from 6f81ce5 to c6c5ff4 Compare July 18, 2025 10:43
@mergify mergify bot removed the needs-rebase label Jul 18, 2025
@max-wittig max-wittig force-pushed the fix/stream-options-always-usage branch 15 times, most recently from c1e0090 to 88021ff Compare July 18, 2025 14:26
@max-wittig max-wittig force-pushed the fix/stream-options-always-usage branch from 0f35c00 to afcfb09 Compare September 12, 2025 08:40
@mergify mergify bot removed the needs-rebase label Sep 12, 2025
@max-wittig max-wittig force-pushed the fix/stream-options-always-usage branch 3 times, most recently from b48218c to 1012a74 Compare September 12, 2025 08:55
@max-wittig
Contributor Author

Seems like this now fails without an understandable error message, because of #21234. This PR has been open for a long time, and we keep rebasing and fixing it.

Could you explain how I can see what's wrong or how to fix those bc-lint issues? @zhewenl

/cc @simon-mo

@max-wittig max-wittig force-pushed the fix/stream-options-always-usage branch from 1012a74 to 3e4ce83 Compare September 15, 2025 07:49
@max-wittig
Contributor Author

Seems like bc-lint passes now after rebase.

Ready for re-review @NickLucche

@NickLucche
Collaborator

Re-running tests; we might have some issues on CI, apologies.

@bbartels
Contributor

Checks all passed 🟩

@max-wittig max-wittig force-pushed the fix/stream-options-always-usage branch 2 times, most recently from 6ec1908 to 8f928e7 Compare September 17, 2025 12:54
@max-wittig
Contributor Author

Rebased again. Could somebody please review this one? @NickLucche

Collaborator

@NickLucche NickLucche left a comment


I would suggest spinning up a different server with a light model to test out that single feature for now.
Switching the fixture to "function" scope would make the whole test run slower.

Comment on lines +26 to +29
def server(request):
if marker := request.node.get_closest_marker("extra_server_args"):
SERVER_ARGS.append(marker.args[0])

Collaborator

Sorry about the delay; I hadn't checked the tests thoroughly, and I don't think this is actually working as intended.
As the fixture is per-module, it is going to be created once and then re-used for all the other tests.
What's happening here is that it uses the first test's args to spawn the server, so it's dependent on the order of the tests.

Contributor Author

Hi, thanks for the review! I've tried to add a separate server now, but it seems it cannot start, as the primary one is taking all the resources. Any way around this?

(EngineCore_DP0 pid=31416)   File "/vllm/vllm/worker/worker_base.py", line 611, in init_device
(EngineCore_DP0 pid=31416)     self.worker.init_device()  # type: ignore
(EngineCore_DP0 pid=31416)     ^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=31416)   File "/vllm/vllm/v1/worker/gpu_worker.py", line 181, in init_device
(EngineCore_DP0 pid=31416)     raise ValueError(
(EngineCore_DP0 pid=31416) ValueError: Free memory on device (13.3/139.8 GiB) on startup is less than desired GPU memory utilization (0.9, 125.82 GiB). Decrease GPU memory utilization or reduce GPU memory used by other processes.

Collaborator

@max-wittig let's try and spawn the module and particularly the newly added function fixture with minimal --gpu-memory-utilization and args.

@mergify

mergify bot commented Sep 19, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @max-wittig.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

Collaborator

@NickLucche NickLucche left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Left a few suggestions to try and get these tests running

Comment on lines +64 to +68
"--max-model-len",
"2048",
"--enforce-eager",
"--max-num-seqs",
"128",
Collaborator

we're only testing usage so let's spin up the most lightweight server we can.
We can try

--max-model-len 128
--max-num-seqs 1
--gpu-memory-utilization 0.2

Contributor Author

I got it working now, but I think I will still have to limit the gpu-memory-utilization of the main server

Contributor Author

@NickLucche Hi! Seems like I can't start a second server without running into resource limits. Could you give me any pointers here?

Collaborator

@max-wittig how about we try @pytest.fixture(scope="class") and then group the tests into 2 classes, one requiring the vanilla server, and the other one requiring the server with the flag from this PR?
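A minimal sketch of that grouping, assuming a stand-in fixture (the names, args, and the fact that the fixture yields plain args instead of a real server are all illustrative, not the PR's actual test code):

```python
import pytest

VANILLA_ARGS = ["--max-model-len", "128", "--max-num-seqs", "1"]
FLAG_ARGS = VANILLA_ARGS + ["--enable-force-include-usage"]

# One server per class: pytest tears the first class's server down before
# the second class spins its own up, so the two never share the GPU.
@pytest.fixture(scope="class")
def server(request):
    args = getattr(request.cls, "server_args", VANILLA_ARGS)
    yield args  # a real test would start a server with `args` and stop it after

class TestVanillaServer:
    server_args = VANILLA_ARGS

    def test_flag_absent(self, server):
        assert "--enable-force-include-usage" not in server

class TestForceIncludeUsage:
    server_args = FLAG_ARGS

    def test_flag_present(self, server):
        assert "--enable-force-include-usage" in server
```

Unlike the module-scoped fixture, each class declares the args it needs, so the behavior no longer depends on test ordering.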

Contributor Author

I've tried it with scope="module" now, putting it in a separate file. Seems cleaner that way, in my opinion.

@mergify

mergify bot commented Oct 6, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @max-wittig.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@NickLucche
Collaborator

I think global format changes have affected this approved PR too https://vllm-dev.slack.com/archives/C07R5Q1Q2BB/p1759663228844749

@bbartels
Contributor

bbartels commented Oct 9, 2025

@max-wittig I appreciate your efforts on this!

@max-wittig
Contributor Author

@NickLucche Much nicer formatting now! I rebased again.

@mergify

mergify bot commented Oct 13, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @max-wittig.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

max-wittig and others added 3 commits October 13, 2025 16:00
Even when stream_options is not specified

Signed-off-by: Max Wittig <max.wittig@siemens.com>
Signed-off-by: Antoine Auger <antoineauger@users.noreply.github.com>
Signed-off-by: Antoine Auger <antoineauger@users.noreply.github.com>
Signed-off-by: Max Wittig <max.wittig@siemens.com>
@max-wittig
Contributor Author

@NickLucche This is ready and passing again!

@NickLucche
Collaborator

Thanks for your patience @max-wittig !

@max-wittig
Contributor Author

@NickLucche Thanks for the reviews and the merge!

@max-wittig max-wittig mentioned this pull request Oct 15, 2025

Labels

frontend ready ONLY add when PR is ready to merge/full CI is needed



4 participants