Add batch token statistics logging to LengthAwareSampler#2204

Merged
kylesayrs merged 6 commits into vllm-project:main from jwpark33:lengthawaresampler-log
Jan 13, 2026

Conversation

@jwpark33
Contributor

@jwpark33 jwpark33 commented Jan 8, 2026

SUMMARY:
This PR introduces batch-level token statistics logging to the LengthAwareSampler. When the batch_size is greater than 1, the sampler now calculates and logs the token overhead incurred by padding and truncation within each batch, as well as the total overhead for the entire dataset.

Resolves: #2194

Key Changes

  • LengthAwareSampler Enhancement: Added a batch_size parameter and a private method _calculate_and_log_batch_stats to track token dynamics.
  • Detailed Overhead Logging: Provides DEBUG-level logs for both per-batch and cumulative token additions (padding) and removals (truncation).
  • Integration: Updated _make_sampler in utils.py to correctly propagate the batch_size to the sampler instance.
  • Improved Observability: Helps users understand the efficiency of their batching strategy in terms of token utilization.
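As a rough illustration of the bookkeeping the bullets above describe (a minimal sketch: the function name, signature, and log format are assumptions for illustration, not the PR's actual `_calculate_and_log_batch_stats` code):

```python
import logging

logger = logging.getLogger(__name__)

def batch_token_stats(lengths, max_seq_len):
    # Illustrative sketch only -- the PR's real logic lives in a private
    # method on LengthAwareSampler; this free function just shows the
    # padding/truncation arithmetic one batch at a time.
    capped = [min(n, max_seq_len) for n in lengths]
    target = max(capped)  # pad every sequence up to the batch's longest
    added = sum(target - n for n in capped)  # tokens added by padding
    removed = sum(n - max_seq_len for n in lengths if n > max_seq_len)
    logger.debug(
        "batch stats: %d tokens added (padding), %d removed (truncation)",
        added,
        removed,
    )
    return added, removed

# e.g. lengths [3, 5, 9] with max_seq_len=6: sequences cap at [3, 5, 6],
# padding to 6 adds 4 tokens; the length-9 sequence loses 3 to truncation.
```

A cumulative total for the whole dataset then just sums these per-batch values across iterations.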

TEST PLAN:

pytest tests/llmcompressor/datasets/test_length_aware_sampler.py
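A sketch of how such DEBUG logging can be asserted in a test, using only the standard library (the logger name and message text here are placeholders, not the contents of the actual test file, which presumably uses pytest's caplog fixture):

```python
import logging

class ListHandler(logging.Handler):
    """Collect emitted log messages so a test can inspect them."""
    def __init__(self):
        super().__init__(level=logging.DEBUG)
        self.messages = []

    def emit(self, record):
        self.messages.append(record.getMessage())

def capture_debug_logs(logger_name, fn):
    """Run fn() while capturing DEBUG messages emitted on logger_name."""
    logger = logging.getLogger(logger_name)
    handler = ListHandler()
    old_level = logger.level
    logger.addHandler(handler)
    logger.setLevel(logging.DEBUG)
    try:
        fn()
    finally:
        logger.removeHandler(handler)
        logger.setLevel(old_level)
    return handler.messages

# Placeholder usage: stand-in for iterating a sampler with batch_size > 1.
msgs = capture_debug_logs(
    "demo.sampler",
    lambda: logging.getLogger("demo.sampler").debug("batch 0: 4 tokens added"),
)
assert any("tokens added" in m for m in msgs)
```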

@kylesayrs @rahul-tuli @dsikka

@jwpark33 jwpark33 marked this pull request as ready for review January 8, 2026 13:05
@gemini-code-assist
Contributor

Summary of Changes

Hello @jwpark33, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces significant improvements to the LengthAwareSampler by adding robust logging for token statistics. The primary goal is to provide users with better visibility into the efficiency of their batching strategies, specifically by tracking token overhead caused by padding and truncation. This enhancement allows for a clearer understanding of how different batch sizes impact token utilization, enabling more informed optimization decisions.

Highlights

  • LengthAwareSampler Enhancement: The LengthAwareSampler now accepts a batch_size parameter and includes a new private method, _calculate_and_log_batch_stats, to monitor token dynamics within batches.
  • Detailed Token Overhead Logging: The sampler provides DEBUG level logs detailing per-batch and cumulative token additions (due to padding) and removals (due to truncation), offering insights into batching efficiency.
  • Integration with Sampler Creation: The _make_sampler utility function has been updated to correctly pass the batch_size argument to the LengthAwareSampler instance, ensuring proper functionality.
  • Improved Observability: These changes enhance the ability for users to understand and optimize the token utilization efficiency of their batching strategies.
  • New Unit Tests: New unit tests have been added to verify the correct behavior of the LengthAwareSampler's batch_size parameter and its logging functionality for token statistics.


@gemini-code-assist bot left a comment

Code Review

This pull request introduces valuable logging for batch-level token statistics in LengthAwareSampler, which will help users understand token overhead from padding and truncation. The implementation is solid, and the new tests provide good coverage. I have a few minor suggestions to improve code efficiency and adhere to typing best practices.

@jwpark33 jwpark33 force-pushed the lengthawaresampler-log branch from a66c481 to acb7ce6 Compare January 8, 2026 13:09
@github-actions bot commented Jan 8, 2026

👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

Note: This is required to complete the testing suite; please only add the label once the PR is code complete and local testing has been performed.

@dsikka dsikka added the ready When a PR is ready for review label Jan 8, 2026
@dsikka
Collaborator
dsikka commented Jan 8, 2026

@jwpark33 Thank you for the contribution.
Do you mind running quality checks under the root dir:

make style
make quality

This will resolve this failure:
[Screenshot of the failing quality check]

@jwpark33 jwpark33 force-pushed the lengthawaresampler-log branch from 924a2d5 to 0c0d9a6 Compare January 8, 2026 22:52
kylesayrs previously approved these changes Jan 9, 2026
Collaborator

@kylesayrs left a comment

Awesome work, thanks for the contribution!

Signed-off-by: jwpark33 <pjw9703@gmail.com>
Signed-off-by: jwpark33 <pjw9703@gmail.com>
@jwpark33 jwpark33 force-pushed the lengthawaresampler-log branch from 941091c to ed3a268 Compare January 10, 2026 15:25
@jwpark33 jwpark33 requested a review from kylesayrs January 11, 2026 10:49
kylesayrs previously approved these changes Jan 12, 2026
Collaborator

@kylesayrs left a comment

Looks perfect, great job!

@kylesayrs
Collaborator

kylesayrs commented Jan 12, 2026

@jwpark33 Looks like you have a test failure in one of the tests you've added

FAILED tests/llmcompressor/datasets/test_length_aware_sampler.py::TestLengthAwareSampler::test_per_batch_logging - assert False
 +  where False = any(<generator object TestLengthAwareSampler.test_per_batch_logging.<locals>.<genexpr> at 0x7f4759977920>)

I think this is just because we're no longer logging inter-batch statistics.

Signed-off-by: jwpark33 <pjw9703@gmail.com>
@jwpark33 jwpark33 force-pushed the lengthawaresampler-log branch from 8fc472e to 0690ff4 Compare January 13, 2026 13:44
@jwpark33
Contributor Author

> @jwpark33 Looks like you have a test failure in one of the tests you've added
>
> FAILED tests/llmcompressor/datasets/test_length_aware_sampler.py::TestLengthAwareSampler::test_per_batch_logging - assert False
>  +  where False = any(<generator object TestLengthAwareSampler.test_per_batch_logging.<locals>.<genexpr> at 0x7f4759977920>)
>
> I think this is just because we're no longer logging inter-batch statistics.

@kylesayrs Sorry for the extra round. I’ve addressed your feedback and pushed updates. Rebasing seems to have cleared the previous approvals, so I’m requesting another review. Thanks!

@jwpark33 jwpark33 requested a review from kylesayrs January 13, 2026 13:52
@kylesayrs kylesayrs merged commit d266a65 into vllm-project:main Jan 13, 2026
11 checks passed

Labels

ready When a PR is ready for review

Development

Successfully merging this pull request may close these issues.

[Feature] Debug messages when using the LengthAwareSampler

4 participants