Add batch token statistics logging to LengthAwareSampler #2204
kylesayrs merged 6 commits into vllm-project:main from
Conversation
Code Review
This pull request introduces valuable logging for batch-level token statistics in LengthAwareSampler, which will help users understand token overhead from padding and truncation. The implementation is solid, and the new tests provide good coverage. I have a few minor suggestions to improve code efficiency and adhere to typing best practices.
Force-pushed a66c481 to acb7ce6
👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review. Note: this is required to complete the testing suite; please only add the label once the PR is code complete and local testing has been performed.
@jwpark33 Thank you for the contribution.
Force-pushed 924a2d5 to 0c0d9a6
kylesayrs left a comment:
Awesome work, thanks for the contribution!
Force-pushed 49bcec2 to 941091c
Signed-off-by: jwpark33 <pjw9703@gmail.com>
Force-pushed 941091c to ed3a268
kylesayrs left a comment:
Looks perfect, great job!
@jwpark33 Looks like you have a test failure in one of the tests you've added. I think this is just because we're no longer logging inter-batch statistics.
Signed-off-by: jwpark33 <pjw9703@gmail.com>
Force-pushed 8fc472e to 0690ff4
@kylesayrs Sorry for the extra round. I’ve addressed your feedback and pushed updates. Rebasing seems to have cleared the previous approvals, so requesting another review. Thanks!

SUMMARY:
This PR introduces batch-level token statistics logging to the LengthAwareSampler. When the batch_size is greater than 1, the sampler now calculates and logs the token overhead incurred by padding and truncation within each batch, as well as the total overhead for the entire dataset.
Resolves: #2194
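Below is a minimal sketch of the kind of per-batch overhead computation described above; the function name, the standard-library logging setup, and the truncate-then-pad collation rule are illustrative assumptions, not the PR's actual `_calculate_and_log_batch_stats` implementation.

```python
import logging
from typing import List

logger = logging.getLogger(__name__)


def batch_token_overhead(seq_lengths: List[int], max_length: int) -> dict:
    """Estimate token overhead from padding and truncation for one batch.

    Assumes sequences are truncated to max_length and then padded up to the
    longest remaining sequence in the batch (a common collation scheme).
    """
    clipped = [min(n, max_length) for n in seq_lengths]
    batch_length = max(clipped)  # every sequence is padded to this length
    padding = sum(batch_length - n for n in clipped)
    truncated = sum(n - c for n, c in zip(seq_lengths, clipped))
    original = sum(seq_lengths)
    overhead_ratio = (padding + truncated) / max(original, 1)
    logger.info(
        "batch stats: %d padding tokens, %d truncated tokens (%.1f%% overhead)",
        padding, truncated, 100 * overhead_ratio,
    )
    return {
        "padding_tokens": padding,
        "truncated_tokens": truncated,
        "overhead_ratio": overhead_ratio,
    }


# Example: lengths 12, 40, 96 with max_length=64 pads 52+24 tokens and truncates 32.
stats = batch_token_overhead([12, 40, 96], max_length=64)
```

Logging the ratio alongside the absolute counts makes it easy to spot batches dominated by padding, for example when a single long outlier sequence sets the padded length for the whole batch.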
Key Changes
- Added a `batch_size` parameter and a private method `_calculate_and_log_batch_stats` to track token dynamics.
- Updated `utils.py` to correctly propagate the `batch_size` to the sampler instance.

TEST PLAN:
@kylesayrs @rahul-tuli @dsikka