[Bugfix] Correct max tokens for non-contiguous embeds #21798
Conversation
Signed-off-by: Alexandre Milesi <[email protected]>
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
Code Review
This pull request addresses a bug where the encoder cache size for multimodal models with non-contiguous embeddings was underestimated, causing some inference requests to hang. The fix correctly calculates the required token space by using the full length of the placeholder (item.length) instead of just the number of embedding tokens (item.get_num_embeds()). The change is correct, localized, and effectively resolves the described issue without apparent unintended side effects.
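A minimal sketch of the kind of change the review describes; the simplified `PlaceholderRange` and the surrounding helper are illustrative stand-ins, not the exact vLLM code:

```python
# Illustrative sketch only: a simplified placeholder type standing in for
# vLLM's real one, to show the get_num_embeds() -> length change.
from dataclasses import dataclass


@dataclass
class PlaceholderRange:
    offset: int
    length: int       # full placeholder span, including interleaved text tokens
    num_embeds: int   # multimodal embedding tokens only

    def get_num_embeds(self) -> int:
        return self.num_embeds


def max_tokens_per_mm_item(items: list[PlaceholderRange]) -> int:
    # Before the fix: max(item.get_num_embeds() for item in items), which
    # undercounts when text tokens are interleaved inside the placeholder.
    # After the fix: budget the encoder cache for the full placeholder length.
    return max(item.length for item in items)
```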
@DarkLight1337 for review
I feel that we should define a separate method for this. For example, we can add MultiModalProfiler.get_max_placeholder_tokens, which explicitly includes both the multi-modal and text tokens inside the placeholder.
Signed-off-by: Alexandre Milesi <[email protected]>
Added this new function; it calls the existing one with an additional flag controlling whether text tokens are included.
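A hedged sketch of the shape this could take, based on the discussion above; `PlaceholderInfo`, the constructor, and the exact signatures are assumptions for illustration rather than the actual vLLM API:

```python
# Sketch only: names, fields, and signatures are assumptions, not vLLM's API.
from dataclasses import dataclass


@dataclass
class PlaceholderInfo:
    length: int       # full placeholder span, including interleaved text tokens
    num_embeds: int   # multimodal embedding tokens only


class MultiModalProfiler:
    def __init__(self, placeholders: dict[str, list[PlaceholderInfo]]):
        self._placeholders = placeholders

    def get_mm_max_tokens(self, include_text: bool = False) -> dict[str, int]:
        # Existing helper, extended with a flag: count either the embedding
        # tokens only (old behaviour) or the whole placeholder span.
        return {
            modality: max(
                (p.length if include_text else p.num_embeds) for p in items
            )
            for modality, items in self._placeholders.items()
        }

    def get_max_placeholder_tokens(self) -> dict[str, int]:
        # New method from the review suggestion: multimodal *and* text tokens
        # inside the placeholder, i.e. what the encoder cache must hold.
        return self.get_mm_max_tokens(include_text=True)
```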
ywang96
left a comment
Really appreciate the fix! I left some comments
Signed-off-by: Alexandre Milesi <[email protected]>
Force-pushed from 880d356 to 9c49f53
ywang96
left a comment
LGTM!
Fixes a hang on some VLM models with large images.
Some models, like Pixtral derivatives, do not have contiguous multimodal image embeddings; instead, they insert text tokens between chunks of image-row embeddings.
The encoder cache manager did not take this into account and underestimated the size of the cache to allocate. At inference time, requests with a large image that exceeded the cache size were never scheduled.
FIX #18329
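A toy illustration of the gap (the numbers and the one-break-token-per-row layout are illustrative, not measured from a real model):

```python
# Toy numbers: a Pixtral-style layout emits one image-break text token after
# each row of image-patch embeddings, so the placeholder span is longer than
# the number of embeddings alone.
rows, cols = 64, 64                       # patch grid for a large image
num_embeds = rows * cols                  # 4096 image embedding tokens
placeholder_length = rows * (cols + 1)    # 4160 tokens incl. per-row break tokens

print(num_embeds)          # old budget: cache sized for 4096 tokens (too small)
print(placeholder_length)  # actual need: 4160 tokens, so the request never fits
```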
Example
Before this PR: request stuck forever, never scheduled.
After this PR: request runs as expected.
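For reference, a hypothetical repro sketch of the "before" behaviour using the offline LLM API; the model name, prompt template, and image size are assumptions and may need adjusting for a real run:

```python
# Hypothetical repro sketch: one large image on a Pixtral-style model.
# Model name, prompt template, and sizes are illustrative assumptions.
from PIL import Image
from vllm import LLM, SamplingParams

llm = LLM(model="mistral-community/pixtral-12b", max_model_len=8192)

image = Image.new("RGB", (1024, 1024))  # large image -> long, non-contiguous placeholder

outputs = llm.generate(
    {
        "prompt": "<s>[INST][IMG]Describe the image.[/INST]",
        "multi_modal_data": {"image": image},
    },
    SamplingParams(max_tokens=64),
)
# Before this PR the request could sit in the queue forever (never scheduled);
# after this PR it completes normally.
print(outputs[0].outputs[0].text)
```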