
Conversation

@milesial
Contributor

@milesial commented Jul 29, 2025

Fixes a hang on some VLM models with large images.

Some models, such as Pixtral derivatives, do not have contiguous multimodal image embeddings; they insert text tokens between chunks of image-row embeddings.

The encoder cache manager did not take this into account and underestimated the size of the cache to allocate. At inference time, requests whose image features exceeded the cache size were never scheduled.

FIX #18329
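
To make the mismatch concrete, here is a minimal sketch (the mask below is illustrative, not vLLM's actual data structure): a Pixtral-style placeholder interleaves image-row embeddings with row-break text tokens, so the span the cache must hold is longer than the count of image-embedding tokens alone.

# Hypothetical placeholder mask: True marks an image-embedding position,
# False marks an interleaved text token (e.g. a row-break token).
is_embed = [
    True, True, True, False,   # image row 1, then a row-break text token
    True, True, True, False,   # image row 2, then a row-break text token
    True, True, True,          # image row 3
]

num_embeds = sum(is_embed)    # 9  -> what the cache budget was derived from before
span_length = len(is_embed)   # 11 -> what the cache actually needs to hold

print(num_embeds, span_length)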

Example

vllm serve mistralai/Pixtral-12B-2409  --no-enable-prefix-caching --tokenizer_mode mistral --config_format mistral --load_format mistral --max_model_len 32000
curl -X 'POST' \
    'http://0.0.0.0:8000/v1/chat/completions' \
    -H 'Accept: application/json' \
    -H 'Content-Type: application/json' \
    -d '{
          "model": "mistralai/Pixtral-12B-2409",
          "messages": [
            {
              "role": "user",
              "content": [
                {
                  "type": "image_url",
                  "image_url":
                    {
                      "url": "https://external-content.duckduckgo.com/iu/?u=http%3A%2F%2Fsvs.gsfc.nasa.gov%2Fvis%2Fa000000%2Fa003600%2Fa003657%2Fsc09cloud_still_persp_c1440.0500.jpg&f=1&nofb=1&ipt=3aece6fc933b224b28bcab2de0586cabb5e0693be3e04d6b7815d69e7b7b265d"
                    }
                },
                {
                  "type": "text",
                  "text": "What is in this image?"
                }
              ]
            }
          ],
          "stream": false,
          "max_tokens": 256,
          "temperature": 0
    }'

Before this PR:

INFO 07-28 18:58:14 [gpu_model_runner.py:2238] Encoder cache will be initialized with a budget of 4096 tokens, and profiled with 1 image items of the maximum feature size.   

The request is stuck forever and never gets scheduled.

After this PR:

INFO 07-28 19:22:40 [gpu_model_runner.py:2238] Encoder cache will be initialized with a budget of 4160 tokens, and profiled with 1 image items of the maximum feature size.   

The request runs as expected.
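
For reference, the 64-token increase is consistent with Pixtral-12B's maximum image feature size, assuming a 1024x1024 image split into 16x16 patches with one break/end text token appended after each of the 64 patch rows (these numbers are my assumption, not taken from the logs above):

rows = cols = 1024 // 16                  # 64 patch rows and columns at the maximum image size
image_tokens = rows * cols                # 4096 image-embedding tokens
placeholder_tokens = image_tokens + rows  # 4160 tokens once per-row break/end tokens are counted

print(image_tokens, placeholder_tokens)   # 4096 4160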

@github-actions

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run the fastcheck CI, which runs a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@mergify bot added the multi-modality label Jul 29, 2025
Contributor

@gemini-code-assist bot left a comment

Code Review

This pull request addresses a bug where the encoder cache size for multimodal models with non-contiguous embeddings was underestimated, causing some inference requests to hang. The fix correctly calculates the required token space by using the full length of the placeholder (item.length) instead of just the number of embedding tokens (item.get_num_embeds()). This change is correct and effectively resolves the described issue. The change is localized and doesn't seem to have unintended side effects.
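
A minimal sketch of the sizing difference described above, assuming a placeholder item that exposes get_num_embeds() and length as named in the review (illustrative only, not the exact vLLM diff):

def required_encoder_cache_tokens(item) -> int:
    # Before: only the image-embedding positions inside the placeholder were
    # counted, underestimating the space the encoder cache must reserve.
    #   return item.get_num_embeds()
    # After: count the full placeholder span, including the text tokens
    # interleaved between image-row chunks.
    return item.length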

@milesial
Contributor Author

+ @DarkLight1337 for review

@DarkLight1337
Member

DarkLight1337 commented Jul 29, 2025

cc @ywang96 @Isotr0py

Member

@DarkLight1337 left a comment

I feel that we should define a separate method for this. For example, we can add MultiModalProfiler.get_max_placeholder_tokens, which explicitly includes both the multi-modal and the text tokens inside the placeholder.

@milesial
Contributor Author

Added this new function; it wraps the existing one with an additional flag that controls whether text tokens are included.
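
A hypothetical sketch of that shape; the class, method, and flag names below are illustrative and not vLLM's actual API:

from dataclasses import dataclass


@dataclass
class PlaceholderInfo:
    length: int      # full placeholder span, interleaved text tokens included
    num_embeds: int  # image-embedding positions only


def get_max_placeholder_tokens(items: list[PlaceholderInfo],
                               include_text_tokens: bool = True) -> int:
    # With include_text_tokens=True this returns the span the encoder cache
    # must be able to hold; with False it falls back to the embedding-only count.
    if include_text_tokens:
        return max(item.length for item in items)
    return max(item.num_embeds for item in items)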

Member

@ywang96 left a comment

Really appreciate the fix! I left some comments

@milesial force-pushed the alexandrem/mm-encoder-fix branch from 880d356 to 9c49f53 on July 29, 2025 21:31
Member

@ywang96 left a comment

LGTM!

@ywang96 added the ready label Jul 29, 2025
@ywang96 enabled auto-merge (squash) July 29, 2025 23:20
@ywang96 merged commit 0e36abf into vllm-project:main Jul 30, 2025
75 checks passed
liuyumoye pushed a commit to liuyumoye/vllm that referenced this pull request Jul 31, 2025
vadiklyutiy pushed a commit to CentML/vllm that referenced this pull request Aug 5, 2025
x22x22 pushed a commit to x22x22/vllm that referenced this pull request Aug 5, 2025
npanpaliya pushed a commit to odh-on-pz/vllm-upstream that referenced this pull request Aug 6, 2025
jinzhen-lin pushed a commit to jinzhen-lin/vllm that referenced this pull request Aug 9, 2025
noamgat pushed a commit to noamgat/vllm that referenced this pull request Aug 9, 2025
paulpak58 pushed a commit to paulpak58/vllm that referenced this pull request Aug 13, 2025
diegocastanibm pushed a commit to diegocastanibm/vllm that referenced this pull request Aug 15, 2025
epwalsh pushed a commit to epwalsh/vllm that referenced this pull request Aug 28, 2025
zhewenl pushed a commit to zhewenl/vllm that referenced this pull request Aug 28, 2025

Labels

multi-modality: Related to multi-modality (#4194)
ready: ONLY add when PR is ready to merge/full CI is needed

Development

Successfully merging this pull request may close these issues.

[Bug]: Inference with long-token vision requests blocks the V1 engine
