[BugFix] Fix the issue of thinker requests being preempted, causing shape mismatch.#3147

Merged
hsliuustc0106 merged 2 commits into vllm-project:main from amy-why-3459:bugfix
May 15, 2026

Conversation

@amy-why-3459 (Contributor) commented Apr 25, 2026


Purpose

Fix the issue where thinker requests being preempted causes a shape mismatch.
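
The class of bug being fixed can be sketched as follows. This is a hypothetical, simplified illustration and not the actual vllm-omni code; all names, shapes, and the offset-based bookkeeping are invented for the sketch. The idea: during chunked prefill a consumer slices cached multimodal embeddings by a running offset, and if a request is preempted and then rescheduled from the start without resetting that offset, the sliced tensor no longer matches the number of scheduled tokens.

```python
# Hypothetical sketch (NOT the actual vllm-omni code): shows how a stale
# offset after preemption produces a shape mismatch between cached
# multimodal embeddings and the tokens re-scheduled for prefill.
import numpy as np

HIDDEN = 8  # invented hidden size for the sketch


def prefill_step(embeds: np.ndarray, start: int, num_scheduled: int) -> np.ndarray:
    """Consume embeddings for tokens [start, start + num_scheduled)."""
    chunk = embeds[start:start + num_scheduled]
    # The forward pass expects exactly num_scheduled rows; a stale
    # `start` after preemption makes this check fail.
    assert chunk.shape == (num_scheduled, HIDDEN), "shape mismatch"
    return chunk


# A request with 10 multimodal tokens, embedded up front as one tensor.
embeds = np.zeros((10, HIDDEN))

# First chunk of chunked prefill: tokens 0..6.
prefill_step(embeds, start=0, num_scheduled=6)

# The request is preempted. On resumption the scheduler restarts the
# prefill from token 0, so the consumer must reset its offset too.
# Resuming with the stale offset (start=6) while the full prefill is
# re-scheduled (num_scheduled=10) slices past the end of the tensor:
try:
    prefill_step(embeds, start=6, num_scheduled=10)
except AssertionError:
    pass  # the class of failure this PR guards against

# Correct handling: reset the offset when the request is rescheduled.
out = prefill_step(embeds, start=0, num_scheduled=10)
```
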

Test Plan

pytest -sv tests/e2e/online_serving/test_qwen3_omni_expansion.py -m "full_model" --run-level "full_model"

vllm bench serve \
  --omni \
  --dataset-name random \
  --port 28889 \
  --max-concurrency 32 \
  --model /home/models/Qwen3-Omni-30B-A3B-Instruct \
  --endpoint /v1/chat/completions \
  --backend openai-chat-omni \
  --num-warmups 2 \
  --num-prompts 128 \
  --random-input-len 2500 \
  --ignore-eos \
  --percentile-metrics ttft,tpot,itl,e2el,audio_ttfp,audio_rtf \
  --random-output-len 900 \
  --extra_body '{"modalities": ["text", "audio"]}'

Test Result

=============================== warnings summary ===============================
<frozen importlib._bootstrap>:488
  <frozen importlib._bootstrap>:488: DeprecationWarning: builtin type SwigPyPacked has no __module__ attribute

<frozen importlib._bootstrap>:488
  <frozen importlib._bootstrap>:488: DeprecationWarning: builtin type SwigPyObject has no __module__ attribute

../../../usr/local/lib/python3.12/dist-packages/torch/jit/_script.py:365: 14 warnings
  /usr/local/lib/python3.12/dist-packages/torch/jit/_script.py:365: DeprecationWarning: `torch.jit.script_method` is deprecated. Please switch to `torch.compile` or `torch.export`.
    warnings.warn(

tests/e2e/online_serving/test_qwen3_omni_expansion.py::test_text_to_audio_001[default]
  /usr/local/lib/python3.12/dist-packages/pydub/utils.py:14: DeprecationWarning: 'audioop' is deprecated and slated for removal in Python 3.13
    import audioop

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
--- Running Summary
=========== 36 passed, 3 skipped, 17 warnings in 1433.71s (0:23:53) ============
============ Serving Benchmark Result ============
Successful requests:                     128
Failed requests:                         0
Maximum request concurrency:             32
Benchmark duration (s):                  783.97
Request throughput (req/s):              0.16
Peak concurrent requests:                37.00
----------------End-to-end Latency----------------
Mean E2EL (ms):                          176080.43
Median E2EL (ms):                        174200.13
P99 E2EL (ms):                           299217.79
================== Text Result ===================
Total input tokens:                      321792
Total generated tokens:                  75756
Output token throughput (tok/s):         96.63
Peak output token throughput (tok/s):    928.00
Peak concurrent requests:                37.00
Total Token throughput (tok/s):          507.09
---------------Time to First Token----------------
Mean TTFT (ms):                          1527.28
Median TTFT (ms):                        289.07
P99 TTFT (ms):                           5655.24
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          58.93
Median TPOT (ms):                        36.80
P99 TPOT (ms):                           327.84
---------------Inter-token Latency----------------
Mean ITL (ms):                           25.41
Median ITL (ms):                         21.15
P99 ITL (ms):                            104.51
================== Audio Result ==================
Total audio duration generated(s):       23292.16
Total audio frames generated:            559011675
Audio throughput(audio duration/s):      29.71
---------------Time to First Packet---------------
Mean AUDIO_TTFP (ms):                    6244.77
Median AUDIO_TTFP (ms):                  3812.04
P99 AUDIO_TTFP (ms):                     14192.98
-----------------Real Time Factor-----------------
Mean AUDIO_RTF:                          1.10
Median AUDIO_RTF:                        1.01
P99 AUDIO_RTF:                           1.57
==================================================


@amy-why-3459 amy-why-3459 changed the title [BugFix] Fix the chunked prefill issue in thinker [WIP][BugFix] Fix the chunked prefill issue in thinker Apr 25, 2026

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: a07f07fdd0


Comment thread on vllm_omni/model_executor/stage_input_processors/qwen3_omni.py (outdated)
@hsliuustc0106 (Collaborator)

I also hit this problem when trying to send requests using vllm bench --omni.

@amy-why-3459 amy-why-3459 force-pushed the bugfix branch 13 times, most recently from 53c2b9d to 603a5b4 Compare April 30, 2026 03:38
@amy-why-3459 amy-why-3459 changed the title [WIP][BugFix] Fix the chunked prefill issue in thinker [BugFix] Fix the issue of thinker requests being preempted, causing shape mismatch. May 1, 2026
@yenuo26 yenuo26 added the omni-test label to trigger buildkite omni model test in nightly CI label May 1, 2026
@Gaohan123 Gaohan123 added this to the v0.20.0 milestone May 5, 2026
@amy-why-3459 amy-why-3459 force-pushed the bugfix branch 4 times, most recently from b5031e7 to be5b627 Compare May 6, 2026 01:49
@Gaohan123 Gaohan123 removed the omni-test label to trigger buildkite omni model test in nightly CI label May 6, 2026
@amy-why-3459 amy-why-3459 force-pushed the bugfix branch 3 times, most recently from 8fbfa02 to e196426 Compare May 7, 2026 02:06
Signed-off-by: amy-why-3459 <wuhaiyan17@huawei.com>
…atch

Signed-off-by: amy-why-3459 <wuhaiyan17@huawei.com>
@hsliuustc0106 hsliuustc0106 merged commit e7ee5de into vllm-project:main May 15, 2026
6 checks passed
tzhouam pushed a commit that referenced this pull request May 15, 2026
…hape mismatch. (#3147)

Signed-off-by: amy-why-3459 <wuhaiyan17@huawei.com>

Labels

omni-test label to trigger buildkite omni model test in nightly CI

Development

Successfully merging this pull request may close these issues.

[Bug][Qwen3-Omni]: Qwen3-Omni: The shape does not match in high-concurrency scenarios.

4 participants