[Optimize][Qwen3-Omni] Reduce inter-packet latency in async chunk #1656

Merged
hsliuustc0106 merged 9 commits into vllm-project:main from ZeldaHuang:optimize_qwen3_omni_decode_embeddings
Mar 5, 2026

Conversation

@ZeldaHuang
Collaborator

@ZeldaHuang commented Mar 4, 2026


Purpose

Reduce inter-packet latency by passing decode embeddings per token in the async chunk path. The previous torch.cat is time-consuming for large contexts (e.g., video requests):

payload_data[key] = torch.cat([origin_payload[key], value], dim=0)
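The cost difference can be sketched in plain Python (a hypothetical illustration with made-up helper names, not the vLLM-Omni code): re-concatenating the accumulated payload copies every previously sent element again on each chunk, while a per-token list stores each embedding exactly once.

```python
# Hypothetical illustration (not the vLLM-Omni code): count how many
# elements are copied when a growing payload is re-concatenated per
# chunk (the old torch.cat path) versus appended to a list per token.

def copies_with_cat(num_chunks: int, chunk_len: int) -> int:
    """torch.cat-style: each chunk rewrites the whole accumulated tensor."""
    total, copied = 0, 0
    for _ in range(num_chunks):
        total += chunk_len
        copied += total  # cat fills a fresh buffer of the full accumulated size
    return copied

def copies_with_list(num_chunks: int, chunk_len: int) -> int:
    """List-append style: every embedding is written exactly once."""
    return num_chunks * chunk_len

# 100 chunks of 16 tokens: O(n^2) vs O(n) element copies.
print(copies_with_cat(100, 16))   # 80800
print(copies_with_list(100, 16))  # 1600
```

This is why the gap widens with context length: the per-chunk cat cost grows with everything already generated, not just the new tokens.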

Test Plan

vllm bench serve \
    --omni \
    --dataset-name random \
    --port 8091 \
    --max-concurrency 1 \
    --model /mnt/data/models/Qwen3-Omni-30B-A3B-Instruct \
    --endpoint /v1/chat/completions \
    --backend openai-chat-omni \
    --num-prompts 1 \
    --random-input-len 8000 \
    --ignore-eos \
    --percentile-metrics ttft,tpot,itl,e2el,audio_ttfp,audio_rtf \
    --random-output-len 100 \
    --extra_body '{"modalities": ["text", "audio"]}'

Test Result

before this PR

============ Serving Benchmark Result ============
Successful requests:                     1         
Failed requests:                         0         
Maximum request concurrency:             1         
Benchmark duration (s):                  21.28     
Request throughput (req/s):              0.05      
Peak concurrent requests:                1.00      
----------------End-to-end Latency----------------
Mean E2EL (ms):                          21277.44  
Median E2EL (ms):                        21277.44  
P99 E2EL (ms):                           21277.44  
================== Text Result ===================
Total input tokens:                      8000      
Total generated tokens:                  100       
Output token throughput (tok/s):         4.70      
Peak output token throughput (tok/s):    94.00     
Peak concurrent requests:                1.00      
Total Token throughput (tok/s):          380.68    
---------------Time to First Token----------------
Mean TTFT (ms):                          908.48    
Median TTFT (ms):                        908.48    
P99 TTFT (ms):                           908.48    
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          13.05     
Median TPOT (ms):                        13.05     
P99 TPOT (ms):                           13.05     
---------------Inter-token Latency----------------
Mean ITL (ms):                           13.18     
Median ITL (ms):                         9.40      
P99 ITL (ms):                            55.72     
================== Audio Result ==================
Total audio duration generated(s):       29.65     
Total audio frames generated:            711675    
Audio throughput(audio duration/s):      1.39      
---------------Time to First Packet---------------
Mean AUDIO_TTFP (ms):                    3197.30   
Median AUDIO_TTFP (ms):                  3197.30   
P99 AUDIO_TTFP (ms):                     3197.30   
-----------------Real Time Factor-----------------
Mean AUDIO_RTF:                          0.71      
Median AUDIO_RTF:                        0.71      
P99 AUDIO_RTF:                           0.71      
==================================================

after this PR

============ Serving Benchmark Result ============
Successful requests:                     1         
Failed requests:                         0         
Maximum request concurrency:             1         
Benchmark duration (s):                  7.37      
Request throughput (req/s):              0.14      
Peak concurrent requests:                1.00      
----------------End-to-end Latency----------------
Mean E2EL (ms):                          7368.84   
Median E2EL (ms):                        7368.84   
P99 E2EL (ms):                           7368.84   
================== Text Result ===================
Total input tokens:                      8000      
Total generated tokens:                  100       
Output token throughput (tok/s):         13.57     
Peak output token throughput (tok/s):    86.00     
Peak concurrent requests:                1.00      
Total Token throughput (tok/s):          1099.15   
---------------Time to First Token----------------
Mean TTFT (ms):                          824.10    
Median TTFT (ms):                        824.10    
P99 TTFT (ms):                           824.10    
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          13.43     
Median TPOT (ms):                        13.43     
P99 TPOT (ms):                           13.43     
---------------Inter-token Latency----------------
Mean ITL (ms):                           13.71     
Median ITL (ms):                         9.42      
P99 ITL (ms):                            137.64    
================== Audio Result ==================
Total audio duration generated(s):       25.46     
Total audio frames generated:            611025    
Audio throughput(audio duration/s):      3.45      
---------------Time to First Packet---------------
Mean AUDIO_TTFP (ms):                    1796.20   
Median AUDIO_TTFP (ms):                  1796.20   
P99 AUDIO_TTFP (ms):                     1796.20   
-----------------Real Time Factor-----------------
Mean AUDIO_RTF:                          0.28      
Median AUDIO_RTF:                        0.28      
P99 AUDIO_RTF:                           0.28      
==================================================


Signed-off-by: ZeldaHuang <hzm414167@alibaba-inc.com>

@chatgpt-codex-connector (Bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: b66dbf09f7


Comment on lines +141 to 143
else:
# When prefilling a chunked thinker, thinker_hidden_states needs to be updated.
talker_additional_info["thinker_hidden_states"] = pooling_output.get("24").detach().cpu()

P1: Preserve thinker embeddings on no-token async chunks

When output_token_ids is empty (the code comment says this happens while chunked thinker prefill is still running), this branch updates only thinker_hidden_states and never refreshes thinker_embeddings. The new decode path later computes start_index from num_processed_tokens - thinker_embeddings.shape[0] in _thinker_decode_to_talker_decode, so if prefill spans additional chunks after chunk 0, thinker_embeddings.shape[0] becomes stale and the talker can jump past available decode embeddings and emit EOS/pad early.



- if not output_token_ids:
+ if output_token_ids:
      talker_additional_info["thinker_decode_embeddings_list"] = [pooling_output.get("0").detach().cpu()]

P1: Append decode embeddings per token, not per chunk

This stores the entire decode embedding tensor as a single list element regardless of how many tokens were produced in the chunk. The consumer indexes thinker_decode_embeddings_list by token position (start_index), so any chunk containing multiple generated tokens will look like length 1 and all but the first token embedding in that chunk become unreachable, causing premature EOS/pad behavior for those requests.

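The hazard the review describes can be sketched with plain nested lists standing in for tensors (a hypothetical illustration, not the actual code): the consumer indexes one list entry per generated token, so a chunk holding k token embeddings must contribute k entries rather than one.

```python
# Hypothetical sketch of the review's point, with nested lists standing
# in for tensors: the consumer looks up embeddings_list[start_index]
# per token, so each chunk must be split into per-token entries.

chunk = [[0.1], [0.2], [0.3]]  # one chunk containing 3 token embeddings

per_chunk_list = []
per_chunk_list.append(chunk)   # buggy: 3 tokens collapse into 1 entry

per_token_list = []
per_token_list.extend(chunk)   # fixed: one entry per generated token

print(len(per_chunk_list))  # 1  -> tokens beyond the first are unreachable
print(len(per_token_list))  # 3  -> every token's embedding is indexable
```

With the buggy pattern, any start_index past 0 within the chunk misses, which matches the premature EOS/pad behavior the review predicts.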

@amy-why-3459
Contributor

Thank you very much for your contribution, LGTM

@hsliuustc0106 added the `ready` label (to trigger buildkite CI) Mar 4, 2026
@amy-why-3459
Contributor

How will this PR affect E2E in a high-concurrency scenario?

@amy-why-3459
Contributor

vllm bench serve \
    --omni \
    --dataset-name random \
    --port 28889 \
    --max-concurrency 10 \
    --model /home/models/Qwen3-Omni-30B-A3B-Instruct \
    --endpoint /v1/chat/completions \
    --backend openai-chat-omni \
    --num-prompts 100 \
    --random-input-len 100 \
    --ignore-eos \
    --percentile-metrics ttft,tpot,itl,e2el,audio_ttfp,audio_rtf \
    --random-output-len 100 \
    --extra_body '{"modalities": ["text", "audio"]}'

@hsliuustc0106
Collaborator

any comparison for this case?

@amy-why-3459
Contributor

In this scenario, the E2E latency degradation is quite significant and requires further analysis.

Signed-off-by: ZeldaHuang <hzm414167@alibaba-inc.com>
@ZeldaHuang force-pushed the optimize_qwen3_omni_decode_embeddings branch from b66dbf0 to 388a7ef on March 5, 2026 03:25
@ZeldaHuang
Collaborator Author


With the 10-concurrency benchmark above, serializing the decode-embedding tensor list causes some performance degradation; a future optimization will address this.

before this PR:

============ Serving Benchmark Result ============
Successful requests:                     10        
Failed requests:                         0         
Maximum request concurrency:             10        
Benchmark duration (s):                  17.21     
Request throughput (req/s):              0.58      
Peak concurrent requests:                10.00     
----------------End-to-end Latency----------------
Mean E2EL (ms):                          15702.55  
Median E2EL (ms):                        16458.21  
P99 E2EL (ms):                           17198.42  
================== Text Result ===================
Total input tokens:                      1000      
Total generated tokens:                  1000      
Output token throughput (tok/s):         58.11     
Peak output token throughput (tok/s):    321.00    
Peak concurrent requests:                10.00     
Total Token throughput (tok/s):          116.21    
---------------Time to First Token----------------
Mean TTFT (ms):                          419.02    
Median TTFT (ms):                        452.20    
P99 TTFT (ms):                           453.57    
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          34.18     
Median TPOT (ms):                        34.38     
P99 TPOT (ms):                           39.06     
---------------Inter-token Latency----------------
Mean ITL (ms):                           33.84     
Median ITL (ms):                         18.96     
P99 ITL (ms):                            985.66    
================== Audio Result ==================
Total audio duration generated(s):       251.69    
Total audio frames generated:            6040575   
Audio throughput(audio duration/s):      14.62     
---------------Time to First Packet---------------
Mean AUDIO_TTFP (ms):                    2224.78   
Median AUDIO_TTFP (ms):                  2232.00   
P99 AUDIO_TTFP (ms):                     2768.72   
-----------------Real Time Factor-----------------
Mean AUDIO_RTF:                          0.64      
Median AUDIO_RTF:                        0.61      
P99 AUDIO_RTF:                           0.86      
==================================================

after this PR:

============ Serving Benchmark Result ============
Successful requests:                     10        
Failed requests:                         0         
Maximum request concurrency:             10        
Benchmark duration (s):                  17.57     
Request throughput (req/s):              0.57      
Peak concurrent requests:                10.00     
----------------End-to-end Latency----------------
Mean E2EL (ms):                          15568.83  
Median E2EL (ms):                        16396.14  
P99 E2EL (ms):                           17552.55  
================== Text Result ===================
Total input tokens:                      1000      
Total generated tokens:                  1000      
Output token throughput (tok/s):         56.91     
Peak output token throughput (tok/s):    328.00    
Peak concurrent requests:                10.00     
Total Token throughput (tok/s):          113.81    
---------------Time to First Token----------------
Mean TTFT (ms):                          270.63    
Median TTFT (ms):                        294.26    
P99 TTFT (ms):                           295.23    
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          44.25     
Median TPOT (ms):                        44.38     
P99 TPOT (ms):                           54.63     
---------------Inter-token Latency----------------
Mean ITL (ms):                           43.80     
Median ITL (ms):                         0.01      
P99 ITL (ms):                            1308.06   
================== Audio Result ==================
Total audio duration generated(s):       264.38    
Total audio frames generated:            6345255   
Audio throughput(audio duration/s):      15.05     
---------------Time to First Packet---------------
Mean AUDIO_TTFP (ms):                    1828.64   
Median AUDIO_TTFP (ms):                  1848.61   
P99 AUDIO_TTFP (ms):                     2365.43   
-----------------Real Time Factor-----------------
Mean AUDIO_RTF:                          0.59      
Median AUDIO_RTF:                        0.58      
P99 AUDIO_RTF:                           0.64      
==================================================

Signed-off-by: ZeldaHuang <hzm414167@alibaba-inc.com>

- if not output_token_ids:
+ if output_token_ids:
      talker_additional_info["override_keys"] = ["thinker_decode_embeddings", "thinker_output_token_ids"]
Contributor


The override_keys are the same for each step, so we don't need to accumulate them or transmit them every time.
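A minimal sketch of that suggestion (hypothetical payload shape and function names, not the actual code): since override_keys is identical on every step, it can be attached to the first payload only instead of being re-serialized each time.

```python
# Hypothetical sketch of the reviewer's suggestion: constant metadata
# (override_keys) is sent with the first payload only, instead of being
# accumulated and transmitted on every step.

STATIC_OVERRIDE_KEYS = ["thinker_decode_embeddings", "thinker_output_token_ids"]

def build_payload(embedding, is_first_step: bool) -> dict:
    payload = {"thinker_decode_embeddings": embedding}
    if is_first_step:  # constant metadata goes out once
        payload["override_keys"] = STATIC_OVERRIDE_KEYS
    return payload

first = build_payload([0.1], is_first_step=True)
later = build_payload([0.2], is_first_step=False)
print("override_keys" in first, "override_keys" in later)  # True False
```

The receiver would cache the keys from the first packet, shrinking every subsequent packet by the serialized size of the constant list.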

Signed-off-by: ZeldaHuang <hzm414167@alibaba-inc.com>
@amy-why-3459
Contributor

Thank you very much for your contribution, nice work, LGTM.


@hsliuustc0106 left a comment


lgtm

@hsliuustc0106 merged commit 070ea0d into vllm-project:main Mar 5, 2026
7 checks passed
linyueqian pushed a commit to lishunyang12/vllm-omni that referenced this pull request Mar 5, 2026
hsliuustc0106 added a commit to hsliuustc0106/vllm-omni-skills that referenced this pull request Mar 7, 2026
lishunyang12 pushed a commit to lishunyang12/vllm-omni that referenced this pull request Mar 11, 2026
