
[skip CI][Docs] Add Qwen3-Omni and Qwen3-TTS performance blog and figures#1837

Merged
hsliuustc0106 merged 9 commits intovllm-project:mainfrom
Shirley125:blog-qwen3-omni-tts
Apr 17, 2026

Conversation


@Shirley125 Shirley125 commented Mar 12, 2026


Purpose

Add Qwen3-Omni and Qwen3-TTS performance blog and figures

Test Plan

Test Result


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan. Please provide the test scripts & test commands. Please state the reasons if your codes don't require additional test scripts. For test file guidelines, please check the test style doc
  • The test results. Please paste the results comparison before and after, or the e2e results.
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model. Please run mkdocs serve to sync the documentation editions to ./docs.
  • (Optional) Release notes update. If your change is user-facing, please update the release notes draft.


Signed-off-by: CHEN <116010019@link.cuhk.edu.cn>
Co-authored-by: linyueqian <linyueqian@outlook.com>
@Shirley125 Shirley125 force-pushed the blog-qwen3-omni-tts branch from ce5beca to 4f8713c Compare March 12, 2026 05:33
Signed-off-by: CHEN <116010019@link.cuhk.edu.cn>
@Gaohan123 Gaohan123 added this to the v0.18.0 milestone Mar 12, 2026
Replaced the existing YouTube iframe with a new one.

Signed-off-by: Yueqian Lin <70319226+linyueqian@users.noreply.github.com>
**Qwen3-TTS** (H200, concurrency 1):

<table><tr>
<td><img src="figures/tts/Mean_E2EL_(ms)_vllm_omni_vs_transformers.png" alt="Qwen3-TTS E2EL: vLLM vs HF" width="100%"/></td>
Collaborator

Wouldn't it be better to link https://user-images.githubusercontent.com/xxx/xxxx/xxx.png rather than upload these pictures to the repository?

Contributor Author

The total size of all PNGs is only about 2–3 MB, which is negligible for the repository. Keeping the images together with the blog content in the same revision ensures consistency.

…ugfix

qwen3 omni and tts blog

Signed-off-by: CHEN <116010019@link.cuhk.edu.cn>
@Shirley125 Shirley125 force-pushed the blog-qwen3-omni-tts branch from 1229ce5 to e341e6b Compare March 27, 2026 01:50
Shirley125 and others added 2 commits March 27, 2026 10:20
Signed-off-by: CHEN <116010019@link.cuhk.edu.cn>

linyueqian commented Mar 27, 2026

Updated TTS benchmark results with latest vLLM v0.18.0 / vllm-omni v0.18.0rc2 data (H200).

Key changes:

  • All 4 benchmark phases now pass with 0 failures (CUDA graph was previously broken)
  • CUDA Graph section updated: now shows ~1.8x RTF improvement (was previously "negligible impact")
  • Added "Why E2EL/RTF are higher with async chunk" subsection explaining the TTFP vs throughput tradeoff
  • Updated YouTube demo link and port numbers

Headline numbers (concurrency 1):

  • TTFP: 64ms (async chunk) / 733ms (CUDA graph)
  • RTF: 0.124 (CUDA graph) / 0.160 (async chunk)
  • vs HF Transformers: 242x faster TTFP, 16.5x faster RTF
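For readers unfamiliar with the metrics, here is a minimal sketch of how these headline numbers relate. The metric definitions are the standard TTS-serving ones (TTFP = time to first audio packet; RTF = wall-clock generation time divided by generated audio duration, lower is better), not code taken from this PR's benchmark scripts; the constants below are the concurrency-1 numbers quoted above.

```python
# Sketch of the TTS metrics referenced above (standard definitions,
# not this PR's benchmark code). RTF < 1.0 means audio is produced
# faster than real time.

def rtf(generation_time_s: float, audio_duration_s: float) -> float:
    """Real-time factor: wall-clock time per second of generated audio."""
    return generation_time_s / audio_duration_s

def speedup(baseline: float, optimized: float) -> float:
    # TTFP and RTF are latency-style metrics (lower is better),
    # so the improvement factor is baseline / optimized.
    return baseline / optimized

# Concurrency-1 numbers from the comment above (H200):
rtf_cuda_graph = 0.124
rtf_async_chunk = 0.160

# Generating 10 s of audio at RTF 0.124 takes ~1.24 s of wall time.
print(rtf(1.24, 10.0))                            # ~0.124
print(speedup(rtf_async_chunk, rtf_cuda_graph))   # ~1.29x
```

This also makes the async-chunk tradeoff explicit: async chunking wins dramatically on TTFP (64 ms vs 733 ms) at the cost of a somewhat higher RTF.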

@Sy0307 @JuanPZuluaga - could you take a look at these results and see if they align with what you're seeing?

Signed-off-by: linyueqian <linyueqian@outlook.com>
@linyueqian linyueqian force-pushed the blog-qwen3-omni-tts branch from 158bd71 to e43e72f Compare March 27, 2026 03:55
Signed-off-by: CHEN <116010019@link.cuhk.edu.cn>
@fake0fan

@yinpeiqi please also check whether the corresponding descriptions and results are consistent with the paper.

Signed-off-by: CHEN <116010019@link.cuhk.edu.cn>
@JuanPZuluaga

@linyueqian ran a benchmark locally with the latest main:

  • Benchmark: Qwen3-TTS
  • Hardware: RTX 6000 Ada (48GB) vs the PR's H200
  • Model: Qwen3-TTS-12Hz-1.7B-CustomVoice
  • Config: bs16 (max_num_seqs=16), async_chunk enabled, CUDA Graph on (max inflight 16)

Results on RTX 6000 Ada

| Metric | Concurrency 1 | Concurrency 4 | Concurrency 10 |
| --- | --- | --- | --- |
| TTFP (ms) | 44.9 | 102.0 | 217.3 |
| E2EL (ms) | 1033.5 | 1289.6 | 1782.2 |
| RTF | 0.183 | 0.239 | 0.310 |
| Throughput (audio-s/s) | 5.48 | 16.67 | 31.16 |
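As a quick sanity check on these numbers: at steady state each stream emits roughly 1/RTF audio-seconds per wall-clock second, so aggregate throughput should be approximately concurrency / RTF. A hedged sketch (my own consistency check, not part of the benchmark harness), using the RTX 6000 Ada figures from the table:

```python
# Rough consistency check: throughput (audio-s/s) ≈ concurrency / RTF.
# Ignores queueing and ramp-up effects, so small deviations are expected.

results = {  # concurrency -> (RTF, measured throughput in audio-s/s)
    1: (0.183, 5.48),
    4: (0.239, 16.67),
    10: (0.310, 31.16),
}

for conc, (rtf, measured) in results.items():
    predicted = conc / rtf
    print(f"conc={conc}: predicted {predicted:.2f} vs measured {measured} audio-s/s")
```

The predicted values (≈5.46, 16.74, 32.26) track the measured throughput to within a few percent, which suggests the table is internally consistent.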

Results from PR (H200)

| Metric | Concurrency 1 | Concurrency 4 | Concurrency 10 |
| --- | --- | --- | --- |
| TTFP (ms) | 64 | 119 | 425 |
| E2EL (ms) | 941 | ~987 | 1767 |
| RTF | 0.160 | n/a | 0.314 |

(I'm seeing slightly faster TTFP on my side.)

@linyueqian


Thank you very much!

@Gaohan123 Gaohan123 modified the milestones: v0.18.0, v0.20.0 Apr 14, 2026
@hsliuustc0106 hsliuustc0106 merged commit 1637dba into vllm-project:main Apr 17, 2026
3 checks passed
lvliang-intel pushed a commit to lvliang-intel/vllm-omni that referenced this pull request Apr 20, 2026
…ures (vllm-project#1837)

Signed-off-by: CHEN <116010019@link.cuhk.edu.cn>
Signed-off-by: Yueqian Lin <70319226+linyueqian@users.noreply.github.com>
Signed-off-by: linyueqian <linyueqian@outlook.com>
Co-authored-by: Yueqian Lin <70319226+linyueqian@users.noreply.github.com>
Co-authored-by: linyueqian <linyueqian@outlook.com>
qinganrice pushed a commit to qinganrice/vllm-omni that referenced this pull request Apr 23, 2026
…ures (vllm-project#1837)

Signed-off-by: CHEN <116010019@link.cuhk.edu.cn>
Signed-off-by: Yueqian Lin <70319226+linyueqian@users.noreply.github.com>
Signed-off-by: linyueqian <linyueqian@outlook.com>
Co-authored-by: Yueqian Lin <70319226+linyueqian@users.noreply.github.com>
Co-authored-by: linyueqian <linyueqian@outlook.com>
7 participants