[Bugfix] Fix config misalignment between offline and online diffusion inference (Wan2.2, Qwen-Image series)#1979

Merged
david6666666 merged 28 commits into
vllm-project:main from
SamitHuang:fix/diffusion-config-alignment-offline-online
Mar 19, 2026

Conversation

@SamitHuang
Collaborator

@SamitHuang SamitHuang commented Mar 18, 2026

Summary

Fixes config arguments that diverge between offline (Omni.generate) and online (serving chat / API) inference for diffusion models, causing different generation results for the same prompt+seed. Affected models: Wan2.2 (T2V/I2V), Qwen-Image, Qwen-Image-Edit, and Qwen-Image-Layered.

Root Causes & Fixes

  1. guidance_scale_2 leaked the 0.0 sentinel value — OmniDiffusionRequest.__post_init__ auto-filled guidance_scale_2 from guidance_scale before resolving the 0.0 sentinel back to 1.0. Online requests that omitted guidance_scale ended up with guidance_scale_2 = 0.0, which disabled CFG entirely on the low-noise stage in Wan2.2 (and silently altered quality for Qwen-Image models). Fix: resolve the sentinel first, then auto-fill guidance_scale_2 (see the sketch after this list).

  2. num_inference_steps hardcoded to 50 in serving_chat.py — The chat endpoint forced 50 steps regardless of the pipeline's own default (Wan2.2 uses 40, others may differ). Fix: change the OmniDiffusionSamplingParams.num_inference_steps default to None (sentinel) so each pipeline's forward() applies its own model-specific default when the caller does not specify a value.

  3. Redundant guidance_scale_provided flag in AsyncOmniDiffusion — AsyncOmniDiffusion.generate() manually set guidance_scale_provided before OmniDiffusionRequest.__post_init__ ran, which then overwrote it anyway. Fix: remove the redundant pre-set; __post_init__ now handles it correctly and consistently for both offline and online paths.

  4. cfg_scale not mapped to true_cfg_scale in online serving — The offline script uses --cfg-scale which maps to true_cfg_scale in OmniDiffusionSamplingParams, but the online serving_chat.py handler only read extra_body.get("true_cfg_scale"), silently ignoring the cfg_scale key sent by clients. Fix: accept cfg_scale as an alias for true_cfg_scale.

  5. layers and resolution not passed through in online serving — Qwen-Image-Layered parameters from extra_body were not forwarded to OmniDiffusionSamplingParams. Fix: add passthrough for both.

  6. RGB→RGBA conversion missing for Qwen-Image-Layered — The model's VAE expects 4-channel (RGBA) input, but online serving decodes user-uploaded images as RGB. Fix: add automatic RGB→RGBA conversion in both the preprocessing function and the fallback path in forward().

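A minimal, self-contained sketch of the ordering fix in items 1 and 2 (simplified, hypothetical field layout; not the actual OmniDiffusionRequest / OmniDiffusionSamplingParams source):

from __future__ import annotations

from dataclasses import dataclass


@dataclass
class SamplingParamsSketch:
    # 0.0 means "caller did not set guidance_scale";
    # None means "caller did not set num_inference_steps".
    guidance_scale: float = 0.0
    guidance_scale_2: float | None = None
    num_inference_steps: int | None = None

    def __post_init__(self) -> None:
        # Record whether the caller actually provided a guidance scale.
        self.guidance_scale_provided = self.guidance_scale != 0.0
        # Resolve the 0.0 sentinel *before* auto-filling guidance_scale_2,
        # so an omitted guidance_scale can no longer leak 0.0 downstream.
        if not self.guidance_scale_provided:
            self.guidance_scale = 1.0
        if self.guidance_scale_2 is None:
            self.guidance_scale_2 = self.guidance_scale


def resolve_steps(params: SamplingParamsSketch, pipeline_default: int) -> int:
    # None sentinel: each pipeline's forward() applies its own default
    # (e.g. 40 for Wan2.2) instead of a hardcoded 50.
    if params.num_inference_steps is not None:
        return params.num_inference_steps
    return pipeline_default


# An online request that omits both fields now matches the offline defaults.
p = SamplingParamsSketch()
assert p.guidance_scale == 1.0 and p.guidance_scale_2 == 1.0
assert resolve_steps(p, pipeline_default=40) == 40

The key point is simply that the sentinel resolution runs before the auto-fill, so both entry points end up with the same effective values.
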
Files Changed

  • vllm_omni/diffusion/request.py — Reorder __post_init__: resolve guidance_scale sentinel → set CFG flag → auto-fill guidance_scale_2
  • vllm_omni/inputs/data.py — Change num_inference_steps default from 50 to None
  • vllm_omni/entrypoints/openai/serving_chat.py — Remove hardcoded defaults; accept cfg_scale alias; pass layers/resolution
  • vllm_omni/entrypoints/async_omni_diffusion.py — Remove redundant guidance_scale_provided pre-set
  • vllm_omni/diffusion/worker/diffusion_model_runner.py — Guard cache refresh against num_inference_steps=None
  • vllm_omni/diffusion/models/qwen_image/pipeline_qwen_image_layered.py — Auto-convert RGB→RGBA input images

Test Plan

For each model, offline and online inference were run with identical parameters (same seed, prompt, num_inference_steps, guidance scales). The raw output tensors (before any video/image encoding) were saved and compared at the numpy level to verify pixel-level identity.

Test Parameters

Model               Prompt                       Seed  Steps  cfg_scale  guidance_scale  Extra
─────────────────── ──────────────────────────── ───── ────── ────────── ─────────────── ──────────────────────────────────────
Wan2.2 (T2V)        "a cat walking..."           42    40     -          5.0             480p, 17 frames
Qwen-Image          "a beautiful landscape..."   42    50     -          1.0             1024×1024
Qwen-Image-Edit     "edit the image..."          42    50     4.0        1.0             1024×1024
Qwen-Image-Layered  "a rabbit"                   0     50     4.0        -               4 layers, 640 resolution, rabbit.png input

Test Scripts & Commands

Wan2.2, Qwen-Image, Qwen-Image-Edit

  • Offline: Custom E2E test script e2e_test_config_alignment.py (not committed) calling Omni.generate() directly with explicit OmniDiffusionSamplingParams.
  • Online: Same script starting vllm serve <model> --omni, then sending requests via requests.post() to /v1/chat/completions (a minimal request sketch follows this list).
  • Output images saved to: e2e_test_outputs/ (e.g., qwen-image_offline.png, qwen-image_online.png, wan22_offline_raw.npy, wan22_online_raw.npy).

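For the online half, the request shape mirrors the Qwen-Image-Layered example further below; a minimal text-to-image sketch (the port and exact extra_body keys here are assumptions, not the committed test script):

import requests

payload = {
    "model": "Qwen/Qwen-Image",
    "messages": [{"role": "user", "content": "a beautiful landscape..."}],
    "extra_body": {
        "num_inference_steps": 50,  # matches the offline run
        "guidance_scale": 1.0,
        "seed": 42,
    },
}
resp = requests.post("http://localhost:8091/v1/chat/completions", json=payload, timeout=600)
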
Qwen-Image-Layered

Offline command:

cd examples && CUDA_VISIBLE_DEVICES=7 python offline_inference/image_to_image/image_edit.py \
    --model "Qwen/Qwen-Image-Layered" \
    --image rabbit.png \
    --prompt "a rabbit" \
    --output "e2e_layered_offline" \
    --num-inference-steps 50 \
    --cfg-scale 4.0 \
    --seed 0 \
    --layers 4 \
    --color-format "RGBA" \
    --enforce-eager

Online server:

CUDA_VISIBLE_DEVICES=3 vllm serve "Qwen/Qwen-Image-Layered" \
    --omni --port 8092 --enforce-eager

Online request:

import base64, requests

with open('examples/rabbit.png', 'rb') as f:
    img_b64 = base64.b64encode(f.read()).decode()

payload = {
    'model': 'Qwen/Qwen-Image-Layered',
    'messages': [{
        'role': 'user',
        'content': [
            {'type': 'image_url', 'image_url': {'url': f'data:image/png;base64,{img_b64}'}},
            {'type': 'text', 'text': 'a rabbit'}
        ]
    }],
    'extra_body': {
        'num_inference_steps': 50,
        'cfg_scale': 4.0,
        'seed': 0,
        'layers': 4,
    }
}

resp = requests.post('http://localhost:8092/v1/chat/completions', json=payload, timeout=600)

Output images saved to:

  • Offline: examples/e2e_layered_offline_0.png to e2e_layered_offline_3.png (4 RGBA layers, 864×480)
  • Online: raw decoded tensor saved as /tmp/e2e_layered_online/raw_decoded.npy (the online API postprocessing for layered output has a pre-existing bug — images are compared at the raw tensor level before postprocessing)

Offline output images: e2e_layered_offline_0, e2e_layered_offline_3

Online output images: layered_0, layered_3

Comparison:

import numpy as np
offline = np.load('/tmp/e2e_layered_offline/raw_decoded.npy')
online  = np.load('/tmp/e2e_layered_online/raw_decoded.npy')
print(f'Shapes: {offline.shape} vs {online.shape}')
print(f'Pixel-identical: {np.array_equal(offline, online)}')
# Shapes: (4, 4, 480, 864) vs (4, 4, 480, 864)
# Pixel-identical: True

Test Results

All four models produce pixel-identical output between offline and online inference:

Model                   Offline Shape              Online Shape               Pixel-Identical
─────────────────────── ────────────────────────── ────────────────────────── ───────────────
Wan2.2 (T2V)            (1, 3, 17, 480, 832)       (1, 3, 17, 480, 832)       ✅ True
Qwen-Image              (1024, 1024, 3)            (1024, 1024, 3)            ✅ True
Qwen-Image-Edit         (1024, 1024, 3)            (1024, 1024, 3)            ✅ True
Qwen-Image-Layered      (4, 4, 480, 864)           (4, 4, 480, 864)           ✅ True

Qwen-Image-Layered: Additional Fixes Verified

The online serving path for Qwen-Image-Layered had three additional issues fixed in this PR:

  1. cfg_scale → true_cfg_scale mapping: Before the fix, raw_true_cfg_scale=None (ignored); after the fix, raw_true_cfg_scale=4.0 (correctly mapped).
  2. RGB→RGBA conversion: Before the fix, the online request crashed with "expected input to have 4 channels, but got 3 channels"; after the fix, RGB images are automatically converted to RGBA.
  3. layers/resolution passthrough: These parameters are now correctly forwarded from extra_body to the pipeline (see the sketch after this list).
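A sketch of the alias and passthrough handling from items 1 and 3 (hypothetical helper name; the real change lives in serving_chat.py):

def sampling_kwargs_from_extra_body(extra_body: dict) -> dict:
    """Illustrative mapping of chat extra_body keys to diffusion sampling params."""
    kwargs: dict = {}
    # Accept cfg_scale as an alias for true_cfg_scale (the explicit key wins).
    true_cfg = extra_body.get("true_cfg_scale", extra_body.get("cfg_scale"))
    if true_cfg is not None:
        kwargs["true_cfg_scale"] = float(true_cfg)
    # Forward the Qwen-Image-Layered knobs instead of silently dropping them.
    for key in ("layers", "resolution"):
        if key in extra_body:
            kwargs[key] = extra_body[key]
    return kwargs


# e.g. the layered request from the test plan above:
print(sampling_kwargs_from_extra_body({"cfg_scale": 4.0, "layers": 4}))
# {'true_cfg_scale': 4.0, 'layers': 4}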

Server log comparison (effective params in pipeline's forward()):

# Before fix (online):
true_cfg_scale=4.0, raw_true_cfg_scale=None, guidance_scale_provided=False

# After fix (online):
true_cfg_scale=4.0, raw_true_cfg_scale=4.0, guidance_scale_provided=False

# Offline reference:
true_cfg_scale=4.0, raw_true_cfg_scale=4.0, guidance_scale_provided=True

Note: guidance_scale_provided differs (True offline vs False online) because the offline script explicitly passes --guidance-scale 1.0 while the online request omits it. This does not affect the output because Qwen-Image-Layered is not a guidance-distilled model — the guidance_scale value is unused when transformer.guidance_embeds is False.

Signed-off-by: samithuang <285365963@qq.com>
Two optimizations that eliminate ~6.5s of IPC serialization overhead
for single-stage diffusion pipelines (e.g. Wan2.2 I2V/T2V) in online
serving mode:

Phase 1 – Inline diffusion (eliminate Hop3):
When there is exactly one diffusion stage in async mode, initialize
OmniDiffusion directly in the orchestrator process instead of spawning
a stage worker subprocess. This removes the entire Hop3 serialization
path (pickle + mp.Queue/SHM) between the stage worker and orchestrator.
GPU workers for tensor parallelism are still spawned by DiffusionExecutor.

Phase 2 – SHM tensor transfer (optimize Hop1):
Replace pickle-based serialization of large tensors through MessageQueue
with POSIX shared memory. The worker copies tensor data into a named SHM
segment and enqueues only lightweight metadata; the scheduler reconstructs
the tensor from SHM. This reduces Hop1 overhead from ~3.4s to ~1.5s.

Measured on Wan2.2-I2V-A14B (TP=2, 1280x720, 5s@16fps, 1 step):
  Before:  e2e = 37.5s
  Phase 1: e2e = 33.1s  (−4.4s)
  Phase 2: e2e = 31.0s  (−2.1s)
  Total:   e2e = 31.0s  (−6.5s, −17.5%)

Made-with: Cursor

Signed-off-by: samithuang <285365963@qq.com>
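A minimal sketch of the Hop1 shared-memory handoff described in Phase 2 above, using numpy arrays for brevity (the function and segment names are hypothetical; the real path also has to move torch tensors to CPU and handle cleanup on failure):

from multiprocessing import shared_memory

import numpy as np


def shm_send(arr: np.ndarray, name: str) -> dict:
    """Worker side: copy the array into a named SHM segment and return only
    lightweight metadata for the message queue."""
    shm = shared_memory.SharedMemory(name=name, create=True, size=arr.nbytes)
    np.ndarray(arr.shape, dtype=arr.dtype, buffer=shm.buf)[...] = arr
    shm.close()  # the segment stays alive until it is unlinked
    return {"shm_name": name, "shape": arr.shape, "dtype": str(arr.dtype)}


def shm_recv(meta: dict) -> np.ndarray:
    """Scheduler side: reconstruct the array from SHM using the metadata."""
    shm = shared_memory.SharedMemory(name=meta["shm_name"])
    arr = np.ndarray(meta["shape"], dtype=np.dtype(meta["dtype"]), buffer=shm.buf).copy()
    shm.close()
    shm.unlink()  # release the segment once the data has been copied out
    return arr


# Round-trip example with a dummy decoded-video-sized tensor.
src = np.random.rand(1, 3, 17, 480, 832).astype(np.float32)
dst = shm_recv(shm_send(src, "hop1_demo"))
assert np.array_equal(src, dst)
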
…17.5%)

perf: reduce IPC overhead for single-stage diffusion serving (~6.5s, 17.5%)
Signed-off-by: Samit <285365963@qq.com>
… inference

Three root causes led to different generation results depending on
whether offline (Omni.generate) or online (serving_chat / API) paths
were used for the same model:

1. **guidance_scale_2 leaked the sentinel value (0.0)** —
   OmniDiffusionRequest.__post_init__ auto-filled guidance_scale_2
   from guidance_scale *before* resolving the 0.0 sentinel back to 1.0.
   Online requests that omitted guidance_scale ended up with
   guidance_scale_2 = 0.0, disabling CFG on the low-noise stage in
   Wan2.2 (and silently altering quality for Qwen-Image models).
   Fix: resolve the sentinel first, then auto-fill guidance_scale_2.

2. **num_inference_steps hardcoded to 50 in serving_chat** —
   The chat endpoint forced 50 steps regardless of the pipeline's own
   default (Wan2.2 uses 40). Change the dataclass default to None
   (sentinel) so each pipeline's forward() applies its own default
   when the caller does not specify a value.

3. **Redundant guidance_scale_provided flag in AsyncOmniDiffusion** —
   AsyncOmniDiffusion.generate() manually set guidance_scale_provided
   before OmniDiffusionRequest.__post_init__ ran, which then overwrote
   it. Remove the redundant pre-set; __post_init__ now handles it
   correctly and consistently for both paths.

Affected models: Wan2.2 (T2V/I2V), Qwen-Image, Qwen-Image-Edit.

Made-with: Cursor

Signed-off-by: samithuang <285365963@qq.com>

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 140ef4afc3


Comment thread: vllm_omni/inputs/data.py

- num_inference_steps: int = 50
+ # Scheduler parameters – ``None`` means "not explicitly set by the caller";
+ # each pipeline's ``forward()`` decides its own model-specific default.
+ num_inference_steps: int | None = None


P1 Badge Restore a concrete default for num_inference_steps

Changing OmniDiffusionSamplingParams.num_inference_steps to None breaks pipelines that still require an explicit step count. In DreamIDOmniPipeline.forward, the value is read directly and passed into get_scheduler_time_steps without a fallback, and FlowUniPCMultistepScheduler.set_timesteps asserts that num_inference_steps is not None; requests that omit this field now fail at runtime instead of using the previous default behavior.
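One way a pipeline that still needs a concrete step count could guard against the new None sentinel (illustrative only, not a change made in this PR; the default value is an assumption):

from __future__ import annotations

# Hypothetical fallback at the top of a pipeline's forward().
PIPELINE_DEFAULT_STEPS = 50


def resolve_num_inference_steps(requested: int | None) -> int:
    # Treat None as "use this pipeline's own default" so schedulers that
    # assert on an explicit step count keep working.
    return requested if requested is not None else PIPELINE_DEFAULT_STEPS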


not getattr(req, "skip_cache_refresh", False)
and self.cache_backend is not None
and self.cache_backend.is_enabled()
and req.sampling_params.num_inference_steps is not None


P1 Badge Always refresh cache state even when steps are omitted

The new num_inference_steps is not None gate prevents cache_backend.refresh(...) from running for requests that rely on pipeline defaults (now common after this commit). That leaves per-request cache state stale across generations: for example, TeaCacheBackend.refresh is the hook reset path and is documented/implemented as required before each generation. With cache enabled and omitted step count, subsequent requests can reuse prior residual/counter state and produce incorrect outputs.


@hsliuustc0106
Collaborator

also for qwen-image-layered

@SamitHuang
Collaborator Author

also for qwen-image-layered

already covered by the same fix for qwen-image-edit

@SamitHuang SamitHuang changed the title [Bugfix] Fix config misalignment between offline and online diffusion inference (Wan2.2, Qwen-Image, Qwen-Image-Edit) [Bugfix] Fix config misalignment between offline and online diffusion inference (Wan2.2, Qwen-Image series) Mar 18, 2026
@Gaohan123 Gaohan123 added the ready label to trigger buildkite CI label Mar 19, 2026
@Gaohan123 Gaohan123 added this to the v0.18.0 milestone Mar 19, 2026
@david6666666
Collaborator

also for qwen-image-layered

@bjf-frz qwen-image-layered args should be aligned, please check after this PR. Thanks

Member

@ZJY0516 ZJY0516 left a comment


LGTM

@hsliuustc0106
Collaborator

@SamitHuang please also attach the result for qwen-image-layered

@david6666666
Collaborator

@bjf-frz PTAL thanks, and the RGBA issue appears to remain unresolved.

The Qwen-Image-Layered model VAE expects 4-channel (RGBA) input but
online serving sends RGB images decoded from base64. Add automatic
RGB→RGBA conversion in both the preprocessing function and the
fallback path in forward() to prevent channel mismatch errors.

Signed-off-by: samithuang <285365963@qq.com>
Made-with: Cursor
@SamitHuang
Collaborator Author

@bjf-frz PTAL thanks, and the RGBA issue appears to remain unresolved.

addressed

image = cast(PIL.Image.Image | torch.Tensor | np.ndarray, raw_image)

if isinstance(image, PIL.Image.Image) and image.mode != "RGBA":
    image = image.convert("RGBA")
Contributor


lgtm

@david6666666
Collaborator

LGTM

@david6666666 david6666666 merged commit 5699fe7 into vllm-project:main Mar 19, 2026
6 of 7 checks passed
fhfuih pushed a commit to fhfuih/vllm-omni that referenced this pull request Mar 19, 2026
… inference (Wan2.2, Qwen-Image series) (vllm-project#1979)

Signed-off-by: samithuang <285365963@qq.com>
Signed-off-by: Samit <285365963@qq.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Hu1Lcode pushed a commit to Hu1Lcode/vllm-omni that referenced this pull request Mar 19, 2026
… inference (Wan2.2, Qwen-Image series) (vllm-project#1979)

Signed-off-by: samithuang <285365963@qq.com>
Signed-off-by: Samit <285365963@qq.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>
Signed-off-by: Hui <1779066624@qq.com>
@hsliuustc0106
Collaborator

@ZJY0516 @SamitHuang remember to add tests to confirm these online and offline alignments

yiliu30 pushed a commit to yiliu30/vllm-omni-fork that referenced this pull request Mar 20, 2026
… inference (Wan2.2, Qwen-Image series) (vllm-project#1979)

Signed-off-by: samithuang <285365963@qq.com>
Signed-off-by: Samit <285365963@qq.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

Signed-off-by: yiliu30 <yi4.liu@intel.com>
hsliuustc0106 added a commit to hsliuustc0106/vllm-omni-skills that referenced this pull request Mar 22, 2026
clodaghwalsh17 pushed a commit to clodaghwalsh17/nm-vllm-omni-ent that referenced this pull request May 12, 2026
… inference (Wan2.2, Qwen-Image series) (vllm-project#1979)

Signed-off-by: samithuang <285365963@qq.com>
Signed-off-by: Samit <285365963@qq.com>
Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>

Labels

ready label to trigger buildkite CI

Projects

None yet

6 participants