[Bugfix] Fix config misalignment between offline and online diffusion inference (Wan2.2, Qwen-Image series) #1979
Conversation
Two optimizations that eliminate ~6.5s of IPC serialization overhead for single-stage diffusion pipelines (e.g. Wan2.2 I2V/T2V) in online serving mode:

Phase 1 – Inline diffusion (eliminate Hop3): When there is exactly one diffusion stage in async mode, initialize OmniDiffusion directly in the orchestrator process instead of spawning a stage-worker subprocess. This removes the entire Hop3 serialization path (pickle + mp.Queue/SHM) between the stage worker and the orchestrator. GPU workers for tensor parallelism are still spawned by DiffusionExecutor.

Phase 2 – SHM tensor transfer (optimize Hop1): Replace pickle-based serialization of large tensors through MessageQueue with POSIX shared memory. The worker copies tensor data into a named SHM segment and enqueues only lightweight metadata; the scheduler reconstructs the tensor from SHM. This reduces Hop1 overhead from ~3.4s to ~1.5s.

Measured on Wan2.2-I2V-A14B (TP=2, 1280x720, 5s@16fps, 1 step):
- Before: e2e = 37.5s
- Phase 1: e2e = 33.1s (−4.4s)
- Phase 2: e2e = 31.0s (−2.1s)
- Total: e2e = 31.0s (−6.5s, −17.5%)

Made-with: Cursor
Signed-off-by: samithuang <285365963@qq.com>
perf: reduce IPC overhead for single-stage diffusion serving (~6.5s, 17.5%)
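A minimal sketch of the Phase 2 hand-off described above: copy a tensor into a named POSIX shared-memory segment on the producer side and rebuild it from lightweight metadata on the consumer side. The helper names are illustrative, not the actual vllm-omni API, and the sketch assumes a numpy-representable dtype (bf16 would need an extra cast).

```python
from multiprocessing import shared_memory

import numpy as np
import torch


def put_tensor_in_shm(t: torch.Tensor) -> dict:
    """Producer side: copy tensor bytes into SHM, return queue-sized metadata."""
    arr = t.detach().cpu().numpy()
    shm = shared_memory.SharedMemory(create=True, size=arr.nbytes)
    np.ndarray(arr.shape, dtype=arr.dtype, buffer=shm.buf)[...] = arr
    meta = {"shm_name": shm.name, "shape": arr.shape, "dtype": str(arr.dtype)}
    shm.close()  # the segment stays alive until the consumer unlinks it
    return meta


def get_tensor_from_shm(meta: dict) -> torch.Tensor:
    """Consumer side: rebuild the tensor from metadata, then release the segment."""
    shm = shared_memory.SharedMemory(name=meta["shm_name"])
    arr = np.ndarray(meta["shape"], dtype=np.dtype(meta["dtype"]), buffer=shm.buf)
    t = torch.from_numpy(arr).clone()  # copy out before freeing the SHM buffer
    shm.close()
    shm.unlink()
    return t
```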
… inference

Three root causes led to different generation results depending on whether offline (Omni.generate) or online (serving_chat / API) paths were used for the same model:

1. **guidance_scale_2 leaked the sentinel value (0.0)** — OmniDiffusionRequest.__post_init__ auto-filled guidance_scale_2 from guidance_scale *before* resolving the 0.0 sentinel back to 1.0. Online requests that omitted guidance_scale ended up with guidance_scale_2 = 0.0, disabling CFG on the low-noise stage in Wan2.2 (and silently altering quality for Qwen-Image models). Fix: resolve the sentinel first, then auto-fill guidance_scale_2.
2. **num_inference_steps hardcoded to 50 in serving_chat** — The chat endpoint forced 50 steps regardless of the pipeline's own default (Wan2.2 uses 40). Change the dataclass default to None (sentinel) so each pipeline's forward() applies its own default when the caller does not specify a value.
3. **Redundant guidance_scale_provided flag in AsyncOmniDiffusion** — AsyncOmniDiffusion.generate() manually set guidance_scale_provided before OmniDiffusionRequest.__post_init__ ran, which then overwrote it. Remove the redundant pre-set; __post_init__ now handles it correctly and consistently for both paths.

Affected models: Wan2.2 (T2V/I2V), Qwen-Image, Qwen-Image-Edit.

Made-with: Cursor
Signed-off-by: samithuang <285365963@qq.com>
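For reference, a condensed sketch of the corrected ordering described in fixes 1 and 3. The field set is trimmed and the exact sentinel handling in OmniDiffusionRequest may differ, so treat this as an illustration rather than the actual implementation:

```python
from dataclasses import dataclass, field


@dataclass
class RequestSketch:
    # 0.0 means "caller did not set guidance_scale" (sentinel value).
    guidance_scale: float = 0.0
    guidance_scale_2: float | None = None
    guidance_scale_provided: bool = field(init=False, default=False)

    def __post_init__(self):
        # Resolve the sentinel first so no downstream consumer ever sees 0.0 ...
        self.guidance_scale_provided = self.guidance_scale != 0.0
        if not self.guidance_scale_provided:
            self.guidance_scale = 1.0
        # ... and only then auto-fill guidance_scale_2 from the resolved value.
        if self.guidance_scale_2 is None:
            self.guidance_scale_2 = self.guidance_scale
```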
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 140ef4afc3
```diff
-    num_inference_steps: int = 50
+    # Scheduler parameters – ``None`` means "not explicitly set by the caller";
+    # each pipeline's ``forward()`` decides its own model-specific default.
+    num_inference_steps: int | None = None
```
Restore a concrete default for num_inference_steps
Changing OmniDiffusionSamplingParams.num_inference_steps to None breaks pipelines that still require an explicit step count. In DreamIDOmniPipeline.forward, the value is read directly and passed into get_scheduler_time_steps without a fallback, and FlowUniPCMultistepScheduler.set_timesteps asserts that num_inference_steps is not None; requests that omit this field now fail at runtime instead of using the previous default behavior.
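One way to keep the `None` sentinel while protecting pipelines such as DreamIDOmniPipeline would be a per-pipeline fallback at the top of `forward()`. The constant and helper below are hypothetical, not code from this PR:

```python
# Hypothetical guard; each pipeline would substitute its own default step count.
DEFAULT_NUM_INFERENCE_STEPS = 50


def resolve_num_inference_steps(sampling_params) -> int:
    steps = getattr(sampling_params, "num_inference_steps", None)
    return DEFAULT_NUM_INFERENCE_STEPS if steps is None else steps
```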
```python
not getattr(req, "skip_cache_refresh", False)
and self.cache_backend is not None
and self.cache_backend.is_enabled()
and req.sampling_params.num_inference_steps is not None
```
Always refresh cache state even when steps are omitted
The new num_inference_steps is not None gate prevents cache_backend.refresh(...) from running for requests that rely on pipeline defaults (now common after this commit). That leaves per-request cache state stale across generations: for example, TeaCacheBackend.refresh is the hook reset path and is documented/implemented as required before each generation. With cache enabled and omitted step count, subsequent requests can reuse prior residual/counter state and produce incorrect outputs.
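A possible shape for the change the reviewer is suggesting, continuing the snippet above: resolve the step count first and always refresh, instead of gating the refresh on the raw field. The `resolve_num_inference_steps` helper is the hypothetical one sketched earlier, and the `refresh` call signature is illustrative — the real TeaCacheBackend.refresh arguments may differ:

```python
# Illustrative only: always refresh per-request cache state, using a resolved step count.
should_refresh = (
    not getattr(req, "skip_cache_refresh", False)
    and self.cache_backend is not None
    and self.cache_backend.is_enabled()
)
if should_refresh:
    steps = resolve_num_inference_steps(req.sampling_params)
    self.cache_backend.refresh(num_inference_steps=steps)
```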
also for qwen-image-layered
already covered by the same fix for qwen-image-edit
@bjf-frz qwen-image-layered args should be aligned, please check after this PR. Thanks.
…alignment-offline-online
Signed-off-by: samithuang <285365963@qq.com>
@SamitHuang please also attach the result for qwen-image-layered
@bjf-frz PTAL, thanks. The RGBA issue appears to remain unresolved.
The Qwen-Image-Layered model VAE expects 4-channel (RGBA) input but online serving sends RGB images decoded from base64. Add automatic RGB→RGBA conversion in both the preprocessing function and the fallback path in forward() to prevent channel mismatch errors. Signed-off-by: samithuang <285365963@qq.com> Made-with: Cursor
…://github.com/samithuang/vllm-omni into fix/diffusion-config-alignment-offline-online
addressed
```python
image = cast(PIL.Image.Image | torch.Tensor | np.ndarray, raw_image)

if isinstance(image, PIL.Image.Image) and image.mode != "RGBA":
    image = image.convert("RGBA")
```
LGTM |
… inference (Wan2.2, Qwen-Image series) (vllm-project#1979) Signed-off-by: samithuang <285365963@qq.com> Signed-off-by: Samit <285365963@qq.com> Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com>
… inference (Wan2.2, Qwen-Image series) (vllm-project#1979) Signed-off-by: samithuang <285365963@qq.com> Signed-off-by: Samit <285365963@qq.com> Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com> Signed-off-by: Hui <1779066624@qq.com>
@ZJY0516 @SamitHuang remember to add tests to confirm these online and offline alignments
… inference (Wan2.2, Qwen-Image series) (vllm-project#1979) Signed-off-by: samithuang <285365963@qq.com> Signed-off-by: Samit <285365963@qq.com> Co-authored-by: Hongsheng Liu <liuhongsheng4@huawei.com> Signed-off-by: yiliu30 <yi4.liu@intel.com>
Summary
Fixes config arguments that diverge between offline (Omni.generate) and online (serving chat / API) inference for diffusion models, causing different generation results for the same prompt+seed. Affected models: Wan2.2 (T2V/I2V), Qwen-Image, Qwen-Image-Edit, and Qwen-Image-Layered.

Root Causes & Fixes

1. `guidance_scale_2` leaked the 0.0 sentinel value — `OmniDiffusionRequest.__post_init__` auto-filled `guidance_scale_2` from `guidance_scale` before resolving the `0.0` sentinel back to `1.0`. Online requests that omitted `guidance_scale` ended up with `guidance_scale_2 = 0.0`, which disabled CFG entirely on the low-noise stage in Wan2.2 (and silently altered quality for Qwen-Image models). Fix: resolve the sentinel first, then auto-fill `guidance_scale_2`.
2. `num_inference_steps` hardcoded to 50 in `serving_chat.py` — The chat endpoint forced 50 steps regardless of the pipeline's own default (Wan2.2 uses 40, others may differ). Fix: change the `OmniDiffusionSamplingParams.num_inference_steps` default to `None` (sentinel) so each pipeline's `forward()` applies its own model-specific default when the caller does not specify a value.
3. Redundant `guidance_scale_provided` flag in `AsyncOmniDiffusion` — `AsyncOmniDiffusion.generate()` manually set `guidance_scale_provided` before `OmniDiffusionRequest.__post_init__` ran, which then overwrote it anyway. Fix: remove the redundant pre-set; `__post_init__` now handles it correctly and consistently for both offline and online paths.
4. `cfg_scale` not mapped to `true_cfg_scale` in online serving — The offline script uses `--cfg-scale`, which maps to `true_cfg_scale` in `OmniDiffusionSamplingParams`, but the online `serving_chat.py` handler only read `extra_body.get("true_cfg_scale")`, silently ignoring the `cfg_scale` key sent by clients. Fix: accept `cfg_scale` as an alias for `true_cfg_scale`.
5. `layers` and `resolution` not passed through in online serving — Qwen-Image-Layered parameters from `extra_body` were not forwarded to `OmniDiffusionSamplingParams`. Fix: add passthrough for both (see the sketch after this list for items 4–5).
6. RGB→RGBA conversion missing for Qwen-Image-Layered — The model's VAE expects 4-channel (RGBA) input, but online serving decodes user-uploaded images as RGB. Fix: add automatic RGB→RGBA conversion in both the preprocessing function and the fallback path in `forward()`.
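A sketch of the serving-side handling described in items 4–5. The surrounding handler code and helper name are illustrative; only the key names (`true_cfg_scale`, `cfg_scale`, `layers`, `resolution`) come from the PR description:

```python
def extract_diffusion_params(extra_body: dict) -> dict:
    """Illustrative mapping of extra_body keys to sampling-param kwargs."""
    params = {}
    # Accept cfg_scale as an alias for true_cfg_scale.
    true_cfg_scale = extra_body.get("true_cfg_scale", extra_body.get("cfg_scale"))
    if true_cfg_scale is not None:
        params["true_cfg_scale"] = float(true_cfg_scale)
    # Forward Qwen-Image-Layered specific parameters when present.
    for key in ("layers", "resolution"):
        if key in extra_body:
            params[key] = extra_body[key]
    return params
```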
Files Changed
- `vllm_omni/diffusion/request.py` — `__post_init__`: resolve guidance_scale sentinel → set CFG flag → auto-fill guidance_scale_2
- `vllm_omni/inputs/data.py` — `num_inference_steps` default from `50` to `None`
- `vllm_omni/entrypoints/openai/serving_chat.py` — `cfg_scale` alias; pass `layers`/`resolution`
- `vllm_omni/entrypoints/async_omni_diffusion.py` — remove redundant `guidance_scale_provided` pre-set
- `vllm_omni/diffusion/worker/diffusion_model_runner.py` — handle `num_inference_steps=None`
- `vllm_omni/diffusion/models/qwen_image/pipeline_qwen_image_layered.py` — RGB→RGBA conversion

Test Plan
For each model, offline and online inference were run with identical parameters (same seed, prompt, num_inference_steps, guidance scales). The raw output tensors (before any video/image encoding) were saved and compared at the numpy level to verify pixel-level identity.
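The comparison itself can be as simple as the sketch below; the file paths are assembled from the output names listed under "Test Scripts & Commands" and the exact test script is not committed:

```python
import numpy as np

offline = np.load("e2e_test_outputs/wan22_offline_raw.npy")
online = np.load("e2e_test_outputs/wan22_online_raw.npy")

assert offline.shape == online.shape, (offline.shape, online.shape)
diff = np.abs(offline.astype(np.float64) - online.astype(np.float64))
print("max abs diff:", diff.max())
print("pixel-identical:", np.array_equal(offline, online))
```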
Test Parameters
Test Scripts & Commands
Wan2.2, Qwen-Image, Qwen-Image-Edit
- Offline: `e2e_test_config_alignment.py` (not committed) calling `Omni.generate()` directly with explicit `OmniDiffusionSamplingParams`.
- Online: `vllm serve <model> --omni`, then sending requests via `requests.post()` to `/v1/chat/completions`.
- Outputs saved to `e2e_test_outputs/` (e.g., `qwen-image_offline.png`, `qwen-image_online.png`, `wan22_offline_raw.npy`, `wan22_online_raw.npy`).

Qwen-Image-Layered
Offline command:
Online server:
```bash
CUDA_VISIBLE_DEVICES=3 vllm serve "Qwen/Qwen-Image-Layered" \
  --omni --port 8092 --enforce-eager
```

Online request:
Output images saved to:
- `examples/e2e_layered_offline_0.png` to `e2e_layered_offline_3.png` (4 RGBA layers, 864×480)
- `/tmp/e2e_layered_online/raw_decoded.npy` (the online API postprocessing for layered output has a pre-existing bug — images are compared at the raw tensor level before postprocessing)

Offline:
Online:
Comparison:
Test Results
All four models produce pixel-identical output between offline and online inference:
Qwen-Image-Layered:

Additional Fixes Verified
The online serving path for Qwen-Image-Layered had three additional issues fixed in this PR:
- `cfg_scale` → `true_cfg_scale` mapping: Before fix, `raw_true_cfg_scale=None` (ignored); after fix, `raw_true_cfg_scale=4.0` (correctly mapped).
- RGB→RGBA conversion: Before fix, `expected input to have 4 channels, but got 3 channels`; after fix, RGB images are automatically converted to RGBA.
- `layers`/`resolution` passthrough: These parameters are now correctly forwarded from `extra_body` to the pipeline.

Server log comparison (effective params in pipeline's `forward()`):