
[BugFix]config priority fix#2289

Closed
Bounty-hunter wants to merge 1 commit into vllm-project:main from Bounty-hunter:config_priority

Conversation

@Bounty-hunter
Contributor

@Bounty-hunter Bounty-hunter commented Mar 28, 2026


Background

Currently, vllm-omni constructs stage configurations during startup according to the following priority rules.

Diffusion Model

  • If stage_configs_path is provided, construct the configuration from it.
  • Otherwise, construct the configuration from CLI kwargs, e.g.:
python text_to_image.py --tensor-parallel-size 4

Omni Model

  • If stage_configs_path is provided, construct the configuration from it.
  • Otherwise, load the configuration from the default locations:
    vllm_omni/model_executor/stage_configs/
    vllm_omni/platforms/xxx/stage_configs/
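The priority rules above can be sketched as follows. This is a hypothetical illustration of the described behavior; the function and parameter names (`resolve_stage_config`, `stage_configs_path`, `is_diffusion_model`) are illustrative, not the actual vllm-omni API.

```python
from pathlib import Path

# Default config locations described above (the platform-specific
# directory path is abbreviated here, matching the "xxx" placeholder).
DEFAULT_CONFIG_DIRS = [
    Path("vllm_omni/model_executor/stage_configs"),
    # Path("vllm_omni/platforms/<platform>/stage_configs"),
]

def resolve_stage_config(stage_configs_path, is_diffusion_model, cli_kwargs):
    """Return which source the stage configuration is built from."""
    if stage_configs_path is not None:
        # An explicit path always wins, for both model types.
        return ("yaml", stage_configs_path)
    if is_diffusion_model:
        # Diffusion models fall back to CLI kwargs,
        # e.g. --tensor-parallel-size 4.
        return ("cli", cli_kwargs)
    # Omni models fall back to the default config directories.
    return ("default_dirs", DEFAULT_CONFIG_DIRS)
```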

Problem

HunyuanImage-3.0 supports two execution modes:

  • DIT only
  • AR + DIT (multi-stage)

However, during startup the system always loads the stage configuration from
vllm_omni/model_executor/stage_configs/. As a result, it always initializes the AR stage (or AR + DIT in the future), even when only the DIT stage is required.

PR #1826 attempts to address this by placing both AR and DIT configurations in hunyuan_image_3_moe.yaml and dynamically selecting the relevant configuration based on the task type (text-to-image, image-to-text, etc.).

However, this approach introduces a new issue:

When starting the Hunyuan DIT, a YAML configuration must always be specified; otherwise the default config from hunyuan_image_3_moe.yaml is used (8 GPUs with a tensor parallel size of 8).

CLI kwargs (e.g. --tensor-parallel-size) are completely ignored.

This problem is discussed in:
#2282

Purpose

For DIT-only models (e.g., HunyuanImage DIT), the startup behavior should be consistent with other diffusion models. Users should be able to launch the model directly with CLI arguments, for example:

python -u examples/offline_inference/text_to_image/text_to_image.py \
--model /home/models/tencent/HunyuanImage-3.0  \
--prompt "A brown and white dog is running on the grass"  \
--output output_image_latest.png    \
--num-inference-steps 50    \
--tensor-parallel-size 4    \
--diffusion-only \
--cfg-scale 4.0

For multi-stage models (e.g., HunyuanImage AR + DIT), the system should still load stage configurations from the default directories: vllm_omni/model_executor/stage_configs/ or vllm_omni/platforms/xxx/stage_configs/.

Change

Introduce a new parameter: --diffusion-only

  • If --diffusion-only is set: the configuration is constructed from CLI kwargs.
  • If --diffusion-only is not set: the configuration is loaded from vllm_omni/model_executor/stage_configs/ or vllm_omni/platforms/xxx/stage_configs/.
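The proposed selection logic can be sketched as below. This is a hypothetical sketch, not the actual implementation; `select_config_source` and its parameters are illustrative names.

```python
def select_config_source(diffusion_only: bool, stage_configs_path=None) -> str:
    """Pick the stage-config source under the proposed --diffusion-only flag."""
    if stage_configs_path is not None:
        # An explicitly provided YAML path still takes top priority.
        return "yaml"
    if diffusion_only:
        # DIT-only launch: honor CLI kwargs like --tensor-parallel-size.
        return "cli_kwargs"
    # Multi-stage (AR + DIT) launch: use the default config directories.
    return "default_dirs"
```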

cc @fake0fan @xuechendi @yinpeiqi

Test Plan

Test Result


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan. Please provide the test scripts & test commands. Please state the reasons if your codes don't require additional test scripts. For test file guidelines, please check the test style doc
  • The test results. Please paste the results comparison before and after, or the e2e results.
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model. Please run mkdocs serve to sync the documentation editions to ./docs.
  • (Optional) Release notes update. If your change is user-facing, please update the release notes draft.


@Bounty-hunter Bounty-hunter changed the title config priority fix [WIP]config priority fix Mar 28, 2026
@Bounty-hunter Bounty-hunter marked this pull request as ready for review March 28, 2026 07:08

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: d03ed7022f


Comment on lines 5 to 6
stage_args:
- stage_id: 0


P1: Keep diffusion stage in default Hunyuan config

This change removes the stage_id: 1 diffusion stage from the default hunyuan_image_3_moe config, so launching without diffusion_only now resolves to an AR-only pipeline and breaks image generation flows that previously relied on default config resolution. That regresses both offline usage (tests/e2e/offline_inference/test_expert_parallel.py expects Omni(model="tencent/HunyuanImage-3.0") to return images) and serving paths (vllm_omni/entrypoints/openai/api_server.py rejects pipelines with no diffusion stage for /v1/images/*). The diffusion stage should remain in the default config, with diffusion_only controlling config-construction priority rather than deleting the stage.


Signed-off-by: dengyunyang <584797741@qq.com>
@Bounty-hunter Bounty-hunter changed the title [WIP]config priority fix [BugFix]config priority fix Mar 28, 2026
@Bounty-hunter
Contributor Author

Fixed by #2076.

