
[Bugfix] Preserve default diffusion sampling params in default stage#2780

Merged
Gaohan123 merged 2 commits into
vllm-project:mainfrom
david6666666:codex/issue-2776-generator-device-defaults
Apr 17, 2026

Conversation

@david6666666
Collaborator

Purpose

Fix the default diffusion-stage bootstrap path so generator_device and other diffusion sampling fields can be provided through --default-sampling-params.

Before this change, generator_device was already supported in OmniDiffusionSamplingParams and honored by the diffusion runner, but the auto-generated single-stage diffusion config did not preserve default_sampling_params from the CLI. As a result, defaults passed through --default-sampling-params could be dropped in the default stage-config path.

This change keeps the existing CLI surface unchanged and aligns startup defaults with the per-request API behavior, instead of introducing a separate dedicated CLI flag.
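The guard-and-extract logic described above can be sketched as follows. This is a minimal, hypothetical helper (the function name `extract_stage0_defaults` is not from the PR); it mirrors the described behavior of parsing the `--default-sampling-params` JSON, keyed by stage index, and keeping only the stage-0 entry for the synthesized single-stage config:

```python
import json
import logging

logger = logging.getLogger(__name__)


def extract_stage0_defaults(raw):
    """Parse a --default-sampling-params JSON string keyed by stage index
    and return the stage-0 entry, falling back to {} on missing or bad input.
    Hypothetical helper name; sketches the guard logic described in this PR."""
    if not raw:
        return {}
    try:
        parsed = json.loads(raw)
    except (TypeError, json.JSONDecodeError):
        logger.warning("Failed to parse default_sampling_params: %r", raw)
        return {}
    if not isinstance(parsed, dict):
        # Parsed JSON was a list, number, etc. -- treat as no defaults.
        return {}
    return parsed.get("0", {})


# e.g. --default-sampling-params '{"0": {"generator_device": "cpu"}}'
print(extract_stage0_defaults('{"0": {"generator_device": "cpu"}}'))
```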

Test Plan

  1. Add a unit test to verify AsyncOmniEngine._create_default_diffusion_stage_cfg() preserves parsed stage-0 default_sampling_params.
  2. Extend OpenAI image server tests to cover generator_device coming from default sampling params for both single-stage and AsyncOmni paths.
  3. Run lightweight local validation with:
    • python -m py_compile vllm_omni/engine/async_omni_engine.py tests/entrypoints/test_async_omni_diffusion_config.py tests/entrypoints/openai_api/test_image_server.py

Test Result

  • Added regression coverage for default diffusion stage config propagation and image API default sampling params.
  • py_compile passed for all modified files.
  • Full pytest execution was not completed in the local environment because test startup was blocked by missing dependencies (cv2, aenum).

Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan. Please provide the test scripts and test commands, or state the reason your change doesn't require additional test scripts. For test file guidelines, please check the test style doc.
  • The test results. Please paste the results comparison before and after, or the e2e results.
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model. Please run mkdocs serve to sync the documentation editions to ./docs.
  • (Optional) Release notes update. If your change is user-facing, please update the release notes draft.


Signed-off-by: David Chen <530634352@qq.com>

```python
default_sampling_params = None
if not isinstance(default_sampling_params, dict):
    default_sampling_params = None
stage_default_sampling_params = (
    default_sampling_params.get("0", {}) if default_sampling_params else {}
)
```
Collaborator


So the logic here is: for a multi-stage model, we take stage 0 as the default params for all stages. Right?

Collaborator Author


Not exactly.

This code path is only used when we synthesize the fallback single-stage diffusion config in _create_default_diffusion_stage_cfg(). In that case, there is only one generated stage, and its stage_id is always 0, so reading default_sampling_params["0"] is intentional.

For multi-stage models, we do not go through this helper. We load the resolved stage configs directly, and each stage keeps its own default_sampling_params, which are later read from that stage’s config during metadata extraction. So this change does not apply stage-0 defaults to all stages; it only preserves stage-0 defaults for the synthetic single-stage diffusion path.
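The routing described above can be sketched as follows. This is a hypothetical simplification (the function name and config shapes are assumed, not from the PR): resolved multi-stage configs each carry their own defaults, while the synthetic single-stage fallback preserves only the `"0"` entry from the CLI:

```python
def pick_default_sampling_params(resolved_stage_cfgs, cli_defaults):
    """Hypothetical sketch of the two config paths: multi-stage models use
    each resolved stage's own defaults; the synthesized single-stage
    fallback reads only the "0" entry, since its stage_id is always 0."""
    if resolved_stage_cfgs:
        # Multi-stage path: each stage keeps its own default_sampling_params.
        return [cfg.get("default_sampling_params", {})
                for cfg in resolved_stage_cfgs]
    # Fallback path: one generated stage whose stage_id is always 0.
    if isinstance(cli_defaults, dict):
        return [cli_defaults.get("0", {})]
    return [{}]
```

So stage-0 defaults are never applied across real multi-stage configs; they only seed the single synthesized stage.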

@Gaohan123 Gaohan123 added this to the v0.20.0 milestone Apr 14, 2026
@david6666666
Collaborator Author

@fake0fan @gcanlin ptal thx

lishunyang12 previously approved these changes Apr 16, 2026
Collaborator

@lishunyang12 left a comment


LGTM. The change is correct and well-scoped.

What it does: When _create_default_diffusion_stage_cfg builds a single-stage diffusion config from CLI kwargs, it now parses default_sampling_params (JSON string keyed by stage index), extracts the stage-0 entry, and includes it in the generated stage config dict. This aligns with how stage_config.py / parse_stage_deploy_config reads default_sampling_params from stage data.

Correctness notes:

  • The hardcoded key "0" is correct here since this method always creates a single stage with stage_id: 0.
  • JSON parse error handling is reasonable (logs a warning, falls back to empty dict).
  • The guard if not isinstance(default_sampling_params, dict) handles the case where the parsed JSON is a non-dict type (e.g., a list or number).
  • The "default_sampling_params" key is placed at the correct level in the stage config dict (sibling of engine_args, runtime, etc.), matching what parse_stage_deploy_config expects via stage_data.get("default_sampling_params").
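The placement noted in the last bullet can be illustrated with a small sketch. The config keys here are assumed from the review comment above, not copied from the repo:

```python
# Sketch of the synthesized single-stage config shape:
# default_sampling_params sits at the top level of the stage dict,
# as a sibling of engine_args and runtime, which is where
# parse_stage_deploy_config reads it via stage_data.get(...).
stage_cfg = {
    "stage_id": 0,
    "engine_args": {},  # engine kwargs forwarded from the CLI
    "runtime": {},      # per-stage runtime settings
    "default_sampling_params": {"generator_device": "cpu"},
}
print(stage_cfg.get("default_sampling_params"))
```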

Tests: Unit test for the builder method and extended integration assertions for generator_device propagation both look good.

Minor optional suggestion (non-blocking): the variable name stage_default_sampling_params could be just stage_sampling_defaults for brevity, but this is purely stylistic.

@lishunyang12 lishunyang12 dismissed their stale review April 16, 2026 14:55

Replacing with inline comments

@david6666666 david6666666 added the ready label to trigger buildkite CI label Apr 17, 2026
Collaborator

@Gaohan123 left a comment


LGTM. Thanks

@Gaohan123 Gaohan123 merged commit 1237882 into vllm-project:main Apr 17, 2026
7 of 8 checks passed
lvliang-intel pushed a commit to lvliang-intel/vllm-omni that referenced this pull request Apr 20, 2026
david6666666 added a commit that referenced this pull request Apr 20, 2026
 #2877 (#2878)

Signed-off-by: david6666666 <530634352@qq.com>
Signed-off-by: David Chen <530634352@qq.com>
Signed-off-by: WeiQing Chen <40507679+david6666666@users.noreply.github.com>
lengrongfu pushed a commit to lengrongfu/vllm-omni that referenced this pull request May 1, 2026
clodaghwalsh17 pushed a commit to clodaghwalsh17/nm-vllm-omni-ent that referenced this pull request May 12, 2026


Development

Successfully merging this pull request may close these issues.

[Bug]: Support generator_device within --default-sampling-params for multi-modal models (Feedback on PR #2769)

3 participants