
[Engine] Add entrypoint class and stage management#13

Merged
hsliuustc0106 merged 6 commits into vllm-project:main from Gaohan123:entrypoint_stage
Oct 24, 2025

Conversation

@Gaohan123
Collaborator

Purpose

This PR implements the Phase 1 features of #10. It adds the OmniLLM entrypoint class for vLLM-omni and builds stage management for the multiple modules within a model.

Test Plan

Test Result


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft.


@gemini-code-assist

Summary of Changes

Hello @Gaohan123, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request lays the foundational infrastructure for multi-stage model execution within the vllm-omni framework. It introduces a new entrypoint class, OmniLLM, designed to orchestrate complex workflows by chaining together different model stages. This enables flexible configuration and dynamic management of multi-modal models, moving towards a more modular and extensible architecture for advanced generation capabilities.

Highlights

  • New Configuration Classes: Introduced OmniModelConfig to extend base model configurations with stage-specific parameters and OmniEngineArgs to provide command-line argument parsing for these new omni-specific engine settings.
  • Refactored OmniLLM Entrypoint: The OmniLLM class has been completely rewritten to manage a sequence of Stage objects, orchestrating multi-stage model execution. It now dynamically loads stage configurations and chains their outputs.
  • Stage Management System: Replaced the StageManager with a new Stage class, which encapsulates the configuration, engine, and input/output processing logic for individual stages within a multi-stage model pipeline.
  • Dynamic Stage Configuration Loading: Added utility functions to load stage configurations from YAML files based on the model type, enabling flexible and declarative definition of multi-stage model architectures.
  • Multi-modal Input/Output Handling: Defined OmniTokensPrompt to support multi-modal inputs (e.g., prompt embeddings, additional information) and OmniRequestOutput to capture stage-specific output details.
  • Example Stage Configuration: Included an example YAML configuration for the qwen2_5_omni model, demonstrating how to define multiple stages like 'thinker', 'talker', and 'code2wav' with their respective engine arguments and input/output types.
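The highlights above describe a declarative multi-stage pipeline in which each stage's `engine_input_source` names the upstream stages it consumes. A minimal sketch of that chaining idea (all names here are illustrative stand-ins, not the actual vllm-omni API):

```python
from dataclasses import dataclass, field

# Hypothetical mirror of the per-stage YAML entries described in this PR
# (model_stage, engine_args, engine_input_source); names are illustrative.
@dataclass
class StageConfig:
    model_stage: str                     # e.g. "thinker", "talker", "code2wav"
    engine_args: dict = field(default_factory=dict)
    engine_input_source: list = field(default_factory=list)  # upstream stage indices

def run_pipeline(stage_configs, prompt):
    """Chain stage outputs: each stage consumes the outputs of the
    stages listed in its engine_input_source."""
    outputs = []
    for cfg in stage_configs:
        if cfg.engine_input_source:
            # Feed this stage from upstream stage outputs.
            inputs = [outputs[i] for i in cfg.engine_input_source]
        else:
            inputs = [prompt]  # the first stage consumes the user prompt
        # Stand-in for a per-stage engine call; here we just tag the data.
        outputs.append(f"{cfg.model_stage}({'+'.join(inputs)})")
    return outputs[-1]

configs = [
    StageConfig("thinker"),
    StageConfig("talker", engine_input_source=[0]),
    StageConfig("code2wav", engine_input_source=[1]),
]
print(run_pipeline(configs, "hi"))  # code2wav(talker(thinker(hi)))
```

In the real PR, the stage list comes from a YAML file selected by model type (e.g. qwen2_5_omni), and each stage wraps a full engine rather than a string tag.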
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature Command Description
Code Review /gemini review Performs a code review for the current pull request in its current state.
Pull Request Summary /gemini summary Provides a summary of the current pull request in its current state.
Comment @gemini-code-assist Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help /gemini help Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This PR introduces the OmniLLM class for managing multi-stage models, along with stage management and configuration loading. It also includes a StageLLM class and updates to configuration and input data structures. The review focuses on correctness and potential issues in the new implementation.

Comment thread vllm_omni/entrypoints/omni_llm.py Outdated

return combined
stage = Stage(stage_config)
omni_llm = OmniLLM(model=model, **stage_config.engine_args)

high

It seems like you are creating an OmniLLM instance within the initialize_stages method, but the OmniLLM class is designed to manage multiple stages. This could lead to confusion and potential issues. Consider whether you should be instantiating StageLLM here instead.
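Following the review's suggestion, the fix would be to build a per-stage StageLLM inside stage initialization rather than a new OmniLLM. A sketch with minimal stub classes (the real Stage and StageLLM in this PR have richer signatures; this is only the shape of the fix):

```python
class Stage:                 # minimal stub of the PR's Stage class
    def __init__(self, config):
        self.config = config

class StageLLM:              # minimal stub of the PR's StageLLM class
    def __init__(self, **engine_args):
        self.engine_args = engine_args

def initialize_stage(stage_config):
    # Per the review: instantiate a per-stage StageLLM here,
    # not a whole new OmniLLM.
    stage = Stage(stage_config)
    stage.llm = StageLLM(**stage_config.get("engine_args", {}))
    return stage

s = initialize_stage({"engine_args": {"model_stage": "thinker"}})
```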

Comment on lines +22 to +23
@config
@dataclass(config=ConfigDict(arbitrary_types_allowed=True))

medium

Consider adding a docstring to explain what the @config decorator does and why it's being used here. This will improve readability and understanding of the code.

Comment thread vllm_omni/engine/arg_utils.py Outdated
Comment on lines +29 to +30
parser.add_argument("--model-stage", type=str, default=EngineArgs.model_stage,
help="Declare model stage (e.g., 'thinker', 'talker', 'token2wav'). This will be written into model_config.model_stage for schedulers to use.")

medium

The default value for --model-stage references EngineArgs.model_stage, but it should reference OmniEngineArgs.model_stage to be accurate.

Suggested change
parser.add_argument("--model-stage", type=str, default=EngineArgs.model_stage,
help="Declare model stage (e.g., 'thinker', 'talker', 'token2wav'). This will be written into model_config.model_stage for schedulers to use.")
parser.add_argument("--model-stage", type=str, default=OmniEngineArgs.model_stage,
help="Declare model stage (e.g., 'thinker', 'talker', 'token2wav'). This will be written into model_config.model_stage for schedulers to use.")

engine_inputs = stage.process_engine_inputs(self.stage_list, prompts)
else:
engine_inputs = prompts
engine_outputs = self._run_generation(stage, sampling_params_list[stage_id], engine_inputs)

medium

Consider adding a check to ensure that sampling_params_list has the same length as self.stage_list. If they don't match, it could lead to indexing errors or incorrect behavior.
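The guard the review asks for could look like the following (a hypothetical helper; the actual attribute names in OmniLLM may differ):

```python
def check_sampling_params(stage_list, sampling_params_list):
    # Fail fast with a clear message instead of hitting an IndexError
    # deep inside the per-stage generation loop.
    if len(sampling_params_list) != len(stage_list):
        raise ValueError(
            f"Expected {len(stage_list)} sampling params "
            f"(one per stage), got {len(sampling_params_list)}"
        )
```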

Comment on lines +63 to +64
if len(self.engine_input_source) == 0:
raise ValueError("engine_input_source is empty")

medium

Raising a ValueError if engine_input_source is empty might be too strict. There might be valid cases where a stage doesn't require an input source. Consider using a different approach, such as skipping the input processing or providing a default input.
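A lenient alternative along the lines suggested, sketched as a hypothetical helper (falling back to the original prompts when no input source is declared, instead of raising):

```python
def resolve_engine_inputs(engine_input_source, stage_outputs, prompts):
    # A stage with no declared input source falls back to the
    # user-supplied prompts rather than raising ValueError.
    if not engine_input_source:
        return prompts
    return [stage_outputs[i] for i in engine_input_source]
```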

@Gaohan123 Gaohan123 changed the title [Core] Add entrypoint class and stage management [Engine] Add entrypoint class and stage management Oct 22, 2025
Comment thread vllm_omni/config/__init__.py Outdated
"""
Configuration module for vLLM-omni.
"""
from vllm.config import ModelConfig
Collaborator


Import order should be reconfigured; maybe we need to add a pre-commit hook.

Collaborator Author


Fixed. Later I think we can uniformly set up pre-commit and submit a PR to recheck all existing code.

engine_args:
model_stage: thinker
model_arch: Qwen2_5OmniForConditionalGeneration
worker_cls: vllm_omni.worker.AR_gpu_worker.ARGPUWorker
Collaborator


the worker file and class name should be renamed

Collaborator Author


fixed.

engine_output_type: latent
engine_input_source: [0]
custom_process_input_func: vllm_omni.model_executor.stage_input_processors.qwen2_5_omni.thinker2talker

Collaborator


Delete the blank line here.

Collaborator Author


fixed

Comment thread vllm_omni/entrypoints/stage.py Outdated
raise IndexError(f"Stage config {stage_id} not found. Available stages: 0-{len(self.stage_configs)-1}")

return self.stage_configs[stage_id]
class Stage:
Collaborator


Stage -> OmniStage
does it only process one stage?

Collaborator


if so, the stage_manager.py file name should be changed

Collaborator Author


Yes, it only processes one. Personally I think the name "Stage" is simple and clear; vLLM main doesn't have this concept. I also renamed stage_manager.py to stage.py.

@@ -0,0 +1,32 @@
from vllm.inputs import TextPrompt
Collaborator


imports should be reordered

Collaborator Author


fixed.

Comment thread vllm_omni/entrypoints/omni_llm.py Outdated
return engine_outputs


class StageLLM(LLM):
Collaborator


In this file, the relationships between OmniLLM, StageLLM, and LLM are not clear.

Collaborator Author


After some thought: OmniLLM is the main entry class, which keeps usage habits consistent with LLM in the vLLM main branch, and StageLLM, inherited from LLM, processes each stage with one engine. Specifically, one OmniLLM contains multiple StageLLMs, each of which inherits from LLM.
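The relationship described above (OmniLLM composes StageLLMs; StageLLM inherits from LLM) can be sketched with stub classes; the LLM stand-in and all constructor signatures here are hypothetical, not the real vLLM API:

```python
class LLM:
    """Stand-in for vllm.LLM (the real class takes a model, etc.)."""
    def generate(self, prompts, sampling_params=None):
        return [f"gen({p})" for p in prompts]

class StageLLM(LLM):
    """One engine for one stage; inherits the vLLM LLM interface."""
    def __init__(self, model_stage):
        self.model_stage = model_stage

class OmniLLM:
    """Main entrypoint: owns one StageLLM per configured stage
    (composition), mirroring vLLM's usage habits for LLM."""
    def __init__(self, stage_names):
        self.stages = [StageLLM(name) for name in stage_names]
```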

@Gaohan123
Collaborator Author

Fixed the necessary reviews from #23; the other reviews are left for future development.

@hsliuustc0106
Collaborator

lgtm
approve

@hsliuustc0106 hsliuustc0106 merged commit 15dbe97 into vllm-project:main Oct 24, 2025
@Gaohan123 Gaohan123 deleted the entrypoint_stage branch November 1, 2025 02:27
princepride pushed a commit to princepride/vllm-omni that referenced this pull request Jan 10, 2026
[Engine] Add entrypoint class and stage management
R2-Y pushed a commit to R2-Y/vllm-omni that referenced this pull request Jan 20, 2026