
Conversation

@shen-shanshan
Collaborator

What this PR does / why we need it?

What this PR does:

  1. Move AscendSparseMoeBlock into the qwen3 model file, since it is only used by the qwen3 model.
  2. Disable AscendSparseMoeBlock when aclgraph is enabled, since AscendSparseMoeBlock does not currently work with aclgraph.
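The conditional disable in item 2 can be sketched as a module selection at model-construction time. This is a minimal illustration only: the flag name `use_aclgraph` and the stand-in classes are assumptions, not the PR's actual code.

```python
# Hypothetical sketch of gating the Ascend-specific MoE block on aclgraph.
# The class names follow the PR description; their bodies are stand-ins.

class Qwen3MoeSparseMoeBlock:
    """Stand-in for the upstream vLLM qwen3 MoE block."""
    name = "upstream"

class AscendSparseMoeBlock:
    """Stand-in for the Ascend-optimized MoE block (no aclgraph support yet)."""
    name = "ascend"

def select_moe_block(use_aclgraph: bool):
    # AscendSparseMoeBlock does not work with aclgraph, so fall back to the
    # upstream block whenever aclgraph capture is enabled.
    if use_aclgraph:
        return Qwen3MoeSparseMoeBlock
    return AscendSparseMoeBlock

print(select_moe_block(True).name)   # upstream block when aclgraph is on
print(select_moe_block(False).name)  # Ascend-optimized block otherwise
```

The key design point is that the decision happens once, when the model is built, so the forward pass carries no per-step branching.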

Does this PR introduce any user-facing change?

How was this patch tested?

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request aims to fix an issue where qwen3 MoE models do not work with aclgraph. The changes involve moving the Ascend-specific MoE block implementation to the qwen3 model file and conditionally disabling it when aclgraph is enabled. While the overall approach is sound, the current implementation introduces critical bugs. Specifically, the method signatures for the newly introduced CustomSparseMoeBlock and the conditionally used Qwen3MoeSparseMoeBlock are incompatible with how they are called, which will lead to runtime TypeErrors. I have provided detailed comments and suggestions to address these critical issues.
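The review's claim about incompatible method signatures would surface roughly as follows. The class and parameter names below are hypothetical illustrations of the failure mode, not the PR's actual code:

```python
# Generic illustration of the flagged bug: a block whose forward() signature
# does not match the call site raises TypeError at runtime.

class SparseMoeBlockA:
    def forward(self, hidden_states):          # matches the call site
        return hidden_states

class SparseMoeBlockB:
    def forward(self, hidden_states, top_k):   # expects an extra argument
        return hidden_states

def run_layer(block):
    try:
        # The call site passes only hidden_states.
        return block.forward(hidden_states=[1.0, 2.0]), None
    except TypeError as exc:
        return None, str(exc)

out_a, err_a = run_layer(SparseMoeBlockA())   # succeeds
out_b, err_b = run_layer(SparseMoeBlockB())   # TypeError: missing 'top_k'
```

Because Python checks signatures only at call time, such a mismatch passes import and model construction and only fails on the first forward pass, which is why the review flags it as a runtime TypeError.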

Signed-off-by: shen-shanshan <[email protected]>
example_prompts = [
    "Hello, my name is",
]
dtype = "half"
Collaborator


The vLLM runner actually runs in eager mode by default.

@wangxiyuan wangxiyuan merged commit 9f590c7 into vllm-project:v0.9.1-dev Aug 22, 2025
17 checks passed
