
[Bugfix] Fix in_profile_run in mtp_proposer dummy_run#5165

Merged
wangxiyuan merged 3 commits into vllm-project:main from slippersss:bugfix_profile
Dec 18, 2025
Conversation


@slippersss (Contributor) commented on Dec 18, 2025

What this PR does / why we need it?

This PR fixes a failure of `enable_force_load_balance` caused by the missing `in_profile_run` flag in the `dummy_run` of `mtp_proposer`.

Does this PR introduce any user-facing change?

N/A

How was this patch tested?

Tested via CI.

Signed-off-by: Zetong Li <slippersss@126.com>

@gemini-code-assist (bot) left a comment


Code Review

This pull request aims to fix a bug related to in_profile_run in mtp_proposer. The changes correctly add an is_profile parameter to dummy_run and pass it down. However, there are two critical issues. First, in mtp_proposer.py, the parameter passed to set_ascend_forward_context is named is_profile_run instead of the correct in_profile_run, which will cause the flag to be ignored. Second, the call to self.drafter.dummy_run in model_runner_v1.py now includes the is_profile argument, but other Proposer implementations (EagleProposer, NgramProposer, SuffixDecodingProposer) and the base Proposer interface have not been updated to accept this argument, which will lead to a TypeError at runtime.

Comment thread: vllm_ascend/spec_decode/mtp_proposer.py (outdated)

```diff
             batch_descriptor=batch_descriptor,
-            is_mtp_model=True):
+            is_mtp_model=True,
+            is_profile_run=is_profile):
```


critical

The parameter name is_profile_run is incorrect. The set_ascend_forward_context function expects in_profile_run. This typo will cause the is_profile flag to be ignored, as Python will not pass it to the function, and in_profile_run will use its default value (False). This makes the intended fix ineffective.

Suggested change

```diff
-            is_profile_run=is_profile):
+            in_profile_run=is_profile):
```
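The failure mode the reviewer describes can be reproduced in isolation. A minimal sketch, assuming (as the review implies) that `set_ascend_forward_context` accepts extra keyword arguments: a mis-spelled keyword then lands in `**kwargs` and is silently dropped instead of raising a `TypeError`, so the real parameter keeps its default. The context manager below is hypothetical, not the vLLM Ascend API.

```python
from contextlib import contextmanager

# Hypothetical stand-in for a context manager that tolerates extra
# keyword arguments, as set_ascend_forward_context presumably does.
@contextmanager
def set_forward_context(in_profile_run=False, **kwargs):
    # Expose which flags actually took effect and which were swallowed.
    yield {"in_profile_run": in_profile_run, "ignored": sorted(kwargs)}

# Typo: the flag is collected into **kwargs, and in_profile_run
# silently keeps its default value of False.
with set_forward_context(is_profile_run=True) as ctx:
    assert ctx["in_profile_run"] is False
    assert ctx["ignored"] == ["is_profile_run"]

# Correct spelling: the flag takes effect.
with set_forward_context(in_profile_run=True) as ctx:
    assert ctx["in_profile_run"] is True
```

This is why the typo produces a silent wrong result rather than a crash, which is exactly what makes the bug easy to miss in testing.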

```diff
             dummy_compute_logits=dummy_drafter_compute_logits,
-            in_graph_capturing=not force_attention)
+            in_graph_capturing=not force_attention,
+            is_profile=is_profile)
```


critical

This change introduces the is_profile keyword argument to the dummy_run call. However, the Proposer interface and its other implementations (EagleProposer, NgramProposer, SuffixDecodingProposer) have not been updated to accept this argument. This will cause a TypeError at runtime if a proposer other than MtpProposer is used. To fix this, you should update the base Proposer interface in vllm_ascend/spec_decode/interface.py and all its subclasses to include is_profile=False in their dummy_run method signatures.
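The interface fix the reviewer suggests can be sketched as follows. This is a simplified, hypothetical reconstruction (class and method bodies are invented for illustration, not taken from vllm_ascend/spec_decode/interface.py): giving `dummy_run` an `is_profile` flag with a default value in the base class and every subclass means callers may pass `is_profile=...` to any proposer without a `TypeError`, while only `MtpProposer` actually consumes it.

```python
# Hypothetical sketch of the suggested interface change: a shared
# is_profile=False keyword on dummy_run across base class and subclasses.

class Proposer:
    def dummy_run(self, num_tokens: int, is_profile: bool = False) -> str:
        raise NotImplementedError

class MtpProposer(Proposer):
    def dummy_run(self, num_tokens: int, is_profile: bool = False) -> str:
        # MTP actually uses the flag (forwarded as in_profile_run).
        return f"mtp(in_profile_run={is_profile})"

class NgramProposer(Proposer):
    def dummy_run(self, num_tokens: int, is_profile: bool = False) -> str:
        # Other proposers accept the flag only for interface compatibility.
        return "ngram"

# The caller can now pass the flag uniformly to any proposer.
for drafter in (MtpProposer(), NgramProposer()):
    drafter.dummy_run(8, is_profile=True)  # no TypeError for any subclass
```

Keeping the default at `False` preserves the behavior of every existing call site that does not pass the flag.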

@github-actions (bot)

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by fulfilling the PR description to help reviewers and future developers understand.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.

@weijinqian0 added the ready (ready for review) and ready-for-test (start test by label for PR) labels on Dec 18, 2025
@wangxiyuan wangxiyuan merged commit 2304218 into vllm-project:main Dec 18, 2025
54 checks passed
845473182 pushed a commit to 845473182/vllm-ascend that referenced this pull request Dec 19, 2025
…to eplb_refactor

* 'main' of https://github.com/vllm-project/vllm-ascend: (52 commits)
  [Doc]Add the user_guide doc file regarding fine-grained TP. (vllm-project#5084)
  [pref] qwen3_next add triton ops : fused_sigmoid_gating_delta_rule_update (vllm-project#4818)
  [Feature] Add token mask for DispatchGmmCombineDecode operator (vllm-project#5171)
  [CI] Improve CI (vllm-project#5078)
  [Refactor] remove some metadata variables in attention_v1. (vllm-project#5160)
  Add Qwen3-VL-235B-A22B-Instruct tutorials (vllm-project#5167)
  [Doc] Add a perf tune section (vllm-project#5127)
  [Image] Refactor image build (vllm-project#5175)
  [refactor] refactor weight trans nz and transpose (vllm-project#4878)
  [BugFix]Fix precision issue for LoRA feature (vllm-project#4141)
  【Doc】Deepseekv3.1/R1 doc enhancement (vllm-project#4827)
  support basic long_seq feature st (vllm-project#5140)
  [Bugfix] install trition for test_custom_op (vllm-project#5112)
  [2/N][Pangu][MoE] Remove Pangu Related Code (vllm-project#5130)
  [bugfix] Use FUSED_MC2 MoE comm path for the op `dispatch_ffn_combine` (vllm-project#5156)
  [BugFix] Fix top_p,top_k issue with EAGLE and add top_p,top_k in EAGLE e2e (vllm-project#5131)
  [Doc][P/D] Fix MooncakeConnector's name (vllm-project#5172)
  [Bugfix] Fix in_profile_run in mtp_proposer dummy_run (vllm-project#5165)
  [Doc] Refact benchmark doc (vllm-project#5173)
  [Nightly]  Avoid max_model_len being smaller than the decoder prompt to prevent single-node-accuray-tests from failing (vllm-project#5174)
  ...

Signed-off-by: 白永斌 <baiyongbin3@h-partners.com>
chenaoxuan pushed a commit to chenaoxuan/vllm-ascend that referenced this pull request Dec 20, 2025
### What this PR does / why we need it?
This PR aims to fix failure of `enable_force_load_balance` caused by
missing `in_profile_run` in `dummy_run` of mtp_proposer.

### Does this PR introduce _any_ user-facing change?
N/A

### How was this patch tested?
by ci

- vLLM version: v0.12.0
- vLLM main:
vllm-project/vllm@ad32e3e

---------

Signed-off-by: Zetong Li <slippersss@126.com>
ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request Feb 28, 2026

Signed-off-by: Zetong Li <slippersss@126.com>
Signed-off-by: zrj026 <zhangrunjiang026@gmail.com>
ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request Mar 4, 2026

Signed-off-by: Zetong Li <slippersss@126.com>
Signed-off-by: zrj026 <zhangrunjiang026@gmail.com>

Labels

ready (ready for review), ready-for-test (start test by label for PR)