
[BugFix] Fix recomputed scheduler bug #7137

Merged
zzzzwwjj merged 1 commit into vllm-project:main from nwpu-zxr:fix_recomputed on Mar 10, 2026

Conversation

@nwpu-zxr (Contributor) commented Mar 10, 2026

What this PR does / why we need it?

Fix the wrong usage of `model_type`: the recompute scheduler must read it from `model_config.hf_text_config` rather than directly from `model_config`.

Does this PR introduce any user-facing change?

No.

How was this patch tested?

By CI.

Signed-off-by: nwpu-zxr <zhouxuerong2@huawei.com>
@github-actions bot commented
👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling in the PR description, to help reviewers and future developers understand the change.

If CI fails, you can run the linting and testing checks locally according to the Contributing and Testing guides.

@gemini-code-assist bot commented
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a bug in the recompute scheduler where the model_type was being accessed incorrectly, potentially leading to misidentification of hybrid models. The fix updates the attribute path to ensure the scheduler correctly identifies specific models (Qwen3_next, Qwen3_5) by referencing hf_text_config.model_type. This change resolves the underlying issue and ensures proper scheduler behavior without introducing any user-facing modifications.

Highlights

  • Bug Fix for Recompute Scheduler: Corrected the attribute path used to determine the model type within the RecomputeScheduler, specifically changing model_config.model_type to model_config.hf_text_config.model_type to accurately identify hybrid models like Qwen3_next and Qwen3_5.


Changelog
  • vllm_ascend/core/recompute_scheduler.py
    • Corrected the path for accessing the model type to hf_text_config.model_type when determining if a model is a hybrid type (Qwen3_next or Qwen3_5).

@gemini-code-assist bot left a comment

Code Review

This pull request correctly fixes a bug in `RecomputeScheduler` by reading the right `model_type` from `hf_text_config`, ensuring that `is_hybrid_model` is determined correctly.

I've added one suggestion to improve the maintainability of the hybrid model check: using a constant list of model substrings makes it easier and less error-prone to add new hybrid models in the future.

Comment on lines 95 to 98:

      self.is_hybrid_model = (
-         "qwen3_next" in self.vllm_config.model_config.model_type
-         or "qwen3_5" in self.vllm_config.model_config.model_type
+         "qwen3_next" in self.vllm_config.model_config.hf_text_config.model_type
+         or "qwen3_5" in self.vllm_config.model_config.hf_text_config.model_type
      )
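
For readers outside the codebase, here is a minimal sketch of one way a check against the top-level `model_type` can go wrong: Hugging Face configs for composite checkpoints can carry a different `model_type` on the outer config than on the nested text config. The `SimpleNamespace` stand-ins and the "qwen3_vl" wrapper type below are illustrative assumptions, not the real vLLM Ascend objects or the confirmed failure mode of this bug.

    # Minimal sketch of the attribute nesting; these objects are stand-ins
    # for vLLM's config classes, not the real types.
    from types import SimpleNamespace

    # Hypothetical composite checkpoint: the top-level model_type names the
    # wrapper, while the nested text config names the underlying LM.
    model_config = SimpleNamespace(
        model_type="qwen3_vl",  # hypothetical wrapper type
        hf_text_config=SimpleNamespace(model_type="qwen3_next"),
    )

    # Old check: looks at the wrapper's model_type and misses the LM.
    old_check = "qwen3_next" in model_config.model_type
    # Fixed check: reads the text config's model_type.
    new_check = "qwen3_next" in model_config.hf_text_config.model_type
    print(old_check, new_check)  # False True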

Severity: high

For better maintainability and to reduce the risk of future bugs when new hybrid models are introduced, it's better to define the hybrid model substrings in a constant and then check for membership. This makes it easier to update and manage the list of hybrid models.

        hybrid_models = ("qwen3_next", "qwen3_5")
        model_type = self.vllm_config.model_config.hf_text_config.model_type
        self.is_hybrid_model = any(s in model_type for s in hybrid_models)
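
As a quick illustration of the suggested pattern, here is a self-contained sketch with the check factored into a helper; the constant and function names and the sample strings are illustrative, not part of the actual patch.

    # Sketch of the membership check suggested above; is_hybrid_model_type
    # is a hypothetical helper name, not a function in the real scheduler.
    HYBRID_MODEL_SUBSTRINGS = ("qwen3_next", "qwen3_5")

    def is_hybrid_model_type(model_type: str) -> bool:
        """Return True if the model type contains any known hybrid substring."""
        return any(s in model_type for s in HYBRID_MODEL_SUBSTRINGS)

    assert is_hybrid_model_type("qwen3_next")
    assert not is_hybrid_model_type("qwen2_5")

Extending the tuple is then a one-line change, so the scheduler logic itself never needs to be touched when a new hybrid model lands.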

@zzzzwwjj merged commit e16009b into vllm-project:main on Mar 10, 2026
20 of 22 checks passed
@nwpu-zxr deleted the fix_recomputed branch on Mar 11, 2026 01:52
845473182 pushed a commit to 845473182/vllm-ascend that referenced this pull request Mar 12, 2026
…to qwen3next_graph

* 'main' of https://github.com/vllm-project/vllm-ascend: (88 commits)
  [main][bugfix] Fixed the problem of speculative decoding in FULL mode (vllm-project#7148)
  fixed fia pad logic in graph mode. (vllm-project#7144)
  [Doc] fix DSV3.1 PD configs (vllm-project#7187)
  refactor: add a check before layer_sharding logging (vllm-project#7186)
  [Build] Add support for Ascend950 chip (vllm-project#7151)
  Revert "[CI] fix skiped e2e test when upgrade vllm version  (vllm-project#6654)" (vllm-project#7166)
  [MODELRUNNERV2]fix penality ops (vllm-project#7013)
  [Bugfix][LoRA] Fix the issue when enable LoRA + tp + fully_sharded_loras (vllm-project#6650)
  [KV Pool]get_num_new_matched_tokens return 0 if token length < block_size (vllm-project#7146)
  [CI] Build Image for v0.16.0rc1 (vllm-project#7155)
  [CI] Skip `test_mooncake_layerwise_connector.py` in `ut` (vllm-project#7147)
  [BugFix]Fix recomputed scheduler bug (vllm-project#7137)
  [Model] Support Minimax-m2.5 on NPU (vllm-project#7105)
  [P/D]Mooncake Layerwise Connector supports hybrid attention manager with multiple kvcache groups (vllm-project#7022)
  Add patch_qwen3_5 for triton ops fused_recurrent_gated_delta_rule (vllm-project#7109)
  [Doc][ReleaseNote] Add release notes for v0.16.0rc1 (vllm-project#7067)
  [Misc] Download on both hk and guiyang region (vllm-project#7129)
  [bugdix] The problem that the w4a8 weight fails to be loaded when the EP is not enabled is resolved. (vllm-project#7090)
  [eagle][cp] fix eagle_cp enable bug2 (vllm-project#7079)
  [CI]Upgrade niglty multi-node-tests max-parallel to 2 (vllm-project#7035)
  ...
Nagisa125 pushed a commit to starmountain1997/vllm-ascend that referenced this pull request Mar 17, 2026
### What this PR does / why we need it?
Fix the wrong usage of `model_type`.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
By CI.

- vLLM version: v0.16.0
- vLLM main: vllm-project/vllm@4034c3d

Signed-off-by: nwpu-zxr <zhouxuerong2@huawei.com>