
Main2main Upgrade vllm commit to 0320 17:00 #7510

Merged
wangxiyuan merged 2 commits into vllm-project:main from leo-pony:main2main_0320
Mar 23, 2026

Conversation

@leo-pony
Collaborator

@leo-pony leo-pony commented Mar 20, 2026

What this PR does / why we need it?

Main2main Upgrade vllm commit to 0320 17:00

1. Adapt to vLLM's refactor of `_moe_forward`, which now calls `runner.forward_impl_chunked()` when `runner.use_dp_chunking` is True. vLLM PR: "[MoE Refactor] DefaultMoERunner simplification" #33049

2. Adapt to vLLM moving the call to `self._set_compile_ranges()` in `VllmConfig.__post_init__` from **before** `check_and_update_config()` to **after** it (to allow platforms to lower `max_num_batched_tokens` first). vLLM PR: "fix(xpu): Re-compute compile ranges after platform-specific config updates" #37523

Does this PR introduce any user-facing change?

NA

How was this patch tested?

NA

@leo-pony leo-pony marked this pull request as draft March 20, 2026 09:09
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on upgrading the vLLM commit to a specific version (0320 17:00) and ensuring compatibility with vLLM version 0.17.0. It introduces conditional logic based on the vLLM version to adapt the behavior of certain components within the vLLM-Ascend project. The changes primarily involve incorporating version checking using the vllm_version_is function and adjusting code accordingly.

Highlights

  • vLLM Version Compatibility: This PR introduces version checking for vLLM to ensure compatibility between different parts of the vLLM-Ascend project, specifically addressing changes introduced in vLLM version 0.17.0.
  • Code Modifications: The code changes primarily involve conditional logic based on the vLLM version, using the vllm_version_is function to adapt the behavior of certain components.
  • File Updates: Several files were updated to incorporate the version checking mechanism, including docs/source/community/versioning_policy.md, tests/ut/distributed/device_communicators/test_pyhccl.py, vllm_ascend/ascend_forward_context.py, vllm_ascend/ops/mla.py, vllm_ascend/patch/worker/patch_qwen3_5.py, vllm_ascend/patch/worker/patch_qwen3_next.py, and vllm_ascend/platform.py.
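
The version-gating pattern the summary describes can be sketched as follows. `vllm_version_is` here is a simplified stand-in for the helper in `vllm_ascend.utils` (the real one inspects the installed vLLM package), and `build_forward_context_kwargs` is a hypothetical call site, not an actual function in the repository:

```python
def vllm_version_is(target: str, installed: str = "0.17.0") -> bool:
    """Simplified stand-in: report whether the installed vLLM matches target."""
    return installed == target


def build_forward_context_kwargs(**kwargs):
    # Hypothetical adaptation point: keep an argument the pinned 0.17.0
    # release expects, drop it for newer vLLM where the API changed.
    if vllm_version_is("0.17.0"):
        kwargs.setdefault("virtual_engine", 0)
    else:
        kwargs.pop("virtual_engine", None)
    return kwargs
```

Branching on `vllm_version_is` lets one codebase track vLLM main while still supporting the last released version.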


Ignored Files
  • Ignored by pattern: .github/workflows/** (6)
    • .github/workflows/_e2e_test.yaml
    • .github/workflows/bot_pr_create.yaml
    • .github/workflows/dockerfiles/Dockerfile.lint
    • .github/workflows/pr_test_full.yaml
    • .github/workflows/pr_test_light.yaml
    • .github/workflows/schedule_codecov_refresh.yaml

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request updates the vLLM commit reference in the documentation and introduces conditional logic across several files to ensure compatibility with vLLM version 0.17.0. This adaptation addresses API changes in the upstream vLLM library, specifically concerning StatelessProcessGroup and the handling of virtual_engine within the forward context. The changes are well-contained within conditional blocks, allowing the codebase to support different vLLM versions. The documentation update reflects the new vLLM commit hash.

@leo-pony leo-pony added the ready and ready-for-test labels Mar 20, 2026
@github-actions
Contributor

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling out the PR description to help reviewers and future developers understand.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.

@github-actions
Contributor

This pull request has conflicts, please resolve those before we can evaluate the pull request.


leo-pony and others added 2 commits March 23, 2026 09:12
Signed-off-by: leo-pony <nengjunma@outlook.com>
Root causes:
- AscendMoERunner enters forward_impl_chunked path on DP runs due to
  flashinfer_all2allv backend causing use_dp_chunking=True; fix by
  overriding use_dp_chunking=False in AscendMoERunner
- compile_ranges_endpoints is None when update_compile_ranges_split_points
  runs after vLLM moved _set_compile_ranges() to after check_and_update_config;
  fix by returning [] instead of None in _get_compile_ranges

Upstream commit range: 6a9cceb..ed359c4

Co-Authored-By: Claude Code <noreply@anthropic.com>
Signed-off-by: leo-pony <nengjunma@outlook.com>
@leo-pony leo-pony marked this pull request as ready for review March 23, 2026 09:18
@wangxiyuan wangxiyuan merged commit fcba91a into vllm-project:main Mar 23, 2026
38 checks passed
starmountain1997 pushed a commit to starmountain1997/vllm-ascend that referenced this pull request Mar 25, 2026
### What this PR does / why we need it?
Main2main Upgrade vllm commit to 0320 17:00

1. Adapt to vLLM's refactor of `_moe_forward`, which now calls
`runner.forward_impl_chunked()` when `runner.use_dp_chunking` is True.
vLLM PR: "[MoE Refactor] DefaultMoERunner simplification"
[#33049](vllm-project/vllm#33049)

2. Adapt to vLLM moving the call to `self._set_compile_ranges()` in
`VllmConfig.__post_init__` from **before** `check_and_update_config()`
to **after** it (to allow platforms to lower `max_num_batched_tokens`
first). vLLM PR: "fix(xpu): Re-compute compile ranges after
platform-specific config updates"
[#37523](vllm-project/vllm#37523)


### Does this PR introduce _any_ user-facing change?
NA

### How was this patch tested?
NA

- vLLM version: v0.17.0
- vLLM main:
vllm-project/vllm@8b63257

---------

Signed-off-by: leo-pony <nengjunma@outlook.com>
Co-authored-by: Claude Code <noreply@anthropic.com>
lihaokun-2026 pushed a commit to lihaokun-2026/vllm-ascend that referenced this pull request Mar 29, 2026
chenchuw886 pushed a commit to chenchuw886/vllm-ascend that referenced this pull request Apr 1, 2026

Labels

ci/build · documentation · module:core · module:ops · module:tests · ready · ready-for-test


2 participants