[Misc][Main2Main] Upgrade vLLM to v0.20.1 and 0506 #8983

Merged
wangxiyuan merged 29 commits into vllm-project:main from wxsIcey:wxs/0506-2 on May 10, 2026

Conversation

@wxsIcey
Collaborator

wxsIcey commented May 8, 2026

What this PR does / why we need it?

Fixes `NPUInputBatch` missing the `thinking_budget_state_holder = None` attribute, caused by [Reasoning][Feature] Support for speculative decoding with thinking budget.

Fixes `AscendMultiHeadLatentAttention` missing the `skip_topk` attribute, caused by [Feature]: IndexCache support for DSA models.
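
For illustration, a minimal sketch of the shared pattern behind these two attribute fixes, assuming the upstream import path; the surrounding structure is not the exact patch in this PR:

```python
# Hedged sketch: default the attributes that vLLM v0.20.1 reads
# unconditionally, so older-style constructors do not raise
# AttributeError. The import path is an assumption and may differ
# in v0.20.1.
from vllm.v1.worker.gpu_input_batch import InputBatch


class NPUInputBatch(InputBatch):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Read by the thinking-budget code path added upstream.
        self.thinking_budget_state_holder = None


class AscendMultiHeadLatentAttention:
    def __init__(self) -> None:
        # Same pattern for the IndexCache/DSA change: v0.20.1 expects
        # skip_topk on every MLA implementation. False is an assumed
        # safe default, not necessarily the value the PR uses.
        self.skip_topk = False
```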

Fixes MLA prefill backend selection, caused by [Attention] Abstract the MLA prefill backends and eliminate cuDNN.
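
A rough sketch of the no-op registration idea behind this fix (the patch file is vllm_ascend/patch/platform/patch_mla_prefill_backend.py per the review threads below); the module path and selector name here are hypothetical stand-ins, not the real vLLM API:

```python
# vllm_ascend/patch/platform/patch_mla_prefill_backend.py, roughly.
# Illustrative only: the module path and selector name below are
# assumptions, standing in for whatever selection hook the v0.20.1
# MLA refactor introduced.
import importlib

mla_mod = importlib.import_module("vllm.attention.mla")  # assumed path


def _noop_mla_prefill_backend(*args, **kwargs):
    # Ascend runs MLA prefill inside its own NPU attention kernels, so
    # the CUDA/cuDNN-oriented backend selection must never fire here.
    # Returning None signals "no separate prefill backend".
    return None


# Monkeypatch the selector so it cannot pick a cuDNN-style backend
# that would crash on NPU.
mla_mod.select_mla_prefill_backend = _noop_mla_prefill_backend  # hypothetical name
```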

Fixes the ModelRunner V2 eagle refactor, caused by [Model Runner V2] Skip attention metadata rebuild before draft prefill, [Model Runner V2] Rebuild attn metadata between draft decode steps, [Model Runner V2] Add logprob_token_ids support, and [Model Runner V2] Fix rejection sampling acceptance rate gap vs MRV1.
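
These eagle fixes are version-gated. A minimal sketch of the version-conditional pattern, assuming vllm-ascend's vllm_version_is helper; the gated flag is illustrative, not the PR's exact variable:

```python
# Version-conditional execution: run the new v0.20.1 behavior only when
# that version is actually installed. vllm_version_is is vllm-ascend's
# helper; "rebuild_attn_metadata" is an illustrative flag.
from vllm_ascend.utils import vllm_version_is

if vllm_version_is("0.20.1"):
    # Assumed direction: v0.20.1 rebuilds attention metadata between
    # draft decode steps itself, so the plugin skips its own rebuild.
    rebuild_attn_metadata = False
else:
    rebuild_attn_metadata = True
```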

Does this PR introduce any user-facing change?

N/A

How was this patch tested?

CI passed with newly added and existing tests.

@github-actions
Contributor

github-actions Bot commented May 8, 2026

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling in the PR description to help reviewers and future developers understand.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request upgrades the vLLM dependency to version 0.20.1 and introduces necessary adjustments to the Ascend NPU integration. The changes ensure compatibility with upstream vLLM refactors, specifically addressing missing attributes in input batches and attention layers, and providing robust handling for MLA prefill backends and speculative decoding workflows.

Highlights

  • Dependency Upgrade: Upgraded the vLLM dependency to version 0.20.1 across all Dockerfiles and documentation.
  • Compatibility Fixes: Added missing attributes to NPUInputBatch and AscendMultiHeadLatentAttention to maintain compatibility with upstream vLLM changes.
  • MLA Prefill Backend: Introduced a new patch for the MLA prefill backend to prevent runtime crashes on Ascend NPUs by registering a no-op implementation.
  • ModelRunner V2 Refactor: Updated Eagle speculative decoding and logprob sampling logic to support version-conditional execution for v0.20.1.

Ignored Files
  • Ignored by pattern: .github/workflows/** (7)
    • .github/workflows/_e2e_test.yaml
    • .github/workflows/dockerfiles/Dockerfile.lint
    • .github/workflows/pr_test_full.yaml
    • .github/workflows/pr_test_light.yaml
    • .github/workflows/schedule_update_estimated_time.yaml
    • .github/workflows/schedule_vllm_e2e_test.yaml
    • .github/workflows/scripts/config.yaml
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

gemini-code-assist Bot (Contributor) left a comment

Code Review

This pull request upgrades the vLLM dependency to v0.20.1 and implements compatibility fixes for the Ascend NPU backend, covering MLA prefill handling, input batch structures, and Eagle speculative decoding. The reviewer identified a critical recurring logic error where version checks were inverted, which would disable compatibility logic on the target version. A redundant variable assignment in the MLA module was also noted.

Suggested PR Title:

```markdown
[Ops][Misc] Upgrade vLLM to v0.20.1 and compatibility fixes
```

Suggested PR Summary:

```markdown
### What this PR does / why we need it?
This PR upgrades the vLLM dependency to v0.20.1 and includes several compatibility fixes for the Ascend NPU backend:
- Fixes `NPUInputBatch` compatibility with thinking budget state.
- Fixes `AscendMultiHeadLatentAttention` missing `skip_topk` attribute.
- Fixes MLA prefill backend selection to avoid crashes on Ascend.
- Updates ModelRunner V2 Eagle speculative decoding to match vLLM 0.20.1 refactors.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
CI passed with existing tests.
```
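
To make the inverted-check finding concrete, a minimal sketch assuming vllm-ascend's vllm_version_is helper; the flag name is illustrative:

```python
from vllm_ascend.utils import vllm_version_is

# Buggy: the "not" inverts the gate, so the v0.20.1 compatibility path
# is skipped exactly on the version that needs it.
use_new_logprob_path = not vllm_version_is("0.20.1")

# Fixed: enable the new path on the version that introduced it.
use_new_logprob_path = vllm_version_is("0.20.1")
```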

Review comment threads:
  • vllm_ascend/patch/platform/patch_mla_prefill_backend.py
  • vllm_ascend/worker/v2/sample/logprob.py (outdated)
  • vllm_ascend/worker/v2/spec_decode/eagle/aclgraph.py (outdated)
  • vllm_ascend/worker/v2/spec_decode/eagle/speculator.py (outdated)
  • vllm_ascend/ops/mla.py (outdated)
wxsIcey added the ready (read for review) and ready-for-test (start test by label for PR) labels May 8, 2026
@github-actions
Contributor

github-actions Bot commented May 9, 2026

This pull request has conflicts, please resolve those before we can evaluate the pull request.

wxsIcey added 12 commits May 9, 2026 04:08 (each signed off by wxsIcey <1790571317@qq.com>)

wxsIcey added 6 commits May 9, 2026 04:10 (each signed off by wxsIcey <1790571317@qq.com>)

wxsIcey added 9 commits May 9, 2026 09:28 (each signed off by wxsIcey <1790571317@qq.com>)

```python
from vllm.v1.worker.gpu.model_runner import GPUModelRunner

from vllm_ascend.ascend_forward_context import MoECommType, get_mrv2_in_profile_run
from vllm_ascend.worker.v2.model_runner import NPUModelRunner
```

wxsIcey (Collaborator, Author) commented:

If we do not want to test Model Runner V2 on 0.20.1, we need to delete it; otherwise many import problems will occur.
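
As a hedged alternative to deleting the test module outright, the import could be skipped at collection time with pytest.importorskip; the module path is copied from the snippet above:

```python
import pytest

# Skip this test module entirely when the Model Runner V2 imports are
# unavailable on the installed vLLM version, instead of failing at
# collection time with ImportError.
gpu_model_runner = pytest.importorskip(
    "vllm.v1.worker.gpu.model_runner",
    reason="Model Runner V2 imports unavailable on this vLLM version",
)
GPUModelRunner = gpu_model_runner.GPUModelRunner
```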

@wxsIcey
Collaborator Author

wxsIcey commented May 10, 2026

wxsIcey added 2 commits May 10, 2026 02:44 (each signed off by wxsIcey <1790571317@qq.com>)
wangxiyuan merged commit 3636a3d into vllm-project:main May 10, 2026
51 of 54 checks passed
yangzhe-2026 pushed a commit to yangzhe-2026/vllm-ascend that referenced this pull request May 10, 2026
### What this PR does / why we need it?
Fixes `NPUInputBatch` missing the `thinking_budget_state_holder = None` attribute, caused by [[Reasoning][Feature] Support for speculative decoding with thinking budget](vllm-project/vllm#34668)

Fixes `AscendMultiHeadLatentAttention` missing the `skip_topk` attribute, caused by [[Feature]: IndexCache support for DSA models](vllm-project/vllm#37735)

Fixes MLA prefill backend selection, caused by [[Attention] Abstract the MLA prefill backends and eliminate cuDNN](vllm-project/vllm#32623)

Fixes the ModelRunner V2 eagle refactor, caused by [[Model Runner V2] Skip attention metadata rebuild before draft prefill](vllm-project/vllm#40410), [[Model Runner V2] Rebuild attn metadata between draft decode steps](vllm-project/vllm#41162), [[Model Runner V2] Add logprob_token_ids support](vllm-project/vllm#40559), and [[Model Runner V2] Fix rejection sampling acceptance rate gap vs MRV1](vllm-project/vllm#40651)

### Does this PR introduce _any_ user-facing change?
N/A

### How was this patch tested?
CI passed with newly added and existing tests.

- vLLM version: v0.19.1
- vLLM main: vllm-project/vllm@4d51588

---------

Signed-off-by: wxsIcey <1790571317@qq.com>
Signed-off-by: yangzhe-2026 <yangzhe@isrc.iscas.ac.cn>

Labels

ci/build, documentation (Improvements or additions to documentation), module:ops, ready (read for review), ready-for-test (start test by label for PR)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants