[Misc][Main2Main] Upgrade vLLM to v0.20.1 and 0506 #8983
wangxiyuan merged 29 commits into vllm-project:main
Conversation
👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:
If CI fails, you can run the linting and testing checks locally according to the Contributing and Testing guides.
Summary of Changes (Gemini Code Assist): This pull request upgrades the vLLM dependency to version 0.20.1 and introduces the necessary adjustments to the Ascend NPU integration. The changes ensure compatibility with upstream vLLM refactors, specifically addressing missing attributes in input batches and attention layers, and providing robust handling for MLA prefill backends and speculative decoding workflows.
Code Review
This pull request upgrades the vLLM dependency to v0.20.1 and implements compatibility fixes for the Ascend NPU backend, covering MLA prefill handling, input batch structures, and Eagle speculative decoding. The reviewer identified a critical recurring logic error where version checks were inverted, which would disable the compatibility logic on the target version. A redundant variable assignment in the MLA module was also noted.

Suggested PR Title:

```markdown
[Ops][Misc] Upgrade vLLM to v0.20.1 and compatibility fixes
```

Suggested PR Summary:

```markdown
### What this PR does / why we need it?
This PR upgrades the vLLM dependency to v0.20.1 and includes several compatibility fixes for the Ascend NPU backend:
- Fixes `NPUInputBatch` compatibility with thinking budget state.
- Fixes `AscendMultiHeadLatentAttention` missing `skip_topk` attribute.
- Fixes MLA prefill backend selection to avoid crashes on Ascend.
- Updates ModelRunner V2 Eagle speculative decoding to match vLLM 0.20.1 refactors.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
CI passed with existing tests.
```
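To make the inverted-check pattern concrete, here is a minimal Python sketch. `vllm_version_is` stands in for the version helper in `vllm_ascend.utils`, and the `prepare_batch_*` functions are hypothetical, not the PR's actual diff:

```python
# Minimal sketch of the inverted version check flagged above.
# `vllm_version_is` mimics the helper assumed to live in vllm_ascend.utils:
# it returns True when the installed vLLM matches the given version string.
def vllm_version_is(version: str) -> bool:
    import vllm
    return vllm.__version__ == version

# Buggy: the guard is negated, so the v0.20.1 compatibility path is
# skipped on exactly the version it targets.
def prepare_batch_buggy(batch):
    if not vllm_version_is("0.20.1"):
        batch.thinking_budget_state_holder = None  # never runs on 0.20.1
    return batch

# Fixed: apply the compatibility logic when the target version is installed.
def prepare_batch_fixed(batch):
    if vllm_version_is("0.20.1"):
        batch.thinking_budget_state_holder = None
    return batch
```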
This pull request has conflicts, please resolve those before we can evaluate the pull request.
Signed-off-by: wxsIcey <1790571317@qq.com>
```python
from vllm.v1.worker.gpu.model_runner import GPUModelRunner

from vllm_ascend.ascend_forward_context import MoECommType, get_mrv2_in_profile_run
from vllm_ascend.worker.v2.model_runner import NPUModelRunner
```
If we do not want to test Model Runner V2 on 0.20.1, we need to delete it; otherwise many import problems will occur. (An alternative import-guard approach is sketched below.)
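One possible alternative to deleting the test, sketched here under the assumption that the failures are module-level `ImportError`s: use `pytest.importorskip` so the module is skipped on vLLM versions where Model Runner V2 is not importable. This is an illustration, not the change this PR made:

```python
import pytest

# Skip this whole test module when the Model Runner V2 entry point is
# not importable on the installed vLLM, instead of deleting the test.
pytest.importorskip("vllm.v1.worker.gpu.model_runner")

# Safe to import only after the skip guard above has passed.
from vllm_ascend.ascend_forward_context import MoECommType  # noqa: E402
from vllm_ascend.worker.v2.model_runner import NPUModelRunner  # noqa: E402
```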
Signed-off-by: wxsIcey <1790571317@qq.com>
### What this PR does / why we need it?
- Fixes `NPUInputBatch` missing `thinking_budget_state_holder = None`, caused by [[Reasoning][Feature] Support for speculative decoding with thinking budget](vllm-project/vllm#34668) (the attribute-default pattern is sketched below).
- Fixes `AscendMultiHeadLatentAttention` missing `skip_topk`, caused by [[Feature]: IndexCache support for DSA models](vllm-project/vllm#37735).
- Fixes MLA prefill backend selection, caused by [[Attention] Abstract the MLA prefill backends and eliminate cuDNN](vllm-project/vllm#32623).
- Fixes the ModelRunner V2 Eagle refactor, caused by [[Model Runner V2] Skip attention metadata rebuild before draft prefill](vllm-project/vllm#40410), [[Model Runner V2] Rebuild attn metadata between draft decode steps](vllm-project/vllm#41162), [[Model Runner V2] Add logprob_token_ids support](vllm-project/vllm#40559), and [[Model Runner V2] Fix rejection sampling acceptance rate gap vs MRV1](vllm-project/vllm#40651).

### Does this PR introduce _any_ user-facing change?
N/A

### How was this patch tested?
CI passed with newly added and existing tests.

- vLLM version: v0.19.1
- vLLM main: vllm-project/vllm@4d51588

Signed-off-by: wxsIcey <1790571317@qq.com>
Signed-off-by: yangzhe-2026 <yangzhe@isrc.iscas.ac.cn>
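As a rough illustration of the attribute-default pattern behind the first two fixes: newer vLLM code reads these attributes unconditionally, so the Ascend subclasses must define them even when the feature is unused on NPU. The base classes, constructor signatures, and the `skip_topk` default below are simplified assumptions, not the PR's actual diff:

```python
# Sketch of the attribute-default pattern behind the first two fixes.
# Class names follow the PR description; everything else is a stand-in.

class NPUInputBatch:  # stands in for the real InputBatch subclass
    def __init__(self) -> None:
        # vllm-project/vllm#34668 makes vLLM read this on every batch;
        # defaulting it keeps the thinking-budget path a no-op on Ascend.
        self.thinking_budget_state_holder = None

class AscendMultiHeadLatentAttention:  # stands in for the real MLA layer
    def __init__(self) -> None:
        # vllm-project/vllm#37735 (IndexCache for DSA models) reads
        # `skip_topk`; False (an assumed default) preserves the existing
        # top-k behavior on paths that do not use IndexCache.
        self.skip_topk = False
```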