
[Bugfix][LoRA] Fix the issue when enable LoRA + tp + fully_sharded_loras #6650

Merged
paulyu12 merged 11 commits into vllm-project:main from paulyu12:sharded_loras
Mar 11, 2026

Conversation

@paulyu12 paulyu12 (Collaborator) commented Feb 10, 2026

What this PR does / why we need it?

Fixes issue #6143.

Does this PR introduce any user-facing change?

Allows starting the server with --enable-lora, --fully-sharded-loras, and --tensor_parallel_size 2 enabled together.

How was this patch tested?

pytest -sv tests/e2e/multicard/2-cards/test_llama32_lora_tp2.py
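
For reviewers who want to reproduce the scenario outside the test suite, the same combination can be launched as `vllm serve <model> --enable-lora --fully-sharded-loras --tensor-parallel-size 2`. Below is a minimal sketch of the shape of the new e2e test; the model ID, adapter path, and prompt are illustrative placeholders, not the actual test contents.

```python
# Minimal sketch of the scenario test_llama32_lora_tp2.py covers.
# The model ID, adapter path, and prompt below are placeholders.
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

llm = LLM(
    model="meta-llama/Llama-3.2-1B-Instruct",  # placeholder model
    enable_lora=True,
    fully_sharded_loras=True,   # the combination this PR fixes
    tensor_parallel_size=2,     # tp=2 across two cards
    max_loras=1,
)

outputs = llm.generate(
    ["Give me a short introduction to LoRA."],
    SamplingParams(temperature=0.0, max_tokens=64),
    lora_request=LoRARequest("test_adapter", 1, "/path/to/lora_adapter"),
)
print(outputs[0].outputs[0].text)
```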

@paulyu12 paulyu12 requested a review from wangxiyuan as a code owner February 10, 2026 03:05
@gemini-code-assist (bot) commented:

Summary of Changes

Hello @paulyu12, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical bug that previously prevented the vLLM server from operating with a specific configuration involving LoRA, tensor parallelism, and fully sharded LoRAs. The changes enable this configuration by refining the LoRA computation logic and introducing specialized sharded LoRA layer implementations for Ascend devices, ensuring broader compatibility and stability for advanced LoRA setups.

Highlights

  • Bugfix for LoRA with Tensor Parallelism and Fully Sharded LoRAs: Resolved an issue that prevented the server from starting when LoRA, tensor parallelism (tp=2), and fully sharded LoRAs were all enabled simultaneously, specifically addressing issue "[Bug]: vllm_ascend's LoRA feature reports an error when using --fully-sharded-loras" #6143.
  • Refactored LoRA Bias Handling: The add_expand function in vllm_ascend/lora/punica_npu.py was modified to remove explicit bias handling, simplifying the LoRA computation logic (see the sketch after this list).
  • Introduced Ascend-specific Sharded LoRA Layers: New classes for sharded LoRA implementations (e.g., AscendColumnParallelLinearWithShardedLoRA) were added and registered to support fully sharded LoRA configurations on Ascend devices.
  • New End-to-End Test: A new end-to-end test (test_llama32_lora_tp2.py) was added to validate the fix and ensure proper functionality of LoRA with tensor parallelism and fully sharded LoRAs.
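
To make the add_expand refactor above concrete, here is a runnable toy version of the "expand" step: each stacked lora_b projects the low-rank activations x back up to a slice of the output y. This is a simplification for intuition only, not the punica_npu kernel; note the absence of any lora_bias_stacked parameter, which is exactly what this PR removes.

```python
# Runnable toy version of the "expand" op that add_expand implements on NPU.
# Simplified for intuition; NOT the actual punica_npu kernel.
import torch


def add_expand_toy(y: torch.Tensor,
                   x: torch.Tensor,
                   lora_b_stacked: tuple,
                   output_slices: tuple,
                   add_inputs: bool = True) -> torch.Tensor:
    # Per this PR there is no lora_bias_stacked parameter anymore:
    # bias handling was removed from add_expand entirely.
    offset = 0
    for lora_b, width in zip(lora_b_stacked, output_slices):
        out = x @ lora_b                       # (tokens, rank) @ (rank, width)
        if add_inputs:
            y[:, offset:offset + width] += out
        else:
            y[:, offset:offset + width] = out
        offset += width
    return y


tokens, rank = 4, 8
y = torch.zeros(tokens, 16)
x = torch.randn(tokens, rank)
b1, b2 = torch.randn(rank, 10), torch.randn(rank, 6)
print(add_expand_toy(y, x, (b1, b2), (10, 6)).shape)  # torch.Size([4, 16])
```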


Changelog
  • tests/e2e/multicard/2-cards/test_llama32_lora_tp2.py
    • Added a new end-to-end test file to verify LoRA functionality with tensor parallelism (tp2) and fully sharded LoRAs for the Llama 3.2 model.
  • vllm_ascend/lora/punica_npu.py
    • Removed the lora_bias_stacked parameter from the add_expand function signature.
    • Updated the documentation and semantics of add_expand to reflect the removal of bias handling.
    • Eliminated the call to _apply_bias within the add_expand function.
    • Adjusted the call to self.add_expand in add_lora_linear to align with the updated add_expand signature.
  • vllm_ascend/lora/utils.py
    • Imported _fully_sharded_can_replace and several *WithShardedLoRA classes from vllm.lora.layers.utils.
    • Applied the _not_fully_sharded_can_replace decorator to existing Ascend-specific LoRA layer classes.
    • Introduced new Ascend-specific sharded LoRA layer classes: AscendColumnParallelLinearWithShardedLoRA, AscendMergedColumnParallelLinearWithShardedLoRA, AscendMergedQKVParallelLinearWithShardedLoRA, AscendQKVParallelLinearWithShardedLoRA, and AscendRowParallelLinearWithShardedLoRA.
    • Registered all newly added sharded LoRA classes in the refresh_all_lora_classes function.
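
The selection mechanism behind the new classes can be sketched as follows. This is a paraphrase of the pattern described in the changelog, using local stubs so it runs standalone; the real decorators come from vllm.lora.layers.utils, the real base classes from vLLM, and the real registration happens in refresh_all_lora_classes.

```python
# Paraphrased, self-contained sketch of the fully-sharded gating pattern.
# LoRAConfig and ColumnParallelLinear below are stubs, not the vLLM classes.
from dataclasses import dataclass


@dataclass
class LoRAConfig:                      # stub for vllm.config.LoRAConfig
    fully_sharded_loras: bool = False


class ColumnParallelLinear:            # stub for the vLLM linear layer
    pass


def _fully_sharded_can_replace(can_replace):
    # Match only when --fully-sharded-loras is enabled.
    def dec(*args, **kwargs):
        return can_replace(*args, **kwargs) and kwargs["lora_config"].fully_sharded_loras
    return dec


def _not_fully_sharded_can_replace(can_replace):
    # Mirror image: match only when fully sharded LoRA is NOT requested.
    def dec(*args, **kwargs):
        return can_replace(*args, **kwargs) and not kwargs["lora_config"].fully_sharded_loras
    return dec


class AscendColumnParallelLinearWithLoRA:          # existing non-sharded variant
    @classmethod
    @_not_fully_sharded_can_replace
    def can_replace_layer(cls, *, source_layer, lora_config, **_):
        return isinstance(source_layer, ColumnParallelLinear)


class AscendColumnParallelLinearWithShardedLoRA:   # new in this PR
    @classmethod
    @_fully_sharded_can_replace
    def can_replace_layer(cls, *, source_layer, lora_config, **_):
        return isinstance(source_layer, ColumnParallelLinear)


# With fully_sharded_loras=True only the sharded variant volunteers to
# replace the layer; refresh_all_lora_classes then makes both variants
# visible to vLLM's layer-wrapping logic.
cfg = LoRAConfig(fully_sharded_loras=True)
layer = ColumnParallelLinear()
print(AscendColumnParallelLinearWithShardedLoRA.can_replace_layer(
    source_layer=layer, lora_config=cfg))   # True
print(AscendColumnParallelLinearWithLoRA.can_replace_layer(
    source_layer=layer, lora_config=cfg))   # False
```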
Activity
  • No human activity (comments, reviews, etc.) has been recorded for this pull request yet.

@gemini-code-assist gemini-code-assist (bot) left a comment


Code Review

This pull request addresses an issue with enabling LoRA, tensor parallelism (tp=2), and fully sharded LoRAs simultaneously. The fix involves adding support for Ascend-specific sharded LoRA layers by introducing new layer replacement classes and using decorators to conditionally apply them based on the fully_sharded_loras configuration. Additionally, a new end-to-end test is added to verify the fix for this specific scenario. A minor cleanup is also included, removing unused LoRA bias handling code. The changes are well-structured and appear correct.

@github-actions (bot) commented:

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling out the PR description to help reviewers and future developers understand.

If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.

@paulyu12 paulyu12 changed the title from "[Bugfix][LoRA] Fix the issue when enable LoRA + tp2 + fully_sharded_loras" to "[Bugfix][LoRA] Fix the issue when enable LoRA + tp + fully_sharded_loras" Feb 10, 2026
@paulyu12 paulyu12 requested a review from Yikun as a code owner February 10, 2026 06:16
@paulyu12 paulyu12 added the ready, ready-for-test, and module:lora labels and removed the module:tests, ready, and ready-for-test labels Feb 10, 2026
@wangxiyuan wangxiyuan added the ready-for-test label and removed the ready-for-test label Feb 25, 2026
paulyu12 added 3 commits March 3, 2026
@paulyu12 paulyu12 merged commit 830f39d into vllm-project:main Mar 11, 2026
53 of 55 checks passed
845473182 pushed a commit to 845473182/vllm-ascend that referenced this pull request Mar 12, 2026
Nagisa125 pushed a commit to starmountain1997/vllm-ascend that referenced this pull request Mar 17, 2026

Labels

module:lora, ready, ready-for-test
