[Bugfix][LoRA] Fix the issue when enable LoRA + tp + fully_sharded_loras#6650
paulyu12 merged 11 commits into vllm-project:main
Conversation
Signed-off-by: paulyu12 <507435917@qq.com>
Summary of Changes (Gemini Code Assist): This pull request addresses a bug that previously prevented the vLLM server from starting with a configuration combining LoRA, tensor parallelism, and fully sharded LoRAs. The changes enable this configuration by refining the LoRA computation logic and introducing specialized sharded LoRA layer implementations for Ascend devices, ensuring broader compatibility and stability for advanced LoRA setups.
Activity
Code Review
This pull request addresses an issue with enabling LoRA, tensor parallelism (tp=2), and fully sharded LoRAs simultaneously. The fix involves adding support for Ascend-specific sharded LoRA layers by introducing new layer replacement classes and using decorators to conditionally apply them based on the fully_sharded_loras configuration. Additionally, a new end-to-end test is added to verify the fix for this specific scenario. A minor cleanup is also included, removing unused LoRA bias handling code. The changes are well-structured and appear correct.
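A minimal sketch of the conditional-replacement pattern described above, assuming it follows vLLM's usual `can_replace_layer` convention; the Ascend class name and helper below are illustrative, not the exact ones introduced in this PR:

```python
# Illustrative sketch only: class and helper names are hypothetical.
from dataclasses import dataclass


@dataclass
class LoRAConfig:
    fully_sharded_loras: bool = False


def _fully_sharded_can_replace(can_replace):
    """Wrap can_replace_layer() so the fully sharded LoRA variant is only
    selected when fully_sharded_loras is enabled."""
    def dec(cls, source_layer, lora_config, **kwargs):
        return (can_replace(cls, source_layer, lora_config, **kwargs)
                and lora_config.fully_sharded_loras)
    return dec


class AscendColumnParallelLinearWithShardedLoRA:
    """Hypothetical Ascend-specific fully sharded LoRA layer replacement."""

    @classmethod
    @_fully_sharded_can_replace
    def can_replace_layer(cls, source_layer, lora_config, **kwargs):
        # Simplified check: replace only column-parallel linear layers.
        return type(source_layer).__name__ == "ColumnParallelLinear"
```

With `fully_sharded_loras=False` the decorated check returns False, so the existing unsharded layer classes keep being used and only the fully sharded path changes behavior.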
👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:
If CI fails, you can run linting and testing checks locally according to the Contributing and Testing guides.
Merge branch 'main' of https://github.com/vllm-project/vllm-ascend into qwen3next_graph (88 commits, including vllm-project#6650).
Merge commit: [Bugfix][LoRA] Fix the issue when enable LoRA + tp + fully_sharded_loras (vllm-project#6650). vLLM version: v0.15.0; vLLM main: vllm-project/vllm@d7e17aa. Signed-off-by: paulyu12 <507435917@qq.com>. Co-authored-by: wangxiyuan <wangxiyuan1007@gmail.com>
What this PR does / why we need it?
Fixes issue #6143.
Does this PR introduce any user-facing change?
Yes. The server can now be started with `--enable-lora`, `--fully-sharded-loras`, and `--tensor_parallel_size 2` together.
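As a rough illustration, the same configuration expressed through the offline `LLM` API (the model path is a placeholder, and this is a sketch rather than the exact reproduction from the issue):

```python
from vllm import LLM

# Engine options equivalent to the CLI flags above; model path is a placeholder.
llm = LLM(
    model="meta-llama/Llama-3.2-1B-Instruct",
    enable_lora=True,
    fully_sharded_loras=True,
    tensor_parallel_size=2,
)
```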
How was this patch tested?
pytest -sv tests/e2e/multicard/2-cards/test_llama32_lora_tp2.py
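The referenced test file is not reproduced here; a hedged sketch of what such an end-to-end check might look like (placeholder model and adapter paths, details may differ from the actual `test_llama32_lora_tp2.py`):

```python
import pytest
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest


@pytest.mark.parametrize("fully_sharded", [True])
def test_lora_tp2_fully_sharded(fully_sharded):
    # Placeholder model and adapter; the real test targets Llama 3.2 with TP=2.
    llm = LLM(
        model="meta-llama/Llama-3.2-1B-Instruct",
        enable_lora=True,
        fully_sharded_loras=fully_sharded,
        tensor_parallel_size=2,
        max_loras=1,
    )
    outputs = llm.generate(
        ["What is the capital of France?"],
        SamplingParams(temperature=0.0, max_tokens=16),
        lora_request=LoRARequest("adapter", 1, "/path/to/lora_adapter"),
    )
    # Before this fix, engine startup failed with this combination of options.
    assert outputs[0].outputs[0].text
```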