
[bugfix][LoRA] Fix the LoRA accuracy issue introduced by an upstream vLLM change. #6958

Merged
MengqingCao merged 9 commits into vllm-project:main from paulyu12:fix_lora_accuracy
Mar 10, 2026

Conversation

@paulyu12
Collaborator

@paulyu12 paulyu12 commented Mar 3, 2026

What this PR does / why we need it?

Fix the LoRA e2e test accuracy issue that was introduced by the upstream PR vllm-project/vllm#32005

Does this PR introduce any user-facing change?

No.

How was this patch tested?

pytest -sv tests/e2e/singlecard/test_llama32_lora.py

Signed-off-by: paulyu12 <507435917@qq.com>
@paulyu12 paulyu12 requested a review from MengqingCao as a code owner March 3, 2026 09:02
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a LoRA accuracy regression that emerged from an upstream vLLM change. It refines the internal handling of active LoRA adapters within the model_runner_v1 component, specifically impacting CUDA graph capture logic. The modifications ensure that the system correctly identifies and accounts for the number of active LoRAs, thereby resolving the reported accuracy issue without introducing any user-facing changes.

Highlights

  • LoRA Parameter Refinement: Introduced a new parameter, force_num_active_loras, to the _determine_batch_execution_and_padding function, allowing for explicit control over the count of active LoRA adapters during batch execution.
  • has_lora Calculation Update: Modified the logic for determining has_lora to derive its value from num_active_loras, ensuring consistent and accurate LoRA state identification, especially for CUDA graph capture.
  • Simplified _dummy_run Function: Removed the activate_lora parameter from the _dummy_run function and updated its internal calls to directly utilize num_active_loras, streamlining the LoRA activation process.
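
The highlights above can be illustrated with a minimal sketch. This is a hypothetical stand-in, not the actual vllm_ascend code: the function name mirrors the PR, but the body is simplified to show only the two ideas described (a forced LoRA count for graph capture, and has_lora derived from num_active_loras).

```python
from typing import Optional


def determine_batch_execution_and_padding(
    num_reqs: int,
    num_active_loras: int,
    force_num_active_loras: Optional[int] = None,
) -> dict:
    """Simplified sketch of the refined logic described in the PR highlights."""
    # The forced count (e.g. during CUDA graph capture) overrides the live count.
    if force_num_active_loras is not None:
        num_active_loras = force_num_active_loras
    # has_lora is now derived from the count rather than tracked separately,
    # so graph capture and replay agree on whether LoRA kernels are active.
    has_lora = num_active_loras > 0
    return {"num_active_loras": num_active_loras, "has_lora": has_lora}
```

With this shape, a dummy run capturing a graph can force the maximum LoRA count while normal execution passes the live count through the same code path.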


Changelog
  • vllm_ascend/worker/model_runner_v1.py
    • Added a new parameter force_num_active_loras to the _determine_batch_execution_and_padding function signature.
    • Revised the calculation of has_lora to depend on the newly introduced num_active_loras variable.
    • Removed the activate_lora parameter from the _dummy_run function signature.
    • Modified calls within _dummy_run to pass num_active_loras to _determine_batch_execution_and_padding and the LoRAContext manager.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request aims to fix a LoRA accuracy issue by refining how the number of active LoRAs is handled, especially for CUDA graph capturing. The changes introduce a num_active_loras parameter and propagate it through _determine_batch_execution_and_padding and _dummy_run. While the overall direction is correct, I've identified a critical inconsistency in _dummy_run that could lead to incorrect graph capture and replay behavior.

Comment on lines +2178 to +2182
    num_active_loras=(
        self.lora_config.max_loras
        if self.lora_config is not None
        else num_active_loras
    ),

critical

There's an inconsistency in how num_active_loras is used within the _dummy_run function. Here, num_active_loras is conditionally set to self.lora_config.max_loras for the maybe_dummy_run_with_lora context manager. However, the call to _determine_batch_execution_and_padding on lines 2101-2102 uses the original num_active_loras argument passed to _dummy_run. This discrepancy can cause a mismatch between the CUDA graph captured (based on one number of LoRAs) and the dummy run setup (based on another), potentially leading to correctness issues or runtime errors during graph replay.

To ensure consistency, the same value for num_active_loras should be used for both determining the batch descriptor and for the dummy LoRA setup. If the intention is to use max_loras for graph capture, this should be reflected in both places. The simplest fix to ensure consistency is to use the num_active_loras parameter from the function signature in both calls.

            num_active_loras=num_active_loras,
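
The reviewer's concern can be sketched concretely. This is a hypothetical illustration, not the actual _dummy_run implementation: dicts stand in for the real batch descriptor and LoRA context, and the point is only that both consumers must see the same effective num_active_loras, so the captured graph and the replayed state cannot disagree.

```python
def dummy_run(num_active_loras: int, lora_config=None) -> tuple:
    """Sketch: compute the effective LoRA count once, then reuse it everywhere."""
    # Decide the count in one place (max_loras during graph capture, if configured).
    effective = (
        lora_config["max_loras"] if lora_config is not None else num_active_loras
    )
    # Both the batch descriptor and the dummy LoRA setup use the same value,
    # avoiding the capture/replay mismatch the review comment describes.
    descriptor = {"has_lora": effective > 0, "num_active_loras": effective}
    lora_context = {"num_active_loras": effective}
    return descriptor, lora_context
```

Whether the effective value should be the live count or max_loras is the design question the reviewer raises; either answer is safe as long as it is computed once and shared.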

Signed-off-by: paulyu12 <507435917@qq.com>
@github-actions
Contributor

github-actions bot commented Mar 3, 2026

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by fulfilling the PR description to help reviewers and future developers understand.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.

Signed-off-by: paulyu12 <507435917@qq.com>
Signed-off-by: paulyu12 <507435917@qq.com>
@paulyu12 paulyu12 added the ready (read for review) and ready-for-test (start test by label for PR) labels and removed the ready-for-test and ready labels Mar 4, 2026
@github-actions
Contributor

github-actions bot commented Mar 6, 2026

This pull request has conflicts, please resolve those before we can evaluate the pull request.

Signed-off-by: yupeng <507435917@qq.com>
@paulyu12 paulyu12 requested a review from wangxiyuan as a code owner March 6, 2026 09:11
@paulyu12 paulyu12 force-pushed the fix_lora_accuracy branch from da16d65 to f9197f9 on March 9, 2026
Signed-off-by: paulyu12 <507435917@qq.com>
@paulyu12 paulyu12 force-pushed the fix_lora_accuracy branch from f9197f9 to 2ce61cb on March 9, 2026
@MengqingCao MengqingCao merged commit 40f7d93 into vllm-project:main Mar 10, 2026
54 of 55 checks passed
Nagisa125 pushed a commit to starmountain1997/vllm-ascend that referenced this pull request Mar 17, 2026
… vLLM changed. (vllm-project#6958)

### What this PR does / why we need it?
Fix the LoRA e2e test accuracy issue that was introduced by the upstream PR
vllm-project/vllm#32005

### How was this patch tested?
pytest -sv tests/e2e/singlecard/test_llama32_lora.py

- vLLM version: v0.16.0
- vLLM main:
vllm-project/vllm@15d76f7
---------
Signed-off-by: paulyu12 <507435917@qq.com>
Signed-off-by: yupeng <507435917@qq.com>

Labels

ready (read for review), ready-for-test (start test by label for PR)


3 participants