[WIP] [Feature]: LoRA for vision modules #20787

Closed
prashanth058 wants to merge 1 commit into vllm-project:main from prashanth058:qwen_vision_lora

Conversation

Contributor

@prashanth058 prashanth058 commented Jul 10, 2025

Purpose

Add LoRA support for vision modules in multimodal models (#11255, #17660). Currently, this PR just hacks through to get it working for the Qwen VL model family.

Test Plan & Results

Compared results (both intermediate activations and final generations) between running with the dynamic LoRA adapter and with merged adapter weights. Latency and throughput are ~20% worse with LoRA (max r=32, input_len=3072) compared to merged weights.
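The exact test harness isn't included in the PR; a minimal sketch of such a comparison using vLLM's offline API might look like the following, where the adapter path, merged-model path, and prompt template are placeholder assumptions rather than details taken from this PR:

```python
# Illustrative sketch only: adapter/model paths are placeholders, and the
# prompt template is the usual Qwen2-VL chat format, not taken from this PR.
from PIL import Image

from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

prompt = ("<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>"
          "Describe the image.<|im_end|>\n<|im_start|>assistant\n")
image = Image.new("RGB", (336, 336))  # dummy image for the sketch
inputs = [{"prompt": prompt, "multi_modal_data": {"image": image}}]
params = SamplingParams(temperature=0.0, max_tokens=64)

# Run 1: base model with the LoRA adapter applied dynamically.
llm_lora = LLM(model="Qwen/Qwen2-VL-7B-Instruct",
               enable_lora=True, max_lora_rank=32)
out_lora = llm_lora.generate(
    inputs, params,
    lora_request=LoRARequest("vision_lora", 1, "/path/to/adapter"))

# Run 2: the same adapter merged offline into the base weights.
llm_merged = LLM(model="/path/to/merged-model")
out_merged = llm_merged.generate(inputs, params)

# With greedy decoding the two generations should match (up to numerics).
print(out_lora[0].outputs[0].text)
print(out_merged[0].outputs[0].text)
```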

@github-actions

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small, essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@mergify mergify bot added qwen Related to Qwen models v1 labels Jul 10, 2025

mergify bot commented Jul 10, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @prashanth058.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Jul 10, 2025
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @prashanth058, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request extends vLLM's LoRA functionality to support vision modules in multimodal models, with an initial focus on the Qwen VL family. It introduces a sophisticated mechanism to manage LoRA adapter mappings dynamically, especially when vision tokens undergo reduction. The changes generalize LoRA applicability to all linear layers and include necessary model-specific adjustments and profiling updates to accommodate this new capability.

Highlights

  • LoRA Support for Vision Modules: This pull request introduces initial support for applying LoRA (Low-Rank Adaptation) to vision modules within multimodal models, specifically targeting the Qwen VL model family as a first step. This extends LoRA capabilities beyond just language model components.
  • Dynamic LoRA Mapping for Vision Token Reduction: A new mechanism has been implemented to dynamically adjust LoRA mappings for vision tokens, particularly before and after token reduction operations. This involves pre-computing post-reduction LoRA mappings and applying them via a forward pre-hook on the vision token reducer layer (see the sketch after this list).
  • Generalized LoRA Target Layers: The filtering logic for unsupported multimodal modules has been updated to allow LoRA application to any LinearBase layer, removing previous restrictions that limited LoRA to only language model components within multimodal architectures.
  • Qwen2VL Model Adjustments: Several tensor shape adjustments (unsqueeze(1)) have been added to the Qwen2VLMLP and Qwen2VLAttention modules to ensure compatibility with the new LoRA integration. Additionally, new methods (get_vision_token_reduction_factor, get_vision_token_reducer_layer) were added to expose vision token reduction details to the LoRA manager.
  • Multimodal Encoder LoRA Integration: The multimodal encoder execution (_execute_mm_encoder) now incorporates LoRA handling. It processes multimodal inputs in chunks, dynamically setting and preparing LoRA adapter mappings based on the associated LoRA requests and vision token reduction factors.
  • Profiling and Compatibility Hacks: The model profiling run has been updated to correctly account for LoRA with vision tokens, including adjustments to the maximum number of multimodal items to fit metadata. A notable ctypes hack was added to gpu_model_runner.py for compatibility with Python versions older than 3.12.
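The pre-hook mechanism mentioned above isn't spelled out in the summary; as a rough, hypothetical sketch (module and manager names are invented for illustration, not the PR's actual classes), it amounts to something like:

```python
# Hypothetical sketch of a forward pre-hook that swaps LoRA token mappings
# before the vision token reducer runs; names are illustrative only.
import torch
import torch.nn as nn


class VisionTokenReducer(nn.Module):
    """Stand-in for a layer that merges vision tokens (e.g. a 2x2 patch merge)."""

    def __init__(self, hidden: int, factor: int = 4):
        super().__init__()
        self.factor = factor
        self.proj = nn.Linear(hidden * factor, hidden)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, h = x.shape
        return self.proj(x.reshape(n // self.factor, h * self.factor))


def install_lora_mapping_prehook(reducer, lora_manager, post_reduction_mapping):
    # Layers after the reducer see factor-x fewer tokens, so per-token LoRA
    # indices computed for the full sequence no longer line up. Swap in the
    # precomputed post-reduction mapping just before the reducer runs.
    # (set_active_mapping is a hypothetical manager method for this sketch.)
    def pre_hook(module, args):
        lora_manager.set_active_mapping(post_reduction_mapping)
        return None  # leave the inputs untouched

    return reducer.register_forward_pre_hook(pre_hook)
```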
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in issue comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces LoRA support for vision modules, which is a significant feature enhancement. The changes in vllm/lora/models.py to generalize LoRA application to any linear layer are well-designed. The model-specific adaptations in vllm/model_executor/models/qwen2_vl.py and the new logic in vllm/v1/worker/gpu_model_runner.py to handle LoRA with vision encoders and token reduction seem correct for the purpose.

However, I've identified a critical issue with a ctypes hack that is unsafe and must be removed. I've also pointed out a few medium-severity issues related to magic numbers and tensor shape handling that should be addressed to improve code clarity and maintainability.

Once these issues are addressed, this PR will be in much better shape.

Comment on lines 196 to 197
Contributor

medium

Adding unsqueeze(1) here and in other parts of the Qwen2-VL model (split_qkv, Qwen2VisionAttention.forward) seems like a workaround for an unexpected tensor shape. While this might fix the immediate issue, it can make the code more brittle.

It would be more robust to investigate why the batch dimension is sometimes missing and address the root cause. Is it possible that the input tensor is being squeezed somewhere when the batch size is 1? If this unsqueeze is necessary, please add a more detailed comment explaining the circumstances under which the input tensor becomes 2D.
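If the workaround is kept, a small guarded helper with an explanatory comment would at least localize the shape assumption; this is a sketch of the reviewer's suggestion, with illustrative names and an assumed layout rather than the PR's actual code:

```python
import torch


def ensure_batch_dim(x: torch.Tensor) -> torch.Tensor:
    # Assumed layout for this sketch: the LoRA-wrapped linear layers expect a
    # (seq_len, batch, hidden) input, but the vision encoder runs on a single
    # flattened token sequence of shape (seq_len, hidden), so the batch
    # dimension of 1 is restored here. This mirrors the unsqueeze(1)
    # workaround; the root cause of the missing dim should still be found.
    if x.dim() == 2:
        x = x.unsqueeze(1)
    return x
```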

Contributor

medium

The value 10 for max_items_per_chunk is a magic number. It's not clear why this specific value was chosen. To improve maintainability and readability, please define it as a named constant at the top of the file or as a class attribute with a comment explaining its purpose and how the value was determined.

For example:

# At the top of the file or as a class constant
_MM_ENCODER_MAX_ITEMS_PER_CHUNK = 10  # Chosen to balance memory usage and batching efficiency

Then use this constant in the code.

Contributor

medium

The division by 4 here is a magic number. The comment "to fit the vision tokens in the punica wrapper metadata" gives some context, but it would be better to define this as a named constant with a more detailed explanation. This will make the code easier to understand and maintain.

For example:

# At the top of the file or as a class constant
_LORA_PROFILING_MM_ITEMS_REDUCTION_FACTOR = 4 # Reduce max_num_mm_items to avoid exceeding punica wrapper metadata limits during profiling with LoRA.
...
max_num_mm_items = max_num_mm_items // _LORA_PROFILING_MM_ITEMS_REDUCTION_FACTOR

@jeejeelee jeejeelee self-assigned this Jul 11, 2025
@mergify mergify bot removed the needs-rebase label Jul 11, 2025
@Wesley-Jzy

Is this feature ready to use?

@prashanth058
Contributor Author

@Wesley-Jzy In its current state it's not really well tested, but it should "work" with image inputs at least (I haven't tested it on video inputs).

@pb-sameerreddy

+1 on the importance of this feature.

Can you share how you're testing, and what, if anything, still needs to be done? Happy to help out here.


hnt2601 commented Sep 24, 2025

Does the team plan to support gemma3-27b-it in addition to Qwen? Thanks a lot.

@aasthavar

@prashanth058 Happy to contribute if you need extra hands!

@jeejeelee
Collaborator

@aasthavar see: #26674

@jeejeelee jeejeelee closed this Dec 29, 2025