[WIP] [Feature]: LoRA for vision modules #20787
prashanth058 wants to merge 1 commit into vllm-project:main
Conversation
This pull request has merge conflicts that must be resolved before it can be merged.
Summary of Changes
Hello @prashanth058, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request extends vLLM's LoRA functionality to support vision modules in multimodal models, with an initial focus on the Qwen VL family. It introduces a sophisticated mechanism to manage LoRA adapter mappings dynamically, especially when vision tokens undergo reduction. The changes generalize LoRA applicability to all linear layers and include necessary model-specific adjustments and profiling updates to accommodate this new capability.
Highlights
- LoRA Support for Vision Modules: This pull request introduces initial support for applying LoRA (Low-Rank Adaptation) to vision modules within multimodal models, specifically targeting the Qwen VL model family as a first step. This extends LoRA capabilities beyond just language model components.
- Dynamic LoRA Mapping for Vision Token Reduction: A new mechanism has been implemented to dynamically adjust LoRA mappings for vision tokens, particularly before and after token reduction operations. This involves pre-computing post-reduction LoRA mappings and applying them via a forward pre-hook on the vision token reducer layer (see the sketch after this list).
- Generalized LoRA Target Layers: The filtering logic for unsupported multimodal modules has been updated to allow LoRA application to any `LinearBase` layer, removing previous restrictions that limited LoRA to only language model components within multimodal architectures.
- Qwen2VL Model Adjustments: Several tensor shape adjustments (`unsqueeze(1)`) have been added to the `Qwen2VLMLP` and `Qwen2VLAttention` modules to ensure compatibility with the new LoRA integration. Additionally, new methods (`get_vision_token_reduction_factor`, `get_vision_token_reducer_layer`) were added to expose vision token reduction details to the LoRA manager.
- Multimodal Encoder LoRA Integration: The multimodal encoder execution (`_execute_mm_encoder`) now incorporates LoRA handling. It processes multimodal inputs in chunks, dynamically setting and preparing LoRA adapter mappings based on the associated LoRA requests and vision token reduction factors.
- Profiling and Compatibility Hacks: The model profiling run has been updated to correctly account for LoRA with vision tokens, including adjustments to the maximum number of multimodal items to fit metadata. A notable `ctypes` hack was added to `gpu_model_runner.py` for compatibility with Python versions older than 3.12.
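To make the token-reduction mechanism above concrete, here is a minimal sketch of the pre-hook idea. All names except `get_vision_token_reducer_layer` (which this PR adds) are illustrative assumptions, not the PR's actual identifiers:

```python
from torch import nn

def make_lora_mapping_prehook(punica_wrapper, post_reduction_mapping):
    """Build a pre-hook that activates a pre-computed post-reduction mapping."""
    def _prehook(module: nn.Module, args):
        # Runs just before the reducer's forward(): from this point on the
        # sequence is shorter, so the LoRA token mapping must match the
        # reduced token count rather than the original one.
        punica_wrapper.update_metadata(post_reduction_mapping)  # illustrative call
        return None  # leave the positional inputs untouched
    return _prehook

# Registration on the layer exposed by the PR's new accessor:
# reducer = model.get_vision_token_reducer_layer()
# handle = reducer.register_forward_pre_hook(
#     make_lora_mapping_prehook(punica_wrapper, post_reduction_mapping))
```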
Code Review
This pull request introduces LoRA support for vision modules, which is a significant feature enhancement. The changes in `vllm/lora/models.py` to generalize LoRA application to any linear layer are well designed. The model-specific adaptations in `vllm/model_executor/models/qwen2_vl.py` and the new logic in `vllm/v1/worker/gpu_model_runner.py` to handle LoRA with vision encoders and token reduction appear correct for this purpose.
However, I've identified a critical issue with a `ctypes` hack that is unsafe and must be removed. I've also pointed out a few medium-severity issues related to magic numbers and tensor shape handling that should be addressed to improve code clarity and maintainability.
Once these issues are addressed, this PR will be in much better shape.
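For context, the generalized target filter presumably reduces to an isinstance check along these lines (a sketch only; the actual predicate in `vllm/lora/models.py` may differ):

```python
from torch import nn
from vllm.model_executor.layers.linear import LinearBase

def is_lora_target(module: nn.Module) -> bool:
    # Previously the filter also required the module to belong to the
    # language-model component of a multimodal architecture; this PR relaxes
    # it so any LinearBase subclass qualifies, including vision layers.
    return isinstance(module, LinearBase)
```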
Adding `unsqueeze(1)` here and in other parts of the Qwen2-VL model (`split_qkv`, `Qwen2VisionAttention.forward`) seems like a workaround for an unexpected tensor shape. While this might fix the immediate issue, it can make the code more brittle.
It would be more robust to investigate why the batch dimension is sometimes missing and address the root cause. Is it possible that the input tensor is being squeezed somewhere when the batch size is 1? If this `unsqueeze` is necessary, please add a more detailed comment explaining the circumstances under which the input tensor becomes 2D.
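If the workaround stays, a narrowly scoped, commented guard would at least make the assumption explicit, e.g. (a sketch, assuming the 2D case only arises when a size-1 batch dimension has been squeezed out):

```python
# hidden_states is normally 3D here; under the new LoRA path it can arrive
# as 2D. Assumption (to be verified): the missing dim is a squeezed size-1
# batch dimension.
if hidden_states.dim() == 2:
    hidden_states = hidden_states.unsqueeze(1)  # (seq, hidden) -> (seq, 1, hidden)
```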
vllm/v1/worker/gpu_model_runner.py
Outdated
The value `10` for `max_items_per_chunk` is a magic number. It's not clear why this specific value was chosen. To improve maintainability and readability, please define it as a named constant at the top of the file or as a class attribute, with a comment explaining its purpose and how the value was determined.
For example:
```python
# At the top of the file or as a class constant
_MM_ENCODER_MAX_ITEMS_PER_CHUNK = 10  # Chosen to balance memory usage and batching efficiency
```
Then use this constant in the code.
vllm/v1/worker/gpu_model_runner.py
Outdated
The division by `4` here is a magic number. The comment "to fit the vision tokens in the punica wrapper metadata" gives some context, but it would be better to define this as a named constant with a more detailed explanation. This will make the code easier to understand and maintain.
For example:
```python
# At the top of the file or as a class constant
# Reduce max_num_mm_items to avoid exceeding punica wrapper metadata limits
# during profiling with LoRA.
_LORA_PROFILING_MM_ITEMS_REDUCTION_FACTOR = 4

...

max_num_mm_items = max_num_mm_items // _LORA_PROFILING_MM_ITEMS_REDUCTION_FACTOR
```
Is this feature ready to use?

@Wesley-Jzy In its current state it's not really well tested, but it should "work" with image inputs at least (I haven't tested it on video inputs).

+1 on the importance of this feature. Can you share how you are testing, and what, if anything, needs to be done? Happy to help out here.

Does the team have a plan to support gemma3-27b-it in addition to Qwen? Thanks a lot.

@prashanth058 Happy to contribute if you need extra hands!
@aasthavar see: #26674 |
Purpose
Add LoRA support for vision modules in multimodal models (#11255, #17660). Currently, this PR just hacks through to get it working for the Qwen VL model family.
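For reference, a hypothetical offline-inference sketch of how such an adapter might be exercised once this lands. The adapter path, rank, and prompt are illustrative, and the prompt would need the model's image placeholder tokens per its chat template (elided here):

```python
from PIL import Image
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

llm = LLM(
    model="Qwen/Qwen2-VL-7B-Instruct",
    enable_lora=True,
    max_lora_rank=32,  # matches the max r=32 adapters from the test plan below
)
image = Image.open("demo.jpg")
outputs = llm.generate(
    {
        "prompt": "Describe this image.",  # placeholder tokens omitted for brevity
        "multi_modal_data": {"image": image},
    },
    SamplingParams(temperature=0.0, max_tokens=64),
    lora_request=LoRARequest("vision_lora", 1, "/path/to/adapter"),
)
print(outputs[0].outputs[0].text)
```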
Test Plan & Results
Compared results (both intermediate activations and final generations) between runs with the dynamic LoRA adapter and with merged adapter weights. Latency and throughput are ~20% worse with dynamic LoRA (max r=32, input_len=3072) compared to merged weights.
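A rough sketch of that comparison, reusing the setup from the Purpose section above; the merged checkpoint path is illustrative, and greedy decoding keeps the two runs comparable:

```python
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

params = SamplingParams(temperature=0.0, max_tokens=64)  # greedy decoding
prompts = ["..."]  # the same multimodal prompts for both runs, built as above

# (a) base model + dynamic LoRA adapter
dynamic = LLM(model="Qwen/Qwen2-VL-7B-Instruct", enable_lora=True, max_lora_rank=32)
dyn_out = dynamic.generate(
    prompts, params, lora_request=LoRARequest("vision_lora", 1, "/path/to/adapter"))

# (b) checkpoint with the adapter weights merged in offline
merged = LLM(model="/path/to/qwen2-vl-merged")
merged_out = merged.generate(prompts, params)

for a, b in zip(dyn_out, merged_out):
    print(a.outputs[0].text == b.outputs[0].text)  # expect matching generations
```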