
Handle import path change for vllm lora #376

Merged
danielhanchen merged 3 commits into unslothai:nightly from Datta0:vllm_lora_dec on Dec 9, 2025

Conversation

@Datta0 (Collaborator) commented on Dec 9, 2025

vLLM modified their LoRA modules and split up the import paths. This PR addresses that change:
vllm-project/vllm#30253
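
For context, the fix follows the standard import-fallback pattern. A minimal sketch, assuming the pre-refactor layout where the LoRA managers lived in vllm.lora.models; the module path in the except branch is a hypothetical stand-in for illustration, not the exact post-refactor location from this PR:

```python
try:
    # Older vLLM: the managers and the factory all live in vllm.lora.models
    from vllm.lora.models import (
        LoRAModelManager,
        LRUCacheLoRAModelManager,
        create_lora_manager,
    )
except ImportError:
    # Newer vLLM moved/split the LoRA modules (vllm-project/vllm#30253).
    # The path below is a placeholder assumption -- check the actual new layout.
    from vllm.lora.model_manager import (  # hypothetical module path
        LoRAModelManager,
        LRUCacheLoRAModelManager,
        create_lora_manager,
    )
```

Downstream code keeps using the same three names regardless of which branch succeeded, so no other call sites need to change.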

@gemini-code-assist (Contributor)

Summary of Changes

Hello @Datta0, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a breaking change in the vLLM library where the import paths for LoRA modules were refactored. By introducing conditional import statements, the system can now seamlessly operate with different vLLM versions, enhancing the robustness and maintainability of the codebase without requiring manual adjustments for varying vLLM installations.

Highlights

  • vLLM LoRA Import Compatibility: Implemented a try-except block to handle changes in vLLM's LoRA module import paths, ensuring compatibility with both older and newer vLLM versions.
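
As an aside, when two vLLM layouts differ in behavior and not just import paths, an explicit version gate can complement the try-except. A minimal sketch, where the 0.11.0 cutoff is an illustrative assumption rather than the actual release that split the modules:

```python
from importlib.metadata import version

from packaging.version import Version

# Assumed cutoff for illustration only -- not the real release where
# vLLM reorganized its LoRA modules.
VLLM_HAS_SPLIT_LORA = Version(version("vllm")) >= Version("0.11.0")

if VLLM_HAS_SPLIT_LORA:
    ...  # import from the new, split module layout
else:
    ...  # import from the legacy vllm.lora.models layout
```

The try-except approach taken in this PR avoids maintaining such a cutoff, at the cost of also catching an ImportError raised for unrelated reasons inside the imported module.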

@gemini-code-assist bot left a comment
Code Review

This pull request correctly handles a breaking change in vllm's LoRA module import paths by using a try-except block. This ensures backward compatibility with older versions of vllm. The implementation is clean and effective. I have one suggestion to fix a broken URL in a code comment to aid future maintenance.

```python
        LRUCacheLoRAModelManager, create_lora_manager)
except ImportError:
    # Newer vLLM version moved/split lora methods
    # https://github.com/vllm-project/vllm/pull/30253
```
@gemini-code-assist bot commented (severity: medium)

The URL in this comment is broken and leads to a 404 error, which makes it difficult for future developers to understand the context for this change. The correct pull request appears to be #4701. Please update the link to improve code maintainability.

Suggested change:

```diff
-# https://github.com/vllm-project/vllm/pull/30253
+# https://github.com/vllm-project/vllm/pull/4701
```

@danielhanchen merged commit ec18618 into unslothai:nightly on Dec 9, 2025
danielhanchen added a commit that referenced this pull request on Dec 12, 2025:
* Update __init__.py

* Update gradient_checkpointing.py

* Update __init__.py

* Update gradient_checkpointing.py

* Update compiler.py

* Handle import path change for vllm lora (#376)

* Handle import path change for vllm lora

* Better handle sleep and wakeup

* Revert "Better handle sleep and wakeup"

This reverts commit 00a8d68.

* Update compiler.py

---------

Co-authored-by: Datta Nimmaturi <venkatadattasainimmaturi@gmail.com>