
[Frontend] support only use linear lora modules in attention #5483

Closed

Conversation

jinzhen-lin
Contributor

vLLM creates LoRA modules for all linear modules and uses the Punica kernel for the LoRA forward pass, so all possible linear dimensions must be precompiled. If a model with a new intermediate size is published, it cannot be used with LoRA until that intermediate size is added to the C++ code.

However, many LoRA adapters only apply LoRA to the linear modules in the attention block (e.g. qkv_proj and o_proj). This PR adds an argument --linear-lora-attn-only to support applying LoRA only to the linear modules in the attention block; see the sketch below.
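A minimal usage sketch under stated assumptions: the engine keyword argument name `linear_lora_attn_only` is assumed here from the CLI flag added in this PR, and the model and adapter paths are placeholders.

```python
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

# Hypothetical example: restrict LoRA to the attention linear layers
# (qkv_proj / o_proj) so no additional intermediate sizes need to be
# precompiled for the Punica kernels.
llm = LLM(
    model="meta-llama/Llama-2-7b-hf",   # placeholder base model
    enable_lora=True,
    linear_lora_attn_only=True,          # assumed kwarg mirroring --linear-lora-attn-only
)

outputs = llm.generate(
    ["Hello, my name is"],
    SamplingParams(max_tokens=32),
    # placeholder adapter name, id, and local path
    lora_request=LoRARequest("my_adapter", 1, "/path/to/lora_adapter"),
)
print(outputs[0].outputs[0].text)
```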


This pull request has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this pull request should remain open. Thank you!

@github-actions github-actions bot added the stale label Oct 26, 2024
@github-actions github-actions bot added unstale and removed stale labels Nov 27, 2024

mergify bot commented Nov 27, 2024

This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @jinzhen-lin.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork
