> **Note**
>
> LoRD-related code has a new home at mergekit. Extract any LoRA with a single command, for example: `mergekit-extract-lora 'teknium/OpenHermes-2.5-Mistral-7B' 'mistralai/Mistral-7B-v0.1' 'extracted_OpenHermes-2.5-LoRA_output_path' --rank=32`

# LoRD: Low-Rank Decomposition of finetuned Large Language Models

This repository contains code for extracting LoRA adapters from finetuned transformers models, using Singular Value Decomposition (SVD).

LoRA (Low-Rank Adaptation) is a technique for parameter-efficient fine-tuning of large language models. The approach presented here extracts PEFT-compatible low-rank adapters from full fine-tunes or merged models.
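
To illustrate the idea, here is a minimal sketch (not the code in this repository) of how a single layer's LoRA pair can be recovered from the difference between finetuned and base weights. It assumes PyTorch; the function name and the way singular values are split between the two factors are illustrative choices, not the repository's exact convention.

```python
import torch

def extract_lora_from_delta(w_finetuned: torch.Tensor,
                            w_base: torch.Tensor,
                            rank: int = 32):
    """Approximate a weight delta with a rank-`rank` LoRA pair (B @ A)."""
    delta = w_finetuned - w_base                      # what the fine-tune changed
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    # Keep only the top-`rank` singular directions and split sqrt(s)
    # across both factors (one possible convention).
    a = torch.diag(s[:rank].sqrt()) @ vh[:rank, :]    # LoRA "A": (rank, in_features)
    b = u[:, :rank] @ torch.diag(s[:rank].sqrt())     # LoRA "B": (out_features, rank)
    return a, b                                       # b @ a ≈ delta (truncated)
```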

## Getting started

Everything you need to extract and publish your LoRA adapter is available in the `LoRD.ipynb` notebook.

Running the notebook on Colab is the easiest way to get started.
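
As an example of what you can do with the result, a PEFT-compatible adapter extracted this way can be loaded on top of its base model roughly like this (the adapter path is a placeholder for wherever you saved or published it):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
# Placeholder path: point this at the adapter produced by the notebook.
model = PeftModel.from_pretrained(base, "extracted_OpenHermes-2.5-LoRA_output_path")
```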

## Special thanks

Thanks to @kohya_ss for their prior work on LoRA extraction for Stable Diffusion.