Note
LoRD-related code has a new home in mergekit. Extract any LoRA with a single command, for example: mergekit-extract-lora 'teknium/OpenHermes-2.5-Mistral-7B' 'mistralai/Mistral-7B-v0.1' 'extracted_OpenHermes-2.5-LoRA_output_path' --rank=32
This repository contains code for extracting LoRA adapters from fine-tuned Transformers
models using Singular Value Decomposition (SVD).
LoRA (Low-Rank Adaptation) is a technique for parameter-efficient fine-tuning of large language models. The technique presented here extracts PEFT-compatible low-rank adapters from full fine-tunes or merged models.
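Conceptually, the extraction takes the weight difference between the fine-tuned and base model for each linear layer and approximates it with a truncated SVD. The sketch below illustrates this for a single weight matrix; the function and variable names are illustrative assumptions, not the repository's actual API.

```python
import torch

def extract_lora_pair(w_base: torch.Tensor, w_finetuned: torch.Tensor, rank: int = 32):
    """Approximate one layer's weight delta with a rank-`rank` LoRA pair (illustrative sketch)."""
    delta = (w_finetuned - w_base).float()            # full-rank weight difference
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    u, s, vh = u[:, :rank], s[:rank], vh[:rank, :]    # keep only the top-`rank` components
    # Split the singular values between the two factors so that lora_b @ lora_a ~= delta.
    lora_b = u * s.sqrt()                             # shape: (out_features, rank)
    lora_a = s.sqrt()[:, None] * vh                   # shape: (rank, in_features)
    return lora_a, lora_b
```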
Everything you need to extract and publish your LoRA adapter is available in the LoRD.ipynb
notebook.
Running the notebook on Colab is the easiest way to get started.
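Once extracted (and optionally published), the adapter can be applied like any other PEFT adapter. A minimal sketch, assuming the adapter was saved in PEFT format at the output path used in the note above:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model, then attach the extracted adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(base, "extracted_OpenHermes-2.5-LoRA_output_path")
```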
Thanks to @kohya_ss for their prior work on LoRA extraction for Stable Diffusion.