Refactor instructpix2pix lora to support peft (#10205)
* base the initial code on the train_instruct_pix2pix script in examples

* change code to use PEFT as discussed in issue 10062

* update README training command

* update README training command

* refactor variable names and freeze the unet

* Update examples/research_projects/instructpix2pix_lora/train_instruct_pix2pix_lora.py

Co-authored-by: Sayak Paul <[email protected]>

* update README installation instructions.

* clean up code using make style and make quality

---------

Co-authored-by: Sayak Paul <[email protected]>
Aiden-Frost and sayakpaul authored Jan 7, 2025
1 parent b94cfd7 commit f1e0c7c
Showing 2 changed files with 263 additions and 125 deletions.
35 changes: 33 additions & 2 deletions examples/research_projects/instructpix2pix_lora/README.md
@@ -2,14 +2,42 @@
This extended LoRA training script was authored by [Aiden-Frost](https://github.com/Aiden-Frost).
This is an experimental LoRA extension of [this example](https://github.com/huggingface/diffusers/blob/main/examples/instruct_pix2pix/train_instruct_pix2pix.py). It adds support for training LoRA layers on the UNet model.

## Running locally with PyTorch
### Installing the dependencies

Before running the scripts, make sure to install the library's training dependencies:

**Important**

To make sure you can successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the install up to date as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:
```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
```

Then cd into the example folder and run:
```bash
pip install -r requirements.txt
```

And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with:

```bash
accelerate config
```
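
If you prefer a default 🤗Accelerate configuration without answering the interactive prompts, run instead:

```bash
accelerate config default
```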

Note also that we use the PEFT library as the backend for LoRA training, so make sure to have `peft>=0.6.0` installed in your environment.
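
If it is missing or outdated, one way to install or upgrade it:

```bash
pip install -U "peft>=0.6.0"
```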


## Training script example

```bash
export MODEL_ID="timbrooks/instruct-pix2pix"
export DATASET_ID="instruction-tuning-sd/cartoonization"
export OUTPUT_DIR="instructPix2Pix-cartoonization"

accelerate launch train_instruct_pix2pix_lora.py \
--pretrained_model_name_or_path=$MODEL_ID \
--dataset_name=$DATASET_ID \
--enable_xformers_memory_efficient_attention \
# ... (training arguments collapsed in the diff view) ...
--rank=4 \
--output_dir=$OUTPUT_DIR \
--report_to=wandb \
--push_to_hub \
--original_image_column="original_image" \
--edited_image_column="cartoonized_image" \
--edit_prompt_column="edit_prompt"
```
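
For context on the PEFT refactor mentioned in the commit message: the diffusers LoRA training scripts typically freeze the base UNet and inject trainable LoRA adapters into its attention projections through a PEFT `LoraConfig`. The sketch below illustrates that common pattern; the target modules, initialization, and optimizer settings are assumptions rather than a verbatim excerpt of this script.

```python
import torch
from diffusers import UNet2DConditionModel
from peft import LoraConfig

# Load the UNet from the base InstructPix2Pix checkpoint.
unet = UNet2DConditionModel.from_pretrained(
    "timbrooks/instruct-pix2pix", subfolder="unet"
)

# Freeze the base weights; only the LoRA adapter parameters will train.
unet.requires_grad_(False)

# r=4 mirrors --rank=4 in the training command above.
unet_lora_config = LoraConfig(
    r=4,
    lora_alpha=4,
    init_lora_weights="gaussian",
    target_modules=["to_k", "to_q", "to_v", "to_out.0"],  # attention projections
)
unet.add_adapter(unet_lora_config)

# Only the LoRA parameters require gradients, so only they go to the optimizer.
lora_params = [p for p in unet.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(lora_params, lr=5e-5)
```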

## Inference
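
The inference section itself is collapsed in this diff view. As a rough sketch of how the trained LoRA weights could be loaded for inference (the checkpoint path, prompt, and image path below are illustrative assumptions, not part of this commit):

```python
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from diffusers.utils import load_image

# Load the base InstructPix2Pix pipeline and attach the LoRA weights
# written to OUTPUT_DIR by the training command above.
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("instructPix2Pix-cartoonization")

# "input.png" stands in for whatever image you want to edit.
image = load_image("input.png").resize((512, 512))
edited = pipe(
    "Cartoonize the following image",
    image=image,
    num_inference_steps=20,
    image_guidance_scale=1.5,
).images[0]
edited.save("cartoonized.png")
```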
353 changes: 230 additions & 123 deletions examples/research_projects/instructpix2pix_lora/train_instruct_pix2pix_lora.py (diff not loaded in this view)
