Simple script for easy Flux fine-tuning from a folder of images. Uses Replicate for the cloud GPU goodness.

  1. Populate the .env file: add your OpenAI and Replicate keys (OpenAI is used for captioning; see the example .env below).
  2. pip install -r requirements.txt
  3. Put all the images inside data/source_images.
  4. Adjust the constants at the top of finetune.py (your Replicate details).
  5. Run python finetune.py.
  6. Wait for the script to finish; it will return the training URL.
  7. Optionally, create embeddings for all the image descriptions and store them in a .csv, so that later you can run semantic searches over your image library (sketched at the end of this README).
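
The exact variable names depend on finetune.py, but the OpenAI and Replicate clients both read their keys from standard environment variables, so a typical .env looks something like this (values are placeholders):

```
OPENAI_API_KEY=sk-...
REPLICATE_API_TOKEN=r8_...
```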

Behind the scenes the script will:

  1. Ask the user for details, e.g. what to call the model.
  2. Create a new folder data/training_pack, copy and convert all images, and rename them to the uuid.jpg format.
  3. If need be, downscale images to 1024x1024 (max).
  4. Run each image through gpt-4o-mini to generate a description (see the captioning sketch after this list).
  5. Save each description as uuid.txt in the same folder; optionally create embeddings and add them to the .csv.
  6. Zip the folder.
  7. Create a new model on Replicate.
  8. Create a new training job on Replicate and give the user the URL to check on the training (see the training sketch below).
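
Here is a minimal sketch of the prepare-and-caption steps (2 to 6 above). The folder names follow this README; the prompt wording, JPEG quality, and helper structure are illustrative assumptions, not the script's actual code.

```python
import base64
import shutil
import uuid
from pathlib import Path

from openai import OpenAI
from PIL import Image

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SOURCE_DIR = Path("data/source_images")
PACK_DIR = Path("data/training_pack")
PACK_DIR.mkdir(parents=True, exist_ok=True)


def caption_image(jpeg_path: Path) -> str:
    """Ask gpt-4o-mini to describe a single image."""
    b64 = base64.b64encode(jpeg_path.read_bytes()).decode()
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one detailed sentence."},
                {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content.strip()


for src in SOURCE_DIR.iterdir():
    if src.suffix.lower() not in {".jpg", ".jpeg", ".png", ".webp"}:
        continue
    name = uuid.uuid4().hex
    out_path = PACK_DIR / f"{name}.jpg"

    # Convert to JPEG and downscale so neither side exceeds 1024 px.
    img = Image.open(src).convert("RGB")
    img.thumbnail((1024, 1024))
    img.save(out_path, "JPEG", quality=95)

    # Save the caption next to the image as uuid.txt.
    (PACK_DIR / f"{name}.txt").write_text(caption_image(out_path))

# Zip the training pack -> data/training_pack.zip
shutil.make_archive(str(PACK_DIR), "zip", root_dir=PACK_DIR)
```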
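And a sketch of steps 7 and 8: create the destination model, then kick off a training job against it. The owner, model name, trainer version id, hardware value, and trainer input names are placeholders and assumptions; check your Replicate account and the trainer's page for the real values.

```python
import replicate  # reads REPLICATE_API_TOKEN from the environment

REPLICATE_OWNER = "your-username"    # assumption: one of the constants in finetune.py
MODEL_NAME = "my-flux-lora"          # whatever the user typed in step 1
TRAINER_VERSION = "ostris/flux-dev-lora-trainer:<version-id>"  # pin a real version id

# Step 7: create the (empty) destination model that will receive the trained weights.
model = replicate.models.create(
    owner=REPLICATE_OWNER,
    name=MODEL_NAME,
    visibility="private",
    hardware="gpu-a40-large",
)

# Step 8: start the training job, uploading the zipped training pack.
with open("data/training_pack.zip", "rb") as zf:
    training = replicate.trainings.create(
        version=TRAINER_VERSION,
        destination=f"{REPLICATE_OWNER}/{MODEL_NAME}",
        input={
            "input_images": zf,     # assumption: the trainer's name for the zip input
            "trigger_word": "TOK",  # token used in prompts for the new concept
            "steps": 1000,
        },
    )

# Hand the user a URL to watch the training (typical dashboard URL pattern).
print(f"Training started: https://replicate.com/p/{training.id}")
```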
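Finally, the optional embeddings step: embed every uuid.txt caption and append it to a CSV so the image library can be searched semantically later. The embedding model, CSV path, and column layout here are assumptions for illustration.

```python
import csv
from pathlib import Path

from openai import OpenAI

client = OpenAI()
PACK_DIR = Path("data/training_pack")

with open("data/embeddings.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for txt in sorted(PACK_DIR.glob("*.txt")):
        caption = txt.read_text()
        embedding = client.embeddings.create(
            model="text-embedding-3-small",
            input=caption,
        ).data[0].embedding
        # One row per image: image id, its caption, and the embedding vector.
        writer.writerow([txt.stem, caption, embedding])
```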