This repository has been archived by the owner on May 14, 2024. It is now read-only.

Releases: Linaqruf/kohya-trainer

xformers-0.0.16 for Colab T4

18 Jan 05:55

xformers-0.0.16 built for the Colab T4 GPU

v9

20 Dec 00:16
2c61188

v9 (17/12):

  • Added the save_model_as option to fine_tune.py, which allows you to save the model in any format.
  • Added the keep_tokens option to fine_tune.py, which allows you to fix the first n tokens of the caption and not shuffle them.
  • Added support for left-right flipping augmentation in prepare_buckets_latents.py and fine_tune.py with the flip_aug option.
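
As a rough illustration of what keep_tokens does, the sketch below shuffles a comma-separated caption while pinning the first n tokens in place. shuffle_caption is a hypothetical helper for illustration, not the actual fine_tune.py code.

```python
import random

def shuffle_caption(caption, keep_tokens=0, seed=None):
    # Split the comma-separated caption into individual tokens.
    tokens = [t.strip() for t in caption.split(",")]
    # The first keep_tokens tokens stay fixed; only the rest are shuffled.
    fixed, rest = tokens[:keep_tokens], tokens[keep_tokens:]
    random.Random(seed).shuffle(rest)
    return ", ".join(fixed + rest)

print(shuffle_caption("1girl, solo, red hair, smile", keep_tokens=2, seed=0))
```

With keep_tokens=2, "1girl" and "solo" always lead the caption; only the remaining tags change order between epochs.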

v8

15 Dec 11:07

v8 (13/12):

  • Added support for training with fp16 gradients (experimental feature). This allows training with 8GB VRAM on SD1.x. See "Training with fp16 gradients (experimental feature)" for details.
  • Updated WD14Tagger script to automatically download weights.
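
Conceptually, fp16-gradient training keeps an fp32 master copy of the weights and applies a loss scale so that small gradients survive the round trip through fp16. The sketch below uses NumPy to stand in for the actual PyTorch machinery; sgd_step_fp16 is a hypothetical helper, not the trainer's real code.

```python
import numpy as np

def sgd_step_fp16(master_w, grad, lr=0.01, loss_scale=1024.0):
    # Pretend the backward pass produced scaled fp16 gradients:
    # scaling before the fp16 cast keeps small values out of the
    # low-precision subnormal range.
    grad_fp16 = (grad * loss_scale).astype(np.float16)
    # Unscale in fp32 and update the fp32 master weights.
    unscaled = grad_fp16.astype(np.float32) / loss_scale
    return master_w - lr * unscaled

w = np.array([0.5, -0.25], dtype=np.float32)
g = np.array([1e-5, 2e-5], dtype=np.float32)  # small gradients fp16 handles poorly unscaled
print(sgd_step_fp16(w, g))
```

The memory win comes from holding activations and gradients in 16 bits; the fp32 master copy keeps the accumulated updates accurate.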

v7

11 Dec 08:55

v7 (7/12):
  • Requires Diffusers 0.10.2 (0.10.0 or later will work, but there are reported issues with 0.10.0 so we recommend using 0.10.2). To update, run pip install -U diffusers[torch]==0.10.2 in your virtual environment.
  • Added support for Diffusers 0.10 (uses code in Diffusers for v-parameterization training and also supports safetensors).
  • Added support for accelerate 0.15.0.
  • Added support for multiple teacher data folders. For caption and tag preprocessing, use the --full_path option. The arguments for the cleaning script have also changed, see "Caption and Tag Preprocessing" for details.
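
The reason full paths matter with multiple teacher folders is that bare file names can collide across folders. A minimal sketch of the idea behind --full_path (build_metadata is a hypothetical helper, not the preprocessing scripts' actual code):

```python
import os

def build_metadata(image_paths, use_full_path=True):
    # Key metadata by full path (collision-safe across folders) or by
    # bare file stem (the old behavior, which can silently overwrite).
    meta = {}
    for path in image_paths:
        key = path if use_full_path else os.path.splitext(os.path.basename(path))[0]
        meta[key] = {"caption": ""}
    return meta

paths = ["set_a/001.png", "set_b/001.png"]
print(len(build_metadata(paths, use_full_path=False)))  # name collision -> 1 entry
print(len(build_metadata(paths, use_full_path=True)))   # distinct keys -> 2 entries
```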

v6

08 Dec 06:30
acf76fe

Changes 12/6

  • Temporary fix for an error when saving in the .safetensors format with some models. If you experienced this error with v5, please try v6.

v5

08 Dec 06:29
acf76fe

Changes 12/5

  • Added support for the .safetensors format. Install safetensors with pip install safetensors and specify the use_safetensors option when saving.
  • Added the log_prefix option.
  • The cleaning script can now be used even when one of the captions or tags is missing.
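
The cleaning change amounts to treating captions and tags independently per entry, so an item that has only one of the two is still processed instead of being skipped. A hypothetical sketch (clean_caption, clean_tags, and clean_entry are illustrative stand-ins, not the script's actual functions):

```python
def clean_caption(text):
    # Collapse runs of whitespace.
    return " ".join(text.split())

def clean_tags(text):
    # Normalize spacing around comma-separated tags, dropping empties.
    return ", ".join(t.strip() for t in text.split(",") if t.strip())

def clean_entry(entry):
    # Each field is cleaned only if present; a missing field is no
    # longer a reason to skip the whole entry.
    cleaned = {}
    if "caption" in entry:
        cleaned["caption"] = clean_caption(entry["caption"])
    if "tags" in entry:
        cleaned["tags"] = clean_tags(entry["tags"])
    return cleaned

print(clean_entry({"caption": "a  photo of   a cat"}))  # no tags: still cleaned
```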

v4

08 Dec 06:24
acf76fe

Changes 11/29

  • Requires Diffusers 0.9.0. To update it, run pip install -U diffusers[torch]==0.9.0.
  • Supports Stable Diffusion v2.0. Use the --v2 option when training (and when pre-acquiring latents). If you are using 768-v-ema.ckpt or stable-diffusion-2 instead of stable-diffusion-v2-base, also use the --v_parameterization option when training.
  • Added options to specify the minimum and maximum resolutions of the bucket when pre-acquiring latents.
  • Modified the loss calculation formula.
  • Added options for the learning rate scheduler.
  • Added support for downloading Diffusers models directly from Hugging Face and for saving during training.
  • clean_captions_and_tags.py can now be used even when only one of the captions or tags is missing.
  • Minor fixes.
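
Aspect-ratio bucketing groups images into resolutions of roughly constant pixel count; the new options clamp each bucket side between a minimum and maximum. A hypothetical sketch of such bucket generation (make_buckets is illustrative, not the actual prepare_buckets_latents.py code):

```python
def make_buckets(base=512, step=64, min_size=256, max_size=768):
    # Enumerate (width, height) pairs whose area stays near base*base,
    # with both sides clamped to [min_size, max_size] and snapped to step.
    buckets = set()
    max_area = base * base
    w = min_size
    while w <= max_size:
        h = min(max_size, (max_area // w) // step * step)
        if h >= min_size:
            buckets.add((w, h))
            buckets.add((h, w))  # mirrored bucket for the flipped aspect
        w += step
    return sorted(buckets)

for w, h in make_buckets():
    print(w, h)
```

Raising min_size prunes extreme aspect ratios; lowering max_size caps the longest side for memory.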

v3

08 Dec 06:15
acf76fe

Changes 11/23

  • Added a tagging script using WD14Tagger.
  • Added the --logging_dir=logs option to fine_tune.py.
  • Fixed a bug that caused data to be shuffled twice.
  • Corrected spelling mistakes in the options for each script.
    e.g. caption_extention -> caption_extension

Note

  • The old spelling of the options will continue to work for the time being.
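
One common way to keep an old spelling working is to register both option strings against the same destination, e.g. with argparse (a hypothetical illustration of the compatibility note, not necessarily how the scripts implement it):

```python
import argparse

parser = argparse.ArgumentParser()
# Both the corrected and the old misspelled flag write to the same dest.
parser.add_argument("--caption_extension", "--caption_extention",
                    dest="caption_extension", default=".caption")

args = parser.parse_args(["--caption_extention", ".txt"])
print(args.caption_extension)  # old spelling still accepted
```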

v1

08 Dec 06:07
acf76fe

Archive for v1 fine-tuning script