merge #1132

Merged
merged 33 commits into release on Nov 10, 2024
Conversation

bghira (Owner) commented Nov 10, 2024

  • WSL Dockerfile updates
  • skip-layer guidance for SD3.5 Medium
  • multi-caption crash fixes
  • validation crash fixes
  • torch.compile for the validation pipeline fixed
  • SD3 uses uniform timestep sampling
  • Flux schedule shift defaults to 3 (see the sketch after this list)
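For context on the last item: flow-matching schedulers use a shift parameter to bend the sigma schedule toward the high-noise end, which matters more at higher resolutions. A minimal sketch of the transform, assuming the same formula diffusers' FlowMatchEulerDiscreteScheduler applies for its shift argument; where SimpleTuner wires in the default of 3 is not shown here:

```python
import torch

def shift_sigmas(sigmas: torch.Tensor, shift: float = 3.0) -> torch.Tensor:
    """Apply the flow-matching schedule shift.

    shift > 1 stretches the schedule so more steps land at high noise
    levels; shift=3.0 mirrors the new Flux default from this PR.
    """
    return shift * sigmas / (1 + (shift - 1) * sigmas)

# Example: a linear sigma schedule before and after shifting.
sigmas = torch.linspace(1.0, 1e-3, steps=10)
print(shift_sigmas(sigmas, shift=3.0))
```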

Jimmy and others added 30 commits October 29, 2024 20:36
Fix missing docker dependencies

Fix multi-caption parquets crashing in multiple locations (Closes #1092)

Flux and SD3 can use uniform timestep sampling instead of beta or sigmoid sampling
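A minimal sketch of what these three sampling strategies typically look like for flow-matching training; the function and parameter names here are illustrative, not SimpleTuner's actual API:

```python
import torch

def sample_timestep_fraction(batch_size: int, method: str = "uniform",
                             beta_a: float = 2.0, beta_b: float = 2.0) -> torch.Tensor:
    """Draw u in (0, 1); the trainer maps u onto the noise schedule.

    "uniform" weights every noise level equally, "sigmoid" (logit-normal)
    concentrates samples mid-schedule, and "beta" shapes the density with
    Beta(a, b). The defaults here are placeholders.
    """
    if method == "uniform":
        return torch.rand(batch_size)
    if method == "sigmoid":
        return torch.sigmoid(torch.randn(batch_size))
    if method == "beta":
        return torch.distributions.Beta(beta_a, beta_b).sample((batch_size,))
    raise ValueError(f"unknown sampling method: {method}")

# Example: uniform sampling for a batch of 8 on a 1000-step schedule.
timesteps = (sample_timestep_fraction(8, method="uniform") * 1000).long()
```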
mhirki and others added 3 commits November 9, 2024 15:52
Fix random validation errors with SD3.5 after commit 48cfc09 removed some earlier fixes. This also fixes torch.compile not getting called for the validation pipeline.

Calling self.pipeline.to(self.inference_device) appears to have an unwanted side-effect: it moves additional text encoders to the accelerator device. In the case of SD3.5, I saw text_encoder_2 and text_encoder_3 getting moved to the GPU. This caused my RTX 3090 to go OOM when trying to generate validation images during training. Explicitly setting text_encoder_2 and text_encoder_3 to None in extra_pipeline_kwargs fixes this issue.
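A minimal sketch of the workaround described above, assuming diffusers' documented behavior of skipping a component that is passed explicitly as None (shown in its docs for text_encoder_3); extra_pipeline_kwargs is the name used in this PR, but the surrounding code is illustrative:

```python
import torch
from diffusers import StableDiffusion3Pipeline

# Leaving the extra text encoders out of the validation pipeline means
# pipeline.to(...) has nothing to move for them, avoiding the OOM.
# Validation must then supply precomputed prompt embeddings instead of
# raw prompts, since those encoders are unavailable.
extra_pipeline_kwargs = {
    "text_encoder_2": None,
    "text_encoder_3": None,
}

pipeline = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-medium",  # placeholder checkpoint
    torch_dtype=torch.bfloat16,
    **extra_pipeline_kwargs,
)
pipeline.to("cuda")  # only the components that were kept move to the GPU
```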

Fix random validation errors for good (and restore torch.compile for the validation pipeline at the same time)
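And a sketch of restoring torch.compile for validation, assuming the usual diffusers pattern of compiling the denoiser module rather than the pipeline object; the checkpoint name is a placeholder:

```python
import torch
from diffusers import StableDiffusion3Pipeline

pipeline = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-medium",  # placeholder checkpoint
    torch_dtype=torch.bfloat16,
).to("cuda")

# torch.compile targets modules, not the pipeline wrapper, so the
# denoising transformer is what gets compiled for validation runs.
pipeline.transformer = torch.compile(pipeline.transformer)
```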
bghira merged commit ac6efa7 into release on Nov 10, 2024
1 check passed