Describe the bug
The results obtained using DPM++ 2M SDE Karras contain artifacts that suggest there is a bug. This happens across different SDXL models, although it is less visible with the base SDXL. Using Automatic1111 with the same models and the same kind of scheduler produces high-quality images, and it seems impossible to reach the same quality with the diffusers equivalent.
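For context, the mapping I am assuming between the A1111 sampler name and diffusers is: DPM++ 2M SDE Karras corresponds to DPMSolverMultistepScheduler with algorithm_type="sde-dpmsolver++" and use_karras_sigmas=True. A minimal sketch of that configuration (the beta values below are just SDXL's usual ones, shown as an example; in practice the config comes from the pipeline, as in the reproduction further down):

from diffusers import DPMSolverMultistepScheduler

# Assumed diffusers equivalent of A1111 "DPM++ 2M SDE Karras":
# multistep DPM-Solver++ in its SDE form, with Karras sigma spacing.
scheduler = DPMSolverMultistepScheduler(
    beta_start=0.00085,              # example values: SDXL's usual beta schedule
    beta_end=0.012,
    beta_schedule="scaled_linear",
    algorithm_type="sde-dpmsolver++",
    use_karras_sigmas=True,
)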
An image generated using Automatic1111 DPM++ 2M SDE Karras with the Juggernaut XL v6 model:

An image generated using the equivalent scheduler from diffusers:

The artifacts are very visible in the last image.
The following didn't help (see the sketch after this list):
- Loading the model in fp32
- use_karras_sigmas=False and dpmsolver++
- euler_at_final=True and use_lu_lambdas=True
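A minimal sketch of how these variants can be set up (the flags are shown together only for brevity and were not necessarily combined; model id as in the reproduction below):

import torch
from diffusers import AutoPipelineForText2Image
from diffusers.schedulers import DPMSolverMultistepScheduler

# Variant: load the whole pipeline in fp32 instead of fp16.
pipeline = AutoPipelineForText2Image.from_pretrained(
    "frankjoshua/juggernautXL_version6Rundiffusion",
    torch_dtype=torch.float32,
    use_safetensors=True,
).to("cuda")

# Variant: alternative scheduler flags (none of these removed the artifacts).
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(
    pipeline.scheduler.config,
    algorithm_type="sde-dpmsolver++",  # also tried plain "dpmsolver++"
    use_karras_sigmas=False,           # tried disabling Karras sigmas
    euler_at_final=True,               # tried a final Euler step
    use_lu_lambdas=True,               # tried Lu lambdas instead of Karras sigmas
)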
I see this happening clearly at least with the following models:
- frankjoshua/juggernautXL_version6Rundiffusion
- Lykon/dreamshaper-xl-1-0
Reproduction
I've created a GitHub repo to help reproduce the problem. A simpler code snippet is also provided below:
from diffusers import AutoPipelineForText2Image
from diffusers.schedulers import DPMSolverMultistepScheduler
import torch

model_id = "frankjoshua/juggernautXL_version6Rundiffusion"

pipeline = AutoPipelineForText2Image.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
).to("cuda")

# Intended DPM++ 2M SDE Karras equivalent
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(
    pipeline.scheduler.config,
    use_karras_sigmas=True,
    algorithm_type="sde-dpmsolver++",
    euler_at_final=True,
    use_lu_lambdas=True,
)

prompt = "Adorable infant playing with a variety of colorful rattle toys."

results = pipeline(
    prompt=prompt,
    guidance_scale=3,
    generator=torch.Generator(device="cuda").manual_seed(42),
    num_inference_steps=25,
    height=768,
    width=1344,
)

display(results.images[0])  # display() assumes a Jupyter/IPython environment

Logs
No response
System Info
- diffusers version: 0.24.0
- Platform: Linux-5.15.0-78-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- PyTorch version (GPU?): 2.1.1 (True)
- Huggingface_hub version: 0.20.1
- Transformers version: 4.36.2
- Accelerate version: 0.25.0
- xFormers version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no