
AutoencoderKL multi-thread issue #7450

@jasstionzyf


Describe the bug

Multiple pipelines that share the same AutoencoderKL instance and run concurrently produce corrupted images, e.g. completely black outputs.

Reproduction

import copy
import threading

import shortuuid
import torch
from diffusers import (
    AutoencoderKL,
    DPMSolverSinglestepScheduler,
    StableDiffusionPipeline,
    UNet2DConditionModel,
)
from transformers import CLIPTextModel, CLIPTokenizer

modelPath = "/root/.cache/soujpg/models/diffusers/sd-1.5/"

vae = AutoencoderKL.from_pretrained(modelPath + "vae/", torch_dtype=torch.float16)
unet = UNet2DConditionModel.from_pretrained(modelPath, subfolder="unet", torch_dtype=torch.float16)
text_encoder = CLIPTextModel.from_pretrained(modelPath, subfolder="text_encoder", torch_dtype=torch.float16)
tokenizer = CLIPTokenizer.from_pretrained(modelPath, subfolder="tokenizer")
scheduler = DPMSolverSinglestepScheduler.from_pretrained(modelPath, subfolder="scheduler")
safety_checker = None
feature_extractor = None


def infer_one():
    # Each thread builds its own pipeline, but the underlying modules
    # (vae, unet, text_encoder, ...) are shared across threads.
    pipelineKwargs = {
        "unet": unet,
        "text_encoder": text_encoder,
        "vae": vae,
        "tokenizer": tokenizer,
        "feature_extractor": feature_extractor,
        "safety_checker": safety_checker,
        "scheduler": copy.deepcopy(scheduler),
    }
    pipeline = StableDiffusionPipeline(**pipelineKwargs)
    pipeline = pipeline.to("cuda")
    rs = pipeline("a photo of an astronaut riding a horse on mars").images[0]
    rs.save(shortuuid.uuid() + ".png")


x1 = threading.Thread(target=infer_one)
x1.start()

x2 = threading.Thread(target=infer_one)
x2.start()
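
In case it helps: the sketch below is a workaround I am experimenting with, assuming the corruption comes from two threads driving the same shared modules (in particular the single vae) at the same time. It reuses the objects defined in the reproduction above and simply serializes the pipeline calls with a lock; a per-thread copy.deepcopy(vae) would also avoid the sharing, at the cost of extra VRAM.

import copy
import threading

vae_lock = threading.Lock()  # guard concurrent use of the shared modules


def infer_one_locked():
    pipeline = StableDiffusionPipeline(
        unet=unet,
        text_encoder=text_encoder,
        vae=vae,
        tokenizer=tokenizer,
        feature_extractor=feature_extractor,
        safety_checker=safety_checker,
        scheduler=copy.deepcopy(scheduler),
    ).to("cuda")
    # Serialize the whole call for simplicity; finer-grained locking around just
    # the VAE decode would keep more of the run concurrent if only the VAE races.
    with vae_lock:
        image = pipeline("a photo of an astronaut riding a horse on mars").images[0]
    image.save(shortuuid.uuid() + ".png")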

Logs

No response

System Info

diffusers version: 0.27.0
Platform: Linux-6.1.58+-x86_64-with-glibc2.35
Python version: 3.11.4
PyTorch version (GPU?): 2.2.1+cu122 (True)
Huggingface_hub version: 0.20.3
Transformers version: 4.38.2
Accelerate version: 0.28.0
xFormers version: not installed
Using GPU in script?: yes, RTX 4080
Using distributed or parallel set-up in script?: yes

Who can help?

@yiyixuxu @DN6 @sayakpaul
