DC-AE (Sana) produces NaN values after decoding the latent obtained from encoding an image #96
Hi, can you check if `dc_ae.dtype` is FP32 or BF16? @ZouYa99
It is `torch.float32`. @lawrence-cj
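For anyone following along, a model's effective precision is determined by its parameter dtypes, not just a `.dtype` attribute. A minimal sketch of how to check this (using a dummy `nn.Linear` as a stand-in, since `dc_ae` itself requires downloading the checkpoint):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for dc_ae: any nn.Module behaves the same way.
model = nn.Linear(4, 4)

# Inspect the dtype of the first parameter; FP32 is the PyTorch default.
print(next(model.parameters()).dtype)  # torch.float32

# Casting the whole module changes the reported dtype accordingly.
model = model.to(torch.bfloat16)
print(next(model.parameters()).dtype)  # torch.bfloat16
```

Mixed FP32 inputs with BF16 weights (or vice versa) is a common source of dtype errors, though it would normally raise an exception rather than silently produce NaNs.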
@chenjy2003 Junyu, can you help here?
Hi @ZouYa99, I ran the same code (except replacing …
I am sorry to say that my device cannot fetch "mit-han-lab/dc-ae-f32c32-sana-1.0" directly because of network restrictions, so I downloaded the model manually (https://huggingface.co/mit-han-lab/dc-ae-f32c32-sana-1.0). But the NaN problem persists.
@ZouYa99 I'm still not sure what is causing this NaN issue; it may be something in your environment. Since our models have been merged into diffusers, I would recommend giving that a try. Please upgrade diffusers first, then run:

```python
from PIL import Image
import torch
import torchvision.transforms as transforms
from torchvision.utils import save_image
from diffusers import AutoencoderDC

device = torch.device("cuda")
dc_ae: AutoencoderDC = AutoencoderDC.from_pretrained(
    "mit-han-lab/dc-ae-f32c32-sana-1.0-diffusers", torch_dtype=torch.float32
).to(device).eval()

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(0.5, 0.5),
])
image = Image.open("assets/fig/girl.png")
x = transform(image)[None].to(device)
latent = dc_ae.encode(x).latent
y = dc_ae.decode(latent).sample
save_image(y * 0.5 + 0.5, "demo_dc_ae.png")
```
I ran this code. However, the output is all NaN values. I'm confused. Can you help me?
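When debugging an issue like this, it helps to localize which stage (input, encode, or decode) first produces NaNs. A minimal sketch of such a check, using dummy tensors in place of the real `x`, `latent`, and `y` (with the actual model you would pass the real tensors):

```python
import torch

def report_nans(name: str, t: torch.Tensor) -> int:
    # Count NaNs at each stage to see where they first appear.
    n = int(torch.isnan(t).sum().item())
    print(f"{name}: {n} NaNs out of {t.numel()} elements")
    return n

# Dummy tensors standing in for the pipeline stages; here we simulate
# an encoder that has already produced a fully NaN latent.
x = torch.randn(1, 3, 64, 64)                      # randn never yields NaN
latent = torch.full((1, 32, 2, 2), float("nan"))   # simulated broken encode

report_nans("input x", x)        # 0 NaNs out of 12288 elements
report_nans("latent", latent)    # 128 NaNs out of 128 elements
```

If the latent is already NaN, the problem is in the encoder (or the input image); if only the decoded output is NaN, the decoder is where to look.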