RuntimeError #3

Closed
syddharth opened this issue Jul 18, 2023 · 51 comments

@syddharth

Expected weight to be a vector of size equal to the number of channels in input, but got weight of shape [1280] and input of shape [16, 2560, 9, 9]

I have the mm_sd_v15.ckpt downloaded in stable-diffusion-webui\extensions\sd-webui-animatediff\model.

Complete log:

2023-07-18 11:19:21,548 - AnimateDiff - INFO - AnimateDiff process start with video length 2, FPS 8, motion module mm_sd_v15.ckpt.
2023-07-18 11:19:21,551 - AnimateDiff - INFO - Injecting motion module mm_sd_v15.ckpt into SD1.5 UNet input blocks.
2023-07-18 11:19:21,552 - AnimateDiff - INFO - Injecting motion module mm_sd_v15.ckpt into SD1.5 UNet output blocks.
2023-07-18 11:19:21,552 - AnimateDiff - INFO - Injection finished.
0%| | 0/30 [00:00<?, ?it/s]
*** Error completing request
*** Arguments: ('task(ba8qq6a5sxydhr6)', 'Beautiful Scenery', '', [], 30, 16, False, False, 1, 2, 7.5, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.33, 1.5, '4x_UniversalUpscalerV2-Sharper_103000_G', 10, 0, 0, 19, '', '', [], <gradio.routes.Request object at 0x000002E7E74F5420>, 0, 0, False, 'Horizontal', '1,1', False, '0.2', False, False, 'female', True, 1, True, -1.0, [], [], [], [], False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': <object object at 0x000002E7E6FB5740>}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': <object object at 0x000002E7E6FB5720>}, False, 7, 100, 'Constant', 0, 'Constant', 0, 4, True, 2, 8, 'mm_sd_v15.ckpt', <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002E7E5CB6B60>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002E7E5C86980>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002E7E750FE20>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002E7E750CA90>, None, False, '0', 'G:\stablediffusion\stable-diffusion-webui\extensions/sd-webui-faceswap/models\inswapper_128.onnx', 'CodeFormer', 1, '', 1, 1, False, True, False, False, 0, None, [], 0, False, [], [], False, 0, 1, False, False, 0, None, [], -2, False, [], False, 0, None, None, False, False, 0, 1, 1, 0, 0, 0, 0, False, 'Default', False, False, 'Euler a', 0.95, 0.75, 'zero', 'pos', 'linear', 0.2, 0.0, 0.75, None, 'Lanczos', 1, 0, 0, True, 0.3, 'Latent', 0.55, 0.3, 0.2, 0.2, [], False, 1.5, 1.2, False, '', '1', 'from modules.processing import process_images\n\np.width = 768\np.height = 768\np.batch_size = 2\np.steps = 10\n\nreturn process_images(p)', 2, 0, 0, 384, 384, False, False, True, True, True, 1, '', '', 8, True, 16, 'Median cut', False, None, None, '', '', '', '', 'Auto rename', {'label': 'Upload avatars config'}, 'Open outputs directory', 'Export to WebUI style', True, {'label': 'Presets'}, {'label': 'QC preview'}, '', [], 'Select', 'QC scan', 'Show pics', None, False, False, 'positive', 'comma', 0, False, False, 
'', 'Positive', 0, ', ', True, 32, 0, 'Median cut', 'luminance', False, 'Illustration', 'svg', True, True, False, 0.5, True, 16, True, 16, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, 'Not set', True, True, '', '', '', '', '', 1.3, 'Not set', 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 1.1, 'Not set', 1.1, 'Not set', 1.1, 'Not set', 1.1, 'Not set', 1.1, 'Not set', 1.1, 'Not set', False, 'None', 'Not set', True, False, '', '', '', '', '', 1.3, 'Not set', 'Not set', 'Not set', 1, 1.3, 'Not set', 'Not set', 'Not set', 'Not set', 'Not set', 'Not set', 1.3, 1.3, 1.3, 'Not set', 'Not set', 1.3, True, True, 'Disabled', None, None, False, None, None, False, None, None, False, None, None, False, 50) {}
Traceback (most recent call last):
File "G:\stablediffusion\stable-diffusion-webui\modules\call_queue.py", line 58, in f
res = list(func(*args, **kwargs))
File "G:\stablediffusion\stable-diffusion-webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\modules\txt2img.py", line 62, in txt2img
processed = processing.process_images(p)
File "G:\stablediffusion\stable-diffusion-webui\modules\processing.py", line 639, in process_images
res = process_images_inner(p)
File "G:\stablediffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\modules\processing.py", line 759, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "G:\stablediffusion\stable-diffusion-webui\modules\processing.py", line 1012, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "G:\stablediffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 464, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "G:\stablediffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 303, in launch_sampling
return func()
File "G:\stablediffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 464, in
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "G:\stablediffusion\stable-diffusion-webui\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "G:\stablediffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 183, in forward
x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
File "G:\stablediffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "G:\stablediffusion\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in call
return self.__orig_func(*args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "G:\stablediffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
out = self.diffusion_model(x, t, context=cc)
File "G:\stablediffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\modules\sd_unet.py", line 91, in UNetModel_forward
return ldm.modules.diffusionmodules.openaimodel.copy_of_UNetModel_forward_for_webui(self, x, timesteps, context, *args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 802, in forward
h = module(h, emb, context)
File "G:\stablediffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py", line 21, in mm_tes_forward
x = layer(x, context)
File "G:\stablediffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\motion_module.py", line 79, in forward
hidden_states = self.temporal_transformer(hidden_states, encoder_hidden_states, attention_mask)
File "G:\stablediffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\motion_module.py", line 144, in forward
hidden_states = self.norm(hidden_states)
File "G:\stablediffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\normalization.py", line 273, in forward
return F.group_norm(
File "G:\stablediffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2530, in group_norm
return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: Expected weight to be a vector of size equal to the number of channels in input, but got weight of shape [1280] and input of shape [4, 2560, 8, 8]
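
For context on what this error message means: torch.group_norm requires the affine weight vector to have exactly as many entries as the input has channels, and the trace shows a GroupNorm built for 1280 channels receiving a 2560-channel tensor. A minimal standalone sketch (not the extension's code, just an illustration of the message) that reproduces the same error:

```python
import torch
import torch.nn as nn

# GroupNorm whose affine weight has 1280 entries, like the SD1.5 UNet block in the trace.
norm = nn.GroupNorm(num_groups=32, num_channels=1280)

ok = torch.randn(16, 1280, 9, 9)   # channel count matches the weight vector -> works
norm(ok)

bad = torch.randn(16, 2560, 9, 9)  # twice as many channels as the weight vector
norm(bad)  # RuntimeError: Expected weight to be a vector of size equal to the number of channels in input ...
```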


@continue-revolution
Owner

Please screenshot your webui

@syddharth
Author

I got the extension to run after a restart.
However, the GIFs it creates are static and do not animate.

@continue-revolution
Owner

  1. Check your terminal and see whether the model weights downloaded successfully.
  2. Gradio cannot show the GIF properly. You need to download the GIF or go to output/txt2img-images/AnimateDiff.

@syddharth
Author

Model weights were manually downloaded.

This is what the terminal shows:

2023-07-18 11:44:03,725 - AnimateDiff - INFO - AnimateDiff process start with video length 8, FPS 8, motion module mm_sd_v15.ckpt.
2023-07-18 11:44:03,729 - AnimateDiff - INFO - Injecting motion module mm_sd_v15.ckpt into SD1.5 UNet input blocks.
2023-07-18 11:44:03,729 - AnimateDiff - INFO - Injecting motion module mm_sd_v15.ckpt into SD1.5 UNet output blocks.
2023-07-18 11:44:03,729 - AnimateDiff - INFO - Injection finished.
Data shape for DDIM sampling is (8, 4, 64, 64), eta 0.0
Running DDIM Sampling with 31 timesteps
DDIM Sampler: 100%|███████████████| 31/31 [00:11<00:00, 2.75it/s]
2023-07-18 11:44:17,809 - AnimateDiff - INFO - Removing motion module from SD1.5 UNet input blocks.
2023-07-18 11:44:17,809 - AnimateDiff - INFO - Removing motion module from SD1.5 UNet output blocks.
2023-07-18 11:44:17,810 - AnimateDiff - INFO - Removal finished.
2023-07-18 11:44:17,810 - AnimateDiff - INFO - Merging images into GIF.
2023-07-18 11:44:18,429 - AnimateDiff - INFO - AnimateDiff process end.

I did check out the outputs\txt2img-images\AnimateDiff folder.
All the gifs seem to have just one frame.

Screenshot of webui
image

@continue-revolution
Owner

Huh, it seems this is not your problem. How about setting the frame number to 16? They trained their motion modules with a frame number of 16.

Also, there is no need to change the batch size. It is replaced by the video frame number internally.

@syddharth
Author

I tried 16 too, same results. The GIF seems to be made of just 1 frame.

image

@RyanzW0521

Same problem. Each time I restart WebUI it runs, but the second time the error appears.
And even when it runs correctly, it only generates 1 frame too.

@continue-revolution
Owner

continue-revolution commented Jul 18, 2023

Weird. You both can also check txt2img-images/{today’s date}/ and look at the images that should correspond to your GIF. Are there 8/16 images?

@RyanzW0521 please post your terminal log with error message.

@RyanzW0521

RyanzW0521 commented Jul 18, 2023

Weird. You both can also check txt2img-images/{today’s date}/ and look at the images that should correspond to your GIF. Are there 8/16 images?

@RyanzW0521 please post your terminal log with error message.

Unfortunately, my folder only has one image.
And here is the log:

 0%|                                                                                                                                                                                                   | 0/20 [00:00<?, ?it/s]
Error completing request
Arguments: ('task(c3l341tdjvpcs5t)', 'best quality, masterpiece, 1girl, cloudy sky, dandelion, contrapposto, alternate hairstyle', '', [], 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, True, 16, 8, 'mm_sd_v15.ckpt', False, False, 'none', 'None', 1, None, False, 'Scale to Fit (Inner Fit)', False, False, 64, 64, 64, 0, 1, False, False, False, 'positive', 'comma', 0, False, False, '', 1, '', 0, '', 0, '', True, False, False, False, 0, None, 50) {}
Traceback (most recent call last):
  File "/stable-diffusion-webui/modules/call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "/stable-diffusion-webui/modules/call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "/stable-diffusion-webui/modules/txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "/stable-diffusion-webui/modules/processing.py", line 486, in process_images
    res = process_images_inner(p)
  File "/stable-diffusion-webui/modules/processing.py", line 632, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "/stable-diffusion-webui/modules/processing.py", line 832, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 349, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 225, in launch_sampling
    return func()
  File "/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 349, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 117, in forward
    x_out = self.inner_model(x_in, sigma_in, cond={"c_crossattn": [cond_in], "c_concat": [image_cond_in]})
  File "/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "/stable-diffusion-webui/modules/sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "/stable-diffusion-webui/modules/sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 1329, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/hook.py", line 212, in forward2
    return forward(*args, **kwargs)
  File "/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/hook.py", line 201, in forward
    h = module(h, emb, context)
  File "/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/stable-diffusion-webui/extensions/sd-webui-animatediff/scripts/animatediff.py", line 19, in mm_tes_forward
    x = layer(x, emb)
  File "/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 249, in forward
    return checkpoint(
  File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/util.py", line 114, in checkpoint
    return CheckpointFunction.apply(func, len(inputs), *args)
  File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/util.py", line 129, in forward
    output_tensors = ctx.run_function(*ctx.input_tensors)
  File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 262, in _forward
    h = self.in_layers(x)
  File "/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/container.py", line 204, in forward
    input = module(input)
  File "/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/util.py", line 219, in forward
    return super().forward(x.float()).type(x.dtype)
  File "/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/normalization.py", line 273, in forward
    return F.group_norm(
  File "/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/functional.py", line 2528, in group_norm
    return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: Expected weight to be a vector of size equal to the number of channels in input, but got weight of shape [2560] and input of shape [2, 5120, 8, 8]

And I found that after the error is reported, even if you disable the extension, you still cannot generate other images until you restart.

@continue-revolution
Owner

@RyanzW0521 Please also post a screenshot of your webui that produces this error. I will look into the reason tomorrow.

@RyanzW0521

@RyanzW0521 Please also post a screenshot of your webui that produces this error. I will look into the reason tomorrow.

Screenshot 2023-07-18 3:13 PM
thanks!

@continue-revolution
Owner

For everyone who comes here: I am not sure which WebUI features are incompatible with AnimateDiff. Please feel free to post your terminal logs and a screenshot. Please do not use xformers at this time. The first thing you can do is try to reproduce my demo.

https://civitai.com/models/30240?modelVersionId=78775

masterpiece, best quality, 1girl, solo, cherry blossoms, hanami, pink flower, white flower, spring season, wisteria, petals, flower, plum blossoms, outdoors, falling petals, white hair, black eyes,

badhandv4,easynegative,ng_deepnegative_v1_75t,verybadimagenegative_v1.3, bad-artist, bad_prompt_version2-neg, teeth

Most of the negative prompts are textual embeddings from Civitai. Please download them manually.

@JPW0080

JPW0080 commented Jul 18, 2023

Working on Win 11/3090, AUTOMATIC1111 1.4.0.
The generated GIF plays when clicked on my side.

@robbsaber

Not working. Same error as above. Win 10/3090

@legarth

legarth commented Jul 18, 2023

Same error here. Downloaded models manually.
Windows 11. RTX4090
A1111 v1.4.0

File "C:\Users\x\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2530, in group_norm
return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: Expected weight to be a vector of size equal to the number of channels in input, but got weight of shape [1280] and input of shape [32, 2560, 8, 8]

@continue-revolution
Owner

@legarth @robbsaber please include a screenshot of your webui

@KoDelioDa

image
Error completing request
*** Arguments: ('task(ev7zb6xuo0y95xh)', 'masterpiece, best quality, 1girl, solo, cherry blossoms, hanami, pink flower, white flower, spring season, wisteria, petals, flower, plum blossoms, outdoors, falling petals, white hair, black eyes,', 'badhandv4,easynegative,ng_deepnegative_v1_75t,verybadimagenegative_v1.3, bad-artist, bad_prompt_version2-neg, teeth', [], 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 0, '', '', [], 0, False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 2048, 128, True, True, True, False, True, 16, 8, 'mm_sd_v15.ckpt', <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x7fb36c513f70>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x7fb36c511ab0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x7fb36c3ae4d0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x7fb36c3acf10>, False, False, 0, None, [], 0, False, [], [], False, 0, 1, False, False, 0, None, [], -2, False, [], False, 0, None, None, False, False, 'positive', 'comma', 0, False, False, '', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, None, None, False, None, None, False, None, None, False, None, None, False, 50) {}
Traceback (most recent call last):
File "/content/drive/.shortcut-targets-by-id/1_4t4MLpW_ubRFQ-2-cF8GRcCiXNAlkmp/stable-diffusion-webui/modules/call_queue.py", line 55, in f
res = list(func(*args, **kwargs))
File "/content/drive/.shortcut-targets-by-id/1_4t4MLpW_ubRFQ-2-cF8GRcCiXNAlkmp/stable-diffusion-webui/modules/call_queue.py", line 35, in f
res = func(*args, **kwargs)
File "/content/drive/.shortcut-targets-by-id/1_4t4MLpW_ubRFQ-2-cF8GRcCiXNAlkmp/stable-diffusion-webui/modules/txt2img.py", line 57, in txt2img
processed = processing.process_images(p)
File "/content/drive/.shortcut-targets-by-id/1_4t4MLpW_ubRFQ-2-cF8GRcCiXNAlkmp/stable-diffusion-webui/modules/processing.py", line 620, in process_images
res = process_images_inner(p)
File "/content/drive/.shortcut-targets-by-id/1_4t4MLpW_ubRFQ-2-cF8GRcCiXNAlkmp/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/batch_hijack.py", line 42, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "/content/drive/.shortcut-targets-by-id/1_4t4MLpW_ubRFQ-2-cF8GRcCiXNAlkmp/stable-diffusion-webui/modules/processing.py", line 739, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "/content/drive/.shortcut-targets-by-id/1_4t4MLpW_ubRFQ-2-cF8GRcCiXNAlkmp/stable-diffusion-webui/modules/processing.py", line 992, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "/content/drive/.shortcut-targets-by-id/1_4t4MLpW_ubRFQ-2-cF8GRcCiXNAlkmp/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 439, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "/content/drive/.shortcut-targets-by-id/1_4t4MLpW_ubRFQ-2-cF8GRcCiXNAlkmp/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 278, in launch_sampling
return func()
File "/content/drive/.shortcut-targets-by-id/1_4t4MLpW_ubRFQ-2-cF8GRcCiXNAlkmp/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 439, in
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/content/drive/MyDrive/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/src/k-diffusion/k_diffusion/sampling.py", line 145, in sample_euler_ancestral
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/content/drive/.shortcut-targets-by-id/1_4t4MLpW_ubRFQ-2-cF8GRcCiXNAlkmp/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 177, in forward
x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict(c_crossattn, image_cond_in[a:b]))
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/content/drive/MyDrive/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/src/k-diffusion/k_diffusion/external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "/content/drive/MyDrive/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/src/k-diffusion/k_diffusion/external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "/content/drive/.shortcut-targets-by-id/1_4t4MLpW_ubRFQ-2-cF8GRcCiXNAlkmp/stable-diffusion-webui/modules/sd_hijack_utils.py", line 17, in
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "/content/drive/.shortcut-targets-by-id/1_4t4MLpW_ubRFQ-2-cF8GRcCiXNAlkmp/stable-diffusion-webui/modules/sd_hijack_utils.py", line 28, in call
return self.__orig_func(*args, **kwargs)
File "/content/drive/MyDrive/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/content/drive/MyDrive/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 1329, in forward
out = self.diffusion_model(x, t, context=cc)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/content/drive/.shortcut-targets-by-id/1_4t4MLpW_ubRFQ-2-cF8GRcCiXNAlkmp/stable-diffusion-webui/modules/sd_unet.py", line 91, in UNetModel_forward
return ldm.modules.diffusionmodules.openaimodel.copy_of_UNetModel_forward_for_webui(self, x, timesteps, context, *args, **kwargs)
File "/content/drive/MyDrive/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 781, in forward
h = module(h, emb, context)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/content/drive/.shortcut-targets-by-id/1_4t4MLpW_ubRFQ-2-cF8GRcCiXNAlkmp/stable-diffusion-webui/extensions/sd-webui-animatediff/scripts/animatediff.py", line 19, in mm_tes_forward
x = layer(x, emb)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/content/drive/MyDrive/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 249, in forward
return checkpoint(
File "/content/drive/MyDrive/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/util.py", line 114, in checkpoint
return CheckpointFunction.apply(func, len(inputs), *args)
File "/usr/local/lib/python3.10/dist-packages/torch/autograd/function.py", line 506, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/content/drive/MyDrive/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/util.py", line 129, in forward
output_tensors = ctx.run_function(*ctx.input_tensors)
File "/content/drive/MyDrive/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 262, in _forward
h = self.in_layers(x)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/container.py", line 217, in forward
input = module(input)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/content/drive/.shortcut-targets-by-id/1_4t4MLpW_ubRFQ-2-cF8GRcCiXNAlkmp/stable-diffusion-webui/modules/sd_hijack_utils.py", line 17, in
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "/content/drive/.shortcut-targets-by-id/1_4t4MLpW_ubRFQ-2-cF8GRcCiXNAlkmp/stable-diffusion-webui/modules/sd_hijack_utils.py", line 28, in call
return self.__orig_func(*args, **kwargs)
File "/content/drive/MyDrive/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/util.py", line 219, in forward
return super().forward(x.float()).type(x.dtype)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/normalization.py", line 273, in forward
return F.group_norm(
File "/usr/local/lib/python3.10/dist-packages/torch/nn/functional.py", line 2530, in group_norm
return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: Expected weight to be a vector of size equal to the number of channels in input, but got weight of shape [2560] and input of shape [16, 5120, 8, 8]

@legarth

legarth commented Jul 18, 2023

@continue-revolution Screenshot below
Windows 11. RTX4090
A1111 v1.4.0
Screenshot 2023-07-18 094946

@sabersilverchen

I fixed the RuntimeError issue by updating my stable-diffusion-webui to version 1.4.1.

@Fhantomchaos

Clean install of Automatic1111 and got this bug. Could someone give me any ideas?

*** Error completing request
*** Arguments: ('task(6l3z6peqsui4r5r)', 'in the style of AImanhwa, anime girl in the wind', '', [], 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 0, '', '', [], <gradio.routes.Request object at 0x000002672CEAA560>, 0, True, 16, 8, 'mm_sd_v15.ckpt', False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0) {}
Traceback (most recent call last):
File "D:\Stable Diffusion\stable-diffusion-webui\modules\call_queue.py", line 58, in f
res = list(func(*args, **kwargs))
File "D:\Stable Diffusion\stable-diffusion-webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "D:\Stable Diffusion\stable-diffusion-webui\modules\txt2img.py", line 62, in txt2img
processed = processing.process_images(p)
File "D:\Stable Diffusion\stable-diffusion-webui\modules\processing.py", line 639, in process_images
res = process_images_inner(p)
File "D:\Stable Diffusion\stable-diffusion-webui\modules\processing.py", line 758, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "D:\Stable Diffusion\stable-diffusion-webui\modules\processing.py", line 1011, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "D:\Stable Diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 464, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "D:\Stable Diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 303, in launch_sampling
return func()
File "D:\Stable Diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 464, in
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "D:\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Stable Diffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "D:\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Stable Diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 183, in forward
x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
File "D:\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Stable Diffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "D:\Stable Diffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "D:\Stable Diffusion\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "D:\Stable Diffusion\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in call
return self.__orig_func(*args, **kwargs)
File "D:\Stable Diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "D:\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Stable Diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
out = self.diffusion_model(x, t, context=cc)
File "D:\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Stable Diffusion\stable-diffusion-webui\modules\sd_unet.py", line 91, in UNetModel_forward
return ldm.modules.diffusionmodules.openaimodel.copy_of_UNetModel_forward_for_webui(self, x, timesteps, context, *args, **kwargs)
File "D:\Stable Diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 802, in forward
h = module(h, emb, context)
File "D:\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py", line 21, in mm_tes_forward
x = layer(x, context)
File "D:\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\motion_module.py", line 79, in forward
hidden_states = self.temporal_transformer(hidden_states, encoder_hidden_states, attention_mask)
File "D:\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\motion_module.py", line 151, in forward
hidden_states = block(hidden_states, encoder_hidden_states=encoder_hidden_states, video_length=video_length)
File "D:\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\motion_module.py", line 215, in forward
hidden_states = attention_block(
File "D:\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\motion_module.py", line 539, in forward
hidden_states = self._memory_efficient_attention_xformers(query, key, value, attention_mask)
File "D:\Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\motion_module.py", line 468, in memory_efficient_attention_xformers
hidden_states = xformers.ops.memory_efficient_attention(query, key, value, attn_bias=attention_mask,
File "D:\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha_init
.py", line 192, in memory_efficient_attention
return memory_efficient_attention(
File "D:\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha_init
.py", line 290, in _memory_efficient_attention
return memory_efficient_attention_forward(
File "D:\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha_init
.py", line 310, in memory_efficient_attention_forward
out, *
= op.apply(inp, needs_gradient=False)
File "D:\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\cutlass.py", line 175, in apply
out, lse, rng_seed, rng_offset = cls.OPERATOR(
File "D:\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch_ops.py", line 502, in call
return self._op(*args, **kwargs or {})
RuntimeError: CUDA error: invalid configuration argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

image

@recoilme

Now working for me (3090, runpod.io, this template: https://www.runpod.io/console/gpu-browse?template=runpod-stable-unified )

  • Download the model manually.
  • Get the runtime error (Expected weight ...), restart webui.
  • Fix the line in animatediff.py: imageio.mimsave(video_path, video_list, duration=(1000/fps)), replacing 1000 with 1 (see the sketch after this list), then restart webui.
  • Generate at 576*576 (vertical is not working).
  • Use at most 75 tokens (positive/negative); with more I get 2 versions in one animation.
  • Worked in preview.
  • The image is blurred; add steps and try to fix it in the prompt, like: half clothed eyes, (highres, shadows, absurdres, best_quality, ultra_detailed, 8k, extremely_clear, photograph, beautiful, sharp focus, hdr), (adult, perfect skin, female, looking away, portrait, upper body, smile, detailed eyes, sensual, stranded, beautiful light, hair bun), Grabuloska, (black_background:1.4)
  • Same with different samplers (Euler a, DDIM, DPM++ 2S a Karras).
  • Colorful model.
  • 16 FPS / 24 frames works for me.
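
A minimal standalone sketch of the imageio change mentioned in the list above (dummy frames stand in for the real generated images; whether duration is read as seconds or milliseconds per frame depends on the installed imageio version, which is presumably why the original 1000/fps produced a seemingly static GIF):

```python
import numpy as np
import imageio

fps = 8
# Dummy frames standing in for the frames the extension actually generates.
video_list = [np.full((64, 64, 3), i * 16, dtype=np.uint8) for i in range(16)]
video_path = "animatediff_test.gif"

# Original line in animatediff.py (per the report above): duration=(1000 / fps).
# recoilme's workaround treats duration as seconds per frame instead:
imageio.mimsave(video_path, video_list, duration=(1 / fps))
```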

But this is absolutely amazing! Thank you!
00014-588303550

What do you think, is it possible to merge this UNet layer into the model directly?

@sdbds

sdbds commented Jul 18, 2023

Found the cause: it is a half-precision (fp16) problem.
I suggest merging the official automatic half-precision PR.
guoyww/AnimateDiff#25

Because of fp16:
use --no-half to avoid this problem,
or wait for the update that autocasts to fp16.
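
If the half-precision theory is right, one kind of workaround (the pattern webui's own GroupNorm32 in ldm/modules/diffusionmodules/util.py uses, visible in the tracebacks above) is to compute the normalization in fp32 and cast back, instead of disabling fp16 globally with --no-half. A rough, hypothetical sketch of that idea, not the actual extension or PR code:

```python
import torch
import torch.nn as nn

class GroupNormFloat32(nn.GroupNorm):
    """Run GroupNorm in fp32 even when the surrounding UNet runs in fp16,
    mirroring the GroupNorm32 trick from ldm's diffusionmodules/util.py."""
    def forward(self, x):
        return super().forward(x.float()).type(x.dtype)

norm = GroupNormFloat32(32, 1280)      # affine weights stay fp32
x = torch.randn(2, 1280, 8, 8).half()  # fp16 activations, as when --no-half is not used
out = norm(x)                          # normalized in fp32, cast back to the input dtype
print(out.dtype)                       # torch.float16
```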

@k-a-s-o-u

I fixed the RuntimeError issue by updating my stable-diffusion-webui to version 1.4.1.

Got the error on WebUI 1.3.2 but it was fixed on 1.4.1. Thanks!

@robbsaber

@legarth @robbsaber please include a screenshot of your webui

Updated webui to 1.4.1, still get an error.
webui
Code:

RuntimeError: CUDA error: invalid configuration argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

2023-07-18 15:34:59,288 - AnimateDiff - INFO - AnimateDiff process start with video length 16, FPS 8, motion module mm_sd_v15.ckpt.
2023-07-18 15:34:59,293 - AnimateDiff - INFO - Injecting motion module mm_sd_v15.ckpt into SD1.5 UNet input blocks.
2023-07-18 15:34:59,294 - AnimateDiff - INFO - Injecting motion module mm_sd_v15.ckpt into SD1.5 UNet output blocks.
2023-07-18 15:34:59,294 - AnimateDiff - INFO - Injection finished.
Data shape for DDIM sampling is (16, 4, 64, 64), eta 0.0
Running DDIM Sampling with 20 timesteps
DDIM Sampler: 0%| | 0/20 [00:00<?, ?it/s]
*** Error completing request
*** Arguments: ('task(529eqw1n4m39rub)', 'masterpiece, best quality, 1girl, solo, cherry blossoms, hanami, pink flower, white flower, spring season, wisteria, petals, flower, plum blossoms, outdoors, falling petals, white hair, black eyes,', 'badhandv4,easynegative,ng_deepnegative_v1_75t,verybadimagenegative_v1.3, bad-artist, bad_prompt_version2-neg, teeth', [], 20, 19, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 0, '', '', [], 0, False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 32, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 32, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, '', 0, False, False, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, None, 'Refresh models', True, 16, 8, 'mm_sd_v15.ckpt', <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000021C3B170100>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000021C3B170250>, None, False, '0', 'C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\models\roop\inswapper_128.onnx', 'CodeFormer', 1, '', 1, 1, False, True, False, False, False, False, False, False, False, False, '1:1,1:2,1:2', '0:0,0:0,0:1', '0.2,0.8,0.8', 150, 0.2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, 5, 'all', 'all', 'all', '', '', '', '1', 'none', False, '', '', 'comma', '', True, '', '20', 'all', 'all', 'all', 'all', None, None, False, None, None, False, 50) {}
Traceback (most recent call last):
File "C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\modules\call_queue.py", line 55, in f
res = list(func(*args, **kwargs))
File "C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\modules\call_queue.py", line 35, in f
res = func(*args, **kwargs)
File "C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\modules\txt2img.py", line 57, in txt2img
processed = processing.process_images(p)
File "C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\modules\processing.py", line 620, in process_images
res = process_images_inner(p)
File "C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\modules\processing.py", line 739, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\modules\processing.py", line 992, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\modules\sd_samplers_compvis.py", line 222, in sample
samples_ddim = self.launch_sampling(steps, lambda: self.sampler.sample(S=steps, conditioning=conditioning, batch_size=int(x.shape[0]), shape=x[0].shape, verbose=False, unconditional_guidance_scale=p.cfg_scale, unconditional_conditioning=unconditional_conditioning, x_T=x, eta=self.eta)[0])
File "C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\modules\sd_samplers_compvis.py", line 51, in launch_sampling
return func()
File "C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\modules\sd_samplers_compvis.py", line 222, in
samples_ddim = self.launch_sampling(steps, lambda: self.sampler.sample(S=steps, conditioning=conditioning, batch_size=int(x.shape[0]), shape=x[0].shape, verbose=False, unconditional_guidance_scale=p.cfg_scale, unconditional_conditioning=unconditional_conditioning, x_T=x, eta=self.eta)[0])
File "C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddim.py", line 104, in sample
samples, intermediates = self.ddim_sampling(conditioning, size,
File "C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddim.py", line 164, in ddim_sampling
outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps,
File "C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\modules\sd_samplers_compvis.py", line 58, in p_sample_ddim_hook
res = self.orig_p_sample_ddim(x_dec, cond, ts, *args, unconditional_conditioning=unconditional_conditioning, **kwargs)
File "C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddim.py", line 212, in p_sample_ddim
model_uncond, model_t = self.model.apply_model(x_in, t_in, c_in).chunk(2)
File "C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in call
return self.__orig_func(*args, **kwargs)
File "C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
out = self.diffusion_model(x, t, context=cc)
File "C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\modules\sd_unet.py", line 91, in UNetModel_forward
return ldm.modules.diffusionmodules.openaimodel.copy_of_UNetModel_forward_for_webui(self, x, timesteps, context, *args, **kwargs)
File "C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
h = module(h, emb, context)
File "C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py", line 21, in mm_tes_forward
x = layer(x, context)
File "C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\extensions\sd-webui-animatediff\motion_module.py", line 79, in forward
hidden_states = self.temporal_transformer(hidden_states, encoder_hidden_states, attention_mask)
File "C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\extensions\sd-webui-animatediff\motion_module.py", line 151, in forward
hidden_states = block(hidden_states, encoder_hidden_states=encoder_hidden_states, video_length=video_length)
File "C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\extensions\sd-webui-animatediff\motion_module.py", line 215, in forward
hidden_states = attention_block(
File "C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\extensions\sd-webui-animatediff\motion_module.py", line 539, in forward
hidden_states = self._memory_efficient_attention_xformers(query, key, value, attention_mask)
File "C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\extensions\sd-webui-animatediff\motion_module.py", line 468, in memory_efficient_attention_xformers
hidden_states = xformers.ops.memory_efficient_attention(query, key, value, attn_bias=attention_mask,
File "C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha_init
.py", line 192, in memory_efficient_attention
return memory_efficient_attention(
File "C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha_init
.py", line 290, in _memory_efficient_attention
return memory_efficient_attention_forward(
File "C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha_init
.py", line 310, in memory_efficient_attention_forward
out, *
= op.apply(inp, needs_gradient=False)
File "C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\cutlass.py", line 175, in apply
out, lse, rng_seed, rng_offset = cls.OPERATOR(
File "C:\Users\rober\Documents\GitHub\Ai\stable-diffusion-webui\venv\lib\site-packages\torch_ops.py", line 502, in call
return self._op(*args, **kwargs or {})
RuntimeError: CUDA error: invalid configuration argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
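
For reference, the CUDA_LAUNCH_BLOCKING hint in this log means forcing synchronous kernel launches so the reported stack trace points at the op that actually failed. A minimal sketch (debugging aid only; assumes you can add it before the WebUI initializes CUDA):

import os

# Must be set before torch initializes CUDA, otherwise it has no effect.
# Synchronous launches are slower; remove this once the failing op is identified.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"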

@continue-revolution
Copy link
Owner

@robbsaber Do not use xformers. I will remove xformers in this extension tomorrow.

@robbsaber
Copy link

robbsaber commented Jul 19, 2023

@robbsaber Do not use xformers. I will remove xformers in this extension tomorrow.

Removing --xformers from webui_user.bat worked. Thanks!
For anyone else:
1. Update AUTOMATIC1111.
2. Download the ckpts manually to stable-diffusion-webui\extensions\sd-webui-animatediff\model.
3. Remove --xformers.

@gitmylo
Copy link

gitmylo commented Jul 20, 2023

@robbsaber Do not use xformers. I will remove xformers in this extension tomorrow.

Removing --xformers from webui_user.bat worked. Thanks! For anyone else: 1. Update AUTOMATIC1111. 2. Download the ckpts manually to stable-diffusion-webui\extensions\sd-webui-animatediff\model. 3. Remove --xformers.

Yeah, it seems the automatic download isn't working. Please use huggingface_hub to download the models instead, @continue-revolution. https://huggingface.co/guoyww/animatediff/ already appears to be an official repository hosting the models (see the sketch at the end of this comment).

Edit: for reference, the "shape issue" is caused by a model not being loaded correctly; loading the model will fix it.
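
For reference, a minimal sketch of such a download with huggingface_hub (the destination path is illustrative; adjust it to your install, and it assumes a reasonably recent huggingface_hub):

from huggingface_hub import hf_hub_download

# Illustrative sketch: fetch the motion module from the guoyww/animatediff repo
# and place it where the extension looks for it. The local_dir path is an assumption.
hf_hub_download(
    repo_id="guoyww/animatediff",
    filename="mm_sd_v15.ckpt",
    local_dir=r"stable-diffusion-webui\extensions\sd-webui-animatediff\model",
)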

@continue-revolution
Copy link
Owner

@gitmylo https://github.com/continue-revolution/sd-webui-animatediff#how-to-use item 2. Since there are 4 sources for downloading the motion modules, I have decided to remove the auto-download feature in v1.1.0 and let users choose.

Which model is not loaded correctly, SD or the motion module? How do you actually “load” it?

@gitmylo
Copy link

gitmylo commented Jul 20, 2023

The shape issue occurs when you don't have the motion models installed and try to run the network twice: the first run gives a warning and produces random frames, and the second run crashes saying the weight vectors are not the expected size.
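
For reference, the error itself only means that a GroupNorm weight no longer matches the channel count of the tensor it receives. A purely illustrative repro using the shapes quoted at the top of this issue:

import torch
import torch.nn.functional as F

# Purely illustrative: a 1280-entry GroupNorm weight applied to a 2560-channel input
# raises the same RuntimeError quoted in this thread.
x = torch.randn(16, 2560, 9, 9)
weight = torch.ones(1280)
bias = torch.zeros(1280)
F.group_norm(x, 32, weight, bias)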

@andzejsp
Copy link

andzejsp commented Jul 20, 2023

Win11 / 3090

A1111 extension

I don't know why it's not working. I downloaded the ckpt into the right folder and disabled --xformers in webui.bat, but I still get a single frame in the GIF, and on the second generation it errors out.

venv "E:\Git_repos\stable-difussion-automati1111\venv\Scripts\Python.exe"
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr  5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: v1.3.2
Commit hash: baf6946e06249c5af9851c60171692c44ef633e0
Installing requirements


Launching Web UI with arguments:
No module 'xformers'. Proceeding without it.
==============================================================================
You are running torch 1.13.1+cu117.
The program is tested to work with torch 2.0.0.
To reinstall the desired version, run with commandline flag --reinstall-torch.
Beware that this will cause a lot of large files to be downloaded, as well as
there are reports of issues with training tab on the latest version.

Use --skip-version-check commandline argument to disable this check.
==============================================================================
ControlNet v1.1.173
ControlNet v1.1.173
Loading weights [52768d2bc4] from E:\Git_repos\stable-difussion-automati1111\models\Stable-diffusion\toonyou_beta3.safetensors
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 4.9s (import torch: 1.2s, import gradio: 0.7s, import ldm: 0.4s, other imports: 0.6s, setup codeformer: 0.1s, load scripts: 0.9s, create ui: 0.3s, gradio launch: 0.5s).
Creating model from config: E:\Git_repos\stable-difussion-automati1111\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying optimization: Doggettx... done.
Textual inversion embeddings loaded(0):
Model loaded in 4.1s (load weights from disk: 0.8s, create model: 0.6s, apply weights to model: 0.5s, apply half(): 0.6s, move model to device: 0.5s, load textual inversion embeddings: 1.2s).
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:03<00:00,  5.11it/s]
2023-07-20 22:07:43,084 - AnimateDiff - INFO - Removing motion module from SD1.5 UNet input blocks.
2023-07-20 22:07:43,084 - AnimateDiff - INFO - Removing motion module from SD1.5 UNet output blocks.
2023-07-20 22:07:43,085 - AnimateDiff - INFO - Removal finished.
2023-07-20 22:07:43,085 - AnimateDiff - INFO - Merging images into GIF.
2023-07-20 22:07:43,224 - AnimateDiff - INFO - AnimateDiff process end.
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [00:02<00:00,  9.41it/s]
  0%|                                                                                           | 0/20 [00:00<?, ?it/s]
Error completing request
Arguments: ('task(qwiu0kozwmciobx)', 'masterpiece, best quality, 1girl, solo, cherry blossoms, hanami, pink flower, white flower, spring season, wisteria, petals, flower, plum blossoms, outdoors, falling petals, white hair, black eyes,', 'badhandv4,easynegative,ng_deepnegative_v1_75t,verybadimagenegative_v1.3, bad-artist, bad_prompt_version2-neg, teeth', [], 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 0, '', '', [], 0, True, 0, 16, 8, 'mm_sd_v15.ckpt', <controlnet.py.UiControlNetUnit object at 0x000002DD8B663010>, <controlnet.py.UiControlNetUnit object at 0x000002DD8B6632E0>, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, None, False, None, False, 50) {}
Traceback (most recent call last):
  File "E:\Git_repos\stable-difussion-automati1111\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "E:\Git_repos\stable-difussion-automati1111\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "E:\Git_repos\stable-difussion-automati1111\modules\txt2img.py", line 57, in txt2img
    processed = processing.process_images(p)
  File "E:\Git_repos\stable-difussion-automati1111\modules\processing.py", line 610, in process_images
    res = process_images_inner(p)
  File "E:\Git_repos\stable-difussion-automati1111\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "E:\Git_repos\stable-difussion-automati1111\modules\processing.py", line 728, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "E:\Git_repos\stable-difussion-automati1111\modules\processing.py", line 976, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "E:\Git_repos\stable-difussion-automati1111\modules\sd_samplers_kdiffusion.py", line 383, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "E:\Git_repos\stable-difussion-automati1111\modules\sd_samplers_kdiffusion.py", line 257, in launch_sampling
    return func()
  File "E:\Git_repos\stable-difussion-automati1111\modules\sd_samplers_kdiffusion.py", line 383, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "E:\Git_repos\stable-difussion-automati1111\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "E:\Git_repos\stable-difussion-automati1111\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "E:\Git_repos\stable-difussion-automati1111\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "E:\Git_repos\stable-difussion-automati1111\modules\sd_samplers_kdiffusion.py", line 137, in forward
    x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict([cond_in], image_cond_in))
  File "E:\Git_repos\stable-difussion-automati1111\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "E:\Git_repos\stable-difussion-automati1111\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "E:\Git_repos\stable-difussion-automati1111\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "E:\Git_repos\stable-difussion-automati1111\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "E:\Git_repos\stable-difussion-automati1111\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "E:\Git_repos\stable-difussion-automati1111\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "E:\Git_repos\stable-difussion-automati1111\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "E:\Git_repos\stable-difussion-automati1111\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "E:\Git_repos\stable-difussion-automati1111\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "E:\Git_repos\stable-difussion-automati1111\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 802, in forward
    h = module(h, emb, context)
  File "E:\Git_repos\stable-difussion-automati1111\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "E:\Git_repos\stable-difussion-automati1111\extensions\sd-webui-animatediff\scripts\animatediff.py", line 19, in mm_tes_forward
    x = layer(x, emb)
  File "E:\Git_repos\stable-difussion-automati1111\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "E:\Git_repos\stable-difussion-automati1111\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 249, in forward
    return checkpoint(
  File "E:\Git_repos\stable-difussion-automati1111\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
    return CheckpointFunction.apply(func, len(inputs), *args)
  File "E:\Git_repos\stable-difussion-automati1111\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
    output_tensors = ctx.run_function(*ctx.input_tensors)
  File "E:\Git_repos\stable-difussion-automati1111\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 262, in _forward
    h = self.in_layers(x)
  File "E:\Git_repos\stable-difussion-automati1111\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "E:\Git_repos\stable-difussion-automati1111\venv\lib\site-packages\torch\nn\modules\container.py", line 204, in forward
    input = module(input)
  File "E:\Git_repos\stable-difussion-automati1111\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "E:\Git_repos\stable-difussion-automati1111\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "E:\Git_repos\stable-difussion-automati1111\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "E:\Git_repos\stable-difussion-automati1111\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 226, in forward
    return super().forward(x.float()).type(x.dtype)
  File "E:\Git_repos\stable-difussion-automati1111\venv\lib\site-packages\torch\nn\modules\normalization.py", line 273, in forward
    return F.group_norm(
  File "E:\Git_repos\stable-difussion-automati1111\venv\lib\site-packages\torch\nn\functional.py", line 2528, in group_norm
    return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: Expected weight to be a vector of size equal to the number of channels in input, but got weight of shape [2560] and input of shape [2, 5120, 8, 8]


EDIT: It works now. As you can see, my A1111 was on 1.3.x; I did a git pull and it's now working on 1.4.1.

:)

@amj
Copy link

amj commented Jul 21, 2023

Running with --no-half fixed the "RuntimeError: Expected weight..." bug on an M2, but colors are still washed-out/grey. I'm going to look at running the guoyww script directly and see if it makes a difference.

@dmahlow
Copy link

dmahlow commented Jul 21, 2023

I'm running auto1111 on RunDiffusion and can't update from v1.3.2 to v1.4.1. Any chance this can be made to work with 1.3.2 somehow?

@andzejsp
Copy link

What do you mean you can't update? Just do a git pull.

@lhucklen
Copy link

Any word on this being fixed?

@andzejsp
Copy link

Any word on this being fixed?

As was stated earlier, this bug is a model limitation. Better models are needed.

@continue-revolution continue-revolution mentioned this issue Jul 24, 2023
@J-Cott
Copy link

J-Cott commented Jul 25, 2023

@robbsaber Do not use xformers. I will remove xformers in this extension tomorrow.

Why does it work fine once with --xformers enabled but not twice? Isn't xformers very beneficial to have?

@continue-revolution
Copy link
Owner

@J-Cott I got a weird error when xformers was applied to the motion module and I had no idea how to resolve it (the error is very ambiguous). At this time, even if you enable xformers, it will not be applied to the motion module. I have no idea what else will happen because I really don’t have time to test. I have to prioritize other problems.

@J-Cott
Copy link

J-Cott commented Jul 25, 2023

@J-Cott I got a weird error when xformers was applied to the motion module and I had no idea how to resolve it (the error is very ambiguous). At this time, even if you enable xformers, it will not be applied to the motion module. I have no idea what else will happen because I really don’t have time to test. I have to prioritize other problems.

I believe I found the cause of my (and probably other people's) errors: the instructions say not to change the .ckpt name, but when downloaded from Civitai (instead of the very slow Google Drive download link) the file has a different name (animatediffMotion_v15.ckpt), so the extension cannot find the model (there is an error in the console telling you to download it manually). Renaming it to "mm_sd_v15.ckpt" made it work fine.
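
For reference, a tiny sketch of that rename (the model directory path is an assumption; adjust it to your install):

from pathlib import Path

# Illustrative only: the extension expects the original file name "mm_sd_v15.ckpt".
model_dir = Path(r"stable-diffusion-webui\extensions\sd-webui-animatediff\model")
(model_dir / "animatediffMotion_v15.ckpt").rename(model_dir / "mm_sd_v15.ckpt")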

@AK51
Copy link

AK51 commented Sep 14, 2023

RuntimeError: Expected weight to be a vector of size equal to the number of channels in input, but got weight of shape [1280] and input of shape [32, 2560, 8, 8]

Hi, I did the git pull and SD is up to date, as is your extension. However, after running once I get the error and have to close my terminal and restart SD again. Is there any solution or workaround? Thx

   Traceback (most recent call last):
      File "/home/ak/sd/stable-diffusion-webui/modules/call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "/home/ak/sd/stable-diffusion-webui/modules/call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "/home/ak/sd/stable-diffusion-webui/modules/txt2img.py", line 55, in txt2img
        processed = processing.process_images(p)
      File "/home/ak/sd/stable-diffusion-webui/modules/processing.py", line 732, in process_images
        res = process_images_inner(p)
      File "/home/ak/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/batch_hijack.py", line 42, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "/home/ak/sd/stable-diffusion-webui/modules/processing.py", line 867, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "/home/ak/sd/stable-diffusion-webui/modules/processing.py", line 1140, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "/home/ak/sd/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 235, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "/home/ak/sd/stable-diffusion-webui/modules/sd_samplers_common.py", line 261, in launch_sampling
        return func()
      File "/home/ak/sd/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 235, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "/home/ak/sd/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "/home/ak/sd/stable-diffusion-webui/modules/sd_samplers_extra.py", line 71, in restart_sampler
        x = heun_step(x, old_sigma, new_sigma)
      File "/home/ak/sd/stable-diffusion-webui/modules/sd_samplers_extra.py", line 19, in heun_step
        denoised = model(x, old_sigma * s_in, **extra_args)
      File "/home/ak/sd/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/home/ak/sd/stable-diffusion-webui/modules/sd_samplers_cfg_denoiser.py", line 169, in forward
        x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
      File "/home/ak/sd/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/home/ak/sd/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "/home/ak/sd/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "/home/ak/sd/stable-diffusion-webui/modules/sd_hijack_utils.py", line 17, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "/home/ak/sd/stable-diffusion-webui/modules/sd_hijack_utils.py", line 28, in __call__
        return self.__orig_func(*args, **kwargs)
      File "/home/ak/sd/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 858, in apply_model
        x_recon = self.model(x_noisy, t, **cond)
      File "/home/ak/sd/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/home/ak/sd/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 1335, in forward
        out = self.diffusion_model(x, t, context=cc)
      File "/home/ak/sd/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/home/ak/sd/stable-diffusion-webui/modules/sd_unet.py", line 91, in UNetModel_forward
        return ldm.modules.diffusionmodules.openaimodel.copy_of_UNetModel_forward_for_webui(self, x, timesteps, context, *args, **kwargs)
      File "/home/ak/sd/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 802, in forward
        h = module(h, emb, context)
      File "/home/ak/sd/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/home/ak/sd/stable-diffusion-webui/extensions/sd-webui-animatediff/scripts/animatediff.py", line 23, in mm_tes_forward
        x = layer(x, context)
      File "/home/ak/sd/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/home/ak/sd/stable-diffusion-webui/extensions/sd-webui-animatediff/motion_module.py", line 85, in forward
        return self.temporal_transformer(input_tensor, encoder_hidden_states, attention_mask)
      File "/home/ak/sd/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/home/ak/sd/stable-diffusion-webui/extensions/sd-webui-animatediff/motion_module.py", line 142, in forward
        hidden_states = self.norm(hidden_states)
      File "/home/ak/sd/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/home/ak/sd/stable-diffusion-webui/extensions-builtin/Lora/networks.py", line 459, in network_GroupNorm_forward
        return originals.GroupNorm_forward(self, input)
      File "/home/ak/sd/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/normalization.py", line 273, in forward
        return F.group_norm(
      File "/home/ak/sd/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/functional.py", line 2530, in group_norm
        return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
    RuntimeError: Expected weight to be a vector of size equal to the number of channels in input, but got weight of shape [1280] and input of shape [32, 2560, 8, 8]

---

@RyanzW0521
Copy link

RyanzW0521 commented Sep 14, 2023 via email

@continue-revolution
Copy link
Owner

@AK51 I have no idea why. If both WebUI and this extension are up to date, you may want to remove webui/repositories/{every folder inside it} and let webui re-install them.

@AK51
Copy link

AK51 commented Sep 14, 2023

Hi, it works now. I did git pull and forgot to restart. :P Thx

@continue-revolution
Copy link
Owner

I will close this issue for now. If people still have problems with the shape issue, please update your WebUI. If you still have a problem after that, feel free to submit a new issue.

@2blackbar
Copy link

2blackbar commented Sep 15, 2023

This issue is still present, don't close it. I used AnimateDiff and ADetailer and it popped up.
This is a pretty critical issue: it completely breaks any further generation and you have to restart the WebUI. It's not like NaNs, where you can load another model and it works again without closing.

@continue-revolution
Copy link
Owner

@2blackbar A fresh re-install of everything, including the venv, will almost certainly resolve your problem. Otherwise I cannot promise anything.

@ExoSilica
Copy link

ExoSilica commented Sep 15, 2023

Are you guys using ADetailer? From my experiments, when ADetailer is used it works once, and then the second time, regardless of whether ADetailer is on or not, it shows "RuntimeError: Expected weight to be a vector of size equal to the number of channels in input, but got weight of shape [1280] and input of shape [32, 2560, 8, 8]".

If you use AnimateDiff without ADetailer, you can generate as much as you want without this RuntimeError.

However, I ask the creator of this extension to fix this issue, since ADetailer improves the quality of the generation.

@continue-revolution
Copy link
Owner

@ExoSilica ADetailer has its own image-generation backend. I’m not sure what they did with their own inpainting + CN backend, and I won’t have time to read their source code.

If you really want to automate inpainting, I can probably consider connecting my own Segment Anything extension with this one, which would be much easier, but not now. I’m busy with real life at this time.

@ExoSilica
Copy link

ExoSilica commented Sep 15, 2023

Another experiment I conducted has shown that face restoration does not affect the extension's ability to execute multiple times. Therefore, it could serve as a good alternative to the ADetailer extension until the developer resolves the RuntimeError.

@continue-revolution, no problem mate. I'm just trying to help other users who may have encountered the same issue and understand what might be causing it. It took me hours of restarting and retrying to discover this. Please work on it whenever you have free time. For now, I'll be using the face restoration, although it slightly reduces the authenticity (likeness) of the model.

PS: If you do get a chance to investigate this further, while you're at it, see if you can make this extension compatible with Roop, which is essentially a face restoration technique that utilizes a custom image to assist with the diffusion.

@AK51
Copy link

AK51 commented Sep 17, 2023

Hi,

RuntimeError: Expected weight to be a vector of size equal to the number of channels in input, but got weight of shape [1280] and input of shape [32, 2560, 12, 8]

Time taken: 0.7 sec.

The same error still occurs....

    Traceback (most recent call last):
      File "/home/ak/sd/stable-diffusion-webui/modules/call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "/home/ak/sd/stable-diffusion-webui/modules/call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "/home/ak/sd/stable-diffusion-webui/modules/txt2img.py", line 55, in txt2img
        processed = processing.process_images(p)
      File "/home/ak/sd/stable-diffusion-webui/modules/processing.py", line 732, in process_images
        res = process_images_inner(p)
      File "/home/ak/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/batch_hijack.py", line 42, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "/home/ak/sd/stable-diffusion-webui/modules/processing.py", line 867, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "/home/ak/sd/stable-diffusion-webui/modules/processing.py", line 1140, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "/home/ak/sd/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 235, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "/home/ak/sd/stable-diffusion-webui/modules/sd_samplers_common.py", line 261, in launch_sampling
        return func()
      File "/home/ak/sd/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 235, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "/home/ak/sd/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "/home/ak/sd/stable-diffusion-webui/modules/sd_samplers_extra.py", line 71, in restart_sampler
        x = heun_step(x, old_sigma, new_sigma)
      File "/home/ak/sd/stable-diffusion-webui/modules/sd_samplers_extra.py", line 19, in heun_step
        denoised = model(x, old_sigma * s_in, **extra_args)
      File "/home/ak/sd/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/home/ak/sd/stable-diffusion-webui/modules/sd_samplers_cfg_denoiser.py", line 169, in forward
        x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
      File "/home/ak/sd/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/home/ak/sd/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "/home/ak/sd/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "/home/ak/sd/stable-diffusion-webui/modules/sd_hijack_utils.py", line 17, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "/home/ak/sd/stable-diffusion-webui/modules/sd_hijack_utils.py", line 28, in __call__
        return self.__orig_func(*args, **kwargs)
      File "/home/ak/sd/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 858, in apply_model
        x_recon = self.model(x_noisy, t, **cond)
      File "/home/ak/sd/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/home/ak/sd/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 1335, in forward
        out = self.diffusion_model(x, t, context=cc)
      File "/home/ak/sd/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/home/ak/sd/stable-diffusion-webui/modules/sd_unet.py", line 91, in UNetModel_forward
        return ldm.modules.diffusionmodules.openaimodel.copy_of_UNetModel_forward_for_webui(self, x, timesteps, context, *args, **kwargs)
      File "/home/ak/sd/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 802, in forward
        h = module(h, emb, context)
      File "/home/ak/sd/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/home/ak/sd/stable-diffusion-webui/extensions/sd-webui-animatediff/scripts/animatediff.py", line 23, in mm_tes_forward
        x = layer(x, context)
      File "/home/ak/sd/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/home/ak/sd/stable-diffusion-webui/extensions/sd-webui-animatediff/motion_module.py", line 85, in forward
        return self.temporal_transformer(input_tensor, encoder_hidden_states, attention_mask)
      File "/home/ak/sd/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/home/ak/sd/stable-diffusion-webui/extensions/sd-webui-animatediff/motion_module.py", line 142, in forward
        hidden_states = self.norm(hidden_states)
      File "/home/ak/sd/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "/home/ak/sd/stable-diffusion-webui/extensions-builtin/Lora/networks.py", line 459, in network_GroupNorm_forward
        return originals.GroupNorm_forward(self, input)
      File "/home/ak/sd/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/modules/normalization.py", line 273, in forward
        return F.group_norm(
      File "/home/ak/sd/stable-diffusion-webui/venv/lib/python3.9/site-packages/torch/nn/functional.py", line 2530, in group_norm
        return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
    RuntimeError: Expected weight to be a vector of size equal to the number of channels in input, but got weight of shape [1280] and input of shape [32, 2560, 12, 8]

---

@AK51
Copy link

AK51 commented Sep 17, 2023

If I disable ADetailer and restart SD, it works fine. Cheers

@vezleyj
Copy link

vezleyj commented Feb 3, 2024

ControlNet doesn't work with vid2vid or batch image input, please help! AnimateDiff can generate vid2vid, but ControlNet doesn't take any effect, e.g. openpose.

2024-02-03 21:38:36,597 - ControlNet - INFO - controlnet_default_config
2024-02-03 21:38:40,130 - ControlNet - INFO - ControlNet model control_v11p_sd15_openpose cab727d4 loaded.
*** Error running process: M:\stable-diffusion-webui-master\webui\extensions\sd-webui-controlnet\scripts\controlnet.py
Traceback (most recent call last):
File "M:\stable-diffusion-webui-master\webui\modules\scripts.py", line 718, in process
script.process(p, *script_args)
File "M:\stable-diffusion-webui-master\webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 1143, in process
self.controlnet_hack(p)
File "M:\stable-diffusion-webui-master\webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 1128, in controlnet_hack
self.controlnet_main_entry(p)
File "M:\stable-diffusion-webui-master\webui\extensions\sd-webui-animatediff\scripts\animatediff_cn.py", line 248, in hacked_main_entry
model_net.reset()
AttributeError: 'ControlModel' object has no attribute 'reset'


0%|
