RuntimeError #3
Please screenshot your webui.
I got the extension to run; I had to restart.
Model weights were manually downloaded. This is what the terminal shows: 2023-07-18 11:44:03,725 - AnimateDiff - INFO - AnimateDiff process start with video length 8, FPS 8, motion module mm_sd_v15.ckpt. I did check the outputs\txt2img-images\AnimateDiff folder.
Huh, it seems that this is not your problem. How about setting the frame number to 16? They trained their motion modules with a frame number of 16. Also, there is no need to change the batch size; it will be replaced by the video frame number internally.
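A minimal sketch of what that internal replacement might look like (the function and parameter names here are hypothetical, not the extension's actual code):

    def before_process(p, video_length):
        # Hypothetical illustration: whatever batch size the user set in the UI
        # is overwritten with the requested number of video frames, so all
        # frames of the animation are generated in one batch.
        p.batch_size = video_length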
Same problem. Each time I restart Webui it runs once, but then the second time the error appears.
Weird. You both can also look at txt2img-images/{today's date}/ and check the images that look similar to your GIF. Are there 8/16 images? @RyanzW0521 please post your terminal log with the error message.
Unfortunately, my folder only has one image.
And I found that after the error is reported, even if you disable the extension, you still cannot generate other images until you restart.
@RyanzW0521 Please also post a screenshot of your webui that produces this error. I will look into the reason tomorrow.
For everyone who comes here: I am not sure which features of WebUI are not compatible with AnimateDiff. Please feel free to post your terminal logs and a screenshot. Please do not use xformers at this time. The very first thing you can do is try reproducing my demo: https://civitai.com/models/30240?modelVersionId=78775
Prompt: masterpiece, best quality, 1girl, solo, cherry blossoms, hanami, pink flower, white flower, spring season, wisteria, petals, flower, plum blossoms, outdoors, falling petals, white hair, black eyes
Negative prompt: badhandv4, easynegative, ng_deepnegative_v1_75t, verybadimagenegative_v1.3, bad-artist, bad_prompt_version2-neg, teeth
The majority of the negative prompts are textual embeddings from civitai. Please download them manually.
Working on Win 11 / 3090, AUTOMATIC1111 1.4.0.
Not working. Same error as above. Win 10 / 3090.
Same error here. Downloaded models manually. File "C:\Users\x\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2530, in group_norm
@legarth @robbsaber Please include a screenshot of your webui.
@continue-revolution Screenshot below.
I fixed the RuntimeError issue by updating my stable-diffusion-webui to version 1.4.1.
Clean install of Automatic1111 and got this bug, could someone give me any ideas? *** Error completing request
Now working for me (3090, runpod.io, this template: https://www.runpod.io/console/gpu-browse?template=runpod-stable-unified ).
But this is absolutely amazing! Thank you! What do you think, is it possible to merge this UNet layer into the model directly?
Found the cause: it is a problem caused by half precision (fp16).
Got the error on WebUI 1.3.2 but it is fixed on 1.4.1. Thanks!
Updated webui to 1.4.1, still get an error:
2023-07-18 15:34:59,288 - AnimateDiff - INFO - AnimateDiff process start with video length 16, FPS 8, motion module mm_sd_v15.ckpt.
@robbsaber Do not use xformers. I will remove xformers in this extension tomorrow.
Removing --xformers from webui-user.bat worked. Thanks!
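For anyone unsure where that flag lives: a sketch of a default webui-user.bat with --xformers removed from COMMANDLINE_ARGS, assuming a stock Windows install. The same variable is where a flag like --no-half, mentioned further down, would go.

    @echo off
    set PYTHON=
    set GIT=
    set VENV_DIR=
    set COMMANDLINE_ARGS=
    call webui.bat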
Yeah, it seems the automatic download isn't working. Edit: for reference, the "shape issue" is caused by a model not being loaded correctly; loading the model will fix it.
@gitmylo See https://github.com/continue-revolution/sd-webui-animatediff#how-to-use item 2. Since there are 4 sources to download the motion modules from, I decided to remove the auto-download feature in v1.1.0 and let users choose. Which model is not loaded correctly, SD or the motion module? How do you actually "load" it?
The shape issue occurs when you don't have the motion models installed and attempt to run the network twice: the first run gives a warning and produces random frames; the second run crashes saying the vectors are not the expected size.
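The resulting GroupNorm mismatch can be reproduced in isolation. A minimal PyTorch sketch of the failure mode (illustrative only, not the extension's code; the shapes are taken from the traceback below):

    import torch
    import torch.nn.functional as F

    # The motion module's norm layer carries affine parameters sized for 1280 channels...
    weight = torch.ones(1280)
    bias = torch.zeros(1280)
    # ...but the incoming activation arrives with 2560 channels.
    x = torch.randn(4, 2560, 8, 8)
    # Raises: RuntimeError: Expected weight to be a vector of size equal to the
    # number of channels in input, but got weight of shape [1280] and input of
    # shape [4, 2560, 8, 8]
    F.group_norm(x, 32, weight, bias)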
Win 11 / 3090, A1111 extension. Idk why it's not working: I downloaded the ckpt into the right folder and disabled --xformers in webui.bat, but I still get one frame in the gif and on the second generation it errors out.
EDIT: It works now. As you can see, my a1111 was 1.3.x; I did a git pull and now it's working on 1.4.1 :)
Running with --no-half fixed the "RuntimeError: Expected weight..." bug on an M2, but colors are still washed-out/grey. Gonna look at running the guoyww script directly and see if it makes a difference.
I'm running auto1111 on RunDiffusion and can't update from v1.3.2 to v1.4.1. Any chance this can be made to work with 1.3.2 somehow?
What do you mean you can't update? Just do a git pull.
Any word on this being fixed?
As was stated earlier, this bug is a model limitation. We need better models.
Why does it work fine once with --xformers enabled but not twice? Isn't xformers very beneficial to have?
@J-Cott I got a weird error when xformers was applied to the motion module and I had no idea how to resolve it (the error is very ambiguous). At this time, even if you enable xformers, it will not be applied to the motion module. I have no idea what else will happen because I really don't have time to test; I have to prioritize other problems.
I believe I found the cause of my (and probably other people's) errors: the instructions say not to change the .ckpt name, but when the motion module is downloaded from Civitai (instead of the very slow Google Drive link) the file has a different name (animatediffMotion_v15.ckpt), so the extension cannot find the model (there is an error in the console telling you to download it manually). I renamed it to "mm_sd_v15.ckpt" and it now works fine.
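For example, from a command prompt in the webui root, a rename along these lines should do it (the paths assume the default install layout shown later in this thread):

    cd extensions\sd-webui-animatediff\model
    ren animatediffMotion_v15.ckpt mm_sd_v15.ckpt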
Hi, I did the git pull and SD is up to date, as is your extension. However, after running once I get the error and need to close my terminal and restart SD. Is there any solution or workaround? Thx
I have received your email, thank you for your hard work!
@AK51 I have no idea why. If both WebUI and this extension are up to date, you may want to remove every folder inside webui/repositories/ and let webui re-install them.
Hi, it works now. I did a git pull and forgot to restart. :P Thx
I will close this issue for now. If you still have the shape issue, please update your webui. If you still have a problem after that, feel free to submit a new issue.
This issue is still present, don't close it. I used AnimateDiff and ADetailer and it popped up.
@2blackbar A fresh re-install of everything, including the venv, will almost certainly resolve your problem. Otherwise I cannot promise anything.
Are you guys using ADetailer? From my experiments, when using ADetailer it works once, and then the second time, regardless of whether ADetailer is on or not, it shows "RuntimeError: Expected weight to be a vector of size equal to the number of channels in input, but got weight of shape [1280] and input of shape [32, 2560, 8, 8]". If you use AnimateDiff without ADetailer you can generate as much as you want without this RuntimeError. Still, I ask that the creator of this extension fix this issue, since ADetailer improves the quality of the generation.
@ExoSilica ADetailer has its own backend for image generation. I'm not sure what they did with their own inpainting + CN backend, and I won't have time to read their source code. If you really want to automate inpainting, I could consider connecting my own segment anything extension with this one, which would be much easier, but not now. I'm busy with my real life at this time.
Another experiment I conducted showed that face restoration does not affect the extension's ability to execute multiple times. Therefore, it could serve as a good alternative to the ADetailer extension until the developer resolves the RuntimeError. @continue-revolution, no problem mate. I'm just trying to help other users who may have encountered the same issue and understand what might be causing it; it took me hours of restarting and retrying to discover this. Please work on it whenever you have free time. For now, I'll be using face restoration, although it slightly reduces the authenticity (likeness) of the model. PS: If you do get a chance to investigate this further, see if you can make this extension compatible with Roop, which is essentially a face restoration technique that utilizes a custom image to assist with the diffusion.
Hi,
There is still the same error....
If I disable ADetailer and restart SD, it works fine. Cheers
ControlNet doesn't work under vid2vid or batch image input, please help! AnimateDiff can generate vid2vid but ControlNet doesn't take any effect, e.g. openpose.
2024-02-03 21:38:36,597 - ControlNet - INFO - controlnet_default_config
Expected weight to be a vector of size equal to the number of channels in input, but got weight of shape [1280] and input of shape [16, 2560, 9, 9]
I have the mm_sd_v15.ckpt downloaded in stable-diffusion-webui\extensions\sd-webui-animatediff\model.
Complete log:
2023-07-18 11:19:21,548 - AnimateDiff - INFO - AnimateDiff process start with video length 2, FPS 8, motion module mm_sd_v15.ckpt.
2023-07-18 11:19:21,551 - AnimateDiff - INFO - Injecting motion module mm_sd_v15.ckpt into SD1.5 UNet input blocks.
2023-07-18 11:19:21,552 - AnimateDiff - INFO - Injecting motion module mm_sd_v15.ckpt into SD1.5 UNet output blocks.
2023-07-18 11:19:21,552 - AnimateDiff - INFO - Injection finished.
0%| | 0/30 [00:00<?, ?it/s]
*** Error completing request
*** Arguments: ('task(ba8qq6a5sxydhr6)', 'Beautiful Scenery', '', [], 30, 16, False, False, 1, 2, 7.5, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.33, 1.5, '4x_UniversalUpscalerV2-Sharper_103000_G', 10, 0, 0, 19, '', '', [], <gradio.routes.Request object at 0x000002E7E74F5420>, 0, 0, False, 'Horizontal', '1,1', False, '0.2', False, False, 'female', True, 1, True, -1.0, [], [], [], [], False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': <object object at 0x000002E7E6FB5740>}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': <object object at 0x000002E7E6FB5720>}, False, 7, 100, 'Constant', 0, 'Constant', 0, 4, True, 2, 8, 'mm_sd_v15.ckpt', <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002E7E5CB6B60>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002E7E5C86980>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002E7E750FE20>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002E7E750CA90>, None, False, '0', 'G:\stablediffusion\stable-diffusion-webui\extensions/sd-webui-faceswap/models\inswapper_128.onnx', 'CodeFormer', 1, '', 1, 1, False, True, False, False, 0, None, [], 0, False, [], [], False, 0, 1, False, False, 0, None, [], -2, False, [], False, 0, None, None, False, False, 0, 1, 1, 0, 0, 0, 0, False, 'Default', False, False, 'Euler a', 0.95, 0.75, 'zero', 'pos', 'linear', 0.2, 0.0, 0.75, None, 'Lanczos', 1, 0, 0, True, 0.3, 'Latent', 0.55, 0.3, 0.2, 0.2, [], False, 1.5, 1.2, False, '', '1', 'from modules.processing import process_images\n\np.width = 768\np.height = 768\np.batch_size = 2\np.steps = 10\n\nreturn process_images(p)', 2, 0, 0, 384, 384, False, False, True, True, True, 1, '', '', 8, True, 16, 'Median cut', False, None, None, '', '', '', '', 'Auto rename', {'label': 'Upload avatars config'}, 'Open outputs directory', 'Export to WebUI style', True, {'label': 'Presets'}, {'label': 'QC preview'}, '', [], 'Select', 'QC scan', 'Show pics', None, False, False, 'positive', 'comma', 0, False, False, 
'', 'Positive', 0, ', ', True, 32, 0, 'Median cut', 'luminance', False, 'Illustration', 'svg', True, True, False, 0.5, True, 16, True, 16, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, 'Not set', True, True, '', '', '', '', '', 1.3, 'Not set', 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 1.1, 'Not set', 1.1, 'Not set', 1.1, 'Not set', 1.1, 'Not set', 1.1, 'Not set', 1.1, 'Not set', False, 'None', 'Not set', True, False, '', '', '', '', '', 1.3, 'Not set', 'Not set', 'Not set', 1, 1.3, 'Not set', 'Not set', 'Not set', 'Not set', 'Not set', 'Not set', 1.3, 1.3, 1.3, 'Not set', 'Not set', 1.3, True, True, 'Disabled', None, None, False, None, None, False, None, None, False, None, None, False, 50) {}
Traceback (most recent call last):
File "G:\stablediffusion\stable-diffusion-webui\modules\call_queue.py", line 58, in f
res = list(func(*args, **kwargs))
File "G:\stablediffusion\stable-diffusion-webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\modules\txt2img.py", line 62, in txt2img
processed = processing.process_images(p)
File "G:\stablediffusion\stable-diffusion-webui\modules\processing.py", line 639, in process_images
res = process_images_inner(p)
File "G:\stablediffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\modules\processing.py", line 759, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "G:\stablediffusion\stable-diffusion-webui\modules\processing.py", line 1012, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "G:\stablediffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 464, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "G:\stablediffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 303, in launch_sampling
return func()
File "G:\stablediffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 464, in
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "G:\stablediffusion\stable-diffusion-webui\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "G:\stablediffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 183, in forward
x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
File "G:\stablediffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "G:\stablediffusion\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in call
return self.__orig_func(*args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "G:\stablediffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
out = self.diffusion_model(x, t, context=cc)
File "G:\stablediffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\modules\sd_unet.py", line 91, in UNetModel_forward
return ldm.modules.diffusionmodules.openaimodel.copy_of_UNetModel_forward_for_webui(self, x, timesteps, context, *args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 802, in forward
h = module(h, emb, context)
File "G:\stablediffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py", line 21, in mm_tes_forward
x = layer(x, context)
File "G:\stablediffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\motion_module.py", line 79, in forward
hidden_states = self.temporal_transformer(hidden_states, encoder_hidden_states, attention_mask)
File "G:\stablediffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\motion_module.py", line 144, in forward
hidden_states = self.norm(hidden_states)
File "G:\stablediffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\normalization.py", line 273, in forward
return F.group_norm(
File "G:\stablediffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2530, in group_norm
return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: Expected weight to be a vector of size equal to the number of channels in input, but got weight of shape [1280] and input of shape [4, 2560, 8, 8]