No CUDA GPUs are available #139
Metaldandy asked this question in Q&A · Unanswered · 0 replies
Getting this output when hitting Queue Prompt. Windows 10, GTX 1660 Super.
D:\Programs\ComfyUI>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build
Set vram state to: NORMAL VRAM
Using xformers cross attention
Starting server
To see the GUI go to: http://127.0.0.1:8188
got prompt
D:\Programs\ComfyUI\python_embeded\lib\site-packages\safetensors\torch.py:99: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
with safe_open(filename, framework="pt", device=device) as f:
D:\Programs\ComfyUI\python_embeded\lib\site-packages\torch\_utils.py:776: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
return self.fget.__get__(instance, owner)()
D:\Programs\ComfyUI\python_embeded\lib\site-packages\torch\storage.py:899: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
storage = cls(wrap_storage=untyped_storage)
Traceback (most recent call last):
File "D:\Programs\ComfyUI\ComfyUI\execution.py", line 174, in execute
executed += recursive_execute(self.server, prompt, self.outputs, x, extra_data)
File "D:\Programs\ComfyUI\ComfyUI\execution.py", line 54, in recursive_execute
executed += recursive_execute(server, prompt, outputs, input_unique_id, extra_data)
File "D:\Programs\ComfyUI\ComfyUI\execution.py", line 54, in recursive_execute
executed += recursive_execute(server, prompt, outputs, input_unique_id, extra_data)
File "D:\Programs\ComfyUI\ComfyUI\execution.py", line 54, in recursive_execute
executed += recursive_execute(server, prompt, outputs, input_unique_id, extra_data)
[Previous line repeated 1 more time]
File "D:\Programs\ComfyUI\ComfyUI\execution.py", line 63, in recursive_execute
outputs[unique_id] = getattr(obj, obj.FUNCTION)(**input_data_all)
File "D:\Programs\ComfyUI\ComfyUI\nodes.py", line 244, in load_checkpoint
out = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=CheckpointLoader.embedding_directory)
File "D:\Programs\ComfyUI\ComfyUI\comfy\sd.py", line 776, in load_checkpoint_guess_config
fp16 = model_management.should_use_fp16()
File "D:\Programs\ComfyUI\ComfyUI\comfy\model_management.py", line 226, in should_use_fp16
if torch.cuda.is_bf16_supported():
File "D:\Programs\ComfyUI\python_embeded\lib\site-packages\torch\cuda\__init__.py", line 122, in is_bf16_supported
return torch.cuda.get_device_properties(torch.cuda.current_device()).major >= 8 and cuda_maj_decide
File "D:\Programs\ComfyUI\python_embeded\lib\site-packages\torch\cuda\__init__.py", line 674, in current_device
_lazy_init()
File "D:\Programs\ComfyUI\python_embeded\lib\site-packages\torch\cuda\__init__.py", line 247, in _lazy_init
torch._C._cuda_init()
RuntimeError: No CUDA GPUs are available
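The failure happens when PyTorch first tries to initialize CUDA (`torch._C._cuda_init()`), which usually means either a CPU-only PyTorch wheel is installed or the NVIDIA driver isn't visible to the process. A minimal diagnostic sketch (the `diagnose_torch_cuda` helper is hypothetical, not part of ComfyUI or PyTorch) to narrow down which case applies:

```python
# Hypothetical diagnostic helper: distinguishes "CPU-only torch build"
# from "CUDA build present but no GPU visible to the driver".
def diagnose_torch_cuda() -> str:
    try:
        import torch
    except ImportError:
        return "torch is not installed in this Python environment"
    # torch.version.cuda is None on CPU-only wheels (version like "2.x.y+cpu")
    if torch.version.cuda is None:
        return ("CPU-only torch build (%s); reinstall a CUDA-enabled wheel"
                % torch.__version__)
    # CUDA build installed, but initialization can still fail if the
    # NVIDIA driver is missing, too old, or the GPU is not visible.
    if not torch.cuda.is_available():
        return ("torch built for CUDA %s, but no GPU is available; "
                "check the NVIDIA driver with nvidia-smi" % torch.version.cuda)
    return "OK: CUDA device found: %s" % torch.cuda.get_device_name(0)


if __name__ == "__main__":
    print(diagnose_torch_cuda())
```

Running this with the same `python_embeded\python.exe` that launches ComfyUI shows which environment is actually in use; a `+cpu` version string would explain the traceback above.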