Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: /content/drive/MyDrive/ComfyUI/models/insightface/models/antelopev2/glintr100.onnx recognition ['None', 3, 112, 112] 127.5 127.5
2024-02-29 12:26:58.549731581 [E:onnxruntime:Default, provider_bridge_ort.cc:1548 TryGetProviderInfo_CUDA] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1209 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_cuda.so with error: libcublasLt.so.11: cannot open shared object file: No such file or directory
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: /content/drive/MyDrive/ComfyUI/models/insightface/models/antelopev2/scrfd_10g_bnkps.onnx detection [1, 3, '?', '?'] 127.5 128.0
set det-size: (640, 640)
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/diffusers/configuration_utils.py", line 376, in load_config
config_file = hf_hub_download(
File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_validators.py", line 110, in _inner_fn
validate_repo_id(arg_value)
File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_validators.py", line 158, in validate_repo_id
raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/content/drive/MyDrive/ComfyUI/models/controlnet/ControlNetModeldiffusion_pytorch_model.safetensors'. Use repo_type argument if needed.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/content/drive/MyDrive/ComfyUI/execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/content/drive/MyDrive/ComfyUI/execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/content/drive/MyDrive/ComfyUI/execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/content/drive/MyDrive/ComfyUI/custom_nodes/comfyui-instantId-faceswap/node.py", line 206, in create_pipeline
controlnet = ControlNetModel.from_pretrained(
File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 569, in from_pretrained
config, unused_kwargs, commit_hash = cls.load_config(
File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/diffusers/configuration_utils.py", line 411, in load_config
raise EnvironmentError(
OSError: We couldn't connect to 'https://huggingface.co/' to load this model, couldn't find it in the cached files and it looks like /content/drive/MyDrive/ComfyUI/models/controlnet/ControlNetModeldiffusion_pytorch_model.safetensors is not the path to a directory containing a config.json file.
Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/diffusers/installation#offline-mode'.
Prompt executed in 2.39 seconds
FETCH DATA from: /content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-Manager/.cache/1514988643_custom-node-list.json
FETCH DATA from: /content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI-Manager/.cache/1742899825_extension-node-map.json
/content/drive/MyDrive/ComfyUI/models/controlnet/ControlNetModeldiffusion_pytorch_model.safetensors
Interesting. Try adding a slash at the end of "controlnet_name": instead of "/ControlNetModel", use "/ControlNetModel/".
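For reference, a minimal sketch of what is likely happening, assuming the node builds the ControlNet path by plain string concatenation of the configured "controlnet_name" and the weight filename (the actual node.py code may differ; variable names below are illustrative). Note that diffusers' from_pretrained wants the model directory containing config.json, not the .safetensors file itself, which is why the trailing slash matters here:

import os
from diffusers import ControlNetModel

# Paths mirror the ones in the log above; they are examples, not the node's code.
controlnet_dir = "/content/drive/MyDrive/ComfyUI/models/controlnet"
controlnet_name = "ControlNetModel"                     # as set in the workflow
weight_file = "diffusion_pytorch_model.safetensors"

# Without a trailing slash, concatenation drops the separator between folder and file:
broken = controlnet_dir + "/" + controlnet_name + weight_file
# -> .../controlnet/ControlNetModeldiffusion_pytorch_model.safetensors  (no '/')

# With a trailing slash (or an explicit join) the path points at the model folder,
# which is what diffusers expects: a directory with config.json next to the weights.
model_dir = os.path.join(controlnet_dir, controlnet_name)

controlnet = ControlNetModel.from_pretrained(
    model_dir,                # directory containing config.json + the .safetensors weights
    local_files_only=True,    # avoids the Hugging Face Hub lookup seen in the traceback
)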
I was running ComfyUI from Google Drive in Google Colab.