LoadLibrary failed with error 126 when trying to load onnxruntime_providers_cuda.dll #20049
Comments
(1) Try installing the VC runtime according to https://onnxruntime.ai/docs/install/#requirements
@kartikpodugu
I tried this, but I still see the same error. Can you elaborate on what this is, and why you feel it will solve the issue I reported? I am a bit clueless here.
@kartikpodugu, user_compute_stream is not supported in 1.17.1. Are you using a nightly build, or a build from source? I tested on a fresh Windows 11 machine in Azure. I only installed the latest VC runtime (https://aka.ms/vs/17/release/vc_redist.x64.exe), CUDA 12.2, and Python 3.11, and nothing else (I did not install cuDNN, since torch bundles cuDNN and torch is imported first in the script). Then I created a Python venv and installed torch 2.2.1+cu121 and onnxruntime-gpu 1.17.1 for CUDA 12. The following script ran fine after that:
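(The script itself was not captured in this excerpt. A minimal sketch of the kind of check it describes — importing torch before onnxruntime so torch's bundled CUDA/cuDNN DLLs are loadable first — might look like this; the function name is illustrative, not from the original script:)

```python
# Hypothetical sketch; the original script was not captured in this thread.
# torch is imported before onnxruntime so that the CUDA/cuDNN DLLs bundled
# with torch are already loaded, per the comment above.
def check_cuda_ep():
    try:
        import torch  # noqa: F401  (loads its bundled CUDA/cuDNN DLLs first)
        import onnxruntime as ort
    except ImportError as exc:
        return f"missing package: {exc.name}"
    # True only if the CUDA execution provider DLLs loaded successfully
    return "CUDAExecutionProvider" in ort.get_available_providers()

print(check_cuda_ep())
```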
Using Dependency Walker, we can see that the external DLLs used by onnxruntime-gpu are a subset of those shipped with torch 2.2.1+cu121.
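(LoadLibrary error 126 means a dependent DLL could not be located. A stdlib-only sketch for checking whether the CUDA DLLs are discoverable on PATH — the DLL file names below are assumptions for CUDA 12.x / cuDNN 8.x on Windows:)

```python
import os

def find_in_path(dll_name):
    """Return the first PATH location of dll_name, or None if absent.
    LoadLibrary error 126 ("module not found") usually means one of the
    dependent DLLs is missing from every directory on PATH."""
    for d in os.environ.get("PATH", "").split(os.pathsep):
        candidate = os.path.join(d, dll_name)
        if os.path.isfile(candidate):
            return candidate
    return None

# DLL names assumed for CUDA 12.x / cuDNN 8.x on Windows
for dll in ("cudart64_12.dll", "cublas64_12.dll", "cudnn64_8.dll"):
    print(dll, "->", find_in_path(dll) or "NOT FOUND")
```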
@tianleiwu I tried this and it helped resolve my error.
True, I removed user_compute_stream and it worked.
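(For illustration, a hedged sketch of passing CUDA EP options with user_compute_stream omitted; device_id is a documented CUDA execution provider option, and the model path is a placeholder:)

```python
# Sketch: "user_compute_stream" is omitted because it is not supported
# in onnxruntime-gpu 1.17.1 (see the comments above).
cuda_options = {
    "device_id": 0,
    # "user_compute_stream": ...,  # unsupported in 1.17.1; remove it
}
providers = [("CUDAExecutionProvider", cuda_options), "CPUExecutionProvider"]

# sess = ort.InferenceSession("model.onnx", providers=providers)  # placeholder path
print(providers)
```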
Had this error. My problem was that the ONNX Runtime version I was using did not support the TensorRT version I was using.
[SOLVED] |
Describe the issue
GPU: NVIDIA RTX 3060
Operating System : Windows 11
Python: 3.11.8
ONNX version: 1.15.0
ONNX Runtime version: 1.17.1 (installed using pip install onnxruntime)
ONNX Runtime GPU version: 1.17.1 (installed using 'pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/')
torch.cuda.is_available() is True
CUDA 12.2.1
CUDNN 8.9.2.26
CUDA bin path is in the PATH environment variable.
nvidia-smi shows 12.2
nvcc --version shows 12.2
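(The nvidia-smi / nvcc checks above can be scripted; a stdlib-only sketch:)

```python
import shutil
import subprocess

def tool_version(tool, flag="--version"):
    """Return the tool's version output, or None if it is not on PATH."""
    path = shutil.which(tool)
    if path is None:
        return None
    return subprocess.run([path, flag], capture_output=True, text=True).stdout

# Both tools accept --version on recent CUDA/driver releases
for tool in ("nvcc", "nvidia-smi"):
    out = tool_version(tool)
    print(tool, "found" if out is not None else "not on PATH")
```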
According to https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements, the CUDA execution provider is available in the execution providers list.
When I try to create an ORT session, I run into the following problem.
************* EP Error ***************
EP Error D:\a_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:857 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasnt able to be loaded. Please install the correct version of CUDA andcuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.
when using ['CUDAExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
To reproduce
Windows 11
Python 3.11.8
Create virtual environment
onnx 1.15.0
onnxruntime 1.17.1
onnxruntime-gpu 1.17.1
CUDA 12.2.1
CUDNN 8.9.2.26
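(The pinned package versions above can be verified inside the virtual environment; a stdlib sketch using importlib.metadata, with the helper name being illustrative:)

```python
from importlib.metadata import version, PackageNotFoundError

def check_versions(expected):
    """Map package -> (wanted version, installed version or None)."""
    report = {}
    for pkg, want in expected.items():
        try:
            have = version(pkg)
        except PackageNotFoundError:
            have = None
        report[pkg] = (want, have)
    return report

# Versions taken from the reproduction steps above
expected = {"onnx": "1.15.0", "onnxruntime": "1.17.1", "onnxruntime-gpu": "1.17.1"}
for pkg, (want, have) in check_versions(expected).items():
    print(f"{pkg}: expected {want}, installed {have}")
```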
Urgency
No response
Platform
Windows
OS Version
Windows 11
ONNX Runtime Installation
Released Package
ONNX Runtime Version or Commit ID
1.17.1
ONNX Runtime API
Python
Architecture
X64
Execution Provider
CUDA
Execution Provider Library Version
CUDA 12.2