cudnn ops64_9.dll is not found #1080

Closed

jhj0517 opened this issue Oct 22, 2024 · 14 comments

jhj0517 commented Oct 22, 2024

Just a note about what I've experienced here:

Since OpenNMT/CTranslate2#1803 was merged and the ctranslate2 version is now bumped to 4.5.0,

I got a "cudnn ops64_9.dll is not found" error, which indicates a cuDNN version mismatch.
This only happens with torch <= 2.3.1 and is fixed by upgrading to torch >= 2.4.0 (tested on Windows).

It seems faster-whisper now needs torch >= 2.4.0 with the latest CTranslate2 update.
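
For reference, a minimal sketch of an upgrade along these lines (the cu124 index URL is an assumption; any recent CUDA 12.x wheel index should work):

pip install --upgrade "torch>=2.4.0" torchaudio --index-url https://download.pytorch.org/whl/cu124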


BBC-Esq commented Oct 22, 2024

Can you try something? Try installing the following libraries into your virtual environment:

pip install nvidia-cudnn-cu12==9.5.0.50
pip install nvidia-cuda-nvrtc-cu12==12.4.127
pip install nvidia-cuda-runtime-cu12==12.4.127
pip install nvidia-cublas-cu12==12.4.5.8

Then, in the script that runs faster-whisper, include the following code snippet before any other imports:

import sys
import os
from pathlib import Path

def set_cuda_paths():
    venv_base = Path(sys.executable).parent.parent
    nvidia_base_path = venv_base / 'Lib' / 'site-packages' / 'nvidia'
    cuda_path = nvidia_base_path / 'cuda_runtime' / 'bin'
    cublas_path = nvidia_base_path / 'cublas' / 'bin'
    cudnn_path = nvidia_base_path / 'cudnn' / 'bin'
    paths_to_add = [str(cuda_path), str(cublas_path), str(cudnn_path)]
    env_vars = ['CUDA_PATH', 'CUDA_PATH_V12_4', 'PATH']
    
    for env_var in env_vars:
        current_value = os.environ.get(env_var, '')
        new_value = os.pathsep.join(paths_to_add + [current_value] if current_value else paths_to_add)
        os.environ[env_var] = new_value

set_cuda_paths()
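
For example, continuing after set_cuda_paths() (a minimal usage sketch; the model size and audio file name are placeholders):

from faster_whisper import WhisperModel  # imported only after the DLL paths are set

model = WhisperModel("base", device="cuda", compute_type="float16")
segments, info = model.transcribe("audio.wav")  # "audio.wav" is a placeholder
for segment in segments:
    print(segment.start, segment.end, segment.text)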

Lastly, pip install the proper version of torch and torchaudio:

  • If using Python 3.11:
pip install https://download.pytorch.org/whl/cu124/torch-2.5.0%2Bcu124-cp311-cp311-win_amd64.whl#sha256=270e004f028b2fc6886388ca01b5f22389f3a7babbd2004a1eec82aa7f669c12
pip install https://download.pytorch.org/whl/cu124/torchaudio-2.5.0%2Bcu124-cp311-cp311-win_amd64.whl#sha256=9a51aea447519fd6282a80308f15cc6c31138ac92822925a5364f0fd6e4f28f9
  • If using Python 3.12:
pip install https://download.pytorch.org/whl/cu124/torch-2.5.0%2Bcu124-cp312-cp312-win_amd64.whl#sha256=cc08ff3a26dbba92b4d9ae007a64da294270abe662fdac197f1e0a4880afe3bd
pip install https://download.pytorch.org/whl/cu124/torchaudio-2.5.0%2Bcu124-cp312-cp312-win_amd64.whl#sha256=0ce88068a8880da5cad7bff3cc28a12d8f797e597bacb9a292e1f2f8164ff0cb

Please Note

If you pip install faster-whisper as usual, you MUST pip install torch and torchaudio AFTER installing faster-whisper; otherwise, faster-whisper will keep the versions that it currently specifies as its dependencies.
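
In other words, the install order would look like this (a sketch; the wheel URLs are the same cu124 links listed above):

pip install faster-whisper
pip install <torch cu124 wheel URL from above>
pip install <torchaudio cu124 wheel URL from above>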

Did it work?

  • This should allow you to avoid installing CUDA system-wide at all, or to keep an entirely different version installed system-wide because other programs/libraries need it. The same applies to torch and torchaudio.
  • Granted, you must be inside the virtual environment when you run your faster-whisper script.


jhj0517 commented Oct 22, 2024

Good to know. Anyway,

pip install faster-whisper

didn't work before because it installed the latest torch, and the latest torch was not compatible with the latest ctranslate2, so I had to manually downgrade torch.

Now that ctranslate2 is updated to 4.5.0 and the latest ctranslate2 and torch are compatible, just

pip install faster-whisper

works. I'm happy with it now.


BBC-Esq commented Oct 22, 2024

So can you recap exactly what you did, please? Did you install faster-whisper first, then pip install a different version of torch/torchaudio? If so, which versions? Were there any other dependencies that you had to pip install, overriding the ones that faster-whisper installs by default? Thanks.


jhj0517 commented Oct 22, 2024

No problem. I used a requirements.txt and installed via pip install -r requirements.txt:

--extra-index-url https://download.pytorch.org/whl/cu121
torch==2.3.1
torchaudio==2.3.1
faster-whisper

This was a workaround following #958; it was valid at the time because CTranslate2 didn't support cuDNN 9.

Now that CTranslate2 is updated, it produces another missing-cuDNN-component error with torch==2.3.1 (on Windows 10).
Editing the requirements.txt to install the latest torch worked:

--extra-index-url https://download.pytorch.org/whl/cu124
torch
torchaudio
faster-whisper
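
After installing, a quick way to confirm which versions actually ended up in the environment (a small generic check, nothing specific to this issue):

python -c "import torch, ctranslate2; print(torch.__version__, torch.version.cuda, ctranslate2.__version__)"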


BBC-Esq commented Oct 22, 2024

Ok great...Torch 2.5.0 is the latest stable version...so basically, faster-whisper is ALREADY compatible with torch 2.5.0 (and the corresponding version of torchaudio)? And what version of CUDA are you running by chance? Did you install the individual libraries like I suggested or are you relying on a system-wide installation? Thanks again.


jhj0517 commented Oct 22, 2024

I'm using CUDA 12.4 from the official NVIDIA site:

https://developer.nvidia.com/cuda-12-4-0-download-archive

And as far as I know, CUDA 12.6 also has no problem running with the latest torch.
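
For anyone who wants to verify their own setup, two standard checks (generic CUDA commands, not specific to faster-whisper): nvcc --version reports the installed toolkit, and nvidia-smi reports the driver and the highest CUDA version it supports.

nvcc --version
nvidia-smi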


shkstar commented Oct 23, 2024

I'm encountering an issue when running Faster Whisper in a Google Colab environment. The code crashes, possibly related to ctranslate2, with the following error:

Unable to load any of {libcudnn_ops.so.9.1.0, libcudnn_ops.so.9.1, libcudnn_ops.so.9, libcudnn_ops.so}

Could you advise on how to resolve this error when running Faster Whisper in Colab?

@kacpersh

+1 to the Colab error above.


dgoryeo commented Oct 23, 2024

Downgrade ctranslate2:
!pip install ctranslate2==4.4.0

@kacpersh

!pip install ctranslate2==4.4.0

Can confirm, it works! Thank you!


jhj0517 commented Oct 23, 2024

Yes, you have to downgrade to ctranslate2==4.4.0. jhj0517/Whisper-WebUI#348

As far as I know, Colab uses Ubuntu, so at first I thought the latest CTranslate2 build might not be compatible with Ubuntu.

It is actually more of a CUDA 12.1 vs ctranslate2==4.5.0 version incompatibility: OpenNMT/CTranslate2#1806 (comment).

Colab uses CUDA 12.2 and torch 2.5.0+cu121 by default.

@MahmoudAshraf97

@shkstar @kacpersh please check #1086.

@leonheart58

I'm using CUDA 12.4, torch 2.4, and Python 3.10,
and every time I want to use subsai with faster whisper,
this appears:
"caould not locate cudnn_ips_infer64_8.dll". A similar error happened before, but I had a different torch and CUDA then.


BBC-Esq commented Oct 28, 2024

I didn't understand your question, I'm sorry, because of the typos. "subsai"? "cudnn_ips..."? And are you describing a different setup: "but i had a different torch and cuda"? Can you clearly say what worked previously and what's not working now? I'm a little confused.
