
TensorFlow GPU Nightmare! #379

Open
muhendis80 opened this issue Nov 30, 2024 · 2 comments

Comments

@muhendis80

Hi, I can't get TensorFlow to use the GPU. My computer has an RTX 4070 graphics card. I've tried installing every CUDA and cuDNN version, but TensorFlow still can't detect them and keeps using the CPU. On the other hand, when I use the same code and run PyTorch with the command pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124, PyTorch immediately accesses the GPU without any issues. Please provide a similarly simple solution for TensorFlow, as I can't figure this out.
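For reference, recent TensorFlow releases offer a similarly simple pip-based route on Linux and WSL2: the `and-cuda` extra pulls in matching CUDA and cuDNN wheels, so no system-wide CUDA install is needed. A sketch, assuming TensorFlow ≥ 2.14 in a Linux/WSL2 environment (native Windows builds dropped GPU support after TF 2.10):

```shell
# Install TensorFlow together with pip-packaged CUDA/cuDNN libraries
# (Linux / WSL2 only; native Windows GPU support ended at TF 2.10)
pip install --upgrade "tensorflow[and-cuda]"

# Then verify GPU visibility:
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```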

@ALOK158

ALOK158 commented Dec 20, 2024

Assuming you already have the latest versions of everything mentioned above:

First, verify whether TensorFlow can see your GPU with the following code:

import tensorflow as tf
print("Is GPU available:", tf.config.list_physical_devices('GPU'))

If that prints an empty list, make sure your PATH environment variable includes the CUDA and cuDNN directories.
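That environment check can be done with the standard library alone. A minimal sketch (the matching is a heuristic on directory names; your install locations may differ):

```python
import os

def find_cuda_entries(env=None):
    """Return (variable, entry) pairs from PATH and LD_LIBRARY_PATH
    whose directory names look CUDA- or cuDNN-related."""
    env = os.environ if env is None else env
    hits = []
    for var in ("PATH", "LD_LIBRARY_PATH"):
        for entry in env.get(var, "").split(os.pathsep):
            if "cuda" in entry.lower() or "cudnn" in entry.lower():
                hits.append((var, entry))
    return hits

if __name__ == "__main__":
    found = find_cuda_entries()
    if found:
        for var, entry in found:
            print(f"{var}: {entry}")
    else:
        print("No CUDA/cuDNN directories found on PATH or LD_LIBRARY_PATH")
```

If this prints nothing CUDA-related, TensorFlow has no way to locate the libraries, regardless of what is installed on disk.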

@AbstractEyes

AbstractEyes commented Jan 2, 2025

I had this exact same problem. I'm not getting anything through TensorFlow, but the GPU is definitely available in the environment.
I've been here for hours trying to debug it and have been through the wringer: PATH set, drivers reinstalled, correct CUDA versions, WSL2. None of it worked.
