
CUDA capability sm_86 is not compatible with the current PyTorch installation #203

Open
marceljhuber opened this issue Jul 13, 2022 · 1 comment


Hello!
I am using an RTX 3080 Ti and I can't figure out which PyTorch and CUDA versions to use in order to get it working.

  • The current CUDA version is 11.7.
  • The current PyTorch version is 1.12.0+cu102.
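
These can be double-checked from Python (a quick sanity-check snippet; torch.version.cuda is the CUDA version the wheel was built against, which is separate from the driver-level 11.7 that nvidia-smi reports):

import torch

print(torch.__version__)                    # 1.12.0+cu102
print(torch.version.cuda)                   # CUDA version the wheel was built against ('10.2' here)
print(torch.cuda.get_arch_list())           # compute capabilities compiled into this build
print(torch.cuda.get_device_capability(0))  # capability of the GPU itself, (8, 6) for an RTX 3080 Ti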

The full error message is:

Setting jit to False because torch version is not 1.7.1.
/home/user/.local/lib/python3.8/site-packages/torch/cuda/__init__.py:146: UserWarning:
NVIDIA GeForce RTX 3080 Ti with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.
If you want to use the NVIDIA GeForce RTX 3080 Ti GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))
Traceback (most recent call last):
File "/home/user/.local/bin/imagine", line 8, in
sys.exit(main())
File "/home/user/.local/lib/python3.8/site-packages/deep_daze/cli.py", line 151, in main
fire.Fire(train)
File "/home/user/.local/lib/python3.8/site-packages/fire/core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/home/user/.local/lib/python3.8/site-packages/fire/core.py", line 466, in _Fire
component, remaining_args = _CallAndUpdateTrace(
File "/home/user/.local/lib/python3.8/site-packages/fire/core.py", line 681, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "/home/user/.local/lib/python3.8/site-packages/deep_daze/cli.py", line 99, in train
imagine = Imagine(
File "/home/user/.local/lib/python3.8/site-packages/deep_daze/deep_daze.py", line 396, in init
self.clip_encoding = self.create_clip_encoding(text=text, img=img, encoding=clip_encoding)
File "/home/user/.local/lib/python3.8/site-packages/deep_daze/deep_daze.py", line 424, in create_clip_encoding
encoding = self.create_text_encoding(text)
File "/home/user/.local/lib/python3.8/site-packages/deep_daze/deep_daze.py", line 432, in create_text_encoding
text_encoding = self.perceptor.encode_text(tokenized_text).detach()
File "/home/user/.local/lib/python3.8/site-packages/deep_daze/clip.py", line 525, in encode_text
x = self.token_embedding(text).type(self.dtype) # [batch_size, n_ctx, d_model]
File "/home/user/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/user/.local/lib/python3.8/site-packages/torch/nn/modules/sparse.py", line 158, in forward
return F.embedding(
File "/home/user/.local/lib/python3.8/site-packages/torch/nn/functional.py", line 2199, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.


ghost commented Aug 10, 2022

Hi!

The torch build is too old for the 3080 Ti: "1.12.0+cu102" was compiled against CUDA 10.2, which has no sm_86 kernels; today it should be "1.12.1+cu116". Uninstall torch and the related torch packages, then reinstall the matching CUDA 11.6 builds. If you're using pipenv, this Pipfile works for me:

(Note that pipenv lock is slow, roughly in proportion to your download speed, because it downloads every torch wheel matching the version spec, about 1.7 GB each, just to compute its hash.)

[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"

[[source]]
# https://github.com/pypa/pipenv/issues/4961
url = "https://download.pytorch.org/whl/cu116/"
verify_ssl = true
name = "pytorch"

[packages]
pytorch-lightning = "*"
torch = {index="pytorch", version="==1.12.1+cu116"}
torchaudio = {index="pytorch", version="*"}
torchinfo = "*"
torchnet = "*"
torchvision = {index="pytorch", version="*"}
tensorboard = "*"

[dev-packages]

[requires]
python_version = "3.10.6"
