I wanted to calculate pseudo likelihoods for my amino acid sequences using antiberty. I ran into the following error when trying to use the pseudo_log_likelihood function:
_nll = torch.nn.functional.cross_entropy(
File "C:\Users\anama\miniconda3\envs\plm\lib\site-packages\torch\nn\functional.py", line 3053, in cross_entropy
return torch._C._nn.cross_entropy_loss(input, target, weight, Reduction.get_enum(reduction), ignore_index, label_smoothing)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument target in method wrapper_CUDA__nll_loss2d_forward)
Looking briefly at your code, I believe this happens because the labels stay on the CPU while the logits are on CUDA when computing the NLL. I have CUDA on my laptop, and I checked and confirmed that my antiberty instance is placed on my GPU by default. Could you please look into this, or let me know if I am missing something? Thanks a lot!
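For reference, the mismatch is easy to reproduce outside antiberty: torch.nn.functional.cross_entropy requires the logits and the target tensor to be on the same device, so labels tokenized on the CPU have to be moved to the model's device first. A minimal sketch (not the antiberty source, just an illustration of the error and the fix):

import torch
import torch.nn.functional as F

# Logits as produced by a CUDA model: (batch, num_classes, seq_len)
logits = torch.randn(1, 20, 10, device="cuda")
# Tokenized labels left on the CPU: (batch, seq_len)
labels = torch.randint(0, 20, (1, 10))

# F.cross_entropy(logits, labels)  # raises the same device-mismatch RuntimeError

# Moving the labels to the logits' device resolves it
nll = F.cross_entropy(logits, labels.to(logits.device))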
Hi, I ran into the same issue today. Changing line 266 in the pseudo_log_likelihood function to labels = self.tokenizer.encode(" ".join(list(s)), return_tensors="pt")[:, 1:-1].to('cuda') worked fine for me. Hope it's helpful!
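That said, hardcoding 'cuda' would break on CPU-only machines. A device-agnostic variant of the same patch could move the labels to wherever the model lives instead; this is only a sketch, and it assumes the runner stores its device as self.device (an assumption about the surrounding code, not confirmed from the source):

# Hypothetical device-agnostic version of the patch above; self.device is
# assumed to hold the device the model was moved to at initialization.
labels = self.tokenizer.encode(
    " ".join(list(s)),
    return_tensors="pt",
)[:, 1:-1].to(self.device)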