If you're working with multiple GPUs and the input tensor `x` is not on `cuda:0`, the following line causes the tensor to land on the default GPU. This produces a device mismatch in any later operation on `x`, as other tensors are likely still on the original GPU.
torchmetrics/src/torchmetrics/utilities/data.py, line 218 in 50e7da3
A simple reference to `x.device` can solve this problem:

`return x.cpu().cumsum(dim=dim, dtype=dtype).cuda(x.device)`
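For context, here is a minimal sketch of what the patched utility could look like. The function name `_cumsum` and the deterministic-mode guard are assumptions inferred from the file reference above, not a verbatim copy of the torchmetrics source:

```python
from typing import Optional

import torch
from torch import Tensor


def _cumsum(x: Tensor, dim: int = 0, dtype: Optional[torch.dtype] = None) -> Tensor:
    # Hypothetical sketch: cumsum has no deterministic CUDA kernel, so the
    # tensor is routed through the CPU. A bare `.cuda()` on the way back
    # lands it on the default GPU (cuda:0); passing `x.device` keeps the
    # result on the GPU it came from.
    if x.is_cuda and torch.are_deterministic_algorithms_enabled():
        return x.cpu().cumsum(dim=dim, dtype=dtype).cuda(x.device)
    return torch.cumsum(x, dim=dim, dtype=dtype)
```

With this change, a tensor living on e.g. `cuda:1` comes back on `cuda:1`, so later operations that combine the result with other tensors on that device no longer hit a device mismatch.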