Hi, I've tried to use your loss implementation with PyTorch 1.4 and the network does not learn anything. This is because you've used Variable to define some tensors in the loss, and in this PyTorch version Variable has been deprecated, as you can see here: https://pytorch.org/docs/stable/autograd.html#variable-deprecated.
Hello @brooklyn1900
I have tried LovaszSoftmaxLoss on different datasets. On one of them I had the same problem as you: the loss kept hovering around 0.30 and later became NaN. But on the other dataset everything was normal.
I've fixed this problem by adding `requires_grad=True` to all Variable definitions in the file: https://github.com/bermanmaxim/LovaszSoftmax/blob/master/pytorch/lovasz_losses.py
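For anyone hitting the same thing, here is a minimal, self-contained sketch of the kind of change meant above. The tensor names (`errors_sorted`, `grad`) and the `torch.dot` pattern only illustrate how the Variable wrappers are used inside lovasz_losses.py; the exact lines in the repository may differ:

```python
import torch
from torch.autograd import Variable  # still importable in recent PyTorch, but deprecated

# Toy stand-ins for the tensors the loss builds internally
# (names are illustrative, not the exact variables in lovasz_losses.py).
errors_sorted = torch.rand(10, requires_grad=True)
grad = torch.rand(10)

# Original pattern used inside the loss:
# loss = torch.dot(errors_sorted, Variable(grad))

# Pattern with the fix described above: pass requires_grad=True
# when wrapping the tensor in Variable.
loss = torch.dot(errors_sorted, Variable(grad, requires_grad=True))
loss.backward()
```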