Hi, I was running the demo code
python run_experiment.py --nosrun -e exp_specs/sac.yaml
but got the following error. Do you have any idea why this happens? Thank you!
(I am using PyTorch 1.6.0 and Python 3.7.9.)
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [32, 1]], which is output 0 of TBackward, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
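For reference, this class of error can be reproduced in isolation (a hypothetical minimal example, not this repo's code): an op that saves its input for the backward pass sees that input modified in place before `backward()` runs, so the version counter no longer matches.

```python
import torch

# Record forward tracebacks so the failing op is reported, as the
# error's hint suggests.
torch.autograd.set_detect_anomaly(True)

a = torch.ones(2, requires_grad=True)
b = a * 2                # non-leaf intermediate tensor
c = (b ** 2).sum()       # pow saves b for its backward pass
b += 1                   # in-place op bumps b's version counter

try:
    c.backward()         # backward finds b at version 1, expected 0
except RuntimeError as err:
    print("caught:", err)
```

The same mechanism applies when an optimizer's `step()` modifies network parameters in place before a loss that depends on them has been back-propagated.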
Hello, I ran into the same error. Try updating the policy network before updating q1 and q2, and set retain_graph=True; that worked for me. This error seems to be related to the torch or CUDA version...
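To illustrate the suggested reordering, here is a hedged sketch (dummy linear networks and a placeholder target, not this repo's code) of a SAC-style update where the policy loss is back-propagated before the Q optimizers step, so `optimizer.step()` never modifies Q parameters in place while a pending backward still needs them:

```python
import torch
import torch.nn as nn

obs = torch.randn(32, 4)
target = torch.randn(32, 1)  # placeholder Bellman target, detached

q1, q2 = nn.Linear(4, 1), nn.Linear(4, 1)
policy = nn.Linear(4, 4)
q_opt = torch.optim.Adam(list(q1.parameters()) + list(q2.parameters()), lr=1e-3)
pi_opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Q losses: simple MSE against the detached target.
q1_loss = ((q1(obs) - target) ** 2).mean()
q2_loss = ((q2(obs) - target) ** 2).mean()

# Policy loss flows through both Q networks.
pi_loss = -torch.min(q1(policy(obs)), q2(policy(obs))).mean()

# Back-propagate the policy loss first, then the Q losses,
# and only step the optimizers once all backward passes are done.
pi_opt.zero_grad()
pi_loss.backward()

q_opt.zero_grad()
(q1_loss + q2_loss).backward()

pi_opt.step()
q_opt.step()
```

In this sketch each loss builds its own graph (the Q networks are re-evaluated for the policy loss), so `retain_graph=True` is not needed; it becomes necessary only if a single Q output tensor is reused across both backward calls.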
Hi, sorry for missing the original post. I also think this may come from not using the correct torch version (the conda virtual-env specs are given in the .yml file).