Second order gradient wrt inputs, expected behaviour. #14991
Comments
@mxnet-label-bot add [question] |
Calling autograd.grad on a first-order gradient ndarray does not seem to work this way. The API design could have been better documented.
|
Which branch are you using? I'm getting the following when running your example:
|
I am using my own branch: https://github.com/apeforest/incubator-mxnet/tree/develop/higher_order_grad I think one line you need to change is: https://github.com/apache/incubator-mxnet/pull/14613/files#diff-2d0bd6acfa276757ae59106dbed8e3e5R353 |
I merged your branch and it works, thanks. |
I get the warning: "[16:36:03] /home/ANT.AMAZON.COM/pllarroy/devel/mxnet/src/imperative/imperative.cc:362: There are no inputs in computation graph that require gradients." |
This example works for me. I'm able to call grad two times, and the second gradient has the correct value: given that the second function is 3 * 2 * x, the grad is 6 for all elements. When using ag.grad, the gradient is not stored in x, though. One key issue here is that create_graph in the second call to grad has to be set to false; otherwise we re-create the graph and hit the problem I had before, where the graph contains only backward nodes and NOT the original nodes.
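For reference, a minimal sketch of that pattern (not the exact snippet from this issue; it assumes an MXNet build where the ops involved support higher-order gradients):

```python
import mxnet as mx
from mxnet import autograd as ag

x = mx.nd.array([1.0, 2.0, 3.0])
x.attach_grad()

with ag.record():
    y = 3 * x * x  # dy/dx = 3 * 2 * x, d2y/dx2 = 6

# First gradient: create_graph=True so the result is itself differentiable.
dy_dx = ag.grad(y, [x], create_graph=True, retain_graph=True)[0]

# Second gradient: create_graph must be False here, otherwise the graph is
# rebuilt from backward nodes only and the original forward nodes are lost.
d2y_dx2 = ag.grad(dy_dx, [x], create_graph=False)[0]

print(dy_dx)    # 6 * x
print(d2y_dx2)  # 6 for every element; note that x.grad is not populated by ag.grad
```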
|
What would be the expected behaviour of this code?
It tries to calculate the second-order gradient of a function by taking the gradient, with respect to the inputs, of the first gradient.