This repository has been archived by the owner on Nov 17, 2023. It is now read-only.
[fix] missing input log higher order. #15331
Merged
Changes from 2 commits
Commits (13):
5171e1d  fix missing input (kshitij12345)
5b95fb3  update comments (kshitij12345)
a64b35f  update comments (kshitij12345)
6c5354c  Merge branch 'master' of https://github.com/apache/incubator-mxnet in… (kshitij12345)
0581432  Merge branch 'master' of https://github.com/apache/incubator-mxnet in… (kshitij12345)
bfd2d9b  retrigger CI (kshitij12345)
7574c16  merge latest 'master' into fix/missing-input (kshitij12345)
fa51e35  retrigger CI (kshitij12345)
7e354f0  Merge branch 'master' into fix/missing-input (kshitij12345)
9885e62  update node_op_util (kshitij12345)
09ada96  use NodeOpGen for _backward_log* (kshitij12345)
15ef0ba  retrigger cI (kshitij12345)
60e51eb  retrigger CI (kshitij12345)
@@ -1090,9 +1090,9 @@ MXNET_OPERATOR_REGISTER_BINARY_WITH_SPARSE_CPU_DR(_backward_log,
                                                unary_bwd<mshadow_op::log_grad>)
 .set_attr<nnvm::FGradient>("FGradient",
     [](const nnvm::NodePtr& n, const std::vector<nnvm::NodeEntry>& ograds) {
-      // ograds[0]: dL/dxgrad
+      // ograds[0]: dL/dygrad
       // inputs[0]: dL/dy
-      // inputs[1]: x
+      // inputs[1]: x (ElemewiseGradUseIn)
       // f(x) = y = log(x)
       // f'(x) = 1/x
       // f''(x) = -1 * (f'(x) * f'(x))
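For orientation (this framing is ours, not text from the PR): _backward_log is the node that the first backward pass creates for log. Its inputs are inputs[0] = dL/dy (the incoming head gradient) and inputs[1] = x (kept around by ElemwiseGradUseIn), and its output is x_grad = dL/dy * f'(x), with f(x) = log(x) and f'(x) = 1/x. The FGradient registered here is therefore the gradient of the backward node itself, i.e. the piece needed for second-order differentiation, and it must produce a well-formed gradient for each of those two inputs.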
@@ -1117,15 +1117,15 @@ MXNET_OPERATOR_REGISTER_BINARY_WITH_SPARSE_CPU_DR(_backward_log10,
                                                unary_bwd<mshadow_op::log10_grad>)
 .set_attr<nnvm::FGradient>("FGradient",
     [](const nnvm::NodePtr& n, const std::vector<nnvm::NodeEntry>& ograds) {
-      // ograds[0]: dL/dxgrad
+      // ograds[0]: dL/dygrad
       // inputs[0]: dL/dy
-      // inputs[1]: x
+      // inputs[1]: x (ElemewiseGradUseIn)
       // f(x) = y = log10(x)
       // f'(x) = 1 / (log(10) * x)
       // f''(x) = -1 * (f'(x) * 1/x)
       auto dydx_mul_dldy = nnvm::NodeEntry{n};  // f'(x) * head_grads
       auto dydx = MakeNode("elemwise_div", n->attrs.name + "_dydx",
-                           {n->inputs[0]}, nullptr, &n);
+                           {dydx_mul_dldy, n->inputs[0]}, nullptr, &n);
       auto dlogx = MakeNode("reciprocal", n->attrs.name + "_dlogx",
                             {n->inputs[1]}, nullptr, &n);
       auto d2ydx2_mid = MakeNode("elemwise_mul", n->attrs.name + "_d2ydx2_mid",

Inline review comment on the "// inputs[1]: x (ElemewiseGradUseIn)" line: nice comment, helps.
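The substantive fix in this hunk (and in the _backward_log2 hunk below) is the elemwise_div call: it was previously built with only {n->inputs[0]}, i.e. a single operand for a binary op, which is the "missing input" of the PR title. With both operands, dydx = dydx_mul_dldy / inputs[0] = (f'(x) * dL/dy) / (dL/dy) = f'(x). A minimal standalone check of the arithmetic the comments state (hypothetical example code with made-up values, not MXNet code):

#include <cmath>
#include <cstdio>

int main() {
  // For f(x) = log10(x):
  //   f'(x)  = 1 / (log(10) * x)
  //   f''(x) = -f'(x) * (1 / x)   <- the identity encoded in the comments
  const double x = 2.5;                                  // arbitrary example point
  const double dldy = 0.7;                               // arbitrary incoming head gradient
  const double fp = 1.0 / (std::log(10.0) * x);          // f'(x)
  const double dydx_mul_dldy = fp * dldy;                // what _backward_log10 outputs
  const double dydx = dydx_mul_dldy / dldy;              // the fixed elemwise_div recovers f'(x)
  const double fpp_closed = -1.0 / (std::log(10.0) * x * x);
  std::printf("dydx = %.12f, f'(x) = %.12f\n", dydx, fp);
  std::printf("f''(x) = %.12f, -f'(x)/x = %.12f\n", fpp_closed, -fp / x);
  return 0;
}

Both printed pairs agree, mirroring the identities f'(x) = 1/(log(10)*x) and f''(x) = -f'(x)/x stated in the comments.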
@@ -1146,15 +1146,15 @@ MXNET_OPERATOR_REGISTER_BINARY_WITH_SPARSE_CPU_DR(_backward_log2,
                                                unary_bwd<mshadow_op::log2_grad>)
 .set_attr<nnvm::FGradient>("FGradient",
     [](const nnvm::NodePtr& n, const std::vector<nnvm::NodeEntry>& ograds) {
-      // ograds[0]: dL/dxgrad
+      // ograds[0]: dL/dygrad
       // inputs[0]: dL/dy
-      // inputs[1]: x
+      // inputs[1]: x (ElemewiseGradUseIn)
       // f(x) = y = log2(x)
       // f'(x) = 1 / (log(2) * x)
       // f''(x) = -1 * (f'(x) * 1/x)
       auto dydx_mul_dldy = nnvm::NodeEntry{n};  // f'(x) * head_grads
       auto dydx = MakeNode("elemwise_div", n->attrs.name + "_dydx",
-                           {n->inputs[0]}, nullptr, &n);
+                           {dydx_mul_dldy, n->inputs[0]}, nullptr, &n);
       auto dlogx = MakeNode("reciprocal", n->attrs.name + "_dlogx",
                             {n->inputs[1]}, nullptr, &n);
       auto d2ydx2_mid = MakeNode("elemwise_mul", n->attrs.name + "_d2ydx2_mid",
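The _backward_log2 hunk is the same change with log(2) in place of log(10). If one wants to double-check the stated second derivative f''(x) = -1 * (f'(x) * 1/x) numerically, a central finite difference over f'(x) is enough (again a hypothetical standalone sketch, not MXNet code):

#include <cmath>
#include <cstdio>

int main() {
  // f(x) = log2(x), f'(x) = 1 / (log(2) * x), and per the comment f''(x) = -f'(x) / x.
  const double x = 1.7;                                            // arbitrary example point
  const double h = 1e-5;                                           // finite-difference step
  auto fp = [](double v) { return 1.0 / (std::log(2.0) * v); };    // analytic f'(x)
  const double fpp_fd = (fp(x + h) - fp(x - h)) / (2.0 * h);       // central difference of f'
  const double fpp_comment = -fp(x) * (1.0 / x);                   // -1 * (f'(x) * 1/x)
  std::printf("finite-diff f''(x) = %.9f\n", fpp_fd);
  std::printf("comment     f''(x) = %.9f\n", fpp_comment);
  return 0;
}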
I think this is dL/dx_grad. The head gradient is the gradient with respect to the previous output, right? The previous output is x_grad, or dL/dx, so this thing is dL/(dL/dx), or dL/dx_grad for lack of a better notation.
I guess it should be dL/dy_grad, as we are computing/returning dL/dx_grad. That is why we have the multiplications at
https://github.com/apache/incubator-mxnet/blob/5b95fb3ee3581ba20fe1def336621d68a811e17f/src/operator/tensor/elemwise_unary_op_basic.cc#L1111-L1112
I think the notation is complicating things in excess, as it gets pretty hairy. It's the head gradient of the previous (output) node, which has the shape of x and of x_grad. So it has to be related to x, not y.
I think in Lagrange notation it would be $$F_{L_x}$$ (the derivative of some head function with respect to the derivative of the first loss w.r.t. x, i.e. x_grad).
Oh, I get it now. If I understand it correctly then, crudely, ograds[0] is how much x_grad affects L, and then we compute how x_grad changes with x. Makes sense now. Thank you very much. Will reflect it in this and other PRs.
@kshitij12345 I think what you write makes sense. I'm also unsure about the notation; maybe you can come up with a better one. If not, maybe we leave the comment out so we can merge the PR, as the code seems to do what's needed.
Sure. Thanks again.
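To pin down the consensus the thread reaches (in our own notation, offered as a sketch rather than the PR's wording): let $$\bar{y} = \partial L / \partial y$$ be the head gradient fed to the first backward pass and $$\bar{x} = \bar{y} \cdot f'(x)$$ the output of _backward_log*. In the second backward pass, ograds[0] is the head gradient with respect to that output, i.e. $$\partial L / \partial \bar{x}$$ (the thread's dL/dx_grad), and the FGradient lambda has to hand back one gradient per input:

$$ \frac{\partial L}{\partial \bar{y}} = \frac{\partial L}{\partial \bar{x}} \cdot f'(x), \qquad \frac{\partial L}{\partial x} = \frac{\partial L}{\partial \bar{x}} \cdot \bar{y} \cdot f''(x). $$

The dydx, dlogx and d2ydx2_mid nodes visible in the hunks are intermediate products of exactly these two expressions, which is why the elemwise_div call needed both of its inputs.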