This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

Commit

update comments
kshitij12345 committed Jul 26, 2019
1 parent cb051fe commit 794eefd
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions src/operator/tensor/elemwise_unary_op_trig.cc
@@ -260,7 +260,7 @@ The storage type of ``sinh`` output depends upon the input storage type:
MXNET_OPERATOR_REGISTER_BINARY_WITH_SPARSE_CPU_DR(_backward_sinh, unary_bwd<mshadow_op::sinh_grad>)
.set_attr<nnvm::FGradient>("FGradient",
[](const nnvm::NodePtr& n, const std::vector<nnvm::NodeEntry>& ograds) {
-      // ograds[0]: d^2L/dx^2
+      // ograds[0]: head_grad_grads (dL/dxgrad)
// inputs[0]: dL/dy
// inputs[1]: x (ElemwiseUseIn)
// f(x) = sinh(x)
@@ -298,7 +298,7 @@ The storage type of ``cosh`` output is always dense
MXNET_OPERATOR_REGISTER_BINARY_WITH_SPARSE_CPU(_backward_cosh, unary_bwd<mshadow_op::cosh_grad>)
.set_attr<nnvm::FGradient>("FGradient",
[](const nnvm::NodePtr& n, const std::vector<nnvm::NodeEntry>& ograds) {
-      // ograds[0]: d^2L/dx^2
+      // ograds[0]: head_grad_grads (dL/dxgrad)
// inputs[0]: dL/dy
// inputs[1]: x (ElemwiseUseIn)
// f(x) = cosh(x)
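For context, the updated comments describe the second-order gradient wiring: `ograds[0]` is the head gradient flowing into the backward node (`dL/dxgrad`), and the backward node's inputs are `dL/dy` and `x`. The sketch below illustrates this convention for `f(x) = sinh(x)` in plain Python with scalars; the function names are hypothetical and are not the MXNet API, and the analytic result is checked against a central finite difference.

```python
import math

# Forward:  y = sinh(x)
# Backward: x_grad = dL/dy * f'(x) = dL/dy * cosh(x)
# Second backward, given head_grad_grads = dL/d(x_grad) (the ograds[0]
# of the diff's comments), by the chain/product rule:
#   d/d(dL/dy) -> head_grad_grads * f'(x)  = head_grad_grads * cosh(x)
#   d/dx       -> head_grad_grads * dL/dy * f''(x)
#              = head_grad_grads * dL/dy * sinh(x)

def sinh_backward(dl_dy, x):
    # Gradient of y = sinh(x) scaled by the incoming head gradient.
    return dl_dy * math.cosh(x)

def sinh_second_backward(head_grad_grads, dl_dy, x):
    # Gradients of x_grad with respect to each input of the backward node.
    grad_wrt_dl_dy = head_grad_grads * math.cosh(x)
    grad_wrt_x = head_grad_grads * dl_dy * math.sinh(x)
    return grad_wrt_dl_dy, grad_wrt_x

# Finite-difference check of d(x_grad)/dx at x = 0.5 with dL/dy = 1.0.
eps = 1e-6
numeric = (sinh_backward(1.0, 0.5 + eps)
           - sinh_backward(1.0, 0.5 - eps)) / (2 * eps)
_, analytic = sinh_second_backward(1.0, 1.0, 0.5)
print(abs(numeric - analytic) < 1e-6)  # → True
```

The `cosh` case in the second hunk is analogous, with `f'(x) = sinh(x)` and `f''(x) = cosh(x)`.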
