beta doc fixes (apache#13860)
anirudhacharya authored and stephenrawls committed Feb 16, 2019
1 parent cf7cc9f commit 0d7dc6a
Showing 6 changed files with 6 additions and 6 deletions.
2 changes: 1 addition & 1 deletion R-package/R/context.R
@@ -22,7 +22,7 @@ init.context.default <- function() {

#' Set/Get default context for array creation.
#'
-#' @param new, optional takes \code{mx.cpu()} or \code{mx.gpu(id)}, new default ctx.
+#' @param new optional takes \code{mx.cpu()} or \code{mx.gpu(id)}, new default ctx.
#' @return The default context.
#'
#' @export
2 changes: 1 addition & 1 deletion R-package/R/model.R
@@ -562,7 +562,7 @@ mx.model.FeedForward.create <-
#'
#' @param model The MXNet Model.
#' @param X The dataset to predict.
-#' @param ctx mx.cpu() or mx.gpu(i) The device used to generate the prediction.
+#' @param ctx mx.cpu() or mx.gpu(). The device used to generate the prediction.
#' @param array.batch.size The batch size used in batching. Only used when X is R's array.
#' @param array.layout can be "auto", "colmajor", "rowmajor", (detault=auto)
#' The layout of array. "rowmajor" is only supported for two dimensional array.
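The `array.batch.size` and `array.layout` parameters in the hunk above describe how a 2-D dataset is split into prediction batches: with `"rowmajor"` layout each row is one example, so batching slices along the first axis. A minimal NumPy sketch of that batching (a generic illustration under that assumption; the variable names are hypothetical, not the R package's internals):

```python
import numpy as np

# "rowmajor" layout: each row of X is one example, 2 features each.
X = np.arange(12).reshape(6, 2)
batch_size = 4

# Slice along axis 0; the last batch may be smaller than batch_size.
batches = [X[i:i + batch_size] for i in range(0, X.shape[0], batch_size)]
print([b.shape for b in batches])  # → [(4, 2), (2, 2)]
```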
2 changes: 1 addition & 1 deletion R-package/R/optimizer.R
@@ -21,7 +21,7 @@
#' @param learning.rate float, default=0.01
#' The initial learning rate.
#' @param momentum float, default=0
-#' The momentumvalue
+#' The momentum value
#' @param wd float, default=0.0
#' L2 regularization coefficient add to all the weights.
#' @param rescale.grad float, default=1.0
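The parameters documented above (`learning.rate`, `momentum`, `wd`, `rescale.grad`) control the classic momentum-SGD rule. A minimal NumPy sketch of that rule as commonly stated (a generic illustration; `sgd_momentum_step` and its exact formulation are assumptions, not MXNet's implementation):

```python
import numpy as np

def sgd_momentum_step(weight, grad, state,
                      learning_rate=0.01, momentum=0.0,
                      wd=0.0, rescale_grad=1.0):
    """One momentum-SGD step (textbook form, hypothetical helper)."""
    # wd adds the L2 regularization term to the (rescaled) gradient.
    g = rescale_grad * grad + wd * weight
    # Velocity accumulates a decaying sum of past gradients.
    state = momentum * state - learning_rate * g
    return weight + state, state

w = np.array([1.0, -2.0])
g = np.array([0.5, 0.5])
v = np.zeros_like(w)
w, v = sgd_momentum_step(w, g, v, learning_rate=0.1, momentum=0.9)
# → w is [0.95, -2.05] after one step
```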
2 changes: 1 addition & 1 deletion R-package/R/rnn.graph.R
@@ -195,7 +195,7 @@ gru.cell <- function(num_hidden, indata, prev.state, param, seqidx, layeridx, dr
}


-#' unroll representation of RNN running on non CUDA device
+#' Unroll representation of RNN running on non CUDA device
#'
#' @param config Either seq-to-one or one-to-one
#' @param cell_type Type of RNN cell: either gru or lstm
2 changes: 1 addition & 1 deletion src/operator/optimizer_op.cc
@@ -316,7 +316,7 @@ inline bool SGDStorageType(const nnvm::NodeAttrs& attrs,

NNVM_REGISTER_OP(sgd_update)
MXNET_ADD_SPARSE_OP_ALIAS(sgd_update)
-.describe(R"code(Update function for Stochastic Gradient Descent (SDG) optimizer.
+.describe(R"code(Update function for Stochastic Gradient Descent (SGD) optimizer.
It updates the weights using::
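The weight-update formula in this docstring continues below the visible hunk. As a hedged sketch, plain SGD with weight decay and gradient rescaling is commonly written as follows (a generic formulation; `sgd_update` here is a hypothetical helper, not the operator's kernel):

```python
import numpy as np

def sgd_update(weight, grad, lr, wd=0.0, rescale_grad=1.0):
    """Textbook SGD step: weight -= lr * (rescale_grad * grad + wd * weight)."""
    return weight - lr * (rescale_grad * grad + wd * weight)

w = sgd_update(np.array([1.0, 2.0]), np.array([0.1, -0.2]), lr=0.5)
# → w is [0.95, 2.1]
```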
2 changes: 1 addition & 1 deletion src/operator/tensor/elemwise_unary_op_basic.cc
@@ -71,7 +71,7 @@ static bool IdentityAttrLikeRhsStorageType(const nnvm::NodeAttrs& attrs,

// relu
MXNET_OPERATOR_REGISTER_UNARY_WITH_RSP_CSR(relu, cpu, mshadow_op::relu)
-.describe(R"code(Computes rectified linear.
+.describe(R"code(Computes rectified linear activation.
.. math::
max(features, 0)
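The rectified linear activation documented here is the elementwise `max(features, 0)` from the docstring's math block; a minimal NumPy illustration of that formula (generic, not the MXNet operator itself):

```python
import numpy as np

x = np.array([-2.0, -0.5, 0.0, 3.0])
relu = np.maximum(x, 0)  # elementwise max(features, 0)
print(relu)  # → [0. 0. 0. 3.]
```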
