This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

second round of fixing broken links in multiple files #16598

Merged: 1 commit, Oct 24, 2019
@@ -113,7 +113,7 @@ to train the MLP network we defined above.
For our training, we will use the stochastic gradient descent (SGD) optimizer; in particular, mini-batch SGD. Standard SGD processes training data one example at a time, which is slow in practice; processing examples in small batches speeds things up considerably. In this case, our batch size will be 100, a reasonable choice. Another parameter we select here is the learning rate, which controls the step size the optimizer takes in search of a solution; we'll pick a learning rate of 0.02, again a reasonable choice. Settings such as batch size and learning rate are usually referred to as hyper-parameters, and the values we give them can have a great impact on training performance.

We will use the [Trainer](/api/python/docs/api/gluon/trainer.html) class to apply the
[SGD optimizer](https://mxnet.io/api/python/docs/api/gluon-related/_autogen/mxnet.optimizer.SGD.html) on the
[SGD optimizer](/api/python/docs/api/optimizer/index.html#mxnet.optimizer.SGD) on the
initialized parameters.
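
A minimal sketch of what this might look like, assuming `net` is the Gluon MLP defined earlier in the tutorial and using the hyper-parameters discussed above (these lines are an illustration, not the tutorial's collapsed code):

```python
from mxnet import gluon
from mxnet.gluon import nn

# Stand-in network; in the tutorial, `net` is the MLP defined earlier.
net = nn.Sequential()
net.add(nn.Dense(128, activation='relu'), nn.Dense(10))
net.initialize()

# Mini-batch SGD with the learning rate chosen above (0.02).
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.02})

# Inside the training loop, after computing gradients for a mini-batch,
# one optimizer step is taken, normalized by the batch size (100 here):
# trainer.step(batch_size=100)
```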

@@ -556,7 +556,7 @@ except mx.MXNetError as err:

## Next

[Train a Linear Regression Model with Sparse Symbols](http://mxnet.apache.org/tutorials/sparse/train.html)
[Train a Linear Regression Model with Sparse Symbols](/api/python/docs/tutorials/packages/ndarray/sparse/train.html)


<!-- INSERT SOURCE DOWNLOAD BUTTONS -->
@@ -578,7 +578,7 @@ except mx.MXNetError as err:

## Next

[Train a Linear Regression Model with Sparse Symbols](http://mxnet.apache.org/tutorials/sparse/train.html)
[Train a Linear Regression Model with Sparse Symbols](/api/python/docs/tutorials/packages/ndarray/sparse/train.html)


<!-- INSERT SOURCE DOWNLOAD BUTTONS -->
@@ -27,18 +27,18 @@ then train a linear regression model using sparse symbols with the Module API.

To complete this tutorial, we need:

- MXNet. See the instructions for your operating system in [Setup and Installation](https://mxnet.io/get_started).
- MXNet. See the instructions for your operating system in [Setup and Installation](/get_started).

- [Jupyter Notebook](https://jupyter.org/index.html) and [Python Requests](http://docs.python-requests.org/en/master/) packages.
- [Jupyter Notebook](https://jupyter.org/index.html) and [Python Requests](https://3.python-requests.org/) packages.
```
pip install jupyter requests
```

- Basic knowledge of Symbol in MXNet. See the detailed tutorial for Symbol in [Symbol - Neural Network Graphs and Auto-differentiation](https://mxnet.apache.org/tutorials/basic/symbol.html).

- Basic knowledge of CSRNDArray in MXNet. See the detailed tutorial for CSRNDArray in [CSRNDArray - NDArray in Compressed Sparse Row Storage Format](https://mxnet.apache.org/versions/master/tutorials/sparse/csr.html).
- Basic knowledge of CSRNDArray in MXNet. See the detailed tutorial for CSRNDArray in [CSRNDArray - NDArray in Compressed Sparse Row Storage Format](/api/python/docs/tutorials/packages/ndarray/sparse/csr.html).

- Basic knowledge of RowSparseNDArray in MXNet. See the detailed tutorial for RowSparseNDArray in [RowSparseNDArray - NDArray for Sparse Gradient Updates](https://mxnet.apache.org/versions/master/tutorials/sparse/row_sparse.html).
- Basic knowledge of RowSparseNDArray in MXNet. See the detailed tutorial for RowSparseNDArray in [RowSparseNDArray - NDArray for Sparse Gradient Updates](/api/python/docs/tutorials/packages/ndarray/sparse/row_sparse.html).

## Variables

@@ -155,7 +155,7 @@ f = mx.sym.sparse.elemwise_add(c, c)
### Storage Type Inference

What will be the output storage types of sparse symbols? In MXNet, for any sparse symbol, the result storage types are inferred based on storage types of inputs.
You can read the [Sparse Symbol API](https://mxnet.apache.org/versions/master/api/python/symbol/sparse.html) documentation to find what output storage types are. In the example below we will try out the storage types introduced in the Row Sparse and Compressed Sparse Row tutorials: `default` (dense), `csr`, and `row_sparse`.
You can read the [Sparse Symbol API](/api/python/docs/api/symbol/sparse/index.html) documentation to find out what the output storage types are. In the example below we will try out the storage types introduced in the Row Sparse and Compressed Sparse Row tutorials: `default` (dense), `csr`, and `row_sparse`.
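
As a minimal sketch of the idea (an illustration, not the tutorial's own collapsed code), you can declare variables with explicit storage types, apply a sparse operator, and then inspect the storage type MXNet inferred for the output:

```python
import mxnet as mx

# Variables with explicit storage types.
dense = mx.sym.var('dense')                      # 'default' (dense) storage
csr = mx.sym.var('csr', stype='csr')             # compressed sparse row
rsp = mx.sym.var('rsp', stype='row_sparse')      # row-sparse

# Adding two row_sparse inputs is expected to infer a row_sparse output.
out = mx.sym.sparse.elemwise_add(rsp, rsp)

# Bind the symbol and check the inferred output storage type.
exe = out.simple_bind(mx.cpu(), rsp=(3, 4))
print(exe.outputs[0].stype)   # expected: 'row_sparse'

# Mixing storage types typically falls back to dense storage, e.g.
# mx.sym.sparse.elemwise_add(dense, rsp) would produce a 'default' output.
```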


@@ -1,4 +1,4 @@
# NVIDIA Jetson Devices

To install MXNet on a Jetson TX or Nano, please refer to the [Jetson installation
guide](get_started/jetson_setup).
guide](/get_started/jetson_setup).
8 changes: 4 additions & 4 deletions docs/static_site/src/_includes/get_started/get_started.html
@@ -256,8 +256,8 @@ <h2>Installing MXNet</h2>
</div> <!-- END - C++-->

<br>
For more installation options, refer to the <a href="get_started/ubuntu_setup.html">Ubuntu installation guide</a> and
<a href="get_started/centos_setup.html">CentOS installation guide</a>.
For more installation options, refer to the <a href="/get_started/ubuntu_setup.html">Ubuntu installation guide</a> and
<a href="/get_started/centos_setup.html">CentOS installation guide</a>.
</div> <!-- END - Linux -->


@@ -354,7 +354,7 @@ <h2>Installing MXNet</h2>
</div> <!-- End of cpu gpu -->
</div>
<br>
For more installation options, refer to the <a href="get_started/osx_setup.html">MXNet macOS installation guide</a>.
For more installation options, refer to the <a href="/get_started/osx_setup.html">MXNet macOS installation guide</a>.
</div> <!-- END - Mac OS -->


@@ -440,7 +440,7 @@ <h2>Installing MXNet</h2>
</div> <!-- End of cpu gpu -->
</div> <!-- End of C++ -->

For more installation options, refer to the <a href="get_started/windows_setup.html">MXNet Windows installation guide</a>.
For more installation options, refer to the <a href="/get_started/windows_setup.html">MXNet Windows installation guide</a>.
</div> <!-- End of Windows -->


@@ -1,6 +1,6 @@
You can use the Maven packages defined in the following dependency to include MXNet in your Java
project. The Java API is provided as a subset of the Scala API and is intended for inference only.
Please refer to the <a href="get_started/java_setup.html">MXNet-Java setup guide</a> for a detailed set of
Please refer to the <a href="/get_started/java_setup.html">MXNet-Java setup guide</a> for a detailed set of
instructions to help you with the setup process.

<a href="https://repository.apache.org/#nexus-search;gav~org.apache.mxnet~~1.5.0~~">
@@ -1,6 +1,6 @@
You can use the Maven packages defined in the following dependency to include MXNet in your Java
project. The Java API is provided as a subset of the Scala API and is intended for inference only.
Please refer to the <a href="get_started/java_setup.html">MXNet-Java setup guide</a> for a detailed set of
Please refer to the <a href="/get_started/java_setup.html">MXNet-Java setup guide</a> for a detailed set of
instructions to help you with the setup process.

<a href="https://repository.apache.org/#nexus-search;gav~org.apache.mxnet~~1.5.0~~">
@@ -1,2 +1,2 @@
Refer to the [Julia section of the MXNet Ubuntu installation guide](get_started/ubuntu_setup#install-the-mxnet-package-for-julia).
Refer to the [Julia section of the MXNet Ubuntu installation guide](/get_started/ubuntu_setup#install-the-mxnet-package-for-julia).

@@ -1,5 +1,5 @@
The default version of R that is installed with `apt-get` is insufficient. You will need
to first [install R v3.4.4+ and build MXNet from source](get_started/ubuntu_setup.html#install-the-mxnet-package-for-r).
to first [install R v3.4.4+ and build MXNet from source](/get_started/ubuntu_setup.html#install-the-mxnet-package-for-r).

After you have set up R v3.4.4+ and MXNet, you can build and install the MXNet R bindings as follows, assuming that `incubator-mxnet` is the source directory you used to build MXNet:

@@ -1,7 +1,7 @@
The default version of R that is installed with `apt-get` is insufficient. You will need
to first
[install R v3.4.4+ and build MXNet from
source](get_started/ubuntu_setup.html#install-the-mxnet-package-for-r).
source](/get_started/ubuntu_setup.html#install-the-mxnet-package-for-r).

After you have set up R v3.4.4+ and MXNet, you can build and install the MXNet R bindings
with the
@@ -1,7 +1,7 @@
You can use the Maven packages defined in the following dependency to include MXNet in
your Java project. The Java API is provided as a subset of the Scala API and is intended for
inference only.
Please refer to the [MXNet-Java setup guide](get_started/java_setup.html) for a detailed set of instructions to help you with the setup process.
Please refer to the [MXNet-Java setup guide](/get_started/java_setup.html) for a detailed set of instructions to help you with the setup process.

<a href="https://repository.apache.org/#nexus-search;gav~org.apache.mxnet~~1.5.0~~"><img
src="https://img.shields.io/badge/org.apache.mxnet-mac cpu-green.svg"
@@ -1,6 +1,6 @@
MXNet offers MKL pip packages that will be much faster when running on Intel hardware.
Check the chart below for other options, refer to <a href="https://pypi.org/project/mxnet/">PyPI for
other MXNet pip packages</a>, or <a href="get_started/validate_mxnet.html">validate your MXNet installation</a>.
other MXNet pip packages</a>, or <a href="/get_started/validate_mxnet.html">validate your MXNet installation</a>.
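
A quick way to validate the installation from Python might look like the following minimal sketch; it assumes an MKL-DNN-enabled MXNet build (1.5.x or newer), where the runtime feature list is available:

```python
import mxnet as mx
from mxnet.runtime import Features

print(mx.__version__)
# On MKL pip packages, the MKLDNN feature is expected to be enabled.
print(Features().is_enabled('MKLDNN'))
```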

<div style="text-align: center">
<img src="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/install/pip-packages-1.5.1.png"
@@ -1 +1 @@
Refer to the [Julia section of the MXNet Windows installation guide](get_started/windows_setup.html#install-the-mxnet-package-for-julia).
Refer to the [Julia section of the MXNet Windows installation guide](/get_started/windows_setup.html#install-the-mxnet-package-for-julia).
@@ -1 +1 @@
Refer to the [Perl section of the MXNet Windows installation guide](get_started/windows_setup.html#install-the-mxnet-package-for-perl).
Refer to the [Perl section of the MXNet Windows installation guide](/get_started/windows_setup.html#install-the-mxnet-package-for-perl).
@@ -1 +1 @@
Refer to the [MXNet Windows installation guide](get_started/windows_setup.html)
Refer to the [MXNet Windows installation guide](/get_started/windows_setup.html)
@@ -1 +1 @@
To build from source, refer to the [MXNet Windows installation guide](get_started/windows_setup.html).
To build from source, refer to the [MXNet Windows installation guide](/get_started/windows_setup.html).
@@ -1,7 +1,7 @@
Note: packages for 3.6.x are not yet available.
Install 3.5.x of R from [CRAN](https://cran.r-project.org/bin/windows/base/old/).

You can [build MXNet-R from source](get_started/windows_setup.html#install-mxnet-package-for-r), or
You can [build MXNet-R from source](/get_started/windows_setup.html#install-mxnet-package-for-r), or
you can use a
pre-built binary:

@@ -1,4 +1,4 @@
You can [build MXNet-R from source](get_started/windows_setup.html#install-mxnet-package-for-r), or
You can [build MXNet-R from source](/get_started/windows_setup.html#install-mxnet-package-for-r), or
you can use a
pre-built binary:

@@ -272,7 +272,7 @@ Yes! You can stop the training early with `return(FALSE)`. See the following example.
When the validation metric dips below the threshold we set, the training process stops.

## Next Steps
* [Neural Networks with MXNet in Five Minutes](https://mxnet.io/tutorials/r/fiveMinutesNeuralNetwork.html)
* [Classify Real-World Images with a Pretrained Model](https://mxnet.io/tutorials/r/classifyRealImageWithPretrainedModel.html)
* [Handwritten Digits Classification Competition](https://mxnet.io/tutorials/r/mnistCompetition.html)
* [Character Language Model Using RNN](https://mxnet.io/tutorials/r/charRnnModel.html)
* [Neural Networks with MXNet in Five Minutes](/api/r/docs/tutorials/five_minutes_neural_network)
* [Classify Real-World Images with a Pretrained Model](/api/r/docs/tutorials/classify_real_image_with_pretrained_model)
* [Handwritten Digits Classification Competition](/api/r/docs/tutorials/mnist_competition)
* [Character Language Model Using RNN](/api/r/docs/tutorials/char_rnn_model)
@@ -225,7 +225,7 @@ sum(abs(test.y - pred6[1,])) / length(test.y)


## Next Steps
* [Neural Networks with MXNet in Five Minutes](https://mxnet.io/tutorials/r/fiveMinutesNeuralNetwork.html)
* [Classify Real-World Images with a PreTrained Model](https://mxnet.io/tutorials/r/classifyRealImageWithPretrainedModel.html)
* [Handwritten Digits Classification Competition](https://mxnet.io/tutorials/r/mnistCompetition.html)
* [Character Language Model Using RNN](https://mxnet.io/tutorials/r/charRnnModel.html)
* [Neural Networks with MXNet in Five Minutes](/api/r/docs/tutorials/five_minutes_neural_network)
* [Classify Real-World Images with a PreTrained Model](/api/r/docs/tutorials/classify_real_image_with_pretrained_model)
* [Handwritten Digits Classification Competition](/api/r/docs/tutorials/mnist_competition)
* [Character Language Model Using RNN](/api/r/docs/tutorials/char_rnn_model)
@@ -55,7 +55,7 @@ PM2.5 concentration levels.

Load and pre-process the data
---------
The first step is to load in the data and preprocess it. It is assumed that the data has been downloaded in a .csv file: data.csv from the [pollution dataset](https://archive.ics.uci.edu/ml/datasets/Beijing+PM2.5+Data)
The first step is to load in the data and preprocess it. It is assumed that the data has been downloaded in a .csv file: data.csv from the [pollution dataset](https://archive.ics.uci.edu/ml/datasets/Beijing+PM2.5+Data).

```r
## Loading required packages
@@ -324,4 +324,4 @@ We also repeated the above experiments to generate the next 100 samples to 301st

The above tutorial is just for demonstration purposes and has not been tuned extensively for accuracy.

For more tutorials on MXNet-R, head on to [MXNet-R tutorials](https://mxnet.apache.org/tutorials/r/index.html)
For more tutorials on MXNet-R, head on to [MXNet-R tutorials](/api/r/docs/tutorials)
12 changes: 6 additions & 6 deletions docs/static_site/src/pages/api/r/docs/tutorials/ndarray.md
@@ -223,9 +223,9 @@ The actual computations are finished, allowing us to copy the results someplace
the results.

## Next Steps
* [Symbol](https://mxnet.io/tutorials/r/symbol.html)
* [Write and use callback functions](https://mxnet.io/tutorials/r/CallbackFunction.html)
* [Neural Networks with MXNet in Five Minutes](https://mxnet.io/tutorials/r/fiveMinutesNeuralNetwork.html)
* [Classify Real-World Images with Pre-trained Model](https://mxnet.io/tutorials/r/classifyRealImageWithPretrainedModel.html)
* [Handwritten Digits Classification Competition](https://mxnet.io/tutorials/r/mnistCompetition.html)
* [Character Language Model using RNN](https://mxnet.io/tutorials/r/charRnnModel.html)
* [Symbol](/api/r/docs/tutorials/symbol)
* [Write and use callback functions](/api/r/docs/tutorials/callback_function)
* [Neural Networks with MXNet in Five Minutes](/api/r/docs/tutorials/five_minutes_neural_network)
* [Classify Real-World Images with Pre-trained Model](/api/r/docs/tutorials/classify_real_image_with_pretrained_model)
* [Handwritten Digits Classification Competition](/api/r/docs/tutorials/mnist_competition)
* [Character Language Model using RNN](/api/r/docs/tutorials/char_rnn_model)
12 changes: 6 additions & 6 deletions docs/static_site/src/pages/api/r/docs/tutorials/symbol.md
@@ -130,7 +130,7 @@ In the example, *net* is used as a function to apply to an existing symbol

The [model API](https://github.com/apache/incubator-mxnet/blob/master/R-package/R/model.R) is a thin wrapper around the symbolic executors to support neural net training.

We encourage you to read [Symbolic Configuration and Execution in Pictures for python package](../../api/python/symbol_in_pictures/symbol_in_pictures.md)for a detailed explanation of concepts in pictures.
We encourage you to read [Symbolic Configuration and Execution in Pictures for python package](/api/python/symbol_in_pictures/symbol_in_pictures.md) for a detailed explanation of concepts in pictures.

## How Efficient Is the Symbolic API?

@@ -147,8 +147,8 @@ be more memory efficient than CXXNet and gets to the same runtime with
greater flexibility.

## Next Steps
* [Write and use callback functions](https://mxnet.io/tutorials/r/CallbackFunction.html)
* [Neural Networks with MXNet in Five Minutes](https://mxnet.io/tutorials/r/fiveMinutesNeuralNetwork.html)
* [Classify Real-World Images with Pre-trained Model](https://mxnet.io/tutorials/r/classifyRealImageWithPretrainedModel.html)
* [Handwritten Digits Classification Competition](https://mxnet.io/tutorials/r/mnistCompetition.html)
* [Character Language Model using RNN](https://mxnet.io/tutorials/r/charRnnModel.html)
* [Write and use callback functions](/api/r/docs/tutorials/callback_function)
* [Neural Networks with MXNet in Five Minutes](/api/r/docs/tutorials/five_minutes_neural_network)
* [Classify Real-World Images with Pre-trained Model](/api/r/docs/tutorials/classify_real_image_with_pretrained_model)
* [Handwritten Digits Classification Competition](/api/r/docs/tutorials/mnist_competition)
* [Character Language Model using RNN](/api/r/docs/tutorials/char_rnn_model)
2 changes: 1 addition & 1 deletion docs/static_site/src/pages/get_started/index.html
@@ -28,6 +28,6 @@
<div class="get-started-from-source">
<div class="wrapper">
<h2>Download from source</h2>
<p>The signed source code for Apache MXNet (incubating) is available for download <a href="get_started/download">here</a></p>
<p>The signed source code for Apache MXNet (incubating) is available for download <a href="/get_started/download">here</a></p>
</div>
</div>
2 changes: 1 addition & 1 deletion julia/docs/src/tutorial/mnist.md
@@ -23,7 +23,7 @@ multi-layer perceptron and then a convolutional neural network (the
LeNet architecture) on the [MNIST handwritten digit
dataset](http://yann.lecun.com/exdb/mnist/). The code for this tutorial
could be found in
[examples/mnist](https://github.com/apache/incubator-mxnet/blob/master/julia/docs/src/tutorial/mnist.md). There are also two Jupyter notebooks that expand a little more on the [MLP](https://github.com/ultradian/julia_notebooks/blob/master/mxnet/mnistMLP.ipynb) and the [LeNet](https://github.com/ultradian/julia_notebooks/blob/master/mxnet/mnistLenet.ipynb), using the more general `ArrayDataProvider`.
[examples/mnist](/api/julia/docs/api/tutorial/mnist/). There are also two Jupyter notebooks that expand a little more on the [MLP](https://github.com/ultradian/julia_notebooks/blob/master/mxnet/mnistMLP.ipynb) and the [LeNet](https://github.com/ultradian/julia_notebooks/blob/master/mxnet/mnistLenet.ipynb), using the more general `ArrayDataProvider`.

Simple 3-layer MLP
------------------