Sphinx error reduction (apache#12323)
* new toctrees and index pages to reduce sphinx warnings and errors

* exclude deprecated and working dirs; update config (see the conf.py sketch after this list)

add title, fix render errors, fix inaccurate text

* removing unused and unnecessary files (that cause sphinx warnings)

* add c++ index to whitelist

* add more index to whitelist
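For context on the "exclude deprecated and working dirs; update config" item above, a change like that usually amounts to extending `exclude_patterns` in the Sphinx `conf.py`. A minimal sketch of that pattern is shown below; the directory names are placeholders, not the paths this commit actually touched.

```python
# conf.py (sketch): anything matching exclude_patterns is skipped by the
# Sphinx build, so excluded trees no longer generate warnings or errors.
# The directory names below are placeholders, not this commit's actual paths.
exclude_patterns = [
    '_build',
    'deprecated/**',       # hypothetical deprecated docs tree
    'doc_working_dir/**',  # hypothetical working/scratch directory
]
```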
aaronmarkham committed Sep 11, 2018
1 parent f4ab9c6 commit 0577f0a
Showing 25 changed files with 273 additions and 136 deletions.
11 changes: 11 additions & 0 deletions docs/api/clojure/index.md
@@ -1,9 +1,20 @@
# MXNet - Clojure API

MXNet supports the Clojure programming language. The MXNet Clojure package brings flexible and efficient GPU
computing and state-of-the-art deep learning to Clojure. It enables you to write seamless tensor/matrix computation with multiple GPUs in Clojure. It also lets you construct and customize state-of-the-art deep learning models in Clojure and apply them to tasks such as image classification and data science challenges.

See the [MXNet Clojure API Documentation](docs/index.html) for detailed API information.

```eval_rst
.. toctree::
:maxdepth: 1
kvstore.md
module.md
ndarray.md
symbol_in_pictures.md
symbol.md
```

## Tensor and Matrix Computations
You can perform tensor or matrix computation in pure Clojure:
14 changes: 14 additions & 0 deletions docs/api/index.md
@@ -0,0 +1,14 @@
# MXNet APIs

```eval_rst
.. toctree::
:maxdepth: 1
c++/index.md
clojure/index.md
julia/index.md
perl/index.md
python/index.md
r/index.md
scala/index.md
```
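The index pages added here embed reST directives (like `toctree`) in Markdown through `eval_rst` fenced blocks. This works only if the docs build routes Markdown through recommonmark with the AutoStructify transform enabled; a generic `conf.py` sketch of that setup follows (an assumption for illustration, not necessarily the exact MXNet configuration).

```python
# conf.py (generic sketch): parse .md sources with recommonmark and enable
# eval_rst blocks. Not necessarily the exact MXNet docs configuration.
from recommonmark.parser import CommonMarkParser
from recommonmark.transform import AutoStructify

source_parsers = {'.md': CommonMarkParser}  # treat .md files as Markdown sources
source_suffix = ['.rst', '.md']

def setup(app):
    app.add_config_value('recommonmark_config', {
        'enable_eval_rst': True,            # allow eval_rst fenced blocks in .md
    }, True)
    app.add_transform(AutoStructify)
```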
100 changes: 59 additions & 41 deletions docs/api/python/index.md
@@ -17,58 +17,41 @@ Code examples are placed throughout the API documentation and these can be run a
```eval_rst
.. note:: A convenient way to execute code examples is using the ``%doctest_mode`` mode of
Jupyter notebook, which allows for pasting multi-line examples containing
``>>>`` while preserving indentation. Run ``%doctest_mode?`` in Jupyter notebook
for more details.
```
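For readers who want the note above in concrete form, here is an illustrative sketch (not from the original page): the notebook workflow is described in comments, followed by the same toy example as plain Python.

```python
# Illustrative sketch of the workflow described in the note above.
# In a Jupyter/IPython session you would first run the magic command
#   %doctest_mode
# and could then paste doctest-style examples (lines beginning with >>>)
# directly, with indentation preserved. The same toy example as a script:
import mxnet as mx

x = mx.nd.ones((2, 3))
print(x.shape)  # (2, 3)
```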

\* Some old references to Model API may exist, but this API has been deprecated.

## Autograd API

```eval_rst
.. toctree::
:maxdepth: 1
autograd/autograd.md
```

## Callback API

```eval_rst
.. toctree::
:maxdepth: 1
callback/callback.md
```

## Contrib Package

```eval_rst
.. toctree::
:maxdepth: 1
contrib/contrib.md
contrib/text.md
contrib/onnx.md
```

## Gluon API
@@ -86,6 +69,15 @@ Code examples are placed throughout the API documentation and these can be run a
gluon/contrib.md
```

## Image API

```eval_rst
.. toctree::
:maxdepth: 1
image/image.md
```

## IO API

```eval_rst
@@ -95,40 +87,54 @@ Code examples are placed throughout the API documentation and these can be run a
io/io.md
```

## KV Store API

```eval_rst
.. toctree::
:maxdepth: 1
kvstore/kvstore.md
```

## Metric API

```eval_rst
.. toctree::
:maxdepth: 1
metric/metric.md
```

## Module API

```eval_rst
.. toctree::
:maxdepth: 1
module/module.md
executor/executor.md
```

## NDArray API

```eval_rst
.. toctree::
:maxdepth: 1
ndarray/ndarray.md
ndarray/random.md
ndarray/linalg.md
ndarray/sparse.md
ndarray/contrib.md
```

## Optimization API

```eval_rst
.. toctree::
:maxdepth: 1
optimization/optimization.md
```

## Profiler API
@@ -144,18 +150,30 @@ Code examples are placed throughout the API documentation and these can be run a

```eval_rst
.. toctree::
:maxdepth: 1
rtc/rtc.md
```

## Symbol API

```eval_rst
.. toctree::
:maxdepth: 1
symbol/symbol.md
symbol/random.md
symbol/linalg.md
symbol/sparse.md
symbol/contrib.md
symbol/rnn.md
```

## Symbol in Pictures API

```eval_rst
.. toctree::
:maxdepth: 1
symbol_in_pictures/symbol_in_pictures.md
```
14 changes: 14 additions & 0 deletions docs/api/scala/index.md
@@ -1,9 +1,23 @@
# MXNet - Scala API

MXNet supports the Scala programming language. The MXNet Scala package brings flexible and efficient GPU
computing and state-of-the-art deep learning to Scala. It enables you to write seamless tensor/matrix computation with multiple GPUs in Scala. It also lets you construct and customize state-of-the-art deep learning models in Scala and apply them to tasks such as image classification and data science challenges.

See the [MXNet Scala API Documentation](docs/index.html#org.apache.mxnet.package) for detailed API information.

```eval_rst
.. toctree::
:maxdepth: 1
infer.md
io.md
kvstore.md
model.md
module.md
ndarray.md
symbol_in_pictures.md
symbol.md
```

## Image Classification with the Scala Infer API
The Infer API can be used for single and batch image classification. More information can be found at the following locations:
18 changes: 12 additions & 6 deletions docs/architecture/index.md
@@ -15,9 +15,15 @@ Mainly, they focus on the following 3 areas:
abstraction, optimization, and trade-offs between efficiency and flexibility.
Additionally, we provide an overview of the complete MXNet system.

* [MXNet System Overview](http://mxnet.io/architecture/overview.html)
* [Deep Learning Programming Style: Symbolic vs Imperative](http://mxnet.io/architecture/program_model.html)
* [Dependency Engine for Deep Learning](http://mxnet.io/architecture/note_engine.html)
* [Optimizing the Memory Consumption in Deep Learning](http://mxnet.io/architecture/note_memory.html)
* [Efficient Data Loading Module for Deep Learning](http://mxnet.io/architecture/note_data_loading.html)
* [Exception Handling in MXNet](http://mxnet.io/architecture/exception_handling.html)
```eval_rst
.. toctree::
:maxdepth: 1
overview.md
program_model.md
note_engine.md
note_memory.md
note_data_loading.md
exception_handling.md
rnn_interface.md
```
49 changes: 0 additions & 49 deletions docs/architecture/release_note_0_9.md

This file was deleted.

11 changes: 11 additions & 0 deletions docs/community/index.md
@@ -0,0 +1,11 @@
# MXNet Community

```eval_rst
.. toctree::
:maxdepth: 1
contribute.md
ecosystem.md
powered_by.md
mxnet_channels.md
```
8 changes: 8 additions & 0 deletions docs/faq/index.md
@@ -1,5 +1,13 @@
# MXNet FAQ

```eval_rst
.. toctree::
:hidden:
:glob:
*
```

This section addresses common questions about how to use _MXNet_. These include performance issues, e.g., how to train with multiple GPUs.
They also include workflow questions, e.g., how to visualize a neural network computation graph.
These answers are fairly focused. For more didactic, self-contained introductions to neural networks
8 changes: 0 additions & 8 deletions docs/get_started/index.md

This file was deleted.

14 changes: 8 additions & 6 deletions docs/gluon/index.md
@@ -1,9 +1,11 @@
# About Gluon

![gluon logo](https://github.com/dmlc/web-data/blob/master/mxnet/image/image-gluon-logo.png?raw=true)

Based on [the Gluon API specification](https://github.com/gluon-api/gluon-api), the new Gluon library in Apache MXNet provides a clear, concise, and simple API for deep learning. It makes it easy to prototype, build, and train deep learning models without sacrificing training speed. Install the latest version of MXNet to get access to Gluon by either following these easy steps or using this simple command:

```bash
pip install mxnet
```
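A minimal, illustrative way to confirm the installation and that the Gluon namespace is importable (an added sketch, not part of the original page):

```python
# Minimal sanity check after installing MXNet (illustrative sketch).
import mxnet as mx
from mxnet import gluon, nd

print(mx.__version__)                          # installed MXNet version
net = gluon.nn.Dense(64, activation="relu")    # one fully connected Gluon layer
net.initialize()
out = net(nd.ones((2, 16)))                    # forward pass on dummy input
print(out.shape)                               # (2, 64)
```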
<br/>
<div class="boxed">
@@ -39,8 +41,8 @@ Use plug-and-play neural network building blocks, including predefined layers, o

```python
net = gluon.nn.Sequential()
# When instantiated, Sequential stores a chain of neural network layers.
# Once presented with data, Sequential executes each layer in turn, using
# the output of one layer as the input for the next
with net.name_scope():
net.add(gluon.nn.Dense(256, activation="relu")) # 1st layer (256 nodes)
@@ -81,7 +83,7 @@ def forward(self, F, inputs, tree):
<br/>
**__High Performance__**

Easily cache the neural network to achieve high performance by defining your neural network with ``HybridSequential`` and calling the ``hybridize`` method:

```python
net = nn.HybridSequential()
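# Illustrative sketch only: the original example is truncated at this point.
# A typical HybridSequential workflow (assumed here, not taken from the page)
# continues roughly like this:
#
#     net.add(nn.Dense(256, activation="relu"))
#     net.add(nn.Dense(10))
#     net.hybridize()  # cache the compiled graph so later calls run faster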