
Commit 266e439

Fix link and heading issue (#3744)
* Fix heading hierarchy and class title font size

* Fix broken links
kevinthesun authored and piiswrong committed Nov 7, 2016
1 parent 9107fc3 commit 266e439
Showing 13 changed files with 44 additions and 25 deletions.
41 changes: 30 additions & 11 deletions docs/_static/mxnet.css
@@ -582,33 +582,42 @@ div.sphinxsidebar ul ul { margin-left: 15px }


.section h1 {
-  padding-top: 90px;
+  padding-top: 100px;
  margin-top: -60px;
  padding-bottom: 10px;
-  font-size: 28px;
+  font-size: 32px;
}

.section h2 {
  padding-top: 100px;
  margin-top: -60px;
  padding-bottom: 10px;
-  font-size: 25px;
+  font-size: 29px;
}

.section h3 {
-  padding-top: 80px;
-  margin-top: -64px;
+  padding-top: 100px;
+  margin-top: -60px;
  padding-bottom: 8px;
  font-size: 26px;
}

.section h4 {
-  padding-top: 80px;
-  margin-top: -64px;
+  padding-top: 100px;
+  margin-top: -60px;
  padding-bottom: 8px;
  font-size: 23px;
}

-.section ul ul {
-  display: none
+.section h5 {
+  padding-top: 100px;
+  margin-top: -60px;
+  padding-bottom: 8px;
+  font-size: 20px;
}

+div.content a[href="#module-mxnet.symbol"] + ul {
+  display: none;
+}

dt {
@@ -687,8 +696,13 @@ div.highlight-python, div.highlight-none {
padding-left: 20px
}

-.content ul {
-  padding-left: 20px
+.content ol {
+  padding-left: 20px !important
}

+.content ul ul {
+  padding-left: 40px;
+  padding-bottom: 40px
+}

/*API function formation*/
@@ -707,4 +721,9 @@ td {

td p.first {
margin-bottom: 0
}

+/*Class title font size*/
+dl.class > dt {
+  font-size: 1.2em;
+}
4 changes: 2 additions & 2 deletions docs/get_started/setup.md
@@ -65,7 +65,7 @@ If you are running Python on Amazon Linux or Ubuntu, you can use Git Bash script

For users of Python on Amazon Linux and Ubuntu operating systems, MXNet provides a set of Git Bash scripts that install all of the required MXNet dependencies and the MXNet library.

-**Note:** To contribute easy installation scripts for other operating systems and programming languages, see the [community page](http://mxnet.io/how_to/contribute.html).
+**Note:** To contribute easy installation scripts for other operating systems and programming languages, see the [community page](http://mxnet.io/community/contribute.html).

### Quick Installation on Amazon Linux

@@ -406,7 +406,7 @@ You might want to add this command to your ```~/.bashrc``` file. If you do, you
```
Pkg.add("MXNet")
```

-For more details about installing and using MXNet with Julia, see the [MXNet Julia documentation](http://mxnetjl.readthedocs.org/en/latest/user-guide/install.html).
+For more details about installing and using MXNet with Julia, see the [MXNet Julia documentation](http://dmlc.ml/MXNet.jl/latest/user-guide/install/).

#### Install the MXNet Package for Scala
There are four ways to install the MXNet package for Scala:
2 changes: 1 addition & 1 deletion docs/how_to/faq.md
@@ -63,5 +63,5 @@ memory efficient than cxxnet, purine and more flexible than minerva.


#### What is the Relation to Tensorflow
-Both MXNet and [TensorFlow](https://www.tensorflow.org/) use a computation graph abstraction, which was first used by Theano and later adopted by other packages such as CGT, Caffe2, and Purine. Currently TensorFlow adopts an optimized symbolic API, while MXNet supports a more [mixed flavour](https://mxnet.io/architecture/program_model.html), with a dynamic dependency scheduler that combines symbolic and imperative programming.
+Both MXNet and [TensorFlow](https://www.tensorflow.org/) use a computation graph abstraction, which was first used by Theano and later adopted by other packages such as CGT, Caffe2, and Purine. Currently TensorFlow adopts an optimized symbolic API, while MXNet supports a more [mixed flavour](http://mxnet.io/architecture/program_model.html), with a dynamic dependency scheduler that combines symbolic and imperative programming.
In short, MXNet is lightweight and "mixed": it gains flexibility from imperative programming while using a computation graph to stay fast and memory efficient. That said, most systems will evolve, and we expect both systems to learn and benefit from each other.
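
To make this mixed flavour concrete, here is a minimal illustrative sketch (not part of this commit) of the same computation written imperatively with NDArray and symbolically with Symbol:

```python
import mxnet as mx

# Imperative style: NDArray operations run eagerly on concrete data.
a = mx.nd.ones((2, 3))
b = a * 2 + 1                      # computed immediately
print(b.asnumpy())

# Symbolic style: declare the computation graph first, then bind data and run.
x = mx.sym.Variable("x")
y = x * 2 + 1                      # nothing is computed yet
executor = y.bind(mx.cpu(), {"x": mx.nd.ones((2, 3))})
print(executor.forward()[0].asnumpy())
```
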
2 changes: 1 addition & 1 deletion docs/tutorials/computer_vision/detection.md
@@ -22,7 +22,7 @@ and Fast R-CNN. Fast R-CNN weights are used to initialize RPN for training.
## Getting Started
* Install the Python packages `easydict`, `cv2`, and `matplotlib`. MXNet requires `numpy`.
* Install MXNet at a version no earlier than commit 8a3424e, preferably the latest master.
-  Follow the instructions at http://mxnet.readthedocs.io/en/latest/how_to/build.html. Install the Python interface.
+  Follow the instructions at http://mxnet.io/get_started/setup.html#quick-installation. Install the Python interface.
* Try out the detection demo by running `python demo.py --prefix final --epoch 0 --image myimage.jpg --gpu 0`.
  This assumes you have downloaded the pretrained network, placed the extracted file `final-0000.params` in this folder, and have an image named `myimage.jpg`.

2 changes: 1 addition & 1 deletion docs/tutorials/computer_vision/image_classification.md
@@ -10,7 +10,7 @@ width=400/>

## How to use

-First, build MXNet by following the [guide](http://mxnet.readthedocs.io/en/latest/how_to/build.html).
+First, build MXNet by following the [guide](http://mxnet.io/get_started/setup.html#quick-installation).

### Train

4 changes: 2 additions & 2 deletions docs/tutorials/computer_vision/imagenet_full.md
@@ -40,9 +40,9 @@ After packing, together with threaded buffer iterator, we can simply achieve an


Now that we have the data, we need to decide which network structure to use. We use an Inception-BN [3] style model; compared to other models such as VGG, it has fewer parameters, and fewer parameters simplify the synchronization problem. Since our problem is much more challenging than the 1k-class problem, we add suitable capacity to the original Inception-BN structure by increasing the size of the filters by a factor of 1.5 in the bottom layers of the network.
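
For illustration only (this sketch is not from the tutorial), widening a bottom layer with MXNet's symbolic API could look like the following, reading "size of the filters" as the filter count; the layer names and the base of 64 filters are hypothetical:

```python
import mxnet as mx

widen = 1.5  # capacity factor for the bottom layers
data = mx.sym.Variable("data")

# Hypothetical bottom convolution block of an Inception-BN-style network,
# with its filter count scaled up by the widening factor.
conv1 = mx.sym.Convolution(data=data, num_filter=int(64 * widen),
                           kernel=(7, 7), stride=(2, 2), pad=(3, 3), name="conv1")
bn1 = mx.sym.BatchNorm(data=conv1, name="bn1")
relu1 = mx.sym.Activation(data=bn1, act_type="relu", name="relu1")
```
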
-This, however, creates a challenge for GPU memory, as the GTX 980 has only 4 GB of RAM. We really need to minimize memory consumption to fit a larger batch size into training. To solve this problem we use techniques such as node memory reuse and in-place optimization, which reduce memory consumption by half; more details can be found in the [memory optimization note](http://mxnet.readthedocs.org/en/latest/developer-guide/note_memory.html).
+This, however, creates a challenge for GPU memory, as the GTX 980 has only 4 GB of RAM. We really need to minimize memory consumption to fit a larger batch size into training. To solve this problem we use techniques such as node memory reuse and in-place optimization, which reduce memory consumption by half; more details can be found in the [memory optimization note](http://mxnet.io/architecture/note_memory.html).

-Finally, we cannot train the model on a single GPU because the net is very large and there is a lot of data. We use data parallelism on four GPUs to train this model, which involves smart synchronization of parameters between GPUs and overlapping communication with computation. A [runtime dependency engine](https://mxnet.readthedocs.org/en/latest/developer-guide/note_engine.html) is used to simplify this task, allowing us to run the training at around 170 images/sec.
+Finally, we cannot train the model on a single GPU because the net is very large and there is a lot of data. We use data parallelism on four GPUs to train this model, which involves smart synchronization of parameters between GPUs and overlapping communication with computation. A [runtime dependency engine](http://mxnet.io/architecture/note_engine.html) is used to simplify this task, allowing us to run the training at around 170 images/sec.
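
A minimal sketch of such a four-GPU data-parallel setup with the Module API, again for illustration: it uses a toy network and placeholder data rather than the tutorial's actual code, and assumes four GPUs are available:

```python
import mxnet as mx

# Toy network and placeholder data standing in for the real model and ImageNet iterator.
data = mx.sym.Variable("data")
fc = mx.sym.FullyConnected(data=data, num_hidden=10)
net = mx.sym.SoftmaxOutput(data=fc, name="softmax")
train_iter = mx.io.NDArrayIter(mx.nd.zeros((400, 100)), mx.nd.zeros((400,)),
                               batch_size=16)

# Data parallelism: each batch is split across the four GPUs, and gradients
# are synchronized through the key-value store after every step.
module = mx.mod.Module(symbol=net, context=[mx.gpu(i) for i in range(4)])
module.fit(train_iter, optimizer="sgd", num_epoch=1, kvstore="device")
```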

Here is a learning curve of the training process:
![alt text](https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/imagenet_full/curve.png "Learning Curve")
2 changes: 1 addition & 1 deletion docs/tutorials/r/CallbackFunctionTutorial.md
@@ -6,7 +6,7 @@ which can be very useful in model training.

This tutorial is written in Rmarkdown.

-- You can directly view the hosted version of the tutorial from the [MXNet R Document](http://mxnet.io/api/r/CallbackFunctionTutorial.html)
+- You can directly view the hosted version of the tutorial from the [MXNet R Document](http://mxnet.io/tutorials/r/CallbackFunctionTutorial.html)

- You can find the Rmarkdown source [here](https://github.com/dmlc/mxnet/blob/master/R-package/vignettes/CallbackFunctionTutorial.Rmd)

2 changes: 1 addition & 1 deletion docs/tutorials/r/charRnnModel.md
@@ -8,7 +8,7 @@ Data can be found [here](https://github.com/dmlc/web-data/tree/master/mxnet/t
Preface
-------
This tutorial is written in Rmarkdown.
-- You can directly view the hosted version of the tutorial from the [MXNet R Document](http://mxnet.io/api/r/CharRnnModel.html)
+- You can directly view the hosted version of the tutorial from the [MXNet R Document](http://mxnet.io/api/tutorials/charRnnModel.html)
- You can download the Rmarkdown source [here](https://github.com/dmlc/mxnet/blob/master/R-package/vignettes/CharRnnModel.Rmd)

Load Data
2 changes: 1 addition & 1 deletion docs/tutorials/r/classifyRealImageWithPretrainedModel.md
@@ -12,7 +12,7 @@ This model gives the recent state-of-the-art prediction accuracy on the ImageNet dataset.
Preface
-------
This tutorial is written in Rmarkdown.
-- You can directly view the hosted version of the tutorial from the [MXNet R Document](http://mxnet.io/api/r/classifyRealImageWithPretrainedModel.html)
+- You can directly view the hosted version of the tutorial from the [MXNet R Document](http://mxnet.io/tutorials/r/classifyRealImageWithPretrainedModel.html)
- You can download the Rmarkdown source [here](https://github.com/dmlc/mxnet/blob/master/R-package/vignettes/classifyRealImageWithPretrainedModel.Rmd)

Package Loading
2 changes: 1 addition & 1 deletion docs/tutorials/r/fiveMinutesNeuralNetwork.md
@@ -8,7 +8,7 @@ We will show you how to do classification and regression tasks respectively. The
Preface
-------
This tutorial is written in Rmarkdown.
-- You can directly view the hosted version of the tutorial from the [MXNet R Document](http://mxnet.io/api/r/fiveMinutesNeuralNetwork.html)
+- You can directly view the hosted version of the tutorial from the [MXNet R Document](http://mxnet.io/tutorials/r/fiveMinutesNeuralNetwork.html)
- You can download the Rmarkdown source [here](https://github.com/dmlc/mxnet/blob/master/R-package/vignettes/fiveMinutesNeuralNetwork.Rmd)

## Classification
2 changes: 1 addition & 1 deletion docs/tutorials/r/mnistCompetition.md
@@ -5,7 +5,7 @@ Handwritten Digits Classification Competition
We will present the basic usage of [MXNet](https://github.com/dmlc/mxnet/tree/master/R-package) to compete in this challenge.

This tutorial is written in Rmarkdown. You can download the source [here](https://github.com/dmlc/mxnet/blob/master/R-package/vignettes/mnistCompetition.Rmd) and view a
hosted version of tutorial [here](http://mxnet.io/api/r/mnistCompetition.html).
hosted version of tutorial [here](http://mxnet.io/tutorials/r/mnistCompetition.html).

## Data Loading

2 changes: 1 addition & 1 deletion docs/zh/api/r/CallbackFunctionTutorial.md
@@ -2,7 +2,7 @@

This article outlines how to use and customize callback functions during model training. This tutorial is written in Rmarkdown.

-- You can view the hosted version of this tutorial directly on the main site: [MXNet R Document](http://mxnet.io/api/r/CallbackFunctionTutorial.html)
+- You can view the hosted version of this tutorial directly on the main site: [MXNet R Document](http://mxnet.io/tutorials/r/CallbackFunctionTutorial.html)

- You can find the Rmarkdown source [here](https://github.com/dmlc/mxnet/blob/master/R-package/vignettes/CallbackFunctionTutorial.Rmd)

2 changes: 1 addition & 1 deletion docs/zh/api/r/fiveMinutesNeuralNetwork.md
@@ -8,7 +8,7 @@

This tutorial is written in Rmd.

-- You can view the main-site version of the tutorial directly: [MXNet R Document](http://mxnet.io/api/r/fiveMinutesNeuralNetwork.html)
+- You can view the main-site version of the tutorial directly: [MXNet R Document](http://mxnet.io/tutorials/r/fiveMinutesNeuralNetwork.html)
- You can also download the Rmarkdown source from [here](https://github.com/dmlc/mxnet/blob/master/R-package/vignettes/fiveMinutesNeuralNetwork.Rmd)

## Classification
