Merge pull request #16 from apache/master
sync to latest code
Hao Li authored Mar 25, 2019
2 parents bb6e9f1 + 056fce4 commit c58192d
Showing 38 changed files with 1,556 additions and 406 deletions.
2 changes: 1 addition & 1 deletion README.md
Original file line number Diff line number Diff line change
Expand Up @@ -65,7 +65,7 @@ What's New
* [Version 0.9.1 Release (NNVM refactor)](./docs/architecture/release_note_0_9.md) - NNVM branch is merged into master now. An official release will be made soon.
* [Version 0.8.0 Release](https://github.com/dmlc/mxnet/releases/tag/v0.8.0)
* [Updated Image Classification with new Pre-trained Models](./example/image-classification)
* [Notebooks How to Use MXNet](https://github.com/zackchase/mxnet-the-straight-dope)
* [Notebooks How to Use MXNet](https://github.com/d2l-ai/d2l-en)
* [MKLDNN for Faster CPU Performance](./docs/tutorials/mkldnn/MKLDNN_README.md)
* [MXNet Memory Monger, Training Deeper Nets with Sublinear Memory Cost](https://github.com/dmlc/mxnet-memonger)
* [Tutorial for NVidia GTC 2016](https://github.com/dmlc/mxnet-gtc-tutorial)
Expand Down
4 changes: 2 additions & 2 deletions docs/_static/mxnet-theme/navbar.html
Original file line number Diff line number Diff line change
Expand Up @@ -11,7 +11,7 @@ <h1 id="logo-wrap">
<a href="#" class="main-nav-link dropdown-toggle" data-toggle="dropdown" role="button" aria-haspopup="true" aria-expanded="true">Gluon <span class="caret"></span></a>
<ul id="package-dropdown-menu" class="dropdown-menu navbar-menu">
<li><a class="main-nav-link" href="{{url_root}}gluon/index.html">About</a></li>
<li><a class="main-nav-link" href="http://gluon.mxnet.io">The Straight Dope (Tutorials)</a></li>
<li><a class="main-nav-link" href="https://www.d2l.ai/">Dive into Deep Learning</a></li>
<li><a class="main-nav-link" href="https://gluon-cv.mxnet.io">GluonCV Toolkit</a></li>
<li><a class="main-nav-link" href="https://gluon-nlp.mxnet.io/">GluonNLP Toolkit</a></li>
</ul>
Expand Down Expand Up @@ -108,7 +108,7 @@ <h1 id="logo-wrap">
</li>
</ul>
</div>

<div class="plusIcon dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button"><span class="glyphicon glyphicon-plus" aria-hidden="true"></span></a>
<ul id="plusMenu" class="dropdown-menu dropdown-menu-right"></ul>
Expand Down
2 changes: 1 addition & 1 deletion docs/api/perl/index.md
Original file line number Diff line number Diff line change
Expand Up @@ -30,7 +30,7 @@ In addition please refer to [excellent metacpan doc interface](https://metacpan.
[MXNet Python API Documentation](http://mxnet.io/api/python/index.html).

AI::MXNet supports the new imperative, PyTorch-like Gluon MXNet interface. Please get acquainted with this new interface
at [Deep Learning - The Straight Dope](http://gluon.mxnet.io/).
at [Dive into Deep Learning](https://www.d2l.ai/).

For specific Perl Gluon usage, please refer to the Perl examples and tests directories on GitHub; rest assured that the Python and Perl usage are kept extremely close, so the Python Gluon docs and examples remain easy to follow.
Expand Down
2 changes: 1 addition & 1 deletion docs/community/ecosystem.md
Original file line number Diff line number Diff line change
Expand Up @@ -41,7 +41,7 @@ Community contributions to MXNet have added many new valuable features and funct

* [Gluon 60 Minute Crash Course](https://gluon-crash-course.mxnet.io/) - deep learning practitioners can learn Gluon quickly with these six 10-minute tutorials.
- [YouTube Series](https://www.youtube.com/playlist?list=PLkEvNnRk8uVmVKRDgznk3o3LxmjFRaW7s)
* [The Straight Dope](https://gluon.mxnet.io/) - a series of notebooks designed to teach deep learning using the Gluon Python API for MXNet.
* [Dive into Deep Learning](https://www.d2l.ai/) - a series of notebooks designed to teach deep learning using the Gluon Python API for MXNet.


## MXNet APIs
Expand Down
14 changes: 7 additions & 7 deletions docs/gluon/index.md
Original file line number Diff line number Diff line change
Expand Up @@ -25,7 +25,7 @@ To get started with Gluon, check out the following resources and tutorials:
* [60-minute Gluon Crash Course](https://gluon-crash-course.mxnet.io/) - six 10-minute lessons on using Gluon
* [GluonCV Toolkit](https://gluon-cv.mxnet.io/) - implementations of state of the art deep learning algorithms in **Computer Vision (CV)**
* [GluonNLP Toolkit](https://gluon-nlp.mxnet.io/) - implementations of state of the art deep learning algorithms in **Natural Language Processing (NLP)**
* [Gluon: The Straight Dope](https://gluon.mxnet.io/) - notebooks designed to teach deep learning from the ground up, all using the Gluon API
* [Dive into Deep Learning](https://www.d2l.ai/) - notebooks designed to teach deep learning from the ground up, all using the Gluon API

<br/>
<div class="boxed">
Expand All @@ -42,14 +42,14 @@ To get started with Gluon, checkout the following resources and tutorials:

<br/>
<div class="boxed">
The Straight Dope
Dive into Deep Learning
</div>

The community is also working on parallel effort to create a foundational resource for learning about machine learning. The Straight Dope is a book composed of introductory as well as advanced tutorials – all based on the Gluon interface. For example,
The community is also working on a parallel effort to create a foundational resource for learning about machine learning. Dive into Deep Learning is a book composed of introductory as well as advanced tutorials, all based on the Gluon interface. For example,

* [Learn about machine learning basics](http://gluon.mxnet.io/chapter01_crashcourse/introduction.html).
* [Develop and train a simple neural network model](http://gluon.mxnet.io/chapter03_deep-neural-networks/mlp-gluon.html).
* [Implement a Recurrent Neural Network (RNN) model for Language Modeling](http://gluon.mxnet.io/chapter05_recurrent-neural-networks/simple-rnn.html).
* [Learn about machine learning basics](https://www.d2l.ai/chapter_introduction/intro.html).
* [Develop and train a simple neural network model](https://www.d2l.ai/chapter_multilayer-perceptrons/mlp-scratch.html).
* [Implement a Recurrent Neural Network (RNN) model for Language Modeling](https://www.d2l.ai/chapter_recurrent-neural-networks/rnn-scratch.html).

<br/>
<div class="boxed">
Expand Down Expand Up @@ -124,4 +124,4 @@ net.hybridize()
* [60-minute Gluon Crash Course](https://gluon-crash-course.mxnet.io/)
* [GluonCV Toolkit](https://gluon-cv.mxnet.io/)
* [GluonNLP Toolkit](https://gluon-nlp.mxnet.io/)
* [Gluon: The Straight Dope](https://gluon.mxnet.io/)
* [Dive into Deep Learning](https://www.d2l.ai)
20 changes: 20 additions & 0 deletions docs/tutorials/gluon/customop.md
Original file line number Diff line number Diff line change
Expand Up @@ -30,6 +30,7 @@ Custom operator in python is easy to develop and good for prototyping, but may h
import numpy as np
import mxnet as mx
from mxnet import gluon, autograd
import os
```

## Parameter-less operators
Expand Down Expand Up @@ -214,5 +215,24 @@ y = dense(x)
print(y)
```

## Using custom operators with fork
On Linux systems, the default method `multiprocessing` uses to create a new process is `fork`. If there are unfinished asynchronous custom operations when forking, the program will block because of the Python GIL. Always use a sync call such as `wait_to_read` or `waitall` before calling fork.

```python
x = mx.nd.array([0, 1, 2, 3])
y = mx.nd.Custom(x, op_type='sigmoid')
# unfinished async sigmoid operation will cause blocking
os.fork()
```

Handling this correctly inside MXNet would make it depend on libpython, so the current workaround is to ensure that all custom operations have finished executing before the process forks.

```python
x = mx.nd.array([0, 1, 2, 3])
y = mx.nd.Custom(x, op_type='sigmoid')
# force execution by reading y
print(y.asnumpy())
os.fork()
```

<!-- INSERT SOURCE DOWNLOAD BUTTONS -->
4 changes: 2 additions & 2 deletions docs/tutorials/gluon/gluon_from_experiment_to_deployment.md
Original file line number Diff line number Diff line change
Expand Up @@ -322,9 +322,9 @@ You can also find more ways to run inference and deploy your models here:
## References

1. [Transfer Learning for Oxford102 Flower Dataset](https://github.com/Arsey/keras-transfer-learning-for-oxford102)
2. [Gluon book on fine-tuning](https://gluon.mxnet.io/chapter08_computer-vision/fine-tuning.html)
2. [Gluon book on fine-tuning](https://www.d2l.ai/chapter_computer-vision/fine-tuning.html)
3. [Gluon CV transfer learning tutorial](https://gluon-cv.mxnet.io/build/examples_classification/transfer_learning_minc.html)
4. [Gluon crash course](https://gluon-crash-course.mxnet.io/)
5. [Gluon CPP inference example](https://github.com/apache/incubator-mxnet/blob/master/cpp-package/example/inference/)

<!-- INSERT SOURCE DOWNLOAD BUTTONS -->
<!-- INSERT SOURCE DOWNLOAD BUTTONS -->
7 changes: 5 additions & 2 deletions docs/tutorials/gluon/hybrid.md
Original file line number Diff line number Diff line change
Expand Up @@ -154,7 +154,10 @@ You can use other language bindings to load them. You can also load them back
to gluon with `SymbolBlock`:

```python
net2 = gluon.SymbolBlock.imports('model-symbol.json', ['data'], 'model-0001.params')
import warnings

with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    net2 = gluon.SymbolBlock.imports('model-symbol.json', ['data'], 'model-0001.params')
```

## Operators that do not work with hybridize
Expand Down Expand Up @@ -259,4 +262,4 @@ For example, avoid writing `x += y` and use `x = x + y`, otherwise you will get

The recommended practice is to utilize the flexibility of the imperative NDArray API during experimentation. Once you have finalized your model, make the necessary changes mentioned above so you can call the `hybridize` function to improve performance.

<!-- INSERT SOURCE DOWNLOAD BUTTONS -->
<!-- INSERT SOURCE DOWNLOAD BUTTONS -->
4 changes: 3 additions & 1 deletion docs/tutorials/gluon/save_load_params.md
Original file line number Diff line number Diff line change
Expand Up @@ -260,7 +260,9 @@ One of the main reasons to serialize model architecture into a JSON file is to l
Serialized Hybrid networks (saved as .JSON and .params files) can be loaded and used inside the Python frontend using `gluon.nn.SymbolBlock`. To demonstrate this, let's load the network we serialized above.

```python
deserialized_net = gluon.nn.SymbolBlock.imports("lenet-symbol.json", ['data'], "lenet-0001.params", ctx=ctx)
import warnings
with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    deserialized_net = gluon.nn.SymbolBlock.imports("lenet-symbol.json", ['data'], "lenet-0001.params", ctx=ctx)
```

`deserialized_net` now contains the network we deserialized from files. Let's test the deserialized network to make sure it works.
Expand Down
7 changes: 4 additions & 3 deletions docs/tutorials/index.md
Original file line number Diff line number Diff line change
Expand Up @@ -52,7 +52,7 @@ Another great resource for learning MXNet is our [examples section](https://gith

We have two types of API available for Python: Gluon APIs and Module APIs. [See here](/api/python/gluon/gluon.html) for a comparison.

A comprehensive introduction to Gluon can be found at [The Straight Dope](http://gluon.mxnet.io/). Structured like a book, it build up from first principles of deep learning and take a theoretical walkthrough of progressively more complex models using the Gluon API. Also check out the [60-Minute Gluon Crash Course](http://gluon-crash-course.mxnet.io/) if you're short on time or have used other deep learning frameworks before.
A comprehensive introduction to Gluon can be found at [Dive into Deep Learning](http://www.d2l.ai/). Structured like a book, it builds up from the first principles of deep learning and takes a theoretical walkthrough of progressively more complex models using the Gluon API. Also check out the [60-Minute Gluon Crash Course](http://gluon-crash-course.mxnet.io/) if you're short on time or have used other deep learning frameworks before.

Use the tutorial selector below to filter to the relevant tutorials. You might see a download link in the top right corner of some tutorials. Use this to download a Jupyter Notebook version of the tutorial, and re-run and adjust the code as you wish.

Expand Down Expand Up @@ -90,8 +90,9 @@ Select API:&nbsp;
* [Learning Rate Schedules](/tutorials/gluon/learning_rate_schedules.html)
* [Advanced Learning Rate Schedules](/tutorials/gluon/learning_rate_schedules_advanced.html)
* [Profiling MXNet Models](/tutorials/python/profiler.html)
* [Hybridize Gluon models with control flows](/tutorials/control_flow/ControlFlowTutorial.html)
* [Module to Gluon API](/tutorials/python/module_to_gluon.html)<span style="color:red"> (new!)</span>
* [Gluon end to end from training to inference](/tutorials/gluon/gluon_from_experiment_to_deployment.html)

* API Guides
* Core APIs
* NDArray
Expand All @@ -114,6 +115,7 @@ Select API:&nbsp;
* [HybridBlocks](/tutorials/gluon/hybrid.html) ([Alternative](http://gluon.mxnet.io/chapter07_distributed-learning/hybridize.html) <img src="https://upload.wikimedia.org/wikipedia/commons/6/6a/External_link_font_awesome.svg" alt="External link" height="15px" style="margin: 0px 0px 3px 3px;"/>)
* [Block Naming](/tutorials/gluon/naming.html)
* [Custom Operators](/tutorials/gluon/customop.html)
* [Control Flow operators](/tutorials/control_flow/ControlFlowTutorial.html)<span style="color:red"> (new!)</span>
* Autograd
* [AutoGrad API](/tutorials/gluon/autograd.html)
* [AutoGrad API with chain rule](http://gluon.mxnet.io/chapter01_crashcourse/autograd.html) <img src="https://upload.wikimedia.org/wikipedia/commons/6/6a/External_link_font_awesome.svg" alt="External link" height="15px" style="margin: 0px 0px 3px 3px;"/>
Expand All @@ -135,7 +137,6 @@ Select API:&nbsp;
* [MNIST Handwritten Digit Classification](/tutorials/python/mnist.html)
* [Movie Review Classification using Convolutional Networks](/tutorials/nlp/cnn.html)
* [Generative Adversarial Networks (GANs)](/tutorials/unsupervised_learning/gan.html)
* [Recommender Systems using Matrix Factorization](/tutorials/python/matrix_factorization.html)
* [Speech Recognition with Connectionist Temporal Classification Loss](/tutorials/speech_recognition/ctc.html)
* Practitioner Guides
* [Predicting on new images using a pre-trained ImageNet model](/tutorials/python/predict_image.html)
Expand Down
4 changes: 3 additions & 1 deletion docs/tutorials/onnx/fine_tuning_gluon.md
Original file line number Diff line number Diff line change
Expand Up @@ -279,7 +279,9 @@ We create a symbol block that is going to hold all our pre-trained layers, and a


```python
pre_trained = gluon.nn.SymbolBlock(outputs=new_sym, inputs=mx.sym.var('data_0'))
import warnings
with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    pre_trained = gluon.nn.SymbolBlock(outputs=new_sym, inputs=mx.sym.var('data_0'))
net_params = pre_trained.collect_params()
for param in new_arg_params:
    if param in net_params:
Expand Down
5 changes: 4 additions & 1 deletion docs/tutorials/onnx/inference_on_onnx_model.md
Original file line number Diff line number Diff line change
Expand Up @@ -144,7 +144,9 @@ print(data_names)
And load them into a MXNet Gluon symbol block.

```python
net = gluon.nn.SymbolBlock(outputs=sym, inputs=mx.sym.var('data_0'))
import warnings
with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    net = gluon.nn.SymbolBlock(outputs=sym, inputs=mx.sym.var('data_0'))
net_params = net.collect_params()
for param in arg_params:
    if param in net_params:
Expand Down Expand Up @@ -247,6 +249,7 @@ Lucky for us, the [Caltech101 dataset](http://www.vision.caltech.edu/Image_Datas

We show that in our next tutorial:


- [Fine-tuning an ONNX Model using the modern imperative MXNet/Gluon](http://mxnet.incubator.apache.org/tutorials/onnx/fine_tuning_gluon.html)

<!-- INSERT SOURCE DOWNLOAD BUTTONS -->
