
Fixing links for website + Fixing search #16284

Merged: 9 commits, merged Sep 27, 2019. Showing changes from 5 commits.
2 changes: 1 addition & 1 deletion .github/PULL_REQUEST_TEMPLATE.md
@@ -14,7 +14,7 @@ Please feel free to remove inapplicable items for your PR.
- For user-facing API changes, API doc string has been updated.
- For new C++ functions in header files, their functionalities and arguments are documented.
- For new examples, README.md is added to explain what the example does, the source of the dataset, the expected performance on the test set, and a reference to the original paper if applicable
- Check the API doc at http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
- Check the API doc at https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
- [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

### Changes ###
2 changes: 1 addition & 1 deletion MKLDNN_README.md
@@ -15,4 +15,4 @@
<!--- specific language governing permissions and limitations -->
<!--- under the License. -->

File is moved to [docs/tutorials/mkldnn/MKLDNN_README.md](docs/tutorials/mkldnn/MKLDNN_README.md).
File is moved to [docs/tutorials/mkldnn/MKLDNN_README.md](docs/python_docs/python/tutorials/performance/backend/mkldnn/mkldnn_readme.md).
8 changes: 4 additions & 4 deletions NEWS.md
@@ -1213,7 +1213,7 @@ MKLDNN backend takes advantage of the MXNet subgraph to implement most of the possib
##### Quantization
Performance of reduced-precision (INT8) computation is also dramatically improved after the graph optimization feature is applied on CPU platforms. Various models are supported and can benefit from reduced-precision computation, including symbolic models, Gluon models, and even custom models. Users can run most of the pre-trained models with only a few commands, using the new quantization script imagenet_gen_qsym_mkldnn.py. The observed accuracy loss is less than 0.5% for popular CNN networks such as ResNet-50, Inception-BN, and MobileNet.

Please find detailed information and performance/accuracy numbers here: [MKLDNN README](https://github.com/apache/incubator-mxnet/blob/master/docs/tutorials/mkldnn/MKLDNN_README.md), [quantization README](https://github.com/apache/incubator-mxnet/tree/master/example/quantization#1) and [design proposal](https://cwiki.apache.org/confluence/display/MXNET/MXNet+Graph+Optimization+and+Quantization+based+on+subgraph+and+MKL-DNN)
Please find detailed information and performance/accuracy numbers here: [MKLDNN README](https://mxnet.incubator.apache.org/api/python/docs/tutorials/performance/backend/mkldnn/mkldnn_readme.html), [quantization README](https://github.com/apache/incubator-mxnet/tree/master/example/quantization#1) and [design proposal](https://cwiki.apache.org/confluence/display/MXNET/MXNet+Graph+Optimization+and+Quantization+based+on+subgraph+and+MKL-DNN)

### New Operators

@@ -1756,7 +1756,7 @@ For more information and examples, see [full release notes](https://cwiki.apache

### New Features - Clojure package (experimental)
- MXNet now supports the Clojure programming language. The MXNet Clojure package brings flexible and efficient GPU computing and state-of-the-art deep learning to Clojure. It enables you to write seamless tensor/matrix computation with multiple GPUs in Clojure. It also lets you construct and customize state-of-the-art deep learning models in Clojure, and apply them to tasks such as image classification and data science challenges. ([#11205](https://github.com/apache/incubator-mxnet/pull/11205))
- Check out examples and API documentation [here](http://mxnet.incubator.apache.org/api/clojure/index.html).
- Check out examples and API documentation [here](https://mxnet.incubator.apache.org/api/clojure/index.html).

### New Features - Synchronized Cross-GPU Batch Norm (experimental)
- Gluon now supports Synchronized Batch Normalization (#11502).
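  As a rough sketch of what using it looks like in Gluon (the argument shown is an assumption; see the Gluon API docs for details):

  ```python
  from mxnet.gluon.contrib.nn import SyncBatchNorm

  # num_devices: how many GPUs share batch statistics (assumed usage)
  sync_bn = SyncBatchNorm(num_devices=2)
  ```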
@@ -1786,8 +1786,8 @@ For more information and examples, see [full release notes](https://cwiki.apache
- Set environment variable `MXNET_KVSTORE_USETREE=1` to enable.
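  A minimal sketch of enabling this from Python (assuming the variable must be set before the KVStore is created):

  ```python
  import os
  os.environ['MXNET_KVSTORE_USETREE'] = '1'  # opt in to tree-based reduction

  import mxnet as mx
  kv = mx.kvstore.create('device')  # single-machine multi-GPU kvstore
  ```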

### New Features - Export MXNet models to ONNX format (experimental)
- With this feature, MXNet models can now be exported to the ONNX format ([#11213](https://github.com/apache/incubator-mxnet/pull/11213)). Currently, MXNet supports ONNX v1.2.1. [API documentation](http://mxnet.incubator.apache.org/api/python/contrib/onnx.html).
- Check out this [tutorial](http://mxnet.incubator.apache.org/tutorials/onnx/export_mxnet_to_onnx.html), which shows how to use the MXNet-to-ONNX exporter APIs to export models to ONNX protobuf so that they can be imported in other frameworks for inference.
- With this feature, MXNet models can now be exported to the ONNX format ([#11213](https://github.com/apache/incubator-mxnet/pull/11213)). Currently, MXNet supports ONNX v1.2.1. [API documentation](https://mxnet.incubator.apache.org/api/python/contrib/onnx.html).
- Check out this [tutorial](https://mxnet.incubator.apache.org/tutorials/onnx/export_mxnet_to_onnx.html), which shows how to use the MXNet-to-ONNX exporter APIs to export models to ONNX protobuf so that they can be imported in other frameworks for inference.

### New Features - TensorRT Runtime Integration (experimental)
- [TensorRT](https://developer.nvidia.com/tensorrt) provides significant acceleration of model inference on NVIDIA GPUs compared to running the full graph in MXNet using unfused GPU operators. In addition to faster fp32 inference, TensorRT optimizes fp16 inference and is capable of int8 inference (provided the quantization steps are performed). Besides increasing throughput, TensorRT significantly reduces inference latency, especially for small batches.
Expand Down
2 changes: 1 addition & 1 deletion R-package/R/zzz.R
@@ -54,7 +54,7 @@ NULL

tips <- c(
"Need help? Feel free to open an issue on https://github.com/dmlc/mxnet/issues",
"For more documents, please visit http://mxnet.io",
"For more documents, please visit https://mxnet.io",
"Use suppressPackageStartupMessages() to eliminate package startup messages."
)

2 changes: 1 addition & 1 deletion R-package/README.md
@@ -24,7 +24,7 @@ options(repos = cran)
install.packages("mxnet")
```

To use the GPU version or to use it on Linux, please follow the [Installation Guide](http://mxnet.io/install/index.html)
To use the GPU version or to use it on Linux, please follow the [Installation Guide](https://mxnet.io/install/index.html)

License
-------
2 changes: 1 addition & 1 deletion README.md
@@ -68,7 +68,7 @@ What's New
* [Version 0.8.0 Release](https://github.com/dmlc/mxnet/releases/tag/v0.8.0)
* [Updated Image Classification with new Pre-trained Models](./example/image-classification)
* [Notebooks How to Use MXNet](https://github.com/d2l-ai/d2l-en)
* [MKLDNN for Faster CPU Performance](./docs/tutorials/mkldnn/MKLDNN_README.md)
* [MKLDNN for Faster CPU Performance](docs/python_docs/python/tutorials/performance/backend/mkldnn/mkldnn_readme.md)
* [MXNet Memory Monger, Training Deeper Nets with Sublinear Memory Cost](https://github.com/dmlc/mxnet-memonger)
* [Tutorial for NVidia GTC 2016](https://github.com/dmlc/mxnet-gtc-tutorial)
* [MXNet.js: Javascript Package for Deep Learning in Browser (without server)](https://github.com/dmlc/mxnet.js/)
2 changes: 1 addition & 1 deletion ci/docker/install/ubuntu_r.sh
@@ -21,7 +21,7 @@
# the whole docker cache for the image

# Important Maintenance Instructions:
# Align changes with installation instructions in /docs/install/ubuntu_setup.md
# Align changes with installation instructions in /get_started/ubuntu_setup.md
# Align with R install script: /docs/install/install_mxnet_ubuntu_r.sh

set -ex
2 changes: 1 addition & 1 deletion ci/other/ci_deploy_doc.sh
@@ -29,4 +29,4 @@
set -ex

aws s3 sync --delete . s3://mxnet-ci-doc/$1/$2 \
&& echo "Doc is hosted at http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/$1/$2/index.html"
&& echo "Doc is hosted at https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/$1/$2/index.html"
4 changes: 2 additions & 2 deletions contrib/clojure-package/README.md
@@ -189,8 +189,8 @@ If you have previous builds and other unwanted files lying around in the working

Detailed instructions for building MXNet core from source can be found [in the MXNet installation documentation](https://mxnet.incubator.apache.org/install/index.html). The relevant sections are:

- For Ubuntu Linux: [CUDA Dependencies](https://mxnet.incubator.apache.org/install/ubuntu_setup.html#cuda-dependencies) and [Building MXNet from Source](https://mxnet.incubator.apache.org/install/ubuntu_setup.html#build-mxnet-from-source)
- For Mac OSX: [Build the Shared Library](https://mxnet.incubator.apache.org/install/osx_setup.html#build-the-shared-library)
- For Ubuntu Linux: [CUDA Dependencies](https://mxnet.incubator.apache.org/get_started/ubuntu_setup#cuda-dependencies) and [Building MXNet from Source](https://mxnet.incubator.apache.org/get_started/ubuntu_setup#build-mxnet-from-source)
- For Mac OSX: [Build the Shared Library](https://mxnet.incubator.apache.org/get_started/osx_setup.html#build-the-shared-library)

In particular, ignore all of the language-interface-specific sections.

6 changes: 3 additions & 3 deletions docs/README.md
@@ -17,8 +17,8 @@

# Building and Updating MXNet Documentation

The website is hosted at http://mxnet.incubator.apache.org/.
http://mxnet.io redirects to this site, and it is advised to use links with http://mxnet.incubator.apache.org/ instead of http://mxnet.io/.
The website is hosted at https://mxnet.incubator.apache.org/.
https://mxnet.io redirects to this site, and it is advised to use links with https://mxnet.incubator.apache.org/ instead of https://mxnet.io/.

## Website & Documentation Contributions

@@ -36,7 +36,7 @@ If you plan to contribute changes to the documentation or website, please submit

## Python Docs

MXNet's Python documentation is built with [Sphinx](http://www.sphinx-doc.org) and a variety of plugins including [pandoc](https://pandoc.org/) and [recommonmark](https://github.com/rtfd/recommonmark).
MXNet's Python documentation is built with [Sphinx](https://www.sphinx-doc.org) and a variety of plugins including [pandoc](https://pandoc.org/) and [recommonmark](https://github.com/rtfd/recommonmark).

More information on the dependencies can be found in the [CI folder's installation scripts](https://github.com/apache/incubator-mxnet/tree/master/ci/docker/install/ubuntu_docs.sh).

4 changes: 2 additions & 2 deletions docs/python_docs/python/scripts/conf.py
@@ -31,9 +31,9 @@
# General information about the project.
project = u'Apache MXNet'
author = u'%s developers' % project
copyright = u'2015-2018, %s' % author
copyright = u'2015-2019, %s' % author
github_doc_root = 'https://github.com/apache/incubator-mxnet/tree/master/docs/'
doc_root = 'http://mxnet.io/'
doc_root = 'https://mxnet.incubator.apache.org/'

# add markdown parser
source_parsers = {
13 changes: 12 additions & 1 deletion docs/python_docs/python/tutorials/deploy/export/index.rst
@@ -26,9 +26,16 @@ but you also have the option to export most models to the ONNX format.

.. card::
:title: Export ONNX Models
:link: onnx.html

Export your MXNet model to the Open Neural Exchange Format

.. card::
:title: Save / Load Parameters
:link: ../../packages/gluon/blocks/save_load_params.html

Save and load your model parameters with MXNet

Coming Soon!

.. card::
:title: Export with GluonCV
@@ -39,4 +46,8 @@ but you also have the option to export most models to the ONNX format.
.. toctree::
:hidden:
:maxdepth: 1
:glob:

*
Export Gluon CV Models <https://gluon-cv.mxnet.io/build/examples_deployment/export_network.html>
Save / Load Parameters <https://mxnet.incubator.apache.org/api/python/docs/tutorials/packages/gluon/blocks/save_load_params.html>
150 changes: 150 additions & 0 deletions docs/python_docs/python/tutorials/deploy/export/onnx.md
@@ -0,0 +1,150 @@
<!--- Licensed to the Apache Software Foundation (ASF) under one -->
<!--- or more contributor license agreements. See the NOTICE file -->
<!--- distributed with this work for additional information -->
<!--- regarding copyright ownership. The ASF licenses this file -->
<!--- to you under the Apache License, Version 2.0 (the -->
<!--- "License"); you may not use this file except in compliance -->
<!--- with the License. You may obtain a copy of the License at -->

<!--- http://www.apache.org/licenses/LICENSE-2.0 -->

<!--- Unless required by applicable law or agreed to in writing, -->
<!--- software distributed under the License is distributed on an -->
<!--- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -->
<!--- KIND, either express or implied. See the License for the -->
<!--- specific language governing permissions and limitations -->
<!--- under the License. -->

# Exporting to ONNX format

[Open Neural Network Exchange (ONNX)](https://github.com/onnx/onnx) provides an open source format for AI models. It defines an extensible computation graph model, as well as definitions of built-in operators and standard data types.

In this tutorial, we will show how you can save MXNet models to the ONNX format.

MXNet-ONNX operators coverage and features are updated regularly. Visit the [ONNX operator coverage](https://cwiki.apache.org/confluence/display/MXNET/ONNX+Operator+Coverage) page for the latest information.

Specifically, we will learn how to use the MXNet-to-ONNX exporter on pre-trained models.

## Prerequisites

To run this tutorial, you will need the following Python modules installed:
- [MXNet >= 1.3.0](http://mxnet.incubator.apache.org/install/index.html)
- [onnx]( https://github.com/onnx/onnx#installation) v1.2.1 (follow the install guide)

*Note:* The MXNet-ONNX importer and exporter follow version 7 of the ONNX operator set, which ships with ONNX v1.2.1.
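A quick way to confirm that the installed versions match these assumptions (a minimal sketch):

```python
import mxnet
import onnx

print(mxnet.__version__)  # expected: 1.3.0 or later
print(onnx.__version__)   # expected: 1.2.1 (operator set 7)
```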


```python
import mxnet as mx
import numpy as np
from mxnet.contrib import onnx as onnx_mxnet
import logging
logging.basicConfig(level=logging.INFO)
```

## Downloading a model from the MXNet model zoo

We download the pre-trained ResNet-18 [ImageNet](http://www.image-net.org/) model from the [MXNet Model Zoo](http://data.mxnet.io/models/imagenet/).
We will also download the synset file to map prediction indices to labels.

```python
# Download the pre-trained resnet model - json, params, and synset file - by running the following code.
path='http://data.mxnet.io/models/imagenet/'
[mx.test_utils.download(path+'resnet/18-layers/resnet-18-0000.params'),
mx.test_utils.download(path+'resnet/18-layers/resnet-18-symbol.json'),
mx.test_utils.download(path+'synset.txt')]
```

Now we have the ResNet-18 symbol, params, and synset files on disk.

## MXNet to ONNX exporter API

Let us describe MXNet's `export_model` API.

```python
help(onnx_mxnet.export_model)
```

Output:

```text
Help on function export_model in module mxnet.contrib.onnx.mx2onnx.export_model:

export_model(sym, params, input_shape, input_type=<type 'numpy.float32'>, onnx_file_path=u'model.onnx', verbose=False)
Exports the MXNet model file, passed as a parameter, into ONNX model.
Accepts both symbol,parameter objects as well as json and params filepaths as input.
Operator support and coverage - https://cwiki.apache.org/confluence/display/MXNET/MXNet-ONNX+Integration

Parameters
----------
sym : str or symbol object
Path to the json file or Symbol object
params : str or symbol object
Path to the params file or params dictionary. (Including both arg_params and aux_params)
input_shape : List of tuple
Input shape of the model e.g [(1,3,224,224)]
input_type : data type
Input data type e.g. np.float32
onnx_file_path : str
Path where to save the generated onnx file
verbose : Boolean
If true will print logs of the model conversion

Returns
-------
onnx_file_path : str
Onnx file path
```

The `export_model` API accepts an MXNet model in one of the following two ways.

1. MXNet sym, params objects:
    * This is useful if we are training a model. At the end of training, we just need to invoke the `export_model` function and provide the sym and params objects as inputs, along with the other attributes, to save the model in ONNX format (see the sketch after this list).
2. MXNet's exported json and params files:
    * This is useful if we have pre-trained models and we want to convert them to ONNX format.
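A minimal sketch of the first way, assuming a checkpoint named `resnet-18` saved at epoch 0 (merging arg and aux params into one dict follows the docstring above):

```python
# Load a checkpoint into in-memory sym/params objects
sym_obj, arg_params, aux_params = mx.model.load_checkpoint('resnet-18', 0)

# export_model expects a single dict holding both arg and aux params
all_params = {}
all_params.update(arg_params)
all_params.update(aux_params)

onnx_mxnet.export_model(sym_obj, all_params, [(1, 3, 224, 224)],
                        np.float32, 'model_from_objects.onnx')
```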

Since we have downloaded pre-trained model files, we will use the `export_model` API by passing the path for symbol and params files.

## How to use MXNet to ONNX exporter API

We will use the downloaded pre-trained model files (sym, params) and define input variables.

```python
# Downloaded input symbol and params files
sym = './resnet-18-symbol.json'
params = './resnet-18-0000.params'

# Standard ImageNet input - 3 channels, 224x224
input_shape = (1,3,224,224)

# Path of the output file
onnx_file = './mxnet_exported_resnet18.onnx'
```

We have defined the input parameters required for the `export_model` API. Now, we are ready to convert the MXNet model into ONNX format.

```python
# Invoke the export_model API. It returns the path of the converted onnx model
converted_model_path = onnx_mxnet.export_model(sym, params, [input_shape], np.float32, onnx_file)
```

This API returns the path of the converted model, which you can later use to import the model into other frameworks.
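For instance, the exported file can be loaded back into MXNet itself with the companion importer; a minimal round-trip sketch:

```python
# Re-import the exported ONNX model into MXNet as sym/params objects
sym_rt, arg_params_rt, aux_params_rt = onnx_mxnet.import_model(converted_model_path)
```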

## Check validity of ONNX model

Now we can check the validity of the converted ONNX model with the ONNX checker tool. The tool validates the model by checking whether the content is a valid protobuf:

```python
from onnx import checker
import onnx

# Load onnx model
model_proto = onnx.load_model(converted_model_path)

# Check if converted ONNX protobuf is valid
checker.check_graph(model_proto.graph)
```

If the converted protobuf does not conform to the ONNX proto specification, the checker throws an error; in this case, the check passes successfully.
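If you prefer to validate the whole model object rather than just its graph, the checker also exposes a model-level check:

```python
# Validates the entire ModelProto (model-level fields plus the graph)
checker.check_model(model_proto)
```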

Either check confirms that the exported model protobuf is valid. Now, the model is ready to be imported in other frameworks for inference!
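As an illustration of that last step, here is a minimal inference sketch with ONNX Runtime. This is not part of the original tutorial; it assumes the `onnxruntime` package is installed:

```python
import numpy as np
import onnxruntime as ort  # assumption: installed via `pip install onnxruntime`

sess = ort.InferenceSession(converted_model_path,
                            providers=['CPUExecutionProvider'])
input_name = sess.get_inputs()[0].name

# Dummy ImageNet-shaped input; real use would feed a preprocessed image
dummy = np.random.randn(1, 3, 224, 224).astype(np.float32)
outputs = sess.run(None, {input_name: dummy})
print(outputs[0].shape)  # (1, 1000): one score per ImageNet class
```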
2 changes: 1 addition & 1 deletion docs/python_docs/python/tutorials/deploy/index.rst
@@ -112,7 +112,7 @@ Security

.. toctree::
:hidden:
:maxdepth: 0
:maxdepth: 1

export/index
inference/index
38 changes: 24 additions & 14 deletions docs/python_docs/python/tutorials/deploy/inference/index.rst
@@ -33,25 +33,35 @@ The following tutorials will help you learn how to deploy MXNet models for inference applications.
:link: https://gluon-cv.mxnet.io/build/examples_deployment/int8_inference.html

How to use quantized GluonCV models for inference on Intel Xeon Processors to gain higher performance.
..
PLACEHOLDER

.. card::
:title: Scala and Java
:link: scala.html

How to use MXNet models in a Scala or Java environment.
The following tutorials will help you learn how to deploy MXNet models for inference applications.

.. container:: cards

.. card::
:title: Scala and Java
:link: scala.html

How to use MXNet models in a Scala or Java environment.

.. card::
:title: C++
:link: cpp.html

How to use MXNet models in a C++ environment.

.. card::
:title: C++
:link: cpp.html

How to use MXNet models in a C++ environment.
PLACEHOLDER
..
.. card::
:title: Raspberry Pi
:link: wine_detector.html

Example of running a wine detector on a raspberry pi.


.. toctree::
:hidden:
:maxdepth: 0

:maxdepth: 1
:glob:

*