This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

[Doc] Add MKL-DNN operator list #14891

Merged: 33 commits, merged May 18, 2019

Commits (33)
c202b36
improve mkldnn document
TaoLv May 6, 2019
6fbfe48
fix
TaoLv May 6, 2019
15446a8
enable fusion
TaoLv May 6, 2019
8a3b9b9
Merge branch 'master' of https://github.com/apache/incubator-mxnet in…
TaoLv May 6, 2019
70f3723
adjust table
TaoLv May 6, 2019
5b3f6db
fix comments
TaoLv May 6, 2019
58e8ac1
promote mxnet-mkl package
TaoLv May 7, 2019
ce82caa
Update docs/tutorials/mkldnn/MKLDNN_README.md
aaronmarkham May 8, 2019
dab9ddf
Update docs/install/index.md
aaronmarkham May 8, 2019
2a5e2cd
Update docs/install/index.md
aaronmarkham May 8, 2019
708edee
Update docs/install/index.md
aaronmarkham May 8, 2019
fb5fcc3
Update docs/install/index.md
aaronmarkham May 8, 2019
4a61c8e
Update docs/tutorials/mkldnn/operator_list.md
aaronmarkham May 8, 2019
4c897e8
Update docs/faq/perf.md
aaronmarkham May 8, 2019
eb36414
Update docs/faq/perf.md
aaronmarkham May 8, 2019
bfc5ac0
Update docs/tutorials/mkldnn/operator_list.md
aaronmarkham May 8, 2019
7e53e8d
Update docs/tutorials/mkldnn/operator_list.md
aaronmarkham May 8, 2019
b74f3f7
fix markdown table
TaoLv May 8, 2019
d1cf743
fix comments
TaoLv May 13, 2019
762945b
Merge branch 'master' of https://github.com/apache/incubator-mxnet in…
TaoLv May 13, 2019
6b01ce3
Merge branch 'master' of https://github.com/apache/incubator-mxnet in…
TaoLv May 14, 2019
5c3d067
Update docs/faq/env_var.md
TaoLv May 15, 2019
d9fcea4
Update docs/install/index.md
TaoLv May 15, 2019
b6de387
Update docs/tutorials/mkldnn/MKLDNN_README.md
TaoLv May 15, 2019
ad5e2c8
Merge branch 'master' of https://github.com/apache/incubator-mxnet in…
TaoLv May 15, 2019
2d8e4d8
Merge branch 'doc-op-list' of https://github.com/TaoLv/incubator-mxne…
TaoLv May 15, 2019
d7c7776
Merge branch 'master' of https://github.com/apache/incubator-mxnet in…
TaoLv May 16, 2019
e2f0d03
Merge branch 'master' of https://github.com/apache/incubator-mxnet in…
TaoLv May 17, 2019
bd49e9d
Merge branch 'master' of https://github.com/apache/incubator-mxnet in…
TaoLv May 17, 2019
b783f58
change name of env variable
TaoLv May 17, 2019
0f82b3b
retrigger ci
TaoLv May 17, 2019
dd4cfa5
Merge branch 'master' into doc-op-list
szha May 18, 2019
4d36bbf
Update env_var.md
szha May 18, 2019
7 changes: 6 additions & 1 deletion docs/install/index.md
@@ -187,7 +187,12 @@ $ pip install mxnet --pre

</div> <!-- End of master-->
<hr> <!-- pip footer -->
MXNet offers MKL pip packages that will be much faster when running on Intel hardware.
MXNet offers MKL pip packages that will be much faster when running on Intel hardware. Try the following command to install it, and see [performance on Intel CPU](https://mxnet.incubator.apache.org/versions/master/faq/perf.html#intel-cpu) for performance numbers and a tuning guide.

```
$ pip install mxnet-mkl --pre
```
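After installing, a quick sanity check is to ask MXNet whether the build includes MKL-DNN. The sketch below assumes the `mxnet.runtime.Features` API (present in recent MXNet releases) and degrades gracefully when MXNet is not installed:

```python
# Sketch: report whether the installed MXNet build was compiled with MKL-DNN.
# Assumes mxnet.runtime.Features (recent MXNet releases); returns None when
# MXNet itself is not installed in this environment.
def mkldnn_enabled():
    try:
        from mxnet.runtime import Features
    except ImportError:
        return None  # MXNet is not installed
    return bool(Features().is_enabled("MKLDNN"))

print("MKL-DNN enabled:", mkldnn_enabled())
```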

Check the chart below for other options, refer to <a href="https://pypi.org/project/mxnet/">PyPI for other MXNet pip packages</a>, or <a href="validate_mxnet.html">validate your MXNet installation</a>.

<img src="https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/install/pip-packages-1.4.0.png" alt="pip packages"/>
4 changes: 3 additions & 1 deletion docs/tutorials/mkldnn/MKLDNN_README.md
@@ -19,7 +19,9 @@

Better training and inference performance can be achieved on Intel-Architecture CPUs with MXNet built with [Intel MKL-DNN](https://github.com/intel/mkl-dnn) on multiple operating systems, including Linux, Windows, and macOS.
In the following sections, you will find build instructions for MXNet with Intel MKL-DNN on Linux, MacOS and Windows.


Please find MKL-DNN optimized operators and other features in the [MKL-DNN operator list](http://mxnet.incubator.apache.org/tutorials/mkldnn/operator_list.html).

Detailed performance data collected on Intel Xeon CPUs with MXNet built with Intel MKL-DNN can be found [here](https://mxnet.incubator.apache.org/faq/perf.html#intel-cpu).


85 changes: 85 additions & 0 deletions docs/tutorials/mkldnn/operator_list.md
@@ -0,0 +1,85 @@
<!--- Licensed to the Apache Software Foundation (ASF) under one -->
<!--- or more contributor license agreements. See the NOTICE file -->
<!--- distributed with this work for additional information -->
<!--- regarding copyright ownership. The ASF licenses this file -->
<!--- to you under the Apache License, Version 2.0 (the -->
<!--- "License"); you may not use this file except in compliance -->
<!--- with the License. You may obtain a copy of the License at -->

<!--- http://www.apache.org/licenses/LICENSE-2.0 -->

<!--- Unless required by applicable law or agreed to in writing, -->
<!--- software distributed under the License is distributed on an -->
<!--- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -->
<!--- KIND, either express or implied. See the License for the -->
<!--- specific language governing permissions and limitations -->
<!--- under the License. -->

# MKL-DNN Operator List

The MXNet MKL-DNN backend provides optimized implementations for various operators covering a broad range of applications, including image classification, object detection, and natural language processing. We also provide lower-precision versions of some of these operators on CPU, leveraging Intel DL Boost technology. At the computation-graph level, a set of graph fusion and quantization passes is implemented based on the subgraph feature of MXNet. To help users understand the MKL-DNN backend better, the tables below summarize the supported operators, data types, and functionalities. As the community keeps adding new features to the MKL-DNN backend, the tables will be updated continuously.


| Operator | Function | FP32 Training (backward) | FP32 Inference | INT8 Inference |
| :--: | :--: | :--: | :--: | :--: |
| **Convolution** | 1D Convolution | Y | Y | N |
| | 2D Convolution | Y | Y | Y |
| | 3D Convolution | Y | Y | N |
| **Deconvolution** | 2D Deconvolution | Y | Y | N |
| | 3D Deconvolution | Y | Y | N |
| **FullyConnected** | 1D-4D input, flatten=True | N | Y | Y |
| | 1D-4D input, flatten=False | N | Y | Y |
| **Pooling** | 2D Max Pooling | Y | Y | Y |
| | 2D Avg Pooling | Y | Y | Y |
| **BatchNorm** | 2D BatchNorm | Y | Y | N |
| **LRN** | 2D LRN | Y | Y | N |
| **Activation** | ReLU | Y | Y | Y |
| | Tanh | Y | Y | N |
| | SoftReLU | Y | Y | N |
| | Sigmoid | Y | Y | N |
| **softmax** | 1D-4D input | Y | Y | N |
| **Softmax_output** | 1D-4D input | N | Y | N |
| **Transpose** | 1D-4D input | N | Y | N |
| **elemwise_add** | 1D-4D input | Y | Y | Y |
| **Concat** | 1D-4D input | Y | Y | Y |
| **slice** | 1D-4D input | N | Y | N |
| **Quantization** | 1D-4D input | N | N | Y |
| **Dequantization** | 1D-4D input | N | N | Y |
| **Requantization** | 1D-4D input | N | N | Y |

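To see which of the operators above are actually dispatched to MKL-DNN at runtime, the library's own verbose mode can help. The sketch below is an illustration under stated assumptions: it relies on the `MKLDNN_VERBOSE` environment variable supported by the MKL-DNN library, and it is guarded so it still runs when MXNet is not installed.

```python
import os

# MKL-DNN prints one line per executed primitive when MKLDNN_VERBOSE=1;
# set it before the library is loaded (i.e. before importing mxnet).
os.environ["MKLDNN_VERBOSE"] = "1"

try:
    import mxnet as mx  # requires an MKL-DNN build of MXNet
    x = mx.nd.random.uniform(shape=(1, 3, 32, 32))
    w = mx.nd.random.uniform(shape=(8, 3, 3, 3))
    y = mx.nd.Convolution(data=x, weight=w, kernel=(3, 3),
                          num_filter=8, no_bias=True)
    y.wait_to_read()  # verbose lines like "mkldnn_verbose,exec,convolution,..." are printed
except ImportError:
    pass  # MXNet is not installed; nothing to trace
```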

Besides direct operator optimizations, we also provide the graph fusion passes listed in the table below. Users can enable or disable these fusion patterns through environment variables.

For example, you can enable all fusion passes by:

```
export MXNET_SUBGRAPH_BACKEND=MKLDNN
```

Contributor: "All fusion passes" might not be accurate enough here. The MKLDNN backend is only applicable to the FP32 mode fusion, while MKLDNN_POST_QUANTIZE is applicable to the INT8 mode fusion (quantized_op + requantize/dequantize).

Member Author (TaoLv): Nice to know! How many subgraph backend options do we have now? What about INT8 convolution + relu fusion?

Contributor: Currently there are only two options, as mentioned above. There is no fusion pass for INT8 conv + relu; the current flow does the FP32 conv + relu fusion first, and then quantizes the FP32 sg_conv to INT8 sg_conv.

Member Author (TaoLv): Thank you for the information. I think we need to put them into this page: https://mxnet.incubator.apache.org/versions/master/faq/env_var.html

Contributor: Let's maintain them in one spot and then reference where appropriate. Either way some mention is needed in the env_var page.

And disable `Convolution + Activation(ReLU)` fusion by:

```
export MXNET_DISABLE_MKLDNN_FUSE_CONV_RELU=1
```
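The same switches can be set from Python before MXNet is imported, which is convenient in notebooks and scripts. A minimal standard-library sketch, using the variable names documented on this page:

```python
import os

# Fusion-control variables are read by MXNet's subgraph passes, so set them
# before importing mxnet to make sure they take effect.
os.environ["MXNET_SUBGRAPH_BACKEND"] = "MKLDNN"          # enable MKL-DNN fusion passes
os.environ["MXNET_DISABLE_MKLDNN_FUSE_CONV_RELU"] = "1"  # opt out of conv + ReLU fusion

# import mxnet as mx  # import only after the environment is configured
```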
Contributor: There are also other options for end users, such as MXNET_DISABLE_MKLDNN_CONV_OPT to disable all MKLDNN convolution optimization passes, and MXNET_DISABLE_MKLDNN_FC_OPT to disable all MKLDNN FullyConnected optimization passes.

| Fusion pattern | Enable | Disable |
| :--: | :--: | :--: |
| Convolution + Activation(ReLU) | MXNET_SUBGRAPH_BACKEND | MXNET_DISABLE_MKLDNN_FUSE_CONV_RELU |
| Convolution + elemwise_add | MXNET_SUBGRAPH_BACKEND | MXNET_DISABLE_MKLDNN_FUSE_CONV_SUM |
| Convolution + BatchNorm | MXNET_SUBGRAPH_BACKEND | MXNET_DISABLE_MKLDNN_FUSE_CONV_BN |
| Convolution + Activation(ReLU) + elemwise_add | MXNET_SUBGRAPH_BACKEND | |
| Convolution + BatchNorm + Activation(ReLU) + elemwise_add | MXNET_SUBGRAPH_BACKEND | |
| FullyConnected + Activation(ReLU) | MXNET_SUBGRAPH_BACKEND | MXNET_DISABLE_MKLDNN_FUSE_FC_RELU |
| Convolution (INT8) + re-quantization | MXNET_SUBGRAPH_BACKEND | |
| FullyConnected (INT8) + re-quantization | MXNET_SUBGRAPH_BACKEND | |

Contributor: A pattern is missing here: FullyConnected (INT8) + re-quantization + de-quantization.

To try these features out, you can install the MXNet MKL-DNN backend through pip:

```
pip install mxnet-mkl
```

To build the MXNet MKL-DNN backend from source code, please refer to the [MKL-DNN backend readme](http://mxnet.incubator.apache.org/tutorials/mkldnn/MKLDNN_README.html).

For performance numbers, please refer to [performance on Intel CPU](https://mxnet.incubator.apache.org/versions/master/faq/perf.html#intel-cpu).
1 change: 1 addition & 0 deletions tests/tutorials/test_sanity_tutorials.py
@@ -35,6 +35,7 @@
'gluon/index.md',
'mkldnn/index.md',
'mkldnn/MKLDNN_README.md',
'mkldnn/operator_list.md',
'nlp/index.md',
'onnx/index.md',
'python/index.md',