This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

Commit

add quantization example to readme
xinyu-intel committed Feb 18, 2019
1 parent 5adb6fc commit db32f9e
Showing 1 changed file with 1 addition and 0 deletions.
1 change: 1 addition & 0 deletions example/README.md
@@ -120,6 +120,7 @@ If your tutorial depends on specific packages, simply add them to this provision
* [Model Parallelism](model-parallel) - various model parallelism examples
* [Model Parallelism with LSTM](model-parallel/lstm) - an example showing how to do model parallelism with an LSTM
* [Model Parallelism with Matrix Factorization](model-parallel/lstm) - a matrix factorization algorithm for recommendations
* [Model Quantization with Calibration Examples](quantization) - examples of quantizing an FP32 model with Intel® MKL-DNN or cuDNN
* [Module API](module) - examples with the Python Module API
* [Multi-task Learning](multi-task) - how to use MXNet for multi-task learning
* [MXNet Adversarial Variational Autoencoder](mxnet_adversarial_vae) - combines a variational autoencoder with a generative adversarial network
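The quantization entry added by this commit covers calibration-based INT8 quantization. As a rough, self-contained illustration of the underlying idea (a numpy sketch, not the MXNet API), a calibration pass records the dynamic range of FP32 tensors and derives an INT8 scale from it; all function names below are hypothetical:

```python
import numpy as np

def calibrate_scale(calib_batches):
    # Naive "max" calibration: take the largest absolute value seen
    # across all calibration batches as the FP32 dynamic range.
    max_abs = max(np.abs(b).max() for b in calib_batches)
    return max_abs / 127.0  # map that range onto signed INT8

def quantize(fp32_tensor, scale):
    # Round to the nearest INT8 step and clip to the representable range.
    return np.clip(np.round(fp32_tensor / scale), -127, 127).astype(np.int8)

def dequantize(int8_tensor, scale):
    return int8_tensor.astype(np.float32) * scale

# Usage: calibrate on a few batches, then quantize one of them.
calib = [np.random.randn(8, 16).astype(np.float32) for _ in range(4)]
scale = calibrate_scale(calib)
x = calib[0]
x_dq = dequantize(quantize(x, scale), scale)
# For values inside the calibrated range, the round-trip error
# is bounded by half a quantization step.
assert np.max(np.abs(x - x_dq)) <= scale / 2 + 1e-6
```

Real toolchains (MKL-DNN/cuDNN backends, entropy-based calibration) refine the scale selection, but the calibrate-then-scale structure is the same.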

