Update autoencoder example (apache#12933)
* Fixing the autoencoder example
* adding pointer to VAE
* fix typos
* Update README.md
* Updating notebook
* Update after comments
* Update README.md
* Update README.md
* Retrigger build
* Updates after review
1 parent 8db75f7, commit a7da6ad
Showing 7 changed files with 557 additions and 579 deletions.
README.md

```diff
@@ -1,16 +1,20 @@
-# Example of Autencoder
+# Example of a Convolutional Autoencoder
 
-Autoencoder architecture is often used for unsupervised feature learning. This [link](http://ufldl.stanford.edu/tutorial/unsupervised/Autoencoders/) contains an introduction tutorial to autoencoders. This example illustrates a simple autoencoder using stack of fully-connected layers for both encoder and decoder. The number of hidden layers and size of each hidden layer can be customized using command line arguments.
+Autoencoder architectures are often used for unsupervised feature learning. This [link](http://ufldl.stanford.edu/tutorial/unsupervised/Autoencoders/) contains an introduction tutorial to autoencoders. This example illustrates a simple autoencoder using a stack of convolutional layers for both the encoder and the decoder.
 
-## Training Stages
-This example uses a two-stage training. In the first stage, each layer of encoder and its corresponding decoder are trained separately in a layer-wise training loop. In the second stage the entire autoencoder network is fine-tuned end to end.
+
+![](https://cdn-images-1.medium.com/max/800/1*LSYNW5m3TN7xRX61BZhoZA.png)
+
+([Diagram source](https://towardsdatascience.com/autoencoders-introduction-and-implementation-3f40483b0a85))
+
+
+The idea of an autoencoder is to learn to use bottleneck architecture to encode the input and then try to decode it to reproduce the original. By doing so, the network learns to effectively compress the information of the input, the resulting embedding representation can then be used in several domains. For example as featurized representation for visual search, or in anomaly detection.
 
 ## Dataset
-The dataset used in this example is [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. This example uses scikit-learn module to download this dataset.
+The dataset used in this example is [FashionMNIST](https://github.com/zalandoresearch/fashion-mnist) dataset.
 
-## Simple autoencoder example
-mnist_sae.py: this example uses a simple auto-encoder architecture to encode and decode MNIST images with size of 28x28 pixels. It contains several command line arguments. Pass -h (or --help) to view all available options. To start the training on CPU (use --gpu option for training on GPU) using default options:
+## Variational Autoencoder
+
+You can check an example of variational autoencoder [here](https://gluon.mxnet.io/chapter13_unsupervised-learning/vae-gluon.html)
 
-```
-python mnist_sae.py
-```
+
```
The remaining changed files are not rendered here: three of them were deleted, and one diff was too large to display.
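
The updated README describes the example as a simple autoencoder built from a stack of convolutional layers for both the encoder and the decoder. Below is a minimal Gluon sketch of that kind of model, not the layout of the example script itself: the layer widths, the 32-dimensional bottleneck, and the `ConvAutoencoder` name are illustrative assumptions.

```python
# Minimal convolutional autoencoder sketch in MXNet Gluon.
# Layer widths and the 32-unit bottleneck are illustrative choices,
# not the configuration used by the repository's example script.
from mxnet import gluon


class ConvAutoencoder(gluon.nn.HybridBlock):
    def __init__(self, **kwargs):
        super(ConvAutoencoder, self).__init__(**kwargs)
        with self.name_scope():
            # Encoder: 1x28x28 -> 16x14x14 -> 32x7x7 -> 32-d code
            self.encoder = gluon.nn.HybridSequential()
            self.encoder.add(
                gluon.nn.Conv2D(16, kernel_size=3, strides=2, padding=1, activation='relu'),
                gluon.nn.Conv2D(32, kernel_size=3, strides=2, padding=1, activation='relu'),
                gluon.nn.Flatten(),
                gluon.nn.Dense(32))
            # Decoder: 32-d code -> 32x7x7 -> 16x14x14 -> 1x28x28
            self.decoder = gluon.nn.HybridSequential()
            self.decoder.add(
                gluon.nn.Dense(32 * 7 * 7, activation='relu'),
                gluon.nn.HybridLambda(lambda F, x: x.reshape((0, 32, 7, 7))),
                gluon.nn.Conv2DTranspose(16, kernel_size=3, strides=2, padding=1,
                                         output_padding=1, activation='relu'),
                gluon.nn.Conv2DTranspose(1, kernel_size=3, strides=2, padding=1,
                                         output_padding=1, activation='sigmoid'))

    def hybrid_forward(self, F, x):
        return self.decoder(self.encoder(x))
```

Strided convolutions halve the spatial resolution in the encoder and transposed convolutions restore it in the decoder; the final sigmoid keeps reconstructed pixels in [0, 1], and calling `net.encoder(x)` alone yields the bottleneck embedding.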
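The new README switches the dataset from MNIST (downloaded through scikit-learn in the old example) to FashionMNIST. A sketch of loading it with Gluon's built-in dataset helpers follows; the batch size of 256 is an arbitrary choice made only for illustration.

```python
# Load FashionMNIST through Gluon's built-in dataset helpers; no external
# download code is required. The batch size of 256 is an arbitrary choice.
from mxnet import gluon
from mxnet.gluon.data.vision import transforms

to_tensor = transforms.ToTensor()  # HWC uint8 in [0, 255] -> CHW float32 in [0, 1]

train_data = gluon.data.DataLoader(
    gluon.data.vision.FashionMNIST(train=True).transform_first(to_tensor),
    batch_size=256, shuffle=True)
test_data = gluon.data.DataLoader(
    gluon.data.vision.FashionMNIST(train=False).transform_first(to_tensor),
    batch_size=256, shuffle=False)
```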
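The README's bottleneck paragraph amounts to reconstruction training: the network is penalized for failing to reproduce its own input, and the encoder output is the embedding that can later serve visual search or anomaly detection. A hedged sketch of such an end-to-end loop, reusing the hypothetical `ConvAutoencoder` and `train_data` from the snippets above with an L2 reconstruction loss; the actual example may use a different loss, optimizer, or schedule.

```python
# End-to-end reconstruction training sketch, reusing the hypothetical
# ConvAutoencoder and train_data objects defined in the snippets above.
import mxnet as mx
from mxnet import autograd, gluon

ctx = mx.cpu()                        # swap in mx.gpu() if one is available
net = ConvAutoencoder()
net.initialize(mx.init.Xavier(), ctx=ctx)
trainer = gluon.Trainer(net.collect_params(), 'adam', {'learning_rate': 1e-3})
loss_fn = gluon.loss.L2Loss()         # pixel-wise reconstruction loss

for epoch in range(5):
    total = 0.0
    for data, _ in train_data:        # labels are ignored: training is unsupervised
        data = data.as_in_context(ctx)
        with autograd.record():
            loss = loss_fn(net(data), data)
        loss.backward()
        trainer.step(data.shape[0])
        total += loss.mean().asscalar()
    print('epoch %d, mean batch loss %.4f' % (epoch, total / len(train_data)))

# The encoder output is the embedding the README refers to; it can be stored
# for visual search or scored for anomaly detection.
embeddings = net.encoder(data)
```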
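The old README (removed by this commit) described a two-stage scheme: each encoder layer and its mirror decoder are pretrained layer-wise, then the whole stack is fine-tuned end to end. A rough sketch of that scheme for a fully-connected stack is below; the 784-500-100 sizes, epoch count, and helper names are illustrative assumptions, not the configuration of the deleted scripts.

```python
# Sketch of the two-stage scheme the old README described: pretrain each
# encoder/decoder pair layer-wise, then fine-tune the full stack end to end.
# The 784-500-100 layer sizes and epoch counts are illustrative only.
import mxnet as mx
from mxnet import autograd, gluon

ctx = mx.cpu()
loss_fn = gluon.loss.L2Loss()

def fit_reconstruction(net, batches, epochs=3):
    """Train `net` to reproduce the batches yielded by `batches()`."""
    trainer = gluon.Trainer(net.collect_params(), 'adam', {'learning_rate': 1e-3})
    for _ in range(epochs):
        for x in batches():
            with autograd.record():
                loss = loss_fn(net(x), x)
            loss.backward()
            trainer.step(x.shape[0])

# Encoder/decoder pairs for a 784 -> 500 -> 100 fully-connected stack.
enc1, dec1 = gluon.nn.Dense(500, activation='relu'), gluon.nn.Dense(784)
enc2, dec2 = gluon.nn.Dense(100, activation='relu'), gluon.nn.Dense(500)
for layer in (enc1, dec1, enc2, dec2):
    layer.initialize(mx.init.Xavier(), ctx=ctx)

def raw_batches():    # flattened image batches from the train_data loader above
    return (x.reshape((0, -1)).as_in_context(ctx) for x, _ in train_data)

def coded_batches():  # features produced by the already-trained first encoder
    return (enc1(x) for x in raw_batches())

pair1 = gluon.nn.HybridSequential()
pair1.add(enc1, dec1)
pair2 = gluon.nn.HybridSequential()
pair2.add(enc2, dec2)

fit_reconstruction(pair1, raw_batches)    # stage 1: outer pair on raw pixels
fit_reconstruction(pair2, coded_batches)  # stage 1: inner pair on enc1 features
full = gluon.nn.HybridSequential()
full.add(enc1, enc2, dec2, dec1)
fit_reconstruction(full, raw_batches)     # stage 2: end-to-end fine-tuning
```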