
vae-celebA

This repository provides a plain VAE and a modified VAE (DFC-VAE), both trained on the CelebA dataset to synthesize facial images.
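For readers unfamiliar with the objective, the sketch below shows the two terms a plain VAE optimizes: a pixel-space reconstruction error and a KL penalty that pulls the approximate posterior toward a standard normal prior (DFC-VAE replaces the pixel term with a VGG feature loss, see below). This is an illustrative example written against current TensorFlow APIs, not the repository's TF1/TensorLayer code; the function names are ours.

```python
# Illustrative plain-VAE objective (not the repo's exact code).
import tensorflow as tf

def reparameterize(z_mean, z_log_var):
    # Sample z = mean + sigma * eps so gradients flow through the encoder.
    eps = tf.random.normal(tf.shape(z_mean))
    return z_mean + tf.exp(0.5 * z_log_var) * eps

def vae_loss(x, x_recon, z_mean, z_log_var):
    # Pixel-wise reconstruction error, summed over pixels, averaged over the batch.
    recon = tf.reduce_mean(tf.reduce_sum(tf.square(x - x_recon), axis=[1, 2, 3]))
    # KL divergence between q(z|x) = N(z_mean, exp(z_log_var)) and N(0, I).
    kl = -0.5 * tf.reduce_mean(
        tf.reduce_sum(1.0 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=1))
    return recon + kl
```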

Results:

plain VAE

DFC-VAE

input image:

reconstruction:
random generation:

To run the code, you need TensorFlow and TensorLayer installed on your machine. See how to install TensorLayer.

SOME NOTES

This is the code for the paper Deep Feature Consistent Variational Autoencoder.
The loss function uses a VGG feature loss; see how to load and use a pretrained VGG-16? if you have trouble reading vgg_loss.py.
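As a rough guide to what vgg_loss.py computes, here is a minimal sketch of a VGG feature (perceptual) loss. It uses tf.keras.applications instead of the repository's TensorLayer-based loader, and the chosen layers (block1_conv1, block2_conv1, block3_conv1, the Keras counterparts of the low-level relu1_1/relu2_1/relu3_1 layers discussed in the DFC-VAE paper) and the input scaling are assumptions for illustration only.

```python
# Minimal VGG feature (perceptual) loss sketch (NOT the repo's vgg_loss.py).
import tensorflow as tf

# Assumed low-level feature layers, following the spirit of the DFC-VAE paper.
FEATURE_LAYERS = ["block1_conv1", "block2_conv1", "block3_conv1"]

def build_feature_extractor():
    # Frozen VGG-16 that returns activations from the chosen layers.
    vgg = tf.keras.applications.VGG16(include_top=False, weights="imagenet")
    vgg.trainable = False
    outputs = [vgg.get_layer(name).output for name in FEATURE_LAYERS]
    return tf.keras.Model(vgg.input, outputs)

def vgg_feature_loss(extractor, x, x_recon):
    # Assumes x and x_recon are RGB images in [0, 1]; VGG preprocessing expects [0, 255].
    fx = extractor(tf.keras.applications.vgg16.preprocess_input(x * 255.0))
    fr = extractor(tf.keras.applications.vgg16.preprocess_input(x_recon * 255.0))
    # Sum of mean squared errors between the feature maps of each chosen layer.
    return tf.add_n([tf.reduce_mean(tf.square(a - b)) for a, b in zip(fx, fr)])
```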

How to Run

First, download the CelebA dataset and the pretrained VGG-16 weights. After installing the required third-party packages, train the models with:

python train_vae.py # for plain VAE
python train_dfc_vae.py # for DFC-VAE