This repository contains cleaned-up code for reproducing the quantitative experiments in Isolating Sources of Disentanglement in Variational Autoencoders [arxiv].
To train a model:
python vae_quant.py --dataset [shapes/faces] --beta 6 --tcvae
Specify --conv to use the convolutional VAE. We used an MLP for dSprites and a convolutional architecture for 3D faces. To see all options, use the -h flag.
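For example, combining the flags above, a convolutional beta-TCVAE on the 3D faces dataset would be trained with:

python vae_quant.py --dataset faces --beta 6 --tcvae --conv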
The main computational difference between beta-VAE and beta-TCVAE is summarized in these lines.
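To illustrate that difference, here is a minimal, self-contained PyTorch sketch (not the repository's exact code; names such as kl_terms and modified_elbo are made up for illustration). It decomposes the aggregate KL term into index-code mutual information, total correlation, and dimension-wise KL using minibatch-weighted sampling: beta-VAE scales the entire KL term by beta, whereas beta-TCVAE scales only the total-correlation part.

```python
# Illustrative sketch only, under the assumptions stated above.
import math
import torch

def log_gaussian(z, mu, logvar):
    """Elementwise log density of a diagonal Gaussian N(mu, exp(logvar))."""
    return -0.5 * (math.log(2 * math.pi) + logvar + (z - mu) ** 2 / logvar.exp())

def kl_terms(z, mu, logvar, dataset_size):
    """Decompose the KL term via minibatch-weighted sampling.
    z, mu, logvar: (batch, latent_dim) tensors, with z sampled from q(z|x)."""
    batch = z.shape[0]

    # log q(z_i | x_i) and log p(z_i) for each sample in the minibatch
    logqz_condx = log_gaussian(z, mu, logvar).sum(1)
    logpz = log_gaussian(z, torch.zeros_like(z), torch.zeros_like(z)).sum(1)

    # Matrix of log q(z_i | x_j): evaluate each z_i under every x_j's posterior
    mat = log_gaussian(z.unsqueeze(1), mu.unsqueeze(0), logvar.unsqueeze(0))  # (batch, batch, dim)

    # Minibatch-weighted estimates of log q(z) and log prod_d q(z_d)
    log_nm = math.log(batch * dataset_size)
    logqz = torch.logsumexp(mat.sum(2), dim=1) - log_nm
    logqz_prodmarginals = (torch.logsumexp(mat, dim=1) - log_nm).sum(1)

    mi = (logqz_condx - logqz).mean()            # index-code mutual information
    tc = (logqz - logqz_prodmarginals).mean()    # total correlation
    dwkl = (logqz_prodmarginals - logpz).mean()  # dimension-wise KL
    return mi, tc, dwkl

def modified_elbo(recon_loglik, z, mu, logvar, dataset_size, beta=6.0, tcvae=True):
    mi, tc, dwkl = kl_terms(z, mu, logvar, dataset_size)
    if tcvae:
        # beta-TCVAE: penalize only the total correlation
        return recon_loglik - (mi + beta * tc + dwkl)
    # beta-VAE: penalize the whole KL term
    return recon_loglik - beta * (mi + tc + dwkl)
```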
To evaluate the mutual information gap (MIG) of a trained model:
python disentanglement_metrics.py --checkpt [checkpt]
To see all options, use the -h flag.
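The metric itself is simple once the per-(latent, factor) mutual informations have been estimated. Below is a hedged numpy sketch assuming you already have such a matrix and the entropy of each ground-truth factor; disentanglement_metrics.py estimates those quantities itself, and the names here are purely illustrative.

```python
# Illustrative sketch of the MIG computation, not the repository's code.
import numpy as np

def mutual_information_gap(mi_matrix, factor_entropies):
    """MIG: for each factor, the gap between the two latents with the highest
    mutual information, normalized by that factor's entropy, averaged over factors.
    mi_matrix: (num_latents, num_factors); factor_entropies: (num_factors,)."""
    sorted_mi = np.sort(mi_matrix, axis=0)[::-1]          # descending per factor
    gaps = (sorted_mi[0] - sorted_mi[1]) / factor_entropies
    return gaps.mean()
```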
dSprites: Download the npz file from here and place it into data/.
3D faces: We cannot publicly distribute this dataset due to its license. Please contact me for the data.
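As a quick sanity check that the dSprites file landed in the right place, a snippet like the following should load without errors. The filename and array keys are assumptions based on the standard dSprites release, not something defined by this repository.

```python
import numpy as np

# Filename and keys assumed from the standard dSprites release; adjust if yours differ.
data = np.load("data/dsprites_ndarray_co1sh3sc6or40x32y32_64x64.npz", allow_pickle=True)
imgs = data["imgs"]                # binary 64x64 images
latents = data["latents_values"]   # ground-truth factor values per image
print(imgs.shape, latents.shape)
```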
Email [email protected] if you have questions about the code/data.
@inproceedings{chen2018isolating,
  title={Isolating Sources of Disentanglement in Variational Autoencoders},
  author={Chen, Ricky T. Q. and Li, Xuechen and Grosse, Roger and Duvenaud, David},
  booktitle={Advances in Neural Information Processing Systems},
  year={2018}
}