Describe What to Change: A Text-guided Unsupervised Image-to-Image Translation Approach, accepted to the ACM International Conference on Multimedia (ACM MM), 2020. [Paper] | [arXiv] | [code]
See environment.yaml. We provide a user-friendly configuration method via Conda; you can create a new Conda environment with the command:
conda env create -f environment.yaml
- Official homepage of dataset: link
- Prepare the dataset in the structure below:
datasets
|__celeba
|__images
| |__xxx.jpg
| |__...
|__list_attr_celeba.txt
- CelebA: google drive (coming soon)
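As a quick sanity check of the layout above, the following is a minimal sketch (not part of this repo) that verifies the expected folders exist and parses the annotations, assuming list_attr_celeba.txt follows the standard CelebA format: an image count on the first line, a line of attribute names, then one `<filename> <+1/-1> ...` row per image:

```python
import os

def load_celeba_attrs(root="datasets/celeba"):
    """Check the expected dataset layout and parse list_attr_celeba.txt."""
    img_dir = os.path.join(root, "images")
    attr_file = os.path.join(root, "list_attr_celeba.txt")
    assert os.path.isdir(img_dir), f"missing image folder: {img_dir}"
    assert os.path.isfile(attr_file), f"missing annotations: {attr_file}"

    with open(attr_file) as f:
        num_images = int(f.readline())        # first line: image count
        attr_names = f.readline().split()     # second line: attribute names
        samples = {}
        for line in f:
            parts = line.split()
            # map each attribute name to True (+1) / False (-1)
            samples[parts[0]] = dict(
                zip(attr_names, (v == "1" for v in parts[1:]))
            )
    return num_images, attr_names, samples
```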
- Train:
sh ./scripts/train_celeba_faces.sh <gpu_id> 0
We evaluate the performance of the compared models mainly using this repo: GAN-Metrics
If our project is useful for you, please cite our paper:
@inproceedings{liu2020describe,
title={Describe What to Change: A Text-guided Unsupervised Image-to-Image Translation Approach},
author={Liu, Yahui and De Nadai, Marco and Cai, Deng and Li, Huayang and Alameda-Pineda, Xavier and Sebe, Nicu and Lepri, Bruno},
booktitle={Proceedings of the 28th ACM International Conference on Multimedia},
year={2020}
}