StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation



PyTorch implementation of StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation. StarGAN can flexibly translate an input image to any desired target domain using only a single generator and a discriminator.
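As a rough illustration (hypothetical names, not this repository's actual API), one conditional generator G(x, c) can cover every target domain simply by changing the one-hot label c:

import torch

# Hypothetical: a single generator serves all domains because the target
# domain is an input (a one-hot vector c), not a property baked into G.
def translate_to_all_domains(G, x, c_dim=5):
    outputs = []
    for d in range(c_dim):
        c = torch.zeros(x.size(0), c_dim)
        c[:, d] = 1.0              # select target domain d
        outputs.append(G(x, c))    # same G for every target domain
    return outputs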

Authors

Yunjey Choi, Minje Choi, Munyoung Kim, Jung-Woo Ha, Sung Kim, and Jaegul Choo
Korea University, Clova AI Research (NAVER), The College of New Jersey, HKUST
 

Results

Facial Attribute Transfer on CelebA

The images are generated by StarGAN trained on the CelebA dataset.

Facial Expression Synthesis on RaFD

The images are generated by StarGAN trained on the RaFD dataset.

Facial Expression Synthesis on CelebA

The images are generated by StarGAN trained on both the CelebA and RaFD datasets.

 

Model Description

Training within a Single Dataset

Overview of StarGAN, consisting of two modules, a discriminator D and a generator G. (a) D learns to distinguish between real and fake images and to classify real images into their corresponding domains. (b) G takes both an image and a target domain label as input and generates a fake image; the target domain label is spatially replicated and concatenated with the input image. (c) G tries to reconstruct the original image from the fake image given the original domain label. (d) G tries to generate images that are indistinguishable from real images and classifiable as the target domain by D.
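Below is a simplified PyTorch sketch of steps (a) to (d), assuming a generator G that takes the channel-wise concatenation of the image and the replicated label, and a discriminator D that returns a real/fake score and per-domain classification logits. The gradient penalty, optimizer updates, and other details of the actual training code are omitted, and the loss weights are illustrative.

import torch
import torch.nn.functional as F

def train_step(G, D, x_real, label_org, label_trg, lambda_cls=1.0, lambda_rec=10.0):
    # (b) Spatially replicate the target label and concatenate it with the image.
    c_trg = label_trg.view(label_trg.size(0), -1, 1, 1)
    c_trg = c_trg.expand(-1, -1, x_real.size(2), x_real.size(3))
    x_fake = G(torch.cat([x_real, c_trg], dim=1))

    # (a) D: real/fake score plus domain classification of the real image.
    out_src_real, out_cls_real = D(x_real)
    out_src_fake, _ = D(x_fake.detach())
    d_loss = -out_src_real.mean() + out_src_fake.mean() \
             + lambda_cls * F.binary_cross_entropy_with_logits(out_cls_real, label_org)

    # (c) Reconstruct the original image from the fake image and the original label.
    c_org = label_org.view(label_org.size(0), -1, 1, 1)
    c_org = c_org.expand(-1, -1, x_real.size(2), x_real.size(3))
    x_rec = G(torch.cat([x_fake, c_org], dim=1))

    # (d) G: fool D, be classified as the target domain, and reconstruct the input.
    out_src_fake, out_cls_fake = D(x_fake)
    g_loss = -out_src_fake.mean() \
             + lambda_cls * F.binary_cross_entropy_with_logits(out_cls_fake, label_trg) \
             + lambda_rec * torch.abs(x_real - x_rec).mean()
    return d_loss, g_loss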

Training with Multiple Datasets

Overview of StarGAN when training with both CelebA and RaFD. (a) to (d) show the training process using CelebA, and (e) to (h) show the training process using RaFD. (a), (e) The discriminator D learns to distinguish between real and fake images and to minimize the classification error only for the known labels. (b), (c), (f), (g) When the mask vector (purple) is [1, 0], the generator G learns to focus on the CelebA label (yellow) and ignore the RaFD label (green) to perform image-to-image translation, and vice versa when the mask vector is [0, 1]. (d), (h) G tries to generate images that are both indistinguishable from real images and classifiable by D as belonging to the target domain.
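As a sketch (hypothetical helper, not taken from this repository), the joint conditioning vector for the two-dataset setting zeroes out the label block of the inactive dataset and appends the two-dimensional mask:

import torch

def make_joint_label(label, dataset, c_celeba=5, c_rafd=8):
    # label: (batch, c_celeba) for CelebA or (batch, c_rafd) for RaFD.
    batch = label.size(0)
    if dataset == 'CelebA':
        mask = torch.tensor([[1.0, 0.0]]).repeat(batch, 1)      # focus on CelebA labels
        return torch.cat([label, torch.zeros(batch, c_rafd), mask], dim=1)
    else:  # 'RaFD'
        mask = torch.tensor([[0.0, 1.0]]).repeat(batch, 1)      # focus on RaFD labels
        return torch.cat([torch.zeros(batch, c_celeba), label, mask], dim=1)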

 

Prerequisites

  • Python
  • PyTorch

Getting Started

1. Clone the repository

$ git clone https://github.com/yunjey/StarGAN.git
$ cd StarGAN/

2. Download the dataset

(i) CelebA dataset
$ bash download.sh
(ii) RaFD dataset

Because RaFD is not a public dataset, you must first request access to the dataset from the Radboud Faces Database website. Then, you need to create the folder structure as described here.

3. Train StarGAN

(i) Training with CelebA
$ python main.py --mode='train' --dataset='CelebA' --c_dim=5 --image_size=128 \
                 --sample_path='stargan_celebA/samples' --log_path='stargan_celebA/logs' \
                 --model_save_path='stargan_celebA/models' --result_path='stargan_celebA/results'
(ii) Training with RaFD
$ python main.py --mode='train' --dataset='RaFD' --c_dim=8 --image_size=128 \
                 --num_epochs=200 --num_epochs_decay=100 --sample_step=200 --model_save_step=200 \
                 --sample_path='stargan_rafd/samples' --log_path='stargan_rafd/logs' \
                 --model_save_path='stargan_rafd/models' --result_path='stargan_rafd/results'
(iii) Training with CelebA+RaFD
$ python main.py --mode='train' --dataset='Both' --image_size=256 --num_iters=200000 --num_iters_decay=100000 \
                 --sample_path='stargan_both/samples' --log_path='stargan_both/logs' \
                 --model_save_path='stargan_both/models' --result_path='stargan_both/results'

4. Test StarGAN

(i) Facial attribute transfer on CelebA
$ python main.py --mode='test' --dataset='CelebA' --c_dim=5 --image_size=128 --test_model='20_1000' \
                 --sample_path='stargan_celebA/samples' --log_path='stargan_celebA/logs' \
                 --model_save_path='stargan_celebA/models' --result_path='stargan_celebA/results'
(ii) Facial expression synthesis on RaFD
$ python main.py --mode='test' --dataset='RaFD' --c_dim=8 --image_size=128 \
                 --test_model='200_200' --rafd_image_path='data/RaFD/test' \
                 --sample_path='stargan_rafd/samples' --log_path='stargan_rafd/logs' \
                 --model_save_path='stargan_rafd/models' --result_path='stargan_rafd/results'
(iii) Facial expression synthesis on CelebA
$ python main.py --mode='test' --dataset='Both' --image_size=256 --test_model='200000' \
                 --sample_path='stargan_both/samples' --log_path='stargan_both/logs' \
                 --model_save_path='stargan_both/models' --result_path='stargan_both/results'

 

Citation

@article{choi2017stargan,
  title   = {StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation},
  author  = {Choi, Yunjey and Choi, Minje and Kim, Munyoung and Ha, Jung-Woo and Kim, Sunghun and Choo, Jaegul},
  journal = {arXiv preprint arXiv:1711.09020},
  year    = {2017}
}

 

Acknowledgement

This work was mainly done while the first author did a research internship at Clova AI Research, NAVER (CLAIR). We also thank all the researchers at CLAIR, especially Donghyun Kwak, for insightful discussions.
