
Mode Seeking Generative Adversarial Networks for Diverse Image Synthesis

PyTorch implementation for our MSGAN (Miss-GAN). We propose a simple yet effective mode seeking regularization term that can be applied to arbitrary conditional generative adversarial networks across different tasks to alleviate the mode collapse issue and improve diversity.
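
The regularization term maximizes the ratio of the distance between two generated images to the distance between their corresponding latent codes, penalizing the generator for collapsing distant latent codes onto similar outputs. A minimal PyTorch sketch of such a term (the function name and epsilon value are illustrative, not the exact code in this repository):

import torch

def mode_seeking_loss(fake1, fake2, z1, z2, eps=1e-5):
    # Ratio of image distance to latent-code distance; inverted so that
    # minimizing this loss pushes the generator toward diverse outputs.
    dist_img = torch.mean(torch.abs(fake1 - fake2))
    dist_z = torch.mean(torch.abs(z1 - z2))
    return 1.0 / (dist_img / dist_z + eps)

The term is simply added to the original objective of the baseline framework with a weighting factor, so no architectural change is required.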

Contact: Qi Mao ([email protected]), Hsin-Ying Lee ([email protected]), and Hung-Yu Tseng ([email protected])

Paper

Mode Seeking Generative Adversarial Networks for Diverse Image Synthesis
Qi Mao*, Hsin-Ying Lee*, Hung-Yu Tseng*, Siwei Ma, and Ming-Hsuan Yang
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019 (* equal contribution)
[arxiv]

Citing MSGAN

If you find MSGAN useful in your research, please consider citing:

@inproceedings{MSGAN,
  author = {Mao, Qi and Lee, Hsin-Ying and Tseng, Hung-Yu and Ma, Siwei and Yang, Ming-Hsuan},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition},
  title = {Mode Seeking Generative Adversarial Networks for Diverse Image Synthesis},
  year = {2019}
}

Example Results

Usage

Prerequisites

Install

  • Clone this repo:
git clone https://github.com/HelenMao/MSGAN.git

Training Examples

Download the datasets for each task into the datasets folder

mkdir datasets

Conditioned on Label

cd MSGAN/DCGAN-Mode-Seeking
python train.py --dataroot ./datasets/Cifar10
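
If CIFAR-10 is not already available locally, torchvision can fetch it. The snippet below is a sketch that assumes the training script accepts torchvision's standard CIFAR-10 layout under ./datasets/Cifar10; check the data loader in this repository before relying on it.

from torchvision import datasets

# Download the standard CIFAR-10 archive into ./datasets/Cifar10
# (assumed layout; verify against the repository's data loader).
datasets.CIFAR10(root="./datasets/Cifar10", download=True)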

Conditioned on Image

  • Paired Data: facades and maps
  • Baseline: Pix2Pix

You can download the facades and maps datasets from the BicycleGAN [Github Project].
We employ the network architecture of BicycleGAN and follow the training process of Pix2Pix.

cd MSGAN/Pix2Pix-Mode-Seeking
python train.py --dataroot ./datasets/facades

  • Unpaired Data: Yosemite (summer <-> winter) and Cat2Dog (cat <-> dog)
  • Baseline: DRIT

You can download the datasets from the DRIT [Github Project].
Specify --concat 0 for Cat2Dog to handle translation with large shape variation.

cd MSGAN/DRIT-Mode-Seeking
python train.py --dataroot ./datasets/cat2dog

Conditioned on Text

  • Dataset: CUB-200-2011
  • Baseline: StackGAN++

You can download the datasets from the StackGAN++ [Github Project].

cd MSGAN/StackGAN++-Mode-Seeking
python main.py --cfg cfg/birds_3stages.yml

Pre-trained Models

Download the pre-trained models and save them into

./models/

Evaluation

For Pix2Pix, DRIT, and StackGAN++, please follow the instructions in the corresponding GitHub projects of the baseline frameworks for more evaluation details.
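
The paper reports diversity as the average LPIPS distance between pairs of images synthesized from the same input. A hedged sketch of that measurement using the lpips PyPI package (the package and helper below are illustrative and not part of this repository):

import itertools
import lpips  # pip install lpips

def lpips_diversity(images):
    # Average pairwise LPIPS distance over images generated from one
    # condition; `images` is an (N, 3, H, W) tensor scaled to [-1, 1].
    metric = lpips.LPIPS(net='alex')
    pairs = itertools.combinations(range(images.size(0)), 2)
    dists = [metric(images[i:i+1], images[j:j+1]).item() for i, j in pairs]
    return sum(dists) / len(dists)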

Testing Examples

DCGAN-Mode-Seeking

python test.py --dataroot ./datasets/Cifar10 --resume ./models/DCGAN-Mode-Seeking/00199.pth

Pix2Pix-Mode-Seeking

python test.py --dataroot ./datasets/facades --checkpoints_dir ./models/Pix2Pix-Mode-Seeking/facades --epoch 400
python test.py --dataroot ./datasets/maps --checkpoints_dir ./models/Pix2Pix-Mode-Seeking/maps --epoch 400

DRIT-Mode-Seeking

python test.py --dataroot ./datasets/yosemite --resume ./models/DRIT-Mode-Seeking/yosemite/01200.pth --concat 1
python test.py --dataroot ./datasets/cat2dog --resume ./models/DRIT-Mode-Seeking/cat2dog/01999.pth --concat 0

StackGAN++-Mode-Seeking

python main.py --cfg cfg/eval_birds.yml 

Reference

Quantitative Evaluation Metrics
