FashionTex

The official implementation of SIGGRAPH 2023 conference paper, FashionTex: Controllable Virtual Try-on with Text and Texture. (https://arxiv.org/abs/2305.04451)

TODO:

  • Training Code
  • Processed data
  • Test Code
  • ID Recovery Module

Requirements

  1. Create a conda virtual environment and activate it:
conda create -n fashiontex python=3.8
conda activate fashiontex
  2. Install the required packages:
pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu116
pip install ftfy regex tqdm gdown
pip install pytorch-lightning==1.4.2
pip install git+https://github.com/openai/CLIP.git
  3. Install the required packages for DenseCLIP.
  4. Download the pretrained StyleGAN-Human weights (stylegan_human_v2_1024.pkl) from https://github.com/stylegan-human/StyleGAN-Human.
  5. Download the pretrained IR-SE50 model from TreB1eN; it is used by our ID loss during training.
  6. Download the pretrained DenseCLIP weights.

The default path for pretrained weights is ./pretrained. You can change it in mapper/options/train_options.py.
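Before training, it can help to verify that the downloaded weights are where the code expects them. The sketch below checks the default ./pretrained directory; only stylegan_human_v2_1024.pkl is named in this README, so the other two filenames are placeholders that you should match to your actual downloads:

```python
import os

# Expected contents of the default ./pretrained directory.
# Only stylegan_human_v2_1024.pkl is named above; the other filenames
# are hypothetical placeholders -- rename to match your downloads.
EXPECTED_WEIGHTS = [
    "stylegan_human_v2_1024.pkl",  # StyleGAN-Human generator
    "ir_se50.pth",                 # placeholder name for the IR-SE50 ID-loss model
    "denseclip.pth",               # placeholder name for the DenseCLIP weight
]

def missing_weights(pretrained_dir="./pretrained", expected=EXPECTED_WEIGHTS):
    """Return the expected weight files not yet present in pretrained_dir."""
    return [f for f in expected
            if not os.path.isfile(os.path.join(pretrained_dir, f))]
```

If `missing_weights()` returns a non-empty list, download the listed files (or update the paths in mapper/options/train_options.py) before running training.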

Prepare data

The processed data can be found in link. Check README.md and mapper/options/train_options.py to see where to place these files. This project uses the DeepFashion-MultiModal dataset, and we use e4e to invert images into latent space.

  1. Download the DeepFashion-MultiModal dataset.
  2. To use the pre-trained StyleGAN-Human model, align the images following Aligned raw images. Put the aligned images in data/data_split/aligned.

If you want to prepare your own data instead of using the processed data above, follow these steps:

  1. Invert the aligned images: the simplest way is to follow Invert real image with PTI; we only need the output embedding "0.pt" in 'outputs/pti/'. (Since only the output of e4e is needed, you can comment out the finetuning code to save time.)
  2. Gather the processed data for training with a script.
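The gathering step above might look like the following sketch, which pairs each aligned image with its inverted latent. The directory layout, the one-latent-per-image naming scheme, and the output filename are assumptions for illustration, not the project's actual script:

```python
import json
import os

def gather_training_data(aligned_dir, latents_dir, out_json="train_data.json"):
    """Pair each aligned image with its inverted e4e latent.

    Assumes one latent file per image, named <image_stem>.pt in latents_dir;
    adapt this to however you saved the PTI/e4e outputs (e.g. "0.pt" per
    per-image output folder).
    """
    records = []
    for name in sorted(os.listdir(aligned_dir)):
        stem, ext = os.path.splitext(name)
        if ext.lower() not in {".jpg", ".png"}:
            continue
        latent = os.path.join(latents_dir, stem + ".pt")
        if os.path.isfile(latent):  # skip images without an inversion
            records.append({"img": os.path.join(aligned_dir, name),
                            "latent": latent})
    with open(out_json, "w") as f:
        json.dump(records, f, indent=2)
    return records
```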

Training

You can set the GPU number in run.sh. To change the data, weights, output path, or other settings, edit mapper/options/train_options.py.

bash run.sh

Test

First set 'checkpoint_path', 'test_data_list', 'test_img_dir', and 'test_texture_dir' in the test.sh file. Then run the test script:

bash test.sh

By default, 'test_data_list' is a JSON file with the following format (the '#' comments are explanatory only and are not valid JSON):

[
    # For clothes with upper and lower parts
    {
        "img": "image_name",
        "text":[
            "",
            [
                "target_upper_cloth_type", # e.g. "shirt",
                "target_lower_cloth_type"  # e.g. "pants"
            ]
        ],
        "texture":[
            "ref_texture_for_upper_cloth.jpg",
            "ref_texture_for_lower_cloth.jpg"
        ]
    },
    # For dress/rompers/...
    {
        "img": "image_name",
        "text":[
            [
                "target_upper_type", # e.g. "tank dress",
                "target_lower_type"  # e.g. "short dress"
            ],
            [
                "completed_target_type" # e.g. "tank short dress"
            ]
        ],
        "texture":[
            "ref_texture_for_upper_cloth.jpg",
            "ref_texture_for_lower_cloth.jpg"
        ]
    }
]

An example is in 'data/test_data/test_example.json'.
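Since stray comments or a missing field in this file only surface as errors at test time, a quick structural check can save a run. This is a sketch against the format described above, not part of the repository:

```python
def validate_test_entry(entry):
    """Check one test_data_list entry against the documented format:
    an "img" name, a two-element "text" list, and two reference textures."""
    assert isinstance(entry.get("img"), str), "missing 'img' name"
    text = entry.get("text")
    assert isinstance(text, list) and len(text) == 2, "'text' must have 2 elements"
    texture = entry.get("texture")
    assert isinstance(texture, list) and len(texture) == 2, "'texture' must have 2 elements"
    return True

# Example entry for clothes with separate upper and lower parts:
example = {
    "img": "image_name",
    "text": ["", ["shirt", "pants"]],
    "texture": ["ref_texture_for_upper_cloth.jpg", "ref_texture_for_lower_cloth.jpg"],
}
```

Load your JSON with `json.load` and run `validate_test_entry` over each entry before launching test.sh.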

ID Recovery Module

First set the data path in 'recovery_module/pti/pti_configs/paths_config.py'.

cd recovery_module
python run_pti.py

Acknowledgements

This code is based on StyleCLIP and HairCLIP.
