Face Reconstruction from Dual-Pixel Camera

This is the official implementation of the paper:

Facial Depth and Normal Estimation using Single Dual-Pixel Camera
Minjun Kang, Jaesung Choe, Hyowon Ha, Hae-Gon Jeon, Sunghoon Im, In So Kweon, and KuK-Jin Yoon
European Conference on Computer Vision (ECCV), Tel Aviv, Israel, 2022
[Paper] [Video] [Video Slide] [Poster] [Dataset]

Project Description

  • Provide a face-related dual-pixel benchmark for developers and researchers working with dual-pixel sensors.
  • Release a new benchmark dataset and baseline code.
  • Summarize awesome dual-pixel papers on this Page.


Environment Setting

  • Conda environment : Ubuntu 18.04, CUDA 10.1 (or 10.2), PyTorch==1.5.0, torchvision==0.6.0 (Python 3.6).
# Create Environment
conda create -n dpface python=3.6
conda activate dpface

# Install pytorch, torchvision, cudatoolkit
conda install pytorch==1.5.0 torchvision==0.6.0 cudatoolkit=10.1 -c pytorch  # use cudatoolkit=10.2 if needed

# Install packages and build the CUDA ops
sh ./installer.sh
  • Docker environment : Ubuntu 18.04, CUDA 10.2, PyTorch==1.6.0, torchvision==0.7.0 (Python 3.7).
# Pull docker image
docker pull jack4852/eccv22_facialdocker:latest

# Create a container and mount the dataset path
docker run -it -d --gpus all --name dpface --shm-size 64G --mount type=bind,source=[Local Dataset Path],target=[Docker Dataset Path] jack4852/eccv22_facialdocker:latest

# Start the container
docker start dpface

# Attach to the container
docker attach dpface

# Pull the code from GitHub
git init
git pull https://github.com/MinJunKang/DualPixelFace

# Install packages and build the CUDA ops
sh ./installer.sh
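
After either setup, a quick sanity check can confirm that the expected PyTorch and CUDA versions are visible. This is a minimal sketch (not part of the repo); run it inside the activated environment or container:

# Minimal environment sanity check (not part of the repo)
import torch
import torchvision

print("torch:", torch.__version__)              # expect 1.5.0 (conda) or 1.6.0 (docker)
print("torchvision:", torchvision.__version__)  # expect 0.6.0 (conda) or 0.7.0 (docker)
print("CUDA available:", torch.cuda.is_available())
print("CUDA version:", torch.version.cuda)      # expect 10.1 or 10.2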

Facial Dual Pixel Benchmark

(Since the dataset is huge (~600 GB), we provide the download link only to researchers who request the dataset.)

  • How to get the dataset?
  1. Download and read the LICENSE AGREEMENT, and confirm that you agree to all of its terms.
  2. Scan the signed LICENSE AGREEMENT. (A digital signature is allowed.)
  3. Send an email to [email protected] with your signed agreement.
  • Directory structure of our dataset (a loading sketch follows the tree)
- Parent Directory
  - 2020-1-15_group2
  - 2020-1-16_group3
    - NORMAL                : surface normal (*.npy)
    - MASK                  : mask obtained from Structured Light (*.npy)
    - JSON                  : including path, calibration info (*.json)
    - IMG                   : images of LEFT, RIGHT, LEFT + RIGHT (*.JPG)
    - DEPTH                 : metric-scale depth [mm] (*.npy)
    - CALIBRATION
      - pose.npy            : camera extrinsics (8 cameras)
      - Metadata.npy        : focal length [mm], focal distance [mm], F-number, pixel size [um]
      - light.npy           : light direction of 6 different light conditions
      - intrinsic.npy       : intrinsic matrix (8 cameras)
      - Disp2Depth.npy      : currently not used
    - ALBEDO                : albedo map (*.npy)
  - ...
  - 2020-2-19_group25
  - test.txt                : list of directories for test set
  - train.txt               : list of directories for training set
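
To illustrate the layout above, here is a minimal NumPy sketch (not part of the repo) for inspecting one capture group; the group name is taken from the tree, but the sample file stem is a hypothetical placeholder:

import numpy as np

root = "2020-1-16_group3"   # one capture group from the tree above
stem = "sample"             # hypothetical file stem; actual names differ

normal = np.load(f"{root}/NORMAL/{stem}.npy")   # surface normal
depth = np.load(f"{root}/DEPTH/{stem}.npy")     # metric-scale depth [mm]
mask = np.load(f"{root}/MASK/{stem}.npy")       # structured-light mask (assumed binary)

# Calibration shared across the 8-camera rig
pose = np.load(f"{root}/CALIBRATION/pose.npy")            # extrinsics (8 cameras)
intrinsic = np.load(f"{root}/CALIBRATION/intrinsic.npy")  # intrinsic matrices (8 cameras)
metadata = np.load(f"{root}/CALIBRATION/Metadata.npy")    # focal length/distance [mm], F-number, pixel size [um]

print(depth.shape, normal.shape, mask.shape)
print("valid depth range [mm]:", depth[mask > 0].min(), depth[mask > 0].max())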

Supporting Models

(1) PSMNet Paper Code [Pretrained]

(2) DPNet Paper Code [Pretrained]

(3) StereoNet Paper Code Pretrained

(4) NNet Paper Code Pretrained

(5) BTS Paper Code [Pretrained]

(6) StereoDPNet (Ours) Code Pretrained

If you use these models, please cite their papers.

Instructions for Code

Code Structure (simple naming rules)

  • config_/[main config].json : sets the dataset, model, and augmentation options to use.

  • src/model/[model_name] : to add your own model, the main class name must be the upper-case form of "model_name"; a skeleton sketch follows this list.

    (The model directory should contain a JSON file specifying the model's parameters.)

  • src/dataloader/[dataset_name] : to add your own dataset, the main class name must be "[dataset_name]Loader".

    (The dataset directory should contain a JSON file specifying the dataset's parameters.)

  • You can choose which model to run by setting the "model_name" parameter in config_/[main config].json.

    (It must match the model_name under src/model.)
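
As a concrete illustration of the naming rules, a hypothetical new model placed under src/model/mymodel/ might look like the sketch below; the constructor and forward signatures are assumptions, not the repo's actual interface:

import torch
import torch.nn as nn

class MYMODEL(nn.Module):  # upper-case form of the folder name "mymodel"
    def __init__(self, params):
        # "params" would come from a JSON file such as src/model/mymodel/mymodel.json (hypothetical)
        super().__init__()
        in_ch = params.get("in_channels", 6)  # e.g. two stacked 3-channel dual-pixel views
        self.layers = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, left, right):
        # Concatenate the dual-pixel left/right views along the channel axis
        return self.layers(torch.cat([left, right], dim=1))

Setting "model_name": "mymodel" in config_/[main config].json would then select this model.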

Training & Validation

CUDA_VISIBLE_DEVICES=[gpu idx] python main.py --config [main config] --workspace [Workspace Name]

The results will be automatically saved in ./workspace/[model name]/[Workspace Name].

Example (1). Train StereoDPNet with our face dataset

(results and checkpoints are saved in ./workspace/stereodpnet/base)

CUDA_VISIBLE_DEVICES=[gpu idx] python main.py --config train_faceDP --workspace base

Example (2). Train DPNet with our face dataset

(results and checkpoints are saved in ./workspace/dpnet/base2)

CUDA_VISIBLE_DEVICES=[gpu idx] python main.py --config train_faceDP_dpnet --workspace base2

Example (3). Resume training of StereoDPNet with our face dataset

(results and checkpoints are saved in ./workspace/stereodpnet/base2)

CUDA_VISIBLE_DEVICES=[gpu idx] python main.py --config train_faceDP --workspace base2 --load_model [path to checkpoint]

Testing

To test with your own pretrained weights, run:

CUDA_VISIBLE_DEVICES=[gpu idx] python main.py --config eval_faceDP --workspace [Workspace Name] --load_model [relative/absolute path to checkpoint]

Demo

Will be updated soon!

Acknowledgements

This work was supported in part by the Ministry of Trade, Industry and Energy (MOTIE) and the Korea Institute for Advancement of Technology (KIAT) through the International Cooperative R&D program (P0019797), in part by the 'Project for Science and Technology Opens the Future of the Region' program through the INNOPOLIS FOUNDATION funded by the Ministry of Science and ICT (Project Number: 2022-DD-UP-0312), and in part by Samsung Electronics Co., Ltd. (Project Number: G01210570).

Face-Segmentation-Tool : we use this repo to obtain face masks for the demo (see here).

3D Deformable Conv : we use this repo to implement the ANM module of StereoDPNet (see here).

Affine DP Metric : we use this repo to measure performance with the affine metric (see here).

Our code is based on PyTorch Lightning.

References

@article{kang2021facial,
  title={Facial Depth and Normal Estimation using Single Dual-Pixel Camera},
  author={Kang, Minjun and Choe, Jaesung and Ha, Hyowon and Jeon, Hae-Gon and Im, Sunghoon and Kweon, In So and Yoon, KuK-Jin},
  journal={arXiv preprint arXiv:2111.12928},
  year={2021}
}
