3D face reconstruction

Introduction

An implementation, with training and inference code, of the 3DMM deep face reconstruction method described in:

Accurate 3D Face Reconstruction with Weakly-Supervised Learning: From Single Image to Image Set

This is an experiment from several years ago; I am releasing the code in case anyone finds it useful.
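
For background, a 3DMM expresses a face shape as a mean shape plus linear combinations of identity and expression bases, whose coefficients the CNN predicts. Below is a minimal sketch of that linear model; the basis dimensions (80 identity, 64 expression) and the vertex count follow the Deng et al. paper, but the array names and zero-filled placeholders are illustrative assumptions, not this repo's actual variables.

```python
import numpy as np

# Minimal sketch of the 3DMM linear model (illustrative only):
#   S = S_mean + B_id @ alpha + B_exp @ beta
n_vertices = 35709                        # cropped BFM09 vertex count per Deng et al. (assumption)
S_mean = np.zeros(3 * n_vertices)         # mean shape, flattened (x, y, z per vertex)
B_id = np.zeros((3 * n_vertices, 80))     # identity basis (80 coefficients)
B_exp = np.zeros((3 * n_vertices, 64))    # expression basis (64 coefficients)

alpha = 0.1 * np.random.randn(80)         # identity coefficients (predicted by the CNN)
beta = 0.1 * np.random.randn(64)          # expression coefficients (predicted by the CNN)

S = S_mean + B_id @ alpha + B_exp @ beta  # reconstructed face shape
vertices = S.reshape(n_vertices, 3)       # one (x, y, z) row per vertex
print(vertices.shape)                     # (35709, 3)
```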

Qualitative Results

Selected results (rendered reconstructed faces): see assets/images for the input images and assets/results for the corresponding renderings.

Usage

Prerequisites

The following libraries are needed to run the code (preferred versions listed):

torch==2.2.1
torchvision==0.17.1
pytorch3d==0.7.6
tensorboard==2.14.0
opencv-python==3.4.11.43
dlib==19.17.0
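
A quick way to confirm your environment matches the list above is an import check (a small generic sketch, nothing repo-specific); pytorch3d in particular must be built against your exact torch/CUDA combination.

```python
# Import check for the dependencies listed above; each exposes __version__,
# so a mismatch surfaces here before running anything heavier.
import cv2
import dlib
import pytorch3d
import torch
import torchvision

for name, module in [("torch", torch), ("torchvision", torchvision),
                     ("pytorch3d", pytorch3d), ("opencv-python", cv2),
                     ("dlib", dlib)]:
    print(f"{name}: {module.__version__}")
```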

Download the parameter files needed to run the code:

  • Please go to this repo, download the BFM and Expression Basis files, and run the conversion program to obtain BFM09_model_info.mat. Then place BFM09_model_info.mat in params/bfm and run parse.py inside the params directory.
  • If you want to train the model yourself, go to this repo and download model_ir_se50.pth. Place the downloaded file inside the params directory; it is only needed for training.
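
After the steps above, the converted model file can be sanity-checked by listing its contents. This is a minimal sketch assuming scipy is available; the keys inside the .mat are not documented here, so the script simply prints whatever is present.

```python
from scipy.io import loadmat

# Sanity check on the converted model file produced by the steps above.
# Entries whose names start with "__" are .mat metadata, not model data.
info = loadmat("params/bfm/BFM09_model_info.mat")
for key, value in info.items():
    if not key.startswith("__"):
        print(key, getattr(value, "shape", type(value)))
```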
Inference
  • An inference example is provided: running python reconstruct_and_render.py reconstructs the faces in assets/images and saves the rendered results to assets/results.
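
To illustrate what the network does at inference time, here is a self-contained sketch of the coefficient-regression step. The 257-dimensional split (80 identity, 64 expression, 80 texture, 3 rotation, 27 lighting, 3 translation) follows the Deng et al. paper; the ResNet-50 backbone and all names below are assumptions, not this repo's actual code.

```python
import torch
import torchvision

# Illustrative coefficient regression: a CNN maps a preprocessed face crop
# to a 257-d vector of 3DMM coefficients (split per Deng et al.; assumption).
backbone = torchvision.models.resnet50(weights=None)
backbone.fc = torch.nn.Linear(backbone.fc.in_features, 257)

image = torch.randn(1, 3, 224, 224)   # stand-in for a cropped, normalized face
coeffs = backbone(image)

alpha = coeffs[:, :80]                # identity coefficients
beta = coeffs[:, 80:144]              # expression coefficients
delta = coeffs[:, 144:224]            # texture (albedo) coefficients
angles = coeffs[:, 224:227]           # rotation (pose)
gamma = coeffs[:, 227:254]            # spherical-harmonics lighting
trans = coeffs[:, 254:257]            # translation
```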
Training
  • Training dataset: to construct your own dataset, follow these steps
    • Collect images containing human faces and place them in data/images. Preferably, each image contains only one face.
    • Inside the data directory, run python main.py. The program crops the faces, detects 68 facial landmarks, and generates the corresponding face-region masks; the processed training data is saved in data/data. dlib is used to detect the facial landmarks (see the sketch after this list); you can replace it with a stronger detector. Face regions are detected using nasir6/face-segmentation (thanks to this work!).
  • Train the model: run python train.py. Hyperparameters are set inside this file.
    • TODO: Distributed Data Parallel is not used yet.
    • TODO: the 3DMM parameters could also be made trainable to obtain a better model; I haven't experimented with this.
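
As referenced in the dataset steps above, here is a minimal sketch of 68-landmark extraction with dlib (illustrative, not the exact code in data/main.py). The predictor file shape_predictor_68_face_landmarks.dat is dlib's standard model and must be downloaded separately; the input image path is hypothetical.

```python
import cv2
import dlib

# Detect faces, then regress 68 landmarks on each detected face rectangle.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

image = cv2.imread("data/images/example.jpg")   # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

for face in detector(gray, 1):                  # upsample once before detecting
    shape = predictor(gray, face)
    landmarks = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
    print(f"found a face with {len(landmarks)} landmarks: {landmarks[:3]}...")
```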

References

Thanks to the following works:

  • Accurate 3D Face Reconstruction with Weakly-Supervised Learning: From Single Image to Image Set (Deng et al.)
  • nasir6/face-segmentation (used for the face-region masks)