- This branch includes the avatar creation pipeline and the animation functions.
```
${ROOT}
|-- main
|-- common
|-- |-- utils/human_model_files
|-- |-- |-- smplx/SMPLX_FEMALE.npz
|-- |-- |-- smplx/SMPLX_MALE.npz
|-- |-- |-- smplx/SMPLX_NEUTRAL.npz
|-- |-- |-- smplx/MANO_SMPLX_vertex_ids.pkl
|-- |-- |-- smplx/SMPL-X__FLAME_vertex_ids.npy
|-- |-- |-- smplx/smplx_flip_correspondences.npz
|-- |-- |-- flame/flame_dynamic_embedding.npy
|-- |-- |-- flame/FLAME_FEMALE.pkl
|-- |-- |-- flame/FLAME_MALE.pkl
|-- |-- |-- flame/FLAME_NEUTRAL.pkl
|-- |-- |-- flame/flame_static_embedding.pkl
|-- |-- |-- flame/FLAME_texture.npz
|-- data
|-- |-- Custom
|-- |-- |-- data
|-- |-- NeuMan
|-- |-- |-- data/bike
|-- |-- |-- data/citron
|-- |-- |-- data/jogging
|-- |-- |-- data/lab
|-- |-- |-- data/parkinglot
|-- |-- |-- data/seattle
|-- tools
|-- output
```
- `main` contains high-level code for avatar creation/animation and configurations.
- `common` contains kernel code. Download the SMPL-X 1.1 version from here. Download the FLAME 2020 version from here.
- `data` contains data loading code.
- `tools` contains pre-processing and evaluation code.
- `output` contains logs, visualized outputs, and fitting results.
- We use a modified 3DGS that supports depth map and mask rendering. It is exactly the same as the original 3DGS except for the additional depth map and mask rendering. Please install the modified 3DGS from here (an install sketch follows below).
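Installing the modified rasterizer will typically look like the sketch below. This is only a minimal sketch: the clone URL is a placeholder (use the "here" link above), and it assumes the modified 3DGS ships as a pip-installable CUDA extension like the original rasterizer.

```bash
# Placeholder URL: substitute the actual link to the modified 3DGS from the step above.
git clone --recursive https://github.com/<user>/<modified-3dgs>.git
# Assumption: the package is a pip-installable CUDA extension, as in the original 3DGS.
pip install ./<modified-3dgs>
```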
- We recommend capturing your own video outdoors; indoor lighting is usually too strong and casts hard shadows. The fewer strong shadows, the better.
- Place your video at `data/Custom/data/$SUBJECT_ID/video.mp4`.
- Get optimized and smoothed SMPL-X parameters with here.
- Go to the `segment-anything` folder in `../fitting/tools` and run `python run_sam.py --root_path $ROOT/data/Custom/data/$SUBJECT_ID` to obtain foreground masks.
- (Background option 1) If the background of your video is static, we obtain the background point cloud with a monocular depth estimator. To this end, go to the `Depth-Anything-V2` folder in `../fitting/tools` and run `python run_depth_anything.py --root_path $ROOT/data/Custom/data/$SUBJECT_ID`, which outputs `bkg_point_cloud.txt` at `data/Custom/data/$SUBJECT_ID`.
- (Background option 2) If the background of your video is dynamic (like the NeuMan videos), no worries: you already ran COLMAP during the fitting, so you can proceed to the next step. A combined pre-processing walk-through is sketched below.
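Chained together, the custom-video pre-processing looks roughly like the sketch below. `$ROOT` and `$SUBJECT_ID` are placeholders you set yourself, and the sketch assumes the fitting repository sits at `../fitting` so that the tool folders resolve as described above (the SMPL-X fitting step itself is linked above and omitted here).

```bash
ROOT=/path/to/this/repo        # placeholder
SUBJECT_ID=my_subject          # placeholder

# 1. Place the captured video where the pipeline expects it.
mkdir -p $ROOT/data/Custom/data/$SUBJECT_ID
cp my_video.mp4 $ROOT/data/Custom/data/$SUBJECT_ID/video.mp4

# 2. Foreground masks with SAM.
cd ../fitting/tools/segment-anything
python run_sam.py --root_path $ROOT/data/Custom/data/$SUBJECT_ID

# 3. (Background option 1, static background) monocular depth -> background point cloud.
cd ../Depth-Anything-V2
python run_depth_anything.py --root_path $ROOT/data/Custom/data/$SUBJECT_ID
# writes bkg_point_cloud.txt to $ROOT/data/Custom/data/$SUBJECT_ID
```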
- We provide an example here.
- You can download the original NeuMan data from here.
- We provide pre-processed NeuMan data here (see the unpacking sketch below).
- We provide train/test/validation split files here, made following the official code.
- We used the same pre-processing stage as the custom videos above.
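After downloading, the pre-processed scenes should end up under the per-scene folders shown in the directory tree above. A minimal sketch, where the archive name is a hypothetical placeholder:

```bash
# Hypothetical archive name; unpack the pre-processed NeuMan scenes into the data folder.
tar -xzf neuman_preprocessed.tar.gz -C ${ROOT}/data/NeuMan/data
ls ${ROOT}/data/NeuMan/data
# expected scene folders: bike  citron  jogging  lab  parkinglot  seattle
```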
- Set `dataset` in `main/config.py`.
- Go to the `main` folder and run `python train.py --subject_id $SUBJECT_ID`. The checkpoints are saved in `output/model/$SUBJECT_ID`.
- You can see reconstruction results on the training frames by running `python test.py --subject_id $SUBJECT_ID --test_epoch 4`. The results are saved to `output/result/$SUBJECT_ID`. A combined example is sketched below.
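For example, training and rendering one of the NeuMan scenes could look like this, using `bike` (one of the provided scene names); check `main/config.py` itself for the exact form of the `dataset` assignment:

```bash
# In main/config.py, set the dataset first (e.g. dataset='NeuMan' for the NeuMan scenes).
cd main

# Train the avatar of one subject; checkpoints are saved to output/model/bike.
python train.py --subject_id bike

# Render reconstruction results on the training frames; results go to output/result/bike.
python test.py --subject_id bike --test_epoch 4
```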
- Set `dataset` in `main/config.py`.
- Go to the `main` folder and run `python get_neutral_pose.py --subject_id $SUBJECT_ID --test_epoch 4`.
- You can see a rotating avatar in the neutral pose in `./main/neutral_pose`.
- Set `dataset` in `main/config.py`.
- Go to the `main` folder and run `python animation.py --subject_id $SUBJECT_ID --test_epoch 4 --motion_path $PATH` if you want to use an avatar in `output/model_dump/$SUBJECT_ID`. `$PATH` should contain SMPL-X parameters to animate the avatar. You can prepare `$PATH` with here.
- To render the avatar from a rotating camera, run `python animate_view_rot.py --subject_id $SUBJECT_ID --test_epoch 4 --motion_path $PATH`.
- We provide SMPL-X parameters of several videos (examples of `$PATH`) here. A combined example is sketched below.
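Animating a trained avatar with a downloaded motion sequence might then look like the following; the motion folder path is a hypothetical placeholder for the SMPL-X parameters described above.

```bash
cd main
MOTION_PATH=/path/to/smplx_motion    # placeholder for $PATH above

# Drive the avatar stored in output/model_dump/$SUBJECT_ID with the motion.
python animation.py --subject_id $SUBJECT_ID --test_epoch 4 --motion_path $MOTION_PATH

# Render the same motion from a rotating camera.
python animate_view_rot.py --subject_id $SUBJECT_ID --test_epoch 4 --motion_path $MOTION_PATH
```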
- Before testing, please make sure you placed the correct ID parameters (SMPL-X shape parameter, joint offset, and face offset) at `../data/$DATASET/data/$SUBJECT_ID/smplx_optimized`. These should be the same ones you used to train your avatar. For the provided ExAvatar checkpoints, please place our pre-processed ones.
- For the evaluation on the NeuMan dataset, we optimize SMPL-X parameters of the testing frames with an image loss while keeping the pre-trained avatars fixed, following 1, 2, and Section 4 B Evaluation. This is specified here.
- Go to the `tools` folder and run `python prepare_fit_pose_to_test.py --root_path ../output/model_dump/$SUBJECT_ID` if you want to use an avatar in `output/model_dump/$SUBJECT_ID`. It simply sets `epoch` of a checkpoint to 0 and saves it to `'output/model_dump/$SUBJECT_ID' + '_fit_pose_to_test'`.
- Set `dataset='NeuMan'` in `main/config.py`.
- Go to the `main` folder and run `python train.py --subject_id $SUBJECT_ID --fit_pose_to_test --continue`.
- You can see test results on the testing frames by running `python test.py --subject_id $SUBJECT_ID --fit_pose_to_test --test_epoch 4`. The results are saved to `'output/result/$SUBJECT_ID' + '_fit_pose_to_test'`.
- For the evaluation on the NeuMan dataset, go to the `tools` folder and run `python eval_neuman.py --output_path '../output/result/$SUBJECT_ID' + '_fit_pose_to_test' --subject_id $SUBJECT_ID`. If you want to include background pixels during the evaluation, add `--include_bkg`. The full sequence is sketched below.
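End to end, evaluating a trained avatar on one NeuMan scene then looks roughly like this sketch (again using `bike` as an example subject):

```bash
SUBJECT_ID=bike

# 1. Reset the checkpoint epoch to 0 and save it to the *_fit_pose_to_test folder.
cd tools
python prepare_fit_pose_to_test.py --root_path ../output/model_dump/$SUBJECT_ID

# 2. With dataset='NeuMan' set in main/config.py, fit SMPL-X poses of the test frames
#    while the avatar itself stays fixed.
cd ../main
python train.py --subject_id $SUBJECT_ID --fit_pose_to_test --continue

# 3. Render the test frames and compute the metrics.
python test.py --subject_id $SUBJECT_ID --fit_pose_to_test --test_epoch 4
cd ../tools
python eval_neuman.py --output_path ../output/result/${SUBJECT_ID}_fit_pose_to_test --subject_id $SUBJECT_ID
# add --include_bkg to include background pixels in the evaluation
```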
- Gyeongsik's avatar: Download. ID parameters: Download.
- NeuMan's avatars: Download. ID parameters: Download. (A placement sketch follows below.)
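To use a downloaded checkpoint, place the avatar under `output/model_dump/$SUBJECT_ID` and the ID parameters under that subject's `smplx_optimized` folder, as required in the testing section above. A minimal sketch with hypothetical archive names:

```bash
# Hypothetical archive names; use the downloads from the links above.
unzip avatar_checkpoint.zip -d output/model_dump/$SUBJECT_ID

# ID parameters (SMPL-X shape parameter, joint offset, face offset) go next to the data
# the avatar was trained on, e.g. for a NeuMan subject:
unzip id_parameters.zip -d data/NeuMan/data/$SUBJECT_ID/smplx_optimized
```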
```bibtex
@inproceedings{moon2024exavatar,
  title = {Expressive Whole-Body 3D Gaussian Avatar},
  author = {Moon, Gyeongsik and Shiratori, Takaaki and Saito, Shunsuke},
  booktitle = {ECCV},
  year = {2024}
}
```