📖 To address the oversight of motion's influence on surface deformation in current methodologies, we introduce an innovative framework, Motion-Based 3D Clothed Humans Synthesis (MOSS), which employs kinematic information to achieve motion-aware Gaussian split on the human surface. For more visual results, please check out our project page.
# Create and activate the conda environment
conda create --name MOSS python=3.8 -y
conda activate MOSS
# Install PyTorch 2.0.0 built for CUDA 11.8
conda install pytorch==2.0.0 torchvision==0.15.0 torchaudio==2.0.0 pytorch-cuda=11.8 -c pytorch -c nvidia -y
# Build and install the bundled CUDA extensions
pip install ninja
pip install submodules/diff-gaussian-rasterization
pip install submodules/simple-knn
# Install KNN_CUDA and the remaining Python dependencies
pip install --upgrade https://github.com/unlimblue/KNN_CUDA/releases/download/0.2/KNN_CUDA-0.2-py3-none-any.whl
pip install -r requirement.txt
# Alternative: install the requirements through the Tsinghua PyPI mirror
# pip install -r requirement.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
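Once the environment is set up, a quick sanity check can confirm that PyTorch sees the GPU and that the compiled extensions import. The snippet below is a minimal sketch; the module names diff_gaussian_rasterization, simple_knn._C and knn_cuda follow the upstream packages and are assumptions, not something this repository documents.
# sanity_check.py -- minimal post-install check (sketch, not part of the repository)
import importlib
import torch

print("PyTorch version:", torch.__version__)          # expected: 2.0.0
print("CUDA available:", torch.cuda.is_available())   # should be True with CUDA 11.8 drivers
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))

# Module names below are assumptions based on the upstream submodules
for name in ("diff_gaussian_rasterization", "simple_knn._C", "knn_cuda"):
    try:
        importlib.import_module(name)
        print(name, "imported OK")
    except ImportError as err:
        print(name, "failed to import:", err)
A failed import usually means the corresponding CUDA extension did not compile; re-running the pip install for that submodule with the CUDA 11.8 toolkit available is the usual fix.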
Download SMPL Models
Register and download the SMPL models here. Only the neutral model is needed. Place it as shown in the folder structure below:
./
├── ...
└── assets/
    ├── SMPL_NEUTRAL.pkl
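To confirm the model is where the code expects it, a check along these lines can help; it only assumes the assets/SMPL_NEUTRAL.pkl layout shown above.
# check_smpl.py -- sketch: verify the neutral SMPL model is in place
from pathlib import Path

smpl_path = Path("assets/SMPL_NEUTRAL.pkl")  # path taken from the folder structure above
if smpl_path.is_file():
    print(f"Found {smpl_path} ({smpl_path.stat().st_size / 1e6:.1f} MB)")
else:
    raise FileNotFoundError(f"{smpl_path} is missing -- download SMPL_NEUTRAL.pkl and place it under assets/")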
Please follow the instructions of Instant-NVR to download the ZJU-MoCap-Refine and MonoCap datasets.
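After downloading, listing the per-subject folders under the dataset root is a cheap way to confirm the data landed where you expect before editing the training scripts. The root path in the sketch below is a placeholder, not a path the code requires.
# list_subjects.py -- sketch: list per-subject folders under a dataset root
from pathlib import Path

dataset_root = Path("/path/to/ZJU_MoCap_refine")  # placeholder; also works for a MonoCap root
if not dataset_root.is_dir():
    raise FileNotFoundError(f"Dataset root not found: {dataset_root}")

subjects = sorted(p.name for p in dataset_root.iterdir() if p.is_dir())
print(f"Found {len(subjects)} subject folder(s):", ", ".join(subjects))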
Download the pre-trained models (ZJU_MoCap & MonoCap) and place them in the output folder:
./
├── ...
└── output/
    ├── ZJU.tar.gz
# Extract the pre-trained model archive (run inside the output folder)
tar -xzvf ZJU.tar.gz
Train ZJU-MoCap-Refine dataset
# Change the path "/home/tom/fsas/workspace/dataset/ZJU_moncap" in the variable "sys_list" to the path of your ZJU-MoCap-Refine dataset.
python train_ZJU.py
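For illustration only, the sys_list edit might look like the sketch below; the real variable may hold several entries or a different structure, so treat this as a hypothetical example of swapping in your own dataset root rather than the script's actual contents.
# Hypothetical sketch of the sys_list edit; the actual structure in the training script may differ
sys_list = ["/path/to/your/ZJU_MoCap_refine"]  # replaces "/home/tom/fsas/workspace/dataset/ZJU_moncap"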
Train Monocap dataset
# As with ZJU, change the dataset path to the path of your MonoCap dataset.
python train_monocap.py
Rendering
# Render results on ZJU-MoCap-Refine
python render_ZJU.py
# Render results on MonoCap
python render_monocap.py
Our repository is adapted from these amazing repositories. If you find their work useful for your research, please also consider citing them: Gaussian-Splatting, HumanNeRF, GauHuman, and Animatable NeRF.