Prerequisites: 64-bit Windows system, Python, Anaconda
To set up and run the Mesh2SMPL repository using Anaconda, follow the detailed step-by-step instructions below EXACTLY as specified. You can use either PowerShell or the Anaconda Prompt to execute these instructions.
Video Instructions: https://youtu.be/elg-wVWFlO0
-
Create and activate a conda environment for Python 3.9
First, create a new conda environment for Python 3.9 and activate it:
conda create --name multiview python=3.9
conda activate multiview
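To confirm that the new environment is active and using the expected interpreter, you can optionally check the Python version (it should start with 3.9):
python --version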
-
Clone the repository and navigate into it
Clone the Mesh2SMPL repository and navigate into the directory:
git clone --recurse-submodules https://github.com/ddkhen11/Mesh2SMPL
cd Mesh2SMPL
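Because the project depends on git submodules, you can optionally verify that they were pulled along with the clone. Every entry printed by the command below should show a commit hash without a leading - (a leading - would mean the submodule was not initialized):
git submodule status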
-
Install dependencies using pip
Install the necessary dependencies using pip:
pip install --upgrade setuptools wheel build
pip install pyembree
pip install -r requirements-multiview.txt
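If you want a quick sanity check that the installed packages resolve cleanly, pip's built-in dependency checker will report any broken or conflicting requirements:
pip check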
-
Download the PyOpenGL wheel specific to Python 3.9
Download the PyOpenGL wheel specific to Python 3.9 from Google Drive. The required file is named PyOpenGL-3.1.7-cp39-cp39-win_amd64.whl. Save this file to a known location on your computer.
-
Install PyOpenGL from the downloaded wheel
Navigate to the directory where you downloaded the wheel file:
cd path\to\downloaded\wheel
Install the PyOpenGL package from the downloaded wheel file:
pip install PyOpenGL-3.1.7-cp39-cp39-win_amd64.whl
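To confirm the wheel installed correctly, you can optionally try importing the package; this should print a confirmation message rather than an ImportError:
python -c "import OpenGL; print('PyOpenGL imported successfully')"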
After that is done, navigate back to the Mesh2SMPL directory:
cd path\to\Mesh2SMPL
Because OpenPose is complex to install and run, we will instead run AlphaPose on our multiview images and then convert the AlphaPose keypoints to OpenPose format. The following instructions are a detailed guide to installing AlphaPose; refer to the AlphaPose GitHub repository if you need to troubleshoot anything.
-
Create and activate a conda environment for Python 3.7
If your previous conda environment is still active, make sure you deactivate it first:
conda deactivate
Then create a new conda environment for Python 3.7 and activate it:
conda create --name alphapose python=3.7
conda activate alphapose
-
Install dependencies using pip
Navigate back to the Mesh2SMPL directory and install the necessary dependencies using pip:
cd path\to\Mesh2SMPL
pip install -r requirements-pose.txt
-
Delete necessary lines and files from AlphaPose directory
In third_party/AlphaPose, open setup.py and delete line 211. Don't forget to save the file before exiting. In the same folder, delete the setup.cfg file.
-
Build and install AlphaPose
Navigate to the AlphaPose directory, build and install AlphaPose, and navigate back to the Mesh2SMPL directory. Ensure that Microsoft Visual C++ 14.0 or greater has been installed through Microsoft C++ Build Tools https://visualstudio.microsoft.com/visual-cpp-build-tools/ prior to running this:
cd third_party/AlphaPose
python setup.py build develop
cd ../..
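If the build succeeded, AlphaPose should now be importable from this environment. The check below assumes the package installs under the name alphapose (as in the AlphaPose source tree); if the import fails, re-check that the Visual C++ Build Tools are installed and look for errors in the build output:
python -c "import alphapose; print('AlphaPose build OK')"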
-
Download pretrained models
First, download the YOLO object detection model from this link and place the file in third_party/AlphaPose/detector/yolo/data. Then, download the pretrained Fast Pose pose estimation model from this link and place the file in the third_party/AlphaPose/pretrained_models directory.
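You can optionally confirm that the downloads landed in the right place by listing both folders; each should contain the model file you downloaded:
dir third_party\AlphaPose\detector\yolo\data
dir third_party\AlphaPose\pretrained_models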
-
Create and activate a conda environment for Python 3.6
If your previous conda environment is still active, make sure you deactivate it first:
conda deactivate
Then create a new conda environment for Python 3.6 and activate it:
conda create --name fit-smpl python=3.6
conda activate fit-smpl
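As before, you can optionally confirm that the new environment is active by checking the Python version (it should start with 3.6):
python --version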
-
Install dependencies
Install the necessary dependencies:
python -m pip install --upgrade pip
pip install -r requirements-smpl.txt
conda install pytorch-cpu==1.0.0 torchvision-cpu==0.2.1 cpuonly -c pytorch
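To verify the CPU-only PyTorch installation, you can optionally print its version; it should report 1.0.0:
python -c "import torch; print(torch.__version__)"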
-
Download and clean the SMPL model files
Go to https://smpl.is.tue.mpg.de/, make an account, and download version 1.0.0 of SMPL for Python Users from the downloads page. Extract this zip file. In the extracted folder, go to SMPL_python_v.1.0.0/smpl/models. Copy and paste the files in this folder to tools/smpl_models in your Mesh2SMPL directory.
Then, go to https://smplify.is.tue.mpg.de/, make an account, and download SMPLIFY_CODE_V2.ZIP from the downloads page. Extract this zip file. In the extracted folder, go to smplify_public/code/models. Copy and paste basicModel_neutral_lbs_10_207_0_v1.0.0.pkl to tools/smpl_models in your Mesh2SMPL directory.
Finally, run the following command (make sure your current directory is still Mesh2SMPL):
python tools/clean_models.py --input-models tools/smpl_models/*.pkl --output-folder third_party/MultiviewSMPLifyX/smplx/models/smpl
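Once the script finishes, the output folder should contain the cleaned .pkl model files; you can optionally confirm with:
dir third_party\MultiviewSMPLifyX\smplx\models\smpl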
-
Download pretrained VPoser models
Go to https://smpl-x.is.tue.mpg.de/, make an account, and download VPoser v1.0 - CVPR'19 (2.5 MB) from the downloads page. Extract this zip file. In the extracted folder, go to vposer_v1_0/snapshots. Copy and paste the *.pt files to third_party/MultiviewSMPLifyX/vposer/models/snapshots in your Mesh2SMPL directory.
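As a final check, the snapshots folder in your Mesh2SMPL directory should now contain the copied .pt files:
dir third_party\MultiviewSMPLifyX\vposer\models\snapshots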