This repo contains all the files and samples necessary to estimate pose, save the points of each frame of a video, and import those points into Blender.
Tests were made using two repositories:
https://github.com/jiajunhua/ildoonet-tf-pose-estimation
https://github.com/DenisTome/Lifting-from-the-Deep-release
- CUDA 10.0 (tested with an NVIDIA GeForce GTX 1650 Ti, 4 GB)
- An NVIDIA graphics card with at least 1.2 GB of memory available.
- Highly recommended: cuDNN 7.6.5 or a compatible version for CUDA 10.0.
Change these lines in estimator.py (first repository) and _pose_estimator.py (second repository) to fit your specific needs.
# Cap TensorFlow 1.x GPU memory usage to a fraction of the card's total
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.3)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
I set the limit to 30% of my GPU memory.
- Create a conda environment with Python 3.6.
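For example (the environment name here is arbitrary):

$ conda create -n pose_env python=3.6
$ conda activate pose_env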
- Clone the repo and install the third-party libraries from requirements.txt.
$ git clone https://github.com/OscarSantosMu/TF_pose_estimation_and_Blender
$ cd TF_pose_estimation_and_Blender
$ pip install -r requirements.txt
Dependencies
You need the dependencies below.
- tensorflow-gpu 1.15
- opencv3, protobuf, python3-tk
- slidingwindow
  - https://github.com/adamrehn/slidingwindow
  - with some changes from the git repo above
Build the C++ library for post-processing. See: https://github.com/ildoonet/tf-pose-estimation/tree/master/tf_pose/pafprocess
$ cd tf_pose/pafprocess
$ swig -python -c++ pafprocess.i && python3 setup.py build_ext --inplace
- Blender 2.91
It probably works in other versions as well, but it was tested with that version.
To test the first repo, follow the instructions below.
To test the second repo, go to the following folder.
- Run save_points.py
You will see that a JSON file is generated with the 17 points estimated by the Lifting from the Deep model. The idea is to save the 17 points of each frame of a video.
Therefore, the same code was implemented in camera.py, as sketched below.
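As a rough sketch of the idea (the function and variable names below are placeholders, not the repo's actual API), the points of every frame are collected in a list and dumped to a JSON file:

import json

def estimate_pose(frame):
    # Placeholder for the model call: returns 17 [x, y, z] points per frame.
    return [[0.0, 0.0, 0.0] for _ in range(17)]

video_frames = range(10)  # stand-in for the decoded frames of a video

# One entry per frame, each holding the 17 estimated points
all_frames = [estimate_pose(frame) for frame in video_frames]

with open("points.json", "w") as f:
    json.dump(all_frames, f)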
- Run use_points.py
This will access the 17 points saved by the previous script and print them to the terminal.
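A minimal sketch of that step, assuming the JSON layout from the sketch above:

import json

with open("points.json") as f:
    all_frames = json.load(f)

# Print the 17 points of every saved frame
for frame_idx, points in enumerate(all_frames):
    print(f"Frame {frame_idx}:")
    for x, y, z in points:
        print(x, y, z)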
- Run camera.py
This generates the JSON file that will be imported into Blender.
- Open BodyMaleTemplate_spheres.blend, select all the spheres in the viewport, and then run spheres_test.py.
This script creates the animation.
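As a rough illustration of what such a script does (the JSON path and data layout here are assumptions, not necessarily the repo's actual format), each selected sphere gets its location keyframed frame by frame:

import json
import bpy

# Assumed layout: a list of frames, each holding 17 [x, y, z] points
with open("points.json") as f:  # adjust the path to your file
    frames = json.load(f)

spheres = bpy.context.selected_objects  # the 17 selected spheres

for frame_idx, points in enumerate(frames):
    for sphere, (x, y, z) in zip(spheres, points):
        sphere.location = (x, y, z)
        sphere.keyframe_insert(data_path="location", frame=frame_idx)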
- Click play on the timeline.
Make sure to modify the end value if your recording is longer than the default timeline.
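The end value can also be set from Blender's Python console; a one-line sketch (the frame count shown is only an example):

import bpy

bpy.context.scene.frame_end = 250  # replace with the number of frames in your JSON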
Enjoy it!
Here are some comparisons between the original video, the matplotlib 3D figure, and Blender.