
License

Copyright (c) 2016

Yixin Zhu, Chenfanfu Jiang, Yibiao Zhao, Demetri Terzopoulos and Song-Chun Zhu

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

If the code is used in an article, the following publication shall be cited:

@InProceedings{cvpr2016chair,
    author = {Zhu, Yixin and Jiang, Chenfanfu and Zhao, Yibiao and Terzopoulos, Demetri and Zhu, Song-Chun},
    title = {Inferring Forces and Learning Human Utilities From Videos},
    booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    year = {2016}}

Project website: http://www.yzhu.io/projects/cvpr16_chair/index.html

Usage

You can use the pre-configured code in the virtual machine provided below, or compile from source.

Unless otherwise stated, the default operating system is Ubuntu 14.04.

If you have any problems with the code provided in this repo, please use the issue tracker instead of emailing the authors. For questions regarding the paper, you can reach out to the authors: Yixin Zhu ([email protected]) and Chenfanfu Jiang ([email protected]).

1. Use Virtual Machine

The virtual machine file can be opened with VMware Workstation (tested) or VirtualBox (untested). The code is stored in the folder

~/Development/ChairPerson

VirtualMachine v0.2. Includes the human reconstruction and FEM simulation code. File size is ~8GB before unzipping and ~16GB after.

VirtualMachine v0.1. Includes only the human reconstruction code. File size is ~7GB before unzipping and ~15GB after.

username: chair
password: person
hostname: cvpr

Configuration: 4 cores + 8GB memory

2. Compile from Source

2.1 Install OpenVDB

Dependencies

sudo apt-get install build-essential cmake git liblog4cplus-dev libboost-all-dev \
    libglew-dev libopenexr-dev libtbb-dev libghc-zlib-dev doxygen libcunit1-dev \
    libcppunit-dev libglfw-dev freeglut3-dev texlive-full python-dev python-numpy \
    python-matplotlib libpython-all-dev

Compile c-blosc

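If you do not already have the c-blosc sources, they can be obtained from the upstream Blosc repository (the exact version the authors built against is not stated here, so a newer release may need adjustment):

git clone https://github.com/Blosc/c-blosc.git
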
cd c-blosc
mkdir build
cd build
cmake ..
make
sudo make install

Build OpenVDB

cd openvdb
make all -j4 # replace 4 with the number of cores on your machine

Test

make test

Install

sudo make install
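
After installing c-blosc and OpenVDB, it can help to refresh the dynamic linker cache so the newly installed shared libraries are found at runtime (a standard Ubuntu step, not specific to this repo):

sudo ldconfig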

2.2 Collect Kinect Skeleton Data (Requires a Windows 8.1/10 Machine)

Download the "Kinect v2 Skeleton Recorder" from https://github.com/xiaozhuchacha/Kinect2Toolbox and use it to record skeleton data.

2.3 Convert Skeleton Data to Poly Format

Copy the skeleton text files produced in the previous step to the "Txt2Poly" folder.

If there is only one file, rename it to "skeleton.txt" and run

python Convert_Skeleton_Data.py -i 'skeleton.txt' -s 1

where "i" is the input filename, -s is the plot flag (1 means it will plot the skeleton after computations).

If there are multiple files, use Convert_Skeleton_Data_All.py with the additional parameter "-r" to specify the range of files, as sketched below.
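
The exact format of the "-r" argument is an assumption here; check the script's built-in help (e.g. python Convert_Skeleton_Data_All.py -h) for the actual syntax:

# batch-convert skeletons 1 through 10 without plotting (the "-r" format is assumed)
python Convert_Skeleton_Data_All.py -r 1 10 -s 0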

2.4 Install and Run HumanReconstruction

cd HumanReconstruction
cmake .
make
./HumanReconstruction

2.5 Run FEM Simulation

The simulation code is provided in binary form due to the difficulty of compiling its source. The binary FEM program requires gcc-4.9:

sudo add-apt-repository ppa:ubuntu-toolchain-r/test
sudo apt-get update
sudo apt-get dist-upgrade
sudo apt-get install gcc-4.9
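
If the binary later complains about missing GLIBCXX symbols, a standard diagnostic (independent of this repo) is to list the symbol versions provided by your installed libstdc++:

strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep GLIBCXX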

Run the example

cd FEMSim
bash run.sh

Parameters (a sample invocation combining these flags is sketched after the list)

-chair chair_cooked.obj # Input: watertight scene mesh
-human meat_cooked.obj # Input: reconstructed human mesh
-chair_cook chair_ls.dat # Input/Output: pre-computed level set of the scene
-human_cook human_ls.dat # Input/Output: pre-computed level set of the human
-chair_pointcloud chair_pc.obj # Output: point cloud of the scene volume
-max_dt 1e-3 # maximum simulation time step (works for most cases unless the scene is too noisy)
-solver_tolerance 1e-2 # implicit solver tolerance (do not change)
-scale_E 150 # Young's modulus; controls the stiffness of the human tissue
-collision_res 450 # resolution of the collision object level set; increase to capture small features or large scenes
-bcc_volume_res 300 # resolution of the human BCC tetrahedron mesh
-bcc_dx 0.03 # spacing of the human BCC tetrahedron mesh
-friction_up_mid_interface 0.64 # y-coordinate separating the upper and middle friction regions
-friction_mid_bot_interface 0.07 # y-coordinate separating the middle and bottom friction regions
-friction_up 0 # friction coefficient for large y (upper region)
-friction_mid 1e-3 # friction coefficient for medium y (middle region)
-friction_bot 1e-3 # friction coefficient for small y (bottom region)
-fps 12 # frames per second
-o output # output folder
-last_frame 120 # total number of frames
-drag 50 # damping coefficient
-tx 0 -ty 3 -tz 0 # translation vector
-R11 1 -R12 0 -R13 0 -R21 0 -R22 1 -R23 0 -R31 0 -R32 0 -R33 1 # rotation matrix entries R_ij (identity shown)
-cook_levelset # re-compute the level sets (needed if the .dat files do not exist)
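
For reference, a single invocation combining these flags might look as follows; the executable name ./fem_sim is a placeholder (the actual name is set in run.sh), so treat this as a sketch rather than a verbatim command:

./fem_sim -chair chair_cooked.obj -human meat_cooked.obj \
    -chair_cook chair_ls.dat -human_cook human_ls.dat -chair_pointcloud chair_pc.obj \
    -max_dt 1e-3 -solver_tolerance 1e-2 -scale_E 150 \
    -collision_res 450 -bcc_volume_res 300 -bcc_dx 0.03 \
    -friction_up_mid_interface 0.64 -friction_mid_bot_interface 0.07 \
    -friction_up 0 -friction_mid 1e-3 -friction_bot 1e-3 \
    -fps 12 -o output -last_frame 120 -drag 50 \
    -tx 0 -ty 3 -tz 0 \
    -R11 1 -R12 0 -R13 0 -R21 0 -R22 1 -R23 0 -R31 0 -R32 0 -R33 1 \
    -cook_levelset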

2.6 Parse and Visualize FEM Simulation Results

Dependency

sudo apt-get install python-colorama

Parsing and visualization

cd FEMSim
python Parse_Force.py -i 'output/ply/69.ply' -v -p

The results will be dumped into the 'output/force' folder. A PLY file colored with the force data is also produced.
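
To parse and visualize every simulated frame rather than a single one, a simple shell loop over the output PLY files works (assuming the default output layout shown above):

for f in output/ply/*.ply; do
    python Parse_Force.py -i "$f" -v -p
done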

Sample output: https://www.dropbox.com/s/lm6co2cxlv0a9v8/fem_sim.png