- Install Unity (using 2018.2.8f1)
- Download and compile OpenCV 4.1.1 from https://opencv.org/
- Download and compile Dlib 19.17 from http://dlib.net/
- Follow the directions at facial-pose-estimation-opencv to compile `facial-pose-estimation-opencv.dll` (using the included `.dll` might not work on all systems)
- Follow the directions in facial-pose-estimation-pytorch to train the facial pose detection network, or simply download this ONNX Model
- Clone this repo: `git clone https://github.com/NeuralVFX/facial-pose-estimation-unity.git`
- Copy the `.dll` files from the OpenCV `Bin` folder into `facial-pose-estimation-unity\Assets\Plugins`
- Copy `facial-pose-estimation-opencv.dll` into `facial-pose-estimation-unity\Assets\Plugins`
- Download the `SSD`, `Landmark Detection`, and `Facial Pose Estimation` models and place them into `facial-pose-estimation-unity`
- Rename the `Facial Pose Estimation` model `opt_model.onnx`
| Model | Link |
|---|---|
| Facial Pose Estimation Model | `opt_model.onnx` |
| Face Detection SSD Meta | `deploy.prototxt` |
| Face Detection SSD Model | `res10_300x300_ssd_iter_140000_fp16.caffemodel` |
| Landmark Detection Model | `shape_predictor_68_face_landmarks.dat` |
- Should be applied to the face model
- Reads blend-shape values and sets them on the facial mesh
- Uses momentum to smooth values temporally (see the sketch below)
  - `Momentum Weight`, default=2.0, type=float # How far into the future to project the value, based on the previous two frames (1.0 means no projection into the future)
  - `Smoothing Weight`, default=.8, type=float # Blend ratio between the inference value at this frame and the value projected from the previous two frames
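A minimal sketch of how these two weights might combine, assuming the projection is a linear extrapolation from the previous two frames and the smoothing is a simple lerp; the helper name and exact formula are assumptions, not this repo's code:

```csharp
using UnityEngine;

// Sketch only: one plausible interpretation of Momentum Weight / Smoothing Weight.
static class BlendshapeSmoothingSketch
{
    public static float Smooth(float current, float prev, float prevPrev,
                               float momentumWeight = 2.0f, float smoothingWeight = 0.8f)
    {
        // Project forward from the previous two frames:
        // momentumWeight = 1.0 keeps the previous value, 2.0 extrapolates one frame ahead.
        float projected = prevPrev + (prev - prevPrev) * momentumWeight;

        // Blend this frame's inferred value with the projected value.
        return Mathf.Lerp(current, projected, smoothingWeight);
    }
}
```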
- Should be applied to the transform above the facial model
- Reads position and rotation values and sets them on the transform
- Uses momentum to smooth values temporally (see the rotation sketch below)
  - `Momentum Weight`, default=2.0, type=float # How far into the future to project the value, based on the previous two frames (1.0 means no projection into the future)
  - `Smoothing Weight`, default=.8, type=float # Blend ratio between the inference value at this frame and the value projected from the previous two frames
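The same scheme presumably carries over to position and rotation; the only new wrinkle is rotation, sketched below with an unclamped slerp for the projection (again an assumption about the implementation, not a copy of it):

```csharp
using UnityEngine;

// Sketch only: momentum projection and smoothing applied to rotation.
static class RotationSmoothingSketch
{
    public static Quaternion Smooth(Quaternion current, Quaternion prev, Quaternion prevPrev,
                                    float momentumWeight = 2.0f, float smoothingWeight = 0.8f)
    {
        // SlerpUnclamped lets momentumWeight > 1 project past the previous frame.
        Quaternion projected = Quaternion.SlerpUnclamped(prevPrev, prev, momentumWeight);
        return Quaternion.Slerp(current, projected, smoothingWeight);
    }
}
```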
- Should be applied to the camera
- Opens and runs the OpenCV interop (see the sketch below)
  - `Detect Ratio`, default=1, type=int # Amount to scale down the image before the bounding-box detector
  - `Cam Id`, default=0, type=int # ID of the camera to stream from (front, back, etc.)
  - `Fov Zoom`, default=1.0, type=float # FOV zoom multiplier; a high value shrinks the FOV used for the PnP solve
  - `Draw Face Points`, default=false, type=bool # Whether or not to draw points and an axis ornament on the face
  - `Lock Eyes Nose`, default=true, type=bool # Whether or not to consider blendshapes for the eyes and nose during the PnP solve
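For orientation, here is a hedged sketch of how such a component might expose these settings and hand them to the native plugin; the exported function name and signature below are placeholders, not the actual exports of `facial-pose-estimation-opencv.dll`:

```csharp
using System.Runtime.InteropServices;
using UnityEngine;

// Sketch only: field names mirror the inspector parameters listed above;
// the native entry point is a placeholder, not the plugin's real API.
public class FacePipelineSketch : MonoBehaviour
{
    public int detectRatio = 1;         // Downscale factor before the bounding-box detector
    public int camId = 0;               // Which camera to stream from
    public float fovZoom = 1.0f;        // FOV multiplier used for the PnP solve
    public bool drawFacePoints = false; // Draw landmark points / axis ornament on the face
    public bool lockEyesNose = true;    // Skip eye/nose blendshapes during the PnP solve

    [DllImport("facial-pose-estimation-opencv")] // hypothetical signature
    private static extern void InitPipeline(int camId, int detectRatio, float fovZoom,
                                            bool drawFacePoints, bool lockEyesNose);

    void Start()
    {
        InitPipeline(camId, detectRatio, fovZoom, drawFacePoints, lockEyesNose);
    }
}
```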
- Should be applied to the BG plane object
- Retrieves the video stream and applies it to the BG plane as a texture (see the sketch below)
  - `Texture Resolution`, default=1024, type=int # Resolution of the video feed texture
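A minimal sketch of that texture path, assuming the frame arrives as a raw pixel buffer from the plugin; the class and method names here are illustrative only:

```csharp
using UnityEngine;

// Sketch only: create a Texture2D at the configured resolution and assign it
// to this plane's material; the frame-upload path is an assumption.
public class VideoPlaneSketch : MonoBehaviour
{
    public int textureResolution = 1024;   // Resolution of the video feed texture
    private Texture2D videoTexture;

    void Start()
    {
        videoTexture = new Texture2D(textureResolution, textureResolution,
                                     TextureFormat.RGB24, false);
        GetComponent<Renderer>().material.mainTexture = videoTexture;
    }

    // Called each frame with raw pixel data pulled from the plugin (hypothetical).
    public void UpdateFrame(byte[] pixels)
    {
        videoTexture.LoadRawTextureData(pixels);
        videoTexture.Apply();
    }
}
```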
- Open `facial-pose-estimation-unity` as a Unity project
- Within the project, open the scene `\Assets\Scenes\SampleScene.unity`
- Press Play, and the face should snap to your face