Note: there is an open postdoc position at LIMSI combining machine learning, NLP, speech processing, and computer vision.
pyannote-video: a toolkit for face detection, tracking, and clustering in videos
Create a new conda environment:
$ conda create -n pyannote python=3.6 anaconda
$ source activate pyannote
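Once the environment is active, a quick sanity check can confirm the interpreter version. This is a minimal sketch; the `(3, 6)` floor simply mirrors the `python=3.6` pin in the conda command above.

```python
import sys

def env_ok(required=(3, 6)):
    # the (3, 6) floor mirrors the python=3.6 pin in the conda command above
    return sys.version_info[:2] >= required

print(sys.version.split()[0], "OK" if env_ok() else "older than 3.6")
```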
Install pyannote-video and its dependencies:
$ pip install pyannote-video
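To verify the install without crashing on a missing dependency, one can probe for the packages by import name. A sketch, assuming the import names are `pyannote.video` and `dlib` (the latter is a guess based on the dlib models downloaded below); an empty list means the environment is ready.

```python
import importlib.util

def missing_packages(*names):
    """Return the import names that cannot be found in this environment."""
    missing = []
    for name in names:
        try:
            if importlib.util.find_spec(name) is None:
                missing.append(name)
        except ModuleNotFoundError:
            # raised for dotted names whose parent package is absent
            missing.append(name)
    return missing

# assumed import names for the packages installed above
print(missing_packages("pyannote.video", "dlib"))
```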
Download dlib models:
$ git clone https://github.com/pyannote/pyannote-data.git
$ git clone https://github.com/davisking/dlib-models.git
$ bunzip2 dlib-models/dlib_face_recognition_resnet_model_v1.dat.bz2
$ bunzip2 dlib-models/shape_predictor_68_face_landmarks.dat.bz2
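After extraction, the two `.dat` files should sit in the `dlib-models` directory. A small sketch to check for them; the file names are taken verbatim from the `bunzip2` commands above, and an empty list means both models are in place.

```python
from pathlib import Path

# file names taken from the bunzip2 commands above
MODELS = [
    "dlib_face_recognition_resnet_model_v1.dat",
    "shape_predictor_68_face_landmarks.dat",
]

def missing_models(model_dir="dlib-models"):
    """Return the expected model files not yet extracted into model_dir."""
    root = Path(model_dir)
    return [m for m in MODELS if not (root / m).is_file()]

print(missing_models())
```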
To execute the "Getting started" notebook locally, download the example video and the pyannote.video source code:
$ git clone https://github.com/pyannote/pyannote-data.git  # skip if already cloned above
$ git clone https://github.com/pyannote/pyannote-video.git
$ pip install jupyter
$ jupyter notebook --notebook-dir="pyannote-video/doc"
There is no proper documentation for the time being; the "Getting started" notebook above is the best place to start.