- Clone and enter this repository:

  ```
  git clone [email protected]:timmeinhardt/trackformer.git
  cd trackformer
  ```
- Install packages for Python 3.7:

  ```
  pip3 install -r requirements.txt
  ```
- Install PyTorch 1.5 and torchvision 0.6 (for example, from the official PyTorch previous versions page).
- Install pycocotools (with fixed ignore flag):

  ```
  pip3 install -U 'git+https://github.com/timmeinhardt/cocoapi.git#subdirectory=PythonAPI'
  ```
- Install the MultiScaleDeformableAttention package:

  ```
  python src/trackformer/models/ops/setup.py build --build-base=src/trackformer/models/ops/ install
  ```
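This step compiles a custom extension for deformable attention. As an optional sanity check (only a sketch, assuming the build and install above succeeded in your active environment), you can try importing the compiled extension:

```
python -c "import MultiScaleDeformableAttention; print('MultiScaleDeformableAttention imported successfully')"
```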
- Download and unpack datasets in the `data` directory (a quick check of the generated annotation files is sketched after this list):

  - MOT17:

    ```
    wget https://motchallenge.net/data/MOT17.zip
    unzip MOT17.zip
    python src/generate_coco_from_mot.py
    ```
  - (Optional) MOT20:

    ```
    wget https://motchallenge.net/data/MOT20.zip
    unzip MOT20.zip
    python src/generate_coco_from_mot.py --mot20
    ```
  - (Optional) MOTS20:

    ```
    wget https://motchallenge.net/data/MOTS.zip
    unzip MOTS.zip
    python src/generate_coco_from_mot.py --mots
    ```
  - (Optional) CrowdHuman:

    - Create a `CrowdHuman` and a `CrowdHuman/annotations` directory.
    - Download and extract the `train` and `val` datasets including their corresponding `*.odgt` annotation files into the `CrowdHuman` directory.
    - Create a `CrowdHuman/train_val` directory and merge or symlink the `train` and `val` image folders (see the sketch after this list).
    - Run `python src/generate_coco_from_crowdhuman.py`.
    - The final folder structure should resemble this:

      ```
      |-- data
          |-- CrowdHuman
          |   |-- train
          |   |   |-- *.jpg
          |   |-- val
          |   |   |-- *.jpg
          |   |-- train_val
          |   |   |-- *.jpg
          |   |-- annotations
          |   |   |-- annotation_train.odgt
          |   |   |-- annotation_val.odgt
          |   |   |-- train_val.json
      ```
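The CrowdHuman directory and symlink steps can also be scripted. The following is only a sketch, assuming the `train` and `val` images were extracted to `data/CrowdHuman/train` and `data/CrowdHuman/val` as in the layout shown above:

```
cd data/CrowdHuman
mkdir -p annotations train_val
# Link (or alternatively copy) all train and val images into train_val.
# Absolute targets via $PWD keep the symlinks valid regardless of the working directory.
ln -s "$PWD"/train/*.jpg train_val/
ln -s "$PWD"/val/*.jpg train_val/
```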
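As mentioned above, the generated COCO-style annotation files can be sanity-checked by loading them with pycocotools. A minimal sketch, using the `train_val.json` file from the CrowdHuman layout above (adjust the path for other datasets):

```
python -c "from pycocotools.coco import COCO; c = COCO('data/CrowdHuman/annotations/train_val.json'); print(len(c.getImgIds()), 'images')"
```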
- Download and unpack the pretrained TrackFormer model files in the `models` directory:

  ```
  wget https://vision.in.tum.de/webshare/u/meinhard/trackformer_models_v1.zip
  unzip trackformer_models_v1.zip
  ```
- (Optional) The evaluation of MOTS20 metrics requires two steps:

  - Run TrackFormer with `src/track.py` and output prediction files.
  - Download the official MOTChallenge devkit and run its MOTS evaluation on the prediction files.
To configure, log and reproduce our computational experiments, we structure our code with the Sacred framework. For a detailed explanation of the Sacred interface, please read its documentation.
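For example, every Sacred experiment exposes a common command-line interface for inspecting and overriding its configuration. A minimal sketch, assuming `src/train.py` is one of the Sacred entry points (the override names below are placeholders, not verified TrackFormer options):

```
# Print the resolved experiment configuration without starting a run.
python src/train.py print_config

# Override individual config entries for a run ("with" is Sacred syntax;
# "seed" is a Sacred built-in, "output_dir" is a placeholder name).
python src/train.py with seed=42 output_dir=models/my_run
```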