School in AI Project Work



Domain adaptation for pedestrian detection with Faster R-CNN

This repository contains the code for the "School in AI: Deep Learning, Vision and Language for Industry - second edition" final project work by Matteo Sirri.

Demo Links

  • Google Colab Demo
  • Huggingface Demo
  • Report

Installation

N.B.: Installation is only available in win64 environments

Create and activate an environment with all required packages:

conda create --name pedestrian_detector --file deps/win/conda_requirements.txt
conda activate pedestrian_detector
pip install -r deps/win/pip_requirements.txt
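After activating the environment, a quick sanity check can confirm that the core packages imported correctly. This is a minimal sketch; the package list is an assumption based on the project's dependencies:

```python
import importlib

def check_env(packages=("torch", "torchvision", "numpy")):
    """Return a dict mapping each package name to its version, or None if missing."""
    found = {}
    for name in packages:
        try:
            mod = importlib.import_module(name)
            found[name] = getattr(mod, "__version__", "unknown")
        except ImportError:
            found[name] = None
    return found

if __name__ == "__main__":
    for name, version in check_env().items():
        print(f"{name}: {version or 'NOT INSTALLED'}")
```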

Dataset download and preparation:

Prepare MOTSynth dataset

  1. Download MOTSynth_1:
wget -P ./storage/MOTSynth https://motchallenge.net/data/MOTSynth_1.zip
unzip ./storage/MOTSynth/MOTSynth_1.zip -d ./storage/MOTSynth/
rm ./storage/MOTSynth/MOTSynth_1.zip
  2. Delete videos 123 to 256.
  3. Extract frames from the videos:
python tools/anns/to_frames.py --motsynth-root ./storage/MOTSynth

# now you can delete the other videos
rm -r ./storage/MOTSynth/MOTSynth_1
  4. Download and extract the annotations:
wget -P ./storage/MOTSynth https://motchallenge.net/data/MOTSynth_coco_annotations.zip
unzip ./storage/MOTSynth/MOTSynth_coco_annotations.zip -d ./storage/MOTSynth/
rm ./storage/MOTSynth/MOTSynth_coco_annotations.zip
  5. Prepare combined annotations for MOTSynth from the original COCO annotations:
python tools/anns/combine_anns.py --motsynth-path ./storage/MOTSynth
  6. Prepare the MOTSynth output dir for training results:
mkdir ./storage/motsynth_output
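Once the combined annotations are in place, you can sanity-check them by counting images and annotations. This helper is a sketch (the exact file names under comb_annotations are an assumption), but it works for any COCO-format JSON:

```python
import json

def summarize_coco(ann_path):
    """Return basic statistics for a COCO-format annotation file."""
    with open(ann_path) as f:
        coco = json.load(f)
    return {
        "images": len(coco.get("images", [])),
        "annotations": len(coco.get("annotations", [])),
        "categories": [c["name"] for c in coco.get("categories", [])],
    }

# Example (the path is an assumption):
# print(summarize_coco("./storage/MOTSynth/comb_annotations/train.json"))
```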

Prepare MOT17 dataset

  1. Download MOT17:
wget -P ./storage/MOTChallenge https://motchallenge.net/data/MOT17.zip
unzip ./storage/MOTChallenge/MOT17.zip -d ./storage/MOTChallenge
rm ./storage/MOTChallenge/MOT17.zip
  2. Generate COCO format annotations:
python tools/anns/motcha_to_coco.py --data-root ./storage/MOTChallenge
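The conversion script maps MOT ground-truth rows to COCO-style annotations. As a rough illustration of what such a mapping does (this sketch is not the actual tools/anns/motcha_to_coco.py logic), a single gt.txt row — `frame, id, bb_left, bb_top, width, height, conf, class, visibility` — becomes a COCO annotation dict roughly like this:

```python
def mot_row_to_coco(row, ann_id, image_id):
    """Convert one MOT gt.txt row (comma-separated string) to a COCO-style annotation dict."""
    fields = row.strip().split(",")
    x, y, w, h = (float(v) for v in fields[2:6])
    return {
        "id": ann_id,
        "image_id": image_id,
        "category_id": 1,          # single 'person' class
        "bbox": [x, y, w, h],      # COCO uses [top-left x, top-left y, width, height]
        "area": w * h,
        "iscrowd": 0,
    }

# Example row: frame 1, track 2, box at (912, 484) sized 97x109
print(mot_row_to_coco("1,2,912,484,97,109,1,1,1.0", ann_id=1, image_id=1))
```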

Download pretrained models folder from GDrive

You can find all pretrained models here: https://drive.google.com/drive/folders/15Lv40x3MquSnKbI4U5aGSZtqQuEmiwMH?usp=share_link. Download them and place the .pth files in the storage/pretrained_models directory.

Storage directory tree

After running these steps, your storage directory should look like this:

storage
├── MOTChallenge
│   ├── MOT17
│   └── motcha_coco_annotations
├── MOTSynth
│   ├── annotations
│   ├── comb_annotations
│   └── frames
├── motsynth_output
└── pretrained_models
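To verify the layout, a small helper (a sketch; the directory names are taken from the tree above) can report any missing directories:

```python
from pathlib import Path

EXPECTED_DIRS = [
    "MOTChallenge/MOT17",
    "MOTChallenge/motcha_coco_annotations",
    "MOTSynth/annotations",
    "MOTSynth/comb_annotations",
    "MOTSynth/frames",
    "motsynth_output",
    "pretrained_models",
]

def missing_dirs(storage_root="./storage"):
    """Return the expected sub-directories that do not exist under storage_root."""
    root = Path(storage_root)
    return [d for d in EXPECTED_DIRS if not (root / d).is_dir()]

if __name__ == "__main__":
    missing = missing_dirs()
    print("Storage layout OK" if not missing else f"Missing: {missing}")
```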

Object Detection

Torchvision's detection reference code was adapted to train Faster R-CNN on a portion of the MOTSynth dataset.

  • To train the model, run (change params in the script):
./scripts/train_detector.sh
  • To fine-tune the model, run (change params in the script):
./scripts/fine_tuning_detector.sh
  • To evaluate the model, run (change params in the script):
./scripts/evaluate_detector.sh
  • To run inference and show the results, run (change params in the script):
./scripts/inference_detector.sh

Contact

Matteo Sirri - [email protected]
