Maintainer: Attila Lengyel ([email protected])
We present the "Visual Inductive Priors for Data-Efficient Computer Vision" challenge. We offer two challenges in which models are to be trained from scratch in a data-deficient setting.
The main objective of the challenge is to segment all player and ball instances in the provided images. The dataset consists of images recorded at different basketball matches.
The dataset can be downloaded from the CodaLab competition page. Unzip the file and you're ready to go.
We provide a baseline model based on detectron2. To run the baseline training code, use the official detectron2 Docker image, or install detectron2 and the pycocotools package.
You can now run

python baseline.py

to train the baseline model, and

python baseline.py --dataset_path 'path/to/dataset' --predict 'model_weights.pth.tar'

to perform prediction on the test set and generate the .json file for submission to CodaLab.
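The submission file follows the standard COCO results format for instance segmentation: a list of per-instance records with RLE-encoded masks. A minimal sketch of its shape (the values and the "..." RLE string are placeholders; the actual ids and masks come from your model and the dataset):

```python
import json

# Sketch of the COCO-style results format the submission .json follows.
# All values below are illustrative placeholders.
predictions = [
    {
        "image_id": 1,            # id of the test image
        "category_id": 1,         # e.g. player or ball
        "segmentation": {         # RLE-encoded instance mask
            "size": [1080, 1920], # mask height, width
            "counts": "...",      # run-length-encoded string (placeholder)
        },
        "score": 0.98,            # model confidence
    },
]

with open("predictions.json", "w") as f:
    json.dump(predictions, f)
```

baseline.py in prediction mode produces a file of this shape for you; the sketch is only to show what the evaluation server expects to parse.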
The COCO evaluation metrics are used. The primary metric is AP @ 0.50:0.95. We provide an evaluation script to test your model over the validation set. Note that this script cannot be used to evaluate models over the testing set, as we do not provide labels for the test set. It is good practice to ensure your predictions work with this script, as the same script is used on the evaluation server.
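AP @ 0.50:0.95 is the average precision averaged over ten IoU thresholds from 0.50 to 0.95 in steps of 0.05. A minimal sketch of that averaging, using hypothetical per-threshold AP values for illustration (pycocotools computes the real values for you):

```python
# IoU thresholds used by the primary COCO metric: 0.50, 0.55, ..., 0.95.
thresholds = [0.50 + 0.05 * i for i in range(10)]

# Hypothetical per-threshold AP values, for illustration only.
ap_per_threshold = {t: max(0.0, 0.9 - t) for t in thresholds}

# The primary metric is the mean over all ten thresholds.
ap_50_95 = sum(ap_per_threshold.values()) / len(ap_per_threshold)
```

Stricter thresholds contribute lower AP, so a model must localize masks precisely, not just roughly, to score well on this metric.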
Run python evaluate.py
with the arguments as described in the script to evaluate the predictions. This creates a results.txt
file containing the metrics used for the challenge leaderboard.
The evaluation server is hosted using CodaLab. Submitting to the challenge requires a CodaLab account.
Please find the evaluation server here.
To participate in the challenge, upload a .zip file containing the .json file generated by baseline.py
in prediction mode.
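The packaging step can be sketched with the Python standard library. The filenames here are assumptions: substitute the .json file that baseline.py actually produced (the dummy file below only makes the example self-contained):

```python
import json
import zipfile

# Write a dummy predictions file so the example runs on its own;
# in practice this file comes from baseline.py in prediction mode.
with open("predictions.json", "w") as f:
    json.dump([], f)

# Package the predictions as the .zip file CodaLab expects.
with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("predictions.json")
```

The resulting submission.zip is what you upload on the CodaLab competition page.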