Shared Task on ir-benchmarks

This repository exemplifies what archived shared task repositories of the IR Experiment Platform look like. Here, we have archived the Retrieval Benchmarks of the IR Experiment Platform.

The archived shared task repositories allow for post-hoc experiments, and we provide several tutorials with examples in Jupyter notebooks.

To start the Jupyter notebook, please clone the archived shared task repository:

git clone git@github.com:tira-io/ir-experiment-platform-benchmarks.git

Inside the cloned repository, you can start the Jupyter notebook, which automatically installs a minimal virtual environment, using:

make jupyterlab

Executing make jupyterlab installs the virtual environment (if it does not exist yet) and starts the Jupyter notebook, ready to run all parts of the tutorial.

For each software submitted to TIRA, the tira integration for PyTerrier loads the Docker image submitted to TIRA and executes it within PyTerrier pipelines (i.e., the first execution may take slightly longer). A minimal sketch of this usage pattern is shown below.
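As an illustration, the sketch below loads a submitted software as a PyTerrier transformer via the tira Python client. It is a minimal sketch patterned on the tutorial notebooks; the submission and dataset identifiers are assumptions for illustration, and the exact names are documented in the notebooks themselves.

```python
# Minimal sketch (assumed identifiers): run a software submitted to the
# ir-benchmarks task inside a PyTerrier pipeline via the tira Python client.
# The submission and dataset names below are placeholders for illustration;
# see the tutorial notebooks for the exact identifiers available on tira.io.
import pyterrier as pt
from tira.rest_api_client import Client

if not pt.started():
    pt.init()

tira = Client()

# Load a submitted retrieval software as a PyTerrier transformer. The Docker
# image is pulled on first use, which is why the first execution takes longer.
bm25 = tira.pt.from_submission(
    'ir-benchmarks/tira-ir-starter/BM25 (tira-ir-starter-pyterrier)',  # assumed submission name
    dataset='antique-test-20230107-training',                          # assumed dataset id
)

# Use it like any other PyTerrier transformer, e.g., for a single query.
run = bm25.search('machine learning')
print(run.head())
```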

The following tutorial notebooks are available:

Up-To-Date Leaderboards

Comparing the leaderboards across different tasks is quite interesting (we report a large-scale evaluation on this in the paper). For example, compare MS MARCO DL 2019 with Antique or Args.me: on MS MARCO, all kinds of deep learning models are at the top, which completely reverses for other corpora, e.g., Args.me or Antique.

The current leaderboards can be viewed on tira.io.
