Rename MS MARCO regressions into consistent schema (#2519)
lintool authored Jun 8, 2024
1 parent 98def6f commit 59330e3
Showing 231 changed files with 1,488 additions and 1,486 deletions.
152 changes: 76 additions & 76 deletions README.md

Large diffs are not rendered by default.

146 changes: 73 additions & 73 deletions docs/regressions.md

Large diffs are not rendered by default.

@@ -11,13 +11,13 @@ The experiments on this page are not actually reported in the paper.
However, the model is the same, applied to the MS MARCO _segmented_ document corpus (without any expansions).
Retrieval uses the MaxP technique, where we take the score of the highest-scoring passage from a document as that document's score to produce a document ranking.
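The MaxP aggregation described above can be sketched as follows; this is a hypothetical illustration (not Anserini's actual implementation), assuming passage IDs of the form `docid#segment`:

```python
from collections import defaultdict

def maxp_aggregate(passage_scores):
    """MaxP: a document's score is the max over its passages' scores.

    passage_scores: iterable of (passage_id, score) pairs, where passage
    IDs are assumed to look like 'D1#0', 'D1#1', ... (docid#segment).
    Returns (docid, score) pairs sorted by score, descending.
    """
    doc_scores = defaultdict(lambda: float('-inf'))
    for pid, score in passage_scores:
        docid = pid.split('#', 1)[0]
        doc_scores[docid] = max(doc_scores[docid], score)
    return sorted(doc_scores.items(), key=lambda kv: kv[1], reverse=True)

# D1's best passage (5.5) outranks D2's only passage (3.1).
ranking = maxp_aggregate([('D1#0', 2.0), ('D1#1', 5.5), ('D2#0', 3.1)])
```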

-The exact configurations for these regressions are stored in [this YAML file](../../src/main/resources/regression/dl19-doc-segmented.unicoil-noexp.yaml).
-Note that this page is automatically generated from [this template](../../src/main/resources/docgen/templates/dl19-doc-segmented.unicoil-noexp.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
+The exact configurations for these regressions are stored in [this YAML file](../../src/main/resources/regression/dl19-doc-segmented.unicoil-noexp.cached.yaml).
+Note that this page is automatically generated from [this template](../../src/main/resources/docgen/templates/dl19-doc-segmented.unicoil-noexp.cached.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.

From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:

```bash
-python src/main/python/run_regression.py --index --verify --search --regression dl19-doc-segmented.unicoil-noexp
+python src/main/python/run_regression.py --index --verify --search --regression dl19-doc-segmented.unicoil-noexp.cached
```

We make available a version of the MS MARCO document corpus that has already been processed with uniCOIL, i.e., we have performed model inference on every document and stored the output sparse vectors.
@@ -26,7 +26,7 @@ Thus, no neural inference is involved.
From any machine, the following command will download the corpus and perform the complete regression, end to end:

```bash
-python src/main/python/run_regression.py --download --index --verify --search --regression dl19-doc-segmented.unicoil-noexp
+python src/main/python/run_regression.py --download --index --verify --search --regression dl19-doc-segmented.unicoil-noexp.cached
```

The `run_regression.py` script automates the following steps, but if you want to perform each step manually, simply copy/paste from the commands below and you'll obtain the same regression results.
@@ -44,7 +44,7 @@ To confirm, `msmarco-doc-segmented-unicoil-noexp.tar` is 11 GB and has MD5 check
With the corpus downloaded, the following command will perform the remaining steps below:

```bash
-python src/main/python/run_regression.py --index --verify --search --regression dl19-doc-segmented.unicoil-noexp \
+python src/main/python/run_regression.py --index --verify --search --regression dl19-doc-segmented.unicoil-noexp.cached \
--corpus-path collections/msmarco-doc-segmented-unicoil-noexp
```

@@ -152,7 +152,7 @@ However, for these topics, we get the same effectiveness results; that is, the t

## Reproduction Log[*](../../docs/reproducibility.md)

-To add to this reproduction log, modify [this template](../../src/main/resources/docgen/templates/dl19-doc-segmented.unicoil-noexp.template) and run `bin/build.sh` to rebuild the documentation.
+To add to this reproduction log, modify [this template](../../src/main/resources/docgen/templates/dl19-doc-segmented.unicoil-noexp.cached.template) and run `bin/build.sh` to rebuild the documentation.

+ Results reproduced by [@manveertamber](https://github.com/manveertamber) on 2022-02-25 (commit [`7472d86`](https://github.com/castorini/anserini/commit/7472d862c7311bc8bbd30655c940d6396e27c223))
+ Results reproduced by [@mayankanand007](https://github.com/mayankanand007) on 2022-02-28 (commit [`950d7fd`](https://github.com/castorini/anserini/commit/950d7fd88dbb87f39e9c1f6ccf9e41cbb6f04f36))
@@ -11,13 +11,13 @@ The experiments on this page are not actually reported in the paper.
However, the model is the same, applied to the MS MARCO _segmented_ document corpus (with doc2query-T5 expansions).
Retrieval uses the MaxP technique, where we take the score of the highest-scoring passage from a document as that document's score to produce a document ranking.

-The exact configurations for these regressions are stored in [this YAML file](../../src/main/resources/regression/dl19-doc-segmented.unicoil.yaml).
-Note that this page is automatically generated from [this template](../../src/main/resources/docgen/templates/dl19-doc-segmented.unicoil.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
+The exact configurations for these regressions are stored in [this YAML file](../../src/main/resources/regression/dl19-doc-segmented.unicoil.cached.yaml).
+Note that this page is automatically generated from [this template](../../src/main/resources/docgen/templates/dl19-doc-segmented.unicoil.cached.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.

From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:

```bash
-python src/main/python/run_regression.py --index --verify --search --regression dl19-doc-segmented.unicoil
+python src/main/python/run_regression.py --index --verify --search --regression dl19-doc-segmented.unicoil.cached
```

We make available a version of the MS MARCO document corpus that has already been processed with uniCOIL, i.e., we have applied doc2query-T5 expansions, performed model inference on every document, and stored the output sparse vectors.
@@ -26,7 +26,7 @@ Thus, no neural inference is involved.
From any machine, the following command will download the corpus and perform the complete regression, end to end:

```bash
-python src/main/python/run_regression.py --download --index --verify --search --regression dl19-doc-segmented.unicoil
+python src/main/python/run_regression.py --download --index --verify --search --regression dl19-doc-segmented.unicoil.cached
```

The `run_regression.py` script automates the following steps, but if you want to perform each step manually, simply copy/paste from the commands below and you'll obtain the same regression results.
@@ -44,7 +44,7 @@ To confirm, `msmarco-doc-segmented-unicoil.tar` is 19 GB and has MD5 checksum `6
With the corpus downloaded, the following command will perform the remaining steps below:

```bash
-python src/main/python/run_regression.py --index --verify --search --regression dl19-doc-segmented.unicoil \
+python src/main/python/run_regression.py --index --verify --search --regression dl19-doc-segmented.unicoil.cached \
--corpus-path collections/msmarco-doc-segmented-unicoil
```

@@ -152,7 +152,7 @@ However, for these topics, we get the same effectiveness results; that is, the t

## Reproduction Log[*](../../docs/reproducibility.md)

-To add to this reproduction log, modify [this template](../../src/main/resources/docgen/templates/dl19-doc-segmented.unicoil.template) and run `bin/build.sh` to rebuild the documentation.
+To add to this reproduction log, modify [this template](../../src/main/resources/docgen/templates/dl19-doc-segmented.unicoil.cached.template) and run `bin/build.sh` to rebuild the documentation.

+ Results reproduced by [@manveertamber](https://github.com/manveertamber) on 2022-02-25 (commit [`7472d86`](https://github.com/castorini/anserini/commit/7472d862c7311bc8bbd30655c940d6396e27c223))
+ Results reproduced by [@mayankanand007](https://github.com/mayankanand007) on 2022-02-28 (commit [`950d7fd`](https://github.com/castorini/anserini/commit/950d7fd88dbb87f39e9c1f6ccf9e41cbb6f04f36))
@@ -11,21 +11,21 @@ In these experiments, we are using pre-encoded queries (i.e., cached results of
Note that the NIST relevance judgments provide far more relevant passages per topic than the "sparse" judgments provided by Microsoft (the NIST judgments are sometimes called "dense" judgments to emphasize this contrast).
For additional instructions on working with the MS MARCO passage collection, refer to [this page](experiments-msmarco-passage.md).

-The exact configurations for these regressions are stored in [this YAML file](../../src/main/resources/regression/dl19-passage.bge-base-en-v1.5.hnsw-int8.yaml).
-Note that this page is automatically generated from [this template](../../src/main/resources/docgen/templates/dl19-passage.bge-base-en-v1.5.hnsw-int8.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
+The exact configurations for these regressions are stored in [this YAML file](../../src/main/resources/regression/dl19-passage.bge-base-en-v1.5.hnsw-int8.cached.yaml).
+Note that this page is automatically generated from [this template](../../src/main/resources/docgen/templates/dl19-passage.bge-base-en-v1.5.hnsw-int8.cached.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.

From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:

```bash
-python src/main/python/run_regression.py --index --verify --search --regression dl19-passage.bge-base-en-v1.5.hnsw-int8
+python src/main/python/run_regression.py --index --verify --search --regression dl19-passage.bge-base-en-v1.5.hnsw-int8.cached
```

We make available a version of the MS MARCO Passage Corpus that has already been encoded with BGE-base-en-v1.5.

From any machine, the following command will download the corpus and perform the complete regression, end to end:

```bash
-python src/main/python/run_regression.py --download --index --verify --search --regression dl19-passage.bge-base-en-v1.5.hnsw-int8
+python src/main/python/run_regression.py --download --index --verify --search --regression dl19-passage.bge-base-en-v1.5.hnsw-int8.cached
```

The `run_regression.py` script automates the following steps, but if you want to perform each step manually, simply copy/paste from the commands below and you'll obtain the same regression results.
@@ -43,7 +43,7 @@ To confirm, `msmarco-passage-bge-base-en-v1.5.tar` is 59 GB and has MD5 checksum
With the corpus downloaded, the following command will perform the remaining steps below:

```bash
-python src/main/python/run_regression.py --index --verify --search --regression dl19-passage.bge-base-en-v1.5.hnsw-int8 \
+python src/main/python/run_regression.py --index --verify --search --regression dl19-passage.bge-base-en-v1.5.hnsw-int8.cached \
--corpus-path collections/msmarco-passage-bge-base-en-v1.5
```

@@ -82,17 +82,17 @@ bin/run.sh io.anserini.search.SearchHnswDenseVectors \
  -index indexes/lucene-hnsw-int8.msmarco-v1-passage.bge-base-en-v1.5/ \
  -topics tools/topics-and-qrels/topics.dl19-passage.bge-base-en-v1.5.jsonl.gz \
  -topicReader JsonIntVector \
-  -output runs/run.msmarco-passage-bge-base-en-v1.5.bge-hnsw-cached_q.topics.dl19-passage.bge-base-en-v1.5.jsonl.txt \
+  -output runs/run.msmarco-passage-bge-base-en-v1.5.bge-hnsw-cached.topics.dl19-passage.bge-base-en-v1.5.jsonl.txt \
  -generator VectorQueryGenerator -topicField vector -threads 16 -hits 1000 -efSearch 1000 &
```

Evaluation can be performed using `trec_eval`:

```bash
-bin/trec_eval -m map -c -l 2 tools/topics-and-qrels/qrels.dl19-passage.txt runs/run.msmarco-passage-bge-base-en-v1.5.bge-hnsw-cached_q.topics.dl19-passage.bge-base-en-v1.5.jsonl.txt
-bin/trec_eval -m ndcg_cut.10 -c tools/topics-and-qrels/qrels.dl19-passage.txt runs/run.msmarco-passage-bge-base-en-v1.5.bge-hnsw-cached_q.topics.dl19-passage.bge-base-en-v1.5.jsonl.txt
-bin/trec_eval -m recall.100 -c -l 2 tools/topics-and-qrels/qrels.dl19-passage.txt runs/run.msmarco-passage-bge-base-en-v1.5.bge-hnsw-cached_q.topics.dl19-passage.bge-base-en-v1.5.jsonl.txt
-bin/trec_eval -m recall.1000 -c -l 2 tools/topics-and-qrels/qrels.dl19-passage.txt runs/run.msmarco-passage-bge-base-en-v1.5.bge-hnsw-cached_q.topics.dl19-passage.bge-base-en-v1.5.jsonl.txt
+bin/trec_eval -m map -c -l 2 tools/topics-and-qrels/qrels.dl19-passage.txt runs/run.msmarco-passage-bge-base-en-v1.5.bge-hnsw-cached.topics.dl19-passage.bge-base-en-v1.5.jsonl.txt
+bin/trec_eval -m ndcg_cut.10 -c tools/topics-and-qrels/qrels.dl19-passage.txt runs/run.msmarco-passage-bge-base-en-v1.5.bge-hnsw-cached.topics.dl19-passage.bge-base-en-v1.5.jsonl.txt
+bin/trec_eval -m recall.100 -c -l 2 tools/topics-and-qrels/qrels.dl19-passage.txt runs/run.msmarco-passage-bge-base-en-v1.5.bge-hnsw-cached.topics.dl19-passage.bge-base-en-v1.5.jsonl.txt
+bin/trec_eval -m recall.1000 -c -l 2 tools/topics-and-qrels/qrels.dl19-passage.txt runs/run.msmarco-passage-bge-base-en-v1.5.bge-hnsw-cached.topics.dl19-passage.bge-base-en-v1.5.jsonl.txt
```

## Effectiveness
@@ -110,12 +110,12 @@ With the above commands, you should be able to reproduce the following results:
| [DL19 (Passage)](https://trec.nist.gov/data/deep2020.html) | 0.843 |

Note that due to the non-deterministic nature of HNSW indexing, results may differ slightly between each experimental run.
-Nevertheless, scores are generally within 0.005 of the reference values recorded in [our YAML configuration file](../../src/main/resources/regression/dl19-passage.bge-base-en-v1.5.hnsw-int8.yaml).
+Nevertheless, scores are generally within 0.005 of the reference values recorded in [our YAML configuration file](../../src/main/resources/regression/dl19-passage.bge-base-en-v1.5.hnsw-int8.cached.yaml).
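A check like the one described above can be sketched as follows; `within_tolerance` is a hypothetical helper, and the metric values shown are illustrative, not the actual reference scores from the YAML configuration:

```python
def within_tolerance(observed, reference, tol=0.005):
    """Return True if every observed metric is within tol of its
    reference value (allowing for HNSW indexing non-determinism)."""
    return all(abs(observed[m] - reference[m]) <= tol for m in reference)

reference = {'nDCG@10': 0.700, 'R@1000': 0.840}  # illustrative values only
observed = {'nDCG@10': 0.702, 'R@1000': 0.838}   # e.g., from a fresh run
ok = within_tolerance(observed, reference)       # True for this toy data
```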

Note that retrieval metrics are computed to a depth of 1000 hits per query (as opposed to 100 hits per query for document ranking).
Also, when computing nDCG, remember that we keep qrels of _all_ relevance grades, whereas for the other metrics (e.g., AP), relevance grade 1 is considered not relevant (i.e., use the `-l 2` option in `trec_eval`).
The experimental results reported here are directly comparable to the results reported in the [track overview paper](https://arxiv.org/abs/2003.07820).
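The effect of the `-l 2` option can be illustrated with a hedged sketch of binary average precision over graded judgments (toy data, not the actual DL19 qrels):

```python
def average_precision(ranked_docs, qrels, min_rel=2):
    """Binary AP where a document counts as relevant only if its graded
    judgment is >= min_rel, mirroring trec_eval's -l option."""
    num_rel = sum(1 for grade in qrels.values() if grade >= min_rel)
    if num_rel == 0:
        return 0.0
    hits, ap = 0, 0.0
    for rank, doc in enumerate(ranked_docs, start=1):
        if qrels.get(doc, 0) >= min_rel:
            hits += 1
            ap += hits / rank
    return ap / num_rel

qrels = {'p1': 3, 'p2': 1, 'p3': 2}  # toy graded judgments
run = ['p2', 'p1', 'p3']
# With -l 2 semantics, p2 (grade 1) does not count as relevant,
# so AP is lower than if grade 1 were treated as relevant.
```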

## Reproduction Log[*](reproducibility.md)

-To add to this reproduction log, modify [this template](../../src/main/resources/docgen/templates/dl19-passage.bge-base-en-v1.5.hnsw-int8.template) and run `bin/build.sh` to rebuild the documentation.
+To add to this reproduction log, modify [this template](../../src/main/resources/docgen/templates/dl19-passage.bge-base-en-v1.5.hnsw-int8.cached.template) and run `bin/build.sh` to rebuild the documentation.
@@ -82,17 +82,17 @@ bin/run.sh io.anserini.search.SearchHnswDenseVectors \
  -index indexes/lucene-hnsw-int8.msmarco-v1-passage.bge-base-en-v1.5/ \
  -topics tools/topics-and-qrels/topics.dl19-passage.txt \
  -topicReader TsvInt \
-  -output runs/run.msmarco-passage-bge-base-en-v1.5.bge-hnsw.topics.dl19-passage.txt \
+  -output runs/run.msmarco-passage-bge-base-en-v1.5.bge-hnsw-onnx.topics.dl19-passage.txt \
  -generator VectorQueryGenerator -topicField title -threads 16 -hits 1000 -efSearch 1000 -encoder BgeBaseEn15 &
```

Evaluation can be performed using `trec_eval`:

```bash
-bin/trec_eval -m map -c -l 2 tools/topics-and-qrels/qrels.dl19-passage.txt runs/run.msmarco-passage-bge-base-en-v1.5.bge-hnsw.topics.dl19-passage.txt
-bin/trec_eval -m ndcg_cut.10 -c tools/topics-and-qrels/qrels.dl19-passage.txt runs/run.msmarco-passage-bge-base-en-v1.5.bge-hnsw.topics.dl19-passage.txt
-bin/trec_eval -m recall.100 -c -l 2 tools/topics-and-qrels/qrels.dl19-passage.txt runs/run.msmarco-passage-bge-base-en-v1.5.bge-hnsw.topics.dl19-passage.txt
-bin/trec_eval -m recall.1000 -c -l 2 tools/topics-and-qrels/qrels.dl19-passage.txt runs/run.msmarco-passage-bge-base-en-v1.5.bge-hnsw.topics.dl19-passage.txt
+bin/trec_eval -m map -c -l 2 tools/topics-and-qrels/qrels.dl19-passage.txt runs/run.msmarco-passage-bge-base-en-v1.5.bge-hnsw-onnx.topics.dl19-passage.txt
+bin/trec_eval -m ndcg_cut.10 -c tools/topics-and-qrels/qrels.dl19-passage.txt runs/run.msmarco-passage-bge-base-en-v1.5.bge-hnsw-onnx.topics.dl19-passage.txt
+bin/trec_eval -m recall.100 -c -l 2 tools/topics-and-qrels/qrels.dl19-passage.txt runs/run.msmarco-passage-bge-base-en-v1.5.bge-hnsw-onnx.topics.dl19-passage.txt
+bin/trec_eval -m recall.1000 -c -l 2 tools/topics-and-qrels/qrels.dl19-passage.txt runs/run.msmarco-passage-bge-base-en-v1.5.bge-hnsw-onnx.topics.dl19-passage.txt
```

## Effectiveness
