MS MARCO V2 Passage

The two-click* reproduction matrix below provides commands for reproducing experimental results reported in the following paper. Numbered rows correspond to tables in the paper; additional conditions are provided for comparison purposes.

Xueguang Ma, Ronak Pradeep, Rodrigo Nogueira, and Jimmy Lin. Document Expansions and Learned Sparse Lexical Representations for MS MARCO V1 and V2. Proceedings of the 45th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2022), July 2022.

Instructions for programmatic execution are shown at the bottom of this page (scroll down).
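Every condition below issues the same five retrieval commands, one per query set, varying only the index, topics, and output filename. As a rough sketch of driving them programmatically (this is illustrative only, not the official 2CR driver; the topic keys and filename pattern are taken from the commands below):

```python
# Illustrative: (topics key, run-file suffix) pairs used throughout this page.
QUERY_SETS = [
    ("dl21", "dl21"),
    ("dl22", "dl22"),
    ("dl23", "dl23"),
    ("msmarco-v2-passage-dev", "dev"),
    ("msmarco-v2-passage-dev2", "dev2"),
]

def search_command(index, topics, suffix, tag="bm25-default", extra=("--bm25",)):
    """Build one pyserini.search.lucene invocation as an argv list."""
    out = f"run.msmarco-v2-passage.{tag}.{suffix}.txt"
    return ["python", "-m", "pyserini.search.lucene",
            "--threads", "16", "--batch-size", "128",
            "--index", index, "--topics", topics,
            "--output", out, *extra]

cmd = search_command("msmarco-v2-passage", "dl21", "dl21")
# Each argv list could then be launched with subprocess.run(cmd, check=True).
```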

For each condition, the matrix reports AP, nDCG@10, and R@1K on the TREC 2021, TREC 2022, and TREC 2023 query sets, and RR@100 and R@1K on the dev and dev2 query sets.
BM25 (default parameters), original passage corpus. Command to generate run on TREC 2021 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage \
  --topics dl21 \
  --output run.msmarco-v2-passage.bm25-default.dl21.txt \
  --bm25
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl21-passage run.msmarco-v2-passage.bm25-default.dl21.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl21-passage run.msmarco-v2-passage.bm25-default.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl21-passage run.msmarco-v2-passage.bm25-default.dl21.txt
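Each trec_eval invocation prints one metric per line as whitespace-separated `metric  query  value` triples, with query id `all` for the aggregate. A small parser for collecting the aggregates, assuming that output format (the sample values below are made up):

```python
def parse_trec_eval(output: str) -> dict:
    """Collect aggregate ('all') scores from trec_eval stdout."""
    scores = {}
    for line in output.splitlines():
        parts = line.split()  # metric name, query id, value
        if len(parts) == 3 and parts[1] == "all":
            scores[parts[0]] = float(parts[2])
    return scores

# Placeholder numbers, not actual results:
sample = "map\tall\t0.1234\nndcg_cut_10\tall\t0.4567\nrecall_1000\tall\t0.8901\n"
scores = parse_trec_eval(sample)
```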
Command to generate run on TREC 2022 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage \
  --topics dl22 \
  --output run.msmarco-v2-passage.bm25-default.dl22.txt \
  --bm25
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl22-passage run.msmarco-v2-passage.bm25-default.dl22.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl22-passage run.msmarco-v2-passage.bm25-default.dl22.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl22-passage run.msmarco-v2-passage.bm25-default.dl22.txt
Command to generate run on TREC 2023 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage \
  --topics dl23 \
  --output run.msmarco-v2-passage.bm25-default.dl23.txt \
  --bm25
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl23-passage run.msmarco-v2-passage.bm25-default.dl23.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl23-passage run.msmarco-v2-passage.bm25-default.dl23.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl23-passage run.msmarco-v2-passage.bm25-default.dl23.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage \
  --topics msmarco-v2-passage-dev \
  --output run.msmarco-v2-passage.bm25-default.dev.txt \
  --bm25
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev run.msmarco-v2-passage.bm25-default.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev run.msmarco-v2-passage.bm25-default.dev.txt
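RR@100 on the dev sets is the mean over queries of 1/rank of the first relevant passage within the top 100 hits (`-M 100` truncates the run before scoring). A self-contained sanity check against toy data, assuming the standard TREC run and qrels formats:

```python
from collections import defaultdict

def mean_rr(run_lines, qrels_lines, cutoff=100):
    """Mean reciprocal rank at `cutoff` from TREC-format run/qrels lines."""
    relevant = defaultdict(set)
    for line in qrels_lines:
        qid, _, docid, rel = line.split()
        if int(rel) > 0:
            relevant[qid].add(docid)
    ranked = defaultdict(list)
    for line in run_lines:
        qid, _, docid, rank, _, _ = line.split()
        ranked[qid].append((int(rank), docid))
    total = 0.0
    for qid, docs in relevant.items():
        for rank, docid in sorted(ranked[qid])[:cutoff]:
            if docid in docs:
                total += 1.0 / rank
                break
    return total / len(relevant)

run = ["q1 Q0 d3 1 9.1 tag", "q1 Q0 d7 2 8.0 tag",
       "q2 Q0 d5 1 7.5 tag", "q2 Q0 d9 2 6.2 tag"]
qrels = ["q1 0 d7 1", "q2 0 d5 1"]
print(mean_rr(run, qrels))  # (1/2 + 1/1) / 2 = 0.75
```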
Command to generate run on dev2 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage \
  --topics msmarco-v2-passage-dev2 \
  --output run.msmarco-v2-passage.bm25-default.dev2.txt \
  --bm25
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev2 run.msmarco-v2-passage.bm25-default.dev2.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev2 run.msmarco-v2-passage.bm25-default.dev2.txt
BM25 (default parameters), augmented passage corpus. Command to generate run on TREC 2021 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-augmented \
  --topics dl21 \
  --output run.msmarco-v2-passage.bm25-augmented-default.dl21.txt \
  --bm25
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl21-passage run.msmarco-v2-passage.bm25-augmented-default.dl21.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl21-passage run.msmarco-v2-passage.bm25-augmented-default.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl21-passage run.msmarco-v2-passage.bm25-augmented-default.dl21.txt
Command to generate run on TREC 2022 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-augmented \
  --topics dl22 \
  --output run.msmarco-v2-passage.bm25-augmented-default.dl22.txt \
  --bm25
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl22-passage run.msmarco-v2-passage.bm25-augmented-default.dl22.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl22-passage run.msmarco-v2-passage.bm25-augmented-default.dl22.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl22-passage run.msmarco-v2-passage.bm25-augmented-default.dl22.txt
Command to generate run on TREC 2023 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-augmented \
  --topics dl23 \
  --output run.msmarco-v2-passage.bm25-augmented-default.dl23.txt \
  --bm25
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl23-passage run.msmarco-v2-passage.bm25-augmented-default.dl23.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl23-passage run.msmarco-v2-passage.bm25-augmented-default.dl23.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl23-passage run.msmarco-v2-passage.bm25-augmented-default.dl23.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-augmented \
  --topics msmarco-v2-passage-dev \
  --output run.msmarco-v2-passage.bm25-augmented-default.dev.txt \
  --bm25
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev run.msmarco-v2-passage.bm25-augmented-default.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev run.msmarco-v2-passage.bm25-augmented-default.dev.txt
Command to generate run on dev2 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-augmented \
  --topics msmarco-v2-passage-dev2 \
  --output run.msmarco-v2-passage.bm25-augmented-default.dev2.txt \
  --bm25
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev2 run.msmarco-v2-passage.bm25-augmented-default.dev2.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev2 run.msmarco-v2-passage.bm25-augmented-default.dev2.txt
BM25 + RM3 (default parameters), original passage corpus. Command to generate run on TREC 2021 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage \
  --topics dl21 \
  --output run.msmarco-v2-passage.bm25-rm3-default.dl21.txt \
  --bm25 --rm3
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl21-passage run.msmarco-v2-passage.bm25-rm3-default.dl21.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl21-passage run.msmarco-v2-passage.bm25-rm3-default.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl21-passage run.msmarco-v2-passage.bm25-rm3-default.dl21.txt
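The `--rm3` flag enables RM3 pseudo-relevance feedback: terms drawn from the top-ranked passages of an initial BM25 pass are interpolated with the original query, and the expanded weighted query is re-issued. A much-simplified sketch of the expansion step (uniform feedback weights, toy tokenization; not Pyserini's exact implementation):

```python
from collections import Counter

def rm3_expand(query_terms, feedback_docs, fb_terms=10, alpha=0.5):
    """Interpolate original query term weights with feedback-term weights."""
    orig = Counter(query_terms)
    total = sum(orig.values())
    orig_w = {t: c / total for t, c in orig.items()}

    fb = Counter()
    for doc in feedback_docs:  # each feedback doc is a list of tokens
        fb.update(doc)
    top = dict(fb.most_common(fb_terms))
    fb_total = sum(top.values())
    fb_w = {t: c / fb_total for t, c in top.items()}

    # Weighted query: alpha * original + (1 - alpha) * feedback.
    return {t: alpha * orig_w.get(t, 0) + (1 - alpha) * fb_w.get(t, 0)
            for t in set(orig_w) | set(fb_w)}

q = ["hubble", "telescope"]
docs = [["hubble", "space", "telescope", "nasa"], ["space", "nasa", "orbit"]]
weights = rm3_expand(q, docs)
```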
Command to generate run on TREC 2022 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage \
  --topics dl22 \
  --output run.msmarco-v2-passage.bm25-rm3-default.dl22.txt \
  --bm25 --rm3
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl22-passage run.msmarco-v2-passage.bm25-rm3-default.dl22.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl22-passage run.msmarco-v2-passage.bm25-rm3-default.dl22.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl22-passage run.msmarco-v2-passage.bm25-rm3-default.dl22.txt
Command to generate run on TREC 2023 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage \
  --topics dl23 \
  --output run.msmarco-v2-passage.bm25-rm3-default.dl23.txt \
  --bm25 --rm3
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl23-passage run.msmarco-v2-passage.bm25-rm3-default.dl23.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl23-passage run.msmarco-v2-passage.bm25-rm3-default.dl23.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl23-passage run.msmarco-v2-passage.bm25-rm3-default.dl23.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage \
  --topics msmarco-v2-passage-dev \
  --output run.msmarco-v2-passage.bm25-rm3-default.dev.txt \
  --bm25 --rm3
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev run.msmarco-v2-passage.bm25-rm3-default.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev run.msmarco-v2-passage.bm25-rm3-default.dev.txt
Command to generate run on dev2 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage \
  --topics msmarco-v2-passage-dev2 \
  --output run.msmarco-v2-passage.bm25-rm3-default.dev2.txt \
  --bm25 --rm3
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev2 run.msmarco-v2-passage.bm25-rm3-default.dev2.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev2 run.msmarco-v2-passage.bm25-rm3-default.dev2.txt
BM25 + RM3 (default parameters), augmented passage corpus. Command to generate run on TREC 2021 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-augmented \
  --topics dl21 \
  --output run.msmarco-v2-passage.bm25-rm3-augmented-default.dl21.txt \
  --bm25 --rm3
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl21-passage run.msmarco-v2-passage.bm25-rm3-augmented-default.dl21.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl21-passage run.msmarco-v2-passage.bm25-rm3-augmented-default.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl21-passage run.msmarco-v2-passage.bm25-rm3-augmented-default.dl21.txt
Command to generate run on TREC 2022 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-augmented \
  --topics dl22 \
  --output run.msmarco-v2-passage.bm25-rm3-augmented-default.dl22.txt \
  --bm25 --rm3
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl22-passage run.msmarco-v2-passage.bm25-rm3-augmented-default.dl22.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl22-passage run.msmarco-v2-passage.bm25-rm3-augmented-default.dl22.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl22-passage run.msmarco-v2-passage.bm25-rm3-augmented-default.dl22.txt
Command to generate run on TREC 2023 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-augmented \
  --topics dl23 \
  --output run.msmarco-v2-passage.bm25-rm3-augmented-default.dl23.txt \
  --bm25 --rm3
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl23-passage run.msmarco-v2-passage.bm25-rm3-augmented-default.dl23.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl23-passage run.msmarco-v2-passage.bm25-rm3-augmented-default.dl23.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl23-passage run.msmarco-v2-passage.bm25-rm3-augmented-default.dl23.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-augmented \
  --topics msmarco-v2-passage-dev \
  --output run.msmarco-v2-passage.bm25-rm3-augmented-default.dev.txt \
  --bm25 --rm3
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev run.msmarco-v2-passage.bm25-rm3-augmented-default.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev run.msmarco-v2-passage.bm25-rm3-augmented-default.dev.txt
Command to generate run on dev2 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-augmented \
  --topics msmarco-v2-passage-dev2 \
  --output run.msmarco-v2-passage.bm25-rm3-augmented-default.dev2.txt \
  --bm25 --rm3
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev2 run.msmarco-v2-passage.bm25-rm3-augmented-default.dev2.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev2 run.msmarco-v2-passage.bm25-rm3-augmented-default.dev2.txt
BM25 (default parameters) with doc2query-T5 expansion. Command to generate run on TREC 2021 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-d2q-t5 \
  --topics dl21 \
  --output run.msmarco-v2-passage.bm25-d2q-t5-default.dl21.txt \
  --bm25
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl21-passage run.msmarco-v2-passage.bm25-d2q-t5-default.dl21.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl21-passage run.msmarco-v2-passage.bm25-d2q-t5-default.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl21-passage run.msmarco-v2-passage.bm25-d2q-t5-default.dl21.txt
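The `-d2q-t5` indexes were built from passages expanded with doc2query-T5: queries predicted by a T5 model are appended to the passage text before indexing, so retrieval itself is still plain BM25. Conceptually (the predicted queries below are placeholders, not actual model output):

```python
def expand_passage(passage: str, predicted_queries: list) -> str:
    """Append model-predicted queries to a passage prior to indexing."""
    return passage + " " + " ".join(predicted_queries)

doc = "The Hubble Space Telescope was launched in 1990."
preds = ["when was hubble launched", "what is the hubble telescope"]
expanded = expand_passage(doc, preds)
```

Expansion shifts term statistics toward likely query vocabulary, which is why the same default BM25 parameters score differently against these indexes.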
Command to generate run on TREC 2022 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-d2q-t5 \
  --topics dl22 \
  --output run.msmarco-v2-passage.bm25-d2q-t5-default.dl22.txt \
  --bm25
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl22-passage run.msmarco-v2-passage.bm25-d2q-t5-default.dl22.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl22-passage run.msmarco-v2-passage.bm25-d2q-t5-default.dl22.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl22-passage run.msmarco-v2-passage.bm25-d2q-t5-default.dl22.txt
Command to generate run on TREC 2023 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-d2q-t5 \
  --topics dl23 \
  --output run.msmarco-v2-passage.bm25-d2q-t5-default.dl23.txt \
  --bm25
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl23-passage run.msmarco-v2-passage.bm25-d2q-t5-default.dl23.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl23-passage run.msmarco-v2-passage.bm25-d2q-t5-default.dl23.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl23-passage run.msmarco-v2-passage.bm25-d2q-t5-default.dl23.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-d2q-t5 \
  --topics msmarco-v2-passage-dev \
  --output run.msmarco-v2-passage.bm25-d2q-t5-default.dev.txt \
  --bm25
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev run.msmarco-v2-passage.bm25-d2q-t5-default.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev run.msmarco-v2-passage.bm25-d2q-t5-default.dev.txt
Command to generate run on dev2 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-d2q-t5 \
  --topics msmarco-v2-passage-dev2 \
  --output run.msmarco-v2-passage.bm25-d2q-t5-default.dev2.txt \
  --bm25
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev2 run.msmarco-v2-passage.bm25-d2q-t5-default.dev2.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev2 run.msmarco-v2-passage.bm25-d2q-t5-default.dev2.txt
BM25 (default parameters) with doc2query-T5 expansion, augmented passage corpus. Command to generate run on TREC 2021 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-augmented-d2q-t5 \
  --topics dl21 \
  --output run.msmarco-v2-passage.bm25-d2q-t5-augmented-default.dl21.txt \
  --bm25
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl21-passage run.msmarco-v2-passage.bm25-d2q-t5-augmented-default.dl21.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl21-passage run.msmarco-v2-passage.bm25-d2q-t5-augmented-default.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl21-passage run.msmarco-v2-passage.bm25-d2q-t5-augmented-default.dl21.txt
Command to generate run on TREC 2022 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-augmented-d2q-t5 \
  --topics dl22 \
  --output run.msmarco-v2-passage.bm25-d2q-t5-augmented-default.dl22.txt \
  --bm25
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl22-passage run.msmarco-v2-passage.bm25-d2q-t5-augmented-default.dl22.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl22-passage run.msmarco-v2-passage.bm25-d2q-t5-augmented-default.dl22.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl22-passage run.msmarco-v2-passage.bm25-d2q-t5-augmented-default.dl22.txt
Command to generate run on TREC 2023 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-augmented-d2q-t5 \
  --topics dl23 \
  --output run.msmarco-v2-passage.bm25-d2q-t5-augmented-default.dl23.txt \
  --bm25
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl23-passage run.msmarco-v2-passage.bm25-d2q-t5-augmented-default.dl23.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl23-passage run.msmarco-v2-passage.bm25-d2q-t5-augmented-default.dl23.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl23-passage run.msmarco-v2-passage.bm25-d2q-t5-augmented-default.dl23.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-augmented-d2q-t5 \
  --topics msmarco-v2-passage-dev \
  --output run.msmarco-v2-passage.bm25-d2q-t5-augmented-default.dev.txt \
  --bm25
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev run.msmarco-v2-passage.bm25-d2q-t5-augmented-default.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev run.msmarco-v2-passage.bm25-d2q-t5-augmented-default.dev.txt
Command to generate run on dev2 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-augmented-d2q-t5 \
  --topics msmarco-v2-passage-dev2 \
  --output run.msmarco-v2-passage.bm25-d2q-t5-augmented-default.dev2.txt \
  --bm25
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev2 run.msmarco-v2-passage.bm25-d2q-t5-augmented-default.dev2.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev2 run.msmarco-v2-passage.bm25-d2q-t5-augmented-default.dev2.txt
BM25 + RM3 (default parameters) with doc2query-T5 expansion. Command to generate run on TREC 2021 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-d2q-t5-docvectors \
  --topics dl21 \
  --output run.msmarco-v2-passage.bm25-rm3-d2q-t5-default.dl21.txt \
  --bm25 --rm3
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl21-passage run.msmarco-v2-passage.bm25-rm3-d2q-t5-default.dl21.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl21-passage run.msmarco-v2-passage.bm25-rm3-d2q-t5-default.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl21-passage run.msmarco-v2-passage.bm25-rm3-d2q-t5-default.dl21.txt
Command to generate run on TREC 2022 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-d2q-t5-docvectors \
  --topics dl22 \
  --output run.msmarco-v2-passage.bm25-rm3-d2q-t5-default.dl22.txt \
  --bm25 --rm3
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl22-passage run.msmarco-v2-passage.bm25-rm3-d2q-t5-default.dl22.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl22-passage run.msmarco-v2-passage.bm25-rm3-d2q-t5-default.dl22.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl22-passage run.msmarco-v2-passage.bm25-rm3-d2q-t5-default.dl22.txt
Command to generate run on TREC 2023 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-d2q-t5-docvectors \
  --topics dl23 \
  --output run.msmarco-v2-passage.bm25-rm3-d2q-t5-default.dl23.txt \
  --bm25 --rm3
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl23-passage run.msmarco-v2-passage.bm25-rm3-d2q-t5-default.dl23.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl23-passage run.msmarco-v2-passage.bm25-rm3-d2q-t5-default.dl23.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl23-passage run.msmarco-v2-passage.bm25-rm3-d2q-t5-default.dl23.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-d2q-t5-docvectors \
  --topics msmarco-v2-passage-dev \
  --output run.msmarco-v2-passage.bm25-rm3-d2q-t5-default.dev.txt \
  --bm25 --rm3
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev run.msmarco-v2-passage.bm25-rm3-d2q-t5-default.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev run.msmarco-v2-passage.bm25-rm3-d2q-t5-default.dev.txt
Command to generate run on dev2 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-d2q-t5-docvectors \
  --topics msmarco-v2-passage-dev2 \
  --output run.msmarco-v2-passage.bm25-rm3-d2q-t5-default.dev2.txt \
  --bm25 --rm3
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev2 run.msmarco-v2-passage.bm25-rm3-d2q-t5-default.dev2.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev2 run.msmarco-v2-passage.bm25-rm3-d2q-t5-default.dev2.txt
BM25 + RM3 (default parameters) with doc2query-T5 expansion, augmented passage corpus. Command to generate run on TREC 2021 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-augmented-d2q-t5-docvectors \
  --topics dl21 \
  --output run.msmarco-v2-passage.bm25-rm3-d2q-t5-augmented-default.dl21.txt \
  --bm25 --rm3
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl21-passage run.msmarco-v2-passage.bm25-rm3-d2q-t5-augmented-default.dl21.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl21-passage run.msmarco-v2-passage.bm25-rm3-d2q-t5-augmented-default.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl21-passage run.msmarco-v2-passage.bm25-rm3-d2q-t5-augmented-default.dl21.txt
Command to generate run on TREC 2022 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-augmented-d2q-t5-docvectors \
  --topics dl22 \
  --output run.msmarco-v2-passage.bm25-rm3-d2q-t5-augmented-default.dl22.txt \
  --bm25 --rm3
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl22-passage run.msmarco-v2-passage.bm25-rm3-d2q-t5-augmented-default.dl22.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl22-passage run.msmarco-v2-passage.bm25-rm3-d2q-t5-augmented-default.dl22.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl22-passage run.msmarco-v2-passage.bm25-rm3-d2q-t5-augmented-default.dl22.txt
Command to generate run on TREC 2023 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-augmented-d2q-t5-docvectors \
  --topics dl23 \
  --output run.msmarco-v2-passage.bm25-rm3-d2q-t5-augmented-default.dl23.txt \
  --bm25 --rm3
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl23-passage run.msmarco-v2-passage.bm25-rm3-d2q-t5-augmented-default.dl23.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl23-passage run.msmarco-v2-passage.bm25-rm3-d2q-t5-augmented-default.dl23.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl23-passage run.msmarco-v2-passage.bm25-rm3-d2q-t5-augmented-default.dl23.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-augmented-d2q-t5-docvectors \
  --topics msmarco-v2-passage-dev \
  --output run.msmarco-v2-passage.bm25-rm3-d2q-t5-augmented-default.dev.txt \
  --bm25 --rm3
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev run.msmarco-v2-passage.bm25-rm3-d2q-t5-augmented-default.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev run.msmarco-v2-passage.bm25-rm3-d2q-t5-augmented-default.dev.txt
Command to generate run on dev2 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-augmented-d2q-t5-docvectors \
  --topics msmarco-v2-passage-dev2 \
  --output run.msmarco-v2-passage.bm25-rm3-d2q-t5-augmented-default.dev2.txt \
  --bm25 --rm3
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev2 run.msmarco-v2-passage.bm25-rm3-d2q-t5-augmented-default.dev2.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev2 run.msmarco-v2-passage.bm25-rm3-d2q-t5-augmented-default.dev2.txt
uniCOIL (noexp, zero-shot) with pre-encoded queries. Command to generate run on TREC 2021 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-unicoil-noexp-0shot \
  --topics dl21-unicoil-noexp \
  --output run.msmarco-v2-passage.unicoil-noexp.dl21.txt \
  --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl21-passage run.msmarco-v2-passage.unicoil-noexp.dl21.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl21-passage run.msmarco-v2-passage.unicoil-noexp.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl21-passage run.msmarco-v2-passage.unicoil-noexp.dl21.txt
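With `--impact`, ranking is no longer BM25: uniCOIL represents queries and passages as bags of quantized integer term weights, and a passage's score is the inner product over shared terms. A toy illustration with made-up weights:

```python
def impact_score(query_weights: dict, doc_weights: dict) -> int:
    """Sum of products of quantized term impacts over shared terms."""
    return sum(w * doc_weights.get(t, 0) for t, w in query_weights.items())

q = {"hubble": 120, "telescope": 95}          # quantized query term weights
d = {"hubble": 80, "space": 40, "telescope": 60}
print(impact_score(q, d))  # 120*80 + 95*60 = 15300
```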
Command to generate run on TREC 2022 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-unicoil-noexp-0shot \
  --topics dl22-unicoil-noexp \
  --output run.msmarco-v2-passage.unicoil-noexp.dl22.txt \
  --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl22-passage run.msmarco-v2-passage.unicoil-noexp.dl22.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl22-passage run.msmarco-v2-passage.unicoil-noexp.dl22.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl22-passage run.msmarco-v2-passage.unicoil-noexp.dl22.txt
Command to generate run on TREC 2023 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-unicoil-noexp-0shot \
  --topics dl23-unicoil-noexp \
  --output run.msmarco-v2-passage.unicoil-noexp.dl23.txt \
  --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl23-passage run.msmarco-v2-passage.unicoil-noexp.dl23.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl23-passage run.msmarco-v2-passage.unicoil-noexp.dl23.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl23-passage run.msmarco-v2-passage.unicoil-noexp.dl23.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-unicoil-noexp-0shot \
  --topics msmarco-v2-passage-dev-unicoil-noexp \
  --output run.msmarco-v2-passage.unicoil-noexp.dev.txt \
  --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev run.msmarco-v2-passage.unicoil-noexp.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev run.msmarco-v2-passage.unicoil-noexp.dev.txt
Command to generate run on dev2 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-unicoil-noexp-0shot \
  --topics msmarco-v2-passage-dev2-unicoil-noexp \
  --output run.msmarco-v2-passage.unicoil-noexp.dev2.txt \
  --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev2 run.msmarco-v2-passage.unicoil-noexp.dev2.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev2 run.msmarco-v2-passage.unicoil-noexp.dev2.txt
uniCOIL (zero-shot) with pre-encoded queries. Command to generate run on TREC 2021 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-unicoil-0shot \
  --topics dl21-unicoil \
  --output run.msmarco-v2-passage.unicoil.dl21.txt \
  --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl21-passage run.msmarco-v2-passage.unicoil.dl21.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl21-passage run.msmarco-v2-passage.unicoil.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl21-passage run.msmarco-v2-passage.unicoil.dl21.txt
Command to generate run on TREC 2022 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-unicoil-0shot \
  --topics dl22-unicoil \
  --output run.msmarco-v2-passage.unicoil.dl22.txt \
  --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl22-passage run.msmarco-v2-passage.unicoil.dl22.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl22-passage run.msmarco-v2-passage.unicoil.dl22.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl22-passage run.msmarco-v2-passage.unicoil.dl22.txt
Command to generate run on TREC 2023 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-unicoil-0shot \
  --topics dl23-unicoil \
  --output run.msmarco-v2-passage.unicoil.dl23.txt \
  --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl23-passage run.msmarco-v2-passage.unicoil.dl23.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl23-passage run.msmarco-v2-passage.unicoil.dl23.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl23-passage run.msmarco-v2-passage.unicoil.dl23.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-unicoil-0shot \
  --topics msmarco-v2-passage-dev-unicoil \
  --output run.msmarco-v2-passage.unicoil.dev.txt \
  --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev run.msmarco-v2-passage.unicoil.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev run.msmarco-v2-passage.unicoil.dev.txt
Command to generate run on dev2 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-unicoil-0shot \
  --topics msmarco-v2-passage-dev2-unicoil \
  --output run.msmarco-v2-passage.unicoil.dev2.txt \
  --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev2 run.msmarco-v2-passage.unicoil.dev2.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev2 run.msmarco-v2-passage.unicoil.dev2.txt
uniCOIL (noexp, zero-shot) with on-the-fly query encoding. Command to generate run on TREC 2021 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-unicoil-noexp-0shot \
  --topics dl21 \
  --encoder castorini/unicoil-noexp-msmarco-passage \
  --output run.msmarco-v2-passage.unicoil-noexp-otf.dl21.txt \
  --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl21-passage run.msmarco-v2-passage.unicoil-noexp-otf.dl21.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl21-passage run.msmarco-v2-passage.unicoil-noexp-otf.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl21-passage run.msmarco-v2-passage.unicoil-noexp-otf.dl21.txt
Command to generate run on TREC 2022 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-unicoil-noexp-0shot \
  --topics dl22 \
  --encoder castorini/unicoil-noexp-msmarco-passage \
  --output run.msmarco-v2-passage.unicoil-noexp-otf.dl22.txt \
  --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl22-passage run.msmarco-v2-passage.unicoil-noexp-otf.dl22.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl22-passage run.msmarco-v2-passage.unicoil-noexp-otf.dl22.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl22-passage run.msmarco-v2-passage.unicoil-noexp-otf.dl22.txt
Command to generate run on TREC 2023 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-unicoil-noexp-0shot \
  --topics dl23 \
  --encoder castorini/unicoil-noexp-msmarco-passage \
  --output run.msmarco-v2-passage.unicoil-noexp-otf.dl23.txt \
  --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl23-passage run.msmarco-v2-passage.unicoil-noexp-otf.dl23.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl23-passage run.msmarco-v2-passage.unicoil-noexp-otf.dl23.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl23-passage run.msmarco-v2-passage.unicoil-noexp-otf.dl23.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-unicoil-noexp-0shot \
  --topics msmarco-v2-passage-dev \
  --encoder castorini/unicoil-noexp-msmarco-passage \
  --output run.msmarco-v2-passage.unicoil-noexp-otf.dev.txt \
  --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev run.msmarco-v2-passage.unicoil-noexp-otf.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev run.msmarco-v2-passage.unicoil-noexp-otf.dev.txt
Command to generate run on dev2 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-unicoil-noexp-0shot \
  --topics msmarco-v2-passage-dev2 \
  --encoder castorini/unicoil-noexp-msmarco-passage \
  --output run.msmarco-v2-passage.unicoil-noexp-otf.dev2.txt \
  --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev2 run.msmarco-v2-passage.unicoil-noexp-otf.dev2.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev2 run.msmarco-v2-passage.unicoil-noexp-otf.dev2.txt
Command to generate run on TREC 2021 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-unicoil-0shot \
  --topics dl21 \
  --encoder castorini/unicoil-msmarco-passage \
  --output run.msmarco-v2-passage.unicoil-otf.dl21.txt \
  --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl21-passage run.msmarco-v2-passage.unicoil-otf.dl21.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl21-passage run.msmarco-v2-passage.unicoil-otf.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl21-passage run.msmarco-v2-passage.unicoil-otf.dl21.txt
Command to generate run on TREC 2022 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-unicoil-0shot \
  --topics dl22 \
  --encoder castorini/unicoil-msmarco-passage \
  --output run.msmarco-v2-passage.unicoil-otf.dl22.txt \
  --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl22-passage run.msmarco-v2-passage.unicoil-otf.dl22.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl22-passage run.msmarco-v2-passage.unicoil-otf.dl22.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl22-passage run.msmarco-v2-passage.unicoil-otf.dl22.txt
Command to generate run on TREC 2023 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-unicoil-0shot \
  --topics dl23 \
  --encoder castorini/unicoil-msmarco-passage \
  --output run.msmarco-v2-passage.unicoil-otf.dl23.txt \
  --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl23-passage run.msmarco-v2-passage.unicoil-otf.dl23.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl23-passage run.msmarco-v2-passage.unicoil-otf.dl23.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl23-passage run.msmarco-v2-passage.unicoil-otf.dl23.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-unicoil-0shot \
  --topics msmarco-v2-passage-dev \
  --encoder castorini/unicoil-msmarco-passage \
  --output run.msmarco-v2-passage.unicoil-otf.dev.txt \
  --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev run.msmarco-v2-passage.unicoil-otf.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev run.msmarco-v2-passage.unicoil-otf.dev.txt
Command to generate run on dev2 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-unicoil-0shot \
  --topics msmarco-v2-passage-dev2 \
  --encoder castorini/unicoil-msmarco-passage \
  --output run.msmarco-v2-passage.unicoil-otf.dev2.txt \
  --hits 1000 --impact
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev2 run.msmarco-v2-passage.unicoil-otf.dev2.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev2 run.msmarco-v2-passage.unicoil-otf.dev2.txt
Command to generate run on TREC 2021 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-slimr-pp-norefine-0shot \
  --topics dl21 \
  --encoder castorini/slimr-pp-msmarco-passage \
  --output run.msmarco-v2-passage.slimr-pp.dl21.txt \
  --hits 1000 --impact --min-idf 1
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl21-passage run.msmarco-v2-passage.slimr-pp.dl21.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl21-passage run.msmarco-v2-passage.slimr-pp.dl21.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl21-passage run.msmarco-v2-passage.slimr-pp.dl21.txt
Command to generate run on TREC 2022 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-slimr-pp-norefine-0shot \
  --topics dl22 \
  --encoder castorini/slimr-pp-msmarco-passage \
  --output run.msmarco-v2-passage.slimr-pp.dl22.txt \
  --hits 1000 --impact --min-idf 1
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl22-passage run.msmarco-v2-passage.slimr-pp.dl22.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl22-passage run.msmarco-v2-passage.slimr-pp.dl22.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl22-passage run.msmarco-v2-passage.slimr-pp.dl22.txt
Command to generate run on TREC 2023 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-slimr-pp-norefine-0shot \
  --topics dl23 \
  --encoder castorini/slimr-pp-msmarco-passage \
  --output run.msmarco-v2-passage.slimr-pp.dl23.txt \
  --hits 1000 --impact --min-idf 1
Evaluation commands:
python -m pyserini.eval.trec_eval -c -l 2 -M 100 -m map dl23-passage run.msmarco-v2-passage.slimr-pp.dl23.txt
python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 dl23-passage run.msmarco-v2-passage.slimr-pp.dl23.txt
python -m pyserini.eval.trec_eval -c -l 2 -m recall.1000 dl23-passage run.msmarco-v2-passage.slimr-pp.dl23.txt
Command to generate run on dev queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-slimr-pp-norefine-0shot \
  --topics msmarco-v2-passage-dev \
  --encoder castorini/slimr-pp-msmarco-passage \
  --output run.msmarco-v2-passage.slimr-pp.dev.txt \
  --hits 1000 --impact --min-idf 1
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev run.msmarco-v2-passage.slimr-pp.dev.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev run.msmarco-v2-passage.slimr-pp.dev.txt
Command to generate run on dev2 queries:
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v2-passage-slimr-pp-norefine-0shot \
  --topics msmarco-v2-passage-dev2 \
  --encoder castorini/slimr-pp-msmarco-passage \
  --output run.msmarco-v2-passage.slimr-pp.dev2.txt \
  --hits 1000 --impact --min-idf 1
Evaluation commands:
python -m pyserini.eval.trec_eval -c -M 100 -m recip_rank msmarco-v2-passage-dev2 run.msmarco-v2-passage.slimr-pp.dev2.txt
python -m pyserini.eval.trec_eval -c -m recall.1000 msmarco-v2-passage-dev2 run.msmarco-v2-passage.slimr-pp.dev2.txt
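The retrieval and evaluation commands above follow a regular pattern and can be scripted. The sketch below shells out to `pyserini.eval.trec_eval` and parses the aggregate score from its output; the helper names (`parse_trec_eval`, `eval_run`) are hypothetical, and the parser assumes trec_eval's standard whitespace-separated "metric all value" output lines.

```python
import subprocess

def parse_trec_eval(output, metric):
    """Pull the aggregate score for `metric` out of trec_eval's stdout.

    trec_eval prints one line per metric, e.g.:
        recip_rank            all 0.0872
    Note: some metric arguments are renamed in the output
    (e.g., ndcg_cut.10 is reported as ndcg_cut_10).
    """
    for line in output.splitlines():
        parts = line.split()
        if len(parts) >= 3 and parts[0] == metric and parts[1] == "all":
            return float(parts[2])
    return None

def eval_run(qrels, run_file, metric, extra_args=()):
    """Run Pyserini's bundled trec_eval and return the parsed score."""
    cmd = ["python", "-m", "pyserini.eval.trec_eval", "-c",
           *extra_args, "-m", metric, qrels, run_file]
    out = subprocess.run(cmd, capture_output=True, text=True,
                         check=True).stdout
    return parse_trec_eval(out, metric)

# Mirrors one of the dev-set evaluation commands above:
# eval_run("msmarco-v2-passage-dev",
#          "run.msmarco-v2-passage.unicoil.dev.txt",
#          "recip_rank", extra_args=("-M", "100"))
```

The actual `eval_run` call requires a Pyserini installation and a run file on disk; the parsing step is independent of both.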

Programmatic Execution

All experimental runs shown in the table above can be reproduced programmatically using the instructions below. To list all the experimental conditions:

python -m pyserini.2cr.msmarco --collection v2-passage --list-conditions

These conditions correspond to the table rows above.

To show the commands for all conditions without executing them (a "dry run"):

python -m pyserini.2cr.msmarco --collection v2-passage --all --display-commands --dry-run

To actually run all the experimental conditions:

python -m pyserini.2cr.msmarco --collection v2-passage --all --display-commands

With the above command, run files will be placed in the current directory. Use the option --directory runs/ to place the runs in a sub-directory.
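The per-condition invocation can also be assembled programmatically before handing it to a process runner; below is a minimal sketch based on the flags documented on this page (`make_2cr_command` is a hypothetical helper, not part of Pyserini):

```python
def make_2cr_command(condition, collection="v2-passage",
                     directory=None, dry_run=False):
    """Assemble a pyserini.2cr.msmarco argument list for one condition.

    Returns a list suitable for subprocess.run; nothing is executed here.
    """
    cmd = ["python", "-m", "pyserini.2cr.msmarco",
           "--collection", collection,
           "--condition", condition,
           "--display-commands"]
    if directory is not None:
        # Place run files in a subdirectory instead of the current directory.
        cmd += ["--directory", directory]
    if dry_run:
        cmd.append("--dry-run")
    return cmd

# Equivalent to the specific-condition dry-run command shown below:
# make_2cr_command("bm25-default", dry_run=True)
```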

To show the commands for a specific condition:

python -m pyserini.2cr.msmarco --collection v2-passage --condition bm25-default --display-commands --dry-run

This prints exactly the commands for that condition (corresponding to a row in the table above) without executing them.

To actually run a specific condition:

python -m pyserini.2cr.msmarco --collection v2-passage --condition bm25-default --display-commands

Again, with the above command, run files will be placed in the current directory. Use the option --directory runs/ to place the runs in a sub-directory.

Finally, to generate this page:

python -m pyserini.2cr.msmarco --collection v2-passage --generate-report --output msmarco-v2-passage.html

The output file msmarco-v2-passage.html should be identical to this page.