Abstract and 1 Introduction

2 Background and Related work

2.1 Web Scale Information Retrieval

2.2 Existing Datasets

3 MS Marco Web Search Dataset and 3.1 Document Preparation

3.2 Query Selection and Labeling

3.3 Dataset Analysis

3.4 New Challenges Raised by MS MARCO Web Search

4 Benchmark Results and 4.1 Environment Setup

4.2 Baseline Methods

4.3 Evaluation Metrics

4.4 Evaluation of Embedding Models and 4.5 Evaluation of ANN Algorithms

4.6 Evaluation of End-to-end Performance

5 Potential Biases and Limitations

6 Future Work and Conclusions, and References

4.3 Evaluation Metrics

We evaluate all baselines on both result quality and system performance. For result quality, we use Mean Reciprocal Rank (MRR) and recall as the evaluation metrics:

• MRR: the average of the multiplicative inverse of the rank of the first correct result; it is widely used for evaluating model quality.

• Recall: the average percentage of ground-truth items retrieved during the search. For the embedding model challenge and the end-to-end retrieval system challenge, the ground truth is our test query-document labels. For the embedding retrieval algorithm challenge, the ground truth is the brute-force vector search results (ANN recall), which measures ANN algorithm performance. A minimal computation sketch for both metrics follows this list.
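For reference only, the sketch below shows one way these two quality metrics can be computed from ranked result lists and relevance labels. The in-memory label format (dictionaries keyed by query id) and the cutoff values are illustrative assumptions, not the dataset's actual file format or official evaluation settings.

```python
# Hedged sketch of the quality metrics above (MRR and recall).
# Assumed inputs:
#   rankings: dict mapping query id -> ranked list of doc ids
#   qrels:    dict mapping query id -> set of labeled relevant doc ids
#             (every query in `rankings` is assumed to have at least one label)

def mean_reciprocal_rank(rankings, qrels, cutoff=10):
    """Average of 1/rank of the first correct result within the cutoff."""
    total = 0.0
    for qid, ranked in rankings.items():
        relevant = qrels[qid]
        for rank, doc_id in enumerate(ranked[:cutoff], start=1):
            if doc_id in relevant:
                total += 1.0 / rank
                break  # only the first correct result counts
    return total / len(rankings)

def recall_at_k(rankings, qrels, k=100):
    """Average fraction of labeled relevant documents found in the top-k results."""
    total = 0.0
    for qid, ranked in rankings.items():
        relevant = qrels[qid]
        total += len(relevant & set(ranked[:k])) / len(relevant)
    return total / len(rankings)
```

For the ANN-recall variant, the same `recall_at_k` logic applies with `qrels` replaced by the top-k results of a brute-force vector search.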

For system performance, we evaluate the following metrics under a limited resource budget to align with industry scenarios:

• Throughput: all queries are provided at once, and we measure the wall-clock time from the ingestion of the query vectors to when all results are output, using all threads on a machine. Throughput is then reported as the number of processed queries per second (QPS).

• Latency: we measure the 50th, 90th, and 99th percentile query latency at a given QPS (a measurement sketch follows this list).
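The sketch below illustrates how these two system metrics might be measured. `search_batch` and `search_one` are hypothetical stand-ins for the retrieval system under test; a full harness would also need to control the query arrival rate to hold a target QPS while collecting latency percentiles, which this sketch omits.

```python
# Hedged sketch of the throughput and latency measurements above.
import time
import numpy as np

def measure_throughput(search_batch, query_vectors):
    """Issue all queries at once and report processed queries per second (QPS)."""
    start = time.perf_counter()
    search_batch(query_vectors)            # the system may use all machine threads internally
    elapsed = time.perf_counter() - start  # wall-clock time from ingestion to last result
    return len(query_vectors) / elapsed

def measure_latency_percentiles(search_one, query_vectors, percentiles=(50, 90, 99)):
    """Per-query latencies in milliseconds, summarized at the given percentiles."""
    latencies = []
    for q in query_vectors:
        start = time.perf_counter()
        search_one(q)
        latencies.append((time.perf_counter() - start) * 1000.0)
    return {p: float(np.percentile(latencies, p)) for p in percentiles}
```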

Authors:

(1) Qi Chen, Microsoft, Beijing, China;

(2) Xiubo Geng, Microsoft, Beijing, China;

(3) Corby Rosset, Microsoft, Redmond, United States;

(4) Carolyn Buractaon, Microsoft, Redmond, United States;

(5) Jingwen Lu, Microsoft, Redmond, United States;

(6) Tao Shen, University of Technology Sydney, Sydney, Australia and the work was done at Microsoft;

(7) Kun Zhou, Microsoft, Beijing, China;

(8) Chenyan Xiong, Carnegie Mellon University, Pittsburgh, United States and the work was done at Microsoft;

(9) Yeyun Gong, Microsoft, Beijing, China;

(10) Paul Bennett, Spotify, New York, United States and the work was done at Microsoft;

(11) Nick Craswell, Microsoft, Redmond, United States;

(12) Xing Xie, Microsoft, Beijing, China;

(13) Fan Yang, Microsoft, Beijing, China;

(14) Bryan Tower, Microsoft, Redmond, United States;

(15) Nikhil Rao, Microsoft, Mountain View, United States;

(16) Anlei Dong, Microsoft, Mountain View, United States;

(17) Wenqi Jiang, ETH Zürich, Zürich, Switzerland;

(18) Zheng Liu, Microsoft, Beijing, China;

(19) Mingqin Li, Microsoft, Redmond, United States;

(20) Chuanjie Liu, Microsoft, Beijing, China;

(21) Zengzhong Li, Microsoft, Redmond, United States;

(22) Rangan Majumder, Microsoft, Redmond, United States;

(23) Jennifer Neville, Microsoft, Redmond, United States;

(24) Andy Oakley, Microsoft, Redmond, United States;

(25) Knut Magne Risvik, Microsoft, Oslo, Norway;

(26) Harsha Vardhan Simhadri, Microsoft, Bengaluru, India;

(27) Manik Varma, Microsoft, Bengaluru, India;

(28) Yujing Wang, Microsoft, Beijing, China;

(29) Linjun Yang, Microsoft, Redmond, United States;

(30) Mao Yang, Microsoft, Beijing, China;

(31) Ce Zhang, ETH Zürich, Zürich, Switzerland and the work was done at Microsoft.


This paper is available on arXiv under a CC BY 4.0 DEED license.