Table of Links
2 Background and Related work
2.1 Web Scale Information Retrieval
3 MS MARCO Web Search Dataset and 3.1 Document Preparation
3.2 Query Selection and Labeling
3.4 New Challenges Raised by MS MARCO Web Search
4 Benchmark Results and 4.1 Environment Setup
4.4 Evaluation of Embedding Models and 4.5 Evaluation of ANN Algorithms
4.6 Evaluation of End-to-end Performance
5 Potential Biases and Limitations
6 Future Work and Conclusions, and References
4.6 Evaluation of End-to-end Performance
In this section, we evaluate the end-to-end performance of the three baseline embedding models combined with the SPANN index, as well as the widely used Elasticsearch BM25 solution. Tables 6 and 7 report the result quality and the system performance of these baseline systems, respectively. Compared with Table 4, the final result quality drops substantially once the ANN index is used: for example, recall@100 falls by more than 10 points for all baseline models, leaving a large quality gap between the ANN and KNN results (see Table 5).

Moreover, we observe that using the ANN index changes the ranking of the models. SimANS achieves the best scores on all result quality metrics under brute-force search, yet with the SPANN index it performs worse than ANCE on recall@20 and recall@100. Analyzing this phenomenon, we find that SimANS exhibits a larger gap between the average distance from a query to its top-100 documents and the average distance from a document to its top-100 documents: the gaps for SimANS and ANCE are 103.35 and 73.29, respectively. Such a gap leads to inaccurate estimates of the distance bound from a query to the neighbors of a document, so ANN search degrades because it relies on distance bounds derived from the triangle inequality. Both the result quality and the system performance of the end-to-end evaluation call for more innovation in end-to-end retrieval system design.
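To make the distance-gap analysis concrete, the sketch below shows how the two averages could be computed from query and document embeddings. This is an illustration, not the authors' code: the function names are ours, Euclidean distance is assumed (the metric for which the triangle-inequality bound in the comments holds), and in practice one would run it on a sample of the corpus rather than the full document set.

```python
import numpy as np
from scipy.spatial.distance import cdist

def mean_topk_distance(anchors, points, k=100, skip_self=False):
    """Average Euclidean distance from each anchor to its k nearest points.

    skip_self=True drops the nearest match per row, for the case where
    anchors and points are the same set (self-distance is trivially 0).
    """
    d = cdist(anchors, points)          # (n_anchors, n_points) pairwise distances
    d.sort(axis=1)                      # ascending per row
    start = 1 if skip_self else 0
    return d[:, start:start + k].mean()

def distance_gap(query_emb, doc_emb, k=100):
    """Gap between the average query-to-top-k-document distance and the
    average document-to-top-k-document distance.

    A large gap means queries lie much farther from the documents than
    documents lie from one another, so triangle-inequality bounds such as
        |d(q, c) - d(c, x)| <= d(q, x) <= d(q, c) + d(c, x)
    (with c a pivot document or centroid) become loose, and an ANN index
    that prunes candidates with such bounds loses recall.
    """
    q2d = mean_topk_distance(query_emb, doc_emb, k)
    d2d = mean_topk_distance(doc_emb, doc_emb, k, skip_self=True)
    return q2d - d2d

if __name__ == "__main__":
    # Toy demo on random vectors; real usage would load sampled
    # query/document embeddings produced by the model under study.
    rng = np.random.default_rng(0)
    queries = rng.normal(size=(200, 64))
    docs = rng.normal(size=(2000, 64))
    print(f"distance gap: {distance_gap(queries, docs):.2f}")
```

Under this reading, the larger gap measured for SimANS (103.35 vs. 73.29 for ANCE) translates directly into looser pruning bounds inside the SPANN index, which is consistent with its larger recall drop relative to brute-force search.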
Authors:
(1) Qi Chen, Microsoft, Beijing, China;
(2) Xiubo Geng, Microsoft, Beijing, China;
(3) Corby Rosset, Microsoft, Redmond, United States;
(4) Carolyn Buractaon, Microsoft, Redmond, United States;
(5) Jingwen Lu, Microsoft, Redmond, United States;
(6) Tao Shen, University of Technology Sydney, Sydney, Australia and the work was done at Microsoft;
(7) Kun Zhou, Microsoft, Beijing, China;
(8) Chenyan Xiong, Carnegie Mellon University, Pittsburgh, United States and the work was done at Microsoft;
(9) Yeyun Gong, Microsoft, Beijing, China;
(10) Paul Bennett, Spotify, New York, United States and the work was done at Microsoft;
(11) Nick Craswell, Microsoft, Redmond, United States;
(12) Xing Xie, Microsoft, Beijing, China;
(13) Fan Yang, Microsoft, Beijing, China;
(14) Bryan Tower, Microsoft, Redmond, United States;
(15) Nikhil Rao, Microsoft, Mountain View, United States;
(16) Anlei Dong, Microsoft, Mountain View, United States;
(17) Wenqi Jiang, ETH Zürich, Zürich, Switzerland;
(18) Zheng Liu, Microsoft, Beijing, China;
(19) Mingqin Li, Microsoft, Redmond, United States;
(20) Chuanjie Liu, Microsoft, Beijing, China;
(21) Zengzhong Li, Microsoft, Redmond, United States;
(22) Rangan Majumder, Microsoft, Redmond, United States;
(23) Jennifer Neville, Microsoft, Redmond, United States;
(24) Andy Oakley, Microsoft, Redmond, United States;
(25) Knut Magne Risvik, Microsoft, Oslo, Norway;
(26) Harsha Vardhan Simhadri, Microsoft, Bengaluru, India;
(27) Manik Varma, Microsoft, Bengaluru, India;
(28) Yujing Wang, Microsoft, Beijing, China;
(29) Linjun Yang, Microsoft, Redmond, United States;
(30) Mao Yang, Microsoft, Beijing, China;
(31) Ce Zhang, ETH Zürich, Zürich, Switzerland and the work was done at Microsoft.