Abstract
The TREC Deep Learning tracks used MS MARCO Version 1 as their official training data until 2020 and switched to Version 2 in 2021. For Version 2, all previously judged documents were re-crawled. Interestingly, in the track’s 2021 edition, models trained on the new data were less effective than models trained on the old data. To investigate this phenomenon, we compare the predicted relevance probabilities of monoT5 for the two versions of the judged documents and find substantial differences. A further manual inspection reveals major content changes for some documents (e.g., the new version being off-topic). To analyze whether these changes may have contributed to the observed effectiveness drop, we conduct experiments with different document version selection strategies. Our results show that training a retrieval model on the “wrong” version can reduce the nDCG@10 by up to 75%.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Fröbe, M., Akiki, C., Potthast, M., Hagen, M. (2022). Noise-Reduction for Automatically Transferred Relevance Judgments. In: Barrón-Cedeño, A., et al. Experimental IR Meets Multilinguality, Multimodality, and Interaction. CLEF 2022. Lecture Notes in Computer Science, vol 13390. Springer, Cham. https://doi.org/10.1007/978-3-031-13643-6_4
DOI: https://doi.org/10.1007/978-3-031-13643-6_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-13642-9
Online ISBN: 978-3-031-13643-6