
ViewSer: enabling large-scale remote user studies of web search examination and interaction

Published: 24 July 2011 Publication History

Abstract

Web search behavior studies, including eye-tracking studies of search result examination, have produced numerous insights for improving search result quality and presentation. Yet eye-tracking studies have been limited in scale by the expense and effort they require. Furthermore, as the reach of the Web expands, it becomes increasingly important to understand how searchers around the world see and interact with search results. To address both challenges, we introduce ViewSer, a novel methodology for performing web search examination studies remotely, at scale, and without eye-tracking equipment. ViewSer automatically modifies the appearance of a search engine result page to show one search result clearly at a time, as if through a "viewport", while partially blurring the rest; the participant moves the viewport naturally with a computer mouse or trackpad. Remarkably, the resulting viewing and clickthrough patterns agree closely with those of unrestricted viewing as measured by eye-tracking equipment, validated in a study with over 100 participants. We also explore applications of ViewSer to practical search tasks, such as analyzing search result summary (snippet) attractiveness, re-ranking results, and evaluating snippet quality. Previously, these experiments could only be performed by tracking the eye movements of a small number of subjects in the lab. In contrast, our study was performed with over 100 participants, allowing us to reproduce and extend previous findings and establishing ViewSer as a valuable tool for large-scale search behavior experiments.
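The viewport mechanic the abstract describes can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation: the function names, the blur radius, and the event wiring are all our own assumptions; only the idea (blur every result, sharpen the one under the cursor, log which result is viewed) comes from the abstract.

```javascript
// Pure helper: given the cursor's vertical page position and the top
// offsets of each result block (sorted ascending), return the index of
// the result the viewport should reveal, or -1 if the cursor sits above
// the first result.
function viewportIndex(cursorY, resultTops) {
  let index = -1;
  for (let i = 0; i < resultTops.length; i++) {
    if (cursorY >= resultTops[i]) index = i;
  }
  return index;
}

// Browser-side wiring (illustrative only): blur every result snippet,
// then sharpen whichever one the mouse is over. A real deployment would
// also timestamp these view events and upload them for analysis.
function attachViewport(results /* array of result DOM elements */) {
  results.forEach(r => { r.style.filter = "blur(3px)"; });
  document.addEventListener("mousemove", e => {
    const tops = results.map(
      r => r.getBoundingClientRect().top + window.scrollY);
    const current = viewportIndex(e.pageY, tops);
    results.forEach((r, j) => {
      r.style.filter = j === current ? "none" : "blur(3px)";
    });
  });
}
```

The pure `viewportIndex` helper keeps the "which result is under the cursor" decision separate from the DOM wiring, which also makes the viewing-pattern logic easy to test offline.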




    Published In

    cover image ACM Conferences
    SIGIR '11: Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval
    July 2011
    1374 pages
    ISBN:9781450307574
    DOI:10.1145/2009916
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]


    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Author Tags

    1. remote user studies
    2. web search behavior
    3. web search evaluation

    Qualifiers

    • Research-article

    Conference

    SIGIR '11

    Acceptance Rates

    Overall Acceptance Rate 792 of 3,983 submissions, 20%


    Bibliometrics & Citations

    Bibliometrics

    Article Metrics

    • Downloads (Last 12 months)9
    • Downloads (Last 6 weeks)0
    Reflects downloads up to 24 Oct 2024

    Cited By
    • (2023) A Passage-Level Reading Behavior Model for Mobile Search. Proceedings of the ACM Web Conference 2023, pages 3236-3246. DOI: 10.1145/3543507.3583343. Online publication date: 30-Apr-2023.
    • (2022) STARE: Augmented Reality Data Visualization for Explainable Decision Support in Smart Environments. IEEE Access, 10:29543-29557. DOI: 10.1109/ACCESS.2022.3156697. Online publication date: 2022.
    • (2022) Less is Less: When are Snippets Insufficient for Human vs Machine Relevance Estimation? Advances in Information Retrieval, pages 153-162. DOI: 10.1007/978-3-030-99739-7_18. Online publication date: 5-Apr-2022.
    • (2021) A Virtual Reality Memory Palace Variant Aids Knowledge Retrieval from Scholarly Articles. IEEE Transactions on Visualization and Computer Graphics, 27(12):4359-4373. DOI: 10.1109/TVCG.2020.3009003. Online publication date: 1-Dec-2021.
    • (2021) A day at the races. Applied Intelligence. DOI: 10.1007/s10489-021-02719-2. Online publication date: 17-Aug-2021.
    • (2020) Providing Direct Answers in Search Results: A Study of User Behavior. Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pages 1635-1644. DOI: 10.1145/3340531.3412017. Online publication date: 19-Oct-2020.
    • (2020) Factors influencing viewing behaviour on search engine results pages: a review of eye-tracking research. Behaviour & Information Technology, 40(14):1485-1515. DOI: 10.1080/0144929X.2020.1761450. Online publication date: 22-May-2020.
    • (2019) The Practice of Crowdsourcing. Synthesis Lectures on Information Concepts, Retrieval, and Services, 11(1):1-149. DOI: 10.2200/S00904ED1V01Y201903ICR066. Online publication date: 28-May-2019.
    • (2019) Optimizing Visual Element Placement via Visual Attention Analysis. 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pages 464-473. DOI: 10.1109/VR.2019.8797816. Online publication date: Mar-2019.
    • (2018) A Large-Scale Study of Mobile Search Examination Behavior. The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, pages 1129-1132. DOI: 10.1145/3209978.3210099. Online publication date: 27-Jun-2018.
