Open access
Author
Date
2021
Type
Doctoral Thesis
ETH Bibliography
yes
Abstract
Feature-rich environments require the presentation of large amounts of data in a meaningful way. Maps, in particular, visualize spatial relations between locations and attributes to provide an overall picture of complex spatial phenomena. However, the sheer mass of visual information may strain human processing capabilities. Furthermore, complex decision-making usually involves evaluating data from different parts of a map, which adds extra workload. In Human-Computer Interaction and Geographic Information Science (GIScience), eye tracking is a frequently used method for assessing the usability of different visualizations and user interfaces. Besides reflecting aspects of the map design, such as the saliency of elements or visual clutter, gaze behavior also provides insight into user characteristics such as background knowledge or expertise. Previous research in GIScience has mainly focused on post-study eye tracking analysis to evaluate either the user or the map design.
This dissertation investigates leveraging live eye tracking for context-adaptive assistance on interactive maps. We focus on utilizing implicit gaze input to acquire information about the user's cognitive state, plans, and ongoing tasks in order to provide personalized assistance and adjust the visualization. We introduce a framework that enables automated logging of the cartographic features a user has inspected by matching fixations with the rendered vector model of the map. We then discuss how this detailed information allows for detecting aspects such as map task complexity, differences between users, and variations in search strategies. Building on this, we introduce a system that merges this data with explicit user input to predict the currently performed map task. This information is essential for the supportive adaptations we examine. First, we present a novel gaze-adaptive legend that reduces the search space and optimizes its placement with regard to the user's gaze. Furthermore, we introduce a gaze-adaptive map that provides highlights that make it easier to revisit previously inspected Points of Interest. We also demonstrate how gaze-adaptive map lenses can reduce visual clutter without sacrificing detail.
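As a hypothetical illustration of the gaze-to-feature matching step described above (not the implementation from the thesis), a fixation can be hit-tested against the map's vector features by buffering the fixation point to account for foveal extent and eye tracker error. The GeoJSON-like feature layout, buffer radius, and use of the shapely library are assumptions made for this sketch:

```python
# Hypothetical sketch of matching a fixation to map features; the buffer
# radius approximates foveal extent and tracker inaccuracy (map units).
# Feature format and library choice are assumptions, not the thesis code.
from shapely.geometry import Point, shape

def match_fixation(x, y, features, radius=15.0):
    """Return ids of vector features whose geometry intersects the
    buffered fixation point (x, y), in the map's coordinate units."""
    fovea = Point(x, y).buffer(radius)
    return [f["id"] for f in features
            if shape(f["geometry"]).intersects(fovea)]

# Usage: log which features the user inspected during one fixation.
features = [
    {"id": "poi_42", "geometry": {"type": "Point",
                                  "coordinates": [100.0, 200.0]}},
    {"id": "road_7", "geometry": {"type": "LineString",
                                  "coordinates": [[0.0, 0.0], [300.0, 300.0]]}},
]
print(match_fixation(105.0, 198.0, features))  # -> ['poi_42']
```

Accumulating these per-fixation matches over a session would yield the kind of feature-level inspection log the framework uses for detecting task complexity and search strategies.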
In conclusion, the main contributions of this dissertation are: matching gaze with visible features in real time, recognizing activities and user context from gaze behavior, and providing assistive interface adaptations based on implicit gaze information. While these findings improve interaction with maps, we see potential for application in various other feature-rich environments.
Permanent link
https://doi.org/10.3929/ethz-b-000513243
Publication status
published
External links
Search print copy at ETH Library
Publisher
ETH Zurich
Subject
Human computer interaction (HCI); Eye Tracking; Gaze-based assistance; Gaze-Based Interaction; Interface Design; User Experience; Activity Recognition; Machine Learning; GIScience; Digital Maps; Map reading
Organisational unit
03901 - Raubal, Martin / Raubal, Martin
Funding
162886 - Intention-Aware Gaze-Based Assistance on Maps (SNF)