
Effects of Augmented Reality Head-up Display Graphics’ Perceptual Form on Driver Spatial Knowledge Acquisition

In this study, we investigated whether modifying the perceptual form of augmented reality head-up display (AR HUD) graphics influences spatial learning of the environment. We employed a 2x2 between-subjects design in which twenty-four participants were counterbalanced by gender. We used a fixed-base, medium-fidelity driving simulator at the COGENT lab at Virginia Tech. Two navigation cue systems were compared: world-relative and screen-relative. The world-relative condition placed an artificial post sign at the corner of an approaching intersection containing a real landmark. The screen-relative condition displayed turn directions using a screen-fixed traditional arrow located directly ahead of the participant on the right or left side of the HUD. We captured empirical data on changes in driving behavior, glance behavior, spatial knowledge acquisition (measured in terms of landmark and route knowledge), reported workload, and usability of the interface.

Results showed that the screen-relative and world-relative AR head-up display interfaces had similar impacts on the level of spatial knowledge acquired, suggesting that world-relative AR graphics may be used for navigation with no comparative reduction in spatial knowledge acquisition. Although our initial assumption was correct that the conformal AR HUD interface would draw drivers’ attention to a specific part of the display, this type of interface did not increase spatial knowledge acquisition. This finding contrasts with a common perspective in the AR community that conformal, world-relative graphics are inherently more effective than screen-relative graphics. We suggest that simple, screen-fixed designs may indeed be effective in certain contexts.

Finally, eye-tracking analyses showed fundamental differences in the way participants visually interacted with the different AR HUD interfaces, with conformal graphics demanding more visual attention from drivers. In particular, the world-relative condition was typically associated with fewer glances in total, but glances of longer duration.

M.S.

As humans, we develop mental representations of our surroundings as we move through and learn about our environment. When navigating by car, developing robust mental representations (spatial knowledge) of the environment is crucial in situations where technology fails or we need to find locations not included in a navigation system’s database. Over-reliance on traditional in-vehicle navigation devices has been shown to negatively impact our ability to navigate based on our own internal knowledge. Recently, the automotive industry has been developing new in-vehicle devices that have the potential to promote more active navigation and potentially enhance spatial knowledge acquisition. Vehicles with augmented reality (AR) graphics delivered via head-up displays (HUDs) present navigation information directly within drivers’ forward field of view, allowing drivers to gather the information they need without looking away from the road. While this AR navigation technology is promising, the nuances of interface design and its impacts on drivers must be better understood before AR can be widely and safely incorporated into vehicles. In this work, we present a user study that examines how screen-relative and world-relative AR HUD interface designs affect drivers’ spatial knowledge acquisition.

Results showed that both screen-relative and world-relative AR head-up display interfaces had similar impacts on the level of spatial knowledge acquired, suggesting that world-relative AR graphics may be used for navigation with no comparative reduction in spatial knowledge acquisition. However, eye-tracking analyses showed fundamental differences in the way participants visually interacted with the different AR HUD interfaces, with conformal graphics demanding more visual attention from drivers.
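To make the glance metrics referenced above concrete, the following is a minimal sketch (not taken from the thesis) of how per-condition glance counts and mean glance durations could be computed from eye-tracking output. The table layout, column names, and numeric values are illustrative assumptions, not the study’s actual data.

```python
# Minimal sketch: summarizing glances to the AR HUD by interface condition.
# All data below are hypothetical placeholders for illustration only.
import pandas as pd

# Assumed format: one row per glance, with participant id, interface
# condition, and glance duration in seconds.
glances = pd.DataFrame({
    "participant": [1, 1, 2, 2, 3, 3],
    "condition": ["world-relative", "world-relative",
                  "screen-relative", "screen-relative",
                  "world-relative", "screen-relative"],
    "duration_s": [1.8, 2.1, 0.6, 0.9, 1.5, 0.7],
})

# Total number of glances and mean glance duration per condition,
# the two quantities contrasted in the eye-tracking analysis.
summary = glances.groupby("condition")["duration_s"].agg(
    glance_count="count", mean_duration_s="mean"
)
print(summary)
```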

Identifier oai:union.ndltd.org:VTETD/oai:vtechworks.lib.vt.edu:10919/96704
Date 16 December 2019
Creators De Oliveira Faria, Nayara
Contributors Industrial and Systems Engineering, Gabbard, Joseph L., Klauer, Charlie, Smith, Martha Irene
Publisher Virginia Tech
Source Sets Virginia Tech Theses and Dissertation
Language en_US
Detected Language English
Type Thesis
Format ETD, application/pdf
Rights In Copyright, http://rightsstatements.org/vocab/InC/1.0/
