Novel functions are increasingly integrated into vehicle infotainment systems to let drivers perform secondary tasks with high accuracy and low accident risk; mid-air gesture interaction is one of them. This thesis designed and tested a novel interface to address a specific issue this interaction method raises: visual distraction inside the car. In this study, a Heads-Up Display (HUD) was integrated with a gesture-based menu navigation system so that drivers could see menu selections without looking away from the road. An experiment with 24 participants investigated the system's potential to improve driving performance, situation awareness, and gesture interaction; participants provided subjective feedback about using the system as well as objective performance data. The HUD significantly outperformed the Heads-Down Display (HDD) in participant preference, perceived workload, level 1 situation awareness, and secondary-task performance. However, these gains came at a cost: participants exhibited poorer driving performance and relatively longer visual distraction. This thesis provides directions for future research on improving the overall user experience of in-vehicle gesture interaction systems.

M.S.

Driving is one of our essential daily activities, and until fully autonomous vehicles arrive, it will remain the primary task when operating a vehicle. To improve the overall travel experience, however, drivers also perform secondary tasks such as adjusting the air conditioning, switching music, and navigating a map. Accidents can happen while drivers perform these tasks because they distract from the primary task of driving safely. Many novel interaction methods have been implemented in modern cars, such as touch-screen and voice interaction. This thesis introduces a new gesture interaction system that lets the user navigate secondary-task menus with mid-air gestures. To further reduce the visual distraction the system itself causes, it integrates a head-up display (HUD) that shows visual feedback on the front windshield, letting the driver use the system without looking elsewhere and keeping peripheral vision on the road. The experiment recruited 24 participants to test the system. Each participant provided subjective feedback on workload, experience, and preference; a driving simulator collected driving performance, eye-tracking glasses collected eye-gaze data, and the gesture menu system recorded its own performance. The experiment manipulated two factors expected to affect user experience, yielding four conditions: visual feedback type (HUD vs. Heads-Down Display, HDD) and sound feedback (with vs. without). Results showed that the HUD helped drivers perform the secondary task faster, understand the current situation better, and experience a lower workload, and most participants preferred the HUD over the HDD. However, using the HUD required trade-offs: drivers fixated on the HUD longer while performing secondary tasks and showed poorer driving performance. By analyzing these results, this thesis provides direction for future HUD and in-vehicle gesture interaction research and for improving users' performance and overall experience.
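The abstract describes the interaction only at a high level. As a purely illustrative sketch, not the thesis's actual implementation, the Python below models how recognized mid-air gestures might drive menu navigation under the two experimental factors (display type and sound feedback); every name here (Gesture, Condition, GestureMenu) is hypothetical.

```python
# Hypothetical sketch of a gesture-driven menu under the 2x2 study design.
# Nothing here comes from the thesis itself; names and behavior are assumed.
from __future__ import annotations
from dataclasses import dataclass
from enum import Enum, auto

class Gesture(Enum):
    SWIPE_LEFT = auto()   # highlight the previous menu item
    SWIPE_RIGHT = auto()  # highlight the next menu item
    PINCH = auto()        # select the highlighted item

@dataclass
class Condition:
    """One cell of the 2x2 design: display type x sound feedback."""
    display: str  # "HUD" (windshield) or "HDD" (center console)
    sound: bool   # auditory feedback on selection?

@dataclass
class GestureMenu:
    items: list[str]
    condition: Condition
    index: int = 0

    def handle(self, gesture: Gesture) -> str | None:
        """Update the menu state from one recognized mid-air gesture."""
        if gesture is Gesture.SWIPE_RIGHT:
            self.index = (self.index + 1) % len(self.items)
        elif gesture is Gesture.SWIPE_LEFT:
            self.index = (self.index - 1) % len(self.items)
        elif gesture is Gesture.PINCH:
            if self.condition.sound:
                print("(beep)")  # stand-in for the sound-feedback condition
            return self.items[self.index]
        self.render()
        return None

    def render(self) -> None:
        # HUD draws the highlight on the windshield, keeping the driver's
        # eyes forward; HDD draws it on the center console.
        surface = "windshield" if self.condition.display == "HUD" else "console"
        print(f"[{surface}] > {self.items[self.index]}")

if __name__ == "__main__":
    menu = GestureMenu(["AC", "Music", "Map"], Condition(display="HUD", sound=True))
    selection = None
    for g in (Gesture.SWIPE_RIGHT, Gesture.SWIPE_RIGHT, Gesture.PINCH):
        selection = menu.handle(g)
    print("selected:", selection)
```

Swapping the Condition fields reproduces the four experimental cells (HUD/HDD, with/without sound) without changing the navigation logic, which mirrors how a factorial study would hold the task constant while varying only the feedback channels.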
Identifier | oai:union.ndltd.org:VTETD/oai:vtechworks.lib.vt.edu:10919/115121
Date | 04 1900 |
Creators | Cao, Yusheng |
Contributors | Computer Science, Jeon, Myounghoon, Mccrickard, Scott, Lee, Sang Won |
Publisher | Virginia Tech |
Source Sets | Virginia Tech Theses and Dissertations
Language | English |
Detected Language | English |
Type | Thesis |
Format | ETD, application/pdf
Rights | CC0 1.0 Universal, http://creativecommons.org/publicdomain/zero/1.0/ |