With the advance of technology over the past decade and many of the former challenges of virtual environments mitigated, the need for further study of human interaction with these environments has grown apparent. The visual and interaction components of virtual reality applications have been studied comprehensively, but spatial audio fidelity remains less well understood, leaving a need to determine how well humans can localize aural cues and discern audio-visual disparity in these virtual environments. To inform the development of accurate and efficient levels of audio fidelity, a human study was conducted with 18 participants to determine how far the audio and visual components of a bimodal cue must separate before the disparity is noticed. As suspected, pairing a visual component with an auditory one biased localization toward the visual component. On average, participants noticed a disparity when the audio component was 33.7° in azimuth from the visual one. There was no significant evidence that the speed or direction of the audio component's disparity led to better localization performance by participants. Presence and prior experience did not affect localization performance; however, a larger participant base may be needed to draw further conclusions. Localization ability improved within a few practice rounds. Overall, performance in virtual reality paralleled performance in augmented reality when a visual source biased sound localization, and this biasing can be both a tool and a design constraint for virtual environment developers.

/ Master of Science /

Virtual reality has overcome a large technological gap over the past decade, making it a strong tool in many applications from training to entertainment. The need to study audio fidelity in virtual environments has emerged from a gap in virtual reality research. Therefore, a human participant study was conducted to see how well people can localize sound in a virtual environment. Participants signaled when they noticed a visual object and its sound split apart. Averaged across 72 trials with 18 participants, the noticeable separation was 33.7° on the horizontal plane. This visual biasing can be both a tool and a design constraint for virtual reality developers, who can use it as a guideline for future applications.
Identifier | oai:union.ndltd.org:VTETD/oai:vtechworks.lib.vt.edu:10919/85015 |
Date | 13 September 2018 |
Creators | Wiker, Erik Daniel |
Contributors | Mechanical Engineering, Roan, Michael J., Wicks, Alfred L., Tarazaga, Pablo Alberto |
Publisher | Virginia Tech |
Source Sets | Virginia Tech Theses and Dissertations |
Detected Language | English |
Type | Thesis |
Format | ETD, application/pdf |
Rights | In Copyright, http://rightsstatements.org/vocab/InC/1.0/ |