We introduce Sonifind, a smartphone-based micro guidance system that uniquely integrates computer vision and spatial audio to assist users who are visually impaired. Existing micro guidance systems typically require head-mounted displays and other specialized hardware, limiting their practicality for everyday use. Our system uses an off-the-shelf computer vision library on a smartphone to interpret the user's surroundings and provides intuitive spatial audio cues for real-time guidance. To evaluate the learnability, performance, and user satisfaction of Sonifind, and to inform our ongoing design considerations, we conducted a between-subjects study with sighted, blindfolded participants using two versions of the system. The first version (2D Feedback System), which provides directional (yaw) and depth (translation) audio cues, was quicker to learn but yielded worse overall performance. The second version (3D Feedback System), which adds vertical-axis guidance (pitch), produced greater user satisfaction but a steeper learning curve. These findings suggest that Sonifind can serve as an effective, user-friendly navigation aid for non-sighted individuals, facilitating greater independence and mobility.
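To make the two feedback schemes concrete, the sketch below shows one plausible way a target's pose relative to the phone could be mapped to audio parameters: yaw error to stereo pan, distance to gain, and (in the 3D variant) vertical offset to tone frequency. This is a minimal illustration, not the thesis's actual implementation; the function name, the 90-degree pan range, the 0.5–5 m gain ramp, and the 440 Hz base tone are all assumptions chosen for the example.

```python
from dataclasses import dataclass


@dataclass
class AudioCue:
    pan: float        # -1.0 (full left) to 1.0 (full right), from yaw error
    gain: float       # 0.0 to 1.0, louder as the target gets closer
    frequency: float  # Hz; raised/lowered with vertical offset (3D variant)


def cue_for_target(yaw_error_deg: float,
                   distance_m: float,
                   vertical_offset_m: float = 0.0,
                   use_pitch_axis: bool = False) -> AudioCue:
    """Map a target's position relative to the phone to audio-cue parameters.

    yaw_error_deg: signed horizontal angle to the target (negative = left).
    distance_m: straight-line distance to the target.
    vertical_offset_m: target height relative to the camera; used only
        when use_pitch_axis=True (the hypothetical "3D Feedback" mapping).
    All numeric ranges below are illustrative assumptions.
    """
    # Yaw -> stereo pan, clamped to a +/-90 degree useful range.
    pan = max(-1.0, min(1.0, yaw_error_deg / 90.0))
    # Depth -> gain: full volume within 0.5 m, fading out by ~5 m.
    gain = max(0.0, min(1.0, (5.0 - distance_m) / 4.5))
    # Vertical offset -> tone frequency around a 440 Hz base, +/- one octave.
    frequency = 440.0
    if use_pitch_axis:
        frequency *= 2.0 ** max(-1.0, min(1.0, vertical_offset_m))
    return AudioCue(pan=pan, gain=gain, frequency=frequency)


# Example: target 30 degrees to the right, 2 m away, 0.5 m above camera height.
print(cue_for_target(30.0, 2.0, 0.5, use_pitch_axis=True))
```

Under this sketch, the 2D scheme corresponds to calling the function with use_pitch_axis=False (pan and gain only), while the 3D scheme adds the frequency dimension, which matches the abstract's trade-off: one more audio channel of information to interpret in exchange for vertical guidance.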
Identifier | oai:union.ndltd.org:CALPOLY/oai:digitalcommons.calpoly.edu:theses-4544 |
Date | 01 June 2024 |
Creators | Li, Lucas |
Publisher | DigitalCommons@CalPoly |
Source Sets | California Polytechnic State University |
Detected Language | English |
Type | text |
Format | application/pdf |
Source | Master's Theses |