
Sonification of the Scene in the Image Environment and Metaverse Using Natural Language

This metaverse and computer vision-powered application is designed to serve people with low vision or a visual impairment, from younger adults to older adults. Specifically, we aim to improve users' situational awareness in a scene by narrating the visual content from their point of view. Users receive this information through the auditory channel: the system narrates the scene's description using speech technology. This could increase the accessibility of visual-spatial information for users in a metaverse and, later, in the physical world.
This solution is designed and developed around the hypothesis that narrating a scene's visual content increases understanding of, and access to, that scene. This study paves the way for VR technology to be used as a training and exploration tool not limited to blind people in generic environments, but applicable to specific domains such as military, healthcare, or architecture and planning. We ran a user study to evaluate our hypothesis about which set of algorithms performs better for a specific category of tasks, such as search or survey, and evaluated the narration algorithms through users' ratings of naturalness, correctness, and satisfaction. The tasks and algorithms are discussed in detail in the chapters of this thesis.

Master of Science

The solution is built using an object detection algorithm and virtual environments that run in the web browser using X3DOM. It helps improve situational awareness through speech for sighted users as well as individuals with low vision. On a broader scale, we seek to contribute to accessibility solutions. We have designed four algorithms that help users understand scene information through the auditory channel: the system narrates the scene's description using speech technology. This would increase the accessibility of visual-spatial information for users in a metaverse and, later, in the physical world.
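As an illustrative sketch only (not the thesis's actual narration algorithms), the pipeline described in the abstract — object-detection output turned into a spoken scene description in the browser — could look like the following. The `describeScene` helper and the detection record format are hypothetical assumptions; in a browser the resulting text would be handed to the standard Web Speech API via `speechSynthesis.speak`.

```javascript
// Hypothetical sketch: convert object-detection output into a natural-language
// scene description suitable for speech narration. The detection format
// ({ label, confidence }) and this grouping strategy are illustrative
// assumptions, not the four algorithms evaluated in the thesis.
function describeScene(detections) {
  if (detections.length === 0) {
    return "No objects detected in your view.";
  }
  // Group detections by label and count them, e.g. two chairs -> "2 chairs".
  const counts = new Map();
  for (const d of detections) {
    counts.set(d.label, (counts.get(d.label) || 0) + 1);
  }
  const phrases = [...counts.entries()].map(
    ([label, n]) => (n === 1 ? `a ${label}` : `${n} ${label}s`)
  );
  return `In front of you there are ${phrases.join(", ")}.`;
}

const text = describeScene([
  { label: "chair", confidence: 0.91 },
  { label: "chair", confidence: 0.84 },
  { label: "table", confidence: 0.88 },
]);

// In a browser, the description would then be spoken aloud:
//   speechSynthesis.speak(new SpeechSynthesisUtterance(text));
```

Grouping repeated labels before speaking keeps the narration short, which matters for auditory interfaces where users cannot skim.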

Identifier: oai:union.ndltd.org:VTETD/oai:vtechworks.lib.vt.edu:10919/113216
Date: 17 January 2023
Creators: Wasi, Mohd Sheeban
Contributors: Computer Science and Applications, Polys, Nicholas F., McCrickard, D. Scott, Bukvic, Ivica
Publisher: Virginia Tech
Source Sets: Virginia Tech Theses and Dissertations
Language: English
Detected Language: English
Type: Thesis
Format: ETD, application/pdf
Rights: In Copyright, http://rightsstatements.org/vocab/InC/1.0/
