Exploration Of Codeless In-situ Extended Reality Authoring Environment For Asynchronous Immersive Spatial Instructions

Immersive reality technology, such as augmented and virtual reality (AR/VR), has become increasingly prevalent thanks to hardware and software innovations that have driven down the cost of devices such as head-mounted displays (HMDs). There is significant evidence that immersive reality training improves the rate of skill acquisition. However, the knowledge required to develop content for such immersive media is still relatively high, so subject matter experts often must work with programmers to create it.

Our research in this thesis falls into four distinct but mutually dependent parts. First, we explored the problem of immersive content creation with ProcessAR, an AR-based system for developing 2D/3D content that captures subject matter experts' (SMEs') environment-object interactions in situ. The design space for ProcessAR was identified through formative interviews with AR programming experts and SMEs, alongside a comparative design study with SMEs and novice users. To enable smooth workflows, ProcessAR uses computer vision to locate and identify tools and objects in the workspace as the author looks at them. We also explored additional features, such as embedding 2D videos with detected objects and user-adaptive triggers. A final user evaluation comparing ProcessAR against a baseline AR authoring environment showed, per our qualitative questionnaire, that users preferred ProcessAR.
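
To illustrate the gaze-triggered lookup described above, the sketch below gates an off-the-shelf object detector on the author's gaze point. This is a minimal sketch under assumed interfaces, not ProcessAR's actual implementation; `detector`, `Detection`, and all other names here are hypothetical.

```python
# Hypothetical sketch of gaze-gated object lookup; not ProcessAR's real code.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # detected tool/object class, e.g. "wrench"
    bbox: tuple        # (x, y, w, h) in image pixels
    confidence: float

def gaze_hits(bbox, gaze_xy):
    """True if the author's gaze point falls inside a detection's box."""
    x, y, w, h = bbox
    gx, gy = gaze_xy
    return x <= gx <= x + w and y <= gy <= y + h

def object_under_gaze(frame, gaze_xy, detector, min_conf=0.6):
    """Run the detector on the current HMD frame and return the object the
    author is looking at, so authored content can be anchored to it."""
    hits = [d for d in detector(frame)
            if d.confidence >= min_conf and gaze_hits(d.bbox, gaze_xy)]
    return max(hits, key=lambda d: d.confidence) if hits else None
```
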
Second, we explored EditAR, a unified authoring and editing environment that creates content for multiple media, such as AR, VR, and video instructions, from a single demonstration. EditAR captures the user's interactions within an environment and creates a digital twin, enabling users without programming backgrounds to develop content. We conducted formative interviews with subject matter and media experts to design the system, and the resulting prototype was reviewed by experts. We also performed a user study comparing traditional video creation against producing 2D video from 3D recordings via a 3D editor that uses freehand interaction for in-headset editing. Users recorded instructions five times faster with EditAR, preferred it, and gave it significantly higher usability scores.
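
To make the single-demonstration idea concrete, one plausible data shape for such a recording is sketched below. The schema and the idle-gap step-segmentation heuristic are illustrative assumptions, not EditAR's actual design.

```python
# Hypothetical recording schema a digital twin could replay as AR overlays,
# a VR walkthrough, or a rendered 2D video. Field names are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Keyframe:
    t: float                       # seconds since recording start
    hand_pose: List[float]         # flattened tracked hand joint positions
    object_id: str = ""            # object being manipulated, if any
    object_pose: List[float] = field(default_factory=list)  # flattened 4x4 pose

@dataclass
class Demonstration:
    scene: str                     # identifier of the scanned digital twin
    keyframes: List[Keyframe] = field(default_factory=list)

    def segment_steps(self, idle_gap: float = 1.5):
        """Split the recording into instruction steps wherever tracking
        data pauses for longer than `idle_gap` seconds (a toy heuristic)."""
        steps, current, last_t = [], [], None
        for kf in self.keyframes:
            if last_t is not None and kf.t - last_t > idle_gap and current:
                steps.append(current)
                current = []
            current.append(kf)
            last_t = kf.t
        if current:
            steps.append(current)
        return steps
```
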
Third, we explored AnnotateXR, an extended reality (XR) workflow that collects various kinds of high-fidelity data and auto-annotates them from a single demonstration. AnnotateXR lets users align virtual models over physical objects tracked with six-degree-of-freedom (6DoF) sensors. It couples a hand-tracking-capable XR HMD with the 6DoF pose information and collision detection to algorithmically segment the different actions in videos through its digital twin. The virtual-physical mapping provides a tight bounding volume from which semantic segmentation masks are generated for the captured image data. Alongside object and action segmentation, AnnotateXR supports other annotation dimensions required by modern computer vision, such as human-object and object-object interactions and rich 3D recordings, all from a single demonstration. In our user study, AnnotateXR produced over 112,000 annotated data points in 67 minutes while matching the quality of manual annotation.
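
The mask-generation step can be pictured as projecting the aligned model's geometry through the camera. The sketch below does this with a simple pinhole projection and a convex-hull fill; it is an assumed reconstruction of the approach, not AnnotateXR's implementation, and all names are hypothetical.

```python
# Hypothetical sketch: segmentation mask from a 6DoF-tracked virtual model.
import cv2
import numpy as np

def mask_from_tracked_model(vertices, pose, K, image_hw):
    """vertices: (N, 3) points of the aligned virtual model (model space).
    pose: (4, 4) model-to-camera transform from the 6DoF tracker.
    K: (3, 3) pinhole camera intrinsics.
    Returns a binary mask of shape image_hw covering the object's pixels."""
    pts_h = np.hstack([vertices, np.ones((len(vertices), 1))])  # homogeneous
    cam = (pose @ pts_h.T).T[:, :3]          # model space -> camera space
    cam = cam[cam[:, 2] > 0]                 # keep points in front of camera
    uv = (K @ cam.T).T
    uv = (uv[:, :2] / uv[:, 2:3]).astype(np.int32)  # perspective divide
    mask = np.zeros(image_hw, dtype=np.uint8)
    if len(uv) >= 3:
        # Convex-hull fill: a crude but tight-enough bounding volume for masks.
        cv2.fillConvexPoly(mask, cv2.convexHull(uv), 1)
    return mask
```
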
Lastly, recent advances in hand tracking let users interact in VR in ways that closely resemble interaction in the physical world, but these benefits of natural interaction come at the cost of the precision and accuracy offered by legacy input devices such as the keyboard and mouse. We therefore conducted two elicitation studies to empirically derive design guidance for cross-modal devices that support an immersive VR interface combining direct hand interaction with simultaneous keyboard and mouse input. The results and guidelines from the two studies informed a prototype, the Immersive Keyboard, which we evaluated against a traditional keyboard-and-mouse-only interface.
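
One way to frame such simultaneous input is as a routing problem, with each event dispatched to the modality that serves it best. The toy sketch below illustrates that framing under our own assumptions; it is not the Immersive Keyboard's actual event model, and all names are hypothetical.

```python
# Toy sketch of cross-modal input routing: legacy devices keep precise
# symbolic/2D input, tracked hands keep coarse spatial manipulation.
from dataclasses import dataclass
from typing import Optional

@dataclass
class InputEvent:
    source: str                    # "keyboard", "mouse", or "hand"
    key: Optional[str] = None      # key pressed, for keyboard events
    gesture: Optional[str] = None  # recognized gesture, for hand events

def route(event: InputEvent) -> str:
    """Return which interaction layer should consume the event."""
    if event.source == "keyboard":
        return "text-entry"        # precise typing stays on the keyboard
    if event.source == "mouse":
        return "2d-pointing"       # fine cursor control stays on the mouse
    if event.source == "hand" and event.gesture in ("pinch", "grab"):
        return "3d-manipulation"   # natural grasping for spatial tasks
    return "unhandled"

# Example: route(InputEvent(source="hand", gesture="pinch")) -> "3d-manipulation"
```
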
In this thesis, we have explored a novel extended reality authoring environment that enables users without programming experience to author asynchronous immersive content in situ, especially for spatial instructions.

DOI: 10.25394/pgs.21641738.v1
Identifier: oai:union.ndltd.org:purdue.edu/oai:figshare.com:article/21641738
Date: 29 November 2022
Creators: Subramanian Chidambaram (14191622)
Source Sets: Purdue University
Detected Language: English
Type: Text, Thesis
Rights: CC BY 4.0
Relation: https://figshare.com/articles/thesis/Exploration_Of_Codeless_In-situ_Extended_Reality_Authoring_Environment_For_Asynchronous_Immersive_Spatial_Instructions/21641738
