<p>With advances in hardware and mobile computing power, Augmented Reality (AR) has shown promise across many areas of everyday life and work. By superimposing virtual assets onto the real world, AR significantly blurs the boundary between the digital and physical spaces, bringing a wealth of digital augmentation and intelligence into the user's physical surroundings. Meanwhile, thanks to rapid progress in Artificial Intelligence (AI) perception algorithms such as object detection, scene reconstruction, and human tracking, the dynamic behaviors of digital AR content have become increasingly tied to the physical contexts of both humans and environments. This context-awareness, enabled by emerging techniques, enriches the potential interaction modalities of AR experiences and improves the intuitiveness and effectiveness of the digital augmentation delivered to users. Researchers are therefore increasingly motivated to incorporate contextual information into the AR domain and create novel AR experiences that augment users' activities in the physical world.</p>
<p>On a broader level, this thesis focuses on novel designs and modalities that connect contextual information with AR content behaviors in context-aware AR experiences. In particular, we design AR experiences around three types of real-world context: 1) human actions, 2) physical entities, and 3) interactions between humans and physical environments. To this end, we explore 1) software and hardware modules, as well as conceptual models, that perceive and interpret the contexts required by the AR experiences, and 2) supportive authoring tools and interfaces that enable users and designers to define associations between AR content and the interaction modalities that leverage the contextual information. We mainly study four workflows: 1) designing adaptive AR tutoring systems for human-machine interactions, 2) customizing human-involved context-aware AR applications, 3) authoring shareable semantic-aware AR experiences, and 4) enabling hand-object-interaction dataset collection for scalable context-aware AR application deployment. We further develop the enabling techniques and algorithms, including 1) an adaptation model that adaptively varies AR tutoring elements based on the learner's real-time interactions with the physical machine, 2) a customized video-see-through AR headset for pervasive human-activity detection, 3) a semantic adaptation model that adjusts the spatial relationships of AR contents according to the semantic understanding of different physical entities and environments, and 4) an AR-based interface that empowers novice users to collect high-quality datasets for training user- and site-specific networks in hand-object-interaction-aware AR applications. The sketch below illustrates the context-to-content mapping pattern these systems share.</p>
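<p>As a concrete illustration of that shared pattern, the following is a minimal, hypothetical sketch rather than the thesis implementation: a perception module emits context events, and simple adaptation rules decide which AR content to trigger and where to anchor it. All names here (ContextEvent, AdaptationRule, place_hint) are illustrative assumptions, not APIs from the systems described above.</p>
<pre><code># Hypothetical sketch of a context-to-content mapping loop for
# context-aware AR. A real system would replace the print statement with
# scene-graph placement of a virtual asset.

from dataclasses import dataclass
from typing import Callable

@dataclass
class ContextEvent:
    """One observation from a perception module (e.g., an object detector)."""
    label: str               # semantic class, e.g., "lever_pulled"
    position: tuple          # 3D position in the AR world frame
    confidence: float        # detector confidence in [0, 1]

@dataclass
class AdaptationRule:
    """Maps a context label to an AR content action."""
    trigger_label: str
    min_confidence: float
    action: Callable[[ContextEvent], None]

def place_hint(event: ContextEvent) -> None:
    # Placeholder action: anchor an AR tutoring hint at the detected entity.
    print(f"Anchoring AR hint at {event.position} for '{event.label}'")

def on_perception(event: ContextEvent, rules: list) -> None:
    """Dispatch each perception event to every matching adaptation rule."""
    for rule in rules:
        if rule.trigger_label == event.label and event.confidence >= rule.min_confidence:
            rule.action(event)

rules = [AdaptationRule("lever_pulled", 0.8, place_hint)]
on_perception(ContextEvent("lever_pulled", (0.2, 1.1, 0.5), 0.93), rules)
</code></pre>
<p>Keeping the perception output (events) decoupled from the authoring output (rules) is what lets end-users define associations between contexts and AR content without touching the underlying AI modules.</p>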
<p>Takeaways from this research series include: 1) the use of modern AI modules effectively enlarges both the spatial and contextual scalability of AR experiences, and 2) well-designed authoring systems and interfaces lower the barrier for end-users and domain experts to leverage AI outputs when creating AR experiences tailored to their target users. We conclude that involving AI techniques in both the creation and implementation stages of AR applications is crucial to building an intelligent, adaptive, and scalable ecosystem of context-aware AR applications.</p>
Identifier | oai:union.ndltd.org:purdue.edu/oai:figshare.com:article/22677568 |
Date | 24 April 2023 |
Creators | Xun Qian (15339328) |
Source Sets | Purdue University |
Detected Language | English |
Type | Text, Thesis |
Rights | CC BY 4.0 |
Relation | https://figshare.com/articles/thesis/Explore_the_Design_and_Authoring_of_AI-Driven_Context-Aware_Augmented_Reality_Experiences/22677568 |