
Context-aware mixed reality: A learning-based framework for semantic-level interaction

Mixed reality (MR) is a powerful interactive technology for new types of user experience. We present a semantic-based interactive MR framework that goes beyond current geometry-based approaches, offering a step change in generating high-level context-aware interactions. Our key insight is that by building semantic understanding into MR, we can develop a system that not only greatly enhances the user experience through object-specific behaviours, but also paves the way for solving complex interaction design challenges. In this paper, our proposed framework generates semantic properties of the real-world environment through a dense scene reconstruction and deep image understanding scheme. We demonstrate our approach by developing a material-aware prototype system for context-aware physical interactions between real and virtual objects. Quantitative and qualitative evaluation results show that the framework delivers accurate and consistent semantic information in an interactive MR environment, providing effective real-time semantic-level interactions.
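The abstract describes a material-aware scheme in which semantic labels inferred for real-world surfaces drive object-specific physical responses of virtual objects. The short Python sketch below illustrates one way such a label-to-physics mapping could be wired up; it is not taken from the paper, and the material table, coefficient values, and function names are illustrative assumptions only.

```python
# Minimal sketch (not the authors' implementation): map a predicted material
# label for a reconstructed real-world surface to physics parameters that
# shape how a virtual object reacts on contact. Labels and values are
# hypothetical.

from dataclasses import dataclass


@dataclass
class PhysicsMaterial:
    friction: float      # Coulomb friction coefficient
    restitution: float   # bounciness on impact (0 = no bounce, 1 = fully elastic)


# Hypothetical lookup table: semantic material label -> physical behaviour.
MATERIAL_TABLE = {
    "carpet": PhysicsMaterial(friction=0.9, restitution=0.1),
    "wood":   PhysicsMaterial(friction=0.5, restitution=0.4),
    "metal":  PhysicsMaterial(friction=0.3, restitution=0.6),
    "glass":  PhysicsMaterial(friction=0.2, restitution=0.7),
}
DEFAULT_MATERIAL = PhysicsMaterial(friction=0.5, restitution=0.3)


def resolve_bounce(incoming_speed: float, material_label: str) -> float:
    """Return the rebound speed of a virtual object hitting a real surface,
    chosen from the surface's predicted material label."""
    material = MATERIAL_TABLE.get(material_label, DEFAULT_MATERIAL)
    return incoming_speed * material.restitution


if __name__ == "__main__":
    # A virtual ball dropped at 3 m/s rebounds differently off carpet vs. metal.
    for label in ("carpet", "metal"):
        print(label, resolve_bounce(3.0, label))
```

In a full MR pipeline, the material label would come from the dense reconstruction and image-understanding stage described in the abstract; the table above simply stands in for that prediction.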

Identifier: oai:union.ndltd.org:BRADFORD/oai:bradscholars.brad.ac.uk:10454/17543
Date: 16 December 2019
Creators: Chen, L., Tang, W., John, N.W., Wan, Tao Ruan, Zhang, J.J.
Source Sets: Bradford Scholars
Language: English
Type: Article, Published version
Rights: © 2019 The Authors. Computer Graphics Forum published by Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
