Augmented Reality (AR) makes it possible to interact with virtual and real objects at the same time, since it seamlessly combines the real world with computer-generated content. However, most AR interface research uses general Virtual Reality (VR) interaction techniques without modification. In this research we develop a multimodal interface (MMI) for AR with speech and 3D hand gesture input. We develop a multimodal signal fusion architecture, based on observed user behaviour while interacting with the MMI, that provides more effective and natural multimodal signal fusion. Speech and 3D vision-based free-hand gestures are used as the multimodal input channels. Two user observation studies were conducted: (1) a Wizard of Oz study and (2) gesture modelling. With the Wizard of Oz study, we observed how users behave when interacting with our MMI. Gesture modelling was undertaken to explore whether different types of gestures can be described by pattern curves. Based on these experimental observations, we designed our own multimodal fusion architecture and developed an MMI. User evaluations were conducted to assess the usability of our MMI. We found that the MMI is more efficient than the unimodal interfaces and that users are more satisfied with it. We also describe design guidelines derived from our findings in the user studies.
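The abstract does not spell out how the speech and gesture signals are combined; as a rough illustration only, a simple time-window pairing of speech commands with hand gestures might look like the sketch below. All names (SpeechEvent, GestureEvent, fuse) and the 1.5 s window are assumptions for illustration, not the fusion architecture described in the thesis.

    # Minimal sketch: pair a speech command with a gesture if they occur close in time.
    from dataclasses import dataclass
    from typing import Optional

    FUSION_WINDOW = 1.5  # seconds; assumed pairing window, not from the thesis

    @dataclass
    class SpeechEvent:
        command: str      # e.g. "move", "rotate", "copy"
        timestamp: float

    @dataclass
    class GestureEvent:
        kind: str         # e.g. "point", "grab", "release"
        target_id: int    # id of the object the hand refers to
        timestamp: float

    def fuse(speech: SpeechEvent, gesture: GestureEvent) -> Optional[dict]:
        """Return a fused command if the two modalities arrive within the window."""
        if abs(speech.timestamp - gesture.timestamp) <= FUSION_WINDOW:
            return {"action": speech.command,
                    "object": gesture.target_id,
                    "gesture": gesture.kind}
        return None  # too far apart in time; treat each signal as unimodal input

    # Example: "move" spoken 0.4 s after the user points at object 7.
    print(fuse(SpeechEvent("move", 10.0), GestureEvent("point", 7, 10.4)))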
Identifier | oai:union.ndltd.org:canterbury.ac.nz/oai:ir.canterbury.ac.nz:10092/4094 |
Date | January 2010 |
Creators | Lee, Minkyung |
Publisher | University of Canterbury. Department of Computer Science and Software Engineering |
Source Sets | University of Canterbury |
Language | English |
Detected Language | English |
Type | Electronic thesis or dissertation, Text |
Rights | Copyright Min Kyung Lee, http://library.canterbury.ac.nz/thesis/etheses_copyright.shtml |
Relation | NZCU |