
Evidential Reasoning for Multimodal Fusion in Human Computer Interaction

Fusion of information from multiple modalities in Human Computer Interfaces
(HCI) has attracted considerable attention in recent years and has far-reaching
implications in many areas of human-machine interaction. However, a major
limitation of current HCI fusion systems is that the fusion process tends to
ignore the semantic nature of modalities, which may reinforce, complement or
contradict each other over time. Also, most systems are not robust in
representing the ambiguity inherent in human gestures. In this work, we
investigate an evidential reasoning based approach for intelligent multimodal
fusion, and apply this algorithm to a proposed multimodal system consisting of
a Hand Gesture sensor and a Brain Computer Interface (BCI). There are three
major contributions of this work to the area of human computer interaction.
First, we propose an algorithm for reconstruction of the 3D hand pose given a
2D input video. Second, we develop a BCI using Steady State Visually Evoked
Potentials (SSVEP), and show how a multimodal system consisting of the two
sensors can improve the efficiency and reduce the complexity of the system
while retaining the same level of accuracy. Finally, we propose a semantic
fusion algorithm based
on Transferable Belief Models, which can successfully fuse information from
these two sensors, to form meaningful concepts and resolve ambiguity. We also
analyze this system for robustness under various operating scenarios.
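As a rough illustration of the kind of combination used in Transferable Belief Models, the sketch below fuses two hypothetical basic belief assignments (one from a hand-gesture sensor, one from the SSVEP BCI) with the unnormalized conjunctive rule, which keeps mass on the empty set to quantify conflict, and then reads out a decision through the pignistic transformation. The command labels, mass values, and function names are illustrative assumptions and do not reflect the thesis' actual implementation.

```python
# Frame of discernment: possible user commands (hypothetical labels).
FRAME = frozenset({"select", "move", "cancel"})

def conjunctive_combine(m1, m2):
    """Unnormalized conjunctive rule of the TBM: mass assigned to the
    empty set is kept and measures the conflict between the sources."""
    combined = {}
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            combined[inter] = combined.get(inter, 0.0) + ma * mb
    return combined

def pignistic(m):
    """Pignistic transformation: condition away the conflict mass, then
    spread each focal element's mass evenly over its singletons."""
    conflict = m.get(frozenset(), 0.0)
    betp = {x: 0.0 for x in FRAME}
    for a, mass in m.items():
        if not a:
            continue
        for x in a:
            betp[x] += mass / (len(a) * (1.0 - conflict))
    return betp

# Hand-gesture sensor: fairly confident the gesture means "select",
# with some mass left on the whole frame to model ambiguity.
m_gesture = {frozenset({"select"}): 0.6, FRAME: 0.4}

# SSVEP BCI: weaker evidence, partly supporting {"select", "move"}.
m_bci = {frozenset({"select", "move"}): 0.5,
         frozenset({"cancel"}): 0.2,
         FRAME: 0.3}

m_fused = conjunctive_combine(m_gesture, m_bci)
print("mass on empty set (conflict):", round(m_fused.get(frozenset(), 0.0), 3))
print("pignistic probabilities:",
      {k: round(v, 3) for k, v in pignistic(m_fused).items()})
```

With these example masses, the fused belief concentrates on "select" while the small mass on the empty set flags the residual disagreement between the two sensors.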

Identifier: oai:union.ndltd.org:LACETR/oai:collectionscanada.gc.ca:OWTU.10012/2674
Date: January 2007
Creators: Reddy, Bakkama Srinath
Source Sets: Library and Archives Canada ETDs Repository / Centre d'archives des thèses électroniques de Bibliothèque et Archives Canada
Language: English
Detected Language: English
Type: Thesis or Dissertation
Format: 1541810 bytes, application/pdf