
Multi-modal integration for gesture and speech

Demonstratives, in particular gestures that "only"
accompany speech, are not a central concern in current
theories of grammar. When dealing with gestures,
one major problem is fixing their function; the
other is how to integrate the representations
originating from different channels and, ultimately,
how to determine their composite meanings. The
growing interest in multi-modal settings, computer
simulations, human-machine interfaces, and VR applications
increases the need for theories of multimodal
structures and events.

In our workshop contribution
we focus on the integration of multimodal
contents and investigate different approaches
to this problem, such as Johnston et al.
(1997), Johnston (1998), Johnston and Bangalore
(2000), Chierchia (1995), Asher (2005), and
Rieser (2005).

Identifier: oai:union.ndltd.org:Potsdam/oai:kobv.de-opus-ubp:1039
Date: January 2006
Creators: Lücking, Andy; Rieser, Hannes; Staudacher, Marc
Publisher: Universität Potsdam, Extern
Source Sets: Potsdam University
Language: English
Detected Language: English
Type: InProceedings
Format: application/pdf
Source: brandial´06: Proceedings of the 10th Workshop on the Semantics and Pragmatics of Dialogue (SemDial-10) / ed. by David Schlangen; Raquel Fernández. - Univ.-Verl.: Potsdam, 2006. - vii, 201 S.: Ill., graph. Darst.
Rights: http://opus.kobv.de/ubp/doku/urheberrecht.php
