
Visual prosody in speech-driven facial animation: elicitation, prediction, and perceptual evaluation

Facial animations capable of articulating accurate movements in synchrony with a
speech track have become a subject of much research during the past decade. Most of
these efforts have focused on the articulation of lip and tongue movements, since these
are the primary sources of information in speech reading. However, a wealth of
paralinguistic information is implicitly conveyed through visual prosody (e.g., head and
eyebrow movements). In contrast with lip and tongue movements, for which the
articulation rules are fairly well known (i.e., viseme-phoneme mappings, coarticulation),
little is known about the generation of visual prosody.

The objective of this thesis is to explore the perceptual contributions of visual prosody
in speech-driven facial avatars. Our main hypothesis is that visual prosody driven by the
acoustics of the speech signal, as opposed to random or no visual prosody, results in
more realistic, coherent, and convincing facial animations. To test this hypothesis, we
developed an audio-visual system capable of capturing synchronized speech and facial
motion from a speaker using infrared illumination and retro-reflective markers. To elicit
natural visual prosody, a story-telling experiment was designed in which the actors were
shown a short cartoon video and subsequently asked to narrate the episode. From this
audio-visual data, four facial animations were generated, articulating no visual prosody,
Perlin noise, speech-driven movements, and ground-truth movements. Speech-driven
movements were driven by acoustic features of the speech signal (e.g., fundamental
frequency and energy) using rule-based heuristics and autoregressive models. A pair-wise
perceptual evaluation shows that subjects can clearly discriminate among the four visual
prosody animations. It also shows that speech-driven movements and Perlin noise, in that
order, approach the performance of veridical motion. These results are promising and
suggest that speech-driven motion could outperform Perlin noise if more powerful motion
prediction models were used. In addition, our results show that exaggeration can bias
viewers to perceive a computer-generated character as having more realistic motion.
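
As an illustration of the kind of mapping described above, the following is a minimal
sketch (in Python with NumPy) of a linear autoregressive model that predicts a
head-rotation trajectory from frame-level fundamental frequency and energy. The function
names, model order, least-squares fit, and synthetic data are illustrative assumptions,
not the implementation used in the thesis.

# Minimal sketch (not the thesis implementation): a linear autoregressive model
# that maps frame-level prosodic features (F0 and energy) to a head-rotation angle.
import numpy as np

def fit_ar_prosody_model(f0, energy, head_angle, order=3):
    """Fit y[t] = sum_k a_k*y[t-k] + b1*f0[t] + b2*energy[t] + c by least squares."""
    rows, targets = [], []
    for t in range(order, len(head_angle)):
        past = head_angle[t - order:t][::-1]            # y[t-1], ..., y[t-order]
        rows.append(np.concatenate([past, [f0[t], energy[t], 1.0]]))
        targets.append(head_angle[t])
    coeffs, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(targets), rcond=None)
    return coeffs

def synthesize_head_motion(f0, energy, coeffs, order=3):
    """Roll the fitted model forward to generate a head-rotation trajectory."""
    y = np.zeros(len(f0))
    for t in range(order, len(f0)):
        past = y[t - order:t][::-1]
        y[t] = np.concatenate([past, [f0[t], energy[t], 1.0]]) @ coeffs
    return y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-ins for per-frame F0 (Hz), energy, and recorded head pitch (deg).
    f0 = 120 + 20 * np.sin(np.linspace(0, 8 * np.pi, 500)) + rng.normal(0, 2, 500)
    energy = np.abs(np.sin(np.linspace(0, 4 * np.pi, 500))) + rng.normal(0, 0.05, 500)
    head_pitch = 0.05 * (f0 - 120) + 2 * energy + rng.normal(0, 0.1, 500)

    coeffs = fit_ar_prosody_model(f0, energy, head_pitch)
    predicted = synthesize_head_motion(f0, energy, coeffs)
    print("first predicted frames:", np.round(predicted[3:8], 2))

In practice, the predicted trajectory would drive the head (or eyebrow) channel of the
avatar, while the other animation conditions (no motion, Perlin noise, ground truth)
replace this predicted signal.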

Identifier: oai:union.ndltd.org:tamu.edu/oai:repository.tamu.edu:1969.1/2436
Date: 29 August 2005
Creators: Zavala Chmelicka, Marco Enrique
Contributors: Gutierrez-Osuna, Ricardo
Publisher: Texas A&M University
Source Sets: Texas A and M University
Language: en_US
Detected Language: English
Type: Book, Thesis, Electronic Thesis, text
Format: 4014463 bytes, electronic, application/pdf, born digital
