1 |
Human Facial Animation Based on Real Image Sequence - Chang, Ying-Liang, 19 May 2003 (has links)
No description available.
|
2 |
PDE-based Facial Animation: Making the Complex Simple - Sheng, Y., Willis, P., Gonzalez Castro, Gabriela, Ugail, Hassan, January 2008 (has links)
Yes / Direct parameterisation is among the most widely used facial animation techniques, but it requires complicated schemes to animate face models with complex topology. This paper develops a simple solution by introducing a PDE-based facial animation scheme. Using a PDE face model means we only need to animate a group of boundary curves, without any other conventional surface interpolation algorithm. We describe the basis of the method and show results from a practical implementation. / EPSRC
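To make the idea concrete, the sketch below (a toy illustration with invented names, using a second-order Laplace smoothing as a stand-in for the fourth-order PDE the paper actually uses) shows how a surface patch can be regenerated purely from its boundary curves, so that animating the boundary curves alone animates the whole patch.

```python
# Minimal sketch: the patch interior is generated entirely from its boundary curves,
# so animating only the boundary curves re-creates the whole surface each frame.
# This toy uses Jacobi relaxation of Laplace's equation, not the Bloor-Ugail PDE.
import numpy as np

def pde_patch(boundary_u0, boundary_u1, boundary_v0, boundary_v1, n=32, iters=500):
    """Fill an (n x n x 3) patch from four 3D boundary curves (each n x 3)."""
    grid = np.zeros((n, n, 3))
    # Dirichlet boundary conditions: the four boundary curves.
    grid[0, :, :] = boundary_u0
    grid[-1, :, :] = boundary_u1
    grid[:, 0, :] = boundary_v0
    grid[:, -1, :] = boundary_v1
    for _ in range(iters):
        interior = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1] +
                           grid[1:-1, :-2] + grid[1:-1, 2:])
        grid[1:-1, 1:-1] = interior
    return grid

# Usage: move only one boundary curve (e.g. an "upper lip" curve) and regenerate the
# patch; no per-vertex interpolation of the interior is needed.
u = np.linspace(0.0, 1.0, 32)
lower = np.stack([u, np.zeros_like(u), np.zeros_like(u)], axis=1)
upper = np.stack([u, np.ones_like(u), 0.2 * np.sin(np.pi * u)], axis=1)  # animated curve
left  = np.stack([np.zeros_like(u), u, np.zeros_like(u)], axis=1)
right = np.stack([np.ones_like(u), u, np.zeros_like(u)], axis=1)
patch = pde_patch(lower, upper, left, right)
print(patch.shape)  # (32, 32, 3)
```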
|
3 |
Visual prosody in speech-driven facial animation: elicitation, prediction, and perceptual evaluation - Zavala Chmelicka, Marco Enrique, 29 August 2005 (has links)
Facial animations capable of articulating accurate movements in synchrony with a
speech track have become a subject of much research during the past decade. Most of
these efforts have focused on articulation of lip and tongue movements, since these are
the primary sources of information in speech reading. However, a wealth of
paralinguistic information is implicitly conveyed through visual prosody (e.g., head and
eyebrow movements). In contrast with lip/tongue movements, however, for which the
articulation rules are fairly well known (i.e., viseme-phoneme mappings, coarticulation),
little is known about the generation of visual prosody.
The objective of this thesis is to explore the perceptual contributions of visual prosody in
speech-driven facial avatars. Our main hypothesis is that visual prosody driven by
acoustics of the speech signal, as opposed to random or no visual prosody, results in
more realistic, coherent and convincing facial animations. To test this hypothesis, we
have developed an audio-visual system capable of capturing synchronized speech and
facial motion from a speaker using infrared illumination and retro-reflective markers. In
order to elicit natural visual prosody, a story-telling experiment was designed in which
the actors were shown a short cartoon video, and subsequently asked to narrate the
episode. From this audio-visual data, four different facial animations were generated,
articulating no visual prosody, Perlin-noise, speech-driven movements, and ground truth
movements. Speech-driven movements were driven by acoustic features of the speech
signal (e.g., fundamental frequency and energy) using rule-based heuristics and
autoregressive models. A pair-wise perceptual evaluation shows that subjects can clearly
discriminate among the four visual prosody animations. It also shows that speech-driven
movements and Perlin-noise, in that order, approach the performance of veridical
motion. The results are quite promising and suggest that speech-driven motion could
outperform Perlin-noise if more powerful motion prediction models are used. In
addition, our results also show that exaggerated motion can bias viewers toward perceiving a
computer-generated character's movements as more realistic.
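As a concrete illustration of the speech-driven approach, the hedged sketch below maps crude frame-level pitch and energy estimates to head and eyebrow motion with simple rules; the feature extraction and mapping constants are simplified assumptions, not the heuristics or autoregressive models developed in the thesis.

```python
# Toy speech-driven visual prosody: per-frame (f0, energy) -> head pitch and brow raise.
import numpy as np

def frame_features(signal, sr, frame_len=0.025, hop=0.010):
    """Return per-frame (f0_estimate_hz, energy) from a mono speech signal."""
    n, h = int(frame_len * sr), int(hop * sr)
    feats = []
    for start in range(0, len(signal) - n, h):
        frame = signal[start:start + n] * np.hanning(n)
        energy = float(np.sum(frame ** 2))
        ac = np.correlate(frame, frame, mode="full")[n - 1:]
        lo, hi = int(sr / 400), int(sr / 75)              # search 75-400 Hz
        lag = lo + int(np.argmax(ac[lo:hi]))
        f0 = sr / lag if ac[lag] > 0.3 * ac[0] else 0.0   # crude voicing test
        feats.append((f0, energy))
    return np.array(feats)

def visual_prosody(feats):
    """Rule-based mapping: louder frames -> head nod (degrees), higher pitch -> brow raise."""
    f0, energy = feats[:, 0], feats[:, 1]
    energy_n = energy / (energy.max() + 1e-9)
    f0_n = np.clip((f0 - 100.0) / 200.0, 0.0, 1.0)
    head_pitch = -8.0 * energy_n                          # nod up to 8 degrees on stressed frames
    brow_raise = 0.6 * f0_n
    kernel = np.hanning(9); kernel /= kernel.sum()        # temporal smoothing for continuity
    return np.convolve(head_pitch, kernel, "same"), np.convolve(brow_raise, kernel, "same")

# Usage with a synthetic one-second "utterance":
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
speech = np.sin(2 * np.pi * 150 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))
pitch_curve, brow_curve = visual_prosody(frame_features(speech, sr))
print(pitch_curve.shape, brow_curve.shape)
```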
|
4 |
Human Facial Animation Based on Real Image Sequence - Yeh, Shih-Hao, 24 July 2001 (has links)
Generating 3D human face models efficiently and realistically is an interesting and difficult problem in computer graphics. Animated face models are essential to computer games, film making, online chat, virtual presence, video conferencing, etc. As computer technology progresses, people demand ever richer multimedia effects; consequently, the construction of 3D human face models and facial animation have been investigated enthusiastically in recent years.
There are many ways to construct 3D human face models, such as laser scanning and computer graphics techniques. So far, the most popular commercially available tools have relied on laser scanners, but these cannot track a moving subject. We propose a technique that constructs a 3D human face model from a real image sequence. The full procedure can be divided into four parts. In the first step, two cameras photograph the human face simultaneously; from the distance between the two cameras we can calculate the depth of the face and build a 3D face model. The second step works on an image sequence taken by a single camera: by comparing feature points in the previous image and the following image, we obtain the motion vectors of the face. At this point we can construct a template of an animated 3D face model. After that, we can map any new character's 2D image onto the template and build the new character's animation. The full procedure is automatic, and we can easily construct high-quality human facial animation.
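The two geometric steps can be sketched as follows (a simplified illustration with assumed values, not the thesis implementation): depth of a facial feature from the disparity of a rectified stereo pair, and per-feature 2D motion vectors between consecutive frames.

```python
# Toy stereo depth and feature-motion computation for a calibrated, parallel camera pair.
from dataclasses import dataclass

@dataclass
class StereoRig:
    focal_px: float    # focal length in pixels
    baseline_m: float  # distance between the two cameras in metres

def depth_from_disparity(rig: StereoRig, x_left: float, x_right: float) -> float:
    """Depth (metres) of a point seen at column x_left in the left image and
    x_right in the right image of a rectified stereo pair: Z = f * B / d."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    return rig.focal_px * rig.baseline_m / disparity

def motion_vectors(prev_points, next_points):
    """2D motion vectors of tracked feature points between consecutive frames."""
    return [(x1 - x0, y1 - y0) for (x0, y0), (x1, y1) in zip(prev_points, next_points)]

# Usage: a feature on the nose tip seen by both cameras, then tracked over one frame.
rig = StereoRig(focal_px=800.0, baseline_m=0.12)
print(depth_from_disparity(rig, x_left=412.0, x_right=380.0))  # 3.0 metres
print(motion_vectors([(412.0, 300.0)], [(415.0, 297.0)]))      # [(3.0, -3.0)]
```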
|
5 |
Internet video-conferencing using model-based image coding with agent technology - Al-Qayedi, Ali, January 1999 (has links)
No description available.
|
6 |
Making FACES: the Facial Animation, Construction and Editing System - Patel, Manjula, January 1991 (has links)
The human face is a fascinating, but extremely complex object; the research project described is concerned with the computer generation and animation of faces. However, the age-old captivation with the face transforms into a major obstacle when creating synthetic faces. The face and head are the most visible attributes of a person. We master the skills of recognising faces and interpreting facial movement at a very early age. As a result, we are likely to notice the smallest deviation from our concept of how a face should appear and behave. Computer animation, in general, is often perceived to be "wooden" and very "rigid"; the aim is therefore to provide facilities for the generation of believable faces and convincing facial movement. The major issues addressed within the project concern the modelling of a large variety of faces and their animation. Computer modelling of arbitrary faces is an area that has received relatively little attention in comparison with the animation of faces. Another problem that has been considered is that of providing the user with adequate and effective control over the modelling and animation of the face. The Facial Animation, Construction and Editing System or FACES was conceived as a system for investigating these issues. A promising approach is to look a little deeper than the surface of the skin. A three-layer anatomical model of the head, which incorporates bone, muscle, skin and surface features, has been developed. As well as serving as a foundation which integrates all the facilities available within FACES, the advantage of the model is that it allows differing strategies to be used for modelling and animation. FACES is an interactive system, which helps with both the generation and animation of faces, while hiding the structural complexities of the face from the user. The software consists of four sub-systems; CONSTRUCT and MODIFY cater for modelling functionality, while ANIMATE allows animation sequences to be generated and RENDER provides for shading and motion evaluation.
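A speculative sketch of what a three-layer head representation could look like in code (every class, field and constant below is an assumption made for illustration, not the FACES implementation): bone and skin layers held separately, with muscles attached to both and contraction parameters deforming the skin vertices they influence.

```python
# Toy layered head model: muscle contractions pull influenced skin vertices toward
# each muscle's bone-side attachment point.
from dataclasses import dataclass, field

@dataclass
class Muscle:
    name: str
    origin: tuple             # attachment on the bone layer (x, y, z)
    insertion: tuple          # attachment on the skin layer (x, y, z)
    skin_vertices: list       # indices of skin vertices this muscle influences
    contraction: float = 0.0  # 0 = relaxed, 1 = fully contracted

@dataclass
class LayeredHead:
    bone_vertices: list                  # rigid base layer
    skin_vertices: list                  # deformable surface layer (list of [x, y, z])
    muscles: dict = field(default_factory=dict)

    def set_expression(self, contractions):
        """Apply named muscle contractions, pulling influenced skin vertices toward
        each muscle's origin in proportion to the contraction value."""
        for name, amount in contractions.items():
            m = self.muscles[name]
            m.contraction = amount
            for i in m.skin_vertices:
                v = self.skin_vertices[i]
                self.skin_vertices[i] = [p + amount * 0.3 * (o - p)
                                         for p, o in zip(v, m.origin)]

# Usage: a toy "smile" muscle driving two vertices of a tiny skin mesh.
head = LayeredHead(
    bone_vertices=[[0, 0, 0]],
    skin_vertices=[[1.0, 0.0, 0.0], [1.0, 0.2, 0.0]],
    muscles={"zygomatic_major_l": Muscle("zygomatic_major_l", (0.5, 1.0, 0.0),
                                         (1.0, 0.0, 0.0), [0, 1])},
)
head.set_expression({"zygomatic_major_l": 0.8})
print(head.skin_vertices)
```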
|
7 |
A Nonlinear Framework for Facial Animation - Bastani, Hanieh, 25 July 2008 (has links)
This thesis researches techniques for modelling static facial expressions, as well as the dynamics of continuous
facial motion. We demonstrate how static and dynamic properties of facial expressions can be represented within a linear
and nonlinear context, respectively. These two representations do not act in isolation, but are mutually reinforcing,
yielding a cohesive framework for the analysis, animation, and manipulation of expressive faces. We derive a basis for
the linear space of expressions through Principal Components Analysis (PCA). We introduce and formalize the notion
of "expression manifolds", manifolds residing in PCA space that model motion dynamics for semantically similar expressions.
We then integrate these manifolds into an animation workflow by performing Nonlinear Dimensionality Reduction (NLDR) on the
expression manifolds. This operation yields expression maps that encode a wealth of information relating
to complex facial dynamics, in a low dimensional space that is intuitive to navigate and efficient to manage.
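As a rough sketch of the two stages (PCA for the linear expression space, then NLDR over one expression's manifold), the example below uses synthetic data; the data sizes and the choice of Isomap for the NLDR step are assumptions made for illustration, not the thesis' choices.

```python
# Stage 1: linear expression basis via PCA; Stage 2: low-D "expression map" via NLDR.
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)
# Fake data: 200 frames of a face mesh with 500 vertices (x, y, z), flattened.
frames = rng.normal(size=(200, 500 * 3))

# --- Stage 1: PCA ------------------------------------------------------------
mean_face = frames.mean(axis=0)
centred = frames - mean_face
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
explained = np.cumsum(S ** 2) / np.sum(S ** 2)
k = int(np.searchsorted(explained, 0.95)) + 1   # keep ~95% of the variance
basis = Vt[:k]                                  # (k, 1500) expression basis
coords = centred @ basis.T                      # each frame as a point in PCA space

# --- Stage 2: expression manifold -> expression map --------------------------
# Frames of one semantically similar expression trace a manifold in PCA space;
# NLDR flattens it into a 2-D map that is easy to navigate.
expression_map = Isomap(n_neighbors=10, n_components=2).fit_transform(coords)
print(basis.shape, coords.shape, expression_map.shape)

# A pose is synthesised by picking a point in PCA space and reconstructing:
reconstruction = mean_face + coords[0] @ basis
```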
|
9 |
Perceptually Valid Dynamics for Smiles and Blinks - Trutoiu, Laura, 01 August 2014 (has links)
In many applications, such as conversational agents, virtual reality, movies, and games, animated facial expressions of computer-generated (CG) characters are used to communicate, teach, or entertain. With an increased demand for CG characters, it is important to animate accurate, realistic facial expressions because human facial expressions communicate a wealth of information. However, realistically animating faces is challenging and time-consuming for two reasons. First, human observers are adept at detecting anomalies in realistic CG facial animations. Second, traditional animation techniques based on keyframing sometimes approximate the dynamics of facial expressions or require extensive artistic input while high-resolution performance capture techniques are cost prohibitive. In this thesis, we develop a framework to explore representations of two key facial expressions, blinks and smiles, and we show that data-driven models are needed to realistically animate these expressions. Our approach relies on utilizing high-resolution performance capture data to build models that can be used in traditional keyframing systems. First, we record large collections of high-resolution dynamic expressions through video and motion capture technology. Next, we build expression-specific models of the dynamic data properties of blinks and smiles. We explore variants of the model and assess whether viewers perceive the models as more natural than the simplified models present in the literature. In the first part of the thesis, we build a generative model of the characteristic dynamics of blinks: fast closing of the eyelids followed by a slow opening. Blinks have a characteristic profile with relatively little variation across instances or people. Our results demonstrate the need for an accurate model of eye blink dynamics rather than simple approximations, as viewers perceive the difference. In the second part of the thesis, we investigate how spatial and temporal linearities impact smile genuineness and build a model for genuine smiles. Our perceptual results indicate that a smile model needs to preserve temporal information. With this model, we synthesize perceptually genuine smiles that outperform traditional animation methods accompanied by plausible head motions. In the last part of the thesis, we investigate how blinks synchronize with the start and end of spontaneous smiles. Our analysis shows that eye blinks correlate with the end of the smile and occur before the lip corners stop moving downwards. We argue that the timing of blinks relative to smiles is useful in creating compelling facial expressions. Our work is directly applicable to current methods in animation. For example, we illustrate how our models can be used in the popular framework of blendshape animation to increase realism while keeping the system complexity low. Furthermore, our perceptual results can inform the design of realistic animation systems by highlighting common assumptions that over-simplify the dynamics of expressions.
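A minimal sketch of the blink idea, under assumed rather than measured durations: an asymmetric weight curve (fast close, short hold, slow reopen) that could drive an eyelid blendshape, in contrast to a symmetric keyframed approximation.

```python
# Toy blink profile driving an "eyelid-closed" blendshape weight in [0, 1] per frame.
import numpy as np

def blink_weights(fps=60, close_ms=90, hold_ms=30, open_ms=220):
    """Per-frame weights for one blink: fast close, short hold, slow ease-out reopen.
    Durations are illustrative, not the fitted values from the thesis."""
    n_close = max(1, int(fps * close_ms / 1000))
    n_hold = max(1, int(fps * hold_ms / 1000))
    n_open = max(1, int(fps * open_ms / 1000))
    closing = np.linspace(0.0, 1.0, n_close) ** 0.7    # rapid, slightly eased close
    hold = np.ones(n_hold)
    t = np.linspace(0.0, 1.0, n_open)
    opening = 1.0 - (3 * t ** 2 - 2 * t ** 3)          # smoothstep ease-out reopen
    return np.concatenate([closing, hold, opening])

weights = blink_weights()
print(len(weights), float(weights.max()))   # frame count for one blink at 60 fps, peak closure 1.0
# In a blendshape rig the weights would be applied per frame as:
#   mesh = neutral + weights[f] * (eyes_closed_shape - neutral)
```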
|
10 |
Implementação e avaliação de um apresentador virtual tridimensional / Implementation and evaluation of a three-dimensional virtual presenter - Hippler, Denise, 11 November 2008 (has links)
Advisor: Jose Mario De Martino / Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Eletrica e de Computação / Resumo: A virtual presenter is an agent represented by an anthropomorphic form that conveys information using a combination of expressive channels, which may involve elements such as speech, gestures, facial expressions and changes of posture, and may also include interaction with objects. Virtual presenters have a wide range of potential applications, for example as guides and instructors. This work reports the implementation and evaluation of a virtual news presenter that performs verbal movements synchronized with the acoustic speech signal and non-verbal movements that complement the speech, structuring the presentation. The animation is generated from information about the text to be presented orally and about the non-verbal movements that should accompany it, described by the user in an input file. In order to generate convincing animations, recordings of news presentations on a television channel were studied and behavior rules were derived: a set of movements that recur in given situations. This behavioral pattern inserts some movements into the animation automatically, complementing the movements given by the user. Finally, experiments with volunteers evaluated the contribution of the non-verbal movements to the presentation. / Abstract: A virtual presenter is an anthropomorphic graphical representation that transmits information using a combination of communication channels such as speech, gesture, facial expression, posture and interaction with virtual objects. Embodied Conversational Agents (ECAs) have several potential applications, for example as guides and instructors. This work reports on the implementation and evaluation of a virtual news presenter that makes verbal movements in synchrony with the acoustic speech signal and non-verbal movements that complement the speech, structuring the presentation. The input file contains the description of the text to be presented and the movements to be performed. In order to generate convincing animations, a real news presenter was studied, allowing the definition of behavior rules, a group of movements that occur in certain situations. This behavior model inserts some movements automatically into the animation, complementing the movements given by the user. Experiments were conducted with volunteers to evaluate the contribution of the non-verbal movements to the presentation. / Mestrado / Engenharia de Computação / Mestre em Engenharia Elétrica
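A speculative sketch of this kind of input and rule set (the tag syntax and the behavior rules below are invented for illustration; the dissertation's actual file format is not given here): the user marks up the text to be spoken with non-verbal movement tags, and rules add further movements automatically, such as a brow raise at the start of each sentence and a small nod at its end.

```python
# Toy marked-up presentation script and rule-based insertion of non-verbal movements.
import re

script = "Boa noite. <nod> As chuvas continuam no litoral <look:camera2> nesta semana."

def parse_script(text):
    """Split the marked-up text into (word, [movements]) tokens."""
    tokens = []
    for piece in text.split():
        m = re.fullmatch(r"<(\w+)(?::(\w+))?>", piece)
        if m:
            if tokens:
                tokens[-1][1].append(m.group(0))   # attach tag to the preceding word
            continue
        tokens.append([piece, []])
    return tokens

def apply_behavior_rules(tokens):
    """Automatically complement the user-given movements with rule-based ones."""
    for i, (word, moves) in enumerate(tokens):
        if i == 0 or tokens[i - 1][0].endswith("."):
            moves.append("<brow_raise>")   # rule: a new sentence starts with a brow raise
        if word.endswith("."):
            moves.append("<nod_small>")    # rule: a sentence end gets a small nod
    return tokens

for word, moves in apply_behavior_rules(parse_script(script)):
    print(word, moves)
```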
|