1

Reconstruction and Analysis of 3D Individualized Facial Expressions

Wang, Jing January 2015
This thesis proposes a new way to analyze facial expressions through 3D scans of real-life faces. The analysis is based on learning facial motion vectors, the per-vertex differences between a neutral face and a face bearing an expression. Several expression-analysis efforts build on real-life face databases, such as the 2D image-based Cohn-Kanade AU-Coded Facial Expression Database and the Binghamton University 3D Facial Expression Database. A 2D image-based database is not sufficient to handle large pose variations or to deepen the general understanding of facial behavior, and the Binghamton database is aimed mainly at expression recognition, which makes it difficult to compare, resolve, and extend problems related to detailed 3D facial expression analysis. Our work therefore seeks a new and intuitive way of visualizing the detailed, point-by-point movements of a 3D face model during an expression. We built our own detailed 3D facial expression database in which every expression model shares the same mesh structure, so that differences between people can be compared for a given expression. The first step is to obtain identically structured but individually shaped face models: each head is recreated by deforming a generic model, at both a coarse and a fine level, to fit a laser-scanned face, and this procedure is repeated across human subjects to build the database. The second step is expression cloning: motion vectors are obtained by subtracting a subject's neutral head model from the same subject's expressive one, and the extracted vectors are applied to a different subject's neutral face. This cloning proves robust, fast, and easy to use. The last step analyzes the motion vectors from the second step: several subjects' expressions are transferred onto a single neutral face, and expression pairs are compared in two main regions, whole-face surface analysis and facial-muscle analysis. With smiling as the test expression, we find this scan-based approach a good way to visualize how differently people move their facial muscles for the same expression. People smile in a broadly similar manner, moving mouth and cheeks in similar orientations, yet each person moves in a unique way; the differences between individual smiles are differences in the movements they make.
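
The core of the pipeline is per-vertex arithmetic, possible only because every head is recreated with the same generic-model topology. Below is a minimal NumPy sketch of the motion-vector extraction, cloning, and surface-analysis steps; the random stand-in data and the raw vector addition are our illustrative assumptions (the thesis's deformation and cloning machinery is more involved), not its actual code.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5_000  # vertices; every head shares this topology (hypothetical stand-in data)

neutral_a = rng.normal(size=(N, 3))                           # subject A, neutral scan
smile_a = neutral_a + rng.normal(scale=0.01, size=(N, 3))     # subject A, smiling scan
neutral_b = rng.normal(size=(N, 3))                           # subject B, neutral scan

# Motion vectors: per-vertex difference between expression and neutral.
motion = smile_a - neutral_a                                  # (N, 3)

# Expression cloning: apply A's motion vectors to B's neutral face.
cloned_smile_b = neutral_b + motion

# Whole-face surface analysis: how far did each vertex move?
magnitude = np.linalg.norm(motion, axis=1)                    # (N,) per-vertex distance
print(f"mean displacement {magnitude.mean():.4f}, max {magnitude.max():.4f}")
```
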
2

Editing, Streaming and Playing of MPEG-4 Facial Animations

Rudol, Piotr, Wzorek, Mariusz January 2003
Computer-animated faces have found their way into a wide variety of areas, from entertainment such as computer games, through television and film, to user interfaces built around “talking heads”. Animated faces are also becoming popular in web applications, in the form of human-like assistants or newsreaders. This thesis presents several aspects of dealing with human face animations, namely editing, playing, and transmitting such animations. It describes a standard for handling human face animations, MPEG-4 Face Animation, and shows the process of designing, implementing, and evaluating applications compliant with this standard. First, it presents changes introduced to the existing components of the Visage|toolkit package for facial animation, offered by the company Visage Technologies AB. It then presents the design and implementation of an application for editing facial animations compliant with the MPEG-4 Face Animation standard. Finally, it discusses several approaches to streaming facial animations over the Internet or a local area network (LAN).
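
MPEG-4 Face Animation drives a face model with numbered Facial Animation Parameters (FAPs) expressed in face-proportional units (FAPUs), which is what makes an animation portable across models and cheap to stream. The sketch below illustrates that idea only; it is not the Visage|toolkit API or the actual MPEG-4 bitstream format, and the JSON-over-UDP wire format, names, and values are our assumptions.

```python
import json
import socket
import time
from dataclasses import dataclass, field

@dataclass
class FAPFrame:
    """One animation frame: FAP id -> displacement, in face-proportional units."""
    index: int
    faps: dict = field(default_factory=dict)

def stream(frames, host="127.0.0.1", port=9000, fps=25):
    """Send frames as JSON datagrams over UDP at a fixed rate (toy wire format)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for frame in frames:
        payload = json.dumps({"i": frame.index, "faps": frame.faps}).encode()
        sock.sendto(payload, (host, port))
        time.sleep(1.0 / fps)  # naive pacing; a real player needs a jitter buffer

# A tiny jaw-opening clip (FAP 3 is open_jaw in the standard's FAP table).
stream([FAPFrame(0, {3: 0}), FAPFrame(1, {3: 200})])
```
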
3

Perceptual Evaluation of Video-Realistic Speech

Geiger, Gadi, Ezzat, Tony, Poggio, Tomaso 28 February 2003
With many visual speech animation techniques now available, there is a clear need for systematic perceptual evaluation schemes. We describe our scheme and its application to a new video-realistic (potentially indistinguishable from real recorded video) visual-speech animation system called Mary 101. Two types of experiments were performed: (a) distinguishing visually between real and synthetic image sequences of the same utterances ("Turing tests"), and (b) gauging visual speech recognition by comparing lip-reading performance on real and synthetic image sequences of the same utterances ("intelligibility tests"). Subjects presented randomly with either real or synthetic image sequences could not tell the synthetic from the real sequences above chance level. Yet when asked to lip-read the utterances from the same image sequences, the same subjects recognized speech from real sequences significantly better than from synthetic ones, although performance on both was at levels reported in the lip-reading literature. We conclude from the two experiments that the animation of Mary 101 is adequate for providing the percept of a talking head, but that additional effort is required to improve the animation for applications that depend on lip-reading, such as rehabilitation and language learning. The two tasks can also be viewed as explicit and implicit perceptual discrimination: in the explicit task (a), each stimulus is classified directly as synthetic or real by detecting any difference between the two kinds of sequence, while the implicit task (b) compares visual speech recognition on real versus synthetic sequences. Our results suggest that implicit perceptual discrimination is a more sensitive method than explicit discrimination for distinguishing synthetic from real image sequences.
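
Whether subjects "could not tell the synthetic from the real sequences above chance level" is the kind of claim a one-sided binomial test pins down. A minimal sketch with invented counts (the paper's actual trial numbers are not reproduced here):

```python
from scipy.stats import binomtest

# Hypothetical counts: 132 correct real/synthetic judgments out of 250 trials.
correct, trials = 132, 250

# H0: subjects guess at chance (p = 0.5); H1: they discriminate above chance.
result = binomtest(correct, trials, p=0.5, alternative="greater")
print(f"accuracy = {correct / trials:.3f}, p = {result.pvalue:.3f}")
# A large p-value is consistent with the paper's explicit ("Turing") task
# staying at chance, while the intelligibility task separated real from
# synthetic sequences.
```
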
4

Photorealistic models for pupil light reflex and iridal pattern deformation

Pamplona, Vitor Fernando January 2008
This thesis introduces a physiologically-based model for the pupil light reflex (PLR) and an image-based model for iridal pattern deformation. The PLR model expresses the pupil diameter over time as a function of the environment lighting; described by a delay differential equation, it naturally adapts the pupil diameter even to abrupt changes in light conditions. Since the parameters of the PLR model were derived from measured data, it correctly simulates the behavior of the human pupil for an average individual. The model is extended to include latency, constriction and dilation velocities, individual differences, and some constrained random noise to model hippus. The predictability and quality of the simulations were validated through comparisons of modeled results against measured data derived from experiments also described in this work. Another contribution is a model for realistic deformation of the iris pattern as a function of pupil dilation and constriction. The salient features of the iris are tracked in photographs taken from several volunteers during an induced pupil-dilation process, and an average behavior of the iridal features is defined. The effectiveness and quality of the results are demonstrated by comparing the renderings produced by the models with photographs and videos captured from real irises. The resulting models produce high-fidelity appearance effects and can be used to produce real-time predictive animations of the pupil and iris under variable lighting conditions. Combined, the proposed models can bring facial animation, and animation of the human iris in particular, to new photorealistic standards.
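
A numerical sketch of the kind of delayed adaptation the abstract describes: the pupil diameter relaxes toward an equilibrium diameter for the current luminance, but reacts only after a latency. The equilibrium curve below is the classic Moon–Spencer fit from the literature; the first-order relaxation, time constant, and latency values are our illustrative assumptions, not the thesis's fitted delay differential equation.

```python
import math

def steady_state_diameter(luminance):
    """Moon-Spencer fit: equilibrium pupil diameter (mm) vs luminance (blondels)."""
    return 4.9 - 3.0 * math.tanh(0.4 * (math.log10(luminance) - 0.5))

def simulate(luminance_fn, t_end=6.0, dt=0.001, latency=0.25, tau=0.5):
    """Euler integration of dD/dt = (D_ss(L(t - latency)) - D) / tau."""
    d = steady_state_diameter(luminance_fn(0.0))   # start at equilibrium
    history = []
    for i in range(int(t_end / dt)):
        t = i * dt
        lum = luminance_fn(max(0.0, t - latency))  # delayed light signal
        d += dt * (steady_state_diameter(lum) - d) / tau
        history.append((t, d))
    return history

# Abrupt light step at t = 1 s: 10 -> 1000 blondels (pupil should constrict).
trace = simulate(lambda t: 10.0 if t < 1.0 else 1000.0)
print(f"final diameter: {trace[-1][1]:.2f} mm")
```
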
