  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

A Facial Model and Animation Techniques for Animated Speech

King, Scott Alan 11 October 2001 (has links)
No description available.
12

A Comparison Study Between Results of 3D Virtual Facial Animation Methods: Skeleton, Blendshape, Audio-Driven Technique, and Vision-Based Capture

Mingzhu Wei (13158648) 27 July 2022 (has links)
In this paper, the authors explore different approaches to animating 3D facial emotions, some relying on manual keyframe facial animation and others on machine learning. To compare the approaches, the authors conducted an experiment consisting of side-by-side comparisons of animation clips generated by skeleton, blendshape, audio-driven, and vision-based capture techniques.

Ninety-five participants viewed twenty facial animation clips of characters expressing five distinct emotions (anger, sadness, happiness, fear, neutral), created using the four facial animation techniques. After viewing each clip, the participants were asked to score its naturalness on a 5-point Likert scale and to identify the emotion the character appeared to be conveying.

Although the happy clips differed slightly in their naturalness ratings, the naturalness scores for happiness tended to be consistent across the four methods. The naturalness ratings of the fear emotion created with skeletal animation were higher than for the other methods. Recognition of sadness and the neutral state was very low for all methods compared with the other emotions. Findings also showed that only a few participants were able to identify the clips that were machine generated rather than created by a human artist. The means, boxplots and Tukey HSD tests revealed that the skeleton approach had significantly higher naturalness ratings and a higher recognition rate than the other methods.
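As a rough illustration of the kind of analysis the abstract describes (group means plus a Tukey HSD comparison of naturalness ratings across the four techniques), the Python sketch below shows one way such data might be processed. The file name, column names and data layout are assumptions for illustration only, not details taken from the paper.

```python
# Hypothetical sketch: comparing mean naturalness ratings across four
# facial-animation techniques with a Tukey HSD test (statsmodels).
# The CSV layout ("technique", "naturalness") is an assumption, not from the paper.
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

ratings = pd.read_csv("naturalness_ratings.csv")  # one row per clip rating

# Mean naturalness per technique (5-point Likert scale)
print(ratings.groupby("technique")["naturalness"].mean())

# Pairwise Tukey HSD across the four techniques
result = pairwise_tukeyhsd(endog=ratings["naturalness"],
                           groups=ratings["technique"],
                           alpha=0.05)
print(result.summary())
```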
13

Model-Based Eye Detection and Animation

Trejo Guerrero, Sandra January 2006 (has links)
In this thesis we present a system that extracts eye motion from a video stream containing a human face and applies it to a virtual character. By eye motion estimation we mean the information that describes the location of the eyes in each frame of the video stream. Applying this estimate to a virtual character makes the virtual face move its eyes in the same way as the human face, synthesizing eye motion on the virtual character. A system capable of face tracking, eye detection and extraction, and finally iris position extraction from a video stream containing a human face has been developed. Once an image containing a face is extracted from the current frame of the video stream, eye detection and extraction, based on edge detection, is applied. The iris center is then determined by applying image preprocessing and region segmentation using edge features on the extracted eye image.

Once the eye motion has been extracted, it is translated into MPEG-4 Facial Animation Parameters (FAPs). This improves the quality and range of facial expressions that can be synthesized on a virtual character.
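The pipeline described (eye-region detection, then iris-centre localisation via preprocessing and segmentation) could be sketched roughly as below. This is a generic OpenCV approximation under assumed thresholds and a stock detector, not the thesis's actual implementation or its MPEG-4 FAP mapping.

```python
# Rough sketch of eye localisation and iris-centre estimation.
# Thresholds and the Haar cascade are illustrative assumptions only.
import cv2

def iris_center(eye_gray):
    """Estimate the iris centre in a cropped grayscale eye image."""
    blurred = cv2.GaussianBlur(eye_gray, (5, 5), 0)
    # The iris is darker than the sclera; keep the darkest region.
    _, mask = cv2.threshold(blurred, 60, 255, cv2.THRESH_BINARY_INV)
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # (x, y) centroid

frame = cv2.imread("frame.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Locate eye regions with a stock Haar cascade (a stand-in for the
# thesis's edge-based detector).
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
    center = iris_center(gray[y:y + h, x:x + w])
    if center is not None:
        print("iris centre (eye-local coords):", center)
```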
15

A quadratic deformation model for representing facial expressions

Obaid, Mohammad Hisham Rashid January 2011 (has links)
Techniques for facial expression generation are employed in several applications in computer graphics as well as in the processing of image and video sequences containing faces. Video coding standards such as MPEG-4 support facial expression animation. There are a number of facial expression representations that are application dependent or facial animation standard dependent and most of them require a lot of computational effort. We have developed a completely novel and effective method for representing the primary facial expressions using a model-independent set of deformation parameters (derived using rubber-sheet transformations), which can be easily applied to transform facial feature points. The developed mathematical model captures the necessary non-linear characteristics of deformations of facial muscle regions; producing well-recognizable expressions on images, sketches, and three dimensional models of faces. To show the effectiveness of the method, we developed a variety of novel applications such as facial expression recognition, expression mapping, facial animation and caricature generation.
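One common reading of a "quadratic deformation" of feature points is a second-order polynomial mapping of 2D coordinates. The sketch below fits and applies such a mapping with least squares; the exact rubber-sheet-derived parameterisation used in the thesis may differ, so treat the code and the toy point set as an assumed illustration.

```python
# Illustrative quadratic (second-order polynomial) deformation of 2D facial
# feature points; not necessarily the thesis's exact parameterisation.
import numpy as np

def quad_basis(pts):
    x, y = pts[:, 0], pts[:, 1]
    # Basis [1, x, y, x^2, x*y, y^2] per point
    return np.stack([np.ones_like(x), x, y, x * x, x * y, y * y], axis=1)

def fit_quadratic_deformation(neutral, expressive):
    """Least-squares fit of quadratic deformation parameters (6x2 matrix)."""
    A = quad_basis(neutral)
    params, *_ = np.linalg.lstsq(A, expressive, rcond=None)
    return params

def apply_quadratic_deformation(pts, params):
    return quad_basis(pts) @ params

# Toy example: neutral mouth-region points nudged toward a smile-like shape.
neutral = np.array([
    [0.20, 0.50], [0.35, 0.46], [0.50, 0.44], [0.65, 0.46],
    [0.80, 0.50], [0.30, 0.62], [0.50, 0.68], [0.70, 0.62],
])
expressive = neutral + np.array([
    [-0.02, -0.03], [-0.01, 0.00], [0.00, 0.01], [0.01, 0.00],
    [0.02, -0.03], [0.00, 0.01], [0.00, 0.02], [0.00, 0.01],
])
P = fit_quadratic_deformation(neutral, expressive)
print(apply_quadratic_deformation(neutral, P))
```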
16

Editing, Streaming and Playing of MPEG-4 Facial Animations

Rudol, Piotr, Wzorek, Mariusz January 2003 (has links)
Computer-animated faces have found their way into a wide variety of areas, from entertainment such as computer games, through television and film, to user interfaces using "talking heads". Animated faces are also becoming popular in web applications in the form of human-like assistants or newsreaders.

This thesis presents a few aspects of dealing with human face animations, namely editing, playing and transmitting such animations. It describes a standard for handling human face animations, MPEG-4 Face Animation, and shows the process of designing, implementing and evaluating applications compliant with this standard.

First, it presents changes introduced to the existing components of the Visage|toolkit package for dealing with facial animations, offered by the company Visage Technologies AB. It also presents the process of designing and implementing an application for editing facial animations compliant with the MPEG-4 Face Animation standard. Finally, it discusses several approaches to the problem of streaming facial animations over the Internet or a Local Area Network (LAN).
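As a toy illustration of the streaming problem the thesis discusses, the snippet below sends per-frame facial animation parameter values over UDP. The packet layout (a frame index plus a fixed number of floats), port and frame rate are assumptions for illustration; this is not the MPEG-4 FAP coded bitstream syntax and not the Visage|toolkit API.

```python
# Toy UDP sender for per-frame facial animation parameters.
# Packet layout (frame number + 68 floats) is an assumed simplification.
import socket
import struct
import time

NUM_PARAMS = 68          # MPEG-4 defines 68 FAPs; the values here are dummies
FRAME_RATE = 25.0        # assumed playback rate

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
addr = ("127.0.0.1", 5006)   # hypothetical player listening locally

for frame_no in range(250):
    params = [0.0] * NUM_PARAMS              # real values would come from an editor
    packet = struct.pack(f"<I{NUM_PARAMS}f", frame_no, *params)
    sock.sendto(packet, addr)
    time.sleep(1.0 / FRAME_RATE)
```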
17

Talking Heads - Models and Applications for Multimodal Speech Synthesis

Beskow, Jonas January 2003 (has links)
This thesis presents work in the area of computer-animated talking heads. A system for multimodal speech synthesis has been developed, capable of generating audiovisual speech animations from arbitrary text, using parametrically controlled 3D models of the face and head. A speech-specific direct parameterisation of the movement of the visible articulators (lips, tongue and jaw) is suggested, along with a flexible scheme for parameterising facial surface deformations based on well-defined articulatory targets.

To improve the realism and validity of facial and intra-oral speech movements, measurements from real speakers have been incorporated from several types of static and dynamic data sources. These include ultrasound measurements of tongue surface shape, dynamic optical motion tracking of face points in 3D, as well as electromagnetic articulography (EMA) providing dynamic tongue movement data in 2D. Ultrasound data are used to estimate target configurations for a complex tongue model for a number of sustained articulations. Simultaneous optical and electromagnetic measurements are performed and the data are used to resynthesise facial and intra-oral articulation in the model. A robust resynthesis procedure, capable of animating facial geometries that differ in shape from the measured subject, is described.

To drive articulation from symbolic (phonetic) input, for example in the context of a text-to-speech system, both rule-based and data-driven articulatory control models have been developed. The rule-based model effectively handles forward and backward coarticulation by target under-specification, while the data-driven model uses ANNs to estimate articulatory parameter trajectories, trained on trajectories resynthesised from optical measurements. The articulatory control models are evaluated and compared against other data-driven models trained on the same data. Experiments with ANNs for driving the articulation of a talking head directly from acoustic speech input are also reported.

A flexible strategy for generation of non-verbal facial gestures is presented. It is based on a gesture library organised by communicative function, where each function has multiple alternative realisations. The gestures can be used to signal e.g. turn-taking, back-channelling and prominence when the talking head is employed as output channel in a spoken dialogue system. A device-independent XML-based formalism for non-verbal and verbal output in multimodal dialogue systems is proposed, and it is described how the output specification is interpreted in the context of a talking head and converted into facial animation using the gesture library.

Through a series of audiovisual perceptual experiments with noise-degraded audio, it is demonstrated that the animated talking head provides significantly increased intelligibility over the audio-only case, in some cases not significantly below that provided by a natural face.

Finally, several projects and applications are presented where the described talking head technology has been successfully employed. Four different multimodal spoken dialogue systems are outlined, and the role of the talking heads in each of the systems is discussed. A telecommunication application where the talking head functions as an aid for hearing-impaired users is also described, as well as a speech training application where talking heads and language technology are used with the purpose of improving speech production in profoundly deaf children.
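The gesture library described above (organised by communicative function, with multiple alternative realisations per function) suggests a simple lookup structure. The sketch below is a guessed minimal form of such a library; the function names and gesture labels are invented for illustration and are not taken from the thesis.

```python
# Minimal sketch of a gesture library keyed by communicative function,
# with several alternative realisations per function (labels are invented).
import random

GESTURE_LIBRARY = {
    "turn_taking":      ["raise_eyebrows", "head_tilt_forward", "open_mouth_slightly"],
    "back_channelling": ["small_nod", "brief_smile", "eyebrow_flash"],
    "prominence":       ["head_nod_on_stressed_syllable", "short_eyebrow_raise"],
}

def realise(function_name):
    """Pick one alternative realisation for a communicative function."""
    alternatives = GESTURE_LIBRARY.get(function_name, [])
    return random.choice(alternatives) if alternatives else None

print(realise("back_channelling"))
```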
19

Animação facial por computador baseada em modelagem biomecanica / Computer facial animation based on biomechanical modeling

Correa, Renata 11 July 2007 (has links)
Advisors: Leo Pini Magalhães, Jose Mario De Martino / Master's thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Eletrica e de Computação / Abstract: The increasing search for realism in virtual characters found in many applications, such as movies, education and games, is the motivation of this thesis. The thesis describes an animation model that employs a biomechanical strategy for the development of a computational prototype, called SABiom. The technique used is based on the simulation of physical features of the human face, such as the layers of skin and muscles, which are modeled to allow simulation of the mechanical behavior of the facial tissue under the action of muscle forces. Although a face produces many different movements, the present work limits itself to simulations of facial expressions focusing on the lips. To validate the results obtained with SABiom, images of the virtual model produced by the prototype were compared with images of a human model. / Master's degree: Computer Engineering / Master in Electrical Engineering
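A biomechanical facial-tissue model of the kind described is often approximated with mass-spring skin layers driven by muscle forces. The sketch below integrates a single skin node with semi-implicit Euler steps; the constants, force model and structure are illustrative assumptions, not SABiom's actual formulation.

```python
# Illustrative mass-spring step for one skin node pulled by a muscle force;
# all constants are assumptions, not taken from SABiom.
import numpy as np

mass = 0.01          # kg, lumped node mass
stiffness = 30.0     # N/m, spring back to the node's rest position
damping = 0.5        # N*s/m
dt = 1e-3            # s, semi-implicit Euler step

rest = np.array([0.0, 0.0, 0.0])
pos = rest.copy()
vel = np.zeros(3)
muscle_force = np.array([0.05, 0.0, 0.02])   # N, constant pull for the demo

for _ in range(1000):
    spring_force = -stiffness * (pos - rest)
    force = spring_force - damping * vel + muscle_force
    vel += dt * force / mass     # update velocity first (semi-implicit Euler)
    pos += dt * vel              # then position with the new velocity

print("displaced node position:", pos)
```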
20

Ansiktsanimation i 3D : Vikten av Disneys 12 animationsprinciper: Appeal, Follow Through and Overlapping Action, Secondary Action och Anticipation / Facial Animation in 3D: The Importance of Disney's 12 Animation Principles: Appeal, Follow Through and Overlapping Action, Secondary Action and Anticipation

Jansson, Andreas, Eckardt, Mattias January 2013 (has links)
This bachelor's thesis examines the importance of four of Disney's 12 animation principles in 3D facial animation. The principles examined are Anticipation, Follow Through and Overlapping Action, Appeal and Secondary Action. This is done by first constructing a reference animation using all of Disney's animation principles, and then creating four additional, separate animations, each removing one of the aforementioned principles at a time. Six test subjects then viewed the animations and answered a series of questions in an interview format. We found that all four of the tested principles are important to an animation, but that the absence of either Appeal or Secondary Action was the most noticeable.
