About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

A Facial Model and Animation Techniques for Animated Speech

King, Scott Alan 11 October 2001 (has links)
No description available.
12

A COMPARISON STUDY BETWEEN RESULTS OF 3D VIRTUAL FACIAL ANIMATION METHODS: SKELETON, BLENDSHAPE, AUDIO-DRIVEN TECHNIQUE, AND VISION-BASED CAPTURE

Mingzhu Wei (13158648) 27 July 2022 (has links)
In this paper, the authors explore different approaches to animating 3D facial emotions, some of which use manual keyframe facial animation and some of which use machine learning. To compare the approaches, the authors conducted an experiment consisting of side-by-side comparisons of animation clips generated by skeleton, blendshape, audio-driven, and vision-based capture techniques.

Ninety-five participants viewed twenty facial animation clips of characters expressing five distinct emotions (anger, sadness, happiness, fear, neutral), created using the four facial animation techniques. After viewing each clip, participants were asked to rate its naturalness on a 5-point Likert scale and to identify the emotion the character appeared to be conveying.

Although the happy clips differed slightly in their naturalness ratings, the naturalness scores for happiness tended to be consistent across the four methods. Naturalness ratings for fear were higher with skeletal animation than with the other methods. Recognition of sadness and the neutral state was very low for all methods compared with the other emotions. Findings also showed that a few participants were able to identify the clips that were machine generated rather than created by a human artist. The means, boxplots, and HSD tests revealed that the skeleton approach received significantly higher naturalness ratings and a higher recognition rate than the other methods.
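As an illustration of the statistical comparison the abstract mentions (means, boxplots, and HSD, here assumed to mean Tukey's HSD), a minimal sketch on synthetic Likert ratings could look like the following. The ratings, group sizes, and library choices are illustrative and are not taken from the study.

```python
# Minimal sketch of a means + Tukey HSD comparison across four animation
# methods, assuming "HSD" refers to Tukey's test. The 5-point Likert ratings
# below are synthetic, not the study's data.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
methods = ["skeleton", "blendshape", "audio-driven", "vision-based"]
# Hypothetical naturalness ratings (1-5) from 95 participants per method.
ratings = {m: rng.integers(1, 6, size=95) for m in methods}

for m in methods:
    print(f"{m:>13}: mean={ratings[m].mean():.2f}  sd={ratings[m].std(ddof=1):.2f}")

# One-way ANOVA across the four methods, then pairwise Tukey HSD.
print(f_oneway(*ratings.values()))
scores = np.concatenate(list(ratings.values()))
groups = np.repeat(methods, 95)
print(pairwise_tukeyhsd(scores, groups, alpha=0.05))
```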
13

Model-Based Eye Detection and Animation

Trejo Guerrero, Sandra January 2006 (has links)
In this thesis we present a system that extracts eye motion from a video stream containing a human face and applies this motion to a virtual character. By eye motion estimation we mean the information that describes the location of the eyes in each frame of the video stream. By applying this estimate to a virtual character, the virtual face moves its eyes in the same way as the human face, synthesizing eye motion on the character. A system has been developed that performs face tracking, eye detection and extraction, and finally iris position extraction from a video stream containing a human face. Once an image containing the face is extracted from the current frame, the eyes are detected and extracted, based on edge detection. The iris center is then determined by applying image preprocessing and region segmentation using edge features on the extracted eye image.

Once the eye motion has been extracted, it is translated into MPEG-4 Facial Animation Parameters (FAPs). In this way we can improve both the quality and the number of facial expressions that can be synthesized on a virtual character.
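A minimal sketch of edge-based iris localization in the spirit of the pipeline described above is shown below, using OpenCV's Canny edge detector and Hough circle transform on a synthetic eye image. The image, the thresholds, and the final mapping to MPEG-4 FAPs are illustrative assumptions, not the thesis's implementation.

```python
# Edge-based iris localization sketch: edge detection on an extracted eye
# region, then circle fitting for the iris center. The synthetic eye image
# and all thresholds are made up for illustration.
import cv2
import numpy as np

# Synthetic "eye region": light background with a dark iris disc.
eye = np.full((120, 200), 220, dtype=np.uint8)
cv2.circle(eye, (100, 60), 25, 40, thickness=-1)  # iris at (100, 60), radius 25

edges = cv2.Canny(eye, 50, 150)           # edge map used by the detection step
print("edge pixels:", int((edges > 0).sum()))

# Fit a circle to the iris using the Hough transform on the blurred image.
blurred = cv2.GaussianBlur(eye, (9, 9), 2)
circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=100, param2=20, minRadius=10, maxRadius=40)

if circles is not None:
    cx, cy, r = circles[0, 0]
    print(f"estimated iris center: ({cx:.0f}, {cy:.0f}), radius {r:.0f}")
    # In the full system this center would be mapped to the MPEG-4 FAPs that
    # control eyeball rotation on the virtual character.
```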
15

THREE DIMENSIONAL MODELING AND ANIMATION OF FACIAL EXPRESSIONS

Lin, Alice J. 01 January 2011 (has links)
Facial expression and animation are important aspects of 3D environments featuring human characters. These animations are frequently used in many kinds of applications, and there have been many efforts to increase their realism. Three aspects are still stimulating active research: detailed, subtle facial expressions; the process of rigging a face; and the transfer of an expression from one person to another. This dissertation focuses on these three aspects.

A system for freely designing and creating detailed, dynamic, and animated facial expressions is developed. The presented pattern functions produce detailed and animated facial expressions. The system produces realistic results with fast performance, and allows users to manipulate it directly and see immediate results. Two unique methods for generating real-time, vivid, animated tears have been developed and implemented. One method generates a teardrop that continually changes its shape as the tear drips down the face. The other generates a shedding tear, which seamlessly connects with the skin as it flows along the surface of the face but remains an individual object. Both methods broaden CG and increase the realism of facial expressions.

A new method to automatically set the bones on facial/head models, to speed up the rigging of a human face, is also developed. To accomplish this, the vertices that describe the face/head, as well as the relationships between each part of the face/head, are grouped. The average distance between pairs of vertices is used to place the head bones. To set the bones in the face with multiple densities, the mean value of the vertices in each group is measured. The time saved with this method is significant.

A novel method to produce realistic expressions and animations by transferring an existing expression to a new facial model is developed. The approach is to transform the source model into the target model, which then has the same topology as the source model. The displacement vectors are calculated, each vertex in the source model is mapped to the target model, and the spatial relationships of each mapped vertex are constrained.
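Where the abstract describes placing head bones from the average distance between vertex pairs and facial bones from group means, a simplified sketch of that idea follows; the mesh, the vertex groups, and the placement rules are invented for illustration and are not the dissertation's algorithm.

```python
# Simplified bone placement from grouped vertices: head bones are sized and
# placed using the average distance between vertex pairs, and each facial
# bone is placed at the mean (centroid) of its vertex group. The mesh and
# the groups are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(1)
vertices = rng.normal(size=(500, 3))          # stand-in head mesh vertices
groups = {                                    # hypothetical vertex groups
    "jaw":    np.arange(0, 120),
    "brow_l": np.arange(120, 200),
    "brow_r": np.arange(200, 280),
    "lips":   np.arange(280, 500),
}

# Average pairwise distance over a vertex sample, used for the head bones.
sample = vertices[rng.choice(len(vertices), size=100, replace=False)]
diffs = sample[:, None, :] - sample[None, :, :]
avg_pair_dist = np.linalg.norm(diffs, axis=-1).mean()
print(f"average pairwise distance: {avg_pair_dist:.3f}")

# Place one bone per facial group at the group's centroid.
bone_positions = {name: vertices[idx].mean(axis=0) for name, idx in groups.items()}
for name, pos in bone_positions.items():
    print(f"{name:>7} bone at {np.round(pos, 3)}")
```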
16

A quadratic deformation model for representing facial expressions

Obaid, Mohammad Hisham Rashid January 2011 (has links)
Techniques for facial expression generation are employed in several computer graphics applications as well as in the processing of images and video sequences containing faces. Video coding standards such as MPEG-4 support facial expression animation. Existing facial expression representations tend to depend on a particular application or facial animation standard, and most of them require considerable computational effort. We have developed a novel and effective method for representing the primary facial expressions using a model-independent set of deformation parameters, derived using rubber-sheet transformations, which can easily be applied to transform facial feature points. The developed mathematical model captures the necessary non-linear characteristics of the deformations of facial muscle regions, producing well-recognizable expressions on images, sketches, and three-dimensional face models. To show the effectiveness of the method, we developed a variety of applications such as facial expression recognition, expression mapping, facial animation, and caricature generation.
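A quadratic deformation in this sense maps each feature point (x, y) through second-order polynomials, x' = a0 + a1·x + a2·y + a3·x² + a4·xy + a5·y² (and similarly for y'). The sketch below applies such a transform to a few 2D feature points; the coefficient values are invented, whereas the thesis derives its parameters from rubber-sheet transformations of facial muscle regions.

```python
# Minimal sketch of a quadratic (second-order polynomial) deformation applied
# to 2D facial feature points. Coefficients are invented for illustration.
import numpy as np

def quadratic_deform(points, ax, ay):
    """Map (x, y) -> (x', y') with x' = ax . [1, x, y, x^2, xy, y^2], etc."""
    x, y = points[:, 0], points[:, 1]
    basis = np.stack([np.ones_like(x), x, y, x * x, x * y, y * y], axis=1)
    return np.stack([basis @ ax, basis @ ay], axis=1)

# Hypothetical mouth-corner feature points in a normalized face frame.
feature_points = np.array([[-0.3, -0.4], [0.0, -0.45], [0.3, -0.4]])

# Invented coefficients: identity plus a slight upward quadratic "smile" pull.
ax = np.array([0.0, 1.0, 0.0, 0.0, 0.0, 0.0])
ay = np.array([0.0, 0.0, 1.0, 0.8, 0.0, 0.0])   # y' = y + 0.8 * x^2

print(quadratic_deform(feature_points, ax, ay))
```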
17

Generating Facial Animation With Emotions In A Neural Text-To-Speech Pipeline

Igeland, Viktor January 2019 (has links)
This thesis presents the work of incorporating facial animation with emotions into a neural text-to-speech pipeline. The project aims to allow a digital human to utter sentences given only text, removing the need for video input. Our solution consists of a neural network, placed in a neural text-to-speech pipeline, that generates blend shape weights from speech. We build on ideas from previous work and implement a recurrent neural network using four LSTM layers, and later extend this implementation by incorporating emotions. The emotions are learned by the network itself via an emotion layer and are used at inference time to produce the desired emotion. While using LSTMs for speech-driven facial animation is not a new idea, it has not previously been combined with emotional states that are learned by the network itself: earlier approaches are either only two-dimensional, of complicated design, or require manual labelling of the emotional states. We therefore implement a network of simple design, taking advantage of the sequence-processing ability of LSTMs and combining it with the idea of emotional states. We trained several variations of the network on data captured with a head-mounted camera, and the results of the best-performing model were used in a subjective evaluation. During the evaluation the participants were shown several videos and asked to rate the naturalness of the face uttering the sentence. The results showed that the naturalness of the face depends strongly on which emotion vector is used, as some vectors limited the mobility of the face. However, our best-performing emotion vector was rated at the same level of naturalness as the ground truth, showing that the method is successful. The purpose of the thesis was fulfilled, as our implementation demonstrates one way of incorporating facial animation into a text-to-speech pipeline.
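Where the abstract describes four LSTM layers generating blend shape weights from speech, with emotions learned by the network via an emotion layer, a rough sketch of such an architecture is given below. The framework (PyTorch) and all sizes — audio feature dimension, hidden size, number of blend shapes and emotions — are assumptions for illustration, not the thesis's actual configuration.

```python
# Rough sketch: four LSTM layers mapping per-frame speech features to blend
# shape weights, with a learned per-emotion vector concatenated to the input.
# All dimensions are assumed for illustration.
import torch
import torch.nn as nn

class SpeechToBlendshapes(nn.Module):
    def __init__(self, audio_dim=29, emo_count=6, emo_dim=16,
                 hidden=256, blendshapes=51):
        super().__init__()
        # One learned vector per emotion ("emotion layer"); chosen at inference.
        self.emotion = nn.Embedding(emo_count, emo_dim)
        self.lstm = nn.LSTM(audio_dim + emo_dim, hidden,
                            num_layers=4, batch_first=True)
        self.head = nn.Linear(hidden, blendshapes)

    def forward(self, audio, emotion_id):
        # audio: (batch, frames, audio_dim); emotion_id: (batch,)
        emo = self.emotion(emotion_id)                      # (batch, emo_dim)
        emo = emo.unsqueeze(1).expand(-1, audio.size(1), -1)
        x, _ = self.lstm(torch.cat([audio, emo], dim=-1))
        return self.head(x)                                 # per-frame weights

model = SpeechToBlendshapes()
weights = model(torch.randn(2, 100, 29), torch.tensor([0, 3]))
print(weights.shape)  # torch.Size([2, 100, 51])
```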
18

Editing, Streaming and Playing of MPEG-4 Facial Animations

Rudol, Piotr, Wzorek, Mariusz January 2003 (has links)
Computer-animated faces have found their way into a wide variety of areas, from entertainment such as computer games, through television and film, to user interfaces with "talking heads". Animated faces are also becoming popular in web applications in the form of human-like assistants or newsreaders.

This thesis presents a few aspects of dealing with human face animations, namely editing, playing, and transmitting such animations. It describes a standard for handling human face animations, MPEG-4 Face Animation, and shows the process of designing, implementing, and evaluating applications compliant with this standard.

First, it presents changes introduced to the existing components of the Visage|toolkit package for facial animation, offered by the company Visage Technologies AB. It then presents the design and implementation of an application for editing facial animations compliant with the MPEG-4 Face Animation standard. Finally, it discusses several approaches to streaming facial animations over the Internet or a local area network (LAN).
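As a simplified illustration of the streaming problem the thesis discusses, the sketch below packs per-frame facial animation parameter values into a fixed binary record that could be sent over UDP or TCP. This is not the MPEG-4 FBA bitstream syntax, which defines its own compressed encoding; the record layout and the example FAP index are assumptions.

```python
# Simplified illustration of streaming per-frame facial animation parameters:
# each frame of FAP values is packed into a small binary record. This is NOT
# the MPEG-4 FBA bitstream encoding; the layout is invented for illustration.
import struct

NUM_FAPS = 68  # MPEG-4 defines 68 facial animation parameters

def pack_frame(frame_index: int, faps: list) -> bytes:
    """Pack a frame index plus NUM_FAPS signed 16-bit FAP values."""
    assert len(faps) == NUM_FAPS
    return struct.pack(f"!I{NUM_FAPS}h", frame_index, *faps)

def unpack_frame(payload: bytes):
    values = struct.unpack(f"!I{NUM_FAPS}h", payload)
    return values[0], values[1:]

# Round-trip one frame (e.g. a slight jaw opening on FAP 3, open_jaw).
faps = [0] * NUM_FAPS
faps[2] = 120
packet = pack_frame(0, faps)
index, decoded = unpack_frame(packet)
print(len(packet), "bytes per frame; frame", index, "open_jaw =", decoded[2])
```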
19

Talking Heads - Models and Applications for Multimodal Speech Synthesis

Beskow, Jonas January 2003 (has links)
This thesis presents work in the area of computer-animated talking heads. A system for multimodal speech synthesis has been developed, capable of generating audiovisual speech animations from arbitrary text, using parametrically controlled 3D models of the face and head. A speech-specific direct parameterisation of the movement of the visible articulators (lips, tongue and jaw) is suggested, along with a flexible scheme for parameterising facial surface deformations based on well-defined articulatory targets.

To improve the realism and validity of facial and intra-oral speech movements, measurements from real speakers have been incorporated from several types of static and dynamic data sources. These include ultrasound measurements of tongue surface shape, dynamic optical motion tracking of face points in 3D, as well as electromagnetic articulography (EMA) providing dynamic tongue movement data in 2D. Ultrasound data are used to estimate target configurations for a complex tongue model for a number of sustained articulations. Simultaneous optical and electromagnetic measurements are performed and the data are used to resynthesise facial and intra-oral articulation in the model. A robust resynthesis procedure, capable of animating facial geometries that differ in shape from the measured subject, is described.

To drive articulation from symbolic (phonetic) input, for example in the context of a text-to-speech system, both rule-based and data-driven articulatory control models have been developed. The rule-based model effectively handles forward and backward coarticulation by target under-specification, while the data-driven model uses ANNs to estimate articulatory parameter trajectories, trained on trajectories resynthesised from optical measurements. The articulatory control models are evaluated and compared against other data-driven models trained on the same data. Experiments with ANNs for driving the articulation of a talking head directly from acoustic speech input are also reported.

A flexible strategy for generation of non-verbal facial gestures is presented. It is based on a gesture library organised by communicative function, where each function has multiple alternative realisations. The gestures can be used to signal e.g. turn-taking, back-channelling and prominence when the talking head is employed as output channel in a spoken dialogue system. A device-independent XML-based formalism for non-verbal and verbal output in multimodal dialogue systems is proposed, and it is described how the output specification is interpreted in the context of a talking head and converted into facial animation using the gesture library.

Through a series of audiovisual perceptual experiments with noise-degraded audio, it is demonstrated that the animated talking head provides significantly increased intelligibility over the audio-only case, in some cases not significantly below that provided by a natural face.

Finally, several projects and applications are presented where the described talking head technology has been successfully employed. Four different multimodal spoken dialogue systems are outlined, and the role of the talking heads in each of the systems is discussed. A telecommunication application where the talking head functions as an aid for hearing-impaired users is also described, as well as a speech training application where talking heads and language technology are used with the purpose of improving speech production in profoundly deaf children.
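As a toy illustration of the rule-based coarticulation-by-target-under-specification idea mentioned above, the sketch below lets each phoneme specify targets only for the articulatory parameters it constrains and fills the remaining parameters by interpolating between the nearest specified targets. The parameter names, targets, and durations are invented; this is not the thesis's control model.

```python
# Toy coarticulation by target under-specification: phonemes constrain only
# some articulatory parameters; unspecified parameters are interpolated
# between neighbouring specified targets. All values are invented.
import numpy as np

PARAMS = ["jaw_open", "lip_round", "tongue_tip"]
# (phoneme, duration in frames, {parameter: target}); absent = unspecified.
segments = [
    ("p", 5,  {"jaw_open": 0.0, "lip_round": 0.2}),   # tongue_tip free
    ("a", 10, {"jaw_open": 0.9}),                     # lips and tongue free
    ("t", 5,  {"jaw_open": 0.2, "tongue_tip": 1.0}),  # lips free
]

def trajectories(segments, params):
    total = sum(d for _, d, _ in segments)
    out = {}
    for p in params:
        # Keyframe at each segment midpoint where the parameter is specified.
        t, keys, vals = 0, [], []
        for _, dur, targets in segments:
            if p in targets:
                keys.append(t + dur / 2)
                vals.append(targets[p])
            t += dur
        if not keys:                      # never specified: rest position
            keys, vals = [0], [0.0]
        # Linear interpolation; the ends are held at the nearest target.
        out[p] = np.interp(np.arange(total), keys, vals)
    return out

for name, traj in trajectories(segments, PARAMS).items():
    print(name, np.round(traj, 2))
```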
