21

Morph targets and bone rigging for 3D facial animation : A comparative case study

Larsson, Niklas January 2017 (has links)
Facial animation is an integral and growing part of 3D games. This study investigates how the two most common methods of 3D facial animation compare to each other, with the goal of summarizing the state of the field and providing animators and software developers with relevant recommendations. The two most widely used methods, morph target animation and bone-driven animation, are examined and their strong and weak aspects presented. The investigation is based on literature analysis as well as a comparative case study in which multiple formal and informal sources were compared according to seven parameters: performance, production time, technical limitations, detail and realism, ease of use, cross-platform compatibility, and common combinations of systems. The strengths and weaknesses of the two methods are compared and discussed, followed by a conclusion recommending which method is preferable under different circumstances. In some cases the results are inconclusive due to a lack of data. It is concluded that a combination of morph target and bone-driven animation gives the most artistic control if time is not limited.
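The first of the two methods the thesis compares can be sketched in a few lines. The snippet below is a minimal, illustrative blend-shape (morph target) evaluator, not code from the thesis: each morph target stores an alternative position for every vertex, and the animated face is the base mesh offset by weighted per-vertex deltas. The vertex data and target names are invented for illustration.

```python
def morph_target_blend(base, targets, weights):
    """Blend-shape evaluation: offset each base vertex by the weighted
    deltas of every active morph target."""
    result = []
    for i, (bx, by, bz) in enumerate(base):
        dx = dy = dz = 0.0
        for name, verts in targets.items():
            w = weights.get(name, 0.0)
            tx, ty, tz = verts[i]
            dx += w * (tx - bx)
            dy += w * (ty - by)
            dz += w * (tz - bz)
        result.append((bx + dx, by + dy, bz + dz))
    return result

# Two vertices of a toy mesh and a single "smile" target that raises them.
base = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
targets = {"smile": [(0.0, 0.2, 0.0), (1.0, 0.2, 0.0)]}
print(morph_target_blend(base, targets, {"smile": 0.5}))
# → [(0.0, 0.1, 0.0), (1.0, 0.1, 0.0)] — each vertex moves halfway to the target
```

Bone-driven animation, by contrast, moves vertices indirectly through weighted joint transforms, which is why the two methods trade off differently on the thesis's parameters such as production time and artistic control.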
22

Designing an Anatomic Based Eyelid Rig for the Facilitation of Expressive Anthropomorphized Character Animation

English, Ryan Timothy 14 December 2010 (has links)
No description available.
23

The Effect of Facial Expressions Valence on the Perception of the Body Motions of Virtual Groups / Effekten av Känslomässiga Ansiktsuttryck på Uppfattningen av Kroppsrörelser hos Virtuella Grupper

Palmberg, Robin January 2016 (has links)
Understanding which modality affects the perception of emotion would bring us closer to understanding and dissecting emotions. Earlier research has shown that body motions can help disambiguate ambiguous facial expressions, but no study has examined whether facial expressions can affect the perception of body motion. This study therefore examines whether facial expressions can affect the perception of full-body emotions. A perceptual experiment was conducted in which 22 subjects were exposed to stimuli consisting of scenes with virtual characters expressing emotions (see figure 1). It was concluded that the facial expression does affect the perceived joint valence within the group of characters: not only can body motions help disambiguate ambiguous facial expressions, but facial expressions can help disambiguate ambiguous body motions and alter the perception of distinct body motions perceived as either positive or negative, even without faces and hands showing. It is also concluded that perceived trustworthiness is affected by the valence of the facial expression, which supports recent studies that used valence as a factor to determine what makes a face trustworthy and dominant. The perceived relationship within the group and the dominance of the group as a whole were also examined, but neither gave results clear enough to draw conclusions, except that positive valence makes the perceived relationship within the group appear closer. The study was conducted using virtual agents but is meant to help better understand people in everyday situations.
24

Analysis and Construction of Engaging Facial Forms and Expressions: Interdisciplinary Approaches from Art, Anatomy, Engineering, Cultural Studies, and Psychology

Kim, Leejin 19 November 2013 (has links)
The topic of this dissertation is the anatomical, psychological, and cultural examination of the human face in order to construct an effective anatomy-driven 3D virtual face customization and action model. To gain a broad perspective on all aspects of the face, theories and methodology from the fields of art, engineering, anatomy, psychology, and cultural studies have been analyzed and implemented. The computer-generated facial customization and action model was designed based on the collected data. Using this customization system, a culturally specific attractive face in Korean popular culture, “kot-mi-nam (flower-like beautiful guy),” was modeled and analyzed as a case study. The “kot-mi-nam” phenomenon is surveyed in its textual, visual, and contextual aspects, which reveals the gender and sexuality fluidity of its masculinity. The analysis and the actual development of the model organically co-construct each other in an interwoven process. Chapter 1 introduces anatomical studies of the human face, psychological theories of face recognition and attractiveness, and state-of-the-art face construction projects in various fields. Chapters 2 and 3 present the Bezier curve-based 3D facial customization (BCFC) and the Multi-layered Facial Action Model (MFAM), both based on the analysis of human anatomy, which achieve cost-effective yet realistic facial animation without using 3D scanned data. In the experiments, facial customization for gender, race, fat, and age showed that BCFC achieved 25.20% better performance than the existing program Facegen and 44.12% better than Facial Studio. The experimental results also demonstrated the realistic quality and effectiveness of MFAM compared with the blend shape technique, improving the animated facial area by 2.87% and 0.03% per second for happiness and anger expressions, respectively.
In Chapter 4, according to the analysis based on BCFC, the 3D face of an average kot-mi-nam is close to gender neutral (male: 50.38%, female: 49.62%), and Caucasian (66.42-66.40%). Culturally-specific images can be misinterpreted in different cultures, due to their different languages, histories, and contexts. This research demonstrates that facial images can be affected by the cultural tastes of the makers and can also be interpreted differently by viewers in different cultures.
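The curve machinery behind a Bezier-based customization system like BCFC can be illustrated briefly. The sketch below evaluates a standard 2D cubic Bezier curve; the "jawline" control points are invented for illustration and are not taken from the dissertation. The customization idea is that moving the interior control points reshapes the facial contour smoothly.

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a 2D cubic Bezier curve at parameter t in [0, 1]
    using the Bernstein basis."""
    u = 1.0 - t
    b0, b1, b2, b3 = u**3, 3 * u**2 * t, 3 * u * t**2, t**3
    return (b0 * p0[0] + b1 * p1[0] + b2 * p2[0] + b3 * p3[0],
            b0 * p0[1] + b1 * p1[1] + b2 * p2[1] + b3 * p3[1])

# A hypothetical jawline segment: adjusting p1 and p2 reshapes the contour,
# which is the kind of handle a parametric face customization model exposes.
jaw = [(0.0, 1.0), (0.3, 0.2), (0.7, 0.1), (1.0, 0.0)]
print(cubic_bezier(*jaw, 0.0))  # → (0.0, 1.0), the curve starts at p0
print(cubic_bezier(*jaw, 1.0))  # → (1.0, 0.0), the curve ends at p3
```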
25

Facial Modelling and animation trends in the new millennium : a survey

Radovan, Mauricio 11 1900 (has links)
M.Sc (Computer Science) / Facial modelling and animation is considered one of the most challenging areas in the animation world. Since Parke and Waters’s (1996) comprehensive book, no major work encompassing the entire field of facial animation has been published. This thesis covers Parke and Waters’s work while also surveying developments in the field since 1996. It describes, analyses, and compares (where applicable) the existing techniques and practices used to produce facial animation. Related techniques are grouped in the same chapter and described chronologically, outlining their differences as well as their advantages and disadvantages. The thesis concludes with exploratory work towards a talking head for Northern Sotho: facial animation and lip synchronisation of a fragment of Northern Sotho is performed using software tools primarily designed for English. / Computing
26

Verification, Validation and Evaluation of the Virtual Human Markup Language (VHML) / Verifiering, validering och utvärdering av Virtual Human Markup Language (VHML)

Gustavsson, Camilla, Strindlund, Linda, Wiknertz, Emma January 2002 (has links)
Human communication is inherently multimodal. The information conveyed through body language, facial expression, gaze, intonation, speaking style, etc. are all important components of everyday communication. One issue within computer science is how to build multimodal agent-based systems, i.e. systems that interact with users through several channels. Such systems can include Virtual Humans. A Virtual Human might be a complete creature with a whole body including head, arms, legs, etc., or only a head, a Talking Head. The aim of the Virtual Human Markup Language (VHML) is to control Virtual Humans with regard to speech, facial animation, facial gestures, and body animation. These parts had previously been implemented and investigated separately; VHML aims to combine them. In this thesis VHML is verified, validated, and evaluated in order to reach that aim, making VHML more solid, homogeneous, and complete. Further, a Virtual Human has to communicate with the user, and even though VHML supports a number of other communication channels, an important one is speech. Because the Virtual Human has to be able to interact with the user, a dialogue between the user and the Virtual Human has to be created. These dialogues tend to expand tremendously, hence the Dialogue Management Tool (DMT) was developed. Having a tool makes it easier for programmers to create and maintain dialogues for the interaction. Finally, to demonstrate the work done in this thesis, a Talking Head application, The Mystery at West Bay Hospital, has been developed and evaluated, showing the usefulness of the DMT when creating dialogues. The work accomplished within this project has helped simplify the development of Talking Head applications.
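The kind of markup VHML combines can be sketched as XML tags wrapping text to be spoken, with the tag driving the facial animation. The fragment and parser below are a hedged illustration only: the element and attribute names are assumptions for the sketch, not quoted from the VHML specification, and the example text alludes to the thesis's demo application.

```python
import xml.etree.ElementTree as ET

# Hypothetical VHML-style fragment: emotion tags around utterances.
# Tag and attribute names here are illustrative, not from the VHML spec.
fragment = """<vhml>
  <happy intensity="60">Welcome to West Bay Hospital.</happy>
  <surprised>Did you hear that noise?</surprised>
</vhml>"""

root = ET.fromstring(fragment)
for elem in root:
    # A renderer would select a facial expression from the tag name,
    # scale it by the intensity, and pass the text to the speech engine.
    intensity = elem.attrib.get("intensity", "default")
    print(f"{elem.tag} ({intensity}): {elem.text.strip()}")
```

A Talking Head application would hand each parsed element to the speech and facial-animation subsystems in sync, which is the combination VHML is designed to coordinate.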
27

Morphable 3D Facial Animation Based on Thin Plate Splines

Erdogdu, Aysu 01 May 2010 (has links) (PDF)
The aim of this study is to present a novel three-dimensional (3D) facial animation method for morphing emotions and facial expressions from one face model to another. For this purpose, smooth and realistic face models were animated with thin plate splines (TPS). Neutral face models were animated and compared with the actual expressive face models. Neutral and expressive face models were obtained from subjects via a 3D face scanner and preprocessed for pose and size normalization. Muscle and wrinkle control points were then placed on the source face with a neutral expression according to human anatomy. The Facial Action Coding System (FACS) was used to determine the control points and the face regions in the underlying model. The final positions of the control points after a facial expression were obtained from the expressive scan data of the source face. Afterwards, the control points were transferred to the target face using the facial landmarks and TPS as the morphing function. Finally, the neutral target face was animated with the control points by TPS. To visualize the method, face scans with expressions composed of a selected subset of action units from the Bosphorus Database were used; five lower-face and three upper-face action units were simulated in this study. For the experimental results, the facial expressions were created on the 3D neutral face scan data of a human subject, and the synthetic faces were compared to the subject's actual 3D scan data with the same facial expressions taken from the dataset.
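The TPS building block used for this kind of morphing can be shown compactly. The sketch below gives the standard thin plate spline radial kernel U(r) = r² log r and the weighted-sum form of the resulting displacement field. It assumes the weights are already known; in a full TPS fit, the weights (and an affine term, omitted here) come from solving a linear system over the control points. All coordinates below are illustrative.

```python
import math

def tps_kernel(r):
    """Thin plate spline radial basis U(r) = r^2 * log(r), with U(0) = 0."""
    return 0.0 if r == 0.0 else r * r * math.log(r)

def tps_displacement(point, controls, weights):
    """Displacement at `point`: a weighted sum of kernel responses
    centred on the control points (affine part of the full TPS omitted)."""
    return sum(w * tps_kernel(math.dist(point, c))
               for w, c in zip(weights, controls))

print(tps_kernel(1.0))  # → 0.0, since log(1) = 0
print(tps_displacement((0.5, 0.5), [(0.0, 0.0), (1.0, 1.0)], [0.2, -0.2]))
```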
29

Simulation of Turkish Lip Motion and Facial Expressions in a 3D Environment and Synchronization with a Turkish Speech Engine

Akagunduz, Erdem 01 January 2004 (has links) (PDF)
In this thesis, 3D animation of human facial expressions and lip motion, and their synchronization with a Turkish speech engine using the Java programming language, the Java3D API, and the Java Speech API, is analyzed. A three-dimensional animation model for simulating Turkish lip motion and facial expressions is developed, and synchronization with a Turkish speech engine is achieved. The output of the study is facial expressions and Turkish lip motion synchronized with Turkish speech, where the input is Turkish text in Java Speech Markup Language (JSML) format, also indicating expressions. Unlike many other languages, Turkish words are easily broken up into syllables. This property of the Turkish language allows a simple method for mapping letters to Turkish visual phonemes: a total of 37 face models represent the Turkish visual phonemes, and letters are mapped to 3D facial models according to the syllable structures. The animation is created using the Java3D API; 3D facial models corresponding to different lip positions of the same person are morphed into each other to construct the animation. Moreover, simulations of human facial expressions of emotions are created within the animation, and an expression weight parameter, which states the weight of the given expression, is introduced. The synchronization of lip motion with Turkish speech is achieved via CloudGarden's Java Speech API interface. Finally, a virtual Turkish speaker with facial expressions of emotions is created for the Java3D animation.
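The letter-to-viseme lookup and the model-to-model morphing the thesis describes can be sketched as follows. The mapping table and shape vectors below are invented for illustration; the actual system uses 37 face models chosen per Turkish syllable structure, and the real morphing happens on 3D meshes rather than parameter lists.

```python
# Illustrative letter-to-viseme table; entries are assumptions, not the
# thesis's actual 37-model mapping.
VISEME = {"a": "open", "e": "mid", "o": "round", "m": "closed", "b": "closed"}

def letters_to_visemes(word):
    """Map each letter to a mouth shape, defaulting to a rest pose."""
    return [VISEME.get(ch, "rest") for ch in word]

def morph(shape_a, shape_b, t):
    """Linear interpolation between two mouth-shape parameter vectors,
    as when morphing one face model into the next over an animation frame."""
    return [a + t * (b - a) for a, b in zip(shape_a, shape_b)]

print(letters_to_visemes("merhaba"))
# m→closed, e→mid, r and h→rest, a→open, b→closed
```

An expression weight, as introduced in the thesis, would scale a similar interpolation between the neutral face and an emotional face on top of the lip motion.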
