41 |
MPEG-4 Facial Feature Point Editor / Editor för MPEG-4 "feature points". Lundberg, Jonas. January 2002.
The use of computer-animated interactive faces in film, TV, and games is ever growing, with new application areas also emerging on the Internet and in mobile environments. Morph targets are one of the most popular methods for animating the face. Up until now, 3D artists had to design each morph target defined by the MPEG-4 standard by hand, a very monotonous and tedious task. With the newly developed method of Facial Motion Cloning [11], much of this heavy work is lifted from the artists: the morph targets can now be copied from an already animated face model onto a new static face model.

The Facial Motion Cloning process requires that a subset of the feature points specified by the MPEG-4 standard be defined; their purpose is to correlate the facial features of the two faces. The goal of this project is to develop a graphical editor in which artists can define the feature points for a face model. The feature points are saved in a file format that can be used by Facial Motion Cloning software.
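To make the editor's role concrete, here is a minimal sketch of the kind of data model such a tool might maintain. The feature point labels follow the MPEG-4 naming scheme (e.g. "3.1" in the eye region), but the class names and the one-pair-per-line file format are illustrative assumptions; the abstract does not specify the format actually consumed by the Facial Motion Cloning software.

```python
# Sketch only: a hypothetical core data model for an MPEG-4 feature point editor.
from dataclasses import dataclass, field

@dataclass
class FeaturePointMap:
    """Maps MPEG-4 feature point labels to vertex indices of one face mesh."""
    mesh_name: str
    points: dict[str, int] = field(default_factory=dict)

    def assign(self, fp_label: str, vertex_index: int) -> None:
        # Called when the artist picks a vertex in the 3D view.
        self.points[fp_label] = vertex_index

    def save(self, path: str) -> None:
        # Hypothetical format: one 'label vertex_index' pair per line.
        with open(path, "w") as f:
            f.write(f"# feature points for {self.mesh_name}\n")
            for label, idx in sorted(self.points.items()):
                f.write(f"{label} {idx}\n")

fp = FeaturePointMap("static_head")
fp.assign("3.1", 1042)   # e.g. an eye-region feature point label
fp.save("static_head.fp")
```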
|
42 |
基於形態轉換的多種表情卡通肖像 / Automatic generation of caricatures with multiple expressions using transformative approach. 賴建安, Lai, Chien An. Unknown Date.
隨著數位影像軟、硬體裝置上進步與普及，普羅大眾對於影像的使用不僅限於日常生活之中，更隨著網路分享概念等Web技術的擴張，這些數量龐大的影像，在使用上更朝向娛樂化、趣味化及個人化的範疇。本論文提出結合影像處理中的人臉特徵分析(Facial Features Analysis)資訊以及影像內容分割(Image Content Segmentation)及影像變形轉換(Image Warping and Morphing)等技術，設計出可以將真實照片中的人臉轉換成為卡通化的肖像，供使用者於各類媒體上使用。卡通化肖像不但具有隱藏影像細節，保留部份隱私的優勢，同時又兼具充份擁有個人化特色的表徵，透過臉部動畫的參數(Facial Animation Parameters)設定，我們提出的卡通化系統更容許使用者依心情，來合成喜、怒、哀、樂等不同表情。另外，運用兩種轉描式(Rotoscoping)及圖像變形(Morphing)法，以不同的合成技巧來解決不同裝置在限定顏色及效果偏好上的各類需求。 / As the acquisition of digital images becomes more convenient, diversified applications of image collections have surfaced at a rapid pace. Not only have we witnessed the popularity of photo-sharing platforms, we have also seen strong demand in recent years for novel mechanisms that offer personalized and creative entertainment. In this thesis, we propose and implement a personal caricature generator using transformative approaches. By combining facial feature detection, image segmentation, and image warping/morphing techniques, the system is able to generate a stylized caricature using only one reference image. The system can also produce multiple expressions by controlling MPEG-4 facial animation parameters (FAPs). Specifically, by referencing various pre-drawn caricatures in our database as well as feature points for mesh creation, personalized caricatures are automatically generated from real photos using either rotoscoping or transformative approaches. The resulting caricature can be further modified to exhibit multiple facial expressions. Important issues regarding color reduction and vectorized representation of the caricature are also discussed in this thesis.
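As a rough illustration of the warping step described above (not the thesis implementation, whose mesh construction and morphing details differ), the sketch below aligns a pre-drawn template's mesh to a photo's detected feature points using inverse-distance-weighted (Shepard) interpolation of the matched-point offsets. All coordinates are hypothetical.

```python
# Sketch: warp template mesh vertices toward a photo's feature-point layout.
import numpy as np

def shepard_warp(vertices, src_pts, dst_pts, power=2.0, eps=1e-6):
    """Move each mesh vertex by a distance-weighted blend of the
    displacements observed at the matched feature points."""
    disp = dst_pts - src_pts                      # (k, 2) feature offsets
    out = np.empty_like(vertices)
    for i, v in enumerate(vertices):
        d = np.linalg.norm(src_pts - v, axis=1)   # distance to each feature
        w = 1.0 / (d ** power + eps)              # inverse-distance weights
        out[i] = v + (w[:, None] * disp).sum(0) / w.sum()
    return out

# Hypothetical data: template feature points and the photo's detected points.
template_pts = np.array([[100., 120.], [160., 120.], [130., 180.]])
photo_pts    = np.array([[ 96., 125.], [168., 118.], [131., 190.]])
mesh = np.array([[110., 130.], [150., 150.], [130., 170.]])
warped = shepard_warp(mesh, template_pts, photo_pts)
```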
|
43 |
Marc : modèles informatiques des émotions et de leurs expressions faciales pour l’interaction Homme-machine affective temps réel / Marc : computational models of emotions and their facial expressions for real-time affective human-computer interaction. Courgeon, Matthieu. 21 November 2011.
Les émotions et leurs expressions par des agents virtuels sont deux enjeux importants pour les interfaces homme-machine affectives à venir. En effet, les évolutions récentes des travaux en psychologie des émotions, ainsi que la progression des techniques de l’informatique graphique, permettent aujourd’hui d’animer des personnages virtuels réalistes et capables d’exprimer leurs émotions via de nombreuses modalités. Si plusieurs systèmes d’agents virtuels existent, ils restent encore limités par la diversité des modèles d’émotions utilisés, par leur niveau de réalisme, et par leurs capacités d’interaction temps réel. Dans nos recherches, nous nous intéressons aux agents virtuels capables d’exprimer des émotions via leurs expressions faciales en situation d’interaction avec l’utilisateur. Nos travaux posent de nombreuses questions scientifiques et ouvrent sur les problématiques suivantes : Comment modéliser les émotions en informatique en se basant sur les différentes approches des émotions en psychologie ? Quel niveau de réalisme visuel de l’agent est nécessaire pour permettre une bonne expressivité émotionnelle ? Comment permettre l’interaction temps réel avec un agent virtuel ? Comment évaluer l’impact des émotions exprimées par l’agent virtuel sur l’utilisateur ? A partir de ces problématiques, nous avons axé nos travaux sur la modélisation informatique des émotions et sur leurs expressions faciales par un personnage virtuel réaliste. En effet, les expressions faciales sont une modalité privilégiée de la communication émotionnelle. Notre objectif principal est de contribuer à l’amélioration de l’interaction entre l’utilisateur et un agent virtuel expressif. Nos études ont donc pour objectif de mettre en lumière les avantages et les inconvénients des différentes approches des émotions ainsi que des méthodes graphiques étudiées. Nous avons travaillé selon deux axes de recherches complémentaires. D’une part, nous avons exploré différentes approches des émotions (catégorielle, dimensionnelle, cognitive, et sociale). Pour chacune de ces approches, nous proposons un modèle informatique et une méthode d’animation faciale temps réel associée. Notre second axe de recherche porte sur l’apport du réalisme visuel et du niveau de détail graphique à l’expressivité de l’agent. Cet axe est complémentaire au premier, car un plus grand niveau de détail visuel pourrait permettre de mieux refléter la complexité du modèle émotionnel informatique utilisé. Les travaux que nous avons effectués selon ces deux axes ont été évalués par des études perceptives menées sur des utilisateurs. La combinaison de ces deux axes de recherche est rare dans les systèmes d’agents virtuels expressifs existants. Ainsi, nos travaux ouvrent des perspectives pour l’amélioration de la conception d’agents virtuels expressifs et de la qualité de l’interaction homme-machine basée sur les agents virtuels expressifs interactifs. L’ensemble des logiciels que nous avons conçus forme notre plateforme d’agents virtuels MARC (Multimodal Affective and Reactive Characters). MARC a été utilisée dans des applications de natures diverses : jeu, intelligence ambiante, réalité virtuelle, applications thérapeutiques, performances artistiques, etc. / Emotions and their expressions by virtual characters are two important issues for future affective human-machine interfaces.
Recent advances in the psychology of emotions, as well as recent progress in computer graphics, allow us to animate virtual characters that are capable of expressing emotions in a realistic way through various modalities. Existing virtual agent systems are often limited in terms of underlying emotional models, visual realism, and real-time interaction capabilities. In our research, we focus on virtual agents capable of expressing emotions through facial expressions while interacting with the user. Our work raises several issues: How can we design computational models of emotions inspired by the different approaches to emotion in psychology? What level of visual realism is required for the agent to express emotions? How can we enable real-time interaction with a virtual agent? How can we evaluate the impact on the user of the emotions expressed by the virtual agent? Our work focuses on the computational modeling of emotions inspired by psychological theories of emotion, and on their facial expression by a realistic virtual character. Facial expressions are known to be a privileged emotional communication modality. Our main goal is to contribute to the improvement of the interaction between a user and an expressive virtual agent. For this purpose, our research highlights the pros and cons of different approaches to emotions and different computer graphics techniques. We worked in two complementary directions. First, we explored different approaches to emotions (categorical, dimensional, cognitive, and social). For each of these approaches, a computational model has been designed together with a method for real-time facial animation. Our second line of research focuses on the contribution of visual realism and of the level of graphic detail to the expressiveness of the agent. This axis is complementary to the first one, because a greater level of visual detail could contribute to a better expression of the complexity of the underlying computational model of emotion. Our work along these two lines was evaluated through several user-based perceptual studies. The combination of these two lines of research is seldom found in existing expressive virtual agent systems. Our work thus opens future directions for improving human-computer interaction based on expressive and interactive virtual agents. The software modules that we have designed are integrated into our platform MARC (Multimodal Affective and Reactive Characters). MARC has been used in various kinds of applications: games, ambient intelligence, virtual reality, therapeutic applications, performance art, etc.
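To illustrate one of the modeling questions raised above, here is a hedged sketch of how a dimensional emotion representation can drive facial expression intensities: the agent's state is a point in a pleasure/arousal plane, and blend weights fall off with distance to expression anchors. The anchor positions and the Gaussian falloff are illustrative assumptions, not MARC's actual models.

```python
# Sketch: dimensional (pleasure, arousal) state -> expression blend weights.
import numpy as np

# Expression anchors placed in a 2D pleasure/arousal space (assumed values).
ANCHORS = {
    "joy":     np.array([ 0.8,  0.5]),
    "anger":   np.array([-0.7,  0.7]),
    "sadness": np.array([-0.6, -0.5]),
    "relief":  np.array([ 0.5, -0.4]),
}

def expression_weights(state, sharpness=4.0):
    """Turn the agent's current (pleasure, arousal) point into normalized
    blend weights over the anchored facial expressions."""
    scores = {name: np.exp(-sharpness * np.linalg.norm(state - pos) ** 2)
              for name, pos in ANCHORS.items()}
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

print(expression_weights(np.array([0.6, 0.3])))  # dominated by 'joy'
```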
|
44 |
Paramétrisation et transfert d’animations faciales 3D à partir de séquences vidéo : vers des applications en temps réel / Rigging and retargetting of 3D facial animations from video : towards real-time applications. Dutreve, Ludovic. 24 March 2011.
L’animation faciale est l’un des points clés dans le réalisme des scènes 3D qui mettent en scène des personnages virtuels. Ceci s’explique principalement par les raisons suivantes : le visage et les nombreux muscles qui le composent permettent de générer une multitude d’expressions ; ensuite, notre faculté de perception nous permet de détecter et d’analyser ses mouvements les plus fins. La complexité de ce domaine se retrouve dans les approches existantes par le fait qu’il est très difficile de créer une animation de qualité sans un travail manuel long et fastidieux. Partant de ce constat, cette thèse a pour but de développer des techniques qui contribuent au processus de création d’animations faciales. Trois thèmes sont principalement abordés. Le premier concerne la paramétrisation du visage pour l’animation. La paramétrisation a pour but de définir des moyens de contrôle pour pouvoir déformer et animer le visage. Le second s’oriente sur l’animation, et plus particulièrement sur le transfert d’animation. Le but est de proposer une méthode qui permette d’animer le visage d’un personnage à partir de données variées. Ces données peuvent être issues d’un système de capture de mouvement, ou bien elles peuvent être obtenues à partir de l’animation d’un personnage virtuel qui existe déjà. Enfin, nous nous sommes concentrés sur les détails fins liés à l’animation comme les rides. Bien que ces rides soient fines et discrètes, ces déformations jouent un rôle important dans la perception et l’analyse des émotions. C’est pourquoi nous proposons une technique d’acquisition mono-caméra et une méthode à base de poses références pour synthétiser dynamiquement les détails fins d’animation sur le visage. L’objectif principal des méthodes proposées est d’offrir des solutions afin de faciliter et d’améliorer le processus de création d’animations faciales réalistes utilisées dans le cadre d’applications en temps réel. Nous nous sommes particulièrement concentrés sur la facilité d’utilisation et sur la contrainte du temps réel. De plus, nous offrons la possibilité à l’utilisateur ou au graphiste d’interagir afin de personnaliser sa création et/ou d’améliorer les résultats obtenus / Facial animation is one of the key points of the realism of 3D scenes featuring virtual humans. This is due to several reasons: the face and the many muscles that compose it can generate a multitude of expressions, and our faculty of perception gives us a great ability to detect and analyze even its smallest variations. This complexity is reflected in existing approaches by the fact that it is very difficult to create a high-quality animation without long and tedious manual work. Based on these observations, this thesis aims to develop techniques that contribute to the process of creating facial animation. Three main themes have been addressed. The first concerns the rigging of a virtual 3D face for animation. Rigging aims at defining control parameters in order to deform and animate the face. The second deals with animation, especially the animation retargeting problem. The goal is to propose a method to animate a character’s face from various kinds of data. These data can be obtained from a motion capture system or from an existing 3D facial animation. Finally, we focus on fine-scale animation details like wrinkles. Although these are thin and subtle, their deformations play an important part in the perception and analysis of emotions.
Therefore, we propose a monocular acquisition technique and a reference-pose-based method to dynamically synthesize fine animation details over the face. The overall purpose is to provide methods that facilitate and improve the process of creating realistic facial animations for real-time applications. We focused on ease of use in addition to the real-time constraint. Moreover, we offer the user or graphic artist the possibility to interact in order to personalize the creation and/or improve the results.
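As a minimal sketch of the retargeting idea (assumed details only; the thesis method additionally handles rig-specific correspondence and fine details), captured feature-point displacements can be normalized by a characteristic dimension of the source face and re-scaled onto the target rig:

```python
# Sketch: transfer per-feature displacements between differently proportioned faces.
import numpy as np

def retarget(src_disp, src_size, dst_size):
    """src_disp: (n, 3) offsets from the neutral source pose;
    src_size / dst_size: characteristic face dimensions (e.g. eye distance)."""
    return src_disp * (dst_size / src_size)

# Hypothetical frame of motion-capture data: three tracked points.
src_disp = np.array([[0.0,  0.4, 0.1],
                     [0.2,  0.0, 0.0],
                     [0.0, -0.3, 0.0]])
dst_disp = retarget(src_disp, src_size=6.2, dst_size=7.5)
# dst_disp would then drive the target rig's control points each frame.
```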
|
45 |
[en] A SYSTEM FOR GENERATING DYNAMIC FACIAL EXPRESSIONS IN 3D FACIAL ANIMATION WITH SPEECH PROCESSING / [pt] UM SISTEMA DE GERAÇÃO DE EXPRESSÕES FACIAIS DINÂMICAS EM ANIMAÇÕES FACIAIS 3D COM PROCESSAMENTO DE FALA. PAULA SALGADO LUCENA RODRIGUES. 24 April 2008.
[pt] Esta tese apresenta um sistema para geração de expressões faciais dinâmicas sincronizadas com a fala em uma face realista tridimensional. Entende-se por expressões faciais dinâmicas aquelas que variam ao longo do tempo e que semanticamente estão relacionadas às emoções, à fala e a fenômenos afetivos que podem modificar o comportamento de uma face em uma animação. A tese define um modelo de emoção para personagens virtuais falantes, denominado VeeM (Virtual emotion-to-expression Model), proposto a partir de uma releitura e uma reestruturação do modelo do círculo emocional de Plutchik. O VeeM introduz o conceito de um hipercubo emocional no espaço canônico do R4 para combinar emoções básicas, dando origem a emoções derivadas. Para validação do VeeM é desenvolvida uma ferramenta de autoria e apresentação de animações faciais denominada DynaFeX (Dynamic Facial eXpression), onde um processamento de fala é realizado para permitir o sincronismo entre fonemas e visemas. A ferramenta permite a definição e o refinamento de emoções para cada quadro ou grupo de quadros de uma animação facial. O subsistema de autoria permite também, alternativamente, uma manipulação em alto-nível, através de scripts de animação. O subsistema de apresentação controla de modo sincronizado a fala da personagem e os aspectos emocionais editados. A DynaFeX faz uso de uma malha poligonal tridimensional baseada no padrão MPEG-4 de animação facial, favorecendo a interoperabilidade da ferramenta com outros sistemas de animação facial. / [en] This thesis presents a system for generating dynamic facial expressions synchronized with speech, rendered on a tridimensional realistic face. Dynamic facial expressions are temporal facial expressions semantically related to emotions, speech, and affective inputs that can modify a facial animation's behavior. The thesis defines an emotion model for talking virtual characters, named VeeM (Virtual emotion-to-expression Model), based on a revision and restructuring of Plutchik's emotional wheel model. VeeM introduces the concept of an emotional hypercube in the canonical space R4 to combine pure emotions and create new derived emotions. To validate VeeM, an authoring and playback facial animation tool named DynaFeX (Dynamic Facial eXpression) has been developed, in which speech processing is performed to allow phoneme-viseme synchronization. The tool allows the definition and refinement of emotions for each frame or group of frames, as well as higher-level facial animation editing through animation scripts. The playback subsystem controls the animation presentation, synchronizing speech and the edited emotional features with the virtual character's performance. DynaFeX is built on a tridimensional polygonal mesh compliant with the MPEG-4 facial animation standard, which favors the tool's interoperability with other facial animation systems.
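As a hedged reading of the hypercube idea above, the sketch below represents an emotional state as a point in R4, one axis per pair of opposite Plutchik emotions, and extracts the dominant components whose mixture names a derived emotion. The axis assignment and threshold are illustrative assumptions, not VeeM's actual construction, which the thesis defines.

```python
# Sketch: a point in [-1, 1]^4, one axis per opposite-emotion pair.
import numpy as np

AXES = [("joy", "sadness"), ("trust", "disgust"),
        ("fear", "anger"), ("surprise", "anticipation")]

def derived_emotion(state):
    """Positive values pull toward the first emotion of each axis. Returns
    the dominant components; in Plutchik's scheme, a mixture such as
    joy + trust corresponds to the derived emotion 'love'."""
    parts = []
    for value, (pos, neg) in zip(state, AXES):
        if abs(value) > 0.25:                  # ignore weak components
            parts.append((pos if value > 0 else neg, abs(value)))
    return sorted(parts, key=lambda p: -p[1])

print(derived_emotion(np.array([0.8, 0.6, 0.0, 0.0])))  # joy + trust
```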
|
46 |
Hybrid Methods for the Analysis and Synthesis of Human Faces. Paier, Wolfgang. 18 November 2024.
Der Trend hin zu virtueller Realität (VR) hat neues Interesse an Themen wie der Modellierung menschlicher Körper geweckt, da sich neue Möglichkeiten für Unterhaltung, Konferenzsysteme und immersive Anwendungen bieten.
Diese Dissertation stellt deshalb neue Ansätze für die Erstellung animierbarer/realistischer 3D-Kopfmodelle, zur computergestützten Gesichtsanimation aus Text/Sprache sowie zum fotorealistischen Echtzeit-Rendering vor.
Um die 3D-Erfassung zu vereinfachen, wird ein hybrider Ansatz genutzt, der statistische Kopfmodelle mit dynamischen Texturen kombiniert.
Das Modell erfasst Kopfhaltung und großflächige Deformationen, während die Texturen feine Details und komplexe Bewegungen kodieren.
Anhand der erfassten Daten wird ein generatives Modell trainiert, das realistische Gesichtsausdrücke aus einem latenten Merkmalsvektor rekonstruiert.
Zudem wird eine neue neuronale Rendering-Technik präsentiert, die lernt, den Vordergrund (Kopf) vom Hintergrund zu trennen.
Das erhöht die Flexibilität während der Inferenz (z. B. neuer Hintergrund) und vereinfacht den Trainingsprozess, da die Segmentierung nicht vorab berechnet werden muss.
Ein neuer Animationsansatz ermöglicht die automatische Synthese von Gesichtsvideos auf der Grundlage weniger Trainingssequenzen.
Im Gegensatz zu bestehenden Arbeiten lernt das Verfahren einen latenten Merkmalsraum, der sowohl Emotionen als auch visuelle Variationen der Sprache erfasst, während gelernte Priors Animations-Artefakte und unrealistische Kopfbewegungen minimieren.
Nach dem Training ist es möglich, realistische Sprachsequenzen zu erzeugen, während der latente Stil-Raum zusätzliche Gestaltungsmöglichkeiten bietet.
Die vorgestellten Methoden bilden ein Komplettsystem für die realistische 3D-Modellierung, Animation und Darstellung von menschlichen Köpfen, das den Stand der Technik übertrifft. Dies wird in verschiedenen Experimenten, Ablations-/Nutzerstudien gezeigt und ausführlich diskutiert. / The recent trend of virtual reality (VR) has sparked new interest in human body modeling by offering new possibilities for entertainment, conferencing, and immersive applications (e.g., intelligent virtual assistants). Therefore, this dissertation presents new approaches to creating animatable and realistic 3D head models, animating human faces from text/speech, and the photo-realistic rendering of head models in real-time.
To simplify complex 3D face reconstruction, a hybrid approach is introduced that combines a lightweight statistical head model for 3D geometry with dynamic textures. The model captures head orientation and large-scale deformations, while textures encode fine details and complex motions. A deep variational autoencoder trained on these textured meshes learns to synthesize realistic facial expressions from a compact vector. Additionally, a new neural-rendering technique is proposed that separates the head (foreground) from the background, providing more flexibility during inference (e.g., rendering on novel backgrounds) and simplifying the training process as no segmentation masks have to be pre-computed.
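The hybrid split described above can be made concrete with a small sketch: coarse geometry comes from a linear statistical model while fine detail rides in a per-frame dynamic texture. Class and variable names are illustrative assumptions, not the dissertation's actual code.

```python
# Sketch: statistical geometry (large-scale) + dynamic texture (fine detail).
import numpy as np

class HybridHead:
    def __init__(self, mean_verts, basis):
        self.mean = mean_verts          # (n_verts, 3) neutral geometry
        self.basis = basis              # (k, n_verts, 3) deformation modes

    def geometry(self, coeffs):
        """Large-scale deformation: mean shape plus a linear combination
        of the statistical model's k modes."""
        return self.mean + np.tensordot(coeffs, self.basis, axes=1)

    def frame(self, coeffs, dynamic_texture):
        # Fine detail (wrinkles, mouth interior) lives in the texture,
        # so the mesh itself can stay lightweight.
        return self.geometry(coeffs), dynamic_texture

mean = np.zeros((4, 3)); basis = np.random.randn(2, 4, 3) * 0.01
head = HybridHead(mean, basis)
verts, tex = head.frame(np.array([0.5, -0.2]), dynamic_texture="frame_0042.png")
```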
This dissertation also presents a new neural-network-based approach to synthesizing novel face animations based on emotional speech videos of an actor. Unlike existing works, the proposed model learns a latent animation style space that captures emotions as well as natural variations in visual speech. Additionally, learned animation priors minimize animation artifacts and unrealistic head movements. After training, the animation model offers temporally consistent editing of the animation style according to the users’ needs.
Together, the presented methods provide an end-to-end system for realistic 3D modeling, animation, and rendering of human heads. Various experimental results, ablation studies, and user evaluations demonstrate that the proposed approaches outperform the state-of-the-art.
|