Real-Time Probabilistic Locomotion Synthesis for Uneven Terrain / Probabilistisk Rörelsesyntes for ojämn terräng i realtid

Jonsson, Emil, January 2021
In modern games and animation there is a constant drive toward more realistic motion. Today many games use motion matching and blending with numerous post-processing steps to produce animations, but these methods often require huge amounts of motion clips while still struggling to produce realistic joint weights. Using machine learning to generate motion is a fairly new technique, and it is proving to be a viable option due to its lower cost and potentially more realistic results. Probabilistic models are suitable candidates for a problem such as this, as their built-in randomness lets them model a wide variety of motions. This thesis examines several models that could be used to generate motion for a character interacting with terrain, such as when walking up an incline. The main models examined are the MoGlow model and a CVAE model. First, virtual scenes are built in Unity from a large set of motion capture clips containing movements that interact with the terrain. A character is then inserted into each scene and the animation clips are played back. Data consisting of the character's joint positions and rotations in relation to the surrounding terrain is exported, and this data is used to train the models with supervised learning. Evaluation is done by having the character traverse an obstacle course of varying terrain while each model generates its motion. Foot sliding and frame rates are then measured and compared with values from a selection of motion capture clips. In addition, a user study is conducted in which participants rate the quality of the generated motion in a set of video clips. The results show that both the MoGlow and CVAE models produce movement resembling real human movement on uneven terrain, with the MoGlow model's output being the most similar to the motion capture training data. Both models also run at interactive frame rates, making them suitable for use in video games.
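Neither model is given in code here, and MoGlow (a normalizing-flow model) is hard to compress into a few lines, but the CVAE half of the approach can be illustrated with a minimal PyTorch sketch. Everything below is an assumption rather than the thesis's implementation: the pose and terrain dimensionalities, the network sizes, the KL weight, and the reduction of terrain conditioning to a flat vector of local height samples are all invented for illustration.

```python
import torch
import torch.nn as nn

POSE_DIM = 69      # assumed: flattened joint positions/rotations
TERRAIN_DIM = 32   # assumed: local terrain height samples
LATENT_DIM = 16    # assumed latent size

class MotionCVAE(nn.Module):
    """Minimal conditional VAE: predict the next pose given the
    previous pose and the surrounding terrain heights."""
    def __init__(self):
        super().__init__()
        cond = POSE_DIM + TERRAIN_DIM  # conditioning: prev pose + terrain
        self.encoder = nn.Sequential(
            nn.Linear(POSE_DIM + cond, 256), nn.ELU(),
            nn.Linear(256, 2 * LATENT_DIM),       # outputs (mu, log-variance)
        )
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM + cond, 256), nn.ELU(),
            nn.Linear(256, POSE_DIM),
        )

    def forward(self, next_pose, prev_pose, terrain):
        cond = torch.cat([prev_pose, terrain], dim=-1)
        mu, logvar = self.encoder(
            torch.cat([next_pose, cond], dim=-1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        recon = self.decoder(torch.cat([z, cond], dim=-1))
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        loss = (recon - next_pose).pow(2).sum(-1).mean() + 1e-2 * kl
        return recon, loss

    @torch.no_grad()
    def sample(self, prev_pose, terrain):
        """At run time, draw z from the prior: the built-in randomness
        the abstract refers to."""
        cond = torch.cat([prev_pose, terrain], dim=-1)
        z = torch.randn(prev_pose.shape[0], LATENT_DIM,
                        device=prev_pose.device)
        return self.decoder(torch.cat([z, cond], dim=-1))
```

At run time such a model is rolled out autoregressively: each sampled pose becomes the `prev_pose` of the next frame, which is why interactive frame rates are a meaningful thing to measure.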
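Foot sliding, one of the two quantitative metrics, is conventionally measured as horizontal foot motion during frames where the foot should be planted. The abstract does not give the exact formulation, so the following is a plausible sketch only; the contact threshold, the frame rate, and the y-up convention with foot heights taken relative to the terrain surface are all assumptions.

```python
import numpy as np

def foot_sliding(foot_pos, contact_height=0.05, fps=60.0):
    """Mean horizontal foot speed (m/s) over frames where the foot is
    near the ground and therefore assumed to be in contact.

    foot_pos: (T, 3) foot positions, y-up, with height measured
    relative to the terrain surface (assumed precomputed).
    """
    grounded = foot_pos[:, 1] < contact_height        # assumed contact test
    vel = np.diff(foot_pos[:, [0, 2]], axis=0) * fps  # horizontal velocity
    planted = grounded[:-1] & grounded[1:]            # contact on both frames
    if not planted.any():
        return 0.0
    return float(np.linalg.norm(vel[planted], axis=1).mean())
```

Lower is better; comparing the value against the same measurement on raw motion capture clips, as the thesis does, gives a natural baseline.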

Modèle statistique de l'animation expressive de la parole et du rire pour un agent conversationnel animé / Data-driven expressive animation model of speech and laughter for an embodied conversational agent

Ding, Yu, 26 September 2014
Our aim is to render expressive multimodal behaviors for embodied conversational agents (ECAs). ECAs are entities endowed with communicative and emotional capabilities, and they often have a human-like appearance. When an ECA is speaking or laughing, it is capable of autonomously displaying behaviors that enrich and complement the uttered speech and convey qualitative information such as emotion. Our research follows a data-driven approach. It focuses on generating multimodal behaviors for a virtual character speaking with different emotions, and on simulating laughing behavior on an ECA. Our aim is to study and develop human-like animation generators for a speaking and laughing ECA. Building on the relationship linking speech prosody and multimodal behaviors, our animation generator takes human-uttered audio signals as input and outputs multimodal behaviors. Our work uses a statistical framework to capture the relationship between the input and output signals; this relationship is then rendered into synthesized animation. In the training step, the statistical framework is trained on joint features composed of both input and output features, so that the relation between input and output signals is captured and characterized by the framework's parameters.
In the synthesis step, the trained framework is used to produce output signals (facial expressions, head and torso movements) from input signals (F0 and energy for speech, or pseudo-phonemes for laughter). The relation captured in the training phase is thereby rendered into the output signals. Our proposed module is based on a variant of the hidden Markov model (HMM) called the contextual HMM. This model is capable of capturing the relationship between human motion and speech (or laughter); that relationship is then rendered in the synthesized animations.
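The joint-feature idea can be made concrete with a small sketch. The code below is not the thesis's contextual HMM, which adds context-dependent parameters; it is a plain joint-feature Gaussian HMM written against the hmmlearn library (an assumed stand-in). It fits one model on concatenated [prosody | motion] features, then at synthesis time decodes a state path from the prosody dimensions alone and emits the motion-dimension means of those states.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumed dependency

def train_joint_hmm(prosody, motion, n_states=20):
    """Training step: fit one HMM on joint [input | output] features,
    so the hidden states capture their co-occurrence."""
    joint = np.hstack([prosody, motion])        # (T, d_in + d_out)
    model = GaussianHMM(n_components=n_states, covariance_type="diag")
    return model.fit(joint)

def synthesize(model, prosody):
    """Synthesis step: decode states from the input dimensions only,
    then read off the output-dimension means of those states."""
    d_in = prosody.shape[1]
    # Input-only decoder sharing the learned dynamics.
    dec = GaussianHMM(n_components=model.n_components,
                      covariance_type="diag")
    dec.startprob_ = model.startprob_
    dec.transmat_ = model.transmat_
    dec.means_ = model.means_[:, :d_in]
    diag = np.diagonal(model.covars_, axis1=1, axis2=2)  # covars_ is full
    dec.covars_ = diag[:, :d_in].copy()
    states = dec.predict(prosody)               # Viterbi path from F0/energy
    return model.means_[states, d_in:]          # per-state motion means
```

A real system would at least smooth the emitted means over time (e.g. with MLPG-style trajectory generation) rather than output piecewise-constant state means; the contextual HMM goes further by letting the emission parameters depend on the contextual input.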
