  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Synthesizing gait motions from spline-based progression functions of controlled shape

Salamah, Samer, Brunnett, Guido, Heß, Tobias, Mitschke, Christian 30 October 2019 (has links)
Kinematic approaches to motion generation use the joint angles of a specified kinematic skeleton to describe poses and consider motions as sequences of poses. The progression functions, i.e. the functions describing the angular values of the joints over time, are usually obtained by motion capture. So far, approaches to synthesize these functions have only been able to create motions with a strongly artificial visual appearance. In this paper we present a novel method for generating gait motions from synthesized progression functions. Our method is based on the key observation that the progression functions of the joints involved in gait motions show certain characteristic shapes. Based on empirical evaluations, we describe these shapes for all progression functions needed to generate walking movements. Furthermore, we analyze how the described shapes vary with stride length and walking speed. Polynomial splines are used to define functions that mimic the shapes of progression functions and can be easily controlled via the spline parameters. We develop a model that describes how to change the spline parameters according to the rules of shape variation observed in the empirical data in order to adjust stride length and walking speed. Our method can be used to generate walking motions of virtual characters with user-provided stride length or walking speed. To evaluate our method, we compare synthesized gait motions with recorded ones. A numerical evaluation shows that the maximal joint displacement between captured and synthesized motions lies within the range of variability of natural human walking.
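The abstract's core idea, a joint-angle progression function built from a controllable spline, can be sketched in a few lines. The control-point values and the single stride-length factor below are illustrative placeholders, not the characteristic shapes or variation rules derived in the paper:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def knee_progression(stride_factor=1.0, n_samples=100):
    """Evaluate a spline-based progression function for the knee joint
    over one normalized gait cycle (0..1). The control points are
    hypothetical; the paper derives the shapes empirically and varies
    them with stride length and walking speed."""
    phase = np.array([0.0, 0.15, 0.4, 0.6, 0.75, 1.0])   # fraction of gait cycle
    angle = np.array([5.0, 20.0, 5.0, 10.0, 60.0, 5.0])  # knee flexion (degrees)
    # Scaling the amplitudes stands in for the paper's shape-variation rules.
    spline = CubicSpline(phase, angle * stride_factor, bc_type="periodic")
    t = np.linspace(0.0, 1.0, n_samples)
    return t, spline(t)

# A longer stride exaggerates the flexion peaks; the curve stays periodic.
t, theta = knee_progression(stride_factor=1.2)
```

Because the synthesized curve is an ordinary spline, adjusting stride length or speed becomes a parameter edit rather than a new capture session.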
22

Observational Animation: An Exploration of Improvisation, Interactivity and Spontaneity in Animated Filmmaking

Baker, Jeremy Charles 22 May 2013 (has links)
No description available.
23

Interactive Evolutionary Design with Region-of-Interest Selection for Spatiotemporal Ideation & Generation

Eisenmann, Jonathan A. 26 December 2014 (has links)
No description available.
24

Rhythm & Motion: Animating Chinese Lion Dance with High-level Controls / 節奏與運動:以高階指令控制之中國舞獅動畫

陳哲仁, Chen, Je-Ren Unknown Date (has links)
In this research, we attempt to parameterize the rhythmic factors (tempo, exaggeration, and timing) into the generation of controllable, stylistic character animation. The stylized character motions are generated by a hierarchical animation control system, RhyCAP (Rhythmic Character Animation Playacting system), and realized through an RMC (Rhythmic Motion Control) scheme. The RMC scheme can generate convincing and expressive character motions from versatile action commands with rhythmic parameters defined according to the principles of traditional animation. In addition, RMC provides controllable behavior models to enact the characters. Using the high-level control interface of the RhyCAP system, even users without professional training in traditional animation skills can intuitively create dramatic Chinese lion dance animations.
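As a rough illustration of what "parameterizing the rhythmic factors" can mean, the sketch below rescales a keyframe track by tempo, exaggeration, and a timing bias. The dataclass, the warping formula, and the lion-lift keyframes are all hypothetical; RhyCAP's action commands and behavior models are far richer:

```python
from dataclasses import dataclass

@dataclass
class RhythmicParams:
    tempo: float         # playback-rate multiplier (>1 = faster)
    exaggeration: float  # amplitude scaling of key poses
    timing: float        # 0..1 bias toward ease-in (<0.5) or ease-out (>0.5)

def apply_rhythm(keyframes, params):
    """Rescale (time, value) keyframes by the three rhythmic factors.
    A minimal sketch, not the RMC scheme itself."""
    duration = keyframes[-1][0]
    out = []
    for t, v in keyframes:
        u = t / duration if duration else 0.0
        u = u ** (2.0 ** (2.0 * params.timing - 1.0))  # timing bias
        out.append((u * duration / params.tempo, v * params.exaggeration))
    return out

# Hypothetical lion-head lift: double the tempo, exaggerate poses by 50%.
lift = [(0.0, 0.0), (0.5, 30.0), (1.0, 0.0)]
halved = apply_rhythm(lift, RhythmicParams(tempo=2.0, exaggeration=1.5, timing=0.5))
```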
25

Improving and Extending Behavioral Animation Through Machine Learning

Dinerstein, Jonathan J. 20 April 2005 (has links) (PDF)
Behavioral animation has become popular for creating virtual characters that are autonomous agents and thus self-animating. This is useful for lessening the workload of human animators, populating virtual environments with interactive agents, and so on. Unfortunately, current behavioral animation techniques suffer from three key problems: (1) deliberative behavioral models (i.e., cognitive models) are slow to execute; (2) interactive virtual characters cannot adapt online in response to interaction with a human user; (3) programming behavioral models is a difficult and time-intensive process. This dissertation presents a collection of papers that seek to overcome each of these problems. Specifically, these issues are alleviated through novel machine learning schemes. Problem 1 is addressed by using fast regression techniques to quickly approximate a cognitive model. Problem 2 is addressed by a novel multi-level technique composed of custom machine learning methods that gather salient knowledge with which to guide decision making. Finally, Problem 3 is addressed through programming-by-demonstration, allowing a non-technical user to quickly and intuitively specify agent behavior.
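The first fix, replacing a slow deliberative model with a fast learned approximation, can be pictured with an intentionally tiny example. The toy "cognitive model" and the polynomial regressor below are stand-ins; the dissertation's models and function approximators are far more capable:

```python
import numpy as np

def slow_cognitive_model(state):
    """Toy stand-in for an expensive deliberative model: maps a 1-D
    'state' (say, signed distance to a goal) to a steering decision.
    Imagine this call hiding costly planning or search."""
    return 0.5 * np.tanh(state)

# Offline: sample the slow model, then fit a cheap regressor to it.
states = np.linspace(-2.0, 2.0, 200)
decisions = slow_cognitive_model(states)
fast_model = np.poly1d(np.polyfit(states, decisions, deg=7))

# Online: query the approximation instead of deliberating every frame.
error = float(np.max(np.abs(fast_model(states) - decisions)))
```

The trade is the usual one: a one-time offline sampling cost buys a per-frame evaluation that is cheap enough for interactive rates.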
26

Cognitive and Behavioral Model Ensembles for Autonomous Virtual Characters

Whiting, Jeffrey S. 08 June 2007 (has links) (PDF)
Cognitive and behavioral models have become popular methods for creating autonomous, self-animating characters. Creating these models presents the following challenges: (1) building a cognitive or behavioral model is a time-intensive and complex process that must be done by an expert programmer; (2) the models are created to solve a specific problem in a given environment and, because of their specific nature, cannot be easily reused. Combining existing models would allow an animator to create new characters in less time without the help of a programmer, to leverage each model's strengths to improve the character's performance, and to create new behaviors and animations. This thesis provides a framework that aggregates existing behavioral and cognitive models into an ensemble. An animator only has to rate how appropriately a character performed, and through machine learning the system is able to determine how the character should act in the current situation. Empirical results from multiple case studies validate the approach.
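One way to picture an ensemble driven by animator ratings is a weighted choice over member models, with weights nudged by each rating. The toy policies and the multiplicative-weights update below are assumptions for illustration only; the thesis learns a richer mapping from situation to model choice:

```python
import random

# Hypothetical member models; the thesis combines full cognitive and
# behavioral models rather than one-line policies.
def flee(state):   return "run_away"
def fight(state):  return "attack"
def wander(state): return "stroll"

class ModelEnsemble:
    def __init__(self, models):
        self.models = models
        self.weights = [1.0] * len(models)
        self.last = 0

    def act(self, state):
        """Sample a member model in proportion to its weight."""
        r = random.uniform(0.0, sum(self.weights))
        for i, w in enumerate(self.weights):
            r -= w
            if r <= 0.0:
                self.last = i
                break
        return self.models[self.last](state)

    def rate(self, score):
        """Animator feedback in [0, 1]: boost or damp the model just used."""
        self.weights[self.last] *= 0.5 + score

random.seed(0)
ensemble = ModelEnsemble([flee, fight, wander])
action = ensemble.act({"threat": 0.8})
ensemble.rate(1.0)  # full approval: the chosen model's weight grows 1.5x
```

Repeated ratings shift the selection toward the models the animator keeps approving, without anyone editing model code.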
