61. Digital rotoscoping using Markov random fields / Drouin, Simon. January 1900.
Thesis (M.Sc.). / Written for the School of Computer Science. Title from title page of PDF (viewed 2009/06/23). Includes bibliographical references.
62. Tangent-ball techniques for shape processing / Whited, Brian Scott. January 2009.
Thesis (Ph.D.)--Computing, Georgia Institute of Technology, 2010. / Committee Chair: Jarek Rossignac; Committee Member: Greg Slabaugh; Committee Member: Greg Turk; Committee Member: Karen Liu; Committee Member: Maryann Simmons. Part of the SMARTech Electronic Thesis and Dissertation Collection.
63. Anticipating impacts / Hermens, Benjamin J. January 1900.
Thesis (M.S.)--Oregon State University, 2007. / Printout. Includes bibliographical references (leaves 50-53). Also available on the World Wide Web.
64. Color in three-dimensional shaded computer graphics and animation / Collery, Michael T. January 1900.
Thesis (M.A.)--Ohio State University, 1985. / Includes bibliographical references (leaves 44-45).
65. Expressive textures: synthetic and video avatars / Fei, Kar Yin Kenny. 05 October 2005.
Please read the abstract in the section 00front of this document / Dissertation (MSc (Computer Science))--University of Pretoria, 2002. / Computer Science / unrestricted
66. The design of an operating system for a real-time 3-D color animation system / Abaszadeh-Partovi, Naser. January 1981.
No description available.
67. Collision detection for ellipsoids and other quadrics / Choi, Yi-king (蔡綺瓊). January 2008.
Awarded the Li Ka Shing Prize for the Best PhD Thesis in the Faculties of Dentistry, Engineering, Medicine and Science (University of Hong Kong), 2007-2008. / published_or_final_version / Computer Science / Doctoral / Doctor of Philosophy
68. Control of objects with a high degree of freedom / Wang, He. January 2012.
In this thesis, I present novel strategies for controlling objects with high degrees of freedom for robotic control and computer animation, including articulated objects such as human bodies or robots and deformable objects such as ropes and cloth. Such control is required for common daily movements such as folding arms, tying ropes, wrapping objects and putting on clothes. Although there is demand in computer graphics and animation for generating such scenes, little work has targeted these problems. The difficulty of solving them is due to two factors: (1) the complexity of the planning algorithms: the computational cost of currently available methods increases exponentially with the degrees of freedom of the objects, so they cannot be applied to full human body structures, ropes and clothes; and (2) the lack of abstract descriptors for complex tasks: models for quantitatively describing the progress of tasks such as wrapping and knotting are absent for animation generation.

In this work, we employ the concept of a task-centric manifold to quantitatively describe complex tasks, and incorporate a bi-mapping scheme to bridge this manifold and the configuration space of the controlled objects, called an object-centric manifold. The control problem is solved by first projecting the controlled object onto the task-centric manifold, then obtaining the next ideal state of the scenario by local planning, and finally projecting that state back to the object-centric manifold to obtain the desired state of the controlled object. Using this scheme, complex movements that previously required global path planning can be synthesised by local path planning.

Under this framework, we show applications in several fields. First, an interpolation algorithm for arbitrary postures of a human character is proposed. Second, a control scheme is proposed for generating Furoshiki wraps in different styles. Finally, new models and planning methods are given for quantitative control of wrapping/unwrapping and dressing/undressing problems.
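The projection-and-replanning loop sketched in this abstract can be illustrated compactly. The snippet below is a minimal sketch rather than the thesis's actual models: the linear projection P and the helpers forward_map, inverse_map, local_plan and control are hypothetical stand-ins, assuming for demonstration that the task descriptor can be approximated by a fixed linear map of the configuration vector.

```python
import numpy as np

# Hypothetical linear "task-centric" projection: 30-DOF configuration -> 3-D task descriptor.
rng = np.random.default_rng(0)
P = rng.standard_normal((3, 30))
P_pinv = np.linalg.pinv(P)   # used here as the backward (task -> configuration) mapping

def forward_map(q):
    """Project an object-centric configuration q onto the task-centric manifold."""
    return P @ q

def inverse_map(x, q_prev):
    """Lift a task-centric state x back to object-centric space, staying close to q_prev."""
    # Minimum-change lift: only the component needed to realise x is adjusted.
    return q_prev + P_pinv @ (x - P @ q_prev)

def local_plan(x, x_goal, step=0.1):
    """One step of local planning in the low-dimensional task space."""
    direction = x_goal - x
    dist = np.linalg.norm(direction)
    return x_goal if dist < step else x + step * direction / dist

def control(q0, x_goal, iters=200, tol=1e-3):
    """Project -> plan locally -> project back, repeated until the task goal is reached."""
    q = q0.copy()
    for _ in range(iters):
        x = forward_map(q)                         # 1. onto the task-centric manifold
        if np.linalg.norm(x - x_goal) < tol:
            break
        q = inverse_map(local_plan(x, x_goal), q)  # 2.-3. plan locally, then back to object space
    return q

# Example: a made-up 30-DOF configuration (e.g. joint angles or rope control points).
q_final = control(rng.standard_normal(30), np.array([1.0, -0.5, 0.25]))
print(np.round(forward_map(q_final), 3))           # approaches the task goal
```

The same loop structure would apply if the linear map were replaced by learned nonlinear mappings between the two manifolds; only forward_map and inverse_map would change.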
69. Computer animation via optical video disc / Bender, Walter. January 1981.
Thesis (M.S.V.S.)--Massachusetts Institute of Technology, Dept. of Architecture, 1980. / MICROFICHE COPY AVAILABLE IN ARCHIVES AND ROTCH. VIDEOCASSETTE IN ROTCH VISUAL COLLECTIONS. / Bibliography: leaves 43-45. / This paper explores the notion of marrying two technologies: raster-scan computer animation and optical video discs. Animated sequences, generated at non-real-time rates and then transferred to video disc, can be recalled under user control at real-time rates. Highly detailed animation may be combined with other media in interactive systems. Such systems inherently offer a greater degree of flexibility to the animator. The implementation of one such system is discussed in detail. / by Walter Bender. / M.S.V.S.
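As a rough sketch of the workflow this abstract describes (render offline, master numbered frames onto the disc, then recall any sequence in real time under user control), the snippet below uses a hypothetical clip table and disc-player front end. It illustrates the idea only; it is not Bender's system or any real player's command set.

```python
from dataclasses import dataclass

@dataclass
class Clip:
    """An animation sequence rendered offline and mastered onto the disc."""
    name: str
    first_frame: int   # first frame number on the disc
    last_frame: int    # last frame number on the disc

class VideoDiscPlayer:
    """Hypothetical front end to a disc player's seek/play commands."""
    def seek(self, frame: int) -> None:
        print(f"SEEK {frame}")
    def play_until(self, frame: int) -> None:
        print(f"PLAY UNTIL {frame}")

class InteractiveAnimation:
    def __init__(self, player: VideoDiscPlayer, clips: list[Clip]):
        self.player = player
        self.clips = {c.name: c for c in clips}

    def show(self, name: str) -> None:
        # Real-time recall: no rendering happens here, only disc access.
        clip = self.clips[name]
        self.player.seek(clip.first_frame)
        self.player.play_until(clip.last_frame)

# Frame numbers below are made up for illustration.
catalog = [Clip("title_spin", 100, 459), Clip("zoom_in", 460, 819)]
InteractiveAnimation(VideoDiscPlayer(), catalog).show("zoom_in")
```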
70. Facial feature extraction and its applications = 臉部特徵之擷取及其應用 (Lian bu te zheng zhi xie qu ji qi ying yong) / Lau Chun Man. January 2001.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. / Includes bibliographical references (leaves 173-177). / Text in English; abstracts in English and Chinese.
Contents:
Acknowledgement / Abstract / Contents / List of Tables / List of Figures / Notations
Chapter 1: Introduction
  1.1 Facial features
    1.1.1 Face region
    1.1.2 Contours and locations of facial organs
    1.1.3 Fiducial points
    1.1.4 Features from Principal Components Analysis
    1.1.5 Relationships between facial features
  1.2 Facial feature extraction
    1.2.1 Extraction of contours and locations of facial organs
    1.2.2 Extraction of fiducial points
  1.3 Face recognition
  1.4 Face animation
  1.5 Thesis outline
Chapter 2: Extraction of contours and locations of facial organs
  2.1 Introduction
  2.2 Deformable template model
    2.2.1 Introduction
    2.2.2 Segmentation of facial organs
    2.2.3 Estimation of iris location
    2.2.4 Eye template model
    2.2.5 Eye contour extraction
    2.2.6 Experimental results
  2.3 Integral projection method
    2.3.1 Introduction
    2.3.2 Pre-processing of the intensity map
    2.3.3 Processing of facial mask
    2.3.4 Integral projection
    2.3.5 Extraction of the irises
    2.3.6 Experimental results
  2.4 Active contour model (Snake)
    2.4.1 Introduction
    2.4.2 Forces on active contour model
    2.4.3 Mathematical representation of Snake
    2.4.4 Internal energy
    2.4.5 Image energy
    2.4.6 External energy
    2.4.7 Energy minimization
    2.4.8 Experimental results
  2.5 Summary
Chapter 3: Extraction of fiducial points
  3.1 Introduction
  3.2 Theory
    3.2.1 Face region extraction
    3.2.2 Iris detection and energy function
    3.2.3 Extraction of fiducial points
    3.2.4 Optimization of energy functions
  3.3 Experimental results
  3.4 Geometric features
    3.4.1 Definition of geometric features
    3.4.2 Selection of geometric features for face recognition
    3.4.3 Discussion
  3.5 Gabor features
    3.5.1 Introduction
    3.5.2 Properties of Gabor wavelets
    3.5.3 Gabor features for face recognition
  3.6 Summary
Chapter 4: The use of fiducial points for face recognition
  4.1 Introduction
    4.1.1 Problem of face recognition
    4.1.2 Face recognition process
    4.1.3 Features for face recognition
    4.1.4 Distance measure
    4.1.5 Interpretation of recognition results
  4.2 Face recognition by Principal Components Analysis (PCA)
    4.2.1 Introduction
    4.2.2 PCA recognition system overview
    4.2.3 Face database
    4.2.4 Experimental results and analysis
  4.3 Face recognition by geometric features
    4.3.1 System overview
    4.3.2 Face database
    4.3.3 Experimental results and analysis
    4.3.4 Summary
  4.4 Face recognition by Gabor features
    4.4.1 System overview
    4.4.2 Face database
    4.4.3 Experimental results and analysis
    4.4.4 Comparison of recognition rate
    4.4.5 Summary
  4.5 Summary
Chapter 5: The use of fiducial points for face animation
  5.1 Introduction
  5.2 Wire-frame model
    5.2.1 Wire-frame model I
    5.2.2 Wire-frame model II
  5.3 Construction of individualized 3-D face model
    5.3.1 Wire-frame fitting
    5.3.2 Texture mapping
    5.3.3 Experimental results
  5.4 Face definition and animation in MPEG-4
    5.4.1 Introduction
    5.4.2 Correspondences between fiducial points and FDPs
    5.4.3 Automatic generation of FDPs
    5.4.4 Generation of expressions by FAPs
  5.5 Summary
Chapter 6: Discussions and Conclusions
  6.1 Discussions
    6.1.1 Extraction of contours and locations of facial organs
    6.1.2 Extraction of fiducial points
    6.1.3 The use of fiducial points for face recognition
    6.1.4 The use of fiducial points for face animation
  6.2 Conclusions
Appendix A: Mathematical derivation of Principal Components Analysis
Appendix B: Face database
Bibliography