  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
521

A 2900 microprocessor design for the graphics real time animation display system /

Shahriari, Parviz. January 1982 (has links)
No description available.
522

An interactive computer graphics package for power system analysis based on two-dimensional projections on the voltage space /

Chan, John Tak Yan January 1987 (has links)
No description available.
523

Microcoding the AMD 2900 bit-slice microprocessor of the graphics real-time animation display system

Chau, Dominic Wah Yan. January 1984 (has links)
No description available.
524

Bodily Expression of Emotions in Animated Pedagogical Agents

Zachary R Meyer (11205522) 29 July 2021 (has links)
The goal of this research is to identify key affective body gestures that can clearly convey four emotions, namely happy, content, bored, and frustrated, in animated characters that lack facial features. Two studies were conducted: the first to identify affective body gestures from a series of videos, and the second to validate the gestures as representative of the four emotions. Videos were created using motion capture data of four actors portraying the four targeted emotions and mapping the data to two 3D character models, one male and one female. In the first study the researcher identified body gestures that are commonly produced by individuals when they experience each of the four emotions being tested. Each body gesture was then annotated with descriptions of the movements using the FABO database. In the second study the researcher tested four sets of identified body gestures, one set for each emotion. The animated gestures were mapped to the 3D character models, and 91 participants were asked to identify the emotional state conveyed by the characters through the body gestures. The participants were also asked to rate intensity, typicality, and sincerity for each emotion using a 5-point Likert scale. The study identified six gestures that were shown to have an acceptable recognition rate of at least 80% for three of the four emotions tested. Content was the only emotion that was not conveyed clearly by the identified body gestures. The gender of the character and the participants' age were found to have a significant effect on recognition rates for the emotions.
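The abstract reports per-emotion recognition rates against an 80% acceptability threshold. The following is a minimal sketch of how such rates could be tallied from participant responses; the function names and the sample data are illustrative assumptions, not the study's actual protocol or results.

```python
# Hypothetical sketch: tally per-emotion recognition rates from
# (intended, chosen) response pairs and flag the 80% threshold the
# abstract cites. Sample data below is made up for illustration only.
from collections import Counter

EMOTIONS = ["happy", "content", "bored", "frustrated"]
THRESHOLD = 0.80  # acceptability threshold mentioned in the abstract

def recognition_rates(responses):
    """responses: iterable of (intended_emotion, chosen_emotion) pairs."""
    shown = Counter(intended for intended, _ in responses)
    correct = Counter(intended for intended, chosen in responses
                      if intended == chosen)
    return {e: correct[e] / shown[e] for e in EMOTIONS if shown[e]}

# Illustrative example with fabricated answers from a few participants.
sample = [("happy", "happy"), ("bored", "bored"), ("content", "happy"),
          ("frustrated", "frustrated"), ("happy", "happy"), ("content", "bored")]
for emotion, rate in recognition_rates(sample).items():
    flag = "acceptable" if rate >= THRESHOLD else "below threshold"
    print(f"{emotion}: {rate:.0%} ({flag})")
```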
525

Real-Time Motion Transition by Example

Egbert, Cameron Quinn 10 November 2005 (has links) (PDF)
Motion transitioning is a common task in real-time applications such as games. While most character motions can be created a priori using motion capture or hand animation, transitions between these motions must be created by an animation system at runtime. Because of this requirement, it is often difficult to create a transition that preserves the feel that the actor or animator has put into the motion. An additional difficulty is that transitions must be created in real time. This paper provides a method of creating motion transitions that is computationally feasible at interactive speeds and preserves the feel of the original motions. To do this, we build the transition from both a procedural motion and a motion segment taken from the motions being transitioned between.
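To make the runtime-transition idea concrete, here is a minimal sketch of blending two poses over a fixed transition window with an ease-in/ease-out weight. It illustrates the general shape of a real-time transition, not the paper's specific example-based method; the joint representation (one angle per joint) and the smoothstep weighting are assumptions for the sketch.

```python
# Minimal sketch: blend each joint of a source pose toward a target pose
# over a transition window using a smoothstep weight. This is a generic
# runtime transition, not the thesis's example-based technique.
def smoothstep(t):
    """Ease-in/ease-out weight for t in [0, 1]."""
    t = max(0.0, min(1.0, t))
    return t * t * (3.0 - 2.0 * t)

def blend_pose(pose_a, pose_b, t):
    """pose_a, pose_b: dicts of joint name -> rotation angle (radians)."""
    w = smoothstep(t)
    return {joint: (1.0 - w) * pose_a[joint] + w * pose_b[joint]
            for joint in pose_a}

# Illustrative use: 30-frame transition between two hypothetical poses.
walk_end  = {"hip": 0.10, "knee": 0.40, "ankle": -0.15}
run_start = {"hip": 0.35, "knee": 0.80, "ankle": -0.05}
frames = 30
for i in range(frames + 1):
    pose = blend_pose(walk_end, run_start, i / frames)
    # pose would be applied to the character skeleton for this frame
print(pose)  # final frame matches run_start
```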
526

Material Appearance Modeling for Physically Based Rendering

Benamira, Alexis 01 January 2023 (has links) (PDF)
Photorealistic rendering focuses on creating computer-generated images that imitate pictures of real-life scenes as faithfully as possible. To achieve this, rendering algorithms must incorporate accurate models of how light interacts with various types of matter. For most objects, this model needs to account for the scattering of light rays. However, it falls short when rendering objects whose size is smaller than or comparable to the wavelength of the incident light. In this case, phenomena such as diffraction and interference arise, and these have been well characterized in optics. Digital rendering of those phenomena involves light representations different from the approximate ray-optics model traditionally used in rendering. The first part of this work is dedicated to creating analytical models for appearance phenomena that occur when light interacts with small structures, namely hair fibers, thin-film coatings, and quantum dots. The second part focuses on measured material appearance models and on finding a parametrization of appearance that can be used for editing.
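As a concrete illustration of the wave-optics effects the abstract mentions, here is a minimal sketch of classical single-layer thin-film interference reflectance at normal incidence (the standard Airy summation from optics). It is not the dissertation's model; the refractive indices and film thickness below are illustrative assumptions.

```python
# Minimal sketch: reflectance of a single thin film (index n2) between
# media n1 and n3 at normal incidence, via the Airy summation. The
# wavelength dependence of R is what produces thin-film colors.
import math
import cmath

def thin_film_reflectance(wavelength_nm, n1, n2, n3, thickness_nm):
    """Reflectance of a film of given index and thickness at normal incidence."""
    r12 = (n1 - n2) / (n1 + n2)          # amplitude reflection at top interface
    r23 = (n2 - n3) / (n2 + n3)          # amplitude reflection at bottom interface
    phase = 4.0 * math.pi * n2 * thickness_nm / wavelength_nm
    e = cmath.exp(1j * phase)            # round-trip phase factor inside the film
    r = (r12 + r23 * e) / (1.0 + r12 * r23 * e)
    return abs(r) ** 2

# Example: a ~300 nm soap-like film (n ~ 1.33) in air, sampled at three
# visible wavelengths; values are for illustration only.
for wl in (450.0, 550.0, 650.0):
    print(wl, round(thin_film_reflectance(wl, 1.0, 1.33, 1.0, 300.0), 4))
```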
527

Signal Processing Approaches for Appearance Matching

Scoggins, Randy Keith 10 May 2003 (has links)
The motivation for this work is to study methods of estimating appropriate level-of-detail (LoD) object models by quantifying appearance errors prior to image synthesis. Visualization systems have been developed that employ LoD objects; however, the criteria are often based on heuristics that restrict the form of the object model and rendering method. Also, object illumination is not considered in the LoD selection. This dissertation proposes an image-based scene-learning pre-process to determine an appropriate LoD for each object in a scene. Scene learning employs sample images of an object, from many views and with a range of geometric representations, to produce a profile of the LoD image error as a function of viewing distance. Signal processing techniques are employed to quantify how images change with respect to object model resolution, viewing distance, and lighting direction. A frequency-space analysis is presented which uses the vision system's contrast sensitivity to evaluate perceptible image differences with error metrics. The initial development of scene learning is directed at sampling the object's appearance as a function of viewing distance and object geometry in scene space. A second phase allows local lighting to be incorporated in the scene-learning pre-process. Two methods for re-lighting are presented that differ in accuracy and overhead; both allow properties of an object's image to be computed without rendering. In summary, full-resolution objects produce the best image since the 3D scene is as real as possible. A less realistic 3D scene with simpler objects produces a different appearance in an image, but by what amount? My thesis is that such a quantification can be had: namely, that object fidelity in the 3D scene can be loosened further than has previously been shown without introducing significant appearance change in an object, and that the relationship between 3D object realism and appearance can be expressed quantitatively.
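The abstract describes weighting frequency-space image differences by the human contrast sensitivity function (CSF) to obtain a perceptual error metric. Below is a minimal sketch of that general idea, assuming the Mannos-Sakrison CSF approximation and an arbitrary frequency normalization; it is a generic illustration of the approach, not the dissertation's implementation.

```python
# Minimal sketch: CSF-weighted frequency-space error between a
# full-resolution rendering and a lower-LoD rendering (grayscale arrays
# in [0, 1]). The CSF model and the cycles/degree mapping are assumptions.
import numpy as np

def csf_mannos_sakrison(f):
    """Contrast sensitivity vs. spatial frequency f (cycles per degree)."""
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def perceptual_error(img_full, img_lod, max_cycles_per_degree=32.0):
    """CSF-weighted magnitude of the spectrum of the difference image."""
    diff = np.fft.fftshift(np.fft.fft2(img_full - img_lod))
    h, w = diff.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h))[:, None]   # cycles per pixel
    fx = np.fft.fftshift(np.fft.fftfreq(w))[None, :]
    freq = np.sqrt(fx ** 2 + fy ** 2) * 2.0 * max_cycles_per_degree
    weighted = np.abs(diff) * csf_mannos_sakrison(freq)
    return weighted.sum() / (h * w)

# Illustrative use with synthetic images standing in for two renderings.
rng = np.random.default_rng(0)
full = rng.random((64, 64))
lod = full + 0.05 * rng.standard_normal((64, 64))  # stand-in for a simpler model's render
print(perceptual_error(full, lod))
```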
528

Computer display of time variant functions /

Gómez, Julian E. January 1985 (has links)
No description available.
529

Three dimensional computer graphics animation : a tool for spatial skill instruction /

Zavotka, Susan January 1985 (has links)
No description available.
530

The graphics symbiosis system : an interactive mini-computer animation graphics language designed for habitability and extensibility /

De Fanti, Thomas Albert January 1973 (has links)
No description available.
