841

Facial modelling and animation trends in the new millennium: a survey

Radovan, Mauricio 11 1900 (has links)
M.Sc (Computer Science) / Facial modelling and animation is considered one of the most challenging areas of animation. Since Parke and Waters's (1996) comprehensive book, no major work encompassing the entire field of facial animation has been published. This thesis covers Parke and Waters's work while also surveying the developments in the field since 1996. It describes, analyses, and compares (where applicable) the existing techniques and practices used to produce facial animation. Where applicable, related techniques are grouped in the same chapter and described chronologically, outlining their differences as well as their advantages and disadvantages. The thesis concludes with exploratory work towards a talking head for Northern Sotho: facial animation and lip synchronisation of a fragment of Northern Sotho are carried out using software tools primarily designed for English. / Computing
842

Modeling the performance of many-core programs on GPUs with advanced features

Pei, Mo Mo January 2012 (has links)
University of Macau / Faculty of Science and Technology / Department of Computer and Information Science
843

Simulation of characters with natural interactions

Ye, Yuting 23 February 2012 (has links)
The goal of this thesis is to synthesize believable motions of a character interacting with its surroundings and manipulating objects through physical contacts and forces. Human-like autonomous avatars are in increasing demand in areas such as entertainment, education, and health care. Yet modeling the basic human motor skills of locomotion and manipulation remains a long-standing challenge in animation research. The seemingly simple tasks of navigating an uneven terrain or grasping cups of different shapes involve planning with complex kinematic and physical constraints as well as adaptation to unexpected perturbations. Moreover, natural movements exhibit unique personal characteristics that are complex to model. Although motion capture technologies allow virtual actors to use recorded human motions in many applications, the recorded motions are not directly applicable to tasks involving interactions for two reasons. First, the acquired data cannot be easily adapted to new environments or different task goals. Second, acquisition of accurate data is still a challenge for fine scale object manipulations. In this work, we utilize data to create natural-looking animations, and mitigate data deficiency with physics-based simulations and numerical optimizations. We develop algorithms based on a single reference motion for three types of control problems. The first problem focuses on motions without contact constraints. We use joint torque patterns identified from the captured motion to simulate responses and recovery in the same style under unexpected pushes. The second problem focuses on locomotion with foot contacts. We use contact forces to control an abstract dynamic model of the center of mass, which sufficiently describes the locomotion task in the input motion. Simulation of the abstract model under unexpected pushes or anticipated changes of the environment results in responses consistent with both the laws of physics and the style of the input.
The third problem focuses on fine scale object manipulation tasks, in which accurate finger motions and contact information are not available. We propose a sampling method to discover contact relations between the hand and the object from only the gross motion of the wrists and the object. We then use the abundant contact constraints to synthesize detailed finger motions. The algorithm creates finger motions of various styles for a diverse set of object shapes and tasks, including ones that are not present at capture time. The three algorithms together control an autonomous character with dexterous hands to interact naturally with a virtual world. Our methods are general and robust across character structures and motion contents when testing on a wide variety of motion capture sequences and environments. The work in this thesis brings closer the motor skills of a virtual character to its human counterpart. It provides computational tools for the analysis of human biomechanics, and can potentially inspire the design of novel control algorithms for humanoid robots.
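The abstract centre-of-mass idea above lends itself to a minimal sketch: a point mass held near a reference position by a PD-style balance force, disturbed by a brief push, and recovering afterwards. Every name, gain, and magnitude here is an illustrative assumption, not a value from the thesis:

```python
def simulate_com(steps=500, dt=0.01, push=(50.0, 20, 5)):
    """Point-mass centre-of-mass sketch: a PD-style balance force pulls
    the COM toward a reference position while an external push acts for
    a few timesteps.  Semi-implicit Euler integration; returns the COM
    trajectory.  All gains and magnitudes are illustrative, not from
    the thesis."""
    x, v = 0.0, 0.0                        # horizontal COM position / velocity
    x_ref = 0.0                            # reference from the input motion
    kp, kd, mass = 400.0, 120.0, 70.0      # hypothetical controller gains
    force, start, duration = push          # push magnitude (N), onset, length
    trajectory = []
    for i in range(steps):
        f = kp * (x_ref - x) - kd * v      # balance force toward the reference
        if start <= i < start + duration:  # the unexpected push
            f += force
        v += (f / mass) * dt               # semi-implicit Euler: velocity first
        x += v * dt
        trajectory.append(x)
    return trajectory
```

Semi-implicit Euler is used because it keeps this lightly damped system stable at large timesteps; the trajectory deflects during the push and settles back toward the reference.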
844

Procedural Reduction Maps

Van Horn, R. Brooks, III 16 January 2007 (has links)
Procedural textures and image textures are commonplace in graphics today, finding uses in such places as animated movies and video games. Unlike image texture maps, procedural textures typically suffer from minification aliasing. I present a method that, given a procedural texture on a surface, automatically creates an anti-aliased version of the procedural texture. The new procedural texture maintains the original texture's details, but reduces minification aliasing artifacts. This new algorithm creates an image pyramid similar to MIP-maps to represent the texture. Whereas a MIP-map stores per-texel color, however, my texture hierarchy stores weighted sums of reflectance functions, allowing a wider range of effects to be anti-aliased. The stored reflectance functions are automatically selected based on an analysis of the different functions found over the surface. When the texture is viewed at close range, the original texture is used, but as the texture footprint grows, the algorithm gradually replaces the texture's result with an anti-aliased one. This results in faster development time for writing procedural textures as well as higher visual fidelity and faster rendering. With the optional addition of authoring guidelines, the analysis phase can be sped up by as much as two orders of magnitude. Furthermore, I developed a method for handling pre-filtered integration of reflectance functions to anti-alias specular highlights. The normal-centric BRDF (NBRDF) allows for fast evaluation over a range of normals appearing on the surface of an object. The NBRDF is easy to implement on the GPU for real-time results and can be combined with procedural reduction maps for real-time procedural texture minification anti-aliasing.
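The MIP-map structure the reduction maps build on can be sketched as an ordinary image pyramid built by repeated 2x2 box filtering. Note the simplification: this sketch averages scalar texel values, whereas the thesis stores weighted sums of reflectance functions per level:

```python
def build_mip_pyramid(texture):
    """Build a MIP-map-style pyramid by repeated 2x2 box filtering.
    `texture` is a square 2^n x 2^n list-of-lists of grayscale values;
    each coarser level averages four texels of the finer one.  (The
    thesis' reduction maps store weighted sums of reflectance functions
    per level instead of plain colours -- this sketch only shows the
    pyramid structure.)"""
    levels = [texture]
    current = texture
    while len(current) > 1:
        n = len(current) // 2
        coarser = [[(current[2 * i][2 * j] + current[2 * i][2 * j + 1] +
                     current[2 * i + 1][2 * j] + current[2 * i + 1][2 * j + 1]) / 4.0
                    for j in range(n)] for i in range(n)]
        levels.append(coarser)
        current = coarser
    return levels
```

At render time, the texture footprint selects a level: fine levels near the viewer, coarse (pre-filtered) levels as the footprint grows, which is exactly where minification aliasing would otherwise appear.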
845

Topology Control of Volumetric Data

Vanderhyde, James 06 July 2007 (has links)
Three-dimensional scans and other volumetric data sources often result in representations that are more complex topologically than the original model. The extraneous critical points, handles, and components are called topological noise. Many algorithms in computer graphics require simple topology in order to work optimally, including texture mapping, surface parameterization, flows on surfaces, and conformal mappings. The topological noise disrupts these procedures by requiring each small handle to be dealt with individually. Furthermore, topological descriptions of volumetric data are useful for visualization and data queries. One such description is the contour tree (or Reeb graph), which depicts when the isosurfaces split and merge as the isovalue changes. In the presence of topological noise, the contour tree can be too large to be useful. For these reasons, an important goal in computer graphics is simplification of the topology of volumetric data. The key to this thesis is that the global topology of volumetric data sets is determined by local changes at individual points. Therefore, we march through the data one grid cell at a time, and for each cell, we use a local check to determine if the topology of an isosurface is changing. If so, we change the value of the cell so that the topology change is prevented. In this thesis we describe variations on the local topology check for use in different settings. We use the topology simplification procedure to extract a single component with controlled topology from an isosurface in volume data sets and partially-defined volume data sets. We also use it to remove critical points from three-dimensional volumes, as well as time-varying volumes. We have applied the technique to two-dimensional (plus time) data sets and three-dimensional (plus time) data sets.
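The local check can be illustrated in two dimensions with the classical "simple point" criterion: flipping a cell cannot change the foreground's topology if its 8-neighbourhood contains exactly one foreground component (8-connectivity) and one background component (4-connectivity) adjacent to it. The thesis applies the analogous test to 3D isosurface cells; this 2D sketch is only illustrative:

```python
def local_topology_check(grid, r, c):
    """2D 'simple point' test: True if flipping cell (r, c) cannot change
    the topology of the foreground.  Foreground neighbours must form one
    8-connected component and background neighbours one 4-connected
    component touching an edge neighbour.  (The thesis uses the analogous
    3D check while marching through the volume.)"""
    ring = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
            (0, 1), (1, -1), (1, 0), (1, 1)]

    def value(dr, dc):
        rr, cc = r + dr, c + dc
        if 0 <= rr < len(grid) and 0 <= cc < len(grid[0]):
            return grid[rr][cc]
        return 0  # outside the grid counts as background

    def count_components(target, four_connected, must_touch_edge):
        cells = [p for p in ring if value(*p) == target]
        seen, comps = set(), 0
        for start in cells:
            if start in seen:
                continue
            stack, comp = [start], []     # flood-fill one component
            seen.add(start)
            while stack:
                p = stack.pop()
                comp.append(p)
                for q in cells:
                    if q in seen:
                        continue
                    d = (abs(p[0] - q[0]), abs(p[1] - q[1]))
                    adjacent = d in ((0, 1), (1, 0)) if four_connected else max(d) == 1
                    if adjacent:
                        seen.add(q)
                        stack.append(q)
            # background components must touch a 4-neighbour of the centre
            if not must_touch_edge or any(abs(p[0]) + abs(p[1]) == 1 for p in comp):
                comps += 1
        return comps

    fg = count_components(1, four_connected=False, must_touch_edge=False)
    bg = count_components(0, four_connected=True, must_touch_edge=True)
    return fg == 1 and bg == 1
```

A cell in the middle of a one-pixel-wide line fails the check (flipping it would split a component), while a corner cell of a solid block passes.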
847

Computer Graphics Primitives and the Scan-Line Algorithm

Myjak, Michael D. (Michael David) 12 1900 (has links)
This paper presents the scan-line algorithm which has been implemented on the Lisp Machine. The scan-line algorithm resides beneath a library of primitive software routines which draw more fundamental objects: lines, triangles and rectangles. This routine, implemented in microcode, applies the A(BC)*D approach to word boundary alignments in order to create an extremely fast, efficient, and general purpose drawing primitive. The scan-line algorithm improves on previous methodologies by limiting the number of CPU intensive instructions and by minimizing the number of words referenced. This paper describes how to draw scan-lines and the constraints imposed upon the scan-line algorithm by the Lisp Machine's hardware and software.
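The core scan-line idea, independent of the Lisp Machine microcode described above, is to intersect each horizontal scan-line with the primitive's edges and fill the span between crossings. A minimal triangle rasteriser in that style (the rounding and edge conventions here are illustrative, not the paper's):

```python
def scanline_triangle(v0, v1, v2):
    """Rasterise a triangle with the scan-line approach: for each integer
    scan-line y, intersect the triangle's non-horizontal edges with the
    line, then fill the span between the leftmost and rightmost crossings.
    Returns the set of filled (x, y) pixels.  Fill conventions are
    illustrative simplifications."""
    pixels = set()
    ys = [v0[1], v1[1], v2[1]]
    edges = [(v0, v1), (v1, v2), (v2, v0)]
    for y in range(min(ys), max(ys) + 1):
        xs = []
        for (x0, y0), (x1, y1) in edges:
            if y0 == y1:
                continue                       # horizontal edges never cross
            if min(y0, y1) <= y < max(y0, y1): # half-open rule avoids double hits
                t = (y - y0) / (y1 - y0)       # parametric edge/scan-line crossing
                xs.append(x0 + t * (x1 - x0))
        if len(xs) >= 2:
            for x in range(round(min(xs)), round(max(xs)) + 1):
                pixels.add((x, y))
    return pixels
```

The half-open interval on each edge is what keeps a vertex shared by two edges from being counted as two crossings, the classic correctness trap in scan-line fills.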
848

Visual Interpretation to Uncertainties in 2D Embedding from Probabilistic-Based Non-Linear Dimensionality Reduction Methods

Junhan Zhao (11024559) 25 June 2021 (has links)
Enabling human understanding of high-dimensional (HD) data is critical for scientific research but highly challenging. For large datasets, probabilistic non-linear DR models such as UMAP and t-SNE lead in performance at reducing high dimensionality. However, given the trade-off between global and local structure preservation and the randomness of their initialisation, applying non-linear models with different parameter settings to data of unknown high-dimensional structure may return different 2D visual forms. Critical neighbourhood relationships may be falsely imposed, and uncertainty, so-called distortion, may be introduced into the low-dimensional embedding visualizations. In this work, a survey has been conducted to review the state-of-the-art layout enrichment work on interpreting dimensionality reduction methods and results. Responding to the lack of visual interpretation techniques for probabilistic DR methods, we propose a visualization technique called ManiGraph, which lets users explore multi-view 2D embeddings via mesoscopic structure graphs. A dynamic mesoscopic structure first subsets the HD data with a hexagonal grid in the visual space of a non-linear embedding (e.g., UMAP). It then measures regionally adapted trustworthiness/continuity and visualizes the restored missing connections and highlighted false connections between subsets, from high-dimensional to low-dimensional space, in a node-link manner. The visualization helps users understand and interpret the distortion arising from both the visualization and model stages. We further demonstrate use cases on intuitive 3D toy datasets, Fashion-MNIST, and single-cell RNA sequencing data with domain experts in unsupervised scenarios. This work will potentially benefit the data science community, from toolkit users to DR algorithm developers.
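The trustworthiness measure that ManiGraph adapts per region can be sketched in its plain global form (Venna & Kaski): it penalises "intruders", points among the k nearest neighbours of i in the embedding that are not among its k nearest neighbours in the original space. Brute-force ranks are used here for clarity; this is the textbook definition, not the thesis' regional variant:

```python
def trustworthiness(hd_points, embedded, k):
    """Global trustworthiness of an embedding: 1 minus a normalised
    penalty over 'intruder' points that enter a 2D k-NN neighbourhood
    without being HD k-NN.  Brute-force O(n^2 log n) ranks for clarity.
    (ManiGraph measures a regionally adapted version of this and of
    the symmetric continuity measure.)"""
    n = len(hd_points)

    def ranks(points, i):
        # neighbours of i sorted by squared Euclidean distance; 0 = nearest
        order = sorted(range(n), key=lambda j: sum((a - b) ** 2
                       for a, b in zip(points[i], points[j])))
        order.remove(i)
        return {j: r for r, j in enumerate(order)}

    penalty = 0.0
    for i in range(n):
        hd_rank = ranks(hd_points, i)
        emb_rank = ranks(embedded, i)
        for j, r in emb_rank.items():
            if r < k and hd_rank[j] >= k:          # intruder in the 2D k-NN
                penalty += hd_rank[j] - (k - 1)    # 0-based rank excess over k
    return 1.0 - 2.0 * penalty / (n * k * (2 * n - 3 * k - 1))
```

A perfect embedding scores exactly 1.0; any falsely imposed neighbourhood relationship (the "distortion" discussed above) pulls the score below 1.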
849

Visual Literacy in Computer Culture: Reading, Writing, and Drawing Logo Turtle Graphics

Horn, Carin E. 08 1900 (has links)
This study seeks to explore relationships between Logo turtle graphics and visual literacy by addressing two related questions: (a) can traditional visual literacy concepts, as found in the published literature, be synthesized in terms of Logo turtle graphics, and (b) do the literature and "hands-on" experience with turtle graphics indicate that visual competencies are pertinent to graphics-based electronic communications in computer culture? The findings of this research illustrate that Logo turtle graphics is a self-contained model to teach visual literacy skills pertinent to computer culture. This model is drawn from synthesizing published literature and the classroom experience of Logo learners, which is demonstrated through their visual solutions to Logo assignments. A visual analysis and interpretation of the subjects' work concludes that the principles and competencies associated with traditional visual literacy skills manifest during the Logo turtle graphics experience. The subjects of this study demonstrate that visual literacy pertinent to computer culture includes reading, writing, and drawing alphanumerics and pictographic information with linguistic equivalence. The logic for this symbolic metaphor is body-syntonic spatial experience explained in geometric terms. The Logo learner employs computational models for visual ideas and visual-verbal symbols for spatial ideas in the course of doing turtle graphics.
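The body-syntonic geometry discussed above is easy to make concrete: a Logo program is a sequence of movement and turn commands whose linguistic reading and drawn output are equivalent. A tiny stand-in interpreter for a Logo subset (the command set and conventions here are a simplification; real Logo headings start north, while this sketch starts east):

```python
import math

def run_turtle(program):
    """Interpret a tiny Logo subset: FORWARD <n> moves the turtle along
    its heading, RIGHT <deg> / LEFT <deg> rotate it.  Returns the list
    of (x, y) points visited -- the drawn path.  A minimal illustrative
    stand-in for the Logo environment, starting at the origin facing
    east (real Logo faces north)."""
    x, y, heading = 0.0, 0.0, 0.0
    path = [(x, y)]
    for line in program.strip().splitlines():
        cmd, arg = line.split()
        if cmd.upper() == "FORWARD":
            x += float(arg) * math.cos(math.radians(heading))
            y += float(arg) * math.sin(math.radians(heading))
            path.append((round(x, 6), round(y, 6)))
        elif cmd.upper() == "RIGHT":
            heading -= float(arg)      # clockwise turn
        elif cmd.upper() == "LEFT":
            heading += float(arg)      # counter-clockwise turn
    return path
```

The canonical beginner exercise, four FORWARD/RIGHT 90 pairs, draws a square and returns the turtle to where it started, which is exactly the geometric-linguistic equivalence the study examines.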
850

Machine Learning for 3D Visualisation Using Generative Models

Taif, Khasrouf M.M. January 2020 (has links)
One of the highlights of deep learning in the past ten years is the introduction of generative adversarial networks (GANs), which have achieved great success in generating images comparable to real photographs with minimal human intervention. These networks can generalise to a multitude of desired outputs, especially in image-to-image problems and image synthesis. This thesis proposes a computer graphics pipeline for 3D rendering that utilises generative adversarial networks (GANs). It is motivated by regression models and convolutional neural networks (ConvNets) such as U-Net architectures, which can be directed to generate realistic global illumination effects, using a semi-supervised GAN model (Pix2pix) comprising a PatchGAN and a conditional GAN accompanied by a U-Net structure. Pix2pix was chosen for this thesis for its trainability and the quality of its output images. It also differs from other GANs in utilising colour labels, which enables further control and consistency over the geometries that comprise the output image. A series of experiments was carried out with laboratory-created image sets to investigate how deep learning and generative adversarial networks can enhance the pipeline and speed up the 3D rendering process. First, a ConvNet is applied in combination with a Support Vector Machine (SVM) to pair 3D objects with their corresponding shadows, which can be applied in Augmented Reality (AR) scenarios. Second, a GAN approach is presented to generate shadows for non-shadowed 3D models, which can also be beneficial in AR scenarios. Third, the possibility of generating high-quality renders of image sequences from low-polygon-density 3D models using GANs is explored. Finally, the possibility of enhancing the visual coherence of the GAN's output image sequences by utilising multi-colour labels is examined.
The adopted GAN model generated realistic outputs comparable to the lab-generated 3D-rendered ground truth and control-group output images, with plausible scores on the PSNR and SSIM similarity metrics.
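The PSNR score used to compare GAN output against the rendered ground truth has a simple closed form: 10·log10(MAX²/MSE). A minimal sketch of that metric over plain 2D intensity arrays (SSIM, which also appears above, is structurally more involved and is not sketched here):

```python
import math

def psnr(reference, output, max_value=255.0):
    """Peak signal-to-noise ratio between a ground-truth image and a
    generated one: 10 * log10(MAX^2 / MSE).  Images are same-sized 2D
    lists of pixel intensities; identical images score infinity."""
    mse, count = 0.0, 0
    for row_ref, row_out in zip(reference, output):
        for a, b in zip(row_ref, row_out):
            mse += (a - b) ** 2
            count += 1
    mse /= count
    if mse == 0:
        return float("inf")   # identical images: no noise at all
    return 10.0 * math.log10(max_value ** 2 / mse)
```

Higher is better; scores above roughly 30 dB are conventionally read as close reconstructions for 8-bit images.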
