11 |
Pixelating Vector Art. Inglis, Tiffany C. January 2014 (has links)
Pixel art is a popular style of digital art often found in video games, typically characterized by its low resolution and limited colour palettes. Pixel art is created manually with little automation because it requires attention to pixel-level detail. Working with individual pixels is a challenging and abstract task, whereas manipulating higher-level objects in vector graphics is much more intuitive. Bridging this gap is difficult: although many rasterization algorithms exist, they are not well suited to the particular needs of pixel artists, especially at low resolutions. In this thesis, we introduce a class of rasterization algorithms, called pixelation, tailored to pixel art. We describe how our algorithm suppresses artifacts when pixelating vector paths and preserves shape-level features when pixelating geometric primitives. We also develop methods inspired by pixel art for drawing lines and angles more effectively at low resolutions. We compare our results against standard rasterization algorithms, rasterizers used in commercial software, and human subjects, both amateurs and pixel artists. Through formal analyses of our user studies and a close collaboration with professional pixel artists, we show that, in general, our pixelation algorithms produce more visually appealing results than naïve rasterization algorithms do.
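The naïve rasterization baseline that the thesis compares against can be illustrated with a standard Bresenham-style line pixelation; the code below is a generic sketch of that baseline, not the thesis's algorithm, and the function name is ours.

```python
def pixelate_line(x0, y0, x1, y1):
    """Naive Bresenham rasterization of a line segment into pixels.

    At low resolutions this baseline produces the staircase artifacts
    that pixel-art-aware pixelation algorithms try to suppress.
    """
    pixels = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        pixels.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:          # step horizontally
            err += dy
            x0 += sx
        if e2 <= dx:          # step vertically
            err += dx
            y0 += sy
    return pixels
```

For a shallow line such as (0,0)–(3,1) this yields the uneven pixel runs that pixel artists typically redraw by hand.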
|
12 |
A hybrid real-time visible surface solution for rays with a common origin and arbitrary directions. Johnson, Gregory Scott, January 1900 (has links)
Thesis (Ph. D.)--University of Texas at Austin, 2008. / Vita. Includes bibliographical references.
|
13 |
Occlusion-resolving direct volume rendering / Mak, Wai Ho. January 2009 (has links)
Thesis (M.Phil.)--Hong Kong University of Science and Technology, 2009. / Includes bibliographical references (p. 53-57).
|
14 |
Real-time rendering of synthetic terrain. McRoberts, Duncan Andrew Keith 07 June 2012 (has links)
M.Sc. / Real-time terrain rendering (RTTR) is an exciting field in computer graphics. The algorithms and techniques developed in this domain allow immersive virtual environments to be created for interactive applications. Many difficulties are encountered in this field of research, including acquiring the data to model virtual worlds, handling huge amounts of geometry, and texturing landscapes that appear to go on forever. RTTR has been widely studied, and powerful methodologies have been developed to overcome many of these obstacles. Complex natural terrain features such as detailed vertical surfaces, overhangs and caves, however, are not easily supported by the majority of existing algorithms, making it difficult to add such detail to a landscape. Existing techniques are incredibly efficient at rendering elevation data, where for any given position on a 2D horizontal plane there is exactly one altitude value. In this case we have a many-to-one mapping between 2D position and altitude: many 2D coordinates may map to one altitude value, but any single 2D coordinate maps to one and only one altitude. To support the features mentioned above we need to allow a many-to-many mapping. For example, with a cave feature a given 2D coordinate would have elevation values for the floor, the roof and the outer ground. In this dissertation we build upon established techniques to allow for this many-to-many mapping, and thereby add support for complex terrain features. The many-to-many mapping is made possible by using geometry images in place of height-maps. Another common problem with existing RTTR algorithms is texture distortion. Texturing is an inexpensive means of adding detail to rendered terrain, but many existing techniques map texture coordinates in 2D, leading to distortion on steep surfaces. Our research attempts to reduce texture distortion in such situations by allowing a more even spread of texture coordinates.
Geometry images make this possible as they allow for a more even distribution of sample positions. Additionally, we devise a novel means of blending tiled textures that enhances the important features of the individual textures. Fully sampled terrain employs a single global texture that covers the entire landscape. This technique provides great detail, but requires a huge volume of data. Tiled texturing requires comparatively little data, but suffers from disturbing regular patterns. We seek to close the gap between tiled textures and fully sampled textures; in particular, we aim to reduce the regularity of tiled textures by changing the blending function. In summary, the goal of this research is twofold. Firstly, we aim to support complex natural terrain features, specifically detailed vertical surfaces, overhangs and caves. Secondly, we wish to improve terrain texturing by reducing texture distortion and by blending tiled textures together in a manner that appears more natural. We have developed a level-of-detail algorithm which operates on geometry images, and a new texture blending technique, to support these goals.
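The many-to-one versus many-to-many distinction can be made concrete with a small sketch; the data layout below is illustrative only (the dissertation's geometry images are full GPU textures, not Python lists).

```python
# Height-map: each 2D grid cell maps to exactly one altitude (many-to-one).
heightmap = [[0.0, 1.0],
             [2.0, 3.0]]

def sample_heightmap(hm, i, j):
    """Return the single altitude stored at grid cell (i, j)."""
    return hm[i][j]

# Geometry image: each texel stores a full (x, y, z) surface position, so
# several texels may share the same 2D footprint (many-to-many). Here two
# texels sit at the same (x, y) = (0.5, 0.5): a cave floor and the roof
# above it -- a configuration a height-map cannot express.
geometry_image = [
    [(0.5, 0.5, 0.0), (1.5, 0.5, 0.2)],   # floor samples
    [(0.5, 0.5, 2.0), (1.5, 0.5, 2.1)],   # roof samples above the floor
]

def altitudes_at(gi, x, y):
    """All surface heights recorded at the 2D position (x, y)."""
    return sorted(p[2] for row in gi for p in row if (p[0], p[1]) == (x, y))
```

Querying (0.5, 0.5) returns both the floor and the roof height, which is exactly the many-to-many mapping the abstract describes.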
|
15 |
Modèles de rendu et animation émotionnelle en 3D / 3D emotional rendering and animation models. Huang, Jing 26 February 2013 (has links)
L'animation et le rendu sont deux domaines de recherche importants dans l'informatique graphique. L'occlusion ambiante (OA) est un moyen très répandu pour simuler l'éclairage indirect. Nous présentons une approche rapide et facile à mettre en œuvre pour l'approximation de l'occlusion ambiante de l'espace d'affichage. On calcule l'OA pour chaque pixel en intégrant les valeurs angulaires des échantillonneurs autour de la position du pixel qui pourrait bloquer l'éclairage ambiant. Nous appliquons une méthode séparable afin de réduire la complexité du calcul. La simulation des rides expressives du visage peut être estimée sans changer l'information géométrique. Nous avons construit un modèle de rides en utilisant une technique graphique qui effectue des calculs seulement dans l'espace d'affichage. Les animations faciales sont beaucoup plus réalistes avec la présence des rides. Nous présentons une méthode de cinématique inverse rapide et facile à mettre en œuvre qui s'appuie sur un modèle masse-ressort et qui repose sur les interactions de forces entre les masses. Les interactions de forces entre les masses peuvent être vues comme un problème de minimisation de l'énergie. Elle offre une très bonne qualité visuelle en haute performance de vitesse. En se basant sur notre méthode d'IK, nous proposons un modèle de synthèse des gestes corporels expressifs intégrés dans notre plateforme d'agents conversationnels. Nous appliquons l'animation de tout le corps enrichi par l'aspect expressif. Ce système offre plus de flexibilité pour configurer la cinématique expressive directe ou indirecte. De façon globale, cette thèse présente notre travail sur le rendu et l'animation en 3D. / Animation and rendering are both important research domains in computer graphics. 
We present a fast, easy-to-implement, separable approximation to screen-space ambient occlusion. We evaluate AO for each pixel by integrating the angular values of samplers around the pixel position which potentially block the ambient lighting, and we apply a separable scheme to reduce the complexity of the evaluation. Wrinkle simulation can also be approximated without changing geometry information. We built a wrinkle model using a modern graphics technique which performs its computations only in screen space. With the help of wrinkles, facial animation can be made more realistic; several factors were evaluated, and wrinkles help action units to be recognized at a higher rate. Inverse kinematics (IK) can be used to find hierarchical posture solutions. We present a fast and easy-to-implement locally physics-based IK method. Our method builds upon a mass-spring model and relies on force interactions between masses; it offers convincing visual quality with high runtime performance. Based on our IK method, we propose an expressive body-gesture animation synthesis model for our Embodied Conversational Agent (ECA) technology. Our implementation builds upon a full-body reach model using a hybrid kinematics solution, and the generated animations can be enhanced with expressive qualities. This system offers more flexibility for configuring expressive Forward and Inverse Kinematics (FK and IK), and it can be extended to other articulated figures. Overall, this thesis presents our work in 3D rendering and animation; several new approaches are proposed to improve both quality and speed.
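The per-pixel angular integration and the separable pass structure described above can be sketched roughly as follows; the sampling pattern and normalisation are our own simplifications, not the thesis's exact formulation.

```python
import math

def ao_1d(depth, i, radius=4):
    """Approximate ambient occlusion of pixel i along one image axis.

    Neighbouring pixels closer to the camera (smaller depth) than pixel i
    subtend blocking angles; we integrate those angles over the sampling
    radius and normalise to [0, 1] (0 = fully open, 1 = fully occluded).
    """
    blocked = 0.0
    n = 0
    for step in range(1, radius + 1):
        for j in (i - step, i + step):
            if 0 <= j < len(depth):
                n += 1
                rise = depth[i] - depth[j]   # > 0 when the neighbour blocks
                if rise > 0.0:
                    blocked += math.atan2(rise, step) / (math.pi / 2)
    return blocked / n if n else 0.0

def separable_ao(depth2d, x, y, radius=4):
    """Separable approximation: average a horizontal and a vertical 1D pass."""
    row = depth2d[y]
    col = [r[x] for r in depth2d]
    return 0.5 * (ao_1d(row, x, radius) + ao_1d(col, y, radius))
```

A flat depth buffer yields zero occlusion, while a pixel at the bottom of a pit receives a positive AO value; the separable structure reduces the per-pixel cost from O(r²) to O(r).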
|
16 |
Non-photorealistic rendering with coherence for augmented reality. Chen, Jiajian 16 July 2012 (has links)
A seamless blending of the real and virtual worlds is key to increased immersion and improved user experiences for augmented reality (AR). Photorealistic and non-photorealistic rendering (NPR) are two ways to achieve this goal. Non-photorealistic rendering creates an abstract and stylized version of both the real and virtual world, making them indistinguishable. This could be particularly useful in some applications (e.g., AR/VR aided machine repair, or for virtual medical surgery) or for certain AR games with artistic stylization.
Achieving temporal coherence is a key challenge for all NPR algorithms. Rendered results are temporally coherent when each frame transitions smoothly and seamlessly to the next, without visual flickering or artifacts that distract the eye from perceived smoothness. NPR algorithms with coherence are of interest both in general computer graphics and in AR/VR. Rendering stylized AR without coherence processing makes the final results visually distracting. While various NPR algorithms with coherence support have been proposed in the general graphics community for video processing, many of them require a thorough analysis of all frames of the input video and cannot be applied directly to real-time AR applications. We have investigated existing NPR algorithms with coherence in both general graphics and AR/VR, dividing them into two categories: model space and image space. We present several NPR algorithms with coherence for AR: a watercolor-inspired NPR algorithm, a painterly rendering algorithm, and model-space NPR algorithms that support several styling effects.
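As a toy illustration of what temporal coherence means for stylized video (and not any of the algorithms presented in the thesis), per-frame stylization flicker can be damped by blending each stylized frame with the previous result:

```python
def posterize(frame, levels=4):
    """Toy per-frame stylization: quantise intensities to a few levels."""
    step = 256 // levels
    return [[(v // step) * step for v in row] for row in frame]

def stylize_coherent(frames, alpha=0.7):
    """Blend each stylized frame with the previous output.

    Exponential smoothing damps frame-to-frame flicker -- the simplest
    form of the temporal coherence that NPR video algorithms aim for,
    at the cost of some ghosting.
    """
    out = []
    prev = None
    for frame in frames:
        cur = posterize(frame)
        if prev is None:
            prev = [[float(v) for v in row] for row in cur]
        else:
            prev = [[alpha * c + (1 - alpha) * p for c, p in zip(cr, pr)]
                    for cr, pr in zip(cur, prev)]
        out.append([[round(v) for v in row] for row in prev])
    return out
```

An intensity oscillating across a quantisation boundary (127, 128, 127) flickers hard under plain posterization (64, 128, 64) but is smoothed by the blended version.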
|
17 |
GPU implementace algoritmů irradiance a radiance caching / GPU implementation of the irradiance and radiance caching algorithms. Bulant, Martin January 2015 (has links)
The objective of this work is to create software implementing two algorithms for global illumination computation: irradiance and radiance caching, implemented in the CUDA framework on the graphics card (GPU). A parallel implementation on the GPU should dramatically improve the algorithms' speed compared to a CPU implementation. The software is written using an existing framework for global illumination computation, which allows us to focus on the algorithm implementation only. This work should speed up the testing of new or existing methods for global illumination computation, because the saving and reuse of intermediate results can be applied to other algorithms too.
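Irradiance caching rests on Ward-style record weighting and interpolation, which a GPU implementation parallelizes over pixels; a minimal CPU sketch of that classic weighting (our variable names, simplified error terms, not this thesis's CUDA code) looks like:

```python
import math

def record_weight(p, n, rec):
    """Ward-style weight of a cached irradiance record at shading point p.

    rec = (position, normal, E, R), where R is the harmonic-mean distance
    to surfaces visible from the record. The weight falls off with both
    translation error (distance / R) and rotation error (normal mismatch).
    """
    pos, nrm, _E, R = rec
    dist = math.dist(p, pos)
    cos = max(-1.0, min(1.0, sum(a * b for a, b in zip(n, nrm))))
    denom = dist / R + math.sqrt(max(0.0, 1.0 - cos))
    return float('inf') if denom == 0.0 else 1.0 / denom

def interpolate_irradiance(p, n, cache, a=0.5):
    """Weighted interpolation over records with weight > 1/a.

    Returns None when no cached record is close enough, i.e. a new
    (expensive) irradiance sample would have to be computed -- the
    path the cache exists to avoid.
    """
    num = den = 0.0
    for rec in cache:
        w = record_weight(p, n, rec)
        if w > 1.0 / a:
            if math.isinf(w):
                return rec[2]        # shading point coincides with a record
            num += w * rec[2]
            den += w
    return num / den if den > 0.0 else None
```

The parameter `a` trades accuracy for cache-hit rate; smaller values force more fresh samples.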
|
19 |
Colorization in Gabor space and realistic surface rendering on GPUs. / 基於Gabor特徵空間的染色技術與真實感表面GPU繪製 / CUHK electronic theses & dissertations collection / Ji yu Gabor te zheng kong jian de ran se ji shu yu zhen shi gan biao mian GPU hui zhi. January 2011 (has links)
Based on the construction of a Gabor feature space, which is important for pixel-similarity computations, we formalize the space using rotation-invariant Gabor filter banks and apply optimizations in the texture feature space. In image colorization, pixels that have similar Gabor features receive similar colors, so our approach can colorize natural images globally, without being restricted to disjoint regions with similar texture-like appearances. Our approach supports a two-pass colorization process: coloring optimization in Gabor space, followed by color detailing for progressive effects. We further address video colorization using optimized Gabor flow computation, including coloring keyframes, color propagation by Gabor filtering, and optimized parallel computing over the video. Our video colorization is designed in a spatiotemporal manner to keep temporal coherence, and provides simple closed-form solutions in energy optimization that yield fast colorizations. Moreover, we develop parallel surface texturing of geometric models on the GPU, generating spatially-varying visual appearances. We incorporate the Gabor feature space into the search for 2D exemplars to determine the k-coherence candidate pixels. The multi-pass correction in synthesis is applied to local neighborhoods for parallel processing. The iso/aniso-scale texture synthesis leverages the strengths of GPU computing, synthesizing iso/aniso-scale texturing appearances in parallel over arbitrary surfaces. Our experimental results showed that our approach produces simply controllable texturing effects in surface synthesis, generating texture-similar and spatially-varying visual appearances with GPU-accelerated performance. / Texture feature similarity has long been a crucial and important topic in VR/graphics applications, such as image and video colorization, surface texture synthesis and geometry image applications.
Generally, image features are highly subjective, depending not only on the image pixels but also on interactive users. Existing colorization and surface texture synthesis methods pay little attention to generating conforming colors/textures that accurately reflect exemplar structures or the user's intention. Realistic surface synthesis remains a challenging task in VR/graphics research. In this dissertation, we focus on encoding Gabor filter banks into texture-feature-similarity computations and faithful GPU-parallel surface rendering, including image/video colorization, parallel texturing of geometric surfaces, and multiresolution rendering on sole-cube maps (SCMs). / We further explore GPU-based multiresolution rendering on sole-cube maps (SCMs). Our SCMs on the GPU generate adaptive mesh surfaces dynamically, and are fully parallelized for large-scale and complex VR environments. We also encapsulate differential coordinates in SCMs, reflecting local geometric characteristics for geometric modeling and interactive animation applications. In future work, we will improve the image/video feature-analysis framework in VR/graphics applications. Further work on surface texture synthesis includes interactive control of texture orientations by surface vector fields using sketch editing, so as to widen the gamut of interactive tools available to texturing artists and end users. / Sheng, Bin. / Adviser: Hanqin Sun. / Source: Dissertation Abstracts International, Volume: 73-04, Section: B, page: . / Thesis (Ph.D.)--Chinese University of Hong Kong, 2011. / Includes bibliographical references (leaves 128-142). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [201-] System requirements: Adobe Acrobat Reader.
Available via World Wide Web. / Abstract also in Chinese.
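A rotation-invariant Gabor feature of the kind the dissertation builds its feature space on can be sketched as a bank of oriented kernels whose maximal response is taken per pixel; the parameters and function names below are illustrative assumptions, not the dissertation's implementation.

```python
import math

def gabor_kernel(size, theta, lam=4.0, sigma=2.0, gamma=0.5):
    """Real-valued Gabor kernel at orientation theta (size x size, size odd)."""
    half = size // 2
    k = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xp = x * math.cos(theta) + y * math.sin(theta)
            yp = -x * math.sin(theta) + y * math.cos(theta)
            g = math.exp(-(xp * xp + gamma * gamma * yp * yp) / (2 * sigma * sigma))
            row.append(g * math.cos(2 * math.pi * xp / lam))
        k.append(row)
    return k

def gabor_response(img, kernel, cx, cy):
    """Magnitude of the filter response at pixel (cx, cy), zero-padded."""
    half = len(kernel) // 2
    s = 0.0
    for dy in range(-half, half + 1):
        for dx in range(-half, half + 1):
            y, x = cy + dy, cx + dx
            if 0 <= y < len(img) and 0 <= x < len(img[0]):
                s += img[y][x] * kernel[dy + half][dx + half]
    return abs(s)

def gabor_feature(img, cx, cy, orientations=8, size=9):
    """Rotation-invariant feature: max response over an orientation bank."""
    thetas = [math.pi * i / orientations for i in range(orientations)]
    return max(gabor_response(img, gabor_kernel(size, t), cx, cy) for t in thetas)
```

Taking the maximum over the orientation bank makes the feature invariant to rotations of the local texture, which is what lets similarity be compared between regions with the same pattern at different orientations.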
|
20 |
Video based dynamic scene analysis and multi-style abstraction. January 2008 (has links)
Tao, Chenjun. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2008. / Includes bibliographical references (leaves 89-97). / Abstracts in English and Chinese. / Abstract --- p.i / Acknowledgements --- p.iii / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Window-oriented Retargeting --- p.1 / Chapter 1.2 --- Abstraction Rendering --- p.4 / Chapter 1.3 --- Thesis Outline --- p.6 / Chapter 2 --- Related Work --- p.7 / Chapter 2.1 --- Video Migration --- p.8 / Chapter 2.2 --- Video Synopsis --- p.9 / Chapter 2.3 --- Periodic Motion --- p.14 / Chapter 2.4 --- Video Tracking --- p.14 / Chapter 2.5 --- Video Stabilization --- p.15 / Chapter 2.6 --- Video Completion --- p.20 / Chapter 3 --- Active Window Oriented Video Retargeting --- p.21 / Chapter 3.1 --- System Model --- p.21 / Chapter 3.1.1 --- Foreground Extraction --- p.23 / Chapter 3.1.2 --- Optimizing Active Windows --- p.27 / Chapter 3.1.3 --- Initialization --- p.29 / Chapter 3.2 --- Experiments --- p.32 / Chapter 3.3 --- Summary --- p.37 / Chapter 4 --- Multi-Style Abstract Image Rendering --- p.39 / Chapter 4.1 --- Abstract Images --- p.39 / Chapter 4.2 --- Multi-Style Abstract Image Rendering --- p.42 / Chapter 4.2.1 --- Multi-style Processing --- p.45 / Chapter 4.2.2 --- Layer-based Rendering --- p.46 / Chapter 4.2.3 --- Abstraction --- p.47 / Chapter 4.3 --- Experimental Results --- p.49 / Chapter 4.4 --- Summary --- p.56 / Chapter 5 --- Interactive Abstract Videos --- p.58 / Chapter 5.1 --- Abstract Videos --- p.58 / Chapter 5.2 --- Multi-Style Abstract Video --- p.59 / Chapter 5.2.1 --- Abstract Images --- p.60 / Chapter 5.2.2 --- Video Morphing --- p.65 / Chapter 5.2.3 --- Interactive System --- p.69 / Chapter 5.3 --- Interactive Videos --- p.76 / Chapter 5.4 --- Summary --- p.77 / Chapter 6 --- Conclusions --- p.81 / Chapter A --- List of Publications --- p.83 / Chapter B --- Optical flow --- p.84 / Chapter C --- Belief Propagation --- p.86 / Bibliography --- p.89
|