1 |
Adaptive sampling and tessellation for displacement mapping hardware
Hirche, Johannes. Unknown Date (has links) (PDF)
Tübingen, University, Diss., 2003.
|
2 |
Review of Displacement Mapping Techniques and Optimization / Granskning av Displacement Mapping Tekniker och Optimering
Lundgren, Mikael; Hrkalovic, Ermin. January 2012 (has links)
This paper explores different bump mapping techniques and their implementation. Bump mapping is a technique used in computer games to make simple 3D objects look more detailed than they really are. The technique involves using a texture to change the object's normals to simulate bumps, and it is used to avoid rendering high-polygon objects. Over the years several techniques have been developed based on bump mapping; these include normal mapping, relief mapping, parallax occlusion mapping and quadtree displacement mapping. In the first part of this paper we go through our goals and our research methodology. We then describe four different techniques, how they work and how they are implemented. After that we start our experiments and measure the different techniques against each other. When the first round of testing is done, we optimize the techniques and run a second test to see how much faster, if at all, the optimized versions are compared to the previous tests. When the tests are done, we present our test data and analyse them. Finally we discuss the techniques and the testing, and finish with a conclusion.
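For readers unfamiliar with the core idea, the sketch below illustrates the basic normal-mapping step the abstract refers to: a tangent-space normal stored in a texture is decoded and rotated into world space by the surface's TBN basis. It is a minimal NumPy reference, not code from the thesis, and the function name and arguments are illustrative only.

```python
import numpy as np

def perturb_normal(tangent, bitangent, normal, normal_map_texel):
    """Transform a tangent-space normal-map sample into world space.

    tangent, bitangent, normal: world-space TBN basis vectors at the surface point.
    normal_map_texel:           RGB value in [0, 1] as stored in a normal map.
    """
    # Decode from [0, 1] colour range to the [-1, 1] vector range.
    n_ts = 2.0 * np.asarray(normal_map_texel, dtype=float) - 1.0
    # Columns of the TBN matrix map tangent space into world space.
    tbn = np.column_stack([tangent, bitangent, normal])
    n_ws = tbn @ n_ts
    return n_ws / np.linalg.norm(n_ws)
```

The perturbed normal then replaces the geometric normal in the lighting calculation, which is what makes a flat triangle appear bumpy without any extra geometry.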
|
3 |
Different Mapping Techniques for Realistic Surfaces
Öhrn, Kristina. January 2008 (links)
The different mapping techniques that are used increase the detail on surfaces without increasing the number of polygons. Image-based sculpting tools in the programs Modo and Z-Brush are used to create folds and wrinkles from photographs of actual fabrics instead of trying to create these shapes by modeling them. This method makes it easier to achieve photorealistic renderings and to produce fabric dynamics that are as realistic as possible when they are applied to objects.
|
4 |
Performance aspects of layered displacement blending in real time applications
Petersson, Tommy; Lindeberg, Marcus. January 2013 (has links)
The purpose of this thesis is to investigate performance aspects of layered displacement blending, a technique used to render realistic and transformable objects in real-time rendering systems using the GPU. Layered displacement blending is done by blending layers of color maps and displacement maps together based on values stored in an influence map. In this thesis we construct a theoretical and practical model for layered displacement blending. The model is implemented in a test bed application to enable measuring of performance aspects. The implementation is fed input with variations in triangle count, number of subdivisions, texture size and number of layers. The execution times for these different combinations are recorded and analyzed. The recorded execution times reveal that the number of layers associated with an object has no impact on performance. Further analysis reveals that layered displacement blending is heavily dependent on the triangle count of the input mesh. The results show that layered displacement blending is a viable option for representing transformable objects in real-time applications with respect to performance. This thesis provides a theoretical model for layered displacement blending, an implementation of the model using the GPU and measurements of that implementation.
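The blending of colour and displacement layers by an influence map described in the abstract can be sketched as a simple per-texel weighted sum. The NumPy code below is an illustrative reconstruction under the assumption that the influence map stores one weight per layer per texel; it is not the authors' implementation, which runs on the GPU.

```python
import numpy as np

def blend_layers(displacement_layers, color_layers, influence):
    """Blend per-layer displacement and colour maps according to an influence map.

    displacement_layers: (L, H, W) array of per-layer height values.
    color_layers:        (L, H, W, 3) array of per-layer RGB values.
    influence:           (L, H, W) array of per-texel layer weights.
    """
    # Normalise the weights so they sum to 1 per texel (guarding against zero sums).
    w = influence / np.maximum(influence.sum(axis=0, keepdims=True), 1e-8)
    displacement = (w * displacement_layers).sum(axis=0)          # (H, W)
    color = (w[..., None] * color_layers).sum(axis=0)             # (H, W, 3)
    return displacement, color
```

The blended displacement map is then used to offset the tessellated vertices, while the blended colour map is sampled in the fragment stage.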
|
5 |
Point Cloud Mesostructure Impostors
Nykl, Erik L. January 2017 (has links)
No description available.
|
7 |
Level-Of-Details Rendering with Hardware Tessellation / Rendu de niveaux de détails avec la Tessellation Matérielle
Lambert, Thibaud. 18 December 2017 (has links)
In the last two decades, real-time applications have exhibited colossal improvements in the generation of photo-realistic images. This is mainly due to the availability of 3D models with an increasing amount of detail. Currently, the traditional approach to represent and visualize highly detailed 3D objects is to decompose them into a low-frequency mesh and a displacement map encoding the details. Hardware tessellation is the ideal support for rendering this representation efficiently. In this context, we propose a general framework for the generation and rendering of multi-resolution feature-aware meshes compatible with hardware tessellation. First, we introduce a view-dependent metric capturing both geometric and parametric distortions, allowing the appropriate resolution to be selected at render time. Second, we present a novel hierarchical representation enabling, on the one hand, smooth temporal and spatial transitions between levels and, on the other hand, non-uniform hardware tessellation. Last, we devise a simplification process to generate our hierarchical representation while minimizing our error metric. Our framework leads to huge improvements in both triangle count and rendering time in comparison to alternative methods.
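As a rough illustration of view-dependent level selection, the sketch below picks the coarsest level whose geometric error projects to less than a pixel tolerance. This is a simplified, purely geometric screen-space criterion; the thesis's metric also accounts for parametric distortion, and all names, arguments and thresholds here are hypothetical.

```python
import math

def select_lod(error_per_level, distance, fov_y, viewport_height, pixel_tolerance=1.0):
    """Pick the coarsest level whose projected geometric error fits a pixel budget.

    error_per_level: world-space error bounds, ordered coarsest -> finest.
    distance:        camera-to-object distance in world units.
    fov_y:           vertical field of view in radians.
    """
    # World-space extent covered by one pixel at this distance (pinhole camera model).
    world_per_pixel = 2.0 * distance * math.tan(fov_y / 2.0) / viewport_height
    for level, err in enumerate(error_per_level):
        if err / world_per_pixel <= pixel_tolerance:
            return level                 # first (coarsest) level that is fine enough
    return len(error_per_level) - 1      # otherwise fall back to the finest level
```

In a hardware-tessellation setting the selected level would typically be turned into per-edge tessellation factors rather than a single value per mesh, which is where the non-uniform tessellation of the thesis comes in.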
|
8 |
Subset selection in hierarchical recursive pattern assemblies and relief feature instancing for modeling geometric patterns
Jang, Justin. 05 April 2010 (has links)
This thesis is concerned with modeling geometric patterns.
Specifically, a clear and practical definition for regular patterns is proposed.
Based on this definition, this thesis proposes the following modeling setting to describe the semantic transfer of a model between various forms of pattern regularity: (1) recognition or identification of patterns in digital models of 3D assemblies and scenes, (2) pattern regularization, (3) pattern modification and editing by varying the repetition parameters, and (4) establishing exceptions (designed irregularities) in regular patterns.
In line with this setting, this thesis describes a representation and approach for designing and editing hierarchical assemblies based on grouped, nested, and recursively nested patterns. Based on this representation, this thesis presents the OCTOR approach for specifying, recording, and producing exceptions in regular patterns.
To support editing of free-form shape patterns on surfaces, this thesis also presents the imprint-mapping approach which can be used to identify, extract, process, and apply relief features on surfaces. Pattern regularization, modification, and exceptions are addressed for the case of relief features on surfaces.
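A very rough height-field analogue of separating a surface into a base shape and a reusable relief (detail) layer is sketched below, assuming SciPy is available. The thesis's imprint-mapping approach operates on meshes and is considerably more involved; this sketch only conveys the base/detail separation and re-application idea.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def extract_relief(height_field, base_sigma=8.0):
    """Split a 2D height field into a smooth base surface and a relief detail layer."""
    base = gaussian_filter(height_field, sigma=base_sigma)  # low-frequency base shape
    relief = height_field - base                            # high-frequency detail
    return base, relief

def apply_relief(base, relief, scale=1.0):
    """Re-apply (possibly edited or instanced) relief detail onto a base surface."""
    return base + scale * relief
```

Once extracted, the relief layer can be edited, repeated or instanced independently of the base surface, which is the spirit of relief feature instancing for geometric patterns.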
|
9 |
Interactive Mesostructures
Nykl, Scott L. January 2013 (has links)
No description available.
|
10 |
An empirically derived system for high-speed rendering
Rautenbach, Helperus Ritzema. 25 September 2012
This thesis focuses on 3D computer graphics and the continuous maximisation of rendering quality and performance. Its main focus is the critical analysis of numerous real-time rendering algorithms and the construction of an empirically derived system for the high-speed rendering of shader-based special effects, lighting effects, shadows, reflection and refraction, post-processing effects and the processing of physics. This critical analysis allows us to assess the relationship between rendering quality and performance. It also allows for the isolation of key algorithmic weaknesses and possible bottleneck areas. Using this performance data, gathered during the analysis of various rendering algorithms, we are able to define a selection engine to control the real-time cycling of rendering algorithms and special effects groupings based on environmental conditions. Furthermore, as a proof of concept, to balance Central Processing Unit (CPU) and Graphics Processing Unit (GPU) load for an increased speed of execution, our selection system unifies the GPU and CPU as a single computational unit for physics processing and environmental mapping. This parallel computing system enables the CPU to process cube mapping computations while the GPU can be tasked with calculations traditionally handled solely by the CPU.

All analysed and benchmarked algorithms were implemented as part of a modular rendering engine. This engine offers conventional first-person perspective input control, mesh loading and support for shader model 4.0 shaders (via Microsoft's High Level Shader Language) for effects such as high dynamic range rendering (HDR), dynamic ambient lighting, volumetric fog, specular reflections, reflective and refractive water, realistic physics, particle effects, etc. The test engine also supports the dynamic placement, movement and elimination of light sources, meshes and spatial geometry. Critical analysis was performed via scripted camera movement and object and light source additions, not only to ensure consistent testing but also to ease future validation and replication of results. This provided us with a scalable interactive testing environment as well as a complete solution for the rendering of computationally intensive 3D environments. As a full-fledged game engine, our rendering engine is amenable to first- and third-person shooter games, role-playing games and 3D immersive environments.

Evaluation criteria (identified to assess the relationship between rendering quality and performance), as mentioned, allow us to effectively cycle algorithms based on empirical results and to distribute specific processing (cube mapping and physics processing) between the CPU and GPU, a unification that ensures the following: nearby effects are always of high quality (where computational resources are available), distant effects are, under certain conditions, rendered at a lower quality, and the frames-per-second rendering performance is always maximised. The implication of our work is clear: unifying the CPU and GPU and dynamically cycling through the most appropriate algorithms based on ever-changing environmental conditions allows for maximised rendering quality and performance and shows that it is possible to render high-quality visual effects with realism, without overburdening scarce computational resources.
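The selection-engine idea of cycling algorithms on environmental conditions can be caricatured as a per-frame decision function such as the hypothetical sketch below; the inputs, tiers and thresholds are invented for illustration and do not reflect the thesis's actual heuristics, which are driven by benchmarked algorithm behaviour.

```python
def choose_effect_quality(distance, frame_ms, budget_ms=16.6, near_cutoff=25.0):
    """Toy per-frame decision: pick a quality tier from camera distance and load.

    All thresholds are illustrative; a real selection engine would be driven by
    empirical measurements of each algorithm's cost and visual contribution.
    """
    if frame_ms > budget_ms:
        return "low"      # e.g. fall back to plain normal mapping, skip refraction
    if distance <= near_cutoff:
        return "high"     # e.g. parallax occlusion mapping, reflective water
    return "medium"       # distant objects get cheaper approximations
```

Calling such a function once per effect group per frame is one simple way to realise the "nearby effects high quality, distant effects cheaper, frame rate always maximised" policy described above.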
Immersive rendering approaches used in conjunction with AI subsystems, game networking and logic, physics processing and other special effects (such as post-processing shader effects) are immensely processor intensive and can only be successfully implemented on high-end hardware. Only by cycling and distributing algorithms based on environmental conditions and through the exploitation of algorithmic strengths can high-quality real-time special effects and highly accurate calculations become as common as texture mapping. Furthermore, in a gaming context, players often spend an inordinate amount of time fine-tuning their graphics settings to achieve the perfect balance between rendering quality and frames-per-second performance. Using this system, however, ensures that the performance vs. quality trade-off is always optimised, not only for the game as a whole but also for the current scene being rendered: some scenes might, for example, require more computational power than others, resulting in noticeable slowdowns that are avoided thanks to our system's dynamic cycling of rendering algorithms and its proof-of-concept unification of the CPU and GPU. / Thesis (PhD)--University of Pretoria, 2012. / Computer Science / unrestricted
|