1

Accurate and efficient strategies for the appearance filtering of complex materials

Gamboa Guzman, Luis Eduardo
Realistic image synthesis relies on physically based models describing the interactions between light and the materials attached to the objects in a three-dimensional scene. These mathematical models are complex and, in the general case, admit no analytic solution, so robust and efficient numerical methods are required. Monte Carlo methods, or alternative techniques such as basis-function expansions, are well suited to this class of problem. In this thesis by articles, we present two new techniques for the efficient numerical integration of complex materials. First, we introduce a new method for simultaneously integrating several dimensions defined over the angular and spatial domains. An efficient technique is essential for integrating materials whose normals vary rapidly under varied lighting conditions. Our technique uses a new formulation based on a spherical histogram defined both directionally and spatially, which lets us use spherical harmonics to integrate over the different dimensions quickly, reducing computation time by a factor of roughly 30× compared with state-of-the-art methods. In our second work, we introduce a new sampling strategy for estimating light transport inside multilayered materials. By identifying the best sampling strategies, we propose an efficient, unbiased technique for constructing light paths inside this type of material. Our new approach yields an efficient, low-variance Monte Carlo estimator in materials containing an arbitrary number of layers.
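As a toy illustration of unbiased path construction in a layered medium (a simplified scalar analogue, not the thesis's actual estimator), a random walk bouncing between two interfaces already reproduces the closed-form transmittance with all inter-reflections accounted for; the interface transmittances `t1` and `t2` are hypothetical parameters:

```python
import random

def layered_transmittance_mc(t1, t2, n_paths=200_000, seed=1):
    """Unbiased random-walk estimate of total transmittance through two
    interfaces with transmittances t1 (top) and t2 (bottom), including
    every order of inter-reflection (no absorption in this toy model)."""
    rng = random.Random(seed)

    def single_path():
        iface, down = 0, True             # photon about to hit the top interface
        while True:
            t = t1 if iface == 0 else t2
            if rng.random() < t:          # transmit through this interface
                if down:
                    if iface == 1:
                        return 1          # escaped below the bottom interface
                    iface = 1             # entered the gap; bottom is hit next
                else:
                    return 0              # escaped back out of the top
            else:                         # reflect off this interface
                if down and iface == 0:
                    return 0              # bounced straight back out the top
                down = not down
                iface = 1 if down else 0

    return sum(single_path() for _ in range(n_paths)) / n_paths

# Closed form for the same geometry (geometric series of inter-reflections):
# T = t1*t2 / (1 - r1*r2) with r = 1 - t.
t1, t2 = 0.9, 0.8
analytic = t1 * t2 / (1 - (1 - t1) * (1 - t2))
estimate = layered_transmittance_mc(t1, t2)
print(estimate, analytic)    # the two values should agree closely
```

Because every path terminates with probability one and scores exactly the event it samples, the estimator is unbiased; the thesis's contribution concerns choosing *which* of the competing sampling strategies to use at each interaction to minimise variance in the full directional setting.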
Realistic computer-generated images and simulations require physically based models to properly capture and reproduce light-material interactions. The underlying mathematical formulations are complex and mandate the use of efficient numerical methods, since analytic solutions are not available. Monte Carlo integration is one such commonly used numerical method, although alternative approaches, e.g. those leveraging basis expansions, may also be suitable for these challenging problems. In this thesis by articles, we present two works in which we devise efficient numerical integration strategies for the rendering of complex materials. First, we propose a method to compute the multi-dimensional spatial-angular integration problem that arises when rendering materials with high-frequency normal variation under large, angularly varying illumination. By computing and manipulating a novel spherical-histogram data representation, we are able to use spherical harmonics to solve the integral efficiently, outperforming the state of the art by a factor of roughly 30×. Our second work describes a high-performance Monte Carlo integration strategy for rendering layered materials. By identifying the best path-sampling strategies in the micro-scale light transport context, we tailor an unbiased and efficient path-construction method to evaluate high-throughput, low-variance paths through an arbitrary number of layers.
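The spherical-harmonic reduction the first article relies on can be seen in miniature: by orthonormality, the integral of a product of two spherical functions collapses to a dot product of their expansion coefficients, which is what makes a projected histogram cheap to integrate against varying illumination. A minimal numpy-only sketch, restricted to bands l ≤ 1 (the test functions and truncation order are illustrative choices, not the paper's):

```python
import numpy as np

def real_sh_l01(x, y, z):
    """First four real spherical harmonics (bands l = 0 and l = 1),
    evaluated on unit directions given in Cartesian form."""
    c0 = 0.5 * np.sqrt(1.0 / np.pi)
    c1 = np.sqrt(3.0 / (4.0 * np.pi))
    return np.stack([c0 * np.ones_like(x), c1 * y, c1 * z, c1 * x])

def sphere_grid(n=64):
    """Quadrature nodes and weights over the unit sphere:
    Gauss-Legendre in cos(theta), uniform in phi."""
    mu, w = np.polynomial.legendre.leggauss(n)          # mu = cos(theta)
    phi = np.linspace(0.0, 2.0 * np.pi, 2 * n, endpoint=False)
    M, P = np.meshgrid(mu, phi, indexing='ij')
    s = np.sqrt(1.0 - M * M)
    x, y, z = s * np.cos(P), s * np.sin(P), M
    W = np.outer(w, np.full(2 * n, np.pi / n))          # weights sum to 4*pi
    return x, y, z, W

x, y, z, W = sphere_grid()
Y = real_sh_l01(x, y, z)

# f is band-limited to l <= 1; g deliberately contains higher-order content
# (stand-ins for a filtered normal distribution and an environment light).
f = 0.8 + 0.4 * z + 0.1 * x
g = np.exp(z) * (1.0 + 0.25 * y)

cf = np.tensordot(Y, W * f, axes=2)    # coefficients <f, Y_lm>
cg = np.tensordot(Y, W * g, axes=2)    # coefficients <g, Y_lm>

# Because f lives entirely in bands l <= 1, orthogonality kills every
# higher-order term of g: the product integral equals the coefficient dot product.
sh_integral = float(cf @ cg)
brute_force = float(np.sum(W * f * g))
print(sh_integral, brute_force)        # these should agree to high precision
```

The pay-off in the article's setting is that once the histogram is projected, integrating it against a new lighting environment costs only a coefficient dot product rather than a fresh numerical integration.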
2

An empirically derived system for high-speed rendering

Rautenbach, Helperus Ritzema, 25 September 2012
This thesis focuses on 3D computer graphics and the continuous maximisation of rendering quality and performance. Its main focus is the critical analysis of numerous real-time rendering algorithms and the construction of an empirically derived system for the high-speed rendering of shader-based special effects, lighting effects, shadows, reflection and refraction, post-processing effects and the processing of physics. This critical analysis allows us to assess the relationship between rendering quality and performance. It also allows for the isolation of key algorithmic weaknesses and possible bottleneck areas. Using this performance data, gathered during the analysis of various rendering algorithms, we are able to define a selection engine to control the real-time cycling of rendering algorithms and special-effects groupings based on environmental conditions. Furthermore, as a proof of concept, to balance Central Processing Unit (CPU) and Graphics Processing Unit (GPU) load for increased speed of execution, our selection system unifies the GPU and CPU as a single computational unit for physics processing and environmental mapping. This parallel computing system enables the CPU to process cube-mapping computations while the GPU is tasked with calculations traditionally handled solely by the CPU. All analysed and benchmarked algorithms were implemented as part of a modular rendering engine. This engine offers conventional first-person-perspective input control, mesh loading and support for shader model 4.0 shaders (via Microsoft's High Level Shader Language) for effects such as high-dynamic-range (HDR) rendering, dynamic ambient lighting, volumetric fog, specular reflections, reflective and refractive water, realistic physics, particle effects, etc. The test engine also supports the dynamic placement, movement and elimination of light sources, meshes and spatial geometry.
Critical analysis was performed via scripted camera movement and object and light source additions, done not only to ensure consistent testing but also to ease future validation and replication of results. This provided us with a scalable interactive testing environment as well as a complete solution for the rendering of computationally intensive 3D environments. As a full-fledged game engine, our rendering engine is amenable to first- and third-person shooter games, role-playing games and 3D immersive environments. The evaluation criteria (identified to assess the relationship between rendering quality and performance), as mentioned, allow us to effectively cycle algorithms based on empirical results and to distribute specific processing (cube mapping and physics processing) between the CPU and GPU, a unification that ensures the following: nearby effects are always of high quality (where computational resources are available), distant effects are, under certain conditions, rendered at a lower quality, and the frames-per-second rendering performance is always maximised. The implication of our work is clear: unifying the CPU and GPU and dynamically cycling through the most appropriate algorithms based on ever-changing environmental conditions allows for maximised rendering quality and performance, and shows that it is possible to render high-quality visual effects with realism without overburdening scarce computational resources. Immersive rendering approaches used in conjunction with AI subsystems, game networking and logic, physics processing and other special effects (such as post-processing shader effects) are immensely processor intensive and can only be successfully implemented on high-end hardware. Only by cycling and distributing algorithms based on environmental conditions, and through the exploitation of algorithmic strengths, can high-quality real-time special effects and highly accurate calculations become as common as texture mapping.
Furthermore, in a gaming context, players often spend an inordinate amount of time fine-tuning their graphics settings to achieve the perfect balance between rendering quality and frames-per-second performance. Our system, however, ensures that the performance-versus-quality trade-off is always optimised, not only for the game as a whole but also for the scene currently being rendered: some scenes might, for example, require more computational power than others, resulting in noticeable slowdowns, slowdowns our system avoids through its dynamic cycling of rendering algorithms and its proof-of-concept unification of the CPU and GPU.
Thesis (PhD), University of Pretoria, 2012. Computer Science.
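The cycling behaviour the abstract describes — keep the highest-quality algorithm whose measured cost fits the frame budget, fall back when it does not — can be sketched as a small selector. The class, the variant names and the 30-frame averaging window below are hypothetical illustrations, not taken from the thesis:

```python
import time
from collections import defaultdict

class AlgorithmSelector:
    """Cycles between rendering-algorithm variants based on measured frame
    cost, preferring the highest-quality variant that fits the time budget.
    Variants are ordered from highest to lowest quality."""

    def __init__(self, variants, budget_ms=16.6, window=30):
        self.variants = variants            # list of (name, callable)
        self.budget = budget_ms
        self.window = window                # frames of cost history to keep
        self.samples = defaultdict(list)
        self.current = 0

    def render_frame(self, *args):
        name, fn = self.variants[self.current]
        start = time.perf_counter()
        result = fn(*args)
        cost_ms = (time.perf_counter() - start) * 1000.0
        hist = self.samples[name]
        hist.append(cost_ms)
        if len(hist) > self.window:
            hist.pop(0)
        self._reselect()
        return result

    def _reselect(self):
        # Pick the best-quality variant whose recent average cost fits the
        # budget; an untried variant is chosen optimistically so that every
        # variant eventually gets benchmarked. Fall back to the cheapest
        # (last) variant if nothing fits.
        for i, (name, _) in enumerate(self.variants):
            hist = self.samples[name]
            if not hist or sum(hist) / len(hist) <= self.budget:
                self.current = i
                return
        self.current = len(self.variants) - 1

# Hypothetical demo: an expensive effect vs. a cheap fallback under a
# deliberately tight 1 ms budget.
sel = AlgorithmSelector([("full_shadows", lambda: time.sleep(0.002)),
                         ("cheap_shadows", lambda: None)], budget_ms=1.0)
for _ in range(10):
    sel.render_frame()
print(sel.variants[sel.current][0])   # → cheap_shadows
```

The thesis's actual engine additionally groups special effects and shifts work between CPU and GPU; this sketch only captures the budget-driven cycling idea.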
