
An empirically derived system for high-speed rendering

Rautenbach, Helperus Ritzema 25 September 2012
This thesis focuses on 3D computer graphics and the continuous maximisation of rendering quality and performance. Its core contribution is the critical analysis of numerous real-time rendering algorithms and the construction of an empirically derived system for the high-speed rendering of shader-based special effects, lighting effects, shadows, reflection and refraction, post-processing effects and the processing of physics. This critical analysis allows us to assess the relationship between rendering quality and performance. It also allows for the isolation of key algorithmic weaknesses and possible bottleneck areas. Using this performance data, gathered during the analysis of various rendering algorithms, we are able to define a selection engine to control the real-time cycling of rendering algorithms and special effects groupings based on environmental conditions. Furthermore, as a proof of concept, to balance Central Processing Unit (CPU) and Graphics Processing Unit (GPU) load for increased speed of execution, our selection system unifies the GPU and CPU as a single computational unit for physics processing and environmental mapping. This parallel computing system enables the CPU to process cube mapping computations while the GPU can be tasked with calculations traditionally handled solely by the CPU. All analysed and benchmarked algorithms were implemented as part of a modular rendering engine. This engine offers conventional first-person perspective input control, mesh loading and support for shader model 4.0 shaders (via Microsoft’s High Level Shader Language) for effects such as high dynamic range rendering (HDR), dynamic ambient lighting, volumetric fog, specular reflections, reflective and refractive water, realistic physics and particle effects. The test engine also supports the dynamic placement, movement and elimination of light sources, meshes and spatial geometry.
Critical analysis was performed via scripted camera movement and object and light source additions – done not only to ensure consistent testing, but also to ease future validation and replication of results. This provided us with a scalable interactive testing environment as well as a complete solution for the rendering of computationally intensive 3D environments. As a full-fledged game engine, our rendering engine is amenable to first- and third-person shooter games, role-playing games and 3D immersive environments. The evaluation criteria (identified to assess the relationship between rendering quality and performance), as mentioned, allow us to cycle algorithms effectively based on empirical results and to distribute specific processing (cube mapping and physics processing) between the CPU and GPU, a unification that ensures the following: nearby effects are always of high quality (where computational resources are available), distant effects are, under certain conditions, rendered at a lower quality, and the frames-per-second rendering performance is always maximised. The implication of our work is clear: unifying the CPU and GPU and dynamically cycling through the most appropriate algorithms based on ever-changing environmental conditions allow for maximised rendering quality and performance, and show that it is possible to render high-quality visual effects with realism without overburdening scarce computational resources. Immersive rendering approaches used in conjunction with AI subsystems, game networking and logic, physics processing and other special effects (such as post-processing shader effects) are immensely processor intensive and can only be successfully implemented on high-end hardware. Only by cycling and distributing algorithms based on environmental conditions, and through the exploitation of algorithmic strengths, can high-quality real-time special effects and highly accurate calculations become as common as texture mapping.
Furthermore, in a gaming context, players often spend an inordinate amount of time fine-tuning their graphics settings to achieve the perfect balance between rendering quality and frames-per-second performance. Our system, however, ensures that the performance-versus-quality trade-off is always optimised, not only for the game as a whole but also for the current scene being rendered – some scenes might, for example, require more computational power than others, resulting in noticeable slowdowns, slowdowns avoided thanks to our system’s dynamic cycling of rendering algorithms and its proof-of-concept unification of the CPU and GPU. / Thesis (PhD)--University of Pretoria, 2012. / Computer Science / unrestricted
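The dynamic cycling the abstract describes can be illustrated with a short sketch. This is not the thesis's selection engine; the quality levels, thresholds and function names below are hypothetical stand-ins for the empirically derived rules the author benchmarks, assuming a simple frame-time budget as the "environmental condition".

```python
# Hypothetical sketch: cycle rendering quality from measured frame time.
# Levels and thresholds are illustrative, not taken from the thesis.

QUALITY_LEVELS = ["low", "medium", "high"]

def select_quality(frame_time_ms, target_ms=16.7, level="high"):
    """Drop one quality level when the frame budget is exceeded;
    raise one level when there is ample headroom (30% under budget)."""
    i = QUALITY_LEVELS.index(level)
    if frame_time_ms > target_ms and i > 0:
        return QUALITY_LEVELS[i - 1]          # over budget: degrade
    if frame_time_ms < 0.7 * target_ms and i < len(QUALITY_LEVELS) - 1:
        return QUALITY_LEVELS[i + 1]          # headroom: upgrade
    return level                              # within band: keep level
```

A real engine would apply such a rule per effect group (shadows, reflections, physics) rather than globally, which is closer to the per-scene optimisation the abstract claims.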

Distributed Ray Tracing v rozumném čase / Distributed Ray Tracing in Reasonable Time

Slovák, Radek January 2011
This thesis deals with the method of distributed ray tracing, focusing on its optimisation. The method simulates certain attributes of light by distributing rays, and produces high-quality, partly realistic images. The price for these realistic effects is the method's high computational complexity. The thesis analyses the relevant theory. A large part describes optimisations of the method: searching for the nearest triangle intersection using kd-trees, quasi-random sampling with faster convergence, use of the SSE instruction set, and fast ray–triangle intersection. These optimisations brought a noticeable speed-up. The thesis includes a description of the implementation of these techniques. The implementation itself emphasises practical usability, including the generation of advanced animations and a universal description of objects.
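The "fast ray–triangle intersection" mentioned in the abstract is commonly the Möller–Trumbore test. The thesis's own SSE-optimised implementation is not reproduced here; the following plain scalar sketch only illustrates the arithmetic of that classic test.

```python
# Möller–Trumbore ray–triangle intersection (scalar illustration).
# Vectors are plain 3-tuples; a production version would vectorise this.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def intersect(orig, direction, v0, v1, v2, eps=1e-9):
    """Return the ray parameter t of the hit, or None on a miss."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:              # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    s = sub(orig, v0)
    u = dot(s, p) * inv             # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = dot(direction, q) * inv     # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv
    return t if t > eps else None   # hit only in front of the origin
```

In the thesis's setting this per-triangle test sits below a kd-tree traversal, so it runs only on the few triangles in the leaves a ray actually visits.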

Distributed Ray Tracing / Distributed Ray Tracing

Hošek, Václav January 2008
Distributed ray tracing, also called distribution ray tracing and stochastic ray tracing, is a refinement of ray tracing that allows for the rendering of "soft" phenomena such as area lights, depth of field and motion blur.
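The core idea behind those soft phenomena can be sketched briefly: replace the single shadow ray of classic ray tracing with many rays jittered over an area light, so the averaged visibility yields a penumbra. This is an illustrative sketch, not code from the thesis; the `visible` occlusion callback and the parallelogram light parameterisation are assumptions.

```python
# Illustrative distribution ray tracing step: stratified (jittered)
# sampling of a parallelogram area light for soft shadows.
import random

def soft_shadow(visible, light_corner, light_u, light_v, n=4, rng=None):
    """Average binary visibility over an n*n jittered grid on the light;
    `visible(point)` is the scene's occlusion test for a shadow ray."""
    rng = rng or random.Random(0)
    hits = 0
    for i in range(n):
        for j in range(n):
            # one random sample inside grid cell (i, j)
            su = (i + rng.random()) / n
            sv = (j + rng.random()) / n
            p = tuple(light_corner[k] + su * light_u[k] + sv * light_v[k]
                      for k in range(3))
            hits += visible(p)
    return hits / (n * n)
```

The same "many jittered samples, then average" pattern gives depth of field (samples over a lens aperture) and motion blur (samples over the shutter interval).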

Quantification 3D d’une surface dynamique par lumière structurée en impulsion nanoseconde. Application à la physique des chocs, du millimètre au décimètre / 3D measurement of a dynamic surface by structured light in nanosecond regime. Application to shock physics, from millimeters to decimeters

Frugier, Pierre Antoine 29 June 2015
La technique de reconstruction de forme par lumière structurée (ou projection de motifs) permet d’acquérir la topographie d’une surface objet avec une précision et un échantillonnage de points dense, de manière strictement non invasive. Pour ces raisons, elle fait depuis plusieurs années l’objet d’un fort intérêt. Les travaux présentés ici ont pour objectif d’adapter cette technique aux conditions sévères des expériences de physique des chocs : aspect monocoup, grande brièveté des phénomènes, diversité des échelles d’observation (de quelques millimètres au décimètre). Pour répondre à ces exigences, nous proposons de réaliser un dispositif autour d’un système d’imagerie rapide par éclairage laser nanoseconde, présentant des performances éprouvées et bien adaptées. La première partie des travaux s’intéresse à analyser les phénomènes prépondérants pour la qualité des images. Nous montrons quels sont les contributeurs principaux à la dégradation des signaux, et une technique efficace de lissage du speckle par fibrage est présentée. La deuxième partie donne une formulation projective de la reconstruction de forme ; celle-ci est rigoureuse, ne nécessitant pas de travailler dans l’approximation de faible perspective, ou de contraindre la géométrie de l’instrument. Un protocole d’étalonnage étendant la technique DLT (Direct Linear Transformation) aux systèmes à lumière structurée est proposé. Le modèle permet aussi, pour une expérience donnée, de prédire les performances de l’instrument par l’évaluation a priori des incertitudes de reconstruction. Nous montrons comment elles dépendent des paramètres du positionnement des sous-ensembles et de la forme-même de l’objet. Une démarche d’optimisation de la configuration de l’instrument pour une reconstruction donnée est introduite. La profondeur de champ limitant le champ objet minimal observable, la troisième partie propose de l’étendre par codage pupillaire : une démarche de conception originale est exposée. 
L’optimisation des composants est réalisée par algorithme génétique, sur la base de critères et de métriques définis dans l’espace de Fourier. Afin d’illustrer les performances de cette approche, un masque binaire annulaire a été conçu, réalisé et testé expérimentalement. Il corrige des défauts de mise au point très significatifs (Ψ≥±40 radians) sans impératif de filtrage de l’image. Nous montrons aussi que ce procédé donne accès à des composants tolérant des défauts de mise au point extrêmes (Ψ≈±100 radians, après filtrage). La dernière partie présente une validation expérimentale de l’instrument dans différents régimes, et à différentes échelles. Il a notamment été mis en œuvre sur l’installation LULI2000, où il a permis de mesurer dynamiquement la déformation et la fragmentation d’un matériau à base de carbone (champs millimétriques). Nous présentons également les mesures obtenues sous sollicitation pyrotechnique sur un revêtement de cuivre cylindrique de dimensions décimétriques. L’apparition et la croissance rapide de déformations radiales submillimétriques est mesurée à la surface du revêtement. / A Structured Light System (SLS) is an efficient means to measure a surface topography, as it features both high accuracy and dense spatial sampling in a strictly non-invasive way. For these reasons, it has become a reference technique in recent years. The aim of the PhD is to bring this technique to the field of shock physics. Experiments involving shocks are indeed very specific: they only allow single-shot acquisition of extremely short phenomena occurring over a large range of spatial extensions (from a few millimeters to decimeters). In order to address these difficulties, we have envisioned the use of a well-known high-speed technique: pulsed laser illumination. The first part of the work deals with the evaluation of the key parameters that have to be taken into account in order to get sharp acquisitions.
The extensive study demonstrates that speckle and the depth-of-field limitation are of particular importance. In this part, we provide an effective way to smooth speckle in the nanosecond regime, leaving 14% residual contrast. The second part introduces an original projective formulation for object-point reconstruction. This geometric approach is rigorous; it involves no weak-perspective assumption and no geometric constraint (such as the camera and projector optical axes crossing in object space). From this formulation, a calibration procedure is derived; we demonstrate that any structured-light system can be calibrated by extending the Direct Linear Transformation (DLT) photogrammetric approach to SLS. We also demonstrate that reconstruction uncertainties can be derived a priori from the proposed model; the accuracy of the reconstruction depends both on the configuration of the instrument and on the object shape itself. We finally introduce a procedure for optimising the configuration of the instrument in order to lower the uncertainties for a given object. Since depth of field limits the smallest observable object field, the third part focuses on extending it through pupil coding. We present an original way of designing phase components, based on criteria and metrics defined in Fourier space. The design of a binary annular phase mask is demonstrated theoretically and experimentally. This mask tolerates defocus as large as Ψ ≥ ±40 radians without the need for image processing. We also demonstrate that masks designed with our method can restore extremely large defoci (Ψ ≈ ±100 radians) after processing, extending the depth of focus by amounts not achieved before. Finally, the fourth part presents experimental measurements obtained with the setup in different high-speed regimes and at different scales.
It was fielded on the LULI2000 high-energy laser facility, where it allowed measurements of the deformation and dynamic fragmentation of a carbon sample. Finally, we present sub-millimetric deformations measured in an ultra-high-speed regime on a copper cylinder under pyrotechnic loading.
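The DLT-based reconstruction described above can be sketched in miniature. In a structured-light system the camera supplies two measured coordinates and the projected pattern supplies one, so with known 3×4 projection matrices a point is fixed by three linear equations. This is an illustrative sketch under idealised matrices, not the author's calibration or reconstruction code; all names are hypothetical.

```python
# Sketch: point reconstruction for a camera + projector pair with known
# 3x4 DLT projection matrices, via three linear equations (Cramer's rule).

def _det3(M):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def triangulate(Pc, uc, vc, Pp, up):
    """Recover X = (x, y, z) from camera pixel (uc, vc) and one projector
    coordinate up. Each measurement m against matrix row r yields the
    linear equation (P[r] - m * P[2]) . (x, y, z, 1) = 0."""
    A, b = [], []
    for P, r, m in ((Pc, 0, uc), (Pc, 1, vc), (Pp, 0, up)):
        row = [P[r][k] - m * P[2][k] for k in range(4)]
        A.append(row[:3])
        b.append(-row[3])
    d = _det3(A)
    sol = []
    for c in range(3):  # Cramer's rule: replace column c with b
        Ac = [[b[i] if k == c else A[i][k] for k in range(3)]
              for i in range(3)]
        sol.append(_det3(Ac) / d)
    return tuple(sol)
```

With redundant measurements (many fringes per pixel), a real system would solve the overdetermined system by least squares, which is also where the a-priori uncertainty propagation mentioned in the abstract comes in.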
