1

Susidūrimų paieškos, naudojant lygiagrečius skaičiavimus, metodų tyrimas / Collision detection methods using parallel computing

Šiukščius, Martynas 26 August 2013
Collision detection is a well-studied and active research field whose central problem is determining whether two or more objects intersect in 3D virtual space. It arises in many areas, including computer games and animation, physics-based simulation, nonlinear finite element analysis, particle hydrodynamics, multibody dynamics, robotics and haptic applications. Many collision detection algorithms exist; among the most popular are spatial subdivision, hierarchical (octree) structuring, and sort and sweep. This thesis gives a short summary of collision detection algorithms, with the main focus on analysing and improving their performance on the CPU (central processing unit) and the GPU (graphics processing unit) separately, making use of CUDA (Compute Unified Device Architecture) technology. CUDA is a data-processing architecture developed by Nvidia that makes the resources of the graphics processor available for general-purpose computation. To meet the goals of the thesis, baseline versions of the spatial subdivision, octree and sort-and-sweep algorithms were implemented together with their adaptations for parallel execution, and the algorithms' running times, GPU memory consumption and the various factors affecting execution time were studied. The thesis closes with a discussion of the advantages of parallel programming for this problem. The experiments show that moving the computations to the GPU achieves a 200-fold performance improvement over the CPU for the algorithms studied.
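As an illustration of one of the broad-phase methods named above, here is a minimal single-threaded C++ sketch of sort and sweep over axis-aligned bounding boxes; it is not the thesis's implementation (the thesis benchmarks CPU and CUDA variants), and the struct layout and function name are assumptions:

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Axis-aligned bounding box with an owner id.
struct AABB {
    int id;
    float min[3], max[3];
};

// Sort-and-sweep broad phase: sort boxes by min.x, then sweep along x.
// A box only needs to be tested against boxes whose min.x lies before
// its max.x; candidate pairs are confirmed on the remaining two axes.
std::vector<std::pair<int, int>> sortAndSweep(std::vector<AABB> boxes) {
    std::sort(boxes.begin(), boxes.end(),
              [](const AABB& a, const AABB& b) { return a.min[0] < b.min[0]; });

    std::vector<std::pair<int, int>> pairs;
    for (std::size_t i = 0; i < boxes.size(); ++i) {
        for (std::size_t j = i + 1; j < boxes.size(); ++j) {
            if (boxes[j].min[0] > boxes[i].max[0]) break;  // sweep can stop
            bool overlap = true;
            for (int k = 1; k < 3; ++k)  // x overlap already guaranteed
                if (boxes[j].min[k] > boxes[i].max[k] ||
                    boxes[i].min[k] > boxes[j].max[k]) { overlap = false; break; }
            if (overlap) pairs.emplace_back(boxes[i].id, boxes[j].id);
        }
    }
    return pairs;
}
```

The sort dominates at O(n log n); the sweep degrades toward O(n²) only when many boxes overlap along the sort axis, which is one reason such baselines are attractive targets for GPU parallelisation.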
2

An empirically derived system for high-speed shadow rendering

Rautenbach, Helperus Ritzema 26 June 2009
Shadows have captivated humanity since the dawn of time, and the current age is no exception: shadows are core to realism and ambience, be it to invoke a classic Baroque interplay of lights, darks and colours, as is the case in Rembrandt van Rijn's Militia Company of Captain Frans Banning Cocq, or to create a sense of mystery, as found in film noir and expressionist cinematography. Shadows, in this traditional sense, are regions of blocked light: the combined effect of placing an object between a light source and a surface. This dissertation focuses on real-time shadow generation as a subset of 3D computer graphics. Its main focus is the critical analysis of numerous real-time shadow rendering algorithms and the construction of an empirically derived system for the high-speed rendering of shadows. This critical analysis allows us to assess the relationship between shadow rendering quality and performance. It also allows for the isolation of key algorithmic weaknesses and possible bottleneck areas. Focusing on these bottleneck areas, we investigate several possibilities for improving the performance and quality of shadow rendering, at both the hardware and the software level. Primary performance benefits are seen through effective culling, clipping, the use of hardware extensions, and the management of the polygonal complexity and silhouette detection of shadow-casting meshes. Additional performance gains are achieved by combining the depth-fail stencil shadow volume algorithm with dynamic spatial subdivision. Using the performance data gathered during the analysis of the various shadow rendering algorithms, we define a fuzzy logic-based expert system to control the real-time selection of shadow rendering algorithms based on environmental conditions. This system ensures the following: nearby shadows are always of high quality, distant shadows are, under certain conditions, rendered at a lower quality, and the frames-per-second rendering performance is always maximised. / Dissertation (MSc)--University of Pretoria, 2009. / Computer Science / unrestricted
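As an illustration of the selection idea only (the dissertation's actual rule base and membership functions are not reproduced here), a minimal C++ sketch of a fuzzy selector that weighs shadow-caster distance against current frame rate; all thresholds, names and rules below are hypothetical:

```cpp
#include <algorithm>
#include <string>

// Triangular membership function over [lo, peak, hi].
static float tri(float x, float lo, float peak, float hi) {
    if (x <= lo || x >= hi) return 0.0f;
    return x < peak ? (x - lo) / (peak - lo) : (hi - x) / (hi - peak);
}

// Hypothetical selector: fuzzify caster distance and frame rate, then
// pick the shadow algorithm with the strongest rule activation.
std::string selectShadowAlgorithm(float casterDistance, float fps) {
    float nearD   = tri(casterDistance, -1.0f, 0.0f, 30.0f);
    float farD    = tri(casterDistance, 20.0f, 100.0f, 1000.0f);
    float lowFps  = tri(fps, -1.0f, 0.0f, 40.0f);
    float highFps = tri(fps, 30.0f, 60.0f, 240.0f);

    // Rules: near casters plus spare performance -> exact stencil
    // volumes; otherwise fall back to cheaper filtered shadow maps.
    float wantVolumes = std::min(nearD, highFps);
    float wantMaps    = std::max(farD, lowFps);
    return wantVolumes >= wantMaps ? "depth-fail stencil volumes"
                                   : "filtered shadow map";
}
```

Evaluating such a selector once per light per frame is negligible next to the cost of rendering the shadows themselves, which is what makes run-time algorithm switching practical.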
3

Measurement and modelling of light scattering by small to medium size parameter airborne particles

McCall, David Samuel January 2011
An investigation into the light scattering properties of Saharan dust grains is presented. An electrodynamic trap has been used to levitate single dust particles. By adjusting the trap parameters, partial randomisation of the particle orientation has been introduced. While levitated, the particles were illuminated by a laser, and a rotating half-wave retarder enabled selection of vertically or horizontally polarised incident light. A laser diffractometer and a linear photodiode array have been used to measure intensity at scattering angles between 0.5° and 177°. Combining these measurements with Fraunhofer diffraction, calculated for a range of appropriately sized apertures, allows the phase function and degree of linear polarisation to be calculated. The phase functions and degrees of linear polarisation for four case-study particles are presented; the phase functions are found to be featureless across most of the scattering region, with none of the halo features or rainbow peaks associated with regularly shaped particles such as hexagonal columns or spheres. Particle models composed of large numbers of facets have been constructed to resemble the levitated particles. Using Gaussian random sphere methods, increasing levels of roughness have been added to the surfaces of these models. A geometric optics model and a related model, Ray Tracing with Diffraction on Facets, have been modified to calculate scattering by these particle reconstructions. Scattering calculations were performed on each reconstruction using a range of refractive indices and two rotation regimes: one in which the orientations of the reconstructed particle were limited to match those observed while the particle was levitated, and one in which the orientation was unrestricted. Qualitative comparisons of the phase functions and degree of linear polarisation show that adding roughness to the modelled spheroids causes the computed phase functions to increasingly resemble those of the levitated particles. Limiting the orientation of the particles does not noticeably affect the scattering. The addition of a very small absorption coefficient does not change the comparisons considerably; as the absorption coefficient is increased, however, the quality of the comparisons decreases rapidly in all cases but one. The phase functions are compared quantitatively using RMS errors, and further comparison is performed using the asymmetry parameter.
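As an illustration of the final comparison step, a minimal C++ sketch that computes the asymmetry parameter g from a tabulated phase function by trapezoidal integration; the function name and the assumption of matching, ascending angle/phase arrays are mine, and a real retrieval would also have to account for the unmeasured angular ranges outside 0.5° to 177°:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Asymmetry parameter g = <cos theta> of a phase function p(theta):
// the integral of p(theta)*cos(theta)*sin(theta) divided by the
// integral of p(theta)*sin(theta), both over [0, pi], here evaluated
// with the trapezoidal rule on the sampled angles.
double asymmetryParameter(const std::vector<double>& thetaDeg,
                          const std::vector<double>& phase) {
    const double kPi = 3.14159265358979323846;
    double num = 0.0, den = 0.0;
    for (std::size_t i = 0; i + 1 < thetaDeg.size(); ++i) {
        double t0 = thetaDeg[i] * kPi / 180.0;
        double t1 = thetaDeg[i + 1] * kPi / 180.0;
        double n0 = phase[i] * std::cos(t0) * std::sin(t0);
        double n1 = phase[i + 1] * std::cos(t1) * std::sin(t1);
        double d0 = phase[i] * std::sin(t0);
        double d1 = phase[i + 1] * std::sin(t1);
        num += 0.5 * (n0 + n1) * (t1 - t0);
        den += 0.5 * (d0 + d1) * (t1 - t0);
    }
    return num / den;
}
```

For an isotropic phase function this returns g = 0; strongly forward-scattering dust grains give g close to 1, which is why the parameter is a compact single-number summary for comparing measured and modelled phase functions.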
4

An empirically derived system for high-speed rendering

Rautenbach, Helperus Ritzema 25 September 2012
This thesis focuses on 3D computer graphics and the continuous maximisation of rendering quality and performance. Its main focus is the critical analysis of numerous real-time rendering algorithms and the construction of an empirically derived system for the high-speed rendering of shader-based special effects, lighting effects, shadows, reflection and refraction, post-processing effects and the processing of physics. This critical analysis allows us to assess the relationship between rendering quality and performance. It also allows for the isolation of key algorithmic weaknesses and possible bottleneck areas. Using this performance data, gathered during the analysis of various rendering algorithms, we are able to define a selection engine to control the real-time cycling of rendering algorithms and special-effects groupings based on environmental conditions. Furthermore, as a proof of concept, to balance Central Processing Unit (CPU) and Graphics Processing Unit (GPU) load for an increased speed of execution, our selection system unifies the GPU and CPU as a single computational unit for physics processing and environmental mapping. This parallel computing system enables the CPU to process cube-mapping computations while the GPU is tasked with calculations traditionally handled solely by the CPU. All analysed and benchmarked algorithms were implemented as part of a modular rendering engine. This engine offers conventional first-person-perspective input control, mesh loading and support for shader model 4.0 shaders (via Microsoft's High Level Shader Language) for effects such as high dynamic range (HDR) rendering, dynamic ambient lighting, volumetric fog, specular reflections, reflective and refractive water, realistic physics, particle effects, etc. The test engine also supports the dynamic placement, movement and elimination of light sources, meshes and spatial geometry. Critical analysis was performed via scripted camera movement and object and light source additions, done not only to ensure consistent testing but also to ease future validation and replication of results. This provided us with a scalable interactive testing environment as well as a complete solution for the rendering of computationally intensive 3D environments. As a full-fledged game engine, our rendering engine is amenable to first- and third-person shooter games, role-playing games and 3D immersive environments. The evaluation criteria (identified to assess the relationship between rendering quality and performance), as mentioned, allow us to cycle algorithms effectively based on empirical results and to distribute specific processing (cube mapping and physics processing) between the CPU and GPU, a unification that ensures the following: nearby effects are always of high quality (where computational resources are available), distant effects are, under certain conditions, rendered at a lower quality, and the frames-per-second rendering performance is always maximised. The implication of our work is clear: unifying the CPU and GPU and dynamically cycling through the most appropriate algorithms based on ever-changing environmental conditions maximises rendering quality and performance, and shows that it is possible to render high-quality visual effects with realism without overburdening scarce computational resources.
Immersive rendering approaches used in conjunction with AI subsystems, game networking and logic, physics processing and other special effects (such as post-processing shader effects) are immensely processor-intensive and can only be successfully implemented on high-end hardware. Only by cycling and distributing algorithms based on environmental conditions, and by exploiting algorithmic strengths, can high-quality real-time special effects and highly accurate calculations become as common as texture mapping. Furthermore, in a gaming context, players often spend an inordinate amount of time fine-tuning their graphics settings to achieve the perfect balance between rendering quality and frames-per-second performance. Our system ensures that the performance-versus-quality trade-off is always optimised, not only for the game as a whole but also for the current scene being rendered: some scenes may require more computational power than others, causing noticeable slowdowns, which our system avoids through its dynamic cycling of rendering algorithms and its proof-of-concept unification of the CPU and GPU. A sketch of the load-balancing idea follows below. / Thesis (PhD)--University of Pretoria, 2012. / Computer Science / unrestricted
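As an illustration of the CPU/GPU load-balancing idea only (not the thesis's actual controller), a minimal C++ sketch of a proportional balancer that shifts a share of divisible work, such as cube-map updates or physics batches, toward the less loaded processor; the class name, gain and update rule are hypothetical:

```cpp
#include <algorithm>

// Hypothetical proportional load balancer: track the most recent CPU
// and GPU frame times and nudge the split of divisible work toward
// whichever processor finished sooner.
class CpuGpuBalancer {
public:
    // Fraction of the divisible workload currently assigned to the GPU.
    float gpuShare() const { return gpuShare_; }

    // Call once per frame with measured timings in milliseconds.
    void update(float cpuMillis, float gpuMillis) {
        const float gain = 0.05f;  // small steps keep the split stable
        float total = cpuMillis + gpuMillis;
        if (total <= 0.0f) return;
        // If the GPU took longer this frame, move work toward the CPU,
        // and vice versa; the shift is proportional to the imbalance.
        gpuShare_ -= gain * (gpuMillis - cpuMillis) / total;
        gpuShare_ = std::min(1.0f, std::max(0.0f, gpuShare_));
    }

private:
    float gpuShare_ = 0.5f;  // start with an even split
};
```

A render loop would call update() with per-frame CPU and GPU timer queries and use gpuShare() to size the next frame's work batches; keeping the gain small avoids oscillation when frame times are noisy.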
5

Photon tracing na GPU / Photon Tracing on GPU

Galacz, Roman January 2013
The subject of this thesis is the acceleration of the photon mapping method on a graphics card. Photon mapping is a method for computing near-realistic global illumination of a scene. The computation is relatively time-consuming, so its acceleration is an active topic in the field of computer graphics. Photon mapping is described in detail, from photon tracing to the rendering of the scene. The thesis then focuses on spatial subdivision structures, especially the uniform grid. The design and implementation of an application that computes photon mapping on the GPU, achieved through OpenGL and CUDA interoperability, is described in the next part of the thesis. Finally, the application is thoroughly tested, and the results achieved are reviewed in the conclusion of the thesis.
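As an illustration of the uniform grid mentioned above, a minimal CPU-side C++ sketch (the thesis builds a comparable structure on the GPU through CUDA); photons are binned by position so that the radiance estimate gathers candidates from a few neighbouring cells rather than scanning the whole photon map. The struct layout and names are assumptions, not taken from the thesis:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// A photon as stored in the photon map: position and RGB power.
struct Photon { float pos[3]; float power[3]; };

// Uniform grid over the scene's bounding box.
struct UniformGrid {
    float origin[3];   // minimum corner of the scene bounds
    float cellSize;    // edge length of one cubic cell
    int   dim[3];      // number of cells per axis
    std::vector<std::vector<uint32_t>> cells;  // photon indices per cell

    int clampAxis(int v, int axis) const {
        return v < 0 ? 0 : (v >= dim[axis] ? dim[axis] - 1 : v);
    }

    // Flatten a photon position into an index into `cells`.
    int cellIndex(const float p[3]) const {
        int c[3];
        for (int a = 0; a < 3; ++a)
            c[a] = clampAxis(int((p[a] - origin[a]) / cellSize), a);
        return (c[2] * dim[1] + c[1]) * dim[0] + c[0];
    }

    // Rebuild the grid from scratch for a traced photon batch.
    void insert(const std::vector<Photon>& photons) {
        cells.assign(std::size_t(dim[0]) * dim[1] * dim[2],
                     std::vector<uint32_t>());
        for (uint32_t i = 0; i < photons.size(); ++i)
            cells[cellIndex(photons[i].pos)].push_back(i);
    }
};
```

GPU implementations typically replace the per-cell vectors with a sorted photon array plus per-cell start offsets, since ragged arrays map poorly onto CUDA memory; the lookup logic stays the same.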
