About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
301

Efektivní trasování cest v objemových médiích na GPU / Efficient GPU path tracing in solid volumetric media

Forti, Federico January 2018
Realistic image synthesis usually requires long computations to simulate light interacting with a virtual scene. One of the most computationally intensive simulations in this area is the visualization of solid participating media. Such media can describe many different types of objects with the same physical parameters (e.g. marble, air, fire, skin, wax). Simulating light interacting with them requires computing many independent photon interactions inside the medium. However, those interactions can be computed in parallel using the power of modern Graphics Processing Unit (GPU) computing. This work presents an overview of different methodologies that affect the performance of this type of simulation on the GPU. Several existing ideas are analyzed, compared and modified with the goal of speeding up the computation with respect to a classic CPU implementation.
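The random walk inside a participating medium that such simulations parallelize can be illustrated with a minimal CPU sketch (not the thesis's GPU implementation): free-flight distances are sampled from the Beer-Lambert distribution and the photon scatters isotropically until it is absorbed or leaves the medium. The extinction coefficient, albedo and the unit-sphere medium below are assumed example values.

```cpp
// Minimal single-photon random walk in a homogeneous participating medium.
// Assumed parameters: sigma_t (extinction), albedo, and a unit-sphere medium.
#include <cmath>
#include <random>
#include <cstdio>

struct Vec3 { float x, y, z; };
static Vec3 addScaled(Vec3 a, Vec3 b, float s) { return {a.x + s*b.x, a.y + s*b.y, a.z + s*b.z}; }

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<float> U(0.0f, 1.0f);

    const float sigma_t = 4.0f;   // extinction coefficient (assumed)
    const float albedo  = 0.9f;   // scattering probability per interaction (assumed)

    Vec3 p{0, 0, 0};              // start at the center of a unit-sphere medium
    Vec3 d{0, 0, 1};
    int bounces = 0;
    bool absorbed = false;

    while (true) {
        // Sample a free-flight distance t ~ Exp(sigma_t): t = -ln(1 - u) / sigma_t
        float t = -std::log(1.0f - U(rng)) / sigma_t;
        p = addScaled(p, d, t);

        // Photon left the medium (unit sphere around the origin): stop.
        if (p.x*p.x + p.y*p.y + p.z*p.z > 1.0f) break;

        // Absorption vs. scattering.
        if (U(rng) > albedo) { absorbed = true; break; }

        // Isotropic scattering: pick a new direction uniformly on the sphere.
        float z = 1.0f - 2.0f * U(rng);
        float phi = 6.2831853f * U(rng);
        float r = std::sqrt(std::fmax(0.0f, 1.0f - z*z));
        d = {r * std::cos(phi), r * std::sin(phi), z};
        ++bounces;
    }
    std::printf("%s after %d scattering events\n", absorbed ? "absorbed" : "exited", bounces);
    return 0;
}
```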
302

Um pipeline para renderização fotorrealística de tempo real com ray tracing para a realidade aumentada / A pipeline for real-time photorealistic rendering with ray tracing for augmented reality

Lemos de Almeida Melo, Diego 31 January 2012
Augmented Reality is a research field concerned with techniques for integrating virtual information with the real world. Some Augmented Reality applications require photorealism, in which the virtual elements are inserted into the real scene so coherently that the user cannot distinguish the virtual from the real. There are several techniques for synthesizing 3D scenes, among them ray tracing. It is an algorithm based on basic concepts of optics whose main characteristic is high visual quality at a high computational cost, which long restricted its use to offline applications. However, with the advance in GPU computing power this algorithm became viable for real-time applications, mainly because it can be massively parallelized. Taking this into account, this dissertation proposes a pipeline for real-time photorealistic rendering using ray tracing in Augmented Reality applications. The ray tracer used was the Real Time Ray Tracer, or RT2, by Santos et al., which served as the basis for building a pipeline that supports shadowing, synthesis of several types of materials, occlusion, reflection, refraction and some camera effects. To obtain a system that runs at interactive rates, the entire rendering pipeline was implemented on the GPU using NVIDIA's CUDA language. Another important contribution of this work is the integration of this pipeline with Microsoft's Kinect device, making it possible to acquire real scene information in real time and thus eliminating the need to know the objects in the real scene beforehand.
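Two of the effects listed above, reflection and refraction, come down to computing the secondary-ray directions a Whitted-style ray tracer spawns at a hit point. The following is a generic sketch of those formulas, not the RT2 pipeline itself; the incident direction and index of refraction are assumed example values.

```cpp
// Reflection and refraction (Snell's law) directions at a surface hit,
// as used by a Whitted-style ray tracer. Illustrative only; not RT2 code.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
static Vec3 scale(Vec3 v, float s) { return {v.x*s, v.y*s, v.z*s}; }
static Vec3 sub(Vec3 a, Vec3 b)    { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Mirror reflection of incident direction i about normal n (both unit length).
Vec3 reflectDir(Vec3 i, Vec3 n) {
    return sub(i, scale(n, 2.0f * dot(i, n)));
}

// Refraction of i entering a medium with relative index eta = n1/n2.
// Returns false on total internal reflection.
bool refractDir(Vec3 i, Vec3 n, float eta, Vec3& out) {
    float cosi = -dot(i, n);
    float k = 1.0f - eta * eta * (1.0f - cosi * cosi);
    if (k < 0.0f) return false;                        // total internal reflection
    float a = eta * cosi - std::sqrt(k);
    out = { eta * i.x + a * n.x, eta * i.y + a * n.y, eta * i.z + a * n.z };
    return true;
}

int main() {
    Vec3 i{0.0f, -0.7071f, 0.7071f};   // assumed incident direction (unit length)
    Vec3 n{0.0f, 1.0f, 0.0f};          // surface normal
    Vec3 r = reflectDir(i, n), t;
    bool ok = refractDir(i, n, 1.0f / 1.5f, t);        // air -> glass (assumed)
    std::printf("reflect: %.3f %.3f %.3f\n", r.x, r.y, r.z);
    if (ok) std::printf("refract: %.3f %.3f %.3f\n", t.x, t.y, t.z);
    return 0;
}
```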
303

Aferências hipotalâmicas para a área tegmental ventral, núcleo tegmental rostromedial e núcleo dorsal da rafe. / Hypothalamic afferents to the ventral tegmental area, rostromedial tegmental nucleus and dorsal raphe nucleus.

Leandro Bueno Lima 23 June 2015
The hypothalamus modulates behaviors related to motivation, reward and punishment via projections to the ventral tegmental area (VTA), dorsal raphe nucleus (DR), and rostromedial tegmental nucleus (RMTg). In this study we used retrograde tracing methods to investigate hypothalamic inputs to the VTA, DR, and RMTg, and whether individual hypothalamic neurons project to more than one of these structures. We also determined a possible GABAergic or glutamatergic phenotype of hypothalamic afferents by combining retrograde tracing with in situ hybridization methods. We found that the VTA, DR, and RMTg receive a very similar set of hypothalamic afferents originating from glutamatergic and GABAergic hypothalamic projection neurons, the majority of them (> 90%) innervating only one of these structures. Our findings indicate that hypothalamic inputs are important sources of homeostatic signals for the VTA, DR, and RMTg. They exhibit a high degree of heterogeneity that permits the three structures to be activated or inhibited either independently or jointly.
304

Método para visualização de campos tensoriais tridimensionais baseado em rastreamento de partículas / A method for visualizing three-dimensional tensor fields based on particle tracing

Leonel, Gildo de Almeida 17 January 2011
Arbitrary tensor fields are useful in several areas of knowledge such as physics, engineering and medicine. A main interest of professionals in these areas is the investigation of collinear and coplanar objects represented by the tensors. These objects are formed by structured subsets of tensors in the field that capture some geometric continuity. Because of their multivariate nature, visualizing such structured elements is a challenging task. Usually, direct detection methods are used to extract these structures so that the observer can analyze them. The proposal of this dissertation is to exploit the fact that motion innately stimulates the perception of complex shapes in the human visual system. The approach developed uses a particle tracing system parameterized by the tensor field, so that the behavior of the particles represents the characteristics of the field and improves the understanding and interpretation of the information carried by the tensors.
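The core idea can be sketched with a small CPU example (shown here in 2D for brevity, not the author's 3D GPU-accelerated system): a particle is repeatedly advected along the major eigenvector of a symmetric tensor sampled at its current position, with a sign check to keep the direction coherent. The analytic tensor field below is made up purely for illustration.

```cpp
// Hyperstreamline-style particle tracing: advect a particle along the major
// eigenvector of a 2D symmetric tensor field. Illustrative sketch only.
#include <cmath>
#include <cstdio>

struct Vec2 { float x, y; };
struct Tensor2 { float xx, xy, yy; };     // symmetric 2x2 tensor

// Made-up analytic tensor field used only for this example.
Tensor2 sampleField(Vec2 p) {
    return { 1.0f + p.x * p.x, p.x * p.y, 1.0f + p.y * p.y };
}

// Major eigenvector of a symmetric 2x2 tensor.
Vec2 majorEigenvector(Tensor2 t) {
    float mean = 0.5f * (t.xx + t.yy);
    float diff = 0.5f * (t.xx - t.yy);
    float lambda = mean + std::sqrt(diff * diff + t.xy * t.xy);  // largest eigenvalue
    Vec2 v = (std::fabs(t.xy) > 1e-6f)
                 ? Vec2{t.xy, lambda - t.xx}
                 : Vec2{(t.xx >= t.yy) ? 1.0f : 0.0f, (t.xx >= t.yy) ? 0.0f : 1.0f};
    float len = std::sqrt(v.x * v.x + v.y * v.y);
    return {v.x / len, v.y / len};
}

int main() {
    Vec2 p{0.1f, 0.2f};           // seed position (assumed)
    Vec2 prev{1.0f, 0.0f};        // previous direction, for sign consistency
    const float h = 0.05f;        // integration step size

    for (int i = 0; i < 100; ++i) {
        Vec2 v = majorEigenvector(sampleField(p));
        // Eigenvectors have no inherent sign: keep the direction coherent.
        if (v.x * prev.x + v.y * prev.y < 0.0f) { v.x = -v.x; v.y = -v.y; }
        p.x += h * v.x;
        p.y += h * v.y;
        prev = v;
        if (i % 20 == 0) std::printf("step %3d: (%.3f, %.3f)\n", i, p.x, p.y);
    }
    return 0;
}
```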
305

Um Pipeline Para Renderização Fotorrealística de Tempo Real com Ray Tracing para Realidade Aumentada / A Pipeline for Real-Time Photorealistic Rendering with Ray Tracing for Augmented Reality

Melo, Diego Lemos de Almeida 09 March 2012
Augmented Reality is a research field concerned with techniques for integrating virtual information with the real world. Some Augmented Reality applications require photorealism, in which the virtual elements are inserted into the real scene so coherently that the user cannot distinguish the virtual from the real. There are several techniques for synthesizing 3D scenes, among them ray tracing. It is an algorithm based on basic concepts of optics whose main characteristic is high visual quality at a high computational cost, which long restricted its use to offline applications. However, with the advance in GPU computing power this algorithm became viable for real-time applications, mainly because it can be massively parallelized. Taking this into account, this dissertation proposes a pipeline for real-time photorealistic rendering using ray tracing in Augmented Reality applications. The ray tracer used was the Real Time Ray Tracer, or RT2, by Santos et al., which served as the basis for building a pipeline that supports shadowing, synthesis of several types of materials, occlusion, reflection, refraction and some camera effects. To obtain a system that runs at interactive rates, the entire rendering pipeline was implemented on the GPU using NVIDIA's CUDA language. Another important contribution of this work is the integration of this pipeline with Microsoft's Kinect device, making it possible to acquire real scene information in real time and thus eliminating the need to know the objects in the real scene beforehand.
306

Real-time generation of kd-trees for ray tracing using DirectX 11

Säll, Martin, Cronqvist, Fredrik January 2017
Context. Ray tracing has always been a simple but effective way to create a photorealistic scene, but at a greater cost when expanding the scene. Recent improvements in GPU and CPU hardware have made ray tracing faster, making more complex scenes possible within the same amount of time needed to process the scene. Despite the improvements in hardware, ray tracing is still rarely run at interactive speed. Objectives. The aim of this experiment was to implement a new kd-tree generation algorithm using DirectX 11 compute shaders. Methods. The implementation created during the experiment was tested on two platforms and five scenarios where the generation time for the kd-tree was measured in milliseconds. The results were compared to a sequential implementation running on the CPU. Results. In the end, the kd-tree generation algorithm implemented did not run within our definition of real-time. Comparing the generation times of the implementations shows that there is a speedup for the GPU implementation compared to our CPU implementation; it also shows linear scaling of the generation time as the number of triangles in the scene increases. Conclusions. A noticeable limitation encountered during the experiment was that the handling of dynamic structures and the sorting of arrays are limited on the GPU, which forced us to use less memory-efficient solutions.
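A sequential kd-tree build of the kind such GPU implementations are compared against can be sketched as a recursive median split over triangle centroids; this is a generic CPU reference under assumed data structures, not the thesis's baseline or its compute-shader version.

```cpp
// Simplified sequential kd-tree build over triangle centroids (median split).
// A generic CPU reference sketch, not the DirectX 11 compute-shader version.
#include <algorithm>
#include <memory>
#include <vector>
#include <cstdio>

struct Centroid { float p[3]; int triangleIndex; };

struct KdNode {
    int axis = -1;                       // -1 marks a leaf
    float split = 0.0f;
    std::vector<int> triangles;          // only filled for leaves
    std::unique_ptr<KdNode> left, right;
};

std::unique_ptr<KdNode> build(std::vector<Centroid> items, int depth, size_t leafSize) {
    auto node = std::make_unique<KdNode>();
    if (items.size() <= leafSize) {                      // leaf: store triangle indices
        for (const auto& c : items) node->triangles.push_back(c.triangleIndex);
        return node;
    }
    int axis = depth % 3;                                // round-robin split axis
    size_t mid = items.size() / 2;
    std::nth_element(items.begin(), items.begin() + mid, items.end(),
                     [axis](const Centroid& a, const Centroid& b) {
                         return a.p[axis] < b.p[axis];
                     });
    node->axis = axis;
    node->split = items[mid].p[axis];
    std::vector<Centroid> lo(items.begin(), items.begin() + mid);
    std::vector<Centroid> hi(items.begin() + mid, items.end());
    node->left = build(std::move(lo), depth + 1, leafSize);
    node->right = build(std::move(hi), depth + 1, leafSize);
    return node;
}

int main() {
    // Tiny made-up set of triangle centroids for illustration.
    std::vector<Centroid> c = {
        {{0.1f, 0.5f, 0.2f}, 0}, {{0.7f, 0.1f, 0.9f}, 1},
        {{0.3f, 0.8f, 0.4f}, 2}, {{0.9f, 0.3f, 0.6f}, 3},
    };
    auto root = build(c, 0, /*leafSize=*/1);
    std::printf("root split axis %d at %.2f\n", root->axis, root->split);
    return 0;
}
```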
307

Ray Tracing Non-Polygonal Objects: Implementation and Performance Analysis using Embree

Carlie, Michael January 2016
Free-form surfaces and implicit surfaces must be tessellated before they can be rendered with rasterization techniques. Ray tracing, however, provides the means to render such objects directly, without first converting them into polygonal meshes. Since ray tracing can handle triangle meshes as well, this thesis addresses the question of which method is most suitable in terms of performance, quality and memory usage. Bézier surfaces and NURBS surfaces, along with basic algebraic implicit surfaces, are implemented in order to test their performance relative to polygonal meshes approximating the same objects. The parametric surfaces are implemented using an iterative Newton method that converges on the point of intersection, with a bounding volume hierarchy that stores the initial guesses. Research on intersecting rays with parametric surfaces is surveyed in order to find additional methods that speed up the computation. The implicit surfaces are implemented using common direct algebraic methods. All of the intersection tests are implemented using the Embree ray tracing API as well as a SIMD library in order to achieve interactive frame rates on a CPU. The results show that both Bézier surfaces and NURBS surfaces can achieve interactive frame rates on a CPU using SIMD computation, with Bézier surfaces coming close to the performance of their polygonal counterparts. The implicit surfaces implemented outperform even the simplest polygonal approximations.
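As an illustration of the direct algebraic approach used for implicit surfaces, intersecting a ray with a sphere reduces to substituting the ray equation into the implicit equation and solving a quadratic. This is a generic scalar sketch, not the thesis's Embree/SIMD-integrated code.

```cpp
// Direct algebraic ray-sphere intersection: substitute the ray into the
// implicit equation |p - c|^2 = r^2 and solve the resulting quadratic.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns the nearest positive hit distance, or a negative value if the ray misses.
float intersectSphere(Vec3 orig, Vec3 dir, Vec3 center, float radius) {
    Vec3 oc = sub(orig, center);
    float a = dot(dir, dir);
    float b = 2.0f * dot(oc, dir);
    float c = dot(oc, oc) - radius * radius;
    float disc = b * b - 4.0f * a * c;
    if (disc < 0.0f) return -1.0f;                    // no real roots: miss
    float s = std::sqrt(disc);
    float t0 = (-b - s) / (2.0f * a);                 // nearer root first
    float t1 = (-b + s) / (2.0f * a);
    if (t0 > 1e-4f) return t0;
    if (t1 > 1e-4f) return t1;                        // origin inside the sphere
    return -1.0f;
}

int main() {
    float t = intersectSphere({0, 0, -3}, {0, 0, 1}, {0, 0, 0}, 1.0f);
    std::printf("hit distance: %.3f\n", t);           // expected: 2.000
    return 0;
}
```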
308

Implementing and Evaluating CPU/GPU Real-Time Ray Tracing Solutions

Norgren, David January 2016
Ray tracing is a popular algorithm used to simulate the behavior of light and is commonly used to render images with high levels of visual realism. Modern multicore CPUs and many-core GPUs can take advantage of the parallel nature of ray tracing to accelerate the rendering process and produce new images in real time. On non-specialized hardware, however, such implementations are often limited to low screen resolutions, simple scene geometry and basic graphical effects. In this work, a C++ framework was created to investigate how the ray tracing algorithm can be implemented and accelerated on the CPU and GPU, respectively. The framework is capable of utilizing two third-party ray tracing libraries, Intel's Embree and NVIDIA's OptiX, to ray trace various 3D scenes. The framework also supports several effects for added realism, a user-controlled camera, and triangle meshes with different materials and textures. In addition, a hybrid ray tracing solution is explored, running both libraries simultaneously to render subsections of the screen. Benchmarks performed on a high-end CPU and GPU are finally presented for various scenes and effects. Across these results, OptiX on a Titan X performed better by a factor of 2-4 than Embree running on an 8-core hyperthreaded CPU within the same price range. Due to this imbalance between the CPU and GPU, along with possible interference between the libraries, the hybrid solution did not give a significant speedup, but it created possibilities for future research.
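The kind of single-ray query the CPU back end of such a framework builds on can be sketched with a minimal Embree scene; the code below is written against the Embree 3 C API as an assumption and is not the thesis framework itself.

```cpp
// Minimal Embree 3 scene with one triangle and a single-ray intersection query.
// A sketch of the CPU back end's building block, not the thesis framework.
#include <embree3/rtcore.h>
#include <cstdio>
#include <limits>

int main() {
    RTCDevice device = rtcNewDevice(nullptr);
    RTCScene scene = rtcNewScene(device);

    // One triangle in the XY plane.
    RTCGeometry geom = rtcNewGeometry(device, RTC_GEOMETRY_TYPE_TRIANGLE);
    float* v = (float*)rtcSetNewGeometryBuffer(geom, RTC_BUFFER_TYPE_VERTEX, 0,
                                               RTC_FORMAT_FLOAT3, 3 * sizeof(float), 3);
    v[0] = -1; v[1] = -1; v[2] = 0;   v[3] = 1; v[4] = -1; v[5] = 0;   v[6] = 0; v[7] = 1; v[8] = 0;
    unsigned* idx = (unsigned*)rtcSetNewGeometryBuffer(geom, RTC_BUFFER_TYPE_INDEX, 0,
                                                       RTC_FORMAT_UINT3, 3 * sizeof(unsigned), 1);
    idx[0] = 0; idx[1] = 1; idx[2] = 2;
    rtcCommitGeometry(geom);
    rtcAttachGeometry(scene, geom);
    rtcReleaseGeometry(geom);
    rtcCommitScene(scene);

    // Ray from z = -5 straight toward the triangle.
    RTCRayHit rh{};
    rh.ray.org_x = 0; rh.ray.org_y = 0; rh.ray.org_z = -5;
    rh.ray.dir_x = 0; rh.ray.dir_y = 0; rh.ray.dir_z = 1;
    rh.ray.tnear = 0.0f;
    rh.ray.tfar = std::numeric_limits<float>::infinity();
    rh.ray.mask = 0xFFFFFFFF;
    rh.hit.geomID = RTC_INVALID_GEOMETRY_ID;

    RTCIntersectContext ctx;
    rtcInitIntersectContext(&ctx);
    rtcIntersect1(scene, &ctx, &rh);

    if (rh.hit.geomID != RTC_INVALID_GEOMETRY_ID)
        std::printf("hit at t = %.3f\n", rh.ray.tfar);   // tfar is shortened to the hit distance

    rtcReleaseScene(scene);
    rtcReleaseDevice(device);
    return 0;
}
```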
309

Potential of GPU Based Hybrid Ray Tracing For Real-Time Games

Poulsen, Henrik January 2009
Graphics hardware technology is developing at a blazing pace, with new and improved models that outclass previous generations before one has had time to digest the potential of their computing power. As this technology has progressed, the computer games industry has always been quick to adopt the new power and the features that emerge as the graphics card industry learns what customers need from its products. The current generation of games uses extraordinary visual effects to heighten immersion, all of which is thanks to the constant progress of graphics hardware and would have been impossible just a couple of years ago. Ray tracing has been used for years in the movie industry to create stunning special effects and entire movies made completely in 3D. This technique for producing realistic imagery has traditionally been reserved for non-interactive entertainment, since rendering an image this way is extremely expensive computationally: generating a single ray-traced image may require several hundred million calculations, which so far has not been shown to work in real-time situations such as games. However, due to the continuous increase in the processing power of Graphics Processing Units (GPUs), the limit of what can and cannot be done in real time is constantly shifting further into the realm of possibility. This thesis therefore focuses on finding out just how close we are to bringing ray tracing into real-time games. Two tests were performed to find out the potential a current (2009) high-end computer system has when it comes to handling a rasterization/ray tracing hybrid implementation. The first test measures how well a modern GPU handles rendering of a very simple scene with Phong shading and ray-traced shadows without any optimizations. The second test uses the same scenario but with a basic optimization; this last test illustrates the impact that possible optimizations have on ray tracers. These tests were later compared to Intel's results with ray tracing Enemy Territory: Quake Wars.
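The combination measured in the first test, local shading plus a ray-traced shadow term, can be sketched roughly as follows; the sphere scene, light and shading point are assumed example values and this toy CPU code is not the thesis's implementation.

```cpp
// Local diffuse shading with a ray-traced shadow ray, the combination measured
// in the first test. Toy CPU sketch with an assumed sphere scene.
#include <cmath>
#include <vector>
#include <cstdio>

struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 normalize(Vec3 v) { float l = std::sqrt(dot(v, v)); return {v.x/l, v.y/l, v.z/l}; }

struct Sphere { Vec3 c; float r; };

// True if the ray from p toward the light (closer than maxT) hits any sphere.
bool occluded(Vec3 p, Vec3 dir, float maxT, const std::vector<Sphere>& scene) {
    for (const auto& s : scene) {
        Vec3 oc = sub(p, s.c);
        float b = 2.0f * dot(oc, dir);
        float c = dot(oc, oc) - s.r * s.r;
        float disc = b * b - 4.0f * c;                  // dir is unit length, so a = 1
        if (disc < 0.0f) continue;
        float t = (-b - std::sqrt(disc)) * 0.5f;
        if (t > 1e-3f && t < maxT) return true;         // blocker between point and light
    }
    return false;
}

int main() {
    std::vector<Sphere> scene = { {{0, 2, 0}, 0.5f} };  // assumed blocker above the floor
    Vec3 light{0, 5, 0};
    Vec3 hitPoint{0, 0, 0};                             // assumed shading point on a floor
    Vec3 normal{0, 1, 0};

    Vec3 toLight = sub(light, hitPoint);
    float distToLight = std::sqrt(dot(toLight, toLight));
    Vec3 l = normalize(toLight);

    float diffuse = std::fmax(0.0f, dot(normal, l));
    float shadow = occluded(hitPoint, l, distToLight, scene) ? 0.0f : 1.0f;
    std::printf("shaded intensity: %.3f\n", diffuse * shadow);  // 0.000: the blocker shadows the point
    return 0;
}
```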
310

Real-Time Audio Simulation with Implicit Surfaces using Sphere Tracing on the GPU

Sjöberg, Peter January 2011
Digital games are based on interactive virtual environments where graphics and audio are combined. In many of these games a lot of effort is put into graphics while the audio part is left underdeveloped. Audio in games is important in order to immerse the player in the virtual environment. Where a high level of emulated reality is needed, graphics and audio should be combined at a similar level of realism. To make this possible, a sophisticated method for audio simulation is needed. In the audio simulation field, previous attempts at using ray tracing methods were successful. With methods based on ray tracing, the sound waves are traced from the audio source to the listener in the virtual environment, where the environment is based on a scene consisting of implicit surfaces. A key part of the tracing computations is finding the intersection point between a sound wave and the surfaces in the scene. Sphere tracing is an alternative method for finding the intersection point and has been shown to be feasible for real-time usage on the graphics processing unit (GPU). To be interactive, a game environment runs in real time, which puts a time constraint on the rendering of graphics and audio: the time window to render one frame in the synchronized rendering of graphics and audio, based on the frame rate of the graphics. Consumer computer systems today are generally equipped with a GPU, so if an audio simulation can use the GPU in real time, this is a possible implementation target in a game system. The aim of this thesis is to investigate whether audio simulation with the ray tracing method based on sphere tracing can run in real time on the GPU. An audio simulation system is implemented in order to examine the possibility of real-time usage based on computation time. The results of this thesis show that audio simulation with implicit surfaces using sphere tracing is possible to use in real time with the GPU in some form. The time consumption of such an audio simulation system is small enough to enable real-time usage. Based on an interactive graphics frame rate, the time consumption allows the graphics and audio computations to use the GPU within the same frame time.
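The core sphere-tracing loop that such a system relies on can be sketched as follows: the ray is advanced by the distance returned by a signed distance function (SDF) until it gets close enough to a surface or leaves the scene. The SDF here is a single made-up sphere, and the thesis's GPU implementation and audio-specific handling are not reproduced.

```cpp
// Sphere tracing: march a ray through a scene described by a signed distance
// function (SDF), stepping by the distance to the nearest surface each time.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Made-up scene SDF: a single sphere of radius 1 centered at (0, 0, 5).
float sceneSDF(Vec3 p) {
    float dx = p.x, dy = p.y, dz = p.z - 5.0f;
    return std::sqrt(dx*dx + dy*dy + dz*dz) - 1.0f;
}

// Returns the hit distance along the ray, or a negative value if nothing is hit.
float sphereTrace(Vec3 origin, Vec3 dir, float maxDist) {
    const float epsilon = 1e-3f;    // how close counts as a surface hit
    float t = 0.0f;
    for (int i = 0; i < 128 && t < maxDist; ++i) {
        Vec3 p{origin.x + t*dir.x, origin.y + t*dir.y, origin.z + t*dir.z};
        float d = sceneSDF(p);
        if (d < epsilon) return t;  // close enough to the implicit surface
        t += d;                     // safe step: no surface is nearer than d
    }
    return -1.0f;
}

int main() {
    float t = sphereTrace({0, 0, 0}, {0, 0, 1}, 100.0f);
    std::printf("hit at t = %.3f\n", t);   // expected: about 4.0
    return 0;
}
```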
