81

Multipath-assisted Single-anchor Outdoor Positioning in Urban Environments

Ljungzell, Erik January 2018 (has links)
An important aspect of upcoming fifth-generation (5G) cellular communication systems is to improve the accuracy with which user equipments can be positioned. Accurately knowing the position of a user equipment is becoming increasingly important for a wide range of applications, such as automation in industry, drones, and the internet of things. Contrary to how existing techniques for outdoor cellular positioning deal with multipath propagation, in this study the aim is to use, rather than mitigate, the multipath propagation prevalent in dense urban environments. It is investigated whether it is possible to position a user equipment using only a single transmitting base station, by exploiting position-related information in multipath components inherent in the received signal. Two algorithms are developed: one classical point-estimation algorithm using a grid search to find the cost function-minimizing position, and one Bayesian filtering algorithm using a point-mass filter. Both algorithms make use of BEZT, a set of 3D propagation models developed by Ericsson Research, to predict propagation paths. A model of the signal received by a user equipment is formulated for use in the positioning algorithms. In addition to the signal model, the algorithms also require a digital map of the propagation environment. The algorithms are evaluated first on synthetic measurements, generated using BEZT, and then on real-world measurements. For both the synthetic and real-world measurement sets, the Bayesian point-mass filter outperforms the classical algorithm. It is observed how, given synthetic measurements, the algorithms yield better estimates in non-line-of-sight regions than in regions where the user equipment has line-of-sight to the transmitting base station. Unfortunately, these results do not generalize well to the real-world measurements, where, overall, neither algorithm is able to provide reliable and robust position estimates. However, as multipath-assisted positioning, to the best of our knowledge, has not been used for outdoor cellular positioning before, there are plenty of algorithm extensions, modifications, and problem aspects left to be studied - some of which are discussed in the concluding chapters.
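As an illustration of the classical point-estimation approach described above, the sketch below performs a grid search over candidate positions and keeps the one whose predicted multipath delays best match the measurement. The propagation_model callable, the toy cost function, and the nearest-delay association are assumptions for illustration only; they are not the thesis's BEZT interface or actual cost function.

```python
def grid_search_position(measured_delays, candidate_grid, propagation_model):
    """Classical point estimation: evaluate a cost over a grid of candidate
    user-equipment positions and return the cost-minimizing position.
    `propagation_model(pos)` is a hypothetical stand-in for a BEZT-style
    predictor returning the multipath delays expected at position `pos`."""
    best_pos, best_cost = None, float("inf")
    for pos in candidate_grid:
        predicted = propagation_model(pos)
        # Toy cost: match each measured delay to its nearest predicted delay;
        # the thesis's actual cost function and data association are not given here.
        cost = sum(min((d - p) ** 2 for p in predicted) for d in measured_delays)
        if cost < best_cost:
            best_pos, best_cost = pos, cost
    return best_pos
```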
82

Síntese de fenômenos naturais através do traçado de raios usando "height fields" [Synthesis of natural phenomena through ray tracing using height fields]

Silva, Franz Josef Figueroa Ferreira da January 1996 (has links)
Visualization is a powerful tool for better understanding several natural phenomena. In recent years, several techniques have been proposed to synthesize such phenomena. Considerable interest in natural scene synthesis has focused on procedural models; however, each of these techniques produces synthetic scenes of only one natural phenomenon. Ray tracing is one of the most photorealistic methods of image synthesis. While providing images of excellent quality, ray tracing is a computationally intensive task. Natural scene synthesis is a challenging problem within the realm of ray tracing. It is important to tackle this problem, despite its complexity, because photorealistic simulation of nature has been important to the scientific community since the appearance of computers. A fast and versatile algorithm for ray tracing natural scenes through height fields is presented. The algorithm employs a modified Bresenham DDA (digital differential analyzer) to traverse a two-dimensional array of altitude values. The objects tested for intersection along a ray are located in O(√N) time, where N is the number of values in the height field. This work compares the speed and photorealism achieved in natural scene synthesis using this method with other conventional approaches, and discusses the implications of implementing it. Finally, the simplicity and versatility of synthesizing complex natural scenes from a small number of parameters and data is especially attractive; animated sequences require only the specification of time-varying parameters or data.
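The grid traversal at the heart of such a height-field ray tracer can be sketched as follows. This is an illustrative Amanatides-Woo-style DDA with a crude per-cell height test, not the thesis's modified Bresenham DDA or its exact intersection test; heights, origin, and direction are assumed to be a 2D list of altitudes and 3D points/vectors in grid units.

```python
import math

def trace_height_field(heights, origin, direction, max_steps=10_000):
    """Walk the grid cells crossed by the ray's horizontal projection and report
    the first cell whose stored altitude the ray dips below (candidate hit)."""
    x, y = int(origin[0]), int(origin[1])
    step_x = 1 if direction[0] >= 0 else -1
    step_y = 1 if direction[1] >= 0 else -1
    # Ray parameter at the next vertical / horizontal cell boundary, and its increment
    t_max_x = ((x + (step_x > 0)) - origin[0]) / direction[0] if direction[0] else math.inf
    t_max_y = ((y + (step_y > 0)) - origin[1]) / direction[1] if direction[1] else math.inf
    t_delta_x = abs(1.0 / direction[0]) if direction[0] else math.inf
    t_delta_y = abs(1.0 / direction[1]) if direction[1] else math.inf
    t = 0.0
    for _ in range(max_steps):
        if not (0 <= x < len(heights) and 0 <= y < len(heights[0])):
            return None                      # ray left the height field
        if origin[2] + t * direction[2] <= heights[x][y]:
            return (x, y, t)                 # refine with an exact surface test here
        if t_max_x < t_max_y:
            t, x, t_max_x = t_max_x, x + step_x, t_max_x + t_delta_x
        else:
            t, y, t_max_y = t_max_y, y + step_y, t_max_y + t_delta_y
    return None
```

Because a ray crosses at most O(√N) of the N cells in a square height field, the per-ray cost stays low even for large terrains.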
83

Estruturas de aceleração para Ray Tracing em tempo real: um estudo comparativo [Acceleration structures for real-time ray tracing: a comparative study]

Lira dos Santos, Artur 31 January 2011 (has links)
The computational power of current GPUs makes it possible to run complex, massively parallel algorithms, such as search algorithms over data structures built specifically for real-time ray tracing, commonly known as acceleration structures. This dissertation describes in detail the study and implementation of sixteen different traversal algorithms for acceleration structures, using NVIDIA's CUDA framework. The goal of this comparative study was to determine the advantages and disadvantages of each technique in terms of performance, memory consumption, degree of branch divergence, and scalability across multiple GPUs. A new acceleration structure, called Sparse Box Grid, is also proposed, along with two new search algorithms focused on improving performance. These algorithms achieve speedups of up to 2.5x compared with recent GPU traversal implementations. As a consequence, real-time simulation of scenes with millions of primitives is possible at an image resolution of 1408x768.
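For reference, the single-ray, stack-based loop that the GPU traversal variants compared here all elaborate on looks roughly like this. The aabb.intersects, triangle.intersect, and hit.t interfaces are hypothetical placeholders, not the dissertation's implementation.

```python
def traverse_bvh(ray, root):
    """Minimal stack-based BVH traversal for a single ray; GPU variants differ
    mainly in how they store the stack and schedule divergent threads."""
    closest_hit, closest_t = None, float("inf")
    stack = [root]
    while stack:
        node = stack.pop()
        if not node.aabb.intersects(ray, closest_t):   # skip nodes behind the closest hit
            continue
        if node.is_leaf:
            for triangle in node.triangles:            # test the leaf's primitives
                hit = triangle.intersect(ray)
                if hit is not None and hit.t < closest_t:
                    closest_hit, closest_t = hit, hit.t
        else:
            stack.append(node.left)                     # real traversals order children
            stack.append(node.right)                    # front-to-back to prune more work
    return closest_hit
```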
84

Advances in modeling polarimeter performance

Chipman, Russell A. 30 August 2017 (has links)
Artifacts in polarimeters are apparent polarization features which are not real but result from systematic errors in the polarimeter. The polarization artifacts differ between division-of-focal-plane, spectral, and time-modulation polarimeters. Artifacts result from many sources, such as source properties, micropolarizer arrays, coating issues, vibrations, and stress birefringence. A modeling example of polarization artifacts in a micropolarizer-array polarimeter is presented.
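For context, the idealized demodulation of one 2x2 micropolarizer superpixel is shown below; because the four analyzers sample slightly different scene points, intensity edges leak into S1 and S2, which is one classic source of the division-of-focal-plane artifacts the abstract mentions. This is a generic textbook sketch, not the modeling method of the paper.

```python
import math

def stokes_from_superpixel(i0, i45, i90, i135):
    """Linear-Stokes demodulation for ideal analyzers at 0, 45, 90, and 135 degrees."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)              # total intensity (ideal analyzers pass half)
    s1 = i0 - i90                                   # horizontal minus vertical
    s2 = i45 - i135                                 # +45 minus -45 degrees
    dolp = math.hypot(s1, s2) / s0 if s0 else 0.0   # degree of linear polarization
    aolp = 0.5 * math.atan2(s2, s1)                 # angle of linear polarization (radians)
    return s0, s1, s2, dolp, aolp
```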
85

Challenges in coronagraph optical design

Chipman, Russell A. 06 September 2017 (has links)
The point spread function (PSF) for astronomical telescopes and instruments depends not only on geometric aberrations and scalar wave diffraction, but also on the apodization and wavefront errors introduced by coatings on reflecting and transmitting surfaces within the optical system. Geometrical ray tracing provides incomplete image simulations for exoplanet coronagraphs, whose goal is to resolve planets with a brightness less than 10^-9 that of their star, located within 3 Airy disk radii. The Polaris-M polarization analysis program calculates that uncorrected coating polarization aberrations couple around 10^-5 of the light into cross-polarized diffraction patterns about twice the Airy disk size. These wavefronts are not corrected by the deformable optics systems. Polarization aberration expansions have shown how image defects scale with mirror coatings, fold mirror angles, and numerical aperture.
86

Developing and utilizing the wavefield kinematics for efficient wavefield extrapolation

Waheed, Umair bin 08 1900 (has links)
Natural gas and oil from characteristically complex unconventional reservoirs, such as organic shale, tight gas and oil, and coal-bed methane, are transforming the global energy market. These unconventional reserves exist in complex geologic formations where conventional seismic techniques have been challenged to successfully image the subsurface. To acquire maximum benefit from these unconventional reserves, seismic anisotropy must be at the center of our modeling and inversion workflows. I present algorithms for fast traveltime computations in anisotropic media. Both ray-based and finite-difference solvers of the anisotropic eikonal equation are developed. The proposed algorithms present novel techniques to obtain accurate traveltime solutions for anisotropic media in a cost-efficient manner. The traveltime computation algorithms are then used to invert for anisotropy parameters. Specifically, I develop inversion techniques using diffractions and diving waves in the seismic data. The diffraction-based inversion algorithm can be combined with an isotropic full-waveform inversion (FWI) method to obtain a high-resolution model for the anellipticity anisotropy parameter. The inversion algorithm based on diving waves is useful for building initial anisotropic models for depth migration and FWI. I also develop the idea of 'effective elliptic models' for obtaining solutions of the anisotropic two-way wave equation. The proposed technique offers a viable alternative for wavefield computations in anisotropic media using a computationally cheaper wave propagation operator. The methods developed in the thesis lead to direct cost savings for imaging and inversion projects, in addition to a reduction in turn-around time. With an eye on next-generation inversion methods, these techniques allow us to incorporate more accurate physics into our modeling and inversion framework.
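As a point of reference for the finite-difference traveltime solvers mentioned above, a minimal isotropic 2D fast-sweeping eikonal solver is sketched below. The thesis addresses the anisotropic eikonal equation; this sketch only shows the simpler isotropic machinery and is not the thesis's algorithm.

```python
import numpy as np

def fast_sweeping_eikonal(slowness, src, h=1.0, n_sweeps=4):
    """Solve |grad T| = slowness on a 2D grid with Gauss-Seidel updates over
    four alternating sweep orderings (Zhao's fast sweeping method)."""
    nx, ny = slowness.shape
    t = np.full((nx, ny), np.inf)
    t[src] = 0.0                                    # point-source initial condition
    for _ in range(n_sweeps):
        for sx, sy in [(1, 1), (-1, 1), (1, -1), (-1, -1)]:
            for i in (range(nx) if sx > 0 else range(nx - 1, -1, -1)):
                for j in (range(ny) if sy > 0 else range(ny - 1, -1, -1)):
                    if (i, j) == src:
                        continue
                    a = min(t[i - 1, j] if i > 0 else np.inf,
                            t[i + 1, j] if i < nx - 1 else np.inf)
                    b = min(t[i, j - 1] if j > 0 else np.inf,
                            t[i, j + 1] if j < ny - 1 else np.inf)
                    if np.isinf(a) and np.isinf(b):
                        continue                    # no upwind neighbor reached yet
                    f = slowness[i, j] * h
                    # Godunov upwind update
                    if abs(a - b) >= f:
                        t_new = min(a, b) + f
                    else:
                        t_new = 0.5 * (a + b + np.sqrt(2.0 * f * f - (a - b) ** 2))
                    t[i, j] = min(t[i, j], t_new)
    return t
```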
87

Hardware accelerated ray tracing of particle systems

Lindau, Ludvig January 2020 (has links)
Background. Particle systems are a staple feature of most modern renderers. There are several technical challenges when it comes to rendering transparent particles. Particle sorting along the view direction is required for proper blending, and casting shadows from particles requires non-standard shadow algorithms. A recent technology that could be used to address these technical challenges is hardware accelerated ray tracing. However, there is a lack of performance data gathered from this type of hardware. Objectives. The objective of this thesis is to measure the performance of a prototype that uses hardware accelerated ray tracing to render particles that cast shadows. Methods. A prototype is created and measurements of the ray tracing time are made. The scene used for the benchmark test is a densely packed particle volume of highly transparent particles, resulting in a scene that looks similar to smoke. Particles are sorted along a ray by repeatedly tracing rays against the scene and incrementing the ray origin past the previous intersection point until it has passed all the objects that lie along the ray. Results. Only a small number of particles can be rendered if real-time rendering speeds are desired. High quality shadows can be produced in a way that is very simple compared to texture based methods. Conclusions. Future hardware speed-ups can improve the rendering speeds, but more sophisticated sorting methods are needed to render larger numbers of particles.
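The sort-by-re-tracing idea described in the Methods section can be sketched as follows; trace_closest_hit and the hit fields (t, alpha, color) are hypothetical placeholders standing in for the hardware ray-tracing API used in the prototype.

```python
def shade_transparent_particles(ray_origin, ray_dir, scene, max_hits=64, eps=1e-4):
    """Front-to-back compositing of transparent particles by repeatedly re-tracing
    the ray: after each closest hit the origin is advanced just past the hit, so
    intersections are visited in depth order without an explicit sort."""
    color = [0.0, 0.0, 0.0]
    transmittance = 1.0
    origin = list(ray_origin)
    for _ in range(max_hits):
        hit = trace_closest_hit(origin, ray_dir, scene)   # hypothetical ray-cast entry point
        if hit is None or transmittance < 0.01:           # missed everything or nearly opaque
            break
        for c in range(3):                                 # weight by remaining transmittance
            color[c] += transmittance * hit.alpha * hit.color[c]
        transmittance *= 1.0 - hit.alpha
        origin = [origin[c] + (hit.t + eps) * ray_dir[c] for c in range(3)]
    return color, transmittance
```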
88

Ray tracing tools for high frequency electromagnetics simulations

Sefi, Sandy January 2003 (has links)
Over the past 20 years, the development in Computational Electromagnetics has produced a vast choice of methods based on the large number of existing mathematical formulations of the Maxwell equations. None of them dominates over the others; instead they complement each other, and the choice of method depends on the frequency range of the electromagnetic waves. This work is focused on the most popular method in the high frequency scenario, namely the Geometrical Theory of Diffraction (GTD). The main advantage of GTD is the ability to predict the electromagnetic field asymptotically in the limit of vanishing wavelength, when other methods, such as the Method of Moments, become computationally too expensive. The low cost of GTD is due both to the fact that there is no runtime penalty in increasing the frequency and to the fact that the ray tracing on which GTD is based is a geometrical technique. The complexity is then no longer dependent on the electrical size of the problem but instead on geometrical sub-problems which are manageable. For industrial applications, the geometrical structures with which the rays interact are modelled by trimmed Non-Uniform Rational B-Spline (NURBS) surfaces, the most recent standard used to represent complex free-form geometries. Due to the introduction of NURBS, the geometrical sub-problems tend to be mathematically and numerically cumbersome, but they can be highly simplified by proper Object Oriented programming techniques. This allowed us to create a flexible software package, MIRA: Modular Implementation of Ray Tracing for Antenna Applications, with an architecture that separates mathematical algorithms from their implementation details and modelling. In addition, its design supports hybridisation techniques in combination with other methods such as the Method of Moments (MoM) and Physical Optics (PO). In a first hybrid application, a triangle-based PO solver uses the shadowing information calculated with the ray tracer part of MIRA. The occlusion is performed between triangles and their facing NURBS surfaces rather than between their facing triangles, thus reducing the complexity. Then the shadowing information is used in an iterative MoM-PO process in order to cover higher frequencies, where the contribution of the shadowing effects, in the hybrid formulation, is believed to be more significant. Thesis presented at the Royal Institute of Technology of Stockholm in 2003, for the degree of Licentiate in Scientific Computing.
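At the level of individual rays, the geometrical-optics step that an asymptotic ray tracer such as MIRA repeats at each reflecting surface is the standard specular reflection; the small helper below is a generic illustration, not code from MIRA.

```python
def reflect(d, n):
    """Specular reflection of incident direction d about unit surface normal n:
    r = d - 2 (d . n) n."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return [di - 2.0 * dot * ni for di, ni in zip(d, n)]
```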
89

Ray Collection Bounding Volume Hierarchy

Rivera, Kris Krishna 01 January 2011 (has links)
This thesis presents Ray Collection BVH, an improvement over a current-day ray tracing acceleration structure, covering both the build and the steps necessary to efficiently render dynamic scenes. The Bounding Volume Hierarchy (BVH) is a commonly used acceleration structure which aids in rendering complex scenes in 3D space with ray tracing by breaking the scene of triangles into a simple hierarchical structure. The algorithm this thesis explores was developed in an attempt to accelerate both the construction of this structure and its use in rendering complex scenes more efficiently. The idea of using a "ray collection" as a data structure was stumbled upon by the author while testing a theory for a class project. The overall scheme of the algorithm collects a set of localized rays together and intersects them with subsequent levels of the BVH at each build step. In addition, only part of the acceleration structure is built, on a per-ray need basis. During this partial build, the rays responsible for creating the scene are partially processed, also saving time on the overall procedure. Ray tracing is a widely used technique, from rendering realistic images to making movies. Particularly in the movie industry, the level of realism brought to animated movies through ray tracing is incredible, so any improvement to these algorithms that increases rendering speed would be considered useful and welcome. This thesis makes contributions towards improving the overall speed of scene rendering, and hence may be considered an important and useful contribution.
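A rough sketch of the ray-collection idea, under the assumption that a batch of spatially coherent rays drives lazy, on-demand construction of BVH nodes; the aabb, split, and record_hits interfaces are hypothetical and not the thesis's actual data structures.

```python
def lazy_bvh_intersect(rays, node, max_leaf_size=8):
    """Filter a ray collection against a node and build its children only when
    some ray in the batch actually reaches them."""
    active = [r for r in rays if node.aabb.intersects(r)]
    if not active:
        return                              # no ray needs this subtree, so never build it
    if len(node.triangles) <= max_leaf_size:
        for r in active:                    # small node: test primitives directly
            r.record_hits(node.triangles)
        return
    if node.children is None:
        node.children = node.split()        # build this level of the hierarchy on demand
    for child in node.children:
        lazy_bvh_intersect(active, child, max_leaf_size)
```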
90

FlexRender: A Distributed Rendering Architecture for Ray Tracing Huge Scenes on Commodity Hardware.

Somers, Robert Edward 01 June 2012 (has links) (PDF)
As the quest for more realistic computer graphics marches steadily on, the demand for rich and detailed imagery is greater than ever. However, the current "sweet spot" in terms of price, power consumption, and performance is in commodity hardware. If we desire to render scenes with tens or hundreds of millions of polygons as cheaply as possible, we need a way of doing so that maximizes the use of the commodity hardware we already have at our disposal. Techniques such as normal mapping and level of detail have attempted to address the problem by reducing the amount of geometry in a scene. This is problematic for applications that desire or demand access to the scene's full geometric complexity at render time. More recently, out-of-core techniques have provided methods for rendering large scenes when the working set is larger than the available system memory. We propose a distributed rendering architecture based on message-passing that is designed to partition scene geometry across a cluster of commodity machines in a spatially coherent way, allowing the entire scene to remain in-core and enabling the construction of hierarchical spatial acceleration structures in parallel. The results of our implementation show roughly an order of magnitude speedup in rendering time compared to the traditional approach, while keeping memory overhead for message queuing around 1%.
