1

Algorithm design and 3D computer graphics rendering

Ewins, Jon Peter January 2000 (has links)
3D computer graphics is becoming an almost ubiquitous part of the world in which we live, being present in art, entertainment, advertising, CAD, training and education, scientific visualisation and, with the growth of the internet, in e-commerce and communication. This thesis encompasses two areas of study: the design of algorithms for high quality, real-time 3D computer graphics rendering hardware, and the methodology and means for achieving this. When investigating new algorithms and their implementation in hardware, it is important to have a thorough understanding of their operation, both individually and in the context of an entire architecture. It is helpful to be able to model different algorithmic variations rapidly and experiment with them interchangeably. This thesis begins with a description of software-based modelling techniques for the rapid investigation of algorithms for 3D computer graphics within the context of a C++ prototyping environment. Recent tremendous increases in the rendering performance of graphics hardware have been shadowed by corresponding advancements in the accuracy of the algorithms accelerated. Significantly, these improvements have led to a decline in tolerance towards rendering artefacts. Algorithms for the effective and efficient implementation of high quality texture filtering and edge antialiasing form the focus of the algorithm research described in this thesis. Alternative algorithms for real-time texture filtering are presented in terms of their computational cost and performance, culminating in the design of a low-cost implementation for higher quality anisotropic texture filtering. Algorithms for edge antialiasing are reviewed, with the emphasis placed upon area sampling solutions. A modified A-buffer algorithm is presented that uses novel techniques to provide: efficient fragment storage; support for multiple intersecting transparent surfaces; and improved filtering quality through an extendable and weighted filter support from a single highly optimised lookup table.
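As a rough illustration of the area-sampling idea behind A-buffer-style antialiasing, the C++ sketch below computes a fragment's weighted coverage from a subpixel bitmask using a precomputed filter-weight table. The 4x8 mask layout, radial weights and names are our assumptions for illustration, not the thesis's design.

```cpp
#include <array>
#include <cstdint>

// Hypothetical 4x8 subpixel mask, one bit per sample.
constexpr int kBits = 32;

// Precompute a per-bit weight table once; a weighted kernel replaces the
// uniform 1/32 weight of a plain box filter.
std::array<float, kBits> makeWeightTable() {
    std::array<float, kBits> w{};
    float sum = 0.0f;
    for (int i = 0; i < kBits; ++i) {
        int x = i % 8, y = i / 8;                 // sample position in the pixel
        float dx = (x + 0.5f) / 8.0f - 0.5f;      // offset from pixel centre
        float dy = (y + 0.5f) / 4.0f - 0.5f;
        w[i] = 1.0f - (dx * dx + dy * dy);        // simple radial weight (assumption)
        sum += w[i];
    }
    for (float& v : w) v /= sum;                  // normalise so a full mask sums to 1
    return w;
}

// Weighted coverage of one fragment: sum the weights of the set bits.
float coverage(uint32_t mask, const std::array<float, kBits>& w) {
    float c = 0.0f;
    for (int i = 0; i < kBits; ++i)
        if (mask & (1u << i)) c += w[i];
    return c;
}
```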
2

Pixelating Vector Art

Inglis, Tiffany C. January 2014 (has links)
Pixel art is a popular style of digital art often found in video games. It is typically characterized by its low resolution and use of limited colour palettes. Pixel art is created manually with little automation because it requires attention to pixel-level details. Working with individual pixels is a challenging and abstract task, whereas manipulating higher-level objects in vector graphics is much more intuitive. However, it is difficult to bridge this gap because, although many rasterization algorithms exist, they are not well suited to the particular needs of pixel artists, particularly at low resolutions. In this thesis, we introduce a class of rasterization algorithms called pixelation that is tailored to pixel art needs. We describe how our algorithm suppresses artifacts when pixelating vector paths and preserves shape-level features when pixelating geometric primitives. We also developed methods inspired by pixel art for drawing lines and angles more effectively at low resolutions. We compared our results to rasterization algorithms, rasterizers used in commercial software, and human subjects, both amateurs and pixel artists. Through formal analyses of our user studies and a close collaboration with professional pixel artists, we showed that, in general, our pixelation algorithms produce more visually appealing results than naïve rasterization algorithms do.
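To make the low-resolution setting concrete, here is a minimal C++ sketch in the same spirit: pixelating a line into near-equal horizontal runs, since regular run lengths avoid the uneven staircases a naive rasterizer can produce. This is our illustration of the motivation, not the algorithm from the thesis.

```cpp
#include <cstdio>
#include <vector>

struct Px { int x, y; };

// Pixelate a line from (0,0) to (dx,dy), with 0 <= dy <= dx, by splitting
// its dx+1 pixels into dy+1 horizontal runs of near-equal length.
std::vector<Px> pixelateLine(int dx, int dy) {
    std::vector<Px> out;
    int x = 0;
    for (int y = 0; y <= dy; ++y) {
        int xEnd = ((y + 1) * (dx + 1)) / (dy + 1);  // evenly spread run boundary
        for (; x < xEnd; ++x) out.push_back({x, y});
    }
    return out;
}

int main() {
    for (const Px& p : pixelateLine(12, 4)) std::printf("(%d,%d) ", p.x, p.y);
    std::printf("\n");
}
```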
3

An Image and Processing Comparison Study of Antialiasing Methods

Grahn, Alexander January 2016 (has links)
Context. Aliasing is a long-standing problem in computer graphics. It occurs because the graphics card cannot sample the scene with infinite accuracy, so colour information is lost for the pixels, giving objects and textures unwanted jagged edges. Post-processing antialiasing methods are one way to reduce or remove these issues in real-time applications. Objectives. This study compares two popular post-processing antialiasing methods used in modern games: fast approximate antialiasing (FXAA) and subpixel morphological antialiasing (SMAA). The main aim is to understand how both methods work and how they perform compared to each other. Methods. The two methods are implemented in a real-time application using DirectX 11.0. Images and processing data are collected, where the processing data consists of the update frequency of screen rendering, known as frames per second (FPS), and the elapsed time on the graphics processing unit (GPU). Conclusions. FXAA has difficulty handling diagonal edges well but shows only minor graphical artefacts on vertical and horizontal edges. The method can produce unwanted blur along edges. The edge-pattern detection in SMAA enables it to handle edges in all directions well. The performance results show that FXAA loses little FPS and is quick: FXAA is at least three times faster than SMAA on the GPU.
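For context on how an FXAA-style pass decides where to filter, the C++ sketch below shows the usual luma-contrast edge test. The thresholds follow commonly published FXAA defaults and the helper names are ours; this is not the study's implementation.

```cpp
#include <algorithm>

// Sketch of the luma-contrast test an FXAA-style pass uses to decide
// whether a pixel lies on an edge worth filtering.
float luma(float r, float g, float b) {
    return 0.299f * r + 0.587f * g + 0.114f * b;  // Rec. 601 luma weights
}

// lC is the centre pixel's luma; lN/lS/lE/lW are its 4-neighbourhood.
bool isEdge(float lC, float lN, float lS, float lE, float lW,
            float relThreshold = 0.125f, float absThreshold = 0.0312f) {
    float lMax = std::max({lC, lN, lS, lE, lW});
    float lMin = std::min({lC, lN, lS, lE, lW});
    // Filter only where local contrast exceeds both an absolute floor and a
    // fraction of the local maximum luma.
    return (lMax - lMin) >= std::max(absThreshold, lMax * relThreshold);
}
```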
4

Photorealistic Surface Rendering with Microfacet Theory

Dupuy, Jonathan 26 November 2015 (has links)
Photorealistic rendering involves the numeric resolution of physically accurate light/matter interactions which, despite the tremendous and continuously increasing computational power that we now have at our disposal, is nowhere near becoming a quick and simple task for our computers. This is mainly due to the way that we represent objects: in order to reproduce the subtle interactions that create detail, tremendous amounts of geometry need to be queried. Hence, at render time, this complexity leads to heavy input/output operations which, combined with numerically complex filtering operators, require unreasonable amounts of computation time to guarantee artifact-free images.
In order to alleviate such issues under today's constraints, a multiscale representation for matter must be derived. In this thesis, we derive such a representation for matter whose interface can be modelled as a displaced surface, a configuration that is typically simulated with displacement texture mapping in computer graphics. Our representation is derived within the realm of microfacet theory (a framework originally designed to model reflection off rough surfaces), which we review and augment in two respects. First, we render the theory applicable across multiple scales by extending it to support noncentral microfacet statistics. Second, we derive an inversion procedure that retrieves microfacet statistics from backscattering reflection evaluations. We show how this augmented framework may be applied to derive a general and efficient (although approximate) down-sampling operator for displacement texture maps that (a) preserves the anisotropy exhibited by light transport at any resolution, (b) can be applied prior to rendering and stored in MIP texture maps to drastically reduce the number of input/output operations, and (c) considerably simplifies per-pixel filtering operations, resulting overall in shorter rendering times. In order to validate and demonstrate the effectiveness of our operator, we render antialiased photorealistic images against ground truth. In addition, we provide C++ implementations throughout the dissertation to facilitate reproduction of the presented results. We conclude with a discussion of the limitations of our approach, and avenues towards a more general multiscale representation for matter.
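A minimal C++ sketch of the moment-based MIP filtering idea: averaging the first and second moments of surface slopes per level preserves the variance (roughness) a coarser level must account for, including its anisotropy. This illustrates the spirit of such an operator under our own layout assumptions; the operator in the thesis is more general.

```cpp
#include <vector>

// First and second slope moments per texel (assumes even w and h).
struct Moments { float mx, my, mxx, myy, mxy; };

// One MIP level of moment filtering: a plain 2x2 average of each moment.
std::vector<Moments> downsample(const std::vector<Moments>& fine, int w, int h) {
    std::vector<Moments> coarse((w / 2) * (h / 2));
    for (int y = 0; y < h / 2; ++y)
        for (int x = 0; x < w / 2; ++x) {
            Moments m{0, 0, 0, 0, 0};
            for (int dy = 0; dy < 2; ++dy)
                for (int dx = 0; dx < 2; ++dx) {
                    const Moments& f = fine[(2 * y + dy) * w + (2 * x + dx)];
                    m.mx += f.mx;   m.my += f.my;
                    m.mxx += f.mxx; m.myy += f.myy; m.mxy += f.mxy;
                }
            m.mx *= 0.25f;  m.my *= 0.25f;
            m.mxx *= 0.25f; m.myy *= 0.25f; m.mxy *= 0.25f;
            coarse[y * (w / 2) + x] = m;
        }
    return coarse;
}

// Anisotropic slope covariance at any level: var = E[s^2] - E[s]^2.
inline float varX(const Moments& m)  { return m.mxx - m.mx * m.mx; }
inline float varY(const Moments& m)  { return m.myy - m.my * m.my; }
inline float covXY(const Moments& m) { return m.mxy - m.mx * m.my; }
```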
5

Rendering Antialiased Shadows using Warped Variance Shadow Maps

Lauritzen, Andrew Timothy January 2008 (has links)
Shadows contribute significantly to the perceived realism of an image, and provide an important depth cue. Rendering high quality, antialiased shadows efficiently is a difficult problem. To antialias shadows, it is necessary to compute partial visibilities, but computing these visibilities using existing approaches is often too slow for interactive applications. Shadow maps are a widely used technique for real-time shadow rendering. One major drawback of shadow maps is aliasing, because the shadow map data cannot be filtered in the same way as colour textures. In this thesis, I present variance shadow maps (VSMs). Variance shadow maps use a linear representation of the depth distributions in the shadow map, which enables the use of standard linear texture filtering algorithms. Thus VSMs can address the problem of shadow aliasing using the same highly-tuned mechanisms that are available for colour images. Given the mean and variance of the depth distribution, Chebyshev's inequality provides an upper bound on the fraction of a shaded fragment that is occluded, and I show that this bound often provides a good approximation to the true partial occlusion. For more difficult cases, I show that warping the depth distribution can produce multiple bounds, some tighter than others. Based on this insight, I present layered variance shadow maps, a scalable generalization of variance shadow maps that partitions the depth distribution into multiple segments. This reduces or eliminates an artifact - "light bleeding" - that can appear when using the simpler version of variance shadow maps. Additionally, I demonstrate exponential variance shadow maps, which combine moments computed from two exponentially-warped depth distributions. Using this approach, high quality results are produced at a fraction of the storage cost of layered variance shadow maps. These algorithms are easy to implement on current graphics hardware and provide efficient, scalable solutions to the problem of shadow map aliasing.
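The visibility estimate at the core of variance shadow maps is a one-line Chebyshev bound; a minimal C++ sketch follows, where the function name and the variance clamp are our assumptions.

```cpp
#include <algorithm>

// A filtered shadow-map texel stores the moments M1 = E[z] and M2 = E[z^2].
// Chebyshev's inequality then gives an upper bound on the fraction of the
// filter region that is unoccluded for a receiver at depth t.
float vsmVisibility(float M1, float M2, float t, float minVariance = 1e-4f) {
    if (t <= M1) return 1.0f;                               // receiver in front: fully lit
    float variance = std::max(M2 - M1 * M1, minVariance);   // clamp for numeric stability
    float d = t - M1;
    return variance / (variance + d * d);                   // upper bound p_max
}
```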
6

Implementation of a 1 GHz frontend using transform domain charge sampling techniques

Kulkarni, Mandar Shashikant 15 May 2009 (has links)
The recent popularity and convenience of wireless communication, and the need for integration, demand the development of the software-defined radio (SDR). As first defined by Mitola, the SDR processed the entire bandwidth using a high-resolution, high-speed ADC, with the remaining operations done in DSP. The current trend in SDRs is to design highly reconfigurable analog front ends which can handle narrowband and wideband standards, one at a time. Charge sampling has been widely used in these architectures due to its built-in antialiasing capabilities, jitter robustness at high signal frequencies and flexibility in filter design. This work proposes a 1 GHz wideband front end aimed at SDR applications using transform-domain (TD) sampling techniques. Frequency-domain (FD) sampling, a special case of TD sampling, efficiently parallelizes the signal for digital processing, relaxing the sampling requirements and enabling parallel digital processing at a much lower rate, and is a potential candidate for SDR. The proposed front end converts the RF signal into a current, which is then downconverted using passive mixers. The front end has five parallel paths, each acting on a part of the spectrum, effectively parallelizing the front end and relaxing the requirements. An overlap introduced between successive integration windows for jitter robustness was exploited to create a novel sinc² downsample-by-two filter topology. This topology was compared to a conventional topology and found to be equivalent while saving about 44% in area. The proposed topology was used as the baseband filter for all paths in the front end. The chip was sent for fabrication in 45 nm technology. The active area of the chip is 6.6 mm². Testing and measurement of the chip remain to be done.
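To see why charge sampling has built-in antialiasing, here is a small C++ sketch of the frequency response of an integrate-over-a-window sampler: a sinc with nulls at multiples of 1/T, exactly where aliases of a 1/T-rate sampler would fold in. The numbers are illustrative, not taken from the thesis.

```cpp
#include <cmath>
#include <cstdio>

// Integrating the input over a window of length T acts as a filter with
// |H(f)| = |sin(pi*f*T) / (pi*f*T)|, i.e. a sinc with nulls at k/T.
double sincMagnitude(double f, double T) {
    const double kPi = 3.14159265358979323846;
    double x = kPi * f * T;
    return (std::fabs(x) < 1e-12) ? 1.0 : std::fabs(std::sin(x) / x);
}

int main() {
    const double T = 1e-9;  // 1 ns integration window: nulls at multiples of 1 GHz
    for (int k = 1; k <= 3; ++k)
        std::printf("|H(%d GHz)| = %.3e\n", k, sincMagnitude(k * 1e9, T));
}
```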
7

Reduced Area Discrete-Time Down-Sampling Filter Embedded With Windowed Integration Samplers

Raviprakash, Karthik August 2010
Developing a flexible receiver, which can be reconfigured to multiple standards, is the key to solving the problem of embedding numerous and ever-changing functionalities in mobile handsets. The difficulty of efficiently reconfiguring the analog blocks of a receiver chain to multiple standards calls for moving the ADC as close to the antenna as possible, so that most of the processing is done in DSP. Different standards are sampled at different frequencies, and programmable anti-aliasing filtering is needed here. Windowed integration samplers have an inherent sinc filtering which creates nulls at multiples of fs. The attenuation provided by sinc filtering for a bandwidth B is directly proportional to the sampling frequency fs, and, in order to meet the anti-aliasing specifications, a high sampling rate is needed. ADCs operating at such a high oversampling rate dissipate power for no good use. Hence, there is a need to develop a programmable discrete-time down-sampling circuit with high inherent anti-aliasing capabilities. Currently existing topologies use large numbers of switches and capacitors, which occupy a lot of area. A novel technique for reducing the die area of a discrete-time sinc² ↓2 filter for charge sampling is proposed. An SNR comparison of the conventional and the proposed topologies reveals that the new technique saves 25 percent of the die area occupied by the sampling capacitors of the filter. The proposed idea is also extended to implement higher down-sampling factors, and a greater percentage of area is saved as the down-sampling factor is increased. The proposed filter also has the topological advantage over previously reported works of allowing designers to use active integration to charge the capacitance, which is critical in obtaining high linearity. A novel technique to implement a discrete-time sinc³ ↓2 filter for windowed integration samplers is also proposed. The topology reduces the idle time of the integration capacitors at the expense of a small complexity overhead in the clock generation, thereby saving 33 percent of the die area on the capacitors compared to the currently existing topology. Circuit-level simulations in 45 nm CMOS technology show good agreement with the predicted behaviour obtained from the analysis.
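A minimal C++ sketch of the sinc² ↓2 idea in its digital form: two cascaded length-2 averages give the triangular kernel {1, 2, 1}/4, whose response has a null at fs/2, the band that folds onto DC after dropping every other sample. The thesis realizes this with switched capacitors; this arithmetic version is only illustrative.

```cpp
#include <vector>

// Discrete-time sinc^2 decimate-by-2 stage: triangular-weighted average of
// three consecutive samples, keeping every second output.
std::vector<float> sinc2DownBy2(const std::vector<float>& x) {
    std::vector<float> y;
    for (size_t n = 2; n < x.size(); n += 2)
        y.push_back(0.25f * (x[n - 2] + 2.0f * x[n - 1] + x[n]));
    return y;
}
```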
8

Multi-fragment visibility determination in the context of order-independent transparency rendering

Maule, Marilena January 2015 (has links)
Multi-fragment effects, in the computer-generated imagery context, are effects that determine pixel color based on information computed from more than one fragment. In such effects, the contribution of each fragment is extracted from its visibility with respect to a point of view. Seen through a pixel’s point of view, the visibility of one fragment depends on its spatial relationship with other fragments. This relationship can be reduced to the problem of sorting multiple fragments. Therefore, sorting is the key to multi-fragment evaluation. The research in this dissertation is focused on two classical multi-fragment effects: order-independent transparency and anti-aliasing of transparent fragments.
While transparency rendering requires sorting of fragments along the view ray of a pixel, anti-aliasing increases the problem complexity by adding spatial information of fragments with respect to the pixel area. This dissertation’s contribution lies in the development of a solution for the visibility of fragments that can take advantage of the transformation and lighting pipeline implemented in current GPUs. We describe both the transparency and aliasing problems, for which we discuss existing solutions, analyzing, classifying and comparing them. The analysis associates solutions to specific applications, comparing memory usage, performance, and quality. The result is a general view of each field: what the current state of the art is capable of, and in which directions significant improvements can be made. As part of this dissertation, we propose two novel techniques for order-independent transparency rendering. We show how to achieve the minimum memory footprint for computing exact transparency in a bounded number of geometry passes, allowing increased scene complexity and image resolution to be feasible within current hardware capabilities. Additionally, we demonstrate that, for most scenarios, the front-most fragments have the greatest impact on the pixel color. We also show how the perspective we propose has inspired recent transparency techniques. The research includes the investigation of a novel anti-aliasing approach for transparent fragments. Through the use of a single sample per fragment, we aim at reducing the memory footprint while improving performance and quality. Preliminary experiments show promising results in comparison with a well-established and widely used anti-aliasing technique.
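As a concrete illustration of why sorting is the key step, here is a small C++ sketch of front-to-back "over" compositing of a pixel's fragments. This is CPU-side illustration code of the standard operator, not the thesis's GPU implementation.

```cpp
#include <algorithm>
#include <vector>

struct Fragment { float depth, r, g, b, alpha; };

// Composite a pixel's fragments front to back; without the sort, the same
// fragments blend to the wrong color.
void compositeOver(std::vector<Fragment>& frags, float out[3]) {
    std::sort(frags.begin(), frags.end(),
              [](const Fragment& a, const Fragment& b) { return a.depth < b.depth; });
    float transmit = 1.0f;               // fraction of light still reaching the eye
    out[0] = out[1] = out[2] = 0.0f;
    for (const Fragment& f : frags) {
        out[0] += transmit * f.alpha * f.r;
        out[1] += transmit * f.alpha * f.g;
        out[2] += transmit * f.alpha * f.b;
        transmit *= 1.0f - f.alpha;
        if (transmit < 1e-3f) break;     // early out once nearly opaque
    }
}
```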