1

Large-Scale Multi-Resolution Representations for Accurate Interactive Image and Volume Operations

Sicat, Ronell Barrera 25 November 2015 (has links)
The resolutions of acquired image and volume data are ever increasing. However, the resolutions of commodity display devices remain limited. This leads to an increasing gap between data and display resolutions. To bridge this gap, the standard approach is to employ output-sensitive operations on multi-resolution data representations. Output-sensitive operations facilitate interactive applications since their required computations are proportional only to the size of the data that is visible, i.e., the output, and not the full size of the input. Multi-resolution representations, such as image mipmaps and volume octrees, are crucial in providing these operations direct access to any subset of the data at any resolution corresponding to the output. Despite its widespread use, this standard approach has shortcomings in three important application areas, namely non-linear image operations, multi-resolution volume rendering, and large-scale image exploration. This dissertation presents new multi-resolution representations for large-scale images and volumes that address these shortcomings.

Standard multi-resolution representations require low-pass pre-filtering for anti-aliasing. However, linear pre-filters do not commute with non-linear operations. This becomes problematic when applying non-linear operations directly to coarse resolution levels in standard representations. In particular, it leads to inaccurate output when applying non-linear image operations, e.g., color mapping and detail-aware filters, to multi-resolution images. Similarly, in multi-resolution volume rendering, it leads to inconsistency artifacts which manifest as erroneous differences in rendering outputs across resolution levels. To address these issues, we introduce the sparse pdf maps and sparse pdf volumes representations for large-scale images and volumes, respectively. These representations sparsely encode continuous probability density functions (pdfs) of multi-resolution pixel and voxel footprints in the input images and volumes. We show that the continuous pdfs encoded in the sparse pdf map representation enable accurate multi-resolution non-linear image operations on gigapixel images. Similarly, we show that sparse pdf volumes enable more consistent multi-resolution volume rendering compared to standard approaches, on both artificial and real-world large-scale volumes. The supplementary videos demonstrate our results.

In the standard approach, users rely heavily on panning and zooming interactions to navigate the data within the limits of their display devices. However, panning across the whole spatial domain and zooming across all resolution levels of large-scale images to search for interesting regions is not practical. Assisted exploration techniques allow users to quickly narrow down millions to billions of possible regions to a more manageable number for further inspection. However, existing approaches are not fully user-driven because they typically already prescribe what being of interest means. To address this, we introduce the patch sets representation for large-scale images. Patches inside a patch set are grouped and encoded according to similarity via a permutohedral lattice (p-lattice) in a user-defined feature space. Fast set operations on p-lattices facilitate patch set queries that enable users to describe what is interesting. In addition, we introduce an exploration framework, GigaPatchExplorer, for patch set-based image exploration. We show that patch sets in our framework are useful for a variety of user-driven exploration tasks in gigapixel images and whole collections thereof.
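The central observation of the first part, that linear pre-filtering does not commute with non-linear operations, and the pdf-based remedy can be illustrated with a minimal numpy sketch; the dense histogram below merely stands in for the dissertation's sparse pdf encoding:

```python
import numpy as np

# A non-linear "transfer function", e.g. a color map or thresholding step.
f = lambda x: (x > 0.5).astype(float)

# Fine-resolution values inside one coarse pixel's footprint.
footprint = np.array([0.1, 0.2, 0.9, 1.0])

# Standard mipmap: low-pass (average) first, then apply f -> wrong.
naive = f(footprint.mean())             # f(0.55) = 1.0

# Ground truth: apply f at full resolution, then average -> 0.5.
exact = f(footprint).mean()

# Pdf-based idea: store a histogram of the footprint values and push
# the non-linear operation through the distribution instead.
hist, edges = np.histogram(footprint, bins=np.linspace(0.0, 1.0, 11))
pdf = hist / hist.sum()
centers = 0.5 * (edges[:-1] + edges[1:])
from_pdf = (f(centers) * pdf).sum()     # ~0.5, matches the exact answer

print(naive, exact, from_pdf)
```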
2

DIGITAL FILTERING OF MULTIPLE ANALOG CHANNELS

Hicks, William T. 10 1900 (has links)
International Telemetering Conference Proceedings / October 28-31, 1996 / Town and Country Hotel and Convention Center, San Diego, California / The traditional use of active RC-type filters to provide anti-aliasing in Pulse Code Modulation (PCM) systems is being replaced by the use of Digital Signal Processing (DSP). This is especially true when performance requirements are stringent and require operation over a wide environmental temperature range. This paper describes the design of a multi-channel digital filtering card that incorporates up to 100 unique digitally implemented cutoff frequencies. Any combination of these frequencies can be independently assigned to any of the input channels.
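As a rough software analogue of such a card, the sketch below assigns per-channel FIR low-pass cutoffs drawn from a table of 100 frequencies; the sample rate, tap count, and cutoff table are illustrative assumptions, and the actual card is a hardware design:

```python
import numpy as np
from scipy.signal import firwin, lfilter

fs = 10_000.0                            # sample rate, Hz (assumed)
# Table of 100 digitally implemented cutoff frequencies (illustrative).
cutoffs = np.linspace(50.0, 2500.0, 100)

def make_channel_filter(cutoff_hz, numtaps=63):
    """Linear-phase FIR low-pass for one channel's anti-aliasing."""
    return firwin(numtaps, cutoff_hz, fs=fs)

# Independently assign any cutoff from the table to any input channel.
channel_assignment = {0: cutoffs[3], 1: cutoffs[80], 2: cutoffs[3]}

t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 60 * t) + 0.5 * np.sin(2 * np.pi * 3000 * t)

for ch, fc in channel_assignment.items():
    b = make_channel_filter(fc)
    y = lfilter(b, [1.0], x)             # 3 kHz component is attenuated
```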
3

An Analysis of Various Digital Filter Types for Use as Matched Pre-Sample Filters in Data Encoders

Hicks, William T. 11 1900 (has links)
International Telemetering Conference Proceedings / October 30-November 02, 1995 / Riviera Hotel, Las Vegas, Nevada / The need for precise gain and phase matching in multi-channel data sampling systems can result in very strict design requirements for pre-sample, or anti-aliasing, filters. The traditional use of active RC-type filters is expensive, especially when performance requirements are tight and operation over a wide environmental temperature range is required. New Digital Signal Processing (DSP) techniques have provided an opportunity for cost reduction and/or performance improvements in these types of applications. This paper summarizes the results of an evaluation of various digital filter types used as matched pre-sample filters in data sampling systems.
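One reason linear-phase FIR designs are natural candidates for matched pre-sample filters is that their group delay is constant across frequency, so channels built from the same coefficients match in both gain and phase. A brief scipy comparison; the sample rate, cutoff, and filter orders below are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.signal import butter, firwin, group_delay

fs = 10_000.0                            # assumed sample rate, Hz
fc = 1_000.0                             # assumed cutoff, Hz

# IIR: 4th-order Butterworth. Group delay varies with frequency, so
# slight component or coefficient mismatch between channels shows up
# as frequency-dependent phase differences.
b_iir, a_iir = butter(4, fc, fs=fs)
w, gd_iir = group_delay((b_iir, a_iir), fs=fs)

# FIR: symmetric (linear-phase) design. Group delay is exactly
# (numtaps - 1) / 2 samples at every frequency.
numtaps = 63
b_fir = firwin(numtaps, fc, fs=fs)
w, gd_fir = group_delay((b_fir, [1.0]), fs=fs)

print(gd_iir.min(), gd_iir.max())        # frequency-dependent
print(gd_fir.mean())                     # ~31 samples, constant
```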
4

Design of 3D Accelerator for Mobile Platform

Ramachandruni, Radha Krishna January 2006 (has links)
This thesis implements a high-level model of the computationally intensive part of the 3D graphics pipeline. With the increasing popularity of handheld devices and ongoing developments in hardware technology, 3D graphics on mobile devices is fast becoming a reality. Graphics processing is inherently complex and computationally demanding, so identifying and accelerating bottlenecks is crucial to achieving scene realism and the perception of motion. This thesis covers the OpenGL graphics pipeline in general, and software implementing its computationally intensive part is built: in essence, a rasterization unit that receives triangles with 2D screen coordinates, texture coordinates, and color. Triangles go through scan conversion, texturing, and a set of other per-fragment operations before being displayed on screen.
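As a rough illustration of the scan-conversion step such a rasterization unit performs, here is a minimal edge-function rasterizer; it sketches the general technique, not the thesis's implementation, and the triangle and colors are made up:

```python
import numpy as np

def edge(ax, ay, bx, by, px, py):
    """Signed area test: >= 0 when p is left of edge a->b (CCW order)."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize(tri, w, h):
    """Scan-convert one CCW triangle given 2D screen coordinates.

    tri: three (x, y, (r, g, b)) vertices; colors are interpolated
    with barycentric weights, standing in for texture coordinates.
    """
    img = np.zeros((h, w, 3))
    (x0, y0, c0), (x1, y1, c1), (x2, y2, c2) = tri
    area = edge(x0, y0, x1, y1, x2, y2)
    for y in range(h):
        for x in range(w):
            # Sample at the pixel center.
            w0 = edge(x1, y1, x2, y2, x + 0.5, y + 0.5)
            w1 = edge(x2, y2, x0, y0, x + 0.5, y + 0.5)
            w2 = edge(x0, y0, x1, y1, x + 0.5, y + 0.5)
            if w0 >= 0 and w1 >= 0 and w2 >= 0:      # inside test
                b0, b1, b2 = w0 / area, w1 / area, w2 / area
                img[y, x] = (b0 * np.array(c0) + b1 * np.array(c1)
                             + b2 * np.array(c2))
    return img

tri = ((2, 2, (1, 0, 0)), (14, 2, (0, 1, 0)), (2, 14, (0, 0, 1)))
image = rasterize(tri, 16, 16)
```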
5

Rendering for Microlithography on GPU Hardware

Iwaniec, Michel January 2008 (has links)
Over the last decades, integrated circuits have changed our everyday lives in a number of ways. Many common devices taken for granted today would not have been possible without this industrial revolution.

Central to the manufacturing of integrated circuits is the photomask used to expose the wafers. Such photomasks are also used for the manufacturing of flat-screen displays. Microlithography, the manufacturing technique for such photomasks, requires complex electronic equipment that excels in both speed and fidelity. Manufacturing this equipment requires competence in virtually all engineering disciplines, of which the conversion of geometry into pixels is but one. Nevertheless, this single step in the photomask drawing process has a major impact on the throughput and quality of a photomask writer.

Current high-end semiconductor writers from Micronic use a cluster of Field-Programmable Gate Array (FPGA) circuits. FPGAs have for many years been able to replace Application-Specific Integrated Circuits due to their flexibility and low initial development cost. For parallel computation, an FPGA can achieve throughput not possible with microprocessors alone. Nevertheless, high-performance FPGAs are expensive devices, and upgrading from one generation to the next often requires a major redesign.

During the last decade, the computer games industry has taken the lead in parallel computation with graphics cards for 3D gaming. While essentially designed to render 3D polygons and lacking the flexibility of an FPGA, graphics cards have nevertheless started to rival FPGAs as the main workhorse of many parallel computing applications.

This thesis covers an investigation into utilizing graphics cards for the task of rendering geometry into photomask patterns. It describes the different strategies that were tried, the throughput and fidelity achieved with them, and the problems encountered. It also describes the development of a suitable evaluation framework that was critical to the process.
6

[sv] Textrendering med kantlinjer i Direct3D 11 / [en] Text Rendering with Outlines in Direct3D 11

Ståhlberg, Erik January 2016 (has links)
Context. Text rendering is useful in different contexts and usually needs to be as sharp as possible. DirectWrite and Direct2D are a good choice when rendering for a 2D environment and can be used with Direct3D. Objectives. This study addresses the problem of aliasing with a comparison of FXAA and SSAA, to find which is the better option for correcting jagged text. Methods. A set of photos was prepared, and 26 test subjects answered questions about the blurring and jagginess in the photos. Results. The results showed that FXAA and SSAA performed similarly with respect to perceived jagginess, and both were significantly better than no anti-aliasing at all. Conclusions. Whether any jagginess or blur can be detected depends on how the images are displayed on the screen.
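Of the two techniques compared, SSAA is the simpler to state: render at a multiple of the target resolution and box-downsample. A toy sketch, with an assumed synthetic "glyph" standing in for rendered text (FXAA, a post-process edge filter, is not reproduced here):

```python
import numpy as np

def render_glyph(res):
    """Stand-in renderer: a hard-edged diagonal stroke at a given
    resolution, playing the role of rasterized text."""
    y, x = np.mgrid[0:res, 0:res]
    return (np.abs(x - y) < res // 8).astype(float)

def ssaa(base_res, factor=4):
    """Supersampling: render at factor x resolution, box-downsample."""
    hi = render_glyph(base_res * factor)
    # Average each factor x factor block down to one output pixel.
    return hi.reshape(base_res, factor, base_res, factor).mean(axis=(1, 3))

aliased = render_glyph(64)       # hard, jagged edges
smoothed = ssaa(64, factor=4)    # gray coverage values along the edges
```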
7

Procedural Reduction Maps

Van Horn, R. Brooks, III 16 January 2007 (has links)
Procedural textures and image textures are commonplace in graphics today, finding uses in such places as animated movies and video games. Unlike image texture maps, procedural textures typically suffer from minification aliasing. I present a method that, given a procedural texture on a surface, automatically creates an anti-aliased version of the procedural texture. The new procedural texture maintains the original texture's details but reduces minification aliasing artifacts. This new algorithm creates an image pyramid similar to MIP-maps to represent the texture. Whereas a MIP-map stores per-texel color, however, my texture hierarchy stores weighted sums of reflectance functions, allowing a wider range of effects to be anti-aliased. The stored reflectance functions are automatically selected based on an analysis of the different functions found over the surface. When the texture is viewed at close range, the original texture is used, but as the texture footprint grows, the algorithm gradually replaces the texture's result with an anti-aliased one. This results in faster development time for writing procedural textures as well as higher visual fidelity and faster rendering. With the optional addition of authoring guidelines, the analysis phase can be sped up by as much as two orders of magnitude. Furthermore, I developed a method for handling pre-filtered integration of reflectance functions to anti-alias specular highlights. The normal-centric BRDF (NBRDF) allows for fast evaluation over a range of normals appearing on the surface of an object. The NBRDF is easy to implement on the GPU for real-time results and can be combined with procedural reduction maps for real-time procedural texture minification anti-aliasing.
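The core data-structure idea, a MIP-like pyramid whose texels store weights over a small basis of reflectance functions rather than colors, can be sketched as follows; the fixed two-function basis and random weights are assumptions, whereas the thesis selects the basis automatically by analyzing the texture:

```python
import numpy as np

# Assumed basis of reflectance functions f_i(n_dot_l); the thesis
# derives these automatically from the procedural texture.
basis = [
    lambda ndl: ndl,                        # diffuse-like response
    lambda ndl: np.maximum(ndl, 0) ** 32,   # a sharp specular lobe
]

def build_reduction_pyramid(weights):
    """weights: (H, W, len(basis)) per-texel basis weights at the
    finest level. Coarser levels average the weights; since shading
    is linear in the weights, evaluating a coarse texel equals the
    average of the fine-level shaded results."""
    levels = [weights]
    while levels[-1].shape[0] > 1:
        w = levels[-1]
        h2, w2 = w.shape[0] // 2, w.shape[1] // 2
        levels.append(w.reshape(h2, 2, w2, 2, -1).mean(axis=(1, 3)))
    return levels

def shade(texel_weights, n_dot_l):
    """Evaluate the stored weighted sum of reflectance functions."""
    return sum(wi * f(n_dot_l) for wi, f in zip(texel_weights, basis))

fine = np.random.rand(8, 8, len(basis))     # toy weight field
pyramid = build_reduction_pyramid(fine)
value = shade(pyramid[2][0, 0], n_dot_l=0.7)
```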
8

[en] REAL-TIME SHADOW MAPPING TECHNIQUES FOR CAD MODELS / [pt] GERAÇÃO DE SOMBRAS EM TEMPO REAL PARA MODELOS CAD

VITOR BARATA RIBEIRO BLANCO BARROSO 21 May 2007 (has links)
[en] Shadow mapping is a widely used rendering technique for real-time shadow generation on arbitrary surfaces. However, because of the limited resolution available for sampling the scene, the algorithm presents two difficult problems: the incorrect self-shadowing of objects and the jagged appearance of shadow borders, also known as aliasing. Generating shadows for CAD (Computer-Aided Design) models presents additional challenges, due to the existence of many thin, complex-silhouette objects and the high depth complexity. In this work, we present a detailed analysis of self-shadowing and aliasing by reviewing and building on works from different authors. We also propose some improvements to existing algorithms: sample alignment without vertex shaders, a generalized parameter for the LiSPSM (Light-Space Perspective Shadow Map) algorithm, and an adaptive z-partitioning scheme. Finally, we investigate the effectiveness of different algorithms when applied to CAD models, considering ease of implementation, visual quality, and computational efficiency.
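For reference, the self-shadowing problem this work analyzes arises in the basic shadow-map test, sketched below with the usual depth-bias workaround; the map contents, bias value, and nearest-texel sampling are illustrative assumptions:

```python
import numpy as np

def shadow_test(shadow_map, uv, light_depth, bias=5e-3):
    """Classic shadow-map lookup for one fragment.

    shadow_map:  2D array of depths as seen from the light.
    uv:          fragment position projected into light space, in [0,1]^2.
    light_depth: fragment's depth from the light.
    Without `bias`, limited map resolution makes a surface shadow
    itself ("shadow acne"); too large a bias detaches shadows from
    their casters. Tuning this trade-off is part of why the problem
    is hard for thin CAD geometry.
    """
    h, w = shadow_map.shape
    x = min(int(uv[0] * w), w - 1)   # nearest-texel sampling: the
    y = min(int(uv[1] * h), h - 1)   # source of jagged shadow borders
    return light_depth - bias > shadow_map[y, x]   # True = in shadow

shadow_map = np.full((512, 512), 1.0)   # toy map: nothing occludes
in_shadow = shadow_test(shadow_map, (0.4, 0.6), light_depth=0.73)
```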
