About: The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Temporal Anti-Aliasing and Temporal Supersampling in Three-Dimensional Computer Generated Dynamic Worlds

Stejmar, Carl January 2016 (has links)
This master's thesis investigates and evaluates how a temporal component can help anti-aliasing reduce general spatial aliasing, preserve thin geometry, and achieve temporal stability in dynamic computer-generated worlds. Of the spatial aliasing types, geometric aliasing is in focus, but shading aliasing is also discussed. Two temporal approaches are proposed: one method utilizes the previous frame, while the other uses the four previous frames. Both require an efficient way of re-projecting pixels, so this thesis deals with that problem and its consequences as well. The results show that the way of taking and accumulating samples in the proposed methods yields improvements that would not have been affordable without the temporal component in real-time applications. Thin geometry is preserved to a degree, but the proposed methods do not solve this problem in the general case. The temporal methods' image quality is evaluated against conventional anti-aliasing methods both subjectively, by a survey, and objectively, by a numerical method not found elsewhere in anti-aliasing reports. Performance and memory consumption are also evaluated. The evaluation suggests that a temporal component for anti-aliasing can play an important role in increasing image quality and temporal stability without a substantial negative impact on performance, while consuming less memory.
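The core mechanism the abstract describes, re-projecting a previous frame and accumulating its samples into the current one, can be sketched in a few lines. This is a hypothetical simplification (scalar pixels, integer motion offsets, a plain exponential blend), not the thesis's actual methods:

```python
# Minimal sketch of temporal accumulation for anti-aliasing: each new frame
# is blended into a reprojected history buffer with an exponential moving
# average, which converges toward the average of jittered samples over time
# and thereby smooths aliased edges.

def temporal_accumulate(history, current, alpha=0.1):
    """Blend the current frame into the history; alpha weights new samples."""
    return [[(1 - alpha) * h + alpha * c for h, c in zip(hrow, crow)]
            for hrow, crow in zip(history, current)]

def reproject(frame, dx, dy, fallback=0.0):
    """Fetch each pixel from its previous-frame position (integer offsets
    here; real implementations resample with sub-pixel motion vectors)."""
    h, w = len(frame), len(frame[0])
    return [[frame[y - dy][x - dx] if 0 <= y - dy < h and 0 <= x - dx < w
             else fallback for x in range(w)] for y in range(h)]
```

Pixels that re-project outside the previous frame fall back to a default here; in practice such "disocclusions" are the hard part and are typically detected and rejected from the history.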
2

Comparison of Anti-Aliasing in Motion

Andersson, Lukas January 2018 (has links)
Background. Aliasing is a problem in every 3D game because the resolutions of current monitors are not high enough: when you look at an object in a 3D world, edges that should be smooth appear jagged. This can be reduced by a technique called anti-aliasing. Objectives. The objective of this study is to compare three techniques, fast approximate anti-aliasing (FXAA), subpixel morphological anti-aliasing (SMAA), and temporal anti-aliasing (TAA), in motion, to see which is a good default for games. Methods. An experiment was run in which 20 participants tested a real-time prototype that moved a camera through a scene multiple times with different anti-aliasing techniques. Results. The results showed that TAA consistently performed best in the tests of blurry picture quality, aliasing, and flickering. SMAA and FXAA were comparable to TAA only in the blur part of the test and fell behind in all the other parts. Conclusions. TAA is a great anti-aliasing technique for avoiding aliasing and flickering while in motion. Blur was expected to be a problem, but the test shows that most participants did not find blur a problem for any of the techniques used.
3

Two Anti-aliasing Methods for Creating a Uniform Look

Hyltegård, Simon January 2016 (has links)
Context. In the pursuit of rendering at good quality, anti-aliasing is needed to reduce jagged edges. A challenge arises when the image to be rendered consists of different elements, such as GUI and background. A single anti-aliasing method might not handle this, since some methods are not applicable to certain elements within a rendering. Combining two different anti-aliasing methods on different elements can, however, make some parts appear extra blurry relative to the rest, as some methods are prone to creating unwanted blur in specific situations. Objectives. This thesis's goal is to present a method for applying anti-aliasing to an image containing different elements while rendering it with good quality and keeping a uniform look. Method. An experiment in the form of a user study was conducted to find a suitable method for creating a uniform look. 26 respondents participated, rating a number of images by how uniform each was perceived to be. Results. The results did not meet the author's predictions: it was expected that FXAA would help create a uniform look when applied last, as a final post-processing effect. However, the respondents had varied opinions, and all three methods presented in the experiment were perceived to display a uniform look. Conclusions. Either anti-aliasing cannot affect images strongly enough for the result to be perceived as non-uniform, at least for the two anti-aliasing methods tested, or the material presented in the survey did not manage to give the respondents an articulate display.
4

Large-Scale Multi-Resolution Representations for Accurate Interactive Image and Volume Operations

Sicat, Ronell Barrera 25 November 2015 (has links)
The resolutions of acquired image and volume data are ever increasing. However, the resolutions of commodity display devices remain limited. This leads to an increasing gap between data and display resolutions. To bridge this gap, the standard approach is to employ output-sensitive operations on multi-resolution data representations. Output-sensitive operations facilitate interactive applications since their required computations are proportional only to the size of the data that is visible, i.e., the output, and not the full size of the input. Multi-resolution representations, such as image mipmaps and volume octrees, are crucial in providing these operations direct access to any subset of the data at any resolution corresponding to the output. Despite its widespread use, this standard approach has some shortcomings in three important application areas, namely non-linear image operations, multi-resolution volume rendering, and large-scale image exploration. This dissertation presents new multi-resolution representations for large-scale images and volumes that address these shortcomings. Standard multi-resolution representations require low-pass pre-filtering for anti-aliasing. However, linear pre-filters do not commute with non-linear operations. This becomes problematic when applying non-linear operations directly to any coarse resolution levels in standard representations. Particularly, this leads to inaccurate output when applying non-linear image operations, e.g., color mapping and detail-aware filters, to multi-resolution images. Similarly, in multi-resolution volume rendering, this leads to inconsistency artifacts which manifest as erroneous differences in rendering outputs across resolution levels. To address these issues, we introduce the sparse pdf maps and sparse pdf volumes representations for large-scale images and volumes, respectively.
These representations sparsely encode continuous probability density functions (pdfs) of multi-resolution pixel and voxel footprints in input images and volumes. We show that the continuous pdfs encoded in the sparse pdf map representation enable accurate multi-resolution non-linear image operations on gigapixel images. Similarly, we show that sparse pdf volumes enable more consistent multi-resolution volume rendering compared to standard approaches, on both artificial and real world large-scale volumes. The supplementary videos demonstrate our results. In the standard approach, users heavily rely on panning and zooming interactions to navigate the data within the limits of their display devices. However, panning across the whole spatial domain and zooming across all resolution levels of large-scale images to search for interesting regions is not practical. Assisted exploration techniques allow users to quickly narrow down millions to billions of possible regions to a more manageable number for further inspection. However, existing approaches are not fully user-driven because they typically already prescribe what being of interest means. To address this, we introduce the patch sets representation for large-scale images. Patches inside a patch set are grouped and encoded according to similarity via a permutohedral lattice (p-lattice) in a user-defined feature space. Fast set operations on p-lattices facilitate patch set queries that enable users to describe what is interesting. In addition, we introduce an exploration framework—GigaPatchExplorer—for patch set-based image exploration. We show that patch sets in our framework are useful for a variety of user-driven exploration tasks in gigapixel images and whole collections thereof.
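The motivating problem, that linear pre-filtering does not commute with non-linear operations, can be illustrated with a hypothetical one-dimensional example: a 2:1 box filter standing in for mipmap construction and a squaring function standing in for a non-linear color map. This is an illustration of the failure mode only, not the dissertation's sparse pdf construction:

```python
# Hypothetical 1-D illustration: downsampling (averaging) and then applying
# a non-linear map gives a different answer than mapping first, so applying
# non-linear operations to a standard mipmap level is inaccurate. Retaining
# each footprint's sample distribution (its pdf) allows the exact answer.

def downsample(pixels):          # 2:1 box filter (linear pre-filter)
    return [(a + b) / 2 for a, b in zip(pixels[::2], pixels[1::2])]

def colormap(v):                 # some non-linear transfer function
    return v * v

fine = [0.0, 1.0, 0.2, 0.8]

wrong = [colormap(v) for v in downsample(fine)]   # map applied after filtering
right = downsample([colormap(v) for v in fine])   # filter applied after mapping

# wrong == [0.25, 0.25] (map of the mean), while right is approximately
# [0.5, 0.34] (mean of the map) -- the value a pdf-based method recovers.
```

The gap between `wrong` and `right` is exactly the inconsistency artifact the abstract describes; a footprint pdf lets the mean of the mapped values be computed at any resolution level.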
5

DIGITAL FILTERING OF MULTIPLE ANALOG CHANNELS

Hicks, William T. October 1996 (has links)
International Telemetering Conference Proceedings / October 28-31, 1996 / Town and Country Hotel and Convention Center, San Diego, California / The traditional use of active RC-type filters to provide anti-aliasing filters in Pulse Code Modulation (PCM) systems is being replaced by the use of Digital Signal Processing (DSP). This is especially true when performance requirements are stringent and require operation over a wide environmental temperature range. This paper describes the design of a multi-channel digital filtering card that incorporates up to 100 unique digitally implemented cutoff frequencies. Any combination of these frequencies can be independently assigned to any of the input channels.
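A digitally implemented cutoff of the kind such a card assigns per channel can be sketched as a windowed-sinc FIR low-pass. This is a generic textbook design used for illustration, not the filter architecture described in the paper:

```python
import math

# Sketch of a digitally implemented low-pass anti-aliasing filter: a
# Hamming-windowed sinc FIR whose cutoff is a runtime parameter, so a
# different cutoff can be assigned independently to each input channel.

def fir_lowpass(cutoff, fs, ntaps=31):
    """Return FIR taps; cutoff and fs in Hz, ntaps odd, unity DC gain."""
    fc = cutoff / fs                      # normalized cutoff (cycles/sample)
    mid = (ntaps - 1) / 2
    taps = []
    for n in range(ntaps):
        x = n - mid
        h = 2 * fc if x == 0 else math.sin(2 * math.pi * fc * x) / (math.pi * x)
        w = 0.54 - 0.46 * math.cos(2 * math.pi * n / (ntaps - 1))  # Hamming
        taps.append(h * w)
    s = sum(taps)                         # normalize for unity gain at DC
    return [t / s for t in taps]

def fir_filter(signal, taps):
    """Direct-form FIR convolution (zero-padded at the edges)."""
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, t in enumerate(taps):
            if 0 <= i - k < len(signal):
                acc += t * signal[i - k]
        out.append(acc)
    return out
```

Because the taps are just data, swapping the cutoff is a table update rather than a hardware change, which is the flexibility advantage DSP has over active RC filters.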
6

An Analysis of Various Digital Filter Types for Use as Matched Pre-Sample Filters in Data Encoders

Hicks, William T. November 1995 (has links)
International Telemetering Conference Proceedings / October 30-November 02, 1995 / Riviera Hotel, Las Vegas, Nevada / The need for precise gain and phase matching in multi-channel data sampling systems can result in very strict design requirements for presample or anti-aliasing filters. The traditional use of active RC-type filters is expensive, especially when performance requirements are tight and when operation over a wide environmental temperature range is required. New Digital Signal Processing (DSP) techniques have provided an opportunity for cost reduction and/or performance improvements in these types of applications. This paper summarizes the results of an evaluation of various digital filter types used as matched presample filters in data sampling systems.
7

Design of 3D Accelerator for Mobile Platform

Ramachandruni, Radha Krishna January 2006 (has links)
This thesis implements a high-level model of the computationally intensive part of the 3D graphics pipeline. With the increasing popularity of handheld devices and developments in hardware technology, 3D graphics on mobile devices is fast becoming a reality. Graphics processing is inherently complex and computationally demanding; in order to achieve scene realism and the perception of motion, identifying and accelerating bottlenecks is crucial. The thesis covers the OpenGL graphics pipeline in general, and software implementing its computationally intensive part is built: in essence, a rasterization unit that receives triangles with 2D screen coordinates, texture coordinates, and color. Triangles go through scan conversion, texturing, and a set of other per-fragment operations before being displayed on screen.
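The scan-conversion step such a rasterization unit performs can be sketched with edge functions, a hypothetical minimal version with flat coverage only, omitting the texturing and per-fragment operations the abstract mentions:

```python
# Minimal scan-conversion sketch using edge functions: a pixel center is
# inside the triangle when all three signed areas agree with the triangle's
# winding. Real rasterizers add fill-convention tie-breaking, interpolation
# of texture coordinates/color, and the other per-fragment operations.

def edge(ax, ay, bx, by, px, py):
    """Twice the signed area of (a, b, p); its sign says which side p is on."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize(tri, width, height):
    """Return the set of (x, y) pixels whose centers the triangle covers."""
    (x0, y0), (x1, y1), (x2, y2) = tri
    area = edge(x0, y0, x1, y1, x2, y2)   # zero for degenerate triangles
    covered = set()
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5     # sample at the pixel center
            w0 = edge(x1, y1, x2, y2, px, py)
            w1 = edge(x2, y2, x0, y0, px, py)
            w2 = edge(x0, y0, x1, y1, px, py)
            if area != 0 and all(w * area >= 0 for w in (w0, w1, w2)):
                covered.add((x, y))
    return covered
```

The same edge-function weights, once normalized by `area`, are the barycentric coordinates used to interpolate texture coordinates and color across the triangle.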
8

Rendering for Microlithography on GPU Hardware

Iwaniec, Michel January 2008 (has links)
Over the last decades, integrated circuits have changed our everyday lives in a number of ways. Many common devices today taken for granted would not have been possible without this industrial revolution. Central to the manufacturing of integrated circuits is the photomask used to expose the wafers. Additionally, such photomasks are also used for manufacturing of flat screen displays. Microlithography, the manufacturing technique of such photomasks, requires complex electronics equipment that excels in both speed and fidelity. Manufacture of such equipment requires competence in virtually all engineering disciplines, of which the conversion of geometry into pixels is but one. Nevertheless, this single step in the photomask drawing process has a major impact on the throughput and quality of a photomask writer. Current high-end semiconductor writers from Micronic use a cluster of Field-Programmable Gate Array circuits (FPGAs). FPGAs have for many years been able to replace Application Specific Integrated Circuits due to their flexibility and low initial development cost. For parallel computation, an FPGA can achieve throughput not possible with microprocessors alone. Nevertheless, high-performance FPGAs are expensive devices, and upgrading from one generation to the next often requires a major redesign. During the last decade, the computer games industry has taken the lead in parallel computation with graphics cards for 3D gaming. While essentially being designed to render 3D polygons and lacking the flexibility of an FPGA, graphics cards have nevertheless started to rival FPGAs as the main workhorse of many parallel computing applications. This thesis covers an investigation into utilizing graphics cards for the task of rendering geometry into photomask patterns. It describes the different strategies that were tried and the throughput and fidelity achieved with them, along with the problems encountered.
It also describes the development of a suitable evaluation framework that was critical to the process.
