31

Implementation and Evaluation of an RF Receiver Architecture Using an Undersampling Track-and-Hold Circuit / Implementation och utvärdering av en RF-mottagare baserad på en undersamplande track-and-hold-krets

Dahlbäck, Magnus January 2003 (has links)
Today's radio frequency receivers for digital wireless communication are getting more and more complex. A single receiver unit should support multiple bands, have a wide bandwidth, be flexible and show good performance. To fulfil these requirements, new receiver architectures have to be developed and used. One possible alternative is the RF undersampling architecture. This thesis evaluates the RF undersampling architecture, which makes use of an undersampling track-and-hold circuit with very wide bandwidth to perform direct sampling of the RF carrier before the analogue-to-digital converter. The architecture's main advantages and drawbacks are identified and analyzed. Techniques and improvements to solve or reduce the main problems of the RF undersampling receiver are also proposed.
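The core of the architecture is that undersampling deliberately aliases the RF carrier down to a low intermediate frequency. A minimal sketch of this folding, with illustrative carrier and sample rates that are not taken from the thesis (the track-and-hold is idealized as perfect sampling):

```python
import numpy as np

def aliased_frequency(fc, fs):
    # Image frequency after undersampling: fold fc into [0, fs/2]
    f = fc % fs
    return min(f, fs - f)

# Illustrative values (assumptions, not from the thesis):
fc = 935e6   # RF carrier, 935 MHz
fs = 100e6   # track-and-hold / ADC rate, 100 MS/s

n = np.arange(4096)
x = np.cos(2 * np.pi * fc * n / fs)              # idealized T/H output
spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
f_peak = np.argmax(spectrum) * fs / len(x)

print(f"predicted IF:  {aliased_frequency(fc, fs) / 1e6:.1f} MHz")  # 35.0
print(f"observed peak: {f_peak / 1e6:.2f} MHz")                     # ~35.0
```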
32

A Color Filter Array Interpolation Method Based on Sampling Theory

Glotzbach, John William 26 August 2004 (has links)
Digital cameras use a single image sensor array with a color filter array (CFA) to measure a color image. Instead of measuring a red, green, and blue value at every pixel, these cameras have a filter built onto each pixel so that only one portion of the visible spectrum is measured. To generate a full-color image, the camera must estimate the missing two values at every pixel. This process is known as color filter array interpolation. The Bayer CFA pattern samples the green image on half of the pixels of the imaging sensor, on a quincunx grid. The other half of the pixels measure the red and blue images equally on interleaved rectangular sampling grids. This thesis analyzes this problem using sampling theory. The red and blue images are sampled at half the rate of the green image and therefore have a higher probability of aliasing in the output image. This is apparent when simple interpolation algorithms like bilinear interpolation are used for CFA interpolation. Two reference algorithms, a projections onto convex sets (POCS) algorithm and an edge-directed algorithm by Adams and Hamilton (AH), are studied. Both algorithms address aliasing in the green image. Because of the high correlation among the red, green, and blue images, information from the red and blue images can be used to better interpolate the green image. The reference algorithms are studied to learn how this information is used, leading to two new interpolation algorithms for the green image. The red and blue interpolation algorithm of AH is also studied to determine how the inter-image correlation is used when interpolating these images. This study shows that because the green image is sampled at a higher rate, it retains much of the high-frequency information in the original image. This information is used to estimate aliasing in the red and blue images. We present a general algorithm based on the AH algorithm to interpolate the red and blue images. This algorithm is able to provide results that are, on average, better than both reference algorithms, POCS and AH.
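The bilinear baseline mentioned above is simple to sketch for the green channel: every missing green pixel on the quincunx grid has four measured green neighbours, whose average serves as the estimate. A minimal NumPy version, assuming a checkerboard green mask as in the Bayer pattern; the function name and the wrap-around border handling are my choices, not the thesis's:

```python
import numpy as np

def bilinear_green(raw, green_mask):
    """Estimate the full green channel from a Bayer mosaic by bilinear
    interpolation. raw: 2-D sensor values (one color measured per pixel);
    green_mask: True where the pixel measured green."""
    g = np.where(green_mask, raw, 0.0)
    # Average the four measured neighbours (borders wrap via np.roll).
    est = (np.roll(g, 1, 0) + np.roll(g, -1, 0) +
           np.roll(g, 1, 1) + np.roll(g, -1, 1)) / 4.0
    return np.where(green_mask, raw, est)

# Quincunx green sampling: green where row + column is even (as in GRBG)
h, w = 8, 8
green_mask = (np.indices((h, w)).sum(axis=0) % 2 == 0)
raw = np.random.rand(h, w)                 # stand-in mosaic data
full_green = bilinear_green(raw, green_mask)
```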
33

Rendering for Microlithography on GPU Hardware

Iwaniec, Michel January 2008 (has links)
Over the last decades, integrated circuits have changed our everyday lives in a number of ways. Many common devices taken for granted today would not have been possible without this industrial revolution.

Central to the manufacturing of integrated circuits is the photomask used to expose the wafers. Such photomasks are also used for the manufacturing of flat screen displays. Microlithography, the manufacturing technique of such photomasks, requires complex electronic equipment that excels in both speed and fidelity. Manufacture of such equipment requires competence in virtually all engineering disciplines, of which the conversion of geometry into pixels is but one. Nevertheless, this single step in the photomask drawing process has a major impact on the throughput and quality of a photomask writer.

Current high-end semiconductor writers from Micronic use a cluster of Field-Programmable Gate Array (FPGA) circuits. FPGAs have for many years been able to replace Application-Specific Integrated Circuits due to their flexibility and low initial development cost. For parallel computation, an FPGA can achieve throughput not possible with microprocessors alone. Nevertheless, high-performance FPGAs are expensive devices, and upgrading from one generation to the next often requires a major redesign.

During the last decade, the computer games industry has taken the lead in parallel computation with graphics cards for 3D gaming. While essentially designed to render 3D polygons and lacking the flexibility of an FPGA, graphics cards have nevertheless started to rival FPGAs as the main workhorse of many parallel computing applications.

This thesis covers an investigation into utilizing graphics cards for the task of rendering geometry into photomask patterns. It describes the different strategies that were tried, the throughput and fidelity achieved with them, and the problems encountered. It also describes the development of a suitable evaluation framework that was critical to the process.
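The geometry-to-pixels step itself can be illustrated on the CPU. The sketch below renders one triangle into gray-level pixels by supersampled edge-function tests, so that each pixel value reflects the fraction of its area the geometry covers; this is a hedged illustration of the task, not one of the thesis's GPU strategies, and all names and sizes are mine:

```python
import numpy as np

def edge(ax, ay, bx, by, px, py):
    # Signed area test: > 0 if (px, py) lies to the left of edge a->b
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def coverage(tri, width, height, ss=4):
    """Gray-level coverage of a CCW triangle, ss*ss samples per pixel."""
    (x0, y0), (x1, y1), (x2, y2) = tri
    offs = (np.arange(ss) + 0.5) / ss            # sub-pixel sample grid
    sy, sx = np.meshgrid(offs, offs, indexing="ij")
    img = np.zeros((height, width))
    for y in range(height):
        for x in range(width):
            px, py = x + sx, y + sy
            inside = ((edge(x0, y0, x1, y1, px, py) >= 0) &
                      (edge(x1, y1, x2, y2, px, py) >= 0) &
                      (edge(x2, y2, x0, y0, px, py) >= 0))
            img[y, x] = inside.mean()            # fraction covered
    return img

img = coverage([(1.0, 1.0), (14.0, 3.0), (6.0, 13.0)], 16, 16)
```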
34

A Study of Blur, Ringing and Aliasing Artifacts in Digital Imaging: Application to Restoration / Étude des artefacts de flou, ringing et aliasing en imagerie numérique : application à la restauration

Blanchet, Gwendoline 17 November 2006 (has links) (PDF)
This thesis addresses problems related to the formation of digital images. The sampling step required to form a discrete image from a continuous one can introduce several types of artifacts that constitute major degradations of image quality. The main motivation of this thesis was the study of these artifacts: blur, ringing and aliasing. In the first part, we review the digital image formation process and propose definitions of these artifacts. In the second part, we define a joint measure of blur and ringing in the context of low-pass filtering prior to sampling. The third part is dedicated to the automatic detection of these artifacts in images. Finally, in the fourth part, the automatic detection is tested in concrete image restoration applications: blind deconvolution and denoising.
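The trade-off between blur and ringing in the pre-sampling low-pass filter can be reproduced in a few lines: an ideal (brick-wall) filter rings on a step edge (the Gibbs phenomenon), while a Gaussian roll-off of comparable bandwidth only blurs. A sketch with arbitrary cutoff values, not taken from the thesis:

```python
import numpy as np

n = 512
step = np.zeros(n)
step[n // 2:] = 1.0                       # a step edge

f = np.fft.rfftfreq(n)                    # normalized frequency, 0..0.5
spectrum = np.fft.rfft(step)

brick = spectrum * (f <= 0.1)             # ideal low-pass, cutoff 0.1
gauss = spectrum * np.exp(-(f / 0.1) ** 2 / 2)   # smooth roll-off

ringing = np.fft.irfft(brick, n)
blurred = np.fft.irfft(gauss, n)

print("overshoot, brick-wall:", ringing.max() - 1)   # ~0.09 (ringing)
print("overshoot, Gaussian:  ", blurred.max() - 1)   # ~0 (blur only)
```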
35

Irregular sampling: from aliasing to noise

Hennenfent, Gilles, Herrmann, Felix J. January 2007 (has links)
Seismic data is often irregularly and/or sparsely sampled along spatial coordinates. We show that these acquisition geometries are not necessarily an obstacle to accurately reconstructing adequately sampled data. We use two examples to illustrate that irregularly subsampled data may actually be better than equivalent regularly subsampled data, a comment already made in earlier works by other authors. We explain this behavior by two key observations. Firstly, a noise-free underdetermined problem can be seen as a noisy well-determined problem. Secondly, regular subsampling creates strong coherent acquisition noise (aliasing) that is difficult to remove, unlike the noise created by irregular subsampling, which is typically weaker and Gaussian-like.
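The second observation is easy to reproduce: decimating a harmonic regularly produces coherent alias spikes as strong as the true peak, while keeping the same number of samples at random positions spreads that energy into a weak, noise-like floor. A sketch with a single illustrative harmonic (not the authors' data or code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024
t = np.arange(n)
signal = np.sin(2 * np.pi * 50 * t / n)    # one harmonic, exactly on a bin

keep = 4                                   # retain 1 sample in 4
reg = np.zeros(n)
reg[::keep] = signal[::keep]               # regular subsampling
idx = rng.choice(n, n // keep, replace=False)
irr = np.zeros(n)
irr[idx] = signal[idx]                     # irregular subsampling

S_reg = np.abs(np.fft.rfft(reg))
S_irr = np.abs(np.fft.rfft(irr))
# Regular: coherent aliases as strong as the signal peak itself.
# Irregular: the same energy smeared into a weak Gaussian-like floor.
print("strongest alias, regular:  ", np.sort(S_reg)[-2])   # ~ peak height
print("strongest alias, irregular:", np.sort(S_irr)[-2])   # much smaller
```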
36

Seismic noise: the good, the bad and the ugly

Herrmann, Felix J., Wilkinson, Dave January 2007 (has links)
In this paper, we present a nonlinear curvelet-based sparsity-promoting formulation for three problems related to seismic noise: the 'good', corresponding to noise generated by random sampling; the 'bad', corresponding to coherent noise for which (inaccurate) predictions exist; and the 'ugly', for which no predictions exist. We show that the compressive capabilities of curvelets on seismic data and images can be used to tackle these three categories of noise-related problems.
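The sparsity-promoting idea can be sketched as transform-domain shrinkage: transform, soft-threshold the coefficients so that weak incoherent noise vanishes while sparse signal coefficients survive, and transform back. In the sketch below the Fourier transform stands in for the curvelet transform used in the paper, and a single thresholding step replaces the full nonlinear formulation; the signal, noise level, and threshold rule are all illustrative assumptions:

```python
import numpy as np

def soft_threshold(c, lam):
    # Shrink coefficients toward zero; those below lam (noise) vanish
    return np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)

rng = np.random.default_rng(1)
n = 1024
t = np.arange(n)
clean = np.sin(2 * np.pi * 30 * t / n) + 0.5 * np.sin(2 * np.pi * 80 * t / n)
noisy = clean + 0.5 * rng.standard_normal(n)

coeffs = np.fft.rfft(noisy)
lam = 3 * np.median(np.abs(coeffs))         # crude noise-level estimate
denoised = np.fft.irfft(soft_threshold(coeffs.real, lam) +
                        1j * soft_threshold(coeffs.imag, lam), n)

print("rms error before:", np.sqrt(np.mean((noisy - clean) ** 2)))
print("rms error after: ", np.sqrt(np.mean((denoised - clean) ** 2)))
```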
37

Design of 3D Accelerator for Mobile Platform

Ramachandruni, Radha Krishna January 2006 (has links)
This thesis implements a high-level model of the computationally intensive part of the 3D graphics pipeline. With the increasing popularity of handheld devices and developments in hardware technology, 3D graphics on mobile devices is fast becoming a reality. Graphics processing is inherently complex and computationally demanding. In order to achieve scene realism and the perception of motion, identifying and accelerating bottlenecks is crucial. This thesis is about the OpenGL graphics pipeline in general. Software which implements the computationally intensive part of the graphics pipeline is built: in essence, a rasterization unit that receives triangles with 2D screen coordinates, texture coordinates, and color. Triangles go through scan conversion, texturing, and a set of other per-fragment operations before being displayed on screen.
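A minimal CPU model of the scan-conversion stage described above, with barycentric interpolation of per-vertex color; texturing and the remaining per-fragment operations are omitted, and this is an illustrative sketch rather than the thesis's implementation:

```python
import numpy as np

def edge_fn(a, b, px, py):
    # Twice the signed area of (a, b, p); > 0 if p is left of edge a->b
    return (b[0] - a[0]) * (py - a[1]) - (b[1] - a[1]) * (px - a[0])

def rasterize(v, col, width, height):
    """Scan-convert one CCW triangle. v: three (x, y) screen positions;
    col: 3x3 array of per-vertex RGB colors."""
    img = np.zeros((height, width, 3))
    area = edge_fn(v[0], v[1], v[2][0], v[2][1])
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5          # sample at pixel center
            w0 = edge_fn(v[1], v[2], px, py)   # barycentric weights
            w1 = edge_fn(v[2], v[0], px, py)
            w2 = edge_fn(v[0], v[1], px, py)
            if w0 >= 0 and w1 >= 0 and w2 >= 0:  # inside test
                img[y, x] = np.array([w0, w1, w2]) / area @ col
    return img

v = [(2.0, 2.0), (28.0, 6.0), (10.0, 26.0)]
col = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
frame = rasterize(v, col, 32, 32)              # Gouraud-shaded triangle
```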
38

Text Rendering with Outlines in Direct3D 11 / Textrendering med kantlinjer i Direct3D 11

Ståhlberg, Erik January 2016 (has links)
Context. Text rendering is useful in different contexts and usually needs to be as sharp as possible. DirectWrite and Direct2D are a good choice when rendering for a 2D environment and can be used with Direct3D. Objectives. This study addresses the problem of aliasing with a comparison of FXAA and SSAA to find which is the better option for correcting jaggedness in text. Methods. A number of photos were set up, and 26 test subjects answered questions about the blur and jaggedness in the photos. Results. The results showed that FXAA and SSAA perform relatively similarly in how jagged the pictures are perceived to be, and both were significantly better than no anti-aliasing at all. Conclusions. Whether any kind of jaggedness or blur can be detected depends on how the images are displayed on the screen.
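The SSAA side of the comparison is simple to sketch: render at a higher resolution and box-filter down to the display resolution. FXAA, being a post-process shader that operates on the final image, is not reproduced here; the test pattern and names below are illustrative assumptions:

```python
import numpy as np

def ssaa_downsample(hi_res, factor=2):
    """Supersampling anti-aliasing: average factor x factor blocks of a
    higher-resolution render down to display resolution (box filter)."""
    h, w = hi_res.shape[0] // factor, hi_res.shape[1] // factor
    blocks = hi_res[:h * factor, :w * factor].reshape(h, factor, w, factor)
    return blocks.mean(axis=(1, 3))

# Hypothetical text-like test pattern: a hard-edged diagonal stroke,
# rendered at twice the display resolution
y, x = np.mgrid[0:128, 0:128]
glyph = (np.abs(x - y) < 6).astype(float)   # binary coverage -> jaggies
smooth = ssaa_downsample(glyph, 2)          # 64x64, edges now graded
```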
39

Scalable and accurate approaches for program dependence analysis, slicing, and verification of concurrent object oriented programs

Ranganath, Venkatesh Prasad January 1900 (has links)
Doctor of Philosophy / Department of Computing and Information Science / John M. Hatcliff / With the advent of multi-core processors and rich language support for concurrency, the paradigm of concurrent programming has arrived; however, the cost of developing and maintaining concurrent programs is still high. Simultaneously, the increasing social ubiquity of computing is reducing the "time-to-market" factor while demanding stronger correctness requirements. These effects are amplified by the ever-growing size of software systems. Consequently, there is (and will be) a rise in the demand for scalable and accurate techniques to enable faster development and maintenance of correct large-scale concurrent software. This dissertation presents a collection of scalable and accurate approaches to tackle the above situation. Primarily, the approaches are focused on discovering dependences (relations) between various parts of the software/program and leveraging the dependences to improve maintenance and development tasks via program slicing (comprehension) and verification. Briefly, the proposed approaches are embodied in the following specific contributions:

1. A new trace-based foundation for control dependences.
2. An equivalence-class-based analysis to efficiently and accurately calculate escape information and intra- and inter-thread dependences.
3. A new parametric data-flow-style slicing algorithm with various extensions to uniformly and easily realize and reason about most existing forms of static sequential and concurrent slicing.
4. A new generic notion of property/trace sensitivity to represent and reason about richer forms of context sensitivity.
5. Program-dependence-based partial-order reduction techniques to enable efficient and accurate state-space exploration in both static and dynamic modes.

In an attempt to simplify the approaches, they have been based on the basic concepts and ideas of the affected techniques (e.g. program slicing is a rooted transitive closure of the dependence relation). As trace-based reasoning is well suited to concurrent systems, it has been explored wherever possible. While providing a rigorous theoretical presentation of these techniques, this effort also validates them by implementing them in a robust tool framework called Indus (available from http://indus.projects.cis.ksu.edu) and providing experimental results that demonstrate the effectiveness of the techniques on various concurrent applications. Given the current trend towards concurrent programming and the social ubiquity of computing, the approaches proposed in this dissertation provide a foundation for collectively attacking the scalability, accuracy, and soundness challenges in current and emerging systems.
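The abstract's remark that program slicing is a rooted transitive closure of the dependence relation translates directly into code: a backward slice is everything reachable from the slicing criterion by following dependences in reverse. A sketch over a hypothetical dependence graph; the graph and statement names are invented for illustration, and this is not the Indus implementation:

```python
from collections import deque

def backward_slice(dependences, criterion):
    """Rooted transitive closure of the dependence relation: collect every
    statement the criterion (transitively) depends on. `dependences` maps
    a statement to the statements it depends on (data or control)."""
    seen = {criterion}
    work = deque([criterion])
    while work:
        for dep in dependences.get(work.popleft(), ()):
            if dep not in seen:
                seen.add(dep)
                work.append(dep)
    return seen

# Hypothetical dependence graph for a five-statement program:
deps = {
    "s5": ["s3", "s4"],   # s5 uses values defined at s3 and s4
    "s4": ["s2"],
    "s3": ["s1"],
    "s2": [],
    "s1": [],
}
print(sorted(backward_slice(deps, "s5")))   # ['s1', 's2', 's3', 's4', 's5']
```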
40

Time Aliasing Methods of Spectrum Estimation

Dahl, Jason F. 06 February 2003 (has links) (PDF)
Time aliasing methods of spectrum estimation alter the time representation of a signal in order to improve its frequency-domain representation. Time aliasing allows the characteristics of longer time window functions to be used with a shorter DFT length, and windows designed specifically for use with time aliasing have improved properties compared to conventional windows. Many previous uses of time aliasing, including overlap-and-add methods of digital filtering, have focused on eliminating time-aliasing effects in the frequency domain in order to improve the representation of reconstructed signals in the time domain, and have not addressed the issues associated with spectrum analysis. Proper use of time aliasing in spectrum analysis requires an understanding of the time- and frequency-scaling effects that result from using a longer effective time window with a given DFT length, as well as the effects of spectral averaging on the time-aliased spectral estimate. Time aliasing has been shown to reduce bias error in spectral estimates by reducing spectral leakage and improving effective frequency resolution, particularly in regions of high dynamic range in the spectrum, yielding improved measurements of spectra containing narrowband phenomena such as those encountered in system identification and rotating-machinery signature analysis.
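The identity behind the method can be checked in a few lines: wrapping the windowed signal into shorter blocks, summing them, and taking the short DFT yields exactly every P-th bin of the long DFT. A sketch using a conventional Hann window as a stand-in for the purpose-designed windows the thesis describes; sizes and test frequencies are illustrative:

```python
import numpy as np

def time_aliased_spectrum(x, window, M):
    """M-point spectrum using a window longer than the DFT. The windowed
    signal (length N = P*M) is wrapped into P blocks of length M and
    summed; the M-point FFT of the sum equals every P-th bin of the
    full N-point DFT."""
    xw = x * window
    P = len(xw) // M
    folded = xw[:P * M].reshape(P, M).sum(axis=0)   # time aliasing
    return np.fft.fft(folded)

N, M = 4096, 512
n = np.arange(N)
x = np.sin(2 * np.pi * 0.1001 * n) + 1e-4 * np.sin(2 * np.pi * 0.3 * n)
w = np.hanning(N)                         # conventional stand-in window
short = time_aliased_spectrum(x, w, M)
long_dft = np.fft.fft(x * w)
print(np.allclose(short, long_dft[::N // M]))       # True
```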
