1

Measurement of three-dimensional coherent fluid structure in high Reynolds number turbulent boundary layers

Clark, Thomas Henry, January 2012
The turbulent boundary layer is an aspect of fluid flow which dominates the performance of many engineering systems, yet the analytic solution of such flows is intractable for most applications. Our understanding of boundary layers is therefore limited by our ability to simulate and measure them. Tomographic Particle Image Velocimetry (TPIV) is a recently developed technique for direct measurement of fluid velocity within a 3D region, allowing new insight into the topological structure of turbulent boundary layers. Increasing the Reynolds number increases the range of scales at which turbulence exists, so a measurement technique must have a larger 'dynamic range' to fully resolve the flow. Tomographic PIV is currently limited in spatial dynamic range (which is also linked to the spatial and temporal resolution) by a high degree of noise, and results also contain significant bias error. This work proposes a modification of the technique to use more than two exposures in the PIV process, which (for four exposures) is shown to reduce random error by a factor of 2 to 7 depending on experimental setup parameters. The dynamic range increases correspondingly and can be doubled again in highly turbulent flows. Bias error is reduced by up to 40%. An alternative reconstruction approach is also presented, based on applying a reduction strategy (elimination of coefficients based on a first guess) to the tomographic weightings matrix Wij, which facilitates a potentially significant increase in computational efficiency. Despite the achieved reduction in error, measurements contain non-zero divergence due to noise and sampling errors; the same problem affects visualisation of topology and coherent fluid structures. Using Projection Onto Convex Sets (POCS), a framework of post-processing operators is implemented which includes a divergence-minimisation procedure and a scale-limited denoising strategy that is resilient to 'false' vectors contained in the data. Finally, the developed techniques are showcased by visualisation of topological information in the inner region of a high Reynolds number boundary layer (δ+ = 1890, Reθ = 3650). Comments are made on the visible flow structures and tentative conclusions are drawn.
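A minimal illustration of the divergence-minimisation idea, assuming a uniform periodic grid: divergence-free velocity fields form a linear (hence convex) subspace, so an FFT-based Helmholtz projection is one admissible POCS operator. The NumPy sketch below is a generic example only, not the operator framework developed in the thesis; the function name and arguments are hypothetical.

```python
import numpy as np

def project_divergence_free(u, v, w, dx=1.0):
    """Project a 3D velocity field onto the divergence-free subspace.

    Generic POCS-style projection on a uniform periodic grid (spacing dx);
    an illustration only, not the thesis's post-processing framework.
    """
    kx = 2j * np.pi * np.fft.fftfreq(u.shape[0], d=dx)
    ky = 2j * np.pi * np.fft.fftfreq(u.shape[1], d=dx)
    kz = 2j * np.pi * np.fft.fftfreq(u.shape[2], d=dx)
    KX, KY, KZ = np.meshgrid(kx, ky, kz, indexing="ij")

    uh, vh, wh = np.fft.fftn(u), np.fft.fftn(v), np.fft.fftn(w)
    k2 = KX * KX + KY * KY + KZ * KZ
    k2[0, 0, 0] = 1.0                      # avoid 0/0 at the mean mode

    div_h = KX * uh + KY * vh + KZ * wh    # spectral divergence
    uh -= KX * div_h / k2                  # remove the irrotational part
    vh -= KY * div_h / k2
    wh -= KZ * div_h / k2

    return (np.fft.ifftn(uh).real,
            np.fft.ifftn(vh).real,
            np.fft.ifftn(wh).real)
```

Applied once, this enforces zero divergence exactly; in a POCS framework it would be alternated with other constraint projections, such as a scale-limited denoising step.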
2

Reduced-data magnetic resonance imaging reconstruction methods: constraints and solutions.

Hamilton, Lei Hou, 11 August 2011
Imaging speed is very important in magnetic resonance imaging (MRI), especially in dynamic cardiac applications, which involve both respiratory and cardiac motion. With the introduction of reduced-data MR imaging methods, increasing acquisition speed has become possible without requiring a higher-performance gradient system. These reduced-data imaging methods, however, pay a price for the higher imaging speed: a signal-to-noise ratio (SNR) penalty, reduced resolution, or a combination of both. Many methods sacrifice edge information in favor of SNR gain, which is undesirable for applications that require accurate detection of myocardial boundaries. The central goal of this thesis is to develop novel reduced-data imaging methods that improve reconstructed image quality. The thesis presents a novel reduced-data imaging method, PINOT (Parallel Imaging and NOquist in Tandem), to accelerate MR imaging. As illustrated by a variety of computer-simulated and real cardiac MRI experiments, PINOT preserves edge detail, with the flexibility to improve SNR through regularization. Another contribution is to exploit the data redundancy from parallel imaging, reduced field-of-view (rFOV), and partial Fourier methods. A Gerchberg Reduced Iterative System (GRIS), implemented with the Gerchberg-Papoulis (GP) iterative algorithm, is introduced. Within the GRIS, which utilizes a temporal band-limitation constraint in the image reconstruction, a variant of Noquist called iNoquist (iterative Noquist) is proposed. Utilizing a different source of prior information, the combination of iNoquist with the partial Fourier technique (phase-constrained iNoquist), and its further integration with parallel imaging methods (PINOT-GRIS), are presented to achieve additional acceleration gains.
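For context, the Gerchberg-Papoulis iteration underlying GRIS alternates between two convex constraints: consistency with the measured samples and band-limitation in the frequency domain. The 1D NumPy sketch below illustrates that generic iteration under assumed masks; it is not the iNoquist or PINOT implementation, and all names and parameters are hypothetical.

```python
import numpy as np

def gerchberg_papoulis(known, known_mask, band_mask, n_iter=200):
    """Classic Gerchberg-Papoulis band-limited extrapolation (1D sketch).

    `known` holds measured samples (zeros elsewhere), `known_mask` marks
    where they were measured, and `band_mask` marks the frequency bins
    allowed to be non-zero. Both constraints define convex sets, so the
    iteration is itself a POCS instance.
    """
    x = known.copy().astype(complex)
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X *= band_mask                      # enforce band-limitation
        x = np.fft.ifft(X)
        x[known_mask] = known[known_mask]   # re-impose measured samples
    return x

# Toy usage: recover a band-limited signal from ~60% of its samples
n = 128
t = np.arange(n)
truth = np.cos(2 * np.pi * 3 * t / n) + 0.5 * np.sin(2 * np.pi * 5 * t / n)
band_mask = np.abs(np.fft.fftfreq(n)) <= 8 / n
known_mask = np.random.default_rng(0).random(n) < 0.6
known = np.where(known_mask, truth, 0.0)
estimate = gerchberg_papoulis(known, known_mask, band_mask).real
```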
3

A color filter array interpolation method for digital cameras using alias cancellation

Appia, Vikram V., 31 March 2008
To reduce cost, many digital cameras use a single sensor array instead of three separate arrays for the red, green, and blue channels. Thus at each pixel location only the red, green, or blue intensity value is available, and to generate a complete color image the camera must estimate the two missing values at each pixel location. Color filter arrays are used to capture only one portion of the spectrum (red, green, or blue) at each location. Various arrangements of the Color Filter Array (CFA) are possible, but the Bayer array is the most commonly used arrangement, and we deal exclusively with the Bayer array in this thesis. Since each of the three color channels is effectively downsampled, aliasing artifacts arise. This thesis analyzes the effects of aliasing in the frequency domain and presents a method to reduce the deterioration in image quality due to aliasing artifacts. Two reference algorithms, AH-POCS (Adams and Hamilton - Projection Onto Convex Sets) and Adaptive Homogeneity-Directed interpolation, are discussed in detail. Both algorithms exploit the assumption that the color channels are highly correlated in high-frequency regions to reduce aliasing. AH-POCS uses an alias cancellation technique to reduce aliasing in the red and blue images, while the Adaptive Homogeneity-Directed interpolation algorithm is an edge-directed algorithm. We present an algorithm that combines these two techniques and provides a better result on average than the reference algorithms.
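To make the downsampling concrete, the sketch below mosaics an RGB image onto an assumed RGGB Bayer layout and demosaics it with naive bilinear interpolation — the kind of baseline whose aliasing artifacts motivate this work. It is not the AH-POCS or the proposed algorithm; the layout and kernels are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def bayer_mosaic(rgb):
    """Sample an RGB image onto an assumed RGGB Bayer pattern."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B
    return mosaic

def bilinear_demosaic(mosaic):
    """Naive bilinear demosaicing; exhibits the aliasing discussed above."""
    h, w = mosaic.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask

    # Standard bilinear kernels for the quincunx green and rectangular R/B grids
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

    out = np.empty((h, w, 3))
    out[..., 0] = convolve(mosaic * r_mask, k_rb, mode="mirror")
    out[..., 1] = convolve(mosaic * g_mask, k_g, mode="mirror")
    out[..., 2] = convolve(mosaic * b_mask, k_rb, mode="mirror")
    return out
```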
4

Uma abordagem híbrida baseada em Projeções sobre Conjuntos Convexos para Super-Resolução espacial e espectral / A hybrid approach based on projections onto convex sets for spatial and spectral super-resolution

Cunha, Bruno Aguilar, 10 November 2016
This work proposes both the study and the development of an algorithm for super-resolution of digital images using projections onto convex sets. The method is based on a classic projections-onto-convex-sets algorithm for spatial super-resolution which, considering the subpixel information present in a set of lower-resolution images, generates an image of higher resolution and better visual quality. We propose the incorporation of a new constraint, based on the Richardson-Lucy algorithm, in order to restore and recover part of the spatial frequencies lost during the degradation and decimation of the high-resolution images. In this way the algorithm provides a hybrid approach based on projections onto convex sets that is capable of simultaneously performing spatial and spectral super-resolution. The proposed approach was compared with the original algorithm of Sezan and Tekalp and then with a method based on a robust super-resolution framework regarded as one of the most effective methods currently available. The results, assessed both visually and through mean square error analysis, demonstrate that the proposed method has great potential for improving the visual quality of the images studied.
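A minimal sketch of the kind of spatial POCS super-resolution loop this work starts from, assuming purely translational motion, a Gaussian blur model, and simultaneous under-relaxed projections; the Richardson-Lucy-based spectral constraint proposed in the thesis is not reproduced here, and all names and parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift as nd_shift, zoom

def pocs_super_resolution(frames, shifts, scale=2, sigma=1.0,
                          delta=2.0, n_iter=15):
    """Minimal POCS spatial super-resolution sketch (translational motion).

    `frames` are low-resolution 2D arrays and `shifts` their (dy, dx)
    offsets in high-resolution pixels. Illustrative only; not the
    Sezan-Tekalp algorithm or the hybrid method proposed in the thesis.
    """
    hr = zoom(frames[0].astype(float), scale, order=1)   # initial estimate
    for _ in range(n_iter):
        for lr, (dy, dx) in zip(frames, shifts):
            # Forward imaging model: shift -> blur -> decimate
            sim = gaussian_filter(nd_shift(hr, (dy, dx), order=1),
                                  sigma)[::scale, ::scale]
            r = lr - sim
            # Residual outside the data-consistency tolerance |r| <= delta
            excess = np.where(np.abs(r) > delta,
                              r - np.sign(r) * delta, 0.0)
            # Approximate (under-relaxed) projection: back-project the
            # excess through the adjoint of the imaging model
            up = np.zeros_like(hr)
            up[::scale, ::scale] = excess
            hr = hr + nd_shift(gaussian_filter(up, sigma) * scale ** 2,
                               (-dy, -dx), order=1)
        hr = np.clip(hr, 0.0, None)   # amplitude (non-negativity) constraint
    return hr
```

In a full implementation each low-resolution pixel defines its own convex constraint set and is projected exactly, normalising by the point-spread-function energy, as in the classical POCS formulation the thesis builds on.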
