1

Embedded wavelet image reconstruction in parallel computation hardware

Guevara Escobedo, Jorge January 2016 (has links)
In this thesis an algorithm is demonstrated for the reconstruction of hard-field tomography images through localised block areas, obtained in parallel within a multiresolution framework; the block areas are subsequently tiled to assemble the full-size image. Because the wavelet transform preserves its compact support after ramp filtering, it has received much attention to date as a promising route to radiation dose reduction in medical imaging, through the reconstruction of essentially localised regions. In this work, that characteristic is exploited with the aim of reducing the time and complexity of the standard reconstruction algorithm. Reconstructing block images independently, with a geometry that completely covers the output frame, allows the individual blocks to be processed in parallel and the method's performance to be evaluated on a reconfigurable multiprocessor hardware system (e.g. an FPGA). Projection data were obtained from a simulated Radon transform (RT) at 180 evenly spaced angles. To define every relevant block area within the sinogram, a forward RT was performed over template phantoms representing the block frames. Reconstruction was then performed over a domain extending beyond the block frame limits, to allow calibration overlaps when fitting adjacent block images. The 256 by 256 Shepp-Logan phantom was used to test both the parallel multiresolution and the parallel block reconstruction generalisations. It is shown that the reconstruction of a single block image in a 3-scale multiresolution framework runs around 48 times faster than the standard methodology. Assuming a fully parallel implementation, the reconstruction time of the full-size, full-resolution image should therefore be close to that of a single tile.
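The block-wise idea lends itself to a short sketch. The following is a rough illustration only (not the author's implementation), assuming scikit-image's radon/iradon routines, a hypothetical 64 by 64 central block, and the 180-angle geometry from the abstract: a template phantom for the block frame is forward-projected to find its sinogram support, and the tile is reconstructed from the masked sinogram.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

# 256x256 test image and 180 evenly spaced projection angles, as in the abstract.
phantom = resize(shepp_logan_phantom(), (256, 256))
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(phantom, theta=theta)

# Template phantom for one (hypothetical) 64x64 central block frame; its
# forward projection marks the sinogram region relevant to that block.
template = np.zeros_like(phantom)
template[96:160, 96:160] = 1.0
support = radon(template, theta=theta) > 1e-6

# Reconstruct from the masked sinogram with the standard ramp filter; only
# the block region (plus a calibration margin) of the result is meaningful,
# and each such block could be handled by a separate processing element.
local = iradon(np.where(support, sinogram, 0.0), theta=theta, filter_name='ramp')
tile = local[96:160, 96:160]
```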
2

Algorithms for Tomographic Reconstruction of Rectangular Temperature Distributions using Orthogonal Acoustic Rays

Kim, Chuyoung 09 September 2016 (has links)
Non-intrusive acoustic thermometry using an acoustic impulse generator and two microphones is developed and integrated with tomographic techniques to reconstruct temperature contours. A low-velocity plume at around 450 °F exiting a rectangular duct (3.25 by 10 inches) was used for validation and reconstruction. A static-temperature relative error of 0.3% with respect to thermocouple measurements was achieved using a cross-correlation algorithm to calculate the speed of sound. Two tomographic reconstruction algorithms, the simplified multiplicative algebraic reconstruction technique (SMART) and the least-squares method (LSQR), are investigated for visualising temperature contours of the heated plume. A rectangular arrangement of transmitter and microphones with a traversing mechanism collected two orthogonal sets of acoustic projection data. Both reconstruction techniques successfully recreated the overall character of the contour; in future work, however, refraction effects and additional angled projections must be incorporated to improve local temperature estimation accuracy. The root-mean-square percentage errors in reconstructing non-uniform, asymmetric temperature contours with the SMART and LSQR methods are 20% and 19%, respectively. / Master of Science
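The cross-correlation step lends itself to a compact sketch. The snippet below is a minimal illustration, not the author's code: it assumes the ideal-gas relation c ≈ 20.05·sqrt(T[K]) m/s and a made-up 0.25 m microphone spacing, and recovers the path-averaged temperature of one ray from the lag of the cross-correlation peak.

```python
import numpy as np

def time_of_flight(ref, sig, fs):
    """Delay of `sig` relative to `ref` in seconds, sampled at `fs` Hz."""
    corr = np.correlate(sig, ref, mode='full')
    lag = np.argmax(corr) - (len(ref) - 1)
    return lag / fs

def path_temperature(ref, sig, fs, path_length_m):
    """Path-averaged gas temperature (K) along one acoustic ray."""
    c = path_length_m / time_of_flight(ref, sig, fs)   # speed of sound, m/s
    return (c / 20.05) ** 2                            # invert c = 20.05*sqrt(T)

# Illustrative use with two synthetic microphone records 0.25 m apart:
fs = 100_000
t = np.arange(0, 0.01, 1 / fs)
pulse = np.exp(-((t - 0.002) / 1e-4) ** 2)             # impulse seen at mic A
delay = int(0.25 / 450.0 * fs)                         # ~450 m/s near 450 °F
mic_b = np.roll(pulse, delay)                          # same pulse at mic B
print(path_temperature(pulse, mic_b, fs, 0.25))        # path-averaged T in kelvin
```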
3

Discrete Tomographic Reconstruction Methods From The Theories Of Optimization And Inverse Problems: Application In VLSI Microchip Production

Ozgur, Osman 01 January 2006 (has links) (PDF)
Optimization theory is a key technology for inverse reconstruction problems in science, engineering and economics. Discrete tomography is a modern research field dealing with the reconstruction of finite objects, e.g., in VLSI chip design, on which this thesis focuses. In this work, a framework with supplementary algorithms and a new problem reformulation is introduced to approximately solve this NP-hard problem. The framework is modular, so that other reconstruction methods, optimization techniques and optimal experimental design methods can be incorporated. The problem is revisited with a new optimization formulation, and interpretations of known methods within the framework are also given. Supplementary algorithms are combined or incorporated to improve the solution quality or to reduce the computational cost in time and space.
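The combinatorial core of discrete tomography can be shown with the simplest case: reconstructing a binary matrix from its row and column sums (two orthogonal projections). The sketch below is Ryser's classical greedy construction, included for illustration; the thesis itself treats the harder multi-projection problem through optimization.

```python
import numpy as np

def ryser(row_sums, col_sums):
    """Return a 0/1 matrix with the given projections, or None if infeasible."""
    if sum(row_sums) != sum(col_sums):
        return None
    n = len(col_sums)
    img = np.zeros((len(row_sums), n), dtype=int)
    cols = list(col_sums)
    # Fill rows in decreasing order of row sum, always placing 1s in the
    # columns with the largest remaining capacity (the standard greedy,
    # which succeeds whenever a solution exists).
    for i in sorted(range(len(row_sums)), key=lambda i: -row_sums[i]):
        picked = sorted(range(n), key=lambda j: -cols[j])[:row_sums[i]]
        if len(picked) < row_sums[i] or any(cols[j] == 0 for j in picked):
            return None
        for j in picked:
            img[i, j] = 1
            cols[j] -= 1
    return img if all(c == 0 for c in cols) else None

print(ryser([2, 1, 2], [1, 2, 2]))  # one binary image with these projections
```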
4

First-order gradient regularisation methods for image restoration: reconstruction of tomographic images with thin structures and denoising piecewise affine images

Papoutsellis, Evangelos January 2016 (has links)
The focus of this thesis is variational image restoration techniques involving novel non-smooth first-order gradient regularisers: Total Variation (TV) regularisation in image and data space for the reconstruction of thin structures from PET data, and regularisers given by an infimal convolution of TV and $L^p$ seminorms for denoising images with piecewise affine structures. In the first part of this thesis, we present a novel variational model for PET reconstruction. During a PET scan, we encounter two different spaces: the sinogram space, consisting of all the PET data collected from the detectors, and the image space, where the reconstruction of the unknown density is finally obtained. Unlike most state-of-the-art reconstruction methods, in which an appropriate regulariser is designed in the image space only, we introduce a new variational method incorporating regularisation in both image and sinogram space. In particular, the corresponding minimisation problem combines total variation regularisation on both the sinogram and the image with a suitably weighted $L^2$ fidelity term, which serves as an approximation to the Poisson noise model for PET. We establish the well-posedness of this new model for functions of Bounded Variation (BV) and perform an error analysis through the notion of the Bregman distance. We examine analytically how TV regularisation on the sinogram affects the reconstructed image, especially the boundaries of objects in the image. This analysis motivates the use of a combined regularisation principally for reconstructing images with thin structures.

In the second part of this thesis we propose a first-order regulariser that combines the total variation and $L^p$ seminorms with $1 < p \le \infty$. A well-posedness analysis is presented, and a detailed study of the one-dimensional model is performed by computing exact solutions for simple functions such as the step function and a piecewise affine function, for the regulariser with $p = 2$ and $p = 1$. We derive necessary and sufficient conditions for a pair in $BV \times L^p$ to be a solution of the proposed model and determine the structure of solutions depending on the value of $p$. In the case $p = 2$, we show that the regulariser is equivalent to the Huber-type variant of total variation regularisation. Moreover, there is a certain class of one-dimensional data functions for which the regularised solutions are equivalent to those of higher-order regularisers such as the state-of-the-art total generalised variation (TGV) model. The key assets of our regulariser are the elimination of the staircasing effect (a well-known disadvantage of total variation regularisation), the capability of obtaining piecewise affine structures for $p = 1$, and results qualitatively comparable to TGV. In addition, our first-order $TVL^p$ regulariser preserves spike-like structures that TGV is forced to smooth. The numerical solution of the proposed first-order model is in general computationally more efficient than that of higher-order approaches.
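Since the abstract notes that the $p = 2$ case is equivalent to a Huber-type variant of TV, the one-dimensional version can be minimised by plain gradient descent, the smooth Huber penalty having a Lipschitz gradient. The sketch below illustrates that equivalence under assumed parameter values; it is not the algorithm used in the thesis.

```python
import numpy as np

def huber_grad(t, gamma):
    """Derivative of the Huber penalty: quadratic core, linear tails."""
    return np.clip(t / gamma, -1.0, 1.0)

def huber_tv_denoise_1d(f, alpha=0.5, gamma=0.05, iters=3000):
    """Minimise 0.5*||u - f||^2 + alpha * sum_i huber(u[i+1] - u[i])."""
    u = f.copy()
    step = 1.0 / (1.0 + 4.0 * alpha / gamma)   # conservative stable step size
    for _ in range(iters):
        g = huber_grad(np.diff(u), gamma)
        # discrete divergence of g, so the penalty gradient is -alpha * div
        div = np.concatenate(([g[0]], np.diff(g), [-g[-1]]))
        u -= step * ((u - f) - alpha * div)
    return u

# Noisy piecewise-affine test signal, the structure targeted by the model.
x = np.linspace(0.0, 1.0, 200)
clean = np.where(x < 0.5, 2.0 * x, 1.5 - x)
noisy = clean + 0.05 * np.random.default_rng(1).standard_normal(200)
denoised = huber_tv_denoise_1d(noisy)
```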
5

Development of Sparse Recovery Based Optimized Diffuse Optical and Photoacoustic Image Reconstruction Methods

Shaw, Calvin B January 2014 (has links) (PDF)
Diffuse optical tomography uses near-infrared (NIR) light as the probing medium to recover the distributions of tissue optical properties, with an ability to provide functional information about the tissue under investigation. As NIR light propagation in tissue is dominated by scattering, the image reconstruction problem (inverse problem) is non-linear and ill-posed, requiring advanced computational methods to compensate. The diffuse optical image reconstruction problem is always rank-deficient, making it challenging to identify the independent measurements among those available. Knowing these independent measurements helps in designing better data acquisition set-ups and in lowering the associated costs. An optimal measurement selection strategy is proposed based on incoherence among the rows (corresponding to measurements) of the sensitivity (or weight) matrix for near-infrared diffuse optical tomography. As incoherence among the measurements can be seen as providing maximally independent information for the estimation of optical properties, this yields the level of optimization required to know how independent a particular measurement is of its counterparts. The utility of the proposed scheme is demonstrated on simulated and experimental gelatin phantom data sets, in comparison with state-of-the-art methods.

Traditional image reconstruction methods employ the ℓ2-norm in the regularization functional, resulting in smooth solutions in which sharp image features are absent. Sparse recovery methods utilizing the ℓp-norm with p between 0 and 1 (0 ≤ p ≤ 1), along with an approximation to the ℓ0-norm, have been deployed for the reconstruction of diffuse optical images. These methods are shown to be more quantitative in reconstructing realistic diffuse optical images than traditional methods. Utilization of ℓp-norm-based regularization makes the objective (cost) function non-convex, and algorithms that implement ℓp-norm minimization rely on approximations to the original ℓp-norm function. Three such methods were considered: Iteratively Reweighted ℓ1-minimization (IRL1), Iteratively Reweighted Least Squares (IRLS), and the Iterative Thresholding Method (ITM). The results indicate that the IRL1 implementation of ℓp-minimization provides optimal performance in terms of shape recovery and quantitative accuracy of the reconstructed diffuse optical tomographic images.

Photoacoustic tomography (PAT) is an emerging hybrid imaging modality combining optics with ultrasound imaging. PAT provides structural and functional imaging in diverse application areas, such as breast cancer and brain imaging. Model-based iterative reconstruction schemes are the most popular for recovering the initial pressure in the limited-data case, wherein a large linear system of equations needs to be solved. These iterative methods often require regularization parameter estimation, which tends to be computationally expensive, forcing the image reconstruction to be performed off-line. To overcome this limitation, a computationally efficient approach that computes the optimal regularization parameter is developed for PAT. This approach is based on the least-squares QR (LSQR) decomposition, a well-known dimensionality-reduction technique for large systems of equations. It is shown that the proposed framework is effective in terms of quantitative and qualitative reconstruction of the initial pressure distribution.
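As an illustration of one of the three ℓp schemes named above, the snippet below sketches a basic IRLS iteration for min ‖Ax − b‖² + λ Σᵢ |xᵢ|ᵖ with 0 < p ≤ 1, using a smoothed weight. It is a generic textbook variant on a synthetic problem, not the thesis implementation.

```python
import numpy as np

def irls_lp(A, b, lam=0.1, p=0.5, iters=50, eps=1e-6):
    """IRLS for l_p-regularised least squares: each step is a weighted ridge solve."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]           # least-squares start
    for _ in range(iters):
        w = p * (np.abs(x) ** 2 + eps) ** (p / 2 - 1)  # smoothed |x|^(p-2) weights
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ b)
    return x

# Tiny demo: recover a sparse vector from underdetermined noisy measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = irls_lp(A, b, lam=0.05, p=0.5)
```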
