11 |
An investigative study of the applicability of the convolution method to geophysical tomography / Chin, Kimberley Germaine, January 1985
No description available.
|
12 |
An Empirical Study of Algebraic Reconstruction Techniques / Mohammad, Kazemi Ehsan, 10 1900
A computerized tomography scan enables the visualization of an object's interior without opening it up. This technique is used in many fields, e.g., medical imaging, geology, and industry. To obtain information about an object, exterior measurements by means of X-rays are performed. Then, to reconstruct an image of the object's interior, image-reconstruction methods are applied. The problem of reconstructing images from measurements of X-ray radiation belongs to the class of inverse problems. An important class of methods for inverse problems is Algebraic Reconstruction Techniques (ART). The performance of these methods depends on the choice of a relaxation parameter.

In this thesis, we compare numerically various ART methods, namely Kaczmarz, symmetric Kaczmarz, randomized Kaczmarz, and simultaneous ART. We perform an extensive numerical investigation of the behaviour of these methods and, in particular, study how they perform with respect to this relaxation parameter. We propose a simple heuristic for finding a good relaxation parameter for each of these methods. Comparisons of the newly proposed strategy with a previously proposed one show that our strategy performs slightly better in terms of the relative error, relative residual, and image discrepancy of the reconstructed image. Both strategies showed relatively close numerical results, but, interestingly, for different values of this parameter. / Master of Computer Science (MCS)
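The relaxation parameter scales each row projection in the Kaczmarz-type updates compared above. As a rough illustration only (not the implementation used in the thesis), the following minimal dense-matrix sketch performs relaxed Kaczmarz sweeps and scans a small, hypothetical grid of relaxation values on a synthetic consistent system; a selection heuristic would then compare error or residual measures of this kind across candidate values.

```python
import numpy as np

def kaczmarz(A, b, relaxation=1.0, sweeps=10, x0=None):
    """Relaxed Kaczmarz ART: sweep through the rows of A and project the
    current iterate onto each hyperplane a_i . x = b_i, with the step
    scaled by the relaxation parameter."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.copy()
    row_norms = np.sum(A * A, axis=1)            # ||a_i||^2 for every row
    for _ in range(sweeps):
        for i in range(m):
            if row_norms[i] == 0.0:
                continue
            residual = b[i] - A[i] @ x
            x += relaxation * (residual / row_norms[i]) * A[i]
    return x

# Hypothetical test problem: compare relative errors over a grid of relaxation values.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
x_true = rng.standard_normal(50)
b = A @ x_true
for lam in (0.25, 0.5, 1.0, 1.5):
    x_rec = kaczmarz(A, b, relaxation=lam, sweeps=20)
    print(lam, np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```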
|
13 |
Two-Phase Flow Measurement using Fast X-ray Line Detector System / Song, Kyle Seregay, 25 November 2019
Void fraction is an essential parameter for understanding the interfacial structure and the heat and mass transfer mechanisms in various gas-liquid flow systems. Accurate void fraction measurement has become critically important because advanced, high-fidelity two-phase flow models require high-quality validation data. However, void fraction measurement remains a challenging task to date due to the complexity and rapidly changing character of the gas-liquid interfacial flow structure. This study aims to develop an advanced void fraction measurement system based on x-ray and fast line-detector technologies. The dissertation covers the major components necessary to develop a complete measurement system. Spectral analysis of x-ray attenuation in two-phase flow is performed, and a new void fraction model is developed based on this analysis. A newly developed pixel-to-radial conversion algorithm converts the void fraction measured along the detector array to the radial distribution in a circular pipe for a wide range of void fraction conditions. The x-ray system attains the radial distributions of key measurable quantities such as void fraction and gas velocity. The data are compared with double-sensor conductivity probe and gas flowmeter measurements for various flow conditions, and the results show reasonable agreement between the x-ray system and the other measurement techniques. Finally, various 2-D tomography algorithms are implemented for non-axisymmetric two-phase flow reconstruction. A comprehensive summary of classical absorption tomography for two-phase flow studies is provided. An in-depth sensitivity study is carried out using synthetic bubbles to investigate the effect of uncertainty factors such as background noise, off-center shift, and void profile effects. The sensitivity study provides a general guideline on the performance of existing 2-D reconstruction algorithms. / Doctor of Philosophy / The gas-liquid flow phenomenon exists in an extensive range of natural and engineering systems, for example, hydraulic pipelines in a nuclear reactor, heat exchangers, pump cavitation, and boilers in gas-fired power stations. Accurate measurement of the void fraction is essential to understand the behavior of two-phase flow. However, measuring the void fraction distribution in two-phase flow is a difficult task due to its complex and fast-changing interfacial structure. This study developed a comprehensive suite of non-intrusive x-ray measurement techniques and a pixel-to-radial conversion algorithm to process line- and time-averaged void fraction information. The newly developed algorithm, called the Area-based Onion-Peeling (ABOP) method, converts the pixel measurements to the radial void fraction distribution, which is more useful for studying and modeling axisymmetric flows. Various flow conditions are measured and evaluated to benchmark the algorithm. Finally, classical 2-D reconstruction algorithms are investigated for void fraction measurement in non-axisymmetric flows. A comprehensive summary of the performance of these algorithms for two-phase flow studies is provided. An in-depth sensitivity study using synthetic bubbles has been performed to examine the effect of uncertainty factors and to benchmark the algorithms for non-axisymmetric flows.
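To make the pixel-to-radial idea concrete, the sketch below gives a generic onion-peeling style conversion for an axisymmetric circular pipe: each parallel beam's line-averaged void fraction is expressed as a path-length-weighted average over the concentric rings it crosses, and the resulting small linear system is solved for the ring values. This is an illustration under stated geometric assumptions, not the ABOP implementation from the dissertation; the pipe radius, beam offsets, and chordal values in the usage lines are made up.

```python
import numpy as np

def onion_peeling(chord_alpha, chord_y, R, n_rings):
    """Convert line-averaged void fractions measured along parallel beams at
    lateral offsets chord_y into a radial profile over n_rings concentric,
    equal-thickness rings of a pipe of radius R (axisymmetric assumption)."""
    def half_chord(r, y):
        # half-length of a chord at offset y inside a circle of radius r
        return np.sqrt(np.maximum(r * r - y * y, 0.0))

    r_edges = np.linspace(0.0, R, n_rings + 1)
    # L[i, j] = path length of beam i inside ring j
    L = np.array([[2.0 * (half_chord(r_edges[j + 1], y) - half_chord(r_edges[j], y))
                   for j in range(n_rings)] for y in chord_y])
    # each chordal average is the path-length-weighted mean of the ring values
    rhs = np.asarray(chord_alpha) * L.sum(axis=1)
    alpha_ring, *_ = np.linalg.lstsq(L, rhs, rcond=None)
    return alpha_ring

# Hypothetical usage: 10 beams across a 25 mm-radius pipe, made-up chordal values
R = 0.025
y = np.linspace(0.0, 0.9, 10) * R
chord_alpha = 0.5 * (1.0 - (y / R) ** 2)        # illustrative chordal void fractions
print(np.round(onion_peeling(chord_alpha, y, R, n_rings=10), 3))
```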
|
14 |
Algorithms for Tomographic Reconstruction of Rectangular Temperature Distributions using Orthogonal Acoustic Rays / Kim, Chuyoung, 09 September 2016
Non-intrusive acoustic thermometry using an acoustic impulse generator and two microphones is developed and integrated with tomographic techniques to reconstruct temperature contours. A low-velocity plume at around 450 °F exiting through a rectangular duct (3.25 by 10 inches) was used for validation and reconstruction. A static-temperature relative error of 0.3% compared with thermocouple-measured data was achieved using a cross-correlation algorithm to calculate the speed of sound. Tomographic reconstruction algorithms, the simplified multiplicative algebraic reconstruction technique (SMART) and the least-squares method (LSQR), are investigated for visualizing temperature contours of the heated plume. A rectangular arrangement of the transmitter and microphones with a traversing mechanism collected two orthogonal sets of acoustic projection data. Both reconstruction techniques successfully recreated the overall character of the contour; however, in future work, accounting for refraction effects and implementing additional angled projections will be required to improve local temperature estimation accuracy. The root-mean-square percentage errors for reconstructing non-uniform, asymmetric temperature contours using the SMART and LSQR methods are 20% and 19%, respectively. / Master of Science / Computed tomography is an approach for reconstructing cross-sectional planar images of a 3D object. This technique is widely used in the medical field, where x-rays are employed to visualize cross-sections of internal organs. Along with x-rays, acoustic rays can also be utilized with tomographic techniques. The speed of sound travelling through a gaseous medium, such as air, depends on the density, humidity, and temperature of the medium. Using this relationship, the temperature of the medium can be calculated from the speed of sound when the density and humidity are known. The speed of sound can be found from the travel distance and time of flight of the acoustic ray measured with a transmitter and microphones. Since the effect of the medium's density and humidity on the speed of sound is relatively insignificant, those values were assumed to be constant. In this research, the acoustic temperature-measurement technique based on the speed-of-sound relationship was applied and validated, and the technique was then integrated with tomography using two projection angles. A rectangular duct (3.25 by 10 inches) with heated air at around 450 °F exiting the duct was tested. The temperatures calculated from acoustics were compared with values measured with thermocouples. After the acoustic temperature-measurement technique was validated, multiple acoustic rays arranged in two orthogonal projections were set up. The speed-of-sound values from the acoustic rays were used to reconstruct the temperature distribution at the duct exit using two tomographic reconstruction methods: LSQR and SMART. Both reconstruction techniques captured the overall contour of the temperature field. More projection angles and the refractive behaviour of sound will be incorporated in the future to overcome the limitations in detailed reconstruction.
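The two basic relations behind the acoustic measurement are the cross-correlation estimate of the time of flight and the ideal-gas link between sound speed and temperature, c = sqrt(gamma * R * T). A minimal sketch of both steps is shown below; it is illustrative only (the path length and time-of-flight value are hypothetical, and refraction, humidity, and flow effects are neglected, as noted above).

```python
import numpy as np

def time_of_flight(sig_tx, sig_rx, fs):
    """Travel time estimated from the lag that maximizes the cross-correlation
    of the received microphone signal with the transmitted impulse."""
    corr = np.correlate(sig_rx, sig_tx, mode="full")
    lag_samples = int(np.argmax(corr)) - (len(sig_tx) - 1)
    return lag_samples / fs

def path_temperature(distance, tof, gamma=1.4, R_air=287.05):
    """Path-averaged temperature [K] from c = distance / tof and c**2 = gamma * R * T."""
    c = distance / tof
    return c * c / (gamma * R_air)

# Hypothetical numbers: a 0.254 m (10 in) path traversed in about 0.56 ms
T = path_temperature(distance=0.254, tof=0.56e-3)
print(f"{T:.1f} K  ({(T - 273.15) * 9 / 5 + 32:.0f} °F)")
```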
|
15 |
Proton Computed Tomography: Matrix Data Generation Through General Purpose Graphics Processing Unit Reconstruction / Witt, Micah, 01 March 2014
Proton computed tomography (pCT) is an imaging modality that will improve treatment planning for patients receiving proton radiation therapy compared with the current techniques, which are based on X-ray CT. Images are reconstructed in pCT by solving a large and sparse system of linear equations. The size of the system necessitates matrix partitioning and parallel reconstruction algorithms implemented on a cluster computing architecture. The prototypical algorithm for solving the pCT system is the algebraic reconstruction technique (ART), which has been modified into parallel versions called block-iterative projection (BIP) methods and string-averaging projection (SAP) methods. General purpose graphics processing units (GPGPUs) have hundreds of stream processors for massively parallel calculations. A GPGPU cluster is a set of nodes, with each node containing a set of GPGPUs. This thesis describes a proton simulator that was developed to generate realistic pCT data sets. Simulated data sets were used to compare the performance of a BIP implementation against a SAP implementation on a single GPGPU, with the data stored in a sparse matrix structure called the compressed sparse row (CSR) format. Both BIP and SAP algorithms allow for parallel computation by creating row partitions of the pCT linear system. The difference between these two general classes of algorithms is that BIP permits parallel computations within the row partitions yet sequential computations between the row partitions, whereas SAP permits parallel computations between the row partitions yet sequential computations within the row partitions. This thesis also introduces a general partitioning scheme to be applied to a GPGPU cluster to achieve a pure parallel ART algorithm while providing a framework for column partitioning of the pCT system, and it shows sparsity patterns that can be revealed by specific orderings of the equations within the matrix.
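To make the BIP/SAP distinction concrete, the sketch below applies relaxed row projections to a system stored in CSR format: a SAP sweep runs Kaczmarz sequentially inside each string and averages the strings' end points (the strings could run in parallel), while a BIP sweep averages the projections inside each block and steps through the blocks sequentially. This is a generic CPU-side illustration with SciPy, not the GPGPU implementation from the thesis; the partitioning and the test system in the usage lines are arbitrary.

```python
import numpy as np
from scipy.sparse import csr_matrix

def project_row(A, b, x, i, lam=1.0):
    """Relaxed projection of x onto the hyperplane of row i, working directly
    on the CSR arrays (indptr, indices, data)."""
    start, end = A.indptr[i], A.indptr[i + 1]
    cols, vals = A.indices[start:end], A.data[start:end]
    norm2 = float(np.dot(vals, vals))
    if norm2 == 0.0:
        return x
    y = x.copy()
    y[cols] += lam * (b[i] - np.dot(vals, x[cols])) / norm2 * vals
    return y

def sap_sweep(A, b, x, strings, lam=1.0):
    """String-averaging projections: sequential within each string, strings
    independent of one another (parallelizable), end points averaged."""
    ends = []
    for string in strings:                      # parallelizable across strings
        y = x
        for i in string:                        # sequential within a string
            y = project_row(A, b, y, i, lam)
        ends.append(y)
    return np.mean(ends, axis=0)

def bip_sweep(A, b, x, blocks, lam=1.0):
    """Block-iterative projections: blocks processed sequentially, and within a
    block all row projections start from the same x (parallelizable)."""
    for block in blocks:                        # sequential across blocks
        x = np.mean([project_row(A, b, x, i, lam) for i in block], axis=0)
    return x

# Toy usage: an arbitrary sparse 40 x 20 system split into four row partitions
rng = np.random.default_rng(1)
A = csr_matrix(rng.standard_normal((40, 20)) * (rng.random((40, 20)) < 0.3))
x_true = rng.standard_normal(20)
b = A @ x_true
parts = np.array_split(np.arange(40), 4)
x = np.zeros(20)
for _ in range(100):
    x = sap_sweep(A, b, x, parts)
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```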
|
16 |
Exploiting parallelism of irregular problems and performance evaluation on heterogeneous multi-core architectures / Xu, Meilian, 04 October 2012
In this thesis, we design, develop, and implement parallel algorithms for irregular problems on heterogeneous multi-core architectures. Irregular problems exhibit random and unpredictable memory access patterns, poor spatial locality, and input-dependent control flow. Heterogeneous multi-core processors vary in clock frequency, power dissipation, programming model (MIMD vs. SIMD), memory design, and computing units (scalar versus vector units). The heterogeneity of the processors makes designing efficient parallel algorithms for irregular problems on heterogeneous multi-core processors challenging. Techniques for mapping tasks or data on traditional parallel computers cannot be used as-is on heterogeneous multi-core processors because of the varying hardware. In an attempt to understand the efficiency of futuristic heterogeneous multi-core architectures on applications, we study several computation- and bandwidth-oriented irregular problems on one heterogeneous multi-core architecture, the IBM Cell Broadband Engine (Cell BE). The Cell BE consists of a general-purpose processor and eight specialized processors and addresses vector/data-level parallelism and instruction-level parallelism simultaneously. Through these studies on the Cell BE, we provide discussion of and insight into the performance of the applications on heterogeneous multi-core architectures.

Verifying these experimental results requires some performance modeling. Due to the diversity of heterogeneous multi-core architectures, theoretical performance models used for homogeneous multi-core architectures do not provide accurate results. Therefore, in this thesis we propose an analytical performance prediction model that considers the multitude of architectural features of heterogeneous multi-core processors (such as DMA transfers, the number of instructions and operations, the processor frequency, and the DMA bandwidth). We show that the execution time predicted by our model is comparable to the measured execution time for a complex medical imaging application.
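As a hedged illustration of what such an analytical estimate can look like (the model form and all parameter values below are assumptions, not the thesis's actual model or notation), a per-kernel prediction for an accelerator core might combine an issue-limited compute term with a DMA-transfer term and, when double buffering overlaps the two, take their maximum:

```python
def predict_kernel_time(n_operations, simd_width, clock_hz,
                        bytes_in, bytes_out, dma_bandwidth,
                        dma_latency=0.0, overlap=True):
    """Back-of-the-envelope execution-time estimate for one accelerator kernel:
    an issue-limited compute term plus a DMA-transfer term, overlapped when
    double buffering hides transfers behind computation."""
    t_compute = (n_operations / simd_width) / clock_hz
    t_dma = dma_latency + (bytes_in + bytes_out) / dma_bandwidth
    return max(t_compute, t_dma) if overlap else t_compute + t_dma

# Hypothetical numbers for a 3.2 GHz SPE-like core streaming 64 KiB tiles
t = predict_kernel_time(n_operations=2.0e6, simd_width=4, clock_hz=3.2e9,
                        bytes_in=64 * 1024, bytes_out=64 * 1024,
                        dma_bandwidth=25.6e9)
print(f"predicted tile time: {t * 1e6:.1f} us")
```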
|
18 |
A Technique for Magnetron Oscillator Based Inverse Synthetic Aperture Radar Image FormationAljohani, Mansour Abdullah M. January 2019 (has links)
No description available.
|