1. Parameter Estimation in Magnetic Resonance Imaging. Graff, Christian George. January 2009.
This work concerns practical quantitative magnetic resonance (MR) imaging techniques and their implementation and use in clinical MR systems. First, background information on MR imaging is given, including the physics of magnetic resonance, relaxation effects, and how imaging is accomplished.

The first part of this work describes the estimation of the T2 relaxation parameter from fast spin-echo (FSE) data. Various complications are considered, including partial volume effects and data from multiple receiver coils, along with the influence of the timing parameters on the accuracy of T2 estimates. Next, the problem of classifying small (1 cm diameter) liver lesions using T2 estimates obtained from radially acquired FSE data collected in a single breath-hold is considered. Several algorithms are proposed for obtaining lesion T2 estimates, and these algorithms are evaluated with a task-based metric: their ability to separate two classes of lesions, benign and malignant. A novel computer-generated phantom is developed to generate the data used in this evaluation.

The second part of this work describes techniques that separate water and lipid signals while simultaneously estimating relaxation parameters of clinical relevance. The acquisition sequences used here are Cartesian and radial versions of Gradient and Spin-Echo (GRASE). The radial GRASE data are post-processed with a novel algorithm that estimates the T2 of the water signal independent of the lipid signal. The accuracy of this algorithm is evaluated in a phantom, and its potential use for detecting inflammation of the liver is evaluated using clinical data. Cartesian GRASE data are processed to obtain T2* and lipid-fraction estimates in bone, which can be used to assess bone quality. The algorithm is tested in a phantom and in vivo, and preliminary results are given.

In the concluding chapter, results are summarized and directions for future work are indicated.
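As a rough illustration of the kind of T2 estimation the first part discusses (a minimal sketch, not Graff's radial-FSE algorithm), the code below fits a mono-exponential decay model to multi-echo magnitude data from a single voxel. The echo times and signal values are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def t2_decay(te, s0, t2):
    """Mono-exponential transverse relaxation: S(TE) = S0 * exp(-TE / T2)."""
    return s0 * np.exp(-te / t2)

# Hypothetical echo times (ms) and noisy single-voxel magnitude signal.
te = np.array([10.0, 20.0, 40.0, 80.0, 160.0])
signal = np.array([897.0, 805.0, 636.0, 415.0, 172.0])

# Crude two-point log estimate of T2 to seed the nonlinear fit.
t2_guess = (te[-1] - te[0]) / np.log(signal[0] / signal[-1])
popt, _ = curve_fit(t2_decay, te, signal, p0=[signal[0], t2_guess])
s0_hat, t2_hat = popt
print(f"Estimated S0 = {s0_hat:.0f}, T2 = {t2_hat:.1f} ms")
```

In practice the fit would be repeated per voxel, and the complications named in the abstract (partial volume, multiple coils, echo timing) all perturb this simple model.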

2. Estimating Signal Features from Noisy Images with Stochastic Backgrounds. Whitaker, Meredith Kathryn. January 2008.
Imaging is often used in scientific applications as a measurement tool. The location of a target, the brightness of a star, and the size of a tumor are all examples of object features sought in various imaging applications. A perfect measurement of these quantities from image data is impossible because of, most notably, detector noise fluctuations, the finite resolution and sensitivity of the imaging instrument, and obscuration by undesirable object structures. For these reasons, sophisticated image-processing techniques are designed to treat images as random variables. Quantities calculated from an image are subject to error and fluctuation, which is why they are called estimates of object features.

This research focuses on estimator error for tasks common to imaging applications. Computer simulations of imaging systems are employed to compare the estimates to the true values. These computations allow for algorithm performance tests and subsequent development. The basic task of interest is estimating the location, size, and strength of a signal embedded in a background structure from noisy image data. The estimation task's degree of difficulty is adjusted to discover the simplest data processing that yields successful estimates.

Even with an idealized imaging model, linear Wiener estimation was found to be insufficient for estimating signal location and shape. These results motivated the investigation of more complex data processing. A new method, named the scanning-linear estimator because it maximizes a linear functional, succeeds in cases where linear estimation fails. This method has also demonstrated positive results when tested in realistic simulations of tomographic SPECT imaging systems. A comparison to a model of current clinical estimation practice found that the scanning-linear method offers substantial gains in performance.
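The sketch below illustrates the scanning idea under simplifying assumptions: at each candidate location, a linear functional of the data (here, an inner product with a plain Gaussian template) is evaluated, and the maximizer is taken as the location estimate. The template, grid, and test image are hypothetical; the dissertation's estimator also accounts for the statistics of the stochastic background.

```python
import numpy as np

def gaussian_template(shape, center, width):
    """A Gaussian blob centered at a candidate signal location."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    r2 = (yy - center[0]) ** 2 + (xx - center[1]) ** 2
    return np.exp(-r2 / (2.0 * width ** 2))

def scanning_linear_location(image, width, step=2):
    """Scan a grid of candidate centers; at each one evaluate a linear
    functional of the data (a template inner product) and return the
    candidate that maximizes it."""
    best_val, best_loc = -np.inf, None
    for cy in range(0, image.shape[0], step):
        for cx in range(0, image.shape[1], step):
            template = gaussian_template(image.shape, (cy, cx), width)
            val = float(np.sum(template * image))  # linear in the data
            if val > best_val:
                best_val, best_loc = val, (cy, cx)
    return best_loc

# Hypothetical test image: a faint blob at (40, 25) plus noise.
rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, (64, 64))
img += 3.0 * gaussian_template(img.shape, (40, 25), 3.0)
print(scanning_linear_location(img, width=3.0))
```

The estimator remains nonlinear overall, since the final maximization over candidates is not a linear operation on the data.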

3. A Multidimensional Filtering Framework with Applications to Local Structure Analysis and Image Enhancement. Svensson, Björn. January 2008.
Filtering is a fundamental operation in image science in general and in medical image science in particular. The most central applications are image enhancement, registration, segmentation, and feature extraction. Even though these applications involve non-linear processing, a majority of the available methodologies rely on initial estimates computed with linear filters. Linear filtering is a well-established cornerstone of signal processing, reflected by the overwhelming amount of literature on finite impulse response filters and their design. Standard techniques for multidimensional filtering are computationally intense, which leads either to long computation times or to a performance loss caused by approximations made to increase computational efficiency.

This dissertation presents a framework for the realization of efficient multidimensional filters. A weighted least-squares design criterion ensures preservation of performance, and two techniques, filter networks and sub-filter sequences, significantly reduce the computational demand. A filter network is a realization of a set of filters decomposed into a structure of sparse sub-filters, each with a low number of coefficients. Sparsity is the key property for reducing the number of floating-point operations required for filtering. The network structure is also important for efficiency, since it determines how the sub-filters contribute to several output nodes, allowing reduction or elimination of redundant computations.

Filter networks, the main contribution of this dissertation, have many potential applications. The primary targets of the research presented here have been local structure analysis and image enhancement. A filter-network realization for local structure analysis in 3D shows a computational gain, in terms of multiplications required, that can exceed a factor of 70 compared to standard convolution. For comparison, this filter network requires approximately the same number of multiplications per signal sample as a single 2D filter. These results are purely algorithmic and are not in conflict with hardware acceleration techniques such as parallel processing or graphics processing units (GPUs). To give a flavor of the computation time required, a prototype implementation that uses filter networks carries out image enhancement in 3D, involving the computation of 16 filter responses, at an approximate speed of 1 Mvoxel/s on a standard PC.
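A minimal sketch of why sparse sub-filter sequences cut the multiplication count (this is not the dissertation's weighted least-squares network design): a short kernel applied repeatedly with increasing dilation covers the same support as a much longer dense filter. With a 3-tap kernel at dilations 1, 2, and 4, each axis spans a 15-tap support but needs only 9 multiplications per sample, provided the implementation skips the inserted zeros. The kernel and dilation schedule below are illustrative choices.

```python
import numpy as np
from scipy.ndimage import convolve1d

def sparse_subfilter_sequence(data, kernel, dilations, axis):
    """Apply a short kernel repeatedly with increasing dilation
    (zeros inserted between taps).  The cascade covers the support of
    a much longer dense filter; a dedicated implementation would skip
    the zero taps, unlike convolve1d, which is used here for brevity."""
    out = data
    for d in dilations:
        sparse = np.zeros((len(kernel) - 1) * d + 1, dtype=kernel.dtype)
        sparse[::d] = kernel          # d - 1 zeros between the taps
        out = convolve1d(out, sparse, axis=axis, mode="nearest")
    return out

img = np.random.rand(128, 128)
binomial = np.array([0.25, 0.5, 0.25])   # 3-tap smoothing sub-filter

# Separable 2D smoothing from 1D sparse sub-filter sequences:
# support per axis is (3 - 1) * (1 + 2 + 4) + 1 = 15 taps, yet only
# 3 * 3 = 9 multiplications per sample are needed per axis.
smoothed = img
for ax in (0, 1):
    smoothed = sparse_subfilter_sequence(smoothed, binomial, (1, 2, 4), ax)
```

The gain grows quickly in 3D and for filter sets with shared sub-filters, which is where the factor-of-70 reduction cited in the abstract comes from.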

4. True Color Measurements Using Color Calibration Techniques. Wransky, Michael E. 15 September 2015.
No description available.