51 |
Linear sampling type methods for inverse scattering problems: theory and applications. January 2011 (has links)
Dai, Lipeng. Thesis (M.Phil.)--Chinese University of Hong Kong, 2011. Includes bibliographical references (p. 73-75). Abstracts in English and Chinese.
Contents: Abstract (p. i); Acknowledgement (p. iv)
Chapter 1, Introduction (p. 1): 1.0.1 Linear sampling method (p. 2); 1.0.2 Choice of cut-off values (p. 5); 1.0.3 Underwater image problem (p. 7)
Chapter 2, Mathematical justification of LSM (p. 10): 2.1 Some mathematical preparations (p. 11); 2.2 Well-posedness of an interior transmission problem (p. 13); 2.3 Linear sampling method: full aperture (p. 20); 2.4 Linear sampling method: limited aperture (p. 23)
Chapter 3, Strengthened linear sampling method (p. 28): 3.1 Proof of Theorem 1.0.3 (p. 28); 3.2 Several estimates in theory for strengthened LSM (p. 33)
Chapter 4, Underwater imaging problem (p. 38): 4.1 Boundary integral method (p. 38); 4.2 Approximation of the integral kernel in (4.12) (p. 40); 4.3 Numerical solution of (4.12) (p. 44); 4.4 Underwater image problem (p. 45); 4.5 Imaging scheme without a reference object (p. 48); 4.6 Numerical examples without a reference object (p. 49); 4.7 Imaging scheme with a reference object (p. 59); 4.8 Numerical examples with a reference object (p. 61)
|
52 |
Joint recovery of high-dimensional signals from noisy and under-sampled measurements using fusion penalties. Poddar, Sunrita. 01 December 2018 (has links)
The presence of missing entries poses a hindrance to data analysis and interpretation. Missing entries may occur for a variety of reasons, such as sensor malfunction, limited acquisition time, or unavailability of information. In this thesis, we present algorithms to analyze and complete data containing many missing entries. We consider the recovery of a group of signals, given a few under-sampled and noisy measurements of each signal. This involves solving ill-posed inverse problems, since the number of available measurements is considerably smaller than the dimensionality of the signal that we aim to recover. In this work, we consider different data models that enable joint recovery of the signals from their measurements, as opposed to independent recovery of each signal. This prior knowledge makes the inverse problems well-posed. While compressive sensing techniques have been proposed for low-rank or sparse models, such techniques have not been studied to the same extent for other models, such as data appearing in clusters or lying on a low-dimensional manifold. In this work, we consider several data models arising in different applications, and present theoretical guarantees for the joint reconstruction of the signals from a few measurements. Our proposed techniques make use of fusion penalties: regularizers that promote solutions with similarity between certain pairs of signals.
The first model that we consider is that of points lying on a low-dimensional manifold, embedded in high dimensional ambient space. This model is apt for describing a collection of signals, each of which is a function of only a few parameters; the manifold dimension is equal to the number of parameters. We propose a technique to recover a series of such signals, given a few measurements for each signal. We demonstrate this in the context of dynamic Magnetic Resonance Imaging (MRI) reconstruction, where only a few Fourier measurements are available for each time frame. A novel acquisition scheme enables us to detect the neighbours of each frame on the manifold. We then recover each frame by enforcing similarity with its neighbours. The proposed scheme is used to enable fast free-breathing cardiac and speech MRI scans.
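The neighbour-based recovery idea above can be sketched with a quadratic fusion penalty: each signal is reconstructed from its own undersampled measurements while being pulled toward its neighbours on the manifold. The measurement matrices, the quadratic form of the penalty, and the plain gradient-descent solver are all illustrative assumptions here, not the thesis's actual MRI pipeline.

```python
import numpy as np

def fusion_recover(A_list, b_list, neighbours, lam=1.0, iters=2000, step=0.1):
    """Jointly recover signals x_i from undersampled measurements b_i = A_i x_i
    by minimising  sum_i ||A_i x_i - b_i||^2 + lam * sum_(i,j) ||x_i - x_j||^2,
    where the second sum runs over neighbouring frames on the manifold.
    Plain gradient descent on the joint quadratic objective (toy sketch)."""
    n = A_list[0].shape[1]
    X = [np.zeros(n) for _ in A_list]
    for _ in range(iters):
        grads = []
        for i, (A, b) in enumerate(zip(A_list, b_list)):
            g = 2.0 * A.T @ (A @ X[i] - b)       # data-fidelity gradient
            for j in neighbours[i]:
                g += 2.0 * lam * (X[i] - X[j])   # fusion-penalty gradient
            grads.append(g)
        X = [x - step * g for x, g in zip(X, grads)]
    return X
```

With two frames that each see only part of the signal, the fusion term lets each frame borrow the components observed by its neighbour, which is the essence of the joint-recovery argument.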
Next, we consider the recovery of curves/surfaces from few sampled points. We model the curves as the zero-level set of a trigonometric polynomial, whose bandwidth controls the complexity of the curve. We present theoretical results for the minimum number of samples required to uniquely identify the curve. We show that the null-space vectors of high dimensional feature maps of these points can be used to recover the curve. The method is demonstrated on the recovery of the structure of DNA filaments from a few clicked points. This idea is then extended to recover data lying on a high-dimensional surface from few measurements. The formulated algorithm has similarities to our algorithm for recovering points on a manifold. Hence, we apply the above ideas to the cardiac MRI reconstruction problem, and are able to show better image quality with reduced computational complexity.
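The null-space idea admits a compact sketch: stack the feature maps of the sampled points into a matrix and read the curve's coefficients off its null vector. Monomial features are used below for simplicity; the thesis works with trigonometric polynomials, but the null-space principle is the same.

```python
import numpy as np

def curve_from_nullspace(pts, degree=2):
    """Recover an implicit curve {f = 0} through sampled points, with f a
    bivariate polynomial of the given degree, as the null vector of the
    feature-map matrix (via SVD)."""
    feats = np.array([[x**i * y**j for i in range(degree + 1)
                       for j in range(degree + 1 - i)] for x, y in pts])
    _, _, Vt = np.linalg.svd(feats)
    return Vt[-1]  # right-singular vector for the smallest singular value

def eval_curve(coef, pt, degree=2):
    """Evaluate the recovered implicit polynomial at a point."""
    x, y = pt
    feats = [x**i * y**j for i in range(degree + 1)
             for j in range(degree + 1 - i)]
    return float(np.dot(coef, feats))
```

For eight points on the unit circle, the recovered null vector is (up to scale) the coefficient vector of x^2 + y^2 - 1, so the implicit polynomial vanishes at any further circle point.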
Finally, we consider the case where the data are organized into clusters. The goal is to recover the true clustering of the data, even when a few features of each data point are unknown. We propose a fusion-penalty-based optimization problem to cluster data reliably in the presence of missing entries, and present theoretical guarantees for recovery of the correct clusters. We then propose a computationally efficient algorithm to solve a relaxation of this problem, and demonstrate that it reliably recovers the true clusters in the presence of large fractions of missing entries on simulated and real datasets.
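A minimal sketch of fusion-penalty clustering with missing entries follows, assuming a sum-of-norms penalty applied only to observed entries' residuals, precomputed pairwise weights, and a plain subgradient solver; the thesis's exact formulation and algorithm may differ.

```python
import numpy as np

def cluster_with_missing(X, mask, weights, lam=0.5, iters=3000, step=0.01, tol=0.3):
    """Sum-of-norms ('fusion') clustering with missing entries: minimise
        sum_i ||mask_i * (u_i - x_i)||^2 + lam * sum_(i,j) w_ij ||u_i - u_j||
    by subgradient descent, then group points whose centroids u_i coincide
    up to tol. Illustrative sketch, not the thesis's exact algorithm."""
    n, _ = X.shape
    target = np.where(mask, X, 0.0)
    U = target.copy()
    for _ in range(iters):
        G = 2.0 * mask * (U - target)                # fidelity on observed entries
        for i in range(n):
            for j in range(n):
                if i != j and weights[i, j] > 0:
                    diff = U[i] - U[j]
                    nrm = np.linalg.norm(diff)
                    if nrm > 1e-9:                   # subgradient of the norm
                        G[i] += lam * weights[i, j] * diff / nrm
        U -= step * G
    labels = -np.ones(n, dtype=int)                  # group fused centroids
    k = 0
    for i in range(n):
        if labels[i] == -1:
            for j in range(i, n):
                if labels[j] == -1 and np.linalg.norm(U[j] - U[i]) < tol:
                    labels[j] = k
            k += 1
    return labels
```

The unobserved coordinate of a point is pinned down purely by the fusion term pulling it toward its cluster mates, which is how the penalty compensates for missing entries.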
This work thus results in several theoretical insights and solutions to different practical problems which involve reconstructing and analyzing data with missing entries. The fusion penalties that are used in each of the above models are obtained directly as a result of model assumptions. The proposed algorithms show very promising results on several real datasets, and we believe that they are general enough to be easily extended to several other practical applications.
|
53 |
Maximum Entropy Regularisation Applied to Ultrasonic Image Reconstruction. Battle, David John. January 1999 (has links)
Image reconstruction, in common with many other inverse problems, is often mathematically ill-posed in the sense that solutions are neither stable nor unique. Ultrasonic image reconstruction is particularly notorious in this regard, with narrow transducer bandwidths and limited, sometimes sparsely sampled, apertures posing formidable difficulties for conventional signal processing. To overcome these difficulties, some form of regularisation is mandatory, whereby the ill-posed problem is restated as a closely related, well-posed problem and then solved uniquely. This thesis explores the application of maximum entropy (MaxEnt) regularisation to the problem of reconstructing complex-valued imagery from sparsely sampled coherent ultrasonic field data, with particular emphasis on three-dimensional problems in the non-destructive evaluation (NDE) of materials. MaxEnt has not previously been applied to this class of problem, yet in comparison with many other approaches to image reconstruction it emerges as the clear leader in terms of resolution and overall image quality. To account for this performance, it is argued that the default image model used with MaxEnt is particularly meaningful in cases of ultrasonic scattering by objects embedded in homogeneous media. To establish physical and mathematical insight into the forward problem, linear equations describing scattering from penetrable and impenetrable objects are first derived using the Born and physical optics approximations respectively. These equations are then expressed as a shift-invariant computational model that explicitly incorporates sparse sampling. To validate this model, time-domain scattering responses are computed and compared with analytical solutions for a simple canonical test case drawn from the field of NDE. The responses computed via the numerical model accurately reproduce the analytical responses.
To solve inverse scattering problems via MaxEnt, the robust Cambridge algorithm is generalised to the complex domain and extended to handle broadband (multiple-frequency) data. Two versions of the augmented algorithm are then compared with a range of other algorithms, including several linearly regularised algorithms and lastly, due to its acknowledged status as a competitor with MaxEnt in radio-astronomy, the non-linear CLEAN algorithm. These comparisons are made through simulated 3-D imaging experiments under conditions of both complete and sparse aperture sampling with low and high levels of additive Gaussian noise. As required in any investigation of inverse problems, the experimental confirmation of algorithmic performance is emphasised, and two common imaging geometries relevant to NDE are selected for this purpose. In monostatic synthetic aperture imaging experiments involving side-drilled holes in an aluminium plate and test objects immersed in H2O, MaxEnt image reconstruction is demonstrated to be robust against grating-lobe and side-lobe formation, in addition to temporal bandwidth restriction. This enables efficient reconstruction of 2-D and 3-D images from small numbers of discrete samples in the spatial and frequency domains. The thesis concludes with a description of the design and testing of a novel polyvinylidene fluoride (PVDF) bistatic array transducer that offers advantages over conventional point-sampled arrays in terms of construction simplicity and signal-to-noise ratio. This ultra-sparse orthogonal array is the only one of its kind yet demonstrated, and was made possible by MaxEnt signal processing.
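The entropic regularisation described above can be sketched as a small numerical experiment. This is a toy real-valued analogue, assuming a simple blur matrix, a uniform default image, and plain projected gradient descent; the thesis itself uses the Cambridge algorithm extended to complex, broadband data.

```python
import numpy as np

def maxent_reconstruct(A, b, m, alpha=0.01, iters=2000, step=0.1):
    """Maximum-entropy regularised reconstruction: minimise
        0.5 * ||A x - b||^2 + alpha * sum_i (x_i log(x_i/m_i) - x_i + m_i)
    over positive images x, where m is the default (prior) image toward
    which the entropy term pulls unconstrained pixels."""
    x = m.astype(float).copy()
    for _ in range(iters):
        g = A.T @ (A @ x - b) + alpha * np.log(x / m)
        x = np.clip(x - step * g, 1e-8, None)  # keep the image positive
    return x
```

For small alpha the data term dominates wherever the measurements are informative, while the entropy term supplies positivity and a default value elsewhere, which is the mechanism credited for MaxEnt's side-lobe suppression.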
|
54 |
MICROWAVE IMAGING OF BIOLOGICAL TISSUES: applied toward breast tumor detection. Gunnarsson, Tommy. January 2007 (has links)
Microwave imaging is an efficient diagnostic modality for non-invasively visualizing dielectric contrasts in non-metallic bodies, and interest in the field has grown over the last decades. Many biomedical applications have been pursued, most recently breast tumor detection.

Many groups are working in the field, for several reasons. Breast cancer is a major global health problem for women: it is the second most common cancer in women, causing 0.3 % of yearly female deaths in Sweden. Medical imaging is considered the most effective way of diagnosing breast tumors, and X-ray mammography is the dominant technique. However, this modality still has limitations. Many women, mostly young, have radiographically dense breasts, meaning the breast contains a high proportion of fibroglandular tissue. Its density is then very similar to that of a tumor, making diagnosis difficult. In such cases, alternative modalities such as Magnetic Resonance Imaging (MRI) with contrast enhancement and ultrasound imaging are used, but these are not suitable for large-scale screening programs. Another limitation is mammography's false-negative and false-positive rates: in general, 5-15 % of tumors are not detected, and many cases must go through a breast biopsy to verify a diagnosis. Finally, mammography uses breast compression, which is sometimes painful, and ionizing X-rays. The great potential of microwave imaging lies in the reported high contrast in complex permittivity between fibroglandular and tumor tissues in the breast, and in the fact that it is a non-ionizing method that will probably be rather inexpensive.

The goal of this work is to develop a microwave imaging system able to reconstruct quantitative images of a female breast. Within this goal, this Licentiate thesis contains a brief review of ongoing research in microwave imaging of biological tissues, with a major focus on the breast tumor application; both imaging algorithms and experimental setups are included. A feasibility study analyzes what response levels, in terms of signal properties, could be expected in a breast tumor detection application, and the usability of a 3D microwave propagation simulator (QW3D) in setup development is also investigated. This is done using a simple antenna setup with a breast phantom and different tumor positions. The results make clear that a tumor's presence produces strong responses, and that the diffracted responses carry strong information about inhomogeneities inside the breast. The second part of this Licentiate thesis was done in collaboration between Mälardalen University and Supélec. Starting from the existing planar 2.45 GHz microwave camera and the iterative non-linear Newton-Kantorovich code developed at the Département de Recherches en Electromagnétisme (DRE) at Supélec, a new platform for both real-time qualitative imaging and quantitative imaging of inhomogeneous objects is investigated, with a focus on breast tumor detection. So far, the tomographic performance of the planar camera has been verified in simulations through comparison with other setups. Good calibration is observed, but experimental work on phantom development and related issues is still needed before experimental results on breast tumor detection can be obtained.
|
55 |
The inverse problem of fiber Bragg gratings. Jin, Hai. January 2006 (has links)
Thesis (Ph. D.)--University of Washington, 2006. Vita. Includes bibliographical references (p. 140-144).
|
57 |
Bayesian inference for source determination in the atmospheric environment. Keats, William Andrew. January 2009 (has links)
In the event of a hazardous release (chemical, biological, or radiological) in an urban environment, monitoring agencies must have the tools to locate and characterize the source of the emission in order to respond and minimize damage. Given a finite and noisy set of concentration measurements, determining the source location, strength and time of release is an ill-posed inverse problem. We treat this problem using Bayesian inference, a framework under which uncertainties in modelled and measured concentrations can be propagated, in a consistent, rigorous manner, toward a final probabilistic estimate for the source.
The Bayesian methodology operates independently of the chosen dispersion model, meaning it can be applied equally well to problems in urban environments, at regional scales, or at global scales. Both Lagrangian stochastic (particle-tracking) and Eulerian (fixed-grid, finite-volume) dispersion models have been used successfully. Calculations are accomplished efficiently by using adjoint (backward) dispersion models, which reduces the computational effort required from calculating one [forward] plume per possible source configuration to calculating one [backward] plume per detector. Markov chain Monte Carlo (MCMC) is used to efficiently sample from the posterior distribution for the source parameters; both the Metropolis-Hastings and hybrid Hamiltonian algorithms are used.
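A minimal sketch of the sampling step described above: random-walk Metropolis over source location and strength, with a toy algebraic kernel standing in for the dispersion model (the thesis uses adjoint dispersion solvers, and also a hybrid Hamiltonian sampler, so everything below is an illustrative stand-in).

```python
import numpy as np

def metropolis_source(detectors, obs, sigma, n_samp=20000, step=0.3, seed=0):
    """Random-walk Metropolis sampling of the posterior over a 2-D source
    location s and strength q, given noisy detector concentrations.
    Forward model: toy kernel c = q / (1 + ||x_d - s||^2)."""
    rng = np.random.default_rng(seed)

    def log_post(theta):
        s, q = theta[:2], theta[2]
        if q <= 0:
            return -np.inf  # flat prior on location, positivity on strength
        pred = q / (1.0 + np.sum((detectors - s) ** 2, axis=1))
        return -0.5 * np.sum((obs - pred) ** 2) / sigma ** 2

    theta = np.array([0.0, 0.0, 1.0])
    lp = log_post(theta)
    samples = []
    for _ in range(n_samp):
        prop = theta + step * rng.standard_normal(3)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:  # Metropolis accept/reject
            theta, lp = prop, lp_prop
        samples.append(theta.copy())
    return np.array(samples[n_samp // 2:])  # discard burn-in
```

The posterior samples summarize both the best estimate of the source parameters and their uncertainty, which is the payoff of the probabilistic treatment.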
In this thesis, four applications falling under the rubric of source determination are addressed: dispersion in highly disturbed flow fields characteristic of built-up (urban) environments; dispersion of a nonconservative scalar over flat terrain in a statistically stationary and horizontally homogeneous (turbulent) wind field; optimal placement of an auxiliary detector using a decision-theoretic approach; and source apportionment of particulate matter (PM) using a chemical mass balance (CMB) receptor model. For the first application, the data sets used to validate the proposed methodology include a water-channel simulation of the near-field dispersion of contaminant plumes in a large array of building-like obstacles (Mock Urban Setting Trial) and a full-scale field experiment (Joint Urban 2003) in Oklahoma City. For the second and third applications, the background wind and terrain conditions are based on those encountered during the Project Prairie Grass field experiment; mean concentration and turbulent scalar flux data are synthesized using a Lagrangian stochastic model where necessary. In the fourth and final application, Bayesian source apportionment results are compared to the US Environmental Protection Agency's standard CMB model using a test case involving PM data from Fresno, California. For each of the applications addressed in this thesis, combining Bayesian inference with appropriate computational techniques results in a computationally efficient methodology for performing source determination.
|
59 |
Inverse Problems in Portfolio Selection: Scenario Optimization Framework. Bhowmick, Kaushiki. 10 1900 (has links)
A number of researchers have proposed Bayesian methods for portfolio selection that combine statistical information from financial time series with the prior beliefs of the portfolio manager, in an attempt to reduce the impact of estimation errors in distribution parameters on the portfolio selection process, and the effect of these errors on the out-of-sample performance of 'optimal' portfolios.
This thesis seeks to reverse the direction of this process, inferring portfolio managers’ probabilistic beliefs about future distributions from the portfolios that they hold. We refer to the process of portfolio selection as the forward problem and the process of retrieving the implied probabilities, given an optimal portfolio, as the inverse problem. We attempt to solve the inverse problem in a general setting by using a finite set of scenarios. In a discrete-time framework, we can retrieve the probability associated with each scenario, which reveals the views of the portfolio manager implicit in the choice of a portfolio considered optimal.
We conduct the implied views analysis for portfolios selected using expected utility maximization, where the investor's utility function is a globally non-optimal concave function, and in the mean-variance setting with the covariance matrix assumed to be given.
We then apply the models developed for the inverse problem to empirical data to retrieve the views implicit in a given portfolio, and attempt to determine whether incorporating these views in portfolio selection improves out-of-sample portfolio performance.
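The inverse step can be sketched for a small scenario set: given a portfolio held as optimal for expected utility under a budget constraint, the implied scenario probabilities are read off the first-order optimality conditions. The exponential utility, the toy scenario returns, and the linear-algebra recovery below are illustrative assumptions, not the thesis's exact models.

```python
import numpy as np

def implied_probabilities(R, w_star):
    """Inverse problem sketch: given scenario returns R (S scenarios x n assets)
    and a portfolio w_star assumed optimal for expected exponential utility
    U(x) = -exp(-x) subject to sum(w) = 1, recover scenario probabilities from
    the first-order conditions
        sum_s p_s U'(R_s . w*) R_s = lam * 1,   sum_s p_s = 1."""
    S, n = R.shape
    marg = np.exp(-R @ w_star)             # marginal utilities U'(x_s)
    A = np.zeros((n + 1, S + 1))
    A[:n, :S] = (R * marg[:, None]).T      # column s: U'(x_s) * R_s
    A[:n, S] = -1.0                        # -lam in each asset equation
    A[n, :S] = 1.0                         # probabilities sum to one
    b = np.zeros(n + 1)
    b[n] = 1.0
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:S]                         # implied scenario probabilities
```

Round-tripping through the forward problem (optimize under known probabilities, then invert from the resulting portfolio) recovers those probabilities when the system is exactly determined, which is the consistency the implied-views analysis relies on.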
|
60 |
On the inverse shortest path length problem. Hung, Cheng-Huang. 01 December 2003 (has links)
No description available.
|