Numerous scientific applications have seen the rise of massive inverse problems, in which there is too much data to apply an all-at-once strategy to compute a solution. Moreover, standard tools for regularizing ill-posed inverse problems become infeasible when the problem is too large. This thesis focuses on the development of row-action methods, which can be used to iteratively solve inverse problems when it is not possible to access the entire data set or forward model simultaneously. We investigate these techniques for linear inverse problems and for separable, nonlinear inverse problems, where the objective function is nonlinear in one set of parameters and linear in another. For the linear problem, we perform a convergence analysis of these methods, which shows favorable asymptotic and initial convergence properties, as well as a step-size-dependent trade-off between the convergence rate and the precision of the iterates. These row-action methods can be interpreted as stochastic Newton and stochastic quasi-Newton approaches on a reformulation of the least squares problem, and they can be analyzed as limited-memory variants of the recursive least squares algorithm. For ill-posed problems, we introduce sampled regularization parameter selection techniques, including sampled variants of the discrepancy principle, the unbiased predictive risk estimator, and generalized cross-validation. We demonstrate the effectiveness of these methods on examples from super-resolution imaging, tomographic reconstruction, and image classification.

Doctor of Philosophy

Numerous scientific problems have seen the rise of massive data sets. One example is super-resolution, where many low-resolution images are used to construct a high-resolution image; another is 3-D medical imaging, where a 3-D image of an object of interest with hundreds of millions of voxels is reconstructed from X-rays passing through that object. This work focuses on row-action methods, which solve these problems numerically by repeatedly using small samples of the data, avoiding the computational burden of using the entire data set at once. When the data contain measurement errors, the computed solution can become contaminated with noise. Methods exist to handle this issue, but they are no longer feasible once the data set becomes massive. This dissertation develops techniques that prevent the solution from becoming contaminated with noise, even when the data set is immense. The methods developed in this work are applied to numerous scientific applications, including super-resolution imaging, tomography, and image classification.
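To make the row-action idea summarized above concrete, the following is a minimal NumPy sketch of a randomized block Kaczmarz-type iteration: at each step only a small sample of rows of the least squares problem is accessed, never the full matrix at once. The function name, step-size parameter, block size, and test problem are illustrative assumptions for this sketch, not code or notation from the thesis itself.

```python
import numpy as np

def row_action_solve(A, b, n_iter=1000, step=1.0, block_size=1, seed=0):
    """Randomized block row-action (Kaczmarz-type) iteration for A x ~= b.

    Each step samples a random block of rows and moves the iterate toward
    the solution set of that block's equations, so the full matrix A and
    vector b are never used all at once.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(n_iter):
        idx = rng.choice(m, size=block_size, replace=False)
        A_s, b_s = A[idx, :], b[idx]      # sampled rows of the problem
        r = b_s - A_s @ x                 # residual on the sampled block
        # least-squares correction computed from the sampled block only
        x += step * np.linalg.pinv(A_s) @ r
    return x

# Small consistent test problem: the iterates should approach x_true.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 20))
x_true = rng.standard_normal(20)
b = A @ x_true
x_hat = row_action_solve(A, b, n_iter=2000, block_size=5)
print(np.linalg.norm(x_hat - x_true))
```

With block_size=1 and step=1.0 this reduces to the classical randomized Kaczmarz update; the step parameter illustrates the trade-off noted in the abstract, where smaller steps yield more precise iterates at the cost of slower convergence.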
Identifier | oai:union.ndltd.org:VTETD/oai:vtechworks.lib.vt.edu:10919/90377
Date | 19 June 2019 |
Creators | Slagel, Joseph Tanner |
Contributors | Mathematics, Chung, Julianne, Chung, Matthias, Gugercin, Serkan, Marzouk, Youssef, Tenorio, Luis |
Publisher | Virginia Tech |
Source Sets | Virginia Tech Theses and Dissertations
Detected Language | English |
Type | Dissertation |
Format | ETD, application/pdf |
Rights | In Copyright, http://rightsstatements.org/vocab/InC/1.0/ |