1

Robust Parameter Inversion Using Stochastic Estimates

Munster, Drayton William 10 January 2020 (has links)
For parameter inversion problems governed by systems of partial differential equations, such as those arising in Diffuse Optical Tomography (DOT), even the cost of repeated objective function evaluation can be overwhelming. Despite the linear (in the state variable) nature of the DOT problem, the nonlinear parameter inversion process is dominated by the computational burden of solving a large linear system for each source and frequency. To compute the Jacobian for use in Newton-type methods, an adjoint solve is required for each detector and frequency. When a three-dimensional tomography problem may have nearly 1,000 sources and detectors, the computational cost of an optimization routine is a large burden. While techniques from model order reduction can partially alleviate the computational cost, obtaining error bounds in parameter space is typically not feasible. In this work, we examine two different remedies based on stochastic estimates of the objective function.

In the first manuscript, we focus on maximizing the efficiency of using stochastic estimates by replacing our objective function with a surrogate objective function computed from a reduced order model (ROM). We use as few as a single sample to detect a misfit between the full-order and surrogate objective functions. Once a sufficiently large difference is detected, the ROM must be updated to reduce the error. We propose a new technique for improving the ROM with very few large linear solves. Using this technique, we observe a reduction of up to 98% in the number of large linear solves for a three-dimensional tomography problem.

In the second manuscript, we focus on establishing a robust algorithm. We propose a new trust region framework that replaces the objective function evaluations with stochastic estimates of the improvement factor and of the misfit between the model and objective function gradients. If these estimates satisfy a fixed multiplicative error bound with a high, but fixed, probability, we show that this framework converges almost surely to a stationary point of the objective function. We derive suitable bounds for the DOT problem and present results illustrating the robust nature of these estimates with only 10 samples per iteration.

/ Doctor of Philosophy /

For problems such as medical imaging, the process of reconstructing the state of a system from measurement data can be very expensive to compute. The ever-increasing need for high accuracy requires very large models. Reducing the computational burden by replacing the model with a specially constructed smaller model is an established and effective technique. However, it can be difficult to determine how well the smaller model matches the original. In this thesis, we examine two techniques for estimating the quality of a smaller model based on randomized combinations of sources and detectors. The first technique focuses on reducing the computational cost as much as possible: with the equivalent of a single randomized source, we show that this estimate is an effective measure of model quality. Coupled with a new technique for improving the smaller model, we demonstrate a highly efficient and robust method. The second technique prioritizes robustness. Its algorithm uses randomized combinations to estimate how the observations change for different system states. If these estimates are accurate with high probability, we show that the resulting method always finds a minimum misfit between predicted values and the observed data.
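The abstract does not spell out the estimator, but the idea of probing a misfit through randomized detector combinations can be illustrated with a Hutchinson-type sketch. Below, `residual_matrix` stands in for the detector-by-frequency misfit between predicted and observed data; the function name and setup are hypothetical illustrations, not the thesis's algorithm:

```python
import numpy as np

def stochastic_misfit_estimate(residual_matrix, n_samples=10, rng=None):
    """Estimate the total squared misfit ||R||_F^2 using random
    Rademacher combinations of the detectors (Hutchinson-style).

    Each sample costs one weighted combination of detector rows,
    rather than one evaluation per detector.
    """
    rng = np.random.default_rng(rng)
    n_detectors = residual_matrix.shape[0]
    estimate = 0.0
    for _ in range(n_samples):
        # Random +/-1 weights over the detectors (one "randomized detector")
        w = rng.choice([-1.0, 1.0], size=n_detectors)
        # E[ (w @ R) @ (w @ R) ] = trace(R R^T) = ||R||_F^2
        combined = w @ residual_matrix
        estimate += combined @ combined
    return estimate / n_samples
```

Because the probe weights are Rademacher, each sample is an unbiased estimate of the full squared misfit, which is why even a handful of samples (the abstract reports as few as one, or 10 per iteration in the trust region setting) can flag a discrepancy between the surrogate and full-order objectives.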
2

Randomized Algorithms for Preconditioner Selection with Applications to Kernel Regression

DiPaolo, Conner 01 January 2019 (has links)
The task of choosing a preconditioner M to use when solving a linear system Ax = b with iterative methods is often tedious, and most approaches remain ad hoc. This thesis presents a randomized algorithm to make this chore less painful through the use of randomized trace estimation. In particular, we show that the preconditioner stability || I - M^{-1}A ||_F, known to forecast preconditioner quality, can be computed in the time it takes to run a constant number of iterations of conjugate gradients, through the use of sketching methods. This is in spite of folklore suggesting the quantity is impractical to compute, and a proof we give that the quantity cannot be approximated in a useful amount of time by any deterministic algorithm. Using our estimator, we provide a method which can provably select a quality preconditioner among n candidates using floating-point operations commensurate with running about n log(n) steps of the conjugate gradients algorithm. In the absence of such a preconditioner among the candidates, our method can advise the practitioner to use no preconditioner at all. The algorithm is extremely easy to implement and trivially parallelizable, and along the way we provide theoretical improvements to the literature on trace estimation. In empirical experiments, we show the selection method can be quite helpful. For example, it allows us to create, to the best of our knowledge, the first preconditioning method for kernel regression that never uses more iterations than the non-preconditioned analog in standard settings.
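The stability quantity lends itself to a compact sketch: since || I - M^{-1}A ||_F^2 = tr(E^T E) with E = I - M^{-1}A, a Hutchinson-type probe needs only one multiply by A and one preconditioner solve per sample, never forming E explicitly. The following is a minimal illustration under that identity; the names and the simple comparison loop are hypothetical, not the thesis's actual selection algorithm:

```python
import numpy as np

def preconditioner_stability(A, apply_M_inv, n_probes=20, rng=None):
    """Hutchinson estimate of || I - M^{-1} A ||_F^2.

    apply_M_inv(v) should return M^{-1} v. Each probe costs one
    matrix-vector product with A plus one preconditioner solve.
    """
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    total = 0.0
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe
        e = z - apply_M_inv(A @ z)            # (I - M^{-1} A) z
        total += e @ e                        # z^T E^T E z, unbiased for tr(E^T E)
    return total / n_probes

def select_preconditioner(A, candidates, rng=None):
    """Pick the candidate (name, apply_M_inv) with the smallest
    estimated stability; a naive stand-in for the provable method."""
    scores = {name: preconditioner_stability(A, f, rng=rng)
              for name, f in candidates}
    return min(scores, key=scores.get)
```

For a diagonal A with a Jacobi (diagonal) preconditioner, M^{-1}A = I exactly, so the estimate is zero regardless of the probes; comparing that against the identity "no preconditioner" shows the selection behavior the abstract describes.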
