
PHYSICS INFORMED MACHINE LEARNING METHODS FOR UNCERTAINTY QUANTIFICATION

Sharmila Karumuri (14226875), 17 May 2024
The need to carry out uncertainty quantification (UQ) is ubiquitous in science and engineering. However, carrying out UQ for real-world problems is not straightforward and requires a substantial computational budget and resources. The objective of this thesis is to develop computationally efficient machine-learning approaches to UQ. Specifically, we addressed two problems.

The first problem is that it is difficult to carry out uncertainty propagation (UP) in systems governed by elliptic PDEs with spatially varying uncertain fields in the coefficients and boundary conditions. Because the uncertainties are functional, the number of uncertain parameters is large. In these situations, carrying out UP requires solving the PDE many times to obtain convergent statistics of the quantity governed by the PDE, and repeatedly invoking a numerical solver is computationally burdensome. To address this, we proposed to learn a surrogate of the PDE solution in a data-free manner by exploiting the physics available in the form of the PDE itself. We represented the solution as a deep neural network parameterized in both space and the uncertain parameters, and we introduced a physics-informed loss function derived from variational principles to learn the network parameters. The accuracy of the learned surrogate is validated against ground-truth estimates from a numerical solver. We demonstrated the merit of the approach by solving UP problems and inverse problems faster than with a standard numerical solver.

The second problem addressed in this thesis concerns inverse problems. The state-of-the-art approach poses the inverse problem as a Bayesian inference task and estimates the distribution of the input parameters conditioned on the observed data (the posterior). Markov chain Monte Carlo (MCMC) and variational inference methods provide ways to estimate the posterior, but these inference techniques must be re-run whenever a new set of observed data arrives, leading to a computational burden. To address this, we proposed to learn a Bayesian inverse map, i.e., the map from the observed data to the posterior. This map enables on-the-fly inference. We demonstrated the approach on various examples and validated the posteriors learned by our approach against ground-truth posteriors from MCMC.
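To make the first idea concrete, the sketch below shows a deep-Ritz-style, data-free surrogate u_theta(x, xi) for a toy 1D elliptic problem, trained by minimizing a Monte Carlo estimate of the variational energy rather than by fitting solver data. The specific random field a_field, the constant forcing f_source, the network architecture, and the hard boundary-condition enforcement are illustrative assumptions, not the thesis's actual problem setup or code.

```python
# Minimal sketch (assumptions, not the thesis code): surrogate u_theta(x, xi) for
#   -(a(x; xi) u'(x))' = f(x) on (0, 1),  u(0) = u(1) = 0,
# trained by minimizing the variational (Ritz) energy  E_xi E_x[ 0.5*a*(du/dx)^2 - f*u ].
import torch
import torch.nn as nn

torch.manual_seed(0)

class Surrogate(nn.Module):
    """u_theta(x, xi): takes space x and uncertain parameters xi, returns u."""
    def __init__(self, xi_dim=2, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1 + xi_dim, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1),
        )

    def forward(self, x, xi):
        u = self.net(torch.cat([x, xi], dim=-1))
        return x * (1.0 - x) * u  # hard-enforce u(0) = u(1) = 0

def a_field(x, xi):
    # Toy spatially varying log-field parameterized by xi (assumption).
    return torch.exp(xi[:, :1] * torch.sin(torch.pi * x) + xi[:, 1:2] * torch.cos(torch.pi * x))

def f_source(x):
    return torch.ones_like(x)  # constant forcing (assumption)

model = Surrogate()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(5000):
    x = torch.rand(256, 1, requires_grad=True)  # collocation points in (0, 1)
    xi = torch.randn(256, 2)                    # samples of the uncertain parameters
    u = model(x, xi)
    du_dx = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    # Monte Carlo estimate of the energy functional; no PDE solves or labeled data needed.
    loss = (0.5 * a_field(x, xi) * du_dx**2 - f_source(x) * u).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Once trained, evaluating `model(x, xi)` for many samples of xi gives fast UP statistics without returning to a numerical solver.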

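The second idea, a learned map from data to posterior, can be illustrated with an amortized-inference sketch: a network that outputs the parameters of an approximate posterior q(theta | y) for any observed y, so new data requires only a forward pass instead of a fresh MCMC run. The toy linear forward model, Gaussian prior, noise level, and diagonal-Gaussian posterior family below are assumptions for illustration; the thesis's actual Bayesian inverse map and training objective may differ.

```python
# Minimal sketch (assumptions, not the thesis method): an amortized "inverse map"
# y -> (mean, log_std) of a diagonal-Gaussian approximation of p(theta | y).
import torch
import torch.nn as nn

torch.manual_seed(0)

theta_dim, y_dim, sigma_noise = 2, 5, 0.1

def forward_model(theta):
    # Toy linear forward model G(theta) (assumption).
    return torch.linspace(0.5, 1.5, y_dim).unsqueeze(0) * theta[:, :1] + theta[:, 1:2]

class InverseMap(nn.Module):
    """Maps observed data y to (mean, log_std) of an approximate posterior."""
    def __init__(self, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(y_dim, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, 2 * theta_dim),
        )

    def forward(self, y):
        mean, log_std = self.net(y).chunk(2, dim=-1)
        return mean, log_std

model = InverseMap()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(5000):
    theta = torch.randn(256, theta_dim)                                # draw theta from the prior N(0, I)
    y = forward_model(theta) + sigma_noise * torch.randn(256, y_dim)   # simulate data from the likelihood
    mean, log_std = model(y)
    # Maximize E[log q(theta | y)] over simulated pairs (Gaussian NLL up to a constant).
    loss = (log_std + 0.5 * ((theta - mean) / log_std.exp()) ** 2).sum(dim=-1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# On-the-fly inference for new data: one forward pass, no re-running MCMC.
y_new = torch.randn(1, y_dim)
post_mean, post_log_std = model(y_new)
```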