31

Uncertainty in inverse elasticity problems

Gendin, Daniel I. 27 September 2021 (has links)
The non-invasive differential diagnosis of breast masses through ultrasound imaging motivates the following class of elastic inverse problems: given one or more measurements of the displacement field within an elastic material, determine the material property distribution within the material. This thesis focuses on uncertainty quantification in inverse problem solutions, with application to inverse problems in linear and nonlinear elasticity. We consider the inverse nonlinear elasticity problem in the context of Bayesian statistics. We show the well-known result that computing the Maximum A Posteriori (MAP) estimate is consistent with previous optimization formulations of the inverse elasticity problem. We show further that certainty in this estimate may be quantified using concepts from information theory, specifically information gain as measured by the Kullback-Leibler (K-L) divergence and mutual information. A particular challenge in this context is the computational expense of computing these quantities. A key contribution of this work is a novel approach that exploits the mathematical structure of the inverse problem and properties of the conjugate gradient method to make these calculations feasible. A focus of this work is estimating the spatial distribution of the elastic nonlinearity of a material. Measurement sensitivity to the nonlinearity is much higher for large (finite) strains than for smaller strains, so large strains tend to be used for such measurements. Measurements of larger deformations, however, tend to show greater levels of noise. A key finding of this work is that, when identifying nonlinear elastic properties, information gain can be used to characterize a trade-off between larger strains with higher noise levels and smaller strains with lower noise levels. These results can be used to inform experimental design. An approach often used to estimate both linear and nonlinear elastic property distributions is to do so sequentially: use a small-strain deformation to estimate the linear properties, and a large-strain deformation to estimate the nonlinearity. A key finding of this work is that accurate characterization of the joint posterior probability distribution over both linear and nonlinear elastic parameters requires that the estimates be performed jointly rather than sequentially. All the methods described above are demonstrated on problems in elasticity with both simulated data and clinically measured (in vivo) data. In the context of the clinical data, we evaluate the repeatability of measurements and parameter reconstructions in a clinical setting.
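As a hedged illustration of the information-gain idea (not the thesis code; the linearized sensitivity matrix, prior, and noise levels below are made-up assumptions), the K-L divergence between a Gaussian posterior and a Gaussian prior has a closed form, which lets one compare a small-strain/low-noise scenario against a large-strain/high-noise one:

```python
import numpy as np

rng = np.random.default_rng(0)
n_param, n_data = 20, 40
G = rng.standard_normal((n_data, n_param))  # toy linearized sensitivity matrix
C_prior = np.eye(n_param)                   # assumed unit Gaussian prior

def info_gain(G, noise_std):
    """KL(posterior || prior) in nats for a linear-Gaussian model.

    The mean-shift term is omitted, so this measures how much the data
    could shrink the prior uncertainty (sensitivity only).
    """
    C_post = np.linalg.inv(G.T @ G / noise_std**2 + np.linalg.inv(C_prior))
    _, logdet_post = np.linalg.slogdet(C_post)
    _, logdet_prior = np.linalg.slogdet(C_prior)
    return 0.5 * (np.trace(np.linalg.solve(C_prior, C_post)) - n_param
                  + logdet_prior - logdet_post)

# Caricature of the strain/noise trade-off: a larger strain scales up the
# sensitivity (4 * G) but also comes with noisier measurements.
print(info_gain(G, noise_std=0.05))      # small strain, low noise
print(info_gain(4 * G, noise_std=0.5))   # large strain, high noise
```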
32

Recovering signals in physiological systems with large datasets

Pendar, Hodjat 11 September 2020 (has links)
In many physiological studies, variables of interest are not directly accessible and must be estimated indirectly from noisy measured signals. Here, we introduce two empirical methods to estimate the true physiological signals from indirectly measured, noisy data. The first method is an extension of Tikhonov regularization to large-scale problems, using a sequential update approach. In the second method, we improve the conditioning of the problem by assuming that the input is uniform over a known time interval, and then use a least-squares method to estimate the input. These methods were validated computationally and experimentally by applying them to flow-through respirometry data. Specifically, we infused CO2 into a flow-through respirometry chamber in a known pattern and used the methods to recover the known input from the recorded data. The results from these experiments indicate that the methods are capable of sub-second accuracy. We also applied the methods to respiratory data from a grasshopper to investigate the exact timing of abdominal pumping, spiracular opening, and CO2 emission. The methods can be used more generally for input estimation in any linear system. / Master of Science / The goal of an inverse problem is to determine some signal or parameter of interest that is not directly accessible but can be obtained from a measurable observed effect or processed version of it. Finding the gas exchange signal of an animal is an example of an inverse problem. One method to noninvasively measure the gas exchange rate of animals is to put them in a respirometry chamber, flow air through the chamber, and measure the concentration of the respiratory gases outside the chamber. However, because the gases mix in the chamber and flow gradually through the gas analyzer, the pattern of the measured gas concentration can be dramatically different from the true pattern of the animal's instantaneous gas exchange. In this thesis, we present two methods to recover the true signal from the recorded data (i.e., for inverse reconstruction), and we evaluate them computationally and experimentally.
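As a minimal sketch of the underlying idea (not the thesis implementation; the exponential washout kernel, its time constant, and the regularization weight are assumptions), Tikhonov-regularized deconvolution recovers a sharp input pulse from the smeared chamber output:

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(1)
n, dt = 500, 0.01
t = np.arange(n) * dt

# Assumed chamber impulse response: exponential washout with unit area.
tau = 0.5
h = np.exp(-t / tau) * dt / tau
H = toeplitz(h, np.zeros(n))                     # lower-triangular convolution matrix

u_true = ((t > 1.0) & (t < 1.5)).astype(float)   # known CO2 injection pulse
y = H @ u_true + 0.01 * rng.standard_normal(n)   # smeared, noisy measurement

# Tikhonov regularization: min ||H u - y||^2 + lam * ||D u||^2, with D a
# first-difference operator penalizing rough, noise-driven solutions.
D = np.eye(n) - np.eye(n, k=-1)
lam = 1e-2
u_hat = np.linalg.solve(H.T @ H + lam * D.T @ D, H.T @ y)
```

Raising `lam` trades fidelity for smoothness; the thesis's sequential-update extension targets problems too large for this dense-matrix solve.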
33

Adjoint based solution and uncertainty quantification techniques for variational inverse problems

Hebbur Venkata Subba Rao, Vishwas 25 September 2015 (has links)
Variational inverse problems integrate computational simulations of physical phenomena with physical measurements in an informational feedback control system. Control parameters of the computational model are optimized such that the simulation results fit the physical measurements. The solution procedure is computationally expensive, since it involves running the simulation computer model (the forward model) and the associated adjoint model multiple times. In practice, our knowledge of the underlying physics is incomplete and hence the associated computer model is laden with model errors. Similarly, it is not possible to measure the physical quantities exactly, and hence the measurements are associated with data errors. The errors in data and model adversely affect the inference solutions. This work develops methods to address the challenges posed by the computational costs and by the impact of data and model errors in solving variational inverse problems. The variational inverse problems of interest here are formulated as optimization problems constrained by partial differential equations (PDEs). The solution process requires multiple evaluations of the constraints, and therefore multiple solutions of the associated PDE. To alleviate the computational costs we develop a parallel-in-time discretization algorithm based on a nonlinear optimization approach. As in the parareal approach, the time interval is partitioned into subintervals, and local time integrations are carried out in parallel. Solution continuity equations across interval boundaries are added as constraints. All the computational steps (forward solutions, gradients, and Hessian-vector products) involve only ideally parallel computations and therefore are highly scalable. This work develops a systematic mathematical framework to compute the impact of data and model errors on the solution to variational inverse problems. The computational algorithm makes use of first- and second-order adjoints and provides an a posteriori error estimate for a quantity of interest defined on the inverse solution (i.e., an aspect of the inverse solution). We illustrate the estimation algorithm on a shallow water model and on the Weather Research and Forecasting (WRF) model. The presence of outliers in measurement data is common, and it negatively impacts the solution to variational inverse problems. The traditional approach, in which the inverse problem is formulated as a minimization problem in the $L_2$ norm, is especially sensitive to large data errors. To alleviate the impact of data outliers we propose to use robust norms such as the $L_1$ and Huber norms in data assimilation. This work develops a systematic mathematical framework to perform three- and four-dimensional variational data assimilation using the $L_1$ and Huber norms. The power of this approach is demonstrated by solving data assimilation problems in which the measurements contain outliers. / Ph. D.
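As a hedged, minimal sketch of the robust-norm idea (a toy comparison, not the thesis's 3D/4D-Var machinery; the residual values are made up), the Huber penalty grows only linearly for large residuals, so a single outlier no longer dominates the data-misfit term:

```python
import numpy as np

def huber(r, delta=1.0):
    """Huber penalty: quadratic for |r| <= delta, linear beyond."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r**2, delta * (a - 0.5 * delta))

residuals = np.array([0.1, -0.2, 0.05, 8.0])        # last entry is an outlier
print("L2 cost   :", 0.5 * np.sum(residuals**2))    # dominated by the outlier
print("Huber cost:", np.sum(huber(residuals)))      # outlier enters linearly
```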
34

An information field theory approach to engineering inverse problems

Alexander M Alberts (18398166) 18 April 2024 (has links)
<p dir="ltr">Inverse problems in infinite dimensions are ubiquitously encountered across the scien- tific disciplines. These problems are defined by the need to reconstruct continuous fields from incomplete, noisy measurements, which oftentimes leads to ill-posed problems. Almost universally, the solutions to these problems are constructed in a Bayesian framework. How- ever, in the infinite-dimensional setting, the theory is largely restricted to the Gaussian case, and the treatment of prior physical knowledge is lacking. We develop a new framework for Bayesian reconstruction of infinite-dimensional fields which encodes our physical knowledge directly into the prior, while remaining in the continuous setting. We then prove various characteristics of the method, including situations in which the problems we study have unique solutions under our framework. Finally, we develop numerical sampling schemes to characterize the various objects involved.</p>
35

Multi-coefficient Dirichlet Neumann type elliptic inverse problems with application to reflection seismology

Kulkarni, Mandar S. January 2009 (has links) (PDF)
Thesis (Ph. D.)--University of Alabama at Birmingham, 2009. / Title from PDF t.p. (viewed July 21, 2010). Additional advisors: Thomas Jannett, Tsun-Zee Mai, S. S. Ravindran, Günter Stolz, Gilbert Weinstein. Includes bibliographical references (p. 59-64).
36

Seismic hazard site assessment in Kitimat, British Columbia, via Bernstein-polynomial-based inversion of surface-wave dispersion

Gosselin, Jeremy M. 20 December 2016 (has links)
This thesis applies a fully nonlinear Bayesian inversion methodology to estimate shear-wave velocity (Vs) profiles and uncertainties from surface-wave dispersion data extracted from ambient seismic noise. In the inversion, the Vs profile is parameterized using a Bernstein polynomial basis, which efficiently characterizes general depth-dependent gradients in the soil/sediment column. Bernstein polynomials provide a stable parameterization in that small perturbations to the model parameters (basis-function coefficients) result in only small perturbations to the Vs profile. The inversion solution is defined in terms of the marginal posterior probability of Vs as a function of depth, estimated using Metropolis-Hastings sampling with parallel tempering. This methodology is validated via inversion of synthetic dispersion data, as well as of previously considered data inverted using different parameterizations. The approach considered here is better suited than layered modelling approaches to applications where smooth gradients in geophysical parameters are expected, and/or the observed data are diffuse and not sensitive to fine-scale discrete layering (such as surface-wave dispersion). The Bernstein polynomial representation is much more general than other gradient-based models, in that the form of the gradient is determined by the data rather than by a subjective parameterization choice. The Bernstein inversion methodology is also applied to dispersion data processed from passive array recordings collected in the coastal community of Kitimat, British Columbia. The region is the proposed site of several large-scale industrial development projects and has great economic and environmental significance for Canada. The inversion results are consistent with findings from other geophysical studies in the region and are used in a site-specific seismic hazard analysis. The level of ground-motion amplification expected to occur during an earthquake due to near-surface Vs structure is probabilistically quantified, and is predicted to be significant compared to reference (hard ground) sites. / Graduate
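As a minimal sketch of the parameterization (not the thesis code; the depth range and coefficient values are hypothetical), a Vs profile is written as a Bernstein expansion in normalized depth, so a handful of coefficients describes a smooth velocity gradient:

```python
import numpy as np
from math import comb

def bernstein_basis(k: int, n: int, t: np.ndarray) -> np.ndarray:
    """Bernstein polynomial B_{k,n}(t) on t in [0, 1]."""
    return comb(n, k) * t**k * (1 - t)**(n - k)

z_max = 50.0                                     # assumed profile depth (m)
z = np.linspace(0.0, z_max, 200)
t = z / z_max

coeffs = np.array([150.0, 220.0, 300.0, 420.0])  # hypothetical Vs coefficients (m/s)
n = len(coeffs) - 1
vs = sum(c * bernstein_basis(k, n, t) for k, c in enumerate(coeffs))

# Stability in practice: each basis function is bounded by 1, so a small
# perturbation of one coefficient perturbs the whole profile only slightly.
```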
37

Microwave breast imaging techniques in two and three dimensions

Baran, Anastasia 02 September 2016 (has links)
Biomedical imaging at microwave frequencies has shown potential for breast cancer detection and monitoring. The advantages of microwave imaging over current imaging techniques are that it is relatively inexpensive and uses low-energy, non-ionizing radiation. It also provides a quantitative measurement of the dielectric properties of tissues, which offers the ability to characterize tissue types. Microwave imaging also comes with significant drawbacks. The resolution is poor compared to other imaging modalities, which presents challenges when trying to resolve fine structures. It is also not very sensitive to low-contrast objects, and the accuracy of recovered tissue properties can be poor. This thesis shows that the use of prior information in microwave imaging inversion algorithms greatly improves the resulting images by mitigating the mathematical difficulties in reconstruction that are due to the ill-posed nature of the inverse problem. The focus of this work is to explore novel methods to obtain and use prior information in the microwave breast imaging problem. We make use of finite-element contrast source inversion (FEM-CSI) software formulated in two and three dimensions (2D, 3D). This software can incorporate prior information as an inhomogeneous numerical background medium. We motivate the usefulness of prior information by developing a simulated annealing technique that segments experimental human forearm images into tissue regions. Tissue types are identified, and the resulting map of dielectric properties is used as prior information for the 2D FEM-CSI code. This improves the reconstructions, demonstrating the ability of prior information to improve breast images. We develop a combined microwave tomography/radar algorithm and demonstrate that it reconstructs images of superior quality compared to either technique used alone. The algorithm is applied to data from phantoms containing tumours of decreasing size and can accurately monitor the changes. The combined algorithm is shown to be robust to the choice of immersion medium. This property allows us to design an immersion-medium-independent algorithm, in which a numerical background can be used to reduce the contrast. We also develop a novel march-on-background technique that reconstructs high-quality images using data collected in multiple immersion media. / October 2016
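As a hedged illustration of the segmentation step (a generic Potts-style simulated annealing on a synthetic image, not the thesis algorithm; the class means, smoothness weight, and cooling schedule are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "tissue map": blocks of three property levels plus noise.
truth = np.kron(np.array([[0.2, 0.5], [0.8, 0.5]]), np.ones((16, 16)))
img = truth + 0.05 * rng.standard_normal(truth.shape)
classes = np.array([0.2, 0.5, 0.8])     # assumed per-class property means

def energy(labels, beta=0.01):
    """Data misfit plus a Potts smoothness term counting label disagreements."""
    data = np.sum((img - classes[labels]) ** 2)
    smooth = (np.sum(labels[1:, :] != labels[:-1, :])
              + np.sum(labels[:, 1:] != labels[:, :-1]))
    return data + beta * smooth

labels = rng.integers(0, len(classes), img.shape)
T = 1.0
for step in range(20000):               # single-pixel Metropolis moves
    i = rng.integers(img.shape[0])
    j = rng.integers(img.shape[1])
    old = labels[i, j]
    e_old = energy(labels)              # full recompute: slow but simple
    labels[i, j] = rng.integers(len(classes))
    if rng.random() >= np.exp(-(energy(labels) - e_old) / T):
        labels[i, j] = old              # reject the uphill move
    T *= 0.9997                         # slow geometric cooling
```

Mapping each label to its class's dielectric properties would then give a numerical background of the kind the FEM-CSI solver can accept as prior information.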
38

Image restoration in the presence of Poisson-Gaussian noise / Restauration d'images dégradées par un bruit Poisson-Gauss

Jezierska, Anna Maria 13 May 2013 (has links)
This thesis deals with the restoration of images corrupted by blur and noise, with emphasis on confocal microscopy and macroscopy applications. Due to low photon counts and high detector noise, the Poisson-Gaussian model is well suited to this context. However, up to now it had not been widely utilized because of theoretical and practical difficulties. In view of this, we formulate the image restoration problem in the presence of Poisson-Gaussian noise in a variational framework, where we express and study the exact data fidelity term. The solution to the problem can also be interpreted as a Maximum A Posteriori (MAP) estimate. Using recent primal-dual convex optimization algorithms, we obtain results that outperform methods relying on a variety of approximations. Turning our attention to the regularization term in the MAP framework, we study both discrete and continuous approximations of the $\ell_0$ pseudo-norm. This useful measure, well known for promoting sparsity, is difficult to optimize due to its non-convexity and its non-smoothness. We propose an efficient graph-cut procedure for optimizing energies with truncated quadratic priors. Moreover, we develop a majorize-minimize memory gradient algorithm to optimize various smooth versions of the $\ell_2$-$\ell_0$ norm, with guaranteed convergence properties. In particular, good results are achieved on deconvolution problems. One difficulty with variational formulations is the need to tune the model hyperparameters automatically. In this context, we propose to estimate the Poisson-Gaussian noise parameters in two realistic scenarios: from a time series of images (in which case bleaching parameters can also be estimated), and from a single image. These estimates are grounded on an Expectation-Maximization (EM) approach. Overall, this thesis proposes and evaluates various methodologies for tackling difficult image noise and blur cases, which should prove useful in various applicative contexts within and beyond microscopy.
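As a hedged sketch of the noise model (a method-of-moments toy on a static scene, not the EM procedure of the thesis; the gain, offset, and read-noise values are assumptions), the Poisson-Gaussian model y = a·Poisson(x) + N(b, s²) implies a straight-line mean-variance relation from which the gain can be read off:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, s = 2.0, 10.0, 3.0                  # assumed gain, offset, read-noise std
x = rng.uniform(5.0, 50.0, size=500)      # true per-pixel intensities

# 200 frames of the same static scene under Poisson-Gaussian noise.
frames = a * rng.poisson(x, size=(200, 500)) + rng.normal(b, s, size=(200, 500))

m = frames.mean(axis=0)                   # per-pixel sample mean:     a*x + b
v = frames.var(axis=0, ddof=1)            # per-pixel sample variance: a^2*x + s^2

# Since var = a*(mean - b) + s^2, the slope of variance vs. mean is the gain a
# (and the intercept s^2 - a*b recovers s once the offset b is known).
slope, intercept = np.polyfit(m, v, 1)
print(f"estimated gain: {slope:.2f} (true value {a})")
```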
39

Inverse Autoconvolution Problems with an Application in Laser Physics

Bürger, Steven 21 October 2016 (has links) (PDF)
Convolution and, as a special case, autoconvolution of functions are important in many branches of mathematics and have found numerous applications, for example in physics, statistics, and image processing. While it is a relatively easy task to determine the autoconvolution of a function (at least from the numerical point of view), the inverse problem, which consists in reconstructing a function from its autoconvolution, is ill-posed. Hence such an inverse autoconvolution problem cannot be solved by a simple algebraic operation. Instead, the problem has to be regularized, which means that it is replaced by a well-posed problem that is close to the original problem in a certain sense. The outline of this thesis is as follows: In the first chapter we give an introduction to the type of inverse problems we consider, including some basic definitions and some important examples of regularization methods for these problems. At the end of the introduction we briefly present some general results on the convergence theory of Tikhonov regularization. The second chapter is concerned with the autoconvolution of square integrable functions defined on the interval [0, 1]. This leads us to the classical autoconvolution problems, where the term “classical” means that no kernel function is involved in the autoconvolution operator. For the data situation we distinguish two cases, namely data on [0, 1] and data on [0, 2]. We present some well-known properties of the classical autoconvolution operators. Moreover, we investigate nonlinearity conditions, which are required to show the applicability of certain regularization approaches or which lead to convergence rates for Tikhonov regularization. For the inverse autoconvolution problem with data on the interval [0, 1] we show that a convergence rate cannot be established using the standard convergence rate theory. If the data are given on the interval [0, 2], we can show a convergence rate for Tikhonov regularization if the exact solution satisfies a sparsity assumption. After these theoretical investigations we present various approaches to solving inverse autoconvolution problems. Here we focus on a discretized Lavrentiev regularization approach, for which even a convergence rate can be shown. Finally, we present numerical examples for the regularization methods discussed. In the third chapter we describe a physical measurement technique, the so-called SD-Spider, which leads to an inverse problem of autoconvolution type. The SD-Spider method is an approach to measuring ultrashort laser pulses (laser pulses with durations on the order of femtoseconds). To that end, we first present some basic concepts of nonlinear optics, and then we describe the method in detail. We also show how this approach, starting from the wave equation, leads to a kernel-based equation of autoconvolution type. The aim of chapter four is to investigate the equation, and the corresponding problem, derived in chapter three. As a generalization of the classical autoconvolution, we define the kernel-based autoconvolution operator and show that many properties of the classical autoconvolution operator carry over to this new situation. Moreover, we consider inverse problems with the kernel-based autoconvolution operator, which reflect the data situation of the physical problem. It turns out that these inverse problems may be locally well-posed if all possible data are taken into account, and locally ill-posed if one particular part of the data is unavailable. Finally, we introduce reconstruction approaches for solving these inverse problems numerically and test them on real and artificial data.
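As a minimal numerical sketch (an illustration under our own discretization assumptions; the test function is made up), the forward autoconvolution on [0, 1] is a single call, while even a naive layer-stripping inversion from data on [0, 2] visibly amplifies noise, which is the ill-posedness the thesis addresses:

```python
import numpy as np

n = 200
h = 1.0 / n
t = (np.arange(n) + 0.5) * h
f = np.exp(-5 * t)                        # made-up test function on [0, 1]

auto = np.convolve(f, f) * h              # samples of (f * f) on [0, 2]

# Naive layer stripping: g[0] = h*f[0]^2 gives f[0]; each further g[k]
# yields f[k], but dividing by 2*h*f[0] at every step amplifies the noise.
g = auto + 1e-6 * np.random.default_rng(0).standard_normal(auto.size)
f_rec = np.zeros(n)
f_rec[0] = np.sqrt(max(g[0], 0.0) / h)
for k in range(1, n):
    cross = h * np.dot(f_rec[1:k], f_rec[k - 1:0:-1])  # interior cross terms
    f_rec[k] = (g[k] - cross) / (2 * h * f_rec[0])
```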
40

An inverse problem for an inhomogeneous string with an interval of zero density and a concentrated mass at the end point

Mdhluli, Daniel Sipho 10 May 2016 (has links)
A thesis submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Doctor of Philosophy. 27 January 2016. / The direct and inverse spectral problems for an inhomogeneous string with an interval of zero density and a concentrated mass at the end point, moving with damping, are investigated. The partial differential equation is mapped into an ordinary differential equation using separation of variables, which in turn is transformed into a Sturm-Liouville differential equation with boundary conditions depending on the separation variable. The Marchenko approach is employed in the inverse problem to recover the potential, density, and other parameters from knowledge of the two spectra and the length of the string.
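As a generic sketch of the reduction (the standard string equation only; the thesis's zero-density interval, end-point mass, and damping-dependent boundary conditions are not reproduced here), separation of variables turns the string PDE into a spectral ODE problem:

```latex
\rho(x)\,u_{tt} = u_{xx}, \qquad
u(x,t) = y(x)\,e^{i\omega t}
\;\Longrightarrow\;
-\,y''(x) = \omega^{2}\,\rho(x)\,y(x)
```

The eigenvalues ω² of this Sturm-Liouville problem under two different boundary conditions give the two spectra that, together with the string's length, serve as data for the Marchenko-type inversion.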
