About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

An Inverse Source Problem for a One-dimensional Wave Equation: An Observer-Based Approach

Asiri, Sharefa M. 25 May 2013 (has links)
Observers are well known in the theory of dynamical systems. They are used to estimate the states of a system from some measurements. Recently, however, observers have also been developed to estimate unknowns of systems governed by partial differential equations. Our aim is to design an observer that solves an inverse source problem for a one-dimensional wave equation. Firstly, the problem is discretized in both space and time; then an adaptive observer based on partial field measurements (i.e., measurements taken from the solution of the wave equation) is applied to estimate both the states and the source. The effectiveness of this observer is examined in both noise-free and noisy cases, and numerical simulations are provided to illustrate the approach in each case. Finally, we compare the performance of the observer approach with that of the Tikhonov regularization approach.
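As a rough illustration of the observer idea (not the algorithm of the thesis: the discretization, gains, and source-update law below are arbitrary choices), the following sketch runs a Luenberger-type adaptive observer alongside a semi-discretized 1-D wave equation, injecting the measurement residual from a sub-interval into the estimated dynamics and updating the source estimate with a gradient-type law.

```python
import numpy as np

# Illustrative sketch only (not the algorithm of the thesis): a Luenberger-type
# adaptive observer for a semi-discretized 1-D wave equation u_tt = u_xx + f(x),
# estimating the field and a static source f from measurements of u on a
# sub-interval. Grid, gains, and the source-update law are arbitrary choices.
nx = 101
dx = 1.0 / (nx - 1)
dt = 0.4 * dx                               # CFL-respecting step for wave speed 1
x = np.linspace(0.0, 1.0, nx)
f_true = np.exp(-200.0 * (x - 0.5) ** 2)    # unknown source to be estimated

meas = slice(30, 70)                        # interior window where u is measured
L1, L2, gamma = 2.0, 2.0, 1.0               # observer and adaptation gains

def lap(w):
    """Discrete Laplacian with homogeneous Dirichlet boundary conditions."""
    out = np.zeros_like(w)
    out[1:-1] = (w[2:] - 2.0 * w[1:-1] + w[:-2]) / dx**2
    return out

u = np.zeros(nx); v = np.zeros(nx)          # "true" field and its velocity
zu = np.zeros(nx); zv = np.zeros(nx)        # observer estimates
f_hat = np.zeros(nx)                        # source estimate

for _ in range(20000):
    # plant: symplectic Euler step of the wave equation with the true source
    v += dt * (lap(u) + f_true)
    u += dt * v
    # observer: same dynamics plus output injection on the measured window
    innov = np.zeros(nx)
    innov[meas] = u[meas] - zu[meas]        # measurement residual
    zv += dt * (lap(zu) + f_hat + L2 * innov)
    zu += dt * (zv + L1 * innov)
    f_hat += dt * gamma * innov             # gradient-type source adaptation

err = np.linalg.norm(f_hat[meas] - f_true[meas]) / np.linalg.norm(f_true[meas])
print(f"relative source error on the measured window: {err:.3f}")
```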
2

Optimal Control for an Impedance Boundary Value Problem

Bondarenko, Oleksandr 10 January 2011 (has links)
We consider the analysis of a scattering problem. Assume that an incoming time-harmonic wave is scattered by the surface of an impenetrable obstacle. The reflected wave is determined by the surface impedance of the obstacle. In this paper we investigate the problem of choosing the surface impedance so that a desired scattering amplitude is achieved. We formulate this control problem within the framework of the minimization of a Tikhonov functional. In particular, questions of the existence of an optimal solution and the derivation of the optimality conditions are addressed. / Master of Science
3

The Inverse Source Problem for Helmholtz

Fernstrom, Hugo, Sträng, Hugo January 2022 (has links)
This paper studies the inverse source problem for the Helmholtz equation with a point source in a two-dimensional domain. Given complete boundary data and an appropriate discretization, Tikhonov regularization is established to be an effective method for finding the point source. Furthermore, it was found that Tikhonov regularization can locate point sources even under significant noise, as well as with incomplete boundary data in complicated domains.
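For readers unfamiliar with the method, a minimal sketch of the Tikhonov step is given below: after discretization the source-to-data map becomes a matrix, and here a random matrix stands in for an actual Helmholtz discretization, with the noise level and regularization parameter chosen arbitrarily.

```python
import numpy as np

# Hedged sketch of the Tikhonov step used in this kind of study. After
# discretization the source-to-boundary-data map becomes a matrix A, and a
# point source s is recovered from noisy data b by minimizing
# ||A s - b||^2 + alpha * ||s||^2. A random matrix stands in for an actual
# Helmholtz discretization; noise level and alpha are arbitrary.
rng = np.random.default_rng(1)
m, n = 80, 200                                  # boundary samples, grid points
A = rng.standard_normal((m, n)) / np.sqrt(m)

s_true = np.zeros(n)
s_true[120] = 1.0                               # point source on the grid
b = A @ s_true + 0.05 * rng.standard_normal(m)  # noisy "boundary data"

alpha = 1e-2
# Tikhonov solution via the regularized normal equations
s_hat = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)
print("estimated source location (grid index):", int(np.argmax(np.abs(s_hat))))
```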
4

Calibration of Option Pricing in Reproducing Kernel Hilbert Space

Ge, Lei 01 January 2015 (has links)
A parameter in the Black-Scholes equation, volatility, is a measure of the variation of the price of a financial instrument over time. Determining volatility is a fundamental issue in the valuation of financial instruments, and it gives rise to an inverse problem known as the calibration problem for option pricing. This problem is shown to be ill-posed. We propose a regularization method and reformulate the calibration problem as a problem of finding the local volatility in a reproducing kernel Hilbert space. We define a new volatility function which allows us to capture both the financial and time factors of the options. We discuss the existence of the minimizer using the regularized reproducing kernel method and show that the regularizer resolves the numerical instability of the calibration problem. Finally, we apply the proposed method to data sets of index options through simulation tests and discuss the empirical results obtained.
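The regularized reproducing kernel idea can be illustrated in miniature: by the representer theorem, the minimizer of a squared-error fit plus an RKHS-norm penalty is a kernel expansion over the data points whose coefficients solve a linear system. The sketch below uses a Gaussian kernel and synthetic data in place of the option-pricing quantities of the thesis.

```python
import numpy as np

# Illustrative sketch (not the thesis's calibration scheme): Tikhonov-regularized
# fitting in a reproducing kernel Hilbert space. By the representer theorem the
# minimizer of sum_i (f(x_i) - y_i)^2 + lam * ||f||_H^2 is f = sum_i c_i k(x_i, .),
# with coefficients solving (K + lam I) c = y. A Gaussian kernel and synthetic
# data stand in for the option-pricing quantities.
rng = np.random.default_rng(2)

def gauss_kernel(X, Z, ell=0.2):
    """Gaussian reproducing kernel k(x, z) = exp(-(x - z)^2 / (2 ell^2))."""
    return np.exp(-0.5 * (X[:, None] - Z[None, :]) ** 2 / ell**2)

x = np.sort(rng.uniform(0.0, 1.0, 40))
y = np.sin(2.0 * np.pi * x) + 0.1 * rng.standard_normal(x.size)  # noisy samples

lam = 1e-2
K = gauss_kernel(x, x)
c = np.linalg.solve(K + lam * np.eye(x.size), y)   # regularized coefficients

x_new = np.linspace(0.0, 1.0, 5)
print(np.round(gauss_kernel(x_new, x) @ c, 3))     # evaluate the RKHS fit
```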
5

Row-Action Methods for Massive Inverse Problems

Slagel, Joseph Tanner 19 June 2019 (has links)
Numerous scientific applications have seen the rise of massive inverse problems, where there is too much data to implement an all-at-once strategy to compute a solution. Additionally, tools for regularizing ill-posed inverse problems become infeasible when the problem is too large. This thesis focuses on the development of row-action methods, which can be used to solve inverse problems iteratively when it is not possible to access the entire data set or forward model simultaneously. We investigate these techniques for linear inverse problems and for separable, nonlinear inverse problems where the objective function is nonlinear in one set of parameters and linear in another. For the linear problem, we perform a convergence analysis of these methods, which shows favorable asymptotic and initial convergence properties, as well as a trade-off between convergence rate and precision of iterates that is based on the step size. These row-action methods can be interpreted as stochastic Newton and stochastic quasi-Newton approaches on a reformulation of the least squares problem, and they can be analyzed as limited-memory variants of the recursive least squares algorithm. For ill-posed problems, we introduce sampled regularization parameter selection techniques, which include sampled variants of the discrepancy principle, the unbiased predictive risk estimator, and generalized cross-validation. We demonstrate the effectiveness of these methods using examples from super-resolution imaging, tomography reconstruction, and image classification. / Doctor of Philosophy / Numerous scientific problems have seen the rise of massive data sets. An example is super-resolution, where many low-resolution images are used to construct a high-resolution image, or 3-D medical imaging, where a 3-D image of an object of interest with hundreds of millions of voxels is reconstructed from x-rays moving through that object. This work focuses on row-action methods that numerically solve these problems by repeatedly using smaller samples of the data to avoid the computational burden of using the entire data set at once. When data sets contain measurement errors, the solution can become contaminated with noise. While there are methods to handle this issue, they are no longer feasible when the data set becomes massive. This dissertation develops techniques to avoid contaminating the solution with noise, even when the data set is immense. The methods developed in this work are applied to numerous scientific applications, including super-resolution imaging, tomography, and image classification.
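To give a flavour of the row-action idea, the sketch below runs randomized Kaczmarz, a classical member of this family (not the specific samplers or sampled parameter-selection rules developed in the thesis), which updates the iterate using one row of the system at a time.

```python
import numpy as np

# Hedged sketch of the row-action idea for least squares: instead of forming
# the full normal equations, sweep over single rows of A x = b and update x
# using only that row. Randomized Kaczmarz, shown here, is a classical member
# of this family, not the specific samplers or sampled parameter-selection
# rules developed in the thesis. Sizes and iteration count are illustrative.
rng = np.random.default_rng(3)
m, n = 2000, 100
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true                              # consistent system for simplicity

x = np.zeros(n)
row_norms2 = np.sum(A**2, axis=1)
probs = row_norms2 / row_norms2.sum()       # sample rows with prob ~ ||a_i||^2

for _ in range(20000):
    i = rng.choice(m, p=probs)
    a_i = A[i]
    # project the current iterate onto the hyperplane a_i . x = b_i
    x += (b[i] - a_i @ x) / row_norms2[i] * a_i

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```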
6

Numerical Study Of Regularization Methods For Elliptic Cauchy Problems

Gupta, Hari Shanker 05 1900 (has links) (PDF)
Cauchy problems for elliptic partial differential equations arise in many important applications, such as cardiography, nondestructive testing, heat transfer, and the sonic boom produced by a maneuvering aerofoil. Elliptic Cauchy problems are typically ill-posed: there may not be a solution for some Cauchy data, and even if a solution exists uniquely, it may not depend continuously on the Cauchy data. This ill-posedness causes numerical instability and makes classical numerical methods inappropriate for such problems. For Cauchy problems, research on uniqueness, stability, and efficient numerical methods is of significant interest to mathematicians. The main focus of this thesis is to develop numerical techniques for elliptic Cauchy problems. Elliptic Cauchy problems can be approached as data completion problems, i.e., from over-specified Cauchy data on an accessible part of the boundary, one can try to recover the missing data on the inaccessible part of the boundary. The Cauchy problem can then be solved by finding a solution to a well-posed boundary value problem for which the recovered data constitute a boundary condition on the inaccessible part of the boundary. In this thesis, we use a natural linearization approach to transform the linear Cauchy problem into a problem of solving a linear operator equation. We consider this operator in a weaker image space H^{-1}, which differs from previous works where the image space of the operator is usually taken to be L^2. The lower smoothness of the image space makes the problem somewhat more ill-posed, but under these settings we can prove the compactness of the considered operator, and at the same time it allows a relaxation of the assumption concerning the noise. The numerical methods that can cope with such ill-posed operator equations are the so-called regularization methods. One prominent example is Tikhonov regularization, which is frequently used in practice and can be considered as a least-squares tracking of the data with a regularization term. In this thesis we discuss a possibility to improve the reconstruction accuracy of Tikhonov regularization by using an iterative modification, iterated Tikhonov regularization, in which the effect of the penalty term fades away as the iterations go on. In the application of iterated Tikhonov regularization, we find that for severely ill-posed problems such as elliptic Cauchy problems, discretization has such a powerful influence on the accuracy of the regularized solution that the desired accuracy can be achieved only with a reasonable discretization level. Thus, regularization by projection, commonly known as self-regularization, is also considered in this thesis. With this method, regularization is achieved solely by discretization along with an appropriate choice of the discretization level. For all regularization methods, the choice of an appropriate regularization parameter is a crucial issue. For this purpose, we adopt the balancing principle, a recently introduced and powerful technique for choosing the regularization parameter. While applying this principle, a balance has to be made between the components related to the convergence rate and the stability in the accuracy estimates.
The main advantage of the balancing principle is that it works in an adaptive way to obtain an appropriate value of the regularization parameter, without using any quantitative knowledge of the convergence rate or the stability. The accuracy provided by this adaptive strategy is worse only by a constant factor than what one could achieve if the stability and convergence rates were known. We apply the balancing principle in both the iterated Tikhonov regularization and the self-regularization methods to choose the regularization parameters. In the thesis, we also investigate numerical techniques based on iterated Tikhonov regularization for nonlinear elliptic Cauchy problems. We consider two types of problems. In the first kind, the nonlinear problem can be transformed into a linear problem, while in the second kind, linearization of the nonlinear problem is not possible; for this case we propose a special iterative method, which differs from methods such as Landweber iteration and Newton-type methods that are usually based on the calculation of the Fréchet derivative or the adjoint of the equation. Abundant examples are presented in the thesis, illustrating the performance of the proposed regularization methods as well as the balancing principle; at the same time, these examples can be viewed as support for the theoretical results achieved in this thesis. At the end of the thesis, we describe the sonic boom problem, where we first encountered the ill-posed nonlinear Cauchy problem. This is a very difficult problem, and hence we use it to provide motivation for the model problems, which are discussed one by one in the thesis in increasing order of difficulty, ending with the nonlinear problems in Chapter 5. The main results of the dissertation are communicated in the article [35].
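As a concrete illustration of the iterated Tikhonov idea described above, the following sketch (with a synthetic ill-posed matrix and an arbitrary, fixed regularization parameter rather than one chosen by the balancing principle) shows how each sweep solves a Tikhonov problem whose penalty pulls the iterate toward the previous one, so the influence of the penalty term fades as the iterations proceed.

```python
import numpy as np

# Hedged sketch of iterated Tikhonov regularization on a discrete ill-posed
# system K x = y: each sweep solves a Tikhonov problem whose penalty pulls the
# iterate toward the previous one, so the bias of the penalty fades as the
# iterations proceed. The matrix and the fixed parameter alpha are toy
# stand-ins; the thesis chooses the parameter by the balancing principle.
rng = np.random.default_rng(4)
n = 100
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 0.9 ** np.arange(n)                      # rapidly decaying singular values
K = U @ np.diag(s) @ V.T                     # synthetic ill-posed operator

x_true = V @ (1.0 / (1.0 + np.arange(n)))    # smooth-ish exact solution
y = K @ x_true + 1e-3 * rng.standard_normal(n)

alpha, n_iter = 1e-2, 10
x = np.zeros(n)
M = K.T @ K + alpha * np.eye(n)
for _ in range(n_iter):
    # Tikhonov step penalizing the distance to the previous iterate
    x = np.linalg.solve(M, K.T @ y + alpha * x)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```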
7

Simultaneous activity and attenuation reconstruction in emission tomography

Dicken, Volker January 1998 (has links)
In single photon emission computed tomography (SPECT) one is interested in reconstructing the activity distribution f of some radiopharmaceutical. The data gathered suffer from attenuation due to the tissue density µ. Each imaged slice incorporates noisy sample values of the nonlinear attenuated Radon transform (a formula appears at this point in the original abstract). Traditional theory for SPECT reconstruction treats µ as a known parameter. In practical applications, however, µ is not known, but is either crudely estimated, determined in costly additional measurements, or plainly neglected. We demonstrate that an approximation of both f and µ from SPECT data alone is feasible, leading to quantitatively more accurate SPECT images. The result is based on nonlinear Tikhonov regularization techniques for parameter estimation problems in differential equations, combined with Gauss-Newton-CG minimization.
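For reference, the attenuated Radon transform referred to above can be written in its standard form as follows; the notation here is the conventional one and may differ from the symbols used in the original abstract.

```latex
% Standard form of the attenuated Radon transform; notation (direction \omega,
% offset s) is the conventional one and may differ from the original abstract.
\[
  (R_{\mu} f)(\omega, s)
  = \int_{\mathbb{R}} f\bigl(s\,\omega^{\perp} + t\,\omega\bigr)\,
    \exp\!\Bigl(-\int_{t}^{\infty} \mu\bigl(s\,\omega^{\perp} + \tau\,\omega\bigr)\,\mathrm{d}\tau\Bigr)\,\mathrm{d}t .
\]
```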
8

Regularization of Parameter Problems for Dynamic Beam Models

Rydström, Sara January 2010 (has links)
The field of inverse problems is an area of applied mathematics that is of great importance in several scientific and industrial applications. Since an inverse problem is typically founded on nonlinear and ill-posed models, it is a very difficult problem to solve. To find a regularized solution, it is crucial to have a priori information about the solution; therefore, general theories are not sufficient when considering new applications. In this thesis we consider the inverse problem of determining the beam bending stiffness from measurements of the transverse dynamic displacement. Of special interest is localizing parts with reduced bending stiffness. Driven by requirements in the wood industry, it is not enough to consider time-efficient algorithms; the models must also be adapted to manage extremely short calculation times. To develop efficient methods, inverse problems based on the fourth-order Euler-Bernoulli beam equation and the second-order string equation are studied. Important results are the transformation of a nonlinear regularization problem into a linear one and a convex procedure for finding parts with reduced bending stiffness.
9

Stability Analysis of Method of Fundamental Solutions for Laplace's Equations

Huang, Shiu-ling 21 June 2006 (has links)
This thesis consists of two parts. In the first part, to solve boundary value problems of homogeneous equations, fundamental solutions (FS) satisfying the homogeneous equations are chosen, and their linear combination is forced to satisfy the exterior and the interior boundary conditions. To avoid the logarithmic singularity, the source points of the FS are located outside of the solution domain S. This method is called the method of fundamental solutions (MFS). The MFS was first used by Kupradze in 1963. Since then, numerous reports on the MFS have appeared for computation, but only a few for analysis. The first part of this thesis derives the eigenvalues for the Neumann and the Robin boundary conditions in the simple case, and estimates the bounds of the condition number for mixed boundary conditions in some non-disk domains; the same exponential rates of the condition number are obtained. Numerical results are reported for two kinds of cases: (I) MFS for Motz's problem by adding singular functions, and (II) MFS for Motz's problem by local refinements of collocation nodes. The values of the traditional condition number are huge, and those of the effective condition number are moderately large. However, the expansion coefficients obtained by the MFS are oscillatingly large, causing another kind of instability: subtraction cancellation errors in the final harmonic solutions. Hence, for practical applications, the errors and the ill-conditioning must be balanced against each other. To mitigate the ill-conditioning, it is suggested that the number of FS should not be large, and that the distance between the source circle and the boundary ∂S should not be far either. In the second part, to reduce the severe instability of the MFS, the truncated singular value decomposition (TSVD) and Tikhonov regularization (TR) are employed. The computational formulas of the condition number and the effective condition number are derived, and their analysis is explored in detail. Besides, the error analysis of TSVD and TR is also carried out. Moreover, the combination of TSVD and TR is proposed, called the truncated Tikhonov regularization in this thesis, to better remove some effects of the infinitesimal σ_min and of the high-frequency eigenvectors.
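For the second part, a small self-contained sketch may help fix ideas: it applies TSVD and Tikhonov regularization to a generic exponentially ill-conditioned linear system, with a random synthetic matrix standing in for the actual MFS collocation matrix and with illustrative truncation level and regularization parameter.

```python
import numpy as np

# Hedged sketch of the two stabilization tools named above, applied to a
# generic exponentially ill-conditioned system A c = b. A random synthetic
# matrix stands in for the actual MFS collocation matrix; truncation level and
# regularization parameter are illustrative. TSVD discards the smallest
# singular values, while Tikhonov regularization damps them smoothly.
rng = np.random.default_rng(5)
n = 60
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0.0, -12.0, n)              # exponentially decaying spectrum
A = U @ np.diag(s) @ V.T

c_true = V @ (1.0 / (1.0 + np.arange(n)) ** 2)
b = A @ c_true + 1e-8 * rng.standard_normal(n)

Ub, sb, Vtb = np.linalg.svd(A)
beta = Ub.T @ b

k = int(np.sum(sb > 1e-6))                  # truncation level
c_tsvd = Vtb[:k].T @ (beta[:k] / sb[:k])    # TSVD: keep the k largest modes

alpha = 1e-12
filt = sb / (sb**2 + alpha)                 # Tikhonov filter factors
c_tr = Vtb.T @ (filt * beta)

for name, c in [("TSVD", c_tsvd), ("Tikhonov", c_tr)]:
    print(name, "relative error:",
          np.linalg.norm(c - c_true) / np.linalg.norm(c_true))
```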
10

Evaluation Of Spatial And Spatio-temporal Regularization Approaches In Inverse Problem Of Electrocardiography

Onal, Murat 01 August 2008 (has links) (PDF)
Conventional electrocardiography (ECG) is an essential tool for investigating cardiac disorders such as arrhythmias or myocardial infarction. It consists of interpretation of potentials recorded at the body surface that occur due to the electrical activity of the heart. However, electrical signals originating at the heart suffer from attenuation and smoothing within the thorax; therefore the ECG signal measured on the body surface lacks some important details. The goal of the forward and inverse ECG problems is to recover these lost details by estimating the heart's electrical activity non-invasively from body surface potential measurements. In the forward problem, one calculates the body surface potential distribution (i.e., torso potentials) using an appropriate source model for the equivalent cardiac sources. In the inverse problem of ECG, one estimates cardiac electrical activity based on measured torso potentials and a geometric model of the torso. Due to the attenuation and spatial smoothing that occur within the thorax, the inverse ECG problem is ill-posed and the forward model matrix is badly conditioned. Thus, small disturbances in the measurements lead to amplified errors in the inverse solutions, and the ill-posed nature and high dimensionality of the problem make it difficult to solve for effective cardiac imaging. Tikhonov regularization, Truncated Singular Value Decomposition (TSVD) and Bayesian MAP estimation are some of the methods proposed in the literature to cope with the ill-posedness of the problem. The most common approach in these methods is to ignore temporal relations of epicardial potentials and to solve the inverse problem at every time instant independently (the column-sequential approach). This is the fastest and easiest approach; however, it does not include temporal correlations. The goal of this thesis is to include temporal constraints as well as spatial constraints in solving the inverse ECG problem. For this purpose, two methods are used. In the first method, we solve the augmented problem directly. In the second, we solve the problem with the column-sequential approach after applying temporal whitening. The performance of each method is evaluated.
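To make the distinction concrete, the following sketch (with a random toy forward matrix and made-up dimensions and weights, not an actual torso model) contrasts the column-sequential approach, which applies Tikhonov regularization at each time instant independently, with an augmented formulation that adds a temporal first-difference penalty coupling neighbouring time instants.

```python
import numpy as np

# Hedged sketch contrasting two ways to regularize a sequence of linear inverse
# problems y_t = A x_t + noise, t = 1..T. "Column-sequential": Tikhonov at each
# time instant independently. "Augmented": stack all instants and add a temporal
# first-difference penalty so neighbouring solutions stay close. The forward
# matrix, dimensions, and weights are made up for illustration.
rng = np.random.default_rng(6)
m, n, T = 30, 20, 10
A = rng.standard_normal((m, n))
X_true = np.cumsum(0.1 * rng.standard_normal((n, T)), axis=1)   # smooth in time
Y = A @ X_true + 0.05 * rng.standard_normal((m, T))

lam_s, lam_t = 1e-1, 1e0        # spatial and temporal regularization weights

# Column-sequential: solve (A^T A + lam_s I) x_t = A^T y_t for every t at once
X_seq = np.linalg.solve(A.T @ A + lam_s * np.eye(n), A.T @ Y)

# Augmented: minimize sum_t ||A x_t - y_t||^2 + lam_s ||x_t||^2
#                     + lam_t * sum_t ||x_t - x_{t-1}||^2
D = np.diff(np.eye(T), axis=0)                        # (T-1) x T difference op
H = np.kron(np.eye(T), A.T @ A + lam_s * np.eye(n)) \
    + lam_t * np.kron(D.T @ D, np.eye(n))
rhs = (A.T @ Y).T.reshape(-1)                         # stacked A^T y_t blocks
X_aug = np.linalg.solve(H, rhs).reshape(T, n).T

for name, X in [("column-sequential", X_seq), ("augmented", X_aug)]:
    print(name, "relative error:",
          np.linalg.norm(X - X_true) / np.linalg.norm(X_true))
```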
