51

Load Identification using Matrix Inversion Method (MIM) for Transfer Path Analysis (TPA)

Komandur, Deepak K. 28 October 2019 (has links)
No description available.
52

PARAMETER SELECTION RULES FOR ILL-POSED PROBLEMS

Park, Yonggi 19 November 2019 (has links)
No description available.
53

The Inverse Source Problem for Helmholtz

Fernstrom, Hugo, Sträng, Hugo January 2022 (has links)
This paper studies the inverse source problem for the Helmholtz equation with a point source in a two-dimensional domain. Given complete boundary data and an appropriate discretization, Tikhonov regularization is shown to be an effective method for locating the point source. Furthermore, Tikhonov regularization was found to locate point sources even under significant noise, as well as with incomplete boundary data in complicated domains.
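The Tikhonov approach described in this abstract can be sketched on a generic discretized linear inverse problem. In the sketch below, the synthetic ill-conditioned matrix `A` is an assumed stand-in for the discretized Helmholtz boundary-data map, not the paper's actual discretization:

```python
import numpy as np

# Sketch of Tikhonov regularization on a generic discretized linear
# inverse problem. The synthetic operator A stands in for the
# discretized Helmholtz boundary-data map; it is an assumption for
# illustration, not the paper's actual discretization.
rng = np.random.default_rng(0)

n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -8, n)                 # rapidly decaying singular values
A = U @ np.diag(s) @ V.T                  # severely ill-conditioned operator

x_true = np.zeros(n)
x_true[10] = 1.0                          # a "point source"
b = A @ x_true + 1e-6 * rng.standard_normal(n)   # noisy boundary data

# Tikhonov regularization: minimize ||A x - b||^2 + lam * ||x||^2,
# solved via the normal equations (A^T A + lam I) x = A^T b.
lam = 1e-6
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Unregularized least squares amplifies the noise dramatically.
x_naive = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(x_tik - x_true), np.linalg.norm(x_naive - x_true))
```

Even with noise at the level of 1e-6, the naive inverse is dominated by amplified noise from the smallest singular values, while the Tikhonov solution remains near the true source.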
54

Calibration of Option Pricing in Reproducing Kernel Hilbert Space

Ge, Lei 01 January 2015 (has links)
A parameter in the Black-Scholes equation, volatility, measures the variation of the price of a financial instrument over time. Determining volatility is a fundamental issue in the valuation of financial instruments, giving rise to an inverse problem known as the calibration problem for option pricing. This problem is shown to be ill-posed. We propose a regularization method and reformulate the calibration problem as one of finding the local volatility in a reproducing kernel Hilbert space. We define a new volatility function that allows us to embrace both the financial and time factors of the options. We discuss the existence of the minimizer using the regularized reproducing kernel method and show that the regularizer resolves the numerical instability of the calibration problem. Finally, we apply the method to data sets of index options through simulation tests and discuss the empirical results obtained.
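The core RKHS machinery used in this thesis can be illustrated with kernel ridge regression on synthetic 1-D data. The Gaussian kernel, the data, and the sine target below are assumptions for the sketch; the thesis applies this kind of regularized RKHS estimation to volatility calibration rather than to curve fitting:

```python
import numpy as np

# Generic illustration of regularized estimation in a reproducing
# kernel Hilbert space (RKHS): kernel ridge regression. The kernel,
# data, and target are assumptions for this sketch only.
rng = np.random.default_rng(4)

def gauss_kernel(s, t, width=0.2):
    return np.exp(-((s[:, None] - t[None, :]) ** 2) / (2 * width ** 2))

x = np.sort(rng.uniform(0, 1, 40))
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(40)

# Representer theorem: the minimizer of ||f(x) - y||^2 + lam*||f||_H^2
# has the form f(t) = sum_i alpha_i k(t, x_i), with coefficients
# solving (K + lam*I) alpha = y.
K = gauss_kernel(x, x)
lam = 1e-2
alpha = np.linalg.solve(K + lam * np.eye(len(x)), y)

t = np.linspace(0, 1, 200)
f = gauss_kernel(t, x) @ alpha          # regularized RKHS estimate
print(np.max(np.abs(f - np.sin(2 * np.pi * t))))
```

The regularization term `lam` plays the same stabilizing role described in the abstract: it controls the RKHS norm of the estimate and tames the numerical instability of fitting noisy data.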
55

Photon Beam Spectrum Characterization Using Scatter Radiation Analysis

Hawwari, Majd I. 12 April 2010 (has links)
No description available.
56

Joint Enhancement of Multichannel Synthetic Aperture Radar Data

Ramakrishnan, Naveen 19 March 2008 (has links)
No description available.
57

Graph Based Regularization of Large Covariance Matrices

Yekollu, Srikar January 2009 (has links)
No description available.
58

Model-based Regularization for Video Super-Resolution

Wang, Huazhong 04 1900 (has links)
In this thesis, we reexamine the classical problem of video super-resolution, aiming to reproduce the fine edge/texture details of acquired digital videos. In general, video super-resolution reconstruction is an ill-posed inverse problem because of an insufficient number of observations from registered low-resolution video frames. To stabilize the problem and make its solution more accurate, we develop two video super-resolution techniques: 1) a 2D autoregressive modeling and interpolation technique for video super-resolution reconstruction, with model parameters estimated from multiple registered low-resolution frames; 2) the use of an image model as a regularization term to improve the performance of the traditional video super-resolution algorithm. We further investigate the interactions of the various unknown variables involved in video super-resolution reconstruction, including motion parameters, high-resolution pixel intensities, and the parameters of the image model used for regularization. We succeed in developing a joint estimation technique that infers these unknowns simultaneously to achieve statistical consistency among them. / Thesis / Master of Applied Science (MASc)
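The 2D autoregressive modeling idea in this abstract can be sketched with a least-squares parameter fit. The causal 3-neighbor model and the synthetic image below are assumptions for illustration, not the thesis's multi-frame formulation:

```python
import numpy as np

# Minimal sketch of 2-D autoregressive (AR) image modeling: estimate
# model parameters by least squares from pixel neighborhoods. The
# causal 3-neighbor model and synthetic image are assumptions.
rng = np.random.default_rng(1)

# Synthetic image driven by a known causal AR model.
h, w = 64, 64
a_true = np.array([0.5, 0.3, 0.15])     # weights: left, up, up-left
img = 0.01 * rng.standard_normal((h, w))
for i in range(1, h):
    for j in range(1, w):
        img[i, j] += (a_true[0] * img[i, j - 1]
                      + a_true[1] * img[i - 1, j]
                      + a_true[2] * img[i - 1, j - 1])

# Least-squares fit: predict each pixel from its causal neighbors.
y = img[1:, 1:].ravel()
X = np.column_stack([img[1:, :-1].ravel(),     # left neighbor
                     img[:-1, 1:].ravel(),     # upper neighbor
                     img[:-1, :-1].ravel()])   # upper-left neighbor
a_est = np.linalg.lstsq(X, y, rcond=None)[0]
print(a_est)   # close to a_true
```

In the thesis's setting the regressors would come from multiple registered low-resolution frames rather than a single synthetic image, but the estimation principle — each pixel predicted as a weighted sum of its neighbors, weights fit by least squares — is the same.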
59

Learning Hyperparameters for Inverse Problems by Deep Neural Networks

McDonald, Ashlyn Grace 08 May 2023 (has links)
Inverse problems arise in a wide variety of applications including biomedicine, environmental sciences, astronomy, and more. Computing reliable solutions to these problems requires the inclusion of prior knowledge in a process that is often referred to as regularization. Most regularization techniques require suitable choices of regularization parameters. In this work, we describe new approaches that use deep neural networks (DNNs) to estimate these regularization parameters. We train multiple networks to approximate mappings from observation data to individual regularization parameters in a supervised learning approach. Once the networks are trained, we can efficiently compute regularization parameters for newly obtained data by forward propagation through the DNNs. The network-obtained regularization parameters can be computed more efficiently and may even lead to more accurate solutions compared to existing regularization parameter selection methods. Numerical results for tomography demonstrate the potential benefits of using DNNs to learn regularization parameters. / Master of Science / Inverse problems arise in a wide variety of applications including biomedicine, environmental sciences, astronomy, and more. With these types of problems, the goal is to reconstruct an approximation of the original input when we can only observe the output. However, the output often includes some sort of noise or error, which means that computing reliable solutions to these problems is difficult. In order to combat this problem, we can include prior knowledge about the solution in a process that is often referred to as regularization. Most regularization techniques require suitable choices of regularization parameters. In this work, we describe new approaches that use deep neural networks (DNNs) to obtain these parameters. We train multiple networks to approximate mappings from observation data to individual regularization parameters in a supervised learning approach. Once the networks are trained, we can efficiently compute regularization parameters for newly obtained data by forward propagation through the DNNs. The network-obtained regularization parameters can be computed more efficiently and may even lead to more accurate solutions compared to existing regularization parameter selection methods. Numerical results for tomography demonstrate the potential of using DNNs to learn regularization parameters.
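The supervised setup this abstract describes — training pairs of (observation, good regularization parameter), a network fit to the mapping, then one cheap forward pass at inference — can be sketched at toy scale. Everything below (the diagonal operator, the grid-searched "oracle" parameter, the tiny numpy MLP) is an assumption standing in for the thesis's deep networks and tomography problems:

```python
import numpy as np

# Toy sketch of learning a map from observed data to a Tikhonov
# regularization parameter. A tiny numpy MLP stands in for the deep
# networks in the thesis; operator, sizes, and training are toy-scale.
rng = np.random.default_rng(2)

n = 20
A = np.diag(np.logspace(0, -4, n))     # simple ill-conditioned operator
lams = np.logspace(-8, 0, 30)          # candidate regularization parameters

def oracle_lambda(b, x_true):
    """Grid-search the parameter minimizing the true solution error."""
    errs = [np.linalg.norm(
        np.linalg.solve(A.T @ A + l * np.eye(n), A.T @ b) - x_true)
        for l in lams]
    return lams[int(np.argmin(errs))]

# Training pairs: observation b  ->  log10 of the oracle parameter.
pairs = []
for _ in range(500):
    x = rng.standard_normal(n)
    b = A @ x + 10 ** rng.uniform(-6, -1) * rng.standard_normal(n)
    pairs.append((b, np.log10(oracle_lambda(b, x))))
Xtr = np.array([p[0] for p in pairs])
ytr = np.array([p[1] for p in pairs])

# One-hidden-layer network trained by plain gradient descent on MSE.
W1 = 0.1 * rng.standard_normal((n, 32)); b1 = np.zeros(32)
w2 = 0.1 * rng.standard_normal(32); c2 = 0.0

def mse():
    return float(np.mean((np.tanh(Xtr @ W1 + b1) @ w2 + c2 - ytr) ** 2))

loss_before = mse()
lr = 1e-3
for _ in range(2000):
    h = np.tanh(Xtr @ W1 + b1)
    g = 2 * (h @ w2 + c2 - ytr) / len(ytr)   # dLoss/dprediction
    gh = np.outer(g, w2) * (1 - h ** 2)      # backprop through tanh
    w2 -= lr * h.T @ g
    c2 -= lr * g.sum()
    W1 -= lr * Xtr.T @ gh
    b1 -= lr * gh.sum(axis=0)
loss_after = mse()
print(loss_before, loss_after)   # training loss decreases
```

At inference, a single forward pass `np.tanh(b_new @ W1 + b1) @ w2 + c2` yields a parameter estimate, avoiding the per-problem grid search that produced the training labels.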
60

Row-Action Methods for Massive Inverse Problems

Slagel, Joseph Tanner 19 June 2019 (has links)
Numerous scientific applications have seen the rise of massive inverse problems, where there is too much data to implement an all-at-once strategy to compute a solution. Additionally, tools for regularizing ill-posed inverse problems become infeasible when the problem is too large. This thesis focuses on the development of row-action methods, which can be used to iteratively solve inverse problems when it is not possible to access the entire data set or forward model simultaneously. We investigate these techniques for linear inverse problems and for separable, nonlinear inverse problems where the objective function is nonlinear in one set of parameters and linear in another. For the linear problem, we perform a convergence analysis of these methods, which shows favorable asymptotic and initial convergence properties, as well as a step-size-dependent trade-off between convergence rate and the precision of the iterates. These row-action methods can be interpreted as stochastic Newton and stochastic quasi-Newton approaches applied to a reformulation of the least squares problem, and they can be analyzed as limited-memory variants of the recursive least squares algorithm. For ill-posed problems, we introduce sampled regularization parameter selection techniques, including sampled variants of the discrepancy principle, the unbiased predictive risk estimator, and generalized cross-validation. We demonstrate the effectiveness of these methods using examples from super-resolution imaging, tomographic reconstruction, and image classification. / Doctor of Philosophy / Numerous scientific problems have seen the rise of massive data sets. An example is super-resolution, where many low-resolution images are used to construct a high-resolution image, or 3-D medical imaging, where a 3-D image of an object of interest with hundreds of millions of voxels is reconstructed from x-rays passing through that object.
This work focuses on row-action methods, which numerically solve these problems by repeatedly using smaller samples of the data to avoid the computational burden of using the entire data set at once. When data sets contain measurement errors, the computed solution can become contaminated with noise. While there are methods to handle this issue, they are no longer feasible when the data set becomes massive. This dissertation develops techniques to keep the solution from becoming contaminated with noise, even when the data set is immense. The methods developed in this work are applied to numerous scientific applications, including super-resolution imaging, tomography, and image classification.
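The defining trait of a row-action method — each update touches only one row of the system, so the full data set never has to be in memory at once — can be sketched with the classic randomized Kaczmarz iteration. The sizes and the noise-free system below are toy-scale assumptions, not the dissertation's actual problems or its sampled parameter-selection machinery:

```python
import numpy as np

# Minimal sketch of a row-action method: randomized Kaczmarz for a
# consistent linear system. Each update touches one row of A, so the
# full matrix never has to be accessed at once. Toy-scale assumption.
rng = np.random.default_rng(3)

m, n = 2000, 50
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true                          # noise-free system for simplicity

row_norms = np.einsum('ij,ij->i', A, A)
probs = row_norms / row_norms.sum()     # sample rows prop. to ||a_i||^2
idx = rng.choice(m, size=20000, p=probs)

x = np.zeros(n)
for i in idx:
    a = A[i]
    # Project the current iterate onto the hyperplane a^T x = b_i.
    x += (b[i] - a @ x) / row_norms[i] * a

print(np.linalg.norm(x - x_true))       # converges toward the true solution
```

With noisy data the plain iteration stalls at a noise floor, which is where the step-size trade-off and the sampled regularization parameter selection techniques described in the abstract come in.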
