51

Flexible Modeling of Non-Stationary Extremal Dependence Using Spatially-Fused LASSO and Ridge Penalties

Shao, Xuanjie 05 April 2022 (has links)
Statistical modeling of a nonstationary spatial extremal dependence structure is a challenging problem. In practice, parametric max-stable processes are commonly used for modeling spatially-indexed block maxima data, where the stationarity assumption is often made to simplify inference. However, this assumption is unreliable for data observed over a large or complex domain. In this work, we develop a computationally efficient method to estimate nonstationary extremal dependence using max-stable processes, which builds upon and extends an approach recently proposed in the classical geostatistical literature. More precisely, we divide the spatial domain into a fine grid of subregions, each having its own set of dependence-related parameters, and then impose LASSO ($L_1$) or Ridge ($L_2$) penalties to obtain spatially-smooth estimates. We then sequentially merge the subregions with a new algorithm to enhance the model's performance. Here we focus on the popular Brown-Resnick process, although extensions to other classes of max-stable processes are also possible. We discuss practical strategies for adequately defining the subregions and merging them back together. To make our method suitable for high-dimensional datasets, we exploit a pairwise likelihood approach and discuss the choice of pairs needed to achieve reasonable computational and statistical efficiency. We apply our proposed method to a dataset of annual maximum temperatures in Nepal and show that our approach fits the data reasonably well and realistically captures the complex non-stationarity in the extremal dependence.
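As a rough, hypothetical sketch of the fusion mechanism described above (not the thesis's pairwise-likelihood implementation), the snippet below evaluates LASSO- and Ridge-type penalties on the differences between dependence parameters of neighboring subregions; the toy grid, neighbor list, and `fused_penalty` helper are all made up for illustration:

```python
import numpy as np

def fused_penalty(theta, neighbors, lam, penalty="lasso"):
    """Spatial fusion penalty on per-subregion dependence parameters.

    theta     : array with one dependence parameter per subregion
    neighbors : list of (i, j) index pairs of adjacent subregions
    lam       : penalty weight
    penalty   : "lasso" (L1) can drive neighbor differences exactly
                to zero; "ridge" (L2) only shrinks them toward zero
    """
    diffs = np.array([theta[i] - theta[j] for i, j in neighbors])
    if penalty == "lasso":
        return lam * np.sum(np.abs(diffs))
    return lam * np.sum(diffs ** 2)

# Toy 2x2 grid of subregions with a rook neighborhood structure
theta = np.array([1.0, 1.0, 2.0, 2.0])
nbrs = [(0, 1), (0, 2), (1, 3), (2, 3)]
print(fused_penalty(theta, nbrs, 0.5, "lasso"))  # 0.5*(0+1+1+0) = 1.0
print(fused_penalty(theta, nbrs, 0.5, "ridge"))  # 0.5*(0+1+1+0) = 1.0
```

With the $L_1$ version, a neighbor difference estimated as exactly zero is what makes merging adjacent subregions natural, whereas the $L_2$ version only smooths the parameter surface.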
52

Row-Action Methods for Massive Inverse Problems

Slagel, Joseph Tanner 19 June 2019 (has links)
Numerous scientific applications have seen the rise of massive inverse problems, where the data are too large to compute a solution with an all-at-once strategy. Additionally, tools for regularizing ill-posed inverse problems become infeasible when the problem is too large. This thesis focuses on the development of row-action methods, which can be used to iteratively solve inverse problems when it is not possible to access the entire data set or forward model simultaneously. We investigate these techniques for linear inverse problems and for separable, nonlinear inverse problems where the objective function is nonlinear in one set of parameters and linear in another. For the linear problem, we perform a convergence analysis of these methods, which shows favorable asymptotic and initial convergence properties, as well as a step-size-dependent trade-off between convergence rate and precision of the iterates. These row-action methods can be interpreted as stochastic Newton and stochastic quasi-Newton approaches on a reformulation of the least squares problem, and they can be analyzed as limited-memory variants of the recursive least squares algorithm. For ill-posed problems, we introduce sampled regularization parameter selection techniques, which include sampled variants of the discrepancy principle, the unbiased predictive risk estimator, and generalized cross-validation. We demonstrate the effectiveness of these methods using examples from super-resolution imaging, tomography reconstruction, and image classification. / Doctor of Philosophy / Numerous scientific problems have seen the rise of massive data sets. One example is super-resolution, where many low-resolution images are used to construct a high-resolution image; another is 3-D medical imaging, where a 3-D image of an object with hundreds of millions of voxels is reconstructed from x-rays moving through that object.
This work focuses on row-action methods, which numerically solve these problems by repeatedly using smaller samples of the data to avoid the computational burden of using the entire data set at once. When data sets contain measurement errors, the solution can become contaminated with noise. While there are methods to handle this issue, they are no longer feasible when the data set becomes massive. This dissertation develops techniques to keep the solution from becoming contaminated with noise, even when the data set is immense. The methods developed in this work are applied to numerous scientific applications, including super-resolution imaging, tomography, and image classification.
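To make the row-action idea concrete, here is a minimal sketch of the classical Kaczmarz iteration, one of the oldest row-action methods, which touches a single row of the system per step. It is an illustrative stand-in under simplified assumptions, not the stochastic Newton or sampled-regularization variants developed in the thesis:

```python
import numpy as np

def kaczmarz(A, b, sweeps=100):
    """Cyclic Kaczmarz row-action iteration for A x = b.

    Each update uses only one row (a_i, b_i), projecting the current
    iterate onto the hyperplane a_i . x = b_i, so the full matrix never
    needs to be held in memory at once -- the appeal for massive problems.
    """
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(sweeps):
        for i in range(m):
            a = A[i]
            x += (b[i] - a @ x) / (a @ a) * a
    return x

# Consistent, well-conditioned toy system
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = rng.standard_normal(5)
b = A @ x_true
x = kaczmarz(A, b, sweeps=200)
print(np.linalg.norm(A @ x - b))  # residual shrinks toward zero
```

For a consistent system the iterates converge to a solution; for noisy, inconsistent data they stall in a neighborhood of the least-squares solution, which is the rate/precision trade-off the abstract alludes to.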
53

Load Identification using Matrix Inversion Method (MIM) for Transfer Path Analysis (TPA)

Komandur, Deepak K. 28 October 2019 (has links)
No description available.
54

PARAMETER SELECTION RULES FOR ILL-POSED PROBLEMS

Park, Yonggi 19 November 2019 (has links)
No description available.
55

Photon Beam Spectrum Characterization Using Scatter Radiation Analysis

Hawwari, Majd I. 12 April 2010 (has links)
No description available.
56

Joint Enhancement of Multichannel Synthetic Aperture Radar Data

Ramakrishnan, Naveen 19 March 2008 (has links)
No description available.
57

Graph Based Regularization of Large Covariance Matrices

Yekollu, Srikar January 2009 (has links)
No description available.
58

Model-based Regularization for Video Super-Resolution

Wang, Huazhong 04 1900 (has links)
In this thesis, we reexamine the classical problem of video super-resolution, with the aim of reproducing fine edge/texture details of acquired digital videos. In general, video super-resolution reconstruction is an ill-posed inverse problem because of an insufficient number of observations from the registered low-resolution video frames. To stabilize the problem and make its solution more accurate, we develop two video super-resolution techniques: 1) a 2D autoregressive modeling and interpolation technique for video super-resolution reconstruction, with model parameters estimated from multiple registered low-resolution frames; and 2) the use of an image model as a regularization term to improve the performance of the traditional video super-resolution algorithm. We further investigate the interactions of the various unknown variables involved in video super-resolution reconstruction, including motion parameters, high-resolution pixel intensities, and the parameters of the image model used for regularization, and we develop a joint estimation technique that infers these unknowns simultaneously to achieve statistical consistency among them. / Thesis / Master of Applied Science (MASc)
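A minimal sketch of the 2D autoregressive idea, assuming a simple 4-neighbor model fitted by least squares on a single frame (the thesis's actual model, estimated from multiple registered frames and used inside the reconstruction, is more involved):

```python
import numpy as np

def fit_ar2d(frame):
    """Least-squares fit of a 4-neighbor 2D autoregressive model:
    x[i,j] ~ a*x[i-1,j] + b*x[i+1,j] + c*x[i,j-1] + d*x[i,j+1].
    Once fitted, such a model can predict (interpolate) missing
    high-resolution pixels, or act as a regularization prior.
    """
    X = frame
    center = X[1:-1, 1:-1].ravel()
    preds = np.stack([X[:-2, 1:-1].ravel(),   # up neighbor
                      X[2:, 1:-1].ravel(),    # down neighbor
                      X[1:-1, :-2].ravel(),   # left neighbor
                      X[1:-1, 2:].ravel()],   # right neighbor
                     axis=1)
    coef, *_ = np.linalg.lstsq(preds, center, rcond=None)
    return coef

frame = np.add.outer(np.arange(8.0), np.arange(8.0))  # smooth ramp image
coef = fit_ar2d(frame)
print(coef)  # for a locally planar image, ~0.25 per neighbor
```

A locally planar image is exactly the average of its four neighbors, so the fitted coefficients come out near 0.25 each; textured regions yield anisotropic coefficients, which is what lets the AR prior preserve edge direction.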
59

The Inverse Source Problem for Helmholtz

Fernstrom, Hugo, Sträng, Hugo January 2022 (has links)
This paper studies the inverse source problem for the Helmholtz equation with a point source in a two-dimensional domain. Given complete boundary data and an appropriate discretization, Tikhonov regularization is shown to be an effective method for finding the point source. Furthermore, Tikhonov regularization can locate point sources even under significant noise, as well as with incomplete boundary data in complicated domains.
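For readers unfamiliar with the method, here is a generic sketch of Tikhonov regularization for a discretized linear inverse problem; the forward operator below is a synthetic ill-conditioned matrix, not the Helmholtz discretization used in the paper:

```python
import numpy as np

def tikhonov_solve(A, b, alpha):
    """Solve min_x ||Ax - b||^2 + alpha^2 ||x||^2 via the
    regularized normal equations (A^T A + alpha^2 I) x = A^T b.
    The penalty damps the noise-amplifying small singular values
    of A, at the cost of a small bias controlled by alpha."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha**2 * np.eye(n), A.T @ b)

# Synthetic ill-conditioned forward operator with noisy data
rng = np.random.default_rng(0)
n = 10
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = (U * np.logspace(0, -6, n)) @ U.T      # symmetric, condition number ~1e6
x_true = np.ones(n)
b = A @ x_true + 1e-4 * rng.standard_normal(n)

x_naive = np.linalg.solve(A, b)            # noise amplified by 1/sigma_min
x_reg = tikhonov_solve(A, b, alpha=1e-2)   # small singular values damped
print(np.linalg.norm(x_naive - x_true), np.linalg.norm(x_reg - x_true))
```

The unregularized solve blows the measurement noise up by roughly the reciprocal of the smallest singular value, while the regularized solve stays close to the true solution.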
60

Calibration of Option Pricing in Reproducing Kernel Hilbert Space

Ge, Lei 01 January 2015 (has links)
A parameter used in the Black-Scholes equation, volatility, is a measure of the variation of the price of a financial instrument over time. Determining volatility is a fundamental issue in the valuation of financial instruments, and it gives rise to an inverse problem known as the calibration problem for option pricing. This problem is shown to be ill-posed. We propose a regularization method and reformulate our calibration problem as one of finding the local volatility in a reproducing kernel Hilbert space. We define a new volatility function that allows us to capture both the financial and time factors of the options. We discuss the existence of the minimizer using a regularized reproducing kernel method and show that the regularizer resolves the numerical instability of the calibration problem. Finally, we apply the studied method to data sets of index options through simulation tests and discuss the empirical results obtained.
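As a loose illustration of fitting a function in an RKHS with a norm penalty (kernel ridge regression with a Gaussian kernel, which shares the regularization mechanism but is not the paper's specific calibration functional), consider this sketch; the "volatility surface" over strike and maturity is entirely made up:

```python
import numpy as np

def kernel_ridge_fit(X, y, gamma, lam):
    """Kernel ridge regression with a Gaussian (RBF) kernel.

    The fitted function lives in the kernel's RKHS, and lam penalizes
    its RKHS norm -- the same device the abstract uses to stabilize
    the ill-posed calibration problem.
    """
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-gamma * sq)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

    def predict(Xs):
        sq = np.sum((Xs[:, None, :] - X[None, :, :]) ** 2, axis=-1)
        return np.exp(-gamma * sq) @ alpha

    return predict

# Hypothetical smooth local-volatility surface over (strike, maturity),
# both rescaled to [0, 1]
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = 0.2 + 0.1 * X[:, 0] ** 2 + 0.05 * X[:, 1]
vol = kernel_ridge_fit(X, y, gamma=5.0, lam=1e-6)
print(vol(np.array([[0.5, 0.5]])))  # near the true value 0.25
```

Larger `lam` trades fidelity to the observed prices for a smoother, more stable surface, which mirrors the role of the regularizer in the calibration problem.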
