51

Flexible Modeling of Non-Stationary Extremal Dependence Using Spatially-Fused LASSO and Ridge Penalties

Shao, Xuanjie 05 April 2022 (has links)
Statistical modeling of a nonstationary spatial extremal dependence structure is a challenging problem. In practice, parametric max-stable processes are commonly used for modeling spatially-indexed block maxima data, where the stationarity assumption is often made to simplify inference. However, this assumption is unreliable for data observed over a large or complex domain. In this work, we develop a computationally-efficient method to estimate nonstationary extremal dependence using max-stable processes, which builds upon and extends an approach recently proposed in the classical geostatistical literature. More precisely, we divide the spatial domain into a fine grid of subregions, each having its own set of dependence-related parameters, and then impose LASSO ($L_1$) or Ridge ($L_2$) penalties to obtain spatially-smooth estimates. We then sequentially merge the subregions with a new algorithm to further enhance the model's performance. Here we focus on the popular Brown-Resnick process, although extensions to other classes of max-stable processes are also possible. We discuss practical strategies for adequately defining the subregions and merging them back together. To make our method suitable for high-dimensional datasets, we exploit a pairwise likelihood approach and discuss the choice of pairs needed to achieve reasonable computational and statistical efficiency. We apply our proposed method to a dataset of annual maximum temperature in Nepal and show that our approach fits the data reasonably well and realistically captures the complex non-stationarity in the extremal dependence.
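The spatially-fused penalty idea can be illustrated with a toy sketch: each subregion receives its own log-range parameter, and an $L_1$ or $L_2$ penalty on differences between neighbouring subregions is added to a pairwise negative log-likelihood. The `pairwise_nll` below is a hypothetical quadratic stand-in for the Brown-Resnick pairwise likelihood, and the subregion layout and penalty weight are assumptions, so this is only a schematic of the penalized objective, not the method's actual implementation.

```python
import numpy as np
from scipy.optimize import minimize

# Toy setup: the spatial domain is split into 4 subregions, each with its own
# log-range parameter; "edges" lists pairs of neighbouring subregions.
n_regions = 4
edges = [(0, 1), (1, 2), (2, 3), (0, 2)]

def pairwise_nll(log_ranges, target):
    # Hypothetical stand-in for the Brown-Resnick pairwise negative
    # log-likelihood: a simple quadratic misfit so the example runs end-to-end.
    return np.sum((np.exp(log_ranges) - target) ** 2)

def fused_penalty(log_ranges, lam, kind="lasso"):
    # Spatially-fused penalty on differences between neighbouring subregions.
    diffs = np.array([log_ranges[i] - log_ranges[j] for i, j in edges])
    if kind == "lasso":                      # L1: pushes neighbours to merge
        return lam * np.sum(np.abs(diffs))
    return lam * np.sum(diffs ** 2)          # ridge: smooth spatial variation

target = np.array([1.0, 1.1, 2.0, 2.1])      # toy per-region "true" ranges
objective = lambda x: pairwise_nll(x, target) + fused_penalty(x, 0.5, "lasso")
fit = minimize(objective, x0=np.zeros(n_regions), method="Nelder-Mead")
print(np.exp(fit.x))                         # estimated per-region ranges
```

With the LASSO variant, neighbouring subregions whose estimated differences shrink to zero can then be merged into a single region, which is the intuition behind the sequential merging step described above.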
52

Load Identification using Matrix Inversion Method (MIM) for Transfer Path Analysis (TPA)

Komandur, Deepak K. 28 October 2019 (has links)
No description available.
53

PARAMETER SELECTION RULES FOR ILL-POSED PROBLEMS

Park, Yonggi 19 November 2019 (has links)
No description available.
54

The Inverse Source Problem for Helmholtz

Fernstrom, Hugo, Sträng, Hugo January 2022 (has links)
This paper studies the inverse source problem for the Helmholtz equation with a point source in a two-dimensional domain. Given complete boundary data and an appropriate discretization, Tikhonov regularization is shown to be an effective method for locating the point source. Furthermore, we find that Tikhonov regularization can locate point sources even in the presence of significant noise, as well as with incomplete boundary data in complicated domains.
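As a rough illustration of the Tikhonov step (not the paper's actual Helmholtz discretization), a discretized forward operator mapping candidate source strengths to boundary measurements can be regularized as follows; the operator `A` here is just a random matrix standing in for the true discretization, and the noise level and regularization parameter are assumed values.

```python
import numpy as np

# Hypothetical discretized forward operator mapping source strengths at 40
# interior nodes to 60 boundary measurements (a stand-in for the Helmholtz
# discretization used in the paper).
rng = np.random.default_rng(0)
A = rng.normal(size=(60, 40))
x_true = np.zeros(40)
x_true[17] = 1.0                              # point source at node 17
b = A @ x_true + 0.05 * rng.normal(size=60)   # noisy boundary data

alpha = 1e-2                                  # Tikhonov regularization parameter
# Tikhonov solution: argmin_x ||A x - b||^2 + alpha * ||x||^2
x_hat = np.linalg.solve(A.T @ A + alpha * np.eye(40), A.T @ b)
print(int(np.argmax(np.abs(x_hat))))          # recovered source location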
55

Calibration of Option Pricing in Reproducing Kernel Hilbert Space

Ge, Lei 01 January 2015 (has links)
Volatility, a parameter in the Black-Scholes equation, is a measure of the variation of a financial instrument's price over time. Determining volatility is a fundamental issue in the valuation of financial instruments. This gives rise to an inverse problem known as the calibration problem for option pricing, which is shown to be ill-posed. We propose a regularization method and reformulate our calibration problem as a problem of finding the local volatility in a reproducing kernel Hilbert space. We define a new volatility function which allows us to incorporate both the financial and time factors of the options. We discuss the existence of the minimizer by using the regularized reproducing kernel method and show that the regularizer resolves the numerical instability of the calibration problem. Finally, we apply the method to data sets of index options by simulation tests and discuss the empirical results obtained.
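A minimal sketch of fitting a volatility-like surface in an RKHS via kernel ridge regression is shown below; the Gaussian kernel, the synthetic (strike, maturity) data, and the regularization weight are all illustrative assumptions, not the calibration procedure developed in the dissertation.

```python
import numpy as np

# Toy kernel ridge regression: fit a smooth surface sigma(strike, maturity)
# in an RKHS with a Gaussian kernel, stabilized by a Tikhonov-type penalty.
rng = np.random.default_rng(1)
X = rng.uniform(size=(50, 2))                       # scaled (strike, maturity)
y = 0.2 + 0.1 * np.sin(3 * X[:, 0]) * X[:, 1]       # synthetic volatilities

def gauss_kernel(A, B, gamma=10.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

lam = 1e-3                                          # regularization weight
K = gauss_kernel(X, X)
coef = np.linalg.solve(K + lam * np.eye(len(X)), y) # representer theorem
sigma_hat = lambda Z: gauss_kernel(Z, X) @ coef     # fitted volatility surface
print(sigma_hat(np.array([[0.5, 0.5]])))
```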
56

Photon Beam Spectrum Characterization Using Scatter Radiation Analysis

Hawwari, Majd I. 12 April 2010 (has links)
No description available.
57

Joint Enhancement of Multichannel Synthetic Aperture Radar Data

Ramakrishnan, Naveen 19 March 2008 (has links)
No description available.
58

Graph Based Regularization of Large Covariance Matrices

Yekollu, Srikar January 2009 (has links)
No description available.
59

Model-based Regularization for Video Super-Resolution

Wang, Huazhong 04 1900 (has links)
In this thesis, we reexamine the classical problem of video super-resolution, with the aim of reproducing fine edge/texture details of acquired digital videos. In general, video super-resolution reconstruction is an ill-posed inverse problem because of an insufficient number of observations from registered low-resolution video frames. To stabilize the problem and make its solution more accurate, we develop two video super-resolution techniques: 1) a 2D autoregressive modeling and interpolation technique for video super-resolution reconstruction, with model parameters estimated from multiple registered low-resolution frames; 2) the use of an image model as a regularization term to improve the performance of the traditional video super-resolution algorithm. We further investigate the interactions of the various unknown variables involved in video super-resolution reconstruction, including motion parameters, high-resolution pixel intensities, and the parameters of the image model used for regularization. We develop a joint estimation technique that infers these unknowns simultaneously to achieve statistical consistency among them. / Thesis / Master of Applied Science (MASc)
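A 1D toy analogue of the second idea (an image model used as a regularization term) is sketched below: the data term ties the reconstruction to the downsampled observation, while the model term penalizes deviation from an autoregressive prediction. The AR weights and the downsampling pattern are assumed for illustration and do not correspond to the thesis's 2D formulation or its joint estimation procedure.

```python
import numpy as np
from scipy.optimize import minimize

# Toy 1D analogue of model-based regularization for super-resolution:
# y = downsample(x_true) + noise; reconstruct x by penalizing deviation from
# an autoregressive (AR) model  x[n] ~ a1*x[n-1] + a2*x[n+1].
rng = np.random.default_rng(2)
x_true = np.cumsum(rng.normal(size=32))        # smooth-ish "high-res" signal
y = x_true[::2] + 0.1 * rng.normal(size=16)    # "low-res" noisy observation

a = np.array([0.5, 0.5])                       # AR weights (assumed known here)
lam = 1.0                                      # regularization weight

def objective(x):
    data_fit = np.sum((x[::2] - y) ** 2)       # fidelity to the observation
    ar_pred = a[0] * x[:-2] + a[1] * x[2:]     # AR prediction of interior samples
    model_fit = np.sum((x[1:-1] - ar_pred) ** 2)
    return data_fit + lam * model_fit

x_hat = minimize(objective, x0=np.repeat(y, 2), method="L-BFGS-B").x
print(np.round(x_hat[:6], 2))                  # first few reconstructed samples
```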
60

Learning Hyperparameters for Inverse Problems by Deep Neural Networks

McDonald, Ashlyn Grace 08 May 2023 (has links)
Inverse problems arise in a wide variety of applications including biomedicine, environmental sciences, astronomy, and more. Computing reliable solutions to these problems requires the inclusion of prior knowledge in a process that is often referred to as regularization. Most regularization techniques require suitable choices of regularization parameters. In this work, we describe new approaches that use deep neural networks (DNNs) to estimate these regularization parameters. We train multiple networks to approximate mappings from observation data to individual regularization parameters in a supervised learning approach. Once the networks are trained, we can efficiently compute regularization parameters for newly-obtained data by forward propagation through the DNNs. The network-obtained regularization parameters can be computed more efficiently and may even lead to more accurate solutions compared to existing regularization parameter selection methods. Numerical results for tomography demonstrate the potential benefits of using DNNs to learn regularization parameters. / Master of Science / Inverse problems arise in a wide variety of applications including biomedicine, environmental sciences, astronomy, and more. With these types of problems, the goal is to reconstruct an approximation of the original input when we can only observe the output. However, the output often includes some sort of noise or error, which means that computing reliable solutions to these problems is difficult. In order to combat this problem, we can include prior knowledge about the solution in a process that is often referred to as regularization. Most regularization techniques require suitable choices of regularization parameters. In this work, we describe new approaches that use deep neural networks (DNNs) to obtain these parameters. We train multiple networks to approximate mappings from observation data to individual regularization parameters in a supervised learning approach. Once the networks are trained, we can efficiently compute regularization parameters for newly-obtained data by forward propagation through the DNNs. The network-obtained regularization parameters can be computed more efficiently and may even lead to more accurate solutions compared to existing regularization parameter selection methods. Numerical results for tomography demonstrate the potential of using DNNs to learn regularization parameters.
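The supervised setup described above can be sketched in a few lines: pairs of (observation, good regularization parameter) are generated from simulated Tikhonov problems, and a small network learns the mapping from data to parameter. Everything here (the random forward operator, the brute-force choice of the "best" alpha, and the use of scikit-learn's MLPRegressor) is an illustrative assumption rather than the networks trained in the thesis.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Simulate training pairs: observation b -> a good Tikhonov parameter alpha,
# then train a small network to predict alpha directly from b.
rng = np.random.default_rng(3)
A = rng.normal(size=(30, 20))                 # hypothetical forward operator
alphas = np.logspace(-4, 1, 40)               # candidate regularization values

def best_alpha(b, x_true):
    # Brute-force "oracle" choice: the alpha minimizing reconstruction error.
    errs = [np.linalg.norm(
                np.linalg.solve(A.T @ A + a * np.eye(20), A.T @ b) - x_true)
            for a in alphas]
    return alphas[int(np.argmin(errs))]

X_train, y_train = [], []
for _ in range(200):
    x = rng.normal(size=20)
    b = A @ x + 0.1 * rng.normal(size=30)
    X_train.append(b)
    y_train.append(np.log10(best_alpha(b, x)))   # learn log10(alpha)

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
net.fit(np.array(X_train), np.array(y_train))
print(10 ** net.predict(np.array(X_train[:3])))  # predicted alphas
```

At deployment time, only a forward pass through the trained network is needed for each new observation, which is what makes this approach attractive compared with re-running a parameter selection rule per problem.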
