11

Parameter Estimation In Generalized Partial Linear Models With Conic Quadratic Programming

Celik, Gul 01 September 2010
In statistics, regression analysis is a technique used to understand and model the relationship between a dependent variable and one or more independent variables. Multivariate Adaptive Regression Splines (MARS) is a form of regression analysis: a non-parametric technique that can be seen as an extension of linear models which automatically models non-linearities and interactions. MARS is important in both classification and regression, with an increasing number of applications in many areas of science, economics and technology.

In our study, we analyzed Generalized Partial Linear Models (GPLMs), which are particular semiparametric models. GPLMs separate the input variables into two parts and additively combine a classical linear model with a nonlinear model part. In order to smooth this nonparametric part, we use Conic Multivariate Adaptive Regression Splines (CMARS), a modified form of MARS. MARS is very beneficial for high-dimensional problems and does not require any particular class of relationship between the regressor variables and the outcome variable of interest, which makes it well suited to fitting nonlinear multivariate functions. Moreover, the contribution of the basis functions can be estimated by MARS, so that both the additive and the interaction effects of the regressors are allowed to determine the dependent variable. The MARS algorithm has two steps: the forward and backward stepwise algorithms. In the first step, the model is constructed by adding basis functions until a maximum level of complexity is reached. In the second step, the backward stepwise algorithm reduces the complexity by removing the least significant basis functions from the model.

In this thesis, we suggest not using the backward stepwise algorithm; instead, we employ a Penalized Residual Sum of Squares (PRSS), constructed for MARS as a Tikhonov regularization problem. We treat this problem with continuous optimization techniques, which we consider an important complementary technology and alternative to the backward stepwise algorithm. In particular, we apply the elegant framework of Conic Quadratic Programming (CQP), a very well-structured area of convex optimization that resembles linear programming and therefore permits the use of interior point methods. At the end of this study, we compare CQP with the Tikhonov regularization problem on two data sets, one with and one without interaction effects. Moreover, using two further data sets, we compare CMARS with two other classification methods, Infinite Kernel Learning (IKL) and Tikhonov regularization, whose results are taken from a companion thesis still in progress.
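As a schematic of the construction described above (the notation, with basis matrix ψ, penalty matrix L and penalty bound M̃, is introduced here for illustration and is not quoted from the thesis), the PRSS in Tikhonov form and its conic quadratic reformulation read:

```latex
\[
  \mathrm{PRSS}(\theta) \;=\; \|y - \psi\,\theta\|_2^2 \;+\; \lambda\,\|L\theta\|_2^2 ,
\]
where $\psi$ collects the MARS basis functions evaluated at the data and $L$
discretizes the second-derivative (roughness) penalties. The epigraph form
\[
  \min_{t,\,\theta}\; t
  \quad\text{s.t.}\quad \|\psi\theta - y\|_2 \le t, \qquad \|L\theta\|_2 \le \sqrt{\tilde{M}},
\]
is a conic quadratic program: each constraint is a second-order cone, so
interior point methods apply directly.
```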
12

Regularization of Parameter Problems for Dynamic Beam Models

Rydström, Sara January 2010
The field of inverse problems is an area in applied mathematics of great importance in several scientific and industrial applications. Since an inverse problem is typically founded on non-linear and ill-posed models, it is very difficult to solve. To find a regularized solution it is crucial to have a priori information about the solution; therefore, general theories are not sufficient for new applications.

In this thesis we consider the inverse problem of determining the beam bending stiffness from measurements of the transverse dynamic displacement. Of special interest is localizing parts with reduced bending stiffness. Driven by requirements in the wood industry, it is not enough to consider time-efficient algorithms; the models must also be adapted to allow extremely short calculation times.

To develop efficient methods, inverse problems based on the fourth-order Euler-Bernoulli beam equation and the second-order string equation are studied. Important results are the transformation of a nonlinear regularization problem into a linear one and a convex procedure for finding parts with reduced bending stiffness.
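A minimal sketch of the linear Tikhonov step such methods rest on, assuming a generic ill-conditioned forward matrix and a second-difference smoothing operator (all names, sizes, and the toy kernel below are illustrative, not taken from the thesis):

```python
import numpy as np

def tikhonov_solve(A, b, alpha):
    """Solve min ||A x - b||^2 + alpha^2 ||L x||^2 via the normal equations;
    L is a second-difference operator penalizing rough stiffness profiles."""
    n = A.shape[1]
    L = np.diff(np.eye(n), n=2, axis=0)   # rows of the form [1, -2, 1]
    return np.linalg.solve(A.T @ A + alpha**2 * (L.T @ L), A.T @ b)

# Toy ill-posed problem: a smoothing (first-kind) kernel acting on a
# stiffness-like profile with a locally reduced section.
rng = np.random.default_rng(0)
n = 100
t = np.linspace(0, 1, n)
A = np.exp(-100.0 * (t[:, None] - t[None, :])**2) / n
x_true = 1.0 - 0.5 * ((t > 0.4) & (t < 0.6))
b = A @ x_true + 1e-4 * rng.standard_normal(n)
x_rec = tikhonov_solve(A, b, alpha=1e-3)   # regularized stiffness estimate
```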
13

Maximum entropy regularization for calibrating a time-dependent volatility function

Hofmann, Bernd; Krämer, Romy 26 August 2004
We investigate the applicability of the method of maximum entropy regularization (MER), including convergence and convergence rates of regularized solutions, to the specific inverse problem (SIP) of calibrating a purely time-dependent volatility function. In this context, we extend the results of [16] and [17] in some detail. Due to the explicit structure of the forward operator, based on a generalized Black-Scholes formula, the ill-posedness of the nonlinear inverse problem (SIP) can be verified. Numerical case studies illustrate the chances and limitations of MER versus Tikhonov regularization (TR) for smooth solutions and for solutions with a sharp peak.
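A sketch of maximum entropy regularization on a generic discretized problem; the forward operator, reference element x_ref, and all parameters below are placeholders, not the generalized Black-Scholes operator of the paper:

```python
import numpy as np
from scipy.optimize import minimize

def mer_fit(A, y, x_ref, alpha):
    """Minimize ||A x - y||^2 + alpha * sum(x log(x/x_ref) - x + x_ref) over
    positive x; the entropy term enforces positivity and closeness to x_ref."""
    def objective(z):
        x = np.exp(z)                     # positivity via substitution x = e^z
        resid = A @ x - y
        entropy = np.sum(x * np.log(x / x_ref) - x + x_ref)
        return resid @ resid + alpha * entropy
    res = minimize(objective, np.log(x_ref), method="L-BFGS-B")
    return np.exp(res.x)

rng = np.random.default_rng(1)
n = 40
A = np.tril(np.ones((n, n))) / n          # toy causal integration operator
x_true = 0.2 + 0.8 * np.exp(-50 * (np.linspace(0, 1, n) - 0.5)**2)
y = A @ x_true + 1e-3 * rng.standard_normal(n)
x_mer = mer_fit(A, y, x_ref=np.full(n, 0.5), alpha=1e-3)
```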
14

Algorithms for Toeplitz Matrices with Applications to Image Deblurring

Kimitei, Symon Kipyagwai 21 April 2008
In this thesis, we present the O(n(log n)^2) superfast linear least squares Schur algorithm (ssschur). The algorithm illustrates a fast way of solving linear equations or linear least squares problems with low displacement rank, and is based on the O(n^2) Schur algorithm sped up via the FFT. The algorithm solves an ill-conditioned Toeplitz-like system using Tikhonov regularization; the regularized system is Toeplitz-like of displacement rank 4. We also show the effect of the choice of regularization parameter on the quality of the reconstructed image.
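Not the superfast Schur algorithm itself, but a sketch of the structure it exploits: FFT-based Toeplitz products feeding a conjugate-gradient solve of the Tikhonov normal equations (the blur kernel and sizes below are illustrative):

```python
import numpy as np
from scipy.linalg import matmul_toeplitz
from scipy.sparse.linalg import LinearOperator, cg

def regularized_toeplitz_solve(c, r, b, lam):
    """Solve (T^T T + lam I) x = T^T b, where T is the Toeplitz matrix with
    first column c and first row r; all products use FFTs in O(n log n)."""
    n = len(c)
    def matvec(x):
        Tx = matmul_toeplitz((c, r), x)        # T x via FFT
        TtTx = matmul_toeplitz((r, c), Tx)     # T^T y: column/row roles swap
        return TtTx + lam * x
    A = LinearOperator((n, n), matvec=matvec, dtype=float)
    rhs = matmul_toeplitz((r, c), b)           # T^T b
    x, info = cg(A, rhs)
    return x

# Toy 1-D deblurring: a Gaussian blur as a symmetric Toeplitz operator.
n = 256
k = np.exp(-0.5 * (np.arange(n) / 2.0)**2)
c = r = k / k.sum()
x_true = np.zeros(n); x_true[60:80] = 1.0
b = matmul_toeplitz((c, r), x_true) + 1e-3 * np.random.default_rng(2).standard_normal(n)
x_rec = regularized_toeplitz_solve(c, r, b, lam=1e-2)
```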
15

Theoretical and Numerical Study of Tikhonov's Regularization and Morozov's Discrepancy Principle

Whitney, MaryGeorge L. 01 December 2009
The concept of a well-posed problem was introduced by J. Hadamard in 1923, who expressed the idea that every mathematical model should have a unique solution that is stable with respect to noise in the input data. If at least one of those properties is violated, the problem is ill-posed (and unstable). There are numerous examples of ill-posed problems in computational mathematics and its applications. Classical numerical algorithms, when used on an ill-posed model, turn out to be divergent. Hence one has to develop special regularization techniques, which take advantage of a priori information (normally available), in order to solve an ill-posed problem in a stable fashion. In this thesis, a theoretical and numerical investigation of Tikhonov's (variational) regularization is presented. The regularization parameter is computed by Morozov's discrepancy principle, and a first-kind integral equation is used for the numerical simulations.
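A sketch of Morozov's discrepancy principle for a linear problem: pick the regularization parameter alpha so that the residual norm matches the noise level delta (the first-kind kernel and noise level below are illustrative, not the thesis's test problem):

```python
import numpy as np

def tikhonov(A, b, alpha):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

def morozov_alpha(A, b, delta, lo=1e-12, hi=1e2, iters=60):
    """Bisect on log(alpha) for ||A x_alpha - b|| = delta; the residual norm
    increases monotonically in alpha, so bisection converges."""
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        resid = np.linalg.norm(A @ tikhonov(A, b, mid) - b)
        lo, hi = (mid, hi) if resid < delta else (lo, mid)
    return np.sqrt(lo * hi)

rng = np.random.default_rng(3)
n = 80
t = np.linspace(0, 1, n)
A = np.minimum(t[:, None], t[None, :]) / n        # toy first-kind integral kernel
x_true = np.sin(2 * np.pi * t)
noise = 1e-4 * rng.standard_normal(n)
b = A @ x_true + noise
alpha = morozov_alpha(A, b, delta=np.linalg.norm(noise))
x_rec = tikhonov(A, b, alpha)
```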
16

Tikhonov regularization with oversmoothing penalties

Gerth, Daniel 21 December 2016
In the last decade, l1-regularization has become a powerful and popular tool for the regularization of inverse problems. While in the early years sparse solutions were the focus of research, more recently the case where the coefficients of the exact solution merely decay sufficiently fast has also come under consideration. In this paper we seek to show that l1-regularization is applicable and leads to optimal convergence rates even when the exact solution does not belong to l1 but only to l2. This is a particular example of oversmoothing regularization, i.e., the penalty implies smoothness properties that the exact solution does not fulfill. We also make some statements on convergence in this more general context.
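A minimal iterative soft-thresholding (ISTA) sketch for the l1-penalized problem min ||Ax - y||^2 + alpha ||x||_1; the paper's contribution is the convergence analysis, not this algorithm, and the step size and data below are illustrative:

```python
import numpy as np

def ista(A, y, alpha, iters=500):
    """Iterative soft-thresholding for min ||A x - y||_2^2 + alpha ||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2)**2      # safe step from the spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - y))    # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - step * alpha / 2, 0.0)
    return x

rng = np.random.default_rng(4)
m, n = 60, 120
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = 1.0 / (1.0 + np.arange(n))**2        # not sparse, but decays fast
y = A @ x_true + 1e-3 * rng.standard_normal(m)
x_rec = ista(A, y, alpha=1e-3)
```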
17

Multihypothesis Prediction for Compressed Sensing and Super-Resolution of Images

Chen, Chen 12 May 2012
A process for the use of multihypothesis prediction in the reconstruction of images is proposed, for use both in compressed-sensing reconstruction and in single-image super-resolution. Specifically, for compressed-sensing reconstruction of a single still image, multiple predictions for an image block are drawn from spatially surrounding blocks within an initial non-predicted reconstruction. The predictions are used to generate a residual in the domain of the compressed-sensing random projections. This residual, being typically more compressible than the original signal, leads to improved reconstruction quality. To appropriately weight the hypothesis predictions, a Tikhonov regularization of the resulting ill-posed least-squares optimization is proposed. An extension of this framework to the compressed-sensing reconstruction of hyperspectral imagery is also studied. Finally, the multihypothesis paradigm is employed for single-image super-resolution, wherein each patch of a low-resolution image is represented as a linear combination of spatially surrounding hypothesis patches.
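A sketch of the Tikhonov-weighted hypothesis combination described above, in the projected domain; the projection Phi, the hypothesis matrix H, and the distance-based diagonal penalty are schematic stand-ins for the thesis's construction:

```python
import numpy as np

def mh_weights(Phi, H, y, lam):
    """Weights w combining the hypothesis columns of H so that Phi @ (H @ w)
    matches the measurements y: min ||y - Phi H w||^2 + lam^2 ||G w||^2,
    where G penalizes hypotheses whose projections lie far from y."""
    PH = Phi @ H
    g = np.linalg.norm(PH - y[:, None], axis=0)      # per-hypothesis distance
    G = np.diag(g)
    lhs = PH.T @ PH + lam**2 * (G.T @ G)
    return np.linalg.solve(lhs, PH.T @ y)

rng = np.random.default_rng(5)
d, m, k = 64, 16, 10                   # block size, measurements, hypotheses
Phi = rng.standard_normal((m, d)) / np.sqrt(m)       # random projections
H = rng.standard_normal((d, k))                      # candidate predictions
x_true = H[:, 0] + 0.1 * rng.standard_normal(d)      # block near hypothesis 0
y = Phi @ x_true
w = mh_weights(Phi, H, y, lam=0.1)
x_pred = H @ w                                       # multihypothesis prediction
```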
18

Using regularization for error reduction in GRACE gravity estimation

Save, Himanshu Vijay 02 June 2010
The Gravity Recovery and Climate Experiment (GRACE) is a joint National Aeronautics and Space Administration / Deutsches Zentrum für Luft- und Raumfahrt (NASA/DLR) mission to map the time-variable and mean gravity field of the Earth; it was launched on March 17, 2002. The nature of the gravity-field inverse problem amplifies the noise in the data, which creeps into the mid and high degree-and-order harmonic coefficients of the monthly gravity fields, making the GRACE estimation problem ill-posed. These errors, due to the use of imperfect models and to data noise, manifest themselves as north-south striping in the monthly global maps of equivalent water height. In order to reduce these errors, this study develops a methodology based on Tikhonov regularization using the L-curve method in combination with an orthogonal transformation method. The L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems with Tikhonov regularization. However, the computational effort required to determine the L-curve can be prohibitive for a problem as large as GRACE. This study therefore implements a parameter-choice method based on Lanczos bidiagonalization, the L-ribbon, which is a computationally inexpensive approximation to the L-curve; it projects the large estimation problem onto a problem about two orders of magnitude smaller. Using knowledge of the characteristics of the systematic errors in the GRACE solutions, this study also designs a new regularization matrix that reduces the systematic errors without attenuating the signal; the matrix constrains the geopotential coefficients as a function of their degree and order. The regularization algorithms are implemented in a parallel computing environment. A five-year time series of the candidate regularized solutions shows markedly reduced systematic errors without any reduction in the variability signal compared to the unconstrained solutions. The variability signals in the regularized series show good agreement with hydrological models in small and medium-sized river basins, and also show non-seasonal signals in the oceans, without the need for post-processing.
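A sketch of the L-curve parameter choice on a small dense problem, evaluated directly rather than via the Lanczos-based L-ribbon the study actually uses for GRACE-scale systems (the test matrix and grid below are illustrative):

```python
import numpy as np

def l_curve_alpha(A, b, alphas):
    """Trace the L-curve (log residual norm vs. log solution norm) over a grid
    of alphas and return the alpha at the corner of maximum curvature."""
    n = A.shape[1]
    rho, eta = [], []
    for a in alphas:
        x = np.linalg.solve(A.T @ A + a * np.eye(n), A.T @ b)
        rho.append(np.log(np.linalg.norm(A @ x - b)))
        eta.append(np.log(np.linalg.norm(x)))
    rho, eta, t = np.array(rho), np.array(eta), np.log(alphas)
    r1, e1 = np.gradient(rho, t), np.gradient(eta, t)
    r2, e2 = np.gradient(r1, t), np.gradient(e1, t)
    curvature = (r1 * e2 - r2 * e1) / (r1**2 + e1**2)**1.5
    return alphas[np.argmax(curvature)]

rng = np.random.default_rng(6)
n = 60
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(0.9**np.arange(n)) @ U.T          # rapidly decaying spectrum
b = A @ rng.standard_normal(n) + 1e-5 * rng.standard_normal(n)
alpha = l_curve_alpha(A, b, alphas=np.logspace(-12, 0, 60))
```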
19

On Solving Some Inverse Problems in Physical Geodesy (Résolution de problèmes inverses en géodésie physique)

Abdelmoula, Amine 20 December 2013
This work treats two problems of great importance in physical geodesy. The first concerns the determination of the geoid over a given region of the Earth. If the Earth were a homogeneous sphere, the gravity at a point would be entirely determined by its distance to the centre of the Earth, or equivalently by its altitude. Since the Earth is neither spherical nor homogeneous, gravity must be computed at every point. Starting from a reference ellipsoid, we seek the correction to a first approximation of the gravity field that yields a geoid, i.e. a surface on which the gravitational potential is constant. The method used is least-squares collocation, which serves to solve large generalized least-squares problems.

The second part of this thesis concerns a geodetic inverse problem: finding a distribution of point masses (characterized by their intensities and positions) such that the potential they generate best approximates a given potential field. Over the whole Earth, a potential function is usually expressed in terms of spherical harmonics, which are basis functions with global support on the sphere, and the sought potential is identified by solving a least-squares problem. When only a limited area of the Earth is studied, however, the estimation of the point-mass parameters by means of spherical harmonics is prone to error, since these basis functions are no longer orthogonal over a partial domain of the sphere. The point-mass determination problem on a limited region is therefore treated by constructing a Slepian basis that is orthogonal over the specified limited domain of the sphere. We propose an iterative algorithm for the numerical solution of the local point-mass determination problem and give some results on the robustness of this reconstruction process. We also study the stability of this problem with respect to added noise, and we present and discuss some numerical results.
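A sketch of the linear sub-step of such a point-mass fit: with candidate positions held fixed, the intensities solve an ordinary least-squares problem with a Newtonian kernel (positions, observation points, and units below are illustrative, and the gravitational constant is absorbed into the intensities):

```python
import numpy as np

def fit_point_masses(obs_pts, obs_pot, mass_pts):
    """Least-squares intensities m so that sum_j m_j / ||x - p_j|| matches the
    observed potential at the observation points obs_pts."""
    # Design matrix: K[i, j] = 1 / ||obs_i - mass_j||
    diff = obs_pts[:, None, :] - mass_pts[None, :, :]
    K = 1.0 / np.linalg.norm(diff, axis=2)
    m, *_ = np.linalg.lstsq(K, obs_pot, rcond=None)
    return m

rng = np.random.default_rng(7)
obs_pts = rng.standard_normal((200, 3)) * 0.1 + np.array([0.0, 0.0, 1.2])
true_pts = rng.standard_normal((5, 3)) * 0.2           # buried sources
true_m = rng.uniform(0.5, 1.5, size=5)
diff = obs_pts[:, None, :] - true_pts[None, :, :]
obs_pot = (1.0 / np.linalg.norm(diff, axis=2)) @ true_m
m_est = fit_point_masses(obs_pts, obs_pot, mass_pts=true_pts)
```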
20

Parameter Estimation In Generalized Partial Linear Models With Tikhonov Regularization

Kayhan, Belgin 01 September 2010
Regression analysis refers to techniques for modeling and analyzing the relationships among several variables in statistical learning, and it comes in various types. In our study, we analyzed Generalized Partial Linear Models (GPLMs), which decompose the input variables into two sets and additively combine a classical linear model with a nonlinear model part. By separating the linear model from the nonlinear one, the inverse-problem method of Tikhonov regularization can be applied to the nonlinear submodel separately, within the entire GPLM. Such a representation of the submodels provides both better accuracy and better stability (regularity) under noise in the data.

We aim to smooth the nonparametric part of the GPLM by using a modified form of Multivariate Adaptive Regression Splines (MARS), which is very useful for high-dimensional problems and does not impose any specific relationship between the predictor and dependent variables. Instead, it estimates the contribution of the basis functions so that both the additive and the interaction effects of the predictors are allowed to determine the dependent variable. The MARS algorithm has two steps: the forward and backward stepwise algorithms. In the first, the model is built by adding basis functions until a maximum level of complexity is reached; the backward stepwise algorithm then removes the least significant basis functions from the model. In this study, we propose to use a penalized residual sum of squares (PRSS) instead of the backward stepwise algorithm, and we construct the PRSS for MARS as a Tikhonov regularization problem.

We provide numerical examples with two data sets, one with interaction effects and one without. As well as studying the regularization of the nonparametric part, we also treat, theoretically, the regularization of the parametric part. Furthermore, we compare Infinite Kernel Learning (IKL) and Tikhonov regularization on two data sets that differ in their (non-)homogeneity. The thesis concludes with an outlook on future research.
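A sketch of the additive decomposition a GPLM rests on, backfitting between the linear coefficients and a Tikhonov-smoothed nonparametric part; the simple roughness-penalty smoother below stands in for the MARS-based machinery of the thesis, and all names are illustrative:

```python
import numpy as np

def fit_gplm(X_lin, t, y, alpha, iters=20):
    """Backfit y ~ X_lin @ beta + f(t): beta by least squares on one partial
    residual, f by a Tikhonov (roughness-penalty) smoother on the other."""
    n = len(y)
    order = np.argsort(t)
    D = np.diff(np.eye(n), n=2, axis=0)              # roughness penalty on f
    S = np.linalg.inv(np.eye(n) + alpha * D.T @ D)   # linear smoother matrix
    f = np.zeros(n)
    for _ in range(iters):
        beta, *_ = np.linalg.lstsq(X_lin, y - f, rcond=None)
        r = (y - X_lin @ beta)[order]                # smooth in t-order
        f[order] = S @ r
        f -= f.mean()                                # identifiability: center f
    return beta, f

rng = np.random.default_rng(8)
n = 200
X_lin = rng.standard_normal((n, 3))
t = rng.uniform(0, 1, n)
y = X_lin @ np.array([1.0, -2.0, 0.5]) + np.sin(2 * np.pi * t) \
    + 0.1 * rng.standard_normal(n)
beta, f = fit_gplm(X_lin, t, y, alpha=10.0)
```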
