151. Algorithms for Toeplitz Matrices with Applications to Image Deblurring. Kimitei, Symon Kipyagwai (21 April 2008).
In this thesis, we present the O(n(log n)^2) superfast linear least squares Schur algorithm (ssschur). The algorithm illustrates a fast way of solving linear equations or linear least squares problems with low displacement rank. It is based on the O(n^2) Schur algorithm, sped up via the FFT. The algorithm solves an ill-conditioned Toeplitz-like system using Tikhonov regularization; the regularized system is Toeplitz-like of displacement rank 4. We also show the effect of the choice of the regularization parameter on the quality of the reconstructed image.
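As a minimal illustration of the Tikhonov step described above (a dense direct solve, not the superfast Schur algorithm; the Gaussian blur matrix and all parameter values below are hypothetical), a 1-D Toeplitz deblurring problem could be regularized as follows:

    import numpy as np
    from scipy.linalg import toeplitz

    def tikhonov_deblur(T, b, lam):
        """Solve min ||T x - b||^2 + lam^2 ||x||^2 via the regularized normal equations."""
        n = T.shape[1]
        return np.linalg.solve(T.T @ T + lam**2 * np.eye(n), T.T @ b)

    # Hypothetical 1-D blur: a Toeplitz matrix built from a Gaussian point-spread function.
    n = 200
    psf = np.exp(-0.5 * (np.arange(n) / 2.0) ** 2)
    T = toeplitz(psf)
    x_true = np.zeros(n)
    x_true[60:80] = 1.0
    b = T @ x_true + 1e-3 * np.random.randn(n)
    x_rec = tikhonov_deblur(T, b, lam=1e-2)

Varying lam reproduces the trade-off the abstract points to: too small a value amplifies noise in the reconstruction, too large a value over-smooths it.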
152. Theoretical and Numerical Study of Tikhonov's Regularization and Morozov's Discrepancy Principle. Whitney, MaryGeorge L. (1 December 2009).
The concept of a well-posed problem was introduced by J. Hadamard in 1923, who expressed the idea that every mathematical model should have a unique solution that is stable with respect to noise in the input data. If at least one of these properties is violated, the problem is ill-posed (and unstable). There are numerous examples of ill-posed problems in computational mathematics and its applications. Classical numerical algorithms, when applied to an ill-posed model, turn out to be divergent. Hence one has to develop special regularization techniques, which take advantage of a priori information (normally available), in order to solve an ill-posed problem in a stable fashion. In this thesis, a theoretical and numerical investigation of Tikhonov's (variational) regularization is presented. The regularization parameter is computed by the discrepancy principle of Morozov, and a first-kind integral equation is used for the numerical simulations.
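As a sketch of how Morozov's discrepancy principle selects the regularization parameter (a generic bisection wrapped around a dense Tikhonov solve; the safety factor tau and the search interval are illustrative assumptions), one looks for the lambda at which the residual norm just matches tau times the noise level delta:

    import numpy as np

    def tikhonov_solve(A, b, lam):
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

    def morozov_lambda(A, b, delta, tau=1.1, lam_lo=1e-12, lam_hi=1e2, iters=60):
        """Bisect on log(lambda) until ||A x_lam - b|| is close to tau * delta.
        The Tikhonov residual norm increases monotonically with lambda."""
        target = tau * delta
        for _ in range(iters):
            lam = np.sqrt(lam_lo * lam_hi)      # geometric midpoint
            resid = np.linalg.norm(A @ tikhonov_solve(A, b, lam) - b)
            if resid > target:
                lam_hi = lam                    # over-regularized: shrink lambda
            else:
                lam_lo = lam
        return np.sqrt(lam_lo * lam_hi)

For a discretized first-kind integral equation, A would be the quadrature matrix of the kernel and delta an estimate of the norm of the data noise.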
153. Regularization of the AVO inverse problem by means of a multivariate Cauchy probability distribution. Alemie, Wubshet M. (date unknown).
No description available.
154. Contributions to statistical learning and its applications in personalized medicine. Valencia Arboleda, Carlos Felipe (16 May 2013).
This dissertation is, in general, about finding stable solutions to statistical models with a very large number of parameters and analyzing their asymptotic statistical properties. In particular, it centers on the study of regularization methods based on penalized estimation. Such procedures find an estimator as the solution of an optimization problem that balances fit to the data against the plausibility of the estimate. The first chapter studies a smoothness regularization estimator for an infinite-dimensional parameter in an exponential family model with functional predictors. We focus on the reproducing kernel Hilbert space approach and show that, despite the generality of the method, minimax optimal convergence rates are achieved. To derive the asymptotic analysis of the estimator, we develop a simultaneous diagonalization tool for two positive definite operators: the kernel operator and the operator defined by the second Fréchet derivative of the expected data-fit functional. Using this simultaneous diagonalization tool, sharper bounds on the minimax rates are obtained. The second chapter studies the statistical properties of the method of regularization using radial basis functions in the context of linear inverse problems. Regularization here serves two purposes: creating a stable solution for the inverse problem and preventing over-fitting in the nonparametric estimation of the functional target. Different degrees of ill-posedness in the inversion of the operator A are considered: mildly and severely ill-posed. We also study different types of radial basis kernels, classified by the strength of the penalization norm: Gaussian, multiquadric, and spline-type kernels. The third chapter deals with the individualized treatment rule (ITR) problem and analyzes its solution through discriminant analysis. In the ITR problem, treatment assignment is based on the particular patient's prognostic covariates in order to maximize some reward function. Data generated from a randomized clinical trial are considered. Maximizing the empirical value function is an NP-hard computational problem. We consider estimating the decision rule directly by maximizing the expected value, using a surrogate function to make the optimization problem computationally feasible (convex programming). Necessary and sufficient conditions for infinite-sample consistency of the surrogate function are found for different scenarios: binary treatment selection, treatment selection with withholding, and multi-treatment selection.
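As a loose, simplified illustration of penalized estimation with an RKHS smoothness penalty (a Gaussian-response kernel ridge regression rather than the exponential-family functional model studied in the thesis; the Gaussian kernel and all parameter values are assumptions):

    import numpy as np

    def gaussian_kernel(X, Z, sigma=1.0):
        d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * sigma**2))

    def kernel_ridge_fit(X, y, lam=1e-2, sigma=1.0):
        """Minimize (1/n) sum_i (y_i - f(x_i))^2 + lam * ||f||_H^2 over the RKHS.
        By the representer theorem, f(x) = sum_i alpha_i k(x_i, x)."""
        K = gaussian_kernel(X, X, sigma)
        n = len(y)
        return np.linalg.solve(K + n * lam * np.eye(n), y)

    def kernel_ridge_predict(alpha, X_train, X_new, sigma=1.0):
        return gaussian_kernel(X_new, X_train, sigma) @ alpha

The penalty weight lam is the smoothness regularization parameter; how it decays with the sample size is what governs convergence rates in analyses of this kind.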
155. Topics on Regularization of Parameters in Multivariate Linear Regression. Chen, Lianfu (December 2011).
My dissertation mainly focuses on the regularization of parameters in multivariate linear regression under different assumptions on the distribution of the errors. It consists of two topics in which we develop iterative procedures to construct sparse estimators of both the regression coefficient and scale matrices simultaneously, and a third topic in which we develop a method for testing whether the skewness parameter of the skew-normal distribution is parallel to one of the eigenvectors of the scale matrix.
In the first project, we propose a robust procedure for constructing a sparse estimator of a multivariate regression coefficient matrix that accounts for the correlations of the response variables. Robustness to outliers is achieved using heavy-tailed t distributions for the multivariate response, and shrinkage is introduced by adding to the negative log-likelihood l1 penalties on the entries of both the regression coefficient matrix and the precision matrix of the responses. Taking advantage of the hierarchical representation of a multivariate t distribution as a scale mixture of normal distributions, and of the EM algorithm, the optimization problem is solved iteratively: at each EM iteration, suitably modified versions of the multivariate regression with covariance estimation (MRCE) algorithms proposed by Rothman, Levina and Zhu are used. We propose two new optimization algorithms for the penalized likelihood, called MRCEI and MRCEII, which differ from MRCE in how the tuning parameters for the two matrices are selected. Estimating the degrees of freedom when penalizing the entries of the matrices presents new computational challenges. A simulation study and a real data analysis demonstrate that MRCEII, which selects the tuning parameter of the precision matrix of the multiple responses using the Cp criterion, generally performs best among all methods considered in terms of prediction error, and that MRCEI outperforms the MRCE methods when the regression coefficient matrix is less sparse.
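A heavily simplified numerical sketch of the coefficient update that underlies such methods (the response precision matrix Omega held fixed, no t-distribution weights, and no tuning-parameter selection, so this is not MRCEI/MRCEII itself) is an l1-penalized multivariate least squares solved by proximal gradient descent:

    import numpy as np

    def soft_threshold(M, t):
        return np.sign(M) * np.maximum(np.abs(M) - t, 0.0)

    def l1_multivariate_regression(X, Y, Omega, lam, n_iter=500):
        """Minimize (1/n) tr[(Y - X B) Omega (Y - X B)^T] + lam * ||B||_1 over B
        by proximal gradient (ISTA), with Omega a fixed response precision matrix."""
        n, p = X.shape
        q = Y.shape[1]
        B = np.zeros((p, q))
        # Crude Lipschitz bound for the gradient of the smooth term.
        L = 2.0 / n * np.linalg.norm(X, 2) ** 2 * np.linalg.norm(Omega, 2)
        for _ in range(n_iter):
            grad = -2.0 / n * X.T @ (Y - X @ B) @ Omega
            B = soft_threshold(B - grad / L, lam / L)
        return B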
The second project is motivated by the existence of skewness in data for which the symmetric-distribution assumption on the errors does not hold. We extend the procedure we have proposed to the case where the errors in the multivariate linear regression follow a multivariate skew-normal or skew-t distribution. Based on convenient representations of the skew-normal and skew-t distributions, together with the EM algorithm, we develop an optimization algorithm, called MRST, that iteratively minimizes the negative penalized log-likelihood. We also carry out a simulation study to assess the performance of the method and illustrate its application with a real data example.
In the third project, we discuss the asymptotic distributions of the eigenvalues and eigenvectors of the MLE of the scale matrix in a multivariate skew-normal distribution. We propose a likelihood-ratio statistic for testing whether the skewness vector is proportional to one of the eigenvectors of the scale matrix. Under the alternative, the likelihood is maximized numerically using two different parametrizations of the scale matrix: the modified Cholesky decomposition (MCD) and Givens angles. We conduct a simulation study and show that the statistic obtained using the Givens angle parametrization performs well and is more reliable than the one obtained using the MCD.
156. Distance Functions and Their Use in Adaptive Mathematical Morphology. Ćurić, Vladimir (January 2014).
One of the main problems in image analysis is the comparison of different shapes in images. It is often desirable to determine the extent to which one shape differs from another. This is usually a difficult task because shapes vary in size, length, contrast, texture, orientation, etc. Shapes can be described using sets of points, crisp or fuzzy. Hence, distance functions between sets have been used for comparing different shapes. Mathematical morphology is a non-linear theory related to the shape or morphology of features in the image, and morphological operators are defined by the interaction between an image and a small set called a structuring element. Although morphological operators have been used extensively to differentiate shapes by their size, it is not an easy task to differentiate shapes with respect to other features such as contrast or orientation. One approach to differentiation with respect to these types of features is to use data-dependent structuring elements. In this thesis, we investigate the usefulness of various distance functions for: (i) shape registration and recognition; and (ii) construction of adaptive structuring elements and functions. We examine existing distance functions between sets and propose a new one, called the complement weighted sum of minimal distances, in which the contribution of each point to the distance function is determined by the position of the point within the set. The usefulness of the new distance function is shown for different image registration and shape recognition problems. Furthermore, we extend the new distance function to fuzzy sets and show its applicability to the classification of fuzzy objects. We propose two different types of adaptive structuring elements derived from the salience map of the edge strength: (i) the shape of the structuring element is predefined, and its size is determined from the salience map; (ii) both the shape and the size of the structuring element depend on the salience map. Using this salience map, we also define adaptive structuring functions. We also present the applicability of adaptive mathematical morphology to image regularization. The connection between adaptive mathematical morphology and Lasry-Lions regularization of non-smooth functions provides an elegant tool for image regularization.
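As a rough sketch of the kind of set distance involved (the plain, unweighted sum of minimal distances between crisp point sets; the complement-based point weighting proposed in the thesis is not reproduced here, and the point sets are hypothetical):

    import numpy as np
    from scipy.spatial import cKDTree

    def sum_of_minimal_distances(A, B):
        """Symmetric sum-of-minimal-distances between two point sets A and B."""
        d_ab = cKDTree(B).query(A)[0]   # distance from each point of A to its nearest point in B
        d_ba = cKDTree(A).query(B)[0]
        return 0.5 * (d_ab.mean() + d_ba.mean())

    # Hypothetical usage with two 2-D shapes given as point coordinates.
    A = np.random.rand(100, 2)
    B = np.random.rand(120, 2) + 0.1
    print(sum_of_minimal_distances(A, B))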
157. Modified iterative Runge-Kutta-type methods for nonlinear ill-posed problems. Pornsawad, Pornsarp; Böckmann, Christine (January 2014).
This work is devoted to the convergence analysis of a modified Runge-Kutta-type iterative regularization method for solving nonlinear ill-posed problems under a priori and a posteriori stopping rules. Convergence rates for the proposed method are obtained under a Hölder-type source condition if the Fréchet derivative is properly scaled and locally Lipschitz continuous. Numerical results are obtained using the Levenberg-Marquardt and Radau methods.
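As a generic sketch of an iteratively regularized scheme of this kind (a plain Levenberg-Marquardt update with an a posteriori, discrepancy-principle stopping rule; this is not the modified Runge-Kutta method of the paper, and all parameter values are assumptions):

    import numpy as np

    def levenberg_marquardt(F, jac, y_delta, x0, delta, alpha=1.0, tau=1.2, max_iter=50):
        """Regularized Gauss-Newton / Levenberg-Marquardt iteration for F(x) = y
        with noisy data y_delta, stopped when ||F(x) - y_delta|| <= tau * delta."""
        x = x0.copy()
        for _ in range(max_iter):
            r = F(x) - y_delta
            if np.linalg.norm(r) <= tau * delta:
                break
            J = jac(x)
            # Regularized normal equations: (J^T J + alpha I) h = -J^T r
            h = np.linalg.solve(J.T @ J + alpha * np.eye(x.size), -J.T @ r)
            x = x + h
            alpha *= 0.7    # simple geometric decrease of the regularization parameter
        return x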
158. Regularization of the AVO inverse problem by means of a multivariate Cauchy probability distribution. Alemie, Wubshet M. (06 1900).
Amplitude Variation with Offset (AVO) inversion is one of the techniques used to estimate subsurface physical parameters such as P-wave velocity, S-wave velocity, and density, or their attributes. AVO inversion is an ill-conditioned problem which has to be regularized in order to obtain a stable and unique solution. In this thesis, a Bayesian procedure that uses a multivariate Cauchy distribution as the prior probability distribution is introduced. The prior includes a scale matrix that imposes correlation among the AVO attributes and induces a regularization that yields solutions that are sparse and stable in the presence of noise. The performance of this regularization is demonstrated with both synthetic and real data examples using linearized approximations to the Zoeppritz equations. / Geophysics
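A much-simplified sketch of Cauchy-regularized linear inversion (a scalar, uncorrelated Cauchy penalty solved by iteratively reweighted least squares, rather than the multivariate Cauchy prior with a scale matrix used in the thesis; G, d and the parameter values are placeholders):

    import numpy as np

    def cauchy_regularized_inversion(G, d, lam=1.0, sigma=0.1, n_iter=20):
        """Minimize ||G m - d||^2 + lam * sum_i log(1 + m_i^2 / sigma^2)
        by iteratively reweighted least squares (IRLS)."""
        n = G.shape[1]
        m = np.zeros(n)
        for _ in range(n_iter):
            # Weights follow from the gradient of the Cauchy penalty: 2 m_i / (sigma^2 + m_i^2).
            W = np.diag(1.0 / (sigma**2 + m**2))
            m = np.linalg.solve(G.T @ G + lam * W, G.T @ d)
        return m

Relative to a quadratic (Tikhonov) penalty, the Cauchy penalty grows only logarithmically, so large model entries are penalized less and sparse solutions are favoured.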
159. Efficient Calibration and Predictive Error Analysis for Highly-Parameterized Models Combining Tikhonov and Subspace Regularization Techniques. Matthew James Tonkin (date unknown).
The development and application of environmental models to help understand natural systems, and support decision making, is commonplace. A difficulty encountered in the development of such models is determining which physical and chemical processes to simulate, and on what temporal and spatial scale(s). Modern computing capabilities enable the incorporation of more processes, at increasingly refined scales, than at any time previously. However, the simulation of a large number of fine-scale processes has undesirable consequences: first, the execution time of many environmental models has not declined despite advances in processor speed and solution techniques; and second, such complex models incorporate a large number of parameters, for which values must be assigned. Compounding these problems is the recognition that, since the inverse problem in groundwater modeling is non-unique, the calibration of a single parameter set does not assure the reliability of model predictions. Practicing modelers are, then, faced with complex models that incorporate a large number of parameters whose values are uncertain, and that make predictions that are prone to an unspecified amount of error. In recognition of this, there has been considerable research into methods for evaluating the potential for error in model predictions arising from errors in the values assigned to model parameters. Unfortunately, some common methods employed in the estimation of model parameters, and in the evaluation of the potential error associated with model parameters and predictions, suffer from limitations that stem from an emphasis on obtaining an over-determined, parsimonious inverse problem. That is, common methods of model analysis exhibit artifacts from the propagation of subjective a priori parameter parsimony throughout the calibration and predictive error analyses. This thesis describes theoretical and practical developments that enable the estimation of a large number of parameters, and the evaluation of the potential for error in predictions made by highly parameterized models. Since the focus of this research is on the use of models in support of decision making, the new methods are demonstrated by application to synthetic problems, where the performance of the method can be evaluated under controlled conditions, and to real-world applications, where the performance of the method can be evaluated in terms of trade-offs between computational effort and calibration results, and the ability to rigorously yet expediently investigate predictive error. The applications suggest that the new techniques are applicable to a range of environmental modeling disciplines. Mathematical innovations described in this thesis focus on combining complementary regularized inversion (calibration) techniques with novel methods for analyzing model predictive error. Several of the innovations are founded on explicit recognition of the existence of the calibration solution and null spaces: that is, with the available observations there are some (combinations of) parameters that can be estimated, and some that cannot.
The existence of a non-trivial calibration null space is at the heart of the non-uniqueness problem in model calibration. This research expands upon this concept by recognizing that there are combinations of parameters that lie within the calibration null space yet possess non-trivial projections onto the predictive solution space, and these combinations of parameters are at the heart of predictive error analysis. The most significant contribution of this research is the attempt to develop a framework for model analysis that promotes computational efficiency in both the calibration and the subsequent analysis of the potential for error in model predictions. Fundamental to this framework are the use of a large number of parameters, the use of Tikhonov regularization, and the use of subspace techniques. The use of a large number of parameters enables parameter detail to be represented in the model at a scale approaching true variability; the use of Tikhonov constraints enables the modeler to incorporate preferred conditions on parameter values and/or their variation throughout the calibration and the predictive analysis; and the use of subspace techniques enables model calibration and predictive analysis to be undertaken expediently, even when using a large number of parameters. This research focuses on the inability of the calibration process to identify parameter values accurately: it is assumed that the models in question accurately represent the relevant processes at the relevant scales, so that parameter and predictive error depend only on parameter detail that is not represented in the model and/or not accurately inferred through the calibration process. Contributions to parameter and predictive error arising from incorrect model identification are outside the scope of this research.
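As a compact sketch of how Tikhonov and subspace (truncated SVD) regularization can be combined on a single linearized calibration step (a generic hybrid filter, not the specific scheme developed in the thesis; J, r and the parameter values are placeholders):

    import numpy as np

    def hybrid_tikhonov_tsvd(J, r, lam, k):
        """Solve min ||J p - r||^2 + lam * ||p||^2 restricted to the subspace spanned
        by the k leading right singular vectors of the Jacobian J."""
        U, s, Vt = np.linalg.svd(J, full_matrices=False)
        U_k, s_k, V_k = U[:, :k], s[:k], Vt[:k, :].T
        # Tikhonov-filtered coefficients in the retained solution subspace.
        coeffs = s_k / (s_k**2 + lam) * (U_k.T @ r)
        return V_k @ coeffs

Parameter combinations associated with the discarded singular vectors lie in the calibration null space mentioned above and are left unchanged by this step.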
160. Spatial Coherence Enhancing Reconstructions for High Angular Resolution Diffusion MRI. Rügge, Christoph (2 February 2015).
No description available.