1 
Asymptotics and computations for approximation of method of regularization estimators. Lee, SangJoon, 29 August 2005 (has links)
Inverse problems arise in many branches of natural science, medicine and engineering, involving the recovery of a whole function given only a finite number of noisy measurements on functionals. Such problems are usually ill-posed, which causes severe difficulties for standard least-squares or maximum likelihood estimation techniques. These problems can be solved by a method of regularization. In this dissertation, we study various problems in the method of regularization. We develop asymptotic properties of the optimal smoothing parameters concerning levels of smoothing for estimating the mean function and an associated inverse function, based on Fourier analysis. We present numerical algorithms for computing an approximated method of regularization estimator with linear inequality constraints. New data-driven smoothing parameter selection criteria are proposed in this setting. In addition, we derive a Bayesian credible interval for the approximated method of regularization estimators.
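To make the core technique concrete, here is a minimal sketch (not from the dissertation itself) of a method-of-regularization (Tikhonov) estimator for a discretized linear inverse problem. The blurring operator, noise level, and penalty matrix are hypothetical choices for illustration only.

```python
import numpy as np

def regularized_estimate(A, y, lam, L=None):
    """Method-of-regularization (Tikhonov) estimate for y = A f + noise.

    Minimizes ||A f - y||^2 + lam * ||L f||^2; L defaults to the identity.
    """
    n = A.shape[1]
    if L is None:
        L = np.eye(n)
    return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ y)

# Hypothetical example: recover a smooth signal from blurred, noisy data.
rng = np.random.default_rng(0)
n = 50
x = np.linspace(0.0, 1.0, n)
f_true = np.sin(2 * np.pi * x)
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.01)  # Gaussian blurring operator
A /= A.sum(axis=1, keepdims=True)                      # normalize rows
y = A @ f_true + 0.01 * rng.standard_normal(n)

f_hat = regularized_estimate(A, y, lam=1e-3)
```

The regularization parameter `lam` plays the role of the smoothing parameter whose optimal asymptotic behavior the dissertation studies.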

2 
Multiscale Spectral-Domain Parameterization for History Matching in Structured and Unstructured Grid Geometries. Bhark, Eric Whittet, August 2011 (has links)
Reservoir model calibration to production data, also known as history matching, is an essential tool for the prediction of fluid displacement patterns and for related decisions concerning reservoir management and field development. The history matching of high-resolution geologic models is, however, known to define an ill-posed inverse problem, such that the solution for geologic heterogeneity is always non-unique and potentially unstable. A common approach to mitigating this ill-posedness is to parameterize the estimable geologic model components, imposing a type of regularization that exploits geologic continuity by explicitly or implicitly grouping similar properties while retaining at least the minimum heterogeneity resolution required to reproduce the data. This dissertation develops novel methods of model parameterization within the class of techniques based on a linear transformation.
Three principal research contributions are made in this dissertation. First is the development of an adaptive multiscale history matching formulation in the frequency domain using the discrete cosine parameterization. Geologic model calibration is performed by its sequential refinement to a spatial scale sufficient to match the data. The approach enables improvement in solution non-uniqueness and stability, and further balances model and data resolution as determined by a parameter identifiability metric. Second, a model-independent parameterization based on grid connectivity information is developed as a generalization of the cosine parameterization for applicability to generic grid geometries. The parameterization relates the spatial reservoir parameters to the modal shapes or harmonics of the grid on which they are defined, merging with a Fourier analysis in special cases (i.e., for rectangular grid cells of constant dimensions), and enabling a multiscale calibration of the reservoir model in the spectral domain. Third, a model-dependent parameterization is developed to combine grid connectivity with prior geologic information within a spectral domain representation. The resulting parameterization is capable of reducing geologic models while imposing prior heterogeneity on the calibrated model using the adaptive multiscale workflow.
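The discrete cosine parameterization mentioned in the first contribution can be sketched as follows: represent a gridded property field by its lowest-frequency DCT coefficients, which is the kind of coarse-to-fine model reduction the multiscale workflow refines. The field and truncation level below are hypothetical; this is an illustration, not the dissertation's implementation.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    C = np.cos(np.pi * (j + 0.5) * k / n) * np.sqrt(2.0 / n)
    C[0, :] = np.sqrt(1.0 / n)
    return C

def parameterize(field, keep):
    """Project a 2D property field onto its `keep` x `keep` lowest-frequency
    DCT coefficients -- a coarse-scale parameterization of the model."""
    ny, nx = field.shape
    Cy, Cx = dct_matrix(ny), dct_matrix(nx)
    coeffs = Cy @ field @ Cx.T           # forward 2D DCT
    coeffs[keep:, :] = 0.0               # truncate high spatial frequencies
    coeffs[:, keep:] = 0.0
    return Cy.T @ coeffs @ Cx            # inverse transform

# Hypothetical smooth permeability field on a 32 x 32 grid
y, x = np.mgrid[0:32, 0:32] / 32.0
perm = np.exp(np.sin(2 * np.pi * x) + np.cos(2 * np.pi * y))
approx = parameterize(perm, keep=8)
```

A smooth field is captured well by a few coefficients; calibrating only those coefficients instead of every grid cell is what regularizes the inverse problem.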
In addition to methodological developments of the parameterization methods, an important consideration in this dissertation is their applicability to field scale reservoir models with varying levels of prior geologic complexity on par with current industry standards.

3 
Asymptotics of Gaussian Regularized Least-Squares. Lippert, Ross; Rifkin, Ryan, 20 October 2005 (has links)
We consider regularized least-squares (RLS) with a Gaussian kernel. We prove that if we let the Gaussian bandwidth $\sigma \rightarrow \infty$ while letting the regularization parameter $\lambda \rightarrow 0$, the RLS solution tends to a polynomial whose order is controlled by the relative rates of decay of $\frac{1}{\sigma^2}$ and $\lambda$: if $\lambda = \sigma^{-(2k+1)}$, then, as $\sigma \rightarrow \infty$, the RLS solution tends to the $k$th order polynomial with minimal empirical error. We illustrate the result with an example.
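A small numerical illustration (not the paper's own example) of the $k = 0$ case: with $\lambda = \sigma^{-1}$, the RLS fit flattens toward the best constant, the sample mean. The kernel scaling $k(s,t) = \exp(-(s-t)^2/\sigma^2)$ and the data points below are assumptions for illustration.

```python
import numpy as np

def rls_fit(x, y, sigma, lam):
    """Regularized least-squares with Gaussian kernel k(s,t) = exp(-(s-t)^2 / sigma^2).
    Returns a function evaluating the fitted regressor."""
    K = np.exp(-((x[:, None] - x[None, :]) ** 2) / sigma ** 2)
    c = np.linalg.solve(K + lam * len(x) * np.eye(len(x)), y)
    return lambda t: np.exp(-((t - x) ** 2) / sigma ** 2) @ c

x = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
y = np.array([1.0, 0.2, 0.8, 1.5, 0.5])

# k = 0 case: lambda = sigma^{-(2k+1)} = 1/sigma.  As sigma grows, the
# fitted function flattens toward the best constant, mean(y) = 0.8.
sigma = 1000.0
f = rls_fit(x, y, sigma, lam=1.0 / sigma)
```

Repeating with $\lambda = \sigma^{-3}$ (the $k = 1$ case) would instead recover the least-squares line in the limit.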

4 
Extensions of a Theory of Networks for Approximation and Learning: Dimensionality Reduction and Clustering. Poggio, Tomaso; Girosi, Federico, 01 April 1990 (has links)
The theory developed in Poggio and Girosi (1989) shows the equivalence between regularization and a class of three-layer networks that we call regularization networks or Hyper Basis Functions. These networks are also closely related to the classical Radial Basis Functions used for interpolation tasks, and to several pattern recognition and neural network algorithms. In this note, we extend the theory by defining a general form of these networks with two sets of modifiable parameters in addition to the coefficients $c_\alpha$: moving centers and adjustable norm weights.
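The forward pass of such a network can be sketched as follows: a sum of Gaussian basis functions evaluated under a weighted norm, with movable centers. The particular centers, coefficients, and norm weights below are hypothetical values chosen only to show the shape of the computation.

```python
import numpy as np

def hyperbf(x, centers, coeffs, W):
    """Hyper Basis Function network: f(x) = sum_a c_a G(||x - t_a||_W^2),
    with Gaussian G, movable centers t_a and weighted norm ||v||_W^2 = v^T W^T W v."""
    d = (x[None, :] - centers) @ W.T          # weighted differences, one row per center
    return coeffs @ np.exp(-np.sum(d * d, axis=1))

centers = np.array([[0.0, 0.0], [1.0, 1.0]])   # movable centers (hypothetical)
coeffs = np.array([1.0, -0.5])                 # the coefficients c_alpha
W = np.diag([2.0, 0.5])                        # norm weights: rescale each input axis
out = hyperbf(np.array([0.0, 0.0]), centers, coeffs, W)
```

Learning adjusts `centers` and `W` along with `coeffs`, which is exactly the extension this note introduces over fixed-center Radial Basis Functions.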

5 
On the Dirichlet Prior and Bayesian Regularization. Steck, Harald; Jaakkola, Tommi S., 01 September 2002 (has links)
A common objective in learning a model from data is to recover its network structure, while the model parameters are of minor interest. For example, we may wish to recover regulatory networks from high-throughput data sources. In this paper we examine how Bayesian regularization using a Dirichlet prior over the model parameters affects the learned model structure in a domain with discrete variables. Surprisingly, a weak prior in the sense of a smaller equivalent sample size leads to a strong regularization of the model structure (a sparse graph) given a sufficiently large data set. In particular, the empty graph is obtained in the limit of a vanishing strength of prior belief. This is diametrically opposite to what one may expect in this limit, namely the complete graph from an (unregularized) maximum likelihood estimate. Since the prior affects the parameters as expected, the prior strength balances a trade-off between regularizing the parameters or the structure of the model. We demonstrate the benefits of optimizing this trade-off in the sense of predictive accuracy.
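The quantity through which the equivalent sample size enters structure learning is the Dirichlet-multinomial marginal likelihood; terms of this form are the building blocks of Bayesian structure scores. A minimal sketch (not the paper's code) for a single discrete variable under a symmetric Dirichlet prior:

```python
from math import lgamma, log

def log_marginal_likelihood(counts, ess):
    """Dirichlet-multinomial log marginal likelihood for one discrete variable
    under a symmetric Dirichlet prior with equivalent sample size `ess`."""
    K = len(counts)
    alpha = ess / K          # per-state pseudo-count
    N = sum(counts)
    out = lgamma(ess) - lgamma(ess + N)
    for n_k in counts:
        out += lgamma(alpha + n_k) - lgamma(alpha)
    return out

# One observation of a binary variable has marginal probability exactly 1/2,
# regardless of the prior strength ess.
ll = log_marginal_likelihood([1, 0], ess=1.0)
```

A structure score compares sums of such terms across candidate graphs; how the score behaves as `ess` shrinks is what drives the paper's surprising sparse-graph limit.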

6 
Bagging Regularizes. Poggio, Tomaso; Rifkin, Ryan; Mukherjee, Sayan; Rakhlin, Alex, 01 March 2002 (has links)
Intuitively, we expect that averaging (or bagging) different regressors with low correlation should smooth their behavior and be somewhat similar to regularization. In this note we make this intuition precise. Using an almost classical definition of stability, we prove that a certain form of averaging provides generalization bounds with a rate of convergence of the same order as Tikhonov regularization, similar to fashionable RKHS-based learning algorithms.
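The averaging in question can be sketched as follows: fit an unstable base regressor on bootstrap resamples and average the predictions. The 1-nearest-neighbor base learner and the data are hypothetical choices for illustration, not the note's setting.

```python
import numpy as np

def bagged_predict(x_train, y_train, x_test, n_bags=50, seed=0):
    """Average 1-nearest-neighbor regressors, each fit on a bootstrap
    resample of the data; the averaging smooths the unstable base learner."""
    rng = np.random.default_rng(seed)
    n = len(x_train)
    preds = []
    for _ in range(n_bags):
        idx = rng.integers(0, n, size=n)                        # bootstrap resample
        xb, yb = x_train[idx], y_train[idx]
        nearest = np.abs(x_test[:, None] - xb[None, :]).argmin(axis=1)
        preds.append(yb[nearest])
    return np.mean(preds, axis=0)

x_train = np.linspace(0.0, 1.0, 20)
y_train = np.sin(2 * np.pi * x_train)
x_test = np.array([0.25, 0.75])
y_hat = bagged_predict(x_train, y_train, x_test)
```

Each individual 1-NN fit is piecewise constant and jumpy; the average over resamples is far smoother, which is the regularization-like effect the note makes precise.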

9 
Non-parametric density estimation via regularization. Lin, Mu, 11 1900 (has links)
The thesis aims at presenting important methods, theories and applications of nonparametric density estimation via regularization in the univariate setting. It gives a brief introduction to nonparametric density estimation and discusses several well-known methods, for example histogram and kernel methods. Regularized methods with penalization and shape constraints are the focus of the thesis. Maximum entropy density estimation is introduced, and the relationship between the taut string and maximum entropy density estimation is explored. Furthermore, the dual and primal theories are discussed and some theoretical proofs corresponding to quasi-concave density estimation are presented. Different numerical methods of nonparametric density estimation with regularization are classified and compared. Finally, a real data experiment is discussed in the last part of the thesis. / Statistics
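Of the well-known methods the thesis surveys, the kernel method is easy to sketch: place a scaled Gaussian at each observation and average. The data and bandwidth below are hypothetical; this is an illustration, not the thesis's code.

```python
import numpy as np

def kde(data, grid, bandwidth):
    """Gaussian kernel density estimate evaluated at the points in `grid`."""
    z = (grid[:, None] - data[None, :]) / bandwidth
    dens = np.exp(-0.5 * z * z).sum(axis=1)
    return dens / (len(data) * bandwidth * np.sqrt(2.0 * np.pi))

data = np.array([0.0, 0.4, 1.0, 1.3, 2.1])
grid = np.linspace(-8.0, 10.0, 3601)
dens = kde(data, grid, bandwidth=0.5)
```

The bandwidth plays the same smoothing-parameter role as the penalty weight in the regularized estimators the thesis focuses on.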

10 
Regularization Theory and Shape Constraints. Verri, Alessandro; Poggio, Tomaso, 01 September 1986 (has links)
Many problems of early vision are ill-posed; to recover unique, stable solutions, regularization techniques can be used. These techniques lead to meaningful results, provided that the solutions belong to suitable compact sets. Often some additional constraints on the shape or the behavior of the possible solutions are available. This note discusses which of these constraints can be embedded in the classic theory of regularization, and how, in order to improve the quality of the recovered solution. Connections with mathematical programming techniques are also discussed. In conclusion, regularization of early vision problems may be improved by the use of constraints on the shape of the solution (such as monotonicity and upper and lower bounds), when available.
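A monotonicity constraint of the kind mentioned above can be enforced by the pool-adjacent-violators algorithm, which computes the least-squares nondecreasing fit to a sequence. This is a generic sketch of that standard algorithm, not the note's own formulation.

```python
def pava(y):
    """Pool-adjacent-violators: least-squares fit of a nondecreasing sequence to y."""
    vals = []          # block means
    wts = []           # block sizes
    for v in y:
        vals.append(float(v))
        wts.append(1)
        # merge adjacent blocks while the monotonicity constraint is violated
        while len(vals) > 1 and vals[-2] > vals[-1]:
            w = wts[-2] + wts[-1]
            m = (vals[-2] * wts[-2] + vals[-1] * wts[-1]) / w
            vals[-2:] = [m]
            wts[-2:] = [w]
    out = []
    for v, w in zip(vals, wts):
        out.extend([v] * w)
    return out

fit = pava([1.0, 3.0, 2.0, 4.0])   # -> [1.0, 2.5, 2.5, 4.0]
```

The violating pair (3, 2) is pooled to its mean 2.5, yielding the closest nondecreasing sequence; this is the kind of shape-constrained projection that mathematical programming formulations generalize.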
