• The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

#### Asymptotics and computations for approximation of method of regularization estimators

Lee, Sang-Joon 29 August 2005 (has links)
Inverse problems arise in many branches of natural science, medicine and engineering involving the recovery of a whole function given only a finite number of noisy measurements on functionals. Such problems are usually ill-posed, which causes severe difficulties for standard least-squares or maximum likelihood estimation techniques. These problems can be solved by a method of regularization. In this dissertation, we study various problems in the method of regularization. We develop asymptotic properties of the optimal smoothing parameters concerning levels of smoothing for estimating the mean function and an associated inverse function based on Fourier analysis. We present numerical algorithms for an approximated method of regularization estimator computation with linear inequality constraints. New data-driven smoothing parameter selection criteria are proposed in this setting. In addition, we derive a Bayesian credible interval for the approximated method of regularization estimators.
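As an editor's illustration of the kind of estimator the abstract describes (not code from the dissertation), a minimal Tikhonov-style sketch for a discretized linear inverse problem; the smoothing operator, noise level, and smoothing parameter `lam` are all invented for the example:

```python
import numpy as np

def regularized_estimate(A, y, lam):
    """Closed-form method-of-regularization (Tikhonov) estimator:
    minimizes ||A x - y||^2 + lam * ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

rng = np.random.default_rng(0)
n = 50
# A smoothing (convolution-like) operator: severely ill-conditioned.
A = np.array([[np.exp(-0.5 * ((i - j) / 3.0) ** 2) for j in range(n)]
              for i in range(n)])
x_true = np.sin(np.linspace(0, np.pi, n))
y = A @ x_true + 0.01 * rng.standard_normal(n)

x_reg = regularized_estimate(A, y, lam=1e-3)
# Nearly unregularized least squares amplifies the noise enormously.
x_ls = np.linalg.solve(A.T @ A + 1e-12 * np.eye(n), A.T @ y)

err_reg = np.linalg.norm(x_reg - x_true)
err_ls = np.linalg.norm(x_ls - x_true)
```

Even a small penalty stabilizes the recovery, whereas the near-unregularized solve is dominated by amplified noise.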
2

#### Multiscale Spectral-Domain Parameterization for History Matching in Structured and Unstructured Grid Geometries

Bhark, Eric Whittet August 2011 (has links)
Reservoir model calibration to production data, also known as history matching, is an essential tool for the prediction of fluid displacement patterns and related decisions concerning reservoir management and field development. The history matching of high resolution geologic models is, however, known to define an ill-posed inverse problem such that the solution of geologic heterogeneity is always non-unique and potentially unstable. A common approach to mitigating ill-posedness is to parameterize the estimable geologic model components, imposing a type of regularization that exploits geologic continuity by explicitly or implicitly grouping similar properties while retaining at least the minimum heterogeneity resolution required to reproduce the data. This dissertation develops novel methods of model parameterization within the class of techniques based on a linear transformation. Three principal research contributions are made in this dissertation. First is the development of an adaptive multiscale history matching formulation in the frequency domain using the discrete cosine parameterization. Geologic model calibration is performed by its sequential refinement to a spatial scale sufficient to match the data. The approach enables improvement in solution non-uniqueness and stability, and further balances model and data resolution as determined by a parameter identifiability metric. Second, a model-independent parameterization based on grid connectivity information is developed as a generalization of the cosine parameterization for applicability to generic grid geometries. The parameterization relates the spatial reservoir parameters to the modal shapes or harmonics of the grid on which they are defined, merging with a Fourier analysis in special cases (i.e., for rectangular grid cells of constant dimensions), and enabling a multiscale calibration of the reservoir model in the spectral domain.
Third, a model-dependent parameterization is developed to combine grid connectivity with prior geologic information within a spectral domain representation. The resulting parameterization is capable of reducing geologic models while imposing prior heterogeneity on the calibrated model using the adaptive multiscale workflow. In addition to methodological developments of the parameterization methods, an important consideration in this dissertation is their applicability to field scale reservoir models with varying levels of prior geologic complexity on par with current industry standards.
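A rough sketch of the discrete cosine parameterization idea underlying the first contribution (an editor's illustration, not the dissertation's code): a 2-D property field is transformed to the DCT domain and represented by a small block of low-frequency coefficients. The field, grid size, and retained-coefficient block below are illustrative, and the adaptive refinement and calibration loop are omitted.

```python
import numpy as np

def dct_basis(n):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    B = np.cos(np.pi * k * (2 * j + 1) / (2 * n))
    B[0] *= np.sqrt(1.0 / n)
    B[1:] *= np.sqrt(2.0 / n)
    return B

n = 16
B = dct_basis(n)
# A smooth synthetic "permeability" field on an n x n grid.
x = np.linspace(0, 1, n)
field = np.outer(np.sin(np.pi * x), np.cos(np.pi * x))

coeffs = B @ field @ B.T            # forward 2-D DCT (separable)
k = 4                                # keep a k x k low-frequency block
coarse = np.zeros_like(coeffs)
coarse[:k, :k] = coeffs[:k, :k]
approx = B.T @ coarse @ B           # reconstruct from 16 of 256 parameters

rel_err = np.linalg.norm(approx - field) / np.linalg.norm(field)
```

Because the basis is orthonormal, the full set of coefficients reproduces the field exactly, while a smooth field is well approximated by a small low-frequency block, which is what makes the parameterization useful for regularizing history matching.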
3

#### Asymptotics of Gaussian Regularized Least-Squares

Lippert, Ross, Rifkin, Ryan 20 October 2005 (has links)
We consider regularized least-squares (RLS) with a Gaussian kernel. We prove that if we let the Gaussian bandwidth $\sigma \rightarrow \infty$ while letting the regularization parameter $\lambda \rightarrow 0$, the RLS solution tends to a polynomial whose order is controlled by the relative rates of decay of $\frac{1}{\sigma^2}$ and $\lambda$: if $\lambda = \sigma^{-(2k+1)}$, then, as $\sigma \rightarrow \infty$, the RLS solution tends to the $k$th order polynomial with minimal empirical error. We illustrate the result with an example.
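The stated limit can be probed numerically; the sketch below is the editor's, with conventions (kernel scaling, regularizer) that may differ from the paper's by constant factors, which do not change the exponent regime. For $k = 1$, a Gaussian RLS fit with large $\sigma$ and $\lambda = \sigma^{-3}$ should land near the ordinary linear least-squares fit:

```python
import numpy as np

def gaussian_rls(x, y, sigma, lam):
    """RLS prediction at the training points with a Gaussian kernel."""
    K = np.exp(-(x[:, None] - x[None, :]) ** 2 / sigma ** 2)
    c = np.linalg.solve(K + lam * np.eye(len(x)), y)
    return K @ c

x = np.linspace(0.0, 1.0, 8)
y = 2.0 * x + 0.3 * np.sin(6.0 * np.pi * x)   # linear trend + wiggles

sigma = 50.0
lam = sigma ** -3.0                # the k = 1 scaling from the theorem
pred = gaussian_rls(x, y, sigma, lam)

# Degree-1 (linear) least-squares fit for comparison.
linfit = np.polyval(np.polyfit(x, y, 1), x)
```

Rather than interpolating the wiggly data, the RLS solution hugs the linear fit, as the theorem predicts for this scaling.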
4

#### Extensions of a Theory of Networks for Approximation and Learning: Dimensionality Reduction and Clustering

Poggio, Tomaso, Girosi, Federico 01 April 1990 (has links)
The theory developed in Poggio and Girosi (1989) shows the equivalence between regularization and a class of three-layer networks that we call regularization networks or Hyper Basis Functions. These networks are also closely related to the classical Radial Basis Functions used for interpolation tasks and to several pattern recognition and neural network algorithms. In this note, we extend the theory by defining a general form of these networks with two sets of modifiable parameters in addition to the coefficients $c_\alpha$: moving centers and adjustable norm weights.
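As a point of reference for the networks being generalized, here is an editor's sketch of the classical fixed-center radial-basis-function case: Gaussian units at fixed centers, with only the coefficients $c_\alpha$ fit by linear least squares. The HyperBF extension would additionally adapt the centers and a norm-weighting matrix (e.g. by gradient descent); that step is omitted here, and all data and parameters are invented for the example.

```python
import numpy as np

def rbf_design(x, centers, width):
    """Gaussian RBF design matrix: one column per (fixed) center."""
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / width ** 2)

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 20)
y = np.tanh(3 * x) + 0.05 * rng.standard_normal(20)

centers = np.linspace(-1, 1, 7)          # fewer centers than data points
G = rbf_design(x, centers, width=0.5)
coeffs, *_ = np.linalg.lstsq(G, y, rcond=None)   # fit the c_alpha only
fit = G @ coeffs
```

With fewer centers than data points the network approximates rather than interpolates, which is the regularization-network regime the note builds on.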
5

#### On the Dirichlet Prior and Bayesian Regularization

Steck, Harald, Jaakkola, Tommi S. 01 September 2002 (has links)
A common objective in learning a model from data is to recover its network structure, while the model parameters are of minor interest. For example, we may wish to recover regulatory networks from high-throughput data sources. In this paper we examine how Bayesian regularization using a Dirichlet prior over the model parameters affects the learned model structure in a domain with discrete variables. Surprisingly, a weak prior in the sense of smaller equivalent sample size leads to a strong regularization of the model structure (sparse graph) given a sufficiently large data set. In particular, the empty graph is obtained in the limit of a vanishing strength of prior belief. This is diametrically opposite to what one may expect in this limit, namely the complete graph from an (unregularized) maximum likelihood estimate. Since the prior affects the parameters as expected, the prior strength balances a "trade-off" between regularizing the parameters or the structure of the model. We demonstrate the benefits of optimizing this trade-off in the sense of predictive accuracy.
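The paper's surprising finding concerns the model *structure*; the parameter-side behavior it contrasts with is the familiar one, sketched below (an editor's illustration with made-up counts): a Dirichlet prior of equivalent sample size `ess`, spread uniformly over the states, shrinks the maximum-likelihood estimate toward uniform, and more strongly as `ess` grows.

```python
import numpy as np

def dirichlet_posterior_mean(counts, ess):
    """Posterior mean of a discrete distribution under a uniform
    Dirichlet prior with equivalent sample size `ess` (BDeu-style)."""
    counts = np.asarray(counts, dtype=float)
    alpha = ess / len(counts)          # per-state pseudo-count
    return (counts + alpha) / (counts.sum() + ess)

counts = [30, 10]                      # observed state counts (illustrative)
weak = dirichlet_posterior_mean(counts, ess=0.1)     # near the MLE 0.75
strong = dirichlet_posterior_mean(counts, ess=100.0) # pulled toward 0.5
```

For the structure, the paper shows the opposite intuition holds: the *weaker* prior (smaller `ess`) yields the *stronger* regularization, down to the empty graph in the vanishing-`ess` limit.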
6

#### Bagging Regularizes

Poggio, Tomaso, Rifkin, Ryan, Mukherjee, Sayan, Rakhlin, Alex 01 March 2002 (has links)
Intuitively, we expect that averaging --- or bagging --- different regressors with low correlation should smooth their behavior and be somewhat similar to regularization. In this note we make this intuition precise. Using an almost classical definition of stability, we prove that a certain form of averaging provides generalization bounds with a rate of convergence of the same order as Tikhonov regularization --- similar to fashionable RKHS-based learning algorithms.
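The averaging being analyzed can be sketched as plain bootstrap aggregation; this is an editor's illustration (not the note's construction), with an intentionally unstable base learner (a high-degree polynomial fit) and made-up data.

```python
import numpy as np

def poly_fit_predict(x, y, x_test, degree):
    """Base regressor: least-squares polynomial fit."""
    return np.polyval(np.polyfit(x, y, degree), x_test)

def bagged_predict(x, y, x_test, degree, n_bags, rng):
    """Average the base regressor over bootstrap resamples."""
    preds = []
    for _ in range(n_bags):
        idx = rng.integers(0, len(x), size=len(x))   # bootstrap resample
        preds.append(poly_fit_predict(x[idx], y[idx], x_test, degree))
    return np.mean(preds, axis=0)

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 30)
y = x ** 2 + 0.2 * rng.standard_normal(30)
x_test = np.linspace(-1, 1, 50)

single = poly_fit_predict(x, y, x_test, degree=8)
bagged = bagged_predict(x, y, x_test, degree=8, n_bags=25, rng=rng)
```

Averaging the low-correlation bootstrap fits smooths the erratic single fit, which is the regularization-like behavior the note makes precise via stability.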
9

#### Non parametric density estimation via regularization

Lin, Mu 11 1900 (has links)
The thesis aims at presenting some important methods, theory and applications of non-parametric density estimation via regularization in the univariate setting. It gives a brief introduction to non-parametric density estimation and discusses several well-known methods, for example histogram and kernel methods. Regularized methods with penalization and shape constraints are the focus of the thesis. Maximum entropy density estimation is introduced, and the relationship between the taut string and maximum entropy density estimation is explored. Furthermore, the dual and primal theories are discussed, and some theoretical proofs corresponding to quasi-concave density estimation are presented. The different numerical methods of non-parametric density estimation with regularization are classified and compared. Finally, a real-data experiment is discussed in the last part of the thesis.
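One of the standard kernel methods the thesis reviews can be sketched in a few lines (an editor's illustration with synthetic data): a Gaussian kernel density estimate, where the bandwidth `h` plays the role of the regularization parameter, with larger `h` giving a smoother, more heavily regularized fit.

```python
import numpy as np

def kde(data, grid, h):
    """Gaussian kernel density estimate evaluated on `grid`."""
    z = (grid[:, None] - data[None, :]) / h
    return (np.exp(-0.5 * z ** 2).sum(axis=1)
            / (len(data) * h * np.sqrt(2 * np.pi)))

rng = np.random.default_rng(2)
data = rng.normal(0.0, 1.0, size=200)
grid = np.linspace(-5, 5, 501)
dens = kde(data, grid, h=0.4)

dx = grid[1] - grid[0]
mass = dens.sum() * dx        # numeric check: total mass close to 1
```

The penalized and shape-constrained estimators the thesis focuses on replace this single bandwidth knob with an explicit penalty or constraint set on the estimated density.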
10

#### Regularization Theory and Shape Constraints

Verri, Alessandro, Poggio, Tomaso 01 September 1986 (has links)
Many problems of early vision are ill-posed; to recover unique, stable solutions, regularization techniques can be used. These techniques lead to meaningful results, provided that solutions belong to suitable compact sets. Often some additional constraints on the shape or the behavior of the possible solutions are available. This note discusses which of these constraints can be embedded in the classic theory of regularization, and how, in order to improve the quality of the recovered solution. Connections with mathematical programming techniques are also discussed. In conclusion, regularization of early vision problems may be improved by the use of some constraints on the shape of the solution (such as monotonicity and upper and lower bounds), when available.
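A monotonicity constraint of the kind mentioned above can be imposed by isotonic regression; the classical pool-adjacent-violators algorithm (an editor's sketch, not from the memo) finds the nondecreasing sequence closest in least squares to noisy values:

```python
import numpy as np

def pava(y):
    """Isotonic (nondecreasing) least-squares fit via
    the pool-adjacent-violators algorithm."""
    merged = []                     # stack of [block mean, block size]
    for v in map(float, y):
        merged.append([v, 1.0])
        # Merge backwards while monotonicity is violated; the merged
        # block takes the weighted mean of its members.
        while len(merged) > 1 and merged[-2][0] > merged[-1][0]:
            m2, s2 = merged.pop()
            m1, s1 = merged.pop()
            merged.append([(m1 * s1 + m2 * s2) / (s1 + s2), s1 + s2])
    return np.concatenate([[m] * int(s) for m, s in merged])

y = np.array([1.0, 3.0, 2.0, 4.0, 3.5, 5.0])
fit = pava(y)    # nondecreasing, same total as y
```

The feasible set (nondecreasing sequences, optionally with upper and lower bounds) is convex, which is what lets such shape constraints coexist with the classic regularization framework and with mathematical programming methods.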
