1.
LASSOING MIXTURES AND BAYESIAN ROBUST ESTIMATION / Xing, Guan. January 2007 (has links)
No description available.
2.
Metoda Lasso a její aplikace v časových řadách / The Lasso and its application to time series / Holý, Vladimír. January 2014 (has links)
This thesis first describes the Lasso method and its adaptive refinement. The basic theoretical properties are then presented and different algorithms are introduced. The main part of the thesis is the application of the Lasso method to AR, MA and ARCH time series and to REGAR, REGMA and REGARCH models. An algorithm for the adaptive Lasso in a more general time series model, which covers all the models and series mentioned above, is developed. The properties of the methods and algorithms are illustrated on simulations and on a practical example.
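For orientation, a minimal sketch of the adaptive Lasso criterion the abstract refers to, written for a generic linear model rather than the thesis's specific AR/MA/ARCH and REGAR-type settings:

```latex
% Adaptive Lasso (Zou, 2006) for a generic linear model; the weights \hat{w}_j
% are built from an initial consistent estimate such as least squares.
\[
\hat{\beta}^{\mathrm{adapt}}
  = \arg\min_{\beta} \sum_{t=1}^{n} \bigl( y_t - x_t^{\top}\beta \bigr)^{2}
    + \lambda \sum_{j=1}^{p} \hat{w}_j \, \lvert \beta_j \rvert,
\qquad
\hat{w}_j = \frac{1}{\lvert \hat{\beta}^{\mathrm{init}}_j \rvert^{\gamma}},
\quad \gamma > 0.
\]
```

The data-driven weights penalise small initial coefficients more heavily than large ones, which is what distinguishes the adaptive Lasso from the ordinary Lasso in the models studied above.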
3.
De Cortés valeroso, y Mexicana / Lasso de la Vega, Gabriel; Pullés-Linares, Nidia. January 2005 (has links)
Thesis (Ph. D.)--City University of New York, 1999. / Includes bibliographical references (p. 107-117) and indexes.
4.
A Review of Linear Regression and some Basic Proofs for Lasso / He, Shiquan. 14 January 2010 (has links)
The goal of this paper is to give some basic proofs for the lasso and to develop a deeper understanding of linear regression. First, I review methods in linear regression, with most of the attention given to the lasso. The lasso, short for 'least absolute shrinkage and selection operator', is a regularized version of least squares that adds a constraint requiring the L1 norm of the coefficients to be less than or equal to a given value t. In doing so, some predictor coefficients are shrunk and others may be set exactly to 0, so the lasso can attain both good interpretability and good prediction accuracy. Second, I provide some basic proofs for the lasso, which are very helpful in understanding it. Some geometric graphs are also given and one example is illustrated.
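A minimal sketch of the two standard formulations the abstract refers to: the constrained form with bound t and the equivalent penalized form with tuning parameter lambda.

```latex
% Constrained (bound) form of the lasso:
\[
\hat{\beta}^{\mathrm{lasso}}
  = \arg\min_{\beta}
    \sum_{i=1}^{n} \Bigl( y_i - \beta_0 - \sum_{j=1}^{p} x_{ij}\beta_j \Bigr)^{2}
\quad \text{subject to} \quad
\sum_{j=1}^{p} \lvert \beta_j \rvert \le t.
\]

% Equivalent penalized (Lagrangian) form; each bound t corresponds to some
% penalty parameter lambda >= 0:
\[
\hat{\beta}^{\mathrm{lasso}}
  = \arg\min_{\beta}
    \sum_{i=1}^{n} \Bigl( y_i - \beta_0 - \sum_{j=1}^{p} x_{ij}\beta_j \Bigr)^{2}
    + \lambda \sum_{j=1}^{p} \lvert \beta_j \rvert.
\]
```

The non-differentiable absolute-value terms are what allow some coefficients to be set exactly to zero, which is the selection effect described in the abstract.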
5.
The Munich Kapelle of Orlando di Lasso, 1563-1594: a model for Renaissance choral performance practice / Fisher, Gary. January 1987 (has links)
Thesis (D.M.A.)--University of Oklahoma, 1987. / Leaves 268-269 bound upside down. Bibliography: leaves 217-226.
6.
The voice of prophecy: Orlando di Lasso's Sibyls and Italian humanism / Roth, Marjorie A. January 2005 (has links)
Thesis (Ph. D.)--University of Rochester, 2005.
7.
An Application of Ridge Regression and LASSO Methods for Model Selection / Phillips, Katie Lynn. 10 August 2018 (has links)
Ordinary Least Squares (OLS) models are popular tools among field scientists because they are easy to understand and use. Although OLS estimators are unbiased, it is often advantageous to introduce some bias in order to lower the overall variance of a model. This study focuses on comparing ridge regression and the LASSO, both of which introduce bias into the regression problem. Both approaches are modeled after OLS but also incorporate a tuning parameter. Additionally, this study compares two different functions in R, one of which can fit both ridge regression and the LASSO while the other fits the LASSO only. The techniques discussed are applied to a real data set involving physicochemical properties of wine and how they affect the overall quality of the wine.
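The two R functions are not named in the abstract, so purely as an illustration, here is a minimal Python (scikit-learn) sketch of the same kind of comparison: ridge and lasso fits with cross-validated tuning parameters on placeholder data. The wine variables and the thesis's actual workflow are not reproduced here.

```python
# A minimal illustration (not the thesis's R code): ridge vs. lasso with
# cross-validated tuning parameters, in the spirit of the comparison above.
import numpy as np
from sklearn.linear_model import RidgeCV, LassoCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Placeholder data standing in for physicochemical predictors and a quality score.
X = rng.normal(size=(200, 8))
y = X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=200)

alphas = np.logspace(-3, 3, 50)  # candidate values of the tuning parameter

models = {
    "ridge": make_pipeline(StandardScaler(), RidgeCV(alphas=alphas)),
    "lasso": make_pipeline(StandardScaler(), LassoCV(alphas=alphas, max_iter=10_000)),
}

for name, model in models.items():
    # 5-fold cross-validated R^2 as a simple basis for comparing the two methods.
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean CV R^2 = {score:.3f}")
```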
8.
Povýběrová Inference: Lasso & Skupinové Lasso / Post-selection Inference: Lasso & Group Lasso / Bouř, Vojtěch. January 2017 (has links)
The lasso is a popular tool that can be used for variable selection and estimation; however, classical statistical inference cannot be applied to its estimates. In this thesis the classical lasso and the group lasso are described together with efficient algorithms for their solution. The key part is dedicated to post-selection inference for lasso estimates, where we explain why classical inference is not suitable. Three post-selection tests for the lasso are described and one test is also proposed for the group lasso. The tests are compared in simulations where their finite-sample properties are examined, and they are further applied to a practical example.
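For reference, a sketch of the two penalized estimators the abstract contrasts; the post-selection tests themselves are not reproduced here.

```latex
% Lasso: an l1 penalty on individual coefficients.
\[
\hat{\beta}^{\mathrm{lasso}}
  = \arg\min_{\beta} \tfrac{1}{2} \lVert y - X\beta \rVert_2^2
    + \lambda \sum_{j=1}^{p} \lvert \beta_j \rvert.
\]

% Group lasso: an l2 penalty on pre-specified groups g = 1, ..., G of sizes p_g,
% which keeps or drops whole groups of coefficients at once.
\[
\hat{\beta}^{\mathrm{group}}
  = \arg\min_{\beta} \tfrac{1}{2} \lVert y - X\beta \rVert_2^2
    + \lambda \sum_{g=1}^{G} \sqrt{p_g}\, \lVert \beta_{(g)} \rVert_2.
\]
```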
9.
Some statistical methods for dimension reduction / Al-Kenani, Ali J. Kadhim. January 2013 (has links)
The aim of the work in this thesis is to carry out dimension reduction (DR) for high-dimensional (HD) data by using statistical methods for variable selection, feature extraction and a combination of the two. In Chapter 2, the DR is carried out through robust feature extraction, and robust canonical correlation (RCCA) methods are proposed. In the correlation matrix of canonical correlation analysis (CCA), we suggest that the Pearson correlation be replaced by robust correlation measures in order to obtain robust correlation matrices, which are then used to produce RCCA. Moreover, the classical covariance matrix is replaced by robust estimators of multivariate location and dispersion as a second route to RCCA. In Chapters 3 and 4, the DR is carried out by combining the ideas of variable selection via regularisation with feature extraction, through the minimum average variance estimator (MAVE) and single index quantile regression (SIQ) methods, respectively. In particular, Chapter 3 extends the sparse MAVE (SMAVE) of Wang and Yin (2008) by combining the MAVE loss function with different regularisation penalties, and Chapter 4 extends the SIQ of Wu et al. (2010) by considering different regularisation penalties. In Chapter 5, the DR is done through variable selection under a Bayesian framework: a flexible Bayesian framework for regularisation in the quantile regression (QR) model is proposed. This work differs from Bayesian Lasso quantile regression (BLQR), which employs the asymmetric Laplace error distribution (ALD); here the error distribution is assumed to be an infinite mixture of Gaussian (IMG) densities.
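A minimal sketch of the frequentist objective behind the regularised quantile regression of Chapter 5 (the check loss with an l1 penalty); the thesis's actual Bayesian prior and error specification are not reproduced here.

```latex
% Check (pinball) loss for quantile level tau in (0,1):
\[
\rho_{\tau}(u) = u \bigl( \tau - \mathbf{1}\{u < 0\} \bigr).
\]

% l1-regularised quantile regression; its Bayesian analogue (BLQR) combines an
% asymmetric Laplace likelihood with a Laplace-type prior on \beta, whereas the
% thesis instead models the errors as an infinite mixture of Gaussians.
\[
\hat{\beta}(\tau)
  = \arg\min_{\beta} \sum_{i=1}^{n} \rho_{\tau}\bigl( y_i - x_i^{\top}\beta \bigr)
    + \lambda \sum_{j=1}^{p} \lvert \beta_j \rvert.
\]
```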
10.
INFERENCE AFTER VARIABLE SELECTION / Pelawa Watagoda, Lasanthi Chathurika Ranasinghe. 01 August 2017 (has links)
This thesis presents inference for the multiple linear regression model Y = beta_1 x_1 + ... + beta_p x_p + e after model or variable selection, including prediction intervals for a future value of the response variable Y_f, and testing hypotheses with the bootstrap. If n is the sample size, most results are for n/p large, but prediction intervals are developed that may increase in average length slowly as p increases for fixed n if the model is sparse: k predictors have nonzero coefficients beta_i where n/k is large.
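For context, a sketch of the classical prediction interval for a future response, which is the baseline that post-selection procedures such as those in the thesis must adjust once a submodel has been chosen using the same data.

```latex
% Classical OLS prediction interval for a future response Y_f at predictor
% vector x_f, valid when the full model is fitted with no data-driven selection;
% after variable selection this coverage guarantee no longer holds as stated.
\[
\hat{Y}_f \;\pm\; t_{1-\alpha/2,\; n-p}\,
  \hat{\sigma} \sqrt{1 + x_f^{\top}\bigl(X^{\top}X\bigr)^{-1} x_f},
\qquad
\hat{\sigma}^2 = \frac{\lVert y - X\hat{\beta}_{\mathrm{OLS}} \rVert_2^2}{n-p}.
\]
```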