1 |
Statistical properties of forward selection regression estimators / Thiebaut, Nicolene Magrietha, 17 November 2011
Please read the abstract in the dissertation. / Dissertation (MSc)--University of Pretoria, 2011. / Statistics
|
2 |
Bias reduction studies in nonparametric regression with applications : an empirical approach / Krugell, Marike, January 2014
The purpose of this study is to determine the effect of three improvement methods on nonparametric kernel regression estimators. The improvement methods are applied to the Nadaraya-Watson estimator with cross-validation bandwidth selection, the Nadaraya-Watson estimator with plug-in bandwidth selection, the local linear estimator with plug-in bandwidth selection, and a bias-corrected nonparametric estimator proposed by Yao (2012). The different resulting regression estimates are evaluated by minimising a global discrepancy measure, namely the mean integrated squared error (MISE).
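For concreteness, the Nadaraya-Watson estimator and the MISE criterion referred to above take the following standard forms (textbook notation, assumed here rather than quoted from the dissertation):

```latex
% Nadaraya-Watson estimator with kernel K and bandwidth h
\hat{m}_h(x) = \frac{\sum_{i=1}^{n} K\!\left((x - X_i)/h\right) Y_i}
                    {\sum_{i=1}^{n} K\!\left((x - X_i)/h\right)},
\qquad
% Mean integrated squared error of \hat{m} as an estimator of m
\mathrm{MISE}(\hat{m}) = \mathbb{E}\int \bigl(\hat{m}(x) - m(x)\bigr)^{2}\,dx .
```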
In the machine learning context, various methods exist for improving the precision and accuracy of an estimator. The first two improvement methods introduced in this study are bootstrap-based. Bagging is an acronym for bootstrap aggregating and was introduced by Breiman (1996a) from a machine learning viewpoint and by Swanepoel (1988, 1990) in a functional context. Bagging is primarily a variance reduction tool: it is implemented to reduce the variance of an estimator and in this way improve the precision of the estimation process. Bagging is performed by drawing repeated bootstrap samples from the original sample and generating multiple versions of an estimator. These replicates of the estimator are then used to obtain an aggregated estimator. Bragging stands for bootstrap robust aggregating: a robust estimator is obtained by using the sample median over the B bootstrap estimates instead of the sample mean as in bagging.
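As a rough illustration of the bagging and bragging procedures described above, here is a minimal sketch in Python; the Gaussian kernel, the fixed bandwidth h, and the function names are this sketch's assumptions, not the study's exact setup.

```python
import numpy as np

def nadaraya_watson(x_grid, x, y, h):
    """Nadaraya-Watson estimate on x_grid with a Gaussian kernel and bandwidth h."""
    # One row of kernel weights per grid point, one column per observation.
    w = np.exp(-0.5 * ((x_grid[:, None] - x[None, :]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

def bagged_nw(x_grid, x, y, h, B=200, robust=False, seed=None):
    """Bagging (bootstrap mean) or bragging (bootstrap median) of NW estimates."""
    rng = np.random.default_rng(seed)
    n = len(x)
    estimates = np.empty((B, len(x_grid)))
    for b in range(B):
        idx = rng.integers(0, n, size=n)  # resample the data with replacement
        estimates[b] = nadaraya_watson(x_grid, x[idx], y[idx], h)
    # Bragging replaces the bootstrap mean with the more robust median.
    return np.median(estimates, axis=0) if robust else estimates.mean(axis=0)
```

Calling `bagged_nw(..., robust=True)` gives the bragging variant; everything else is shared, which is the point of the median-for-mean substitution.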
The third improvement method aims to reduce the bias component of the estimator and is referred to as boosting. Boosting is a general method for improving the accuracy of any given learning algorithm. The method starts off with a sensible estimator and improves it iteratively, based on its performance on a training dataset.
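A minimal sketch of the boosting idea in this regression setting (L2-boosting of a kernel smoother, reusing `nadaraya_watson` from the sketch above; the step count and shrinkage factor `nu` are illustrative assumptions, not the thesis's tuning):

```python
def l2_boost(x_grid, x, y, h, steps=50, nu=0.1):
    """L2-boosting: start from zero, repeatedly fit a weak smoother to the residuals."""
    fit_grid = np.zeros(len(x_grid))  # boosted fit on the evaluation grid
    fit_x = np.zeros(len(x))          # boosted fit at the design points
    for _ in range(steps):
        resid = y - fit_x             # what the current fit still misses
        fit_grid += nu * nadaraya_watson(x_grid, x, resid, h)
        fit_x += nu * nadaraya_watson(x, x, resid, h)
    return fit_grid
```

Each pass shrinks the residuals, so the bias of the smoother is reduced step by step; stopping early keeps the variance from growing in return.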
Results and conclusions verifying existing literature are provided, as well as new results for the new methods. / MSc (Statistics), North-West University, Potchefstroom Campus, 2015
|
3 |
On Some Ridge Regression Estimators for Logistic Regression Models / Williams, Ulyana P., 28 March 2018
The purpose of this research is to investigate the performance of some ridge regression estimators for the logistic regression model in the presence of moderate to high correlation among the explanatory variables. As performance criteria, we use the mean square error (MSE), the mean absolute percentage error (MAPE), the magnitude of bias, and the percentage of times the ridge regression estimator produces a higher MSE than the maximum likelihood estimator. A Monte Carlo simulation study has been executed to compare the performance of the ridge regression estimators under different experimental conditions. The degree of correlation, sample size, number of independent variables, and log odds ratio have been varied in the design of experiment. Simulation results show that under certain conditions, the ridge regression estimators outperform the maximum likelihood estimator. Moreover, an empirical data analysis supports the main findings of this study. This thesis proposes and recommends some good ridge regression estimators of the logistic regression model for practitioners in the health, physical, and social sciences.
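One standard formulation of a ridge estimator for logistic regression replaces the IRLS update (X'WX)^{-1}X'Wz of maximum likelihood with (X'WX + kI)^{-1}X'Wz for a ridge constant k > 0. The sketch below implements that generic form in Python; the thesis compares several specific ridge estimators, so this is an assumed baseline rather than its exact proposals.

```python
import numpy as np

def ridge_logistic(X, y, k, n_iter=100):
    """Ridge-penalised logistic regression via iteratively reweighted least squares."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.clip(1.0 / (1.0 + np.exp(-eta)), 1e-6, 1 - 1e-6)  # fitted probabilities
        W = mu * (1.0 - mu)               # IRLS weights (diagonal of W)
        z = eta + (y - mu) / W            # working response
        XtW = X.T * W                     # X'W without forming the diagonal matrix
        # Ridge-adjusted normal equations: (X'WX + kI) beta = X'Wz
        beta = np.linalg.solve(XtW @ X + k * np.eye(p), XtW @ z)
    return beta
```

Setting k = 0 recovers the maximum likelihood estimator, which is exactly the comparison the simulation study varies across correlation levels, sample sizes, and numbers of predictors.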
|