
Difference-based methods in nonparametric regression models

Dai, Wenlin 31 July 2014 (has links)
This thesis develops some new difference-based methods for nonparametric regression models. The first part of the thesis focuses on variance estimation for nonparametric models under various settings. In Chapter 2, a unified framework of variance estimation is proposed for a model with a smooth mean function. This framework combines higher-order difference sequences with the least squares method and greatly extends the literature, including most existing methods as special cases. We derive the asymptotic mean squared errors and make both theoretical and numerical comparisons of various estimators within the framework. Based on the dramatic interaction between ordinary difference sequences and the least squares method, we eventually find a uniformly satisfactory estimator for all the settings, solving the challenging problem of sequence selection. In Chapter 3, three methods are developed for variance estimation in the repeated measurement setting. Both their asymptotic properties and finite sample performance are explored. The sequencing method is shown to be the most adaptive, while the sample variance method and the partitioning method outperform it in certain cases. In Chapter 4, we propose a pairwise regression method for estimating the residual variance. Specifically, we regress the squared difference between observations on the squared distance between design points, and then estimate the residual variance as the intercept. Unlike most existing difference-based estimators, which require a smooth regression function, our method applies to regression models with jump discontinuities. It also applies to situations where the design points are unequally spaced. The smoothness assumption on the nonparametric regression function is critical for both curve fitting and residual variance estimation. The second part (Chapter 5) concentrates on detecting discontinuities in the mean function.
In particular, we revisit the difference-based method of Müller and Stadtmüller (1999) and propose to improve it. To achieve this goal, we first show that their method is less efficient due to an inappropriate choice of the response variable in their linear regression model. We then propose a new regression model for estimating the residual variance and the total amount of discontinuities simultaneously. In both theory and simulations, we show that the proposed variance estimator has a smaller MSE than their estimator, whereas the efficiency of the estimators for the total amount of discontinuities remains unchanged. Finally, we construct a new test procedure for detection using the newly proposed estimators; via simulation studies, we demonstrate that our new test procedure outperforms the existing one in most settings. At the beginning of Chapter 6, a series of new difference sequences is defined to fill the span between the optimal sequence and the ordinary sequence. The variance estimators using the proposed sequences are shown to be quite robust and to achieve the smallest mean squared errors in most general settings. Difference-based methods for variance function estimation are then discussed more generally. Keywords: Asymptotic normality, Difference-based estimator, Difference sequence, Jump point, Least squares, Nonparametric regression, Pairwise regression, Repeated measurement, Residual variance
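For context, the simplest member of the difference-based family that this thesis generalizes is the first-order variance estimator often attributed to Rice (1984). The sketch below uses synthetic data and does not reproduce the thesis's higher-order sequences or its pairwise regression estimator:

```python
import numpy as np

def rice_variance(y):
    """First-order difference-based estimator of the residual variance:
    sigma2_hat = sum (y_i - y_{i-1})^2 / (2 (n - 1)).
    Differencing removes a smooth mean function, so no curve fit is needed."""
    y = np.asarray(y, dtype=float)
    d = np.diff(y)                            # first-order differences
    return np.sum(d ** 2) / (2.0 * (len(y) - 1))

# Smooth signal plus noise on an equally spaced grid (illustrative data).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 500)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.5, size=x.size)

sigma2_hat = rice_variance(y)                 # true residual variance is 0.25
```

The differencing step is what makes the estimator model-free over smooth means; the thesis's contribution concerns how longer difference sequences and least squares weighting improve on this baseline.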

Model-adaptive tests for regressions

Zhu, Xuehu 26 August 2015 (has links)
In this thesis, we first develop a model-adaptive checking method for partially parametric single-index models, which combines the advantages of both dimension reduction techniques and global smoothing tests. We also propose a dimension reduction-based model-adaptive test of heteroscedasticity for nonparametric and semi-parametric regression models. Finally, to extend our testing approaches to nonparametric regressions with some restrictions, we consider significance testing under a nonparametric framework. In Chapter 2, "Model Checking for Partially Parametric Single-index Models: A Model-adaptive Approach", we consider model checking problems for more general parametric models, including generalized linear models and generalized nonlinear models. We develop a model-adaptive dimension reduction test procedure by extending an existing directional test. Compared with traditional smoothing model checking methodologies, this test not only avoids the curse of dimensionality but is also omnibus: it adapts to the null and alternative models to fully utilize the dimension-reduction structure under the null hypothesis, and it can detect fully nonparametric global alternatives, as well as local alternatives distinct from the null model, at a convergence rate as close to the square root of the sample size as possible. Finally, both Monte Carlo simulation studies and a real data analysis are conducted to compare with existing tests and to illustrate the finite sample performance of the new test. In Chapter 3, "Heteroscedasticity Checks for Nonparametric and Semi-parametric Regression Models: A Dimension Reduction Approach", we consider heteroscedasticity checks for nonparametric and semi-parametric regression models. Existing local smoothing tests suffer severely from the curse of dimensionality, even when the number of covariates is moderate, because of their use of nonparametric estimation.
In this chapter, we propose a dimension reduction-based model-adaptive test that behaves like a local smoothing test as if the number of covariates were equal to the number of their linear combinations in the mean regression function; in particular, equal to 1 when the mean function contains a single index. The test statistic is asymptotically normal under the null hypothesis, so that critical values are easily determined. The finite sample performance of the test is examined by simulations and a real data analysis. In Chapter 4, "Dimension Reduction-based Significance Testing in Nonparametric Regression", as nonparametric techniques need much less restrictive conditions than those required by parametric approaches, we consider checking nonparametric regressions with some restrictions under a sufficient dimension reduction structure. A dimension reduction-based model-adaptive test is proposed for the significance of a subset of covariates in the context of a nonparametric regression model. Unlike existing local smoothing significance tests, the new test behaves like a local smoothing test as if the number of covariates were just that under the null hypothesis, and it can detect local alternatives distinct from the null hypothesis at a rate related only to the number of covariates under the null hypothesis. Thus, the curse of dimensionality is largely alleviated when nonparametric estimation is inevitably required. When there are many insignificant covariates, the new test improves significantly over existing local smoothing tests in both significance level maintenance and power. Simulation studies and a real data analysis are conducted to examine the finite sample performance of the proposed test. Finally, we summarize the main results and discuss future research directions in Chapter 5.
Keywords: Model checking; Partially parametric single-index models; Central mean subspace; Central subspace; Partial central subspace; Dimension reduction; Ridge-type eigenvalue ratio estimate; Model adaptation; Heteroscedasticity checks; Significance testing.

Expenditure analysis and planning in a changed economy: a case study approach of Gweru City Council, Zimbabwe

Kuhudzai, Anesu G January 2014 (has links)
The purpose of this study is to analyse Gweru City Council's spending pattern and behaviour and to determine whether or not this spending is directed towards poverty reduction and economic development. A further aim is to fit a log-differenced regression model to a historical financial dataset obtained from the Gweru City Council Finance Department for the period July 2009 to September 2012. Regression techniques were used to determine how Gweru City Council's total income (dependent variable) is affected by its expenditure (independent variables). Econometric modelling techniques were employed, with diagnostic tests conducted to determine the reliability of the estimated model. The study concludes by providing some recommendations for possible financial plans which could be adopted by Gweru City Council and other local authorities in Zimbabwe for the well-being of Zimbabweans and economic development.
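A log-differenced regression of the kind fitted in the study can be sketched as follows; the series, the elasticity of 0.8, and the noise levels below are illustrative assumptions, not the Council's July 2009 to September 2012 data:

```python
import numpy as np

# Synthetic monthly income and expenditure series standing in for the data.
rng = np.random.default_rng(1)
n = 40
expenditure = np.exp(np.cumsum(rng.normal(0.01, 0.05, n)) + 10)
income = expenditure ** 0.8 * np.exp(rng.normal(0.0, 0.02, n))

# Log-difference both series: d log y_t = log y_t - log y_{t-1}.
# This turns a multiplicative level relation into a linear one in growth rates
# and removes a common trend, which is the point of the transformation.
dly = np.diff(np.log(income))
dlx = np.diff(np.log(expenditure))

# OLS of income growth on expenditure growth (with intercept).
X = np.column_stack([np.ones(dlx.size), dlx])
beta, *_ = np.linalg.lstsq(X, dly, rcond=None)
slope = beta[1]          # elasticity estimate, near the assumed 0.8
```

The slope of the log-differenced regression is directly interpretable as an elasticity: the percentage change in income associated with a one percent change in expenditure.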

Regularization methods for support vector machines

Wu, Zhili 01 January 2008 (has links)
No description available.

Dimension reduction and variable selection in regression

Wen, Songqiao 01 January 2008 (has links)
No description available.

The consolidation of forecasts with regression models

Venter, Daniel Jacobus Lodewyk January 2014 (has links)
The primary objective of this study was to develop a dashboard for the consolidation of multiple forecasts utilising a range of multiple linear regression models. The term dashboard describes, in a single word, the characteristics of the forecast consolidation application that was developed to provide the required functionality via a graphical user interface structured as a series of interlinked screens. Microsoft Excel© was used as the platform to develop the dashboard, named ConFoRM (an acronym for Consolidate Forecasts with Regression Models). The major steps of the consolidation process incorporated in ConFoRM are: 1. Input historical data. 2. Select appropriate analysis and holdout samples. 3. Specify the regression models to be considered as candidates for the final model used for the consolidation of forecasts. 4. Perform regression analysis and holdout analysis for each of the models specified in step 3. 5. Perform post-holdout testing to assess the performance, on out-of-sample data, of the model with the best holdout validation results. 6. Consolidate forecasts. Two data transformations are available: the removal of growth and time-period effects from the time series, and a translation of the time series that subtracts the mean of all the forecasts for data record i from the variable being predicted and its related forecasts for each data record i. The pre-defined ordinary least squares linear regression models (LRMs) available are: a. a set of k simple LRMs, one for each of the k forecasts; b. a multiple LRM that includes all the forecasts; c. a multiple LRM that includes all the forecasts and as many of the first-order interactions between the input forecasts as allowed by the sample size and the maximum number of predictors provided by the dashboard, with the interactions included in the model being those with the highest individual correlation with the variable being predicted; d. a multiple LRM that includes as many of the forecasts and first-order interactions between the input forecasts as allowed by the sample size and the maximum number of predictors provided by the dashboard, with the forecasts and interactions included in the model being those with the highest individual correlation with the variable being predicted; e. a simple LRM with the predictor variable being the mean of the forecasts; f. a set of simple LRMs with the predictor variable in each case being the weighted mean of the forecasts, with different formulas for the weights. Also available is an ad hoc user-specified model in terms of the forecasts and the predictor variables generated by the dashboard for the pre-defined models. Provision is made in the regression analysis for both forward entry and backward removal regression. Weighted least squares (WLS) regression can optionally be performed based on the age of the forecasts, with smaller weights for older forecasts.
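The consolidation step with pre-defined model (b), a multiple LRM over all input forecasts fitted on an analysis sample and applied to a holdout sample, can be sketched outside Excel; the synthetic data, sample sizes, and noise parameters below are illustrative assumptions, not part of ConFoRM:

```python
import numpy as np

# Synthetic target and k = 3 noisy, biased forecasts of it.
rng = np.random.default_rng(2)
n = 120
actual = 100 + rng.normal(0, 5, n)
forecasts = np.column_stack([actual + rng.normal(b, s, n)
                             for b, s in [(2, 4), (-1, 6), (0, 8)]])

# Split into an analysis sample and a holdout sample, as the dashboard does.
train, hold = slice(0, 90), slice(90, n)

# Model (b): a multiple LRM that includes all the forecasts plus an intercept.
X = np.column_stack([np.ones(n), forecasts])
beta, *_ = np.linalg.lstsq(X[train], actual[train], rcond=None)
consolidated = X[hold] @ beta

# Holdout comparison against the naive (unweighted) mean of the forecasts.
mean_fc = forecasts[hold].mean(axis=1)
mae_consolidated = np.mean(np.abs(consolidated - actual[hold]))
mae_mean = np.mean(np.abs(mean_fc - actual[hold]))
```

The regression both reweights the forecasts toward the less noisy ones and absorbs their biases into the intercept, which is what the consolidation is meant to achieve.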

Variable regression estimation of unknown system delay

Elnaggar, Ashraf January 1990 (has links)
This thesis describes a novel approach to modelling and estimating systems of unknown delay. The a priori knowledge available about the systems is fully utilized so that the number of parameters to be estimated equals the number of unknowns in the systems. Existing methods represent the single unknown system delay by a large number of unknown parameters in the system model. The purpose of this thesis is to develop new methods of modelling the systems so that the unknowns are estimated directly. The Variable Regression Estimation technique is developed to provide direct delay estimation. The delay estimation requires minimum excitation and is robust and bounded, and it converges to the true value for first-order and second-order systems. The delay estimation provides a good model approximation for high-order systems, and the model is always stable and matches the frequency response of the system at any given frequency. The new delay estimation method is coupled with the Pole Placement, Dahlin, and Generalized Predictive Controller (GPC) designs, and adaptive versions of these controllers result. The new adaptive GPC has the same closed-loop performance for different values of the system delay, which was not achievable in the original adaptive GPC. The adaptive controllers with direct delay estimation can regulate systems with dominant time delay with a minimum of parameters in the controller and the system model. The delay does not lose identifiability in closed-loop estimation. Experiments on the delay estimation show excellent agreement with the theoretical analysis of the proposed methods. / Applied Science, Faculty of / Electrical and Computer Engineering, Department of / Graduate
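By way of contrast with the thesis's Variable Regression Estimation technique (not reproduced here), a naive baseline for the same problem fits a fixed-structure ARX model for each candidate delay and keeps the delay with the smallest residual sum of squares; the system, parameter values, and noise level below are illustrative assumptions:

```python
import numpy as np

# First-order system y_t = a*y_{t-1} + b*u_{t-d} + noise with unknown delay d.
rng = np.random.default_rng(3)
n, a, b, d_true = 400, 0.8, 1.5, 4
u = rng.normal(0, 1, n)
y = np.zeros(n)
for t in range(d_true, n):
    y[t] = a * y[t - 1] + b * u[t - d_true] + rng.normal(0, 0.1)

def sse_for_delay(d):
    """Fit an ARX(1) model assuming delay d; return its residual sum of squares."""
    t0 = max(d, 1)
    X = np.column_stack([y[t0 - 1:n - 1], u[t0 - d:n - d]])
    target = y[t0:n]
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return np.sum((target - X @ beta) ** 2)

# Grid search over candidate delays: the correctly aligned input column
# explains y almost perfectly, so the true delay minimizes the SSE.
d_hat = min(range(1, 11), key=sse_for_delay)
```

This brute-force search requires one regression per candidate delay; the thesis's point is to estimate the delay directly as a single parameter instead.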

Covariance analysis of multiple linear regression equations

Eekman, Gordon Clifford Duncan January 1969 (has links)
A covariance analysis procedure which compares multiple linear regression equations is developed by extending the general linear hypothesis model of full rank to encompass heterogeneous data. A FORTRAN IV computer program tests parallelism and coincidence amongst sets of regression equations. By a practical example both the theory and the computer program are demonstrated. / Graduate and Postdoctoral Studies / Graduate
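The coincidence test at the heart of the procedure can be sketched as a Chow-type F-test under the general linear hypothesis; this is an illustrative Python reimplementation of the idea with synthetic data, not the thesis's FORTRAN IV program:

```python
import numpy as np

def coincidence_f_test(x1, y1, x2, y2):
    """F statistic for testing whether two simple linear regressions share
    the same intercept and slope (coincidence), by comparing the pooled
    single-line fit against separate fits for each data set."""
    def rss(x, y):
        X = np.column_stack([np.ones(len(x)), x])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return np.sum((y - X @ beta) ** 2)
    rss_pooled = rss(np.concatenate([x1, x2]), np.concatenate([y1, y2]))
    rss_sep = rss(x1, y1) + rss(x2, y2)
    n, q = len(x1) + len(x2), 2          # q = 2 restrictions: intercept, slope
    return ((rss_pooled - rss_sep) / q) / (rss_sep / (n - 2 * q))

rng = np.random.default_rng(4)
x1, x2 = rng.uniform(0, 10, 50), rng.uniform(0, 10, 50)
# Same line in both groups vs. a shifted intercept in the second group.
same = coincidence_f_test(x1, 1 + 2 * x1 + rng.normal(0, 1, 50),
                          x2, 1 + 2 * x2 + rng.normal(0, 1, 50))
diff = coincidence_f_test(x1, 1 + 2 * x1 + rng.normal(0, 1, 50),
                          x2, 5 + 2 * x2 + rng.normal(0, 1, 50))
```

A parallelism test follows the same pattern with the restricted model sharing only the slope (q = 1), which is how the thesis's framework nests both comparisons.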

Additivity of component regression equations when the underlying model is linear

Chiyenda, Simeon Sandaramu January 1983 (has links)
This thesis is concerned with the theory of fitting models of the form y = Xβ + ε, where some distributional assumptions are made on ε. More specifically, suppose that y_j = Zβ_j + ε_j is a model for component j (j = 1, 2, ..., k) and that one is interested in estimation and inference theory relating to y_T = Σ_{j=1}^{k} y_j = Xβ_T + ε_T. The theory of estimation and inference relating to the fitting of y_T is considered within the general framework of general linear model theory. The consequences of independence and dependence of the y_j (j = 1, 2, ..., k) for estimation and inference are investigated. It is shown that under the assumption of independence of the y_j, the parameter vector of the total equation can easily be obtained by adding corresponding components of the estimates for the parameters of the component models. Under dependence, however, this additivity property seems to break down. Inference theory under dependence is much less tractable than under independence and depends critically, of course, upon whether y_T is normal or not. Finally, the theory of additivity is extended to classificatory models encountered in designed experiments. It is shown, however, that additivity does not hold in general in nonlinear models. The problem of additivity does not require new computing subroutines for estimation and inference in those cases where it works. / Forestry, Faculty of / Graduate
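The additivity property under a common design matrix follows from OLS being linear in the response: β̂ = (X'X)⁻¹X'y, so fitting the summed response y_T = Σ y_j yields the sum of the component estimates. A numerical illustration with synthetic data (the component coefficients and noise level are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 60, 3
# One design matrix shared by all k components: intercept plus two covariates.
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])

# k component responses, each with its own coefficients and noise.
ys = [X @ rng.normal(size=3) + rng.normal(0, 0.5, n) for _ in range(k)]
betas = [np.linalg.lstsq(X, y, rcond=None)[0] for y in ys]

# Fit the total equation to the summed response.
beta_total = np.linalg.lstsq(X, sum(ys), rcond=None)[0]

# Additivity: the total fit equals the sum of the component fits
# (exactly, up to floating-point rounding), because OLS is linear in y.
additive = np.allclose(beta_total, sum(betas))
```

Note that this identity concerns the point estimates only; as the abstract stresses, the accompanying inference decomposes cleanly only when the components are independent.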

The accuracy of parameter estimates and coverage probability of population values in regression models upon different treatments of systematically missing data

Othuon, Lucas Onyango A. 11 1900 (has links)
Several methods are available for the treatment of missing data. Most of the methods are based on the assumption that data are missing completely at random (MCAR). However, data sets that are MCAR are rare in psycho-educational research. This gives rise to the need for investigating the performance of missing data treatments (MDTs) with non-randomly or systematically missing data, an area that has not received much attention by researchers in the past. In the current simulation study, the performance of four MDTs, namely, mean substitution (MS), pairwise deletion (PW), expectation-maximization method (EM), and regression imputation (RS), was investigated in a linear multiple regression context. Four investigations were conducted involving four predictors under low and high multiple R² , and nine predictors under low and high multiple R² . In addition, each investigation was conducted under three different sample size conditions (94, 153, and 265). The design factors were missing pattern (2 levels), percent missing (3 levels) and non-normality (4 levels). This design gave rise to 72 treatment conditions. The sampling was replicated one thousand times in each condition. MDTs were evaluated based on accuracy of parameter estimates. In addition, the bias in parameter estimates, and coverage probability of regression coefficients, were computed. The effect of missing pattern, percent missing, and non-normality on absolute error for R² estimate was of practical significance. In the estimation of R², EM was the most accurate under the low R² condition, and PW was the most accurate under the high R² condition. No MDT was consistently least biased under low R² condition. However, with nine predictors under the high R² condition, PW was generally the least biased, with a tendency to overestimate population R². The mean absolute error (MAE) tended to increase with increasing non-normality and increasing percent missing. 
Also, the MAE of the R² estimate tended to be smaller under the monotonic pattern than under the non-monotonic pattern. MDTs were most differentiated at the highest level of percent missing (20%), and under the non-monotonic missing pattern. In the estimation of regression coefficients, RS generally outperformed the other MDTs with respect to accuracy as measured by MAE. However, EM was competitive under the four-predictor, low R² condition. MDTs were most differentiated only in the estimation of β₁, the coefficient of the variable with no missing values. MDTs were undifferentiated in their performance in the estimation of β₂, ..., β_p, p = 4 or 9, although the MAE remained roughly the same across all the regression coefficients. The MAE increased with increasing non-normality and percent missing, but decreased with increasing sample size. The MAE was generally greater under the non-monotonic pattern than under the monotonic pattern. With four predictors, the least bias was under RS regardless of the magnitude of population R². With nine predictors, the least bias was under PW regardless of population R². The results for coverage probabilities were generally similar to those for the estimation of regression coefficients, with coverage probabilities closest to the nominal level under RS. As expected, coverage probabilities decreased with increasing non-normality for each MDT, with values closest to the nominal level for normal data. MDTs were more differentiated with respect to coverage probabilities under the non-monotonic pattern than under the monotonic pattern. The results have numerous implications for researchers. First, the choice of MDT was found to depend on the magnitude of population R², the number of predictors, and the parameter estimate of interest. With the estimation of R² as the goal of analysis, use of EM is recommended if the anticipated R² is low (about .2). However, if the anticipated R² is high (about .6), use of PW is recommended.
With the estimation of regression coefficients as the goal of analysis, the choice of MDT was found to be most crucial for the variable with no missing data. The RS method is most recommended with respect to estimation accuracy of regression coefficients, although greater bias was recorded under RS than under PW or MS when the number of predictors was large (i.e., nine predictors). Second, the choice of MDT seems to be of little concern if the proportion of missing data is 10 percent, and also if the missing pattern is monotonic rather than non-monotonic. Third, the proportion of missing data seems to have less impact on the accuracy of parameter estimates under monotonic missing pattern than under non-monotonic missing pattern. Fourth, it is recommended for researchers that in the control of Type I error rates under low R² condition, the EM method should be used as it produced coverage probability of regression coefficients closest to nominal value at .05 level. However, in the control of Type I error rates under high R² condition, the RS method is recommended. Considering that simulated data were used in the present study, it is suggested that future research should attempt to validate the findings of the present study using real field data. Also, a future investigator could modify the number of predictors as well as the confidence interval in the calculation of coverage probabilities to extend generalization of results. / Education, Faculty of / Educational and Counselling Psychology, and Special Education (ECPS), Department of / Graduate
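Of the four treatments compared, mean substitution (MS) is the simplest to state: replace each missing entry with the column mean of the observed values. A generic sketch (not the study's simulation code):

```python
import numpy as np

def mean_substitution(X):
    """Mean substitution (MS): fill each missing entry (NaN) with the
    mean of the observed values in its column."""
    X = np.array(X, dtype=float)
    col_means = np.nanmean(X, axis=0)     # per-column means over observed values
    rows, cols = np.where(np.isnan(X))
    X[rows, cols] = np.take(col_means, cols)
    return X

# Two predictors, one missing value in each column.
X = np.array([[1.0, 2.0],
              [np.nan, 4.0],
              [3.0, np.nan]])
filled = mean_substitution(X)
# Observed column means are 2.0 and 3.0, so those values fill the gaps.
```

MS preserves column means but shrinks variances and correlations, which is one reason the study finds it outperformed by EM and RS in most conditions.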
