  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world.
561

Different-based methods in nonparametric regression models

Dai, Wenlin 31 July 2014 (has links)
This thesis develops some new difference-based methods for nonparametric regression models. The first part of the thesis focuses on variance estimation for nonparametric models in various settings. In Chapter 2, a unified framework of variance estimation is proposed for models with a smooth mean function. This framework combines higher-order difference sequences with the least squares method and greatly extends the literature, including most existing methods as special cases. We derive the asymptotic mean squared errors and make both theoretical and numerical comparisons of various estimators within the framework. Based on the dramatic interaction between ordinary difference sequences and the least squares method, we eventually find a uniformly satisfactory estimator for all the settings, solving the challenging problem of sequence selection. In Chapter 3, three methods are developed for variance estimation in the repeated measurement setting. Both their asymptotic properties and finite sample performance are explored. The sequencing method is shown to be the most adaptive, while the sample variance method and the partitioning method are shown to outperform it in certain cases. In Chapter 4, we propose a pairwise regression method for estimating the residual variance. Specifically, we regress the squared difference between observations on the squared distance between design points, and then estimate the residual variance as the intercept. Unlike most existing difference-based estimators that require a smooth regression function, our method applies to regression models with jump discontinuities. It also applies to situations where the design points are unequally spaced. The smoothness assumption on the nonparametric regression function is quite critical for curve fitting and residual variance estimation. The second part (Chapter 5) concentrates on discontinuity detection for the mean function.
In particular, we revisit the difference-based method in Müller and Stadtmüller (1999) and propose to improve it. To achieve this goal, we first reveal that their method is less efficient due to an inappropriate choice of the response variable in their linear regression model. We then propose a new regression model for estimating the residual variance and the total amount of discontinuities simultaneously. In both theory and simulations, we show that the proposed variance estimator has a smaller MSE compared to their estimator, whereas the efficiency of the estimators for the total amount of discontinuities remains unchanged. Finally, we construct a new test procedure for detection using the newly proposed estimators; and via simulation studies, we demonstrate that our new test procedure outperforms the existing one in most settings. At the beginning of Chapter 6, a series of new difference sequences is defined to fill the span between the optimal sequence and the ordinary sequence. The variance estimators using the proposed sequences are shown to be quite robust and to achieve the smallest mean squared errors in most general settings. Then, difference-based methods for variance function estimation are discussed in general.
Keywords: Asymptotic normality, Difference-based estimator, Difference sequence, Jump point, Least squares, Nonparametric regression, Pairwise regression, Repeated measurement, Residual variance
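The two estimators at the heart of Chapters 2 and 4 can be sketched in a few lines of NumPy. The first function is the classical first-order difference estimator that the framework generalizes; the second is a rough reading of the pairwise regression idea: regress squared response differences on squared design distances and halve the intercept. The bandwidth restricting attention to nearby pairs is an assumption of this sketch, not a detail taken from the thesis.

```python
import numpy as np

def diff_variance(y):
    """Rice-style first-order difference estimator: for y_i = m(x_i) + e_i
    with smooth m on an ordered design, successive differences nearly cancel
    the mean function, so Var(e) ~ sum of squared differences over 2(n-1)."""
    d = np.diff(y)
    return np.sum(d ** 2) / (2 * (len(y) - 1))

def pairwise_variance(x, y, h=0.05):
    """Sketch of the Chapter 4 idea: since E[(y_i - y_j)^2] equals
    (m(x_i) - m(x_j))^2 + 2*sigma^2 and the first term vanishes as the
    design distance shrinks, regress squared response differences on squared
    design distances and read 2*sigma^2 off the intercept. Restricting to
    pairs within bandwidth h is an assumption of this sketch."""
    i, j = np.triu_indices(len(x), k=1)
    keep = np.abs(x[i] - x[j]) <= h
    dy2 = (y[i[keep]] - y[j[keep]]) ** 2
    dx2 = (x[i[keep]] - x[j[keep]]) ** 2
    A = np.column_stack([np.ones_like(dx2), dx2])
    coef, *_ = np.linalg.lstsq(A, dy2, rcond=None)
    return coef[0] / 2.0  # intercept estimates 2*sigma^2
```

Both estimators require no fit of the mean function itself, which is the practical appeal of difference-based methods.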
562

Model-adaptive tests for regressions

Zhu, Xuehu 26 August 2015 (has links)
In this thesis, we first develop a model-adaptive checking method for partially parametric single-index models, which combines the advantages of both dimension reduction techniques and global smoothing tests. We then propose a dimension reduction-based model-adaptive test for heteroscedasticity checks in nonparametric and semi-parametric regression models. Finally, to extend our testing approaches to nonparametric regressions with some restrictions, we consider significance testing under a nonparametric framework. In Chapter 2, "Model Checking for Partially Parametric Single-index Models: A Model-adaptive Approach", we consider model checking problems for more general parametric models, including generalized linear models and generalized nonlinear models. We develop a model-adaptive dimension reduction test procedure by extending an existing directional test. Compared with traditional smoothing model checking methodologies, this test not only avoids the curse of dimensionality but is also omnibus: it adapts to the null and alternative models to fully utilize the dimension-reduction structure under the null hypothesis, and it can detect fully nonparametric global alternatives, as well as local alternatives distinct from the null model at a convergence rate as close to the square root of the sample size as possible. Finally, both Monte Carlo simulation studies and real data analysis are conducted to compare with existing tests and illustrate the finite sample performance of the new test. In Chapter 3, "Heteroscedasticity Checks for Nonparametric and Semi-parametric Regression Model: A Dimension Reduction Approach", we consider heteroscedasticity checks for nonparametric and semi-parametric regression models. Existing local smoothing tests suffer severely from the curse of dimensionality, even when the number of covariates is moderate, because of their use of nonparametric estimation.
In this chapter, we propose a dimension reduction-based model-adaptive test that behaves like a local smoothing test as if the number of covariates were equal to the number of their linear combinations in the mean regression function; in particular, equal to 1 when the mean function contains a single index. The test statistic is asymptotically normal under the null hypothesis, so that critical values are easily determined. The finite sample performance of the test is examined by simulations and a real data analysis. In Chapter 4, "Dimension Reduction-based Significance Testing in Nonparametric Regression", since nonparametric techniques need much less restrictive conditions than parametric approaches, we consider checking nonparametric regressions with some restrictions under a sufficient dimension reduction structure. A dimension reduction-based model-adaptive test is proposed for the significance of a subset of covariates in the context of a nonparametric regression model. Unlike existing local smoothing significance tests, the new test behaves like a local smoothing test as if the number of covariates were just that under the null hypothesis, and it can detect local alternatives distinct from the null hypothesis at a rate related only to the number of covariates under the null hypothesis. Thus, the curse of dimensionality is largely alleviated when nonparametric estimation is inevitably required. When there are many insignificant covariates, the improvement of the new test over existing local smoothing tests is very significant in both significance level maintenance and power enhancement. Simulation studies and a real data analysis are conducted to examine the finite sample performance of the proposed test. Finally, we summarize the main results and discuss future research directions in Chapter 5.
Keywords: Model checking; Partially parametric single-index models; Central mean subspace; Central subspace; Partial central subspace; Dimension reduction; Ridge-type eigenvalue ratio estimate; Model-adaption; Heteroscedasticity checks; Significance testing.
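The dimension-reduction idea in Chapter 3 (examine squared residuals along a low-dimensional index of the covariates rather than smoothing over all of them) can be illustrated with a far simpler classical statistic. The sketch below is a Breusch-Pagan-type score test along a single estimated index, not the thesis's model-adaptive statistic; all names and the simulated design are illustrative.

```python
import numpy as np
from scipy import stats

def bp_single_index_test(X, y):
    """Breusch-Pagan-style heteroscedasticity check along a single index.
    Rather than smoothing squared residuals over all p covariates, it
    examines them along one estimated linear combination X @ beta_hat,
    which is the spirit (though not the machinery) of the dimension
    reduction approach. Returns (statistic, p-value); the statistic is
    approximately chi2(1) under homoscedasticity."""
    n = len(y)
    Z = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    index = X @ beta[1:]                   # estimated single index
    u = resid ** 2 / np.mean(resid ** 2)   # scaled squared residuals
    W = np.column_stack([np.ones(n), index])
    g, *_ = np.linalg.lstsq(W, u, rcond=None)
    fitted = W @ g
    stat = np.sum((fitted - np.mean(u)) ** 2) / 2.0  # ESS / 2
    return stat, stats.chi2.sf(stat, df=1)
```

A score test of this shape has one-dimensional smoothing regardless of p, which is the practical benefit the chapter's test retains while remaining omnibus.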
563

Expenditure analysis and planning in a changed economy: a case study approach of Gweru City Council, Zimbabwe

Kuhudzai, Anesu G January 2014 (has links)
The purpose of this study is to analyse Gweru City Council's spending pattern and behaviour and to determine whether this spending is directed towards poverty reduction and economic development. A further aim is to fit a log-differenced regression model to a historical financial dataset obtained from the Gweru City Council Finance Department for the period July 2009 to September 2012. Regression techniques were used to determine how Gweru City Council's total income (dependent variable) is affected by its expenditure (independent variables). Econometric modelling techniques were employed to evaluate the estimated model and to determine its reliability. The study concludes by providing some recommendations for financial plans which could be adopted by Gweru City Council and other local authorities in Zimbabwe for the well-being of Zimbabweans and economic development.
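The log-differenced specification can be sketched in a few lines: transform both income and the expenditure series to log first-differences, then fit by ordinary least squares, so that coefficients read as short-run elasticities of income growth with respect to growth in each expenditure category. This is a generic illustration on simulated data, not the council's actual series or the thesis's exact specification.

```python
import numpy as np

def fit_log_diff(y, X):
    """Fit a log-differenced regression: both the dependent series y and
    the predictor series (columns of X) are transformed to log first
    differences, removing trends in levels; the fitted coefficients are
    interpretable as elasticities."""
    dly = np.diff(np.log(y))
    dlX = np.diff(np.log(X), axis=0)
    A = np.column_stack([np.ones(len(dly)), dlX])
    coef, *_ = np.linalg.lstsq(A, dly, rcond=None)
    return coef  # [intercept, elasticity_1, ..., elasticity_k]
```

Differencing in logs is a standard way to work with growth rates when the level series are nonstationary.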
564

Regression test selection by exclusion

Ngah, Amir January 2012 (has links)
This thesis addresses research in the area of regression testing. Software systems change and evolve over time, and each time a system is changed, regression tests have to be run to validate the changes. An important issue in regression testing is how to select a minimal subset of the original program's existing test cases to rerun on the modified program. One technique to tackle this issue is regression test selection. The aim of this research is to significantly reduce the number of test cases that need to be run after changes have been made. Specifically, this thesis focuses on developing a model for regression test selection using the decomposition slicing technique. Decomposition slicing is capable of identifying the unchanged parts of a system. A model of regression test selection based on decomposition slicing and exclusion of test cases was developed in this thesis. The model is called Regression Test Selection by Exclusion (ReTSE) and has four main phases: Program Analysis, Comparison, Exclusion and Optimisation. The validity of the ReTSE model is explored through a number of case studies. The case studies tackle all types of modification, such as changed, deleted and added statements, and cover both single modifications and combinations of modifications. The application of the proposed model has shown that significant reductions in the number of test cases can be achieved. Evaluation of the model against an existing framework, and comparison with another model, has also shown promising results. The case studies have been limited to relatively small programs, and the next step is to apply the model to larger systems with more complex changes to ascertain whether it scales up. While some parts of the model have been automated, tools will be required for the rest when carrying out the larger case studies.
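The exclusion idea can be sketched independently of the slicing machinery: given, for each test case, the set of program units it exercises, exclude every test whose units are all unchanged, and rerun only the rest. In the thesis the unchanged parts are identified by decomposition slicing; the explicit coverage map below is a simplifying assumption of this sketch.

```python
def select_tests(test_coverage, changed_units):
    """Sketch of selection by exclusion: keep a test only if at least one
    unit it exercises was changed; every test touching only unchanged
    units is excluded from the rerun set."""
    changed = set(changed_units)
    return sorted(t for t, units in test_coverage.items()
                  if changed & set(units))
```

For example, if only `eval` changed, a test exercising only `print` is safely excluded, which is exactly the reduction the case studies measure.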
565

On goodness-of-fit of logistic regression model

Liu, Ying January 1900 (has links)
Doctor of Philosophy / Department of Statistics / Shie-Shien Yang / The logistic regression model is a member of the family of generalized linear models and is widely used in many areas of scientific research. The logit link function and the binary dependent variable of interest make the logistic regression model distinct from the linear regression model. The conclusions drawn from a fitted logistic regression model can be incorrect or misleading when the covariates cannot explain and/or predict the response variable accurately based on the fitted model; that is, when lack-of-fit is present in the fitted model. Current goodness-of-fit tests can be roughly categorized into four types. (1) Tests based on covariate patterns, e.g., Pearson's chi-square test, the deviance D test, and Osius and Rojek's normal approximation test. (2) The Hosmer-Lemeshow C and H tests, based on the estimated probabilities. (3) Score tests based on the comparison of two models, where the assumed logistic regression model is embedded into a more general parametric family of models, e.g., Stukel's score test and Tsiatis's test. (4) Smoothed residual tests, including le Cessie and van Houwelingen's test and Hosmer and Lemeshow's test. All of them have advantages and disadvantages. In this dissertation, we propose a partition logistic regression model which can be viewed as a generalized logistic regression model, since it includes the logistic regression model as a special case. This partition model is used to construct a goodness-of-fit test for a logistic regression model; the test can also identify whether the lack-of-fit is due to the tails or the middle part of the probabilities of success. Several simulation results showed that the proposed test performs as well as or better than many of the known tests.
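Of the four categories, the Hosmer-Lemeshow C test is the easiest to sketch: bin observations by deciles of fitted probability and compare observed with expected successes per bin. A minimal version, assuming no bin has a degenerate average probability:

```python
import numpy as np
from scipy import stats

def hosmer_lemeshow(y, p, g=10):
    """Hosmer-Lemeshow C statistic: sort observations by fitted probability,
    split into g roughly equal groups, and accumulate a chi-square-type
    discrepancy between observed and expected successes in each group.
    Under an adequate model the statistic is approximately chi-square with
    g - 2 degrees of freedom (the conventional reference distribution)."""
    order = np.argsort(p)
    y, p = np.asarray(y)[order], np.asarray(p)[order]
    stat = 0.0
    for idx in np.array_split(np.arange(len(y)), g):
        obs, exp, n_k = y[idx].sum(), p[idx].sum(), len(idx)
        pbar = exp / n_k
        stat += (obs - exp) ** 2 / (n_k * pbar * (1.0 - pbar))
    return stat, stats.chi2.sf(stat, df=g - 2)
```

Grossly miscalibrated probabilities inflate the per-bin discrepancies and drive the p-value toward zero.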
566

Regularization methods for support vector machines

Wu, Zhili 01 January 2008 (has links)
No description available.
567

Dimension reduction and variable selection in regression

Wen, Songqiao 01 January 2008 (has links)
No description available.
568

The consolidation of forecasts with regression models

Venter, Daniel Jacobus Lodewyk January 2014 (has links)
The primary objective of this study was to develop a dashboard for the consolidation of multiple forecasts utilising a range of multiple linear regression models. The term dashboard describes, in a single word, the characteristics of the forecast consolidation application that was developed to provide the required functionality via a graphical user interface structured as a series of interlinked screens. Microsoft Excel was used as the platform to develop the dashboard, named ConFoRM (an acronym for Consolidate Forecasts with Regression Models). The major steps of the consolidation process incorporated in ConFoRM are: 1. Input historical data. 2. Select appropriate analysis and holdout samples. 3. Specify the regression models to be considered as candidates for the final model used to consolidate the forecasts. 4. Perform regression analysis and holdout analysis for each of the models specified in step 3. 5. Perform post-holdout testing to assess the performance on out-of-sample data of the model with the best holdout validation results. 6. Consolidate the forecasts. Two data transformations are available: removal of the growth and time-period effects from the time series, and translation of the time series by subtracting, for each data record i, the mean of all the forecasts for that record from the variable being predicted and from its related forecasts. The pre-defined ordinary least squares linear regression models (LRMs) are: a. a set of k simple LRMs, one for each of the k forecasts; b. a multiple LRM that includes all the forecasts; c. a multiple LRM that includes all the forecasts and as many of the first-order interactions between the input forecasts as allowed by the sample size and the maximum number of predictors supported by the dashboard, the interactions included being those with the highest individual correlation with the variable being predicted; d. a multiple LRM that includes as many of the forecasts and first-order interactions between the input forecasts as allowed by the sample size and the maximum number of predictors supported by the dashboard, the forecasts and interactions included being those with the highest individual correlation with the variable being predicted; e. a simple LRM with the predictor variable being the mean of the forecasts; f. a set of simple LRMs with the predictor variable in each case being a weighted mean of the forecasts, with different formulas for the weights. Also available is an ad hoc user-specified model in terms of the forecasts and the predictor variables generated by the dashboard for the pre-defined models. The regression analysis provides for both forward entry and backward removal of predictors. Weighted least squares (WLS) regression can optionally be performed based on the age of the forecasts, with smaller weights for older forecasts.
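Model (b) of the pre-defined set, a multiple LRM over all k input forecasts, can be sketched as follows; the data and function names are illustrative, and ConFoRM's other models are variations on the same least-squares pattern.

```python
import numpy as np

def consolidate(forecasts, actuals, new_forecasts):
    """Consolidate k forecasts by multiple linear regression: fit the
    historical actuals on the k historical forecasts by OLS (with an
    intercept), then apply the fitted weights to a fresh set of forecasts.
    forecasts: (n, k) array; actuals: (n,); new_forecasts: (m, k)."""
    F = np.column_stack([np.ones(len(actuals)), forecasts])
    w, *_ = np.linalg.lstsq(F, actuals, rcond=None)
    Fn = np.column_stack([np.ones(len(new_forecasts)), new_forecasts])
    return Fn @ w
```

The interaction models (c, d) simply add product columns to `F`, and the WLS option reweights rows by forecast age before solving.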
569

Export Propensity of Canadian SMEs: A Gender Based Study

Liao, Xiaolu January 2015 (has links)
SME exporters constitute a critical economic force that contributes significantly to national productivity and job creation in the Canadian economy. However, the academic literature suggests that female-owned SMEs are less likely to export. With lower export propensity, the potential of female-owned SMEs for organic growth, economic self-sufficiency and wealth creation could be compromised. This paper applies logistic regression to study factors that influence SME owners' export propensity, with particular reference to the moderating effect of gender, in the context of Ajzen and Fishbein's (2005) theory of Reasoned Action and Planned Behavior. We improve on the methodology of prevailing research by redefining "gender" in a more appropriate way and by computing gender interaction effects more accurately. Based on this analysis, we find that, although male- and female-owned SMEs show different likelihoods of exporting, gender does not have a direct residual impact. Instead, systemic gender differences account for most of the difference in export propensity between male-owned and female-owned SMEs. Specifically, female-owned SMEs may be systemically disadvantaged because their firms are smaller and more limited in management capacity, with younger and less experienced managers. The lack of resources and market knowledge becomes a constraining factor with respect to becoming "export-ready". Additionally, female SME owners show a higher perception of risk and financing difficulty (although they do not encounter higher rejection rates on financing applications), and their subjective perceptions of potential barriers may contribute to their reluctance to export.
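The interaction-based decomposition the study describes can be sketched with a plain Newton/IRLS logistic fit: gender, a firm covariate, and their product enter the design together, so any gender gap splits into a direct term and a systemic interaction term. The variables and data below are simulated illustrations, not the survey's actual fields or the paper's exact specification.

```python
import numpy as np

def logit_irls(X, y, n_iter=25):
    """Logistic regression fitted by Newton's method (IRLS). X is the
    design without an intercept column; one is prepended here. Returns the
    coefficient vector [intercept, b_1, ..., b_p]."""
    Z = np.column_stack([np.ones(len(y)), X])
    b = np.zeros(Z.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Z @ b))
        W = p * (1.0 - p)
        # Newton step: b += (Z' W Z)^{-1} Z' (y - p)
        b = b + np.linalg.solve((Z * W[:, None]).T @ Z, Z.T @ (y - p))
    return b
```

With a design of `[gender, firm_size, gender * firm_size]`, a near-zero direct gender coefficient alongside a nonzero interaction coefficient is the pattern the paper reports: the gap operates through systemic differences rather than gender itself.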
570

Variable regression estimation of unknown system delay

Elnaggar, Ashraf January 1990 (has links)
This thesis describes a novel approach to modelling and estimating systems with unknown delay. The a priori knowledge available about the system is fully utilized, so that the number of parameters to be estimated equals the number of unknowns in the system. Existing methods represent the single unknown system delay by a large number of unknown parameters in the system model. The purpose of this thesis is to develop new methods of modelling such systems so that the unknowns are estimated directly. The Variable Regression Estimation technique is developed to provide direct delay estimation. The delay estimate requires minimal excitation, is robust and bounded, and converges to the true value for first-order and second-order systems. For high-order systems the delay estimate provides a good model approximation, and the model is always stable and matches the frequency response of the system at any given frequency. The new delay estimation method is coupled with Pole Placement, Dahlin and Generalized Predictive Controller (GPC) designs, yielding adaptive versions of these controllers. The new adaptive GPC has the same closed-loop performance for different values of the system delay, which was not achievable in the original adaptive GPC. The adaptive controllers with direct delay estimation can regulate systems with dominant time delay using a minimum of parameters in the controller and the system model. The delay does not lose identifiability in closed-loop estimation. Experiments on delay estimation show excellent agreement with the theoretical analysis of the proposed methods. / Applied Science, Faculty of / Electrical and Computer Engineering, Department of / Graduate
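For intuition, delay estimation can be reduced to a grid search: fit a first-order ARX model for each candidate delay and keep the delay with the smallest residual sum of squares. This exhaustive baseline is not the thesis's Variable Regression Estimation, which treats the delay as a directly estimated parameter of the regression vector, but it shows what is being estimated.

```python
import numpy as np

def estimate_delay(u, y, max_delay):
    """Baseline delay estimation for y[t] = a*y[t-1] + b*u[t-d] + noise:
    for each candidate delay d, fit (a, b) by least squares on a common
    sample and return the d minimizing the residual sum of squares."""
    N = len(y)
    t0 = max(1, max_delay)  # common start so RSS values are comparable
    best_d, best_rss = 0, np.inf
    for d in range(max_delay + 1):
        Phi = np.column_stack([y[t0 - 1:N - 1], u[t0 - d:N - d]])
        target = y[t0:N]
        theta, *_ = np.linalg.lstsq(Phi, target, rcond=None)
        rss = float(np.sum((target - Phi @ theta) ** 2))
        if rss < best_rss:
            best_d, best_rss = d, rss
    return best_d
```

The grid search needs one least-squares fit per candidate delay, which is the parameter-count overhead that direct estimation of the delay avoids.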
