271

Is uncorrelating the residuals worth it?

Ward, Laurel Lorraine January 1973 (has links)
No description available.
272

Robust inferential procedures applied to regression

Agard, David B. 13 October 2005 (has links)
This dissertation is concerned with the evaluation of a robust modification of existing methodology within the classical inference framework. This results in an F-test based on the robust weights used in arriving at the M or Bounded-Influence estimates. These estimates are known to be robust to outliers and highly influential points, respectively. The first part of this evaluation involves a Monte Carlo power study, under violations of the classical assumptions, of this F-test based on robust weights and several other proposed robust tests. It is shown in simulation studies that, under certain conditions, the F-test based on robust weights is a much more powerful test than the classical F-test, and compares favorably to all other proposals studied. The second part involves the development of the influence curve (IC) for the F-test based on robust weights and one empirical approximation to the IC, the Sample Influence Curve (SIC). It is shown for two sample data sets that the SIC demonstrates the resistance of the F-test based on robust weights to unusual points. / Ph. D.
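A minimal sketch of the idea in Python, assuming statsmodels for the M-estimation; the weighted F statistic below is an illustrative reconstruction of a robust-weights test, not Agard's exact procedure:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100
X = sm.add_constant(rng.normal(size=(n, 2)))
y = X @ np.array([1.0, 2.0, 0.0]) + rng.standard_t(df=3, size=n)  # heavy-tailed errors

# M-estimation (Huber) yields case weights that downweight outliers.
w = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit().weights

# Illustrative F-test of H0: beta_2 = 0, reusing the robust weights in
# WLS fits of the full and reduced models.
full = sm.WLS(y, X, weights=w).fit()
reduced = sm.WLS(y, X[:, :2], weights=w).fit()
df_num = 1
df_den = n - X.shape[1]
F = ((reduced.ssr - full.ssr) / df_num) / (full.ssr / df_den)
print(F)
```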
273

A comparison of the classical and inverse methods of calibration in regression

Thomas, Marlin Amos January 1969 (has links)
The linear calibration problem, frequently referred to as inverse regression or the discrimination problem, can be stated briefly as the problem of estimating the independent variable x in a regression situation for a measured value of the dependent variable y. The literature on this problem deals primarily with the Classical method, where the Classical estimator is obtained by expressing the linear model as y<sub>i</sub> = α + βx<sub>i</sub> + ε<sub>i</sub>, obtaining the least squares estimator for y for a given value of x, and inverting the relationship. A second estimator for calibration, the Inverse estimator, is obtained by expressing the linear model as x<sub>i</sub> = γ + δy<sub>i</sub> + ε’<sub>i</sub> and using the resulting least squares estimator to estimate x. The experimental design problem for the Inverse estimator is explored first in this dissertation using the criterion of minimizing the average or integrated mean squared error, and the resulting optimal and near optimal designs are then compared with those for the Classical estimator which were recently derived by Ott and Myers. Optimal designs are developed for a linear approximation when the true model is linear and when it is quadratic. In both cases, the optimal designs depend on unknown model parameters and are not realistically usable. However, designs are shown to exist which are near optimal and do not depend on the unknown model parameters. For the linear approximation to the quadratic model, these near optimal designs depend on N, the number of observations used to estimate the model parameters, and specific designs are developed and set forth in tables for N = 5(1)20(2)30(5)50. The cost of misclassifying a quadratic model as linear is discussed from a design point of view, as well as the cost of protecting against a possible quadratic effect. The costs are expressed in terms of the percent deviation from the average mean squared error that would be obtained if the model were classified correctly. The derived designs for the Inverse estimator are compared with the recently derived designs for the Classical estimator using as a measure of comparison the ratio of minimum average mean squared errors obtained by using the optimal design for both estimators. Further comparisons are also made between optimal designs for the Classical estimator and the derived near optimal designs for the Inverse estimator, using the ratio of the corresponding average mean squared errors as a measure of comparison. Parallels are drawn between forward regression (estimating the dependent variable for a given value of the independent variable) and inverse regression using both the Classical and Inverse methods. / Ph. D.
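A toy NumPy illustration of the two estimators; the standards, responses, and the new measurement y0 are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 25)                        # known values of the independent variable
y = 2.0 + 0.5 * x + rng.normal(0, 0.2, x.size)    # measured responses
y0 = 5.0                                          # new measurement to calibrate

# Classical: fit y = a + b*x by least squares, then invert the relationship.
b, a = np.polyfit(x, y, 1)
x_classical = (y0 - a) / b

# Inverse: regress x on y and predict x directly.
d, c = np.polyfit(y, x, 1)
x_inverse = c + d * y0

print(x_classical, x_inverse)
```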
274

A study of homogeneity among regression relationships

Robinson, John P. January 1958 (has links)
Master of Science
275

Modified principal components regression

Wu, Huan-Ter January 1979 (has links)
When near linear relationships exist among the columns of regressor variables, the variances of the least squares estimators of the regression coefficients become very large. The least squares estimator of the vector of regression coefficients, which can be written in terms of latent roots and latent vectors of X'X, tends to place heavy weights on the latent vectors corresponding to small latent roots of X'X. Thus, the estimates of regression coefficients corresponding to the regressors involved in multicollinearities tend to be dominated by the multicollinearities. Therefore, the least squares estimators could estimate the true parameters poorly and could be very unreliable. In order to overcome the ill effects of multicollinearities on the least squares estimator, the procedure of principal components regression deletes those components corresponding to the small latent roots of X'X. Then we regress <u>y</u> on the retained components using ordinary least squares. When principal components regression is used as an alternative to least squares in the presence of a near singular X'X matrix, its performance depends strongly on the orientation of the deleted components to the vector of regression coefficients. In this paper, we present a modification of the principal components procedure in which components associated with near singularities are dampened but are not completely deleted. The resulting estimator was compared in a Monte Carlo study with the least squares estimator and the principal component estimator using mean squared error as the basis of comparison. The results indicate that the modified principal components estimator will perform better than either of the other two estimators over a wide range of orientations and signal-to-noise ratios and that it provides a reasonable compromise choice when the orientation is unknown. / Ph. D.
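A minimal sketch of the dampening idea in Python; the shrinkage function used here is an illustrative choice, not Wu's specific modification:

```python
import numpy as np

def modified_pcr(X, y, damp=lambda lam, lam_max: lam / (lam + 0.01 * lam_max)):
    """Principal components regression in which components belonging to small
    latent roots are shrunk toward zero rather than deleted outright.
    The damping function is an illustrative choice."""
    lam, V = np.linalg.eigh(X.T @ X)   # latent roots/vectors of X'X (ascending)
    Z = X @ V                          # principal components; Z'Z = diag(lam)
    gamma = (Z.T @ y) / lam            # least squares coefficients on components
    f = damp(lam, lam.max())           # shrinkage factors in [0, 1]
    return V @ (f * gamma)             # back to the original coordinates

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 3))
X[:, 2] = X[:, 0] + 0.01 * rng.normal(size=50)    # near linear relationship
y = X @ np.array([1.0, 1.0, 0.0]) + rng.normal(size=50)
print(modified_pcr(X, y))
```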
276

A short cut method for linear regression

Perng, Shian-koong January 1961 (has links)
This thesis reviews and discusses the so-called "Group Averages method" in the linear regression, the quadratic regression, and the functional relation situations. In the linear and quadratic regression situations, under the assumption of X<sub>i</sub> equally spaced, the efficiency of the Group Averages estimator is quite satisfactory as compared with Least Squares estimators. In the functional relation situation we used the Group Averages method and the Maximum Likelihood method for estimation of parameters. To compare their efficiencies we used the variance of the Group Averages estimator which was given by Dorff and Gurland [3], and developed the variance of the Maximum Likelihood estimators. Under the assumption of X<sub>i</sub> equally spaced, we found the efficiency of the Group Averages estimator to be quite satisfactory. However, caution is needed when using the Group Averages method in functional relationships, since it requires the following condition to be satisfied: Pr{|d<sub>i</sub>| ≥ ½c} negligible, where c = min |X<sub>i+1</sub> - X<sub>i</sub>|. / Master of Science
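A sketch of a two-group Wald-type group-averages fit in Python under the equally spaced X<sub>i</sub> assumption; the thesis may use a different grouping scheme:

```python
import numpy as np

def group_averages_fit(x, y):
    """Group-averages estimator: split the data (ordered by x) into a lower
    and an upper half and pass a line through the two group means."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    half = x.size // 2
    slope = (y[half:].mean() - y[:half].mean()) / (x[half:].mean() - x[:half].mean())
    intercept = y.mean() - slope * x.mean()
    return intercept, slope

rng = np.random.default_rng(3)
x = np.arange(1.0, 21.0)                          # equally spaced X_i
y = 1.0 + 0.8 * x + rng.normal(0, 1, x.size)
print(group_averages_fit(x, y))
```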
277

The reduction in sum of squares attributable to a subset of a set of regression coefficients and the invariance under certain linear transformations of a sequence of quadratic forms in these coefficients

Graham, Bruce McConne January 1947 (has links)
M.S.
278

An Application of Ridge Regression to Educational Research

Amos, Nancy Notley 12 1900 (has links)
Behavioral data are frequently plagued with highly intercorrelated variables. Collinearity is an indication of insufficient information in the model or in the data; it therefore contributes to the unreliability of the estimated coefficients. One result of collinearity is that regression weights derived in one sample may lead to poor prediction in another sample. One technique which was developed to deal with highly intercorrelated independent variables is ridge regression. It was first proposed by Hoerl and Kennard in 1970 as a method which would allow the data analyst to both stabilize his estimates and improve upon his squared error loss. The problem of this study was the application of ridge regression in the analysis of data resulting from educational research.
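A minimal sketch of the Hoerl-Kennard ridge estimator on simulated collinear data; the values of the ridge parameter k are illustrative:

```python
import numpy as np

def ridge(X, y, k):
    """Hoerl-Kennard ridge estimator (X'X + kI)^(-1) X'y;
    in practice X is usually standardized first."""
    return np.linalg.solve(X.T @ X + k * np.eye(X.shape[1]), X.T @ y)

rng = np.random.default_rng(4)
X = rng.normal(size=(40, 2))
X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=40)    # highly intercorrelated predictors
y = X @ np.array([1.0, 1.0]) + rng.normal(size=40)

for k in (0.0, 0.1, 1.0):                         # k = 0 reduces to least squares
    print(k, ridge(X, y, k))
```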
279

Retail Site Selection Using Multiple Regression Analysis

Taylor, Ronald D. (Ronald Dean) 12 1900 (has links)
Samples of stores were drawn from two chains, Pizza Hut and Zale Corporation. Two different samples were taken from Pizza Hut. Site-specific material and sales data were furnished by the companies, and demographic material relative to each site was gathered. Analysis of variance tests for linearity were run on the three regression equations developed from the data, and each of the three regression equations was found to have a statistically significant linear relationship. Statistically significant differences were found among similar variables used in the prediction of sales by using Fisher's Z' transformations on the correlation coefficients. Eight of the eighteen variables used in the Pizza Hut study were found to be statistically different between the two regions used in the study. Additionally, analysis of variance tests were used to show that traffic pattern variables were not better predictors than demographic variables.
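A sketch of the Fisher's Z' comparison of correlation coefficients described above; the correlations and sample sizes are hypothetical:

```python
import numpy as np
from scipy.stats import norm

def compare_correlations(r1, n1, r2, n2):
    """Two-sided test for equality of two independent correlations
    via Fisher's Z' transformation."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)
    z = (z1 - z2) / np.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    return z, 2 * norm.sf(abs(z))

# e.g. the same predictor's correlation with sales in two regions
print(compare_correlations(0.62, 40, 0.35, 45))
```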
280

Börsintroduktioner i Sverige: En tvärsnittsundersökning om underprissättning och prissättningsmetoder (IPOs in Sweden: a cross-sectional study of underpricing and pricing methods)

Panic, Stefan, Taher, Roni January 2016 (has links)
An IPO (initial public offering) is the process by which a company makes its shares available for trading on a stock exchange for the first time. When the subscription price is set, a company can use one of two methods: fixed pricing or interval pricing (bookbuilding). A problem that can arise in an IPO is that the offer price is not set at the share's estimated market value, which is known as underpricing. This study aims to estimate the magnitude of any underpricing, to compare the influence of various factors on it, and to identify how the choice of pricing method affects underpricing on the Swedish stock market. Using a regression analysis of 149 observations, the authors find that the average market-adjusted underpricing over the period 2005-2015 was 4.61%. A fixed pricing method is found to generate a higher average first-day return, and the prevailing market return has the greatest impact on underpricing.
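A minimal sketch of a first-day underpricing regression of this kind on simulated data; the variable names and effect sizes are hypothetical, not the authors' dataset:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 149                                    # same sample size as the study
fixed_price = rng.integers(0, 2, n)        # 1 = fixed pricing, 0 = bookbuilding
market_return = rng.normal(0.0, 0.02, n)   # prevailing market return

# Simulated offer and first-day closing prices (effect sizes are made up).
offer = rng.uniform(20.0, 100.0, n)
close = offer * (1 + 0.03 + 0.02 * fixed_price + 1.5 * market_return
                 + rng.normal(0.0, 0.05, n))
underpricing = close / offer - 1           # first-day return

X = sm.add_constant(np.column_stack([fixed_price, market_return]))
print(sm.OLS(underpricing, X).fit().params)
```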
