111
The Robustness of O'Brien's r Transformation to Non-Normality
Gordon, Carol J. (Carol Jean), 08 1900 (has links)
A Monte Carlo simulation technique was employed in this study to determine if the r transformation, a test of homogeneity of variance, affords adequate protection against Type I error over a range of equal sample sizes and number of groups when samples are obtained from normal and non-normal distributions. Additionally, this study sought to determine if the r transformation is more robust than Bartlett's chi-square to deviations from normality. Four populations were generated representing normal, uniform, symmetric leptokurtic, and skewed leptokurtic distributions. For each sample size (6, 12, 24, 48), number of groups (3, 4, 5, 7), and population distribution condition, the r transformation and Bartlett's chi-square were calculated. This procedure was replicated 1,000 times; the actual significance level was determined and compared to the nominal significance level of .05. On the basis of the analysis of the generated data, the following conclusions are drawn. First, the r transformation is generally robust to violations of normality when the size of the samples tested is twelve or larger. Second, in the instances where a significant difference occurred between the actual and nominal significance levels, the r transformation produced (a) conservative Type I error rates if the kurtosis of the parent population was 1.414 or less and (b) an inflated Type I error rate when the index of kurtosis was three. Third, the r transformation should not be used if sample size is smaller than twelve. Fourth, the r transformation is more robust to non-normality in all instances, but the Bartlett test is superior in controlling Type I error when samples are from a population with a normal distribution. In light of these conclusions, the r transformation may be used as a general utility test of homogeneity of variances when either the distribution of the parent population is unknown or is known to have a non-normal distribution, and the size of the equal samples is at least twelve.
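The Monte Carlo procedure the abstract describes can be sketched in a few lines. O'Brien's r transformation is not available in SciPy, so this sketch uses Bartlett's chi-square (the comparison test from the study) as the homogeneity-of-variance test; the group count, sample size, and replication count mirror the study's conditions, and the sampler names are illustrative.

```python
import numpy as np
from scipy.stats import bartlett

def type1_error_rate(sampler, groups=4, n=12, reps=1000, alpha=0.05, seed=0):
    """Estimate the actual Type I error rate of a homogeneity-of-variance
    test (here Bartlett's chi-square) by repeated sampling under the null
    hypothesis of equal variances."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        samples = [sampler(rng, n) for _ in range(groups)]
        _, p = bartlett(*samples)
        rejections += p < alpha
    return rejections / reps

normal = lambda rng, n: rng.standard_normal(n)      # normal parent population
uniform = lambda rng, n: rng.uniform(-1.0, 1.0, n)  # platykurtic parent population

print(type1_error_rate(normal))   # should land near the nominal .05
print(type1_error_rate(uniform))  # Bartlett is sensitive to kurtosis
```

Comparing the estimated rate to the nominal .05 across distributions is exactly the robustness check the study replicates 1,000 times per condition.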
112
Internationalization of U.S. equipment leasing firms : an empirical study using covariance structure modeling
Agarwal, Sanjeev, January 1986 (has links)
No description available.
113
Variances of some truncated distributions for various points of truncation
Hayles, George Carlton, 30 October 2008 (has links)
The purpose of this study is to examine variances in the case of distributions obtained by truncating a given distribution at various points. In particular, the truncated distributions are restricted to nested increasing intervals, and the question is posed whether the variances of these distributions are monotonically increasing. The answer to this question is relevant to the use of conditional information for purposes of estimation and prediction.
Several tables are presented in the thesis which provide evidence of the property of monotonic variance for nested increasing intervals of truncation in the case of univariate distributions. The Monte Carlo procedure is used to determine a table of standard deviations for the standard normal distribution with the same points of truncation reported by Clark (2). Clark's table is given intact, and it is used in comparison with the new table reported here as a check on the Monte Carlo procedure used in the present study.
Distributions other than the standard normal distribution are examined as well, namely, a Pearson U-shaped distribution and a bimodal distribution consisting of a mixture of two Pearson distributions. Graphs of the U-shaped and bimodal distributions are given.
A section is given in which dispersion for a bivariate case is examined in terms of the bivariate normal distribution. An interesting trend among the covariance matrices is observed in the data reported in that section.
A separate computer program for each type of distribution was written and used to calculate the variances of the truncated distributions. FORTRAN programs and flow charts are presented in the Appendix. An explanation of the tables and of the procedures used to calculate the entries in the body of each table is given in each section, along with some discussion of the results presented. / Master of Science
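The monotone-variance property for nested truncation intervals is easy to check numerically today without the thesis's FORTRAN programs. A minimal sketch for the standard normal case, using SciPy's truncated normal (the cut points are illustrative, not those of Clark's table):

```python
from scipy.stats import truncnorm

# Standard normal truncated to nested symmetric intervals [-c, c]:
# as the interval widens, the variance should increase toward 1.
cuts = [0.5, 1.0, 1.5, 2.0, 3.0, 5.0]
variances = [truncnorm(-c, c).var() for c in cuts]
for c, v in zip(cuts, variances):
    print(f"[-{c}, {c}]: variance = {v:.5f}")
```

Each wider interval contains the previous one, so a strictly increasing sequence of variances is numerical evidence of the monotonicity the thesis examines.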
114
Analysis of variance of a balanced incomplete block design with missing observations
Baird, Hugh Robert, January 1960 (has links)
The problem considered in this paper is that of estimating several missing values and analyzing the resulting augmented data in a balanced incomplete block design.
The estimates are obtained by Yates' procedure of minimizing the error sum of squares.
Explicit formulae are obtained for all cases involving not more than two missing values and for several particular configurations of the missing values within the design. A general solution is obtained which involves the inversion of a symmetric n-square matrix, where n is the number of missing values.
An exact analysis of data augmented by missing value estimates is given which eliminates a positive bias in the treatment sum of squares.
It is possible to treat a balanced incomplete block design as a randomized block design with missing values. Estimates of the missing entries and a randomized block analysis can then be obtained according to the methods of Glenn and Kramer. An example of this procedure is given, and the results are compared with the results obtained by the usual balanced incomplete block analysis.
An example is given illustrating the techniques of missing value estimation and subsequent exact analysis for the balanced incomplete block design. / Master of Science
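Yates' idea of minimizing the error sum of squares is easiest to see in the randomized block setting the abstract mentions. The sketch below implements the classical single-missing-value formula for a randomized block layout (not the BIBD-specific formulae derived in the thesis); the data are hypothetical and purely additive, so the estimate recovers the deleted value exactly.

```python
import numpy as np

def yates_missing_value(y):
    """Estimate a single missing cell (NaN) in a t x b randomized block
    layout (rows = treatments, columns = blocks) using Yates' formula
    x = (t*T + b*B - G) / ((t - 1) * (b - 1)), which minimizes the error
    sum of squares.  T, B, and G are totals of the observed values."""
    t, b = y.shape
    i, j = np.argwhere(np.isnan(y))[0]
    T = np.nansum(y[i, :])  # total for the treatment with the gap
    B = np.nansum(y[:, j])  # total for the block with the gap
    G = np.nansum(y)        # grand total of observed values
    return (t * T + b * B - G) / ((t - 1) * (b - 1))

# Purely additive hypothetical data: mu + treatment effect + block effect.
tau = np.array([0.0, 1.0, 2.0])
beta = np.array([0.0, 10.0, 20.0, 30.0])
y = 5.0 + tau[:, None] + beta[None, :]
y[1, 2] = np.nan            # delete y[1, 2], whose true value is 26
print(yates_missing_value(y))  # 26.0
```

With real data the residuals are not zero, and the treatment sum of squares computed from the augmented data carries the positive bias that the exact analysis in the thesis eliminates.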
115
On the use of an auxiliary variable in the transformation of discrete data
Taylor, Robert James, January 1955 (has links)
M.S.
116
Comparison of Bayes' and minimum variance unbiased estimators of reliability in the extreme value life testing model
Godbold, James Homer, January 1970 (has links)
The purpose of this study is to consider two different types of estimators for reliability using the extreme value distribution as the life-testing model. First, the minimum variance unbiased estimator is derived. Then the Bayes' estimators for the uniform, exponential, and inverted gamma prior distributions are obtained, and these results are extended to a whole class of exponential failure models. Each of the Bayes' estimators is compared with the minimum variance unbiased estimator in a Monte Carlo simulation, where it is shown that the Bayes' estimator has smaller squared error loss in each case.
The problem of obtaining estimators with respect to an exponential type loss function is also considered. The difficulties in such an approach are demonstrated. / Master of Science
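The two estimator types can be illustrated with the simpler exponential life-testing model rather than the extreme value model of the thesis. For n exponential lifetimes with mean theta and observed total T, the minimum variance unbiased estimator of the reliability R(t) = exp(-t/theta) is (1 - t/T)^(n-1) for t < T, and under an inverted gamma(a, b) prior the posterior is inverted gamma(a + n, b + T), giving a closed-form Bayes estimator. These are standard results, stated here as a hedged sketch, not the thesis's derivations.

```python
def umvue_reliability(t, total, n):
    """Minimum variance unbiased estimator of R(t) = exp(-t/theta) from
    n exponential lifetimes with observed total `total` (0 if t >= total)."""
    return (1.0 - t / total) ** (n - 1) if t < total else 0.0

def bayes_reliability(t, total, n, a, b):
    """Posterior mean of R(t) under an inverted gamma(a, b) prior on theta:
    theta | data ~ inverted gamma(a + n, b + total), and
    E[exp(-t/theta)] = ((b + total) / (b + total + t)) ** (a + n)."""
    return ((b + total) / (b + total + t)) ** (a + n)

print(umvue_reliability(5.0, 10.0, 3))            # (1 - 0.5)^2 = 0.25
print(bayes_reliability(5.0, 10.0, 3, 2.0, 1.0))  # shrinks toward the prior
```

Averaging squared errors of the two estimators over many simulated samples is the Monte Carlo comparison the abstract describes.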
117
Lower bounds for the variance of uniformly minimum variance unbiased estimators
Lemon, Glen Hortin, January 1965 (has links)
The object of this paper was to study lower bounds for the variance of uniformly minimum variance unbiased estimators.
The lower bounds of Cramer and Rao, Bhattacharyya, Hammersley, Chapman and Robbins, and Kiefer were derived and discussed. Each was compared with the other, showing their relative merits and shortcomings.
Of the lower bounds considered, all are greater than or equal to the Cramer-Rao lower bound. The Kiefer lower bound is as good as any of the others, or better.
We were able to show that the Cramer-Rao lower bound is exactly the first Bhattacharyya lower bound. The Hammersley and the Chapman and Robbins lower bounds are identical when they both have the same parameter space, i.e., when Ω = (a,b).
The use of the various lower bounds is illustrated in examples throughout the paper. / M.S.
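A concrete instance of the Cramer-Rao bound, in the spirit of the paper's examples: for n draws from N(mu, sigma^2), the bound on any unbiased estimator of mu is sigma^2/n, and the sample mean attains it, making it uniformly minimum variance unbiased. A small Monte Carlo check (sample sizes and seed are illustrative):

```python
import numpy as np

# Cramer-Rao lower bound for estimating mu from n draws of N(mu, sigma^2):
# Var(estimator) >= sigma^2 / n, attained exactly by the sample mean.
rng = np.random.default_rng(42)
n, sigma, reps = 20, 2.0, 20000
sample_means = rng.normal(0.0, sigma, size=(reps, n)).mean(axis=1)
crlb = sigma ** 2 / n
print(f"observed variance of the sample mean: {sample_means.var():.4f}")
print(f"Cramer-Rao lower bound:               {crlb:.4f}")
```

The observed variance matches the bound up to simulation noise; for estimators where the regularity conditions fail, the Chapman and Robbins bound (a finite-difference analogue) applies instead.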
118
Graphical assessment of the prediction capability of response surface designs
Giovannitti-Jensen, Ann, January 1987 (has links)
A response surface analysis is concerned with the exploration of a system in order to determine the behavior of the response of the system as levels of certain factors which influence the response are changed. It is often of particular interest to predict the response in some region of the allowable factor values and to find the optimal operating conditions of the system.
In an experiment to search for the optimum response of a surface it is advantageous to predict the response with equal, or nearly equal, precision at all combinations of variable levels that lie the same distance from the center of the experimental region. Measures of the quality of prediction at locations on the surface of a hypersphere are presented in this thesis. These measures are used to form a graphical method of assessing the overall prediction capability of an experimental design throughout the region of interest.
Rotatable designs give equal variances of predicted values corresponding to locations on the same sphere. In this case, the center of the sphere coincides with the center of the rotatable design. However, there is a need for a method to quantify the prediction capability on spheres for non-rotatable designs. The spherical variance is a measure of the average prediction variance at locations on the surface of a sphere. The spherical variance obtained with a design provides an assessment of how well the response is being estimated on the average at locations which are the same distance from the region center. This thesis introduces two measures which describe the dispersion in the variances of the predicted responses at all locations on the surface of a sphere. These prediction variance dispersion (PVD) measures are used to evaluate the ability of a design to estimate the response with consistent precision at locations which are the same distance from the region center. The PVD measures are used in conjunction with the spherical variance to assess the prediction capability of a design.
A plot of the spherical variance and the maximum and minimum prediction variances for locations on a sphere against the radius of the sphere gives a comprehensive picture of the behavior of the prediction variances throughout a region, and, hence, of the quality of the predicted responses, obtained with a particular design. Such plots are used to investigate and compare the prediction capabilities of certain response surface designs currently available to the researcher. The plots are also used to investigate the robustness of a design under adverse experimental conditions and to determine the effects of taking an additional experimental run on the quality of the predicted responses. / Ph. D.
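The quantities plotted, minimum, average (spherical variance), and maximum prediction variance on each sphere, can be sketched numerically. The design and model below are hypothetical: a 2^2 factorial plus a center run fitted with a first-order model, which is rotatable, so the minimum and maximum coincide on every circle.

```python
import numpy as np

def prediction_variance(X, x0):
    """Prediction variance (in units of sigma^2) of the fitted response
    at model point x0: x0' (X'X)^{-1} x0."""
    return x0 @ np.linalg.inv(X.T @ X) @ x0

# 2^2 factorial plus one center run; first-order model y = b0 + b1*x1 + b2*x2.
D = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1], [0, 0]], dtype=float)
X = np.column_stack([np.ones(len(D)), D])

for r in (0.5, 1.0, 1.5):
    theta = np.linspace(0.0, 2 * np.pi, 360, endpoint=False)
    ring = r * np.column_stack([np.cos(theta), np.sin(theta)])
    v = np.array([prediction_variance(X, np.concatenate(([1.0], p)))
                  for p in ring])
    # For this rotatable design min = mean = max = 1/5 + r^2/4 on every circle.
    print(f"r={r}: min={v.min():.4f} mean={v.mean():.4f} max={v.max():.4f}")
```

Plotting the three summaries against the radius r gives the variance dispersion picture the thesis describes; for a non-rotatable design the minimum and maximum curves separate, and the gap is what the PVD measures quantify.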
119
Design and analysis for a two level factorial experiment in the presence of dispersion effects
Mays, Darcy P., 10 October 2005 (links)
Standard response surface methodology experimental designs for estimating location models involve the assumption of homogeneous variance throughout the design region. However, with heterogeneity of variance these standard designs are not optimal.
Using the D and Q-optimality criteria, this dissertation proposes a two-stage experimental design procedure that gives more efficient designs than the standard designs when heterogeneous variance exists. Several multiple variable location models, with and without interactions, are considered. For each the first stage estimates the heterogeneous variance structure, while the second stage then augments the first stage to produce a D or Q-optimal design for fitting the location model under the estimated variance structure. However, there is a potential instability of the variance estimates in the first stage that can lower the efficiency of the two-stage procedure. This problem can be addressed and the efficiency of the procedure enhanced if certain mild assumptions concerning the variance structure are made and formulated as a prior distribution to produce a Bayes estimator.
With homogeneous variance, designs are analyzed using ordinary least squares. However, with heterogeneous variance the correct analysis is to use weighted least squares. This dissertation also examines the effects that analysis by weighted least squares can have and compares this procedure to the proposed two-stage procedure. / Ph. D.
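The weighted least squares analysis the abstract refers to replaces (X'X)^{-1}X'y with (X'WX)^{-1}X'Wy, where W is a diagonal matrix of reciprocal variances. A minimal sketch with a hypothetical heterogeneous variance structure (variance growing across the design region):

```python
import numpy as np

def wls(X, y, var):
    """Weighted least squares with known heterogeneous variances:
    beta = (X'WX)^{-1} X'W y, where W = diag(1 / var_i)."""
    W = np.diag(1.0 / var)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
X = np.column_stack([np.ones_like(x), x])
var = 0.05 + x                       # variance grows across the region
y = 1.0 + 2.0 * x + rng.normal(0.0, np.sqrt(var))

beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
beta_wls = wls(X, y, var)
print("OLS:", beta_ols)
print("WLS:", beta_wls)
```

Both estimators are unbiased for the location model, but weighted least squares down-weights the noisy observations and so has smaller variance; in practice the weights come from the estimated variance structure, which is why the stability of the first-stage variance estimates matters.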
120
A graphical approach for evaluating the potential impact of bias due to model misspecification in response surface designs
Vining, G. Geoffrey, January 1988 (has links)
The basic purpose of response surface analysis is to generate a relatively simple model to serve as an adequate approximation for a more complex phenomenon. This model then may be used for other purposes, for example, prediction or optimization. Since the proposed model is only an approximation, the analyst almost always faces the potential of bias due to model misspecification. The ultimate impact of this bias depends upon the choice both of the experimental design and of the region for conducting the experiment.
This dissertation proposes a graphical approach for evaluating the impact of bias upon response surface designs. It builds on the work of Giovannitti-Jensen (1987) and Giovannitti-Jensen and Myers (1988), who developed a graphical technique for displaying a design's prediction variance capabilities, and extends the concept: (1) to the prediction bias due to model misspecification; (2) to the prediction bias due to the presence of a single outlier; and (3) to a mean squared error of prediction. Several common first- and second-order response surface designs are evaluated through this approach. / Ph. D.
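The bias in question can be shown with a tiny hypothetical example: fit a first-order model when the true surface has curvature. Because the fitted values have expectation X(X'X)^{-1}X'f evaluated at the design, the prediction bias is deterministic, and no noise is needed to see it.

```python
import numpy as np

# True surface with curvature, fitted with a first-order model:
# E[y_hat(x0)] = x0' (X'X)^{-1} X' f(design), so the prediction bias at x0
# is E[y_hat(x0)] - f(x0).  The design and true surface are hypothetical.
f = lambda x1, x2: 1.0 + x1 + x2 + x1 ** 2       # quadratic in x1

D = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1], [0, 0]], dtype=float)
X = np.column_stack([np.ones(len(D)), D])        # first-order model matrix
y_true = f(D[:, 0], D[:, 1])                     # noiseless responses

beta = np.linalg.solve(X.T @ X, X.T @ y_true)    # expected LS coefficients
x0 = np.array([1.0, 0.0, 0.0])                   # region center
bias = x0 @ beta - f(0.0, 0.0)
print(bias)  # positive: the first-order fit overestimates the center response
```

Plotting such bias summaries against the radius, alongside the prediction variance curves, yields the mean squared error picture the dissertation develops.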