1

A Monte Carlo study of several alpha-adjustment procedures used in testing multiple hypotheses in factorial ANOVA

An, Qian. January 2010
Thesis (Ph.D.)--Ohio University, June 2010. Title from PDF title page. Includes bibliographical references.
2

A Monte Carlo study of power analysis of hierarchical linear model and repeated measures approaches to longitudinal data analysis

Fang, Hua. January 2006
Thesis (Ph.D.)--Ohio University, August 2006. Title from PDF title page. Includes bibliographical references.
3

Simulating Statistical Power Curves with the Bootstrap and Robust Estimation

Herrington, Richard S.
Power and effect size analysis are important methods in the psychological sciences. It is well known that classical statistical tests are not robust with respect to power and Type II error. However, relatively little attention has been paid in the psychological literature to the effect that non-normality and outliers have on the power of a given statistical test (Wilcox, 1998). Robust measures of location exist that provide much more powerful tests of statistical hypotheses, but their usefulness in power estimation for sample size selection, with real data, is largely unknown. Furthermore, practical approaches to power planning (Cohen, 1988) usually focus on normal-theory settings and in general do not make available nonparametric approaches to power and effect size estimation. Beran (1986) proved that it is possible to nonparametrically estimate power for a given statistical test using bootstrap methods (Efron, 1993); however, this method is not widely known or utilized in data analysis settings. This research study examined the practical importance of combining robust measures of location with nonparametric power analysis, using simulation and analysis of real-world data sets. The present study found that: 1) bootstrap confidence intervals using M-estimators were shorter than their normal-theory counterparts whenever the data had heavy-tailed distributions; 2) bootstrap empirical power was higher for M-estimators than for the normal-theory counterpart when the data had heavy-tailed distributions; 3) the smoothed bootstrap controlled the Type I error rate (less than 6%) under the null hypothesis for small sample sizes; and 4) robust effect sizes can be used in conjunction with Cohen's (1988) power tables to obtain more realistic sample sizes when the data distribution has heavy tails.
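The bootstrap power estimation this abstract describes can be sketched compactly. The following is a minimal illustration, not Herrington's code: in the spirit of Beran (1986), it estimates empirical power as the rejection rate of a test across bootstrap resamples of observed data, comparing a classical one-sample t-test against a test on the 20% trimmed mean, which stands in here for a robust M-type location estimator. The heavy-tailed data, sample size, trimming proportion, and alpha are all illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def bootstrap_power(sample, pvalue_fn, n_boot=2000, alpha=0.05):
    """Empirical power: rejection rate of a test over bootstrap resamples."""
    n = len(sample)
    rejections = 0
    for _ in range(n_boot):
        resample = rng.choice(sample, size=n, replace=True)
        if pvalue_fn(resample) < alpha:
            rejections += 1
    return rejections / n_boot

# Heavy-tailed "observed" data shifted away from the null value of 0
# (t distribution with 3 df; all settings here are illustrative).
data = stats.t.rvs(df=3, loc=0.8, size=30, random_state=1)

def t_test_pvalue(x):
    """Classical one-sample t-test of H0: mu = 0."""
    return stats.ttest_1samp(x, 0.0).pvalue

def trimmed_pvalue(x, trim=0.2):
    """Tukey-McLaughlin test on the 20% trimmed mean (robust analogue)."""
    n = len(x)
    g = int(np.floor(trim * n))
    tm = stats.trim_mean(x, trim)
    winsor_var = stats.mstats.winsorize(x, limits=(trim, trim)).var(ddof=1)
    se = np.sqrt(winsor_var) / ((1.0 - 2.0 * trim) * np.sqrt(n))
    return 2.0 * stats.t.sf(abs(tm / se), df=n - 2 * g - 1)

print("empirical power, t-test:      ", bootstrap_power(data, t_test_pvalue))
print("empirical power, trimmed mean:", bootstrap_power(data, trimmed_pvalue))
```

On heavy-tailed data such as this, the trimmed-mean test typically shows higher empirical power than the classical t-test, consistent with findings 1) and 2) above.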
4

Statistical Power Analysis of Dissertations Completed by Students Majoring in Educational Leadership at Tennessee Universities

Deng, Heping. 01 May 2000
The purpose of this study was to estimate the level of statistical power demonstrated in recent dissertations in the field of educational leadership. Power tables provided in Cohen's (1988) Statistical Power Analysis for the Behavioral Sciences were used to determine the power of the statistical tests conducted in dissertations selected from five universities in Tennessee, and a meta-analytic approach was used to summarize and synthesize the findings. The population of this study consisted of all dissertations successfully defended by doctoral students majoring in educational leadership/administration at East Tennessee State University, the University of Tennessee at Knoxville, Tennessee State University, the University of Memphis, and Vanderbilt University from January 1, 1996 through December 31, 1998. Dissertations were included if statistical significance testing was used, if the reported tests were referenced in the power tables from Cohen (1988), and if sample sizes were reported in the study. Eighty out of 221 reviewed dissertations were analyzed, and statistical power was calculated for each of the 2,629 significance tests.

With the dissertation as the unit of analysis, the mean power was .34 to detect small effects, .79 to detect medium effects, and .94 to detect large effects. Across all significance tests, the mean power level was .29 to detect small effects, .75 to detect medium effects, and .93 to detect large effects. These results demonstrated the highest statistical power levels for detecting large and medium effects; the statistical power estimates were quite low when a small effect size was assumed, so researchers had a very low probability of finding true significant differences when looking for small effects.

Though the degree of statistical power demonstrated in the analyzed dissertations was satisfactory for large and medium effect sizes, neither power level nor Type II error was mentioned in any of the 80 dissertations that were analyzed. It is therefore hard to determine whether these dissertations were undertaken with consideration of Type II error or the level of statistical power. The mean sample size used for the 2,629 significance tests was 2.5 times the mean optimal sample size, although most significance tests used samples that were much smaller than the optimal sample size. It is recommended that doctoral students in educational leadership receive additional training on the importance of statistical power and the process for estimating appropriate sample size.
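For readers without Cohen's (1988) tables at hand, the power values this kind of study looks up can be computed directly from the noncentral t distribution. The sketch below is illustrative rather than a reconstruction of Deng's procedure: the two-sample design, group size n = 30, and alpha = .05 are assumptions, with effect sizes following Cohen's small/medium/large conventions (d = .2, .5, .8).

```python
import scipy.stats as stats

def two_sample_power(d, n_per_group, alpha=0.05):
    """Power of a two-sided, two-sample t-test at standardized effect size d."""
    df = 2 * n_per_group - 2
    nc = d * (n_per_group / 2.0) ** 0.5          # noncentrality parameter
    t_crit = stats.t.ppf(1.0 - alpha / 2.0, df)  # two-sided critical value
    # P(reject H0) under the alternative: mass of the noncentral t
    # distribution beyond either critical value.
    return stats.nct.sf(t_crit, df, nc) + stats.nct.cdf(-t_crit, df, nc)

for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
    print(f"{label:6s} (d = {d}): power = {two_sample_power(d, 30):.2f}")
```

For n = 30 per group these come out near .12, .47, and .86, in line with the values tabulated in Cohen (1988).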
5

A Monte Carlo Study to Determine Sample Size for Multiple Comparison Procedures in ANOVA

Senteney, Michael H. January 2020
No description available.
