About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Effect Size Reporting and Interpreting Practices in Published Higher Education Journal Articles

Stafford, Mehary T.
Data-driven decision making is an integral part of higher education, and it needs to be rooted in strong methodological and statistical practices. Key practices include the use and interpretation of effect sizes as well as a correct understanding of null hypothesis significance testing (NHST). Effect size reporting and interpreting practices in higher education journal articles therefore represent an important area of inquiry. This study examined the effect size reporting and interpretation practices of published quantitative studies in three core higher education journals: Journal of Higher Education, Review of Higher Education, and Research in Higher Education. The review covered the three-year publication period from 2013 to 2015, during which the three journals published a total of 249 articles; the number published did not vary appreciably across years. The majority of studies employed quantitative methods (71.1%), about a quarter used qualitative methods (25.7%), and the remaining 3.2% used mixed methods. Seventy-three studies were removed from further analysis because they did not feature any quantitative analyses, leaving 176 quantitative articles as the sample pool. Overall, 52.8% of these 176 studies reported effect size measures as part of their major findings. Of the 93 articles reporting effect sizes, 91.4% interpreted them for their major findings. Most of the studies that interpreted effect sizes did so only minimally (60.2%), 26.9% provided an average level of interpretation, and the remaining 4.3% provided strong interpretation and discussed their findings in light of previous studies in the field.
2

Common language effect size: A valuable step towards a more comprehensible presentation of statistical information?

Lindh, Johan, January 2019
To help address the knowledge gap between science and practice, this study explores the potential benefits of using a more pedagogical effect size estimate when presenting statistical relationships. Traditional presentations have shown limitations, a major downside being that scientific findings are misinterpreted or misunderstood even by professionals. This study explores the possible effects of the non-traditional effect size estimate Common Language Effect Size (CLES) on different training outcomes for HR professionals, as well as the possible effect of cognitive system preference on those outcomes. Results show no overall effect of CLES on either training outcomes or cognitive system preference. A significant positive effect of CLES on training outcome is found at the subfactor level. The results can be interpreted to mean that non-traditional effect size estimates have a limited effect on training outcomes. This small but valuable step toward bridging the knowledge gap is discussed.
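For readers unfamiliar with the estimate under study: the Common Language Effect Size (McGraw & Wong, 1992) re-expresses a two-group difference as the probability that a randomly chosen member of one group outscores a randomly chosen member of the other. The thesis itself supplies no code; the sketch below is a minimal Python illustration of the standard definition, showing both the normal-theory formula and a distribution-free count. The example data are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def cles_normal(x, y):
    """Normal-theory CLES (McGraw & Wong, 1992): P(X > Y) for independent
    draws, computed from the means and variances of the two samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sd_diff = np.sqrt(x.var(ddof=1) + y.var(ddof=1))  # SD of X - Y
    return norm.cdf((x.mean() - y.mean()) / sd_diff)

def cles_empirical(x, y):
    """Distribution-free version: the observed proportion of (x, y) pairs
    in which x exceeds y, counting ties as half."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    greater = (x[:, None] > y[None, :]).mean()
    ties = (x[:, None] == y[None, :]).mean()
    return greater + 0.5 * ties

# Hypothetical example: "73% of the time, a trained person outscores an
# untrained one" is arguably easier to read than "d = 0.85".
rng = np.random.default_rng(1)
trained = rng.normal(0.85, 1, 200)
untrained = rng.normal(0, 1, 200)
print(round(cles_normal(trained, untrained), 2),
      round(cles_empirical(trained, untrained), 2))
```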
3

The robustness of confidence intervals for effect size in one-way designs with respect to departures from normality

Hembree, David
Master of Science / Department of Statistics / Paul Nelson / Effect size is a concept that was developed to bridge the gap between practical and statistical significance. In the context of completely randomized one-way designs, the setting considered here, inference for effect size has only been developed under normality. This report is a simulation study investigating the robustness of nominal 0.95 confidence intervals for effect size with respect to departures from normality, in terms of their coverage rates and lengths. In addition to the normal distribution, data are generated from four non-normal distributions: logistic, double exponential, extreme value, and uniform. The report finds that the coverage rates for the logistic, double exponential, and extreme value distributions drop as effect size increases, while, as expected, the coverage rate for the normal distribution remains steady at 0.95. Interestingly, the uniform distribution produced coverage rates above 0.95, which increased with effect size. Overall, within the scope of the settings considered, normal theory confidence intervals for effect size are robust for small effect sizes and not robust for large effect sizes. Since the magnitude of the effect size is typically not known, researchers are advised to investigate the assumption of normality before constructing normal theory confidence intervals for effect size.
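The report's actual setting is one-way ANOVA with normal-theory (noncentral-F based) intervals, which are not reproduced here. As a simplified stand-in, the sketch below runs the same kind of coverage experiment for the two-group special case, using Cohen's d with its usual large-sample Wald interval; each generating distribution is scaled to unit variance so that delta is the true standardized effect.

```python
import numpy as np

rng = np.random.default_rng(42)

def draw(dist, size):
    """Zero-mean, unit-variance draws from several distribution shapes."""
    if dist == "normal":
        return rng.standard_normal(size)
    if dist == "logistic":
        return rng.logistic(scale=np.sqrt(3) / np.pi, size=size)
    if dist == "laplace":  # double exponential
        return rng.laplace(scale=1 / np.sqrt(2), size=size)
    if dist == "uniform":
        return rng.uniform(-np.sqrt(3), np.sqrt(3), size=size)
    raise ValueError(dist)

def coverage(dist, delta, n=25, reps=5000, z=1.96):
    """Proportion of nominal 95% CIs for Cohen's d that cover the truth."""
    hits = 0
    for _ in range(reps):
        x = draw(dist, n) + delta   # group 1: true standardized shift = delta
        y = draw(dist, n)           # group 2
        sp = np.sqrt((x.var(ddof=1) + y.var(ddof=1)) / 2)  # pooled SD
        d = (x.mean() - y.mean()) / sp
        se = np.sqrt(2 / n + d**2 / (4 * n))  # large-sample SE of d, equal n
        hits += abs(d - delta) <= z * se      # does the CI cover delta?
    return hits / reps

for dist in ["normal", "logistic", "laplace", "uniform"]:
    print(dist, [round(coverage(dist, delta), 3) for delta in (0.2, 0.8)])
```

This simplification is not guaranteed to reproduce the report's exact numbers; it only illustrates the design of the experiment, namely comparing small versus large true effect sizes across distribution shapes.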
4

Evaluation of Five Effect Size Measures of Measurement Non-Invariance for Continuous Outcomes

January 2019
To make meaningful comparisons on a construct of interest across groups or over time, measurement invariance needs to exist for at least a subset of the observed variables that define the construct. Often, chi-square difference tests are used to test for measurement invariance. However, these statistics are affected by sample size, such that larger sample sizes are associated with a greater prevalence of significant tests. Using other measures of non-invariance to aid in the decision process would therefore be beneficial. For this dissertation project, I proposed four new effect size measures of measurement non-invariance and conducted a Monte Carlo simulation study to evaluate their properties and behavior, along with those of an existing effect size measure of non-invariance. The effect size measures were evaluated on bias, variability, and consistency, and the factors that affected their values were analyzed. All studied effect sizes were consistent, but three were biased under certain conditions. Further work is needed to establish benchmarks for the unbiased effect sizes. / Dissertation/Thesis / Doctoral Dissertation Psychology 2019
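As context for the abstract's point about sample-size sensitivity: the chi-square difference test it refers to is a generic likelihood-ratio comparison of a constrained (invariance) model against a less constrained one. A minimal sketch follows; the fit statistics in the example are hypothetical, and the dissertation's proposed effect size measures are not reproduced here.

```python
from scipy.stats import chi2

def chi_square_difference_test(chisq_constrained, df_constrained,
                               chisq_free, df_free):
    """Chi-square difference (likelihood-ratio) test comparing a model with
    invariance constraints against a less constrained model."""
    d_chisq = chisq_constrained - chisq_free
    d_df = df_constrained - df_free
    p = chi2.sf(d_chisq, d_df)
    return d_chisq, d_df, p

# Hypothetical fit statistics. Because model chi-squares grow with sample
# size for a fixed amount of misfit, trivial non-invariance can turn
# "significant" in large samples -- hence the case for effect size measures.
print(chi_square_difference_test(112.4, 50, 98.1, 46))
```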
5

Statistical controversies in cancer research: using standardized effect size graphs to enhance interpretability of cancer-related clinical trials with patient-reported outcomes

Bell, M. L., Fiero, M. H., Dhillon, H. M., Bray, V. J., Vardy, J. L.
Patient-reported outcomes (PROs) are becoming increasingly important in cancer studies, particularly with the emphasis on patient-centered outcomes research. However, multiple PROs, on different scales and with different directions of favorability, are often used within a single trial, making interpretation difficult. To enhance interpretability, we propose the use of a standardized effect size graph, which shows all PROs from a study in the same figure, on the same scale. Plotting standardized effects with their 95% confidence intervals (CIs) on a single graph that clearly shows the null value conveys a comprehensive picture of trial results. We demonstrate how to create such a graph using data from a randomized controlled trial that measured 12 PROs at two time points. The 24 effect sizes and CIs are shown on one graph and clearly indicate that the intervention is effective and its effect sustained.
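The article describes, rather than prints, the proposed figure. A minimal matplotlib sketch of such a standardized effect size graph is shown below; the PRO names, effect sizes, and interval widths are hypothetical, and all outcomes are assumed to have been recoded so that positive values favor the intervention.

```python
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical standardized effects and 95% CI half-widths for several PROs.
pros = ["Fatigue", "Anxiety", "Depression", "Quality of life"]
d = np.array([0.42, 0.31, 0.18, 0.25])
half_width = np.array([0.20, 0.22, 0.21, 0.19])

y = np.arange(len(pros))
fig, ax = plt.subplots(figsize=(6, 3))
ax.errorbar(d, y, xerr=half_width, fmt="o", capsize=3)  # point + 95% CI
ax.axvline(0, linestyle="--", linewidth=1)  # null value clearly marked
ax.set_yticks(y)
ax.set_yticklabels(pros)
ax.set_xlabel("Standardized effect size (95% CI)")
ax.set_title("All PROs on one scale")
fig.tight_layout()
plt.show()
```

Because every outcome is standardized and oriented the same way, a reader can judge all effects against the dashed null line at a glance, which is the interpretability gain the article argues for.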
6

Attenuation of the Squared Canonical Correlation Coefficient Under Varying Estimates of Score Reliability

Wilson, Celia M.
Research pertaining to the distortion of the squared canonical correlation coefficient has traditionally been limited to the effects of sampling error and associated correction formulas. The purpose of this study was to compare the degree of attenuation of the squared canonical correlation coefficient under varying conditions of score reliability. Monte Carlo simulation methodology was used. Initially, data populations with various manipulated conditions were generated (N = 100,000). Subsequently, 500 random samples were drawn with replacement from each population, and the data were subjected to canonical correlation analyses. The canonical correlation results were then analyzed using descriptive statistics and an ANOVA design to determine under which condition(s) the squared canonical correlation coefficient was most attenuated relative to the population Rc2 values. The results of this Monte Carlo investigation clearly illustrate the importance of score reliability when interpreting study results: the more measurement error (lower reliability) present in the variables included in an analysis, the more attenuation experienced by the effect size(s) the analysis produces, in this case Rc2. The results also demonstrate the role that between- and within-set correlation, variable set size, and sample size play in the attenuation of the squared canonical correlation coefficient.
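A small sketch can make the attenuation mechanism concrete. The code below (not the dissertation's design) computes the first squared canonical correlation from the eigenvalues of Sxx^-1 Sxy Syy^-1 Syx, then adds measurement error calibrated to a target reliability and watches Rc2 shrink; the factor structure and all parameter values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def first_squared_canonical_correlation(X, Y):
    """Largest squared canonical correlation between variable sets X and Y."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Sxx, Syy, Sxy = Xc.T @ Xc, Yc.T @ Yc, Xc.T @ Yc
    # Eigenvalues of Sxx^-1 Sxy Syy^-1 Syx are the squared canonical corrs.
    M = np.linalg.solve(Sxx, Sxy) @ np.linalg.solve(Syy, Sxy.T)
    return np.linalg.eigvals(M).real.max()

def add_measurement_error(T, reliability):
    """Mix unit-variance true scores with noise so that
    var(true)/var(observed) equals the target reliability."""
    noise_sd = np.sqrt((1 - reliability) / reliability)
    return T + noise_sd * rng.standard_normal(T.shape)

# Hypothetical population: one shared factor links the two variable sets,
# and each true-score column has unit variance (0.36 + 0.64 = 1).
n, p, q = 500, 3, 3
latent = rng.standard_normal((n, 1))
X_true = 0.6 * latent + 0.8 * rng.standard_normal((n, p))
Y_true = 0.6 * latent + 0.8 * rng.standard_normal((n, q))

for rel in (1.0, 0.8, 0.6):
    X = add_measurement_error(X_true, rel)
    Y = add_measurement_error(Y_true, rel)
    print(f"reliability {rel:.1f}: Rc^2 = "
          f"{first_squared_canonical_correlation(X, Y):.3f}")
```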
7

Meta-Analysis of the Effectiveness of Computer-Assisted Instruction in Technical Education and Training

Yaakub, Mohammad Naim, 09 July 1998
The overall effectiveness of computer-assisted instruction (CAI) for higher-order learning in technical education and training was determined through meta-analysis. Studies that investigated the effectiveness of CAI relative to traditional instruction were selected from major databases in the civilian and military sectors. The selection criteria were: (a) the instruction was in the area of technical education and training; (b) a group of students that received computer-assisted instruction was compared with another group taught in the traditional manner; (c) student learning in both groups was measured in some form; and (d) quantitative results on criterion measures were provided. The common comparison metric chosen to indicate the effect size was the standardized mean difference. Additionally, differences in CAI effectiveness were examined across studies categorized by: (a) CAI type -- intelligent CAI and ordinary CAI; (b) nature of CAI treatment -- replacement and supplemental; (c) subject assignment -- random groups, intact groups, and assignments other than the preceding two; (d) educational level -- secondary/postsecondary, university, and adult military training; and (e) setting -- civilian and military. The overall effect size of CAI was found to be 0.35, implying that the average student in a traditional class would have improved from the 50th to the 64th percentile had the student been provided with CAI. Intelligent CAI was found to be significantly more effective than ordinary CAI. / Ph. D.
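The percentile claim follows from the normal model: a standardized mean difference of d places the average treated student at the 100*Phi(d) percentile of the control distribution. A one-line check:

```python
from scipy.stats import norm

d = 0.35  # overall standardized mean difference reported in the study
print(f"{norm.cdf(d):.1%}")  # ~63.7%, i.e., roughly the 64th percentile
```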
8

Welcoming Quality in Non-Significance and Replication Work, but Moving Beyond the p-Value: Announcing New Editorial Policies for Quantitative Research in JOAA

McBee, Matthew T., Matthews, Michael S., 01 May 2014
The self-correcting nature of psychological and educational science has been seriously questioned. Recent special issues of Perspectives on Psychological Science and Psychology of Aesthetics, Creativity, and the Arts have roundly condemned current organizational models of research and dissemination and have criticized the perverse incentive structure that tempts researchers into generating and publishing false positive findings. At the same time, replications are rarely attempted, allowing untruths to persist in the literature unchallenged. In this article, the editors of the Journal of Advanced Academics consider this situation and announce new policies for quantitative submissions. They are (a) an explicit call for replication studies; (b) new instructions directing reviewers to base their evaluation of a study’s merit on the quality of the research design, execution, and written description, rather than on the statistical significance of its results; and (c) an invitation to omit statistical hypothesis tests in favor of reporting effect sizes and their confidence limits.
9

Congruence Effects: Treatment Technique-Outcome Measure Interaction

Jacobs, John A.
It was hypothesized that effect size in therapy outcome research would correlate positively with congruence effects. Congruence was defined as the degree to which what had been practiced in treatment was scored as improvement when outcome was measured. Additionally, it was hypothesized that correcting effect sizes for estimated nongeneralizable change attributable to congruence (i.e., representativeness reduction) would significantly reduce the average magnitude of effect.
10

Effects of Telemonitoring in Cancer Patients

Vittatoe, Danielle S., Glenn, L. Lee, 01 January 2014
No description available.
