411

A Validity Study of the American School Counselor Association (ASCA) National Model Readiness Self-Assessment Instrument

McGannon, Wendy 01 January 2007 (has links)
School counseling has great potential to help students achieve high standards in the academic, career, and personal/social aspects of their lives (House & Martin, 1998). With the advent of No Child Left Behind (NCLB, 2001), the role of the school counselor is beginning to change. In response to the challenges and pressures to implement standards-based educational programs, the American School Counselor Association released “The ASCA National Model: A Framework for School Counseling Programs” (ASCA, 2003). The ASCA National Model was designed with an increased focus on both accountability and the use of data to make decisions and to increase student achievement. It is intended to ensure that all students are served by the school counseling program by using student data to advocate for equity, to facilitate student improvement, and to provide strategies for closing the achievement gap. The purpose of this study was to investigate the psychometric properties of an instrument designed to assess school districts' readiness to implement the ASCA National Model. Data were gathered from 693 respondents to a web-based version of the ASCA National Model Readiness Self-Assessment Instrument. Confirmatory factor analysis did not support the hypothesized seven-factor structure. Exploratory factor analysis produced a three-factor model, which was supported by confirmatory factor analyses after creating variable parcels within each of the three factors. Based on the item loadings within each factor, the factors were labeled as follows: factor one, School Counselor Characteristics; factor two, District Conditions; and factor three, School Counseling Program Supports. Cross-validation of this model with an independent sample of 363 respondents to the ASCA Readiness Instrument provided additional evidence for the three-factor model. The results of these analyses will be used to give school districts more concise score-report information about the changes necessary to support implementation of the ASCA National Model. These results provide evidence to support the interpretation of the scores obtained from the ASCA Readiness Instrument.
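An illustrative sketch of the parceling step mentioned above (not the author's code): parcel scores are formed as the means of small sets of items within a factor and then used as indicators in the confirmatory model. The response matrix and the item-to-parcel assignment below are hypothetical placeholders.

```python
# Minimal sketch of item parceling, assuming a hypothetical response matrix
# (rows = respondents, columns = items) and a hypothetical item-to-parcel map.
import numpy as np

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(693, 12)).astype(float)  # placeholder Likert-type data

# Hypothetical assignment of items to parcels within one factor
# (e.g., School Counselor Characteristics); each parcel score is the mean of its items.
parcel_items = {"parcel_1": [0, 1], "parcel_2": [2, 3], "parcel_3": [4, 5]}

parcels = np.column_stack(
    [responses[:, idx].mean(axis=1) for idx in parcel_items.values()]
)
print(parcels.shape)  # (693, 3) -> parcel scores used as indicators in the CFA
```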
412

Meta-Analysis of Factor Analyses: Comparison of Univariate and Multivariate Approaches Using Correlation Matrices and Factor Loadings

Unknown Date (has links)
Sophisticated techniques such as factor analysis are now frequently applied in primary research and thus may need to be meta-analyzed. This topic has received little attention in the past because of its complexity. Because factor analysis is becoming more popular in many research areas, including education, social work, and the social sciences, the study of methods for the meta-analysis of factor analyses is also becoming more important. The first main purpose of this dissertation is to compare the results of seven approaches to the meta-analysis of confirmatory factor analyses. Five of the approaches are based on univariate meta-analysis methods; the remaining two use multivariate meta-analysis to obtain estimates of the factor loadings and their standard errors. The results from each approach are compared. Given that factor analyses are commonly used in many areas, the second purpose of this dissertation is to identify the appropriate approach or approaches for the meta-analysis of factor analyses, especially confirmatory factor analysis (CFA). When the average sample size was small, the IRD, WMC, WMFL, and GLS-MFL approaches estimated the parameters better than the UMC, MFL, and GLS-MC approaches. With large average sample sizes (larger than 150), parameter estimation performance was similar across all seven approaches. Based on my simulation results, researchers who want to conduct meta-analytic confirmatory factor analysis can apply any of these approaches to synthesize results from primary studies if their primary studies have n > 150. / A Dissertation submitted to the Department of Educational Psychology and Learning Systems in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Summer Semester 2015. / June 9, 2015. / factor analysis, meta-analysis, multivariate meta-analysis, univariate meta-analysis / Includes bibliographical references. / Betsy J. Becker, Professor Directing Dissertation; Fred Huffer, University Representative; Insu Paek, Committee Member; Yanyun Yang, Committee Member.
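As a rough illustration of the univariate side of the approaches compared above (a sketch with hypothetical inputs, not any of the dissertation's specific IRD/WMC/GLS procedures), a single factor loading can be synthesized across studies with inverse-variance weights:

```python
# Minimal sketch of a fixed-effect, inverse-variance weighted synthesis of one
# factor loading across primary studies; the loadings and standard errors are hypothetical.
import numpy as np

loadings = np.array([0.62, 0.70, 0.55, 0.68])   # loading estimates from primary studies
ses      = np.array([0.05, 0.04, 0.08, 0.06])   # their standard errors

w = 1.0 / ses**2                                 # inverse-variance weights
pooled = np.sum(w * loadings) / np.sum(w)        # pooled loading estimate
pooled_se = np.sqrt(1.0 / np.sum(w))             # standard error of the pooled loading
print(round(pooled, 3), round(pooled_se, 3))
```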
413

Comparison of kernel equating and item response theory equating methods

Meng, Yu 01 January 2012 (has links)
The kernel method of test equating is a unified approach to test equating with some advantages over traditional equating methods. It is therefore important to evaluate, in a comprehensive way, the usefulness and appropriateness of the kernel equating (KE) method, as well as its advantages and disadvantages compared with several popular item response theory (IRT) equating techniques. The purpose of this study was to evaluate the accuracy and stability of KE and IRT true-score equating by manipulating several common factors that are known to influence equating results. Three equating methods (kernel post-stratification equating, Stocking-Lord, and Mean/Sigma) were compared against an established equating criterion. A wide variety of conditions were simulated to match realistic situations reflecting differences in sample size, anchor test length, and group ability. The systematic and random error of equating were summarized with bias statistics and the standard error of equating (SEE) and compared across the methods. The equating methods that performed better under specific conditions were recommended based on the root mean squared error (RMSE). As expected, the equating results revealed that, in general, equating error decreased across all methods as the number of anchor items and the sample size increased. Aside from method effects, group differences in ability had the greatest impact on equating error in this study. The accuracy and stability of each equating method depended on the portion of the score-scale range where comparisons were being made. Overall, kernel equating was shown to be more stable in most situations but not as accurate as IRT equating for the conditions studied. The interactions between pairs of factors investigated in this study appeared to be more influential and beneficial to IRT equating than to KE. Further practical recommendations were suggested for future research: for example, using alternate methods of data simulation to remove the advantage of the IRT equating methods.
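The error summaries named above (bias, SEE, RMSE) can be illustrated with a small sketch; the criterion equating function and the simulated replications below are hypothetical placeholders, not the study's data.

```python
# Minimal sketch of summarizing equating error over simulation replications,
# assuming a hypothetical criterion equating function and hypothetical equated scores.
import numpy as np

rng = np.random.default_rng(1)
score_points = np.arange(0, 41)                  # raw-score scale (placeholder)
criterion = score_points + 0.5                   # hypothetical criterion equating function
estimates = criterion + rng.normal(0, 0.4, size=(500, score_points.size))  # 500 replications

bias = estimates.mean(axis=0) - criterion        # systematic error at each score point
see = estimates.std(axis=0, ddof=1)              # standard error of equating (random error)
rmse = np.sqrt(bias**2 + see**2)                 # total error combining both components
print(rmse.mean())
```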
414

Evaluating several multidimensional adaptive testing procedures for diagnostic assessment

Yoo, Hanwook 01 January 2011 (has links)
This computer simulation study was designed to comprehensively investigate how formative test designs can capitalize on the dimensional structure among the proficiencies being measured, item selection methods, and computerized adaptive testing to improve measurement precision and classification accuracy. Four variables were manipulated to investigate the effectiveness of multidimensional adaptive testing (MAT): the number of dimensions measured by the test, the magnitude of the correlations among the dimensions, the item selection method, and the test design. Outcome measures included recovery of known proficiency scores, bias in estimation, and accuracy of proficiency classifications. Unlike previous MAT research, no significant effect of the number of dimensions was found on the outcome measures. A moderate improvement in the outcome measures was found with higher correlations (e.g., .50 or .80) among the dimensions. Four item selection methods (Bayesian, Fisher, optimal, and random) were applied to evaluate the measurement efficiency of adaptive and non-adaptive item selection; random selection served as the baseline. The Bayesian item selection method showed the best results across conditions. The Fisher item selection method showed the second-best results, but the gap among the adaptive item selection methods narrowed with longer tests and higher correlations among the dimensions. The optimal item selection method produced results comparable to the adaptive methods when the focus was on the accuracy of decision making, which in many applications of diagnostic assessment is the most important criterion. The impact of increasing the (fixed) test length was apparent on all of the outcome measures. The results suggest that the Bayesian item selection method can be quite useful when there are at least moderate correlations among the dimensions. Because these results were obtained using a good estimate of the priors, a next step should investigate the impact of poor (i.e., inaccurate) prior information (e.g., too high, too low, too tight) on the validity of the Bayesian approach. We note too the very good results obtained with optimal item selection when the focus was on the accuracy of proficiency classifications.
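As a rough illustration of information-based adaptive item selection (simplified here to a unidimensional 2PL pool with hypothetical item parameters, rather than the multidimensional Bayesian selection used in the study), the next item can be chosen as the one with maximum Fisher information at the current ability estimate:

```python
# Minimal sketch of maximum-information item selection for a unidimensional 2PL pool;
# the item parameters and the current ability estimate are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
a = rng.uniform(0.8, 2.0, size=100)              # discrimination parameters
b = rng.normal(0.0, 1.0, size=100)               # difficulty parameters
administered = set()                             # items already given to this examinee

def fisher_information(theta):
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # 2PL response probabilities
    return a**2 * p * (1.0 - p)                  # item information at theta

def select_next_item(theta_hat):
    info = fisher_information(theta_hat)
    info[list(administered)] = -np.inf           # exclude items already administered
    return int(np.argmax(info))

item = select_next_item(theta_hat=0.3)
administered.add(item)
print(item)
```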
415

The relation between intelligence and rate of movement

Unknown Date (has links)
M.A. Florida State College for Women 1930
416

Measurements in some fundamental capacities as related to scholastic ability and intellectual level

Unknown Date (has links)
M.A. Florida State College for Women 1929
417

A study of tests devised to measure art capacities

Unknown Date (has links)
M.A. Florida State College for Women 1930
418

Investigating the Recovery of Latent Class Membership in the Mixture Rasch Modeling

Unknown Date (has links)
Mixture IRT modeling allows the detection of latent classes and of different item parameter profile patterns across latent classes. In mixture Rasch model estimation, latent classes are assumed to follow a normal distribution, with means constrained to be equal across latent classes for model identification purposes. In the literature, this conventional constraint has been shown to be problematic for establishing a common scale and comparing item profile patterns across latent classes. A simulation study was conducted to explore the degree of recovery of class membership, and the class membership recovery of the conventional constraint approach was compared with that of the class-invariant item constraint approach. The results show that class membership recovery was similar for the two approaches. In addition, the class memberships identified by the two approaches were consistent with each other. / A Thesis submitted to the Department of Educational Psychology and Learning Systems in partial fulfillment of the requirements for the degree of Master of Science. / Fall Semester, 2014. / October 2, 2014. / Includes bibliographical references. / Insu Paek, Professor Directing Thesis; Betsy Jane Becker, Committee Member; Dan McGee, Committee Member.
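A minimal sketch of the kind of setup described above, assuming hypothetical class-specific difficulty profiles: two-class mixture Rasch data are generated, and class membership recovery is scored as agreement with the true memberships, maximized over class-label permutations (since class labels are arbitrary). The estimation step itself is omitted and a placeholder membership vector stands in for the estimated classes.

```python
# Minimal sketch: generate two-class mixture Rasch data and score membership recovery.
import numpy as np
from itertools import permutations

rng = np.random.default_rng(3)
n, n_items = 1000, 20
true_class = rng.integers(0, 2, size=n)              # latent class membership (0 or 1)
theta = rng.normal(0.0, 1.0, size=n)                 # person abilities
b = np.vstack([np.linspace(-2, 2, n_items),          # class 0: hypothetical difficulty profile
               np.linspace(2, -2, n_items)])         # class 1: reversed profile

p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[true_class])))   # Rasch probabilities
responses = (rng.uniform(size=p.shape) < p).astype(int)       # simulated item responses

def membership_recovery(true, estimated):
    # Agreement rate, maximized over class-label permutations.
    return max(np.mean(np.array(perm)[estimated] == true)
               for perm in permutations(range(2)))

estimated_class = true_class.copy()                  # placeholder for estimated memberships
print(membership_recovery(true_class, estimated_class))
```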
419

Some suggestions for constructing tests and test items for the primary grades of the elementary school

Unknown Date (has links)
"This problem was chosen because there seems to be a need for an understanding by primary teachers about what learnings should be tested and how to test those learnings. For too long we, as teachers of the first, second, and third grades of the elementary school of America, have relied on our subjective opinions as an adequate measure of the child's progress. If we, in our educational program, are going to insist that children come to school, and progress from grade to grade, we will have to base the criteria that govern advancing from one grade to another on something more scientific than teacher observations. This is the plan that was followed for arriving at some workable regulations to govern the construction of teacher-made tests for the lower elementary grades. Material in text books on measurement and primary teaching methods was explored. Periodicals were examined in hopes of finding some very recent research in the field. The study of standardized tests showed the writer what had successfully been done. In constructing test items the use of teachers' manuals, workbooks, and state bulletins should be invaluable"--Introduction. / "August, 1950." / Typescript. / "Submitted to the Graduate Council of Florida State University in partial fulfillment of the requirements for the degree of Master of Science." / Advisor: M. H. DeGraff, Professor Directing Paper. / Includes bibliographical references (leaves 29-30).
420

Evaluation of Measurement Invariance in IRT Using Limited Information Fit Statistics/Indices: A Monte Carlo Study

Unknown Date (has links)
Measurement invariance analysis is important when test scores are used to make group-wise comparisons. Multiple-group IRT modeling is one of the commonly used methods for examining measurement invariance. One essential step in the multiple-group modeling method is the evaluation of overall model-data fit. A family of limited information fit statistics has recently been developed for assessing overall model-data fit in IRT. Previous studies evaluated the performance of limited information fit statistics using single-group data and found that these statistics performed better than traditional full information fit statistics when data were sparse. However, no study has investigated the performance of the limited information fit statistics within the multiple-group modeling framework. This study examines the performance of the limited information fit statistic M₂ and corresponding M₂-based descriptive fit indices in conducting measurement invariance analysis within the multiple-group IRT framework. A Monte Carlo study was conducted to examine the sampling distributions of M₂ and the M₂-based descriptive fit indices, and their sensitivity to lack of measurement invariance under various conditions. The manipulated factors included sample sizes, model types, dimensionality, types and numbers of DIF items, and latent trait distributions. Results showed that M₂ followed an approximately chi-square distribution when the model was correctly specified, as expected. The type I error rates of M₂ were reasonable under large sample sizes (1000/2000). When the model was misspecified, the power of M₂ was a function of sample size and the number of DIF items. For example, the power of M₂ for rejecting the U2PL Scalar Model increased from 29.2% to 99.9% when the number of uniform DIF items increased from one to six, given sample sizes of 1000/2000. With six uniform DIF items (30% of the studied items), the power of M₂ increased from 42.4% to 99.9% when sample sizes changed from 250/500 to 1000/2000. When the difference in M₂ (ΔM₂) was used to compare two correctly specified nested models, the sampling distribution of ΔM₂ departed from the reference chi-square distribution at both tails, especially under small sample sizes. The type I error rates of the ΔM₂ test became closer to the expectation as sample sizes increased. For example, both the Metric and Configural Models were correctly specified when the test included no DIF item. Given an alpha level of .05, the type I error rates of ΔM₂ for the comparison between the Metric and Configural Models were slightly inflated with n = 250/500 (8.72%) and became closer to the alpha level with n = 1000/2000 (5.3%). When at least one of the models was misspecified, the power of ΔM₂ increased when the number of DIF items or the sample sizes became larger. For example, the Metric Model was misspecified when a nonuniform DIF item existed. Given sample sizes of 1000/2000 and an alpha level of .05, the power of ΔM₂ for the comparison between the Metric and Configural Models increased from 52.55% to 99.39% when the number of nonuniform DIF items changed from one to six. With one nonuniform DIF item in the test, the power of ΔM₂ was only 17.05% given an alpha level of .05 and sample sizes of 250/500, but increased to 52.55% given sample sizes of 1000/2000. The descriptive fit indices and their differences between nested models were also affected by the number of DIF items. When there was no DIF item, all fit indices indicated good model-data fit.
The differences in the five fit indices between nested models were all very small (<.008) across sample sizes. When DIF items existed, the means of the descriptive fit indices and their differences between nested models increased as the number of DIF items increased. The findings from this study provide some suggestions about the use of limited information fit statistics/indices in measurement invariance analysis within the multiple-group IRT framework. / A Dissertation submitted to the Department of Educational Psychology and Learning Systems in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Fall Semester 2016. / October 31, 2016. / Includes bibliographical references. / Yanyun Yang, Professor Co-Directing Dissertation; Insu Paek, Professor Co-Directing Dissertation; Fred W. Huffer, University Representative; Betsy J. Becker, Committee Member; Salih Binici, Committee Member.
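A minimal sketch of the nested-model comparison logic described above: the difference between two limited-information fit statistics is referred to a chi-square distribution with degrees of freedom equal to the difference in model degrees of freedom. The M₂ values and degrees of freedom below are hypothetical placeholders, not results from the study.

```python
# Minimal sketch of a delta-M2 style nested-model comparison with hypothetical values.
from scipy.stats import chi2

m2_metric, df_metric = 182.4, 120           # more constrained model (e.g., Metric)
m2_configural, df_configural = 151.7, 102   # less constrained model (e.g., Configural)

delta_m2 = m2_metric - m2_configural        # difference in fit statistics
delta_df = df_metric - df_configural        # difference in degrees of freedom
p_value = chi2.sf(delta_m2, delta_df)       # right-tail probability of the difference
print(round(delta_m2, 1), delta_df, round(p_value, 4))
```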
