  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
191

A comparison of traditional and IRT factor analysis.

Kay, Cheryl Ann 12 1900
This study investigated the item parameter recovery of two methods of factor analysis. The methods researched were a traditional factor analysis of tetrachoric correlation coefficients and an IRT approach to factor analysis that utilizes marginal maximum likelihood estimation with an EM algorithm (MMLE-EM). Dichotomous item response data were generated under the 2-parameter normal ogive model (2PNOM) using PARDSIM software. Examinee abilities were sampled from both the standard normal and uniform distributions. True item discrimination, a, was normal with a mean of .75 and a standard deviation of .10. True item difficulty, b, was specified as uniform on [-2, 2]. The two distributions of abilities were completely crossed with three test lengths (n = 30, 60, and 100) and three sample sizes (N = 50, 500, and 1000). Each of the 18 conditions was replicated 5 times, resulting in 90 datasets. PRELIS software was used to conduct a traditional factor analysis on the tetrachoric correlations. The IRT approach to factor analysis was conducted using BILOG 3 software. Parameter recovery was evaluated in terms of root mean square error, average signed bias, and Pearson correlations between estimated and true item parameters. ANOVAs were conducted to identify systematic differences in error indices. Based on many of the indices, it appears the IRT approach to factor analysis recovers item parameters better than the traditional approach studied. Future research should compare other methods of factor analysis to MMLE-EM under various non-normal distributions of abilities.
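The 2PNOM generating process described in this abstract can be sketched in a few lines. This is an illustrative stand-in for the PARDSIM step, not that program's actual interface; the parameter distributions and one of the study's sample-size conditions are taken from the abstract, everything else is assumed.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

def simulate_2pnom(n_items=30, n_examinees=500, rng=rng):
    """Generate dichotomous responses under the 2-parameter normal ogive model."""
    theta = rng.standard_normal(n_examinees)   # abilities ~ N(0, 1)
    a = rng.normal(0.75, 0.10, n_items)        # discrimination ~ N(.75, .10)
    b = rng.uniform(-2.0, 2.0, n_items)        # difficulty ~ U[-2, 2]
    # P(X = 1 | theta) = Phi(a * (theta - b)), the normal ogive
    p = norm.cdf(a * (theta[:, None] - b[None, :]))
    x = (rng.random((n_examinees, n_items)) < p).astype(int)
    return x, a, b

responses, a_true, b_true = simulate_2pnom()
print(responses.shape)  # (500, 30)
```

A study like this one would replicate such draws across the crossed conditions and then recover a and b from `responses` with each estimation method.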
192

A Comparison of Three Correlational Procedures for Factor-Analyzing Dichotomously-Scored Item Response Data

Fluke, Ricky 05 1900
In this study, an improved correlational procedure for factor-analyzing dichotomously-scored item response data is described and tested. The procedure involves (a) replacing the dichotomous input values with continuous probability values obtained through Rasch analysis; (b) calculating interitem product-moment correlations among the probabilities; and (c) subjecting the correlations to unweighted least-squares factor analysis. Two simulated data sets and an empirical data set (Kentucky Comprehensive Listening Test responses) were used to compare the new procedure with two more traditional techniques, using (a) phi and (b) tetrachoric correlations calculated directly from the dichotomous item-response values. The three methods were compared on three criterion measures: (a) maximum internal correlation; (b) product of the two largest factor loadings; and (c) proportion of variance accounted for. The Rasch-based procedure is recommended for subjecting dichotomous item response data to latent-variable analysis.
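The three-step procedure above — Rasch-based probabilities, inter-item product-moment correlations, then factor analysis — can be illustrated roughly. The logit-of-marginal-proportions estimates below are a crude, hypothetical stand-in for a proper Rasch fit (e.g., conditional or joint maximum likelihood), used only to keep the sketch self-contained:

```python
import numpy as np

def rasch_probabilities(x):
    """Approximate Rasch-model probabilities from dichotomous data.
    Logit-transformed marginal proportions serve as rough ability and
    difficulty estimates; a real analysis would use a fitted Rasch model."""
    eps = 0.5  # continuity correction to avoid logit(0) or logit(1)
    n_persons, n_items = x.shape
    theta = np.log((x.sum(1) + eps) / (n_items - x.sum(1) + eps))    # abilities
    b = -np.log((x.sum(0) + eps) / (n_persons - x.sum(0) + eps))     # difficulties
    logits = theta[:, None] - b[None, :]
    return 1.0 / (1.0 + np.exp(-logits))

rng = np.random.default_rng(0)
x = (rng.random((200, 12)) < 0.5).astype(int)   # toy dichotomous responses
p = rasch_probabilities(x)
r = np.corrcoef(p, rowvar=False)  # step (b): inter-item product-moment correlations
print(r.shape)  # (12, 12)
```

The matrix `r` would then be submitted to unweighted least-squares factor analysis, step (c) of the procedure.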
193

An investigation of the 2012 Annual National Assessment Grade 6 mathematics instrument

Modzuka, Charlotte Madumelani January 2017
The aim of this study was to investigate the quality of the Annual National Assessment (ANA) Grade 6 mathematics instrument, including its design, with reference to a single education district. The main question investigated was: To what extent does the 2012 Annual National Assessment Grade 6 mathematics assessment instrument provide meaningful information for making appropriate interpretations at district level? The conceptual framework underpinning this study was drawn from the Queensland Studies Authority's assessment policy document. The research comprised a secondary-analysis design applying mixed methods, using the scripts of 546 learners from five schools in one district selected to represent a range of achievement. A content analysis of the instrument was undertaken, followed by a statistical item analysis applying the Rasch measurement model. These analytical methods were utilised to determine the quality of the ANA Grade 6 mathematics instrument. Content validity, construct validity and reliability were investigated in order to evaluate inferences that were made and actions that were taken based upon the mathematics performance of learners in Grade 6 in the Gauteng North District (GND) in 2012. The investigation revealed that construct validity and content validity were largely achieved, as items were appropriately aligned to the 2012 ANA Grade 6 mathematics curriculum. However, errors in mathematics and language formulation detracted from the validity of the instrument, and in some items a lack of clarity may have confused learners. As far as reliability is concerned, the investigation revealed that the instrument had a reasonable person separation index, a measure of both item and person reliability. These conclusions are based on a relatively small sample from only one district and therefore have somewhat limited applicability, but they are nevertheless of educational consequence. / Dissertation (MEd)--University of Pretoria, 2017.
/ Science, Mathematics and Technology Education / MEd / Unrestricted
194

Designing Software to Unify Person-Fit Assessment

Pfleger, Phillip Isaac 10 December 2020
Item-response theory (IRT) assumes that the model fits the data. One commonly overlooked aspect of model-fit assessment is an examination of person fit, or person-fit assessment (PFA). One reason that PFA lacks popularity among psychometricians is that comprehensive software is not present. This dissertation outlines the development and testing of a new software package, called wizirt, that will begin to meet this need. This software package provides a wide gamut of tools to the user but is currently limited to unidimensional, dichotomous, and parametric models. The wizirt package is built in the open-source language R, where it combines the capabilities of a number of other R packages under a single syntax. In addition to the wizirt package, I have created a number of resources to help users learn to use the package. This includes support for individuals who have never used R before, as well as more experienced R users.
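To give a flavor of what person-fit assessment computes (this is not wizirt's actual API, and wizirt itself is an R package), here is a minimal Python sketch of the widely used l_z person-fit statistic for dichotomous responses, assuming model-implied probabilities p are already in hand:

```python
import numpy as np

def lz_statistic(x, p):
    """Standardized log-likelihood person-fit statistic.
    x: one examinee's 0/1 responses; p: model probabilities of a correct answer.
    Markedly negative values flag response patterns the model finds improbable."""
    l0 = np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))          # observed log-likelihood
    mean = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))        # its expectation
    var = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)          # its variance
    return (l0 - mean) / np.sqrt(var)

p = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2])
x_fitting = np.array([1, 1, 1, 1, 1, 0, 0, 0])    # pattern consistent with p
x_aberrant = np.array([0, 0, 0, 0, 0, 1, 1, 1])   # reversed, improbable pattern
print(lz_statistic(x_fitting, p), lz_statistic(x_aberrant, p))
```

The aberrant pattern yields a clearly negative l_z, which is the kind of flag a PFA toolkit surfaces per examinee.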
196

Assessing the Differential Functioning of Items and Tests of a Polytomous Employee Attitude Survey

Swander, Carl Joseph 06 April 1999
Dimensions of a polytomous employee attitude survey were examined for the presence of differential item functioning (DIF) and differential test functioning (DTF) utilizing Raju, van der Linden, and Fleer's (1995) differential functioning of items and tests (DFIT) framework. Comparisons were made between managers and non-managers on the 'Management' dimension and between medical staff and nurse staff employees on both the 'Management' and 'Quality of Care and Service' dimensions. Two of the 21 items from the manager/non-manager comparison were found to have significant DIF, supporting the generalizability of Lynch, Barnes-Farrell, and Kulikowich (1998). No items from the medical staff/nurse staff comparisons were found to have DIF. The DTF results indicated that in two of the three comparisons one item could be removed to create dimensions free from DTF. Based on the current findings, implications and future research are discussed. / Master of Science
197

Measuring Procedural Justice: A Case Study in Criminometrics

Graham, Amanda K. 01 October 2019
No description available.
198

Item-Reduction Methodologies for Complex Educational Assessments: A Comparative Methodological Exploration

Kruse, Lance M. January 2019
No description available.
199

ITEM RESPONSE MODELS AND CONVEX OPTIMIZATION.

Lewis, Naama 01 May 2020
Item Response Theory (IRT) models, like the one-parameter, two-parameter, or normal ogive models, have been discussed for many years. These models represent a rich area of investigation due to their complexity as well as the large amount of data collected in relation to model parameter estimation. Here we propose a new way of looking at IRT models using I-projections and duality. We use convex optimization methods to derive these models. The Kullback-Leibler divergence is used as a metric, and specific constraints are proposed for the various models. With this approach, the dual problem is shown to be much easier to solve than the primal problem. In particular, when there are many constraints, we propose the application of a projection algorithm for solving these types of problems. We also consider re-framing the problem and utilizing a decomposition algorithm to solve for parameters. Both of these methods are compared to the Rasch and 2-parameter logistic models using established computer software, where estimation of model parameters is done under a maximum likelihood estimation framework. We also compare the appropriateness of these techniques on multidimensional item response data sets and propose new models with the use of I-projections.
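The duality this abstract relies on — that the I-projection minimizing Kullback-Leibler divergence subject to moment constraints takes an exponential-family (tilted) form, reducing the primal to a low-dimensional dual root-find — can be shown on a toy example. The die-and-target-mean setup below is an illustrative assumption, not a model from the dissertation:

```python
import numpy as np
from scipy.optimize import brentq

# I-projection of a base distribution q onto {p : E_p[f] = c}.
# By duality, the minimizer of KL(p || q) has the exponential form
# p*(x) proportional to q(x) * exp(lam * f(x)); solve the 1-D dual for lam.
q = np.full(6, 1 / 6)             # fair six-sided die as base measure
f = np.arange(1, 7, dtype=float)  # constraint statistic: the face value
c = 4.5                           # target mean

def tilted(lam):
    w = q * np.exp(lam * f)
    return w / w.sum()

# The tilted mean is monotone in lam, so a bracketed root-find suffices.
lam_star = brentq(lambda lam: tilted(lam) @ f - c, -10, 10)
p_star = tilted(lam_star)
print(p_star @ f)  # ≈ 4.5
```

Solving one scalar equation here replaces a constrained optimization over a six-dimensional simplex; with many constraints, cyclic projection or decomposition schemes of the kind the abstract mentions play the same role.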
200

HIGH-STAKES TESTS FOR STUDENTS WITH SPECIFIC LEARNING DISABILITIES: DISABILITY-BASED DIFFERENTIAL ITEM FUNCTIONING

Anjorin, Idayatou 01 December 2009
Students with learning disabilities are increasingly included in state accountability systems. The purpose of this study was to investigate disability-based differential item functioning (DIF) on a statewide high-stakes mathematics test administered in the spring of 2003 to all students seeking a high-school diploma in one state in the eastern part of the U.S. Overall performance scores for all students in grade 10 taking the test for the first time were examined. Item performance scores for students with specific learning disabilities who took the test with and without state-mandated accommodations were compared with those for students without disabilities after matching on total test score. It was hypothesized that more DIF items would favor students who received packages of accommodations. The standardization method for DIF analysis by Dorans and Holland yielded items exhibiting DIF in both directions. This study revealed that more DIF items favored students without disabilities, with substantially high indexes that could be problematic for understanding the meaning of scores for students with specific learning disabilities.
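The standardization method used here compares proportion-correct between the focal and reference groups at each matched total-score level, weighting the differences by the focal group's score distribution. A minimal sketch follows; the simulated data and the function name are illustrative, not the study's:

```python
import numpy as np

def std_p_dif(item_ref, score_ref, item_foc, score_foc):
    """Standardized p-difference for one dichotomous item.
    Matches examinees on total test score and weights each score level's
    difference in proportion-correct by the focal group's frequency there."""
    num = den = 0.0
    for s in np.union1d(score_ref, score_foc):
        foc_mask = score_foc == s
        ref_mask = score_ref == s
        n_foc = foc_mask.sum()
        if n_foc == 0 or ref_mask.sum() == 0:
            continue  # score level unrepresented in one group: no comparison
        diff = item_foc[foc_mask].mean() - item_ref[ref_mask].mean()
        num += n_foc * diff
        den += n_foc
    return num / den

rng = np.random.default_rng(1)
score_ref = rng.integers(0, 21, 400)   # matched total scores, reference group
score_foc = rng.integers(0, 21, 400)   # matched total scores, focal group
item_ref = (rng.random(400) < 0.6).astype(int)
item_foc = (rng.random(400) < 0.6).astype(int)
print(round(std_p_dif(item_ref, score_ref, item_foc, score_foc), 3))
```

Values near zero indicate negligible DIF; negative values would indicate the item disadvantages the focal group, the direction reported for students with specific learning disabilities above.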
