
Estimating attribute-based reliability in cognitive diagnostic assessment

Zhou, Jiawen 06 1900 (has links)
Cognitive diagnostic assessment (CDA) is a testing format that employs a cognitive model, first, to develop or identify items measuring specific knowledge and skills and, then, to direct psychometric analyses of examinees' item response patterns so as to support diagnostic inferences. The attribute hierarchy method (AHM; Leighton, Gierl, & Hunka, 2004) is a psychometric procedure for classifying examinees' test item responses into a set of structured attribute patterns associated with different components of a cognitive model of task performance. Attribute reliability is a fundamental concept in cognitive diagnostic assessment because it refers to the consistency of the decisions made in a diagnostic test about examinees' mastery of specific attributes. In this study, an adapted attribute-based reliability estimate was evaluated against the standard Cronbach's alpha using simulated data. Factors expected to influence attribute reliability estimates, including test length, sample size, model structure, and model-data fit, were also studied. Results revealed that the two attribute-based reliability estimation indices perform comparably; however, the adapted index is conceptually more meaningful. Test length, model structure, and model-data fit were shown to affect attribute reliability estimates differentially. Implications for researchers and practitioners are drawn from the simulation results, and limitations of the study and future directions are discussed. / Measurement, Evaluation, and Cognition
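The baseline against which the adapted attribute-based index is compared is the standard Cronbach's alpha. As a minimal illustration (this is the textbook coefficient, not the thesis's adapted attribute-based estimator), alpha for an examinees-by-items score matrix can be sketched as:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (examinees x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var_sum = scores.var(axis=0, ddof=1).sum()   # sample variance per item
    total_var = scores.sum(axis=1).var(ddof=1)        # variance of total scores
    return k / (k - 1) * (1.0 - item_var_sum / total_var)
```

An attribute-based variant would apply the same logic to the subset of items measuring a given attribute, which is where consistency of attribute-level mastery decisions enters.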

Maximizing the Potential of Multiple-choice Items for Cognitive Diagnostic Assessment

Gu, Zhimei 09 January 2012 (has links)
When applying cognitive diagnostic models, the goal is to accurately estimate students’ diagnostic profiles. The accuracy of these estimates may be enhanced by looking at the types of incorrect options a student selects. This thesis research examines the additional diagnostic information available from the distractors in multiple-choice items used in large-scale achievement assessments and identifies optimal conditions for extracting diagnostic information. The study is based on analyses of both real student responses and simulated data. The real student responses are from a large-scale provincial math assessment for grade 6 students in Ontario. Data were then simulated under different skill dimensionality and item discrimination conditions. Comparisons were made between student profile estimates obtained with the DINA and MC-DINA models. The MC-DINA model is a newly developed cognitive diagnostic model in which the probability of a student choosing a particular item option depends on how closely the student’s cognitive skill profile matches the skills tapped by that option. The results of the simulation analysis suggested that when the simulated data included additional diagnostic information in the distractors, the MC-DINA model was able to use that information to improve the estimation of student profiles, demonstrating the utility of the additional information obtained from item distractors. The value of adding information from distractors was greater when item discrimination was lower and skill multidimensionality was higher. However, in the real data, the keyed options provided more diagnostic information than the distractors, and there was little information in the distractors that could be utilized by the MC-DINA model. This implies that current math test items could be further developed to include diagnostically rich distractors. The study offers some suggestions for the design of multiple-choice test items and their formative use.
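For context, the response probability in the standard DINA model (the baseline the thesis compares against) reduces to a slip/guess dichotomy, depending on whether the examinee masters every skill the item's Q-matrix row requires. A minimal sketch, with illustrative names (this is plain DINA, not the thesis's MC-DINA implementation):

```python
def dina_prob(skills, q_row, slip, guess):
    """DINA response probability: an examinee who masters all skills the
    Q-matrix row requires answers correctly with probability (1 - slip);
    otherwise the success probability is the guessing parameter."""
    masters_all = all(s >= q for s, q in zip(skills, q_row))
    return (1.0 - slip) if masters_all else guess
```

MC-DINA extends this idea by attaching a skill requirement to each response option, so the probability of selecting a given option depends on how well the examinee's profile matches that option, which is exactly the extra distractor information the study evaluates.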

The Performance of the Linear Logistic Test Model When the Q-Matrix is Misspecified: A Simulation Study

Macdonald, George T. 14 November 2013 (has links)
A simulation study was conducted to explore the performance of the linear logistic test model (LLTM) when the relationships between items and cognitive components were misspecified. Factors manipulated included percent of misspecification (0%, 1%, 5%, 10%, and 15%), form of misspecification (under-specification, balanced misspecification, and over-specification), sample size (20, 40, 80, 160, 320, 640, and 1280), Q-matrix density (60% and 46%), number of items (20, 40, and 60 items), and skewness of the person ability distribution (-0.5, 0, and 0.5). Statistical bias, root mean squared error, confidence interval coverage, confidence interval width, and pairwise cognitive component correlations were computed. The impact of the design factors was interpreted for cognitive component, item difficulty, and person ability parameter estimates. The simulation provided rich results, and key conclusions include (a) SAS works superbly when estimating the LLTM using a marginal maximum likelihood approach for cognitive components and empirical Bayes estimation for person ability, (b) parameter estimates are sensitive to misspecification, (c) under-specification is preferred to over-specification of the Q-matrix, (d) when the model is properly specified, the cognitive component parameter estimates often have tolerable amounts of root mean squared error when the sample size is greater than 80, (e) the LLTM is robust to the density of the Q-matrix specification, (f) the LLTM works well when the number of items is 40 or greater, and (g) the LLTM is robust to slight skewness of the person ability distribution. In sum, the LLTM is capable of identifying conceptual knowledge when the Q-matrix is properly specified, which is a rich area for applied empirical research.
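The quantity the misspecification manipulations perturb is the LLTM's decomposition of Rasch item difficulty into a Q-matrix-weighted sum of cognitive component weights. A hedged sketch of that decomposition (variable names are illustrative, and this omits the LLTM's normalization constant):

```python
import numpy as np

def lltm_prob(theta, Q, eta):
    """LLTM: item difficulty beta_i = sum_k Q[i, k] * eta[k], i.e. a
    Q-matrix-weighted sum of cognitive component weights; the response
    probability then follows the Rasch model P = logistic(theta - beta)."""
    beta = np.asarray(Q, dtype=float) @ np.asarray(eta, dtype=float)
    return 1.0 / (1.0 + np.exp(-(theta - beta)))
```

A misspecified Q-matrix (a wrong 0 or 1 in `Q`) propagates directly into every affected item's difficulty, which is why the study finds parameter estimates sensitive to even small misspecification percentages.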

Making Diagnostic Inferences about Student Performance on the Alberta Education Diagnostic Mathematics Project: An Application of the Attribute Hierarchy Method

Alves, Cecilia Unknown Date
No description available.

Diagnosing examinees' attributes-mastery using the Bayesian inference for binomial proportion: a new method for cognitive diagnostic assessment

Kim, Hyun Seok (John) 05 July 2011 (has links)
The purpose of this study was to propose a simple and effective method for cognitive diagnostic assessment (CDA) without heavy computational demand, using Bayesian inference for a binomial proportion (BIBP). In real-data studies, BIBP was applied to test data using two different item designs: four and ten attributes. The BIBP method was also compared with DINA and LCDM on the diagnosis results for the same four-attribute data set. There were slight differences in the attribute mastery probability estimates among the three models (DINA, LCDM, BIBP), which could result in different attribute mastery patterns. In simulation studies, the general accuracy of the BIBP method in true-parameter estimation was found to be relatively high. The DINA estimation showed a slightly higher overall correct classification rate but larger overall biases and estimation errors than the BIBP estimation. The three simulation variables (attribute correlation, attribute difficulty, and sample size) affected the parameter estimation of both models, but differently: harder attributes were associated with higher accuracy of attribute mastery classification in the BIBP estimation, while easier attributes were associated with higher accuracy in the DINA estimation. In conclusion, BIBP appears to be an effective method for CDA, with the advantages of easy and fast computation and relatively high accuracy of parameter estimation.
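The computational appeal of BIBP comes from conjugacy: with a Beta prior on an attribute-mastery proportion, the posterior after observing binomial item responses is again a Beta distribution, so no iterative estimation is needed. A minimal sketch of that update, assuming a uniform Beta(1, 1) prior and an illustrative mastery cutoff of 0.5 (both assumptions, not the thesis's exact settings):

```python
def beta_posterior(correct, attempted, a=1.0, b=1.0):
    """Conjugate update: Beta(a, b) prior + Binomial likelihood ->
    Beta(a + correct, b + attempted - correct) posterior."""
    return a + correct, b + attempted - correct

def classify_mastery(correct, attempted, cutoff=0.5):
    """Label an attribute as mastered when the posterior mean of the
    mastery proportion exceeds the cutoff."""
    a, b = beta_posterior(correct, attempted)
    return (a / (a + b)) > cutoff
```

Because the update is closed-form, diagnosing a full attribute profile is just one such update per attribute, which is the "easy and fast computation" the abstract highlights.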
