1 |
Cognitive diagnostic model comparisons. Lim, Yeongyu, 08 June 2015.
Cognitive diagnostic assessment (CDA) is a new theoretical framework designed to integrate cognitive psychology into measurement theory. The main purpose of CDA is to provide examinees with diagnostic information, whereas traditional psychometric approaches focus on how accurately latent variables are measured. Many cognitive diagnostic models (CDMs) have been developed for CDA. Three such models, namely the rule space method (RSM), the higher-order deterministic inputs, noisy 'and' gate (HO-DINA) model, and the multidimensional latent trait model for diagnosis (MLTM-D), were compared using simulated and empirical data. For the simulation study, three methods of data generation were proposed, each designed around one of the three models. A total of 12 conditions were involved: 2 item designs × 2 levels of test difficulty × 3 methods of data generation. The diagnostic results were compared by level of test difficulty, level of ability estimates, and level of dimensionality. The effect of the number of attributes on classification accuracy was also investigated. For the empirical study, data from a mathematics test were used and the diagnostic results of the three models were compared.
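For context, the item response function with which the DINA model family (including HO-DINA) is usually written is sketched below in generic notation, following de la Torre and Douglas (2004); this is background, not a formula quoted from the dissertation.

```latex
% Sketch of the standard DINA / higher-order DINA formulation (de la Torre & Douglas, 2004);
% generic notation, not quoted from the dissertation.
\begin{align}
  \eta_{ij} &= \prod_{k=1}^{K} \alpha_{ik}^{\,q_{jk}}, \\
  P(X_{ij} = 1 \mid \boldsymbol{\alpha}_i) &= (1 - s_j)^{\eta_{ij}}\, g_j^{\,1 - \eta_{ij}}, \\
  P(\alpha_{ik} = 1 \mid \theta_i) &= \frac{\exp(\lambda_{0k} + \lambda_{1k}\theta_i)}{1 + \exp(\lambda_{0k} + \lambda_{1k}\theta_i)}.
\end{align}
```

Here η_ij indicates whether examinee i has mastered every attribute that item j requires (as encoded by the Q-matrix entries q_jk), s_j and g_j are the slip and guessing parameters, and the final line is the higher-order layer that links attribute mastery to a general ability θ_i, which is what distinguishes HO-DINA from the plain DINA model.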
|
2 |
The Use of Cognitive Diagnostic Modeling in the Assessment of Computational Thinking. Tingxuan Li, 14 August 2019.
In order to broaden participation in computer science and other computing-related careers, middle school classrooms should provide students with opportunities (tasks) to think like a computer scientist. Researchers in computing education promote the idea that programming skill should not be a prerequisite for students to display computational thinking (CT). Thus, tasks that aim to deliberately elicit students' CT competency should be stand-alone tasks rather than coding-fluency-oriented tasks. Guided by this approach, the assessment design process began by examining national standards in CT. A Q-matrix (i.e., an item–attribute alignment table) was then developed and modified using (a) the CT literature, (b) input from subject-matter experts, and (c) cognitive interviews with a small sample of students. After multiple-choice item prototypes were written, pilot-tested, and revised, 15 of them were selected for administration to 564 students in two middle schools in the Midwestern US. Through cognitive diagnostic modeling, the estimation results yielded mastery classifications, or subscores, that teachers can use diagnostically. The results help teachers foster students' mastery orientations, that is, address the gap between what students know and what they need to know in order to meet desired learning goals. By equipping teachers with a diagnostic-classification-based assessment, this research has the capacity to inform instruction, which, in turn, will enrich students' learning experience in CT.
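As a concrete (and entirely hypothetical) illustration of the Q-matrix idea described in this abstract, the sketch below encodes a toy item–attribute alignment table and computes simple attribute-level subscores; the attribute names, items, and responses are invented and are not taken from the study.

```python
import numpy as np

# Toy Q-matrix: rows are items, columns are CT attributes. Names and entries are
# invented for illustration; they are not the Q-matrix from this study.
attributes = ["decomposition", "patterns", "abstraction", "algorithms"]
Q = np.array([
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 1, 1],
    [1, 0, 0, 1],
])

# Toy scored responses (1 = correct) for three students on the six items.
X = np.array([
    [1, 1, 1, 0, 0, 1],
    [0, 1, 0, 1, 1, 0],
    [1, 0, 0, 0, 0, 1],
])

# Simple attribute-level subscores: proportion correct on the items measuring each
# attribute (a descriptive stand-in for the model-based mastery classification the
# study actually estimates).
subscores = (X @ Q) / Q.sum(axis=0)
for name, s in zip(attributes, subscores.T):
    print(name, np.round(s, 2))
```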
|
3 |
Cognitive diagnostic analysis using hierarchically structured skills. Su, Yu-Lan, 01 May 2013.
This dissertation proposes two modified cognitive diagnostic models (CDMs), the deterministic inputs, noisy 'and' gate with hierarchy (DINA-H) model and the deterministic inputs, noisy 'or' gate with hierarchy (DINO-H) model. Both models incorporate the hierarchical structure of the cognitive skills into the estimation process and can be used in situations where the attributes are ordered hierarchically. Data from the Trends in International Mathematics and Science Study (TIMSS) 2003 are analyzed to illustrate the proposed approaches. The simulation study evaluates the effectiveness of the proposed approaches under various conditions (e.g., different numbers of attributes, test lengths, sample sizes, and hierarchical structures). It addresses model fit, item fit, and the accuracy of item parameter recovery when the skills follow a specified hierarchy and different estimation models are applied, and it examines and compares the impact of misspecifying a skill hierarchy across estimation models under their differing assumptions of dependent or independent attributes. The study is unique in incorporating a skill hierarchy into the conventional DINA and DINO models; doing so also reduces the number of possible latent classes and decreases sample size requirements. The study suggests that the DINA-H/DINO-H models, rather than the conventional DINA/DINO models, should be considered when skills are hierarchically ordered. Its results demonstrate the proposed approaches to analyzing hierarchically structured CDMs, illustrate how cognitive diagnosis models can be applied to a large-scale assessment, and provide researchers and test users with practical guidelines.
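To make concrete how a skill hierarchy reduces the number of possible latent classes, the mechanism this abstract points to, here is a minimal sketch assuming a hypothetical linear hierarchy A1 → A2 → A3 → A4; the hierarchies actually studied in the dissertation may differ.

```python
from itertools import product

# Hypothetical linear hierarchy A1 -> A2 -> A3 -> A4: an attribute can be mastered
# only if its prerequisite is mastered. This illustrates the general idea, not the
# specific hierarchies examined in the dissertation.
K = 4
prerequisite = {1: 0, 2: 1, 3: 2}  # attribute index -> index of its prerequisite

def respects_hierarchy(profile):
    # A profile is permitted if every mastered attribute has its prerequisite mastered.
    return all(profile[pre] >= profile[k] for k, pre in prerequisite.items())

all_profiles = list(product([0, 1], repeat=K))              # 2^K = 16 unconstrained classes
permitted = [p for p in all_profiles if respects_hierarchy(p)]

print(len(all_profiles), "unconstrained latent classes")    # 16
print(len(permitted), "classes permitted by the hierarchy")  # 5 for a linear hierarchy
for p in permitted:
    print(p)
```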
|
4 |
Cognitive Diagnostic Model, a Simulated-Based Study: Understanding Compensatory Reparameterized Unified Model (CRUM). Galeshi, Roofia, 28 November 2012.
A recent trend in education has been toward formative assessments that enable teachers, parents, and administrators to help students succeed. Cognitive diagnostic modeling (CDM) has the potential to provide valuable information that stakeholders can use to help students identify their skill deficiencies in specific academic subjects. Cognitive diagnosis models are mainly viewed as a family of confirmatory latent class probabilistic models, which allow the mapping of students' skill profiles/academic ability. Using complex simulation studies, methodological issues in one of the existing cognitive models, the compensatory reparameterized unified model (CRUM) within the log-linear family of CDMs, were investigated. For practitioners to implement these models, item parameter recovery and examinee classification need to be studied in detail. A series of simulated datasets was generated with the following designs: three attributes with seven items, three attributes with thirty-five items, four attributes with fifteen items, and five attributes with thirty-one items. Each dataset was generated with sample sizes of 50, 100, 500, 1,000, 5,000, and 10,000 examinees.
The first manuscript reports an investigation of how accurately the CRUM recovers item parameters and classifies examinees under true Q-matrix specification and various research designs. The results suggest that test length relative to the number of attributes, together with sample size, affects item parameter recovery and examinee classification accuracy. The second manuscript reports an investigation of the sensitivity of relative fit indices in detecting misfit under over- and opposite-Q-matrix misspecifications. The relative fit indices under investigation were the Akaike information criterion (AIC), the Bayesian information criterion (BIC), and the sample-size-adjusted Bayesian information criterion (ssaBIC). The results suggest that the CRUM can be a robust model, given due consideration of sample size and item/attribute combinations. The findings of this dissertation fill some of the existing gaps in methodological knowledge regarding the applicability and generalizability of cognitive models, and they help practitioners design tests within the CDM framework so as to obtain reliable and valid results. / Ph. D.
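For context, the CRUM is commonly expressed as the main-effects (compensatory) member of the log-linear cognitive diagnosis model family; a generic sketch of that item response function follows, in notation that is illustrative rather than quoted from the dissertation.

```latex
% Generic main-effects (compensatory) form of the log-linear cognitive diagnosis model,
% commonly used to express the CRUM; illustrative notation only.
\begin{equation}
  P(X_{ij} = 1 \mid \boldsymbol{\alpha}_i)
    = \frac{\exp\left(\lambda_{j,0} + \sum_{k=1}^{K} \lambda_{j,1,(k)}\, q_{jk}\, \alpha_{ik}\right)}
           {1 + \exp\left(\lambda_{j,0} + \sum_{k=1}^{K} \lambda_{j,1,(k)}\, q_{jk}\, \alpha_{ik}\right)}
\end{equation}
```

Here λ_{j,0} is the item intercept and λ_{j,1,(k)} is the main effect of attribute k for item j; because mastering any one required attribute raises the log-odds of a correct response, the model is compensatory.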
|
5 |
Recommendations Regarding Q-Matrix Design and Missing Data Treatment in the Main Effect Log-Linear Cognitive Diagnosis Model. Ma, Rui, 11 December 2019.
Diagnostic classification models, used in conjunction with diagnostic assessments, classify individual respondents as masters or nonmasters at the attribute level. Previous researchers (Madison & Bradshaw, 2015) recommended that items on the assessment measure all patterns of attribute combinations to ensure classification accuracy, but in practice certain attributes may not be measured by themselves. Moreover, model estimation requires a large sample size, yet in reality there may be unanswered items in the data. Therefore, the current study sought to provide suggestions for selecting between two alternative Q-matrix designs when an attribute cannot be measured in isolation and when using maximum likelihood estimation in the presence of missing responses. The factorial ANOVA results of this simulation study indicate that adding items measuring some attributes, rather than all attributes, is more optimal, and that other missing data treatments should be sought when the percentage of missing responses exceeds 5%.
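The sketch below illustrates, with invented entries rather than the Q-matrices evaluated in the study, the kind of design contrast at issue: a base Q-matrix in which one attribute is never measured in isolation, augmented either with an item measuring all attributes or with items measuring only some of them.

```python
import numpy as np

# Hypothetical 3-attribute Q-matrices, invented for illustration; they are not the
# designs evaluated in the study. In the base design the third attribute (A3) is
# never measured in isolation.
Q_base = np.array([
    [1, 0, 0],   # item 1 measures A1 only
    [0, 1, 0],   # item 2 measures A2 only
    [1, 0, 1],   # item 3 measures A1 and A3
    [0, 1, 1],   # item 4 measures A2 and A3
])

# Two ways to augment the base design:
# (a) add an item measuring ALL attributes,
Q_all = np.vstack([Q_base, [[1, 1, 1]]])
# (b) add items measuring SOME attributes (the option the abstract reports as more optimal).
Q_some = np.vstack([Q_base, [[1, 1, 0], [0, 1, 1]]])

def attribute_coverage(Q):
    """Items measuring each attribute, and which attributes appear in single-attribute items."""
    counts = Q.sum(axis=0)
    isolated = Q[Q.sum(axis=1) == 1].any(axis=0)
    return counts, isolated

for name, Q in [("base", Q_base), ("add-all", Q_all), ("add-some", Q_some)]:
    counts, isolated = attribute_coverage(Q)
    print(name, "items per attribute:", counts, "measured alone:", isolated)
```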
|
6 |
The Hierarchy Misfit Index: Evaluating Person Fit for Cognitive Diagnostic Assessment. Guo, Qi, date unknown.
No description available.
|