1 |
Rethinking the Force Concept Inventory: Developing a Cognitive Diagnostic Assessment to Measure Misconceptions in Newton's Laws
Norris, Mary Armistead, 12 October 2021
Student misconceptions in science are common and may persist even in students who are academically successful. Concept inventories, multiple-choice tests in which the distractors map onto common, previously identified misconceptions, are widely used by researchers and educators to gauge the prevalence of student misconceptions in science. Distractor analysis of concept inventory responses could be used to create profiles of individual students' misconceptions, which could provide deeper insight into the phenomenon and useful information for instructional planning; however, this is rarely done because the inventories are not designed to facilitate it. Researchers in educational measurement have suggested that diagnostic cognitive models (DCMs) could be used to diagnose misconceptions and to create such misconception profiles. DCMs are multidimensional, confirmatory latent class models designed to measure the mastery or presence of fine-grained skills or attributes. By replacing the skills/attributes in the model with common misconceptions, DCMs could be used to sort students into misconception profiles based on their responses to concept inventory-like questions. A few researchers have developed new DCMs specifically designed to do this and have retrofitted data from existing concept inventories to them. However, cognitive diagnostic assessments, which are likely to display better model fit with DCMs, have not been developed. This project developed a cognitive diagnostic assessment to measure knowledge and misconceptions about Newton's laws and fitted it with the deterministic inputs, noisy "and" gate (DINA) model. Experienced physics instructors assessed content validity and Q-matrix alignment. A pilot test with 100 undergraduates was conducted to assess item quality within a classical test theory framework. The final version of the assessment was field tested with 349 undergraduates.
Results showed that response data displayed acceptable fit to the DINA model at the item level but more questionable fit at the overall model level; that responses to selected items were similar to those given to two items from the Force Concept Inventory; and that, although all students were likely to have misconceptions, those with lower knowledge scores were more likely to have them. / Doctor of Philosophy / Misconceptions about science are common even among well-educated adults. They range from incorrect facts to personal explanations for natural phenomena that make intuitive sense but are incorrect, and they frequently exist in people's minds alongside correct science knowledge. Because of this, misconceptions are often difficult to identify and to change; students may be academically successful and still retain them. Concept inventories, multiple-choice tests in which the incorrect answer choices appeal to students with common misconceptions, are frequently used by researchers and educators to gauge the prevalence of student misconceptions in science. Analysis of incorrect answer choices on concept inventory questions can be used to determine individual students' misconceptions, but this is rarely done because the inventories are not known to be valid measures for this purpose. One source of validity for a test is the statistical model used to calculate its scores: in a valid test, students' answers should follow patterns similar to those predicted by the model. For instance, students are likely to get questions about the same concept either all correct or all incorrect. Researchers in educational measurement have proposed that certain innovative statistical models could be used to develop tests that identify students' misconceptions, but no one has done so.
This project developed a test to measure knowledge and misconceptions about forces and assessed how well it predicted students' misconceptions under two statistical models. Results showed that the test predicted students' knowledge in good agreement, and their misconceptions in moderate agreement, with the statistical models; that students tended to answer selected questions the same way they answered two similar questions from an existing test about forces; and that, although students with lower test scores were more likely to have misconceptions, students with high test scores also had them.
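The DINA model named in the abstract above has a simple item response function: a student answers correctly with probability 1 minus the slip parameter if they hold every attribute the item requires, and with the guess probability otherwise. A minimal sketch in Python, with invented slip and guess values (nothing here comes from the study's data):

```python
import numpy as np

# Minimal sketch of the DINA item response function (Junker & Sijtsma, 2001).
# Attribute profiles and parameter values are illustrative only.

def dina_prob(alpha, q, slip, guess):
    """P(correct) for one item given an attribute profile.

    alpha : 0/1 vector, attributes the student possesses
    q     : 0/1 vector, attributes the item requires (a Q-matrix row)
    slip  : probability a "master" of the item answers incorrectly
    guess : probability a non-master answers correctly
    """
    eta = int(np.all(alpha >= q))  # 1 iff the student has every required attribute
    return (1 - slip) ** eta * guess ** (1 - eta)

# A student holding attributes 1 and 2, on an item requiring both:
p_master = dina_prob(np.array([1, 1, 0]), np.array([1, 1, 0]), slip=0.1, guess=0.2)
# The same item for a student missing attribute 2:
p_nonmaster = dina_prob(np.array([1, 0, 0]), np.array([1, 1, 0]), slip=0.1, guess=0.2)
print(p_master, p_nonmaster)  # 0.9 0.2
```

Because eta is all-or-nothing, DINA is conjunctive: missing any one required attribute drops a student to the guessing probability, which is what makes it natural for sorting respondents into discrete profiles.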
|
2 |
Identifying and measuring cognitive aspects of a mathematics achievement test
Lutz, Megan E., 16 March 2012
Cognitive diagnostic models (CDMs) are a useful way to identify potential areas of intervention for students who may not have mastered various skills and abilities at the same time as their peers. Traditionally, CDMs have been used on narrowly defined classroom tests, such as those for determining whether students are able to use different algebraic principles correctly. In the current study, the Deterministic Inputs, Noisy "And" Gate model (DINA; Haertel, 1989; Junker & Sijtsma, 2001) and the Compensatory Reparameterized Unified Model (CRUM; Hartz, 2002), as parameterized by the log-linear cognitive diagnosis model (LCDM; Henson, Templin, & Willse, 2009), were used to analyze the utility of pre-defined cognitive components in estimating students' abilities on a broadly defined, standardized mathematics achievement test. The attribute mastery profile distributions were compared; the majority of students were classified into the extremes of no mastery or complete mastery under both the CRUM and DINA models, though the CRUM yielded greater variability among attribute mastery classifications.
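The DINA/CRUM contrast in the abstract above is conjunctive versus compensatory: under DINA, partial mastery of an item's required attributes gains a student nothing, while under a compensatory LCDM-family model each mastered attribute raises the success probability. A small illustration (all parameter values are invented, not estimates from the study):

```python
import math

# Conjunctive (DINA) vs. compensatory (CRUM-style, main effects only)
# item response probabilities. Lambda, slip, and guess values are invented.

def dina_p(alpha, q, slip=0.1, guess=0.2):
    # Success probability jumps only when ALL required attributes are mastered.
    eta = all(a >= b for a, b in zip(alpha, q))
    return (1 - slip) if eta else guess

def crum_p(alpha, q, lam0=-2.0, lam=(1.5, 1.5)):
    # Each mastered required attribute adds its main effect to the logit,
    # so partial mastery partially raises the success probability.
    logit = lam0 + sum(l * a * b for l, a, b in zip(lam, alpha, q))
    return 1 / (1 + math.exp(-logit))

item_q = (1, 1)  # an item requiring both attributes
for alpha in [(0, 0), (1, 0), (1, 1)]:
    print(alpha, round(dina_p(alpha, item_q), 3), round(crum_p(alpha, item_q), 3))
```

With these toy values, the profile (1, 0) scores the same as (0, 0) under DINA but noticeably higher under the compensatory model, which is one reason the two models can distribute the middle profiles differently even when both push most students toward the no-mastery/complete-mastery extremes.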
|
3 |
Diagnostic Modeling of Intra-Organizational Mechanisms for Supporting Policy Implementation
Mutcheson, Brock, 28 June 2016
The Virginia Guidelines for Uniform Performance Standards and Evaluation Criteria for Teachers represented a significant overhaul of conventional teacher evaluation criteria in Virginia. The policy outlined seven performance standards by which all Virginia teachers would be evaluated. This study explored the application of cognitive diagnostic modeling to measure teachers' perceptions of intra-organizational mechanisms available to support educational professionals in implementing this policy.
It was found that a coarse-grained, four-attribute compensatory reparameterized unified model (C-RUM) fit the teacher perception data better, and had lower standard errors, than competing finer-grained models. The Q-matrix accounted for the complex loadings of items onto the four theoretically and empirically driven mechanisms of implementation support: characteristics of the policy, teachers, leadership, and the organization. The mechanisms were positively, significantly, and moderately correlated, suggesting that each captured a different yet related component of policy implementation support. The diagnostic profile estimates indicated that the majority of teachers perceived support on items relating to "characteristics of teachers," and almost 60% of teachers were estimated to belong to profiles with perceived support on "characteristics of the policy." Finally, multiple-group multinomial log-linear models (Xu & von Davier, 2008) were used to analyze the data across subjects, grade levels, and career status. STEM teachers reported lower perceived support than non-STEM teachers with the same profile, suggesting that STEM teachers required different support than their non-STEM peers.
The precise diagnostic feedback on the implementation process provided by this application of diagnostic models will be beneficial to policy makers and educational leaders. Specifically, they will be better prepared to identify strengths and weaknesses and to target resources for a more efficient, and potentially more effective, policy implementation process. Equipped with more precise diagnostic feedback, policy makers and school leaders may more confidently engage in empirical decision making, especially with regard to targeting resources for the short-term and long-term organizational goals subsumed within the policy implementation initiative. / Ph. D.
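The "complex loadings" mentioned in the abstract above simply mean that some survey items map onto more than one support mechanism in the Q-matrix. A hypothetical example (the items and entries below are invented for illustration, not the study's actual Q-matrix):

```python
import numpy as np

# Hypothetical Q-matrix for four support mechanisms: policy, teachers,
# leadership, organization. Rows are survey items, columns are attributes;
# a 1 means the item measures that mechanism. Entries are illustrative only.
Q = np.array([
    [1, 0, 0, 0],  # item 1: policy characteristics only (simple loading)
    [0, 1, 0, 0],  # item 2: teacher characteristics only (simple loading)
    [1, 0, 1, 0],  # item 3: policy and leadership (complex loading)
    [0, 0, 1, 1],  # item 4: leadership and organization (complex loading)
])

# Items with complex loadings are those requiring more than one attribute.
complex_items = np.where(Q.sum(axis=1) > 1)[0]
print(complex_items)  # zero-based indices of multi-mechanism items
```

A compensatory model like the C-RUM handles such rows naturally, since perceived support on either loaded mechanism can raise the probability of endorsing the item.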
|