1 |
Evidence-based detection of spiculated lesions on mammography. Sampat, Mehul Pravin, 28 August 2008
Not available / text
|
2 |
A Hypothesis Testing Procedure Designed for Q-Matrix Validation of Diagnostic Classification Models. Sachdeva, Ruchi Jain, January 2018
Cognitive diagnosis models have become popular largely because they explain a student's poor performance in terms of skills that have not yet been mastered, making it possible for educators to provide targeted remediation and tailor instruction to individual strengths and weaknesses. However, for these procedures to be effective, the Q-matrix, which establishes the relationships between latent variables representing knowledge structures (columns) and individual items on an assessment (rows), must be carefully specified. The goal of this work is to develop a new test statistic for detecting misspecifications of the Q-matrix, including both underfitting and overfitting. In addition to developing this new test statistic, this dissertation evaluated its performance and developed an estimator of the asymptotic variance based on the Fisher information matrix of the slip and guess parameters.
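The Q-matrix structure and the slip/guess parameters described above can be illustrated with a minimal sketch of the DINA model, a common cognitive diagnosis model. The particular Q-matrix, parameter values, and function name below are hypothetical examples, not values from the dissertation:

```python
import numpy as np

# Hypothetical 4-item, 2-attribute Q-matrix: rows are items, columns are
# attributes; a 1 means the item requires that attribute.
Q = np.array([
    [1, 0],  # item 1 requires attribute 1 only
    [0, 1],  # item 2 requires attribute 2 only
    [1, 1],  # item 3 requires both attributes
    [1, 0],  # item 4 requires attribute 1 only
])

# Illustrative slip and guess parameters per item (made up for this sketch).
slip = np.array([0.10, 0.10, 0.20, 0.15])
guess = np.array([0.20, 0.25, 0.10, 0.20])

def p_correct(alpha):
    """DINA probability of answering each item correctly for a student with
    attribute-mastery vector alpha (one 0/1 entry per attribute)."""
    eta = np.all(Q <= alpha, axis=1)  # True iff all required attributes mastered
    return np.where(eta, 1 - slip, guess)

# A student who has mastered only attribute 1 answers items 1 and 4 with
# probability 1 - slip, and can only guess on items 2 and 3.
print(p_correct(np.array([1, 0])))
```

A misspecified Q-matrix entry changes which students fall in the "guessing" group for an item, which is what distorts the slip and guess estimates the proposed test examines.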
The test statistic was evaluated in two simulation studies and applied to the fraction subtraction dataset. The first simulation study investigated empirical Type I error rates under four levels of sample size, three levels of correlation among attributes, and three levels of item discrimination. Results showed that as the sample size increases, the Type I error rate approaches the nominal 5% level. Surprisingly, the most discriminating items (item discrimination of 4) had the largest Type I error rates. The power study showed that the statistic is very powerful in detecting under- or over-specification of the Q-matrix with large sample sizes and/or when items discriminate sharply between students who have and have not mastered a skill. Interestingly, when the Q-matrix contains multiple misspecifications and two misspecified entries are tested simultaneously, under-specification is detected more reliably than over-specification. The analysis of the fraction subtraction dataset found that 15% of the Q-matrix entries had enough evidence to reject the null hypothesis, indicating that the test finds misfit in the original expert-designed Q-matrix.
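The Type I error analysis above follows the standard Monte Carlo recipe: generate data under the null hypothesis many times, apply the test, and record the rejection rate. The sketch below illustrates that recipe with a simple one-sample z-test as a stand-in; it is not the dissertation's Q-matrix statistic, and the sample sizes and replication count are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_type1(n, reps=10000):
    """Monte Carlo estimate of a test's Type I error rate at sample size n.
    The z-test here is a generic stand-in for the Q-matrix test statistic."""
    x = rng.normal(0.0, 1.0, size=(reps, n))          # data generated under the null
    z = np.sqrt(n) * x.mean(axis=1) / x.std(axis=1, ddof=1)
    return np.mean(np.abs(z) > 1.96)                  # nominal two-sided 5% test

for n in (25, 100, 400):
    print(n, empirical_type1(n))                      # rates approach 0.05 as n grows
```

The same pattern, with data simulated from a cognitive diagnosis model under a correctly specified Q-entry, yields the empirical Type I error rates reported in the first simulation study.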
|
3 |
Diagnostic Classification Modeling of Rubric-Scored Constructed-Response Items. Muller, Eric William, January 2018
The need for formative assessment has led to the development of a psychometric framework known as diagnostic classification models (DCMs): mathematical measurement models designed to estimate the possession or mastery of a designated set of skills or attributes within a chosen construct. Much research has gone into "retrofitting" diagnostic measurement models to existing assessments in order to improve their diagnostic capability. Although retrofitting DCMs to existing assessments can in theory improve diagnostic potential, it faces challenges, including identifying multidimensional traits in largely unidimensional assessments, a shortage of assessments suitable for the DCM framework, and statistical problems, specifically highly correlated attributes and poor model fit. Another recent trend in assessment is a move toward more authentic constructed-response tasks. For such assessments, rubric-based scoring is often seen as a method of providing reliable scoring and interpretive formative feedback. However, rubric-scored tests are limited in their diagnostic potential because they are usually used to assign unidimensional numeric scores.
The purpose of this thesis is to propose general methods for retrofitting DCMs to rubric-scored assessments. Two methods are proposed and compared: (1) automatic construction of an attribute hierarchy representing all possible numeric score levels of a rubric-scored assessment, and (2) using rubric criterion score-level descriptions to imply an attribute hierarchy. This dissertation describes these methods, discusses the technical and mathematical issues that arise in using them, and applies and compares both methods on a prominent rubric-scored test of critical thinking skills, the Collegiate Learning Assessment+ (CLA+). Finally, the utility of the proposed methods is compared to a reasonable alternative methodology: polytomous IRT models, including the Graded Response Model (GRM), the Partial Credit Model (PCM), and the Generalized Partial Credit Model (GPCM), for this type of test score data.
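Method (1), building an attribute hierarchy from numeric score levels, can be sketched as follows. The encoding below is one plausible reading, assuming a linear hierarchy in which reaching score level k implies mastery of all lower levels; the function name and expansion scheme are illustrative, not the dissertation's actual construction:

```python
import numpy as np

def linear_hierarchy_qmatrix(max_level):
    """Expand one rubric criterion scored 0..max_level into pseudo-items over
    level attributes with a linear hierarchy: pseudo-item k (row k) requires
    attributes for levels 1..k (columns), giving a lower-triangular Q-matrix."""
    return np.tril(np.ones((max_level, max_level), dtype=int))

# A criterion scored 0-3 yields 3 pseudo-items over 3 level-attributes.
Q = linear_hierarchy_qmatrix(3)
print(Q)

# Under this hierarchy, a student scoring 2 has mastered levels 1 and 2
# but not level 3.
score = 2
alpha = (np.arange(1, 4) <= score).astype(int)
print(alpha)
```

The lower-triangular shape enforces the hierarchy: no attainable mastery pattern can possess a higher level without all levels beneath it, which keeps the expanded model identifiable from ordinal rubric scores.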
|
4 |
An Electromyographic kinetic model for passive stretch of hypertonic elbow flexors. Harben, Alan M., 05 1900
No description available.
|