1.
Brief Report: Concurrent Validity of Autism Symptom Severity Measures. Reszka, Stephanie S., Boyd, Brian A., McBee, Matthew, Hume, Kara A., Odom, Samuel L. 01 January 2014
The autism spectrum disorder (ASD) diagnostic classifications, according to the DSM-5, include a severity rating. Several screening and/or diagnostic measures, such as the Autism Diagnostic Observation Schedule (ADOS), the Childhood Autism Rating Scale (CARS), and the Social Responsiveness Scale (SRS; teacher and parent versions), include an assessment of symptom severity. The purpose of this study was to examine whether symptom severity and/or diagnostic status of preschool-aged children with ASD (N = 201) were similarly categorized on these measures. For half of the sample, children were similarly classified across the four measures, and scores on most measures were correlated, with the exception of the ADOS and SRS-P. While the ADOS, CARS, and SRS are reliable and valid measures, there is some disagreement between measures with regard to child classification and the categorization of autism symptom severity.
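The kind of cross-measure comparison the abstract describes can be sketched as follows; the scores, noise levels, and the zero cutoff below are invented for illustration and do not correspond to the actual ADOS, CARS, or SRS scales or their clinical cutoffs.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 201  # sample size matching the study

# Hypothetical severity scores driven by a shared latent trait; the real
# measures, scales, and cutoffs differ from this toy setup.
latent = rng.normal(0, 1, n)
ados = latent + rng.normal(0, 0.6, n)
cars = latent + rng.normal(0, 0.6, n)
srs_p = latent + rng.normal(0, 1.2, n)  # noisier, like the weaker SRS-P link

scores = np.vstack([ados, cars, srs_p])
r = np.corrcoef(scores)  # pairwise correlations among the three measures

def classify(x, cut=0.0):
    """Binary severity category at an arbitrary (illustrative) cutoff."""
    return x > cut

# Proportion of children given the same category by two measures
agree = np.mean(classify(ados) == classify(cars))
print(np.round(r, 2))
print(f"ADOS-CARS classification agreement: {agree:.2f}")
```

Even with a common latent trait, noisier measures correlate less with the others and agree less often at a cutoff, which is the pattern the study reports for the SRS-P.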
2.
A Psychometric Analysis of the Precalculus Concept Assessment. Jones, Brian Lindley. 02 April 2021
The purpose of this study was to examine the psychometric properties of the Precalculus Concept Assessment (PCA), a 25-item multiple-choice instrument designed to assess student reasoning abilities and understanding of foundational calculus concepts (Carlson et al., 2010). When this study was conducted, the extant research on the PCA and the PCA Taxonomy lacked in-depth investigations of the instruments' psychometric properties. Most notable was the lack of studies on the validity of the internal structure of PCA response data implied by the PCA Taxonomy. This study specifically investigated the psychometric properties of the three reasoning constructs found in the PCA Taxonomy, namely, Process View of Function (R1), Covariational Reasoning (R2), and Computational Abilities (R3). Confirmatory factor analysis (CFA) was conducted using a total of 3,018 pretest administrations of the PCA. These data were collected in select College Algebra and Precalculus sections at a large private university in the Mountain West and one public university in the Phoenix metropolitan area. Results showed that the three hypothesized reasoning factors were highly correlated. Rival statistical models were evaluated to explain the relationship between the three reasoning constructs. The bifactor model was the best-fitting model and successfully partitioned the variance between a general reasoning ability factor and two specific reasoning ability factors. The general factor was dominant, accounting for 76% of the variance and 91% of the reliability. The omegaHS values were low, indicating that this model does not serve as a reliable measure of the two specific factors. PCA response data were retrofitted to diagnostic classification models (DCMs) to evaluate the extent to which individual mastery profiles could be generated to classify individuals as masters or non-masters of the three reasoning constructs. The retrofitting of PCA data to DCMs was unsuccessful.
High attribute correlations and other model deficiencies limit the confidence with which these particular models could estimate student mastery. The results of this study have several key implications for future researchers and practitioners using the PCA. Researchers interested in using PCA scores in predictive models should use the General Reasoning Ability factor from the respecified bifactor model or the single-factor model in conjunction with structural equation modeling techniques. Practitioners using the PCA should avoid using PCA subscores for reasoning abilities and continue to follow the recommended practice of reporting a simple sum score (i.e., a unit-weighted composite score).
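The variance-partitioning statistics this abstract relies on (omega hierarchical for the general factor, omegaHS for the specific factors) can be computed directly from a bifactor loading matrix. The loadings below are invented, not the PCA estimates, so the resulting values are illustrative only:

```python
import numpy as np

# Illustrative bifactor loadings for six items: a general factor plus two
# specific factors, each loading on a disjoint item cluster. Invented values.
general = np.array([0.7, 0.6, 0.65, 0.7, 0.6, 0.55])
spec1   = np.array([0.3, 0.35, 0.3, 0.0, 0.0, 0.0])   # items 1-3: specific R1
spec2   = np.array([0.0, 0.0, 0.0, 0.25, 0.3, 0.2])   # items 4-6: specific R2
uniq    = 1 - (general**2 + spec1**2 + spec2**2)      # item uniquenesses

total_var = general.sum()**2 + spec1.sum()**2 + spec2.sum()**2 + uniq.sum()

# omega hierarchical: share of total-score variance due to the general factor
omega_h = general.sum()**2 / total_var

# omegaHS for the first subscale: specific-factor variance over the
# subscale's total variance (general + specific + uniqueness)
items1 = slice(0, 3)
sub_var = general[items1].sum()**2 + spec1[items1].sum()**2 + uniq[items1].sum()
omega_hs1 = spec1[items1].sum()**2 / sub_var

print(f"omegaH = {omega_h:.2f}, omegaHS(R1) = {omega_hs1:.2f}")
```

A high omegaH alongside a low omegaHS is exactly the pattern that argues for reporting a single composite score rather than subscale scores.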
3.
Interrater Reliability of the Psychological Rating Scale for Diagnostic Classification. Nicolette, Myrna. 12 1900
The poor reliability of the DSM diagnostic system has been a major concern for many researchers and clinicians. Standardized interview techniques and rating scales have been shown to be effective in increasing interrater reliability in diagnosis and classification. This study hypothesized that using the Psychological Rating Scale for Diagnostic Classification to assess an individual's problematic behaviors, symptoms, or other characteristics would increase interrater reliability, subsequently leading to higher diagnostic agreement between raters and with DSM-III classification. This hypothesis was strongly supported by high overall profile reliability and individual profile reliability. Therefore, utilization of this rating scale would enhance the accuracy of diagnosis and add to the educational efforts of technical personnel and professionals in related disciplines.
4.
Psychometric and Machine Learning Approaches to Diagnostic Classification. January 2018
The goal of diagnostic assessment is to discriminate between groups. In many cases, a binary decision is made conditional on a cut score from a continuous scale. Psychometric methods can improve assessment by modeling a latent variable using item response theory (IRT), and IRT scores can subsequently be used to determine a cut score using receiver operating characteristic (ROC) curves. Psychometric methods provide reliable and interpretable scores, but the prediction of the diagnosis is not the primary product of the measurement process. In contrast, machine learning methods, such as regularization or binary recursive partitioning, can build a model from the assessment items to predict the probability of diagnosis. Machine learning predicts the diagnosis directly, but does not provide an inferential framework to explain why item responses are related to the diagnosis. It remains unclear whether psychometric and machine learning methods have comparable accuracy or if one method is preferable in some situations. In this study, Monte Carlo simulation methods were used to compare psychometric and machine learning methods on diagnostic classification accuracy. Results suggest that classification accuracy of psychometric models depends on the diagnostic-test correlation and prevalence of diagnosis. Also, machine learning methods that reduce prediction error have inflated specificity and very low sensitivity compared to the data-generating model, especially when prevalence is low. Finally, machine learning methods that use ROC curves to determine probability thresholds have comparable classification accuracy to the psychometric models as sample size, number of items, and number of item categories increase. Therefore, results suggest that machine learning models could provide a viable alternative for classification in diagnostic assessments. Strengths and limitations for each of the methods are discussed, and future directions are considered.
Doctoral Dissertation, Psychology, 2018
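The ROC-based cut-score step described in the abstract can be sketched as a search over candidate cuts that maximizes Youden's J (sensitivity + specificity - 1). The group distributions, separation, and prevalence below are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated continuous trait scores (e.g., IRT theta estimates) for
# non-diagnosed and diagnosed groups; the separation is invented.
neg = rng.normal(0.0, 1.0, 500)
pos = rng.normal(1.0, 1.0, 100)   # low prevalence, shifted mean
scores = np.concatenate([neg, pos])
truth = np.concatenate([np.zeros(500, bool), np.ones(100, bool)])

# Sweep every observed score as a candidate cut; keep the one with maximal J
cuts = np.unique(scores)
best_cut, best_j = None, -1.0
for c in cuts:
    pred = scores >= c
    sens = np.mean(pred[truth])      # true positive rate
    spec = np.mean(~pred[~truth])    # true negative rate
    j = sens + spec - 1
    if j > best_j:
        best_j, best_cut = j, c

print(f"cut = {best_cut:.2f}, Youden J = {best_j:.2f}")
```

Because Youden's J weights sensitivity and specificity equally regardless of prevalence, this kind of threshold avoids the low-sensitivity behavior the abstract attributes to error-minimizing classifiers at low prevalence.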
5.
The Position of Anxiety Disorders in Structural Models of Mental Disorders. Wittchen, Hans-Ulrich, Beesdo, Katja, Gloster, Andrew T. 23 April 2013
"Comorbidity" among mental disorders is commonly observed in both clinical and epidemiological samples. The robustness of this observation is rarely questioned; however, what is at issue is its meaning. Is comorbidity "noise" – nuisance covariance that researchers should eliminate by seeking "pure" cases for their studies – or a "signal" – an indication that current diagnostic systems are lacking in parsimony and are not "carving nature at its joints"? (Krueger, p. 921).
With these words, Krueger started a discussion on the structure of mental disorders, which suggested that a 3-factor model of common mental disorders existed in the community. These common factors were labeled "anxious-misery," "fear" (constituting facets of a higher-order internalizing factor), and "externalizing." Along with similar evidence from personality research and psychometric explorations, and selective evidence from genetic and psychopharmacologic studies, Krueger suggested that this model might not only be phenotypically relevant, but might actually improve our understanding of core processes underlying psychopathology. Since then, this suggestion has become an influential, yet also controversial, topic in the scientific community, and has received attention particularly in the context of the current revision processes of the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-V) and the International Classification of Diseases, 11th Revision (ICD-11).
Focusing on anxiety disorders, this article critically discusses the methods and findings of this work, calls into question the model's developmental stability and utility for clinical use and clinical research, and challenges the wide-ranging implications that have been linked to the findings of this type of exploration. This critical appraisal is intended to flag several significant concerns about the method. In particular, the concerns center on the tendency to attach wide-ranging implications (e.g., in terms of clinical research, clinical practice, public health, diagnostic nomenclature) to the undoubtedly interesting statistical explorations.
6.
Women-specific mental disorders in DSM-V: are we failing again? Wittchen, Hans-Ulrich. 20 February 2013
Despite a wealth of studies on differences regarding the biobehavioral and social–psychological bases of mental disorders in men and women and repeated calls for increased attention, women-specific issues have so far not been comprehensively addressed in past diagnostic classification systems of mental disorders. There is also increasing evidence that this situation will not change significantly in the upcoming revisions of ICD-11 and DSM-V. This paper explores reasons for this continued failure, highlighting three major barriers: the fragmentation of the field of women's mental health research, lack of emphasis on diagnostic classificatory issues beyond a few selected clinical conditions, and finally, the “current rules of game” used by the current DSM-V Task Forces in the revision process of DSM-V. The paper calls for concerted efforts of researchers, clinicians, and other stakeholders within a more coherent and comprehensive framework aiming at broader coverage of women-specific diagnostic classificatory issues in future diagnostic systems.
7.
Psychiatric Diagnosis: Rater Reliability and Prediction Using Psychological Rating Scale for Diagnostic Classification. McDowell, DeLena Jean. 08 1900
This study was designed to assess the reliability of the Psychological Rating Scale for Diagnostic Classification as an instrument for determining diagnoses consistent with DSM-III criteria and nomenclature. Pairs of raters jointly interviewed a total of 50 hospital patients and then independently completed the 70-item rating scale to arrive at Axis I and Axis II diagnoses, which were subsequently correlated with diagnoses obtained by standard psychometric methods. Interrater agreement was 88 percent for Axis I and 62 percent for Axis II, with correlations of .94 and .79, respectively.
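Raw percent agreement, as reported above, does not correct for agreement expected by chance; Cohen's kappa is a standard chance-corrected alternative for categorical diagnoses. A minimal sketch, using invented rater labels rather than the study's data:

```python
from collections import Counter

def cohen_kappa(r1, r2):
    """Chance-corrected agreement between two raters' categorical labels."""
    assert len(r1) == len(r2)
    n = len(r1)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    # Expected chance agreement from each rater's marginal label frequencies
    p_chance = sum((c1[k] / n) * (c2[k] / n) for k in set(c1) | set(c2))
    return (p_obs - p_chance) / (1 - p_chance)

# Invented Axis I labels for two raters (not the study's data)
r1 = ["mdd", "mdd", "schiz", "anx", "mdd", "schiz", "anx", "mdd"]
r2 = ["mdd", "mdd", "schiz", "mdd", "mdd", "schiz", "anx", "mdd"]
print(round(cohen_kappa(r1, r2), 2))  # raw agreement here is 7/8 = 0.875
```

Kappa is always lower than raw agreement whenever chance agreement is nonzero, which is why modern reliability studies report it alongside (or instead of) percent agreement.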
8.
A preliminary assessment of a framework for the allocation of comprehensive primary dental services. Nascimento, Denise Antunes Do. January 2010
Magister Public Health - MPH. Summary: The aim of this study was to produce a preliminary assessment of the DRAF by determining its face validity, testing the reliability and usability of its diagnostic classification tool, and producing a set of preliminary recommendations on the viability of the DRAF before its release for use within the Family Health Programme.
9.
Recommendations Regarding Q-Matrix Design and Missing Data Treatment in the Main Effect Log-Linear Cognitive Diagnosis Model. Ma, Rui. 11 December 2019
Diagnostic classification models used in conjunction with diagnostic assessments classify individual respondents as masters or nonmasters at the level of attributes. Previous researchers (Madison & Bradshaw, 2015) recommended that items on the assessment measure all patterns of attribute combinations to ensure classification accuracy, but in practice, certain attributes may not be measured by themselves. Moreover, model estimation requires a large sample size, but in reality, there may be unanswered items in the data. Therefore, the current study sought to provide suggestions on selecting between two alternative Q-matrix designs when an attribute cannot be measured in isolation and when using maximum likelihood estimation in the presence of missing responses. The factorial ANOVA results of this simulation study indicate that adding items measuring some attributes rather than all attributes is more optimal, and that other missing data treatments should be sought if the percentage of missing responses is greater than 5%.
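The "attribute not measured in isolation" condition central to this study can be checked mechanically from a Q-matrix. The toy Q-matrix below is invented; in it, A3 never appears on a single-attribute item:

```python
import numpy as np

# Toy Q-matrix: rows = items, columns = attributes A1-A3; a 1 means the
# item measures that attribute. This design is illustrative only.
Q = np.array([
    [1, 0, 0],
    [0, 1, 0],
    [1, 1, 0],   # A3 is measured only alongside other attributes
    [1, 0, 1],
    [0, 1, 1],
    [1, 1, 1],
])

def attribute_coverage(Q):
    """For each attribute, report whether any item measures it in isolation."""
    single_item = Q.sum(axis=1) == 1          # items measuring exactly one attribute
    return {f"A{j+1}": bool(Q[single_item, j].any()) for j in range(Q.shape[1])}

print(attribute_coverage(Q))
```

A check like this makes explicit which attributes would force the kind of alternative Q-matrix design the study compares.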
10.
Women-specific mental disorders in DSM-V: are we failing again? Wittchen, Hans-Ulrich. January 2010
Despite a wealth of studies on differences regarding the biobehavioral and social–psychological bases of mental disorders in men and women and repeated calls for increased attention, women-specific issues have so far not been comprehensively addressed in past diagnostic classification systems of mental disorders. There is also increasing evidence that this situation will not change significantly in the upcoming revisions of ICD-11 and DSM-V. This paper explores reasons for this continued failure, highlighting three major barriers: the fragmentation of the field of women's mental health research, lack of emphasis on diagnostic classificatory issues beyond a few selected clinical conditions, and finally, the “current rules of game” used by the current DSM-V Task Forces in the revision process of DSM-V. The paper calls for concerted efforts of researchers, clinicians, and other stakeholders within a more coherent and comprehensive framework aiming at broader coverage of women-specific diagnostic classificatory issues in future diagnostic systems.