1. The Utility of ADHD-Diagnostic and Symptom Validity Measures in the Assessment of Undergraduate Student Response Distortion: A Clinically-Enhanced Simulation Study. Sollman, Myriam Jessica (01 January 2008)
This study evaluated the efficacy of various attention-related, neuropsychological, and symptom validity measures in the detection of feigned ADHD in an undergraduate sample. Performance was compared among a group of presumed-normal students (HON), a group of diagnostically "clean" ADHD students asked to respond to the best of their ability (ADHD), and a group of motivated, coached feigners (FGN). Feigners were educated about the symptoms and characteristics of ADHD, provided with a scenario to help them relate to the plight of a student who might seek diagnosis, admonished to feign believably, and offered a significant monetary incentive ($45) for "successful feigning." They were neither forewarned about the specific types of tests they would take nor alerted to the presence of malingering detection instruments. Results showed that the ADHD symptom-report measures, though sensitive to ADHD, were quite susceptible to faking: the ARS and CAARS-S:L (using a stringent cut score of four or more scale elevations) were successfully faked by 80% and 67% of students, respectively. The Conners CPT, in contrast, showed both limited sensitivity to ADHD and limited specificity for FGN in this sample. Very high specificity and moderate sensitivity were noted for symptom validity measures across the board, translating into high positive predictive values, as illustrated in the sketch below. Binary logistic regression results indicated that the TOMM Trial 1, coupled with the DMT, LMT, or NV-MSVT, may be used to identify feigners with high predictive accuracy.
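The step from "very high specificity and moderate sensitivity" to "high positive predictive values" follows from Bayes' rule. As a minimal illustration, the Python sketch below computes PPV from sensitivity, specificity, and an assumed base rate of feigning; all numeric values are hypothetical placeholders, not figures reported in the study.

```python
# How very high specificity yields a high positive predictive value (PPV)
# even when sensitivity is only moderate. All numbers are hypothetical
# placeholders, not values reported in the study.

def positive_predictive_value(sensitivity: float, specificity: float,
                              base_rate: float) -> float:
    """PPV = P(feigning | positive test), via Bayes' rule."""
    true_pos = sensitivity * base_rate
    false_pos = (1.0 - specificity) * (1.0 - base_rate)
    return true_pos / (true_pos + false_pos)

# Moderate sensitivity, very high specificity, plausible feigning base rate.
print(positive_predictive_value(0.55, 0.95, 0.30))  # ~0.825
```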
2. A Survey to Assess ADHD Symptoms and Detect Feigning in Adult ADHD: Initial Scale Development. Babcock, Michelle (23 September 2021)
No description available.
3. Passing or Failing of Symptom Validity Tests in Academic Accessibility Populations: Neuropsychological Assessment of “Near-Pass” Patients. Farrer, Thomas Jeffrey (01 June 2015)
There is overwhelming evidence that the presence of secondary gain is an independent predictor of both performance validity and neuropsychological test outcomes. In addition, studies have demonstrated that genuine cognitive and/or psychological conditions can influence performance validity testing, both in the presence and absence of secondary gain. However, few studies have examined these factors in a large sample of accommodation-seeking college students. The current study examined base rates of symptom validity test failure, the possibility of a “Near-Pass” intermediate group on symptom validity tests, the influence of diagnoses on performance indicators, and whether performance validity differed for “Near-Pass” patients relative to those who passed and those who failed performance validity indicators.
4. Assessment of Feigned Neurocognitive Impairment in Retired Athletes in a Monetarily Incentivized Forensic Setting. Smotherman, Jesse M. (08 1900)
Compromised validity of test data due to exaggeration or fabrication of cognitive deficits inhibits the capacity to establish appropriate conclusions and recommendations in neuropsychological examinations. Detection of feigned neurocognitive impairment presents a formidable challenge, particularly for evaluations involving the possibility of significant secondary gain. Among the specific populations examined in this domain, litigating mild traumatic brain injury (mTBI) samples are among the most researched. One subpopulation with the potential to contribute significantly to this body of literature is retired athletes undergoing fixed-battery neuropsychological evaluations within an assessment program. Given the considerable prevalence of concussions sustained by these athletes and the substantial monetary incentives within this program, a unique opportunity exists to establish rates of feigning within this population that can be compared with similar forensic mTBI samples. Further, a fixed battery with multiple validity tests (VTs) offers a chance to evaluate the classification accuracy of an aggregated VT-failure paradigm, as uncertainty remains about the optimal approach to using multiple VTs for effort assessment. The current study seeks to examine rates of feigned neurocognitive impairment in this population, to test whether models based on aggregated VT failures and on logistic regression achieve equivalent prediction accuracy (the two approaches are sketched below), and to compare the classification performance of various individual VTs.
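As a rough sketch of the two multi-VT decision approaches compared here, the Python snippet below contrasts an aggregated-failure rule with a logistic regression fit over the same VT outcomes. The five-test battery, the two-failure flagging threshold, and the simulated data are hypothetical stand-ins, not the study's actual measures.

```python
# Two ways to combine multiple validity tests (VTs) into one decision:
# an aggregated-failure count versus logistic regression. All data and
# thresholds below are simulated placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_cases, n_tests = 200, 5

vt_fail = rng.integers(0, 2, size=(n_cases, n_tests))  # 1 = failed VT
feigning = rng.integers(0, 2, size=n_cases)            # 1 = feigning criterion

# Approach 1: flag a case when it fails two or more VTs, a commonly
# discussed heuristic for interpreting multiple validity tests.
flag_aggregate = (vt_fail.sum(axis=1) >= 2).astype(int)

# Approach 2: logistic regression over the individual VT outcomes.
logit = LogisticRegression().fit(vt_fail, feigning)
flag_logit = logit.predict(vt_fail)

# How often the two decision rules agree on the same cases.
print("agreement:", (flag_aggregate == flag_logit).mean())
```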
5. The Development and Initial Validation of the Cognitive Response Bias Scale for the Personality Assessment Inventory. Gaasedelen, Owen J. (01 August 2018)
The Personality Assessment Inventory (PAI) is a commonly used instrument in neuropsychological assessment; however, it lacks a symptom validity test (SVT) that is sensitive to cognitive response bias (also referred to as non-credible responding), as defined by performance on cognitive performance validity tests (PVTs). Therefore, the purpose of the present study was to derive from the PAI item pool a new SVT, named the Cognitive Response Bias Scale (CRBS), that is sensitive to non-credible responding, and to provide initial validation evidence supporting the use of the CRBS in a clinical setting. The current study utilized an existing neuropsychological outpatient clinical database consisting of 306 consecutive participants who completed the PAI and PVTs and met inclusion criteria. The CRBS was empirically derived from this database, primarily using an Item Response Theory (IRT) framework.
Of the 40 items initially examined, 10 were ultimately retained on the basis of their empirical properties to form the CRBS. An examination of the internal structure of the CRBS indicated that 8 of its items demonstrated good fit to the graded response IRT model. Overall scale reliability was good (Cronbach’s alpha = 0.77) and commensurate with other SVTs; a sketch of the alpha computation follows below. Examination of item content revealed that the CRBS consists of items related to somatic complaints, psychological distress, and denial of fault. Items endorsed by participants exhibiting lower levels of non-credible responding consisted of vague and non-specific complaints, while participants with high levels of non-credible responding endorsed items indicating ongoing active pain and distress.
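For reference, the reliability statistic cited above can be computed directly from an item-response matrix. The Python sketch below implements Cronbach's alpha and applies it to simulated graded responses whose dimensions loosely mirror the sample described here; the data are placeholders, not actual CRBS responses.

```python
# Cronbach's alpha from an item-response matrix (respondents x items).
# The response data are simulated placeholders, not actual CRBS data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / total-score variance)."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_var_sum / total_var)

# Hypothetical 0-3 graded responses: 306 respondents, 10 retained items.
rng = np.random.default_rng(1)
responses = rng.integers(0, 4, size=(306, 10)).astype(float)
print(cronbach_alpha(responses))
```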
The CRBS displayed expected relationships with other measures, including high positive correlations with negative impression management (r = 0.73), depression (r = 0.78), anxiety (r = 0.78), and schizophrenia (r = 0.71). Moderate negative correlations were observed with positive impression management (r = -0.31) and treatment rejection (r = -0.42). Two hierarchical logistic regression models showed that the CRBS has significant incremental predictive power, above and beyond existing PAI SVTs and clinical scales, in predicting PVT failure. The overall classification accuracy of the CRBS in detecting failure on multiple PVTs was comparable to that of other SVTs (area under the curve = 0.72), and it displayed moderate sensitivity (0.31) when specificity was high (0.96); this style of analysis is sketched below. These operating characteristics suggest that the CRBS is effective at ruling in the possibility of non-credible responding, but not at ruling it out. The conservative recommended cut score was robust to effects of differential prediction due to gender and education. Given the extremely small sample subsets of forensic-only and non-Caucasian participants, future validation is required to establish reliable cut-offs when inferences based on comparisons to similar populations are desired.
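The operating characteristics reported above (AUC, and sensitivity at a cut score chosen for high specificity) can be derived mechanically from scale scores and a PVT-failure criterion. The sketch below shows the computation on simulated placeholder data, not the study's CRBS scores.

```python
# AUC and sensitivity at a high-specificity cut score. Scores and
# labels are simulated placeholders, not study data.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(2)
pvt_failure = rng.integers(0, 2, size=306)         # 1 = failed multiple PVTs
scores = pvt_failure * 2.0 + rng.normal(size=306)  # hypothetical scale scores

print("AUC:", roc_auc_score(pvt_failure, scores))

# Choose the cut score that maximizes sensitivity subject to
# specificity >= 0.96 (i.e., false-positive rate <= 0.04).
fpr, tpr, cuts = roc_curve(pvt_failure, scores)
ok = fpr <= 0.04
best = np.argmax(tpr[ok])
print("cut score:", cuts[ok][best], "sensitivity:", tpr[ok][best])
```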
Results of the current study indicate that the CRBS has psychometric properties and clinical utility comparable to analogous SVTs in personality inventories similar to the PAI. Furthermore, the item content of the CRBS is consistent with, and corroborates, existing theory on non-credible responding and cognitive response bias. This study also demonstrated that a graded response IRT model can be useful in deriving and validating SVTs within the PAI, and that the graded response model provides novel insight into the nature of non-credible responding.
6. Examining the Utility of the MMPI-3 Overreporting Scales in a Forensic Disability Sample. Tylicki, Jessica L. (03 June 2021)
No description available.
7. fMRI Evidence of Group Differences on the Word Memory Test in a Sample of Traumatic Brain Injury Patients. Larsen, James Douglas (07 August 2008)
The Word Memory Test (WMT) is a popular effort test that requires participants to memorize lists of paired words and repeat them back in a variety of memory tasks. Four brain-injured patients completed two trials of the delayed recall (DR) portion of the WMT while undergoing fMRI scanning. In the first trial, subjects put forth full effort; in the second, subjects were instructed to simulate increased memory impairment in order to represent poor effort. fMRI activation from the two trials was compared to contrast activation patterns under full versus simulated poor effort on the WMT. Raw scores from the full-effort and simulated-poor-effort trials were compared with those of a control group to test the hypothesis that a brain-injured population would score lower than a healthy population on the WMT while putting forth full effort. Raw score results showed lower WMT scores for the TBI group. fMRI results showed larger between-group differences than between-condition differences, suggesting that the WMT is sensitive to TBI.
8. Noncredible Presentation of Attention-Deficit/Hyperactivity Disorder in the Assessment of Functional Impairment Among Postsecondary Students. Lee, Grace J. (16 September 2022)
No description available.