The development and initial validation of the cognitive response bias scale for the personality assessment inventory

Gaasedelen, Owen J. 01 August 2018 (has links)
The Personality Assessment Inventory (PAI) is a commonly used instrument in neuropsychological assessment; however, it lacks a symptom validity test (SVT) that is sensitive to cognitive response bias (also referred to as non-credible responding), as defined by performance on cognitive performance validity tests (PVTs). Therefore, the purpose of the present study was to derive from the PAI item pool a new SVT, named the Cognitive Response Bias Scale (CRBS), that is sensitive to non-credible responding, and to provide initial validation evidence supporting the use of the CRBS in a clinical setting. The current study utilized an existing neuropsychological outpatient clinical database consisting of 306 consecutive participants who completed the PAI and PVTs and met inclusion criteria. The CRBS was empirically derived from this database using primarily an Item Response Theory (IRT) framework. Of the 40 items initially examined, 10 were ultimately retained based on their empirical properties to form the CRBS. An examination of the scale's internal structure indicated that 8 of these items demonstrated good fit to the graded response IRT model. Overall scale reliability was good (Cronbach's alpha = 0.77) and commensurate with other SVTs. Examination of item content revealed that the CRBS consisted of items related to somatic complaints, psychological distress, and denial of fault. Items endorsed by participants exhibiting lower levels of non-credible responding consisted of vague and non-specific complaints, while participants with high levels of non-credible responding endorsed items indicating ongoing active pain and distress. The CRBS displayed expected relationships with other measures, including high positive correlations with negative impression management (r = 0.73), depression (r = 0.78), anxiety (r = 0.78), and schizophrenia (r = 0.71).
Moderate negative correlations were observed with positive impression management (r = -0.31) and treatment rejection (r = -0.42). Two hierarchical logistic regression models showed that the CRBS has significant predictive power above and beyond existing PAI SVTs and clinical scales in accurately predicting PVT failure. The overall classification accuracy of the CRBS in detecting failure on multiple PVTs was comparable to that of other SVTs (area under the curve = 0.72), and it displayed moderate sensitivity (0.31) when specificity was high (0.96). These operating characteristics suggest that the CRBS is effective at ruling in the possibility of non-credible responding, but not at ruling it out. The conservative recommended cut score was robust to differential prediction effects due to gender and education. Given the extremely small sample subsets of forensic-only and non-Caucasian participants, future validation is required to establish reliable cut-offs when inferences based on comparisons to similar populations are desired. Results of the current study indicate that the CRBS has psychometric properties and clinical utility comparable to analogous SVTs in personality inventories similar to the PAI. Furthermore, the item content of the CRBS is consistent with and corroborates existing theory on non-credible responding and cognitive response bias. This study also demonstrated that a graded response IRT model can be useful in deriving and validating SVTs in the PAI, and that the graded response model provides unique insight into the nature of non-credible responding.
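The operating characteristics reported above (AUC, and sensitivity at a fixed high specificity) follow from standard definitions, and can be sketched in a few lines. This is an illustrative sketch only: the scores, labels, and cut score below are invented for demonstration and are not study data, and actual CRBS scoring is defined by the retained PAI items.

```python
# Illustrative sketch of how SVT operating characteristics such as those
# reported for the CRBS (AUC, sensitivity, specificity at a cut score) are
# computed. All data below are invented, not taken from the study.

def sensitivity_specificity(scores, pvt_fail, cut):
    """Classify scores >= cut as non-credible; compare against PVT failure."""
    tp = sum(1 for s, y in zip(scores, pvt_fail) if y and s >= cut)
    fn = sum(1 for s, y in zip(scores, pvt_fail) if y and s < cut)
    tn = sum(1 for s, y in zip(scores, pvt_fail) if not y and s < cut)
    fp = sum(1 for s, y in zip(scores, pvt_fail) if not y and s >= cut)
    return tp / (tp + fn), tn / (tn + fp)

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) identity:
    the probability a random PVT-failing case outscores a random passing one."""
    pos = [s for s, y in zip(scores, labels) if y]
    neg = [s for s, y in zip(scores, labels) if not y]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented example: raw scale scores with PVT-failure flags (1 = failed).
scores = [3, 5, 8, 9, 12, 14, 15, 18, 20, 22]
fails  = [0, 0, 0, 1, 0, 1, 0, 1, 1, 1]
sens, spec = sensitivity_specificity(scores, fails, cut=14)
```

A conservative cut score like the one the study recommends trades sensitivity for specificity: raising `cut` moves cases out of the flagged group, lowering the false positive rate at the cost of missing more non-credible responders.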

Cross-validation of the Validity-10 subscale of the Neurobehavioral Symptom Inventory

Harp, Jordan P. 01 January 2017 (has links)
The present study is a cross-validation of the Validity-10 embedded symptom validity indicator from the Neurobehavioral Symptom Inventory (NSI) for the detection of questionable response validity during evaluation for mild traumatic brain injury (TBI). The sample and data were drawn from a three-site Veterans Affairs (VA) parent study validating the TBI Clinical Reminder, a routine set of questions asked of all recently returned veterans at VA facilities to screen for a history of TBI. In the parent study, veterans recently returned from Iraq and Afghanistan underwent an evaluation for TBI with a physician and completed an assessment battery that included neuropsychological tests of cognitive performance, indicators of symptom and performance validity, psychiatric assessment measures, a structured interview for post-traumatic stress disorder (PTSD), and various behavioral health questionnaires. The present study estimated the test operating characteristics of Validity-10, computing Validity-10 scores from NSI results gathered during the physician evaluation and using results on several other measures of symptom and performance validity from the assessment battery as criteria for questionable response validity. Only individuals who screened positive for TBI on the TBI Clinical Reminder prior to full evaluation were included in the present sample. Sensitivity of Validity-10 to questionable validity ranged from moderately high (.60 - .70) to excellent (.90 - 1.00) at high levels of specificity (> .80). The effects of different base rates of questionable validity, and of different criteria for defining it, on the utility of Validity-10 were also explored. Chi-square analyses of the effect of PTSD symptoms on the utility of Validity-10 demonstrated that overall classification accuracy in general, and the false positive rate in particular, were relatively poorer among individuals who reported significant PTSD symptoms.
Overall, these findings support the use of Validity-10 (at cut score Validity-10 ≥ 19) to identify those veterans being evaluated for mild TBI in the VA system who should be referred for comprehensive secondary evaluation by a clinical neuropsychologist using multiple forms of symptom and performance validity testing. Further studies of the effects of PTSD symptoms on the accuracy of Validity-10 for this purpose are recommended.
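The base-rate effects the study explores can be illustrated with a short sketch: with sensitivity and specificity held fixed, the positive predictive value of a flag such as Validity-10 ≥ 19 shifts markedly with the prevalence of questionable validity in the population screened. The sensitivity and specificity values used below are hypothetical placeholders, not figures from the study.

```python
# Positive predictive value (PPV) as a function of base rate, via Bayes' rule.
# The sensitivity/specificity values are hypothetical, for illustration only.

def ppv(sens, spec, base_rate):
    """P(questionable validity | positive flag)."""
    true_pos = sens * base_rate                # flagged and truly questionable
    false_pos = (1 - spec) * (1 - base_rate)   # flagged but valid responding
    return true_pos / (true_pos + false_pos)

# As prevalence falls, PPV falls even though the test itself is unchanged,
# which is why base rate matters when interpreting a positive Validity-10 flag.
for base in (0.10, 0.30, 0.50):
    print(f"base rate {base:.2f}: PPV = {ppv(0.65, 0.85, base):.2f}")
```

This is the usual argument for treating an embedded indicator as a screen that triggers comprehensive secondary evaluation, rather than as a stand-alone determination.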
