  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
321

The validity and reliability of the abbreviated version of the diagnostic interview for borderlines (DIB-Ab) /

Ahmadi, Shamila. January 2001 (has links)
Objective. The Diagnostic Interview for Borderlines (DIB) requires a long administration time (45 minutes), which led to the development of a briefer (10-minute), and therefore more feasible, version named the DIB-Ab. The aim of this study was to test the validity and reliability of the DIB-Ab. Method. Forty-seven previously suicidal adolescents, aged 14-21 years, participated in this study. The DIB-Ab and DIB-R were administered within a battery of tests, separated by 130 minutes of unrelated measures. Results. Pearson correlation coefficients between the DIB-Ab and the DIB-R ranged from .52 to .80 for the total scores on three sections (i.e., affect, cognition, and impulse/action), and from .43 to .91 between total section scores and the corresponding section scores. The standardized alpha for internal consistency of the DIB-Ab ranged from .54 to .83 for the total scores and for the cognition and impulse/action section scores. Conclusion. The preliminary data analysis revealed that the DIB-Ab is a valid and reliable instrument and that it could replace the parent version in certain research and clinical paradigms.
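As a rough illustration of the reliability statistics this abstract reports (Pearson correlations between forms, and a standardized alpha for internal consistency), here is a minimal sketch in Python; the score vectors are invented for illustration and are not the study's data.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two score vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return float(k / (k - 1) * (1 - item_vars / total_var))

# Hypothetical short-form and full-form scores, for illustration only
short_form = [3, 5, 2, 6, 4, 7, 1]
full_form  = [4, 6, 2, 7, 5, 8, 2]
r = pearson_r(short_form, full_form)
```

In practice one would compute a correlation like this for each section's total score to compare the abbreviated form against its parent instrument.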
322

Assessment of spatial orientation in Alzheimer's disease : theoretical and clinical implications

Liu, Lili, 1962- January 1993 (has links)
The purpose of this project was to develop a reliable and valid battery for the assessment of spatial orientation skills (SOS) in persons with Alzheimer's disease (AD). The battery, comprised of 13 subtests, was administered to 97 normal control subjects, 25 subjects with early AD and 10 with late AD. The test-retest reliability of the battery was based on the test results of 33 normal control subjects and 25 early AD subjects. Inter-rater reliability was determined using four trained raters who evaluated 27 normal control subjects and the same 25 early AD subjects. Content validity was established using a panel of six experts and construct validity was determined by comparing the performance of the normal control and early AD groups. To establish criterion validity, the Global Deterioration Scale (GDS) was used as the criterion. For the AD group, eight subtests demonstrated acceptable test-retest and inter-rater reliability coefficients (ICC ≥ .70). For the control group, three subtests had acceptable test-retest coefficients and four had acceptable inter-rater coefficients. The internal consistency of the battery was acceptable as shown by overall Cronbach's alpha of .86 for AD subjects and .72 for control subjects, and was further analyzed using factor analysis which yielded five factors. Logistic regression provided evidence for good construct validity. Scores on the SOS subtests were able to differentiate the three groups of subjects established on the basis of the GDS scores (GDS 1 and 2, GDS 3 and 4, and GDS 5). A preliminary shortened version of the battery was developed using six subtests which demonstrated high test-retest and inter-rater reliability. The performance of subjects with AD on the battery is discussed with respect to its implications for the theoretical basis and clinical assessment of spatial orientation in AD.
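The reliability criterion used above (ICC ≥ .70) is an intraclass correlation coefficient. A minimal one-way random-effects ICC(1,1) sketch follows; the abstract does not specify which ICC formulation was used, so this particular form is an illustrative assumption, and any ratings matrix passed in here is invented.

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for an (n_subjects, n_raters) matrix.

    Computed from the between-subjects and within-subjects mean squares of a
    one-way ANOVA on the ratings.
    """
    r = np.asarray(ratings, float)
    n, k = r.shape
    grand = r.mean()
    subj_means = r.mean(axis=1)
    msb = k * ((subj_means - grand) ** 2).sum() / (n - 1)          # between-subjects MS
    msw = ((r - subj_means[:, None]) ** 2).sum() / (n * (k - 1))   # within-subjects MS
    return float((msb - msw) / (msb + (k - 1) * msw))
```

With perfectly agreeing raters the coefficient is 1.0; disagreement between raters pulls it down toward (and below) zero.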
323

The psychometrics of a bipolar valence activation model of self-reported affect

Carroll, James M. 11 1900 (has links)
Since the 1950s, researchers have sought unsuccessfully to identify a consensual psychometric structure of self-reported affect. One unresolved question, central to any psychometric model, is whether the structure includes bipolar or unipolar dimensions. For example, are positive and negative affect two ends of the same bipolar dimension, or are they better represented by separable unipolar dimensions? In contrast to what has been assumed in previous analyses, a bipolar model is presented that distinguishes between two forms of bipolarity, each with its own conceptual definition, operational definition, and statistical properties. It is shown both conceptually and empirically that the two forms of bipolarity lead to different results when examined by traditional psychometric methods such as exploratory factor analysis, confirmatory factor analysis, and the linear correlation. Furthermore, when the bipolar model is applied to previous analyses, the psychometric evidence that has suggested unipolar dimensions can be interpreted as evidence suggesting bipolar dimensions. Two studies were conducted to examine specific predictions of the bipolar model. Study 1 examined judgments of the hypothesized opposites hot-cold and happy-sad. Study 2 examined judgments of affect terms based on a circumplex model of affect characterized by orthogonal valence and activation dimensions. In both studies the bipolar model is strongly supported. Furthermore, the analyses highlighted specific problems with current methods that emphasize sophisticated techniques based on the correlation coefficient, and demonstrated the utility of simpler descriptive statistics.
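The bipolarity question above is often operationalized as whether ratings of hypothesized opposites (e.g., happy and sad) correlate near -1. The toy simulation below, with entirely invented response rules, illustrates one point consistent with the abstract: even when responses are driven by a single bipolar latent dimension, the linear correlation between unipolar item scores can fall well short of -1.

```python
import numpy as np

rng = np.random.default_rng(0)
valence = rng.uniform(-1, 1, 200)        # latent bipolar valence dimension
happy = np.clip(valence, 0, None)        # unipolar "happy" ratings (floor at 0)
sad = np.clip(-valence, 0, None)         # unipolar "sad" ratings (floor at 0)

# Although happy and sad are driven by the same bipolar latent variable,
# the floor on each unipolar scale keeps their observed correlation
# well above -1.
r = float(np.corrcoef(happy, sad)[0, 1])
```

A correlation of roughly -0.6 under a truly bipolar generating process is the kind of result that, taken at face value, could be misread as evidence for separable unipolar dimensions.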
324

Empirically keying personality measures to mitigate faking effects and improve validity: A Monte Carlo investigation

Tawney, Mark Ward 03 July 2013 (has links)
Personality-type measures should be viable tools for selection: they have incremental validity over cognitive measures, and they add this incremental validity while decreasing adverse impact (Hough, 1998; Ones, Viswesvaran & Schmidt, 1993; Ones & Viswesvaran, 1998a). However, personality measures are susceptible to faking; individuals instructed to fake on personality measures are able to increase their scores (Barrick & Mount, 1996; Ellingson, Sackett & Hough, 1999; Hough, Eaton, Dunnette, Kamp, & McCloy, 1990). Further, personality measures often reveal less than optimal validity estimates, as research continually finds meta-analytic coefficients near .2 (e.g., Morgeson, Campion, Dipboye, Hollenbeck, Murphy, & Schmitt, 2007). Some researchers have suggested that these two problems are linked, as faking on personality measures may reduce their ability to predict job performance (e.g., Tett & Christensen, 2007). Empirically keyed instruments traditionally enhance prediction and have been found to mitigate the effects of faking (Kluger, Reilly & Russell, 1991; Scott & Sinar, 2011). Building on recently suggested methods for empirically keying personality measures (e.g., Tawney & Mead, in prep), this dissertation further investigates empirical keying both as a means to mitigate faking effects and as a means to increase the validity of personality-type measures. A Monte Carlo methodology is used because of the difficulty of obtaining accurate measures of faking; it allows faking to be investigated under controlled and known parameters, supporting more robust conclusions than prior faking research.
325

Effects of repeated administration on intensity scales

Stothart, Cary R. 27 July 2013 (has links)
This study assessed the extent to which multiple administrations of an intensity scale (in this case, the Simulator Sickness Questionnaire, SSQ) influence participant responding on subsequent administrations of the same scale. In the first experiment, one group of participants watched a number of identical videos depicting a simulated drive from the driver's point of view and filled out the SSQ and the Center for Epidemiological Studies Depression Scale (CES-D) between viewings; another group viewed the same videos but filled out the SSQ and CES-D only once before the first video and once after the last. Overall, multiple administrations of the SSQ and CES-D did not substantially influence subsequent responding on either scale. The second experiment sought to replicate these findings online using Amazon's Mechanical Turk service; the same pattern of responding to the SSQ was found. Together, these findings suggest that additional administrations of an intensity scale such as the SSQ do not substantially influence participant responding on subsequent administrations.
326

Multilevel modeling of cognitive ability in highly functioning adults

Trapani, Catherine Schuler 31 July 2013 (has links)
The goal of this research was to study differences in cognitive performance on verbal and quantitative measures among subjects of different ages. Data were gathered on subjects ranging in age from 16 to 80 years, drawn from birth cohorts from 1927 to 1990. In addition to year of birth, the personal characteristics of gender, race/ethnicity, and undergraduate area of study were obtained. Multilevel models were built that predict cognitive performance as a function of age, cohort, and other non-independent personal characteristics. Verbal performance rises as the age of the test-taker rises; quantitative performance declines as the age of the test-taker rises. After controlling for the race/ethnicity and gender of the test-taker, there are both age and cohort effects in the verbal and quantitative models. On the verbal measure, the cohort effect favors test-takers born at an earlier time; on the quantitative measure, there is an interaction between age and cohort. These data are from a secondary analysis, and the records are from test-takers who chose to take a consequential assessment. When the multilevel models are produced separately for test-takers aged 20-39 and those aged 40-64, different results are seen between the two age groups. There is little difference in performance for 20-39 year olds on the verbal measure other than a positive effect for age at time of test. For test-takers aged 40-64, there is a positive effect due to age, a positive cohort effect, and a negative interaction effect between age and science study. Comparing the 20-39 year olds with the 40-64 year olds on the quantitative measure, the decline in performance for the older group is one-fourth the rate of decline in the younger group. For the quantitative measure, after controlling for age, there is a positive cohort effect for both age groups.
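The models described above include age, cohort, and an age-by-cohort interaction. As a simplified single-level stand-in (not the study's multilevel model; coefficients and data below are entirely synthetic), ordinary least squares can recover effects of this shape:

```python
import numpy as np

# Synthetic illustration only: generate scores from invented age, cohort,
# and age-by-cohort effects, then recover them with ordinary least squares.
rng = np.random.default_rng(1)
n = 1000
age = rng.uniform(16, 80, n)
cohort = rng.integers(1927, 1991, n).astype(float) - 1927  # years since 1927
true = {"intercept": 50.0, "age": -0.20, "cohort": 0.05, "interaction": -0.001}
score = (true["intercept"] + true["age"] * age + true["cohort"] * cohort
         + true["interaction"] * age * cohort + rng.normal(0, 1, n))

# Design matrix: intercept, age, cohort, age x cohort interaction
X = np.column_stack([np.ones(n), age, cohort, age * cohort])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
```

A true multilevel analysis would additionally model grouping structure (e.g., random intercepts by cohort), which this flat OLS sketch deliberately omits.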
327

Modeling completion at a community college

Nguyen, Quoc Tim H. 09 August 2013 (has links)
The purpose of the current study was to assess a model of college completion at a 2-year community college based on Tinto's Theory of Student Drop Out and current factors known to impact college completion. A freshman cohort (n = 2,846) that attended a large urban community college was assessed. Logistic regression analysis found that student age and math proficiency when entering college were significant factors in the model: the older the student was when first enrolling, the lower the likelihood of completing college, and the more remediation a student needed in math skills, the less likely he or she was to complete college. Placement into developmental (remedial) English writing courses did not seem to suppress completion and was a non-significant factor in the model. Reading proficiency and participation in a student success course (first-year seminar) were also not significant factors, though the estimated coefficients aligned with the research literature.
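A logistic completion model of the kind described above can be sketched as follows; the coefficients are invented for illustration, and only their signs mirror the reported directions (older entry age and more math remediation both lowering the predicted completion probability).

```python
import math

def sigmoid(z):
    """Logistic function mapping log-odds to a probability."""
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients, for illustration only (not the study's estimates):
# log-odds(complete) = b0 + b_age * age_at_entry + b_math * math_remediation_level
b0, b_age, b_math = 2.0, -0.05, -0.6

def p_complete(age, math_level):
    """Predicted completion probability under the illustrative model."""
    return sigmoid(b0 + b_age * age + b_math * math_level)
```

Because both slopes are negative, an 18-year-old entrant needing no math remediation gets a higher predicted probability than an older entrant or one placed into more remedial math, matching the direction of the reported effects.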
328

Are four heads better than one? Comparing groups and individuals on behavioral rating accuracy

Borg, Maria Rita January 1991 (has links)
The main objective of this research was to determine whether differences between group and individual accuracy on behavioral rating tasks are due to differences in memory sensitivity or to systematic differences in the type of decision criterion adopted. Group versus individual differences in evaluative judgment and in confidence levels, and the effects of a five-day delay, were also investigated. Lastly, the relationship between response bias and prior evaluative judgment was explored. The results revealed a group memory superiority but also demonstrated that groups adopt an overly liberal decision criterion when rating the occurrence of effective behaviors. In addition, in the delayed rating condition, groups were found to be more confident in their correct responses than individual subjects were. Finally, for individual subjects, prior evaluative decisions were positively related to response bias in the rating of effective behaviors and negatively related to response bias in the rating of ineffective behaviors.
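The contrast drawn above between memory sensitivity and decision criterion is the standard signal-detection distinction between d' and c; a minimal sketch, with invented hit and false-alarm rates, might look like this:

```python
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    """Signal-detection sensitivity (d') and criterion (c) from hit and
    false-alarm rates. Negative c indicates a liberal criterion."""
    z = NormalDist().inv_cdf
    d = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d, c

# Invented rates: the two raters have nearly equal sensitivity, but the
# second adopts a more liberal criterion (higher hits AND false alarms).
d1, c1 = dprime_and_criterion(0.80, 0.20)
d2, c2 = dprime_and_criterion(0.90, 0.35)
```

Separating d' from c is what lets a study of this kind attribute a group's extra "yes" responses to a shifted criterion rather than to better memory.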
329

Unit analysis of implicit and explicit memory tests

Schacherer, Christopher William January 1997 (has links)
The present study compares the cognitive processes underlying two perceptual implicit memory tests (word stem completion and word fragment completion) and four explicit memory tests (word stem cued recall, word fragment cued recall, free recall, and recognition). As in many previous studies (and as is predicted by most current memory theories), manipulation of the level, or depth, of cognitive processing engaged during the study phase dissociated the explicit tests from the implicit tests. That is, for the explicit tests, processing the study items under a deep level of processing resulted in a greater number of words being recalled or recognized (compared to performance under a shallow level of processing at study). On the implicit tests, this manipulation had very little effect. These differential effects are often interpreted as evidence that qualitatively different processes underlie performance on implicit and explicit tests. However, in looking at which (instead of how many) items are produced on the tests, the conclusions are somewhat different. In the present study this "unit analysis" approach described by Rubin (1985) showed that: (1) implicit and explicit tests correlated more strongly within stimulus type (stem/fragment) than they did within test type (implicit/explicit), (2) both part-word cued recall tests (word stem and word fragment cued recall) correlated strongly with recognition even though they correlated only modestly with each other, and (3) free recall did not correlate positively with any of the other tests (implicit or explicit). These results are explained in terms of a generate/recognize model that incorporates transfer appropriate processing assumptions. Briefly, it is suggested that the implicit tests and their explicit counterparts involve the same data-limited process, and that recognition is not similarly limited, relying almost exclusively on conceptually driven processes.
This generate/recognize account, however, does not explain the behavior of free recall: its failure to correlate positively with any of the other tests is interpreted as suggesting that free recall may rely on qualitatively different processes.
330

Justice in personality testing: Influence of feedback of results, test modality, and elaboration opportunity on attitudinal reactions to and responses on a personality test

Cruz, Pablo January 2003 (has links)
Manipulations of a personality test administration are examined in light of their effects on test-takers' perceptions of the test's fairness, their acceptance of an outcome derived from the test, socially desirable responding, and other test reactions. Test-takers were administered the same personality test either face-to-face with the experimenter or as a traditional paper-and-pencil measure, and they either were or were not given an opportunity to elaborate on their responses to the items on the test. The opportunity to elaborate improved perceptions of the test's fairness, and negative test outcomes were associated with negative test reactions. Additionally, socially desirable responding was decreased in the face-to-face administration when the elaboration opportunity was provided.
