1 |
The Structured Employment Interview: An Examination of Construct and Criterion Validity / Levine, Anne B. January 2006
This study extends the literature on interview validity by attempting to create a structured employment interview with both construct- and criterion-related validity. For this study, a situational interview was developed with the specific purpose of enhancing the interview's construct validity while retaining its predictive power. To enhance construct validity, two guidelines drawn from previous research in the interview and assessment center literature were applied to the creation of the interview: (1) limit the number of applicant characteristics to be rated to three; and (2) ensure that the dimensions to be measured are conceptually distinct. Based on these two guidelines, three constructs were chosen for the assessment of real estate sales agents: extraversion, proactive personality, and customer orientation. The critical incident technique was used to develop six interview items. To test the construct validity of the interview, the six items were correlated with other measures of extraversion, proactivity, and customer orientation, specifically self-report questionnaires and managers' ratings. Correlations were weak at best (rs ranged from -.06 to .25). To test the predictive validity of the interview, the six items were correlated with both objective and subjective measures of performance. Predictive validities were stronger, ranging from .23 to .30. These findings are consistent with previous research on employment interviews, which has found that although the predictive validity of the interview is strong, its construct validity is very weak, leaving researchers to wonder what the interview is actually measuring. Possible explanations for these findings are offered, and their implications are discussed.
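As an illustration of the validity analyses described above (a sketch, not taken from the thesis), the following simulates interview ratings and correlates them with a hypothetical self-report trait measure (convergent evidence) and a hypothetical performance criterion (predictive evidence). All variable names, the sample size, and the data are assumptions.

```python
# Illustrative sketch only: simulated data standing in for interview ratings,
# a self-report extraversion measure, and an objective sales criterion.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 120                                     # hypothetical number of sales agents

interview = rng.normal(size=n)              # composite of the six situational items
extraversion_self = 0.2 * interview + rng.normal(size=n)   # self-report measure
sales_volume = 0.3 * interview + rng.normal(size=n)        # objective criterion

# Construct (convergent) validity: interview scores vs. the intended trait
r_construct, p_construct = pearsonr(interview, extraversion_self)

# Criterion (predictive) validity: interview scores vs. later performance
r_criterion, p_criterion = pearsonr(interview, sales_volume)

print(f"convergent r = {r_construct:.2f} (p = {p_construct:.3f})")
print(f"predictive r = {r_criterion:.2f} (p = {p_criterion:.3f})")
```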
|
2 |
The predictive validity of a selection battery for university bridging students in a public sector organisation / Alberts, Philippus Petrus Hermanus. January 2007
South Africa has faced tremendous changes over the past decade, which have had a huge impact
on the working environment. Organisations are compelled to address the societal disparities
between various cultural groups. However, previously disadvantaged groups have had to face
inequalities of the education system in the past, such as a lack of qualified teachers (especially in
the natural sciences), and poor educational books and facilities. This has often resulted in poor
grade 12 results. Social responsibility and social investment programmes are an attempt to rectify
these inequalities.
The objective of this research was to investigate the validity of the current selection battery of the
Youth Foundation Training Programme (YFTP) in terms of academic performance of the
students on the bridging programme. A correlational design was used in this research in order to
investigate predictive validity whereby data on the assessment procedure was collected at about
the time applicants were hired. The scores obtained from the Advanced Progressive Matrices
(APM), which forms part of the Raven's Progressive Matrices as well as the indices of the
Potential Index Battery (PIB) tests, acted as the independent variables, while the Matric results of
the participants served as the criterion measure of the dependent variable. The data was analysed
using the Statistical Package for the Social Sciences (SPSS) software programme by means of
correlations and regression analyses.
The results showed that although the current selection battery used for the bridging students does
have some value, it appears to be only a poor predictor of the Matric results. Individually,
the SpEEx tests used in the battery evidently were not good predictors of the Matric results,
while the respective beta weights of the individual instruments did confirm that the APM was the
strongest predictor.
Limitations were identified and recommendations for further research were discussed. / Thesis (M.A. (Industrial Psychology))--North-West University, Potchefstroom Campus, 2007.
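A minimal sketch of the correlation-and-regression analysis described above, using simulated data rather than the study's. The predictor names (APM plus two hypothetical PIB/SpEEx indices), the criterion, and the effect sizes are assumptions.

```python
# Illustrative sketch only: simulated predictors and criterion, standardized so
# the regression coefficients can be read as beta weights.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "apm": rng.normal(size=n),      # Advanced Progressive Matrices score
    "pib_1": rng.normal(size=n),    # hypothetical PIB/SpEEx index
    "pib_2": rng.normal(size=n),    # hypothetical PIB/SpEEx index
})
df["matric"] = 0.4 * df["apm"] + 0.1 * df["pib_1"] + rng.normal(size=n)

z = (df - df.mean()) / df.std(ddof=0)              # z-score all variables
X = sm.add_constant(z[["apm", "pib_1", "pib_2"]])
model = sm.OLS(z["matric"], X).fit()

print(model.params)     # standardized beta weights; the largest is the strongest predictor
print(model.rsquared)   # overall proportion of criterion variance explained
```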
|
3 |
Accounting for correlated artifacts and true validity in validity generalization procedures: an extension of model 1 for assessing validity generalization / Thomas, Adrain L. 12 1900
No description available.
|
4 |
The validation of the perceived wellness survey in the South African Police Service / Ekkerd, Jolanda. January 2005
The era of globalisation calls for a flexible, multi-skilled, knowledgeable, inter-changeable
and adaptable healthy workforce. Employee wellness is essential to ensure an effective and
efficient workforce. It is important, however, to measure wellness before it can be developed.
Currently there is a need for a measuring instrument in South Africa which can measure all
the dimensions of wellness as conceptualised in the literature. However, it is risky to apply
psychometric instruments developed in other cultures to the South African context without
validating them.
The objective of this study was to validate the Perceived Wellness Survey (PWS) in the
South African Police Service (SAPS). The specific objectives of the study included
conceptualising perceived wellness and its dimensions from the literature, assessing the
internal consistency and construct validity of the PWS in a sample of police personnel, and
investigating differences in the perceived wellness of biographical groups.
A cross-sectional survey design with an accidental sample (N=840) of police personnel was
used. The sample was composed of personnel from multiple divisions in the SAPS, including
Functional as well as Public Service Act personnel. The Perceived Wellness Survey (PWS)
and a biographical questionnaire were administered. Descriptive statistics, principal
component analysis, target rotations, alpha coefficients and multivariate analysis of variance
were used to analyse the data.
Exploratory factor analysis with target rotations failed to confirm the construct equivalence of
the PWS for the Afrikaans and Setswana language groups. Two reliable factors, namely wellness
and illness, were extracted in a random sample (n = 335) of the Setswana group and in a
replication sample (n = 338). However, an alternative interpretation was also possible.
Statistically significant differences were found in the perceived wellness of employees in
terms of age and rank. Recommendations for future research were made. / Thesis (M.A. (Industrial Psychology))--North-West University, Potchefstroom Campus, 2006.
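A rough sketch, on simulated item responses, of two of the analyses named above: Cronbach's alpha for internal consistency and a principal component extraction. The number of items, the two-component structure, and the data are assumptions, not properties of the PWS.

```python
# Illustrative sketch only: simulated respondent-by-item data for a wellness scale.
import numpy as np
from sklearn.decomposition import PCA

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scored responses."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

rng = np.random.default_rng(2)
common = rng.normal(size=(840, 1))                # shared "wellness" factor
items = common + rng.normal(size=(840, 36))       # 36 hypothetical items

print(f"alpha = {cronbach_alpha(items):.2f}")     # internal consistency

pca = PCA(n_components=2).fit(items)              # extract two components
print(pca.explained_variance_ratio_)              # variance explained by each
```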
|
5 |
The Incremental Utility of Behavioral Rating Scales and a Structured Diagnostic Interview in the Assessment of ADHD / Vaughn, Aaron. 02 October 2009
Attention-Deficit/Hyperactivity Disorder (ADHD) is a disorder characterized by a persistent pattern of developmentally inappropriate levels of inattention, hyperactivity, and impulsivity (American Psychiatric Association, 2000). Currently, clinicians typically utilize a multi-method assessment battery focusing on identifying the core symptoms of ADHD. Further, current recommendations for a comprehensive assessment of ADHD require a lengthy and costly evaluation protocol despite a lack of evidence supporting the incremental utility of each method. Assessment strategies exhibiting the strongest evidence of reliability and validity include symptom-based rating scales, empirically-derived rating scales, and structured diagnostic interviews (Pelham, Fabiano, & Massetti, 2005), yet their review provided limited empirical support for this conclusion. Nonetheless, other reviews have noted the lack of research examining whether each procedure and/or method adds unique information to a diagnosis of ADHD (Johnston & Murray, 2003). In order to fill this gap in the literature, the current study examined the independent and incremental utility of multiple methods and informants in a comprehensive, “gold standard” assessment of ADHD. The sample included 185 children with ADHD (mean age = 9.22, SD = .95) and 82 children without ADHD (mean age = 9.24, SD = .88). Logistic regressions were used to examine the incremental contribution of each method in the prediction of consensus diagnoses derived by two Ph.D.-level experts in the field of ADHD following a review of comprehensive assessment data. This study also examined the clinical utility and efficiency of diagnostic algorithms using the methods demonstrating the greatest statistical association with a diagnosis of ADHD. Findings provided empirical support for arguments espousing the redundancy of information in a comprehensive assessment. Namely, information collected from a structured diagnostic interview was unable to significantly improve a prediction model including parent and teacher ratings (block χ² = .91, p = .64). Importantly, parent and teacher ratings on a symptom-based scale alone were able to correctly classify 265 of 267 participants. Based on these results, a diagnostic algorithm derived using only behavioral rating scales was able to correctly classify all 267 participants. Clinical implications are highlighted and future research directions are discussed.
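The incremental-utility question above can be illustrated with nested logistic regression models compared by a likelihood-ratio ("block chi-square") test. The sketch below uses simulated ratings and a simulated interview score, not the study's data; predictor names and effect sizes are assumptions.

```python
# Illustrative sketch only: does a structured-interview score add predictive
# value beyond parent and teacher ratings? Compared via a likelihood-ratio test.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(3)
n = 267
df = pd.DataFrame({
    "parent_rating": rng.normal(size=n),
    "teacher_rating": rng.normal(size=n),
    "interview": rng.normal(size=n),
})
linear_predictor = 1.5 * df["parent_rating"] + 1.2 * df["teacher_rating"]
df["adhd"] = (rng.random(n) < 1 / (1 + np.exp(-linear_predictor))).astype(int)

base = sm.Logit(df["adhd"], sm.add_constant(df[["parent_rating", "teacher_rating"]])).fit(disp=0)
full = sm.Logit(df["adhd"], sm.add_constant(df[["parent_rating", "teacher_rating", "interview"]])).fit(disp=0)

block_chi2 = 2 * (full.llf - base.llf)    # change in -2 log-likelihood between blocks
p_value = chi2.sf(block_chi2, df=1)       # one added predictor = 1 degree of freedom
print(f"block chi-square = {block_chi2:.2f}, p = {p_value:.3f}")
```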
|
6 |
Convergent Validity of Variables Residualized By a Single Covariate: The Role of Correlated Error in Populations and Samples / Nimon, Kim. 05 1900
This study examined the bias and precision of four residualized variable validity estimates (C0, C1, C2, C3) across a number of study conditions. Validity estimates that considered measurement error, correlations among error scores, and correlations between error scores and true scores (C3) performed the best, yielding no estimates that were practically significantly different from their respective population parameters across study conditions. Validity estimates that considered measurement error and correlations among error scores (C2) did a good job of yielding unbiased, valid, and precise results. Only in a select number of study conditions were C2 estimates unable to be computed or did they produce results with sufficient variance to affect the interpretation of results. Validity estimates based on observed scores (C0) fared well in producing valid, precise, and unbiased results. Validity estimates based on observed scores that were corrected only for measurement error (C1) performed the worst. Not only did they fail to reliably produce estimates even when the level of modeled correlated error was low, but C1 also produced values higher than the theoretical limit of 1.0 across a number of study conditions. Estimates based on C1 also produced the greatest number of conditions that were practically significantly different from their population parameters.
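A minimal sketch of residualizing a variable by a single covariate and estimating its observed-score validity against a criterion; this corresponds only to a C0-style estimate on simulated data. The corrected estimators (C1 to C3) would additionally require reliability and error-correlation information, which is not modeled here, and all names and values below are assumptions.

```python
# Illustrative sketch only: residualize a predictor on one covariate via OLS,
# then correlate the residuals with a criterion (an observed-score validity estimate).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 500
covariate = rng.normal(size=n)
predictor = 0.5 * covariate + rng.normal(size=n)
criterion = 0.4 * predictor + 0.2 * covariate + rng.normal(size=n)

# Residualize the predictor by the single covariate
resid = sm.OLS(predictor, sm.add_constant(covariate)).fit().resid

# Convergent validity of the residualized variable with the criterion
validity = np.corrcoef(resid, criterion)[0, 1]
print(f"residualized validity estimate = {validity:.2f}")
```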
|
7 |
A psychometric examination of the knowledge of ADHD scale / Hepp, Shelanne L. 17 August 2009
Saskatchewan-based pre-service and in-service teachers' knowledge of ADHD was assessed and data was collected to accumulate psychometric evidence for the modified K-ADHD (Jerome, Gordon, & Hustler, 1994) scale. Using results from a questionnaire administered to pre-service (n = 100) and in-service (n = 66) teachers, the current study did find a significant difference on the K-ADHD (Jerome et al., 1994) scale between groups. Divergent and convergent validity evidence was found for the K-ADHD (Jerome et al., 1994) for both groups. However, reliability estimates were questionable between in-service (α = .66) and pre-service (α = .82) teachers, possibly due to asymmetric outlier contamination. The evidence found for the K-ADHD (Jerome et al., 1994) scale suggests problems with the psychometrics of the instrument. Future implications and research are discussed.
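For illustration only, the between-group comparison reported above could be run as a Welch's t-test on total K-ADHD scores; the score distributions below are simulated and the group means are assumptions, with only the group sizes taken from the abstract.

```python
# Illustrative sketch only: simulated K-ADHD totals for two teacher groups.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(5)
pre_service = rng.normal(loc=14.0, scale=3.0, size=100)   # hypothetical totals
in_service = rng.normal(loc=15.5, scale=3.0, size=66)     # hypothetical totals

# Welch's t-test (no equal-variance assumption) between the two groups
t_stat, p_value = ttest_ind(pre_service, in_service, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```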
|
8 |
Assessment of the conclusion validity for empirical research studies published in the Journal of Speech, Language, and Hearing Research / Byrns, Glenda Elkins. 15 May 2009
Research-based decision making has been advanced as a way for
professionals to make a determination about the effectiveness of a potential
treatment. However, informed consumers of research need to be able to
determine what constitutes evidence-based practices and what criteria can be
used to determine if evidence-based practices have been met.
This study was a synthesis of research that involved a critical review of
the empirical research studies reported in Volume 47 of the Journal of Speech,
Language, and Hearing Research (JSLHR) published in 2004. This
methodological research synthesis evaluated (a) the research designs used in
the JSLHR studies, (b) information and rationale used to inform population
validity assessment decisions, and (c) the extent to which the sampling designs,
population validity rating, data analysis procedures, and the specification of
generalizations and conclusions provide sufficient evidence to determine an
overall rating of conclusion validity.
Results indicated that less than one-fifth of the 105 research synthesis
population of studies used experimental research designs. Additionally, the vast
majority of the research synthesis population of studies (83.8%) used
observational research designs.
Only five studies out of the research synthesis population of studies
(4.8%) were determined to have high population validity. In contrast, 84.8
percent of the research synthesis population of studies were found to have low
population validity. That is, the studies did not contain adequate information or
description of the essential sampling concerns.
The vast majority or 75.3 percent of the research synthesis population of
studies were rated as having low conclusion validity. Approximately one-fifth of
the 105 research synthesis study population (22 studies or 20.9%) were found to
have moderate conclusion validity while less than five percent of the total studies
(4 of 105 studies or 3.8%) were found to have high conclusion validity.
A meaningful relationship between population validity ratings and
conclusion validity ratings was established. Since 81 of 105 studies had
identical ratings for both population and conclusion validity, the accuracy of the
prediction model developed for this study was 77.1 percent.
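As a quick check of the figure above, the prediction accuracy is simple agreement counting over the abstract's own counts.

```python
# Agreement rate: studies whose population and conclusion validity ratings match.
matching_ratings = 81
total_studies = 105
print(f"accuracy = {matching_ratings / total_studies:.1%}")   # 77.1%
```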
|