1

The Search for Construct Validity of Assessment Centers: Does the Ease of Evaluation of Dimensions Matter?

Jalbert, Nicole Marie 14 May 1999 (has links)
The purpose of the present study was to investigate the effect of the ease of evaluation of dimensions on the construct validity of a selection assessment center conducted in 1993. High ease-of-evaluation dimensions, operationalized as those with the greatest proportion of highly diagnostic behaviors, were expected to demonstrate greater construct and criterion-related validity. Multitrait-multimethod analysis and confirmatory factor analysis results indicated that high ease-of-evaluation dimensions demonstrated greater convergent and discriminant validity than low ease-of-evaluation dimensions. Contrary to predictions, however, there was little difference in the criterion-related validity of the high versus low ease-of-evaluation dimensions. Moreover, the entire assessment center yielded extremely low predictive validity using both dimension and exercise scores as predictors. The implications of the findings from this study are discussed. / Ph. D.
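The multitrait-multimethod logic behind this kind of analysis can be illustrated with a short sketch: convergent validity is indicated when the same dimension rated in different exercises correlates highly (monotrait-heteromethod), and discriminant validity when different dimensions rated in the same exercise correlate lower (heterotrait-monomethod). The following is a minimal illustration in Python using simulated ratings; the dimension and exercise names are hypothetical placeholders, not those of the 1993 assessment center.

```python
# Minimal MTMM illustration with simulated assessment-center ratings.
# Dimension and exercise names are hypothetical placeholders.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 200  # simulated assessees

# Latent dimension scores plus exercise-specific noise ("method" variance).
latent = {dim: rng.normal(size=n) for dim in ["planning", "communication"]}
ratings = pd.DataFrame({
    f"{dim}_{ex}": latent[dim] + rng.normal(scale=1.0, size=n)
    for dim in latent
    for ex in ["in_basket", "role_play"]
})

corr = ratings.corr()

# Convergent: same dimension, different exercise (should be relatively high).
convergent = corr.loc["planning_in_basket", "planning_role_play"]
# Discriminant: different dimensions, same exercise (should be lower).
discriminant = corr.loc["planning_in_basket", "communication_in_basket"]
print(f"convergent r = {convergent:.2f}, discriminant r = {discriminant:.2f}")
```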
2

A Factor Analytic Study of the Construct Validity of Three Value Analysis Instruments

Evans, Ann Marie 08 1900 (has links)
This study used a component R-technique factor analysis with orthogonal rotation to investigate the construct validity of the Values for Working, Values for Teaching, and Values for Helpers value system analysis instruments, by factor analyzing the items on each. Random selection was used to compile a sample of 100 for each instrument. Items measured tribalism, existentialism, sociocentrism, egocentrism, and manipulativeness on the first instrument, egocentrism and existentialism on the second, and only two items, both measuring egocentrism, on the third. The study recommends that other items be eliminated or revised, and that the data be reanalyzed for the presence of higher-order or oblique factors corresponding to the value systems.
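As a rough sketch of the kind of item-level factoring described above, the snippet below runs an exploratory factor analysis with varimax (orthogonal) rotation using the Python factor_analyzer package (minres extraction by default, rather than the component analysis used in the study); the items and data are simulated stand-ins, not the original value-analysis instruments.

```python
# Sketch of an item-level factor analysis with orthogonal (varimax) rotation.
# Data are simulated stand-ins for instrument items, not the original datasets.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(1)
n = 100  # respondents, mirroring the sample size of 100 per instrument

# Simulate 20 items driven by five latent value systems plus noise.
latent = rng.normal(size=(n, 5))
items = pd.DataFrame(
    {f"item_{f * 4 + j + 1}": 0.7 * latent[:, f] + rng.normal(scale=0.7, size=n)
     for f in range(5) for j in range(4)}
)

fa = FactorAnalyzer(n_factors=5, rotation="varimax")
fa.fit(items)

loadings = pd.DataFrame(fa.loadings_, index=items.columns)
print(loadings.round(2))          # which items load on which rotated factor
print(fa.get_factor_variance())   # variance explained per factor
```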
3

The Structured Employment Interview: An Examination of Construct and Criterion Validity

Levine, Anne B. January 2006 (has links)
This study extends the literature on interview validity by attempting to create a structured employment interview with both construct- and criterion-related validity. A situational interview was developed with the specific purpose of enhancing the interview's construct validity while retaining its predictive power. To enhance construct validity, two guidelines drawn from previous research in the interview and assessment center literatures were applied to the creation of the interview: (1) limit the number of applicant characteristics to be rated to three; and (2) ensure that the dimensions to be measured are conceptually distinct. Based on these two guidelines, three constructs were chosen for the assessment of real estate sales agents: extraversion, proactive personality, and customer orientation. The critical incident technique was used to develop six interview items. To test the construct validity of the interview, the six items were correlated with other measures of extraversion, proactivity, and customer orientation, specifically self-report questionnaires and managers' ratings. Correlations were weak at best (rs ranged from -.06 to .25). To test the predictive validity of the interview, the six items were correlated with both objective and subjective measures of performance. Predictive validities were stronger, ranging from .23 to .30. These findings are consistent with previous research on employment interviews, which has found that although the predictive validity of the interview is strong, its construct validity is very weak, leaving researchers to wonder what the interview is actually measuring. Possible explanations for these findings are offered, and their implications are discussed.
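The construct- and criterion-related checks described above reduce to correlating interview scores with alternative measures of the same constructs and with performance outcomes. Below is a minimal Python sketch of that logic; the column names and data are hypothetical, not the study's actual measures.

```python
# Sketch of construct- and criterion-related validity checks via correlations.
# Column names and data are hypothetical, not the study's actual measures.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n = 120
df = pd.DataFrame({
    "interview_extraversion": rng.normal(size=n),
    "self_report_extraversion": rng.normal(size=n),
    "manager_rating_extraversion": rng.normal(size=n),
    "sales_volume": rng.normal(size=n),  # objective performance criterion
})

# Construct-related evidence: interview score vs. other measures of the same trait.
for other in ["self_report_extraversion", "manager_rating_extraversion"]:
    r, p = pearsonr(df["interview_extraversion"], df[other])
    print(f"construct check vs {other}: r = {r:.2f} (p = {p:.3f})")

# Criterion-related evidence: interview score vs. a performance outcome.
r, p = pearsonr(df["interview_extraversion"], df["sales_volume"])
print(f"criterion check vs sales_volume: r = {r:.2f} (p = {p:.3f})")
```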
4

The Substantive Validity of Work Performance Measures: Implications for Relationships Among Work Behavior Dimensions and Construct-Related Validity

Carpenter, Nichelle 2012 August 1900 (has links)
Performance measurement and criterion theory are critical topics in the field of I/O psychology, yet scholars continue to note several issues with the criterion, including empirically redundant behaviors, construct and measure proliferation, and conflicting definitions. These interconnected problems hinder the advancement of criterion measurement and theory. The goal of this study was to empirically examine the issues of theory/construct clarity and measurement as they exist regarding work performance behaviors. This study's first objective was to clarify definitions of core performance behaviors, particularly to resolve issues of construct proliferation and conceptual conflict. Universal definitions of four core criterion constructs (i.e., task performance, citizenship performance, counterproductive work behavior, and withdrawal) were developed that integrated existing definitions of similar behaviors. Each definition reflects a parsimonious conceptualization of existing performance behaviors, which serves to clarify existing, and at times divergent, criterion conceptualizations. Importantly, these integrated definitions represent commonly held definitions of the constructs and replace the largely discrepant accumulation of definitions. The second objective was to determine whether existing items assumed to measure the four core work performance behaviors were judged by raters to represent their respective constructs. The results showed that of the 851 items examined, over half were judged not to represent their respective constructs, which, importantly, replicated previous research. Additionally, the results highlight items that match their respective construct definition and contain minimal overlap with non-posited constructs. Finally, the third objective was to determine the implications of using the problematic items for both the empirical relationships among work performance behaviors and evidence of construct-related validity. The results provided preliminary evidence that while nomological networks are minimally affected, relationships among some work performance dimensions are significantly affected when problematic items are removed from measures of performance constructs. This dissertation demonstrated the need for more attention to the construct labels placed on the behaviors described in work performance items, as there are potentially adverse consequences for theory and measurement. Ultimately, the results of this study showed that work performance behaviors/items have often been assigned incorrect construct labels, which may cast considerable doubt on the theoretical and empirical understanding of the criterion domain.
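Item-sorting judgments of this kind are commonly quantified with indices such as Anderson and Gerbing's (1991) proportion of substantive agreement and substantive-validity coefficient. The abstract does not state which index the study used, so the following Python function is only an illustrative sketch of that general approach, with hypothetical rater counts.

```python
# Illustrative sketch of Anderson & Gerbing's (1991) substantive-validity indices,
# one common way to quantify whether raters judge an item to match its intended construct.
# The study's own index is not specified in the abstract; counts below are hypothetical.

def substantive_validity(assignments: dict[str, int], intended: str) -> tuple[float, float]:
    """assignments maps each construct label to the number of raters who sorted the
    item into that construct; `intended` is the construct the item is meant to measure."""
    total = sum(assignments.values())
    n_correct = assignments.get(intended, 0)
    n_other_max = max((n for c, n in assignments.items() if c != intended), default=0)
    p_sa = n_correct / total                   # proportion of substantive agreement
    c_sv = (n_correct - n_other_max) / total   # substantive-validity coefficient
    return p_sa, c_sv

# Hypothetical sorting results for one citizenship-performance item rated by 30 judges.
counts = {"citizenship": 18, "task": 9, "counterproductive": 2, "withdrawal": 1}
p_sa, c_sv = substantive_validity(counts, intended="citizenship")
print(f"p_sa = {p_sa:.2f}, c_sv = {c_sv:.2f}")  # 0.60, 0.30
```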
5

A Multitrait-Multimethod Approach to Isolating Situational Judgment from Situational Judgment Tests

Salter, Nicholas P. 29 July 2009 (has links)
No description available.
6

An evaluation of the construct validity of situational judgment tests

Trippe, David Matthew 10 December 2002 (has links)
Situational judgment tests (SJTs) are analogous to earlier forms of "high fidelity" simulations, such that an ostensible paradox emerges: a consistent finding of criterion-related validity alongside an almost complete lack of construct validity evidence. The present study evaluates the extent to which SJTs can demonstrate convergent and discriminant validity by analyzing an SJT from a multitrait-multimethod perspective. A series of hierarchically nested confirmatory factor models was tested. Results indicate that the SJT demonstrates convergent and discriminant validity but also contains non-trivial amounts of construct-irrelevant method variance. Wide variability in the content and validation methods of SJTs is discussed as the reason previous attempts to find construct validity evidence have failed. / Master of Science
7

Psychometric evaluation of a leadership empowerment questionnaire in selected organisations in South Africa / Desiree Zikalala

Zikalala, Senzekile Nompumelelo Desiree January 2015 (has links)
The world of work has become extremely volatile, with the scarcity of skills and the management of human capital at the top of the agenda. Human capital is the most valuable asset in any organisation. It is evident that leadership is vital in ensuring organisational success, which makes leadership empowerment behaviour crucial. It is essential that leaders become people developers who focus on growing and upskilling subordinates as a way of attracting and retaining talent. It is important that leaders create an enabling environment for their subordinates; one of independence, innovation and, more importantly, growth and development. The purpose of this study was to explore the psychometric properties of the leadership empowerment questionnaire by investigating its internal consistency and, furthermore, by investigating differences between male and female perceptions of leadership empowerment behaviour. A quantitative cross-sectional survey was used. The measuring battery comprised the Leadership Empowerment Behaviour Questionnaire (LEBQ), which originally has a six-factor structure. The analysis was carried out using the IBM-SPSS and Mplus statistical modelling programs. Reliability was explored by utilising the rho reliability index derived from Confirmatory Factor Analysis (CFA). Construct validity was assessed by examining the factor structure, utilising Exploratory Factor Analysis (EFA) and CFA. Satisfactory reliability indices were attained. A three-factor model of the LEBQ, consisting of autonomy, development and accountability, was confirmed. Measurement invariance was tested through configural, metric and scalar invariance. The configural model indicated that the three-factor structure obtained for the total sample also holds for the two groups of respondents (males and females) separately. The metric model indicated that the latent variables are measured in the same way, with the same metric, in the two target groups. The scalar model indicated that on three items males and females differ in their starting points when responding to the questions. Although there were differences in the starting points of certain items, there were no real differences evident in the overall model between males and females. Recommendations for further research were made. / MA (Industrial Psychology)--North-West University, Vaal Triangle Campus, 2015
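A measurement model of the kind described above can be sketched briefly. The snippet below fits the three-factor LEBQ structure (autonomy, development, accountability) in Python, assuming the semopy package as a stand-in for the Mplus/SPSS analyses used in the study; the item names and data are simulated placeholders.

```python
# Minimal sketch of a three-factor CFA (autonomy, development, accountability),
# assuming the Python semopy package as a stand-in for the study's Mplus models.
# Item names and data are simulated placeholders, not the actual LEBQ items.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(3)
n = 300
factors = {f: rng.normal(size=n) for f in ["autonomy", "development", "accountability"]}
data = pd.DataFrame({
    f"{f[:3]}{i}": 0.8 * factors[f] + rng.normal(scale=0.6, size=n)
    for f in factors
    for i in (1, 2, 3)
})

model_desc = """
autonomy       =~ aut1 + aut2 + aut3
development    =~ dev1 + dev2 + dev3
accountability =~ acc1 + acc2 + acc3
"""

model = semopy.Model(model_desc)
model.fit(data)
print(model.inspect())           # loadings and factor covariances
print(semopy.calc_stats(model))  # fit indices (CFI, TLI, RMSEA, etc.)

# A full invariance test (configural -> metric -> scalar) would refit this model
# across the male and female groups with progressively constrained loadings and
# intercepts, as done in Mplus in the study itself.
```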
8

Attempting measurement of psychological attributes

Salzberger, Thomas 26 February 2013 (has links) (PDF)
Measures of psychological attributes abound in the social sciences as much as measures of physical properties do in the physical sciences. However, there are crucial differences in the scientific underpinning of measurement between the two. While measurement in the physical sciences is supported by empirical evidence that demonstrates the quantitative nature of the property assessed, measurement in the social sciences is, in large part, made possible only by a vague, discretionary definition of measurement that places hardly any restrictions on empirical data. Traditional psychometric analyses fail to address the requirements of measurement as defined more rigorously in the physical sciences. Construct definitions do not allow for testable predictions, and content validity becomes a matter of highly subjective judgment. In order to improve the measurement of psychological attributes, it is suggested, first, to readopt the definition of measurement used in the physical sciences; second, to devise an elaborate theory of the construct to be measured that includes the hypothesis of a quantitative attribute; and third, to test the data for the structure implied by the hypothesis of quantity as well as for predictions derived from the theory of the construct. (author's abstract)
9

Construct representation of self-report future time perspective for work and retirement scholarship

Kerry, Matthew James 27 May 2016 (has links)
The dissertation presents evidence on the measurement properties of self-report items in contemporary organizational contexts (Podsakoff & Organ, 1986). Operationally, the dissertation adopts a construct representation approach to construct validity, defined by the response processes engaged during measurement performance in trait assessment (AERA, 2014; Embretson, 1983). For example, self-report measures are known to be affected by a variety of variables, such as semantic and referent features (Cermak & Craik, 1979; Kelly, 1955) and design factors that affect cognitive context (Stone et al., 2000, The Science of Self-Report). In turn, these response processes impact the external correlations (Embretson, 2007). To the extent that semantic-referent features and design factors are construct-irrelevant, reduced external correlations can be expected. This dissertation presents evidence from a qualitative review of self-report future time perspective (FTP) instruments across organizational and retirement contexts. A quantitative review compares the external correlates of the two instruments. A retrospective-observational study benchmarks the psychometric properties of Carstensen's self-report instrument using modern latent-variable modeling (item response theory [IRT]). Structural equation modeling (SEM) is further used to test for moderating effects of subjective life expectancy (SLE) on latent predictors of FTP and retirement plans. Evidence from a '3 x 2' mixed-subjects experimental design is also presented, indicating the effects of SLE on measurement error in personality factors, FTP, and retirement plans. Discussion centers on advancing measurement paradigms in psychological and education research and, more generally, on adopting an integrated perspective of construct validity for advancing and evaluating substantive research.
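As a minimal illustration of the item-response-theory machinery mentioned above, the snippet below computes a two-parameter logistic item response function for a hypothetical FTP item; the discrimination and location values are illustrative, not estimates from Carstensen's instrument.

```python
# Minimal sketch of a two-parameter logistic (2PL) IRT item response function.
# Parameter values are illustrative, not estimates from the FTP instrument.
import numpy as np

def p_endorse(theta: np.ndarray, a: float, b: float) -> np.ndarray:
    """Probability of endorsing an item given latent trait theta,
    item discrimination a, and item location (difficulty) b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 7)           # latent future-time-perspective levels
probs = p_endorse(theta, a=1.5, b=0.0)  # a moderately discriminating, centered item
for t, p in zip(theta, probs):
    print(f"theta = {t:+.1f} -> P(endorse) = {p:.2f}")
```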
