MULTIVARIATE MEASURE OF AGREEMENT
Towstopiat, Olga Michael, January 1981
Reliability issues are always salient as behavioral researchers observe human behavior and classify individuals from criterion-referenced test scores. This has created a need for studies that assess agreement between observers recording the occurrence of various behaviors, in order to establish the reliability of their classifications. In addition, there is a need for measuring the consistency of dichotomous and polytomous classifications established from criterion-referenced test scores. The development of several log-linear univariate models for measuring agreement has partially met the demand for a probability-based measure of agreement with a directly interpretable meaning. However, multivariate repeated-measures agreement procedures are necessary because of the development of complex intrasubject and intersubject research designs. The present investigation developed applications of log-linear, latent class, and weighted least squares procedures for the analysis of multivariate repeated-measures designs. These computations tested the model-data fit and calculated the multivariate measure of the magnitude of agreement under the quasi-equiprobability and quasi-independence models. Applications of these computations were illustrated with real and hypothetical observational data. It was demonstrated that employing log-linear, latent class, and weighted least squares computations resulted in identical multivariate model-data fits with equivalent chi-square values. Moreover, the application of these three methodologies also produced identical measures of the degree of agreement at each point in time and for the multivariate average. The multivariate methods that were developed also included procedures for measuring the probability of agreement for a single response classification, or a subset of classifications, from a larger set. In addition, procedures were developed to analyze occurrences of systematic observed disagreement within the multivariate tables. The consistency of dichotomous and polytomous classifications over repeated assessments of identical examinees was also suggested as a means of conceptualizing criterion-referenced reliability. By applying the univariate and multivariate models described, the reliability of these classifications across repeated testings could be calculated. The procedures utilizing the log-linear, latent structure, and weighted least squares concepts for measuring agreement have the advantages of (1) yielding a coefficient of agreement that varies between zero and one and measures agreement in terms of the probability that the observers' judgments will agree, as estimated under a quasi-equiprobability or quasi-independence model, (2) correcting for the proportion of "chance" agreement, and (3) providing a directly interpretable coefficient of "no agreement." Thus, these multivariate procedures may be regarded as a more refined psychometric technology for measuring inter-observer agreement and criterion-referenced test reliability.
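The following is a minimal sketch, not the dissertation's own computations: it fits a quasi-independence log-linear model to a single hypothetical two-observer agreement table via a Poisson GLM (using statsmodels) to assess model-data fit, and then reports a familiar chance-corrected agreement summary (Cohen's kappa) as a point of reference. The counts, the use of kappa, and the statsmodels-based fitting are all assumptions for illustration, not the multivariate procedures described above.

```python
# Sketch only: quasi-independence fit for one observer-by-observer table,
# plus a simple chance-corrected agreement summary. Counts are hypothetical.
import numpy as np
import statsmodels.api as sm

# Hypothetical cross-classification of two observers' judgments:
# rows = observer A's category, columns = observer B's category.
counts = np.array([
    [40.0,  5.0,  3.0],
    [ 4.0, 35.0,  6.0],
    [ 2.0,  7.0, 30.0],
])
k = counts.shape[0]

rows, cols = np.indices((k, k))
r, c, y = rows.ravel(), cols.ravel(), counts.ravel()

# Design matrix: intercept, dummy-coded row and column main effects, and one
# indicator per diagonal cell (the "quasi" terms that free up the diagonal).
columns = [np.ones_like(y)]
columns += [(r == i).astype(float) for i in range(1, k)]
columns += [(c == j).astype(float) for j in range(1, k)]
columns += [((r == i) & (c == i)).astype(float) for i in range(k)]
X = np.column_stack(columns)

fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(f"quasi-independence deviance = {fit.deviance:.3f} on {fit.df_resid:.0f} df")

# Cohen's kappa as a familiar chance-corrected comparison (not the
# dissertation's coefficient): observed agreement versus the agreement
# expected if the two observers' marginal distributions were independent.
p = counts / counts.sum()
p_obs = np.trace(p)
p_chance = p.sum(axis=1) @ p.sum(axis=0)
kappa = (p_obs - p_chance) / (1 - p_chance)
print(f"observed = {p_obs:.3f}, chance = {p_chance:.3f}, kappa = {kappa:.3f}")
```

A small deviance relative to the residual degrees of freedom indicates that the quasi-independence model fits the off-diagonal (disagreement) cells well; the chance-corrected coefficient, like the coefficients described in the abstract, lies between zero and one when agreement exceeds chance.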