About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Interrater Agreement and Reliability of Observed Behaviors: Comparing Percentage Agreement, Kappa, Correlation Coefficient, ICC and G Theory

Cao, Qian 02 October 2013 (has links)
The study of interrater agreement and interrater reliability has attracted extensive attention because judgments from multiple raters are subjective and may vary from rater to rater. To evaluate interrater agreement and interrater reliability, five methods or indices have been proposed: percentage agreement, the kappa coefficient, the correlation coefficient, the intraclass correlation coefficient (ICC), and generalizability (G) theory. In this study, we introduce these methods and discuss their advantages and disadvantages for evaluating interrater agreement and reliability. We then rank the five indices by their frequency of use in practice over the past five years. Finally, we illustrate how to use the five methods under different circumstances and provide SPSS and SAS code for analyzing interrater agreement and reliability. We apply these methods to data from the Parent-Child Interaction System of global ratings (PARCHISY) and conclude as follows: (1) ICC has been the most often used method for evaluating interrater reliability in the past five years, while generalizability theory has been the least often used; the G coefficient yields interrater reliability similar to weighted kappa and ICC on most items, based on the criteria. (2) When reliability is high, the different methods give consistent indications of interrater reliability under their respective criteria. When results are not consistent across methods, ICC and the G coefficient indicate better interrater reliability based on the criteria, and the two agree with each other.
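The abstract above lists five indices; the two simplest, percentage agreement and Cohen's kappa, can be sketched in a few lines of Python. This is an illustrative sketch, not the SPSS/SAS code the thesis provides, and the example ratings are invented:

```python
from collections import Counter

def percent_agreement(r1, r2):
    """Proportion of items on which the two raters assign the same category."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(r1)
    p_o = percent_agreement(r1, r2)
    c1, c2 = Counter(r1), Counter(r2)
    # Chance agreement: probability both raters pick the same category
    # if each rated independently according to their marginal frequencies.
    p_e = sum((c1[k] / n) * (c2[k] / n) for k in set(r1) | set(r2))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings of six observations by two raters
rater1 = ["low", "low", "high", "mid", "high", "mid"]
rater2 = ["low", "mid", "high", "mid", "high", "low"]
print(round(percent_agreement(rater1, rater2), 3))  # 0.667
print(round(cohens_kappa(rater1, rater2), 3))       # 0.5
```

Here kappa (0.5) is lower than raw agreement (0.667) because part of the observed agreement is expected by chance; that correction is the usual argument for kappa over plain percentage agreement.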
2

Call Me Old Fashioned - Is My Job Analysis Accurate or Not?

Gibson, Shanan Gwaltney IV 22 May 2001 (has links)
As a process designed to collect information about jobs, job analysis is one of the most fundamental aspects of personnel psychology. It forms the foundation upon which almost every other human resource management component is built, including selection, compensation, performance appraisal, and training program development. Despite considerable evidence of human fallibility in other judgment processes, many have followed the implicit assumption that job analysis information is accurate without actually examining this proposition. This study considers two potential sources of inaccuracy in job analysis ratings: the source of the ratings and the type of instrument used to collect them. By using less job-familiar raters and shorter, more holistic instruments, industrial-organizational psychologists have attempted to reduce the time and cost of the job analysis process; however, findings regarding the reliability and accuracy of such practices are questionable. Hypothesis tests in the current study indicated that decomposed measures of job behavior converged with an external job analysis to a greater degree than did holistic measures. Interrater agreement for all types of raters and across all types of instruments was found to be inadequate. Potential explanations for these findings are conjectured from the cognitive and social psychological domains, and directions for future research are noted. / Ph. D.
3

Interrater reliabilita vyšetřovacího setu klinických funkcí u pacientů s roztroušenou sklerózou mozkomíšní / Interrater reliability of assessment set of clinical features of patients with multiple sclerosis

Marková, Pavla January 2013 (has links)
Title: Interrater reliability of assessment set of clinical features of patients with multiple sclerosis Objectives: The aim of this thesis is to verify the interrater reliability of an assessment set of clinical features of patients with multiple sclerosis, whose purpose is to evaluate the stage of the patients' clinical condition sensitively and comprehensively. Methods: According to the inclusion criteria, patients with MS were selected by an independent neurologist, who determined the EDSS score and the duration of the disease. The patients were then evaluated with the assessment set by two independent physiotherapists. The assessment set of clinical features includes the Low-Contrast Letter Acuity Test, which tests contrast vision; the Nine Hole Peg Test, which investigates fine motor skills; the Timed 25-Foot Walk, which evaluates walking speed over a distance of 7.5 m; the Paced Auditory Serial Addition Test, which assesses cognitive function; the Motricity Index, which tests muscle strength; the Modified Ashworth Scale, which assesses spasticity; and the Berg Balance Scale, which evaluates balance. The set also includes tests evaluating righting, equilibrium, and protective reactions, a test evaluating knee hyperextension, and examinations of dysdiadochokinesia and ataxia. Results: High interrater reliability was confirmed for all tests in the examining set (ICC: 0.80 to 1), except for MAS...
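A criterion such as "ICC: 0.80 to 1" refers to the intraclass correlation coefficient. As a rough illustration of how one common form is computed, here is a minimal Python sketch of ICC(2,1) (two-way random effects, absolute agreement, single rater), built from the ANOVA mean squares in the Shrout and Fleiss formulation; the demonstration data are the classic six-subject, four-rater example from that literature, not data from this thesis:

```python
def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `scores` has one row per subject and one column per rater."""
    n, k = len(scores), len(scores[0])          # subjects, raters
    grand = sum(map(sum, scores)) / (n * k)
    subj_means = [sum(row) / k for row in scores]
    rater_means = [sum(row[j] for row in scores) / n for j in range(k)]
    # Two-way ANOVA sums of squares
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_rater = n * sum((m - grand) ** 2 for m in rater_means)
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_err = ss_total - ss_subj - ss_rater
    ms_r = ss_subj / (n - 1)                    # between-subjects mean square
    ms_c = ss_rater / (k - 1)                   # between-raters mean square
    ms_e = ss_err / ((n - 1) * (k - 1))         # residual mean square
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Classic Shrout & Fleiss demonstration data: 6 subjects rated by 4 raters
data = [[9, 2, 5, 8], [6, 1, 3, 2], [8, 4, 6, 8],
        [7, 1, 2, 6], [10, 5, 6, 9], [6, 2, 4, 7]]
print(round(icc_2_1(data), 2))  # 0.29
```

By the 0.80 criterion used in the thesis, an ICC of 0.29 would indicate poor interrater reliability; the thesis reports values between 0.80 and 1 for its test battery.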
4

Comparison of Video and Audio Rating Modalities for Assessment of Provider Fidelity to a Family-Centered, Evidence-Based Program

January 2019 (has links)
abstract: The current study assessed whether the interrater reliability and predictive validity of fidelity ratings differed significantly across the modalities of audio and video recordings. As empirically supported programs are moving to scale, attention to fidelity, the extent to which a program is delivered as intended, is essential because high fidelity is needed for positive program effects. Consequently, an important issue for prevention science is the development of feasible and acceptable methods for assessing fidelity. Currently, fidelity monitoring is rarely practiced, as the typical way of measuring fidelity, which uses video of sessions, is expensive, time-consuming, and intrusive. Audio recording has multiple advantages over video recording: 1) it is less intrusive; 2) equipment is less expensive; 3) recording procedures are simpler; 4) files are smaller so it takes less time to upload data and storage is less expensive; 5) recordings contain less identifying information; and 6) both clients and providers may be more willing to have sensitive interactions recorded with audio only. For these reasons, the use of audio recording may facilitate the monitoring of fidelity and increase the acceptability of both the intervention and implementation models, which may serve to broaden the scope of the families reached and improve the quality of the services provided. The current study compared the reliability and validity of fidelity ratings across audio and video rating modalities using 77 feedback sessions drawn from a larger randomized controlled trial of the Family Check-Up (FCU). Coders rated fidelity and caregiver in-session engagement at the age 2 feedback session. The composite fidelity and caregiver engagement scores were tested using path analysis to examine whether they predicted parenting behavior at age 3. Twenty percent of the sessions were double coded to assess interrater reliability. 
The interrater reliability and predictive validity of fidelity scores and caregiver engagement did not significantly differ across rating modality. However, caution must be used in interpreting these results because the interrater reliabilities in both conditions were low. Possible explanations for the low reliability, limitations of the current study, and directions for future research are discussed. / Dissertation/Thesis / Doctoral Dissertation Psychology 2019
5

Estimating Performance Mean and Variability With Distributional Rating Scales: A Field Study Towards Improved Performance Measurement

Colatat, Mahyulee C. 09 April 2008 (has links)
No description available.
6

Experts' Assessment of Color in Burn-Wound Photographs As a Predictor of Skin Graft

Baker, Rose Ann Urdiales 01 July 2011 (has links)
No description available.
7

Assessment and Reporting of Intercoder Reliability in Published Meta-Analyses Related to Preschool Through Grade 12 Education

Raffle, Holly 10 October 2006 (has links)
No description available.
8

Interrater Agreement of Incumbent Job Specification Importance Ratings: Rater, Occupation, and Item Effects

Burnkrant, Steven Richard 27 October 2003 (has links)
Despite the importance of job specifications to much of industrial and organizational psychology, little is known of their reliability or validity. Because job specifications are developed based on input from subject matter experts, interrater agreement is a necessary condition for their validity. The purpose of the present research is to examine the validity of job specifications by assessing the level of agreement in ratings and the effects of occupational tenure, occupational complexity, and the abstractness of rated worker requirements. Based on the existing literature, it was hypothesized that (1) agreement will be worse than acceptable levels, (2) agreement will be higher among those with longer tenure, (3) agreement will be lower in more complex occupations, (4) the effect of occupational tenure will be more pronounced in complex than simple occupations, (5) agreement will be higher on more abstract items, and (6) agreement will be lowest for concrete KSAOs in complex occupations. These hypotheses were tested using ratings from 38,041 incumbents in 61 diverse occupations in the Federal government. Consistent with Hypothesis 1, agreement failed to reach acceptable levels in nearly every case, whether measured with the awg or various forms of the rwg agreement indices. However, tenure, occupational complexity, and item abstractness had little effect on ratings, whether agreement was measured with rwg or awg. The most likely explanation for these null findings is that the disagreement reflected a coarse classification system that overshadowed the effects of tenure, complexity, and abstractness. The existence of meaningful subgroups within a single title threatens the content validity of job specifications: the extent to which they include all relevant and predictive KSAOs. Future research must focus on the existence of such subgroups, their consequences, and ways of identifying them. / Ph. D.
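The rwg index used in the abstract above compares the observed variance of judges' ratings on an item with the variance expected under a uniform ("random responding") null distribution over the A scale points, sigma_EU^2 = (A^2 - 1) / 12. A minimal sketch of the single-item form, with hypothetical ratings on a 5-point scale:

```python
def rwg(ratings, num_options):
    """James, Demaree, and Wolf's r_wg for a single item.
    1 means perfect agreement; 0 means agreement no better than
    uniform random responding over `num_options` scale points."""
    n = len(ratings)
    mean = sum(ratings) / n
    s2 = sum((x - mean) ** 2 for x in ratings) / (n - 1)  # observed variance
    sigma_eu2 = (num_options ** 2 - 1) / 12               # uniform-null variance
    return 1 - s2 / sigma_eu2

# Five hypothetical judges rating one item on a 1-5 scale
print(round(rwg([4, 4, 5, 4, 5], 5), 3))  # 0.85
```

The awg index reported alongside rwg in the study rests on a different reference variance; both indices approach 1 as the judges' ratings converge.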
9

Interrater Reliability of the Psychological Rating Scale for Diagnostic Classification

Nicolette, Myrna 12 1900 (has links)
The poor reliability of the DSM diagnostic system has been a major concern for many researchers and clinicians. Standardized interview techniques and rating scales have been shown to be effective in increasing interrater reliability in diagnosis and classification. This study hypothesized that using the Psychological Rating Scale for Diagnostic Classification to assess an individual's problematic behaviors, symptoms, or other characteristics would increase interrater reliability, subsequently leading to higher diagnostic agreement between raters and with DSM-III classification. This hypothesis was strongly supported by high overall profile reliability and high individual profile reliability. Therefore, utilization of this rating scale would enhance the accuracy of diagnosis and add to the educational efforts of technical personnel and professionals in related disciplines.
10

Evaluierung bestehender Prüfungsmodalitäten in der Zahnärztlichen Vorprüfung und Implementierung neuer Prüfungsstrukturen / The evaluation of existing examination procedures of the dental preliminary exam and the implementation of a novel assessment tool

Ellerbrock, Maike 02 April 2019 (has links)
No description available.
