The Kappa coefficient is widely used to assess categorical agreement between two raters or two methods, and it can be extended to more than two raters (methods). When using Kappa, its shortcomings should not be neglected: bias and prevalence effects lead to the well-known Kappa paradoxes. These problems can be mitigated by reporting other indices alongside Kappa, but such remedies remain unsatisfactory. This paper gives a critical survey of the Kappa coefficient, illustrated with a real-life example. A useful alternative statistical approach, the rank-invariant method, is also introduced and applied to analyze the disagreement between two raters.
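The prevalence paradox mentioned above can be sketched numerically. The snippet below is a minimal illustration (not code from the thesis): it computes Cohen's Kappa for two hypothetical 2x2 agreement tables that share the same observed agreement (90%) but differ in prevalence, yielding very different Kappa values.

```python
def cohen_kappa(table):
    """Cohen's Kappa for a 2x2 agreement table [[a, b], [c, d]].

    Kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e the agreement expected from the marginals.
    """
    a, b = table[0]
    c, d = table[1]
    n = a + b + c + d
    p_o = (a + d) / n  # observed agreement (diagonal proportion)
    # chance-expected agreement from the row and column marginals
    p_e = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Both tables have 90/100 cases on the diagonal (p_o = 0.90),
# but the skewed prevalence drags Kappa down sharply:
balanced = [[45, 5], [5, 45]]   # prevalence ~50%
skewed   = [[85, 5], [5, 5]]    # prevalence ~90%

print(cohen_kappa(balanced))    # 0.8
print(round(cohen_kappa(skewed), 3))  # 0.444
```

With identical observed agreement, Kappa falls from 0.80 to about 0.44 purely because of the skewed marginal distribution, which is exactly the prevalence effect the abstract refers to.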
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:uu-126672 |
Date | January 2010 |
Creators | Xier, Li |
Publisher | Uppsala universitet, Statistiska institutionen |
Source Sets | DiVA Archive at Upsalla University |
Language | English |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/masterThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |