The kappa statistic is a measure of agreement, corrected for chance, for a categorical variable. For example, if two radiologists each assess the same set of patients, the kappa is one way to measure how well their conclusions agree. The kappa may be used when the rating scale for each patient is binary or categorical. With either a large number of ordinal categories (such as a scale from 0 to 20) or a continuous rating scale, Pearson's correlation coefficient would provide a better assessment of agreement than the kappa.

The formula for the kappa is k = (po − pe) / (1 − pe), where po is the proportion of observed agreement (the sum of the observed values of the cells on the ...
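The formula above can be sketched as a short function. This is a minimal illustration, not code from the source: the agreement table and the patient counts in it are made-up assumptions, with rows for one rater's calls and columns for the other's.

```python
def cohens_kappa(table):
    """Kappa from a square agreement table (rows: rater A, cols: rater B)."""
    n = sum(sum(row) for row in table)
    k = len(table)
    # po: observed agreement, the diagonal cells divided by the total.
    p_o = sum(table[i][i] for i in range(k)) / n
    # pe: chance agreement, the product of each category's marginal
    # proportions, summed over categories.
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    p_e = sum(row_tot[i] * col_tot[i] for i in range(k)) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two radiologists rating 100 scans as
# positive/negative; cell counts are invented for illustration.
table = [[40, 10],
         [5, 45]]
print(round(cohens_kappa(table), 3))
```

Here po = 85/100 = 0.85 and pe = 0.50, giving a kappa of 0.7, i.e., agreement well above what chance alone would produce.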
