Intercoder Reliability Techniques: Cohen’s Kappa

Cohen’s kappa (κ) is a classic technique for measuring the level of consistency between two raters. This entry discusses measuring intercoder reliability using κ and presents two approaches to characterizing κ.
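In its standard formulation, κ = (p_o − p_e) / (1 − p_e), where p_o is the observed proportion of agreement between the two raters and p_e is the proportion of agreement expected by chance, computed from each rater's marginal distribution over the coding categories. A κ of 1 indicates perfect agreement, 0 indicates agreement no better than chance, and negative values indicate agreement worse than chance.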

Measuring Intercoder Reliability

Suppose a researcher is investigating the extent to which a particular news company produces reports in favor of a certain presidential candidate. The investigator would first generate a sampling frame (e.g., news aired or published within a certain time period) and then randomly select a predetermined number of news articles for analysis. Finally, the researcher would create a coding protocol whereby each unit of analysis (e.g., word, sentence, paragraph, whole article) can be judged as containing or not containing elements conveying favorable attitudes toward the candidate (e.g., presence or absence of such elements). Agreement between two coders applying this protocol can then be summarized with κ, as sketched below.
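A minimal sketch of this computation in Python, assuming a hypothetical binary coding protocol (1 = favorable element present, 0 = absent) and made-up judgments from two coders:

    from collections import Counter

    def cohens_kappa(codes_a, codes_b):
        """Cohen's kappa for two coders' judgments of the same units."""
        n = len(codes_a)
        # Observed agreement: proportion of units coded identically.
        p_o = sum(a == b for a, b in zip(codes_a, codes_b)) / n
        # Chance agreement: for each category, the product of the two
        # coders' marginal proportions, summed over categories.
        freq_a, freq_b = Counter(codes_a), Counter(codes_b)
        p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
        if p_e == 1.0:
            return 1.0  # degenerate case: both coders used one identical category throughout
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical judgments for ten sampled articles
    coder_1 = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
    coder_2 = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
    print(round(cohens_kappa(coder_1, coder_2), 3))  # 0.583

With these hypothetical data the coders agree on 8 of the 10 units (p_o = 0.8), chance alone predicts p_e = 0.52, and so κ = 0.28/0.48 ≈ 0.583. In practice, a library routine such as scikit-learn's cohen_kappa_score performs the same computation.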
