In the workplace, employees may provide ratings about their supervisors, teams, peers, and the organization as a whole. These ratings are often provided by multiple raters (i.e., a group of raters). The similarity of one person’s ratings to the ratings provided by the other members of the group is often an important concern and can be evaluated by examining the consistency of ratings, the agreement of ratings, or both. Interrater reliability (IRR) refers to the extent to which ratings of multiple targets, provided by multiple raters, exhibit relative consistency. Interrater agreement (IRA) reflects the extent to which raters provide the same ratings. Although both IRR and IRA are used to estimate the similarity of ratings, they are used in different ways to answer different research questions.
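To make the distinction concrete, the sketch below contrasts the two indices. It is a minimal illustration, not a prescribed procedure: the ratings, the 7-point scale, and the choice of a Pearson correlation for consistency and the rwg index (James, Demaree, & Wolf, 1984) for agreement are all illustrative assumptions.

```python
import numpy as np

# --- IRR: relative consistency across multiple targets ---
# Two hypothetical raters score five targets on a 7-point scale.
# Rater B matches rater A's rank ordering but rates every target 2 points lower.
rater_a = np.array([6, 7, 5, 6, 4], dtype=float)
rater_b = rater_a - 2
r = np.corrcoef(rater_a, rater_b)[0, 1]
print(f"Consistency (Pearson r): {r:.2f}")   # 1.00: perfectly consistent

# --- IRA: absolute agreement within a group rating a single target ---
# rwg compares the observed variance of the group's ratings to the variance
# expected if raters responded uniformly at random on an A-point scale.
ratings = np.array([4, 5, 4, 4, 5], dtype=float)  # five raters, one target
A = 7
sigma_eu = (A**2 - 1) / 12                # uniform-null variance
rwg = 1 - ratings.var(ddof=1) / sigma_eu
print(f"Agreement (rwg): {rwg:.2f}")      # 0.93: strong agreement
```

Because rater B mirrors rater A’s rank ordering while never giving the same rating, the pair shows perfect consistency yet poor agreement, which is why the two indices answer different questions about rating similarity.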
