
Reliability, which is the consistency of test scores or ratings, is one of several ways to assess the quality of a measure. Three major types of reliability are briefly described below:

  • Interrater reliability refers to the consistency of the ratings made by two or more observers on the behavior (e.g., classroom behavior) of one or more students.
  • Internal consistency reliability (e.g., split-half reliability, parallel forms reliability, and coefficient alpha) refers to the degree of uniformity of the item content of a measure, that is, the degree to which the items on a measure are similar in content to one another. To evaluate internal consistency reliability, the measure is administered once and the homogeneity of its item content is examined.
  • Test-retest reliability involves repeated administrations of the same measure, such as a reading achievement test, to the same individuals on two or more occasions to determine whether the test scores obtained on the measure are similar or consistent over time.
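As an illustration of the single-administration approach described above, coefficient alpha can be estimated from one set of item scores. The function and sample ratings below are a hypothetical sketch, not part of the original entry; it assumes scores are stored as one row per respondent and one column per item.

```python
from statistics import variance  # sample variance (n - 1 denominator)

def cronbach_alpha(scores):
    """Coefficient alpha for scores[r][i]: respondent r's score on item i."""
    k = len(scores[0])                                   # number of items
    item_vars = [variance(col) for col in zip(*scores)]  # per-item variances
    total_var = variance([sum(row) for row in scores])   # variance of total scores
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical ratings: five respondents on a four-item scale.
ratings = [
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 3, 2, 3],
]
print(round(cronbach_alpha(ratings), 3))  # prints 0.941
```

Higher values (conventionally above about .70 to .80, depending on the purpose of the measure) indicate that the items hang together, consistent with the homogeneity of item content described above.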

The reliability of test scores or ratings is important to school psychologists and other professionals who administer tests or conduct classroom observations in educational settings. Test scores and observer ratings must be reliable for the results obtained from these measures and observation forms to be trusted, because educators and parents use them to make decisions about educational programs for students and to monitor student progress in those programs.

Patricia A. Lowe