
A confidence interval (CI) is an estimated range of values, calculated from a sample, that is expected to contain a population value. One common use of CIs is in reporting standardized test results, such as those from the Iowa Test of Basic Skills (ITBS) (Hoover et al., 1993). A student who takes the ITBS receives an overall score on the mathematics subtest. Because the observed test score is an imperfect reflection of the test-taker's true mathematical ability, the test-taker's true (or error-free) score will differ by some amount from the observed score. A margin of error is therefore added to and subtracted from the observed test score to form a plausible range within which the person's true score is likely to lie. The width of the interval indicates the degree of uncertainty about how accurately the observed score reflects the true score: the wider the interval, the greater the uncertainty. CIs are also available for other statistics, such as group means and percentages, to indicate the degree of error in the observed value of the statistic.
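
The following is a minimal sketch, not part of the original entry, of how such an interval might be computed. It assumes a hypothetical observed test score and a hypothetical standard error of measurement (SEM), and uses the conventional z value of 1.96 for a 95% confidence level; the specific numbers are illustrations only.

import math

# Hypothetical values for illustration (not from the ITBS or the entry above)
observed_score = 72.0   # a student's observed mathematics subtest score
sem = 3.5               # assumed standard error of measurement for the subtest
z = 1.96                # z value corresponding to a 95% confidence level

# The margin of error is the z value times the SEM; the CI is the
# observed score plus or minus that margin.
margin = z * sem
lower = observed_score - margin
upper = observed_score + margin

print(f"Observed score: {observed_score}")
print(f"95% CI for the true score: [{lower:.1f}, {upper:.1f}]")

With these assumed values, the interval runs from about 65.1 to 78.9, so a wider SEM (more measurement error) would produce a wider, less certain interval.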

Michael Finger

References and Further Reading

Hoover, H., Hieronymus, A., Frisbie, D., & Dunbar, S. (1993). Iowa Test of Basic Skills. Chicago: Riverside.