
Type I error refers to one of two kinds of error of inference that can be made during statistical hypothesis testing. The concept was introduced by J. Neyman and E. Pearson in 1928 and formalized in 1933. A Type I error occurs when the null hypothesis (H0), that there is no effect or association, is rejected when it is actually true. A Type I error is therefore often referred to as a false positive: the hypothesis test indicated an effect or association when in fact there was none.

In contrast, a Type II error occurs when the null hypothesis is not rejected when it is actually false. The relationship between Type I and Type II errors is summarized in Table 1.

The probability of a Type I error is usually denoted by the Greek letter alpha (α) and is often called the significance level or the Type I error rate; in the table, the Greek letter beta (β) denotes the probability of a Type II error. In most studies, the probability of a Type I error is chosen to be small (for example, 0.1, 0.05, or 0.01), whether expressed as a proportion, a percentage (10%, 5%, or 1%), or odds (1 time in 10, 1 time in 20, or 1 time in 100). Selecting an alpha level of 0.05, for example, means that if the test were conducted many times in situations where the null hypothesis is true, about 5% of the tests would be expected to produce an erroneously positive result. A Type I error is an error due to chance, not to a systematic error such as model misspecification or confounding.
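This long-run interpretation of alpha can be checked with a small simulation, sketched below. The simulation is not part of the original entry; the choice of a two-sided z-test, the sample size of 30, and the random seed are all illustrative assumptions.

```python
import math
import random

def two_sided_z_p_value(sample, mu0=0.0, sigma=1.0):
    """Two-sided z-test p-value for H0: mean = mu0, with known sigma."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    # Standard normal CDF computed via the error function.
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))
    return 2.0 * (1.0 - phi)

random.seed(1)
alpha = 0.05
trials = 10_000
false_positives = 0
for _ in range(trials):
    # The null hypothesis is true here: the data really come from N(0, 1).
    sample = [random.gauss(0.0, 1.0) for _ in range(30)]
    if two_sided_z_p_value(sample) < alpha:
        false_positives += 1

print(false_positives / trials)  # close to alpha = 0.05
```

Because every simulated sample is drawn under a true null hypothesis, each rejection is by definition a Type I error, and the observed rejection rate hovers around the chosen alpha.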

A Type I error can be illustrated with the example of a disease diagnostic test. The null hypothesis is that a person is healthy and does not have a particular disease. If the result of a blood test to screen for the disease is positive, the probability that the person has the disease is high; however, because of the test used, a healthy person may also show a positive test result by chance. Such a false positive result is a Type I error for the disease diagnostic test. Note that what counts as a Type I error depends on how the hypothesis test is formulated, that is, whether "healthy" or "sick" is taken as the null condition. In survey research, an example of a Type I error would occur when a pollster finds a statistically significant association between age and attitudes toward normalization of U.S. relations with Cuba when in fact no such relationship exists apart from the particular polling data set.

Table 1 Type I and Type II errors in hypothesis testing

Decision               Null hypothesis true        Null hypothesis false
Reject H0              Type I error (α)            Correct decision (1 − β)
Fail to reject H0      Correct decision (1 − α)    Type II error (β)

When testing multiple hypotheses simultaneously (e.g., conducting post hoc testing or data mining), one needs to consider that the probability of observing at least one false positive result increases with the number of tests. In particular, the family-wise Type I error rate is the probability of making one or more false discoveries, or Type I errors, across all the hypotheses when performing multiple pairwise tests. In this situation, an adjustment for multiple comparisons, such as the Bonferroni, Tukey, or Scheffé adjustment, is needed. When the number of tests is large, such conservative adjustments often require prohibitively small individual significance levels. Y. Benjamini and Y. Hochberg proposed that, in such situations, other criteria, such as the false discovery rate, are more appropriate when deciding whether to reject a particular hypothesis.
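Two of the approaches named above can be sketched in code. The following is an illustrative implementation of the Bonferroni correction and the Benjamini–Hochberg step-up procedure; the function names and example p-values are hypothetical, not from the entry.

```python
def bonferroni(pvals, alpha=0.05):
    """Reject hypothesis i if p_i <= alpha / m (controls family-wise error)."""
    m = len(pvals)
    return [p <= alpha / m for p in pvals]

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure controlling the FDR at level q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices, smallest p first
    # Find the largest rank k such that p_(k) <= k * q / m.
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * q / m:
            k_max = rank
    # Reject the hypotheses with the k_max smallest p-values.
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject

pvals = [0.01, 0.02, 0.03, 0.04, 0.05]
print(bonferroni(pvals))          # [True, False, False, False, False]
print(benjamini_hochberg(pvals))  # [True, True, True, True, True]
```

On these example p-values, Bonferroni rejects only the smallest one (its per-test threshold is 0.05/5 = 0.01), while the less conservative Benjamini–Hochberg procedure rejects all five, illustrating why false-discovery-rate control is preferred when the number of tests is large.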
