
Type II Error

Type II error refers to one of the two errors that can be made during hypothesis testing. The concept was introduced by J. Neyman and E. Pearson in 1928 and formalized in 1933. A Type II error occurs when the null hypothesis (H0), that there is no effect or association, fails to be rejected when it is actually false. A Type II error is often referred to as a false negative because the hypothesis test leads to the erroneous conclusion that no effect or association exists when, in fact, one does. In contrast, a Type I error occurs when the null hypothesis is rejected when it is actually true. The features of Type II and Type I errors are summarized in Table 1, above.

In the table, the probability of a Type II error is denoted by the Greek letter beta (β), and 1 − β is referred to as the statistical power of the test. It is natural to require that the probability of a Type II error be small; however, for a fixed sample size, decreasing β increases the probability of a Type I error, denoted by the Greek letter alpha (α).
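This tradeoff can be sketched numerically. The example below assumes a two-sided one-sample z-test with known standard deviation, an illustrative setting not taken from the article; the function names and the specific numbers (effect 0.5, σ = 1, n = 20) are hypothetical choices for the sketch.

```python
from statistics import NormalDist

_Z = NormalDist()  # standard normal distribution

def type2_error_rate(effect, sigma, n, alpha):
    """beta for a two-sided one-sample z-test of H0: mu = 0,
    evaluated at the specific alternative mu = effect."""
    z = _Z.inv_cdf(1 - alpha / 2)        # critical value z_{alpha/2}
    shift = effect * n ** 0.5 / sigma    # standardized true effect
    power = _Z.cdf(shift - z) + _Z.cdf(-shift - z)
    return 1 - power

# Same data (n = 20), same true effect; only alpha changes.
beta_05 = type2_error_rate(0.5, 1.0, 20, alpha=0.05)
beta_01 = type2_error_rate(0.5, 1.0, 20, alpha=0.01)
# Tightening alpha from 0.05 to 0.01 raises beta at the same n.
```

Shrinking the rejection region (smaller α) makes it harder to reject a false null hypothesis, so β grows; only a larger sample can reduce both errors at once.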

In many statistical tasks, data collection is limited by cost or feasibility. The usual strategy is therefore to fix the Type I error rate and collect enough data to give adequate power against appropriate alternative hypotheses. Although power requirements vary with the purpose of the study, a typical target is 0.8, which corresponds to a 20% Type II error rate. When data exist in abundance, however, this strategy will lead to rejecting the null hypothesis in favor of even tiny effects of little practical value. For practical use, it is important to have a clear and measurable statement of the alternative hypothesis. For example, if H0 for a continuous effect x states that there is no effect (e.g., x = 0) and the alternative hypothesis (H1) states that there is some effect (e.g., x ≠ 0), the concept of Type II error is not practical, because H1 covers every possible outcome except the single value specified by H0. Commonly, an alternative hypothesis is therefore stated in terms of a measurable effect size chosen for its scientific relevance.
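The "fix α, then collect enough data" strategy can be sketched with the standard sample-size formula for a two-sided z-test, n = ((z_{α/2} + z_{power}) · σ / δ)², where δ is the effect size named in the alternative hypothesis. The function name and example values here are illustrative assumptions, not part of the article.

```python
from math import ceil
from statistics import NormalDist

_Z = NormalDist()  # standard normal distribution

def sample_size_z(effect, sigma, power=0.8, alpha=0.05):
    """Approximate n for a two-sided one-sample z-test to detect a
    specific effect size with the requested power at level alpha."""
    z_a = _Z.inv_cdf(1 - alpha / 2)  # quantile for the Type I error rate
    z_b = _Z.inv_cdf(power)          # quantile for the target power
    return ceil(((z_a + z_b) * sigma / effect) ** 2)

n_medium = sample_size_z(0.5, 1.0)  # half-sigma effect, power 0.8
n_small = sample_size_z(0.2, 1.0)   # fifth-of-sigma effect, power 0.8
```

A smaller effect size in the alternative hypothesis drives the required n up quadratically, which is why a concrete, scientifically relevant effect size must be stated before the sample size can be planned.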

For example, the use of mammography to screen for breast cancer illustrates how Type II error operates. The null hypothesis is that the subject is healthy, so a positive test result does not necessarily mean that a woman has breast cancer. The main purpose of the mammogram is not to miss a cancer that is present, that is, to minimize the Type II error. However, tests such as mammograms must be designed to balance the risk of needless anxiety caused by a false positive result (Type I error) against the consequences of failing to detect the cancer (Type II error). In survey research, a Type II error would occur when a pollster fails to find a statistically significant association between age and attitudes toward normalization of U.S. relations with Cuba when such a relationship in fact exists in the population.
