
Tukey's Honestly Significant Difference (HSD)

The Tukey's honestly significant difference test (Tukey's HSD) is used to test differences among sample means for significance. It tests all pairwise differences while controlling the probability of making one or more Type I errors. Tukey's HSD is one of several tests designed for this purpose, and it fully controls this Type I error rate, whereas other tests, such as the Newman–Keuls test, lead to an inflated Type I error rate in some situations. This entry describes how to conduct and interpret Tukey's HSD test.

Philosophy

It is rare for any two experimental treatments to have identical effects. For example, it is implausible that two drug treatments could produce the same relief from depression if measured to 100 decimal places. As a result, the role of inferential statistics in these situations is not to reject the null hypothesis of no difference, because that hypothesis is false on its face. Instead, it is to determine whether a confident statement can be made about the direction of the difference. Depending on the results of the inferential test, a researcher might be able to state with confidence the direction of the difference, might have a hint about the direction, or might have little or no information about the direction. Clearly, an all-or-nothing decision rule in which one either rejects or fails to reject the null hypothesis is not consistent with this approach. Instead, the probability value obtained in the inferential test is used to aid in the assessment of the confidence one should have in the direction of the difference.

Because some treatments of inferential statistics take the more traditional approach of testing whether a difference is exactly zero, it is important to show the correspondence between the approach taken here and the traditional approach. In the traditional approach, a Type I error is defined as rejecting a true null hypothesis. Here, a Type I error is defined as making a confident claim about the direction of the difference when there is either no difference or when the difference is in the opposite direction of the claimed difference. The Type I error rate in this context depends on the size of the true difference between means: If there is a large difference between the means, then the probability of getting the direction wrong is smaller than when there is a small difference between means. In computing the Type I error rate, the conservative approach is to assume that the true difference is zero. Although this might not often represent reality, it is the best way to ensure that the Type I error rate is controlled and is adopted here.

Table 1 The Six Comparisons Among Four Treatment Conditions

Comparison    Conditions Compared
1             1 vs. 2
2             1 vs. 3
3             1 vs. 4
4             2 vs. 3
5             2 vs. 4
6             3 vs. 4

The Problem of Multiple Comparisons

If a researcher compared the means of four treatment conditions, there would be six pairwise comparisons. These comparisons are shown in Table 1.
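The six comparisons in Table 1 are simply the number of ways to choose 2 conditions from 4, that is, "4 choose 2" = 6. As a quick illustration (a minimal sketch, not part of the original entry), the pairs can be enumerated with Python's standard library:

```python
from itertools import combinations

# All pairwise comparisons among k = 4 treatment conditions.
conditions = [1, 2, 3, 4]
pairs = list(combinations(conditions, 2))

print(len(pairs))  # 6 pairwise comparisons
for i, (a, b) in enumerate(pairs, start=1):
    print(f"Comparison {i}: condition {a} vs. condition {b}")
```

With k conditions there are k(k − 1)/2 such pairs, so the number of comparisons grows quickly as conditions are added (10 comparisons for 5 conditions, 15 for 6, and so on).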

It is important to distinguish between the following two error probabilities: 1) the probability that any single comparison results in a Type I error and 2) the probability that one or more comparisons result in a Type I error. The former probability is referred to as the per-comparison error rate; the latter probability is referred to as the family-wise error rate.
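To see why the family-wise error rate matters, consider the standard textbook illustration: if the m comparisons were independent and each were tested at a per-comparison rate α, the probability of at least one Type I error would be 1 − (1 − α)^m. (The six comparisons among four means are not actually independent, so this is an illustrative approximation rather than the exact family-wise rate; the numbers below are this sketch's assumption, not figures from the original entry.)

```python
# Illustrative family-wise error rate under an independence assumption:
# P(at least one Type I error) = 1 - (1 - alpha)^m
alpha = 0.05  # per-comparison error rate
m = 6         # number of pairwise comparisons among 4 conditions

family_wise = 1 - (1 - alpha) ** m
print(round(family_wise, 4))  # about 0.2649
```

Even with only six comparisons, the chance of at least one false confident claim rises from 0.05 to roughly 0.26, which is the inflation that procedures such as Tukey's HSD are designed to prevent.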

...
