
Olive Jean Dunn's work was one of the earliest attempts to give researchers a way to select a limited number of contrasts in advance and test them from among a set of mean scores. Fisher, Scheffé, and Tukey had already provided techniques for testing comparisons between all possible linear contrasts among a set of normally distributed variables. Dunn's contribution meant that researchers interested in only a few comparisons no longer needed to test all possible ones, while still maintaining control over the inflated Type I error rate that multiple testing produces.

Although the groundwork for Dunn's multiple comparison tests is usually attributed to Carlo Emilio Bonferroni, it actually originated with George Boole, who worked in the middle of the 19th century. Boole's inequality (also known as the union bound) states that for any finite set of events, the probability that at least one of the events will occur is no greater than the sum of the probabilities of the individual events. Bonferroni extended Boole's inequality by demonstrating how upper and lower bounds (i.e., a confidence interval) could be calculated for the probability of the finite union of events. These are called Bonferroni's inequalities.
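Boole's inequality can be checked with a small numeric sketch (illustrative only, not from the original entry): for independent events, the exact probability of at least one occurrence never exceeds the sum of the individual probabilities.

```python
# Boole's inequality (union bound): P(at least one event) <= sum of P(each event).
# Illustrative values: three independent events, each with probability 0.05.
p = [0.05, 0.05, 0.05]

union_bound = sum(p)            # upper bound given by Boole's inequality: 0.15
exact = 1 - (1 - 0.05) ** 3     # exact P(at least one), assuming independence

assert exact <= union_bound     # the bound holds: ~0.1426 <= 0.15
```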

Dunn, and later Dunn and Massey, used a Bonferroni inequality to construct simultaneous confidence intervals for k means, m comparisons, and v degrees of freedom based on the Student's t statistic. She demonstrated the differences in the confidence intervals obtained when the variances of the means were known and when the variances were unknown but assumed equal; she also showed how her confidence intervals could be used in fitting data to locate regression curves (e.g., growth in height or weight). Even so, no comprehensive table for different numbers of means, comparisons, and degrees of freedom was produced until B. J. R. Bailey did so in 1977. Bailey noted that Dunn's tables were incomplete, were rounded to two decimal places, and contained errors in the tabled values. Although Dunn conducted the initial work showing how complete tables might be constructed, Bailey honored Bonferroni, the method's forerunner, by titling his paper “Tables of the Bonferroni t Statistic”; nevertheless, the overlapping t values are, except for rounding, identical. To date, there remains confusion about the attribution of this multiple comparison method, no doubt partly because Bonferroni's publications were written in Italian.

Perhaps adding to the confusion, Zbyněk Šidák constructed a partial set of tables using the multiplicative inequality to control family-wise Type I error, whereas Dunn had employed the additive inequality for the same purpose. Šidák showed that the multiplicative inequality produced slightly smaller confidence intervals than the additive inequality; because smaller intervals increase the probability of finding statistically significant differences between pairs of means, the resulting test is slightly more powerful. Ten years later, Paul Games published a more complete set of tables using Šidák's method. Nowadays, one often sees references to the Dunn-Šidák multiple comparison test, but, as noted above, the two methods are not identical and produce somewhat different results.
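The difference between the additive (Dunn/Bonferroni) and multiplicative (Šidák) inequalities can be made concrete with a short sketch (the alpha and m values here are illustrative assumptions, not from the entry):

```python
# Per-comparison significance levels under the two adjustments.
alpha = 0.05  # family-wise Type I error rate (illustrative)
m = 6         # number of planned comparisons (illustrative)

bonferroni = alpha / m                # additive inequality (Dunn)
sidak = 1 - (1 - alpha) ** (1 / m)    # multiplicative inequality (Šidák)

# The Šidák per-comparison alpha (~0.00851) is slightly larger than the
# Bonferroni one (~0.00833), yielding slightly narrower confidence
# intervals and hence slightly more power.
assert sidak > bonferroni
```

For small alpha and m the two values are nearly identical, which is why the methods are so often conflated in practice.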

Why the Dunn Multiple Comparison Test is Used

Dunn's multiple comparison test is an adjustment applied when several comparisons are performed simultaneously. Although a given value of alpha may be appropriate for one individual comparison, it is not appropriate for the set of all comparisons: the chance of at least one Type I error grows with each additional test. To avoid this excess of Type I errors, alpha should be lowered to account for the number of comparisons tested.
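The adjustment above amounts to dividing the family-wise alpha by the number of planned comparisons. A minimal sketch, with illustrative values for alpha and m:

```python
# Dunn (Bonferroni) adjustment: test each of the m planned comparisons
# at alpha / m instead of alpha.
alpha = 0.05   # desired family-wise Type I error rate (illustrative)
m = 6          # number of planned comparisons, e.g., all pairs of 4 means

alpha_per_test = alpha / m   # ~0.00833 per comparison

# By Boole's inequality, the probability of at least one Type I error
# across all m tests is then at most m * (alpha / m) = alpha.
assert m * alpha_per_test <= alpha + 1e-12
```

Equivalently, each observed p value can be multiplied by m and compared against the original alpha; the two formulations reject the same hypotheses.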

...
