
The F ratio is the ratio of variances or variance-like quantities. A simple example is testing, in analysis of variance, whether the variance between groups is significantly larger than the variance within groups. The general notion underlying such a test is that the differences between individuals within groups are only random differences (error variance), whereas the differences between group means stand out above chance (alternative hypothesis).

An Example

An example of calculating this can be found when the partition of variance is dealt with. A series of scores 2 2 3 4 4 5 6 7 8 9, with a mean of 5 and a variation of 54, is partitioned into two groups. We can think of two groups (e.g., a group of men and a group of women) for which the scores of a quantitative variable are considered—let us say, their smoking behavior measured as the number of cigarettes per day. The group of men with scores 2 2 3 4 4 has a mean of 3 and a variation of 4. The group of women with scores 5 6 7 8 9 has a mean of 7 and a variation of 10. The within-group variation is 4 + 10 = 14. The between-group variation is defined by the group means 3 and 7: their mean is 5 and their variation is 8. The between-group variation at the level of individuals is obtained by multiplying the variation between the group means by the number of individuals per group: 8 × 5 = 40. It appears that 54 = 40 + 14, or SST = SSB + SSW (sum of squares total = sum of squares between + sum of squares within).

An analogous partition of the total into a between and a within component can be carried out for the degrees of freedom. One degree of freedom is lost to the mean of a series. Therefore, there are 9 degrees of freedom for the total of 10 scores. For the between component, we calculate with 2 group means, so that there is 1 degree of freedom. For the within component, we calculate with 2 × 5 individuals, each time losing a degree of freedom to the group mean, so that 2 × (5 − 1) = 8 remain. We see here, too, that total = between + within, because 9 = 1 + 8. Should one now want to work with variances instead of variations (mean squares, MS, instead of sums of squares, SS), then one must divide each of the variations concerned by the corresponding degrees of freedom: MSB = 40/1 and MSW = 14/8.
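The partition SST = SSB + SSW can be verified numerically. The sketch below reproduces the worked example in plain Python; the helper name `variation` is illustrative, chosen to match the article's terminology for a sum of squared deviations.

```python
# Verify the partition SST = SSB + SSW for the worked example.
men = [2, 2, 3, 4, 4]      # group 1 scores (mean 3)
women = [5, 6, 7, 8, 9]    # group 2 scores (mean 7)
scores = men + women       # all 10 scores (mean 5)

def variation(xs):
    """Sum of squared deviations from the mean (the article's 'variation', SS)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs)

sst = variation(scores)                            # total variation
ssw = variation(men) + variation(women)            # within-group variation
group_means = [sum(men) / len(men), sum(women) / len(women)]
ssb = len(men) * variation(group_means)            # between-group variation

print(sst, ssb, ssw)   # 54.0 40.0 14.0
```

Running this confirms the numbers in the text: 54 = 40 + 14.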
The ratio of this between-variance MSB and the within-variance MSW is a statistical quantity F = (40/1)/(14/8) = 22.86, whose sampling distribution, for 1 degree of freedom in the numerator and 8 in the denominator, is the F distribution (the symbol F was chosen to commemorate the contributions of Sir Ronald Fisher). The critical value of F for 1 degree of freedom in the numerator and 8 degrees of freedom in the denominator is found in statistical tables of the theoretical F distribution and is equal to 5.32 for a postulated α of .05. The value of the F ratio found in our empirical example is greater, and we can therefore, even with these small numbers, reject the null hypothesis at the .05 significance level. The null hypothesis states that the ratio MSB/MSW is equal to 1; in other words, that the differences between groups (MSB) are not greater than the randomly assumed differences between individuals within groups (MSW = error variance).
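The final test statistic follows directly from the mean squares. A minimal sketch, taking the sums of squares and the tabled critical value 5.32 from the text:

```python
# Compute the F ratio from the sums of squares derived in the text.
ssb, ssw = 40.0, 14.0          # between- and within-group variation (SS)
df_between, df_within = 1, 8   # degrees of freedom

msb = ssb / df_between         # between-group mean square, MSB = 40/1
msw = ssw / df_within          # within-group mean square,  MSW = 14/8

f_ratio = msb / msw
critical_value = 5.32          # F(1, 8) at alpha = .05, from the F table

print(round(f_ratio, 2), f_ratio > critical_value)   # 22.86 True
```

Since 22.86 exceeds 5.32, the null hypothesis is rejected, as the text concludes.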
