
The F test was named by George W. Snedecor in honor of its developer, Sir Ronald A. Fisher. F tests are used for a number of purposes, including comparison of two variances, analysis of variance (ANOVA), and multiple regression. An F statistic is a statistic having an F distribution.

The F Distribution

The F distribution is related to the chi-square (χ²) distribution.

A random variable has a chi-square distribution with m degrees of freedom (d.f.) if it is distributed as the sum of squares of m independent standard normal random variables. The chi-square distribution arises in analysis of variance and regression analysis because the sum of squared deviations (numerator of the variance) of the dependent variable is decomposed into parts having chi-square distributions under the standard assumptions on the errors in the model.

The F distribution arises because one takes ratios of the various terms in the decomposition of the overall sum of squared deviations. When the errors in the model are normally distributed, these terms have distributions related to chi-square distributions, and the relevant ratios have F distributions. Mathematically, the F distribution with m and n d.f. is the distribution of

F = (U/m) / (V/n),

where U and V are statistically independent and distributed according to chi-square distributions with m and n d.f., respectively.
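The ratio definition above can be checked with a short simulation. The sketch below (plain Python, no statistics libraries; the function names and the choice m = 5, n = 10 are illustrative) builds U and V as sums of squared standard normals and uses the known fact that an F(m, n) variable has mean n/(n − 2) when n > 2.

```python
import random

def chi_square(df, rng):
    # Sum of squares of df independent standard normal variables.
    return sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(df))

def f_variate(m, n, rng):
    # F = (U/m) / (V/n), with U ~ chi-square(m) and V ~ chi-square(n).
    return (chi_square(m, rng) / m) / (chi_square(n, rng) / n)

rng = random.Random(0)
m, n = 5, 10
draws = [f_variate(m, n, rng) for _ in range(200_000)]
mean = sum(draws) / len(draws)
# For n > 2, an F(m, n) variable has mean n / (n - 2) = 1.25 here.
print(mean)
```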

F Tests

Given a null hypothesis H0 and a significance level α, the corresponding F test rejects H0 if the value of the F statistic is large; more precisely, if F > Fm,n;α, the upper-α quantile of the Fm,n distribution. The values of m and n depend upon the particular problem (comparing variances, ANOVA, multiple regression). The achieved (descriptive) level of significance (p value) of the test is the probability that a variable with the Fm,n distribution exceeds the observed value of the statistic F. The null hypothesis is rejected if p < α.
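As a sketch of this rejection rule, the code below estimates the p value P(Fm,n > F) by simulation. The numbers (m = 2, n = 10, observed F = 4.0) are hypothetical, chosen because for m = 2 the tail probability has the closed form (1 + 2F/n)^(−n/2), against which the estimate can be checked.

```python
import random

def chi_square(df, rng):
    # Sum of squares of df independent standard normal variables.
    return sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(df))

def f_p_value(f_obs, m, n, reps, rng):
    # Estimate p = P(F(m, n) > f_obs) as the fraction of simulated
    # F variates that exceed the observed statistic.
    exceed = sum(
        (chi_square(m, rng) / m) / (chi_square(n, rng) / n) > f_obs
        for _ in range(reps)
    )
    return exceed / reps

rng = random.Random(1)
m, n, f_obs, alpha = 2, 10, 4.0, 0.05
p = f_p_value(f_obs, m, n, 200_000, rng)
# For m = 2 the exact tail probability is (1 + 2*f/n)**(-n/2), about 0.053,
# so here p > alpha and H0 is not rejected at the 5% level.
print(p)
```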

Many tables of the quantiles are available, but they can also be obtained in Excel and in statistical computer packages, and p values are given in the output of various procedures.

Comparing Two Variances

One sort of F test is that for comparing two independent variances. The sample variances are compared by taking their ratio. This problem is mentioned first, as other applications of F tests involve ratios of variances (or mean squares) as well. The hypothesis H0: σ1² = σ2² is rejected if the ratio

F = s1²/s2²

is large. The statistic has an F distribution if the samples are from normal distributions.
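This two-sample comparison can be carried out as below. The data are invented for illustration, and the p value is again estimated by simulating the Fm,n distribution rather than read from a table; Python's statistics.variance divides by N − 1, matching the sample variance used here.

```python
import random
import statistics

def chi_square(df, rng):
    # Sum of squares of df independent standard normal variables.
    return sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(df))

# Hypothetical samples (illustrative numbers, not from the article).
sample1 = [4.1, 5.2, 6.3, 5.8, 4.9, 5.5, 6.0, 4.4]
sample2 = [5.0, 5.1, 4.9, 5.2, 5.0, 4.8, 5.1, 5.3]

s1_sq = statistics.variance(sample1)  # divides by N - 1
s2_sq = statistics.variance(sample2)
f_stat = s1_sq / s2_sq
m, n = len(sample1) - 1, len(sample2) - 1

# Monte Carlo p value under H0: sigma1^2 = sigma2^2 (one-sided test).
rng = random.Random(2)
reps = 100_000
p = sum(
    (chi_square(m, rng) / m) / (chi_square(n, rng) / n) > f_stat
    for _ in range(reps)
) / reps
print(f_stat, p)
```

Here the first sample is far more variable than the second, so the ratio is large and the estimated p value is well below any usual α.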

Normal Distribution Theory

The distribution of the sample variance s² computed from a sample of size N from a normal distribution with variance σ² is given by the fact that (N − 1)s²/σ² is distributed according to a chi-square distribution with m = N − 1 d.f. So, for the variances s1² and s2² of two independent samples of sizes N1 and N2 from normal distributions, the variable U = (N1 − 1)s1²/σ1² is distributed according to chi-square with m = N1 − 1 d.f., and the variable V = (N2 − 1)s2²/σ2² is distributed according to chi-square with n = N2 − 1 d.f. If σ1² = σ2², the ratio F = s1²/s2² has an F distribution with m = N1 − 1 and n = N2 − 1 d.f.
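The distributional fact in this paragraph can be checked numerically. The sketch below repeatedly draws samples of size N = 8 from a normal distribution with σ = 2 (arbitrary illustrative values) and confirms that (N − 1)s²/σ² behaves like a chi-square variable with N − 1 = 7 d.f., whose mean is 7 and variance 14.

```python
import random
import statistics

rng = random.Random(3)
N, sigma = 8, 2.0
reps = 50_000

# Simulate (N - 1) * s^2 / sigma^2 for samples from Normal(0, sigma).
vals = []
for _ in range(reps):
    sample = [rng.gauss(0.0, sigma) for _ in range(N)]
    vals.append((N - 1) * statistics.variance(sample) / sigma ** 2)

mean = sum(vals) / reps
var = statistics.variance(vals)
# A chi-square variable with 7 d.f. has mean 7 and variance 14.
print(mean, var)
```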

...
