
An F-test is any statistical hypothesis test whose test statistic follows an F probability distribution under the null hypothesis. The F-test is frequently associated with analysis of variance (ANOVA) and is most commonly used to test the null hypothesis that the means of normally distributed groups are equal, although it can be used to test a variety of other hypotheses. The F-test was devised as an extension of the t-test: F is equal to the squared value of t (F = t²). Although the F-test produces the same information as the t-test when testing one independent variable with a nondirectional hypothesis, the F-test has a distinct advantage over the t-test because multiple independent groups can easily be compared. Survey researchers often use the F-test because of its flexibility to compare multiple groups and to identify whether the relationship they are studying among a set or combination of independent variables could have occurred by chance.

For example, if a survey researcher hypothesizes that confidence in government differs between two groups of persons with different levels of education (e.g. those with a college degree and those without a college degree), a t-test and an F-test would produce the same result. More often, however, one is interested in comparing multiple independent variables or subsets of them. The F-test gives researchers the ability to examine the independent (main) effect of education, the combined effects of a set of socioeconomic status (SES) variables (e.g. education, income, and occupation), and the potential effects of interactions among these variables on confidence in government.
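The two-group equivalence can be illustrated with a short sketch using SciPy. The data below are simulated and the group labels are hypothetical; the point is only that for two groups the F statistic equals the squared t statistic and the two tests yield identical p-values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical confidence-in-government scores for two education groups
degree = rng.normal(62, 10, size=40)
no_degree = rng.normal(55, 10, size=40)

t, p_t = stats.ttest_ind(degree, no_degree)  # pooled-variance two-sample t-test
f, p_f = stats.f_oneway(degree, no_degree)   # one-way ANOVA F-test

# For two groups: F = t**2, and the (nondirectional) p-values coincide
```

With more than two groups, `stats.f_oneway` still applies, whereas the t-test would require multiple pairwise comparisons.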

F-tests are also often used to test the effects of subsets of independent variables when comparing nested regression models. For instance, the researcher could compare the F-tests from a model with only the SES variables, a model with a set of variables measuring satisfaction with government services (e.g. police, fire, water, and recreation), and an overall model with both sets of variables to determine whether, as a group, the SES and government services variables make a statistically significant contribution to explaining differences in confidence in government.
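The nested-model comparison described above can be sketched directly from residual sums of squares, using the partial F statistic F = ((RSS_reduced − RSS_full)/q) / (RSS_full/df_error), where q is the number of predictors added by the full model. The variables and coefficients below are simulated stand-ins for the SES and government-services blocks, not real survey data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 200
# Hypothetical predictor blocks: SES variables and services-satisfaction variables
ses = rng.normal(size=(n, 2))
services = rng.normal(size=(n, 2))
y = 1.0 + ses @ np.array([0.8, 0.5]) + services @ np.array([0.3, 0.1]) + rng.normal(size=n)

def rss(X, y):
    """Residual sum of squares from an OLS fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return resid @ resid

rss_reduced = rss(ses, y)                            # SES block only
rss_full = rss(np.column_stack([ses, services]), y)  # SES + services blocks

q = services.shape[1]                                # predictors added by full model
df_error = n - (ses.shape[1] + services.shape[1]) - 1
F = ((rss_reduced - rss_full) / q) / (rss_full / df_error)
p = stats.f.sf(F, q, df_error)  # p < .05 => the added block contributes significantly
```

The same comparison is available as a one-liner in most regression packages; computing it by hand makes clear that the test simply asks whether the full model's reduction in residual error is larger than chance would predict.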

The F-test compares the observed value of F to a critical value of F. If the observed value (in a regression context, derived by dividing the mean squared regression by the mean squared error) is larger than the critical value (obtained from an F-distribution table), then the relationship is deemed statistically significant and the null hypothesis is rejected. There are two types of degrees of freedom associated with the F-test: in a one-way ANOVA, the first (numerator) is derived by subtracting 1 from the number of groups and the second (denominator) by subtracting the number of groups from the total number of cases. In output tables from statistical software packages, such as SPSS, SAS, or Stata, the F value is listed along with these degrees of freedom and a p-value. If the p-value is less than the alpha level chosen (e.g. p < .05), then the relationship is statistically significant and the null hypothesis is rejected. It is important to note that the F-test is sensitive to non-normality when used to test for equality of variances and thus may be unreliable if the data depart from the normal distribution.
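The decision rule can be sketched numerically. The observed F below is a hypothetical value of the kind read off a software output table; the degrees of freedom correspond to four groups and 100 cases (df1 = 4 − 1, df2 = 100 − 4):

```python
from scipy import stats

alpha = 0.05
df1, df2 = 3, 96  # numerator and denominator degrees of freedom

f_crit = stats.f.ppf(1 - alpha, df1, df2)  # critical value of F at alpha = .05

f_observed = 4.2  # hypothetical mean squared regression / mean squared error
p_value = stats.f.sf(f_observed, df1, df2)

reject = f_observed > f_crit  # equivalent to p_value < alpha
```

Comparing the observed F to the critical value and comparing the p-value to alpha are two views of the same decision, so the two criteria always agree.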

...
