Effect Size

In a very large sample, almost any difference, however trivial, will be statistically significant. This happens because standard errors shrink as sample size increases; because the standard error is the denominator of the test statistic, the statistic grows and the p value shrinks. Effect size addresses the distinction between statistical and practical significance, which is complicated by the fact that effect sizes can be calculated for many different situations and for all levels of data (nominal, ordinal, interval).
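To see this numerically, here is a minimal sketch (assuming numpy and scipy are available; the 0.02-standard-deviation shift and the sample sizes are arbitrary choices for illustration):

    # A trivial mean shift becomes "significant" as n grows.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    for n in (100, 10_000, 1_000_000):
        x = rng.normal(loc=0.02, scale=1.0, size=n)  # true effect: 0.02 SD
        t, p = stats.ttest_1samp(x, popmean=0.0)
        print(f"n={n:>9,}  t={t:6.2f}  p={p:.2g}")
    # As n grows, the standard error (sigma/sqrt(n)) shrinks, so t grows
    # and p eventually falls below any conventional threshold, even though
    # the effect itself stays trivially small.

Note that the effect size itself (here, 0.02 standard deviations) does not change with n; only the p value does.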

One such measure is the Pearson product-moment correlation (r), which ranges from −1 to +1: a value of −1 means that two variables are perfectly inversely related, +1 means that they are perfectly directly related, and 0 means that there is no linear relationship between them. When r is squared, it becomes the coefficient of determination, which measures the proportion of variation in the dependent variable explained by one or more regression predictors. Analysis of variance models generally express effect size as partial eta-squared, the proportion of total variability attributable to a factor.
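As a brief sketch with made-up data (assumes numpy and scipy):

    # Pearson r and the coefficient of determination r squared.
    import numpy as np
    from scipy import stats

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
    r, _ = stats.pearsonr(x, y)
    print(f"r = {r:.3f}, r^2 = {r**2:.3f}")  # r^2: share of variance explained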

A good way to compare the results of different models is adjusted R2, which is the regular R2 minus the ratio of the model degrees of freedom to the error degrees of freedom, multiplied by the proportion of unexplained variation: adjusted R2 = R2 − (dfmodel/dferror)(1 − R2). It is interpreted as an index (which can be negative) of the predictive validity of the model.
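A minimal sketch of that formula (the values of R2, n, and k below are invented for illustration):

    # Adjusted R^2 = R^2 - (df_model / df_error) * (1 - R^2),
    # with df_model = k predictors and df_error = n - k - 1.
    def adjusted_r2(r2: float, n: int, k: int) -> float:
        df_model, df_error = k, n - k - 1
        return r2 - (df_model / df_error) * (1.0 - r2)

    print(adjusted_r2(r2=0.40, n=30, k=3))   # modest penalty for 3 predictors
    print(adjusted_r2(r2=0.05, n=10, k=5))   # can be negative

This is algebraically the same as the more familiar form 1 − (1 − R2)(n − 1)/(n − k − 1).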

In a one-sample t test, effect size is the absolute value of the difference between the data mean (μ1) and the hypothesized value (μ), measured in standard deviation units. That is, d = |μ1 − μ|/σ, which is known as Cohen's d. Jacob Cohen proposed criteria for identifying the magnitude of an effect size: a small effect is between 0.2 and 0.5 standard deviations, a medium effect between 0.5 and 0.8 standard deviations, and a large effect greater than 0.8 standard deviations.
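A minimal sketch with made-up data (the sample standard deviation stands in for σ):

    # One-sample Cohen's d with Cohen's magnitude labels.
    import numpy as np

    def cohens_d_one_sample(x, mu0: float) -> float:
        # Sample SD (ddof=1) serves as the estimate of sigma.
        return abs(np.mean(x) - mu0) / np.std(x, ddof=1)

    def magnitude(d: float) -> str:
        if d < 0.2: return "below small"
        if d < 0.5: return "small"
        if d < 0.8: return "medium"
        return "large"

    x = np.array([5.1, 5.4, 4.8, 5.9, 5.5, 5.0])
    d = cohens_d_one_sample(x, mu0=5.0)
    print(f"d = {d:.2f} ({magnitude(d)})")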

When two group means are compared, Cohen's d becomes d = |μ1 − μ2|/σ, where μ1 and μ2 are the respective group means and σ is the pooled standard deviation, the square root of the "pooled" estimate of the population variance. For two independent samples, another commonly used measure is Glass's delta (Δ), which is set up the same as Cohen's d, except that the standard deviation of the control group alone is used in the denominator.
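The two measures differ only in the denominator, as this sketch with invented treatment and control data shows:

    # Two-sample Cohen's d (pooled SD) versus Glass's delta (control-group SD).
    import numpy as np

    def pooled_sd(a, b):
        na, nb = len(a), len(b)
        va, vb = np.var(a, ddof=1), np.var(b, ddof=1)
        return np.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))

    def cohens_d(a, b):
        return abs(np.mean(a) - np.mean(b)) / pooled_sd(a, b)

    def glass_delta(treatment, control):
        return abs(np.mean(treatment) - np.mean(control)) / np.std(control, ddof=1)

    treat = np.array([6.1, 6.8, 7.0, 6.4, 7.3])
    ctrl = np.array([5.2, 5.9, 5.5, 6.0, 5.4])
    print(f"Cohen's d = {cohens_d(treat, ctrl):.2f}")
    print(f"Glass's delta = {glass_delta(treat, ctrl):.2f}")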

Omega squared (ω2) also is used for a two independent sample hypothesis test. It measures the proportion of the variability of the dependent variable associated with the independent variable (treatments) and is calculated from the t test as ω2 = (t2 − 1)/(t2 + n1 + n2 − 1), where t is the test statistic and n1 and n2 are the two sample sizes. Although ω2 generally is between 0 and 1, it will be negative whenever |t| < 1. The closer ω2 is to 1, the stronger the association between the independent and dependent variables; the closer ω2 is to 0, the weaker the association. Values of ω2 less than or equal to 0 indicate that there is no association between the variables.
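A sketch of the calculation on simulated data (assumes numpy and scipy; the group means and sizes are arbitrary):

    # Omega squared from an independent-samples t test:
    # w2 = (t^2 - 1) / (t^2 + n1 + n2 - 1).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    a = rng.normal(0.0, 1.0, size=40)
    b = rng.normal(0.6, 1.0, size=40)
    t, _ = stats.ttest_ind(a, b)
    omega2 = (t**2 - 1) / (t**2 + len(a) + len(b) - 1)
    print(f"t = {t:.2f}, omega^2 = {omega2:.3f}")  # negative whenever |t| < 1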

For a one-way ANOVA, the definition of ω2 changes to ω2 = (k − 1)(F − 1)/[(k − 1)(F − 1) + nk], where k is the number of categories of the independent variable, F is the value of the F test, and n is the sample size per group (so nk is the total number of observations). Eta squared is defined as the ratio of the sum of squares between groups to the total sum of squares: η2 = SSBG/SST. Hedges' g, which uses the square root of the mean square within groups (error) in the denominator, is defined as g = |μ1 − μ2|/√MSW.
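The following sketch computes ω2, η2, and Hedges' g for three invented groups of equal size (assumes numpy and scipy):

    # One-way ANOVA effect sizes: omega squared from F, eta squared from
    # sums of squares, and Hedges' g for the first two group means.
    import numpy as np
    from scipy import stats

    groups = [np.array([4.1, 5.0, 4.6, 5.2]),
              np.array([5.9, 6.3, 5.7, 6.1]),
              np.array([4.8, 5.1, 5.5, 4.9])]
    k, n = len(groups), len(groups[0])   # k groups, n observations per group

    F, _ = stats.f_oneway(*groups)
    omega2 = (k - 1) * (F - 1) / ((k - 1) * (F - 1) + n * k)

    grand = np.mean(np.concatenate(groups))
    ss_bg = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_t = sum(((g - grand) ** 2).sum() for g in groups)
    eta2 = ss_bg / ss_t

    msw = (ss_t - ss_bg) / (n * k - k)   # mean square within (error)
    g = abs(np.mean(groups[0]) - np.mean(groups[1])) / np.sqrt(msw)
    print(f"omega^2 = {omega2:.3f}, eta^2 = {eta2:.3f}, g = {g:.2f}")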

...
