
An effect size refers to the magnitude of the impact of treatment on an outcome measure. There are two broad families of effect size indexes: the standardized mean difference and measures of association (Kline, 2004). Both compare results across different studies or variables measured in different units and each is a first step toward evaluating the practical importance of a research finding.

For example, suppose that the same two treatments are compared in two different studies. The outcome variable in each study reflects the same construct, off-task behavior, but the standard deviation (the square root of the average squared distance of the scores from their mean) is 10 in the first study and 50 in the second, and the mean difference between treatments in each study is 5. The first study, therefore, shows the larger effect. This is because a mean difference of 5 points corresponds to half of a standard deviation in the first study (5.00/10.00 = 0.50) but to only a tenth of a standard deviation in the second (5.00/50.00 = 0.10). These ratios are standardized mean differences: they express the difference between treatments in a common metric, as a proportion of a standard deviation. Standardized mean differences and other standardized effect size indexes provide a common language for comparing results measured on different scales (Kline, 2004).
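The arithmetic of the example above can be sketched in a few lines of Python. The study values are the hypothetical ones from the text (a mean difference of 5, standard deviations of 10 and 50); the function name is illustrative, not a standard library call.

```python
def standardized_mean_difference(mean_diff: float, sd: float) -> float:
    """Express a raw mean difference as a proportion of a standard deviation."""
    return mean_diff / sd

# Both hypothetical studies find the same raw difference of 5 points...
study_1 = standardized_mean_difference(5.0, 10.0)  # SD = 10 -> 0.5 SD
study_2 = standardized_mean_difference(5.0, 50.0)  # SD = 50 -> 0.1 SD

# ...but in standardized units the first effect is five times larger.
print(study_1, study_2)
```

Dividing by the standard deviation is what puts both studies on the common metric the passage describes.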

A measure of association describes the relationship between the independent and dependent variables. An example of a measure of association is the correlation coefficient, which quantifies the strength of the relationship between two variables. Squaring the correlation coefficient gives the proportion of variance in the dependent variable that is explained by the independent variable. This proportion of variance can be calculated for each independent variable separately or for all independent variables accounting simultaneously for variance in the dependent variable.

Vicki Peyton

References and Further Reading

Kline, R. B. (2004). Beyond significance testing: Reforming data analysis methods in behavioral research. Washington, DC: American Psychological Association. http://dx.doi.org/10.1037/10693-000