
Effect Size, Measures of

Effect size is a statistical term for a measure of the association between two variables. It is widely used in many study designs, such as meta-analysis, regression, and analysis of variance (ANOVA). The presentation of effect size in these study designs usually differs. For example, in meta-analysis (an analysis method for combining and summarizing research results from different studies), effect size is often represented as the standardized difference between the means of two continuous variables. In analysis of variance, effect size can be interpreted as the proportion of total variance explained by a certain effect. In each study design, depending on the characteristics of the variables, say, continuous versus categorical, there are several ways to measure effect size. This entry discusses measures of effect size for different study designs.

Measures and Study Designs

Meta-Analysis

Meta-analysis is a methodology for summarizing results across studies. For continuous outcomes, effect size was introduced as a standardized mean difference. This is especially important for studies that use different scales. For example, in a meta-analysis comparing the effects of a drug and a placebo on schizophrenia, researchers usually use standardized scales to measure patients' symptoms. These scales can be the Positive and Negative Syndrome Scale (PANSS) or the Brief Psychiatric Rating Scale (BPRS). The PANSS is a 30-item scale, and scores range from 30 to 210. The BPRS is a 16-item scale, and scores range from 16 to 112. Different studies may report results measured on either scale. When a researcher needs to use meta-analysis to combine studies reporting on both scales, it is better to convert the study results into a common standardized score so that they become comparable. Cohen's d and Hedges' g are common effect sizes used in meta-analysis with continuous outcomes.

For a dichotomized outcome, the odds ratio is often used as an indicator of effect size. For example, a researcher may want to find out whether smokers have greater chances of having lung cancer compared to nonsmokers. He or she may do a meta-analysis of studies reporting how many patients, among smokers and nonsmokers, were diagnosed with lung cancer. An odds ratio can be computed for each single study, and one can then compare study results by examining the odds ratios across all of these studies.
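As a minimal sketch of the odds ratio computation described above, the following uses a hypothetical 2 x 2 table of made-up counts (not real data) for a single smoking and lung cancer study:

```python
# Hypothetical 2x2 table for one smoking/lung-cancer study.
# All counts are illustrative, not real data.
cases_smokers, noncases_smokers = 40, 60          # smokers: with / without lung cancer
cases_nonsmokers, noncases_nonsmokers = 10, 90    # nonsmokers: with / without lung cancer

# Odds of lung cancer within each group.
odds_smokers = cases_smokers / noncases_smokers
odds_nonsmokers = cases_nonsmokers / noncases_nonsmokers

# The odds ratio compares the two groups' odds; a value above 1
# indicates higher odds of the outcome among smokers.
odds_ratio = odds_smokers / odds_nonsmokers
print(round(odds_ratio, 2))  # 6.0
```

In a meta-analysis, this quantity would be computed for each included study (often on the log scale) before the results are pooled.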

The other commonly used effect size in meta-analysis is the correlation coefficient, which more directly indicates the strength of the association between two variables.
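A brief illustration of the correlation coefficient as an effect size, computed from scratch on hypothetical paired data (the values below are made up for demonstration):

```python
from math import sqrt

# Hypothetical paired measurements of two variables (illustration only).
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.0, 9.8]

n = len(x)
mx, my = sum(x) / n, sum(y) / n

# Pearson r: covariance of x and y divided by the product
# of their standard deviations (constant factors cancel).
cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
sx = sqrt(sum((a - mx) ** 2 for a in x))
sy = sqrt(sum((b - my) ** 2 for b in y))
r = cov / (sx * sy)
print(round(r, 3))
```

Values of r near +1 or -1 indicate a strong linear association; values near 0 indicate a weak one.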

Cohen's d

Cohen's d is defined as the difference between two population means divided by their common standard deviation. This definition is based on the t test on means and can be interpreted as the standardized difference between two means. Cohen's d assumes equal variance in the two populations. For two independent samples, it can be expressed as

d = \frac{m_A - m_B}{\sigma}

for a one-tailed effect size index and

d = \frac{|m_A - m_B|}{\sigma}

for a two-tailed effect size index. Here, m_A and m_B are the two population means in their raw scales, and \sigma is the standard deviation of either population (the two populations are assumed to have equal variance). Because the population means and standard deviations are usually unknown, sample means and standard deviations are used to estimate Cohen's d. The one-tailed and two-tailed effect size indexes for the t test of means in standard units are

d = \frac{\bar{x}_A - \bar{x}_B}{s}

and

d = \frac{|\bar{x}_A - \bar{x}_B|}{s},

where \bar{x}_A and \bar{x}_B are the sample means, and s is the common (pooled) standard deviation of the two samples.
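The sample estimate above can be sketched as follows, using hypothetical PANSS-style scores for a drug group and a placebo group (illustrative values, not real trial data), with s computed as the pooled standard deviation under the equal-variance assumption:

```python
from math import sqrt

# Hypothetical PANSS-style scores (illustration only, not real trial data).
group_a = [78.0, 85.0, 90.0, 74.0, 88.0]   # e.g., drug group
group_b = [95.0, 102.0, 98.0, 91.0, 99.0]  # e.g., placebo group

def mean(xs):
    return sum(xs) / len(xs)

def pooled_sd(a, b):
    # Pooled standard deviation, assuming equal population variances:
    # combine the sums of squares and divide by the pooled degrees of freedom.
    ssa = sum((x - mean(a)) ** 2 for x in a)
    ssb = sum((x - mean(b)) ** 2 for x in b)
    return sqrt((ssa + ssb) / (len(a) + len(b) - 2))

# One-tailed index keeps the sign; the two-tailed index is its absolute value.
d = (mean(group_a) - mean(group_b)) / pooled_sd(group_a, group_b)
print(round(d, 2), round(abs(d), 2))
```

Because the scores are standardized by s, a d computed from PANSS-scale studies is directly comparable to one computed from BPRS-scale studies.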

...
