
The best way to understand meta-analysis is to begin with a review of basic statistics. There are two main areas in statistics: descriptive and inferential. The former deals with the basic organization and presentation of data, the latter with the process of deriving conclusions and generalizations (i.e., inferences) about a population based on an analysis of sample data taken from that population.

Significance testing is an older and more traditional means of making inferences about populations based on sample data. Developed by the eminent statistician Ronald Fisher during the early 1930s, significance testing focuses on the concept of the null hypothesis and involves estimating the probability that differences observed in a sample occurred entirely by chance, with no true effect in the corresponding population. The real strength of significance testing is that it constrains Type I errors (i.e., rejecting the null hypothesis when there is no true effect in the population) to the α level or less. The Achilles' heel of significance testing is that it does not have any formal control over Type II errors. Some have estimated that the average probability of a Type II error (i.e., retaining the null hypothesis when there is a true effect in the population) in the behavioral sciences is as high as 50%.
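The roughly 50% Type II error rate mentioned above can be illustrated with a quick power calculation. The sketch below uses a standard normal approximation for a two-sided, two-sample test; the effect size (Cohen's d = 0.5, a conventionally "medium" effect) and group size (n = 30) are illustrative assumptions, not values from the text.

```python
import math
from statistics import NormalDist

def power_two_sample(d, n, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test.

    d: standardized effect size (Cohen's d)
    n: subjects per group
    Uses the normal approximation; ignores the negligible
    far-tail rejection region on the opposite side.
    """
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)          # critical value, e.g. 1.96
    noncentrality = d * math.sqrt(n / 2)        # expected z under the alternative
    return 1 - nd.cdf(z_crit - noncentrality)

# A "medium" effect with 30 subjects per group:
power = power_two_sample(0.5, 30)
beta = 1 - power   # Type II error probability; roughly 0.5 here
```

With these (fairly typical) numbers, β comes out near 50%, matching the estimate cited for behavioral science research: a real effect of this size would be missed about half the time.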

Meta-analysis is a second approach to inferential statistics. Like significance testing, its goal is to make inferences about a population based on an analysis of sample data taken from that population. However, the process by which meta-analysis makes inferences is very different. Whereas significance testing focuses on evaluating the probability of chance within a single (usually new) research study, meta-analysis mathematically combines a group of related studies that have already been conducted. In meta-analysis, the primary analysis computes the mean (often weighted by sample size) of the common test statistic that is reported or computed for each study; this mean represents the best available estimate of the true strength of the effect in the population.
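The sample-size-weighted mean described above can be sketched as follows. The study correlations and sample sizes are hypothetical; published meta-analyses typically refine this with inverse-variance or Hunter-Schmidt weighting schemes.

```python
# Each tuple is (effect size r, sample size n) for one study.
# All values are hypothetical, for illustration only.
studies = [
    (0.25, 60),
    (0.10, 200),
    (0.35, 45),
    (0.18, 120),
]

def weighted_mean_effect(studies):
    """Mean effect size across studies, each weighted by its sample size.

    Larger studies contribute more, since their estimates carry
    less sampling error.
    """
    total_n = sum(n for _, n in studies)
    return sum(r * n for r, n in studies) / total_n

print(round(weighted_mean_effect(studies), 4))  # prints 0.1702
```

Note how the large n = 200 study pulls the weighted mean (about 0.17) below the simple unweighted average of the four correlations (0.22), because its smaller effect estimate is presumed more precise.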

Meta-analysis began to be formally developed during the late 1970s, pioneered independently by two camps of researchers: Gene Glass in the clinical area and Frank Schmidt and John Hunter in the industrial and organizational area. Two factors contributed to the emergence of meta-analysis. One was a growing concern about the impact of Type II errors on behavioral science research. Traditional thinking maintained that it is more important to prevent researchers from claiming false effects (i.e., making Type I errors), but some began to believe it is also (even equally) important to prevent researchers from missing real effects (i.e., making Type II errors). In psychotherapy, for example, a series of studies with Type II errors could lead to the conclusion that a particular technique is not consistently helpful, when in fact it might have at least some benefit for most clients.

The second factor that contributed to the development of meta-analysis was a realization that large numbers of studies had accumulated in some areas of behavioral science research. Employment interviews, gender differences in personality, and psychotherapy are examples of areas in which literally hundreds of independent studies are available. In these areas, it made sense to pull together these vast bodies of research to gain a better understanding of the characteristic in question.

...
