
The file drawer problem is the threat that the empirical literature is biased because nonsignificant research results are not disseminated. The consequence of this problem is that the available results provide a biased portrayal of what is actually found, so literature reviews (including meta-analyses) will conclude that effects are stronger than they actually are. The term arose from the image of these nonsignificant results being placed in researchers' file drawers, never to be seen by others. The file drawer problem also goes by several similar names, including publication bias and dissemination bias. Although all literature reviews are vulnerable to this problem, meta-analysis provides methods of detecting and correcting for it. This entry first discusses the sources of publication bias and then the detection and correction of such bias.

Sources

The first source of publication bias is that researchers may be less likely to submit null results than significant results. This tendency may arise in several ways. Researchers engaging in "data snooping" (cursory data analyses to determine whether a more complete pursuit is warranted) may simply not pursue null results further. Even when complete analyses are conducted, researchers may be less motivated to submit results for publication, whether because of expectations that the results will not be published, professional pride, or a financial interest in finding supportive results.

The other source is that null results are less likely to be accepted for publication than are significant results. This tendency is partly due to reliance on decision making from a null hypothesis significance testing (versus effect size) framework; statistically significant results lead to conclusions, whereas null results are inconclusive. Reviewers who have a professional or financial interest in certain results may also be less accepting of and more critical toward null results than those that confirm their expectations.

Detection

Three methods are commonly used to evaluate whether publication bias exists within a literature review. Although one of these methods can be performed using vote-counting approaches to research synthesis, these evaluations are typically conducted within a meta-analysis of effect sizes.

The first method is to compare the results of published and unpublished studies, provided the reviewer has obtained at least some of the unpublished studies. In a vote-counting approach, the reviewer can evaluate whether a higher proportion of published than unpublished studies finds a significant effect. In a meta-analysis, one performs moderator analyses that statistically test whether effect sizes are greater in published than in unpublished studies. An absence of differences is evidence against a file drawer problem.
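A minimal sketch of this kind of moderator comparison, using fixed-effect inverse-variance pooling within each subgroup and a z-test on the difference of the pooled effects; the study data here are hypothetical illustrations, not drawn from this entry:

```python
import math

# Hypothetical data: (effect size d, standard error) for each study.
published = [(0.45, 0.12), (0.52, 0.15), (0.38, 0.10)]
unpublished = [(0.10, 0.14), (0.05, 0.18)]

def pooled(studies):
    """Fixed-effect inverse-variance pooled effect and its standard error."""
    weights = [1 / se ** 2 for _, se in studies]
    est = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return est, se

d_pub, se_pub = pooled(published)
d_unpub, se_unpub = pooled(unpublished)

# z-test for the difference between the two subgroup estimates; a large
# |z| indicates published effects exceed unpublished ones, consistent
# with a file drawer problem.
z = (d_pub - d_unpub) / math.sqrt(se_pub ** 2 + se_unpub ** 2)
```

With these made-up numbers, the published pooled effect is well above the unpublished one, and z is large enough to flag a difference.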

A second approach is visual examination of funnel plots, which are scatterplots of each study's effect size against its sample size. Greater variability of effect sizes is expected in smaller than in larger studies, given their greater sampling error. Thus, a funnel plot is expected to look like an isosceles triangle, with a symmetric distribution of effect sizes around the mean at all levels of sample size. However, small studies that happen to find small effects will not reach statistical significance and therefore may be less likely to be published. The resultant funnel plot will be asymmetric, with an absence of studies in the small-sample-size/small-effect-size corner of the triangle.
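The mechanism behind funnel plot asymmetry can be simulated: generate studies around a true effect, censor those that fail to reach significance, and compare the surviving studies with the full set. Everything below (the true effect, the sample sizes, the standard-error approximation) is an assumed illustration, not part of this entry:

```python
import math
import random
import statistics

random.seed(1)
true_d = 0.2  # assumed true standardized mean difference

# Simulate 200 studies with varying per-group sample sizes; the standard
# error of a standardized mean difference is roughly sqrt(2 / n).
studies = []
for _ in range(200):
    n = random.randint(10, 200)
    se = math.sqrt(2 / n)
    d = random.gauss(true_d, se)  # observed effect for this study
    studies.append((d, se))

# Publication filter: only studies significant at p < .05 survive,
# i.e., those with |d / se| > 1.96.
published = [(d, se) for d, se in studies if abs(d / se) > 1.96]

mean_all = statistics.mean(d for d, _ in studies)
mean_pub = statistics.mean(d for d, _ in published)
# Plotting d against se for `published` (e.g., with matplotlib) would show
# the asymmetric funnel: small studies appear only with large effects.
```

The mean of the published studies exceeds the mean of all studies, which is exactly the bias a funnel plot is designed to reveal visually.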

...
