Publication bias can result from the selective publication of manuscripts based on the direction and magnitude of results, multiple publication of results, and selective reporting of results within a published study. In particular, research with statistically significant positive results is more likely to be submitted for publication, to be published, and to be published more quickly than research with negative or nonsignificant results. Consequently, published studies on a particular topic might not be representative of all valid studies conducted on the topic, leading to distortion of the scientific record.

Publication bias tends to be greater in clinical research than in public health research, and in observational studies as opposed to randomized studies. Nevertheless, it has been demonstrated across all these types of research. One area where a variety of publication biases have been documented is pharmaceutical industry studies of new drug applications.

The primary sources of publication bias are commonly assumed to be editorial decision making, together with authors’ reluctance to submit research with null or negative results—sometimes referred to as the file drawer problem. While research has supported the latter explanation, studies of publication bias in editorial decision making have yielded mixed findings. Less well-recognized sources of publication bias include multiple publication of results and within-study selective reporting among multiple outcomes, exposures, subgroup analyses, and other multiplicities. Although these types of publication bias have until recently received little attention, they are likely to cause even greater bias in the literature than does selective publication.

Publication bias presents a serious threat to the validity of systematic reviews and meta-analyses. Undetected publication bias can not only lead to misleading conclusions but also lend results an unfounded air of precision. A screening method for selective-publication bias in meta-analysis involves correlating observed effect sizes with study design features that are potential risk factors for publication bias, such as sample size. A funnel plot provides an informal graphical method in which effect sizes are plotted against sample sizes, while the null hypothesis of no publication bias can be tested using rank correlation approaches such as Kendall's tau or Spearman's rho. Detecting within-study selective reporting presents a greater challenge, unless access is available to a study's original protocol and complete results of all analyses performed.
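The rank correlation screen described above can be sketched in a few lines. This is an illustrative example in the spirit of such tests (e.g., correlating effect size with sample size); the study data below are hypothetical, invented for demonstration only.

```python
# Sketch of a rank-correlation screen for publication bias:
# correlate each study's effect size with its sample size.
# The data below are hypothetical, for illustration only.
import numpy as np
from scipy.stats import kendalltau

# Hypothetical per-study effect sizes and sample sizes.
effects = np.array([0.80, 0.55, 0.40, 0.35, 0.30, 0.28, 0.25, 0.22])
n_sizes = np.array([  20,   35,   60,   80,  120,  150,  200,  260])

# Under no publication bias, effect size should be unrelated to
# sample size; a strong negative correlation (small studies showing
# large effects) is a warning sign of selective publication.
tau, p_value = kendalltau(effects, n_sizes)
print(f"Kendall's tau = {tau:.2f}, p = {p_value:.4f}")
```

A funnel plot of the same data (effect size on one axis, sample size on the other) would show the corresponding asymmetry visually: the small-sample studies cluster at large effects, with the "missing" small, null studies absent from one side of the funnel.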

Several strategies exist for reducing or adjusting for publication bias. Sampling methods involve tracking down unpublished studies, sometimes referred to as the grey literature; broader systemic solutions include requiring prospective registration of clinical trials. Analytic methods include the file drawer adjustment strategy, in which one estimates the number of zero-effect studies that would be needed to eliminate significant findings in a meta-analysis. More complex analytic approaches involving weighted distribution theory are also available. All analytic methods involve important assumptions, which in many situations can be questionable. Perhaps most important, consumers of meta-analyses and systematic reviews are cautioned to be constructively skeptical in interpreting results.
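The file drawer adjustment can be illustrated with Rosenthal's fail-safe N, one common version of the strategy: N_fs = (Σz_i / z_α)² − k, where the z_i are the z-scores of the k combined studies and z_α = 1.645 for a one-tailed α of .05. The sketch below uses hypothetical z-scores; it is illustrative, not a full meta-analytic procedure.

```python
# Minimal sketch of Rosenthal's fail-safe N ("file drawer" number):
# how many unpublished zero-effect studies would be needed to pull
# a combined result down to non-significance. Hypothetical inputs.

def fail_safe_n(z_scores, z_alpha=1.645):
    """Fail-safe N for a one-tailed test at the given critical z
    (default z_alpha = 1.645, i.e., alpha = .05)."""
    k = len(z_scores)
    z_sum = sum(z_scores)
    # Rosenthal's formula: N_fs = (sum of z / z_alpha)^2 - k
    return (z_sum / z_alpha) ** 2 - k

# Hypothetical z-scores from five published studies.
zs = [2.1, 1.8, 2.5, 1.6, 2.3]
print(round(fail_safe_n(zs)))  # about 34 null studies in file drawers
```

A large fail-safe N relative to the number of published studies suggests the finding is robust to the file drawer problem; a small one suggests a handful of unpublished null results could overturn it. The method's assumption that unpublished studies average exactly zero effect is one of the questionable assumptions noted above.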

Norman A. Constantine

