Original article
Publication bias in meta-analysis: its causes and consequences
Introduction
The Dictionary of Epidemiology [1] defines publication bias as “an editorial predilection for publishing particular findings, e.g., positive results, which leads to the failure of authors to submit negative findings for publication.” Rosenthal, in his “file drawer problem,” described an extreme view in which journals are filled with the 5% of studies showing a false-positive result, while the other 95%, whose results do not reach significance at P < 0.05, are left to fill file drawers [2]. Awareness of publication bias began in 1956, when the editor of the Journal of Abnormal and Social Psychology indicated that negative studies were less likely to be published in his journal [3]. In 1959, it was found that very few negative results were reported in four psychological journals, a finding regarded as strongly suggesting publication bias [4]. However, no attempt was made to quantify the problem until 1964 [5]. The existence of publication bias is now widely accepted. Attempts to summarize evidence relating to a specific hypothesis, whether by narrative review or meta-analysis, can be seriously distorted by publication bias. For example, one recent analysis estimated that 45% of an observed association could be due to publication bias [6].
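Rosenthal's argument can be made concrete with his “fail-safe N,” which estimates how many unpublished null studies would have to sit in file drawers to reduce a combined result to nonsignificance. The following is a minimal sketch, not code from the article; the formula is Rosenthal's (based on Stouffer's combined z), while the study z-scores are hypothetical.

```python
import math

def fail_safe_n(z_values, alpha_z=1.645):
    """Rosenthal's fail-safe N: the number of unpublished null
    (z = 0) studies needed to bring the combined one-tailed
    Stouffer z-score down to the significance threshold alpha_z."""
    k = len(z_values)
    z_sum = sum(z_values)
    # Combined z for k studies is z_sum / sqrt(k); adding n null
    # studies gives z_sum / sqrt(k + n).  Solve for n at alpha_z.
    n = (z_sum / alpha_z) ** 2 - k
    return max(0, math.floor(n))

# Hypothetical z-scores from five published "positive" studies
z_scores = [2.1, 1.8, 2.5, 1.6, 2.2]
print(fail_safe_n(z_scores))  # studies needed to overturn the result
```

A small fail-safe N relative to the number of published studies suggests the pooled finding could plausibly be an artifact of the file drawer.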
This article aims to explore publication bias and issues related to it, and the effect it may have on attempts to review evidence relating to various hypotheses. Features of the design and execution of both single studies and meta-analyses that may lead to publication bias are examined, along with factors that may influence the author's decision to submit his results for publication. The role of journal editors and reviewers in deciding which studies to publish is also considered. Methods aimed at confirming the existence of, correcting for, and preventing publication bias are reviewed. It is shown that one can estimate the extent to which such a bias may have occurred, and even correct for it, helping authors of future reviews not only to be fully aware of the problem, but also to take steps to minimize it.
Section snippets
Publication bias arising from the design or execution of single studies
Several facets of the design or execution of a study, including sample size and the method of reporting the data, may lead to publication bias. The investigator's own beliefs and expectations may also influence the outcome. A small sample size leads to lack of power [7], and significance may then be obtained only if chance exaggerates any true differences between the groups under study [8]. Though the obvious likely effect of inadequate sample size is failure to demonstrate statistical …
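The link between sample size and power can be illustrated numerically. This sketch uses the standard normal approximation for a two-sample z-test; the effect size, standard deviation, and group sizes are hypothetical, chosen only to show how an underpowered study is unlikely to reach significance.

```python
import math

def power_two_sample(delta, sigma, n_per_group, alpha_z=1.96):
    """Approximate power of a two-sided two-sample z-test for a
    true mean difference `delta`, common SD `sigma`, and
    `n_per_group` subjects per arm (normal approximation;
    the wrong-direction tail is neglected)."""
    se = sigma * math.sqrt(2.0 / n_per_group)
    z = delta / se - alpha_z
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# A modest effect (half an SD): power with 20 vs 100 per group
print(round(power_two_sample(0.5, 1.0, 20), 2))   # low power
print(round(power_two_sample(0.5, 1.0, 100), 2))  # adequate power
```

With 20 subjects per arm the study detects the effect only about a third of the time; when it does, the observed difference must have been exaggerated by chance, which is exactly the mechanism described above.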
Publication bias arising from the researcher deciding whether or not to submit results
An early study found that dissertations and theses were three or four times more likely to be published if they were positive than if they were negative [5]. Such findings may owe more to researchers deciding not to submit their results than to journal editors rejecting their papers [7, 11, 18, 19, 20, 21], statistically significantly positive studies being up to 10 times more likely to be submitted for publication [13, 22]. The main reasons given for nonsubmission of studies are the negative …
Publication bias arising from the tendency of journals to reject negative studies
Some editors and reviewers strongly dislike negative studies [7, 8, 20, 25]. The British Medical Journal states that “negative results have never made rivetting reading.” Its ideal article is one that affects clinical practice, improves prognosis, or simplifies management [19]. While some negative reports may legitimately be rejected due to poor quality [3, 19], even negative studies that appear to be better conducted than positive ones may be much less likely to be accepted for publication [25].
Sponsorship
A study's source of funding may also unduly influence the probability that its results are subsequently published. For instance, studies showing no association between exposure and disease may be published by groups with a presumed special interest in demonstrating a lack of causation, such as the companies that introduced the risk factor [13, 29]. Similarly, reports submitted to governments by Scandinavian pharmaceutical companies showed a lower proportion of published than unpublished studies …
Bias arising from the design and execution of reviews and meta-analyses
There are likely to be unpublished studies relevant to any given hypothesis. As published studies may systematically differ from unpublished ones [31, 32], reviews or meta-analyses based only on published data may reach misleading conclusions [33]. It is widely thought, therefore, that as many studies as possible should be included, both published and unpublished [22, 31, 34, 35, 36].
However, there are some problems with this simple view. Firstly, it should be noted that it is often impossible to …
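The effect of omitting unpublished studies on a pooled estimate can be shown with a standard fixed-effect (inverse-variance) pooling formula. The formula is the usual one for meta-analysis; the log odds ratios and variances below are entirely hypothetical, constructed so that the unpublished studies are the null ones.

```python
def pooled_estimate(effects, variances):
    """Fixed-effect (inverse-variance weighted) pooled estimate."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Hypothetical log odds ratios (positive = apparent harmful effect)
published   = [0.40, 0.55, 0.35]    # significant studies that reached print
unpublished = [0.05, -0.10, 0.00]   # null results left in file drawers
var_pub, var_unpub = [0.04] * 3, [0.04] * 3

print(pooled_estimate(published, var_pub))            # published only: inflated
print(pooled_estimate(published + unpublished,
                      var_pub + var_unpub))           # all studies: much smaller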
Methods of detecting and correcting for publication bias
As publication bias may seriously distort the findings of a meta-analysis, various methods have been devised for detecting its presence. Each method is described below, in some cases with examples of its use; the chief advantages and limitations of each are listed in Table 1.
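One widely used diagnostic of this kind, which may or may not be among those catalogued in Table 1, is Egger's regression test for funnel-plot asymmetry: the standardized effect (effect/SE) is regressed on precision (1/SE), and an intercept far from zero suggests that small studies report systematically larger effects. A sketch with hypothetical study data follows; the ordinary least-squares fit is written out by hand to keep it self-contained.

```python
def egger_intercept(effects, std_errors):
    """OLS fit of z_i = effect_i / SE_i against precision_i = 1 / SE_i;
    returns (intercept, slope).  An intercept far from zero indicates
    funnel-plot asymmetry (Egger's regression test)."""
    z = [e / s for e, s in zip(effects, std_errors)]
    x = [1.0 / s for s in std_errors]
    n = len(x)
    mx, mz = sum(x) / n, sum(z) / n
    slope = (sum((xi - mx) * (zi - mz) for xi, zi in zip(x, z))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = mz - slope * mx
    return intercept, slope

# Hypothetical studies in which small studies (large SE) show larger
# effects -- the classic asymmetric-funnel pattern
effects = [0.80, 0.60, 0.45, 0.35, 0.30]
ses     = [0.40, 0.30, 0.20, 0.15, 0.10]
b0, b1 = egger_intercept(effects, ses)
print(b0, b1)  # intercept well above zero: asymmetry suspected
```

By contrast, if every study estimated the same effect regardless of its size, the fitted intercept would be zero and the funnel symmetric; a formal application would also attach a standard error and t-test to the intercept.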
Registries
Identifying published trials through literature searches and computer databases is relatively straightforward, but information on unpublished trials is not as readily available. The use of registries has been advocated to overcome this, and registries already exist in the fields of perinatal medicine, cancer and acquired immunodeficiency syndrome treatment, and antithrombotic trials [19, 33, 68]. As registration usually occurs before results are known, a complete database of all trials …
Conclusions
Publication bias appears to be a widespread problem in the scientific literature, and has been demonstrated in many fields of research. Various aspects of the design and execution of both single studies and meta-analyses may increase the probability of bias of this type, and its occurrence may seriously distort any attempts to derive valid estimates by pooling data from a group of studies, skewing the outcome towards positive results. Although various methods have been proposed for determining …
Acknowledgements
We thank Mrs P.J. Wassell and Mrs D.P. Morris for their assistance in the typing of this manuscript, and Mrs B.A. Forey for preparing the simulated funnel plot. Financial support was provided by Philip Morris Europe, to whom we are also grateful.
References (75)
The “file drawer problem” and tolerance for null results. Psychol Bull (1979)
Bayesian meta-analysis, with application to studies of ETS and lung cancer. Lung Cancer (1996)
Bias in meta-analytic research. J Clin Epidemiol (1992)
Meta-analysis. Lancet (1988)
Bias against the null hypothesis: the reproductive hazards of cocaine. Lancet (1989)
Publication bias in the environmental tobacco smoke/coronary heart disease epidemiologic literature. Regul Toxicol Pharmacol (1995)
Publication bias in clinical research. Lancet (1991)
Guidelines for application of meta-analysis in environmental epidemiology. Regul Toxicol Pharmacol (1995)
Meta-analyses of randomized clinical trials: how to improve their quality? Lung Cancer (1994)
Meta-analysis in clinical trials. Controlled Clin Trials (1986)