Associate Professor of Psychology, University of Sydney; Co-director, Centre for Time; Special Associate Editor, Perspectives on Psychological Science
Science Is Self-Correcting

The pace of scientific production has quickened, and self-correction has suffered. Findings that might correct old results are considered less interesting than results from more original research questions. Potential corrections are also more contested. As the competition for space in prestigious journals has become increasingly frenzied, doing and publishing studies that would confirm the rapidly accumulating new discoveries, or would correct them, has become a losing proposition.

Publication bias is the tendency to not publish "negative" or non-confirmatory results. Its effect, the suppression of corrections, can prevail even when much work has gone into obtaining the negative results. Ideally, such findings would move quickly and easily from individual scientists' laboratories to public availability. But the path can be so difficult, and is so infrequently used, that many areas of science do not deserve the self-correcting moniker.

The prestigious journals in many fields make no bones about it. They declare they are in the business of publishing exciting discoveries that advance the field in new ways, not studies similar to previous ones that find a less-interesting result. Even at those journals that claim to welcome negative findings, a would-be corrector faces an uphill battle. The scientists who vet the new evidence for the journal typically include the very researchers who published the original and possibly incorrect conclusion. Human frailty, egotism, and anonymity together bias reviewers' verdicts toward "reject". That's normally enough to deny new negative results an appearance in a journal.

Self-correction is thus undermined by several factors, some very human, and some simply institutional. These institutional factors are sometimes historical accidents. One is the number of venues where it is considered appropriate to publish one's work. In certain subfields, almost all new work appears in only a very few journals, all associated with a single professional society. There is then no way around the senior gatekeepers, who may suppress corrections with impunity. Fields with a variety of publication venues and stakeholders are healthier; in them it is more difficult for a single school of thought to take power.

Several fields, such as astrophysics, have a culture of sharing and citing manuscripts before they are even submitted to journals. Researchers need only post their manuscript to a website, such as arXiv.org. The result reported might then be ignored, but cannot be fully suppressed. In principle, all areas of science could adopt this practice, but for now most stick with their secretive reviews that frequently torpedo new results.

The bias against corrections is especially harmful in areas where the results are cheap, but the underlying measurements are noisy. In those scientific realms, the literature may quickly become polluted with statistical flukes. Unfortunately, these two features of cheap results and noisy measurement are characteristic of most sub-areas of psychology, my own discipline. Some other fields, such as contemporary epidemiology, may have it even worse, particularly with regard to a third exacerbating factor: the small size of the true effects investigated. As John Ioannidis has pointed out, the smaller the true effects in an area, the more likely it is that a given claimed effect is instead a statistical fluke (a false positive).

There are fixes. One both improves the behavior of individual researchers and dissolves institutional obstacles: public registration of a study's design and analysis plan before the study is begun. Clinical-trials researchers have done this for decades, and in 2013 researchers in other areas rapidly followed suit. Registration includes the details of the data analyses that will be conducted, which eliminates the former practice of presenting the inevitable fluctuations of multifaceted data as robust results. Reviewers assessing the associated manuscripts end up focusing more on the soundness of the study's registered design rather than disproportionately favoring the findings. This helps reduce the disadvantage that confirmatory studies usually have relative to fishing expeditions. Indeed, a few journals have begun accepting articles from well-designed studies even before the results come in.

The Internet's explosive growth has led to pervasive public ratings of, and useful comments on, nearly every product and service. But, somehow, not on scientific papers, in spite of the obvious value of commenting for pointing out flaws and correcting errors. Until recently, to point out a problem with a paper in a place that other researchers would come across, one had to run the gauntlet of the same editors and reviewers who had missed or willfully overlooked the problem in the first place. Those reviewers, as experts also publishing in the area, frequently have similar commitments as the authors to flawed practices or claims. Now, finally, scientists are taking advantage of the Internet to contribute expertise and opinions that go beyond the authors of an article and its two or three reviewers. In October, the U.S. National Library of Medicine began allowing researchers to post comments on practically any paper in biology and medicine, via the most widely used database of such papers (PubMed). Correction of simple errors is no longer arduous.

Besides simple error correction, comments can bring in new perspectives. Cross-pollination of ideas then increases. Exhausted research areas will be revitalized by the introduction of new approaches, and attacks by researchers from outside a field will break hardened orthodoxies.

But hiring, promotion, and grant committees typically don't value the contributions made by individual researchers using these tools. As long as this continues, progress may be slow. As Max Planck observed, revolutions in science sometimes have to wait for funerals. Even after the defenders of old practices assume their final resting places, the antiquated traditions sometimes endure, in part from the support of institutional policies. A policy does not die until someone kills it. New reforms and innovations need our active support—only then can science live up to its self-correcting tagline.