The topic itself is not new. For decades, there have been rumors about famous historical scientists like Newton, Kepler, and Mendel. The charge was that their research results were too good to be true. They must have faked the data, or at least prettied it up a bit. But Newton, Kepler, and Mendel nonetheless retained their seats in the Science Hall of Fame. The usual reaction of those who heard the rumors was a shrug. So what? They were right, weren't they?
What's new is that nowadays everyone seems to be doing it, and they're not always right. In fact, according to John Ioannidis, they're not even right most of the time.
John Ioannidis is the author of a paper titled "Why Most Published Research Findings Are False," which appeared in the medical journal PLoS Medicine in 2005. Nowadays this paper is described as "seminal" and "famous," but at first it received little attention outside the field of medicine, and even medical researchers didn't seem to be losing any sleep over it.
Then people in my own field, psychology, began to voice similar doubts. In 2011, the journal Psychological Science published a paper titled "False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant." In 2012, the same journal published a paper on "the prevalence of questionable research practices." In an anonymous survey of more than 2,000 psychologists, 53 percent admitted that they had failed to report all of a study's dependent measures, 38 percent that they had decided whether to exclude data after checking what effect the exclusion would have on the results, and 16 percent that they had stopped collecting data earlier than planned because they had gotten the results they were looking for.
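That last practice, stopping data collection as soon as the numbers look right, deserves a closer look, because its effect is easy to demonstrate. The sketch below is a hypothetical simulation of my own (it is not taken from any of the papers mentioned): it repeatedly runs a "study" on pure noise, peeks at the p-value after every ten additional observations, and stops at the first "significant" result. Even though there is nothing to find, the rate of false positives climbs well above the nominal 5 percent.

```python
import math
import random

def p_value(xs):
    """Two-sided p-value for 'mean = 0', normal approximation to the t-test."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)
    z = mean / math.sqrt(var / n)
    return math.erfc(abs(z) / math.sqrt(2))

def simulate(peek, n_sims=2000, seed=1):
    """Fraction of null 'studies' that ever reach p < .05.

    peek=False: one test at the final sample size of 100.
    peek=True:  test at n = 20, 30, ..., 100 and stop at the first p < .05.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        xs = []
        checkpoints = range(20, 101, 10) if peek else [100]
        for n in checkpoints:
            while len(xs) < n:
                xs.append(rng.gauss(0, 1))  # data with no real effect
            if p_value(xs) < 0.05:
                hits += 1
                break
    return hits / n_sims

print("fixed sample size:", simulate(peek=False))
print("peek and stop early:", simulate(peek=True))
```

With a fixed sample size the false-positive rate stays near the advertised 5 percent; with peeking it roughly doubles or triples, which is why the practice is called "questionable."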
The final punch landed in August 2015. The news was published first in the journal Science and quickly announced to the world by the New York Times, under a title that was surely facetious: "Psychologists welcome analysis casting doubt on their work." The article itself painted a more realistic picture. "The field of psychology sustained a damaging blow," it began. "A new analysis found that only 36 percent of findings from almost 100 studies in the top three psychology journals held up when the original experiments were rigorously redone." On average, effects found in the replications were only half the magnitude of those reported in the original publications.
Why have things gone so badly awry in psychological and medical research? And what can be done to put them right again?
I think there are two reasons for the decline of truth and the rise of truthiness in scientific research. First, research is no longer something people do for fun, because they're curious. It has become something that people are required to do, if they want a career in the academic world. Whether they enjoy it or not, whether they are good at it or not, they've got to turn out papers every few months or their career is down the tubes. The rewards for publishing have become too great, relative to the rewards for doing other things, such as teaching. People are doing research for the wrong reasons: not to satisfy their curiosity but to satisfy their ambitions.
There are too many journals publishing too many papers. Most of what's in them is useless, boring, or wrong.
The solution is to stop rewarding people on the basis of how much they publish. Surely the tenure committees at great universities could come up with other criteria on which to base their decisions!
The second thing that has gone awry is the vetting of research papers. Most journals send out submitted manuscripts for review. The reviewers are unpaid experts in the same field, who are expected to read the manuscript carefully, make judgments about the importance of the results and the validity of the procedures, and put aside any thoughts of how the publication of this paper might affect their own prospects. It's a hard job that has gotten harder over the years, as research has become more specialized and data analysis more complex. I propose that this job should be performed by paid experts—accredited specialists in the analysis of research. Perhaps this could provide an alternative path into academia for people who don't particularly enjoy the nitty-gritty of doing research but who love ferreting out the flaws and virtues in the research of others.
In Woody Allen's movie "Sleeper," set 200 years in the future, a scientist explains that people used to think that wheat germ was healthy and that steak, cream pie, and hot fudge were unhealthy—"precisely the opposite of what we now know to be true." It's a joke that hits too close to home. Bad science gives science a bad name.
Whether wheat germ is or isn't good for people is a minor matter. But whether people believe in scientific research or scoff at it is of crucial importance to the future of our planet and its inhabitants.