2016: WHAT DO YOU CONSIDER THE MOST INTERESTING RECENT [SCIENTIFIC] NEWS? WHAT MAKES IT IMPORTANT?

Gary Klein
Senior Scientist, MacroCognition LLC; Author, Seeing What Others Don't: The Remarkable Ways We Gain Insights
Blinded By Data

The 23 October 2015 issue of the journal Science reported a feel-good story about how some children in India had received cataract surgery and were able to see.  On the surface, there is nothing in this incident that should surprise us.  Ready access to cataract surgery is something we take for granted.  But the story is not that simple. 

The children had been born with cataracts.  They had never been able to see.  By the time their condition was diagnosed — they came from impoverished and uneducated families in remote regions — the regional physicians had told the parents that it was too late because the children were past a critical period for gaining vision. 

Nevertheless, a team of eye specialists visited the area and arranged for the cataract surgery to be performed even on teenagers.  Now, hundreds of formerly blind children are able to see.  One 22-year-old man, who had the surgery four years earlier, can now ride a bicycle through a crowded market. 

The concept of a critical period for developing vision was based on studies that David Hubel and Torsten Wiesel performed on cats and monkeys.  The results showed that without visual signals during a critical period of development, vision is impaired for life.  For humans, this critical window closes tight by the time a child is eight years old.  (For ethical reasons, no comparable studies were run on humans.)  Hubel and Wiesel won a Nobel Prize for their work.  And physicians around the world stopped performing cataract surgery on children older than eight.  The data were clear.  But they were wrong.  The results of the cataract surgeries on Indian teenagers overturn the critical-period findings.

In this light, an apparent “feel-good” story becomes a “feel-bad” story about innumerable other children who were denied the cataract surgery because they were too old.  Consider all the children who endured a lifetime of blindness because of excessive faith in misleading data.

The theme of excessive faith in data was illustrated by another 2015 news item.  Brian Nosek and a team of researchers set out to replicate 100 high-profile psychology experiments that had been published in 2008.  They reported their findings in the 28 August 2015 issue of Science.  Only about a third of the original findings were replicated, and even for these the effect sizes were much smaller than initially reported.

Other fields have run into the same problem.  A few years ago the journal Nature reported that the majority of cancer studies selected for review could not be replicated.  In October 2015, Nature devoted a special issue to exploring ways to reduce the number of non-reproducible findings.  Many others have taken up the question of how to reduce the chances of publishing unreliable data.

I think this is the wrong approach.  It exemplifies the bedrock bias: a desire for a firm piece of evidence that can be used as a foundation for deriving inferences.

Scientists appreciate the tradeoff between Type I errors (false positives: detecting effects that aren’t actually present) and Type II errors (false negatives: failing to detect effects that are present).  When you put more energy into reducing Type I errors, you run the risk of increasing Type II errors, missing findings and discoveries.  Thus we might tighten the required significance level from .05 to .01, or even .001, to reduce the chances of a false positive, but in so doing we would greatly increase the false negatives. 
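To make this tradeoff concrete, here is a minimal sketch in Python (my own illustration, not part of the essay): it simulates many small two-sample studies, half with a real effect and half without, and counts false positives and false negatives at significance thresholds of .05, .01, and .001.  The number of studies, sample size, and effect size are arbitrary values chosen for illustration, and numpy and scipy are assumed to be available.

    # Illustrative sketch: tightening the significance threshold trades
    # Type I errors (false positives) for Type II errors (false negatives).
    # Parameters below are assumptions for the demonstration, not from the essay.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_studies, n_per_group, true_effect = 2000, 20, 0.5

    results = []  # list of (p_value, effect_really_present)
    for i in range(n_studies):
        effect_present = (i % 2 == 0)      # half the simulated studies have a real effect
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(true_effect if effect_present else 0.0, 1.0, n_per_group)
        p = stats.ttest_ind(treated, control).pvalue
        results.append((p, effect_present))

    for alpha in (0.05, 0.01, 0.001):
        n_null = sum(1 for _, real in results if not real)
        n_real = sum(1 for _, real in results if real)
        false_pos = sum(1 for p, real in results if p < alpha and not real)
        false_neg = sum(1 for p, real in results if p >= alpha and real)
        print(f"alpha={alpha}: Type I rate={false_pos / n_null:.3f}, "
              f"Type II rate={false_neg / n_real:.3f}")

Running the sketch shows the pattern described above: each tighter threshold lowers the false-positive rate but raises the false-negative rate, so more of the real effects go undetected.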

The bedrock bias encourages us to make extreme efforts to eliminate false positives, but that approach would slow progress.  A better perspective is to give up the quest for certainty and accept the possibility that any datum may be wrong.  After all, skepticism is a mainstay of the scientific enterprise.

I recall a conversation with a decision researcher who insisted that we cannot trust our intuitions; instead, we should trust the data.  I agreed that we should never trust intuitions (we should listen to our intuitions but evaluate them), but I didn’t agree that we should trust the data.  There are too many examples, as described above, where the data can blind us. 

What we need is the ability to draw on relevant data without committing ourselves to the validity of those data.  We need to be able to derive inferences, make speculations, and form anticipations in the face of ambiguity and uncertainty.  And to do that, we will need to overcome the bedrock bias.  We will need to free ourselves from the expectation that we can trust the data.

I am not arguing that it’s okay to get the research wrong; witness the consequences for all the Indian children who suffered unnecessary blindness.  My argument is that we shouldn’t blind ourselves to the possibility that the data might be wrong.  The team of Indian eye specialists responded to anecdotes about cases of recovered vision and explored the possible benefits of cataract surgery past the critical period.

The heuristics-and-biases community has done an impressive job of sensitizing us to the limits of our heuristics and intuitions.  Perhaps we need a parallel effort to sensitize us to the limits of the data: a research agenda demonstrating the kinds of traps we fall into when we trust the data too much.  This agenda might examine the underlying causes of the bedrock bias, and possible methods for de-biasing ourselves.  A few cognitive scientists have performed experiments on the difficulty of working with ambiguous data, but I think we need more: a larger, coordinated research program.

Such an enterprise would have implications beyond the scientific community.  We live in an era of Big Data, an era in which quants are taking over Wall Street, an era of evidence-based strategies. In a world that is becoming increasingly data-centered, there may be value in learning how to work with imperfect data.