2013: WHAT *SHOULD* WE BE WORRIED ABOUT?

Victoria Stodden
Associate Professor of Information Sciences, University of Illinois at Urbana-Champaign
Where Did You Get That Fact?

We are inundated every day with computational findings, conclusions, and statistics. In op-eds, policy debates, and public discussions, numbers are presented with the finality of a slammed door. In fact, we need to know how these findings were reached, so we can evaluate their relevance and credibility, resolve conflicts when they differ, and make better decisions. Even figuring out where a number came from is a challenge, let alone understanding how it was determined.

This is important because of how we reason. In the thousands of decisions we make each day, we seldom engage in a deliberately rational process of gathering relevant information, distilling it into useful knowledge, and comparing options. In most situations, standing around weighing pros against cons is a pretty good way to ensure rape, pillage, and defeat, whether metaphorical or real, and to miss out on life's pleasures. So of course we don't do it very often; instead we make quick decisions based on instinct, intuition, heuristics, and shortcuts honed over millions of years.

Computers, however, are very good at the components of the decision-making process that we're not: they can store vast amounts of data accurately, organize and filter it, carry out blindingly fast computations, and display the results beautifully. Computers can't (yet?) direct problem solving or contextualize findings, but for certain important classes of questions they are invaluable in enabling us to make far more informed decisions. They operate at scales our brains can't, and they make it possible to tackle problems at ever greater levels of complexity.

The goal of better decision making is behind the current hype surrounding big data, the emergence of "evidence-based" everything (policy, medicine, practice, management), and issues such as climate change, fiscal predictions, health assessment, even what information you are exposed to online. The field of statistics has been addressing the reliability of results derived from data for a long time, with many very successful contributions (for example, confidence intervals, quantifying the distribution of model errors, and the concept of robustness).
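To make the first of those contributions concrete, here is a minimal sketch of a confidence interval in Python. The data are simulated purely for illustration (the sample size, the "true" mean of 10.0, and the spread are all invented); the point is how an interval communicates the uncertainty that a bare point estimate hides.

```python
import numpy as np

# Simulated sample: 200 measurements drawn from a distribution
# whose true mean (10.0) we pretend not to know.
rng = np.random.default_rng(0)
sample = rng.normal(loc=10.0, scale=2.5, size=200)

estimate = sample.mean()
# Standard error of the mean: sample standard deviation / sqrt(n).
sem = sample.std(ddof=1) / np.sqrt(len(sample))

# A 95% confidence interval under a normal approximation:
# roughly 1.96 standard errors on either side of the estimate.
low, high = estimate - 1.96 * sem, estimate + 1.96 * sem
print(f"estimate: {estimate:.2f}, 95% CI: [{low:.2f}, {high:.2f}]")
```

Across repeated samples, an interval built this way should bracket the true mean about 95% of the time, which is precisely the uncertainty a bare number conceals.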

The scientific method suggests skepticism when interpreting conclusions, and a responsibility to communicate scientific findings transparently, so that others may evaluate and understand the results. We need to bring these notions into our everyday expectations when presented with new computational results. We should be able to dig in and find out where the statistics came from, how they were computed, and why we should believe them. Yet those concepts receive almost no consideration when findings are publicly communicated.

I'm not saying we should independently verify every fact that enters our daily life (there just isn't enough time, even if we wanted to), but the ability should exist where possible, especially for knowledge generated with the help of computers. Even if no one actually tries to follow the chain of reasoning and calculations, more care will be taken in generating findings when the potential for inspection exists. And if even a small number of people look into the reasoning behind a result, they might find issues, provide needed context, or be able to confirm their acceptance of the finding as is. In most cases the technology to make this possible already exists.

Here's an example. When news articles started appearing on the World Wide Web in the '90s, I remember eagerly anticipating hot-linked stats: being able to click on any number in the text to see where it came from. More than a decade later this still isn't routine, and facts are asserted without the possibility of verification. For any conclusion that enters the public sphere, we should expect that all the steps that generated the knowledge are disclosed: that the data it rests on are made available for inspection whenever possible, and that the computer programs that carried out the data analysis are published too. In short: open data, open source, scientific reproducibility.
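As a sketch of what a hot-linked stat might resolve to, here is a hypothetical, self-contained Python example. The inlined rows stand in for a link to an open dataset, and every name and value is invented for illustration; the script simply recomputes the published number (a median, in this toy case) so a reader can check it against the article.

```python
import csv
import io

# Stand-in for the raw open data behind a published number; a real
# hot-linked stat would point at the actual dataset instead.
RAW_DATA = io.StringIO("income\n31000\n45000\n52000\n28000\n61000\n")

def recompute_statistic(rows) -> float:
    """Recompute the published figure (here, a median) from raw data."""
    incomes = sorted(float(row["income"]) for row in csv.DictReader(rows))
    mid = len(incomes) // 2
    # Median: the middle value, or the mean of the two middle values.
    if len(incomes) % 2:
        return incomes[mid]
    return (incomes[mid - 1] + incomes[mid]) / 2

# Anyone following the link behind the number could rerun this and
# compare the output with the figure quoted in the text.
print(f"recomputed statistic: {recompute_statistic(RAW_DATA):.1f}")
```

A real version would publish the actual data and the actual analysis code; it is that full chain, not this toy, that would make a quoted number checkable.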

Without the ability to question findings, we risk fooling ourselves into thinking we are capitalizing on the information age when we are really just making decisions based on evidence that no one, except perhaps the people who generated it, actually has the ability to understand. That's the door closing.