2017 : WHAT SCIENTIFIC TERM OR CONCEPT OUGHT TO BE MORE WIDELY KNOWN?

Theoretical Physicist; Foundation Professor, School of Earth and Space Exploration and Physics Department, ASU; Author, The Greatest Story Ever Told . . . So Far
Uncertainty

Nothing feels better than being certain, but science has taught us over the years that certainty is largely an illusion. In science, we don’t “believe” in things, or claim to know the absolute truth. Something is either likely or unlikely, and we quantify how likely or unlikely. That is perhaps the greatest gift that science can give.

That uncertainty is a gift may seem surprising, but it is precisely for this reason that the scientific concept of uncertainty needs to be better and more broadly understood.

Quantitatively estimating uncertainties—the hallmark of good science—can have a dramatic effect on the conclusions one draws about the world, and it is the only way we can clearly counteract the natural human tendency to assume whatever happens to us is significant.

The physicist Richard Feynman was reportedly fond of saying to people, “You won’t believe what happened to me today!” and then adding, “Absolutely nothing!” We all have meaningless dreams night after night, but dream that a friend breaks their leg, then hear that a cousin had an accident, and it is easy to assume some meaningful connection. But in a big and old universe, even rare accidents happen all the time. Healthy skepticism is required, because the easiest person to fool in this regard is oneself.

To avoid the natural tendency to impute spurious significance, all scientific experiments include an explicit quantitative characterization of how likely it is that results are as claimed. Experimental uncertainty is inherent and irremovable. It is not a weakness of the scientific method to recognize this, but a strength.

There are two different sorts of uncertainty attached to any observation. One is purely statistical. No measurement apparatus is free from random errors, so any sequence of measurements will vary over some range determined by the accuracy of the apparatus, but also by the size of the sample being measured. Say a million people voting in an election go to the polling booth on two consecutive days and vote for exactly the same candidates on both days. If the reported margin of victory is smaller than the random counting errors, a few hundred votes out of a million, say, then different candidates might be declared the winner on successive days.
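The election example can be made concrete with a small simulation, in which the same ballots are counted repeatedly and each ballot has some small chance of being miscounted. All the specific numbers here (the true margin, the miscount rate) are invented for illustration:

```python
import math
import random

random.seed(1)

N_VOTES = 1_000_000   # ballots cast, as in the example above
TRUE_MARGIN = 100     # candidate A's real lead (hypothetical)
ERROR_RATE = 0.002    # chance any single ballot is miscounted (invented)

def count_once():
    """Simulate one full count of the same ballots.

    Miscounted ballots move between the two piles. The number of
    miscounts per pile is binomial; with ~1,000 expected miscounts a
    normal approximation to the binomial is accurate and fast."""
    a_true = (N_VOTES + TRUE_MARGIN) // 2
    b_true = N_VOTES - a_true

    def miscounts(n):
        mean = n * ERROR_RATE
        sd = math.sqrt(n * ERROR_RATE * (1 - ERROR_RATE))
        return round(random.gauss(mean, sd))

    a_to_b, b_to_a = miscounts(a_true), miscounts(b_true)
    return TRUE_MARGIN - 2 * a_to_b + 2 * b_to_a  # observed margin for A

margins = [count_once() for _ in range(10)]
print(margins)  # the same ballots yield margins scattering by roughly +/- 90
print(sum(m < 0 for m in margins), "of 10 counts declare the wrong winner")
```

With these invented numbers the random scatter of the count is comparable to the true margin itself, so repeated counts of identical ballots can disagree about who won.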

Take the recent “tentative” observation of a new particle at the Large Hadron Collider, which would have revolutionized our picture of fundamental physics. After several runs, calculations suggested the statistical likelihood that the result was spurious was less than 1%. But in particle physics, we can usually amass enough data to reduce that likelihood to a much smaller level—less than one in a million—before claiming a discovery (this is not always possible in other areas of science). And this year, after more data were amassed, the signal disappeared.
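The “one in a million” standard corresponds to the particle physicists’ convention of quoting significance in standard deviations, or “sigma,” with five sigma as the discovery threshold; this framing is standard in the field, though the essay does not spell it out. A short sketch of the conversion, using the one-tailed Gaussian tail probability:

```python
import math

def p_value(sigma):
    """One-tailed Gaussian tail probability for a significance of `sigma`."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

for sigma in (2, 3, 5):
    p = p_value(sigma)
    print(f"{sigma} sigma: p = {p:.2e} (about 1 in {1 / p:,.0f})")
```

A 1% chance of a fluke sits below three sigma; five sigma pushes the chance that pure noise fakes the signal to well under one in a million.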

There is a second kind of uncertainty, called systematic uncertainty, which is generally much harder to quantify. A scale, for example, might not be set to zero when no weight is on it. Experimenters can often test for systematic uncertainties by playing with their apparatus, readjusting the dials and knobs and seeing what the effect is, but this is not always possible. In astronomy one cannot fiddle with the Universe. One can, however, try to estimate systematic uncertainties in one's conclusions by exploring their sensitivity to uncertainties in the underlying physics used to interpret the data.
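The distinction between the two kinds of uncertainty shows up clearly in a toy version of the mis-zeroed scale: averaging repeated readings shrinks the random statistical scatter, but the systematic zero-point offset survives averaging untouched. All the numbers here are invented for illustration:

```python
import random
import statistics

random.seed(7)

TRUE_WEIGHT = 70.0  # kg, the quantity we are trying to measure (invented)
OFFSET = 0.5        # kg, systematic: the scale was never zeroed
NOISE = 2.0         # kg, statistical: random scatter of a single reading

def read_scale():
    """One reading: the truth, shifted by the offset, plus random noise."""
    return TRUE_WEIGHT + OFFSET + random.gauss(0, NOISE)

for n in (1, 100, 10_000):
    mean = statistics.fmean(read_scale() for _ in range(n))
    print(f"n = {n:>6}: mean = {mean:7.3f} kg, error = {mean - TRUE_WEIGHT:+.3f} kg")
```

As n grows, the residual error converges not to zero but to the half-kilogram offset: no amount of repetition reveals a systematic error, which is why it must be hunted down by other means.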

Systematic uncertainties are particularly important when considering unexpected and potentially unlikely discoveries. Suppose, in the election example cited earlier, one discovered an error in the design of the ballot, so that selecting one candidate was sometimes accidentally recorded as a vote for two candidates, in which case the ballot would be voided. Even a very small systematic error of this type could overwhelm the result of any close election.

In 2014 the BICEP experiment claimed to observe gravitational waves from the earliest moments of the Big Bang. This could have been one of the most important scientific discoveries in recent times, had it held up. However, a later analysis discovered an unexpected source of background: dust in our own galaxy. When all the dust had settled, if you will forgive the pun, it turned out that the observation had only a 92% likelihood of being correct. In many areas of human activity this would be sufficient to claim validity. But extraordinary claims require extraordinary evidence, so the cosmology community has decided that no such claim can yet be made.

Over the past several decades we have been able to refine the probabilistic arguments associated with determining likelihood and uncertainty, developing an area of mathematics called Bayesian analysis that has turned the estimation of uncertainty into one of the most sophisticated parts of experimental analysis. Here, we first fold in a priori estimates of likelihood and then see how the evidence changes those estimates. This is science at its best: evidence can change our minds, and it is better to be wrong than to be fooled.
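The core of a Bayesian update is one line: the posterior probability is proportional to the prior times the likelihood of the evidence. A minimal sketch, with all numbers invented, of how accumulating evidence shifts an initially skeptical a priori estimate:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a hypothesis after one piece of evidence."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# An extraordinary claim starts from a skeptical prior: 1 in 1,000.
posterior = 0.001
# Suppose each independent run is 20x more probable if the claim is true
# (0.8 vs. 0.04 -- both values invented for illustration).
for run in range(1, 5):
    posterior = bayes_update(posterior, 0.8, 0.04)
    print(f"after run {run}: P(claim) = {posterior:.4f}")
```

One striking result stays improbable under a skeptical prior; several independent ones can drive the posterior toward near-certainty, which is exactly how evidence is allowed to change our minds.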

In the public arena, scientists' inclusion of uncertainties has been used by some critics to discount otherwise important results. Consider the climate change debate. The evidence for human-induced climate change is neither controversial nor surprising; fundamental physics arguments anticipate the observed changes. When the data show that the last sixteen years have been the warmest in recorded human history, when measured CO2 levels exceed any determined over the past 500,000 years, and when the West Antarctic ice sheet is observed to be melting at an unprecedented rate, the fact that responsible scientists report the small uncertainties associated with each of these measurements should not discount the threat we face.

Pasteur once said, “Fortune favors the prepared mind.” Incorporating uncertainties prepares us to make more informed decisions about the future. This does not obviate our ability to draw rational and quantitatively reliable conclusions on which to base our actions—especially when our health and security may depend on them.