2015: WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

Raphael Bousso
Professor, Berkeley Center for Theoretical Physics, UC Berkeley
It Is Easy To Predict The Future

 

The future, that is, of a simple system with known initial conditions. It is hopeless to make detailed predictions for a complex, poorly understood system like human civilization. Yet, a general argument provides some crude but powerful constraints.

The argument is that we are likely to be typical among any collection of intelligent beings. (The collection should be defined by some general criteria that we meet, not carefully crafted to make us special.) For example, the probability that a randomly chosen human is among the first 0.1% of humans on Earth is, well, 0.1%, given no other information. Of course, our ancestors ten thousand years ago would have drawn the wrong conclusion from this reasoning. But among all humans who ever live, 99.9% would be correct, so it's a good bet to make. The probability that we are among the first 0.1% of intelligent objects, human or artificial, is similarly tiny.
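To make the arithmetic concrete, here is a minimal sketch of the bet, assuming an arbitrary, purely hypothetical total of 10^11 humans ever born (the conclusion does not depend on that number): a randomly chosen birth rank lands in the first 0.1% about 0.1% of the time.

```python
import random

# Sketch of the typicality bet. The true total number of humans who will
# ever live is unknown; 10^11 here is an arbitrary placeholder.
TOTAL_HUMANS = 100_000_000_000
EARLY_FRACTION = 0.001  # "the first 0.1%"

trials = 1_000_000
early = sum(
    1
    for _ in range(trials)
    if random.randint(1, TOTAL_HUMANS) <= EARLY_FRACTION * TOTAL_HUMANS
)

# Roughly 0.1% of randomly chosen humans land in the first 0.1%, so the bet
# "I am not among the first 0.1%" wins about 99.9% of the time.
print(f"Fraction found in the first 0.1%: {early / trials:.4%}")
```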

The assignments of probability would have to be updated if, unrealistically, we somehow gained conclusive new information proving that human civilization will continue in present numbers for a billion years. This would be one way of finding out that we lost the bet. But we have no such information, so we must assign probabilities accordingly. (This type of reasoning has been articulated by astrophysicists J. R. Gott and A. Vilenkin, among many others.)

The assumption that we may consider ourselves randomly chosen is sometimes questioned; but in fact, it lies at the heart of the scientific method. In physics and other sciences, theories almost never predict definite outcomes. Instead, we compute a probability distribution from the theory. Consider a hydrogen atom: the probability of finding the electron a mile from the proton is not exactly zero, just very, very small. Yet when we find an electron, we do not seriously entertain the possibility that it is part of a remote hydrogen atom. More generally, after repeating an experiment enough times to be satisfied that the probability for the outcome was sufficiently small according to some hypothesis, we reject the hypothesis and move on. In doing so, we are betting that we are not highly atypical observers.
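A schematic example of that betting logic (not drawn from the essay, and with an arbitrarily chosen rejection threshold): under the hypothesis that a coin is fair, seeing 95 or more heads in 100 flips is so improbable that we reject the hypothesis rather than conclude that we happen to be wildly atypical observers.

```python
import math

# Schematic hypothesis test: reject a hypothesis once the observed outcome is
# sufficiently improbable under it, i.e. bet that we are not highly atypical.

def prob_at_least_k_heads(n, k, p=0.5):
    """Probability of seeing k or more heads in n flips of a coin with bias p."""
    return sum(math.comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

# Hypothesis: the coin is fair. Observation: 95 heads out of 100 flips.
p_outcome = prob_at_least_k_heads(100, 95)
THRESHOLD = 1e-6  # how atypical we are willing to assume ourselves to be

print(f"Probability of the outcome under the hypothesis: {p_outcome:.1e}")
if p_outcome < THRESHOLD:
    print("Reject the hypothesis -- betting that we are not that atypical.")
```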

An important rule is that we do not get to formulate the question after we have made the observation, tailoring it to make the observation look surprising. For example, no matter where we find the electron, in hindsight the probability was small to have found it at that particular spot, as opposed to all the other places it could have been. This is irrelevant, as we would have been unlikely to formulate this question before the measurement. Similarly, humans may well be atypical with respect to some variable we have measured: perhaps most intelligent objects in the visible universe do not have ten fingers. However, our location in the full temporal distribution of all humans on Earth is not known to us. We know how much time has passed or how many humans have been born since the first humans; but we do not know what fraction of the full time span or of the total number of intelligent observers on Earth this represents. The typicality assumption can be applied to these questions.
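The distinction can be made concrete with a toy example (again not from the essay): in hindsight, every particular outcome of a measurement with a million equally likely results had a probability of one in a million, so only a question fixed before the measurement can make an outcome genuinely surprising.

```python
import random

# Toy illustration of post-hoc versus pre-stated questions. Suppose the
# electron could be found in any of a million equally likely locations.
N_LOCATIONS = 1_000_000

location_found = random.randrange(N_LOCATIONS)

# In hindsight, the chance of this exact location was tiny -- but that is true
# of every possible outcome, so it is no evidence against the theory.
print(f"Hindsight probability of location {location_found}: {1 / N_LOCATIONS:.0e}")

# A legitimate question is fixed in advance, e.g. "will it land in location 17?"
PREDICTED_LOCATION = 17  # chosen before the measurement
print("Pre-stated prediction came true:", location_found == PREDICTED_LOCATION)
```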

Our typicality makes the following two scenarios extremely unlikely: (1) that humans will continue to exist for many millions of years (with or without the help of thinking machines); and (2) that humans will be supplanted by a much longer-lived or much larger civilization of a completely different type, such as thinking machines. If either were true, then we would be among the very first intelligent observers on Earth, either in time or by number, and hence highly atypical.

Typicality implies our likely demise in the next million years. But it tells us nothing about whether this will come at the hands (or other appendages) of an artificial intelligence; after all, there is no shortage of doomsday scenarios.

Typicality is consistent with the possibility of a considerable number of civilizations that form and expire elsewhere in our galaxy and beyond. By the same reasoning, their duration is unlikely to vastly exceed ours, a tiny fraction of the lifetime of a star. Even if Earth-like planets are common, as observational evidence increasingly suggests, detectable signals from intelligent beings may well not overlap with our own limited attention span. Still, if our interest lies in assessing the predominance of intelligent machines as a final and potentially fatal evolutionary step, the study of distant planetary systems may not be the worst starting point.