Professor of Physics, University of California, Santa Cruz; Author, Cosmological Koans
The Odds On AI

I attribute an unusually low probability to the near-future prospect of general-purpose AI—by which I mean one that can formulate abstract concepts based on experience, reason and plan using those concepts, and take action based on the results. We have exactly one example of technological-level intelligence arising, and it has done so through millions of generations of information-processing agents interacting with an incredibly rich environment of other agents and structures that have similarly evolved.

I suspect that there are many intricately interacting, hierarchically structured organizational levels involved, from the sub-neuron scale to the brain as a whole. My suspicion is that replicating the effectiveness of this evolved intelligence in an artificial agent will require amounts of computation not that much lower than evolution has required, which would far outstrip our abilities for many decades even given exponential growth in computational efficiency per Moore's law—and that's even if we understand how to correctly employ that computation.

I would assign a probability of ~1% to AGI arising in the next ten years, and ~10% over the next thirty years. (This essentially reflects a probability that my analysis is wrong, times a probability more representative of AI experts who—albeit with lots of variation—tend to assign somewhat higher numbers.)
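The estimate above can be sketched as a small calculation. The specific numbers below are illustrative assumptions chosen to reproduce the ~1% and ~10% figures, not values the author states:

```python
# A toy version of the probability estimate described above.
# All specific numbers are illustrative assumptions: suppose the
# skeptical analysis gets an 80% chance of being right, leaving a
# 20% chance that the (higher) expert estimates should apply instead.

p_my_analysis_wrong = 0.2  # assumed chance the skeptical analysis is mistaken

# Assumed stand-ins for "expert" probabilities of AGI arriving:
p_expert_10yr = 0.05
p_expert_30yr = 0.5

# If the skeptical analysis is right, AGI is (roughly) not coming soon;
# if it is wrong, defer to the expert estimate.
p_agi_10yr = p_my_analysis_wrong * p_expert_10yr  # ~1%
p_agi_30yr = p_my_analysis_wrong * p_expert_30yr  # ~10%

print(f"P(AGI in 10 years) ~ {p_agi_10yr:.0%}")
print(f"P(AGI in 30 years) ~ {p_agi_30yr:.0%}")
```

This is only one way to formalize "probability my analysis is wrong, times the expert probability"; the point is the structure of the estimate, not the particular inputs.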

On the other hand, I assign a rather high probability that, if AGI is created (and especially if it arises relatively quickly), it will be—in a word—insane. Human minds are incredibly complex, but they have been battle-tested into (relative) stability over eons of evolution in a variety of extremely challenging environments. The first AGIs are unlikely to have been honed in this way. As with human minds, 'narrow' AIs are likely to become more 'general' through researchers cobbling together AI components (visual processing, text processing, symbolic manipulation, optimization algorithms, etc.), along with currently nonexistent systems for much more efficient learning, concept abstraction, decision-making, and so on.

Given trends in the field, many of these will probably be rather opaque 'deep learning' or similar systems that are effective but somewhat inscrutable. In the first systems, I'd guess that these will just barely work together.

So I think the a priori likelihood of early AGIs actually doing just what we want them to is quite small.

In this light, there is a tricky question of whether AGIs very quickly lead to superintelligent AIs (SIs). There is emerging agreement that AGI essentially implies SI. While I largely agree, I'd add the caveat that progress may well 'stall' for a while at the near-human level until something cognitively stable can be developed, or that an AGI, even if somewhat unstable, may still be high-functioning enough to self-improve its intelligence.

Either case, however, is not that encouraging: the superintelligence that arises could well be quite flawed in various ways, even if very effective at what it does. This intuition is perhaps not that far removed from the various scenarios in which superintelligence goes badly awry (taking us with it), often for lack of what we might call 'common sense.' But this 'common sense' is in part a label for the stability we have built up by being part of an evolutionary and social ecosystem.

So even if AGI is a long way away, I'm deeply pessimistic about what will happen 'by default' if we get it. I hope I'm wrong, but time will tell. (I don't think we can—nor should!—try to stop the development of AI generally. It will do a multitude of great things.)

In the meantime, I hope that on the way to AGI, researchers can put a lot of thought into how to dramatically lower the probability that things will go wrong once we arrive. Something I find very frustrating in this arena, where the stakes are potentially incredibly high, is when I hear "I think X is what's going to happen, so I'm not worried about Y." That's generally a fine way to think, as long as your confidence in X is high and Y is not super-important. But when you're talking about something that could radically determine the future (or future existence) of humanity, 75% confidence is not enough. 90% is not enough. 99% is not enough! We would never have built the LHC if there was a 1% (let alone 10%) chance of it actually spawning black holes that consumed the world—there were, instead, extremely compelling arguments against that. Let's see if such compelling reasons not to worry about AGI exist, and if not, let's make our own.
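The arithmetic behind "99% is not enough" can be made explicit with a back-of-the-envelope expected-loss sketch. The cost figure here is an arbitrary illustrative unit, not a real estimate:

```python
# Why high confidence is insufficient when stakes are existential:
# even a small residual probability of catastrophe dominates the
# expected outcome. All numbers are illustrative assumptions.

catastrophe_cost = 1e9  # assumed "badness" units for an existential outcome

results = {}
for confidence_fine in (0.75, 0.90, 0.99):
    p_catastrophe = 1 - confidence_fine
    expected_loss = p_catastrophe * catastrophe_cost
    results[confidence_fine] = expected_loss
    print(f"{confidence_fine:.0%} confident things go well -> "
          f"expected loss {expected_loss:,.0f} units")
```

Even at 99% confidence, the residual 1% of an existential-scale loss leaves an enormous expected cost—which is the essay's point about why ordinary thresholds of confidence don't apply here.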