2015: WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

Physicist, Director, MIT's Center for Bits and Atoms; Co-author, Designing Reality
Yes, But

 

Something about the discussion of artificial intelligence appears to displace human intelligence. The extremes of the argument, that AI is either our salvation or our damnation, are a sure sign of the impending irrelevance of this debate.

Disruptive technologies start as exponentials, which means the first doublings can appear inconsequential because the total numbers are small. Then there appears to be a revolution when the exponential explodes, along with exaggerated claims and warnings to match, but it's a straight extrapolation of what's been apparent on a log plot. That's around when growth limits usually kick in, the exponential crosses over to a sigmoid, and the extreme hopes and fears disappear.
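To make the shape of that trajectory concrete (a minimal illustration, not part of the original argument, with an assumed growth limit K, growth rate r, and midpoint t_0), consider the logistic curve

    x(t) = K / (1 + e^{-r(t - t_0)})

While x is far below K this behaves like the pure exponential K e^{r(t - t_0)}, which is exactly a straight line on a log plot; once the growth limit binds, the same curve crosses over to a sigmoid and flattens out at K.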

That's what we're now living through with AI. The size of the common-sense databases that can be searched, the number of inference layers that can be trained, and the dimension of the feature vectors that can be classified have all been advancing steadily, in ways that can appear discontinuous to someone who hasn't been following them.

Notably absent from either side of the debate about AI have been the people making many of the most important contributions to this progress. Advances like random matrix theory for compressed sensing, convex relaxations as heuristics for intractable problems, and kernel methods for high-dimensional function approximation are fundamentally changing our understanding of what it means to understand something.

The evaluation of AI has been an exercise in moving goalposts. Chess was conquered by analyzing more moves, Jeopardy was won by storing more facts, and natural-language translation was accomplished by accumulating more examples. These accumulating advances are showing that the secret of AI is likely to be that there isn't one; like so many other things in biology, intelligence appears to be a collection of really good hacks.

There's a vanity that our consciousness is the defining attribute of our uniqueness as a species, but there's growing empirical evidence from studies of animal behavior and cognition that self-awareness evolved continuously and can be tested for in a number of other species. There's no reason to accept a mechanistic explanation for the rest of life while declaring one part of it to be off-limits.

We've long since become symbiotic with machines for thinking; my ability to do research rests on tools that augment my capability to perceive, remember, reflect, and communicate. Asking whether or not they are intelligent is as fruitful as asking how I know I exist—amusing philosophically, but not testable empirically.

Asking whether or not they're dangerous is prudent, as it is for any technology. From steam trains to gunpowder to nuclear power to biotechnology, we've never not been simultaneously doomed and about to be saved. In each case, salvation has lain in the much more interesting details rather than in a simplistic yes/no argument for or against. It ignores the history of both AI and everything else to believe that this time will be any different.