2015: WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

Steven Pinker
Johnstone Family Professor, Department of Psychology; Harvard University; Author, Rationality
Thinking Does Not Imply Subjugating

Thomas Hobbes's pithy equation "Reasoning is but reckoning" is one of the great ideas in human history. The notion that rationality can be accomplished by the physical process of calculation was vindicated in the 20th century by Turing's thesis that simple machines are capable of implementing any computable function and by models from D. O. Hebb, McCulloch and Pitts, and their scientific heirs showing that networks of simplified neurons could achieve comparable feats. The cognitive feats of the brain can be explained in physical terms: to put it crudely (and critics notwithstanding), we can say that beliefs are a kind of information, thinking a kind of computation, and motivation a kind of feedback and control.

This is a great idea for two reasons. First, it completes a naturalistic understanding of the universe, exorcising occult souls, spirits, and ghosts in the machine. Just as Darwin made it possible for a thoughtful observer of the natural world to do without creationism, Turing and others made it possible for a thoughtful observer of the cognitive world to do without spiritualism.

Second, the computational theory of reason opens the door to artificial intelligence—to machines that think. A human-made information processor could, in principle, duplicate and exceed the powers of the human mind. Not that this is likely to happen in practice, since we will probably never see the sustained technological and economic motivation that would be necessary to bring it about. Just as inventing the car did not involve duplicating the horse, developing an AI system that could pay for itself will not require duplicating a specimen of Homo sapiens. A device designed to drive a car or predict an epidemic need not be designed to attract a mate or avoid putrid carrion.

Nonetheless, recent baby steps toward more intelligent machines have led to a revival of the recurring anxiety that our knowledge will doom us. My own view is that current fears of computers running amok are a waste of emotional energy—that the scenario is closer to the Y2K bug than the Manhattan Project.

For one thing, we have a long time to plan for this. Human-level AI is still the standard 15-to-25 years away, just as it always has been, and many of its recently touted advances have shallow roots. It's true that in the past, "experts" have comically dismissed the possibility of technological advances that quickly happened. But this cuts both ways: "experts" have also heralded (or panicked over) imminent advances that never happened, like nuclear-powered cars, underwater cities, colonies on Mars, designer babies, and warehouses of zombies kept alive to provide people with spare organs.

Also, it's bizarre to think that roboticists will not build in safeguards against harm as they proceed. They would not need any ponderous "rules of robotics" or some newfangled moral philosophy to do this, just the same common sense that went into the design of food processors, table saws, space heaters, and automobiles. The worry that an AI system would be so clever at attaining one of the goals programmed into it (like commandeering energy) that it would run roughshod over the others (like human safety) assumes that AI will descend upon us faster than we can design fail-safe precautions. The reality is that progress in AI is hype-defyingly slow, and there will be plenty of time for feedback from incremental implementations, with humans wielding the screwdriver at every stage.

Would an artificially intelligent system deliberately disable these safeguards? Why would it want to? AI dystopias project a parochial alpha-male psychology onto the concept of intelligence. They assume that superhumanly intelligent robots would develop goals like deposing their masters or taking over the world. But intelligence is the ability to deploy novel means to attain a goal; the goals are extraneous to the intelligence itself. Being smart is not the same as wanting something. History does turn up the occasional megalomaniacal despot or psychopathic serial killer, but these are products of a history of natural selection shaping testosterone-sensitive circuits in a certain species of primate, not an inevitable feature of intelligent systems. It's telling that many of our techno-prophets don't entertain the possibility that artificial intelligence will naturally develop along female lines: fully capable of solving problems, but with no desire to annihilate innocents or dominate civilization.

Now, we can imagine a malevolent human who designed and released a battalion of robots to sow mass destruction. But disaster scenarios are cheap to play out in the imagination, and we should keep in mind the chain of probabilities that would have to multiply out before it would be a reality. An evil genius would have to arise with the combination of a thirst for pointless mass murder and a brilliance in technological innovation. He would have to recruit and manage a team of co-conspirators that exercised perfect secrecy, loyalty, and competence. And the operation would have to survive the hazards of detection, betrayal, stings, blunders, and bad luck. In theory it could happen, but we have more pressing things to worry about.

Once we put aside the sci-fi disaster plots, the possibility of advanced artificial intelligence is exhilarating—not just for the practical benefits, like the fantastic gains in safety, leisure, and environment-friendliness of self-driving cars, but for the philosophical possibilities. The computational theory of mind has never explained the existence of consciousness in the sense of first-person subjectivity (though it's perfectly capable of explaining the existence of consciousness in the sense of accessible and reportable information). One suggestion is that subjectivity is inherent to any sufficiently complicated cybernetic system. I used to think that this hypothesis (and its alternatives) was permanently untestable. But imagine an intelligent robot programmed to monitor its own systems and pose scientific questions. If, unprompted, it asked why it itself had subjective experiences, I'd take the idea seriously.