Mathematician; Executive Director, H-STAR Institute, Stanford; Author, The Man of Numbers: Fibonacci's Arithmetic Revolution
Leveraging Human Intelligence


I know many machines that think. They are people. Biological machines.

Be careful of that last phrase, "biological machines." It's a convenient way to refer to stuff we don't fully understand in a way that suggests we do. (We do the same in physics when we use terms like "matter," "gravity," and "force.") "People" is a safer term, since it reminds us we really don't understand what we are talking about.

In contrast, I have yet to encounter a digital-electronic or electromechanical machine that behaves in a fashion that would merit the description "thinking," and I see no evidence to suggest that such a machine is even possible. HAL-like "thinking" devices that will eventually rule us are, I believe, destined to remain in the realm of science fiction.

Just because something waddles like a duck and quacks does not make it a duck. Likewise, a machine that exhibits some features of thinking (decision making, for example) is not thereby a thinking machine.

We admire the design complexity in things we have built, but we can do that only because we built them, and can therefore genuinely understand them. You only have to turn on the TV news to be reminded that we are not remotely close to understanding people, either individually or in groups. If by thinking we mean what people do with their brains, then to refer to any machine we have built as "thinking" is sheer hubris.

The trouble is, we humans are suckers for the "if it waddles and quacks, it's a duck" syndrome. Not because we are stupid; rather, because we are human. The very features that allow us to act, for the most part, in our best interests when faced with potential information overload in complex situations leave us wide open to such seduction.

Many years ago, I remember walking into a humanoid-robotics lab in Japan. It looked like a typical engineering skunkworks. In one corner was a metallic skeletal device, festooned with electrical wires, that had the rough outline of a human upper torso. The sophisticated-looking functional arms and hands were, I assume, the focus of much of the engineering research, but they were not active during my visit, and it was only later that I really noticed them. My entire attention when I walked in, and for much of my time there, was taken up by the robot's head.

Actually, it wasn't a head at all. Just a metal frame with a camera where the nose and mouth would be. Above the camera were two white balls (about the size of ping-pong balls, which may be what they were) with black pupils painted on. Above the eyeballs, two large paperclips had been bent to provide eyebrows.

The robot was programmed to detect people's movements and to locate sound sources (that is, to determine who was speaking). It would turn its head and eyeballs to follow anyone who moved, and raise and lower its paperclip eyebrows when the target individual was speaking.

What was striking was how alive and intelligent the device seemed. Sure, both I and everyone else in the room knew exactly what was going on, and how simple was the mechanism that controlled the eyeball "gaze" and the paperclip eyebrows. It was a trick. But it was a trick that tapped deep into hundreds of thousands of years of human social and cognitive development, so our natural response was the one normally elicited by another person.

Nor was it that I didn't know how the trick worked. My then-Stanford colleague and friend, the late Cliff Nass, had done hundreds of hours of research showing that we humans are genetically programmed to ascribe intelligent agency on the basis of a few very simple interaction cues, reactions so deep and so ingrained that we cannot eliminate them.

There probably was some sophisticated AI that could control the robot's arms and hands—if it had been switched on at the time of my visit—but the eyes and eyebrows were controlled by a very simple program.

Even so, that behavior was enough that, throughout my visit, I had a very clear sense that the robot was a curious, intelligent participant, able to follow what I said.

What it was doing, of course, was leveraging my humanity and my intelligence. It was not thinking.

Leveraging human intelligence is all well and good if the robot is used to clean the house, book your airline tickets, or drive your car. But would you want such a machine to serve on a jury, make a crucial decision regarding a hospital procedure, or have control over your freedom? I certainly would not.

So, when you ask me what I think about machines that think, I answer that, for the most part, I like them, because they are people (and perhaps also various other animals).

What worries me is the increasing degree to which we are giving up aspects of our lives to machines that decide, often far more effectively and reliably than people can, but very definitely do not think. Therein lies the danger: machines that can make decisions but do not think.

Decision making and thinking are not the same, and we should not confuse the two. When we deploy decision-making systems in matters of national defense, health care, and finance, as we do, the potential dangers of such confusion are particularly high, both individually and societally.

To guard against that danger, it helps to be aware that we are genetically programmed to respond in trustful, intelligent-agency-ascribing ways in certain kinds of interactions, be they with people or with machines. But sometimes a device that waddles and quacks is just a device. It ain't no duck.