2015 : WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

Irene Pepperberg
Research Associate & Lecturer, Harvard; Author, Alex & Me
A Beautiful (Visionary) Mind

While machines are terrific at computing, the issue is that they're not very good at actual thinking.

Machines have an endless supply of grit and perseverance, and, as others have said, will effortlessly crunch out the answer to a complicated mathematical problem or direct you through traffic in an unknown city, all by use of the algorithms and programs installed by humans. But what do machines lack?

Machines (at least so far, and I don’t think this will change with a singularity) lack vision. And I don’t mean sight. Machines do not devise the next new killer app on their own. Machines don’t decide to explore distant galaxies—they do a terrific job once we send them, but that’s a different story. Machines are certainly better than the average person at solving problems in calculus and quantum mechanics—but machines don’t have the vision to see the need for such constructs in the first place. Machines can beat humans at chess—but they have yet to design the type of mind game that will intrigue humans for centuries. Machines can see statistical regularities that my feeble brain will miss—but they can’t make the insightful leap that connects entirely disparate sets of data to devise a new field.

I am not terribly concerned about machines that compute—I’ll deal with the frustration of my browser in exchange for a smart refrigerator that, based on tracking RFID codes of what comes in and out, texts me to buy cream on my way home (hint to those working on such a system…sooner rather than later!). I like having my computer underline words it doesn’t recognize, and I’ll deal with the frustration of having to ignore its comments on "phylogenetic" in exchange for catching my typo on a common term (in fact, it won’t let me misspell a word here to make a point). But these examples show that just because a machine is going through the motions of what looks like thinking doesn’t mean that it actually is engaging in that behavior—or at least one equivalent to the human process.

I am reminded of one of the earliest studies to train apes to use "language"—in this case, to manipulate plastic chips to answer a number of questions. The system was replicated with college students, who, not surprisingly, did exceptionally well—but who, when asked what they had been trained to do, claimed that they had solved some interesting puzzles and had no idea they were being taught a language. Much debate ensued, and much was learned—and put into practice—in subsequent studies, so that several nonhuman subjects did eventually come to understand the referential meaning of the various symbols they were taught to use; we also learned a lot about ape intelligence from the original methodology. The point, however, is that what initially looked like a complicated linguistic system needed a lot more work before it became more than a series of (relatively) simple paired associations.

My concern, therefore, is not about thinking machines, but rather about a complacent society—one that might give up on its visionaries in exchange merely for getting rid of drudgery. Humans need to take advantage of all the cognitive capacity that is freed when machines take over the scut work, to be thankful for that release, and to channel all that ability into the hard work of solving pressing problems that demand insightful, visionary leaps.