2015 : WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

Christopher Chabris
Senior Investigator, Geisinger Health System; Visiting Fellow, Institute for Advanced Study, Toulouse, France; Co-author, The Invisible Gorilla
Why Is It Hard To Think About Thinking Machines?

I've often wondered why we human beings have so much trouble thinking straight about machines that think.

In the arts and entertainment, machines that can think are often depicted as simulacra of humans, sometimes down to the shape of the body and its parts, and their behavior suggests that their thoughts are much like our own. But thinking does not have to follow human rules or patterns to count as thinking. Examples of this fact now abound: chess computers outthink humans not by thinking about chess the way humans do, only better, but by thinking in an entirely different way. Useful language translation can be done without deep knowledge of grammar.
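To make "an entirely different way" concrete: classical chess engines rest on exhaustive game-tree search rather than human-style pattern recognition. Here is a minimal sketch of minimax, the core of that search, run over a toy game tree; the tree and its scores are hypothetical, and real engines add evaluation functions and pruning on top.

```python
# A minimal sketch of the brute-force search behind chess engines:
# plain minimax over a toy game tree. Leaves are numeric position
# scores; internal nodes are lists of child nodes.
def minimax(node, maximizing=True):
    """Return the best score reachable from `node`, assuming both
    players choose optimally at every turn."""
    if isinstance(node, (int, float)):  # leaf: a scored position
        return node
    scores = (minimax(child, not maximizing) for child in node)
    return max(scores) if maximizing else min(scores)

# A two-ply hypothetical tree: the machine "thinks" by scoring every
# line exhaustively, not by recognizing familiar patterns.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree))  # 3
```

Nothing in this procedure resembles how a grandmaster reasons about a position, yet scaled up it outplays one.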

Evolution has apparently endowed human beings, more than any other animal, with the capacity to represent and reason about the contents of other human minds. By the time children start school, they can keep track of what different people know about the same set of facts (this is a prerequisite for lying). Later, as adults, we use this capacity to figure out how to negotiate, collaborate, and solve problems, for the benefit of ourselves and others. This uniquely human capacity is often called "Theory of Mind."

This piece of mental equipment is fairly new and hasn't been perfected. It has trouble when there are more than a couple of levels of belief involved (John thinks that Mary knows that Josephine felt …). And it springs into action even in situations where there are no "minds" to represent. Videos of two-dimensional shapes moving around on computer screens can tell stories of love, betrayal, hate, and violence that exist entirely in the mind of the viewer, who temporarily forgets that yellow triangles and blue squares don't have emotions.

Maybe we have trouble thinking about thinking machines because we don't have a correspondingly intuitive "Theory of Machine." Mentally simulating a simple mechanical device consisting of a few interlocking gears—say, figuring out whether turning the first gear will cause the last gear to rotate left or right, faster or slower—is devilishly difficult, not to mention aversive. More complex machines, consisting not of concrete parts but of abstract algorithms and data, are just as alien to our built-in mental faculties.
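For a sense of how mechanical that bookkeeping really is, here is a minimal sketch of the gear puzzle in code, under the standard assumptions that each externally meshing pair reverses direction and that angular speed scales by the ratio of tooth counts; the tooth counts below are hypothetical.

```python
# A minimal sketch of the gear-train puzzle: each externally meshing
# pair reverses direction, and angular speed scales by the ratio of
# driver teeth to driven teeth.
def last_gear(teeth, direction="clockwise", speed=1.0):
    """Predict the direction and relative speed of the last gear in a
    simple train, given each gear's tooth count in meshing order."""
    for driver, driven in zip(teeth, teeth[1:]):
        # Meshing flips the sense of rotation...
        direction = "counterclockwise" if direction == "clockwise" else "clockwise"
        # ...and scales speed by the tooth ratio.
        speed *= driver / driven
    return direction, speed

# Hypothetical four-gear train: three meshes, so three reversals.
print(last_gear([40, 20, 30, 10]))  # ('counterclockwise', 4.0)
```

Two simple rules applied in a loop settle a question our intuitions find aversive; notice, too, that the intermediate tooth counts cancel, so only the first and last gears determine the final speed.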

Perhaps this is why, when confronted with the notion of thinking machines, we fall back on understanding them as though they were thinking beings—in other words, as though they were humans. We apply the best tools our mind has, namely Theory of Mind (what would a machine do if it were like a person?) and general-purpose reasoning. Unfortunately, the former tool is not designed for this job, and the latter tool is hampered by our severely limited capacities for attention and working memory. Sure, we have disciplines like physics, engineering, and computer science that teach us how to understand and build machines, including machines that think, but years of formal education are required to appreciate the basics.

A Theory of Machine module would ignore intentionality and emotion, and instead specialize in representing the interactions of different subsystems, inputs, and outputs to predict what machines would do in different circumstances, much as Theory of Mind helps us to predict how other humans will behave.
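As a toy illustration of what such a module would compute, here is a minimal sketch of mechanistic prediction with no intentionality anywhere in it: the machine is reduced to states, inputs, and a transition table, and prediction is pure bookkeeping. The coin-operated turnstile used as the machine is hypothetical.

```python
# A minimal sketch of prediction without intentionality: a machine
# modeled as states plus a transition table, here a coin-operated
# turnstile that unlocks on a coin and relocks when pushed through.
TRANSITIONS = {
    ("locked", "coin"): "unlocked",
    ("locked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
    ("unlocked", "push"): "locked",
}

def predict(state, inputs):
    """Trace the machine's behavior: no beliefs or desires, just a
    state updated deterministically by each input."""
    for symbol in inputs:
        state = TRANSITIONS[(state, symbol)]
    return state

print(predict("locked", ["coin", "push", "push"]))  # 'locked'
```

A Theory of Machine would make this kind of trace as effortless for us as guessing a friend's reaction to good news.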

If we did have Theory of Machine capacities built into our brains, things might be very different. Instead, we seem condemned to see the complex reality of thinking machines, which think on principles very different from the ones we are used to, through the simplifying lens of assuming they will be like thinking minds, perhaps reduced or amplified in capacity but essentially the same. Since we will be interacting with thinking machines more and more as time goes on, we need to develop better intuitions about how they work. Crafting a new mental module isn't easy, but our brains did it, by reusing existing faculties in a clever new way, when written language was invented. Perhaps our descendants will learn the skill of understanding machines in childhood as easily as we learned to read.