2015: WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

Joshua Bongard
Cyril G. Veinott Green and Gold Professor, Department of Computer Science, University of Vermont; Author, How the Body Shapes the Way We Think
Manipulators and Manipulanda

Place a familiar object on a table in front of you, close your eyes, and manipulate that object so that it hangs upside down above the table. Your eyes are closed so that you can focus on your thinking: which way did you reach out, grasp, and twist the object? What sensory feedback told you whether you were succeeding or failing? Now close your eyes again and think about manipulating someone you know into doing something they may not want to do. Again, observe your own thinking: what strategies might you employ? If you implement those strategies, how will you distinguish progress from stalemate?

Although much recent progress has been made in building machines that sense patterns in data, most people feel that general intelligence involves action: reaching some desired goal or, failing that, keeping one's future options open. It is hypothesized that this embodied approach to intelligence allows humans to use physical experiences (such as manipulating objects) as scaffolding for learning more subtle abilities (such as manipulating people). But our bodies shape the kinds of physical experiences we can have. For example, we can manipulate only a few objects at once because we have only two hands; perhaps this limitation also constrains our social abilities in ways we have yet to discover. George Lakoff taught us that we can find clues to the body-centrism of thinking in metaphors: we counsel each other not to "look back" in anger because, since we tend to walk in the direction of our forward-facing eyes, past events literally lie behind us.

So: in order for machines to think, they must act. And in order to act, they must have bodies with which to connect physical and abstract reasoning. But what if machines do not have bodies like ours? Consider Hans Moravec's hypothetical Bush Robot: picture a shrub in which each branch is an arm and each twig is a finger. This robot's fractal nature would allow it to manipulate thousands or millions of objects simultaneously. How might such a robot differ in its thinking about manipulating people, compared to how people think about manipulating people?

One of many notable deficiencies in human thinking is dichotomous reasoning: believing something is black or white rather than considering its particular shade of grey. But we are literally rigid and modular creatures: our branching set of bones houses fixed organs and supports fixed appendages with specific functions. What about machines that are not so "black and white"? Thanks to advances in materials science and 3D printing, soft robots are starting to appear. Such robots can change their shape in extreme ways, and may in the future be composed of 20% battery and 80% motor at one place on their surface, 30% sensor and 70% support structure at another, and 40% artificial material and 60% biological matter someplace else. Such machines may be much better able to appreciate gradations than we are.

Let's go deeper. Most of us have no problem using the singular pronoun "I" to refer to the tangle of neurons in our heads. We know exactly where we end and where the world—and other people—begins. But consider modular robots: small cubes or spheres that can physically attach to and detach from one another at will. How would such machines approach the self/non-self discrimination problem? Might such machines be able to empathize more strongly with other machines (and maybe even people) if they can physically attach to them, or even become part of them?

That's how I think machines will think: familiar, because they will use their bodies as tools to reason about the world, yet alien, because bodies different from human ones will lead to very different modes of thought. But what do I think about thinking machines?

Personally, I find the ethical side of thinking machines straightforward: their danger will correlate exactly with how much leeway we give them in fulfilling the goals we set for them. Machines told to "detect and pull broken widgets from the conveyor belt the best way possible" will be extremely useful, intellectually uninteresting, and likely to destroy more jobs than they create. Machines instructed to "educate this recently displaced worker (or young person) the best way possible" will create jobs and possibly inspire the next generation. Machines commanded to "survive, reproduce, and improve the best way possible" will give us the most insight into all the different ways in which entities may think, but will probably give us humans only a very short window of time in which to do so. AI researchers and roboticists will, sooner or later, discover how to create all three of these species. Which ones we wish to call into being is up to us all.