MIT robotics professor Rodney Brooks helped bring about a paradigm shift in robotics in the late 1980s when he advocated a move away from top-down programming (which required complete control of the robot's environment) toward a biologically inspired model that helped robots navigate dynamic, constantly changing surroundings on their own. His breakthroughs paved the way for Roomba, the vacuuming robot disc that uses multiple sensors to adapt to different floor types and avoid obstacles in its path. (Brooks is chief technology officer and cofounder of Roomba's parent company, iRobot.) Brooks talked to NEWSWEEK's Katie Baker about the challenges involved in creating robots that can interact in social settings. Excerpts:
NEWSWEEK: Sociologists talk about the importance of culture and sociability in humans, and why [it should be equally important] in robots. Do roboticists consider things such as culture when thinking about how to integrate robots into human lives?
Rodney Brooks: Some of us certainly do, absolutely. My lab has been working on gaze direction. This is the one thing that you and I don't have right now [over the telephone], but if we were doing some task together, working in the same workspace, we would continuously be looking up at each other's eyes, to see what the other one was paying attention to. Certainly that level of integration with a robot has been of great interest to me. And if you're going to have a robot doing really high-level tasks with a person, I think you will want to know where its eyes are pointing, what it's paying attention to. Dogs do that with us and we do that with dogs, it happens all the time. Somehow cats don't seem to bother.
So are there ethical implications involved when you think about developing sociable robots, in terms of how they might change human behavior?
Well, every technology that we build changes us. There's a great piece on Edge.org by Kevin Kelly, I think it was, talking about how printing changed us, reading changed us. Computers have changed us, and robots will change us, in some way. It doesn't necessarily mean it's bad.
What are some of the more interesting robots that you've seen, or that you're developing or have developed?
I think what gets interesting is when robots have an ongoing existence. And Roomba has an ongoing existence, [though] less than [that of] an insect. But the ones that I have in my lab here, that I've scheduled to come out and clean every night, they do it day after day and recharge themselves. And they just sort of live as other entities in the lab and we never see them doing anything, except every so often we go and empty their dirt bins. So they've got an ongoing existence, but at a very, very primitive level. All the robots that you see from Honda and all those places don't even have that level of ongoing-ness. They're demo robots. But up until now, people haven't been building robots to have an ongoing existence, so they're sitting in the world, ready to do their thing when the situation is appropriate. So I think that's where the really interesting things will start to happen.
When you don't have to completely control the robot's environment?
When it becomes [something that] can have an ongoing existence with people … that is where things get interesting. We've done a few things like that here, starting back with [MIT professor] Cynthia Breazeal and [her sociable robot] Kismet, and in her new lab she's got some fantastic new robots where she's pushing toward that. We've had other robots here in my lab—Mertz, which was trying to recognize people from day to day as their looks change, and know about them. And some of our robots, like Domo, will interact with a person for 10 minutes or so, and [it] has face detectors and things like that. There are other projects in Europe—RobotCub, which is based in Genoa [Italy], is building these robots that many labs in Europe now have, which are all about emulating child development.
Obviously, we can tell when something's a robot and when something's a human. But when a robot is too humanlike, do we get concerned?
The robots that we've built in my lab and the European robots, [if you] show a picture of one—just a static picture—to a kid, they'll tell you, "That's a robot." There's no question in their mind. And nevertheless, when the robots are acting and interacting, they can draw even skeptics into interacting with them—for a few seconds at least—where the person responds in a social way. And some people will go on responding for minutes and minutes. Then there are these super-realistic robots that a couple of different groups, one in Asia and one in the U.S., are building. One of them looks like Albert Einstein, and [the other] looks like this television reporter. And there it gets a little weirder. Because very quickly, you realize that they're not Albert Einstein or they're not the television reporter. But they look so much like it, you get this effect that some of the researchers in Japan call the Uncanny Valley, I think. There's this dissonance in your head, because the expectations go so far up when it looks so realistic.
What else is important to understand about the robotics field today?
There are two typical reporter questions that you haven't asked me, and I'm glad you haven't. [The first] is: but a robot can't do anything it's not programmed to do anyway, so it's totally different from us. And my answer to that is that it's an artificial distinction, I think. Because my belief is that we are machines. And I think modern science treats us as machines. When you take a course in molecular biology, it's all mechanistic, and likewise in neuroscience. So if we are machines, then it seems to me that, at least in principle, there could exist machines built out of other stuff—silicon and steel, maybe—which are lawfully following the laws of physics and chemistry, just as we are, but could in principle have as much free will and as much soul as we have. Whether we humans are smart enough to build such machines is a different question. Maybe we're just not smart enough. That pisses off the scientists when I say that.
Well, don't physicists say that, in a way? That there may be things that our brains are just not configured to understand about the universe?
Yes. Actually, Patrick Winston, who is a professor here—I used to co-teach his AI [artificial intelligence] course many years ago—would always start the first lecture on artificial intelligence with the undergrads here by talking about the pet raccoon he'd had as a kid, growing up in the Midwest. It was very dexterous with its hands. But, he said, it never occurred to him that that raccoon was going to build a robot replica of itself. The raccoon just isn't smart enough. And maybe there are flying saucers up there, with little green men or green entities from somewhere, and they're looking down at my lab and saying, "What, he's trying to build robot replicas of himself? Isn't that funny! He'll never make it!"
And you said there was one other [typical reporter] question.…
When? When are we going to have them in our homes? When, when, when?