

BEYOND COMPUTATION: A TALK WITH RODNEY BROOKS [5.23.02]

Maybe there's something beyond computation in the sense that we don't understand and we can't describe what's going on inside living systems using computation only. When we build computational models of living systems—such as a self-evolving system or an artificial immunology system—they're not as robust or rich as real living systems. Maybe we're missing something, but what could that something be?


Introduction

Rodney Brooks, a computer scientist and Director of MIT's Artificial Intelligence Laboratory, is looking for something beyond computation in the sense that we don't understand and can't describe what's going on inside living systems using computation only. When we build computational models of living systems, such as a self-evolving system or an artificial immunology system, they're not as robust or rich as real living systems.

"Maybe we're missing something," Brooks asks, "but what could that something be?" He is puzzled that we've got all these biological metaphors that we're playing around with—artificial immunology systems, building robots that appear lifelike—but none of them come close to real biological systems in robustness and in performance. "What I'm worrying about," he says, "is that perhaps in looking at biological systems we're missing something that's always in there. You might be tempted to call it an essence of life, but I'm not talking about anything outside of biology or chemistry."


JB


RODNEY A. BROOKS is Director of the MIT Artificial Intelligence Laboratory and Fujitsu Professor of Computer Science. He is also Chairman and Chief Technical Officer of iRobot, a 120-person robotics company. Dr. Brooks appeared as one of the four principals in Errol Morris's 1997 movie Fast, Cheap, and Out of Control (named after one of his papers in the Journal of the British Interplanetary Society), one of Roger Ebert's 10 best films of the year. He is the author of Flesh and Machines.


BEYOND COMPUTATION: A TALK WITH RODNEY BROOKS

ROD BROOKS: Every nine years or so I change what I'm doing scientifically. Last year, 2001, I moved away from building humanoid robots to worry about what the difference is between living matter and non-living matter. You have an organization of molecules over here and it's a living cell; you have an organization of molecules over there and it's just matter. What is it that makes something alive? Humberto Maturana was interested in this question, as was the late Francisco Varela in his work on autopoiesis. More recently, Stuart Kauffman has talked about what it is that makes something living, how it is a self-perpetuating structure of interrelationships.

We have all become computation-centric over the last few years. We've tended to think that computation explains everything. When I was a kid, I had a book which described the brain as a telephone-switching network. Earlier books described it as a hydrodynamic system or a steam engine. Then in the '60s it became a digital computer. In the '80s it became a massively parallel digital computer. I bet there's now a kid's book out there somewhere which says that the brain is just like the World Wide Web because of all of its associations. We're always taking the best technology that we have and using that as the metaphor for the most complex things—the brain and living systems. And we've done that with computation.

But maybe there's more to us than computation. Maybe there's something beyond computation in the sense that we don't understand and we can't describe what's going on inside living systems using computation only. When we build computational models of living systems—such as a self-evolving system or an artificial immunology system—they're not as robust or rich as real living systems. Maybe we're missing something, but what could that something be?

You could hypothesize that what's missing might be some aspect of physics that we don't yet understand. David Chalmers has certainly used that notion when he tries to explain consciousness. Roger Penrose uses that notion to a certain extent when he says that it's got to be the quantum effects in the microtubules. He's looking for some physics that we already understand but are just not describing well enough.

If we look back at how people tried to understand the solar system in the time of Kepler and Copernicus, we notice that they had their observations, geometry, and algebra. They could describe what was happening in those terms, but it wasn't until they had calculus that they were really able to make predictions and have a really good model of what was happening. My working hypothesis is that in our understanding of complexity and of how lots of pieces interact we're stuck at that algebra-geometry stage. There's some other tool—some organizational principle—that we need to understand in order to really describe what's going on.

And maybe that tool doesn't have to be disruptive. If we look at what happened in the late 19th century through the middle of the 20th, there were a couple of very disruptive things that happened in physics: quantum mechanics and relativity. The whole world changed. But computation also came along in that time period—around the 1930s—and that wasn't disruptive. If you were to take a 19th century mathematician and sit him down in front of a chalk board, you could explain the ideas of computation to him in a few days. He wouldn't be saying, "My God, that can't be true!" But if we took a 19th century physicist (or for that matter, an ordinary person in the 21st century) and tried to explain quantum mechanics to him, he would say, "That can't be true. It's too disruptive." It's a completely different way of thinking. Using computation to look at physical systems is not disruptive to the extent that it needs its own special physics or chemistry; it's just a way of looking at organization.

So, my mid-life research crisis has been to scale down looking at humanoid robots and to start looking at the very simple question of what makes something alive, and what the organizing principles are that go on inside living systems. We're coming at it with two and a half or three prongs. At one level we're trying to build robots that have properties of living systems that robots haven't had before. We're trying to build robots that can repair themselves, that can reproduce (although we're a long way from self-reproduction), that have metabolism, and that have to go out and seek energy to maintain themselves. We're trying to design robots that are not built out of silicon and steel, but out of materials that are not as rigid or as regular as traditional materials—that are more like what we're built out of. Our theme phrase is that we're going to build a robot out of Jello. We don't really mean we're actually going to use Jello, but that's the image we have in our mind. We are trying to figure out how we could build a robot out of "mushy" stuff and still have it be a robot that interacts in the world.

The second direction we're going is building large-scale computational experiments. People might call them simulations, but since we're not necessarily simulating anything real I prefer to call them experiments. We're looking at a range of questions on living systems. One student, for example, is looking at how multi-cellular reproduction can arise from single-cell reproduction. When you step back a little bit you can understand how single-cell reproduction works, but how did that turn into multi-cellular reproduction, which at one level of organization looks very different from what's happening in single-cell reproduction? In single-cell reproduction one thing gets bigger and then just breaks into two; in multi-cellular reproduction you're actually building different sorts of cells. This is important in speculating about the pre-biotic emergence of self-organization in the soup of chemicals that used to be Earth. We're trying to figure out how self-organization occurred, and how it bootstrapped Darwinian evolution, DNA, and so on, out of that. The current dogma is that DNA is central. But maybe DNA came along a lot later as a regulatory mechanism.
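To make the contrast concrete, here is a minimal Python sketch, purely illustrative and not the student's actual experiment: a single cell reproduces by growing and splitting, while a toy multicellular organism develops differentiated cell types from one germ cell and reproduces through it. Every rule and name in it (the positional skin/body rule, the germ-line choice) is an invented assumption.

    from collections import Counter

    def fission(cell):
        """Single-cell reproduction: one thing gets bigger and breaks into two."""
        return [dict(cell), dict(cell)]

    def develop(germ_cell, generations=4):
        """Multicellular development: repeated division with crude positional differentiation."""
        body = [dict(germ_cell)]
        for _ in range(generations):
            next_body = []
            for i, cell in enumerate(body):
                left, right = dict(cell), dict(cell)
                # Invented rule: cells on the outside become "skin", the rest "body".
                left["type"] = "skin" if i == 0 else "body"
                right["type"] = "skin" if i == len(body) - 1 else "body"
                next_body += [left, right]
            body = next_body
        # Set one cell aside as the germ line for the next generation.
        body[len(body) // 2]["type"] = "germ"
        return body

    unicell = {"genome": "G", "type": "unicell"}
    print("fission offspring:", len(fission(unicell)))

    organism = develop({"genome": "G", "type": "germ"})
    print("multicell cell types:", Counter(c["type"] for c in organism))

    # The multicell reproduces as a whole: its germ cell develops into a new organism.
    offspring = develop(next(c for c in organism if c["type"] == "germ"))
    print("offspring organism size:", len(offspring))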

In other computational experiments we're looking at very simple animals and modeling their neural development. We're looking at polyclad flatworms, which have a very primitive but very adaptable brain with a couple of thousand neurons. If you take a polyclad flatworm and cut out its brain, it doesn't carry out all of its usual behaviors but it can still survive. If you then get a brain from another one and you put it into this brainless flatworm, after a few days it can carry out all of its behaviors pretty well. If you take a brain from another one and you turn it about 180 degrees and put it in backwards, the flatworm will walk backwards a little bit for the first few days, but after a few days it will be back to normal with this brain helping it out. Or you can take a brain and flip it over 180 degrees, and it adapts and regrows. How is that regrowth and self-organization happening in this fairly simple system? All of these different projects are looking, with computational experiments, at how this self-organization happens, in a very artificial-life-like way.
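As a cartoon of the kind of adaptation involved, consider the following toy Python sketch. Nothing in it models real polyclad neurons; it only shows how a reversed sensorimotor mapping can recover through simple error-driven adjustment over a few simulated days, the pattern the transplant experiments show. The single weight, the learning rate, and the error signal are all invented assumptions.

    import random

    random.seed(2)
    weight = 1.0            # healthy "brain": move toward the goal direction
    weight = -weight        # transplant the brain rotated 180 degrees: commands reversed

    LEARNING_RATE = 0.02    # arbitrary; sets how fast the "worm" re-adapts
    for day in range(1, 11):
        total_error = 0.0
        for _ in range(20):                      # experiences per simulated day
            goal_direction = random.choice([-1.0, 1.0])
            movement = weight * goal_direction
            error = goal_direction - movement    # it goes the wrong way at first
            weight += LEARNING_RATE * error * goal_direction
            total_error += abs(error)
        print(f"day {day}: weight={weight:+.2f} mean error={total_error / 20:.2f}")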

The third piece is trying to see if we can generate some mathematical principles out of these robots and these computational experiments. That, of course, is what we're really after. But at the same time, my research methodology is not to go after a question like that directly, because you sit and twiddle your thumbs and speculate for years and years. I try to build some real systems and then try and generalize from them.

If we—or more probably, other people—are successful at this, and can get to a real understanding of how all of these different pathways inside a living system interact to create a living system, then we'll have a new level of technology that can be built on top of that. We will in a principled way then be able to manipulate biological material in the way that we've learned in the last couple of hundred years to manipulate steel and then silicon. In 50 years our technological infrastructure and our bodies may be quite indistinguishable in that they'll be the same sort of processes.

I have several interesting robotics projects underway. One of the robots I must say was inspired by Bill Joy, probably to his dismay. We have a robot now that wanders around the corridors, finds electrical outlets, and plugs itself in. The next step is to make it hide during the day and come out at night and plug itself in. I'd like to build a robot vermin. Once I started talking about this, someone told me about a science fiction story from the '50s or '60s about a similar creature—The Beast Mark 3, or 4—which I like quite a lot. In the story the robot squeals when you pick it up and runs away. It doesn't have an off-switch, so the only way to get rid of it is to take a hammer to the thing, or lock it in a room where there are no outlets and let it starve to death. I'm trying to build some robots like that as thought-provoking pieces—and just because Bill Joy was afraid of them.

We're also trying to build self-reproducing robots. We've been doing experiments with Fischer Technik and Lego. We're trying to build a robot out of Lego which can put together a copy of itself with Lego pieces. Obviously you need motors and some little computational units, but the big question is to determine what the fixed points in mechanical space are to create objects that can manipulate components of themselves and construct themselves. There is a deep mathematical question to get at there, and for now we're using these off-the-shelf technologies to explore that. Ultimately we expect we're going to get to some other generalized set of components which have lots and lots of ways of cooperatively being put together, and hope that we can get them to be able to manipulate themselves. You can do this computationally in simulation very easily, but in the real world the mechanical properties matter. What is that self-reflective point of mechanical systems? Biomolecules as a system have gotten together and are able to do that.

We've also been looking at how things grow. We, and biological systems, grow from simple to more complex. How do the mechanics of that growth happen? How does rigidity come out of fairly sloppy materials? To address these questions we've been looking at tensegrity structures. On the computational side, I'm trying to build an interesting chemistry which is related to physics and has a structure where you get interesting combinatorics out of simple components in a physical simulation, so that properties of living systems can arise through spontaneous self-organization. The question here is: What sorts of influences do you need on the outside? In the pre-biotic soup on Earth you had tides, which were very important for sorting. You had regular thunderstorms every three or four days which served as very regular sorting operations, and then there was the day and night cycle—heating and cooling. With this thermodynamic washing through of chemicals, it may be that some clays attached themselves to start self-organization, but you had to get from crystal to this other sort of organization. What are the key properties of chemistry which can let that arise? What's the simplest chemistry you can have in which that self-organization will arise? What is the relationship between the combinatorics and the sorts of self-organization that can arise? Obviously our chemistry let that arise. We are creating computational systems and exploring that space.
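One way to picture the kind of computational experiment this points at is a toy artificial chemistry like the Python sketch below. It is only an illustration of the question, not what we're actually building: string "molecules" bond during a cool phase and fragment during a hot phase, and we look at what combinatorial variety survives repeated day/night cycling. The join and break probabilities are arbitrary assumptions.

    import random
    from collections import Counter

    random.seed(1)
    soup = ["a"] * 200 + ["b"] * 200        # a soup of two kinds of monomers

    def cool_phase(soup, join_prob=0.5):
        """Cooling: adjacent molecules in the (shuffled) soup bond with some probability."""
        random.shuffle(soup)
        new_soup, i = [], 0
        while i < len(soup):
            if i + 1 < len(soup) and random.random() < join_prob:
                new_soup.append(soup[i] + soup[i + 1])
                i += 2
            else:
                new_soup.append(soup[i])
                i += 1
        return new_soup

    def hot_phase(soup, break_prob=0.4):
        """Heating: each multi-unit molecule fragments at a random bond with some probability."""
        new_soup = []
        for m in soup:
            if len(m) > 1 and random.random() < break_prob:
                cut = random.randrange(1, len(m))
                new_soup.extend([m[:cut], m[cut:]])
            else:
                new_soup.append(m)
        return new_soup

    for _ in range(50):                      # fifty day/night cycles
        soup = hot_phase(cool_phase(soup))

    lengths = Counter(len(m) for m in soup)
    print("molecule lengths after cycling:", dict(sorted(lengths.items())))
    print("distinct molecular species:", len(set(soup)))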

My company, iRobot, has been pushing in a bunch of different areas. There's been a heightened interest in military robots, especially since September 11. By September 12 we had some of our robots down at Ground Zero in New York trying to help look for survivors under the rubble. There's been an increase in interest in robots that can do search and rescue, in robots that can find mines, and in portable robots that can do reconnaissance. These would be effective when small groups, like the special forces we've seen in Afghanistan, go in somewhere and they don't necessarily want to stick their heads up to go look inside a place. They can send the robot in to do that.

Another robot that we're just starting to get into production now after three years of testing is a robot to go down oil wells. This particular one is 5 centimeters in diameter and 14 meters long. It has to be autonomous, because you can't communicate by radio. Right now, if you want to go and manipulate oil wells while they are in production, you need a big infrastructure on the surface to shove a big thick cable down. This can mean miles and miles of cable, which means tons of cable on the surface, or a ship sitting above the oil well to push this stuff down through 30-foot segments of pipe that go one after the other after the other for days and days and days. We've built these robots that can go down oil wells—where the pressure is 10,000 psi at 150 degrees Centigrade—carry along instruments, do various measurements, and find out where there might be too much water coming into the well. Modern wells have sleeves that can be moved back and forth to block off segments where the pressure changes in the shale layer from the oil flow suggest that it would be more effective to let the oil in somewhere else. When you have a managed oil well you're going to increase the production by about a factor of two over the life of the well. The trouble is, it's been far too expensive to manage the oil wells because you need this incredible infrastructure. These robots cost something on the order of a hundred thousand dollars.

They're retrievable, because you don't want them down there blocking the oil flow. And they're tiny. A robot that's five centimeters in diameter in a standard-size oil bore soon starts to clog things up. The robots go down there and you can't communicate with them, but we've pushed them to failure artificially and have also had some failures down there which we didn't predict, and in every case they've managed to reconfigure themselves and get themselves out.

Another thing happening in robots is toys. Just like the first microprocessors, the first robots are getting into people's homes in toys. There's been a bit of a downturn in high-tech toys since September 11, and we're more back to basics, but it will spring back next year. There are a lot of high-tech, simple robot toys coming on the market; we're certainly playing in that space.

Another interesting thing just now starting to happen is robots in the home. For a couple of years now you've been able to buy lawn-mowing robots from the Israeli company, Friendly Machines. In the past month Electrolux has just started selling their floor-cleaning robot. A couple of other players have also made announcements, but no one's delivering besides Electrolux. If these products turn out to be successful, we're at the start of the curve of getting robots into our homes to do useful work.

My basic research is conducted at The Artificial Intelligence Lab at MIT, which is an interdisciplinary lab. We get students from across the Institute, although the vast majority are computer science majors. We also have electrical engineering majors, brain and cognitive science students, some mechanical engineering students, even some aeronautics and astronautics students these days because there is a big push for autonomous systems in space. We work on a mixture of applied and wacky theoretical stuff.

The most successful applied stuff over the last 3 or 4 years has been in assisting surgery. Using computer vision techniques, we have built robots that take all different sorts of imagery during surgery. There are new MRI machines where you can have a patient inside an MRI as you're doing surgery. You get coarse measurements, register those with the fine MRI measurements done in a bigger machine beforehand, and then give the surgeon a real-time 3-dimensional picture of everything inside the brain of the patient undergoing brain surgery. If you go to one of the major hospitals here in Boston for brain surgery, you're going to have a surgeon assisted by AI systems developed at the lab. The first few times this was running we had grad students in the OR rebooting Unix at critical points. Now we're way past that—we don't have any of our own staff there. It's all handed over to the surgeons and the hospital staff, and it's working well. They use it for every surgery.
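To give a flavor of that registration step, here is a minimal Python sketch using the standard Kabsch algorithm to rigidly align a coarse intra-operative point set with a fine pre-operative one. It is an illustration only, not the lab's surgical system, and it assumes the point correspondences are already known (real systems must also solve the matching and handle deformation).

    import numpy as np

    def rigid_register(coarse, fine):
        """Kabsch: find R, t such that R @ coarse_i + t ~= fine_i (correspondences known)."""
        c_coarse, c_fine = coarse.mean(axis=0), fine.mean(axis=0)
        H = (coarse - c_coarse).T @ (fine - c_fine)   # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = c_fine - R @ c_coarse
        return R, t

    # Self-check with synthetic points standing in for fiducials on the patient.
    rng = np.random.default_rng(0)
    fine_pts = rng.normal(size=(20, 3))               # "pre-operative" reference points
    angle = 0.4
    true_R = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0, 0.0, 1.0]])
    coarse_pts = (fine_pts - np.array([5.0, 1.0, 2.0])) @ true_R.T   # misaligned copy
    R, t = rigid_register(coarse_pts, fine_pts)
    print("max alignment error:", np.abs((coarse_pts @ R.T + t) - fine_pts).max())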

The newest thing, which is just in clinical trials right now, is virtual colonoscopies. Instead of actually having to shove the thing up to look, we can take MRI scans, and then the clinician sits there and does a fly-through of the body. Algorithms go in, look for polyps, and highlight the potential polyps. It's an external scan to replace what has previously been an internal intrusion.

The clinical trials have just started. I view this registration of data sets as a step forward. It's like the Star Trek tricorder, which scans up and down the body and tells you what's wrong. We're building the technologies that are going to allow that sort of thing to happen. If these clinical trials work out, within five years the colonoscopies could become common. Scanning a patient with something like the tricorder is a lot further off, but that's the direction we're going; we're putting those pieces of technology together.

That's the applied end of what we're doing at the lab. At the wackier, far-out end, Tom Knight now has a compiler in which you give a simple program to the system, and it compiles the program into a DNA string. He then inserts that DNA string into the genome of E. coli, and it grows into a whole bunch of E. coli. When the RNA transcription mechanism encounters that piece of DNA it does a digital computation inside the living cell, and these computations can be connected to sensors and actuators. The sensors that he's used so far sense various lactone molecules. He can then send messages to these cells by putting a molecule in a solution with the cells. They, in turn, then do some computation. In the two outputs he's demonstrated so far they produce other lactone molecules, which diffuse across the cell membrane and maybe go to a different species of E. coli that he has in the same batch with a different program running in them. He also stole a luminescent chain from a Japanese jellyfish, so he can make these cells light up with one big answer—1 or 0—depending on the results of the computation. This is still in its early days, but this, in conjunction with another program on amorphous computing, holds some promise down the line.
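Here is a toy digital abstraction of that scheme in Python, not Tom Knight's actual compiler or cells, just the logic level: each simulated cell runs a boolean "program" over the lactone signals it senses and either diffuses another lactone into the solution or lights up. The lactone names and the particular boolean functions are invented for the example.

    # Shared solution: lactone concentrations the cells can sense (names are made up).
    solution = {"lactone_A": 1, "lactone_B": 0, "lactone_C": 0}

    class Cell:
        def __init__(self, name, program, output):
            self.name = name          # which engineered "species" this is
            self.program = program    # boolean function of the sensed signals
            self.output = output      # a lactone to diffuse, or "glow" to luminesce

        def step(self, solution):
            result = self.program(solution)
            if self.output == "glow":
                print(f"{self.name}: {'GLOWING (1)' if result else 'dark (0)'}")
            elif result:
                # Diffuse the output lactone across the membrane into the solution,
                # where a different species can pick it up as an input.
                solution[self.output] = 1

    # Species 1 computes A AND (NOT B) and signals the answer with lactone C.
    sender = Cell("species-1", lambda s: bool(s["lactone_A"] and not s["lactone_B"]), "lactone_C")
    # Species 2 reads lactone C and reports the result with the jellyfish light.
    reporter = Cell("species-2", lambda s: bool(s["lactone_C"]), "glow")

    for cell in (sender, reporter):
        cell.step(solution)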

To explain amorphous computing, let me suggest the following thought experiment. Say that in a bucket of paint you have a whole bunch of computers which are little display elements. Instead of having a big LCD screen, you just get your paint brush, you paint this paint on the wall, and these little computational elements can communicate locally with the other elements near them in the paint. They're not regularly spaced, but you can predict the density ahead of time, and have them organize themselves into a big geometric display. Next you couple this with some of these cells that can do digital computation.
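A minimal sketch of that thought experiment in Python, assuming nothing beyond what's described above: irregularly scattered elements, a fixed local communication radius, and a hop-count gradient grown from the elements near one edge, which gives each element enough geometric information to light up in stripes. The element count, radius, and seeding rule are arbitrary choices for illustration; this is not code from the amorphous computing project.

    import math
    import random

    random.seed(0)
    N, RADIUS = 400, 0.08          # element count and local communication radius (arbitrary)

    # Splatter the display elements at random positions, as paint would land.
    elements = [(random.random(), random.random()) for _ in range(N)]

    # Each element can only talk to the elements within its radius.
    def neighbors(i):
        xi, yi = elements[i]
        return [j for j, (xj, yj) in enumerate(elements)
                if j != i and math.hypot(xi - xj, yi - yj) <= RADIUS]

    nbrs = [neighbors(i) for i in range(N)]

    # Seed a gradient at the elements near the left edge; everyone else repeatedly
    # takes min(neighbor hop counts) + 1 until the values stop changing.
    hops = [0 if x < 0.05 else math.inf for (x, y) in elements]
    changed = True
    while changed:
        changed = False
        for i in range(N):
            if nbrs[i]:
                best = min(hops[j] for j in nbrs[i]) + 1
                if best < hops[i]:
                    hops[i] = best
                    changed = True

    # "Display": elements light up in stripes based on hop count, a coarse geometric
    # pattern achieved with no global coordinates and no regular spacing.
    for i, (x, y) in enumerate(elements[:10]):
        lit = hops[i] != math.inf and int(hops[i]) % 2 == 0
        print(f"element {i}: pos=({x:.2f},{y:.2f}) hops={hops[i]} lit={lit}")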

A little further out, you grow a sheet of cells—just feed 'em some sugar and have them grow. They're all doing the same little computation—communicating with their neighbors by diffusing lactone molecules—and you have them self-organize and understand their spatial structure. 30 years from now, instead of growing a tree, cutting down the tree and building this wooden table, we would be able to just place some DNA in some living cells, and grow the table, because they self-organize. They know where to grow and how to change their production depending on where they are. This is going to be a key to this new industrial infrastructure of biomaterials—a little bit of computation inside each cell, and self-organization.

We've come a long way since the early AI stuff. In the '50s, when John McCarthy had that famous 6-week meeting up in Dartmouth where he coined the term "artificial intelligence," people got together and thought that the keys to understanding intelligence were being able to reproduce the stuff that those MIT and Carnegie Tech graduates found difficult to do. Al Newell and Herb Simon, for example, built some programs that could start to prove some of the theorems in Russell and Whitehead's Principia. Other people, like Turing and Wiener, were interested in playing chess, and that was the thing that people with a technical degree still found difficult to do. The concentration was really on those intellectual pursuits. Herb Simon thought that they would be the key to understanding thinking.

What they missed was how important our embodiment and our perception of the world are as the basis for our thinking. To a large extent they ignored vision, which does a large part of the processing that goes on in your head. In our vision algorithms today we can do things like face recognition and face tracking. We can do motion tracking very well now, actually. But we still cannot do basic object recognition. We can't have a system look at a table and identify a cassette recorder or a pair of eyeglasses, which is stuff that a 3-year-old can do. In the early days that stuff was viewed as being so easy, and because everyone could do it no one thought that it could be the key. Over time there's been a realization that vision, sound-processing, and early language are maybe the keys to how our brain is organized and that everything that's built on top of that makes us human and gives us our intellect. There's a whole other approach to getting to intellectual robots, if you like—based on perception and language—which was not there in the early days.

I used to carry this paper around from 1967: MIT Artificial Intelligence Memo #100. It was written by Seymour Papert. He assigned Gerry Sussman, who was an undergraduate at the time, a summer project of solving vision. They thought it must be easy and that an undergraduate should be able to knock it off in three months.

It didn't quite turn out that way.

John Brockman, Editor and Publisher
contact: [email protected]
Copyright © 2002 by Edge Foundation, Inc. All Rights Reserved.
