BEYOND COMPUTATION: A TALK WITH RODNEY BROOKS

The clinical trials have just started. I view this registration of data sets as a step forward. It's like the Star Trek tricorder, which scans up and down the body and tells you what's wrong. We're building the technologies that are going to allow that sort of thing to happen. If these clinical trials work out, within five years these colonoscopies could become common. Scanning a patient with something like the tricorder is a lot further off, but that's the direction we're going; we're putting those pieces of technology together.

That's the applied end of what we're doing at the lab. At the wackier, far-out end, Tom Knight now has a compiler into which you give a simple program, and it compiles the program into a DNA string. He then inserts that DNA string into the genome of E. coli and grows a whole bunch of E. coli from it. When the RNA transcription mechanism encounters that piece of DNA it carries out a digital computation inside the living cell, and those computations can be connected to sensors and actuators. The sensors he's used so far detect various lactone molecules, so he can send messages to these cells by putting a molecule in solution with them. They, in turn, do some computation. In the two outputs he's demonstrated so far, the cells produce other lactone molecules, which diffuse across the cell membrane and maybe go to a different strain of E. coli that he has in the same batch, with a different program running in it. He also stole a luminescence gene from a Japanese jellyfish, so he can make these cells light up with one big answer—1 or 0—depending on the result of the computation. This is still in its early days, but this work, in conjunction with another program on amorphous computing, holds some promise down the line.
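To make the flow concrete, here is a minimal sketch, in ordinary Python rather than anything biological, of the kind of compiled cell logic being described: each simulated cell senses named lactone-like signals in a shared solution, applies one digital function, and either secretes another signal or reports the answer as light. Every name in it (Cell, lactone_A, and so on) is invented for illustration; it is not Tom Knight's actual compiler or toolchain.

```python
# Hypothetical sketch of in-cell digital logic: cells read signal molecules from a
# shared solution, compute one boolean function, and either secrete another signal
# or "light up" with the final 1/0 answer.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Cell:
    """One engineered cell: reads input signals, writes an output signal."""
    inputs: List[str]            # lactone-like species this cell senses
    logic: Callable[..., bool]   # the compiled digital function
    output: str                  # species it secretes, or "light" for the readout

    def step(self, solution: Dict[str, bool]) -> None:
        present = [solution.get(name, False) for name in self.inputs]
        result = self.logic(*present)
        if self.output == "light":
            print("cell lights up:", int(result))   # the 1/0 answer
        else:
            # diffuse the output molecule into the shared solution
            solution[self.output] = solution.get(self.output, False) or result


# Two strains with different "programs", communicating only through the solution.
solution = {"lactone_A": True, "lactone_B": True}   # molecules we add by hand
strain_1 = Cell(inputs=["lactone_A", "lactone_B"],
                logic=lambda a, b: a and b, output="lactone_C")
strain_2 = Cell(inputs=["lactone_C"], logic=lambda c: not c, output="light")

strain_1.step(solution)   # computes AND, secretes lactone_C
strain_2.step(solution)   # senses lactone_C, inverts it, reports by luminescence
```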

To explain amorphous computing, let me suggest the following thought experiment. Say that in a bucket of paint you have a whole bunch of tiny computers, each of which is a little display element. Instead of having a big LCD screen, you just take your paintbrush, paint this paint onto the wall, and these little computational elements can communicate locally with the other elements near them in the paint. They're not regularly spaced, but you can predict the density ahead of time and have them self-organize into a big geometric display. Next you couple this with some of these cells that can do digital computation.
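As a toy illustration of that local self-organization (my own sketch, not the MIT amorphous-computing software), the snippet below scatters elements at random, lets each one talk only to neighbors within a small radius, floods a hop-count gradient out from a single seed element, and has elements at a particular hop distance switch on, so a rough ring emerges with no global coordinates at all.

```python
# Toy amorphous-computing sketch: randomly scattered elements, purely local
# communication, and a hop-count gradient that lets a geometric pattern emerge.

import math
import random

random.seed(0)
N, RADIUS = 400, 0.08                      # elements per unit square, comms range
points = [(random.random(), random.random()) for _ in range(N)]

# neighbour lists: who each element can hear in the "paint"
neighbours = [
    [j for j, q in enumerate(points) if j != i and math.dist(points[i], q) < RADIUS]
    for i in range(N)
]

# flood a hop-count gradient outward from element 0 (the seed)
hops = [None] * N
hops[0] = 0
frontier = [0]
while frontier:
    nxt = []
    for i in frontier:
        for j in neighbours[i]:
            if hops[j] is None:
                hops[j] = hops[i] + 1
                nxt.append(j)
    frontier = nxt

# elements 5-7 hops from the seed turn their display element on: a ring emerges
lit = [i for i in range(N) if hops[i] is not None and 5 <= hops[i] <= 7]
print(f"{len(lit)} of {N} elements lit, forming a rough ring around the seed")
```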

A little further out, you grow a sheet of cells—just feed 'em some sugar and have them grow. They're all doing the same little computation—communicating with their neighbors by diffusing lactone molecules—and you have them self-organize and understand their spatial structure. Thirty years from now, instead of growing a tree, cutting it down, and building this wooden table, we would be able to just place some DNA in living cells and grow the table, because they self-organize. They know where to grow and how to change their production depending on where they are. This is going to be a key to this new industrial infrastructure of biomaterials—a little bit of computation inside each cell, and self-organization.
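The "change their production depending on where they are" part can be shown in the same toy style. In the sketch below, assumed and simplified by me rather than drawn from any real biology toolchain, a one-dimensional row of cells works out where each cell sits purely from hop counts to the two end cells, and each cell then chooses what to "make" from that local information alone.

```python
# Hypothetical sketch: each cell in a row knows only its hop counts to the two end
# (anchor) cells. In a real amorphous system those counts would spread by local
# messages; here they are simply written out for a 1-D row of cells.

N_CELLS = 20
left_hops = list(range(N_CELLS))                 # 0, 1, 2, ... from the left anchor
right_hops = list(range(N_CELLS - 1, -1, -1))    # ..., 2, 1, 0 from the right anchor

for i in range(N_CELLS):
    # each cell's fraction of the way along the sheet, from its two hop counts only
    position = left_hops[i] / (left_hops[i] + right_hops[i])
    # differentiate: cells near the ends grow one material, cells in the middle another
    product = "leg material" if position < 0.25 or position > 0.75 else "tabletop material"
    print(f"cell {i:2d}: position ~{position:.2f} -> produces {product}")
```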

We've come a long way since the early AI stuff. In the '50s, when John McCarthy had that famous six-week meeting up at Dartmouth where he coined the term "artificial intelligence," people got together and thought that the key to understanding intelligence was being able to reproduce the stuff that those MIT and Carnegie Tech graduates found difficult to do. Al Newell and Herb Simon, for example, built some programs that could start to prove some of the theorems in Russell and Whitehead's Principia. Other people, like Turing and Wiener, were interested in playing chess, which was also something that people with a technical degree still found difficult to do. The concentration was really on those intellectual pursuits. Herb Simon thought that they would be the key to understanding thinking.

What they missed was how important our embodiment and our perception of the world are as the basis for our thinking. To a large extent they ignored vision, which does a large part of the processing that goes on in your head. In our vision algorithms today we can do things like face recognition and face tracking. We can do motion tracking very well now, actually. But we still cannot do basic object recognition. We can't have a system look at a table and identify a cassette recorder or a pair of eyeglasses, which is stuff that a three-year-old can do. In the early days that stuff was viewed as being so easy, and because everyone could do it, no one thought it could be the key. Over time there's been a realization that vision, sound processing, and early language are maybe the keys to how our brain is organized, and that everything built on top of them is what makes us human and gives us our intellect. There's a whole other approach to getting to intelligent robots, if you like—one based on perception and language—which was not there in the early days.

I used to carry around this paper from 1966: MIT Artificial Intelligence Memo #100. It was written by Seymour Papert. He assigned Gerry Sussman, who was an undergraduate at the time, a summer project of solving vision. They thought it must be easy and that an undergraduate should be able to knock it off in three months.

It didn't quite turn out that way.
