New technology = New perceptions.

Edge 102— June 5, 2002

(8,700 words)


TWELVE FLOWERS

By Katinka Matson

INTRODUCTION BY KEVIN KELLY


When I saw Matson's images I was blown away. Erase from your mind any notion of pixels or any grainy artifact of previous digitalization gear. Instead imagine a painter who could, like Vermeer, capture the quality of light that a camera can, but with the color of paints. That is what a scanner gives you. Now imagine a gifted artist like Matson exploring what the world looks like when it can only see two inches in front of its eye, but with infinite detail! In her flowers one can see every microscopic dew drop, leaf vein, and particle of pollen—in satisfying rich pigmented color. (From the Introduction By Kevin Kelly)


BEYOND COMPUTATION: A TALK WITH RODNEY BROOKS [6.3.02]

Maybe there's something beyond computation in the sense that we don't understand and we can't describe what's going on inside living systems using computation only. When we build computational models of living systems—such as a self-evolving system or an artificial immunology system—they're not as robust or rich as real living systems. Maybe we're missing something, but what could that something be?


A MUTUAL, JOINT-STOCK WORLD IN ALL MERIDIANS BY JAMES J. O'DONNELL [6.3.02]

It was on the 24th of August, in the year 410 of the common era, that the unthinkable came to pass. A guerrilla army, led by a renegade Roman general named Alaric, who had been brought up in a German-speaking community outside the actual boundaries of the Roman empire, ended years of threats and intimidation by invading the city of Rome itself. For three days they remained, destroying, looting, and killing. The exact loss of life was never known and may have been less than fears of the moment said it was, but the experience was a shattering one nonetheless. It had been 800 years since the last such defeat of the city, 800 years in which Rome had grown to be the greatest city in the world, the envy of the nations, the model for what a great city was like.

The shock was felt throughout the Roman world. In far-off Bethlehem, the scholar and monk Jerome, so prolific that one might think of him as the Stephen King of his time, could not work.

Here are his words: "And I was stunned and stupefied, so much so that I couldn't think about anything else day and night. I felt as if I were being held hostage myself and couldn't even open my mouth until I knew for sure what had happened. Hanging there, caught between hope and despair, I was torturing myself with the thought of what others were suffering. But after the brightest light of all the lands was extinguished — after the head of the whole Roman empire was lopped off — to speak truly, after the whole world had perished in a single city: I fell silent and was humbled, and I kept my silence and my sorrow was renewed. My heart grew warm within me and fire blazed up in my thoughts ..."



"I have mounted on my wall a most remarkable image. It's a gloriously vibrant water lily, with creamy colors and almost infinite deepness in detail and tone. It's large, about two feet square. It was neither painted, nor is it technically a photograph. It's beautiful. Everyone who has seen it has remarked on how stunning it looks, and how unlike a typical photograph. It looks like a painting but it is much too finely tuned and rendered, too polished. It's different." (From Kevin Kelly's Introduction)

TWELVE FLOWERS BY KATINKA MATSON
Introduction By Kevin Kelly


"For the past several years I have experimented with non-photographic techniques for creating images by utilizing input through a flatbed CCD scanner. No photographs are employed in the process."
— Katinka Matson



I have mounted on my wall a most remarkable image. It's a gloriously vibrant water lily, with creamy colors and almost infinite deepness in detail and tone. It's large, about two feet square. It was neither painted, nor is it technically a photograph. It's beautiful. Everyone who has seen it has remarked on how stunning it looks, and how unlike a typical photograph. It looks like a painting but it is much too finely tuned and rendered, too polished. It's different.

This flower is one of a series of ravishing images made by Katinka Matson; the images in both her series, Forty Flowers (January 2002) and the current Twelve Flowers, can be seen here on Edge in low-resolution versions. Katinka Matson's digital images are both pioneering and representative. She is in the venerable mode of following the technology.

Painting, the technology, changed how we use our eyes. Photography, another technology, changed how we painted. According to painter David Hockney's controversial theory, early experiments with optics and drawing "put a hand in the camera." Painters like Vermeer traced images from convex mirrors and simple lenses—thus the hand in the camera. Now the newest technology, digital gear, is overhauling photography, in part by putting the hand back into the camera. That's what we call Photoshop. Whatever distinction there may have been between painting and photography, Photoshop has made it vanish completely. We can put our fingers into photographs, or mechanize hand-crafted paintings. However, this vanishing act required not only Photoshop, but two other technologies: a digital retina, and ink jet printing.

There are many ways to make an artificial eye. We assume a central lens is needed because that's how our eyes work, and how cameras work, too. That's why it is a shock to hear that Matson's images weren't made with a camera. How else could it be done? In 1975 Ray Kurzweil explored a different route by inventing the flatbed scanner. The eye became a sensitive stick that floated along the object to be seen. When the object was a flat piece of paper this was easy. A room, or the world outside, however, was too distant for the sensitivity of the scanning eye without a lens, so in our minds we kept the scanner enslaved to papers and books.

Like the many people who xeroxed their body parts for fun, or used a copy machine for art, Matson discovered that the scanning eye stick was far better at depth than was assumed. More importantly, as color scanning became cheap, and then became super hi-res, the final image of a quick scan had all the detail of a painting. She began composing cut flowers on a scanner bed and capturing the color images. So the images you see here were not photographed but scanned with an ordinary office scanner. The grace of the images is self-evident. But there was one more needed technology to bring them to life: ink jet printing.

Scale is an important aspect of the visual world. Paintings could be made larger than photographs because of the constraint the falloff of light had on the physics of photographic printing. It was difficult (expensive) to keep the tones on a wall-size photographic print even from the center to the edges because of the differential in distance from the projecting lens. It was difficult (expensive) to chemically treat paper in the dark evenly at this scale. It was difficult (expensive) to maintain temperature (which affected color) at this scale. It was difficult (expensive) to capture sufficient resolution at this scale. Therefore photographs were created smallish. All these constraints have been removed by digital photography and ink jet printing.

It is now possible to make a very, very large ink jet print that has more resolution than your eye can discern, that has as much color as oil paint (and is as permanent), that is critically even from edge to edge, and that is reproducible in however many quantities you need. I recently finished a book of color photographs published by the world's best art house publisher, printed by the best printer in Italy, and the colors of those pages can't compare to the ink jet prints that I made of the images as a proof. And this technology will only get better.

When I saw Matson's images I was blown away. Erase from your mind any notion of pixels or any grainy artifact of previous digitalization gear. Instead imagine a painter who could, like Vermeer, capture the quality of light that a camera can, but with the color of paints. That is what a scanner gives you. Now imagine a gifted artist like Matson exploring what the world looks like when it can only see two inches in front of its eye, but with infinite detail! In her flowers one can see every microscopic dew drop, leaf vein, and particle of pollen—in satisfying rich pigmented color.

Matson has a gift for design. I delight in her new images, particularly the sly one with a wood mushroom and flower. She is at the forefront of a new wave in photography, or what we should call new imaging. New cameras, like the Foveon, new scanning technology, and new pigmented printers like the Epson series, are all going to give artists like Matson room to reinvent how we see again.

Kevin Kelly

KEVIN KELLY helped launch Wired magazine in 1993 and served as Executive Editor. In 1994 and 1997, during Kelly's tenure, Wired won the National Magazine Award for General Excellence (the industry's equivalent of two Oscars). He is now Editor-At-Large for Wired. Previously, Kelly was editor and publisher of the Whole Earth Review. He is the author of Out of Control, New Rules for the New Economy, and the recently published Asia Grace. Instead of going to college he went to Asia as a photographer. His photographs have appeared in Life and other national magazines.

KEVIN KELLY's Edge Bio Page

KATINKA MATSON is President of Brockman, Inc., a New York literary agency, co-founder of Edge Foundation, Inc., an author, and an artist. Her digital art is featured on Edge.

KATINKA MATSON's Edge Bio Page



BEYOND COMPUTATION: A TALK WITH RODNEY BROOKS [6.3.02]


Introduction

Rodney Brooks, a computer scientist and Director of MIT's Artificial Intelligence Laboratory, is looking for something beyond computation, in the sense that we don't understand and can't describe what's going on inside living systems using computation only. When we build computational models of living systems—such as a self-evolving system or an artificial immunology system—they're not as robust or rich as real living systems.

"Maybe we're missing something," Brooks asks, "but what could that something be?" He is puzzled that we've got all these biological metaphors that we're playing around with—artificial immunology systems, building robots that appear lifelike—but none of them come close to real biological systems in robustness and in performance. "What I'm worrying about," he says, "is that perhaps in looking at biological systems we're missing something that's always in there. You might be tempted to call it an essence of life, but I'm not talking about anything outside of biology or chemistry."


JB


RODNEY A. BROOKS is Director of the MIT Artificial Intelligence Laboratory, and Fujitsu Professor of Computer Science. He is also Chairman and Chief Technical Officer of iRobot, a 120-person robotics company. Dr. Brooks also appeared as one of the four principals in the Errol Morris movie Fast, Cheap, and Out of Control (named after one of his papers in the Journal of the British Interplanetary Society) in 1997 (one of Roger Ebert's 10 best films of the year). He is the author of Flesh and Machines.


BEYOND COMPUTATION: A TALK WITH RODNEY BROOKS

ROD BROOKS: Every nine years or so I change what I'm doing scientifically. Last year, 2001, I moved away from building humanoid robots to worry about what the difference is between living matter and non-living matter. You have an organization of molecules over here and it's a living cell; you have an organization of molecules over here and it's just matter. What is it that makes something alive? Humberto Maturana was interested in this question, as was the late Francisco Varela in his work on autopoiesis. More recently, Stuart Kauffman has talked about what it is that makes something living, how it is a self-perpetuating structure of interrelationships.

We have all become computation-centric over the last few years. We've tended to think that computation explains everything. When I was a kid, I had a book which described the brain as a telephone-switching network. Earlier books described it as a hydrodynamic system or a steam engine. Then in the '60s it became a digital computer. In the '80s it became a massively parallel digital computer. I bet there's now a kid's book out there somewhere which says that the brain is just like the World Wide Web because of all of its associations. We're always taking the best technology that we have and using that as the metaphor for the most complex things—the brain and living systems. And we've done that with computation.

But maybe there's more to us than computation. Maybe there's something beyond computation in the sense that we don't understand and we can't describe what's going on inside living systems using computation only. When we build computational models of living systems—such as a self-evolving system or an artificial immunology system—they're not as robust or rich as real living systems. Maybe we're missing something, but what could that something be?

You could hypothesize that what's missing might be some aspect of physics that we don't yet understand. David Chalmers has certainly used that notion when he tries to explain consciousness. Roger Penrose uses that notion to a certain extent when he says that it's got to be the quantum effects in the microtubules. He's looking for some physics that we already understand but are just not describing well enough.

If we look back at how people tried to understand the solar system in the time of Kepler and Copernicus, we notice that they had their observations, geometry, and algebra. They could describe what was happening in those terms, but it wasn't until they had calculus that they were able to make predictions and have a really good model of what was happening. My working hypothesis is that in our understanding of complexity and of how lots of pieces interact we're stuck at that algebra-geometry stage. There's some other tool—some organizational principle—that we need to understand in order to really describe what's going on.

And maybe that tool doesn't have to be disruptive. If we look at what happened in the late 19th century through the middle of the 20th, there were a couple of very disruptive things that happened in physics: quantum mechanics and relativity. The whole world changed. But computation also came along in that time period—around the 1930s—and that wasn't disruptive. If you were to take a 19th century mathematician and sit him down in front of a chalk board, you could explain the ideas of computation to him in a few days. He wouldn't be saying, "My God, that can't be true!" But if we took a 19th century physicist (or for that matter, an ordinary person in the 21st century) and tried to explain quantum mechanics to him, he would say, "That can't be true. It's too disruptive." It's a completely different way of thinking. Using computation to look at physical systems is not disruptive to the extent that it needs its own special physics or chemistry; it's just a way of looking at organization.

So, my mid-life research crisis has been to scale down looking at humanoid robots and to start looking at the very simple question of what makes something alive, and what the organizing principles are that go on inside living systems. We're coming at it with two and a half or three prongs. At one level we're trying to build robots that have properties of living systems that robots haven't had before. We're trying to build robots that can repair themselves, that can reproduce (although we're a long way from self-reproduction), that have metabolism, and that have to go out and seek energy to maintain themselves. We're trying to design robots that are not built out of silicon and steel, but out of materials that are not as rigid or as regular as traditional materials—that are more like what we're built out of. Our theme phrase is that we're going to build a robot out of Jello. We don't really mean we're actually going to use Jello, but that's the image we have in our mind. We are trying to figure out how we could build a robot out of "mushy" stuff and still have it be a robot that interacts in the world.

The second direction we're going is building large-scale computational experiments. People might call them simulations, but since we're not necessarily simulating anything real I prefer to call them experiments. We're looking at a range of questions on living systems. One student, for example, is looking at how multi-cellular reproduction can arise from single-cell reproduction. When you step back a little bit you can understand how single-cell reproduction works, but how did that turn into multi-cellular reproduction, which at one level of organization looks very different? In single-cell reproduction one thing gets bigger and then just breaks into two; in multicell reproduction you're actually building different sorts of cells. This is important in speculating about the pre-biotic emergence of self-organization in the soup of chemicals that used to be Earth. We're trying to figure out how self-organization occurred, and how it bootstrapped Darwinian evolution, DNA, etc. out of that. The current dogma is that DNA is central. But maybe DNA came along a lot later as a regulatory mechanism.
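The contrast between the two modes of reproduction can be caricatured in a few lines of code. This is only an illustrative sketch, not a model from Brooks's lab: the rules, the cell names, and the idea of a "stem" lineage are all assumptions invented for the example.

```python
# Toy contrast between single-cell and multicell reproduction.
# All rules and cell types here are illustrative assumptions.

def single_cell_generation(cells):
    """One thing gets bigger and then just breaks into two:
    every cell splits into two identical copies."""
    return cells * 2

def multicell_development(divisions):
    """Build different sorts of cells from one starting cell:
    at each division one daughter keeps dividing, the other
    specializes and persists as a distinct tissue type."""
    organism = ["stem"]
    for depth in range(divisions):
        next_gen = []
        for cell in organism:
            if cell == "stem":
                next_gen.append("stem")
                next_gen.append(f"tissue-{depth}")
            else:
                next_gen.append(cell)  # differentiated cells persist
        organism = next_gen
    return organism

colony = single_cell_generation(["amoeba"] * 4)
body = multicell_development(3)
print(len(colony), "identical cells;", len(set(body)), "distinct cell types")
```

Run as-is, the single-cell rule just yields ever more identical copies, while the multicell rule yields a body with several distinct cell types from one origin — the organizational jump the experiments probe.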

In other computational experiments we're looking at very simple animals and modeling their neural development. We're looking at polyclad flatworms, which have a very primitive, but very adaptable brain with a couple of thousand neurons. If you take a polyclad flatworm and cut out its brain, it doesn't carry out all of its usual behaviors but it can still survive. If you then get a brain from another one and you put it into this brainless flatworm, after a few days it can carry out all of its behaviors pretty well. If you take a brain from another one and you turn it about 180 degrees and put it in backwards, the flatworm will walk backwards a little bit for the first few days, but after a few days it will be back to normal with this brain helping it out. Or you can take a brain and flip it over 180 degrees, and it adapts, and regrows. How is that regrowth and self-organization happening in this fairly simple system? All of these different projects are looking at how this self-organization happens with computational experiments in a very artificial life-like way.

The third piece is trying to see if we can generate some mathematical principles out of these robots and these computational experiments. That, of course, is what we're really after. But at the same time, my research methodology is not to go after a question like that directly, because you sit and twiddle your thumbs and speculate for years and years. I try to build some real systems and then try and generalize from them.

If we—or more probably, other people—are successful at this, and can get to a real understanding of how all of these different pathways inside a living system interact to create a living system, then we'll have a new level of technology that can be built on top of that. We will in a principled way then be able to manipulate biological material in the way that we've learned in the last couple of hundred years to manipulate steel and then silicon. In 50 years our technological infrastructure and our bodies may be quite indistinguishable in that they'll be the same sort of processes.

I have several interesting robotics projects underway. One of the robots I must say was inspired by Bill Joy, probably to his dismay. We have a robot now that wanders around the corridors, finds electrical outlets, and plugs itself in. The next step is to make it hide during the day and come out at night and plug itself in. I'd like to build a robot vermin. Once I started talking about this, someone told me about a science fiction story from the '50s or '60s about a similar creature—The Beast Mark 3, or 4—which I like quite a lot. In the story the robot squeals when you pick it up and runs away. It doesn't have an off-switch, so the only way to get rid of it is to take a hammer to the thing, or lock it in a room where there are no outlets and let it starve to death. I'm trying to build some robots like that as thought-provoking pieces—and just because Bill Joy was afraid of them.

We're also trying to build self-reproducing robots. We've been doing experiments with Fischer Technik and Lego. We're trying to build a robot out of Lego which can put together a copy of itself with Lego pieces. Obviously you need motors and some little computational units, but the big question is to determine what the fixed points in mechanical space are to create objects that can manipulate components of themselves and construct themselves. There is a deep mathematical question to get at there, and for now we're using these off-the-shelf technologies to explore that. Ultimately we expect we're going to get to some other generalized set of components which have lots and lots of ways of cooperatively being put together, and hope that we can get them to be able to manipulate themselves. You can do this computationally in simulation very easily, but in the real world the mechanical properties matter. What is that self-reflective point of mechanical systems? Biomolecules as a system have gotten together and are able to do that.

We've also been looking at how things grow. We, and biological systems, grow from simple to more complex. How do the mechanics of that growth happen? How does rigidity come out of fairly sloppy materials? To address these questions we've been looking at tensegrity structures.

On the computational side, I'm trying to build an interesting chemistry which is related to physics and has a structure where you get interesting combinatorics out of simple components in a physical simulation, so that properties of living systems can arise through spontaneous self-organization. The question here is: What sorts of influences do you need on the outside? In the pre-biotic soup on Earth you had tides, which were very important for sorting. You had regular thunderstorms every three or four days which served as very regular sorting operations, and then you had the day and night cycle—heating and cooling. With this thermodynamic washing through of chemicals, it may be that some clays attached themselves to start self-organization, but you had to get from crystal to this other sort of organization. What are the key properties of a chemistry which can let that arise? What's the simplest chemistry you can have in which that self-organization will arise? What is the relationship between the combinatorics and the sorts of self-organizations that can arise? Obviously our chemistry let that arise. We are creating computational systems and exploring that space.

My company, iRobot, has been pushing in a bunch of different areas. There's been a heightened interest in military robots, especially since September 11. By September 12 we had some of our robots down at Ground Zero in New York trying to help look for survivors under the rubble. There's been an increase in interest in robots that can do search and rescue, in robots that can find mines, and in portable robots that can do reconnaissance. These would be effective when small groups, like the special forces we've seen in Afghanistan, go in somewhere and they don't necessarily want to stick their heads up to go look inside a place. They can send the robot in to do that.

Another robot that we're just starting to get into production now after three years of testing is a robot to go down oil wells. This particular one is 5 centimeters in diameter and 14 meters long. It has to be autonomous, because you can't communicate by radio. Right now, if you want to go and manipulate oil wells while they are in production, you need a big infrastructure on the surface to shove a big thick cable down. This can mean miles and miles of cable, which means tons of cable on the surface, or a ship sitting above the oil well to push this stuff down through 30-foot segments of pipe that go one after the other after the other for days and days and days. We've built these robots that can go down oil wells—where the pressure is 10,000 psi at 150 degrees Centigrade—carry along instruments, do various measurements, and find out where there might be too much water coming into the well. Modern wells have sleeves that can be moved back and forth to block off segments where changes in pressure in the shale layer from oil flow suggest that it would be more effective to let the oil in somewhere else. When you have a managed oil well you're going to increase the production by about a factor of two over the life of the well. The trouble is, it's been far too expensive to manage the oil wells because you need this incredible infrastructure. These robots cost something on the order of a hundred thousand dollars.

They're retrievable, because you don't want them down there blocking the oil flow. And they're tiny. A robot that's five centimeters in diameter in an oil bore that is the standard size soon starts to clog things up. The robots go down there and you can't communicate with them, but we've pushed them to failures artificially and have also had some failures down there which we didn't predict, and in every case they've managed to reconfigure themselves and get themselves out.

Toys are another place where robots are happening. Just like the first microprocessors, the first robots are getting into people's homes in toys. There's been a bit of a downturn in high-tech toys since September 11, and we're more back to basics, but it will spring back next year. There are a lot of high-tech, simple robot toys coming on the market; we're certainly playing in that space.

Another interesting thing just now starting to happen is robots in the home. For a couple of years now you've been able to buy lawn-mowing robots from the Israeli company, Friendly Machines. In the past month Electrolux has just started selling their floor-cleaning robot. A couple of other players have also made announcements, but no one's delivering besides Electrolux. We're on the start of the curve of getting robots into our homes and doing useful work if these products turn out to be successful.

My basic research is conducted at The Artificial Intelligence Lab at MIT, which is an interdisciplinary lab. We get students from across the Institute, although the vast majority are computer science majors. We also have electrical engineering majors, brain and cognitive science students, some mechanical engineering students, even some aeronautics and astronautics students these days because there is a big push for autonomous systems in space. We work on a mixture of applied and wacky theoretical stuff.

The most successful applied stuff over the last 3 or 4 years has been in assisting surgery. Using computer vision techniques, we have built robots that take all different sorts of imagery during surgery. There are new MRI machines where you can have a patient inside an MRI as you're doing surgery. You get coarse measurements, register those with the fine MRI measurements done in a bigger machine beforehand, and then get the surgeon a real-time 3-dimensional picture of everything inside the brain of the patient undergoing brain surgery. If you go to one of the major hospitals here in Boston for brain surgery, you're going to have a surgeon assisted by AI systems developed at the lab. The first few times this was running we had grad students in the OR rebooting Unix at critical points. Now we're way past that—we don't have any one of our own staff there. It's all handed over to the surgeons and the hospital staff, and it's working well. They use it for every surgery.

The newest thing, which is just in clinical trials right now, is virtual colonoscopies. Instead of actually having to shove the thing up to look, we can take MRI scans, and then the clinician sits there and does a fly-through of the body. Algorithms go in, look for polyps, and highlight the potential ones. It's an external scan to replace what has previously been an internal intrusion.

The clinical trials have just started. I view this registration of data sets as a step forward. It's like the Star Trek tricorder, which scans up and down the body and tells you what's wrong. We're building the technologies that are going to allow that sort of thing to happen. If these clinical trials work out, within five years the colonoscopies could become common. Scanning a patient with something like the tricorder is a lot further off, but that's the direction we're going; we're putting those pieces of technology together.

That's the applied end of what we're doing at the lab. At the wackier, far-out end, Tom Knight now has a compiler in which you give a simple program to the system, and it compiles the program into a DNA string. He then inserts that DNA string into the genome of E. coli, which grows into a whole bunch of E. coli. When the RNA transcription mechanism encounters that piece of DNA, it does a digital computation inside the living cell, connected to sensors and actuators. The sensors that he's used so far sense various lactone molecules, so he can send messages to the cells by putting a molecule in solution with them. They, in turn, then do some computation. In the two outputs he's demonstrated so far they produce other lactone molecules, which diffuse across the cell membrane and maybe go to a different species of E. coli that he has in the same batch, with a different program running in it. He also stole a luminescent chain from a Japanese jellyfish, so he can make these cells light up with one big answer—1 or 0—depending on the result of the computation. This is still in its early days, but this, in conjunction with another program on amorphous computing, holds some promise down the line.
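For flavor, here is a toy software analogue of that idea — emphatically not Knight's compiler or his actual genetic circuits. Cells sense the concentration of a pretend lactone signal, run a one-bit boolean program, and report the answer as luminescence. The class name, threshold, and inverter program are all invented for the sketch.

```python
# Toy sketch of "digital computation in cells": each cell senses a
# diffusing signal concentration, applies a boolean program (here a
# NOT gate), and reports its output as luminescence (on or off).
# Names and thresholds are illustrative, not the real system.

THRESHOLD = 0.5  # concentration above which the sensed input counts as "1"

class Cell:
    def __init__(self, program):
        self.program = program    # boolean function of the sensed bit
        self.luminescent = False  # the "jellyfish" readout

    def step(self, lactone_concentration):
        sensed = lactone_concentration > THRESHOLD
        self.luminescent = self.program(sensed)
        return self.luminescent

# A culture of inverter cells: they light up when the signal is absent.
inverters = [Cell(lambda bit: not bit) for _ in range(100)]

for conc in (0.9, 0.1):  # add, then wash out, the signalling molecule
    readout = sum(cell.step(conc) for cell in inverters)
    print(f"concentration={conc}: {readout}/100 cells luminescent")
```

Sending a "message" is just changing the concentration the whole culture sees; chaining cultures whose outputs are each other's input molecules gives composed logic, which is the multi-species trick described above.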

To explain amorphous computing, let me suggest the following thought experiment. Say that in a bucket of paint you have a whole bunch of computers which are little display elements. Instead of having a big LCD screen, you just get your paint brush, you paint this paint on the wall, and these little computation elements can communicate locally with the other elements near them in the paint. They're not regularly spaced, but you can predict the density ahead of time, and have them self-organize into a big geometric display. Next you couple this with some of these cells that can do digital computation.

A little further out, you grow a sheet of cells—just feed 'em some sugar and have them grow. They're all doing the same little computation—communicating with their neighbors by diffusing lactone molecules—and you have them self-organize and understand their spatial structure. 30 years from now, instead of growing a tree, cutting down the tree and building this wooden table, we would be able to just place some DNA in some living cells, and grow the table, because they self-organize. They know where to grow and how to change their production depending on where they are. This is going to be a key to this new industrial infrastructure of biomaterials—a little bit of computation inside each cell, and self-organization.
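The self-organization in both of these scenarios rests on one primitive: randomly placed elements that talk only to near neighbors can still compute global structure, for instance a hop-count gradient spreading from an anchor element. The sketch below simulates that primitive; the element count, communication radius, and single anchor are assumptions made for the demo, not details of the actual amorphous computing program.

```python
import random, math

# Minimal amorphous-computing sketch: randomly scattered elements,
# purely local communication, and a hop-count gradient from a single
# anchor element. Parameters (N, RADIUS) are illustrative assumptions.
random.seed(0)
N, RADIUS = 400, 0.12
points = [(random.random(), random.random()) for _ in range(N)]

def neighbors(i):
    """Indices of elements close enough to hear element i."""
    xi, yi = points[i]
    return [j for j, (xj, yj) in enumerate(points)
            if j != i and math.hypot(xi - xj, yi - yj) <= RADIUS]

# Breadth-first "diffusion": each element takes min(neighbor hops) + 1,
# exactly what repeated local message-passing would settle on in the paint.
hops = [None] * N
hops[0] = 0            # element 0 is the anchor
frontier = [0]
while frontier:
    nxt = []
    for i in frontier:
        for j in neighbors(i):
            if hops[j] is None:
                hops[j] = hops[i] + 1
                nxt.append(j)
    frontier = nxt

reached = sum(h is not None for h in hops)
print(f"{reached}/{N} elements got a gradient value")
```

Each element ends up knowing roughly how far it is from the anchor without any global wiring; several such gradients from different anchors give every element approximate coordinates, which is what would let the "paint" arrange itself into a display, or let a sheet of cells know where in the structure to grow.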

We've come a long way since the early AI stuff. In the '50s, when John McCarthy had that famous 6-week meeting at Dartmouth where he coined the term "artificial intelligence," people got together and thought that the keys to understanding intelligence were being able to reproduce the stuff that those MIT and Carnegie Tech graduates found difficult to do. Al Newell and Herb Simon, for example, built some programs that could start to prove some of the theorems in Russell and Whitehead's Principia. Other people, like Turing and Wiener, were interested in playing chess, and that was the thing that people with a technical degree still found difficult to do. The concentration was really on those intellectual pursuits. Herb Simon thought that they would be the key to understanding thinking.

What they missed was how important our embodiment and our perception of the world are as the basis for our thinking. To a large extent they ignored vision, which does a large part of the processing that goes on in your head. In our vision algorithms today we can do things like face recognition and face tracking. We can do motion tracking very well now, actually. But we still cannot do basic object recognition. We can't have a system look at a table and identify a cassette recorder or a pair of eyeglasses, which is stuff that a 3-year-old can do. In the early days that stuff was viewed as being so easy, and because everyone could do it no one thought that it could be the key. Over time there's been a realization that vision, sound-processing, and early language are maybe the keys to how our brain is organized and that everything that's built on top of that makes us human and gives us our intellect. There's a whole other approach to getting to intellectual robots, if you like—based on perception and language—which was not there in the early days.

I used to carry this paper around from 1967: MIT Artificial Intelligence Memo #100. It was written by Seymour Papert. He assigned Gerry Sussman, who was an undergraduate at the time, a summer project of solving vision. They thought it must be easy and that an undergraduate should be able to knock it off in three months.

It didn't quite turn out that way.


A MUTUAL, JOINT-STOCK WORLD IN ALL MERIDIANS BY JAMES J. O'DONNELL


Introduction


Jim O'Donnell is the quintessential contemporary Renaissance man. He is Vice Provost for Information Systems and Computing at the University of Pennsylvania and is also a Professor of Classical Studies. After 21 years at UPenn, he is departing to become Provost of Georgetown University, effective July 1st.

I recently ran into him on the street in Philadelphia where he had just addressed the graduating senior class. The title of his talk was "A Mutual, Joint-Stock World In All Meridians." "The title," he said, "comes from Moby Dick, ch. 13, and is meant to be slightly misleading, inasmuch as the full text, spoken by Queequeg, is: 'It's a mutual, joint-stock world, in all meridians. We cannibals must help these Christians.' " I am pleased to present Jim's talk to readers of Edge.


JB

JAMES J. O'DONNELL is Professor of Classical Studies and Vice Provost for Information Systems and Computing at the University of Pennsylvania. On 1 July 2002, he will become Provost of Georgetown University. He is the author of Avatars of the Word: From Papyrus to Cyberspace.

He has published widely on the cultural history of the late antique Mediterranean world and is a recognized innovator in the application of networked information technology in higher education. In 1990, he co-founded Bryn Mawr Classical Review, the second on-line scholarly journal in the humanities ever created. In 1994, he taught an Internet-based seminar on the work of Augustine of Hippo that reached 500 students.


JAMES J. O'DONNELL's Edge Bio Page


A MUTUAL, JOINT-STOCK WORLD IN ALL MERIDIANS BY JAMES J. O'DONNELL

It was on the 24th of August, in the year 410 of the common era, that the unthinkable came to pass. A guerrilla army, led by a renegade Roman general named Alaric, who had been brought up in a German-speaking community outside the actual boundaries of the Roman empire, ended years of threats and intimidation by invading the city of Rome itself. For three days they remained, destroying, looting, and killing. The exact loss of life was never known and may have been less than fears of the moment said it was, but the experience was a shattering one nonetheless. It had been 800 years since the last such defeat of the city, 800 years in which Rome had grown to be the greatest city in the world, the envy of the nations, the model for what a great city was like.

The shock was felt throughout the Roman world. In far-off Bethlehem, the scholar and monk Jerome, so prolific that one might think of him as the Stephen King of his time, could not work.

Here are his words: "And I was stunned and stupefied, so much so that I couldn't think about anything else day and night. I felt as if I were being held hostage myself and couldn't even open my mouth until I knew for sure what had happened. Hanging there, caught between hope and despair, I was torturing myself with the thought of what others were suffering. But after the brightest light of all the lands was extinguished — after the head of the whole Roman empire was lopped off — to speak truly, after the whole world had perished in a single city: I fell silent and was humbled, and I kept my silence and my sorrow was renewed. My heart grew warm within me and fire blazed up in my thoughts ..."

I have been reading and thinking about the events of 410 for over thirty years, but never with the intensity and compassion that I have known since that other ghastly day last September. So forgive me: I am a historian, and I have a story to tell this afternoon. History of this kind offers us a way to think about our world — but it offers no obvious or simple answers to our questions. I hope you will give me leave to provoke you for a while.

Roman government's response to the crisis was military and ineffective. The Roman emperor had years earlier moved his western court to the northern Italian city of Ravenna, protected by surrounding marshes and with a sea-lane for escape, but he sent his troops to pursue the enemy, then negotiate with him, then pursue him some more. From the official perspective, the issue was simple: barbarism versus civilization. The renegade general and his followers were demonized, pursued, and feared. Within a few years, they had migrated to what is now modern Spain and settled there, establishing a regime that thrived independent of Rome for three hundred years - until the Islamic invasions.

The years that followed were marked by a series of such migrations. We call the Spanish kingdom Visigothic, after the ancestral people of its generals. Within the century, Roman Africa fell into the hands of the Vandals from northern Europe, Roman Gaul into the hands of the Franks (who would give their country a name it still holds), and Italy itself became the homeland of the Ostrogoths. Barbarism had triumphed. To be sure, Roman armies in this period were recruited heavily from among the same peoples, and it happened more than once in the fifth century that you could not tell the Romans on a given battlefield without a scorecard - on one occasion two different contenders for the imperial throne itself fought each other through proxy armies led respectively by Vandals and by Visigoths.

But on the ground, it is far from clear that these developments constituted a defeat for civilization. Within a decade of the sack of Rome, Alaric's successor was being quoted as saying that in his youth he had thought to overthrow the Roman empire and replace it with a Gothic one, but now in power he saw that his people needed the law and structure of Roman civilization to have peace and prosperity for themselves. All of those "barbarian" kingdoms would soon rewrite the Roman law codes for local use and practice, in eloquent testimony to the power of the greatest of Roman civil achievements.

Roman government persisted in demonizing the barbarian, and the politics of the fifth and sixth centuries persisted in seeing the challenges of the age as military and technological. They could not have known or heard the lesson of a famous line from the modern French poet Paul Valéry: "Seeing is forgetting the name of the thing one sees." The Romans of that age knew exactly what they were seeing - and it made them blind to the reality around them.

And so the long dance of Roman armies and barbarian ones played out, and in the end, Rome was the loser. Preoccupation with barbarians took attention away from another more threatening military frontier, the one shared with Persia, and gradually Roman resolve and strength were worn away there. When Islam arose in the seventh century, the remaining Roman power, headquartered at Constantinople (modern Istanbul), was unable to mount more than a token resistance.

But the final irony is important to grasp. You may have visited modern Rome or seen pictures of its ancient ruins, and you may be thinking that the events of 410 of which I spoke earlier can explain what you have seen. Not so.

The greatest destruction visited upon the city of Rome, the depredations that left most of the city a prey to malaria and a home to oxen and owls for a thousand years, came not from barbarian invaders but from Roman ones. In the mid-sixth century, the reigning emperor at Constantinople, preoccupied with his vision of barbarism versus civilization, sent his own mercenary army (containing, to be sure, a good many fighters of non-Roman stock) to recapture Italy for the empire. The fifteen years of war that followed were responsible for the destruction of much of the physical fabric of Rome, and responsible as well for shattering the political and social unity of the peninsula that had been built up laboriously through many centuries. From the sixth century to the nineteenth, there was no Italy, only a peninsula divided among pieces of other people's property. That disarray was the result not of barbarism, but of self-styled civilization run amok.

Could it have been otherwise? Was there an alternate future in the aftermath of the sack of Rome? Choices in history are hard to see as we live the history, but perhaps a little easier to see from a distance.

Some of the refugees from the events of August 410 landed up in a grimy seaport city in Africa, then called Hippo Regius, today the city of Annaba in Algeria. A backwater by any standards, it owed its standing to its harbor, through which the grain and olive supply of the province of Numidia - think of it as the Roman Nebraska - came down to the sea for shipment to the capital city. It was a natural place for wealthy refugees to make landfall, and a fair number of them indeed owned the great Numidian estates in the breadbasket of empire.

The leading figure of the city of Hippo in those days was the Christian bishop, Aurelius Augustinus, known to us as Augustine. He was at this period a minor provincial figure, known within a limited circle for some of his theological writing (including the Confessions), but deeply engaged in local politics and church politics, fighting a relentless battle against other sects of his own religion. An indefatigable social climber, he made his way among the wealthy refugees, and found there disturbing ideas in circulation. Perhaps, it was being said, the sack of Rome came from a religious failure. For centuries we worshipped the old gods in the old ways and they protected the city; now in the last century we have given allegiance to a puzzling kind of new age religion — Christianity — and a fat lot of good it has done us.

Augustine could not stand such defeatism, and so began to write a book. His motives were self-interested and polemical, but the book quickly transcended its moment. Over the next two decades, starting from that moment of crisis and doubt, Augustine elaborated his view of human society and human history in the twenty-two books of his work entitled the City of God.

The book was finished long after the sack of Rome had faded from the newspapers and before the next wave of invasions trapped Augustine in his own city, where he died in 430. What marks the book is its dramatic and inclusive vision of a society that transcends the divisions of that particular time. This is not the place to outline its contents or its theology, but it should be easy enough for you to imagine the perspective, so familiar is it to moderns. The organizing principle of human history for Augustine was not membership in a given nation or state, but participation in a society that was notionally worldwide in its scope and eternal in its duration.

My point is not to test how much of that particular vision may still make sense today, but to emphasize its visionary quality. In a world where governments and soldiers emphasized division, Augustine found a way to emphasize inclusion. His criterion of inclusion was less than absolutely world-wide, of course, depending as it did on the Christian religion. But all the barbarians whom men feared in those days, all of them, were Christians of one stripe or another. To speak of a Christian vision of society, then, was to find a way to talk about humankind that embraced potentially all the warring and suspicious parties of the time.

Emperors, generals, and armies were little influenced by African bishops and their books. But the grassroots organization of Christianity - in large measure sponsored by government suppression of its opponents - had spread far enough and wide enough in those days to make a difference. When the supposedly "barbarian" communities of the western Mediterranean made their peace and settled down in the fifth and sixth centuries, bishops and monks were the community leaders who made sense of the world, along lines not very different from what Augustine laid out. If you want a hero for this story, you want perhaps not Augustine but Theoderic. Theoderic was the Ostrogothic king of Italy from 490 to 526 CE, a time that contemporaries spoke of as a golden age, when you could leave your money lying by the side of the road at night and find it there untouched in the morning - an exaggeration perhaps, but an exaggeration that speaks volumes for the social order that underlay it. Under his leadership, sects of Christians who engaged in mutual persecution in other lands lived side by side in remarkable harmony. You can visit Theoderic's massive tomb today in Ravenna, or read his words on at least one Penn website: "civilitas", the Latin word for something like "civility" or even "civilization", was his favorite theme. Not bad for a supposed "barbarian".

But if books are mostly ineffective as instruments of social change in the short term, they can, however, be persuasive in the long run. It can and should be argued and understood that the peculiarly European vision of humankind that gives birth eventually to the university tradition we embody today in our robes and rituals and to a whole series of widening circles of inclusive imagination of human society goes back to this age. The sense of community that binds together western nations today, that gave rise to such diverse organizations as the Catholic Church, the European Union, and World Cup football, takes its origins in that late antique vision of a society whose inclusiveness transcends old and seemingly obvious divisions.

But what are originally visions of inclusiveness have a way of exhausting themselves. The Roman empire had lost its ability to embrace new peoples by the time of which I have been speaking, and it is only too clear that in our time the traditional religions of the book, though their wisest practitioners speak well and act fairly, have lost much of their persuasive inclusiveness. It is indeed precisely the mode of their claims at universality that puts them most in conflict with each other.

The challenges today are thus obvious and many, but the opportunity is great as well. Few would have thought in the first half of the twentieth century that France and Germany could ever live so much at peace as they do now, and at the height of the Pacific war, it was unthinkable that Japan and the United States could ever become the allies they have now become. Our current strife may find its own comparable resolution, if we are wise and generous and visionary. Whether the vision we need comes from theologians or politicians or holders of McDonald's franchises is very much in doubt. I take some encouragement from a ragtag band of aging hippies and young computer scientists who are planning to build a clock.

The clock they build - and the library that goes with it - will be designed to live for 10,000 years: the clock of the long now, they call it, and there is a mountain in Nevada under which they plan to build it. They are already preparing for the future in ingenious and whimsical ways. They would report today's date, for example, as May 13, in the year 02002 - the initial zero being their way of reminding us to begin preparing for the inevitable Y10K crisis, hurtling towards us in a mere 7,998 years. Their mission is to encourage all who hear them to think beyond this year, this decade, or this lifetime, to remember that we live in and share responsibility for a very long future. To look out to that future is to take a deep breath and to find a place for ourselves in a narrative in which our concerns are not so paramount as they inevitably must be on a day like today.

You here may not want to be reminded of this, but very soon now, you graduates will begin bringing children into the world, children some of whom will live to see the year 2100. That's already a "now" long enough to give pause. Those children will see a world that is surely warmer and more crowded than this one. How else will it seem? That is for us, and for you, to determine, and that will be the real test of the value of what you and we have done here these last four years. Have we taught you to think in the long now? Have we taught you to forget the name of the thing you see, to forget what you think you know and see what is? Have we taught you to promote civility, to build civilization among peoples, rather than merely to oppose barbarism?

I hope we have . . .

