Edge 88 – August 7, 2001

(9,887 words)


[David Gelernter, Brian Greene, Marc D. Hauser, Alan Guth, JB,
Jordan Pollack, Jaron Lanier, Lee Smolin]

[See streaming videos of the participants. Transcripts to follow]

One aspect of our culture that is no longer open to question is that the most significant developments in the sciences today (i.e. those that affect the lives of everybody on the planet) are about, informed by, or implemented through advances in software and computation. This Edge event is an opportunity for people in various fields such as computer science, cosmology, cognition, evolutionary biology, etc., to begin talking to each other, to become aware of interesting and important work in other fields.

A Response to Jaron Lanier's ONE HALF A MANIFESTO (http://www.edge.org/3rd_culture/lanier/lanier_index.html) and POSTSCRIPT REGARDING RAY KURZWEIL (http://www.edge.org/discourse/jaron_answer.html)

Jaron writes that "the whole enterprise of Artificial Intelligence is based on an intellectual mistake." Until computers at least match human intelligence in every dimension, it will always remain possible for skeptics to say the glass is half empty. Every new achievement of AI can be dismissed by pointing out that still other goals have not yet been accomplished. Indeed, this is the frustration of the AI practitioner: once an AI goal is achieved, it is no longer considered AI and becomes just a useful technique. AI is inherently the set of problems we have not yet solved.

Click Here for Ray Kurzweil's Bio Page


By Dennis Overbye
(free registration required)

BETHLEHEM, Conn. — These would seem to be heady times to be a computer scientist. This is the information age, in which, we are told, biology is defined by a three-billion-letter instruction manual called the genome and human thoughts are analogous to digital bits flowing through a computer. And, we are warned, human intellect will soon be dwarfed by superintelligent machines.

"All kinds of people," said Jaron Lanier, a computer scientist and musician, "are happy to tell us what we do is the central metaphor, the best explanation of everything from biology to economics to aesthetics to child rearing, sex, you name it. It's very ego-gratifying."

Mr. Lanier is the lead scientist of the National Tele-Immersion Initiative, a virtual reality system that has been designed for the Internet.

He and six other scientists were sitting under a maple tree one recent afternoon worrying whether this headiness was justified. They found instead that they could not even agree on useful definitions of their field's most common terms, like "information" and "complexity," let alone the meaning and future of this revolution.

The other scientists were two computer science professors, Dr. David Gelernter of Yale and Dr. Jordan Pollack of Brandeis University; three physicists, Dr. Brian Greene of Columbia, Dr. Alan Guth of the Massachusetts Institute of Technology and Dr. Lee Smolin of the Center for Gravitational Physics and Geometry at Penn State; and a psychologist and neuroscientist, Dr. Marc Hauser of Harvard.

John Brockman, a literary agent who represents these scientists, had convened them at the country house here that he shares with his wife and partner, Katinka Matson. Mr. Brockman said he had been inspired to gather the group by a conversation with Dr. Seth Lloyd, a professor of mechanical engineering and quantum computing expert at M.I.T. Mr. Brockman recently posted Dr. Lloyd's statement on his Web site, www.edge.org: "Of course, one way of thinking about all of life and civilization," Dr. Lloyd said, "is as being about how the world registers and processes information. Certainly that's what sex is about; that's what history is about." ...

(Wer Weiss?: Ein Treffen der Dritten Kultur — "Who Knows?: A Meeting of the Third Culture")

Jordan Mejias

[original German version — http://www.edge.org/documents/press/faz_8101.g.html]


Plato once sought out an olive grove in which he might finally bring the world its first academy. But olive trees are rare in New England. Instead, there are strong maples, and recently, beneath a knotty, especially old and venerable specimen on Eastover Farm in Connecticut, academics fled their laboratories and lecture halls and, in the tradition of their intellectual ancestors, conversed in nature about more than their surroundings.

There were no professional philosophers, which might hardly come as a surprise, since the invitation to the open-air symposium was issued by the Internet salon "Edge," whose founder, John Brockman, cultivates the Third Culture and is busy washing away the border between the natural sciences and the humanities.

Thus, computer scientist David Gelernter of Yale brought along news that industry now invests far more energy in research than the universities do. The professor, who is also an entrepreneur, was more than a little anxious that the Internet, having only just entered the race for the exchange of knowledge, might soon overtake its university competition. This thesis went uncontested. Jaron Lanier, who gave virtual reality its name, and Jordan Pollack, head of the Brandeis robotics program, also agreed that software limps behind hardware and is losing still more ground.

In the free-floating exchange of ideas, however, the scientists repeatedly put reins on wildly galloping progress. In this they distinguish themselves considerably from us mere mortals. While we might think we can distinguish between a dead and a living organism, no specialist ventures a definition of life. It was similar here. Science uncovers its fundamental lack of knowledge.

"We don't know what information is," said Lee Smolin, a physicist at Pennsylvania State University, and none of the collected authorities on information could explain it to him. Brian Greene, who teaches mathematics and physics at Columbia University and writes bestsellers about "string theory," sat and smiled at how perplexing the concepts of space and time are: "We don't know what it is." Evolutionary psychologist Marc D. Hauser, who traveled from Harvard, took up the riddle of the brain, an organ that beguiles us with the illusion that we know more than we actually do. He thanked Noam Chomsky not least of all for this insight. As cosmologist Alan Guth of the Massachusetts Institute of Technology explained, maybe the assertions of quantum mechanics also manifest themselves in this way, as they describe the cosmos in possibilities. Where should there still be room for certainties? Guth spoke of dark energy, which composes sixty percent of all of the energy in the universe, but "We don't know what it is."

They know more, these scientists, than their predecessors ever knew. But in the end, when they add their knowledge together, they are quite Socratic in their realization that they know that they know nothing. Today, when every day witnesses a new discovery, the keys to the primary causes and the fundamental laws of the universe are still missing. The maple tree, under which the scientists speculated in green Connecticut, is little more than a tree of limited knowledge. In this sense, the virtual cybersalon committed no faux pas as it spent a summer afternoon reconstituting itself in the real shadow of the maple tree in order to consider who we are, how we live, and, above all, how we will live in the future. Was this temporary change of state, after all, also a sign of what the roundtable revealed as the apparent instability of our revolutionary times? Who knows.

[translation: Christopher Williams]



John Horgan, Lynn Margulis, David Berreby remember Francisco Varela


"Everything is up for grabs. Everything will change. There is a magnificent sweep of intellectual landscape right in front of us."


One aspect of our culture that is no longer open to question is that the most signigicant developments in the sciences today (i.e. the developments that affect the lives of everybody on the planet) are about, informed by, or implemented through advances in software and computation.

This Edge event is an opportunity for people in various fields such as computer science, cosmology, cognition, evolutionary biology, etc., to begin talking to each other, to become aware of interesting and important work in other fields.

— JB

[See streaming videos of the participants. Transcripts to follow]

[Click here for previous EDGE events at Eastover Farm]

Requires Real Player — Free Download


David Gelernter on software (5:56)
DSL+ | Modem

Everything is up for grabs. Everything will change. There is a magnificent sweep of intellectual landscape right in front of us.

DAVID GELERNTER, Professor of Computer Science at Yale University and adjunct fellow at the Manhattan Institute, is a leading figure in the third generation of Artificial Intelligence scientists, known for his programming language called "Linda" that made it possible to link computers together to work on a single problem. He has since emerged as one of the seminal thinkers in the field known as parallel, or distributed, computing. He is the author of Mirror Worlds, The Muse in the Machine, 1939: The Lost World of the Fair, and Drawing Life: Surviving the Unabomber.

Brian Greene on using software
DSL+ | Modem

Progress in science proceeds in fits and starts. Some periods are filled with great breakthroughs; at other times researchers experience dry spells. Scientists put forward results, both theoretical and experimental. The results are debated by the community, sometimes they are discarded, sometimes they are modified, and sometimes they provide inspirational jumping-off points for new and more accurate ways of understanding the physical universe. In other words, science proceeds along a zig-zag path toward what we hope will be ultimate truth, a path that began with humanity's earliest attempts to fathom the cosmos and whose end we cannot predict. Whether string theory is an incidental rest stop along this path, a landmark turning point, or in fact the final destination we do not know. But the last two decades of research by hundreds of dedicated physicists and mathematicians from numerous countries have given us well-founded hope that we are on the right and possibly final track.

It is a telling testament of the rich and far-reaching nature of string theory that even our present level of understanding has allowed us to gain striking new insights into the workings of the universe. A central thread in what follows will be those developments that carry forward the revolution in our understanding of space and time initiated by Einstein's special and general theories of relativity. We will see that if string theory is correct, the fabric of our universe has properties that would likely have dazzled even Einstein.

BRIAN GREENE received his undergraduate degree from Harvard University and his doctorate from Oxford University, where he was a Rhodes scholar. He joined the physics faculty of Cornell University in 1990, was appointed to a full professorship in 1995, and in 1996 joined Columbia University where he is currently a professor of physics and of mathematics. He has lectured at both a general and a technical level in more than twenty countries and is widely regarded for a number of groundbreaking discoveries in superstring theory.

He is the author of The Elegant Universe: Superstrings, Hidden Dimensions, and the Quest for an Ultimate Theory ("In the great tradition of physicists writing for the masses, The Elegant Universe sets a standard that will be hard to beat." — New York Times Book Review).

Alan Guth on the state of cosmology (7:08)
DSL+ | Modem

One of the most amazing features of the inflationary-universe model is that it allows the universe to evolve from something that's initially incredibly small. Something on the order of twenty pounds of matter is all it seems to take to start off a universe. This is very different from the standard cosmological model. Before inflation, the standard model required you to assume that all the matter that exists now was already there at the beginning, and the model just described how the universe expanded and how the matter cooled and evolved. Given the inflationary model, it becomes very tempting to ask whether, in principle, it's possible to create a universe in the laboratory — or a universe in your backyard — by man-made processes.

ALAN GUTH, father of the inflationary theory of the Universe, is Victor F. Weisskopf Professor of Physics at MIT; author of The Inflationary Universe: The Quest for a New Theory of Cosmic Origins.

Marc D. Hauser on biological computation
DSL+ | Modem

I've been looking at different domains of knowledge, and asking the question, what pressures would have shaped ways of thinking in different organisms, trying to get away from the common approach to thinking about humans, human evolution, and animal cognition, which is to say, humans are unique, and that's the end of the story. All animals are unique, and the really interesting question is how their minds have been shaped by the particular social and ecological problems that the environment throws at them. For example, instead of stating that humans are unique, we ask the question: what pressures did humans confront, that no other animal confronted, that created selection for the evolution of language? Why can other organisms make do with the kind of communication system they have? Why did we evolve color vision? Why did other organisms not evolve color vision? Why do certain animals have the capacity to navigate in space with a simple mechanism like dead reckoning, and other animals need other kinds of machinery in order to get by in space?


MARC D. HAUSER is an evolutionary psychologist, and a professor at Harvard University where he is a fellow of the Mind, Brain, and Behavior Program. He is a professor in the departments of Anthropology and Psychology, as well as the Program in Neurosciences. He is the author of The Evolution of Communication, and Wild Minds: What Animals Really Think.

Jaron Lanier on the bits in the universe: (3:47)
DSL+ | Modem

For the last twenty years, I have found myself on the inside of a revolution, but on the outside of its resplendent dogma. Now that the revolution has not only hit the mainstream, but bludgeoned it into submission by taking over the economy, it's probably time for me to cry out my dissent more loudly than I have before.

JARON LANIER, a computer scientist and musician, is a pioneer of virtual reality, and founder and former CEO of VPL. He is currently the lead scientist for the National Tele-Immersion Initiative. He is author of the forthcoming Information is Alienated Experience.

Jordan Pollack on the economics of software (5:24)
DSL+ | Modem
I work on developing an understanding of biological complexity and how we can create it, because the limits of software engineering have been clear now for two decades. The biggest programs anyone can build are about ten million lines of code. A real biological object — a creature, an ecosystem, a brain — is something with the same complexity as ten billion lines of code. And how do we get there?

JORDAN POLLACK is a computer science and complex systems professor at Brandeis University. His laboratory's work on AI, Artificial Life, Neural Networks, Evolution, Dynamical Systems, Games, Robotics, Machine Learning, and Educational Technology has been reported on by the New York Times, Time, Science, NPR, Slashdot.org and many other media sources worldwide. Jordan is a prolific inventor, advises several startup companies and incubators, and in his spare time runs Thin Mail, an Internet-based service designed to increase the usefulness of wireless email.

Lee Smolin on holographic principle and computation
DSL+ | Modem

The point is that quantum information involves non-classical features such as superposition and entanglement, that make it very different from Shannon information. Thus, even given the equivalence between quantum evolution and quantum computation, it remains the case that the set of quantum systems that (in some basis) compute a classical algorithm is of measure zero. The fact that there are claims which are trivially false when referring to classical information and almost trivially true when referring to quantum information should make us wary about using the word information in a way that confuses the two.


LEE SMOLIN is a theoretical physicist, professor of physics and member of the Center for Gravitational Physics and Geometry at Pennsylvania State University. He is the author of The Life of The Cosmos and Three Roads to Quantum Gravity.



In Jaron Lanier's Postscript, which he wrote after he and I spoke in succession at a technology event, Lanier points out that we agree on many things, which indeed we do. So I'll start in that vein as well. First of all, I share the world's esteem for Jaron's pioneering work in virtual reality, including his innovative contemporary work on the "Teleimmersion" initiative, and, of course, in coining the term "virtual reality." I probably have higher regard for virtual reality than Jaron does, but that comes back to our distinct views of the future.

As an aside, I'm not entirely happy with the phrase "virtual reality," as it implies that it's not a real place to be. I consider a telephone conversation to be a form of being together in auditory virtual reality, yet we regard such conversations as real. I have a similar problem with the term "artificial intelligence."

And as a pioneer in what I believe will become a transforming concept in human communication, I know that Jaron shares with me an underlying enthusiasm for the contributions that computer and related communications technologies can have on the quality of life. That is the other half of his manifesto. I appreciate Jaron pointing this out. It's not entirely clear sometimes, for example, that Bill Joy has another half to his manifesto.

And I agree with at least one of Jaron's six objections to what he calls "Cybernetic Totalism." In objection #3, he takes issue with those who maintain "that subjective experience either doesn't exist, or is unimportant because it is some sort of ambient or peripheral effect." The reason that some people feel this way is precisely because subjective experience cannot be scientifically measured. Although we can measure certain correlates of subjective experience (e.g., correlating certain patterns of objectively measurable neurological activity with objectively verifiable reports of certain subjective experiences), we cannot penetrate to the core of subjective experience through objective measurement. It's the difference between the concept of "objectivity," which is the basis of science, and "subjectivity," which is essentially a synonym for consciousness. There is no device or system we can postulate that could definitively detect subjectivity associated with an entity, at least no such device that does not have philosophical assumptions built into it.

So I accept that Jaron Lanier has subjective experiences, and I can even imagine (and empathize with!) his feelings of frustration at the dictums of "cybernetic totalists" such as myself (not that I accept this characterization) as he wrote his half manifesto. Like Jaron, I even accept the subjective experience of those who maintain that there is no such thing as subjective experience. Of course, most people do accept that other people are conscious, but this shared human assumption breaks down as we go outside of human experience, e.g., the debates regarding animal rights (which have everything to do with whether animals are conscious or just quasi-machines that operate by "instinct"), as well as the debates regarding the notion that a nonbiological entity could conceivably be conscious.

Consider that we are unable to truly experience the subjective experiences of others. We hear their reports about their experiences, and we may even feel empathy in response to the behavior that results from their internal states. We are, however, only exposed to the behavior of others and, therefore, can only imagine their subjective experience. So one can construct a perfectly consistent, and scientific, worldview that omits the existence of consciousness. And because there is fundamentally no scientific way to measure the consciousness or subjective experience of another entity, some observers come to the conclusion that it's just an illusion.

My own view is that precisely because we cannot resolve issues of consciousness entirely through objective measurement and analysis, i.e., science, there is a critical role for philosophy, which we sometimes call religion. I would agree with Jaron that consciousness is the most important ontological question. After all, if we truly imagine a world in which there is no subjective experience, i.e., a world in which there is swirling stuff but no conscious entity to experience it, then that world may as well not exist. In some philosophical traditions (i.e., some interpretations of quantum mechanics, some schools of Buddhist thought), that is exactly how such a world is regarded.

I like Jaron's term "circle of empathy," which makes it clear that the circle of reality that I consider to be "me" is not clear-cut. One's circle of empathy is certainly not simply our body, as we have limited identification with, say, our toes, and even less with the contents of our large intestines. Even with regard to our brains, we are aware of only a small portion of what goes on in our brains, and often consider thoughts and dreams that suddenly intrude on our awareness to have come from some foreign place. We do often include loved ones who may be physically disparate within our circle of empathy. Thus the aspect of the Universe that I consider to be "myself" is not at all clear-cut, and some philosophies do emphasize the extent to which there is inherently no such boundary.

Having stated a few ways in which Jaron and I agree with each other's perspective, I will say that his "Half of a Manifesto" mischaracterizes many of the views he objects to. Certainly that's true with regard to his characterization of my own thesis. In particular, he appears to have only picked up on half of what I said in my talk, because the other half addresses at least some of the issues he raises. Moreover, many of Jaron's arguments aren't really arguments at all, but an amalgamation of mentally filed anecdotes and engineering frustrations. The fact that Time Magazine got a prediction wrong in 1966, as Jaron reports, is not a compelling argument that all discussions of trends are misguided. Nor is the fact that dinosaurs did not continue to increase in size indefinitely a demonstration that every trend quickly dies out. The size of dinosaurs is irrelevant; a larger size may or may not impart an advantage, whereas an increase in the price-performance and/or bandwidth of a technology clearly does impart an advantage. It would be hard to make the case that a technology with a lower price-performance had inherent advantages, whereas it is certainly possible that a smaller and therefore more agile animal may have advantages.

Jaron Lanier has what my colleague Lucas Hendrich calls the "engineer's pessimism." Often an engineer or scientist who is so immersed in the difficulties of a contemporary challenge fails to appreciate the ultimate long-term implications of their own work, and, in particular, the larger field of work that they operate in. Consider the biochemists in 1985 who were skeptical of the announcement of the goal of sequencing the entire genome in a mere 15 years. These scientists had just spent an entire year sequencing a mere one ten-thousandth of the genome, so even with reasonable anticipated advances, it seemed to them like it would be hundreds of years, if not longer, before the entire genome could be sequenced. Or consider the skepticism expressed in the mid 1980s that the Internet would ever be a significant phenomenon, given that it included only tens of thousands of nodes. The fact that the number of nodes was doubling every year and there were, therefore, likely to be tens of millions of nodes ten years later was not appreciated by those who struggled with "state of the art" technology in 1985, which permitted adding only a few thousand nodes throughout the world in a year.
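The Internet example above is simple compound doubling, and it is easy to check. A minimal sketch, using the text's own rough figures (the starting node count and function name here are illustrative assumptions, not historical data):

```python
# Back-of-the-envelope projection for a quantity that doubles on a fixed schedule.
def project(initial, years, doubling_time=1.0):
    """Return the size after `years` of growth with the given doubling time."""
    return initial * 2 ** (years / doubling_time)

# Illustrative: ~tens of thousands of Internet nodes in 1985, doubling yearly.
nodes_1985 = 20_000
nodes_1995 = project(nodes_1985, years=10)  # 20,000 * 2**10 = 20,480,000
print(f"{nodes_1995:,.0f}")                 # "tens of millions" ten years later
```

Ten doublings multiply the starting count by 1,024, which is why a network that grows by only a few thousand nodes in its current year can nonetheless reach tens of millions of nodes a decade later.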

In his "Postscript regarding Ray Kurzweil," Jaron asks the rhetorical question "about Ray's exponential theory of history... [is he] stacking the deck by choosing points that fit the curves he wants to find?" I can assure Jaron that the more points we add to the dozens of exponential graphs I presented to him and the rest of the audience in Atlanta, the clearer the exponential trends become. Does he really imagine that there is some circa 1901 calculating device that has better price-performance than our circa 2001 devices? Or even a 1995 device that is competitive with a 2001 device? In fact what we do see as more points (representing specific devices) are collected is a cascade of "S-curves," in which each S-curve represents some specific technological paradigm. Each S-curve (which looks like an "S" in which the top portion is stretched out to the right) starts out with gradual and then extreme exponential growth, subsequently leveling off as the potential of that paradigm is exhausted. But what turns each S-curve into an ongoing exponential is the shift to another paradigm, and thus to another S-curve, i.e., innovation. The pressure to explore and discover a new paradigm increases as the limits of each current paradigm become apparent.

When it became impossible to shrink vacuum tubes any further and maintain the requisite vacuum, transistors came along, which are not merely small vacuum tubes. We've been through five paradigms in computing in this past century (electromechanical calculators, relay based computers, vacuum-tube-based computing, discrete transistors, and then integrated circuits, on which Moore's law is based). As the limits of flat integrated circuits are now within sight (one to one and a half decades away), there are already dozens of projects underway to pioneer the sixth paradigm of computing, which is computing in three dimensions, several of which have demonstrated small-scale working systems.
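The cascade-of-S-curves idea can be sketched in a few lines: model each paradigm as a logistic (S-shaped) curve with a higher ceiling than its predecessor, and take overall capability as the best paradigm available at any moment. All the constants below are illustrative assumptions, not fits to real hardware data:

```python
import math

def logistic(t, ceiling, midpoint, steepness=1.0):
    """Capability of one paradigm at time t: an S-curve saturating at `ceiling`."""
    return ceiling / (1.0 + math.exp(-steepness * (t - midpoint)))

def capability(t, paradigms):
    """Overall capability: the best available paradigm at time t."""
    return max(logistic(t, c, m) for c, m in paradigms)

# Five paradigms; each ceiling is 100x the last, midpoints 20 "years" apart,
# loosely echoing the five computing paradigms named in the text.
paradigms = [(100.0 ** k, 20.0 * k) for k in range(1, 6)]

for t in range(0, 101, 20):
    print(t, capability(t, paradigms))
```

Each individual curve flattens out, yet the envelope of the cascade grows by roughly a factor of 100 per 20-year paradigm, i.e., the succession of saturating S-curves yields an ongoing exponential, which is the point of the argument.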

It is specifically the processing and movement of information that is growing exponentially. So one reason that an area such as transportation is resting at the top of an S-curve is that many if not most of the purposes of transportation have been satisfied by exponentially growing communication technologies. My own organization has colleagues in different parts of the country, and most of our needs that in times past would have required a person or a package to be transported can be met through the increasingly viable virtual meetings made possible by a panoply of communication technologies, some of which Jaron is himself working to advance. Having said that, I do believe we will see new paradigms in transportation. However, with increasingly realistic, high resolution full-immersion forms of virtual reality continuing to emerge, our needs to be together will increasingly be met through computation and communication.

Jaron's concept of "lock-in" is not the primary obstacle to advancing transportation. If the existence of a complex support system necessarily caused lock-in, then why don't we see lock-in preventing ongoing expansion of every aspect of the Internet? After all, the Internet certainly requires an enormous and complex infrastructure. The primary reason that transportation is under little pressure for a paradigm-shift is that the underlying need for transportation has been increasingly met through communication technologies that are expanding exponentially.

One of Jaron's primary themes is to distinguish between quantitative and qualitative trends, saying in essence that perhaps certain brute force capabilities such as memory capacity, processor speed, and communications bandwidths are expanding exponentially, but the qualitative aspects are not. And towards this end, Jaron complains of a multiplicity of software frustrations (many, incidentally, having to do with Windows) that plague both users and, in particular, software developers like himself. This is the hardware versus software challenge, and it is an important one. Jaron does not mention at all my primary thesis having to do with the software of intelligence. Jaron characterizes my position and that of other so-called "cybernetic totalists" to be that we'll just figure it out in some unspecified way, what he refers to as a software "Deus ex Machina." I have a specific and detailed scenario to achieve the software of intelligence, which concerns the reverse engineering of the human brain, an undertaking that is much further along than most people realize. I'll return to this in a moment, but first I would like to address some other basic misconceptions about the so-called lack of progress in software.

Jaron calls software inherently "unwieldy" and "brittle" and writes at great length on a variety of frustrations that he encounters in the world of software. He writes that "getting computers to perform specific tasks of significant complexity in a reliable but modifiable way, without crashes or security breaches, is essentially impossible." I certainly don't want to put myself in the position of defending all software (any more than I would care to characterize all people as wonderful). But it's not the case that complex software is necessarily brittle and prone to catastrophic breakdown. There are many examples of complex mission-critical software that operates with very few, if any, breakdowns, for example the sophisticated software that controls an increasing fraction of airplane landings, or software that monitors patients in critical care facilities. I am not aware of any airplane crashes that have been caused by automated landing software; the same, however, cannot be said for human reliability.

Jaron says that "Computer user interfaces tend to respond more slowly to user interface events, such as a keypress, than they did fifteen years ago... What's gone wrong?" To this I would invite Jaron to try using an old computer today. Even if we put aside the difficulty of setting one up today (which is a different issue), Jaron has forgotten just how unresponsive, unwieldy, and limited they were. Try getting some real work done to today's standards with a fifteen-year-old personal computer. It's simply not true to say that the old software was better in any qualitative or quantitative sense. If you believe that, then go use them.

Although it's always possible to find poor quality design, the primary reason for user interface response delays is user demand for more sophisticated functionality. If users were willing to freeze the functionality of their software, then the ongoing exponential growth of computing speed and memory would quickly eliminate software response delays. But they're not. So functionality always stays on the edge of what's feasible (personally, I'm waiting for my Teleimmersion upgrade to my videoconferencing software).

This romancing of the software of years or decades ago is comparable to people's idyllic view of life hundreds of years ago, when we were unencumbered by the frustrations of machines. Life was unencumbered, perhaps, but it was also short (life expectancy was less than half of today's), labor-intensive (just preparing the evening meal took many hours of hard work), poverty-filled, and disease- and disaster-prone.

With regard to the price-performance of software, the comparisons in virtually every area are dramatic. For example, in 1985 $5,000 bought you a speech recognition software package that provided a 1,000 word vocabulary, no continuous speech capability, required three hours of training, and had relatively poor accuracy. Today, for only $50, you can purchase a speech recognition software package with a 100,000 word vocabulary, continuous speech, that requires only five minutes of training, has dramatically improved accuracy, natural language understanding ability (for editing commands and other purposes), and many other features.

How about software development itself? I've been developing software myself for forty years, so I have some perspective on this. The growth in productivity of software development clearly has a lower exponent, but it is nonetheless exponential. The development tools, class libraries, and support systems available today are dramatically more effective than those of decades ago. Today I have small teams of just three or four people who achieve in a few months objectives comparable to what a team of a dozen or more people could accomplish in a year or more 25 years ago. I estimate the doubling time of software productivity to be approximately six years, which is slower than the doubling time for processor price-performance, currently approximately one year. However, software productivity is nonetheless growing exponentially.
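The doubling-time figures cited here lend themselves to a quick back-of-envelope check. The sketch below (a toy calculation, using only the stated estimates of six years for software productivity and one year for processor price-performance, over the 25-year window mentioned in the paragraph) computes the implied growth factors; roughly an 18-fold productivity gain, broadly consistent with the small-team comparison above.

```python
# Back-of-envelope sketch of the doubling times cited above:
# software productivity doubling every ~6 years vs. processor
# price-performance doubling every ~1 year. The figures are
# Kurzweil's stated estimates, not measurements.

def growth_factor(years, doubling_time):
    """Multiplicative growth after `years`, given a fixed doubling time."""
    return 2 ** (years / doubling_time)

software = growth_factor(25, 6.0)   # roughly 18x over 25 years
hardware = growth_factor(25, 1.0)   # 2^25, about 33 million x

print(f"software productivity gain over 25 years: {software:.0f}x")
print(f"hardware price-performance gain over 25 years: {hardware:.2e}x")
```

The point of the exercise is the gap between the two exponents: both curves are exponential, but a six-year doubling time compounds far more slowly than a one-year one.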

The most important point to be made here is that there is a specific game plan for achieving human-level intelligence in a machine. I agree that achieving the requisite hardware capacity is a necessary but not sufficient condition. As I mentioned above, we have a resource for understanding how to program the methods of human intelligence given hardware that is up to the task, and that resource is the human brain itself.

Here again, if you speak to some of the neurobiologists who are diligently creating detailed mathematical models of the hundreds of types of neurons found in the brain, or who are modeling the patterns of connections found in different regions, you will in at least a few cases encounter the same sort of engineer's / scientist's myopia that results from being immersed in the specifics of one aspect of a large challenge. However, having tracked the progress being made in accumulating all of the (yes, exponentially increasing) knowledge about the human brain and its algorithms, I believe that it is a conservative scenario to expect that within thirty years we will have detailed models of the several hundred information processing organs we collectively call the human brain.

For example, Lloyd Watts has successfully synthesized (that is, assembled and integrated) the detailed models of neurons and interconnections in more than a dozen regions of the brain having to do with auditory processing. He has a detailed model of the information transformations that take place in these regions, and of how this information is encoded, and has implemented these models in software. The performance of Watts' software matches the intricacies that have been revealed in subtle experiments on human hearing and auditory discrimination. Most interestingly, using Watts' models as the front end in speech recognition has demonstrated the ability to pick out one speaker against a backdrop of background sounds, an impressive feat that humans are capable of, and that, up until Watts' work, had not been feasible in automated speech recognition systems.

The brain is not one big neural net. It consists of hundreds of regions, each of which is organized differently, with different types of neurons, different types of signaling, and different patterns of interconnection. By and large, its algorithms are not the sequential, logical methods commonly used in digital computing. The brain tends to use self-organizing, chaotic, holographic (i.e., information stored not in one place but distributed throughout a region), massively parallel, and digitally controlled analog methods. However, we have demonstrated in a wide range of projects the ability to understand these methods and to extract them from the rapidly escalating knowledge of the brain and its organization.

The speed, cost effectiveness, and bandwidth of human brain scanning is also growing exponentially, doubling every year. Our knowledge of human neuron models is also rapidly growing. The size of neuron clusters that we have successfully recreated in terms of functional equivalence is also scaling up exponentially.

I am not saying that this process of reverse engineering the human brain is the only route to "strong" AI. It is, however, a critical source of knowledge that is feeding into our overall research activities where these methods are integrated with other approaches.

Also, it is not the case that the complexity of software, and therefore its "brittleness" needs to scale up dramatically in order to emulate the human brain, even when we get to emulating its full functionality. My own area of technical interest is pattern recognition, and the methods that we typically use are self-organizing methods such as neural nets, Markov models, and genetic algorithms. When set up in the right way, these methods can often display subtle and complex behaviors that are not predictable by the designer putting them into practice. I'm not saying that such self-organizing methods are an easy short cut to creating complex and intelligent behavior, but they do represent one important way in which the complexity of a system can be increased without the brittleness of explicitly programmed logical systems.

Consider that the brain itself is created from a genome with only 23 million bytes of useful information (that's what's left of the 800-million-byte genome when you eliminate all the redundancies, e.g., the sequence "Alu," which is repeated hundreds of thousands of times). 23 million bytes is not that much information (it's less than Microsoft Word). How is it, then, that the human brain with its 100 trillion connections can result from a genome that is so small? I have estimated that just the interconnection data alone needed to characterize the human brain is a million times greater than the information in the genome.

The answer is that the genome specifies a set of processes, each of which utilizes chaotic methods (i.e., initial randomness, then self-organization) to increase the amount of information represented. It is known, for example, that the wiring of the interconnections follows a plan that includes a great deal of randomness. As the individual person encounters her environment, the connections and the neurotransmitter level patterns self-organize to better represent the world, but the initial design is specified by a program that is not extreme in its complexity.
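The compression argument of the last two paragraphs can be made concrete with a toy sketch. The sizes below are purely illustrative, not biological data: a tiny "genome" (a seed plus a wiring rule, a few lines of code) deterministically specifies a wiring diagram whose explicit listing is vastly larger than the generator itself, which is the sense in which initial randomness plus a compact plan can stand in for an enormous interconnection table.

```python
import random

# Toy illustration (hypothetical sizes): a small "genome" -- a seed
# plus a wiring rule -- deterministically generates a connection
# pattern far larger than the program that specifies it.

def grow_wiring(seed, n_neurons=10_000, fanout=100):
    """Each of n_neurons units connects to `fanout` pseudo-randomly
    chosen targets; the seed plays the role of the compact plan."""
    rng = random.Random(seed)  # initial randomness, per the argument above
    return [
        [rng.randrange(n_neurons) for _ in range(fanout)]
        for _ in range(n_neurons)
    ]

wiring = grow_wiring(seed=42)
n_connections = sum(len(targets) for targets in wiring)
print(n_connections)  # a million connections from a few-line "genome"
```

Running the same seed twice yields the same wiring, so the full million-entry table carries no more design information than the seed and the rule; in the biological case, of course, the subsequent self-organization against the environment adds information the genome never contained.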

It is not my position that we will program human intelligence link by link as in some huge CYC-like expert system. Nor is it the case that we will simply set up a huge genetic (i.e., evolutionary) algorithm and have intelligence at human levels automatically evolve itself. Jaron worries correctly that any such approach would inevitably get stuck in some local minimum. He also interestingly points out how biological evolution "missed the wheel." Actually, that's not entirely accurate. There are small wheel-like structures at the protein level, although it's true that their primary function is not vehicle transportation. Wheels are not very useful, of course, without roads. However, biological evolution did create a species that created wheels (and roads), so it did succeed in creating a lot of wheels, albeit indirectly (but there's nothing wrong with indirect methods; we use them in engineering all the time).

With regard to creating human levels of intelligence in our machines, we will integrate the insights and models gained from reverse engineering the human brain, which will involve hundreds of regions, each with different methods, many of which do involve self-organizing paradigms at different levels. The feasibility of this reverse engineering project and of implementing the revealed methods has already been clearly demonstrated. I don't have room in this response to describe the methodology and status of brain reverse engineering in detail, but I will point out that the concept is not necessarily limited to neuromorphic modeling of each neuron. We can model substantial neural clusters by implementing parallel algorithms that are functionally equivalent. This often results in substantially reduced computational requirements, which has been shown by Lloyd Watts and Carver Mead.

Jaron writes that "if there ever was a complex, chaotic phenomenon, we are it." I agree with that, but don't see this as an obstacle. My own area of interest is chaotic computing, which is how we do pattern recognition, which in turn is the heart of human intelligence. Chaos is part of the process of pattern recognition, it drives the process, and there is no reason that we cannot harness these methods in our machines just as they are utilized in our brains.

Jaron writes that "evolution has evolved, introducing sex, for instance, but evolution has never found a way to be any speed but very slow." But he is ignoring the essential nature of an evolutionary process, which is that it accelerates because each stage introduces more powerful methods for creating the next stage. Biological evolution started out extremely slow, and the first halting steps took billions of years. The design of the principal body plans was faster, requiring only tens of millions of years. The process of biological evolution has accelerated, with each stage faster than the stage before it. Later key steps, such as the emergence of Homo Sapiens, took only hundreds of thousands of years. Human technology, which is evolution continued indirectly (created by a species created by evolution), continued this acceleration. The first steps took tens of thousands of years, outpacing biological evolution, and has accelerated from there. The World Wide Web emerged in only a few years, distinctly faster than, say, the Cambrian explosion.

Jaron complains that "surprisingly few of the most essential algorithms have overheads that scale at a merely linear rate." Without taking up several pages to analyze this statement in detail, I will point out that the brain does what it does in its own real-time, using interneuronal connections (where most of our thinking takes place) that operate at least ten million times slower than contemporary electronic circuits. We can observe the brain's massively parallel methods in detail, ultimately scan and understand all of its tens of trillions of connections, and replicate its methods. As I've mentioned, we're well down that path.

To correct a few of Jaron's statements regarding (my) time frames, it's not my position that the "singularity" will "arrive a quarter of the way into the new century" or that a "new criticality" will be "achieved in about the year 2020." Just so that the record is straight, my view is that we will have the requisite hardware capability to emulate the human brain for $1,000 of computation (which won't be organized in the rectangular forms we see today, such as notebooks and palmtops, but rather embedded in our environment) by 2020. The software will take longer, to around 2030. The "singularity" has divergent definitions, but for our purposes here we can consider this to be a time when nonbiological forms of intelligence dominate purely biological forms, albeit being derivative of them. This takes us beyond 2030, to perhaps 2040 or 2050.

Jaron calls this an "immanent doom" and "an eschatological cataclysm," as if it were clear on its face that such a development were undesirable. I view these developments as simply the continuation of the evolutionary process and neither utopian nor dystopian. It's true, on the one hand, that nanotechnology and strong AI, and particularly the two together, have the potential to solve age-old problems of poverty and human suffering, not to mention clean up the messes we're creating today with some of our more primitive technologies. On the other hand, there will be profound new problems and dangers that will emerge as well. I have always considered technology to be a double-edged sword. It amplifies both our creative and destructive natures, and we don't have to look further than today to see that.

However, on balance, I view the progression of evolution as a good thing, indeed as a spiritual direction. What we see in evolution is a progression towards greater intelligence, greater creativity, greater beauty, greater subtlety (i.e., the emergence of entities with emotion such as the ability to love, therefore greater love). And "God" has been described as an ideal of an infinite level of these same attributes. Evolution, even in its exponential growth, never reaches infinite levels, but it's moving rapidly in that direction. So we could say that this evolutionary process is moving in a spiritual direction.

However, the story of the twenty-first century has not yet been written. So it's not my view that any particular story is inevitable, only that evolution, which has been inherently accelerating since the dawn of biological evolution, will continue its exponential pace.

Jaron writes that "the whole enterprise of Artificial Intelligence is based on an intellectual mistake." Until such time that computers at least match human intelligence in every dimension, it will always remain possible for skeptics to say the glass is half empty. Every new achievement of AI can be dismissed by pointing out yet other goals have not yet been accomplished. Indeed, this is the frustration of the AI practitioner, that once an AI goal is achieved, it is no longer considered AI and becomes just a useful technique. AI is inherently the set of problems we have not yet solved.

Yet machines are indeed growing in intelligence, and the range of tasks that machines can accomplish that previously required intelligent human attention is rapidly growing. There are hundreds of examples of narrow AI today (e.g., computers evaluating electrocardiograms and blood cell images, making medical diagnoses, guiding cruise missiles, making financial investment decisions, not to mention intelligently routing emails and cell phone connections), and the domains are becoming broader. Until such time that the entire range of human intellectual capability is emulated, it will always be possible to minimize what machines are capable of doing.

I will point out that once we have achieved complete models of human intelligence, machines will be capable of combining the flexible, subtle, human levels of pattern recognition with the natural advantages of machine intelligence. For example, machines can instantly share knowledge, whereas we don't have quick downloading ports on our interconnection and neurotransmitter concentration level patterns. Machines are much faster (as I mentioned, contemporary electronics is already ten million times faster than the electrochemical information processing used in our brains) and have much more prodigious and accurate memories.

Jaron refers to the annual "Turing test" that Loebner runs, and maintains that "we have caused the Turing test to be passed." These are misconceptions. I used to be on the prize committee of this contest until a political conflict caused most of the prize committee members to quit. Be that as it may, this contest is not really a Turing test, as we're not yet at that stage. It's a "narrow Turing test" which deals with domain specific dialogues, not unrestricted dialogue as Turing envisioned it. With regard to the Turing test as Turing described it, it is generally accepted that this has not yet happened.

Returning to Jaron's nice phrase "circle of empathy," he writes that his "personal choice is to not place computers inside the circle." But would he put neurons inside that circle? We've already shown that a neuron or even a substantial cluster of neurons can be emulated in great detail and accuracy by computers. So where on that slippery slope does Jaron find a stable footing? As Rodney Brooks says in his September 25, 2000 commentary on Jaron's "Half of a Manifesto," Jaron "turns out to be a closet Searlean." He just assumes that a computer cannot be as subtle, or as conscious, as the hundreds of neural regions we call the human brain. Like Searle, Jaron just assumes his conclusion. (For a more complete discussion of Searle and his theories, see my essay "Locked in his Chinese Room, Response to John Searle" in the forthcoming book Are We Spiritual Machines?: Ray Kurzweil vs. the Critics of Strong AI, Discovery Institute Press, 2001. This entire book will be posted on http://www.KurzweilAI.net).

Near the end of Jaron's essay, he worries about the "terrifying" possibility that through these technologies the rich may obtain certain opportunities that the rest of humankind does not have access to. This, of course, would be nothing new, but I would point out that because of the ongoing exponential growth of price-performance, all of these technologies quickly become so inexpensive as to become almost free. Look at the extraordinary amount of high-quality information available at no cost on the web today which did not exist at all just a few years ago. And if one wants to point out that only a small fraction of the world today has Web access, keep in mind that the explosion of the Web is still in its infancy.

At the end of his "Half of a Manifesto," Jaron writes that "the ideology of cybernetic totalist intellectuals [may] be amplified from novelty into a force that could cause suffering for millions of people." I don't believe this fearful conclusion follows from Jaron's half of an argument. The bottom line is that technology is power, and this power is rapidly increasing. Technology may result in suffering or liberation, and we've certainly seen both in the twentieth century. I would argue that we've seen more of the latter, but in any case neither Jaron nor I wish to see the amplification of destructiveness that we have witnessed in the past one hundred years. As I mentioned above, the story of the twenty-first century has not yet been written. I think Jaron would agree with me that our destiny is in our hands. However, I regard "our hands" as including our technology, which is properly part of the human-machine civilization.



From Lynn Margulis
Date: June 6, 2001


One of the voices at EDGE that "continues to launch intellectual skyrockets of stunning brilliance" (Arts & Letters Daily) has been rudely silenced too soon. The Chilean biologist and scholar Francisco Varela (1946-2001), ironically the younger member of the powerful team of inventive "autopoieticists" (Humberto Maturana and Francisco Varela), and a director of research in a CNRS neurophysiology lab at the École Polytechnique, died quietly in his Paris home on May 28. When I last saw him in Paris, perhaps four years ago, he had recovered from a relapse in his battle against hepatitis. Apparently this time he succumbed.

My attention was drawn to Francisco early in his career when, with Moran at the University of Chicago, he showed the detailed structure of a sensory cell in cockroaches. We have all experienced the rapid scurry of the common cockroach surprised in our pantry, but that this magic of the escape artist with whom we unwillingly share scraps of food is based on microtubules was a surprise. In their legs these omnivorous insect speedsters have "campaniform" (bell-shaped) sensory receptors, specialized single cells with myriad microtubules that impinge on a membrane like a drum skin. A tiny deflection of that membrane quickly amplifies a signal that reaches the nervous system; the nerve impulse, immediately taken by the cockroach as a warning, induces the flight response that makes these pesky insects so difficult to swat.

The campaniform microtubules revealed by Varela are the same tubulin protein-based structures that preoccupy Roger Penrose as noncomputable ingredients in consciousness (The Third Culture, p. 246). They are the same tubulin protein-based, 24-nanometer-diameter structures that preoccupy my colleagues and me in our work on swimming organelles (undulipodia) and agents of cell division (mitotic spindles). This early work by Varela brought him to the attention of the Department of Biology at Boston University, where I was a faculty member from 1966 until 1988. His charm, linguistic ability, and verbal acuity, and probably his attractive appearance and ease with curious students and other academics, led us to "feel him out" about joining the biology department. He unhesitatingly refused our offer, probably before it even became formal.

Francisco, in all his activities, whether computational, immunological, philosophical, or Buddhist-religious, was concerned with identity, the nature of self. Perhaps because he came from a remote South American country dismissed or ignored by Parisians and New Yorkers, perhaps because he spoke more than four languages fluently (Spanish, French, Italian, English, American, and German) and recognized immediately the effects of national behaviors on native speakers, his sense of self was always activated and activatable. He was fascinated by autoimmune diseases such as AIDS and scleroderma because they lie "outside the paradigm of immunology. There is nothing to vaccinate against, there's no bacteria coming from the outside. It's something the system does to itself." To Francisco the self was never a platonic unity; it was always a complex of interacting, incessantly behaving entities that confirm its selfhood by many internal and external activities and responses.

The self and the identity of self became one of the tenets of Varela's best-known work: that with Maturana on autopoiesis. The Chilean biologist-philosophers have been criticized for their development of autopoietic thinking, but often those who have been critical do not realize that the autopoieticists merely make explicit concepts that all of us use all the time. Autopoietic entities, whether minimal, such as cells, or very large, such as the Earth's biosphere, maintain their selfhood by chemical-metabolic and communicative means that always require expenditure of energy, transfer of information, and active exchange of parts for system maintenance. In his analysis of what we mean by life as an autopoietic and cognitive entity, Varela continued and expanded Maturana's global dialogue, which continues to grow. The breadth of eager conversation will now be sadly impeded by Francisco's permanent absence. EDGE's cacophony has lost a voice.

From: John Horgan
Date: June 6, 2001

I was sorry but not surprised to learn that Francisco Varela died in May. While doing research for my mysticism project, I interviewed Varela in December 1999 in Cambridge, where he was planning the next in a series of meetings on Buddhism and biology involving the Dalai Lama.

Varela was pale and thin; he spoke in a hoarse whisper, and he coughed frequently. He told me that he had been diagnosed with liver cancer five years earlier. "I've really been struggling with death for about five years," he said. Like some of the scientists whose views you present in your Edge posting, I found Varela both fascinating and frustrating. I had the sense that he was pointing me toward deep metaphysical truths, but they always seemed to slip from my grasp. Perhaps that's just the nature of the ideas he trafficked in, though.

We talked quite a bit about death. Much of Buddhist doctrine, Varela said, is perfectly compatible with modern, materialistic science; the glaring exception is the notion that mind can exist independently of and is somehow more fundamental than matter. Varela realized that most of his fellow neuroscientists would find this concept preposterous, but he took it quite seriously.

According to Buddhism, he explained, death represents just the transformation of consciousness, not its end; we pass through death and are eventually reborn, reincarnated into another body. Our individual, mortal minds are just transient waves "in this enormous field, this big mind, this open consciousness," Varela said. Various Buddhist sages claim that they remember their previous lives and deaths and the so-called bardo state that immediately follows death.

Buddhist adepts have also reported experiencing "luminous mind," the eternal, immaterial mind from which all individual minds emerge, while still alive. These reports "make me feel very, very confident that luminous mind is a tangible reality," Varela said. "It is not pie in the sky. Of course I don't know what's going to happen when I die," he continued. "I have to take their evidence. But why not? They are reliable witnesses."

Varela also revealed that several years earlier he had glimpsed what he believed to be luminous mind, when he woke up in the middle of a liver transplant operation. "That experience was so strong," Varela said. "It removed any doubts that consciousness is really the most intrinsic part of being." The episode convinced him that death "is not this big leap. It is more like...almost a feeling of coming back, coming back home."

Varela told me this story in a very calm, matter-of-fact way. His equanimity in the face of his own death seemed quite genuine. I hope he maintained that fearlessness to the end.

From David Berreby
Date: June 7, 2001

I'm very sad to hear this. I spent an afternoon with Varela a few years back. He was stimulating and smart; also relaxed, amused, patient and kind. Like many thinkers who perceive that the world is Heraclitean flux, he radiated constancy and strength, emotional as well as cognitive. I sensed that this was, spiritually and intellectually, one of the least confused people I'd ever met.



John Brockman, Editor and Publisher

Copyright © 2001 by Edge Foundation, Inc.
All Rights Reserved.