Chapter 21 "A DYNAMICAL PATTERN"

Chapter 21 "A DYNAMICAL PATTERN"

Christopher G. Langton [5.1.96]

W. Daniel Hillis: Chris Langton is the central guru of this artificial-life stuff. He's onto a good idea when he says that life seems to be at the transition between order and disorder, as he calls it: right at the edge of chaos, just at the temperature between where water is ice and where water is steam, that area where it's liquid — right in between. In many ways, we're poised on the edge between being too structured and too unstructured.

__________

CHRISTOPHER G. LANGTON is a computer scientist; visiting professor at the Santa Fe Institute; director of the institute's artificial-life program; editor of the journal Artificial Life.


[Christopher G. Langton:] What was Darwin's algorithm? The idea of evolution had been around for a long time. Spencer, Lamarck, and others had proposed evolution as a process, but they didn't have a mechanism. The problem was revealing the mechanism — the algorithm — that would account for the tremendous diversity observed in nature, in all its scope and detail. The essence of an algorithm is the notion of a finitely specified, step-by-step procedure to resolve a set of inputs into a set of outputs. Darwin's genius was to take the huge variety of species he saw on the planet and propose a simple, elegant mechanism, a step-by-step procedure, that could explain their existence.

Darwin distinguished two fundamental roles: there had to be (1) a producer of variety and (2) a filter of variety. In the first few chapters of Origin of Species, Darwin appeals to his contemporaries' common knowledge that nature produces variability in the offspring of organisms. Everybody knew about the breeding of plants and animals. It was clear that the variety depended upon by breeders was a product of natural processes going on in animal and plant reproduction. A human breeder could arrange specific matings to take advantage of this natural variability to enhance certain desired traits among his stock. One could say, "Those two sheep produce more wool than most of the others, so I'm going to mate those two and get sheep that produce more wool." Although the variety was produced naturally, a human breeder arranged for the matings. Since the "filter" of the variety was an artifact of human design, this process is termed artificial selection.

Having cast the situation in terms familiar to his contemporaries, Darwin devoted the remainder of his book to showing how "Nature Herself" could fulfill the role of the selective filter: the entity that arranged for certain matings to take place preferentially, based on the traits of the individuals involved. Since certain traits enhanced the likelihood of survival for the organisms that bore them, organisms carrying those traits would be more likely to survive and mate than organisms that didn't carry those traits.

It's possible to cast this process in terms of a step-by-step procedure called a genetic algorithm, which runs on a computer, allowing us to abstract the process of evolution from its material substrate. John Holland, of the University of Michigan, was the first to seriously pursue implementing Darwin's algorithm in computers in the early 1960s.
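
To make the loop concrete, here is a minimal sketch of a genetic algorithm in Python. Everything in it is an illustrative assumption rather than Holland's own code: the bit-counting fitness function, the population size, and the operator names are made up. The externally supplied fitness function is exactly what makes this artificial rather than natural selection.

```python
import random

# A toy genetic algorithm: variation comes from crossover and mutation,
# and an explicit, programmer-supplied fitness function acts as the filter.
GENOME_LENGTH, POP_SIZE, GENERATIONS, MUTATION_RATE = 32, 50, 100, 0.01

def fitness(genome):
    return sum(genome)                      # the external "filter": count the 1-bits

def crossover(a, b):
    cut = random.randrange(1, GENOME_LENGTH)
    return a[:cut] + b[cut:]                # producer of variety, part 1

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]              # producer of variety, part 2

population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # selection: fitter genomes are more likely to become parents
    parents = [max(random.sample(population, 3), key=fitness)
               for _ in range(POP_SIZE)]
    population = [mutate(crossover(random.choice(parents),
                                   random.choice(parents)))
                  for _ in range(POP_SIZE)]

print(max(fitness(g) for g in population), "of", GENOME_LENGTH)
```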

People have been working with genetic algorithms ever since, but these algorithms haven't been very useful tools for studying biological evolution. This isn't because there's anything wrong with the algorithms per se, but rather because they haven't been embedded in the proper biological context. As genetic algorithms have been traditionally implemented, they clearly involve artificial selection: some human being provides explicit, algorithmic criteria for which of the entities is to survive to mate and reproduce. The real world, however, makes use of natural selection, in which it is the "nature" of the interactions among all the organisms — both with one another and with the physical environment — that determines which entities will survive to mate and reproduce. It required a bit of experimentation to work out how to bring about natural selection within the artificial worlds we create in computers.

Over the last several years, however, we've learned how to do that, through the work of Danny Hillis, Tom Ray, and others. We don't specify the selective criteria externally. Rather, we let all the "organisms" interact with one another, in the context of a dynamic environment, and the selective criteria simply emerge naturally. To any one of these organisms, "nature," in the computer, is the collective dynamics of the rest of the computerized organisms there. When we allow this kind of interaction among the organisms — when we allow them to pose their own problems to one another — we see the emergence of a Nature with a capital "N" inside the computer, whose "nature" we can't predict as it evolves through time.

Typically, a collection of organisms in such artificial worlds will form an ecology, which will be stable for a while but will ultimately collapse. After a chaotic transition, another stable ecology will form, and the process continues. What defines fitness — and what applies the selective pressure — is this constantly changing collective activity of the set of organisms themselves. I argue that such a virtual ecosystem — what I have termed "artificial life" — constitutes a genuine "nature under glass," and that the study of these virtual natures within computers can be extremely useful for studying the nature of nature outside the computer.
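
A minimal sketch of the difference, with everything invented for illustration (it is nothing like Ray's Tierra in detail): each "organism" below is just a preference for one of a few finite resources, and it survives only if it actually obtains a unit of that resource. No fitness function is ever written down; which preferences thrive is decided by the crowding behavior of the population itself.

```python
import random
from collections import Counter

# Selection without an external fitness function: survival depends on how
# many other organisms are competing for the same finite resource.
N_RESOURCES, POP_SIZE, GENERATIONS, MUTATION_RATE = 5, 200, 50, 0.05
SUPPLY_PER_RESOURCE = POP_SIZE // N_RESOURCES   # a finite, shared environment

population = [random.randrange(N_RESOURCES) for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    demand = Counter(population)
    survivors = []
    for organism in population:
        # the chance of eating drops as more organisms want the same resource
        if random.random() < min(1.0, SUPPLY_PER_RESOURCE / demand[organism]):
            survivors.append(organism)
    if not survivors:                       # extinction; vanishingly unlikely here
        break
    # survivors reproduce (with occasional mutation) back up to POP_SIZE
    population = [(random.randrange(N_RESOURCES)
                   if random.random() < MUTATION_RATE
                   else random.choice(survivors))
                  for _ in range(POP_SIZE)]

print(Counter(population))   # the preferences spread themselves across the resources
```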

The notion of a human-created nature in a computer can be a little perplexing to people at first. Computers run algorithms, and algorithms seem to be in direct contrast to the natural world. The natural world tends to be wild, woolly, and unpredictable, while algorithms tend to be precise, predictable, and understandable. You know the outcome of an algorithm; you know what it's going to do, because you've programmed it to do just that. Because algorithms run on computers, you expect the "nature" of what goes on in computers to be as precise and predictable as algorithms appear to be. However, those of us who have a lot of experience with computers realize that even the simplest algorithms can produce completely novel and totally unpredictable behaviors. The world inside a computer can be every bit as wild and woolly as the world outside.

One can think of a computer in two ways: as something that runs a program and calculates a number, or as a kind of logical digital universe that behaves in many different ways. At the first artificial-life workshop, which I organized at Los Alamos National Laboratory in 1987, we asked ourselves, How are people going about modeling living things? How are we going about modeling evolution, and what problems do we run into? Once we saw the ways everyone was approaching these problems, we realized that there was a fundamental architecture underlying the most interesting models: they consisted of many simple things interacting with one another to do something collectively complex. By experimenting with this distributed kind of computational architecture, we created in our computers universes that were complex enough to support processes that, with respect to those universes, have to be considered to be alive. These processes behave in their universes the way living things behave in our universe.
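
A one-dimensional cellular automaton is about the smallest sketch of that architecture, and it also illustrates the earlier point about simple algorithms behaving unpredictably. The example below uses Wolfram's elementary rule 30 (chosen here purely for illustration; it is not one of the workshop models): every cell applies the same trivial rule to itself and its two neighbors, yet the collective pattern is complex and effectively impossible to anticipate without running it.

```python
# Elementary cellular automaton, rule 30: identical, simple, local rules
# produce a complex collective pattern.
RULE = 30
WIDTH, STEPS = 64, 32

def step(cells):
    """Update every cell from its own state and its two neighbors (periodic)."""
    new = []
    for i in range(len(cells)):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % len(cells)]
        index = (left << 2) | (center << 1) | right
        new.append((RULE >> index) & 1)
    return new

cells = [0] * WIDTH
cells[WIDTH // 2] = 1                      # a single "on" cell in the middle
for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```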

I don't see artificial intelligence and artificial life as two distinct enterprises in principle; however, they're quite different in practice. Both endeavors involve attempts to synthesize — in computers — natural processes that depend vitally on information processing. I find it hard to draw a dividing line between life and intelligence. Both AI and AL study systems that determine their own behavior in the context of the information processes inside them. AI researchers picked the most complex example in that set, human beings, and were initially encouraged — and misled — by the fact that it appeared to be easy to get computers to do things that human beings consider hard, like playing chess. They met with a lot of initial success at what turned out to be not very difficult problems. The problems that turned out to be hard were, ironically enough, those things that seem easy to human beings, like picking out a friend's face in a crowd, walking, or catching a baseball. By contrast, artificial-life researchers have decided to focus on the simplest examples of natural information processors, such as single cells, insects, and collections of simple organisms like ant colonies.

Our approach to the study of life and, ultimately, intelligence and consciousness is very bottom-up. Rather than trying to describe a phenomenon at its own level, we want to go down several levels to the mechanisms giving rise to it, and try to understand how the phenomenon emerges from those lower-level dynamics. For instance, fluid dynamics is reasonably well described explicitly by Navier-Stokes equations, but this is a high-level description imposed on the system from the outside and from the top down; the fluid itself does not compute Navier-Stokes equations. Rather, the fluid's behavior emerges out of interactions between all of the particles that make it up — for example, water molecules. Thus, one can also capture fluid dynamics implicitly, in the rules for collisions among the particles of which a fluid is constituted. The latter approach is the bottom-up approach, and it's truer to the way in which behavior is generated in nature. The traditional AI approach to intelligence is akin to the Navier-Stokes approach to fluid dynamics. However, in the case of phenomena like life and intelligence, we haven't been able to come up with high-level, top-down "rules" that work. In my opinion, this is more than just a case of not yet having found them; I think it's quite likely that no such rules can be formulated.
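
A much simpler analogue than fluid flow makes the same point. In the sketch below (a toy chosen for brevity, not a lattice-gas model of a real fluid), independent random walkers hop left or right on a lattice; no walker computes anything like a differential equation, yet the coarse-grained density of the ensemble spreads just as the top-down diffusion equation says it should.

```python
import numpy as np

# Bottom-up rule: each walker hops one site left or right at random.
# Top-down description: the density field obeys a diffusion equation.
rng = np.random.default_rng(0)
n_walkers, n_sites, n_steps = 20000, 200, 500

positions = np.full(n_walkers, n_sites // 2)     # all walkers start at the center
for _ in range(n_steps):
    steps = rng.choice([-1, 1], size=n_walkers)  # the entire microscopic "physics"
    positions = (positions + steps) % n_sites    # periodic boundary

density, _ = np.histogram(positions, bins=n_sites, range=(0, n_sites))
# The histogram approximates the Gaussian profile that the macroscopic
# diffusion equation predicts, even though no walker "knows" that equation.
print(int(density.max()), float(density.mean()))
```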

In the early days of artificial intelligence, researchers assumed that the most important thing about the brain, for the purposes of understanding intelligence, was that it was a universal computer. Its parallel, distributed architecture was thought to be merely a consequence of the bizarre path that nature had to take to evolve a universal computer. Since we know that all universal computers are equivalent in principle in their computational power, it was thought that we could effectively ignore the architecture of the brain and get intelligent software running on our newly engineered universal computers, which had very different architectures. However, I think that the difference in architecture is crucial. Our engineered computers involve a central controller working from a top-down set of rules, while the brain has no such central controller and works in a very distributed, parallel manner, from the bottom up. What's natural and spontaneous for this latter architecture can be achieved by the former only by using our standard serial computers to simulate parallel, distributed systems. There's something in the dynamics of parallel, distributed, highly nonlinear systems which lies at the roots of intelligence and consciousness — something that nature was able to discover and take advantage of.

What trick is it that nature capitalized on in order to create consciousness? We don't yet understand it, and the reason is that we don't understand what very distributed, massively parallel networks of simple interacting agents are capable of doing. We don't have a good feel for what the spectrum of possible behaviors is. We need to chart them, and once we do we may very well discover that there are some phenomena we didn't know about before — phenomena that turn out to be critical to understanding intelligence. We won't discover them if we work from the top down.

If you look at the architecture of most of the complex systems in nature — immune systems, economies, countries, corporations, living cells — there's no central controller in complete control of the system. There may be things that play a slightly centralized role, such as the nucleus in a cell, or a central government, but a great deal of the dynamics goes on autonomously. In fact, many of the emergent properties that such systems get caught up in would probably not be possible if everything had to be controlled by a centralized set of rules. Nature has learned how to bring about organization without employing a central organizer, and the resulting organizations seem much more robust, adaptive, flexible, and innovative than those we build ourselves that rely on a central controller.

In fact, natural systems didn't evolve under conditions that particularly favored central control. Anything that existed in nature had to behave in the context of a zillion other things out there behaving and interacting with it, without any one of these processes gaining control of the whole system and dictating to the others what to do. This is a very distributed, massively parallel architecture.

Think of an ant colony — a beautiful example of a massively parallel, distributed system. There's no one ant that's calling the shots, picking from among all the other ants which one is going to get to do its thing. Rather, each ant has a very restricted set of behaviors, but all the ants are executing their behaviors all the time, mediated by the behaviors of the ants they interact with and the state of the local environment. When one takes these behaviors on aggregate, the whole collection of ants exhibits a behavior, at the level of the colony itself, which is close to being intelligent. But it's not because there's an intelligent individual telling all the others what to do. A collective pattern, a dynamical pattern, takes over the population, endowing the whole with modes of behavior far beyond the simple sum of the behaviors of its constituent individuals. This is almost vitalistic, but not quite, because the collective pattern has its roots firmly in the behavior of the individual ants.
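
The flavor of that collective intelligence can be caught in a toy "double bridge" simulation (an invented setup, not a model of any real species): each ant chooses between a short and a long path in proportion to the pheromone already on it, and shorter trips deposit pheromone at a higher rate. No ant ever compares the two paths, yet the colony as a whole converges on the shorter one.

```python
import random

# Two paths to food; ants pick a path in proportion to its pheromone and
# reinforce it in inverse proportion to its length. Evaporation keeps the
# colony able to change its collective "mind".
SHORT, LONG = 1.0, 2.0                      # path lengths (cost of one trip)
pheromone = {SHORT: 1.0, LONG: 1.0}
EVAPORATION, N_ANTS, ROUNDS = 0.05, 100, 200

for _ in range(ROUNDS):
    for _ in range(N_ANTS):
        total = pheromone[SHORT] + pheromone[LONG]
        path = SHORT if random.random() < pheromone[SHORT] / total else LONG
        pheromone[path] += 1.0 / path       # shorter trips lay pheromone faster
    for path in pheromone:                  # evaporation
        pheromone[path] *= 1.0 - EVAPORATION

share_short = pheromone[SHORT] / (pheromone[SHORT] + pheromone[LONG])
print(f"fraction of pheromone on the short path: {share_short:.2f}")
```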

This example shows how one can be both a vitalist and a mechanist at the same time. We have a set of interacting agents, and they run into one another and do things based on their local interactions. That microcosm gives rise to a collective pattern of global dynamics. In turn, these global patterns set the context within which the agents interact — a context that can be a fairly stabilizing force. If it's too stabilizing, however, the system freezes, like a crystal, and can no longer react in a dynamic way to external pressures. The system as a whole has to respond to external pressures more like a fluid than a crystal, and thus it must be the case that the patterns that emerge can be easily destabilized under appropriate conditions, to be replaced by patterns that are more stable under the new circumstances. It could be that even without external perturbations one pattern of activity will reign for a while and ultimately collapse, to be replaced by another pattern — a stable organization under the new conditions. So, global patterns of organization can be causal, just as the vitalist wants, but these very patterns depend on the dynamics of the microcosm they inform, and don't exist independently of the entities that make up that microcosm, just as the mechanist requires.

In the late nineteenth century, the Austrian physicist Ludwig Boltzmann showed that one could account for many of the thermodynamic properties of macroscopic systems in terms of the collective activity of their constituent atoms. Boltzmann's most famous contribution to our understanding of the relationship between the microcosm of atoms and the macroscopic world of our experience was his definition of entropy: S = k log W. In the late 1940s, Claude Shannon generalized Boltzmann's formula, lifting the concept of entropy from the thermodynamic setting in which it was discovered to the more general level of probability theory, providing a precise, quantitative meaning for the term "information." That was a good start. But a lot more needs to be lifted from the domain of thermodynamics. Other useful quantities to generalize from thermodynamics include energy and temperature. I'm convinced that generalizing other concepts from thermodynamics and statistical mechanics will have a major impact on our understanding of biology and other complex systems.
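
The step from Boltzmann to Shannon can be written out in one line. For a system whose W microstates are equally likely, each has probability p_i = 1/W, and Shannon's general expression collapses back to Boltzmann's, up to the constant k:

```latex
S = k \log W
\qquad\longrightarrow\qquad
H = -\sum_{i=1}^{W} p_i \log p_i
\;\xrightarrow{\;p_i \,=\, 1/W\;}\;
H = \log W .
```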

As Doyne Farmer has pointed out, our current understanding of complex systems is very much in the same state as our understanding of thermodynamics was in the mid-1800s, when people were screwing around with the basic concepts but didn't yet know which were the right quantities to measure. Until you know which are the relevant quantities to measure, you can't come up with quantitative expressions relating those quantities to one another. The French physicist Sadi Carnot was one of the first people to identify some basic quantities, such as heat and work. He was followed by a stream of people, like Rudolf Clausius and Josiah Willard Gibbs, until Boltzmann finally made the connection between the microcosmos of atoms and the macrocosmos of thermodynamics.

In my own work, I've focused on some general properties of thermodynamic systems which appear to be important for understanding complex systems. There are certain regimes of behavior of physical systems, generally called "phase transitions," which are best characterized by statistical mechanics. A physical system undergoes a phase transition when its state changes — for instance, when water freezes into ice. I've found that during phase transitions physical systems often exhibit their most complex behavior. I've also found that it's during phase transitions that information processes can appear spontaneously in physical systems and play an important role in the determination of the systems' behavior. One might even say that systems at phase transitions are caught up in complex computations to determine their own physical state. My belief is that the dynamics of phase transitions are the point at which information processing can gain a foothold in physical systems, gaining the upper hand over energy in the determination of the systems' behavior. It has long been a goal of science to discover where and how information theory and physics fit into each other; it's become something of a Holy Grail. I can't say I've found the Grail, but I do think I've found the mountain range it's located in.
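
One concrete handle on that regime is the lambda parameter: the fraction of entries in a cellular automaton's rule table that map to something other than the quiescent state, with the most complex behavior tending to appear at intermediate values. The sketch below computes it for Wolfram's elementary two-state rules, a far smaller rule space than the automata actually surveyed in the edge-of-chaos work, so take it only as an illustration of the definition.

```python
# Lambda for an elementary cellular automaton rule: the fraction of the
# eight neighborhood configurations whose next state is not the quiescent
# state 0.
def langton_lambda(rule_number, n_neighborhoods=8, quiescent=0):
    table = [(rule_number >> i) & 1 for i in range(n_neighborhoods)]
    return sum(1 for out in table if out != quiescent) / n_neighborhoods

for rule in (0, 30, 90, 110, 255):
    print(f"rule {rule:3d}: lambda = {langton_lambda(rule):.3f}")
```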

People have been trying to synthesize life for a long time, but in most cases they were trying to build models that were explicitly like the life we know. When people would build a model of life, it would be a model of a duck, or a model of a mouse. The Hungarian mathematician John von Neumann had the insight that we could learn a lot even if we didn't try to model some specific existing biological thing. He went after the logical basis, rather than the material basis, of a biological process, by attempting to abstract the logic of self-reproduction without trying to capture the mechanics of self-reproduction (which were not known in the late 1940s, when he started his investigations).

Von Neumann demonstrated that one could have a machine, in the sense of an algorithm, that would reproduce itself. Most biologists weren't interested, because it wasn't like any specific instance of biological self-reproduction (it wasn't a model of chromosomes, for example). Von Neumann was able to derive some general principles for the process of self-reproduction. For instance, he determined that the information in a genetic description, whatever it was, had to be used in two different ways: (1) it had to be interpreted as instructions for constructing itself or its offspring, and (2) it had to be copied passively, without being interpreted. This turned out to be the case for the information stored in DNA when James Watson and Francis Crick determined its structure in 1953. It was a far-reaching and very prescient thing to realize that one could learn something about "real biology" by studying something that was not real biology — by trying to get at the underlying "bio-logic" of life.
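
The two uses of the description can be shown in a few lines of code. This is a deliberately minimal sketch with a made-up string encoding, not von Neumann's constructor: the same description is first interpreted as instructions for building the offspring's machinery, and then copied, uninterpreted, into the offspring so that it too can reproduce.

```python
# Von Neumann's two uses of the genetic description, in miniature.

def interpret(description):
    """Use 1: treat the description as *instructions* and build a machine from it.
    The 'part=value;...' encoding is purely hypothetical."""
    machine = {}
    for item in description.split(";"):
        if item:
            part, value = item.split("=")
            machine[part] = value
    return machine

def reproduce(organism):
    """Build a child machine by interpreting the description (use 1),
    then hand the child an uninterpreted copy of it (use 2)."""
    machine, description = organism
    child_machine = interpret(description)   # translated
    child_description = description          # copied passively
    return (child_machine, child_description)

genome = "arm=grip;tape_reader=scan;controller=step"   # hypothetical parts list
parent = (interpret(genome), genome)
child = reproduce(parent)
print(child[0] == parent[0], child[1] == parent[1])    # True True
```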

That approach is characteristic of artificial life. AL attempts to look beyond the collection of naturally occurring life in order to discover things about that set that could not be discovered by studying that set alone. AL isn't the same thing as computational biology, which primarily restricts itself to computational problems arising in the attempt to analyze biological data, such as algorithms for matching protein sequences to gene sequences, or programs to reconstruct phylogenies from comparisons of gene sequences. Artificial life reaches far beyond computational biology. For example, AL investigates evolution by studying evolving populations of computer programs — entities that aren't even attempting to be anything like "natural" organisms.

Many biologists wouldn't agree with that, saying that we're only simulating evolution. But what's the difference between the process of evolution in a computer and the process of evolution outside the computer? The entities that are being evolved are made of different stuff, but the process is identical. I'm convinced that such biologists will eventually come around to our point of view, because these abstract computer processes make it possible to pose and answer questions about evolution that are not answerable if all one has to work with is the fossil record and fruit flies.

The idea of artificially created life is pregnant with issues for every branch of philosophy, be it ontology, epistemology, or moral or social philosophy. Whether it happens in the next ten, hundred, or only in the next thousand years, we are at the stage where it's become possible to create living things that are connected to us not so much by material as by information. In geological time, even a thousand years is an instant, so we're literally at the end of one era of evolution and at the beginning of another. It's easy to descend into fantasy at this point, because we don't know what the possible outcome of producing "genuine" artificial life will be. If we create robots that can survive on their own, can refine their own materials to construct offspring, and can do so in such a way as to produce variants that give rise to evolutionary lineages, we'll have no way of predicting their future or the interactions between their descendants and our own. There are quite a few issues we need to think about and address before we initiate such a process. A reporter once asked me how I would feel about my children living in an era in which there was a lot of artificial life. I answered, "Which children are you referring to? My biological children, or the artifactual children of my mind?" — to use Hans Moravec's phrase. They would both be my children, in a sense.

It's going to be hard for people to accept the idea that machines can be as alive as people, and that there's nothing special about our life that's not achievable by any other kind of stuff out there, if that stuff is put together in the right way. It's going to be as hard for people to accept that as it was for Galileo's contemporaries to accept the fact that Earth was not at the center of the universe. Vitalism is a philosophical perspective that assumes that life cannot be reduced to the mere operation of a machine, but, as the British philosopher and scientist C.H. Waddington has pointed out, this assumes that we know what a machine is and what it's capable of doing.

Another set of philosophical issues raised in the pursuit of artificial life centers on questions of the nature of our own existence, of our own reality and the reality of the universe we live in. After working for a long time creating these artificial universes, wondering about getting life going in them, and wondering if such life would ever wonder about its own existence and origins, I find myself looking over my shoulder and wondering if there isn't another level on top of ours, with something wondering about me in the same way. It's a spooky feeling to be caught in the middle of such an ontological recursion. This is Edward Fredkin's view: the universe as we know it is an artifact in a computer in a more "real" universe. This is a very nice notion, if only for the perspective to be gained from it as a thought experiment — as a way to enhance one's objectivity with respect to the reality one's embedded in.

Biology has until now been occupied with taking apart what's already alive and trying to understand, based on that, what life is. But we're finding that we can learn a lot by trying to put life together from scratch, by trying to create our own life, and finding out what problems we run into. Things aren't necessarily as simple — or, perhaps, as complicated — as we thought. Furthermore, the simple change in perspective — from the analysis of "what is" to the synthesis of "what could be" — forces us to think about the universe not as a given but as a much more open set of possibilities. Physics has largely been the science of necessity, uncovering the fundamental laws of nature and what must be true given those laws. Biology, on the other hand, is the science of the possible, investigating processes that are possible, given those fundamental laws, but not necessary. Biology is consequently much harder than physics but also infinitely richer in its potential, not just for understanding life and its history but for understanding the universe and its future. The past belongs to physics, but the future belongs to biology.

 


Excerpted from The Third Culture: Beyond the Scientific Revolution by John Brockman (Simon & Schuster, 1995). Copyright © 1995 by John Brockman. All rights reserved.