Chapter 22


"The Second Law of Organization"

W. Daniel Hillis: Doyne was in that group of physicists at Los Alamos who were starting to think about complexity, nonlinear phenomena, and adaptive systems. They began to realize that things like "strange attractors" were really ubiquitous in any kind of system — economic systems and biological systems, not just physical systems. That was an incredibly important idea, because it allowed all these people to start talking to each other.


J. DOYNE FARMER is a physicist, an external professor at the Santa Fe Institute, and a cofounder of Prediction Company, an investment firm.

J. Doyne Farmer: In the last half of this century, the view has emerged that life and consciousness are natural and inexorable outgrowths of the emergent and self-organizing properties of the physical world. This fundamental change in our view of consciousness and life gives us a new way of looking at ourselves and our beliefs, and of understanding how we fit into the universe.

Not that this is a fait accompli — it's a story in progress, an evolving idea about which there's no universal agreement. Our scientific understanding is still highly fragmented, and we await major breakthroughs as far as anything resembling broad theories is concerned. There's been little serious discussion of how this new view impacts philosophy or sociology. But it's rapidly taking hold, and the change is profound. More than ever, it's becoming impossible to contemplate seriously any philosophical or social question without understanding recent developments in science.

As a kid, I could never shrug off those nagging "Why" questions. It seemed really important to know why we were here, and to understand the meaning of life. It was upsetting to me that these questions, which seemed to lie at the foundation of everything, didn't have any good answers. The easy solutions just didn't fit. My brief preadolescent foray into religion left me with nothing but the realization that people have a desperate need to understand these questions.

When I arrived at college, I immediately took philosophy, picking it out as the subject where the "Why" questions would receive plenty of attention. But as I learned a little philosophy, I became frustrated by the endless debates that seemed to hinge on the meaning of words that could never be defined. Nothing was ever answered. I decided that "Why" questions are simply too deep to be answered with a frontal attack, using the sloppy weapon of human language. Perhaps I wasn't quite so naive as to have expected answers, but at any rate I wasn't satisfied by the study of philosophy.

Physics, on the other hand, seemed to have plenty of answers but not to the "Why" questions. Where I'd imagined that I would learn the foundations, the big principles that made the universe tick, we were instead memorizing formulas about masses on inclined planes. But somehow, I hoped, we'd eventually get to the good stuff. The masses and inclined planes were just an initiation rite, and in the meantime I might learn something tangible, perhaps even useful.

As I progressed through the physics curriculum, I did begin to learn something about fundamental principles — on my own and in discussion with other students — somewhere between the cracks of the problem sets. There was some satisfaction in this. And on learning more astronomy, in the phenomenon of "averted vision," I found a justification for my rationale about the roundabout path to metaphysics via physics: to see a faint star, it's necessary to look away from it; as soon as one looks at it directly, it vanishes.

But as I approached the end of the physics curriculum, there was still something lacking. Averted vision is all well and good, but it is necessary to look roughly in the right direction. Physics, in its quest for simple problems, has traditionally focused entirely on the immediate and direct aspects of matter and energy. What makes things move, what makes them get hot or cold. Pushing, pulling, bumping, smashing, and waving. The material aspect of the world, leading to fundamental ideas such as the curvature of spacetime, the quantum nature of reality, the uncertainty principle. All relevant to the big questions. But, the big questions inevitably hinge upon the nature of life and intelligence. While modern physics may say that science necessarily has a subjective element, it says nothing about the nature or origins of consciousness.

It seemed that fundamental physics was stuck. The particle physicists were smashing particles into each other with ever-increasing force, trying to discover how many quarks could dance on the head of a pin. The cosmologists were working with very few facts, debating different flavors for the universe on what seemed to me to be mainly religious grounds. And most of physics was still focused on pushing and pulling, on the material properties of the universe rather than on its informational properties. By informational properties, I mean those that relate to order and disorder. Disorder is fairly well understood, but order isn't. But I'll come back to this later.

I had the good fortune, in graduate school at UC Santa Cruz, to come into contact with some exceptional thinkers: my fellow graduate students Jim Crutchfield, Norman Packard, and Robert Shaw. We spent a lot of time hanging out together, thinking, talking, and sharing our ideas about just about everything. We mused about the informational properties of nature, and the natural origins of organization, and our discussions had a lot of influence on my thinking about these questions.

Norman and I had been friends from childhood, back in Silver City, New Mexico, and we'd always dreamed of starting a company together. So when I had a convenient break in my studies — having passed my qualifying exam and done all my course work, and being a bit dissatisfied with where my research into galaxy formation in unusual cosmologies was going — I decided to take a year off to work with Norman and some other friends, following up on an idea of Norman's. The scheme was to use Newton's laws to beat roulette: in experiments done in our basement, we determined that by means of a computer concealed in the soles of our shoes and activated by a toe switch, we could measure the velocity of the roulette ball and wheel and predict the ball's landing position. Thus ensued a wild time of desperate and adventurous living. The basic idea worked — we made some money in the casinos — but the problems of doing this regularly enough and at sufficiently high stakes prevented us from making very much money. Scientifically, it forced me to learn all about computers (we built what may have been the first concealable digital computers), and it gave me a deep appreciation for the problem of prediction and the curious way in which an apparently simple physical system could be very difficult to predict.
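The physics behind the roulette scheme can be sketched in a few lines. The toy model below is my own illustration, not the actual shoe-computer program (which also had to cope with the ball's bounce off the frets and the wheel's slow deceleration): it integrates constant-deceleration kinematics to estimate where the ball, measured at a known angular velocity, will meet the rotating wheel.

```python
import math

def predict_landing(ball_omega, wheel_omega, ball_decel, drop_omega,
                    ball_angle=0.0, wheel_angle=0.0):
    """Toy physics-based roulette prediction.

    The ball decelerates under friction from ball_omega until it slows
    to drop_omega, at which point it falls toward the wheel. We solve
    the simple kinematics to find where the ball lands relative to the
    rotating wheel. All parameter values are hypothetical.
    """
    # Time for the ball to decelerate from ball_omega to drop_omega
    t = (ball_omega - drop_omega) / ball_decel
    # Angle swept by the ball under constant deceleration
    ball_final = ball_angle + ball_omega * t - 0.5 * ball_decel * t ** 2
    # Wheel rotates at (nearly) constant speed over the same interval
    wheel_final = wheel_angle + wheel_omega * t
    # Landing position is the ball's angle in the wheel's rotating frame
    return (ball_final - wheel_final) % (2 * math.pi)

# Example: ball at 5 rad/s decelerating at 0.3 rad/s^2,
# counter-rotating wheel at 1 rad/s, ball drops below 2 rad/s
pocket_angle = predict_landing(5.0, -1.0, 0.3, 2.0)
```

The point of the exercise is that even a crude model like this narrows the landing position enough to shift the odds: the house edge in roulette is only a few percent, so any systematic prediction better than chance is profitable.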

So when Rob Shaw showed up one day and started talking about the phenomenon of "chaos," which he had just learned about, the idea had immediate relevance for me. I instantly understood what he was talking about, and why chaos was important to physical systems like roulette wheels. Rob, Norman, Jim, and I banded together to form the Dynamical Systems Collective at Santa Cruz, and all of us ended up doing our dissertations on the subject of chaos, using one another as our primary thesis advisors. We had a lot of fun doing it.

The fascination with chaos is that it explains some of the disorder in the world, how small changes at one time can give rise to very large effects at a future time. And it shows how simple mathematical rules can give rise to complicated behavior. It explains why simple things can be hard to predict — so much so that they appear to behave randomly. I was lucky enough to get involved in chaos theory fairly early on, and it was great to be in a field that was sufficiently undeveloped that there were a lot of easy problems lying around to be solved.

As I finished graduate school, I really wasn't very sure about getting a job. I'd never been keen on the idea of traditional jobs anyway, and with a degree in "chaos," which at the time very few people had heard of, and no advisor to argue my case, the prospect of a job in science seemed pretty remote. But I happened to see a poster soliciting applications for the Oppenheimer Fellowship at Los Alamos National Laboratory. I'd just been reading about Oppenheimer, and Los Alamos was in New Mexico, where I was raised and where I wanted to return, so even though I was very suspicious of the idea of working at a weapons laboratory I applied for the fellowship. I flew out for a visit, and I was immediately impressed. The people there were exciting, enthusiastic, intelligent, and scientifically they were anything but conservatives. They didn't care at all if what I was doing was not traditional physics. There was a tradition of intellectual freedom there that I haven't seen anywhere else. I ended up with a joint position, split between the Center for Nonlinear Studies and the Theoretical Division. They immediately gave me a lot of responsibility and resources and also gave me carte blanche to do whatever I wanted. Visitors streamed through from all over the world, studying everything under the sun, well beyond the traditional boundaries of physics and mathematics, and I learned an enormous amount just by listening and asking questions.

I continued my work on chaos, but as time went on I began to get a little bored, and increasingly began to think about how to get a handle on the opposite problem: Why is the universe so organized? In 1983 the Center for Nonlinear Studies provided some money for a conference on cellular automata, which I organized with Tommaso Toffoli and Stephen Wolfram, and in 1986 Alan Lapedes, Norman Packard, Burton Wendroff, and I organized a conference on "Evolution, Games, and Learning." These conferences were a lot of fun, and gave us a chance to invite people working on all sorts of crazy, fascinating, and obscure things — simulating life in computer worlds, and so forth. These conferences put us in contact with the then tenuous network of people interested in these kinds of things, and that's how I got to know people such as Chris Langton and John Holland. There were others at Los Alamos working on related topics; Alan Lapedes and Dave Sharp were working on neural nets, people in the Theoretical Biology group were working on informational studies of DNA and also some very interesting aspects of the immune system. We were able to hire some really good postdocs interested in self-organization, like Steen Rasmussen and Walter Fontana, and in 1988 we started the Complex Systems group. Meanwhile, the Santa Fe Institute was just getting started, which brought in even more interesting people and expanded the horizon to include subjects, such as economics, that we hadn't paid much attention to.

Around 1986, Norman Packard and I got involved in two related projects: one with Alan Perelson involving a simulation of learning self/nonself recognition and evolution in the immune system, and the other with Stuart Kauffman, which was a simulation of prebiotic evolution. The idea of the simulation was similar in both cases: we made up some rules that allowed the parts of the system to evolve and interact with each other. In the case of the immune system, the parts were concentrations of different kinds of antibodies. For prebiotic evolution, they were concentrations of molecules such as proteins; the purpose was to show how a metabolism could arise spontaneously, without the presence of self-replicating molecules like DNA. The interesting and novel aspect of both simulations was that as the systems evolved, the compositions of their parts, and hence the parts' interactions, changed. This all came out of a few simple rules. We didn't have to put in anything by hand, other than the basic laws of chemistry — or our crude approximations of them.

The problems turned out to be harder than we'd originally hoped, and our early results weren't very conclusive. In the case of the autocatalytic networks, I was lucky to have a graduate student, Rik Bagley, who had migrated from San Diego and badly wanted to produce a Ph.D. thesis. Rik worked hard and ended up getting some nice results that showed there was real value in the whole approach.

To understand what we did, you first have to understand one of the basic questions relating to the origin of life. Speaking crudely, a living system — an organism — consists of a symbiotic relationship between a metabolism and a replicator. The metabolism, which is built out of proteins and other stuff, extracts energy from the environment, and the replicator contains the blueprint of the organism, with the information needed to grow, make repairs, and reproduce. Each needs the other: the replicator contains the information to make the proteins, the RNA, and other molecules that form the metabolism and run the organism; and the metabolism provides the energy and raw materials needed to build and run the replicator.

The question is, How did this "I'll scratch your back, you scratch mine" situation ever get started? Which came first, the metabolism or the replicator? Or can neither exist without the other, so that they had to evolve together?

In the 1950s, the chemists Harold Urey and Stanley Miller showed that it was possible for the basic building blocks of proteins — amino acids — to form spontaneously from "earth, fire, and water." However, the synthesis of the more complicated molecules needed in order to form replicators and metabolisms was much less clear. We were trying to demonstrate that a metabolism could spontaneously emerge from basic building blocks and evolve without the presence of a replicator. That is, it could be its own replicator, with the information stored simply in the so-called primordial soup. Starting with simple components — for example, simple amino acids — we wanted to get complex proteins: that is, long, highly diverse chains of amino acids. The basic principle of an autocatalytic network is that even though nothing can make itself, everything in the pot has at least one reaction that makes it, involving only other things in the pot. It's a symbiotic system, in which everything cooperates to make the metabolism work — the whole is greater than the sum of the parts. If normal replication is like monogamous sex, autocatalytic reproduction is like an orgy. We were interested in the logical possibility for this to happen — in an artificial world, simulated inside a computer, following chemical laws that were similar to those of the real world but vastly simplified to make the simulation possible.
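The condition that "everything in the pot has at least one reaction that makes it, involving only other things in the pot" can be illustrated as a closure computation. The sketch below is a deliberately simplified toy (a made-up string "chemistry" of my own invention, not the actual simulation, which tracked concentrations and reaction kinetics): starting from a food set of small molecules, it repeatedly adds any product whose reactants and catalyst are already present.

```python
def autocatalytic_closure(food, reactions):
    """Find the species sustained by an autocatalytic network.

    `reactions` is a list of (reactants, catalyst, product) triples.
    Starting from the food set, we repeatedly add any product whose
    reactants and catalyst are all already in the pot, until nothing
    new appears. A toy version of the autocatalytic-set idea.
    """
    present = set(food)
    changed = True
    while changed:
        changed = False
        for reactants, catalyst, product in reactions:
            if (product not in present and catalyst in present
                    and all(r in present for r in reactants)):
                present.add(product)
                changed = True
    return present

# Hypothetical chemistry over strings of monomers 'a' and 'b':
food = {"a", "b"}
reactions = [
    (("a", "b"), "a", "ab"),     # a catalyzes a + b -> ab
    (("ab", "a"), "ab", "aba"),  # ab catalyzes its own extension
    (("x", "y"), "z", "xyz"),    # never fires: z is not in the pot
]
pot = autocatalytic_closure(food, reactions)  # holds a, b, ab, aba
```

Note how "aba" is sustained even though no molecule makes itself: each non-food species is produced by a catalyzed reaction among other members of the set, which is exactly the symbiosis described above.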

In our first simulation, not much happened. The soup of amino acids pretty much remained just that. But after several years of work, Rik managed to speed up the simulation by a factor of 100 and expand things so that the chemistry was considerably more realistic. As we added features and understood the system better, we began to see things happening. We found that by tuning the parameters of the system — knobs you can think of as determining the relative amounts of earth, fire, and water — we could make the soup spontaneously transform itself during the simulation into a complex and highly specific network of large molecules. Not all molecules. Even though billions of different types are possible, only tens to hundreds are produced. This is like a real metabolism. Furthermore, in some work with Walter Fontana, we were able to show that the system could evolve: new "proteins" would emerge spontaneously, competing with the ones that were already there and changing the metabolism.

What we did in simulating the spontaneous emergence of evolving autocatalytic metabolisms is just one example of an approach that people like Chris Langton, Danny Hillis, and others are taking these days to study the evolution of complex systems. Physics has made most of its breakthroughs when it was able to find simple systems that capture the essence of something, without all the complications. One of the keys to understanding quantum mechanics was the hydrogen atom — the simplest atom — where the mathematics of quantum mechanics could be solved and its consequences understood.

The goal is to find a simple evolving system that contains some of the essential properties of evolving complex systems in general, but without all the complications of the real world. The other goal is to find lots of different evolving complex systems, and to try to determine what's common to all of them. What is the essence of what makes them complex? But at this point we still understand very little. Everyone's still arguing about what a "complex system" really is, and what "organization" means, and whether evolution really tends toward states of greater organization.

For many of us, the goal is to find what might be called "the second law of self-organization." The "second law" part is thrown in as a kind of joke; it's a reference to the second law of thermodynamics, which states that there's an inexorable tendency toward entropy — that is, for physical systems to become disordered. The paradox that immediately bothers everyone who learns about the second law is this: If systems tend to become more disordered, why, then, do we see so much order around us? Obviously there must be something else going on. In particular, it seems to conflict with our "creation myth": In the beginning, there was a big bang. Suddenly a huge amount of energy was created, and the universe expanded to form particles. At first, things were totally chaotic, but somehow over the course of time complex structures began to form. More complicated molecules, clouds of gas, stars, galaxies, planets, geological formations, oceans, autocatalytic metabolisms, life, intelligence, societies. . . . If we take any particular step in this story, with enough information we can understand it without invoking a general principle. But if we take a step back, we see that there's a general tendency for things to get more organized no matter what the particular details are. Perhaps not everywhere, just in some places at some times. And it's important to stress that no one is saying the second law of thermodynamics is wrong, just that there is a contrapuntal process organizing things at a higher level.

One view of this, perhaps the mainstream view, is that everything depends on a set of disconnected "cosmic accidents." The emergence of organization in the universe depends on a series of highly unlikely unrelated details. The emergence of life is an accident, unrelated to the emergence of all the other forms of order we see in the universe. Life can occur only if all the physical laws are exactly as they are in our universe, and when conditions are almost exactly as they are on our planet.

Many of us find this view implausible. Why would so many different types of order occur? Why would our situation be so special? It seems more plausible to assume that "accidents tend to happen." An individual automobile wreck may seem like a fluke — in fact, most automobile wrecks may seem like flukes — but on average they add up. We expect a certain number of them to happen. Our feeling is that the progression of increasing states of organization in the evolution from clouds of gas to life is not an accident. What we want to do is understand the common thread in the pattern, the universal driving force that causes matter to spontaneously organize itself.

This point of view isn't new. It was articulated in the nineteenth century by Herbert Spencer, who wrote about evolution before Darwin and who coined the terms "survival of the fittest" and "evolution." Spencer argued in a very articulate way for the commonality of these processes of self-organization, and used his ideas to make a theory of sociology. However, he was not able to put these ideas into mathematical form or argue them from first principles. And no one else has, either — doing so is perhaps the central problem in the study of complex systems.

Many of us believe that self-organization is a general property — certainly of the universe, and even more generally of mathematical systems that might be called "complex adaptive systems." Complex adaptive systems have the property that if you run them — by just letting the mathematical variable of "time" go forward — they'll naturally progress from chaotic, disorganized, undifferentiated, independent states to organized, highly differentiated, and highly interdependent states. Organized structures emerge spontaneously, just by letting the system run. Of course, some systems do this to a greater degree than others, or to higher levels than others, and there will be a certain amount of flukiness to it all. The progression from disorder to organization will proceed in fits and starts, as it does in natural evolution, and it may even reverse itself from time to time, as it does in natural evolution. But in an adaptive complex system, the overall tendency will be toward self-organization. Complex adaptive systems are somewhat special, but not extremely special; the fact that simple forms of self-organization can be seen in many different computer simulations suggests that there are many "weak" complex adaptive systems. A weak system gives rise only to simpler forms of self-organization; a strong one gives rise to more complex forms, like life. The distinction between weak and strong may also depend on scale: even though something like Danny Hillis's Connection Machine is big, it's nothing compared with the Avogadro's number of processors that nature has at her disposal.

Of course, almost none of this is very well understood at this point. That's part of the challenge and fun of thinking about it! We don't know what "organization" is, we don't know why some systems are adaptive and some aren't, we don't know how to tell in advance whether a system is weakly or strongly adaptive, or whether there's a minimum degree of complexity that a system has to have in order to be adaptive. We do know that complex adaptive systems have to be nonlinear and capable of storing information. Also, the parts have to be able to exchange information, but not too much. In the physical world, this is equivalent to saying that they have to be at the right temperature: not too hot, not too cold.

Many simulations show this — in fact, finding the right temperature was one of the breakthroughs in our simulation of autocatalytic metabolisms. We know a little bit about what distinguishes an adaptive complex system from a nonadaptive complex system, such as a turbulent fluid flow, but most of this is lore — anecdotal evidence based on a few observations and cast in largely vague and undefined terms.

To return to the question of who and what we are: if you accept my basic theme that life and intelligence are the result of a natural tendency of the universe to organize itself, then we are just a passing phase, a step in this progression. Of course, one has to be very careful in generalizing from one level of evolution to another. One of the factors that caused Spencer's ideas to lose popularity was social Darwinism — the idea that those who were wealthy and powerful had become that way because they were somehow naturally "fit," while the downtrodden were unfit — which was a poor extension from biological to social evolution, based on a simpleminded understanding of how biological evolution really works. Social evolution is different from biological evolution: it's faster, it's Lamarckian, and it makes even heavier use of altruism and cooperation than biological evolution does. None of this was well understood at the time.

Another logical consequence of the evolutionary view is that humans aren't the endpoint of the process. Everything is evolving all the time. At this point, we happen to be the only organism with a sufficiently high degree of intelligence to be able to control our environment in a major way. That gives us the capability to do something remarkable — namely, change evolution itself. If we choose, we can use genetic engineering to alter the character of our offspring. As we understand the details of the human genome better, we're almost certain to do this in order to prevent disease. And we'll be tempted to go beyond that, and increase intelligence, say. There'll be an enormous debate, but with overpopulation, a decreased need for unskilled and manual labor, and pressure from cybernetic intelligences, the motivation to do this will eventually become overwhelming.

Cybernetic intelligences are a consequence of the view that self-organization and life are the natural outcomes of evolution in an adaptive complex system. We're rapidly creating an extraordinary, silicon-based petri dish for the evolution of intelligence. By the year 2025, at the present rate of improvement of computer technology, we're likely to have computers whose raw processing power exceeds that of the human brain. Also, we're likely to have more computers than people. It's difficult to realistically imagine a world of cyberintelligences and superintelligent humanlike beings. It's like a dog trying to imagine general relativity. But I think such a world is the natural consequence of adaptive complex systems. What's even more staggering is that it's not so far in the future — I would say a hundred years at the maximum. One of the amazing features of evolution is that it happens faster and faster. This is particularly vivid in the evolution of societies. Once we can manipulate our own genome, Lamarckian fashion, the rate of change will be staggering.

As for myself, I'm just going along, trying to stay sane, raise my children, and make a living. By July 1991, I'd become fed up with Los Alamos. When I became a group leader, I came fully into contact with the political struggles required to maintain funding. The winding down of the Cold War, combined with increased congressional scrutiny, increased bureaucracy, and poor management, made things tough at Los Alamos. The lab funds basic research by imposing a tax on all the money that comes in and then redistributing it. As the Cold War warmed up, weapons funding went down, the internal tax revenues went down, and basic research became a desperate, survival-oriented enterprise. The Golden Age of science at Los Alamos was over, or at least on hold. The Cold Warriors who used to build weapons were now fiercely scavenging for funds, making up for their lack of skill in science with skill in politics and the urge to survive. Meanwhile, Congress felt that this discretionary tax was subverting their control over the way scientists spend taxpayers' money, and increasingly channeled money into micromanaged Big Science funding initiatives. Running a group in an avant-garde, unestablished area was not fun anymore, depending more on political acumen and fund-raising skills than scientific ability.

So I quit my job at Los Alamos and joined up again with my old friend Norman Packard to take another shot at the global casino. We rounded up some venture capital, recruited another ex-graduate student in physics at UC Santa Cruz, Jim McGill, to run the business side of things, and started Prediction Company, in Santa Fe. Our goal is to make money by predicting and trading in financial markets.

Prediction Company is in part an outgrowth of our work in chaos. One of the reasons for being interested in chaos to begin with is that it presents the possibility that something that seems random may have some underlying simplicity, which can be exploited to make better predictions. In 1987 Sid Sidorowich and I wrote a paper showing how to exploit the order underlying chaos, so that some forms of chaos could be predicted without knowing anything about the underlying dynamics, by building models based only on historical data. We applied this to several phenomena, like fluid flow, sunspots, and ice ages, and got some reasonable results.
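The spirit of that approach, predicting a chaotic series from historical data alone, can be sketched with a delay embedding and nearest-neighbor averaging. This is a simplified illustration in the spirit of local-approximation forecasting, not the actual algorithm from that paper; the logistic map, the embedding dimension, and all other parameters here are my own assumptions.

```python
def logistic_series(n, x=0.4, r=3.9):
    """Generate n points of the chaotic logistic map x -> r*x*(1-x)."""
    out = []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(x)
    return out

def predict_next(series, dim=2, k=4):
    """Predict the successor of series[-1] from historical analogues.

    Reconstruct states as delay vectors of length `dim`, find the k
    states closest to the current one, and average their successors.
    No knowledge of the underlying dynamics is used, only the data.
    """
    # State at time t is (series[t-dim+1], ..., series[t])
    states = [tuple(series[t - dim + 1:t + 1])
              for t in range(dim - 1, len(series) - 1)]
    successors = [series[t + 1] for t in range(dim - 1, len(series) - 1)]
    query = tuple(series[-dim:])
    # Indices of states sorted by squared Euclidean distance to query
    order = sorted(range(len(states)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(states[i], query)))
    nearest = order[:k]
    return sum(successors[i] for i in nearest) / k

data = logistic_series(2000)
forecast = predict_next(data)
true_next = 3.9 * data[-1] * (1 - data[-1])
error = abs(forecast - true_next)  # small, despite the chaos
```

The same data-driven idea fails for truly random noise, which is the point: when a short-term forecast beats chance, there is low-dimensional structure hiding under the apparent randomness.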

It turns out that predicting financial markets doesn't have a lot to do with what Sid and I wrote about earlier, but some of the same techniques work. At Prediction Company, we gather data about financial markets, like currency exchange rates. We apply our learning algorithms to the data, looking for patterns that seem to persist through time. We build models that make trades based on these patterns, and implement them. Every day, data flows into Santa Fe from all over the world and triggers our computer programs to make predictions and trades, which are then sent around the world to the appropriate financial markets. It takes about a minute from the time we receive the data until the trade gets made. So far so good. We have a nice contract with the Swiss Bank Corporation — they provide us with money to trade with, advance us money to pay our bills as needed — and we get a cut of the profits. We're just ramping up to the point where we're trading enough money to make a significant profit. So in the next few years we should either sink or swim.

If we succeed, it will show that, contrary to mainstream theories in economics, it's possible to beat the market. Our feeling is that one of the main causes for the patterns we find is mass psychology: traders respond to information in a predictable manner. So if we can predict the market, and our feeling is right, then it shows that the behavior of groups of human beings is predictable. We're not basing our predictions on a fundamental theory about human nature, but rather on patterns and data. Time will tell whether or not we're right.

These days, scientists are largely treated like beggars, their tin cups eternally extended to government funding agencies. If we succeed, then I'll have the luxury of being able to be a scientist without having to be a beggar. I hope to get back into the fray of pure research in complex systems before I'm too old and senile to think clearly anymore. There are some big questions to be answered, which can give us significant hints about the meaning of life. I'd like to get back on the front lines of answering these questions.

Francisco Varela: Doyne Farmer comes from the pure-mathematics tradition. He's one of the best examples of somebody who took the very abstract theory of dynamical systems and chaos theory and brought it down to a concrete level, where you can put it to work in interesting ways. For example, he's made concrete applications in economics. He's demonstrated that you can make short-term predictions about phenomena that are intrinsically chaotic, intrinsically random-looking. That's a major contribution, and in that sense he's quite an impressive applied mathematician.

Doyne is somebody who has stayed away from the Santa Fe Institute hype as he pursues his work. Since his work is so fundamental to the Santa Fe project, his name figures. His reputation carries far beyond the institute because his work and persona are so unique and interesting that he's been one of the leading characters in several recent books — some of them bestsellers — by science journalists.

Everybody knows what he and Norman Packard are doing at the Prediction Company, but nobody knows exactly how well or how badly they're doing it. If you have a few percent more accuracy than the best intuitive guesses of the good players on Wall Street, you still stand to make gazillions of dollars — for a while, until everybody else figures out what you're doing. That will give them a window of a year or two, probably, which is enormous.

Brian Goodwin: The first time I met Doyne Farmer was at Los Alamos, and he impressed me as somebody who is fantastically on the ball. They were working on that origin-of-life scenario, with the autocatalytic-set story. I found Doyne to be very quick, smart, and tuned-in to these problems. He's one of the high flyers. It's a pity he dropped out, but never mind; he's doing what he wants to do.

W. Daniel Hillis: Too bad that Doyne Farmer went off and started his company, because he stopped talking about the good stuff he was doing. He's trying to use it to get rich in the stock market. Doyne is one of the few people I know who's really good at explaining physical ideas to people in other fields.

Doyne was in that group of physicists at Los Alamos who were starting to think about complexity, nonlinear phenomena, and adaptive systems. They began to realize that things like "strange attractors" were really ubiquitous in any kind of system — economic systems and biological systems, not just physical systems. That was an incredibly important idea, because it allowed all these people to start talking to each other. It allowed Stuart Kauffman to make bridges between biology and physics. It allowed people like the economist Brian Arthur to make bridges between economics and biology. And in some sense it provided a context, a set of ideas that were discipline-independent, which was very important.

Murray Gell-Mann: Doyne Farmer is a very bright scientist, originally a theoretical physicist. He spent a long time at Los Alamos National Laboratory, doing excellent work at the Center for Nonlinear Studies. He was one of the people who really got the CNLS excited about branching out from chaotic phenomena in physics into much more general interests, including the study of complex adaptive systems of many kinds. A number of the people who attended the CNLS meeting on evolution, learning, and games have subsequently become involved with the work of the Santa Fe Institute.

Then he and Norman Packard decided they'd go from research into founding an investment firm, utilizing their discoveries about the not entirely random character of the fluctuations of prices in financial markets. Some dogmatic neoclassical economists had kept claiming that the fluctuations around so-called fundamentals in financial markets amounted to a random walk, and they had produced some evidence to support their assertion. But in the last few years it has been shown — I believe quite convincingly — that in fact various markets show fluctuations that are not entirely random. They're at least partly pseudorandom, and that pseudorandomness can be exploited. The possibility of exploitation depends, of course, on how big a space is being traced out by the nonrandom aspects of these fluctuations — as measured, for instance, by the so-called Hausdorff dimension. If that dimension is too large, then the nonrandomness is very hard to exploit. If the dimension is small, then you can probably make use of it.

They concluded that they could make money using the nonrandomness, and they founded an investment firm based on that idea. For a number of months, they worked with play money, and were quite successful with it, and at that point a financier in Chicago connected them with a Swiss bank, which allowed them to use real money. So far, I believe, it's going pretty well.

Richard Dawkins: I met Doyne Farmer in 1987, at the artificial-life conference organized by his colleague Chris Langton. I've also read The Eudemonic Pie, by Thomas A. Bass, and was very amused and entertained by the exploits of Farmer and his friends. Very interesting man.

Stuart Kauffman: I've known Doyne since 1984. He's an extremely bright young physicist. Doyne is charismatic, quite brilliant, creative. He did a lot when he was at Santa Cruz to push the early stages of the development of chaos theory.

After Santa Cruz, he came to Los Alamos, where he continued to develop the theory of chaos. By the early 1980s, he realized that chaos was a done deal, that people had done interesting things and it was time to move on to what was becoming the early stage of complexity. He and I, along with Norman Packard, joined forces to work on a model of autocatalytic sets of polymers. Doyne has gone on to think about other things — in particular, the time-series things he's doing now. He's always insightful, always inventive, freewheeling, eclectic, very clever.

Doyne did major things with chaos theory. It's too bad that he's gone off into business. Doyne could pull off a major coup intellectually, so it makes me sad that he's not lending his intuitive inventiveness more to complexity, because he would have a lot to contribute.

Christopher G. Langton: Doyne Farmer has been a scientific mentor and a good friend, although I don't see him as much as I'd like to these days. Doyne's talents were wasted at Los Alamos, and he had the foresight to escape from LANL and start his own company to apply his nonlinear time-series forecasting techniques to currency and other financial markets. His philosophy is that if his approaches work, they should be self-funding, so he doesn't have to convince some bonehead in Washington that they should be funded. His long-term goal is to make a lot of money in the financial markets, with which he would fund his own institute for the study of complex systems and artificial life. I wish him the best of luck — in a completely objective way, of course.

Excerpted from The Third Culture: Beyond the Scientific Revolution by John Brockman (Simon & Schuster, 1995). Copyright 1995 by John Brockman. All rights reserved.