The metaphors of information processing and computation are at the center of today's intellectual action. A new and unified language of science is beginning to emerge.

Seth Lloyd: The Computational Universe
Paul Steinhardt: The Cyclic Universe
Alan Guth: The Inflationary Universe
Marvin Minsky: The Emotion Universe
Ray Kurzweil: The Intelligent Universe

[Photo: Participants (left to right): Ray Kurzweil, Seth Lloyd, JB, Alan Guth, Paul Steinhardt, Marvin Minsky]

[Photo: Dennis Overbye (The New York Times), Jordan Mejias (Frankfurter Allgemeine Zeitung), Steve Lohr (The New York Times), Steven Levy (Newsweek)]

[Photo: Marvin Minsky, Seth Lloyd, Paul Steinhardt, Alan Guth, Ray Kurzweil]

On July 21, Edge held an event at Eastover Farm which included the physicists Seth Lloyd, Paul Steinhardt, and Alan Guth, computer scientist Marvin Minsky, and technologist Ray Kurzweil. This year, I noted, there are a lot of "universes" floating around. Seth Lloyd: the computational universe (or, if you prefer, the it-and-bit, itty-bitty universe); Paul Steinhardt: the cyclic universe; Alan Guth: the inflationary universe; Marvin Minsky: the emotion universe; Ray Kurzweil: the intelligent universe. I asked each of the speakers to comment on their "universe." All, to some degree, were concerned with information processing and computation as central metaphors. See below for links to their talks and streaming video.

Concepts of information and computation have infiltrated a wide range of sciences, from physics and cosmology, to cognitive psychology, to evolutionary biology, to genetic engineering. Such innovations as the binary code, the bit, and the algorithm have been applied in ways that reach far beyond the programming of computers, and are being used to understand such mysteries as the origins of the universe, the operation of the human body, and the working of the mind.

What's happening in these new scientific endeavors is truly a work in progress. A year ago, at the first REBOOTING CIVILIZATION meeting in July, 2001, physicists Alan Guth and Brian Greene, computer scientists David Gelernter, Jaron Lanier, and Jordan Pollack, and research psychologist Marc D. Hauser could not reach a consensus about exactly what computation is, when it is useful, when it is inappropriate, and what it reveals. Reporting on the event in The New York Times ("Time of Growing Pains for Information Age", August 7, 2001), Dennis Overbye wrote:

Mr. Brockman said he had been inspired to gather the group by a conversation with Dr. Seth Lloyd, a professor of mechanical engineering and quantum computing expert at M.I.T. Mr. Brockman recently posted Dr. Lloyd's statement on his Web site, www.edge.org: "Of course, one way of thinking about all of life and civilization," Dr. Lloyd said, "is as being about how the world registers and processes information. Certainly that's what sex is about; that's what history is about."

Humans have always tended to try to envision the world and themselves in terms of the latest technology. In the 17th and 18th centuries, for example, workings of the cosmos were thought of as the workings of a clock, and the building of clockwork automata was fashionable. But not everybody in the world of computers and science agrees with Dr. Lloyd that the computation metaphor is ready for prime time.

Several of the people gathered under the maple tree had come in the hopes of debating that issue with Dr. Lloyd, but he could not attend at the last moment. Others were drawn by what Dr. Greene called "the glimmer of a unified language" in which to talk about physics, biology, neuroscience and other realms of thought. What happened instead was an illustration of how hard it is to define a revolution from the inside.

Indeed, exactly what computation and information are continue to be subjects of intense debate. But less than a year later, in the "Week In Review" section of the Sunday New York Times ("What's So New In A Newfangled Science?", June 16, 2002) George Johnson wrote about "a movement some call digital physics or digital philosophy — a worldview that has been slowly developing for 20 years."...

Just last week, a professor at the Massachusetts Institute of Technology named Seth Lloyd published a paper in Physical Review Letters estimating how many calculations the universe could have performed since the Big Bang — 10^120 operations on 10^90 bits of data, putting the mightiest supercomputer to shame. This grand computation essentially consists of subatomic particles ricocheting off one another and "calculating" where to go.

As the researcher Tommaso Toffoli mused back in 1984, "In a sense, nature has been continually computing the 'next state' of the universe for billions of years; all we have to do — and, actually, all we can do — is 'hitch a ride' on this huge ongoing computation."

This may seem like an odd way to think about cosmology. But some scientists find it no weirder than imagining that particles dutifully obey ethereal equations expressing the laws of physics. Last year Dr. Lloyd created a stir on Edge.org, a Web site devoted to discussions of cutting edge science, when he proposed "Lloyd's hypothesis": "Everything that's worth understanding about a complex system can be understood in terms of how it processes information."....

Dr. Lloyd did indeed cause a stir when his ideas were presented on Edge in 2001, but George Johnson's recent New York Times piece caused an even greater stir, as Edge received over half a million unique visits the following week, a strong confirmation that something is indeed happening here. (Usual Edge readership is about 60,000 unique visitors a month.) There is no longer any doubt that the metaphors of information processing and computation are at the center of today's intellectual action. A new and unified language of science is beginning to emerge.





THE COMPUTATIONAL UNIVERSE: SETH LLOYD [10.21.02]

Every physical system registers information, and just by evolving in time, by doing its thing, it changes that information, transforms that information, or, if you like, processes that information. Since I've been building quantum computers I've come around to thinking about the world in terms of how it processes information.



Seth Lloyd: EdgeVideo (11:15 min.)


SETH LLOYD is Professor of Mechanical Engineering at MIT and a principal investigator at the Research Laboratory of Electronics. He is also an adjunct assistant professor at the Santa Fe Institute. He works on problems having to do with information and complex systems, from the very small (how do atoms process information? how can you make them compute?) to the very large (how does society process information, and how can we understand society in terms of its ability to process information?).

His seminal work in the fields of quantum computation and quantum communications — including proposing the first technologically feasible design for a quantum computer, demonstrating the viability of quantum analog computation, proving quantum analogs of Shannon's noisy channel theorem, and designing novel methods for quantum error correction and noise reduction — has gained him a reputation as an innovator and leader in the field of quantum computing. Lloyd has been featured widely in the mainstream media including the front page of The New York Times, The LA Times, The Washington Post, The Economist, Wired, The Dallas Morning News, and The Times (London), among others. His name also frequently appears (both as writer and subject) in the pages of Nature, New Scientist, Science and Scientific American.


THE COMPUTATIONAL UNIVERSE

SETH LLOYD: I'm a professor of mechanical engineering at MIT. I build quantum computers that store information on individual atoms and then massage the normal interactions between the atoms to make them compute. Rather than having the atoms do what they normally do, you make them do elementary logical operations like bit flips, NOT operations, AND gates, and OR gates. This allows you to process information not only on a small scale, but in ways that are not possible using ordinary computers. In order to figure out how to make atoms compute, you have to learn how to speak their language and to understand how they process information under normal circumstances.

It's been known for more than a hundred years, ever since Maxwell, that all physical systems register and process information. For instance, this little inchworm right here has something on the order of Avogadro's number of atoms. And dividing by Boltzmann's constant, its entropy is on the order of Avogadro's number of bits. This means that it would take about Avogadro's number of bits to describe that little guy and how every atom and molecule is jiggling around in his body in full detail. Every physical system registers information, and just by evolving in time, by doing its thing, it changes that information, transforms that information, or, if you like, processes that information. Since I've been building quantum computers I've come around to thinking about the world in terms of how it processes information.

A few years ago I wrote a paper in Nature called "Ultimate Physical Limits to Computation," in which I showed that you could rate the information-processing power of physical systems. Say that you're building a computer out of some collection of atoms. How many logical operations per second could you perform? Also, how much information could these systems register? Using relatively straightforward techniques you can show, for instance, that the number of elementary logical operations per second that you can perform with a given amount of energy, E, is just E over h-bar (more precisely, 2E divided by pi times h-bar). [h-bar is essentially 10^-34 joule-seconds.] If you have a kilogram of matter, which has mc^2 — around 10^17 joules — worth of energy, and you ask how many ops per second it could perform, the answer is roughly 10^17 joules divided by h-bar, or about 10^50 ops per second. It would be really spanking if you could have a kilogram of matter — about what a laptop computer weighs — that could process at this rate. Using all the conventional techniques that were developed by Maxwell, Boltzmann, and Gibbs, and then developed by von Neumann and others back at the early part of the 20th century for counting numbers of states, you can count how many bits it could register. What you find is that if you were to turn the thing into a nuclear fireball — which is essentially turning it all into radiation, probably the best way of having as many bits as possible — then you could register about 10^30 bits. That's actually many more bits than you could register if you just stored a bit on every atom, because Avogadro's number of atoms store only about 10^24 bits.
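A quick numerical sketch of the figures quoted above (this is not Lloyd's published calculation; the one-kilogram mass and the rounded constants are illustrative assumptions):

```python
import math

hbar = 1.0546e-34         # reduced Planck constant, J*s
c = 2.998e8               # speed of light, m/s

# The "ultimate laptop": one kilogram of matter, with all of its rest
# energy available for computation.
mass = 1.0                # kg
energy = mass * c**2      # ~9e16 J, i.e. about 10^17 joules

# Margolus-Levitin bound: at most 2E / (pi * hbar) elementary logical
# operations per second for a system of average energy E.
ops_per_second = 2 * energy / (math.pi * hbar)

print(f"rest energy of 1 kg: {energy:.2e} J")        # ~10^17
print(f"max ops per second:  {ops_per_second:.2e}")  # ~5e50, the "10^50 ops per second"
```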

Having done this paper to calculate the capacity of the ultimate laptop, and also to raise some speculations about the role of information processing in, for example, things like black holes, I thought that this was actually too modest a venture, and that it would be worthwhile to calculate how much information you could process if you were to use all the energy and matter of the universe. This came up because back when I was doing a Masters in Philosophy of Science at Cambridge I studied with Stephen Hawking and people like that, and I had an old cosmology text. I realized that I could estimate the amount of energy that's available in the universe, and I knew that if I looked in this book it would tell me how to count the number of bits that could be registered, so I thought I would look and see. If you want to build the most powerful computer you can, you can't do better than including everything in the universe that's potentially available. In particular, if you want to know when Moore's Law, this fantastic exponential doubling of the power of computers every couple of years, must end, it would have to be before every single piece of energy and matter in the universe is used to perform a computation. Actually, just to telegraph the answer, Moore's Law has to end in about 600 years, without doubt. Sadly, by that time the whole universe will be running Windows 2540, or something like that. 99.99% of the energy of the universe will have been licensed by Microsoft by that point, and they'll want more! They really will have to start writing efficient software, by gum. They can't rely on Moore's Law to save their butts any longer.
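The 600-year figure is consistent with simple doubling arithmetic, as in this hedged sketch (the present-day rate of about 10^10 ops per second and the two-year doubling time are illustrative assumptions, not numbers from the talk):

```python
import math

# Illustrative assumptions (not from the talk): a present-day machine doing
# about 10^10 ops per second, and a Moore's-Law doubling time of two years.
current_ops_per_second = 1e10
doubling_time_years = 2.0

# The universe as a whole: ~10^120 ops spread over ~13 billion years.
age_of_universe_seconds = 13e9 * math.pi * 1e7
universe_ops_per_second = 1e120 / age_of_universe_seconds   # ~10^102

doublings_needed = math.log2(universe_ops_per_second / current_ops_per_second)
years_left = doublings_needed * doubling_time_years
print(f"doublings needed: {doublings_needed:.0f}")           # ~300
print(f"years until Moore's Law must end: {years_left:.0f}") # ~600
```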

I did this calculation, which was relatively simple. You take, first of all, the observed density of matter in the universe, which is roughly one hydrogen atom per cubic meter. The universe is about thirteen billion years old, and using the fact that there are pi times 10^7 seconds in a year, you can calculate the total energy that's available in the whole universe. Knowing that amount of energy, you then divide by Planck's Constant — which tells you how many ops per second can be performed — and multiply by the age of the universe, and you get the total number of elementary logical operations that could have been performed since the universe began. You get a number that's around 10^120. It's a little bigger — 10^122 or something like that — but in astrophysics, if you're within a factor of one hundred, you feel that you're okay.
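Here is the same back-of-the-envelope estimate in code, using the rough inputs quoted in the talk (one hydrogen atom per cubic meter, thirteen billion years, pi times 10^7 seconds per year); treat the output as an order-of-magnitude check only:

```python
import math

hbar = 1.05e-34               # J*s
c = 3.0e8                     # m/s
m_hydrogen = 1.67e-27         # kg, mass of one hydrogen atom

# Rough inputs from the talk
density = 1 * m_hydrogen              # one hydrogen atom per cubic meter, in kg/m^3
age_seconds = 13e9 * math.pi * 1e7    # thirteen billion years, pi*10^7 seconds per year

# Energy within the horizon: density * c^2 * volume of a sphere of radius c*t
radius = c * age_seconds
volume = 4 / 3 * math.pi * radius**3
energy = density * c**2 * volume

# Margolus-Levitin rate times the age of the universe
total_ops = (2 * energy / (math.pi * hbar)) * age_seconds
print(f"total ops since the Big Bang: {total_ops:.1e}")   # ~10^120
```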

The other way you can calculate it is by calculating how it progresses as time goes on. The universe has evolved up to now, but how long could it go? One way to figure this out is to take the phenomenological observation of how much energy there is, but another is to assume, in a Guthian fashion, that the universe is at its critical density. Then there's a simple formula for the critical density of the universe in terms of its age; G, the gravitational constant; and the speed of light. You plug that into this formula, assuming the universe is at critical density, and you find that the total number of ops that could have been performed in the universe over the time (T) since the universe began is actually the age of the universe divided by the Planck time — the time at which quantum gravity becomes important — quantity squared. That is, it's the age of the universe squared, divided by the Planck time squared. This is really just taking the energy divided by h-bar, and plugging in a formula for the critical density, and that's the answer you get.
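A quick check of that formula with the standard value of the Planck time (order-one prefactors dropped):

```python
# Quick check that (age of universe / Planck time)^2 is ~10^120 to 10^122.
planck_time = 5.39e-44          # seconds
age_seconds = 13e9 * 3.15e7     # thirteen billion years in seconds

ops_estimate = (age_seconds / planck_time) ** 2
print(f"(t / t_Planck)^2 = {ops_estimate:.1e}")   # ~10^122
```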

This is just a big number. It's reminiscent of other famous big numbers that are bandied about by numerologists. These large numbers are, of course, associated with all sorts of terrible crank science. For instance, there's the famous Eddington-Dirac number, which is 10^40. It's the ratio between the size of the universe and the classical size of the electron, and also the ratio between the electromagnetic force on, say, the hydrogen atom and the gravitational force on the hydrogen atom. Dirac went down the garden path trying to make a theory in which this large number had to be what it was. The number that I've come up with, 10^120, is suspiciously reminiscent of (10^40)^3. This is normally regarded as a coincidence, but in fact it's not a coincidence that the number of ops that could have been performed since the universe began is this number cubed, because it actually turns out to be the first ratio squared times the other one. So whether those two ratios are the same could be a coincidence, but the fact that the number of ops is equal to them cubed is not.

Having calculated the number of elementary logical operations that could have been performed since the universe began, I went and calculated the number of bits, which is a similar, standard sort of calculation. Say that we took all of this beautiful matter around us on lovely Eastover Farm, and vaporized it into a fireball of radiation. This would be the maximum entropy state, and would enable it to store the largest possible amount of information. You can easily calculate how many bits could be stored by the amount of matter that we have in the universe right now, and the answer turns out to be 10^90. This is, just by standard cosmological calculations, (10^120)^(3/4). We can store 10^90 bits in matter, and if one believes in somewhat speculative theories about quantum gravity such as holography — in which the amount of information that can be stored in a volume is bounded by the surface area of that volume divided by the Planck length squared — and if you assume that somehow information can be stored mysteriously on unknown gravitational degrees of freedom, then again you get 10^120. This is because the age of the universe divided by the Planck time, quantity squared, is equal to the size of the universe divided by the Planck length, quantity squared. So we can do 10^120 ops on 10^90 bits.
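A numerical companion to these two bit counts (a sketch only: the 3/4-power relation is exponent arithmetic on the numbers above, and the holographic figure uses the standard Planck length):

```python
# Bits storable in the matter of the universe vs. the holographic bound.
planck_length = 1.6e-35        # meters
c = 3.0e8
age_seconds = 13e9 * 3.15e7
radius = c * age_seconds       # rough horizon size, ~10^26 m

# Standard cosmology: bits in matter ~ (number of ops)^(3/4)
ops = 1e120
bits_in_matter = ops ** 0.75
print(f"(10^120)^(3/4)  = {bits_in_matter:.1e}")     # 10^90

# Holographic bound: (horizon size / Planck length)^2
bits_holographic = (radius / planck_length) ** 2
print(f"(R / l_Planck)^2 = {bits_holographic:.1e}")  # ~10^122
```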

I made these calculations not to suggest any grandiose plan or to reveal large numbers, although of course I ended up with some large numbers, but because I was curious what these numbers were. When I calculated them I actually thought that they couldn't be right, because they were too small. I can think of much bigger numbers than 10^120. There are lots of bigger numbers than that. It was fun to calculate the computational capacity of the universe, but I wanted to get at some picture of how much computation the universe could do if we think of it as performing a computation. These numbers can be interpreted essentially in three ways, two of which are relatively uncontroversial. The first one I already gave you: it's an upper bound to the size of a computer that we could build if we turned everything in the universe into a computer running Windows 2540. That's uncontroversial. So far nobody's managed to find a way to get around that. There's also a second interpretation, which I think is more interesting. One of the things we do with our quantum computers is to use them as analog computers to simulate other physical systems. They're very good at simulating other quantum systems, at simulating quantum field theories, at simulating all sorts of effects down at the quantum mechanical scale that are hard to understand and hard to simulate classically. These numbers are a lower limit to the size of a computer that could simulate the whole universe, because to simulate something you need at least as much stuff as is there. You need as many bits in your simulator as there are bits registered in the system if you are going to simulate it accurately. And if you're going to follow it step by step throughout its evolution, you need at least as many steps in your simulator as the number of steps that occur in the system itself. So these numbers — 10^120 ops, 10^90 bits of matter, or 10^120 bits if you believe in something like holography — also form a lower bound on the size of a computer you would need to simulate the universe as a whole, accurately and exactly. That's also uncontroversial.

The third interpretation, which of course is more controversial, arises if we imagine that the universe is itself a computer and that what it's doing is performing a computation. If this is the case, these numbers say how big that computation is — how many ops have been performed on how many bits within the horizon since the universe began. That, of course, is more controversial, and since publishing this paper I've received what is charitably described as "hate mail" from famous scientists. There have been angry letters to the editor of Physical Review Letters. "How dare you publish a paper like this?" they say. Or "It's just absolutely inconceivable. The standards have really gotten low." Thinking of the universe as a computer is controversial. I don't see why it should be so controversial, because many books of science fiction have already regarded the universe as a computer. Indeed, we even know the answer to the question it's computing — it's 42. The universe is clearly not a computer with a Pentium inside. It's not an electronic computer, though of course it operates partly by quantum electrodynamics, and it's not running Windows — at least not yet. Some of us hope that never happens — though you never can tell — if only because you don't want the universe as a whole to crash on you all of a sudden. Luckily, whatever operating system it has seems to be slightly more reliable so far. But if people try to download the wrong software, or upgrade it in some way, we could have some trouble.

So why is this controversial? For one, it seems to be making a statement that's obviously false. The universe is not an electronic digital computer, it's not running some operating system, and it's not running Windows. Why does it make sense to talk about the universe as performing a computation at all? There's one sense in which it's actually obvious that the universe is performing a computation. Take any physical system — say this quarter, for example. The quarter can register a lot of information. Each atom in it has a position, which registers a certain amount of information; it has some jiggling motion, which registers a few bits of information; and the quarter as a whole can be heads or tails. Whether it comes up heads or tails, the famous coin flip generates a bit of information — unless it's Rosencrantz and Guildenstern Are Dead, in which case it always comes up heads. Because the quarter is a physical system, it's also dynamic and evolves in time. Its physical state is transforming. It's easier to notice if I flip it in the air — it evolves in time, it changes, and as it changes it transforms that information, so the information that describes it goes from one state to another — from heads to tails, heads to tails, heads to tails — really fast. The bit flips, again and again and again. In addition, the positions, momenta, and quantum states of the atoms inside are changing, so the information that they're registering is changing. Merely by existing and evolving in time — by existing — any physical system registers information, and by evolving in time it transforms or processes that information.

It doesn't necessarily transform it or process it in the same way that a digital computer does, but it's certainly performing information processing. From my perspective, it's also uncontroversial that the universe registers 10^90 bits of information, and transforms and processes that information at a rate which is determined by its energy divided by Planck's constant. All physical systems can be thought of as registering and processing information, and how you wish to define computation will determine your view of what computation consists of. If you think of computation as being merely information processing, then it's rather uncontroversial that the universe is computing, but of course many people regard computation as being more than information processing. There are formal definitions of what computation consists of. For instance, there are universal Turing machines, and there is a nice definition, now 70-odd years old, of what it means for something to be able to perform digital computation. Indeed, the kind of computers we have sitting on our desks, as opposed to the kinds we have sitting in our heads or the kind that was in that little inchworm that was going along, are universal digital computers. So digital computation is a more specific, and potentially more powerful, kind of computing than information processing where a physical system is merely evolving in time, because one way to evolve in time is just to sit there like a lump. That's a perfectly fine way of evolving in time, but you might not consider it a computation. Of course, my computer spends a lot of time doing that, so that seems to be a common thing for computers to do.

One of the things that I've been doing recently in my scientific research is to ask this question: Is the universe actually capable of performing things like digital computations? Again, we have strong empirical evidence that computation is possible, because I own a computer. When it's not sitting there like a lump, waiting to be rebooted, it actually performs computation. Whatever the laws of physics are, and we don't know exactly what they are, they do indeed support computation in the form of existing computers. That's one bit of empirical evidence for it.

There's more empirical evidence in the form of these quantum computers that I and colleagues like Dave Cory, Tai Tsun Wu, Ike Chuang, Jeff Kimball, Dave Huan, and Hans Mooij have built. They're actually computers. If you look at a quantum computer you don't see anything, because these molecules are too small. But if you look at what's happening in a quantum computer, it's actually attaining these limits that I described before, these fundamental limits of computation. I have a little molecule, and each atom in the molecule registers a bit of information, because spin up is zero, spin down is one. I flip this bit by putting it in an NMR spectrometer and zapping it with microwaves to make the bit flip. Then I ask: given the energy of interaction between the electromagnetic field I'm applying and that spin, how fast does the bit flip? You find that the bit flips in exactly the time given by this ultimate limit to computation. I take the energy of the interaction and divide by h-bar — or, if I want to be more accurate, I take two times the energy divided by pi times h-bar — and I find that that's exactly how fast this bit flips. Similarly, I can do a more complicated operation, like an exclusive-or operation where, if I have two spins, I make one spin flip if and only if the other spin is up. It's relatively straightforward to do. In fact, people have been doing it since 1948, and if they'd thought of building quantum computers in 1948 they could have, because they actually already had the wherewithal to do it. When this happens — and it's indeed the sort of thing that happens naturally inside an atom — it also takes place at the limits that are given by this fundamental physics of computation. It goes exactly at the speed that it's allowed to go and no faster. It saturates the bound on how fast you can perform a computation.
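To make "exactly this time" concrete, here is an illustrative check with assumed but typical NMR numbers (the 10-gauss radio-frequency field amplitude is an assumption, not a figure from the talk): the conventional pi-pulse duration for flipping a proton spin coincides with the minimum time set by the 2E over pi h-bar limit.

```python
import math

hbar = 1.0546e-34            # reduced Planck constant, J*s
gamma_p = 2.675e8            # proton gyromagnetic ratio, rad s^-1 T^-1
mu_p = gamma_p * hbar / 2    # proton magnetic moment, ~1.4e-26 J/T

B1 = 1.0e-3                  # assumed RF field amplitude, tesla (about 10 gauss)

# Interaction energy between the applied field and the spin
E = mu_p * B1

# Margolus-Levitin minimum time for the spin to reach an orthogonal state
t_limit = math.pi * hbar / (2 * E)

# Conventional NMR answer: a pi-pulse lasting pi / (gamma * B1)
t_pi_pulse = math.pi / (gamma_p * B1)

print(f"fundamental limit:     {t_limit:.3e} s")
print(f"NMR pi-pulse duration: {t_pi_pulse:.3e} s")  # identical: the spin flip saturates the bound
```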

The other neat thing about these quantum computers is that they're also storing a bit of information on every available degree of freedom. Every nuclear spin in the molecules stores exactly one bit of information. We have examples of computers that saturate these ultimate limits of computation, and they look like actual physical systems. They look like alanine molecules, or amino acids, or like chloroform. Similarly, when we do quantum computation using photons, etc. we also perform computation at this limit.

I have not proved that the universe is, in fact, a digital computer and that it's capable of performing universal computation, but it's plausible that it is. It's also a reasonable scientific program to look at the dynamics of the standard model and to try to prove from that dynamics that it is computationally capable. We have strong evidence for this case. Why would this be interesting? For one thing it would justify Douglas Adams and all of the people who've been saying it's a computer all along. But it would also explain some things that have been otherwise paradoxical or confusing about the universe. Alan has done work for a long time on why the universe is so homogeneous, flat, and isotropic. This was unexplained within the standard cosmological model, and your great accomplishment here was to make a wonderful, simple, and elegant model that explains why the universe has these existing features. Another feature that everybody notices about the universe is that it's complex. Why is it complicated? Well nobody knows. It turned out that way. Or if you're a creationist you say God made it that way. If you take a more Darwinian point of view the dynamics of the universe are such that as the universe evolved in time, complex systems arose out of the natural dynamics of the universe. So why would the universe being capable of computation explain why it's complex?

There's a very nice explanation for this, which I think was given back in the '60s; actually, Marvin, maybe you can enlighten me about when this first happened, because I don't know the first instance of it. Computers are famous for being able to do complicated things starting from simple programs. You can write a very short computer program which will cause your computer to start spitting out the digits of pi. If you want to make it slightly more complex you can make it stop spitting out those digits at some point so you can use it for something else. There are short programs that generate all sorts of complicated things. That in itself doesn't constitute an explanation for why the universe itself exhibits all this complexity, but if you combine the fact that you have something that's dynamically, computationally universal with the fact that information is constantly being injected into the universe — by the basic laws of quantum mechanics, quantum fluctuations are all the time injecting bits of information, programming the universe — then you do have a reasonable explanation, which I'll close with.
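One classic example of such a very short program is Gibbons' unbounded spigot algorithm, a few lines that stream out the decimal digits of pi one after another (offered here as an illustration; it is not a program discussed in the talk):

```python
def pi_digits():
    """Stream the decimal digits of pi (Gibbons' unbounded spigot algorithm)."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, t, k, n, l = (10 * q, 10 * (r - n * t), t, k,
                                (10 * (3 * q + r)) // t - 10 * n, l)
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

gen = pi_digits()
print("".join(str(next(gen)) for _ in range(20)))   # 31415926535897932384
```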

About a hundred and twenty years ago, Ludwig Boltzmann proposed an explanation for why the universe is complex. He said that it's just a big thermal fluctuation. His is a famous explanation: the monkeys-typing-on-typewriters explanation for the universe. Say there were a bunch of monkeys typing a bunch of random descriptions into a typewriter. Eventually we would get a book, right? But Boltzmann, among other people, realized right away that this couldn't be right, because the probability of this happening is vanishingly small. If you had one dime that assembled itself miraculously by a thermal fluctuation, the chances of finding another dime would be vanishingly small; you'd never find that happening in the same universe because it's just too unlikely.

But now let's turn to this other metaphor, which I want help from Marvin with. Now the monkeys are not typing into a typewriter, but into a computer keyboard. Let's suppose this computer is accepting what the monkeys are typing as instructions to perform computational tasks. This means that, for instance, because there are short programs for producing the digits of pi, you don't need that many monkeys typing for that long until all of a sudden pi is being produced by the computer. If you've got a monkey that's managed to produce a program to produce a dime, then all it has to do is hit return and it's got two dimes, right? Monkeys are probably pretty good at hitting return. There's a nice theory associated with this called algorithmic information theory, which says that if you've got monkeys typing into a computer, anything that can realistically be described mathematically — anything a computer can compute — will at some point show up for these monkeys. In the monkeys-typing-into-the-computer universe, all sorts of complex things arise naturally by the natural evolution of the universe.

I would suggest, merely as a metaphor here, but also as the basis for a scientific program to investigate the computational capacity of the universe, that this is also a reasonable explanation for why the universe is complex. It gets programmed by little random quantum fluctuations, like the same sorts of quantum fluctuations that mean that our galaxy is here rather than somewhere else. According to the standard model, billions of years ago some little quantum fluctuation, perhaps a slightly higher density of matter, maybe right where we're sitting right now, caused our galaxy to start collapsing around here. It was just a little quantum fluctuation, but it programmed the universe and it's important for where we are, because I'm very glad to be here and not billions of miles away in outer space. Similarly, another famous little quantum fluctuation that programs you is the exact configuration of your DNA. The program takes strands of DNA from your mother and from your father, splits them up, wires them together, and recombines them. This is a process that has lots of randomness in it, as you know if you have siblings. If you trace that randomness down, you find that it actually arises from little quantum fluctuations, which masquerade as thermal and chemical fluctuations. Your genes got programmed by quantum fluctuation. There's nothing wrong with that, nothing to be ashamed of — that's just the way things are. Your genes are very important to you, and they themselves form a kind of program for your life, and for how your body functions.

In this metaphor we actually have a picture of the computational universe, a metaphor which I hope to make scientifically precise as part of a research program. We have a picture for how complexity arises, because if the universe is computationally capable, maybe we shouldn't be so surprised that things are so entirely out of control.



THE CYCLIC UNIVERSE: PAUL STEINHARDT [10.21.02]

...in the last year I've been involved in the development of an alternative theory that turns the cosmic history topsy-turvy. All the events that created the important features of our universe occur in a different order, by different physics, at different times, over different time scales—and yet this model seems capable of reproducing all of the successful predictions of the consensus picture with the same exquisite detail.


Paul Steinhardt: EdgeVideo (11:45 min.)

PAUL STEINHARDT is the Albert Einstein Professor in Science and on the faculty of both the Departments of Physics and Astrophysical Sciences at Princeton University.

He is one of the leading theorists responsible for inflationary theory. He constructed the first workable model of inflation and the theory of how inflation could produce seeds for galaxy formation. He was also among the first to show evidence for dark energy and cosmic acceleration, introducing the term "quintessence" to refer to dynamical forms of dark energy. His collaborator on the cyclic model, Neil Turok, pioneered mathematical and computational techniques which decisively disproved rival theories of structure formation such as cosmic strings, and made leading contributions to inflationary theory and to our understanding of the origin of the matter-antimatter asymmetry in the universe. Together, they not only witnessed but also led firsthand the revolutionary developments in the standard cosmological model caused by the fusion of particle physics and cosmology over the last 20 years.


THE CYCLIC UNIVERSE

PAUL STEINHARDT: I am a theoretical cosmologist, so I am addressing the issue from that point of view. If you were to ask most cosmologists to give a summary of where we stand right now in the field, they would tell you that we live in a very special period in human history where, thanks to a whole host of advances in technology, we can suddenly view the very distant and very early universe in ways that we haven't been able to do ever before. For example, we can get a snapshot of what the universe looked like in its infancy, when the first atoms were forming. We can get a snapshot of what the universe looked like in its adolescence, when the first stars and galaxies were forming. And we are now getting a full-detail, three-dimensional image of what the local universe looks like today. When you put together these different kinds of information, which we're getting for the first time in human history, you obtain a very tight series of constraints on any model of cosmic evolution. If you go back to the different theories of cosmic evolution in the early 1990s, the data we've gathered in the last decade has eliminated all of them—save one, a model that you might think of today as the consensus model. This model involves a combination of the Big Bang model as developed in the 1920s, '30s, and '40s; the Inflationary Theory, which Alan Guth proposed in the 1980s; and a recent amendment that I will discuss shortly. This consensus theory matches the observations we have of the universe today in exquisite detail. For this reason, many cosmologists conclude that we have finally determined the basic cosmic history of the universe.

But I have a rather different point of view, a view that has been stimulated by two events. The first is the recent amendment to which I referred earlier. I want to argue that the recent amendment is not simply an amendment, but a real shock to our whole notion of time and cosmic history. And secondly, in the last year I've been involved in the development of an alternative theory that turns the cosmic history topsy-turvy. All the events that created the important features of our universe occur in a different order, by different physics, at different times, over different time scales—and yet this model seems capable of reproducing all of the successful predictions of the consensus picture with the same exquisite detail.

The key difference between this picture and the consensus picture comes down to the nature of time. The standard model, or consensus model, assumes that time has a beginning that we normally refer to as the Big Bang. According to this model, for reasons we don't quite understand, the universe sprang from nothingness into somethingness, full of matter and energy, and has been expanding and cooling for the past 15 billion years. In the alternative model the universe is endless. Time is endless in the sense that it goes on forever in the past and forever in the future, and, in some sense, space is endless. Indeed, our three spatial dimensions remain infinite throughout the evolution of the universe.

More specifically, this model proposes a universe in which the evolution of the universe is cyclic. That is to say, the universe goes through periods of evolution from hot to cold, from dense to under-dense, from hot radiation to the structure we see today, and eventually to an empty universe. Then, a sequence of events occurs that causes the cycle to begin again. The empty universe is reinjected with energy, creating a new period of expansion and cooling. This process repeats periodically forever. What we're witnessing now is simply the latest cycle.

The notion of a cyclic universe is not new. People have considered this idea as far back as recorded history. The ancient Hindus, for example, had a very elaborate and detailed cosmology based on a cyclic universe. They predicted the duration of each cycle to be 8.64 billion years—a prediction with three-digit accuracy. This is very impressive, especially since they had no quantum mechanics and no string theory! It disagrees with the number that I'm going to suggest, which is trillions of years rather than billions.

The cyclic notion has also been a recurrent theme in Western thought. Edgar Allan Poe and Friedrich Nietzsche, for example, each had cyclic models of the universe, and in the early days of relativistic cosmology, Albert Einstein, Alexander Friedmann, Georges Lemaître, and Richard Tolman were interested in the cyclic idea. I think it is clear why so many have found the cyclic idea to be appealing: If you have a universe with a beginning, you have the challenge of explaining why it began and the conditions under which it began. If you have a universe that is cyclic, it is eternal, so you don't have to explain the beginning.

During the attempts to bring cyclic ideas into modern cosmology, it was discovered in the '20s and '30s that there are various technical problems. The idea at that time was a cycle in which our three-dimensional universe goes through periods of expansion beginning from the Big Bang and then reversal to contraction and a big crunch. The universe bounces, and expansion begins again. One problem is that, every time the universe contracts to a crunch, the density and temperature of the universe rises to an infinite value, and it is not clear if the usual laws of physics can be applied. Second, every cycle of expansion and contraction creates entropy through natural thermodynamic processes, which adds to the entropy from earlier cycles. So, at the beginning of a new cycle, there is higher entropy density than the cycle before. It turns out that the duration of a cycle is sensitive to the entropy density. If the entropy increases, the duration of the cycle increases as well. So, going forward in time, each cycle becomes longer than the one before. The problem is that, extrapolating back in time, the cycles become shorter until, after a finite time, they shrink to zero duration. The problem of avoiding a beginning has not been solved. It has simply been pushed back a finite number of cycles. If we're going to reintroduce the idea of a truly cyclic universe, these two problems must be overcome. The cyclic model that I will describe uses new ideas to do just that.

To appreciate why an alternative model is worth pursuing, it's important to get a more detailed impression of what the consensus picture is like. Certainly some aspects are appealing. But what I want to argue is that, overall, the consensus model is not so simple. In particular, recent observations have forced us to amend the consensus model and make it more complicated. So, let me begin with an overview of the consensus model.

The consensus theory begins with the Big Bang: the universe has a beginning. It's a standard assumption that people have made over the last 50 years, but it's not something we can prove at present from any fundamental laws of physics. Furthermore, you have to assume that the universe began with an energy density less than the critical value. Otherwise, the universe would stop expanding and recollapse before the next stage of evolution, the inflationary epoch. In addition, to reach this inflationary stage, there must be some sort of energy to drive the inflation. Typically this is assumed to be due to an inflation field. You have to assume that in those patches of the universe that began at less than the critical density, a significant fraction of the energy is stored in inflation energy so that it can eventually overtake the universe and start the period of accelerated expansion. All of these are reasonable assumptions, but assumptions nevertheless. It's important to count these assumptions and ingredients, because they are helpful in comparing the consensus model to the challenger.

Assuming these conditions are met, the inflation energy overtakes the matter and radiation after a few instants. The inflationary epoch commences and the expansion of the universe accelerates at a furious pace. The inflation does a number of miraculous things: it makes the universe homogeneous, it makes the universe flat, and it leaves behind certain inhomogeneities, which are supposed to be the seeds for the formation of galaxies. Now the universe is prepared to enter the next stage of evolution with the right conditions. According to the inflationary model, the inflation energy decays into a hot gas of matter and radiation. After a second or so, the first light nuclei form. After a few tens of thousands of years, the slowly moving matter comes to dominate the universe. It's during these stages that the first atoms form, the universe becomes transparent, and the structure in the universe begins to form—the first stars and galaxies. Up to this point the story is relatively simple.

But there is the recent discovery that we've entered a new stage in the evolution of the universe. After the stars and galaxies formed, something strange happened to cause the expansion of the universe to speed up again. During the 15 billion years when matter and radiation dominated the universe and structure was forming, the expansion of the universe was slowing down, because the matter and radiation within it are gravitationally self-attractive and resist the expansion of the universe. Until very recently, it had been presumed that matter would continue to be the dominant form of energy in the universe, and that this deceleration would continue forever.

But we've discovered instead, due to recent observations, that the expansion of the universe is speeding up. This means that most of the energy of the universe is neither matter nor radiation. Rather, another form of energy has overtaken the matter and radiation. For lack of a better term, this new energy form is called "dark energy." Dark energy, unlike the matter and radiation that we're familiar with, is gravitationally self-repulsive. That's why it causes the expansion to speed up rather than slow down. In Newton's theory of gravity, all mass is gravitationally attractive, but Einstein's theory allows the possibility of forms of energy that are gravitationally self-repulsive.

I don't think either the physics or cosmology communities, or even the general public, have fully absorbed the implications of this discovery. This is a revolution in the grand historic sense—in the Copernican sense. In fact, if you think about Copernicus—from whom we derive the word revolution—his importance was that he changed our notion of space and of our position in the universe. By showing that the earth revolves around the sun, he triggered a chain of ideas that led us to the notion that we live in no particular place in the universe; there's nothing special about where we are. Now we've discovered something very strange about the nature of time: that we may live in no special place, but we do live at a special time, a time of recent transition from deceleration to acceleration; from one in which matter and radiation dominate the universe to one in which they are rapidly becoming insignificant components; from one in which structure is forming on ever-larger scales to one in which, because of this accelerated expansion, structure formation stops. We are in the midst of the transition between these two stages of evolution. And just as Copernicus's proposal that the earth is no longer the center of the universe led to a chain of ideas that changed our whole outlook on the structure of the solar system and eventually of the universe, it shouldn't be too surprising that this new discovery of cosmic acceleration could lead to a whole change in our view of cosmic history. That's a big part of the motivation for thinking about our alternative proposal.

With these thoughts about the consensus model in mind, let me turn to the cyclic proposal. Since it's cyclic, I'm allowed to begin the discussion of the cycle at any point I choose. To make the discussion parallel, I'll begin at a point analogous to the Big Bang; I'll call it The Bang. This is a point in the cycle where the universe reaches its highest temperature and density. In this scenario, though, unlike the Big Bang model, the temperature and density don't diverge. There is a maximal, finite temperature. It's a very high temperature, around 10^20 degrees Kelvin—hot enough to evaporate atoms and nuclei into their fundamental constituents—but it's not infinite. In fact, it's well below the so-called Planck energy scale, where quantum gravity effects dominate. The theory begins with a bang and then proceeds directly to a phase dominated by radiation. In this scenario you do not have the inflation one has in the standard scenario. You still have to explain why the universe is flat, you still have to explain why the universe is homogeneous, and you still have to explain where the fluctuations came from that led to the formation of galaxies, but that's not going to be explained by an early stage of inflation. It's going to be explained by yet a different stage in the cyclic universe, which I'll get to.

In this new model, you go directly to a radiation-dominated universe and form the usual nuclear abundances; then go directly to a matter-dominated universe in which the atoms and galaxies and larger-scale structure form; and then proceed to a phase of the universe dominated by dark energy. In the standard case, the dark energy comes as a surprise, since it is something you have to add into the theory to make it consistent with what we observe. In the cyclic model, the dark energy moves to center stage as the key ingredient that is going to drive the universe, and in fact drives the universe into the cyclic evolution. The first thing the dark energy does when it dominates the universe is what we observe today: it causes the expansion of the universe to begin to accelerate. Why is that important? Although this acceleration rate is a hundred orders of magnitude smaller than the acceleration one gets in inflation, if you give the universe enough time, it actually accomplishes the same feat that inflation does. Over time it thins out the distribution of matter and radiation in the universe, making the universe more and more homogeneous and isotropic—in fact, making it perfectly so—driving it into what is essentially a vacuum state.

Seth Lloyd said there were 10^80 or 10^90 bits inside the horizon, but if you were to look around the universe in a trillion years, you would find on average no bits inside your horizon, or less than one bit inside your horizon. In fact, when you count these bits, it's important to realize that now that the universe is accelerating, our computer is actually losing bits from inside our horizon. This is something that we observe.

At the same time that the universe is being made homogeneous and isotropic, it is also being made flat. If the universe had any warp or curvature to it, the stretching of the universe over this long period of time, although it's a slow process, makes the space extremely flat. If this continued forever, of course, that would be the end of the story. But in this scenario, just like inflation, the dark energy only survives for a finite period, and triggers a series of events that eventually lead to a transformation of energy from gravity into new energy and radiation that will then start a new period of expansion of the universe. From a local observer's point of view, it looks like the universe goes through exact cycles; that is to say, it looks like the universe empties out each round, and new matter and radiation are created, leading to a new period of expansion. In this sense it's a cyclic universe. If you were a global observer and could see the entire universe, you'd discover that our three dimensions are forever infinite in this story. What's happened is that at each stage, when we create matter and radiation, it gets thinned out. It's out there somewhere, but it's getting thinned out. Locally, it looks like the universe is cyclic, but globally the universe has a steady evolution in which, over time and throughout our three dimensions, entropy increases from cycle to cycle.

Exactly how this works in detail can be described in various ways. I will choose to present a very nice geometrical picture that is motivated by superstring theory. We use only a few basic elements from superstring theory, so you don't really have to know anything about superstring theory to understand what I'm going to talk about, except to understand that some of the strange things that I'm going to introduce I am not introducing for the first time. They are already sitting there in superstring theory waiting to be put to good purpose.

One of the ideas in superstring theory is that there are extra dimensions; it's an essential element of that theory that is necessary to make it mathematically consistent. In one particular formulation of that theory the universe has a total of 11 dimensions. Six of them are curled up into a little ball so tiny that, for my purposes, I'm just going to pretend that they're not there. However, there are three spatial dimensions, one time dimension, and one additional dimension that I do want to consider. In this picture, the three dimensions with which we're familiar and through which we move lie along a hypersurface, or membrane. This membrane is a boundary of the extra dimension. There is another boundary, or membrane, on the other side. In between, there's an extra dimension that, if you like, only exists over a certain interval. It's like we are one end of a sandwich, in between which there is a so-called bulk volume of space. These surfaces are referred to as orbifolds or branes—the latter referring to the word membrane. The branes have physical properties. They have energy and momentum, and when you excite them you can produce things like quarks and electrons. We are composed of the quarks and electrons on one of these branes. And, since quarks and leptons can only move along branes, we are restricted to moving along, and seeing only, the three dimensions of our brane. We cannot see directly the bulk or any matter on the other brane.

In the cyclic universe, at regular intervals of trillions of years, these two branes smash together. This creates all kinds of excitations—particles and radiation. The collision thereby heats up the branes, and then they bounce apart again. The branes are attracted to each other through a force that acts just like a spring, causing the branes to come together at regular intervals. To describe it more completely, what's happening is that the universe goes through two kinds of stages of motion. When the universe has matter and radiation in it, or when the branes are far enough apart, the main motion is the branes stretching, or, equivalently, our three dimensions expanding. During this period, the branes more or less remain a fixed distance apart. That's what's been happening, for example, in the last 15 billion years. During these stages, our three dimensions are stretching just as they normally would. At a microscopic distance away, there is another brane sitting and expanding, but since we can't touch, feel, or see across the bulk, we can't sense it directly. If there is a clump of matter over there, we can feel the gravitational effect, but we can't see any light or anything else that it emits, because anything it emits is going to move along that brane. We only see things that move along our own brane.

Next, the energy associated with the force between these branes takes over the universe. From our vantage point on one of the branes, this acts just like the dark energy we observe today. It causes the branes to accelerate in their stretching to the point where all the matter and radiation produced since the last collision is spread out, and the branes become essentially smooth, flat, empty surfaces. If you like, you can think of them as being wrinkled and full of matter up to this point, and then stretching by a fantastic amount over the next trillion years. The stretching causes the mass and energy on the brane to thin out and the wrinkles to be smoothed out. After trillions of years, the branes are, for all intents and purposes, smooth, flat, parallel and empty.

Then, the force between these two branes slowly brings the branes together. As it brings them together, the force grows stronger and the branes speed towards one another. When they collide, there's a walloping impact—enough to create a high density of matter and radiation with a very high, albeit finite, temperature. The two branes go flying apart, more or less back to where they were, and then the new matter and radiation (through the action of gravity) causes the branes to begin a new period of stretching.

In this picture it's clear that the universe is going through periods of expansion, and a funny kind of contraction. Where the two branes come together, it's not a contraction of our dimensions, but a contraction of the extra dimension. Before the contraction, all matter and radiation has been spread out, but, unlike the old cyclic models of the 20's and 30's, it doesn't come back together again during the contraction because our three dimensions—that is, the branes—remain stretched out. Only the extra dimension contracts. This process repeats itself cycle after cycle.

If you compare the cyclic model to the consensus picture, two of the functions of inflation—namely, flattening and homogenizing the universe—are accomplished by the period of accelerated expansion that we've now just begun. Of course, I really mean the analogous expansion that occurred one cycle ago before the most recent Bang. The third function of inflation—producing fluctuations in the density—occurs as these two branes come together. As they approach, quantum fluctuations cause the branes to begin to wrinkle. And because they are wrinkled, they do not collide everywhere at the same time. Rather, some regions collide a bit earlier than others. This means that some regions reheat to a finite temperature and begin to cool a little bit before other regions. When the branes come apart again, the temperature of the universe is not perfectly homogeneous but has spatial variations left over from the quantum wrinkles.

Remarkably, although the physical processes are completely different, and the time scale is completely different—this is taking billions of years, instead of 10^-30 seconds—it turns out that the spectrum of fluctuations you get in the distribution of energy and temperature is essentially the same as what you get in inflation. Hence, the cyclic model is also in exquisite agreement with all of the measurements of the temperature and mass distribution of the universe that we have today.

Because the physics in these two models is quite different, there is an important distinction in what we would observe if one or the other were actually true—although this effect has not been detected yet. In inflation, when you create fluctuations, you don't just create fluctuations in energy and temperature, but you also create fluctuations in spacetime itself, so-called gravitational waves. That's a feature that we hope to look for in experiments in the coming decades as a verification of the consensus model. In our model you don't get those gravitational waves. The essential difference is that inflationary fluctuations are created in a hyperrapid, violent process that is strong enough to create gravitational waves, whereas cyclic fluctuations are created in an ultraslow, gentle process that is too weak to produce gravitational waves. That's an example where the two models give an observational prediction that is dramatically different. It's just difficult to observe at the present time.

What's fascinating at the moment is that we have two paradigms that are now available to us. On the one hand they are poles apart, in terms of what they tell us about the nature of time, about our cosmic history, about the order in which events occur, and about the time scale on which they occur. On the other hand they are remarkably similar in terms of what they predict about the universe today. Ultimately what will decide between the two will be a combination of observations—for example, the search for cosmic gravitational waves—and theory—because a key aspect of this scenario entails assumptions about what happens at the collision between branes that might be checked or refuted in superstring theory. In the meantime, for the next few years, we can all have great fun speculating about the implications of each of these ideas, which one we prefer, and how we can best distinguish between them.

 


THE INFLATIONARY UNIVERSE: ALAN GUTH [10.21.02]


Alan Guth: EdgeVideo (9:40 min.)

Inflationary theory itself is a twist on the conventional Big Bang theory. The shortcoming that inflation is intended to fill in is the basic fact that although the Big Bang theory is called the Big Bang theory it is, in fact, not really a theory of a bang at all; it never was.

ALAN GUTH, father of the inflationary theory of the Universe, is the Victor F. Weisskopf Professor of Physics at MIT and author of The Inflationary Universe: The Quest for a New Theory of Cosmic Origins.

THE INFLATIONARY UNIVERSE

ALAN GUTH: Paul Steinhardt did a very good job of presenting the case for the cyclic universe. I'm going to describe the conventional consensus model, the one he was arguing the cyclic model improves upon. I agree with what Paul said at the end of his talk as far as comparing these two models; it is yet to be seen which one works. But there are two grounds for comparing them. One is that in both cases the theory needs to be better developed. This is more true for the cyclic model, where one has the issue of what happens when branes collide. The cyclic theory could die when that problem finally gets solved definitively. Secondly, there is, of course, the observational comparison of the gravitational wave predictions of the two models.

A brane is short for membrane, a term that comes out of string theories. String theories began purely as theories of strings, but when people began to study their dynamics more carefully, they discovered that for consistency it was not possible to have a theory which only discussed strings. Whereas a string is a one-dimensional object, the theory also had to include the possibility of membranes of various dimensions to make it consistent, which led to the notion of branes in general. The theory that Paul described in particular involves a four-dimensional space plus one time dimension, which he called the bulk. That four-dimensional space was sandwiched between two branes.

That's not what I'm going to talk about. I want to talk about the conventional inflationary picture, and in particular the great boost that that picture has attained over the past few years by the somewhat shocking revelation of a new form of energy that exists in the universe, the energy that for lack of a better name is typically called "dark energy."

But let me start the story further back. Inflationary theory itself is a twist on the conventional Big Bang theory. The shortcoming that inflation is intended to fill in is the basic fact that although the Big Bang theory is called the Big Bang theory it is, in fact, not really a theory of a bang at all; it never was. The conventional Big Bang theory, without inflation, was really only a theory of the aftermath of the Bang. It started with all of the matter in the universe already in place, already undergoing rapid expansion, already incredibly hot. There was no explanation of how it got that way. Inflation is an attempt to answer that question, to say what "banged," and what drove the universe into this period of enormous expansion. Inflation does that very wonderfully. It explains not only what caused the universe to expand, but also the origin of essentially all the matter in the universe at the same time. I qualify that with the word "essentially" because in a typical theory inflation needs about a gram's worth of matter to start. So, inflation is not quite a theory of the ultimate beginning, but it is a theory of evolution that explains essentially everything that we see around us, starting from almost nothing.

The basic idea behind inflation is that a repulsive form of gravity caused the universe to expand. General relativity predicts this repulsive form of gravity from the beginning; in the context of general relativity you basically need a material with a negative pressure to create repulsive gravity. According to general relativity it's not just matter densities or energy densities that create gravitational fields; it's also pressures. A positive pressure creates a normal attractive gravitational field of the kind that we're accustomed to, but a negative pressure would create a repulsive kind of gravity. It also turns out that according to modern particle theories, materials with a negative pressure are easy to construct out of fields which exist according to these theories. By putting together these two ideas — the fact that particle physics gives us states with negative pressures, and that general relativity tells us that those states cause a gravitational repulsion — we reach the origin of the inflationary theory.
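
To make the pressure argument concrete, the standard relation behind it (the acceleration form of the Friedmann equation, textbook general relativity rather than anything specific to this talk) reads, in LaTeX notation:

    \frac{\ddot{a}}{a} \;=\; -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right)

Here a(t) is the scale factor of the universe, \rho the energy density, and p the pressure. Ordinary matter and radiation have p \ge 0, so the right-hand side is negative and the expansion decelerates; any material with p < -\rho c^{2}/3 flips the sign, and the expansion accelerates. That sign flip is the "repulsive gravity" being described here.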

By answering the question of what drove the universe into expansion, the inflationary theory can also answer some questions about that expansion that would otherwise be very mysterious. There are two very important properties of our observed universe that were never really explained by the Big Bang theory; they were just part of one's assumptions about the initial conditions. One of them is the uniformity of the universe — the fact that it looks the same everywhere, no matter which way you look. It's both isotropic, meaning the same in all directions, and homogeneous, meaning the same in all places. The conventional Big Bang theory never really had an explanation of that; it just had to be assumed from the start. The problem is that while we knew, for example, that any set of objects will approach a uniform temperature if you let them sit for a long time, and that you could calculate how much time the universe would take to reach a uniform temperature, it wasn't nearly enough. To explain, for example, the uniformity of temperature that we see in the cosmic background radiation in the standard Big Bang theory, you would need for that energy and information to transmit itself at about a hundred times the speed of light just to allow the possibility that the universe could have smoothed itself out and could have achieved such a uniform temperature by the time the cosmic background radiation was released.

In the inflationary theory this problem goes away completely, because in contrast to the conventional theory it postulates a period of accelerated expansion while this repulsive gravity is taking place. That means that if we follow our universe backwards in time towards the beginning using inflationary theory, we see that it started from something much smaller than you ever could have imagined in the context of conventional cosmology without inflation. While the region that would evolve to become our universe was incredibly small, there was plenty of time for it to reach a uniform temperature, just like a cup of coffee sitting on the table cools down to room temperature. Once this uniformity is established on this tiny scale by normal thermal-equilibrium processes — and I'm talking now about something that's about a billion times smaller than the size of a single proton — inflation can take over, and cause that to expand rapidly, and to become large enough to encompass the entire visible universe. The inflationary theory not only allows the possibility for the universe to be uniform, but also tells us why it's uniform: It's uniform because it came from something that had time to become uniform, and was then stretched by this process of inflation.

The second peculiar feature of our universe that inflation does a wonderful job of explaining, and for which there never was a prior explanation, is the flatness of the universe and the fact that the geometry of the universe is so close to Euclidean. In the context of relativity, Euclidean geometry does not prevail; it's an oddity. With general relativity, curved space is the generic case. In the case of the universe as a whole, once we decide that the universe is homogeneous and isotropic, then this issue of flatness becomes directly related to the relationship between the mass density and the expansion rate of the universe. A large mass density would cause space to curve into a closed universe in the shape of a ball; if the mass density dominated, the universe would be a closed space with a finite volume and no edge. If a spaceship traveled in what it thought was a straight line for a long enough distance it would end up back where it started from. In the alternative case, if the expansion dominated, the universe would be geometrically open. Geometrically open spaces have the opposite geometric properties from closed spaces. They're infinite. In a closed space two lines which are parallel will start to converge; in an open space two lines which are parallel will start to diverge. In either case what you see is very different from Euclidean geometry.

In terms of the evolution of the universe, the fact that the universe is at least approximately flat today requires that the early universe was extraordinarily flat. The universe tends to evolve away from flatness, so even given what we knew ten or twenty years ago — we know much better now that the universe is extraordinarily close to flat — we could have extrapolated backwards and discovered that, for example, at one second after the Big Bang, the mass density of the universe must have been at the critical density where it counterbalanced the expansion rate to produce a flat universe. It must have been at that critical density to an accuracy of 15 decimal places. The conventional Big Bang theory gave us no reason to believe that there was any mechanism to require that, and it has to have been that way to explain why the universe looks the way it does today. The conventional Big Bang theory without inflation really only worked if you fed into it initial conditions which were highly finely tuned to make it just right to produce the universe like the one we see. Inflationary theory gets around this flatness problem because inflation changes the way the geometry of the universe evolves with time. Even though the universe always evolves away from flatness at all other periods in the history of the universe, during the inflationary period the universe is actually driven towards flatness incredibly quickly. If you had approximately 10^-34 seconds or so of inflation at the beginning of the universe, that's all you need to be able to start out a factor of 10^5 or 10^10 away from being flat. Inflation would then have driven the universe to be flat closely enough to explain what we see today.
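
To put the flatness argument in one formula (standard cosmology, added here only as a worked illustration), write the deviation from critical density as

    \Omega - 1 \;=\; \frac{k}{a^{2}H^{2}}

where \Omega is the ratio of the actual density to the critical density, k the spatial-curvature constant, a the scale factor, and H the expansion rate. During ordinary radiation- or matter-dominated expansion the product aH decreases, so any small deviation |\Omega - 1| grows with time; that is the precise sense in which the universe "evolves away from flatness," and why \Omega had to equal 1 to roughly fifteen decimal places at one second. During inflation, H is nearly constant while a grows exponentially, so |\Omega - 1| is driven toward zero like e^{-2Ht}, which is how a brief burst of inflation can erase a large initial deviation.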

There are two primary predictions that come out of inflationary models that are at least testable today. They have to do (1) with the mass density of the universe, and (2) with the properties of these density non-uniformities. I'd like to say a few words about each of them, one at a time. Let me begin with the question of flatness.

The mechanism that inflation provides that drives the universe towards flatness will in almost all cases overshoot, not giving us a universe that is just nearly flat today, but a universe that's almost exactly flat today. This can be avoided, and people have at times tried to design versions of inflation that avoided it, but to do so you have to go about it in a very contrived way. You have to arrange for inflation to end at just the right point, where it's almost made the universe flat but not quite. It requires a lot of delicate fine-tuning, and in the days when we thought the universe was open some people tried to design such models. I never did. They always looked very contrived, and never really caught on.

The generic inflationary model drives the universe to be completely flat, which means that one of the predictions is that today the mass density of the universe should be at the critical value which makes the universe geometrically flat. Until three or four years ago no astronomers believed that. They told us that if you looked at just the visible matter, you would see only about one percent of what you needed to make the universe flat. But they also said that they could offer more than that — there's also dark matter. Dark matter is matter that's inferred to exist because of the gravitational effect that we see it has on visible matter. It's seen, for example, in the rotation curves of galaxies. When astronomers first measured how fast galaxies rotate, they found they were spinning so fast that if the only matter present were what you saw, galaxies would just fly apart. To stabilize galaxies it was necessary to assume that there was a large amount of dark matter in the galaxy — about five or ten times the amount of visible matter — which was needed just to hold the galaxy together. This problem repeats itself when one talks about the motion of galaxies within clusters of galaxies. The motion of galaxies in clusters is much more random and chaotic than the rotation of a spiral galaxy, but the same issues arise. You can ask how much mass is needed to hold those clusters of galaxies together, and the answer is that you still need significantly more matter than what you assumed was in the galaxies. Adding all of that together, astronomers came up only to about a third of the critical density. They were pretty well able to guarantee that there wasn't any more than that out there; that was all they could detect. That was bad for the inflationary model, but many of us still had faith that inflation had to be right and sooner or later the astronomers would come up with something.
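
The rotation-curve inference sketched above can be written in one line (again, standard textbook reasoning rather than anything specific to the talk). For a star on a circular orbit of radius r, the orbital speed satisfies

    v(r) \;=\; \sqrt{\frac{G\,M(r)}{r}}

where M(r) is the mass enclosed within the orbit. If essentially all the mass were the visible matter concentrated toward the center, v would fall off as 1/\sqrt{r} in the outer regions; the measured curves instead stay roughly flat, which requires M(r) to keep growing roughly in proportion to r, i.e., several times more unseen mass than visible mass.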

And they did, although what they came up with was something very different from the kind of matter that we were talking about previously. Starting in 1998, the remarkable fact was observed that the universe today appears to be accelerating, not slowing down. As I said at the beginning of this talk, the theory of general relativity allows for that. What's needed is a material with a negative pressure. We're now therefore convinced that our universe must be permeated with a material with negative pressure, which is causing the acceleration that we're now seeing. We don't know what this material is, but we're referring to it as "dark energy." Even without knowing what it is, general relativity by itself allows us to calculate how much mass has to be out there to cause the observed acceleration, and it turns out to be almost exactly equal to two-thirds of the critical density. This is exactly what was missing from the previous calculations! With the addition of the assumption that this dark energy is real, we now have complete agreement between what the astronomers are telling us the mass density of the universe is, and what inflation predicts.

The other important prediction that comes out of inflation is becoming even more persuasive than the issue of flatness: namely, the issue of density perturbations. Inflation has what in some ways is a wonderful characteristic — that by stretching everything out (and Paul's model takes advantage of the same effect) you can smooth out any non-uniformities that were present prior to this expansion. Inflation does not depend sensitively on what you assume existed before inflation; everything there just gets washed away by the enormous expansion. For a while, in the early days of developing the inflationary model, we were all very worried that this would lead to a universe that would be absolutely, completely smooth. After a while the solution became clear: quantum fluctuations would save us. The universe really is fundamentally a quantum mechanical system, and it became clear that quantum theory was necessary not just to understand atoms, but also to understand galaxies. It is rather remarkable that something as fundamental as the basic ideas of quantum theory could have such a broad sweep. The point is that as inflation is ending, the classical Big Bang model would predict a completely uniform density of matter. According to quantum mechanics, however, everything is probabilistic. There are quantum fluctuations everywhere, which means that in some places the mass density would be slightly higher than average, and in other places it would be slightly lower than average. That's exactly the sort of thing you want to explain the structure of the universe. You can even go ahead and calculate the spectrum of these non-uniformities, which is something that Paul and I both worked on in the early days and had great fun with. The answer that we both came up with was that, in fact, quantum mechanics produces just the right spectrum of non-uniformities.

We really can't predict the overall amplitude — that is, the intensity of these ripples — unless we know more about the fundamental theory. At the present time, we have to take the overall factor that multiplies these ripples from observation. But we can predict the spectrum — that is, how the intensity of the ripples varies with the different wavelengths of all the different ripples that lie on top of each other. We knew how to do this back in 1982, but recently it has actually become possible for astronomers to see these non-uniformities imprinted on the cosmic background radiation. These were first observed back in 1992 by the COBE satellite, but at that time they could only see very broad features, since the angular resolution of the satellite was only about seven degrees. Now, they've gotten down to angular resolutions of about a tenth of a degree. They basically plot the spectrum — the intensity of these ripples as a function of wavelength. Gradually these experimental plots are becoming more and more detailed.

The most recent data set came from an experiment called the Cosmic Background Imager, which released a new set of data in May that is rather spectacular. The graph of the spectrum that I'm talking about is rather complicated because these fluctuations are produced during the inflationary era, but then oscillate as the early universe evolves. Thus, what you see is a picture that includes the original spectrum plus all of the oscillations, which depend on properties of the universe. A remarkable thing is that these curves now show five separate peaks, all of which the data fit very nicely. You can see that the peaks are in about the right place and have about the right heights, without any ambiguity, and the leading peak is rather well mapped out. It's a rather remarkable fit between theory based on wild ideas about quantum fluctuations at 10^-35 seconds and what astronomers now actually measure. It's just in beautiful agreement.

At the present time this inflationary theory, which a few years ago was in significant conflict with observation, now works perfectly with our measurements of the mass density and the fluctuations. The evidence for a theory that's either the one that I'm talking about or something very close to it is very, very strong.

I'd just like to close by saying that although I've been talking about inflation as if it were a single theory, I shouldn't, really. It's very important to remember that inflation is really a class of theories. If inflation is right it's by no means the end of our study of the origin of the universe; really, it's closer to the beginning. There are many different versions of inflation, and in fact the cyclic model that Paul described could be considered one version. It's a rather novel version since it puts the inflation at a completely different era of the history of the universe, but inflation is still doing many of the same things. There are many versions of inflation that are much closer to the kinds of theories that we were developing in the '80s and '90s, so saying that inflation is right is by no means the end of the story. There's still a lot of flexibility here, and a lot to be learned. And what needs to be learned will involve both the study of cosmology and the study of the underlying particle physics, which is essential to these models.


THE EMOTION UNIVERSE: MARVIN MINSKY [10.21.02]

To say that the universe exists is silly, because it says that the universe is one of the things in the universe. So there's something wrong with questions like, "What caused the Universe to exist?"


Marvin Minsky: EdgeVideo (12 min.)

MARVIN MINSKY, mathematician and computer scientist, is considered one of the fathers of Artificial Intelligence. He is Toshiba Professor of Media Arts and Sciences at the Massachusetts Institute of Technology; cofounder of MIT's Artificial Intelligence Laboratory; and the author of eight books, including The Society of Mind.

THE EMOTION UNIVERSE

MARVIN MINSKY: I was listening to this group talking about universes, and it seems to me there's one possibility that's so simple that people don't discuss it. Certainly a question that occurs in all religions is, "Who created the universe, and why? And what's it for?" But something is wrong with such questions because they make extra hypotheses that don't make sense. When you say that X exists, you're saying that X is in the Universe. It's all right to say, "this glass of water exists" because that's the same as "This glass is in the Universe." But to say that the universe exists is silly, because it says that the universe is one of the things in the universe. So there's something wrong with questions like, "What caused the Universe to exist?"

The only way I can see to make sense of this is to adopt the famous "many-worlds theory," which says that there are many "possible universes" and that there is nothing distinguished or unique about the one that we are in, except that it is the one we are in. In other words, there's no need to think that our world 'exists'; instead, think of it as being like a computer game, and consider the following sequence of "Theories of It":

(1) Imagine that somewhere there is a computer that simulates a certain World, in which some simulated people evolve. Eventually, when these become smart, one of those persons asks the others, "What caused this particular World to exist, and why are we in it?" But of course that World doesn't 'really exist' because it is only a simulation.

(2) Then it might occur to one of those people that, perhaps, they are part of a simulation. Then that person might go on to ask, "Who wrote the Program that simulates us, and who made the Computer that runs that Program?"

(3) But then someone else could argue, "Perhaps there is no Computer at all. Only the Program needs to exist, because once that Program is written, it will determine everything that will happen in that simulation. After all, once the computer and program have been described (along with some set of initial conditions), this will explain the entire World, including all its inhabitants, and everything that will happen to them. So the only real question is: what is that program, who wrote it, and why?"

(4) Finally another one of those 'people' observes, "No one needs to write it at all! It is just one of 'all possible computations'! No one has to write it down. No one even has to think of it! So long as it is 'possible in principle,' then people in that Universe will think and believe that they exist!"

So we have to conclude that it doesn't make sense to ask about why this world exists. However, there still remain other good questions to ask, about how this particular Universe works. For example, we know a lot about ourselves, in particular about how we evolved, and we can see that, for this to occur, the 'program' that produced us must have certain kinds of properties. For example, there cannot be structures that evolve (that is, in the Darwinian way) unless there can be some structures that can make mutated copies of themselves; this means that some things must be stable enough to have some persistent properties. Something like molecules that last long enough, etc.

So this, in turn, tells us something about Physics: a universe that has people like us must obey some conservation-like laws; otherwise nothing would last long enough to support a process of evolution. We couldn't 'exist' in a universe in which things are too frequently vanishing, blowing up, or being created in too many places. In other words, we couldn't exist in a universe that has the wrong kinds of laws. (To be sure, this leaves some disturbing questions about worlds that have no laws at all. This is related to what is sometimes called the "Anthropic Principle." That's the idea that the only worlds in which physicists can ask about what created the universe are the worlds that can support such physicists.)

The Certainty Principle

In older times, when physicists tried to explain quantum theory to the public, in particular what they call the uncertainty principle, they'd say that the world isn't the way Newton described it. They emphasized 'uncertainty', that everything is probabilistic and indeterminate. However, they rarely mentioned the fact that it's really just the opposite: it is only because of quantization that we can depend on anything! For example, in classical Newtonian physics, complex systems can't be stable for long. Jerry Sussman and John Wisdom once simulated our Solar System, and showed that the large outer planets would be stable for billions of years. But they did not simulate the inner planets, so we have no assurance that our planet is stable. It might be that enough of the energy of the big planets could be transferred to throw our Earth out into space. (They did show that the orbit of Pluto must be chaotic.)

Yes, quantum theory shows that things are uncertain: if you have a DNA molecule there's a possibility that one of its carbon atoms will suddenly tunnel out and appear in Arcturus. However, at room temperature a molecule of DNA is almost certain to stay in its place for billions of years, because of quantum mechanics, and that is one of the reasons that evolution is possible! For quantum mechanics is the reason why most things don't usually jump around! So this suggests that we should take the anthropic principle seriously, by asking, "Which possible universes could have things that are stable enough to support our kind of evolution?" Apparently, the first cells appeared quickly after the earth got cool enough; I've heard estimates that this took less than a hundred million years. But then it took another three billion years to get to the kinds of cells that could evolve into animals and plants. This could only happen in possible worlds whose laws support stability. It could not happen in a Newtonian Universe. So this is why the world that we're in needs something like quantum mechanics, to keep things in place! (I discussed this "Certainty Principle" in my chapter in the book Feynman and Computation, A.J.G. Hey, editor, Perseus Books, 1999.)

Intelligence

Why don't we yet have good theories about what our minds are and how they work? In my view this is because we're only now beginning to have the concepts that we'll need for this. The brain is a very complex machine, far more advanced than today's computers, yet it was not until the 1950s that we began to acquire even such simple ideas about (for example) memory as the concepts of data structures, cache memories, priority interrupt systems, and such representations of knowledge as 'semantic networks.' Computer science now has many hundreds of such concepts that were simply not available before the 1960s.

Psychology itself did not develop much before the twentieth century. A few thinkers like Aristotle had good ideas about psychology, but progress thereafter was slow; it seems to me that Aristotle's suggestions in the Rhetoric were about as good as those of other thinkers until around 1870. Then came the era of Galton, Wundt, William James and Freud, and we saw the first steps toward ideas about how minds work. But still, in my view, there was little more progress until the Cybernetics of the '40s, the Artificial Intelligence of the '50s and '60s, and the Cognitive Psychology that started to grow in the '70s and '80s.

Why did psychology lag so far behind so many other sciences? In the late 1930s a botanist named Jean Piaget in Switzerland started to observe the behavior of his children. In the next ten years of watching these kids grow up he wrote down hundreds of little theories about the processes going on in their brains, and wrote about 20 books, all based on observing three children carefully. Although some researchers still nitpick about his conclusions, the general structure seems to have held up, and many of the developments he described seem to happen at about the same rate and the same ages in all the cultures that have been studied. The question isn't, "Was Piaget right or wrong?" but "Why wasn't there someone like Piaget 2000 years ago?" What was it about all previous cultures that no one thought to observe children and try to figure out how they worked? It certainly was not from lack of technology: Piaget didn't need cyclotrons, but only glasses of water and pieces of candy.

Perhaps psychology lagged behind because it tried to imitate the more successful sciences. For example, in the early 20th century there were many attempts to make mathematical theories about psychological subjects, notably learning and pattern recognition. But there's a problem with mathematics. It works well for physics, I think, because fundamental physics has very few laws, and the kinds of mathematics that developed in the years before computers were good at describing systems based on just a few laws (say, 4, 5, or 6) but don't work well for systems based on the order of a dozen laws. Physicists like Newton and Maxwell discovered ways to account for large classes of phenomena based on three or four laws; however, with 20 assumptions, mathematical reasoning becomes impractical. The beautiful subject called the Theory of Groups begins with only five assumptions, yet this leads to systems so complex that people have spent their lifetimes on them. Similarly, you can write a computer program with just a few lines of code that no one can thoroughly understand; however, at least we can run the computer to see how it behaves, and sometimes see enough then to make a good theory.
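
One standard example of that last point (my illustration, not one Minsky cites): the short rule below, the so-called Collatz iteration, is trivial to state, yet whether it reaches 1 for every starting number is a famous open problem. All we can really do is run it and watch.

    def collatz_steps(n):
        """Iterate n -> n/2 (if even) or 3n+1 (if odd); count steps until n reaches 1."""
        steps = 0
        while n != 1:
            n = n // 2 if n % 2 == 0 else 3 * n + 1
            steps += 1
        return steps

    # Easy to run, but no one can prove it always terminates.
    print([collatz_steps(n) for n in range(1, 11)])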

However, there's more to computer science than that. Many people think of computer science as the science of what computers do, but I think of it quite differently: computer science is a new collection of ways to describe and think about complicated systems. It comes with a huge library of new, useful concepts about how mental processes might work. For example, most of the ancient theories of memory envisioned knowledge as being like facts in a box. Later theories began to distinguish ideas about short- and long-term memories, and conjectured that skills are stored in other ways.

However, computer science suggests dozens of plausible ways to store knowledge away: as items in a database, or sets of "if-then" reaction rules, or in the form of semantic networks (in which little fragments of information are connected by links that themselves have properties), or program-like procedural scripts, or neural networks, etc. Neural networks are wonderful for learning certain things, but almost useless for other kinds of knowledge, because few higher-level processes can 'reflect' on what's inside a neural network. This means that the rest of the brain cannot think and reason about what it has learned, that is, about what was learned in that particular way. In artificial intelligence, we have learned many tricks that make programs faster, but in the long run they lead to limitations, because the results of neural-network-style learning are too 'opaque' for other programs to understand.
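
As a minimal sketch of one of those representations, here is a toy semantic network, just a dictionary of labeled links plus a query that follows "is-a" links; the particular nodes and link names are invented for illustration:

    # Toy semantic network: each node maps link labels to related nodes.
    semantic_net = {
        "canary": {"is-a": "bird", "can": "sing", "color": "yellow"},
        "bird":   {"is-a": "animal", "has": "wings"},
        "animal": {"has": "skin"},
    }

    def inherited(node, link):
        """Follow 'is-a' links upward until some ancestor supplies the requested link."""
        while node is not None:
            if link in semantic_net.get(node, {}):
                return semantic_net[node][link]
            node = semantic_net.get(node, {}).get("is-a")
        return None

    print(inherited("canary", "has"))   # -> 'wings', inherited from 'bird'

The point is not the ten lines of code but the fact that the links themselves carry meaning, which is what lets other processes inspect and reason about the stored knowledge.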

Yet even today, most brain scientists do not seem to know, for example, about cache memory. If you buy a computer today you'll be told that it has a big memory on its slow hard disk, but that it also has a much faster memory called a cache, which remembers the last few things it did in case it needs them again, so that it doesn't have to go and look somewhere else for them. Modern machines each use several such schemes, but I've not heard anyone talk about the hippocampus that way. All this suggests that brain scientists have been too conservative; they've not made enough hypotheses, and therefore most experiments have been trying to distinguish between wrong alternatives.
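
For readers who have not met the idea, here is the cache notion in its most compressed software form, a small store of recent results so that repeated requests skip the slow lookup (a generic illustration, not a claim about how the hippocampus works):

    from functools import lru_cache
    import time

    @lru_cache(maxsize=128)      # keep the most recently used results in fast storage
    def slow_lookup(key):
        time.sleep(0.1)          # stands in for a slow trip to "main memory"
        return key * key

    slow_lookup(7)    # slow the first time
    slow_lookup(7)    # nearly instant: answered from the cache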

Reinforcement vs. Credit Assignment

There have been several projects that were aimed toward making some sort of "Baby Machine" that would learn and develop by itself, eventually to become intelligent. However, all such projects, so far, have only progressed to a certain point, and then became weaker or even deteriorated. One problem has been finding adequate ways to represent the knowledge that they were acquiring. Another problem was not having good schemes for what we sometimes call 'credit assignment', that is, how do you learn the things that are relevant, the things that are essential rather than accidental? For example, suppose that you find a new way to handle a screwdriver so that the screw remains in line and doesn't fall out. What is it that you learn? It certainly won't suffice merely to learn the exact sequence of motions (because the spatial relations will be different next time), so you have to learn at some higher level of representation. How do you make the right abstractions? Also, when some experiment works, and you've done ten different things on that path toward success, which of those should you remember, and how should you represent them? How do you figure out which parts of your activity were relevant? Older psychology theories used the simple idea of 'reinforcing' what you did most recently. But that doesn't seem to work so well as the problems at hand get more complex. Clearly, one has to reinforce plans and not just actions, which means that good credit assignment has to involve some thinking about the things that you've done. But still, no one has designed and debugged a good architecture for doing such things.
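
To make the contrast concrete, here is a toy sketch (my illustration, not an architecture Minsky proposes): an agent executes a five-step plan and receives one terminal reward, and two bookkeeping schemes are compared, one reinforcing only the most recent action, the other spreading discounted credit back over the whole plan.

    # Toy credit assignment: one terminal reward for a five-action plan.
    actions = ["a1", "a2", "a3", "a4", "a5"]
    reward, alpha, gamma = 1.0, 0.1, 0.9

    value_last_only = {a: 0.0 for a in actions}
    value_discounted = {a: 0.0 for a in actions}

    # Scheme 1: reinforce only the most recent action.
    value_last_only[actions[-1]] += alpha * reward

    # Scheme 2: propagate discounted credit back through the whole plan.
    for steps_back, a in enumerate(reversed(actions)):
        value_discounted[a] += alpha * (gamma ** steps_back) * reward

    print(value_last_only)    # only a5 gets credit
    print(value_discounted)   # every step in the plan gets some credit

Neither scheme answers the harder question raised in the passage, which parts of the activity were actually relevant, but it shows why reinforcing plans requires bookkeeping over more than the last thing you did.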

We need better programming languages and architectures.

I find it strange how little progress we've seen in the design of problem-solving programs, or of languages for describing them, or of machines for implementing those designs. The first experiments to get programs to simulate human problem-solving started in the early 1950s, just before computers became available to the general public; for example, the work of Newell, Simon, and Shaw using the early machine designed by John von Neumann's group. To do this, they developed the list-processing language IPL. Around 1960, John McCarthy developed a higher-level language, LISP, which made it easier to do such things; now one could write programs that could modify themselves in real time. Unfortunately, the rest of the programming community did not recognize the importance of this, so the world is now dominated by clumsy languages like Fortran, C, and their successors, which describe programs that cannot change themselves. Modern operating systems suffered the same fate, so we see the industry turning to the 35-year-old system called Unix, a fossil retrieved from the ancient past because its competitors became so filled with stuff that no one could understand and modify them. So now we're starting over again, most likely to make the same mistakes again. What's wrong with the computing community?

Expertise vs. Common Sense

In the early days of artificial intelligence, we wrote programs to do things that were very advanced. One of the first such programs was able to prove theorems in Euclidean geometry. This was easy because geometry depends only upon a few assumptions: two points determine a unique line; if there are two lines then they are either parallel or they intersect in just one place; two triangles are the same in all respects if two sides and the angle between them are equal. This is a wonderful subject because you're in a world where the assumptions are very simple, there are only a small number of them, and you use a logic that is very clear. It's a beautiful place, and you can discover wonderful things there.

However, I think that, in retrospect, it may have been a mistake to do so much work on tasks that were so 'advanced.' The result was that, until today, no one paid much attention to the kinds of problems that any child can solve. That geometry program did about as well as a superior high school student could do. Then one of our graduate students wrote a program that solved symbolic problems in integral calculus. Jim Slagle's program did this well enough to get a grade of A in MIT's first-year calculus course. (However, it could only solve symbolic problems, and not the kinds that were expressed in words.) Eventually, the descendants of that program evolved to be better than any human in the world, and this led to the successful commercial mathematical assistant programs called MACSYMA and Mathematica. It's an exciting story, but those programs could still not solve "word problems." However, in the mid-1960s, graduate student Daniel Bobrow wrote a program that could solve problems like "Bill's father's uncle is twice as old as Bill's father. 2 years from now Bill's father will be three times as old as Bill. The sum of their ages is 92. Find Bill's age." Most high school students have considerable trouble with that. Bobrow's program was able to convert those English sentences into linear equations, and then solve those equations, but it could not do anything at all with sentences that had other kinds of meanings. We tried to improve that kind of program, but this did not lead to anything good, because those programs did not know enough about how people use commonsense language.
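
The algebra that such a program extracts from that word problem is easy to check by machine; the sketch below (using the sympy library, purely for illustration) solves the three equations the sentences translate into. As the passage says, the solving step is the easy part; the hard part is getting from English to these equations.

    from sympy import symbols, Eq, solve

    bill, father, uncle = symbols("bill father uncle")

    equations = [
        Eq(uncle, 2 * father),            # uncle is twice as old as father
        Eq(father + 2, 3 * (bill + 2)),   # in 2 years, father is three times Bill's age
        Eq(bill + father + uncle, 92),    # the sum of their ages is 92
    ]

    print(solve(equations, [bill, father, uncle]))   # {bill: 8, father: 28, uncle: 56}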

By 1980 we had thousands of programs, each good at solving some specialized problem, but none of those programs could do the kinds of things that a typical five-year-old can do. A five-year-old can beat you in an argument if you're wrong enough and the kid is right enough. To make a long story short, we've regressed from calculus and geometry and high school algebra and so forth. Now, only in the past few years have a few researchers in AI started to work on the kinds of common sense problems that every normal child can solve. But although there are perhaps a hundred thousand people writing expert specialized programs, I've found only about a dozen people in the world who aim toward finding ways to make programs deal with the kinds of everyday, commonsense jobs of the sort that almost every child can do. See http://web.media.mit.edu/~minsky/E6/eb6.html


THE INTELLIGENT UNIVERSE: RAY KURZWEIL [10.21.02]

The universe has been set up in an exquisitely specific way so that evolution could produce the people that are sitting here today and we could use our intelligence to talk about the universe. We see a formidable power in the ability to use our minds and the tools we've created to gather evidence, to use our inferential abilities to develop theories, to test the theories, and to understand the universe at increasingly precise levels.


Ray Kurzweil: EdgeVideo (8:45 min.)

RAY KURZWEIL was the principal developer of the first omni-font optical character recognition, the first print-to-speech reading machine for the blind, the first CCD flat-bed scanner, the first text-to-speech synthesizer, the first music synthesizer capable of recreating the grand piano and other orchestral instruments, and the first commercially marketed large vocabulary speech recognition. He has successfully founded, developed, and sold four AI businesses in OCR, music synthesis, speech recognition, and reading technology. All of these technologies continue today as market leaders.

Kurzweil received the $500,000 Lemelson-MIT Prize, the world's largest award in invention and innovation. He also received the 1999 National Medal of Technology, the nation's highest honor in technology, from President Clinton in a White House ceremony. He has also received scores of other national and international awards, including the 1994 Dickson Prize (Carnegie Mellon University's top science prize), Engineer of the Year from Design News, Inventor of the Year from MIT, and the Grace Murray Hopper Award from the Association for Computing Machinery. He has received ten honorary Doctorates and honors from three U.S. presidents. He has received seven national and international film awards. He is the author of The Age of Intelligent Machines and The Age of Spiritual Machines: When Computers Exceed Human Intelligence.

THE INTELLIGENT UNIVERSE

RAY KURZWEIL: The universe has been set up in an exquisitely specific way so that evolution could produce the people that are sitting here today and we could use our intelligence to talk about the universe. We see a formidable power in the ability to use our minds and the tools we've created to gather evidence, to use our inferential abilities to develop theories, to test the theories, and to understand the universe at increasingly precise levels. That's one role of intelligence. The theories that we heard on cosmology look at the evidence that exists in the world today to make inferences about what existed in the past so that we can develop models of how we got here.

Then, of course, we can run those models and project what might happen in the future. Even if it's a little more difficult to test the future theories, we can at least deduce, or induce, that certain phenomena that we see today are evidence of times past, such as radiation from billions of years ago. We can't really test what will happen billions or trillions of years from now quite as directly, but this line of inquiry is legitimate, in terms of understanding the past and the derivation of the universe. As we heard today, the question of the origin of the universe is certainly not resolved. There are competing theories, and at several times we've had theories that have broken down, once we acquired more precise evidence.

At the same time, however, we don't hear discussion about the role of intelligence in the future. According to common wisdom, intelligence is irrelevant to cosmological thinking. It is just a bit of froth dancing in and out of the crevices of the universe, and has no effect on our ultimate cosmological destiny. That's not my view. The universe has been set up exquisitely enough to have intelligence. There are intelligent entities like ourselves that can contemplate the universe and develop models about it, which is interesting. Intelligence is, in fact, a powerful force and we can see that its power is going to grow not linearly but exponentially, and will ultimately be powerful enough to change the destiny of the universe.

I want to propose a case that intelligence — specifically human intelligence, but not necessarily biological human intelligence — will trump cosmology, or at least trump the dumb forces of cosmology. The forces that we heard discussed earlier don't have the qualities that we posit in intelligent decision-making. In the grand celestial machinery, forces deplete themselves at a certain point and other forces take over. Essentially you have a universe that's dominated by what I call dumb matter, because it's controlled by fairly simple mechanical processes.

Human civilization possesses a different type of force with a certain scope and a certain power. It's changing the shape and destiny of our planet. Consider, for example, asteroids and meteors. Small ones hit us on a fairly regular basis, but the big ones hit us every few tens of millions of years and have apparently had a big impact on the course of biological evolution. That's not going to happen again. If it happened next year we wouldn't quite be ready to deal with it, but it doesn't look like it's going to happen next year. When it does happen again our technology will be quite sufficient. We'll see it coming, and we will deal with it. We'll use our engineering to send up a probe and blast it out of the sky. You can score one for intelligence in terms of trumping the natural unintelligent forces of the universe.

Commanding our local area of the sky is, of course, very small on a cosmological scale, but intelligence can overrule these physical forces, not by literally repealing the natural laws, but by manipulating them in such a supremely sublime and subtle way that it effectively overrules these laws. This is particularly the case when you get machinery that can operate at nano and ultimately pico and femto scales. While the laws of physics still apply, they're being manipulated to create any outcome the intelligence of this civilization decides on.

Let me back up and talk about how intelligence came about. Wolfram's book has prompted a lot of talk recently on the computational substrate of the universe and on the universe as a computational entity. Earlier today, Seth Lloyd talked about the universe as a computer and its capacity for computation and memory. What Wolfram leaves out in talking about cellular automata is how you get intelligent entities. As you run these cellular automata, they create interesting pictures, but the interesting thing about cellular automata, which was shown long before Wolfram pointed it out, is that you can get apparently random behavior from deterministic processes.

It's more than apparent randomness: you literally can't predict the outcome unless you can simulate the process. If the process under consideration is the whole universe, then presumably you can't simulate it unless you step outside the universe. But when Wolfram says that this explains the complexity we see in nature, he's leaving out one important step. As you run the cellular automata, you don't see the growth in complexity — at least, certainly he's never run them long enough to see any growth in what I would call complexity. You need evolution.
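
A minimal sketch of the point about deterministic rules producing apparent randomness, using the elementary cellular automaton known as Rule 30 (a standard example; the parameters here are arbitrary):

    def rule30_center_column(steps=32, width=129):
        """Run Rule 30 from a single live cell; return the center cell at each step."""
        cells = [0] * width
        cells[width // 2] = 1
        column = []
        for _ in range(steps):
            column.append(cells[width // 2])
            # Rule 30: new cell = left XOR (center OR right), applied everywhere at once.
            cells = [cells[(i - 1) % width] ^ (cells[i] | cells[(i + 1) % width])
                     for i in range(width)]
        return column

    print("".join(map(str, rule30_center_column())))

The update rule is completely fixed, yet the center column it prints looks statistically random; the only practical way to know what it will do is to run it, which is the irreducibility being described.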

Marvin talked about some of the early stages of evolution. It starts out very slow, but then something with some power to sustain itself and to overcome other forces is created and has the power to self-replicate and preserve that structure. Evolution works by indirection. It creates a capability and then uses that capability to create the next. It took billions of years until this chaotic swirl of mass and energy created the information-processing, structural backbone of DNA, and then used that DNA to create the next stage. With DNA, evolution had an information-processing machine to record its experiments and conduct experiments in a more orderly way. So the next stage, such as the Cambrian explosion, went a lot faster, taking only a few tens of millions of years. The Cambrian explosion then established body plans that became a mature technology, meaning that we didn't need to evolve body plans any more.

These designs worked well enough, so evolution could then concentrate on higher cortical function, establishing another level of mechanism in the organisms that could do information processing. At this point, animals developed brains and nervous systems that could process information, and then that evolved and continued to accelerate. Homo sapiens evolved in only hundreds of thousands of years, and then the cutting edge of evolution again worked by indirection to use this product of evolution, the first technology-creating species to survive, to create the next stage: technology, a continuation of biological evolution by other means.

The first stages of technology, like stone tools, fire, and the wheel, took tens of thousands of years, but then we had more powerful tools to create the next stage. A thousand years ago, a paradigm shift like the printing press took only a century or so to be adopted, and this evolution has accelerated ever since. Fifty years ago, the first computers were designed with pencil on paper, with screwdrivers and wire. Today we have computers to design computers. Computer designers will specify some high-level parameters, and twelve levels of intermediate design are computed automatically. The process of designing a computer now goes much more quickly.

Evolutionary processes accelerate, and the returns from an evolutionary process grow in power. I've called this theory "The Law of Accelerating Returns." The returns, including economic returns, accelerate. Stemming from my interest in being an inventor, I've been developing mathematical models of this because I quickly realized that an invention has to make sense when the technology is finished, not when it was started, since the world is generally a different place three or four years later.

One exponential pattern that people are familiar with is Moore's Law, which is really just one specific paradigm of shrinking transistors on integrated circuits. It's remarkable how long it's lasted, but it wasn't the first, but the fifth paradigm to provide exponential growth to computing. Earlier, we had electro-mechanical calculators, using relays and vacuum tubes. Engineers were shrinking the vacuum tubes, making them smaller and smaller, until finally that paradigm ran out of steam because they couldn't keep the vacuum any more. Transistors were already in use in radios and other small, niche applications, but when the mainstream technology of computing finally ran out of steam, it switched to this other technology that was already waiting in the wings to provide ongoing exponential growth. It was a paradigm shift. Later, there was a shift to integrated circuits, and at some point, integrated circuits will run out of steam.

Ten or 15 years from now we'll go to the third dimension. Of course, research on three dimensional computing is well under way, because as the end of one paradigm becomes clear, this perception increases the pressure for the research to create the next. We've seen tremendous acceleration of molecular computing in the last several years. When my book, The Age of Spiritual Machines, came out about four years ago, the idea that three-dimensional molecular computing could be feasible was quite controversial, and a lot of computer scientists didn't believe it was. Today, there is a universal belief that it's feasible, and that it will arrive in plenty of time before Moore's Law runs out. We live in a three-dimensional world, so we might as well use the third dimension. That will be the sixth paradigm.

Moore's Law is one paradigm among many that have provided exponential growth in computation, but computation is not the only technology that has grown exponentially. We see something similar in any technology, particularly in ones that have any relationship to information. The genome project, for example, was not a mainstream project when it was announced. People thought it was ludicrous that you could scan the genome in 15 years, because at the rate at which you could scan it when the project began, it could take thousands of years. But the scanning has doubled in speed every year, and actually most of the work was done in the last year of the project.
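
The remark that most of the work was done in the last year follows directly from annual doubling; here is a two-line check, with an assumed 15-year project and exact doubling purely for illustration:

    # Sequencing capacity that doubles every year for 15 years (arbitrary units).
    yearly_output = [2 ** year for year in range(15)]
    final_year_share = yearly_output[-1] / sum(yearly_output)
    print(round(final_year_share, 3))   # ~0.5: the final year does about half the total work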

Magnetic data storage is not covered under Moore's Law, since it involves packing information on a magnetic substrate, which is a completely different set of applied physics, but magnetic data storage has very regularly doubled every year. In fact there's a second level of acceleration. It took us three years to double the price-performance of computing at the beginning of the century, and two years in the middle of the century, but we're now doubling it in less than one year. This is another feedback loop, because as we improve the price-performance, we put more resources into that technology. If you plot the price-performance of computing, as I've done, on a logarithmic scale, where a straight line would mean exponential growth, you see a curve that bends upward: there is actually a double rate of exponential growth.

Another very important phenomenon is the rate of paradigm shift. This is harder to measure, but even though people can argue about some of the details and assumptions in these charts, you still get the same very powerful trends. The paradigm-shift rate itself is accelerating, roughly doubling every decade. When people claim that we won't see a particular development for a hundred years, or that something is going to take centuries to accomplish, they're ignoring the inherent acceleration of technical progress.

Bill Joy and I were at Harvard some months ago, and one Nobel Prize-winning biologist said that we won't see self-replicating nanotechnology entities for a hundred years. That's actually a good intuition, because that's my estimation — at today's rate of progress — of how long it will take to achieve that technical milestone. However, since we're doubling the rate of progress every decade, it'll only take 25 calendar years to get there (this, by the way, is a mainstream opinion in the nanotechnology field). The last century is not a good guide to the next, in the sense that it made only about 20 years of progress at today's rate of progress, because we were speeding up to this point. At today's rate of progress, we'll make the same amount of progress as what occurred in the 20th century in 14 years, and then again in 7 years. The 21st century will see, because of the explosive power of exponential growth, something like 20,000 years of progress at today's rate of progress — a thousand times greater than the 20th century, which was no slouch for radical change.

I've been developing these models for a few decades, and made a lot of predictions about intelligent machines in the 1980s which people can check out. They weren't perfect, but were a pretty good road map. I've been refining these models. I don't pretend that anybody can see the future perfectly, but the power of the exponential aspect of the evolution of these technologies, or of evolution itself, is undeniable. And that creates a very different perspective about the future.

Let's take computation. Communication is important and shrinkage is important. Right now, we're shrinking technology, apparently both mechanical and electronic, by a factor of about 5.6 per linear dimension per decade. That number is also moving slowly, in a double exponential sense, but at that rate we'll get to nanotechnology in the 2020s. There are some early-adopter examples of nanotechnology today, but the real mainstream, where the cutting edge of the operating principles is in the multi-nanometer range, will be in the 2020s. If you put these together you get some interesting observations.

Right now we have 10^26 calculations per second in human civilization in our biological brains. We could argue about this figure, but it's basically, for all practical purposes, fixed. I don't know how much intelligence it adds if you include animals, but maybe you then get a little bit higher than 10^26. Non-biological computation is growing at a double exponential rate, and right now is millions of times less than the biological computation in human beings. Biological intelligence is fixed, because it's an old, mature paradigm, but the new paradigm of non-biological computation and intelligence is growing exponentially. The crossover will be in the 2020s and after that, at least from a hardware perspective, non-biological computation will dominate at least quantitatively.
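
A sketch of that crossover arithmetic, with the assumptions labeled: biological computation is held fixed at the 10^26 calculations per second cited above, and non-biological computation is taken, illustratively, as "millions of times less" today (here 10^19) and doubling every year, a plain exponential even though the passage argues the real growth is double exponential and therefore faster.

```python
# Find the crossover year under these simplified assumptions.
biological_cps = 1e26       # fixed, per the passage
nonbio_cps = 1e19           # assumed starting point, "millions of times less"
year = 2002

while nonbio_cps < biological_cps:
    nonbio_cps *= 2         # assumed annual doubling
    year += 1
print(f"crossover year under these assumptions: ~{year}")
# -> the mid-2020s; faster-than-exponential growth would pull this earlier.
```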

This brings up the question of software. Lots of people say that even though hardware is growing exponentially, we've made no progress in software. But we are making progress in software, even if the rate of improvement is much slower. The real scenario that I want to address is the reverse engineering of the human brain. Our knowledge of the human brain, and the tools we have to observe and understand it, are themselves growing exponentially. Brain scanning and mathematical models of neurons and neural structures are advancing exponentially, and there's very interesting work going on.

There is Lloyd Watts, for example, who with his colleagues has collected models of specific types of neurons, along with wiring information about how the internal connections are organized in different regions of the brain. He has put together a detailed model of about 15 regions that deal with auditory processing, and has applied psychoacoustic tests to the model, comparing its behavior to human auditory perception. The model is at least reasonably accurate, and this technology is now being used as a front end for speech recognition software. Still, we're at the very early stages of understanding the human cognitive system. It's comparable to the genome project in its early stages, when we knew very little about the genome. We now have most of the genome data, but we still don't have the reverse engineering to understand how it works.

It would be a mistake to say that the brain only has a few simple ideas and that once we can understand them we can build a very simple machine. But although there is a lot of complexity to the brain, it's also not vast complexity. It is described by a genome that doesn't have that much information in it. There are about 800 million bytes in the uncompressed genome. We need to consider redundancies in the DNA, as some sequences are repeated hundreds of thousands of times. By applying routine data compression, you can compress this information at a ratio of about 30 to 1, giving you about 23 million bytes — which is smaller than Microsoft Word — to describe the initial conditions of the brain.
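
For what it's worth, the division itself is easy to check; both inputs are taken from the passage above, and the result lands in the same low-tens-of-megabytes range as the roughly 23 million bytes quoted (the exact figure depends on the precise compression ratio assumed).

```python
# Back-of-the-envelope check of the genome compression figure.
uncompressed_bytes = 800_000_000   # ~800 million bytes, per the passage
compression_ratio = 30             # ~30:1, per the passage
compressed_bytes = uncompressed_bytes / compression_ratio
print(f"~{compressed_bytes / 1e6:.0f} million bytes")   # -> ~27 million bytes
```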

But the brain contains a lot more information than that. You can argue about the exact number, but I come up with thousands of trillions of bytes of information to characterize what's in a brain, which is millions of times greater than what is in the genome. How can that be? Marvin talked about how methods from computer science are important for understanding how the brain works. We know from computer science that we can very easily create programs of considerable complexity from a small starting condition. With a very small program, you can create a genetic algorithm that simulates some simple evolutionary process and produces something of far greater complexity than itself. You can use a random function within the program; what begins as mere randomness is then shaped by a self-organizing method into meaningful information, so the result contains far more information than the initial conditions did.
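
A minimal illustration of that point in the spirit of a genetic algorithm (the target string, population size, and mutation rate are arbitrary choices of mine): a tiny program that starts from pure randomness and, through selection and mutation alone, ends up holding structured information it was never explicitly handed at the start.

```python
import random

# A tiny evolutionary process: random strings are selected and mutated until
# they match a target, so structure emerges from an initially random population.
TARGET = "order emerges from randomness"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "
POP_SIZE = 200
MUTATION_RATE = 0.02

def random_individual():
    return "".join(random.choice(ALPHABET) for _ in TARGET)

def fitness(ind):
    return sum(a == b for a, b in zip(ind, TARGET))  # characters already matching

def mutate(ind):
    return "".join(
        random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
        for c in ind
    )

population = [random_individual() for _ in range(POP_SIZE)]
generation = 0
while max(population, key=fitness) != TARGET:
    generation += 1
    # keep the fitter half, refill by mutating copies of the survivors
    survivors = sorted(population, key=fitness, reverse=True)[: POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]

print(f"evolved the target string after {generation} generations")
```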

That is in large measure how the genome creates the brain. We know that it specifies certain constraints for how a particular region is wired, but within those constraints and methods, there's a great deal of stochastic or random wiring, followed by some kind of process where the brain learns and self-organizes to make sense of its environment. At this point, what began as random becomes meaningful, and the program has multiplied the size of its information.

The point of all of this is that, since it's a level of complexity we can manage, we will be able to reverse engineer the human brain. We've shown that we can model neurons, clusters of neurons, and even whole brain regions. We are well down that path. It's rather conservative to say that within 25 years we'll have all of the necessary scanning information and neuron models, and will be able to put together a model of the principles of operation of the human brain. Then, of course, we'll have an entity that has some human-like qualities. We'll have to educate and train it, but of course we can speed up that process, since we'll have access to everything that's out on the Web, which will contain all accessible human knowledge.

One of the nice things about computer technology is that once you master a process it can operate much faster. So we will learn the secrets of human intelligence, partly from reverse engineering of the human brain. This will be one source of knowledge for creating the software of intelligence.

We can then combine some advantages of human intelligence with advantages that we see clearly in non-biological intelligence. We spent years training our speech recognition system, which combines rule-based, expert-system approaches with self-organizing techniques such as neural nets, Markov models, and other self-organizing algorithms. We automate the training process by recording thousands of hours of speech and annotating it, and the system automatically readjusts all of its Markov-model levels and other parameters when it makes mistakes. Finally, after years of this process, it does a pretty good job of recognizing speech. Now, if you want your computer to do the same thing, it doesn't have to go through those years of training the way we do with every child; you can simply load the evolved patterns of that one research computer, which is called loading the software.
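
A toy illustration of that last step, with a character-bigram counter standing in (purely as an illustrative assumption) for the trained recognizer: the learned parameters are serialized once, and any other machine can load them without repeating the training.

```python
import json
from collections import defaultdict

# "Loading the software": one machine learns a pattern the slow way, then
# another acquires the same capability instantly by copying the parameters.

def train(text):
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return {ch: dict(following) for ch, following in counts.items()}

# "Machine A" does the slow training...
model = train("the quick brown fox jumps over the lazy dog " * 100)
exported = json.dumps(model)        # ...and exports its evolved parameters.

# "Machine B" loads them directly; no retraining is required.
loaded = json.loads(exported)
print(loaded["t"])                  # -> {'h': 200}
```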

Machines can share their knowledge. Machines can do things quickly. Machines have a type of memory that's more accurate than our frail human memories. Nobody at this table can remember billions of things perfectly accurately and look them up quickly. The combination of the software of biological human intelligence with the benefits of non-biological intelligence will be very formidable. Ultimately, this growing non-biological intelligence will have the benefits of human levels of intelligence in terms of its software and our exponentially growing knowledge base.

In the future, maybe only one part of intelligence in a trillion will be biological, but it will be infused with human levels of intelligence, which will be able to amplify itself because of the power of non-biological intelligence to share its knowledge. How does it grow? Does it grow in or does it grow out? Growing in means using finer and finer granularities of matter and energy to do computation, while growing out means using more of the stuff in the universe. Presently, we see some of both. We see mostly the "in," since Moore's Law inherently means that we're shrinking the size of transistors and integrated circuits, making them finer and finer. To some extent we're also expanding outward: even though individual chips are more and more powerful, we make more chips every year and deploy more economic and material resources toward this non-biological intelligence.

Ultimately, we'll get to nanotechnology-based computation, which operates at the molecular level, infused with the software of human intelligence and the expanding knowledge base of human civilization. It will continue to expand both inward and outward, and it goes in waves as the expansion inward reaches certain points of resistance. The paradigm shifts should be fairly smooth as we go from two-dimensional chips to three-dimensional molecular computing. At that point it will be feasible to take the next steps into pico-engineering, on the scale of trillionths of a meter, and femto-engineering, on the scale of quadrillionths of a meter, going into the finer structures of matter and manipulating it at the level of really fine entities such as quarks and, speculatively, strings. That's going to be a barrier, however, so the ongoing expansion of our intelligence will also be propelled outward. Nonetheless, it will go both in and out. Ultimately, if you do the math, we will completely saturate our corner of the universe, the earth and the solar system, sometime in the 22nd century. We'll then want ever-greater horizons, as is the nature of intelligence and evolution, and will expand to the rest of the universe.

How quickly will it expand? One premise is that it will expand at the speed of light, because that's the fastest speed at which information can travel. There are also tantalizing experiments on quantum entanglement that show correlations established faster than the speed of light, even much faster, perhaps, in theory, instantaneously. Interestingly enough, though, this is not the transmission of information but the transmission of profound quantum randomness, which doesn't accomplish our purpose of communicating intelligence; you need to transmit information, not randomness. So far nobody has actually demonstrated true transmission of information at faster than the speed of light, at least not in a way that has convinced mainstream scientific opinion.

If, in fact, that is a fundamental barrier, and if things that are far away really are far away, which is to say there are no shortcuts through wormholes in the universe, then the spread of our intelligence will be slow, governed by the speed of light. This process will be initiated within 200 years. If you do the math, we will be at near saturation of the available matter and energy in and around our solar system, based on current understandings of the limitations of computation, within that time period. However, it's my conjecture that by going through the other dimensions that Alan and Paul talked about, there may be shortcuts. It may be very hard to do, but we're talking about supremely intelligent technologies and beings. If there are ways to reach other parts of the universe through shortcuts such as wormholes, they'll find, deploy, and master them, and get there faster. Then perhaps we can reach the whole universe, say the 10^80 protons, photons, and other particles that Seth Lloyd estimates represent on the order of 10^90 bits, without being limited by the apparent speed of light.

If the speed of light is not a limit, and I do have to emphasize that this particular point is a conjecture at this time, then within 300 years we would saturate the whole universe with our intelligence, and the whole universe would become supremely intelligent and able to manipulate everything according to its will. We're currently multiplying computational capacity by a factor of at least 10^3 every decade, and this is conservative, since the rate of exponential growth is itself growing. Thus it is conservative to project that within 30 decades (300 years) we would multiply current computational capacity by a factor of 10^90, and thereby exceed Seth Lloyd's estimate of 10^90 bits in the universe. We can speculate about identity: will this be multiple people or beings, or one being, or will we all be merged? Nonetheless, we'll be very intelligent, and we'll be able to decide whether we want to continue expanding. Information is very sacred, which is why death is a tragedy. Whenever a person dies, you lose all the information in that person. The tragedy of losing historical artifacts is that we're losing information. We could realize that losing information is bad and decide not to do that any more. Intelligence will have a profound effect on the cosmological destiny of the universe at that point.
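
The compounding in that projection is simple enough to verify directly; both numbers come from the passage above.

```python
# Multiplying capacity by at least 10^3 per decade, compounded over 30 decades.
growth_per_decade = 10 ** 3
decades = 30
total_factor = growth_per_decade ** decades
print(f"total multiplication factor: 10^{len(str(total_factor)) - 1}")   # -> 10^90
```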

I'll end with a comment about the SETI project. Regardless of the ultimate resolution of this issue of the speed of light, and it is my speculation (and that of others as well) that there are ways to circumvent it, if there are ways, they'll be found, because intelligence is intelligent enough to master any mechanism that is discovered. Regardless of that, I think the SETI project will fail. It would actually be a very important failure, because sometimes a negative finding is just as profound as a positive finding, and here is the reason: we've looked at a lot of the sky with at least some level of sensitivity, and we don't see anybody out there. The SETI assumption is that even though the emergence of an intelligent civilization like ours here on Earth is very unlikely on any given planet, there are billions of trillions of planets. So even if the probability is one in a million, or one in a billion, there are still going to be millions, or billions, of life-bearing and ultimately intelligence-bearing planets out there.

If that's true, they're going to be distributed fairly evenly across cosmological time, so some will be ahead of us and some will be behind us. Those that are ahead of us are not going to be ahead of us by only a few years; they're going to be ahead of us by billions of years. But because of the exponential nature of evolution, once a civilization gets to our point, or even to the point of Babbage, who was messing around with mechanical linkages in a crude 19th-century technology, it's only a matter of a few centuries before it reaches a full realization of nanotechnology, if not femto- and pico-engineering, and totally infuses its region of the cosmos with its intelligence. It only takes a few hundred years!

So if there are millions of civilizations that are millions or billions of years ahead of us, there would have to be millions that have passed this threshold, are doing what I've just described, and have really infused their regions of the cosmos. Yet we don't see them, nor do we have the slightest indication of their existence, a challenge known as the Fermi paradox. Someone could say that this "silence of the cosmos" exists because the speed of light is a limit: even though they're fantastically intelligent, they're outside our light sphere, and so we don't see them. Of course, if that's true, SETI won't find them, because they're outside our light sphere. But suppose they're inside our light sphere, or that light isn't a limitation, for the reasons I've mentioned; then perhaps they decided, in their great wisdom, to remain invisible to us. You can imagine that one civilization out there made that decision, but are we to believe that this is the case for every one of the millions, or billions, of civilizations that SETI says should be out there?

That's unlikely, but even if it's true, SETI still won't find them, because a civilization that has made that decision is intelligent enough to carry it out and remain hidden from us. Maybe they're waiting for us to evolve to that point, and then they'll reveal themselves to us. Still, if you analyze this more carefully, it's in fact very unlikely that they're out there.

You might ask, isn't it incredibly unlikely that this planet, which sits in a quite random place in the universe and is one of trillions of planets and solar systems, is ahead of the rest of the universe in the evolution of intelligence? Of course, the very existence of our universe, with laws of physics so sublimely precise as to allow this type of evolution to occur, is also very unlikely; but by the anthropic principle, we're here, and by an analogous anthropic principle, we are here in the lead. After all, if this were not the case, we wouldn't be having this conversation; by that same anthropic reasoning, we're able to appreciate this argument. I'll end on that note.



WHICH UNIVERSE WOULD YOU LIKE?
Five stars of American science meet in Connecticut to explain first and last things.
By Jordan Mejias
August 28, 2002

They begin a free-floating debate, which drives them back and forth across the universe. Guth encourages the exploration of black holes, not to be confused with cosmic wormholes, which Kurzweil—just like the heroes of Star Trek—wants to use as a shortcut for his intergalactic excursions and as a means of overtaking light. Steinhardt suggests that we should realize that we are not familiar with most of what the cosmos consists of and do not understand its greatest force, dark matter. Understand? There is no such thing as a rational process, Minsky objects; it is simply a myth. In his cosmos, emotion is a word we use to circumscribe another form of our thinking that we cannot yet conceive of. Emotion, Kurzweil interrupts, is a highly intelligent form of thinking. "We have a dinner reservation at a nearby country restaurant," says Brockman in an emotionally neutral tone.

[English Translation | Original German text]


