Edge 131—January 12, 2004

(36,000 words)



Edge 7th Anniversary: A Photo Album



" Big, deep and ambitious questions....breathtaking in scope. Keep watching The World Question Center." — New Scientist



The 2004 Edge Annual Question...

"WHAT'S YOUR LAW?"

164 Contributors: George Dyson • Bruce Sterling • William Calvin • Howard Gardner • James J. O'Donnell • Marc D. Hauser • David Lykken • Irene Pepperberg • Daniel Gilbert • Joseph Traub • Roger Schank • Douglas Rushkoff • Karl Sabbagh • Carlo Rovelli • Timothy Taylor • Richard Nisbett • Freeman Dyson • John Allan Paulos • John McWhorter • Kevin Kelly • Brian Goodwin • John Barrow • Marvin Minsky • Garniss Curtis • Todd Siler • Howard Rheingold • David G. Myers • Michael Nesmith • Arnold Trehub • Keith Devlin • Arthur R. Jensen • John Maddox • John Skoyles • Pamela McCorduck • Philip W. Anderson • Charles Arthur • David Bunnell • Esther Dyson • Scott Atran • Jay Ogilvy • Steven Kosslyn • Jeffrey Epstein • Stewart Brand • Piet Hut • Geoffrey Miller • Nassim Taleb • Donald Hoffman • Richard Rabkin • Stanislas Dehaene • Susan Blackmore • Raphael Kasper • Alison Gopnik • Art De Vany • Robert Provine • Stuart Pimm • Chris Anderson • Alan Alda • Andy Clark • Charles Seife • Jaron Lanier • Seth Lloyd • John Horgan • Robert Aunger • Ernst Pöppel • Michael Shermer • Colin Blakemore • Scott Sampson • Verena Huber-Dyson • Gary Marcus • Rodney Brooks • David Deutsch • Steve Grand • Paul Davies • David Finkelstein • Richard Dawkins • J. Craig Venter • Steve Quartz • Philip Campbell • Tor Nørretranders • Julian Barbour • Maria Spiropulu • Eberhard Zangger • David Buss • Mark Mirsky • Lee Smolin • Nancy Etcoff • Anton Zeilinger • Edward O. Laumann • George Lakoff • Haim Harari • Matt Ridley • Daniel C. Dennett • W. Brian Arthur • Samuel Barondes • Jamshed Bharucha • Ray Kurzweil • Adam Bly • Kai Krause • Dylan Evans • Jordan Pollack • Stuart Kauffman • Niels Diffrient • Gerald Holton • Robert Sapolsky • Izumi Aizu • Randolph Nesse • Dave Winer • Rupert Sheldrake • Ivan Amato • Judith Rich Harris • Steven Strogatz • Sherry Turkle • Leonard Susskind • Christine Finn • Simon Baron-Cohen • Henry Warwick • Gino Segre • Neil Gershenfeld • Steven Levy • Paul Ryan • Stuart Hameroff • Leo Chalupa • Terrence Sejnowski • Eduard Punset • Paul Steinhardt • Delta Willis • Rudy Rucker • Al Seckel • Howard Morgan • Clifford Pickover • Beatrice Golomb • K. Eric Drexler • Mark Hurst • Art Kleiner • Joseph Vardi • Nicholas Humphrey • Martin Rees • John Markoff • Gerd Gigerenzer • Steve Lohr • David Berreby • William Poundstone • Dennis Overbye • Sara Lippincott • Albert-László Barabási • David Gelernter • W. Daniel Hillis • Marti Hearst • Steven Pinker • Lisa Randall • Gregory Benford • Allan Snyder • Mike Godwin • Dan Sperber • Frank Tipler • Andrian Kreye • Eric S. Raymond • Brian Eno • Antonio Damasio • Helena Cronin • Paul Ewald • Charles Simonyi • John Rennie • Alun Anderson

[click here for responses]


CONNECTIONS
Finding the Universal Laws That Are There, Waiting . . .
By Edward Rothstein, January 10, 2004 [free registration required]

Nature abhors a vacuum. Gravitational force is inversely proportional to the square of the distance between two objects. Over the course of evolution, each species develops larger body sizes. If something can go wrong, it will.

Such are some of nature's laws as handed down by Aristotle, Newton, Edward Cope and Murphy. And regardless of their varying accuracy (and seriousness), it takes an enormous amount of daring to posit them in the first place. Think of it: asserting that what you observe here and now is true for all times and places, that a pattern you perceive is not just a coincidence but reveals a deep principle about how the world is ordered.

If you say, for example, that whenever you have tried to create a vacuum, matter has rushed in to fill it, you are making an observation. But say that "nature abhors a vacuum" and you are asserting something about the essence of things. Similarly, when Newton discovered his law of gravitation, he was not simply accounting for his observations. It has been shown that his crude instruments and approximate measurements could never have justified the precise and elegant conclusions. That is the power of natural law: the evidence does not make the law plausible; the law makes the evidence plausible.

But what kind of natural laws can now be so confidently formulated, disclosing a hidden order and forever bearing their creators' names? We no longer even hold Newton's laws sacred; 20th-century physics turned them into approximations. Cope, the 19th-century paleontologist, created his law about growing species size based on dinosaurs; the idea has now become somewhat quaint. Someday even an heir to Capt. Edward Aloysius Murphy might have to modify the law he based on his experience about things going awry in the United States Air Force in the 1940's.

So now, into the breach comes John Brockman, the literary agent and gadfly, whose online scientific salon, Edge.org, has become one of the most interesting stopping places on the Web. He begins every year by posing a question to his distinguished roster of authors and invited guests. Last year he asked what sort of counsel each would offer George W. Bush as the nation's top science adviser. This time the question is "What's your law?"

"There is some bit of wisdom," Mr. Brockman proposes, "some rule of nature, some lawlike pattern, either grand or small, that you've noticed in the universe that might as well be named after you." What, he asks, is your law, one that's ready to take a place near Kepler's and Faraday's and Murphy's.

More than 150 responses totaling more than 20,000 words have been posted so far at www.edge.org/q2004/q04_print.html. The respondents form an international gathering of what Mr. Brockman has called the "third culture": scientists and science-oriented intellectuals who are, he believes, displacing traditional literary intellectuals in importance. They include figures like the scientists Freeman Dyson and Richard Dawkins, innovators and entrepreneurs like Ray Kurzweil and W. Daniel Hillis, younger mavericks like Douglas Rushkoff and senior mavericks like Stewart Brand, mathematicians, theoretical physicists, computer scientists, psychologists, linguists and journalists....


Edge.org Compiles Rules Of The Wise: Observations Of Thinking People [free registration required]
January 9, 2004 By John Jurgensen, Courant Staff Writer


Everything answers to the rule of law. Nature. Science. Society. All of it obeys a set of codes... It's the thinker's challenge to put words to these unwritten rules. Do so, and he or she may go down in history. Like a Newton or, more recently, a Gordon Moore, who in 1965 coined the most cited theory of the technological age, an observation on how computers grow exponentially cheaper and more powerful... Recently, John Brockman went looking for more laws.


SCIENCE JOURNAL By Sharon Begley, January 2, 2004
Scientists Who Give Their Minds to Study, Can Give Names, Too (Subscription Required)

Heisenberg has one, and so do Boyle and Maxwell: a scientific principle, law or rule with their moniker attached.... It isn't every day that a researcher discovers the uncertainty principle, an ideal gas law, or the mathematical structure of electromagnetism. And ours is the era of real-estate moguls, phone companies and others slapping their name on every building, stadium and arena in sight.... So, John Brockman, a New York literary agent, writer and impresario of the online salon Edge, figures it is time for more scientists to get in on the whole naming thing.... As a New Year's exercise, he asked scores of leading thinkers in the natural and social sciences for "some bit of wisdom, some rule of nature, some law-like pattern, either grand or small, that you've noticed in the universe that might as well be named after you."... The responses, to be posted soon on Mr. Brockman's Web site www.edge.org, range from the whimsical to the somber, from cosmology to neuroscience.
...You can find other proposed laws of nature on the Edge Web site. Who knows? Maybe one or more might eventually join Heisenberg in the nomenclature pantheon.


A Week in Books: Core principles are needed in the muddled business of books
By Boyd Tonkin, 02 January 2004

The literary agent John Brockman, who makes over significant scientists into successful authors, has posted an intriguing question on his Edge website. He seeks suggestions for contemporary "laws", just as Boyle, Newton, Faraday and other pioneers gave their names to the rules of the physical universe. (That eminent pair, Sod and Murphy, soon followed suit.) Brockman advises his would-be legislators to stick to the scientific disciplines, and you can find their responses at www.edge.org.



From 1981 through 1996, The Reality Club held its meetings in Chinese restaurants, artists' lofts, the board rooms of Rockefeller University, The New York Academy of Sciences, and investment banking firms, ballrooms, museums, and living rooms, among other venues. In January, 1997, The Reality Club migrated to the Internet as Edge. Here you will find a number of today's sharpest minds taking their ideas into the bull ring knowing they will be challenged. The ethic is thinking smart vs. the anesthesiology of wisdom.

The late Heinz Pagels and I wrote the following statement:

"We charge the speakers to represent an idea of reality by describing their creative work, their lives, and the questions they are asking themselves. We also want them to share with us the boundaries of their knowledge and experience and to respond to the challenges, comments, criticisms, and insights of the members. The Reality Club is a point of view, not just a group of people. Reality is an agreement. The constant shifting of metaphors, the intensity with which we advance our ideas to each other—this is what intellectuals do. The Reality Club draws attention to the larger context of intellectual life.

"Speakers seldom get away with loose claims. Maybe a challenging question will come from a member who knows an alternative theory that really threatens what the speaker had to say. Or a member might come up with a great idea, totally out of left field, that only someone outside the speaker's field could come up with. This creates a very interesting dynamic.

Two continuing Reality Club discussions (see below) are underway on recent Edge features: Jaron Lanier's provocative ideas in "Why Gordian Software Has Convinced Me to Believe in the Reality of Cats and Apples", and Lenny Susskind's radical take on the current state of physics and cosmology in "The Landscape".

As one Edge kibitzer remarked: "Tough crowd."


Re: THE LANDSCAPE: A Talk with Leonard Susskind

Responses by Paul Steinhardt, Lee Smolin, Kevin Kelly, Alexander Vilenkin, Lenny Susskind, Steve Giddings, Lee Smolin, Gino Segre, Lenny Susskind, Gerard 't Hooft, Lenny Susskind, Maria Spiropulu


Paul Steinhardt

Well, the quote is right. I love Lenny, but I hate this recent landscape idea and I am hopeful it will go away.

PAUL STEINHARDT is the Albert Einstein Professor in Science and on the faculty of both the Departments of Physics and Astrophysical Sciences at Princeton University.


Lee Smolin

I want to preface my remarks by saying that since my student days Lenny Susskind has been for me a hero and a role model. The following remarks are offered with great respect and admiration.

To start with, Susskind must be commended for courageously calling people's attention to an apparently fundamental feature of string theory: that it appears to allow for a huge number of different versions (or, as some would prefer, solutions) each of which describes a universe with different laws of physics. Basic features of a universe, such as its dimensionality, the nature and strengths of the different forces and the masses of the elementary particles vary from string theory to string theory.

As Lenny says, this means that the old dream of a unified theory that makes unique and falsifiable predictions appears no longer possible. Many features that physicists hoped to explain as necessary to any possible universe are just contingent, environmental features of one universe out of many possible ones.

Without in any way diminishing the importance of Susskind's recent views, it should be said that several people have been making the same argument, using very similar language, for many years. My book, The Life of the Cosmos (1997), describes the same scenario of a landscape of string theories, and explores the question of whether this situation is inevitable and, if so, what this means for the future of science. One of the main points it makes, however, is that the anthropic principle is a wrong turn. There are alternatives which can resolve the worries of those who don't like the anthropic principle, while taking into account the surprising scenario described by Susskind.

Of course, the intelligent reader will want to know how strong the actual evidence is that justifies the strong statements Susskind makes. It may help first to explain why Susskind and other string theorists have only recently begun to worry about these problems. Since the late 1980's it has been known that string theory has a great many solutions, which describe universes with different properties. However, until recently, all the known string solutions described universes that disagreed with observations in one or more essential ways. For one thing, most of them did not describe worlds with three macroscopically large dimensions of space. But of those that did, they all had two properties that disagreed with observation: unobserved symmetries (called supersymmetries) and unobserved long range forces (in the technical jargon, massless scalar fields). To this was added in recent years a third problem: the universe appears to have a positive vacuum energy, but all consistent string theories then known had zero or negative vacuum energy.

Thus, until very recently string theorists could hope that even if string theory has many solutions, there would be only one solution consistent with what observations tell us about the world.

A year ago there were new results that changed the situation quite a bit. Very clever calculations by Shamit Kachru and collaborators gave indirect evidence for the existence of string theories which agree with the following observed aspects of our universe: 1) four large dimensions, 2) positive vacuum energy, 3) no unbroken supersymmetry, 4) no massless scalar fields. This was the first evidence for the existence of any version or solution of string theory consistent with all these observed features of our world.

But there was a twist. This new solution was not unique; quite the opposite. Instead, Michael Douglas, Susskind and others argue that if any string theories exist with these characteristics, so do at least 10^100 others. It is the vastness of this number that leads to the apparently revolutionary implications Susskind speaks of.

For the sake of accuracy, it is important to stress that the evidence for these string theories is indirect and not necessarily compelling. Not a single one of these 10^100 string theories has actually been constructed or otherwise shown to exist. Nor can any calculations be done in any of these theories, even to the lowest order of approximation. The results at hand are very far from an actual demonstration of the existence of these theories, even at the loose level of rigor that characterizes much work in theoretical physics.

In fact, no string theories, even the original five supersymmetric theories in ten dimensions, have been conclusively demonstrated to exist. There still remain unproven conjectures, such as the finiteness and consistency of any superstring theory past the first three terms of a certain approximation scheme. But if a few issues remain unresolved in the best cases, far less is known about the conjectured string theories Susskind is talking about.

So the present results allow three possibilities:

1) String theory is true, but the string theories Kachru et al. find weak evidence for do not in fact exist; some other way will ultimately be found to construct at least one string theory that agrees with all features of our observed universe.

2) String theory is true, and the string theories Kachru et al. find evidence for are genuine solutions to it.

3) String theory is false, because no consistent version of the theory exists or no version agrees with all experimental results; one of the alternative approaches to quantum gravity instead will turn out to be the road ahead for physics.

Note that even if the first possibility is true we cannot escape the implications of what Lenny is saying. The reason is that even if some day a unique solution to string theory is found that describes our world, we will never get rid of the large number of string theory solutions that do not describe our world. So whatever happens, if string theory is true we have to explain why the solution that describes our world is picked out of a large collection of solutions that describe very different worlds.

Thus, unless string theory is wrong, we cannot avoid what Lenny Susskind is saying.

So does string theory imply the anthropic principle as Susskind seems to suggest? Does it mean that we have to either give up string theory or give up the dream of a fundamental theory that makes falsifiable predictions for real doable experiments?

There is a simple and, so far as I know, irrefutable argument that leads to the conclusion that no theory that employs the anthropic principle, as advocated by Susskind, could be falsified. This is because it affirms the existence of an ensemble of "universes", at least one of which has the properties already observed to be true of our own. Furthermore, the total number of possible theories believed to exist is so vast that it is reasonable to believe that the subset that agree with all present observations will still be vast. Consequently, there will likely be myriads of theories that agree with any possible result of future experiments. Thus, there will be no way any conceivable experimental result could contradict the theory.
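The worry can be made concrete with a minimal numerical sketch, in which a million randomly drawn one-parameter "universes" stand in for the vastly larger real ensemble (the setup and numbers are invented purely for illustration): whatever value an experiment returns, some ensemble member matches it within experimental error, so no outcome counts against the ensemble as a whole.

```python
# Toy illustration of the falsifiability worry: when an ensemble of
# candidate "universes" is vast enough, some member matches ANY
# measurement, so no experimental outcome can rule the ensemble out.
# 10**6 members stand in here for something like 10^100 theories.
import random

random.seed(1)
ensemble = [random.uniform(0.0, 1.0) for _ in range(10**6)]  # one parameter per universe

def survives(observation, tolerance=1e-3):
    """True if at least one ensemble member agrees with the data."""
    return any(abs(u - observation) <= tolerance for u in ensemble)

# Whatever we happen to measure, the ensemble "predicted" it:
for obs in [0.017, 0.42, 0.999]:
    print(f"measured {obs}: ensemble survives? {survives(obs)}")
# prints True in every case, illustrating that no measurement discriminates
```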

I follow many philosophers and historians in believing that a necessary part of what has made science a successful path to truth is that the ethic of science requires that we study only falsifiable theories. We only consider theories as possibly true if they are vulnerable to falsification by real experiments, and we only believe them after they have survived significant and stringent attempts to so falsify them.

This means that if science is to go on, we must find an alternative to the anthropic principle.

Fortunately, it is not hard to find an alternative to the anthropic principle in the scenario Susskind describes. All one needs to do is to add to the theory two additional hypotheses, which may in fact be themselves consequences of the fundamental theory.

The two hypotheses are: i) black hole and cosmological singularities bounce, due to quantum gravity effects, and are replaced by the birth of new universes, ii) each new universe that results is only slightly different than its parent, in that the parameters of their physical laws differ by small numbers.

As I described in my book, and related papers, these two hypotheses give the "landscape" of theories the structure of a fitness landscape. These are mathematical models from evolutionary biology. It is easy to see that, once these are added to the theory, falsifiable predictions can be obtained. For example, the observation of a single neutron star with a mass greater than twice that of the sun would rule the theory out.

Of course, this means the theory may very well be proven false in the near future. This means it is science. What we must avoid is the situation Susskind describes, in which a theory is believed despite there being not a single prediction for a genuine experiment whose results could falsify it.

It can also be mentioned that recent work by Martin Bojowald and collaborators provides strong evidence that hypothesis i) is a prediction of at least one quantum theory of gravity (loop quantum gravity). If Bojowald's techniques could be applied to string theory, and I believe it likely they can be, one might very well be able to test hypothesis ii).

To summarize, after the recent evidence summarized by Susskind, the key question still appears to be the following: Is there any alternative to either a) science proceeding without a falsifiable fundamental theory, or b) cosmology and physics relying on dynamical mechanisms like natural selection to give falsifiable accounts of how our universe came to be described by the laws we observe? If there are alternatives, I hope someone will find one soon. If not, I certainly hope that b) is true, because I believe strongly that rational argument about experimental evidence is our only reliable path to truth.

Before closing, I want to inject a note of caution about Susskind's claim that string theory has resolved the puzzles about black holes posed by Hawking. Susskind makes the claim that "To this day, the only real physics problem that has been solved by string theory is the problem of black holes." I do not want to diminish the importance or the beauty of the string theory results that pertain to black holes. As far as they go, they are extremely impressive. But it should be noted that many experts in quantum gravity are unconvinced that the problem posed by Hawking has been solved by the actual results in string theory. The reason is that the string theory results which give exact agreement with the earlier work of Hawking are mainly restricted to a very special class of black holes. These are black holes which have as much, or nearly as much, charge as possible, given their mass. These do not include real physical black holes, such as those the astronomers have evidence for.

Furthermore, it is not yet possible in string theory to study directly the spacetimes of even these very special black holes. The most precise results are obtained by extrapolating very cleverly from certain systems without gravity. These have statistical properties similar to those of the very special black holes, but they are not actually black holes.

At the same time, there has been genuine progress in understanding real black holes in other approaches to quantum gravity, such as loop quantum gravity. The fact that string theory has been unable to duplicate these results is related to the fact that string theories so far can only describe in any detail worlds with the unphysical characteristics referred to above, such as exact supersymmetry. As a result, many experts believe that the jury is still out on whether Hawking's conjectures about black holes and information are true or not.

LEE SMOLIN, a theoretical physicist, is a founding member and research physicist at the Perimeter Institute in Waterloo, Canada. He is the author of The Life of the Cosmos and Three Roads to Quantum Gravity.


Kevin Kelly

The best, most amazing Edge interview yet. It was educational beyond the call of duty, full of insider gossip, and funny! I inhaled it in one breath. Great going.

KEVIN KELLY is Editor-At-Large of Wired and the author of Out of Control: The New Biology of Machines, Social Systems, and the Economic World; New Rules for the New Economy; and Cool Tools.


Alexander Vilenkin

I would like to comment on Lee Smolin's view that anthropic arguments are unpredictive, unfalsifiable, and therefore unscientific. There has been a lot of confusion about what the anthropic approach is and how it should be used. Here I will argue that, when properly used, this approach does yield testable predictions, and thus meets all the standards of a scientific theory. Let me first clarify what I mean by the anthropic approach. The definition Lenny Susskind gives in his article is a bit too simplistic: "The kind of answer that this or that is true because if it were not true there would be nobody to ask the question is called the anthropic principle". In other words, if some constant of Nature has certain values which do not permit the existence of intelligent observers, then the "anthropic principle" says that such values are not going to be observed. This "principle" is, of course, guaranteed to be true. If this were all there is to anthropic arguments, I would have to admit that Lee Smolin has a point. But there is more to it than that.

Suppose our theory predicts that the constants of Nature vary from one part of the Universe to another, and we want to extract testable predictions from that theory. Then, instead of looking for extreme values of the constants that make observers impossible, we can try to predict what values will be measured by a typical observer. In other words, we can make statistical predictions, assigning probabilities to different values of the constants. If any principle needs to be invoked here, it is what I call "the principle of mediocrity" – the assumption that we are typical observers in the Universe, so the values of the constants we observe should be close to the maximum of probability. If instead we measure a value very far from the probability peak, this should be regarded as evidence against the theory. For example, if the observed value has a probability of 1%, we can say that the theory is ruled out at the 99% confidence level.
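This kind of typicality test is easy to make concrete. Below is a minimal numerical sketch in which an invented toy distribution stands in for a real observer-weighted probability density over some constant x (nothing here is an actual cosmological calculation): the test asks how much total probability lies in regions less likely than the observed value, and rules the theory out when that figure falls below 1%.

```python
# Minimal sketch of a "principle of mediocrity" test.
# The density below is a made-up toy shape, standing in for a real
# observer-weighted probability density over a constant x.
import numpy as np

x = np.linspace(0.0, 20.0, 2001)
density = x**2 * np.exp(-x)          # toy distribution, peaked at x = 2
density /= np.trapz(density, x)      # normalize to a probability density

x_obs = 3.0                          # the value "we" measure, in toy units

# Probability of landing somewhere less likely than the observed value:
d_obs = np.interp(x_obs, x, density)
p_value = np.trapz(np.where(density <= d_obs, density, 0.0), x)

print(f"p-value = {p_value:.3f}")
if p_value < 0.01:
    print("theory ruled out at the 99% confidence level")
else:
    print("observed value is consistent with being typical")
```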

To illustrate my point, it's best to look at a specific example. Let us consider the parameter that Lenny mentioned in his article: the cosmological constant that causes the Universe to expand with acceleration. The larger this constant is, the earlier the accelerated expansion begins. And once this happens, the process of galaxy formation, which is crucial for the evolution of observers, comes to a halt. If the cosmological constant varies from one part of the Universe to another, then regions where it is larger will have fewer galaxies. This point was recognized by Steven Weinberg, who showed that regions where the cosmological constant is more than 100 times greater than the present density of matter in the Universe would have no galaxies at all, and therefore no observers. Clearly, such values will never be observed.

To improve on this analysis, we can use the theory of galaxy formation to determine the probabilities for different values of the cosmological constant. If we pick a galaxy at random, we can ask: what is the probability that this galaxy is in a region where the cosmological constant has such and such a value? The answer is that the cosmological constant measured by most observers in the Universe should be a few times greater than the present density of matter. Observations in our local region show that it is greater by a factor of about 3, as expected. Remarkably, the prediction was made in 1995, more than two years before the cosmological constant was actually measured. If the value had turned out to be much greater or much smaller than it actually is, the anthropic explanation would have been ruled out at a high confidence level.

ALEXANDER VILENKIN is Director, Tufts Institute of Cosmology.


Leonard Susskind

First I want to thank Paul Steinhardt for his concise summary of the views of the other side in this debate.

As to Smolin's less concise summary, I am afraid of getting into an endless debate, so I will say what I usually say to the students in my premed class: Hear me carefully because I will not explain again.

Smolin is correct. He did recognize the kind of diversity in the laws of physics that string theory suggests. He is also correct that string theorists cannot prove that any of the solutions to string theory are really solutions, even the supersymmetric ones. Nor can anyone prove the sun will rise tomorrow. The level of confidence that string theorists have in their theory is based on a web of interconnected pieces of evidence that is so compelling that genuine mathematicians have no doubt about its validity.

More relevant is Smolin's claim about the new non-supersymmetric solutions of my colleagues at Stanford and the Tata Institute, KKLT. These have not undergone sufficient scrutiny. But the outsider to the subject should understand that string theorists watched with horror, not pleasure, the discovery of the gigantic landscape of solutions. And yet no string theorist that I know is prepared to say that these solutions don't exist. Like Steinhardt, they quake in their boots and pray for deliverance. It is not impossible, but all agree that it is unlikely.

As for Smolin's speculations about the evolution of the universe, let me say that almost all cosmologists would agree that the universe is reproducing. But they would not agree that the dominant mechanism is universes inside black holes (talk about unobservable!).

The most efficient mechanism, according to cosmologists, and one that is gaining strong observational support, is eternal inflation. Inflation is the exponential reproduction of the universe due to a cosmological constant. Perhaps black holes add to the process, but I doubt it. In any case it is absolutely clear that we do not live in the fittest kind of universe, which would be the universe with the largest cosmological constant. We live in a universe which is fit to live in; a large cosmological constant would render our universe fatal to nuclei, atoms and life.

Alexander Vilenkin is a hero of the revolution and I always listen very carefully to what he says. He says that my statement "The kind of answer that this or that is true because if it were not true there would be nobody to ask the question is called the anthropic principle" is simplistic. Yes, it is, and it was intended that way. It's a definition that entirely misses the subtleties that Vilenkin explains. However, it does express a broad-brush definition that covers the many things that are called the anthropic principle.

My own view is that we don't yet know enough to use the AP in a predictive way. Vilenkin disagrees. But what I am sure we, and also Paul, would agree on is that we will be in a much better position to argue the merits of the AP when the landscape is more thoroughly explored. This is probably a job for the string theorists.


Steve Giddings

Some thoughts on the landscape and the anthropic principle:

I'm not a big fan of the anthropic principle. But physics is not designed for you or me to like—it is what it is, and that may mean certain features of our physical world are explained by anthropic reasoning.

If true, this is simply one more step down the Copernican path. Copernicus taught us that the Earth is not the center of the universe. If the idea of the "string landscape" and its population through effects like eternal inflation hold true, then the entire visible universe is not particularly special or unique, but rather is just a small and unremarkable part of an even larger universe. The constants of nature in our region aren't specially tuned to any particular a priori values. Rather we must take a more Darwinian view: life evolves where it can, and in our particular region of the larger universe, or "megaverse," it evolved because the conditions—the strength of electromagnetism, the magnitude of the cosmological constant, and so on—allow life to evolve. Our kind of life couldn't have evolved in a region where these constants took a significantly different value.

I find this viewpoint no more disturbing than the simple observation that life didn't evolve in the center of the sun. There are regions of the visible universe that are hospitable to life and those that aren't, and the same could hold for the megaverse.

One of the things that disturbs many physicists about this picture is its apparent lack of predictability. There are many different possible values for the many physical parameters, and figuring out what region of this space is the "L=1" surface, where life has unit probability of emerging, is an enormously complicated and perhaps not wholly tractable problem. No longer can we follow the dream of discovering the unique equations that predict everything we see, and writing them on a single page. Predicting the constants of nature becomes a messy environmental problem. It has the complications of biology.

But I feel the views of some, that such a picture is unscientific, or a cop-out, are extreme. In particular, understanding the laws that give rise to the megaverse is a very scientific question, and one that I think is well worth studying further. For example, in a paper with Kachru and Polchinski, we outlined a lot of the basic structure underlying one piece of the megaverse that people are talking a lot about today. But we have a ways to go in fully understanding even this piece of the megaverse—indeed its internal consistency has been questioned by Banks and Dine, and it's conceivable the picture could collapse entirely. And assuming that this piece is eventually well understood, it may well be the tip of the iceberg, with many other interesting pieces of the megaverse yet to be explored.

This may force us to rethink the kinds of questions that we hope to answer—such as trying to predict the precise value of the cosmological constant. But it does open up the possibility of investigating other kinds of questions, and could well be testable, once we figure out how to test string theory experimentally. If we're very lucky that could even happen with the Large Hadron Collider, cosmological observations, or perhaps other ways we haven't thought of.

Another fascinating part of the picture is a generic feature of the "landscape." Indeed, this feature would appear to be present even if string theory proves not to be the correct theory of quantum gravity. It concerns the ultimate fate of the Universe. As long as there are extra dimensions of space, and the presently observed positive value of the cosmological constant, it appears that the extra dimensions of space will ultimately become unstable, and can begin to grow. Having a positive cosmological constant is like being in a high mountain valley, and sooner or later, through quantum effects or otherwise, the universe should find its way down to the plains. Thus whether or not we find the extra dimensions of space, ultimately they will find us.

STEVE GIDDINGS is a theoretical physicist at the University of California, Santa Barbara.


Lee Smolin

Regarding Susskind's always vivid comments, I am glad we agree about the basic point that string theory leads to a landscape of theories. The issue I have been concerned with for some time is the same Susskind closes with: how can we get predictions from a theory of this kind? Two possible answers are the anthropic principle and cosmological natural selection. The conclusion I have come to after a lot of thought is that the latter is likely to lead to a larger number of falsifiable predictions.

To avoid confusion it must be emphasized that the term "anthropic principle" is used with several meanings. I agree that the definition Alex Vilenkin gives is nothing but commonsense logic and that, "when used properly", in conjunction with physical hypotheses, it can lead to some falsifiable predictions. However, in these cases, what is falsifiable is not the commonsense logic, but the physical hypotheses it is combined with. This is the case in the example Alex gives regarding the cosmological constant. Here the calculations depend on hypotheses about quantum cosmology and the physics of galaxy formation. If his predictions are proved wrong, he will want to amend those hypotheses, and not the logic used in his reasoning.

My comments were addressed to a different version of the anthropic principle, in which someone posits a multiverse model and then claims its predictions are verified because the ensemble of universes contains at least one universe that has the properties we observe ours to have. Problems with falsifiability arise when the ensemble is so vast that there will be members that agree with any possible future experiments. No falsifiable predictions are possible, because whatever is observed will be true of some members of the ensemble.

But even if we agree to employ Alex's weaker definition, there are further questions. Can we predict the value of any parameter we can measure, or are we restricted to making predictions about just a few parameters?

For example, as pointed out by Anthony Aguirre, there are many possible universes that contain life, but are very different from our own, such as universes where the big bang was cold rather than hot. The anthropic principle cannot explain why we do not live in one of these universes. Hence there are basic features of our world it cannot explain or predict.

There are also problems when the anthropic principle is used to save a theory that otherwise makes incorrect predictions. This can happen when very few members of the ensemble of universes predicted by the theory resemble our world. In such cases, to make reliable predictions about a parameter, x, both the a priori probability given by the theory to members of the ensemble and the probability for life must depend strongly on x. The cosmological constant is one case in which this is satisfied. But there will be many cases in which it is not satisfied. In these cases the theory cannot make predictions. A good example of this is eternal inflation. In eternal inflation the probability, or fitness, depends, as Susskind says, strongly on the cosmological constant. However, the probability depends only very weakly on most measurable parameters such as the masses and charges of the stable elementary particles. This is because their values have little effect at the physical scales at which the reproduction of universes takes place (which are much higher in energy than those so far probed experimentally). Thus, eternal inflation, by itself, cannot explain or predict the values of these observable parameters. Even when the anthropic principle, in Alex's sense, is added, it is still very difficult to make predictions for future measurements having to do with unstable particles, whose existence and properties affect neither the probability for observers nor the probability for inflation.

Let us compare this with the cosmological natural selection scenario, in which the mode of reproduction is through black holes. The rate of reproduction of universes through black holes does depend very sensitively on many observable parameters. This is because the properties of ordinary matter determine the rate of formation of massive stars that become black holes. As a result, almost all members of the ensemble generated will, if the theory is true, resemble our universe. There is no need to call on the anthropic principle to extract a sub-ensemble consisting of otherwise extremely improbable universes. Hence, if black hole formation dominates the reproduction of universes, we have an opportunity to explain the values of all those parameters without relying on the anthropic principle. As a result, the theory gives falsifiable predictions, testable by observations of things like neutron stars. This gives this theory, if true, much more potential explanatory power.

Regarding string theory, here also my intention is to be constructive. I think it is useful in the development of a theory to keep clearly in mind exactly what has been proved, and what remains open and still requires proof. It is unfortunately the case that many key links in the "web of interconnected pieces of evidence" that support string theory remain unproven conjectures, even at a physicist's level of rigor, despite many years of study by many very smart people. It is true that some "genuine mathematicians have no doubt about its validity". But other genuine mathematicians who have studied the technical issues involved do have serious doubts. Given that the theory so far makes no contact with experiment, it is to be hoped that further work will improve this situation.

Similarly, Susskind's claim that the fittest universe is the one with the largest cosmological constant depends on eternal inflation being true. But eternal inflation is much more than just the claim that the universe inflated at early times. It is a large step from present observations to the claim that eternal inflation has strong observational support. That step requires a number of assumptions, which we can hope will be checked as both theory and observation become more precise.


Gino Segre

It may well be that we are part of a megaverse, as Lenny says. This may be the next step in a 500-year progression of our thinking. In 1543 Copernicus proposed that the Earth was not the center of the Universe. Some 70 years later, Galileo showed with his telescope that those milky-looking objects in the sky were made up of many stars. From this the notion of many galaxies eventually evolved, but humans still clung to the idea that Earth was at the center of their own galaxy. That notion was finally disproved by Shapley in the 1920s.

We now believe we live on an ordinary planet, one of many, circling an ordinary star, one of many, in an ordinary galaxy, one of many. Perhaps we need to take the next step, admittedly a revolutionary one, of saying we live in an ordinary universe, a very small part of an enormous megaverse. However, as controversial as each one of those earlier proposals was, they were all confirmed unambiguously by scientific observations. Science has both a revolutionary and a conservative side, revolutionary in the proposing of dramatic new possibilities and conservative in the requirement of demanding experimental evidence before they are accepted.

As with past notions, the idea of a megaverse will require experimental confirmation before it is accepted. Superstring theory and the existence of extra dimensions will likewise have to clear the same hurdle. The megaverse may be the right path and it may not—the existence of a cosmological constant has caught us all by surprise, and some genius may yet calculate its value in a way we cannot even imagine right now, showing us a new road to follow.

Whatever happens, we are all grateful that some very exciting experiments in both particle physics and cosmology will be taking place in the coming years. Hopefully they will help us sort it out.

GINO SEGRE, a professor of Physics and Astronomy at the University of Pennsylvania, is the author of A Matter of Degrees.


Lenny Susskind

A year or two ago most theoretical high energy physicists would have dismissed any talk of the anthropic principle as anti-science. However, as I said in the interview, "because of unprecedented new developments in physics, astronomy and cosmology these same physicists are being forced to reevaluate their prejudices about anthropic reasoning." The attitude among the more thoughtful physicists has softened to "hmmm, maybe we better think about this." The messages of Steve Giddings and Gino Segre reflect this less biased mindset. Segre correctly emphasizes the importance of experimental tests of theoretical ideas. In this connection I want to point out that Weinberg predicted that if the AP is correct, the cosmological constant would turn out to be non-zero. Moreover he predicted the correct order of magnitude. This was more than a decade ago. Finally I want to re-emphasize that it's not just the cosmological constant that is pushing us in the "anthropic landscape" direction. The success of inflation strongly suggests that we live in a very big universe. The other clear fact is that string theory gives rise to a stupendously rich landscape with perhaps 10^500 vacua and no reason to prefer one over the other. Sure, it's possible that some genius will come along and explain the cosmological constant by some mathematical magic, but things sure don't seem to be going in that direction.


Gerard 't Hooft

During the '80s, a number of physicists became more and more excited about what was called "super string theory". The rather bizarre mathematical equations that emerge if one attempts to subject "relativistic strings" to the laws of Quantum Mechanics, had previously appeared to be inconsistent, but are now recognized as possibly describing fundamental elementary particles together with gravitational forces quite similar to those of Einstein's general theory of relativity. Even so, inconsistencies continued unless one postulated very special kinds of projection schemes and symmetries, such as supersymmetry.

Supersymmetry of the type needed has not yet been detected among the real particles of Nature, and other predictions of the theory have not yet been checked against experiment. These are by themselves no reasons to dismiss the theory; supersymmetry is also predicted by other arguments, and the domain of physics where the theory should apply directly, the so-called Planck domain, is so far separated from what can be observed under controlled circumstances that one should really admire these deep and stimulating ideas rather than try to ridicule them, as some other physicists are sometimes seen doing.

However, when I hear Lenny say that "this theory is going to win, and physicists who are trying to deny what is going on are going to lose", then in my opinion he is going too far. I have several reasons for advising my friends to practice caution, modesty and restraint when they air their suspicion that this theory "is" the everlasting and complete theory of the Universe. If this theory indeed allows for 10^500 distinct solutions out of which we somehow have to choose—some say it is 10^1000 solutions, nobody really seems to know—then this must be seen as an enormous setback. Less than a decade ago we still hoped that some stability argument could be used to single out the single "correct" solution; apparently this hope has been abandoned. Now, they are invoking the "anthropic principle", which really means: try all of these solutions until you find a Universe that looks like the world we live in. This is not the way physics has worked for us in the past, and it is not too late to hope that we will be able to find better arguments in the future.

On top of this, there are even more serious objections against "superstring theory". It is by now recognized that superstring theory itself only describes a tiny corner of our world, the corner where these strings happen to interact only weakly, because as soon as they interact more strongly, nobody can follow the equations anymore, let alone solve them. In the past, whenever I complained about this, my voice was hardly heard, but now all string theorists say: "Oh, yes, but then the theory can be reformulated in terms of another theory that is related to the previous version by what is called 'duality'." And, for convenience, it is then forgotten that this new theory, called 'M-theory', again only exists in a few tiny little corners of the world. How do we plan to formulate and understand the complete picture? Can one obtain a complete picture along such lines at all? String theorists are so confident of their expectations that such questions are usually ignored.

This is because the duality schemes that have been discovered are extremely suggestive. Indeed the mathematical equations repeatedly turn out to show a magnificent degree of perfection. But what does all of this really mean? String theorists say: "this can only mean that our theories are true, and this is the scheme used by God to create our Universe."

It is hard to argue with that, since such arguments have some religious overtones. My own "religion" tells me that theories of this sort can never be more than approximations. Perhaps the approximations contain some truth, but the ultimate laws of Nature must contain a fundamental and simple, concise relation between 'cause and effect', between past and future, between close-by and far-away. Such principles could not be built in whatever formulation of 'M-theory' people could give. This is because the duality arguments that are being used do not refer to the local equations, but to their symmetry properties instead. This should be recognised as a weakness of the theory. Take the proud boasts concerning black holes; the resulting picture leaves no shred of locality or causality in the laws controlling these mysterious objects. But this is what I am waiting for. Such a simple demand is unfortunately far too much to ask from what is now called superstring theory or M-theory, and as long as I don't see any progress in this respect I treat the claims with caution and restraint.

GERARD 'T HOOFT, Nobel Laureate, is Professor of Theoretical Physics at the University of Utrecht.


Leonard Susskind

Gerard advises caution and restraint. That's hard to argue with. I consider myself to be a cautious, rather conservative physicist. I really don't like new ideas. But I also find wisdom in a quote from Sherlock Holmes: "When you have eliminated all that is impossible, whatever remains must be the truth, no matter how improbable it is." A couple of times I have reached the point where I felt forced to a very unconventional idea, because I could see no way out of it. One case that particularly comes to mind is the "Holographic Principle." This was a crazy idea but I would guess that Gerard felt the same way as I did; all conventional alternatives led to paradox or inconsistency. That is exactly the way I feel about the cosmological constant.

I've watched for 40 years as people tried this scheme, and that scheme, to explain the absence of vacuum energy, but they all failed. I've also seen string theorists fail over and over in trying to find a "vacuum selection" principle that would pick out a particular version of the theory. Add to this the fact that astronomers find the cosmological constant to be non-zero but just barely small enough for galaxies to form, and I personally feel that we have come to a point where "whatever remains must be the truth, no matter how improbable it is." Here's what we know:

The cosmological constant is probably not zero but falls in the narrow range of values that allows galaxies, stars and planets to form. The evidence for this is empirical.

There is growing empirical evidence confirming the inflationary theory of cosmology. It follows that the universe is much larger than what we can observe.

Theories of inflation tend to produce domains of space with varying vacuum properties such as the vacuum energy (cosmological constant). This is from theoretical studies.

String theory has a very large number of vacuum solutions. Some are supersymmetric, but these do not support ordinary chemistry. In addition there appear to be a huge number of non-supersymmetric vacua with non-zero cosmological constant. As Gerard says, the numbers could be as large as 10^500 or bigger. The evidence for this is mathematical but not rigorous.

Gerard may not find a pattern here but I do. It's a matter of taste and judgement.

My comments about the "theory winning" and "theorists in denial" were mainly aimed at those string theorists who want to avoid the facts. Their own theory is pointing in a very different direction than what they hoped. I did not have in mind people like 't Hooft who remain skeptical of string theory. However, I do take exception to his claim that "the resulting picture leaves no shred of locality or causality in the laws controlling these mysterious objects." Here I can only say that I believe Gerard is wrong.

Finally, I would ask Gerard: do you have a better idea?

I want to add one technical comment to the above response. In Gerard's message he says "the ultimate laws of Nature must contain a fundamental and simple, concise relation between 'cause and effect', between past and future, between close-by and far-away. Such principles could not be built in whatever formulation of 'M-theory' people could give." I completely agree with the first sentence in quotes. I don't agree with the second. The present formulation of (uncompactified) M-theory is called M(atrix) theory. It is a conventional quantum mechanical theory with a Hamiltonian and a Schrödinger equation. The relations between past and future, cause and effect, are exactly the same as in any other quantum mechanical system. While I certainly agree that there is a lot missing, I think it is too much to say "the resulting picture leaves no shred of locality or causality."


Maria Spiropulu

I don't know how else to understand the anthropic principle other than the "simplistic" way. Does anybody have a scientifically precise definition of this principle and how to apply it?

In the physics I have learned there were many examples where the mathematics gave infinitely many degenerate solutions to a certain problem (in classical mechanics, for example). There the problem was always a mistake in the physics assumptions. Infinity is mathematical, not physical, as far as I know.

There lies the difference between math and physics. In math you have the equation and you look for the solution—the solution can be a set of solutions, even infinite solutions. In physics you start from the answer—the real world (scale by scale, as I learned from Polchinski), and you seek the equation. There are measurements (well, there are many measurements, many experiments, resulting in one arithmetic value for this or that), and you look for the equation. If the equation gives you nonsense, then it is not the measurement that is wrong but the equation.

In other words one should not expect to derive the uniqueness of the universe starting from an infinite set of solutions to a beautiful equation. One should start from the universe, which is the one universe that we measure, and try to find a theory that describes it.

I don't understand anthropic remarks like "the sun-earth distance is just right to allow the appropriate chemistry for humans to be." Of course it is. But before the chemistry was there, the distance was the same. It is more interesting to research the thermonuclear reactions in the sun, discover something about the neutrinos, understand the radioactive warming of the earth's core, study the earth's atmosphere, and in general find why the temperature and chemistry are what they are—not for us to be here but for the phenomena to be what they are. And I find it rather absurd to believe that if we were not here the sun-earth distance would be different and the universe would be upside down.

The whole anthropic way of thinking seems to me intellectually decadent. It takes obviously true positive statements, then negates them to make a conditional negative argument, which is then regarded as profound or scientific.

The argument "The environment has to be right for us to exist" is obviously right. But scientifically I find it is a redundant statement. Of course I cannot be in an environment that I cannot survive in, and study that environment at large. But I can study the enviroment I live in and this is what I do. The life-centric view of the works of the cosmos seems to me too mystical to be able to deal with scientifically.

MARIA SPIROPULU, a physicist, is currently at CERN. She has been working at the Tevatron with UCSB and was an Enrico Fermi Fellow at the EFI/University of Chicago.


Back to THE LANDSCAPE: A Talk with Leonard Susskind


Re: WHY GORDIAN SOFTWARE HAS CONVINCED ME TO BELIEVE IN THE REALITY OF CATS AND APPLES: A Talk with Jaron Lanier

Responses by John Smart, Daniel C. Dennett, Dylan Evans


John Smart

To Dylan Evans:

You made the following statement in your response:

"the distinction between serial and parallel processors is trivial, because any parallel machine can be simulated on a serial machine with only a negligible loss of efficiency."

I found that statement fascinating. I've heard it vaguely before, and it exposes a hole in my understanding and intuition, if true. I was wondering if you could point me to a reference that discusses this further. My training is in biological and systems sciences, with only a few semesters of undergraduate computer science, so I'd appreciate any general overview you might recommend.

I also have two specific questions, which I am hopeful you can address with a sentence or two:

1. I would expect connectionist architectures such as neural networks and their variants to be simulable on serial machines for small numbers of nodes with only a negligible loss of efficiency. But how could that scale up to millions or billions of nodes without requiring inordinate time to run the simulations? Isn't there a combinatorial explosion and processing bottleneck here?

I just can't believe, unless you understand some interpretation of Turing and von Neumann et al. that I've never learned, that there wouldn't be a scale-up problem in using serial systems to simulate all the possible nuances of the "digital controlled analog" synaptic circuitry in a mammalian brain, with all its microarchitectural uniqueness.

A related and equally important problem, to my mind, involves the timing differences between differentiated circuits operating in parallel. Neurons have learned to encode information in the varying timing and frequency of their pulses. Various models (e.g., Edelman's "reentrant" thalamocortical loops) suggest to me that nets of differentiated neurons are very likely to have learned to encode a lot of useful information in the differential rates of their computation. Therefore, even if the serial simulation of a particular set of neurons incurred only a "negligible" slowdown, it would seem to me to throw away much of what may be the most important information that massively parallel systems like the brain have harnessed: how to utilize the stably emergent, embodied, subtly different rates of convergence of pattern recognition among different specialized neural systems.
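
To make the timing point concrete with a toy example (purely illustrative; the encoding scheme and every number below are invented):

    def encode_interval(value, t0=0.0, scale=0.001):
        """Encode a value as two spike times; the inter-spike interval carries it."""
        return (t0, t0 + value * scale)

    def decode_interval(spike_a, spike_b, scale=0.001):
        """Recover the value from relative timing alone."""
        return (spike_b - spike_a) / scale

    spikes = encode_interval(42.0)
    assert abs(decode_interval(*spikes) - 42.0) < 1e-9

An event-driven serial simulator can of course carry such timestamps explicitly in virtual time; my worry is whether that bookkeeping remains tractable and faithful at the scale and subtlety of real neural tissue.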

2. If our brain uses trillions of synaptic connections, each of which has been randomly tuned to a slightly different representation of bits of reality (as in visual processing), in order to discover, through a process of neural convergence, a number of emergent gestalt perceptions, then aren't we going to need massive self-constructing connectionist capability in order to emulate this in hardware?

Teuvo Kohonen (one of the pioneers of Self-Organizing Maps in neural networks) once said something similar to this to me, and he expects his field to take off once we are doing most of our neural net implementation in hardware, not software.

For what it's worth, I am currently entertaining the model, borrowed from developmental biology (including the developing brain), that about 90-95% of the complexity in any interesting system is driven by bottom-up, chaotically deterministic processes (which must fully explore their phase space and then selectively converge), while about 5-10% involves a critical set of top-down controls. These top-down controls are tuned to the parameters and boundary conditions of the developing system (as with the special set of developmental genes that guide metazoan elaboration of form). Serial processing in human brains seems to me to be a top-down process, one that emerged from a bottom-up process of evolutionary exploration, one that is very important but only the tip of the iceberg, so to speak. The limited degree of serial and symbolic processing that our brains can do, versus their massive unconscious "competitions" of protothoughts, seems to me to be a balance we can see in all complex adaptive systems. (Calvin's Cerebral Code provides some early speculations on that, as does Edelman's Neural Darwinism.)

I see today's serial programming efforts essentially as elegant prosthetic extensions of top-down human symbolic manipulation (the way a hammer is an extension of the hand). But some time after 2020, when we've reached a local limit in shrinking the chips, there will for the first time be a market for multichip architectures (e.g., evolvable hardware could be commercially viable at that point), and it is then that I expect to see commercially successful, biologically inspired, bottom-up-driven architectures. It is also at that point that I expect technology to transition from being primarily an extension and amplifier of human drives to becoming a self-organizing and increasingly autonomous computational substrate in its own right.

Neural nets controlled by a hardware description language that could tune the way it harnessed randomness in network construction, and pass on those finely tuned parameters in the same way that DNA does, would seem to me to be a minimum criterion for applying the phrase "biologically inspired." But this seems to be something we are still decades away from implementing (beyond toy research models). I would see such systems, once they have millions of nodes and have matured a bit, as potential candidates for developing higher-level intelligence, even though the humans tending them at that time may still have only a limited appreciation of the architectures needed for such intelligence.
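
As a purely illustrative sketch of the kind of scheme I mean (the genome layout, the stand-in fitness function, and every number below are invented for illustration):

    import random

    def build_network(genome, n_nodes=32, rng=random):
        """Wire a random net whose statistics are governed by the genome."""
        density, weight_sd = genome
        return [(i, j, rng.gauss(0.0, weight_sd))
                for i in range(n_nodes) for j in range(n_nodes)
                if i != j and rng.random() < density]

    def fitness(genome):
        """Stand-in objective: prefer a target wiring density and weight spread."""
        mean_degree = len(build_network(genome)) / 32
        return -abs(mean_degree - 4.0) - abs(genome[1] - 0.5)

    def mutate(genome, rng=random):
        """Jitter the construction parameters that get passed on."""
        density, weight_sd = genome
        return (min(max(density + rng.gauss(0, 0.02), 0.01), 1.0),
                max(weight_sd + rng.gauss(0, 0.05), 0.01))

    genome = (0.5, 1.0)
    for _ in range(200):                       # simple mutate-and-select loop
        child = mutate(genome)
        if fitness(child) >= fitness(genome):  # selection: keep the better genome
            genome = child
    print(genome)

The point is only that the parameters governing how randomness is harnessed, not the wiring itself, are what get tuned and inherited.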

This may be more than you want to address, but any responses you (or any of the other thinkers on this thread) might share would be much appreciated, as I'm in a bit of cognitive dissonance now given your interesting statement and Dan Dennett's implicit support of it (below). Thanks again for any help you may offer in clarifying your statements.

JOHN SMART is a developmental systems theorist who studies science and technological culture with an emphasis on accelerating change, computational autonomy and a topic known in futurist circles as the technological singularity.


Daniel C. Dennett

Dear Mr. Smart,

I think you are right about the relation of speed and efficiency to parallel processing (see, e.g., my somewhat dated essay "Fast Thinking" in The Intentional Stance, 1987), but I took Jaron to be taking himself to be proposing something much more radical. Your idea that timing differences by themselves could play a large informational role is certainly plausible, for the reasons you state and others. And if a serial simulation of such a parallel system did throw away all that information, it would be crippled. I take it that your idea is that the timing differences would start out as merely intrinsic to the specific hardware that happened to be in place, and hence not informative at the outset, but that with opportunistic tuning, of the sort that an evolutionary algorithm could achieve, such a parallel system could come to exploit these features of its own hardware.

So I guess I agree that Dylan overstated the case, though not as much as Jaron did. If Jaron had put it the way you do, and left off the portentous badmouthing of our heroes, he would have had a better reception, from me at least.


Dylan Evans

To John Smart:

Thanks for your comments and questions. A general overview of computational complexity theory, including the question of serial vs. parallel computing, can be found in Algorithmics: The Spirit of Computing, by David Harel (Addison-Wesley, 3rd edition 2003).

Let me take your questions sequentially:

1. You are right to think that there would be a scale-up problem when using serial systems to simulate a mammalian brain in all its "microarchitectural uniqueness", but this does not contradict my point about simulating any parallel machine on a serial machine "with only negligible loss of efficiency", for two reasons:

(a) By "negligible", I meant only a polynomial time difference. This is "negligible' in terms of computational complexity theory but not always negligible in the context of a particular technological application at a particular time. An engineer wanting to simulate a mammalian brain today might use a massively parallel machine such as a Beowulf cluster, because for him or her the difference between a year and two days is very significant. But in ten year's time, advances in computing speed might reduce this difference to, say, that between 3 days and 20 minutes.

(b) More importantly, I think your question is premised on a fundamental misunderstanding of classical AI. In classical AI, computers are used not to model the brain in all its molecular glory, but rather to model the mind—to understand the software, in other words, rather than the hardware. By software, I mean algorithms. And it is here that the research into sequential and parallel processing really becomes relevant. For while a parallel machine might work very differently from a serial machine as far as the hardware is concerned (and will therefore employ algorithms specially tailored for parallel architectures), there are no problems that a parallel machine can solve which a serial machine cannot. So we can run equivalent algorithms (equivalent in the sense that they solve the same algorithmic problem) on brains and serial computers. Brains are parallel machines made of very slow components (neurons), while serial computers are sequential machines made of very fast components (silicon circuits). So now the time differences you mention are not so clear-cut. Some people (e.g., Nicolelis) have already used serial computers to compute the algorithms running on the brain faster than the brain itself does.
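
A minimal illustration of "equivalent algorithms" in this sense (the example is invented): a pairwise reduction of the shape a parallel machine would exploit, and a plain serial loop, solve the same algorithmic problem and are interchangeable as far as that problem is concerned.

    def tree_sum(xs):
        """Pairwise reduction: the shape a parallel machine would run in log rounds."""
        xs = list(xs)
        while len(xs) > 1:
            if len(xs) % 2:                # pad odd-length rounds
                xs.append(0)
            xs = [xs[i] + xs[i + 1] for i in range(0, len(xs), 2)]
        return xs[0] if xs else 0

    def loop_sum(xs):
        """Straight serial accumulation."""
        total = 0
        for x in xs:
            total += x
        return total

    data = [3, 1, 4, 1, 5, 9, 2, 6]
    assert tree_sum(data) == loop_sum(data) == 31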

2. Neural networks are, as far as I'm concerned, a huge red herring. They may make good models of the brain, but they tell us absolutely nothing about the mind. In other words, they are a useful tool for neuroscientists, but not for cognitive scientists or those in AI, who wish to discover what algorithms the brain is running, not the architecture on which it runs them. Besides, all neural networks at the moment are simulations written in software of an essentially serial nature, running on serial processors. Every neural network can, in principle, be reduced to either (a) an algebraic equation or (b) a set of coupled differential equations. From the point of view of someone who wants to understand how the mind works, it is much more important to understand what these equations are, and this may be done more easily and transparently by coding the equations directly than by dressing them up in a Gordian neural network.
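
To illustrate the reduction (a sketch with arbitrary, untrained numbers, not a model of anything), a small feedforward network is literally a single nested algebraic expression:

    import math

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    def network(x1, x2):
        # Hidden layer: two units; all weights are chosen arbitrarily for illustration.
        h1 = sigmoid(0.5 * x1 - 1.0 * x2 + 0.1)
        h2 = sigmoid(1.5 * x1 + 0.25 * x2 - 0.2)
        # Output: the whole "network" is this one algebraic equation.
        return sigmoid(1.0 * h1 - 0.5 * h2 + 0.05)

    print(network(1.0, 0.0))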

You are right that today's serial processing efforts are essentially "elegant prosthetic extensions of top-down human symbolic manipulation", but this doesn't mean that they are not the best way to understand the rest of the mind. In fact, it is precisely because they are extensions of our powers for symbolic manipulation that they constitute such a good way to understand the rest of the mind, rather than merely to simulate it. This is an important point: if you built a neural network that was a perfect model of the brain, in all its detail, that would not tell you very much about the mind. On the other hand, given a representation in a language like C++ of the algorithms running in the brain, you would have a complete understanding of the mind, and you could trace every subroutine down to the last loop. It would be perfectly transparent, in the sense that a good mathematical proof is transparent.

So I can see why neural networks would be of great relevance to your research in developmental biology, but I hope you can also see why they don't actually help very much if one's aim is to discover the algorithms that constitute human intelligence.

To finish, I enjoyed your speculations about forthcoming developments in computer technology. I hope you are right! But creating intelligent artefacts will not necessarily tell us much about the human mind, especially if the artefacts are allowed to evolve in such a way as to become as opaque as all the other examples of evolution we see around us!


Back to WHY GORDIAN SOFTWARE HAS CONVINCED ME TO BELIEVE IN THE REALITY OF CATS AND APPLES: A Talk with Jaron Lanier

TED 2004 Conference | Monterey, CA | 11:00 am | Wednesday, February 25

An Edge Reality Club Meeting at TED (Technology, Entertainment, Design)
De Anza Ballroom 1 — DoubleTree Hotel

WHAT'S NEW IN THE UNIVERSE?
Three of the World's Leading Physicists Ask Each Other the Questions They are Asking Themselves


Panelists:
Alan Guth, Paul Steinhardt, Leonard Susskind

Moderator: John Brockman

open to the public | admission free
 
 
Paul Steinhardt on "The Cyclic Universe"
Leonard Susskind on "The Landscape"

~~~

"The conventional inflationary picture received a great boost over the past few years by the somewhat shocking revelation of a new form of energy that exists in the universe, the energy that for lack of a better name is typically called 'dark energy.'...But let me start the story further back. Inflationary theory itself is a twist on the conventional Big Bang theory. The shortcoming that inflation is intended to fill in is the basic fact that although the Big Bang theory is called the Big Bang theory it is, in fact, not really a theory of a bang at all; it never was."

— Alan Guth, the father of the inflationary theory of the Universe, is the Victor F. Weisskopf Professor of Physics at MIT and author of The Inflationary Universe. [click here]

~~~

"I've been involved in the development of an alternative theory that turns the cosmic history topsy-turvy. All the events that created the important features of our universe occur in a different order, by different physics, at different times, over different time scales-and yet this model seems capable of reproducing all of the successful predictions of the consensus picture with the same exquisite detail."

— Paul Steinhardt, a leading theoretical cosmologist, is the Albert Einstein Professor in Science and on the faculty of both the Departments of Physics and Astrophysical Sciences at Princeton University. [click here]

~~~

"What we've discovered in the last several years is that string theory has an incredible diversity-a tremendous number of solutions-and allows different kinds of environments. A lot of the practitioners of this kind of mathematical theory have been in a state of denial about it. They didn't want to recognize it. They want to believe the universe is an elegant universe-and it's not so elegant. It's different over here. It's that over here. It's a Rube Goldberg machine over here. And this has created a sort of sense of denial about the facts about the theory. The theory is going to win, and physicists who are trying to deny what's going on are going to lose."

— Leonard Susskind, the father of string theory, is the Felix Bloch Professor of Theoretical Physics at Stanford University. [click here]

