EDGE 9 — March 11, 1997





John Perry Barlow, Stewart Brand, Dave Winer, Jaron Lanier, Kevin Kelly, Marvin Minsky, Paul Keegan, David Bunnell


Paolo Pignatelli: Further Questions for Joseph LeDoux

(8,928 words)

John Brockman, Editor and Publisher | Kip Parent, Webmaster



From: John Perry Barlow
Date: Tuesday, March 04, 1997 7:32 PM
To: Paul Keegan
Subject: Upside, You Should Be Ashamed of Yourselves...

Mr. Keegan,

I have just sent the following to the editors of Upside regarding your hatchet job on Louis Rossetto. You may be assured I will never return one of your research calls and I will advise everyone I know in the industry to do likewise.

Up yours is more like it.

As someone who shares many of Louis Rossetto's convictions and has, in fact, helped develop a few of them, I expected to feel a little defensive about your attack on him, Jane Metcalfe, and Wired. But when all I knew of the piece was the sniggering cover "photo" of your naked prey, I advised Louis to shrug it off, as I had advised him to shrug off the propaganda campaign the old media conducted against Wired's IPO.

"I've always figured that if they're picking on me, they're leaving someone else alone," I told him. Besides, I said, there is much consolation in truly believing that the future will prove you right, no matter how nuts the present may find you.

Indeed, I recommended he regard such abuse as a compliment. After all, as Chairman Mao once observed, "A revolution is not a dinner party." As revolutionary movements become real, the forces of Business as Usual usually start behaving unpleasantly. "At least they haven't shot any of us yet," I pointed out to him.

Well. Not quite. But when I actually read your special Death to Wired Issue, I beheld something as close as you can get to assassination without actually using bullets. It was, quite simply, the most gratuitously nasty piece I've ever read that was written by neither a neo-Nazi nor a New York art critic.

Without actually refuting a single one of Louis' beliefs on its merits, Paul Keegan smeared them all under a thick coat of snotty adjectives. He must have really dog-eared the old thesaurus to keep his dismissals so evenly mean-spirited. On a bad day, Rush Limbaugh extends Hillary Clinton more decency and compassion than Keegan allows Louis.

Keegan's article isn't journalism. It is little more than the schoolyard bully rallying the other kids to beat up on the new boy with glasses.

Nor were Keegan's hatchet hacks sufficient. You also felt obliged to mutilate the corpse of Wired's IPO in similarly adjectival invective. And polished off the whole delightful massacre with your end-piece whack at "Nicholas Negropompous." (Which was, at least, pretty funny.)

I kept wondering what these folks had done to deserve insults of such personal cruelty. Has optimism really become so heinous? Is it now a crime to still believe anything you believed in 1969?

But then I looked back to your cover. And lo, there was the answer: "Upside, the Business Magazine for the Technology Elite." Of course! The reason you are so sanctimonious about Wired's elitism, techno-enthusiasm, shallow materialism, and general sucking up to the "top bosses" is because you mean to supplant it with your own '80's version of the same thing.

Upside is trying to destroy Wired for the same reason Cain slew Abel: fratricidal envy. It wants to take what Wired presently has and reproduce it in a form more palatable to the Powers That Were. Upside aspires to be Wired Lite, same great advertising base, no troubling new ideas. And a smart-ass put-down for anyone who might originate some.

You and Keegan may characterize yourselves as the responsible parents who, by your mature understanding of the "Real World," will clean up the mess technology is about to visit on us all, but vilifying those who warn of storms already raging in the service of your own hypocritical self-interest is anything but mature behavior.

Wired and its progeny will continue to be vital historical forces when Upside is sharing its fossil layer with the other corporate reptiles of the Industrial Period.

Sincerely as hell,

John Perry Barlow

JOHN PERRY BARLOW is cofounder of the Electronic Frontier Foundation, a former lyricist for the Grateful Dead, and a former Wyoming cattle rancher.

From: Stewart Brand
Date: 3/4/97

Well hissed, John Perry. I agree.

A rather similar hit piece on the cover of Esquire about John Lennon led directly to his murder, by the fucking way. The little shit whatshisname read it in Hawaii and felt he'd been given the final encouragement/permission to liberate the object of his fandom from his too too flawed flesh.

STEWART BRAND is founder of the Whole Earth Catalog, cofounder of The Well, cofounder of Global Business Network, and author of The Media Lab: Inventing the Future at MIT (1987) and How Buildings Learn (1994).

From: Dave Winer
Date: 3/4/97

It's fascinating how these ad hoc push channels go!

Seems to me that Wired dishes out lots of this kind of stuff, what's wrong with a little coming back? Isn't that the way the world works? Must be good for circulation. Wired isn't the Beatles. Wired is Esquire, on its best days. On its worst days, it's less.


DAVE WINER is a software developer and the publisher of DaveNet.

From: Jaron Lanier
Date: 3/4/97

As they used to say in the schoolyard, two wrongs don't make a right. The Wired pieces that you're thinking of might include the pieces on Ted Nelson and Bob Stein.

What's notable about all the hit pieces on the table is that the target is largely condemned for smelling like the sixties. There's some kind of dark business about all this, a rage against our cultural "other".



JARON LANIER, a computer scientist and musician, is a pioneer of virtual reality, and founder and former CEO of VPL.

From: Stewart Brand
Date: 3/5/97

That's perceptive and very interesting, Jaron. I see that I'm rising up before dawn in London to respond to it.

To expand on your point, Wired's piece on Paul Allen was also of the Nelson and Stein ilk. And he also was dinged for smelling like the sixties. It's as if in private we sixties survivors brag about being more sixties than each other, and in public less sixties than each other.

Still, the Upside piece on Louis was shallow, sloppy, and not even funny. The Wired articles about Ted Nelson and Bob Stein were serious and rather insightful. (The Paul Allen piece was serious but basically wrong, I think — among other things it mocked Allen's investments. If any of us had invested in parallel with Allen, we would be doing very well.)

How does one report organizational failures? I think it's extremely important that failures are examined. Mostly in business it's taboo to even discuss. Xanadu failed. Voyager failed. Whole Earth failed. VPL failed. Thinking Machines failed. Content.com failed before even starting. EFF nearly failed.

In each case the founders were in the thick of the failure. Now, was it personality flaws that did our organizations in? Hubris? (Watch out, Wired.) Wrong theories? This stuff is worth dissecting, as a warning to the others. It needs to be done with respect — maybe even humor — as an inspiration to the others.

Lately I've been reading a good book, which I'll send out to GBNers shortly. It's called THE IDEA OF DECLINE IN WESTERN HISTORY, and it's full of painful revelation, at least to me. Hoo boy, do us apocalyptoids have a sordid lineage — viciously racist, among other things.

Not long ago, the Enlightenment gave way to Romanticism — a form of objective optimism was replaced by a form of subjective pessimism. Joyously finding new species all over the world was replaced by mooning over sinking Venice. Artists bravely explored their own intestinal tracts. Over the decades Romanticism waxed and waned. It waxed in the Russian revolution, and with the Nazis. In our sixties it waxed, along with political theories so hysterically agley that our generation venerated one of the three dominant monsters of the century, Chairman Mao.

It may be reasonable to posit that our theories and our Romanticism had something to do with our organizational failures.

From: Dave Winer
Date: 3/6/97

How does one report organizational failures?

Keep struggling with it, you'll never figure it out because we're in no position to judge something as failed or not. The best we can do is dig, learn, and report what we learned. I don't know if Nelson is a failure or not. But I remember reading the Wired piece, and I even wrote something about it, the text is at the end of this email. The Wired piece evoked something for me. That makes it good writing. No point grappling with the bigger issues.

This failure thing and its related diseases, products killing products, hero geeks saving the world, these are all myths. We die here. Before that we can find other people and play and learn. That's all there is.

I see the Upside piece as a totally positive thing. It got me emailing with Louis. He's a smart man. I have a much better idea where he's at. He's taught me a lot. Many thanks!

I hear from Jaron, which is cool too. Two wrongs don't make a right? There's always another point of view. Not bad.

No heroes. Let's just play and learn.


PS: My opinion: the Upside piece was a skimmer. Not great writing. The cover scared me, it got me to open the book, but the story behind the cover didn't hold my attention. Maybe I was busy processing something else.

From: Kevin Kelly
Date: 3/7/97

One of the things we've been trying to do more of in Wired is write more about (and write more insightfully about) failure. This I know, it is far more difficult to write about failure than success. If you mess up the details of success, no one cares. If you mess up the details of failure, you make enemies real fast.


KEVIN KELLY is the executive editor of Wired magazine. He is the author of Out of Control.

From: John Perry Barlow
Date: 3/7/97

I think it is critical to write about failure — and you folks *should* do more of it yourselves — but it is more important to understand that the people involved in failures are not necessarily failures themselves. And are often caught in the vortices of some larger system as inexorable as Greek tragedy.

The Greeks knew the importance of understanding the physiology of tragedy, and while hubris generally kicked it off, there was always in Greek tragedy a sense of compassion for those being punished for that sin, particularly since most of them came down with hubris without even knowing what it was. And since most of us have been guilty at one point or another without getting shaken and baked.

I don't believe in ignoring error, especially when it's one's own or uncomfortably close to home. It is a well-qualified cliche that you must be aware of your mistakes to learn from them. And we can learn from one another's mistakes...though in my experience, not much...

But to sneer at the afflicted, to rejoice in such schadenfreude as did Keegan and Upside teaches us nothing and devalues the moral currency.

To say — as several have in Upside's defense — that Wired deserved this for its own similar sins only demonstrates how depressed the coin of decency has already become.

Let's get back to thinking about responsibility and quit savoring these ugly pleasures of blame. Let's study the sin and give the sinner the empathy he usually deserves.

Your loyal pal,

John Perry

From: Marvin Minsky
Date: 3/7/97

In many of the cases mentioned earlier in this discussion — the "failures" were the result of introducing ideas too early for the larger culture to know how to absorb them at that time. For example, I couldn't convince people to build Confocal Microscopes forty years ago but now (with cheap computers and lasers) there must be a billion dollars of them.

Jaron Lanier's VR visions have already become successful — but for other companies.

There very likely *will* be successful LISP machines again, though it's hard to guess when.

Same, probably, for Connection Machines. Same for Nelson and Bob Stein.

I think the problem is that too many writers confuse
"The inventor 'failed' to make a huge personal fortune" with
"The inventor failed to have the idea eventually adopted."

An excellent example of this is the frequently-repeated story that

"In the 1980s, the idea of 'Rule-Based Expert Systems' was overly hyped,
and the resulting disappointment led to an 'AI winter'."

In fact, the public companies formed to exploit the idea did not make much money, and the investors were disappointed. So Wall Street called it a failure. But the technology became a multibillion-dollar industry, distributed among thousands of companies. The "trouble" was that it became popular and did not remain proprietary to the entrepreneurs. Tough for those investors — but for the rest of the world these ideas were extremely successful.

The trouble, in my view, is that too many reporters have unwittingly become 'tools of capitalist ideology'. Hey, Kevin Kelly! How about a satire about Leonardo Da Vinci as a huge failure, because (1) his airplanes didn't fly, (2) he finished only a couple dozen paintings (better fact-check this) and, (3) he didn't even get a dime for his notebooks, although Bill Gates paid many millions for one (or more?) of them.

MARVIN MINSKY is a mathematician and computer scientist; Toshiba Professor of Media Arts and Sciences at the Massachusetts Institute of Technology; cofounder of MIT's Artificial Intelligence Laboratory, Logo Computer Systems, Inc., and Thinking Machines, Inc.; laureate of the Japan Prize (1990), that nation's highest distinction in science and technology; author of eight books, including The Society of Mind.

From: Kevin Kelly
Date: 3/7/97

Exactly. This is sort of the Ted Nelson story. I think Ted is a hero. I think his ideas were a success, in the way Marvin indicates. But the story was about how a hero with the right idea got caught up in such an epic failure. And what might or might not be learned from that.


From: Paul Keegan

At John Perry Barlow's suggestion, I am forwarding you a copy of my reply to his email about my recent piece for Upside.

Paul Keegan

Dear Mr. Barlow,

Gosh. I'm (almost) speechless. Was my critique of Louis Rossetto and Wired really "as close as you can get to assassination without actually using bullets"? I pulled out a copy of my story and searched madly for "insults of such personal cruelty" that they actually conjured images of the "schoolyard bully rallying the other kids to beat up on the new boy with glasses." And that's not the worst of it: I'm an even bigger meanie than Rush Limbaugh!

As much as I'd love to defend my piece, I'm having trouble discerning from your email what exactly triggered these inspired bursts of outrage. You don't cite any of the "snotty adjectives" and "mean-spirited dismissals" that my story is supposedly chock full of — but then again, I had the unfair advantage of using a thesaurus.

I did find a few clues about what might have vexed you so: You imply that I consider optimism heinous and that I think it's a crime to still believe anything you believed in 1969. Rummaging through my piece, however (with the same optimism I've maintained since well before the Summer of Love), I could find no such sentiments.

I also noticed that you referred to my story as being "sanctimonious about Wired's elitism, techno-enthusiasm, shallow materialism, and general sucking up to the `top bosses.'" Since nowhere in that sentence do you use the word "alleged," I can't tell if you are challenging or agreeing with such a characterization of Wired. Please advise.

Near the end of your vastly entertaining screed, I finally unearthed a paraphrase from my piece — the part about the parents of the world being left to clean up, as you phrased it, "the mess technology is about to visit on us all."

The metaphor of a storm on the horizon, you should know, comes not from me, but from Louis Rossetto, who both in conversation and in his magazine routinely evinces an utter lack of concern about the deleterious effects such a digital squall could have (or may already be having) in a world in which some homes are constructed less sturdily than others. In fact, Wired seems to be delighted to announce the impending chaos. You argue that my pointing this out shows a "hypocritical self-interest," which I find amusing given the fact that Wired proudly says that it both covers and champions the digital revolution. But since I'm not even on staff at Upside, I fail to see my hypocrisy or where my self-interest comes in — unless you mean attempting to eke out a living as a free-lance writer.

Anyway, those are all the clues I could find to back up your charge that I'm some kind of character assassin. I'd be glad to respond further if you'd care to provide more examples — but wait! I nearly forgot. You've threatened not only to refuse to speak to me, but to blackball me throughout the industry! The note attached to your letter to Upside says, "You may be assured I will never return one of your research calls and I will advise everyone I know in the industry to do likewise."

Sounds like the fearless leaders of the digital revolution — all those new kids with glasses running from bullies like me — have locked the clubhouse door so they can furiously scribble their manifestos without being constantly interrupted by adults reminding them that it's time for dinner. Precisely the point of my story.

Nevertheless, please allow me to hope that your quotation of Chairman Mao does not also imply an agreement with his views about the best way to handle dissenting opinion. I'd always thought the Digerati loved to debate and never shrink from conflict. If we can at least agree on the value of intelligent discussion, you could take the constructive step of supporting your histrionic allegations. As Louis himself says, "You say you want a revolution, you're not going to get anywhere with pictures of Chairman Mao."

Sincerely as heck,

Paul Keegan

cc: Richard L. Brandt, editor, Upside magazine

From: John Perry Barlow
Date: 3/7/97
To: Paul Keegan


I am still mulling over my response to your genuinely thoughtful reply.

In fairness, I should tell you that I sent my original message not only to you and Upside, but also to the list of "digerati" cc'd above. I thought about forwarding your message directly on to them, but I feel that is yours to do.

And, in the spirit of open discourse, I hope you will do so.

Still hot, but cooling,

John Perry

P.S. I apologize for my bluster about advising friends in the industry against talking to you in connection with a story.

I don't think *I'll* talk to you, but I would never try to blackball anyone. Freedom of speech includes the freedom to have an audience. (Though, as Hubert Humphrey once said, it doesn't include a right to be taken seriously.)

From: Paul Keegan
Date: 3/7/97

Dear John Perry,

Thank you for the invitation to join in the discussion you so spectacularly provoked with your howling email about my piece in the February issue of Upside. I'm glad to see you're cooling a bit and that you have removed your hex on me — though I should admit that I hadn't ascribed supernatural powers to you anyway, supposing that the more reasonable members of your cc club would have made up their own minds about whether shunning me is an appropriate response to my failure to agree with you.

And I accept your apology, but as for your line, "I would never try to blackball anyone," spare me the pious nonsense — that's exactly what you tried to do before I called you on it.

Now that you've so generously allowed me to speak, I'd like to respond to some of the more outrageous comments that have ensued. What can one say about Stewart Brand's implication that Upside and I would be responsible for the next Mark David Chapman — "A rather similar hit piece on the cover of Esquire about John Lennon led directly to his murder, by the fucking way" — except that psychological counseling may be in order?

As for the David Winer/Jaron Lanier exchanges about whether Wired or Upside is worse when it comes to blasting people, that seems rather beside the point. Of course magazines will trash each other out of self-interest. But since we're all fairly sophisticated about how the media works, I'd assumed that would be a given. So let's forget the cover for a moment. What about the story?

Why is it that none of the people attacking my piece feel compelled to substantiate their charges? Have they actually read it? You've already gone into apoplectic fits but still haven't really said why. Where are the factual inaccuracies? The quotes taken out of context? The specious arguments? Please name them so we can have an intelligent discussion. As for Stewart Brand: What was "shallow" and "sloppy" about my story? There's nothing more shallow and sloppy — not to mention cowardly — than leveling such a charge in a throwaway phrase without backing it up. Finally, if the piece was so wrong and unfair, why have we heard nothing from the subjects themselves, Louis and Jane, nearly two months after the piece first came out?

My concern, as you might imagine, is that your charges against me are then repeated or taken as true by others in the long list of people you copied your flaming email to, not to mention the readers of Upside's letters page. Before anyone else picks up the slanderous thread you've tossed into the ring, John, please explain to me what you know about me that justifies your rather serious charge that my motives were anything but journalistic. Let's be clear: characterizing my piece as an "assassination" and accusing me of having some mysterious "hypocritical self-interest" is mud-slinging. What I wrote was not.

In fact, one reason Upside chose me to write the story is precisely because I have no axes to grind. Unlike many of Wired's critics, I've never written for any of Wired Ventures' media outlets, nor do I aspire to. I wrote a long piece about Rossetto-Metcalfe and Wired for The New York Times magazine in May of 1995 and I did a story about John Brockman for Details in December of 1996. That's pretty much the extent of my dealings with Wired or the Digerati. And don't even try arguing that this gives me an inherent conflict-of-interest as an Old Media guy: Your "Digerati" emailing list is full of people who earn their living from the printed page — including, most famously, Rossetto and Wired.

I have found, however, that having a certain distance from a subject is immensely helpful, whether it's high-school football players, corrupt cops, or Wired's Digerati. One advantage is that broad historical themes are more readily apparent than they might be to a writer surrounded by trees and thus blind to the shape of the forest. I was relieved to see that Lanier and Brand did briefly touch on one important subject that actually WAS in my piece, when they discussed the Sixties. But Lanier was flat wrong when he said that, in my article, Rossetto was "condemned for smelling like the sixties" and that there was some "dark...rage against our cultural `other'".

As would be apparent to anyone who read my piece, I attempted to trace Rossetto's intellectual development and Wired's philosophy back to an era that sheds much light on today's so-called "Digital Revolution" — for the simple reason that many of today's most influential members of the Digerati emerged from that ferment. My story offered an even-handed analysis of how certain ideas of that time intersected to create some of the founding principles of Wired's techno-libertarian approach to the digital phenomenon, which are shared by many members of the Digerati today. My analysis was based on extensive interviews with Brand, Rossetto, Metcalfe, Kelly, and many others and their ideas are faithfully reproduced in the article.

I added my own interpretation, of course, which most of us would probably agree is a crucial part of magazine journalism. But offering a point of view is hardly the same as doing a "hatchet job," as you phrased it in your ad-hominem attack. Perhaps you object to the idea that biography is a legitimate journalistic or literary tool for examining ideas, cultural movements, and business strategies. But who knows what you really think, since you don't really explain? Obviously, I believe that looking at the key players in any movement is important and that such a biographical approach would surely involve interpretation — after the facts have been established, of course, and the relevant ideas accurately rendered.

Take all the time you want, John, to think hard about how you want to respond to my emails. If you have an ounce of courage you'll either apologize for calling me a character assassin or offer something of substance to support your irresponsible, slanderous screed. If you indeed do care, that is, about being taken seriously.

As for the threaded discussion you've begun, if we're all just hanging around a dinner party trading bon mots, fine. But, please, let's not pretend it's a serious debate. I don't have time for the vapid posturing of such gatherings, so I certainly wouldn't join one in cyberspace, where I'm assured of not even getting a decent meal.

Paul Keegan

From: David Bunnell
Date: 3/11/97

UPSIDE published this story because our readers are interested in the Wired story. We have gotten a ton of mail which only validates this. It seems to me that the main criticism of the story from some of the "digerati" is that they don't like the point of view of the writer. Certain points of view, it appears, should be abolished in cyberspace. John Perry Barlow's note to Paul Keegan proves my point.

Unlike all the other technical magazines, UPSIDE takes a critical look at the companies and people who make up the so-called Digital Revolution. We've been doing it for seven years now and there's nothing stopping us.

DAVID BUNNELL, founder of PC Magazine, PC World, MacWorld, Personal Computing, and New Media, is CEO and publisher of UPSIDE.


A Talk with Joseph Traub

Joseph Traub, starting in 1959, pioneered research in what is now called "information-based complexity". Computational complexity theory studies the intrinsic difficulty of solving mathematically posed problems; it can be viewed as the thermodynamics of computation. "Information-based complexity" studies the computational complexity of problems with only partial or contaminated information. Such problems are common in the natural and social sciences, and he is applying "information-based complexity" to a wide range of problems. Other work ranges from new fast methods for pricing financial derivatives to investigating what is scientifically knowable.
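The flavor of computing with only partial information can be sketched with a toy example (my illustration, not anything from Traub's work directly): estimating a d-dimensional integral when the integrand is known only through n sample values. Plain Monte Carlo does exactly this, with error shrinking like n^(-1/2) regardless of the dimension.

```python
import random

def monte_carlo_integrate(f, d, n, seed=0):
    """Estimate the integral of f over the unit cube [0,1]^d using only
    n sampled values of f -- partial information about the integrand."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = [rng.random() for _ in range(d)]
        total += f(x)
    return total / n

# Example: the integral of sum(x) over [0,1]^5 is exactly 2.5.
est = monte_carlo_integrate(lambda x: sum(x), d=5, n=20000)
```

The point of the sketch is that the algorithm never sees the integrand itself, only finitely many samples of it; information-based complexity asks how hard such problems are intrinsically, given that the information is partial.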


JOSEPH F. TRAUB is the Edwin Howard Armstrong Professor of Computer Science at Columbia University and External Professor at the Santa Fe Institute. He was founding Chairman of the Computer Science Department at Columbia University from 1979 to 1989, and founding chair of the Computer Science and Telecommunications Board of the National Academy of Sciences from 1986 to 1992. From 1971 to 1979 he was Head of the Computer Science Department at Carnegie-Mellon University. Traub is the founding editor of the "Journal of Complexity" and an associate editor of "Complexity". A Festschrift in celebration of his sixtieth birthday was recently published. He is currently writing his ninth book, "Information and Complexity" (Cambridge University Press, 1998).


JB: To what extent is your interest in the limits of scientific knowledge influenced by the work of Gödel?

TRAUB: In 1931 a logician named Kurt Gödel announced a result that astonished the scientific world. Gödel said that there are statements about arithmetic that can never be proved or disproved. This impossibility result is about elementary arithmetic, not some arcane corner of mathematics. To the educated lay person, Gödel's undecidability theorem may be the single most widely-known mathematical result of the 20th century.

Gödel's theorem is just one of numerous impossibility results established in the last 60 years stating what cannot be done. Another famous negative result, due to the British genius, Alan Turing, states that you cannot tell in advance if a certain abstraction of a digital computer called a Turing machine will ever halt with the correct answer. Now, what all these impossibility results have in common is that they are about the manipulation of symbols, that is, they are about mathematics.
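Turing's impossibility result can be made concrete with a short sketch (my illustration, not part of the original text): assume some function halts(f) correctly reports whether f() eventually halts, and construct from it a program that does the opposite of whatever the decider predicts.

```python
def make_adversary(halts):
    """Given a claimed halting decider halts(f) -- True iff f() eventually
    halts -- build a program that defeats the decider's own prediction."""
    def g():
        if halts(g):
            while True:   # decider said "halts", so loop forever
                pass
        # decider said "loops forever", so halt immediately
    return g

# Any concrete decider is refuted by its adversary. For example, a
# (wrong) decider that always answers "loops forever":
def never_halts(f):
    return False

g = make_adversary(never_halts)
g()  # returns immediately, contradicting never_halts(g)
```

Since g halts exactly when the decider says it doesn't, no implementation of halts can be correct on every input, which is the content of Turing's theorem.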

I've spent some of my time for most of a decade asking myself what this tells us about the unknowable in science. Indeed, the first time that I spoke publicly about this subject was on February 1, 1989 at a panel discussion in memory of Heinz Pagels organized by John Brockman.

Science is about understanding the universe and everything in it. Examples of scientific questions are: Will the universe expand forever, or will it collapse?; Will there be major global changes due to human activities, and what will be the effects on earth's ocean levels, and on agriculture and biodiversity? Note that there are, a priori, no mathematical models that accompany these questions. Science uses mathematics, but it is also very different from mathematics. Can we up the ante from mathematics and prove impossibility results in science?

Ralph Gomory, the President of the Alfred P. Sloan Foundation, proposes a tripartite division of science: the known, the unknown, and the unknowable. The known is taught in the schools and universities and is exhibited in the science museums. But scientists are excited by the unknown. Parenthetically, artists go to art museums to learn; scientists do not go to science museums because those museums act as if it's all known and preordained. That may be changing; exemplars are the Exploratorium in San Francisco and the American Museum of Natural History.

Gomory's tripartite division proposes three distinct areas: the known, the unknown which may someday become known, and the unknowable, which will never be known. The unknown and the unknowable form the boundary of science. Here are examples of questions for which the answers are today unknown.

How do physical processes in the brain give rise to subjective experience? That is, explain consciousness. Can the healthy, active lives of humans be significantly prolonged by, say, a factor of two or three? How did life originate on earth? Will the universe expand forever, or will it collapse? Can we develop a grand unified theory of the fundamental physical laws? Why do fundamental constants, such as the speed of light, have their particular values? Is there life elsewhere in the universe? Is it intelligent? How do children acquire language?

For which of these are the answers unknowable? We cannot prove scientific unknowability. That can only be done in mathematics. This is sometimes not understood, even by professionals. I expressed my interest in the unknowable to a very senior European scientist. He immediately responded that this had been, of course, settled by Gödel's theorem. Not so; Gödel's theorem limits the power of mathematics and does not establish that certain scientific questions are unanswerable.

What are some of the reasons why a scientific question might be unanswerable? I'll limit myself to just three here. The first is that insufficient data has survived. That can be a problem in ur-linguistics, archaeology, and history. The second is that contingent events, sometimes called frozen accidents, may limit our ability to explain certain phenomena. (On the other hand, as Stephen Jay Gould eloquently argues, historical explanations in science can be as convincing as those arising from general theories.) Finally, resources, such as energy, may simply not be available in our part of the universe to discriminate among contesting theories about the universe.

Of course we must be very careful in stating that something is impossible or unknowable. We're all familiar with notorious pronouncements of impossibility, such as the claim that there could never be a heavier-than-air flying machine.

The unknowable has long been the province of philosophy and epistemology, with questions raised by giants such as Immanuel Kant and Ludwig Wittgenstein. My goal is to move the distinction between the unknown and the unknowable from philosophy to science and thereby enrich science.

What is the basis for my belief that we might succeed? The Zeitgeist seems right for tackling such questions. We have had great success in establishing impossibility results in mathematics and theoretical computer science. Although these ideas cannot be directly applied to science, I'm hopeful that the modes of thought might be transferable. Recent workshops at the Santa Fe Institute have brought together leading physicists, economists, cognitive scientists, biologists, computer scientists, and mathematicians who have strong interests in defining the unknowable in their own fields.

JB: What kind of predictive models will you use?

TRAUB: A central issue is the relation between reality and models of reality. I like to talk about this in terms of four worlds. There are two real worlds: the world of natural phenomena and the computer world, where simulations and calculations are performed. There are two model worlds: a mathematical model of a natural phenomenon and a model of computation. The mathematical model is an abstraction of the natural world while the model of computation is an abstraction of a physical computer. Incidentally, although most people are only aware of the Turing machine as the abstract model of computation, the real-number model is probably more appropriate for the continuous models of science, but that's a story for another day.

I'll give you one concrete example concerning reality and models of reality. I wrote about this in "On Reality and Models" which is a chapter in the recent book, "Boundaries and Barriers" edited by John Casti and Anders Karlquist, and which is also a Santa Fe Institute report.

All living matter is built out of proteins, and these proteins fold effortlessly. It takes nature less than a second. But even with the most powerful supercomputers, we cannot simulate protein folding, and theory suggests that the problem is what is technically called NP-complete, meaning it's conjectured to be computationally intractable. Why is there this dissonance between our models and reality? I'll confine myself here to just one possible explanation. Nature doesn't fold arbitrary molecules; the molecules that exist in nature have been selected by evolution for ease in folding. But in our theory we don't know how to model this selection.
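One way to get a feel for the combinatorial wall is to count the conformations a short chain could in principle explore. The sketch below, a toy illustration rather than the NP-completeness argument itself, counts self-avoiding walks on a 2D square lattice, a standard simplified proxy for lattice-protein conformations; the count grows exponentially with chain length, which is the heart of why naive folding search is intractable:

```python
# Count self-avoiding walks (SAWs) of n steps on the 2D square lattice.
# Each walk is a toy stand-in for one conformation of an n-link chain;
# the exponential growth of the count illustrates why brute-force
# conformation search quickly becomes hopeless.
# (Illustrative sketch only; not Traub's model.)

def count_saws(n):
    """Count n-step self-avoiding walks starting at the origin."""
    def extend(pos, visited, steps_left):
        if steps_left == 0:
            return 1
        x, y = pos
        total = 0
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt not in visited:  # self-avoidance constraint
                total += extend(nxt, visited | {nxt}, steps_left - 1)
        return total

    return extend((0, 0), {(0, 0)}, n)

for n in range(1, 9):
    print(n, count_saws(n))  # 4, 12, 36, 100, 284, 780, 2172, 5916
```

Even this tiny enumeration slows visibly past a dozen steps; real chains have hundreds of residues and far richer geometry.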

Trying to understand the relation between reality and our perception of reality is an old issue. Some 200 years ago Immanuel Kant argued that three space dimensions and one time dimension have more to do with our brains than with "reality". Niels Bohr and Albert Einstein argued about this. Bohr's position was, in effect: all I want from a mathematical model is the ability to predict, and I don't know or care about "reality". Einstein felt that there is a reality which our theories describe. The debate continues today between Stephen Hawking and Roger Penrose, with Hawking taking Bohr's view and Penrose taking Einstein's.

JB: Talk about the end of science versus the limits of science.

TRAUB: I assume that you're referring to John Horgan's book, "The End of Science". John sent me the manuscript last fall for my comments. I suggested some minor technical corrections and told him I totally disagreed with his thesis that science had made such extraordinary progress that its golden age was over and only mopping up was left. Incidentally, the manuscript was titled "The Ends of Science", which is an ambiguous and far more interesting title. Apparently the publisher changed that to "The End of Science", hoping to derive some advantage from the success of books titled The End of You Name It, starting with Francis Fukuyama's foolish "The End of History".

John writes very well indeed; he is a senior writer for "Scientific American", and his book features juicy anecdotes about many scientists who are household names. However, I never would have predicted the amount of media attention that the book has actually received. Its message is basically pessimistic. For example, a column in the New York Times stated that Horgan found in his interviews with some of today's leading scientists an atmosphere of anxiety and melancholy, and an acknowledgment that the great era of scientific discovery is over.

Those are not the emotions of the scientists that I know. The ones with whom I'm in touch are vitally excited by their work. There's more to be done than ever, and we can't wait to get on with it.

I'm not saying that there aren't difficulties. Funding for research has leveled off and will probably decrease. Universities don't have tenure positions available for young scientists. The emphasis in some of the leading corporate laboratories has shifted away from basic research. The Federal laboratories are in turmoil due to budget cuts and re-direction. But such difficulties are to be expected after the period of unparalleled growth which followed the Second World War. Horgan is claiming it's all over because the fundamental discoveries have been made.

Earlier I mentioned a very partial list of big scientific questions. Let me repeat a couple of items from that list: How do physical processes in the brain give rise to subjective experience? That is, explain consciousness. Is there life elsewhere in the universe? Is it intelligent? Will the universe expand forever or will it collapse?

I don't find John Horgan's thesis, that all the important discoveries are behind us, very compelling. Furthermore, each major advance leads to important new questions. Reports of the death of science have been greatly exaggerated. Indeed, I believe they're just plain wrong.

JB: What are the great unknowables that you see as worthy of study?

TRAUB: Remember that we have to distinguish between the unknown and the unknowable. There's a very big list of things we don't know. Which important questions might be unknowable? I'll mention just four of them here. Although what I said earlier about mathematical models is solidly grounded, what I'll say now is highly speculative. I hope the readers of this interview will be forgiving.

The first has to do with earth systems predictability. There are many interesting questions here. Some people believe earthquakes, like Per Bak's sandpile, are a self-organizing phenomenon and are intrinsically unpredictable. On the other hand, my colleague Lynn Sykes at Lamont Doherty Earth Observatory believes that though we probably cannot predict two to three earthquakes out, we can predict the next big one in a geographic area. Of course, warning a population that a big one is coming has major social consequences. Another example of a question with enormous implications for humans is whether there will be major global changes due to human activities. What will be the effects on earth's ocean levels, and on agriculture and biodiversity?
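A minimal sketch of the Bak-Tang-Wiesenfeld sandpile alluded to above may help: grains are dropped on a grid, any cell holding four or more grains topples one grain to each neighbor, and avalanche sizes settle into a heavy-tailed distribution without any tuning. That parameter-free criticality is the basis of the intrinsic-unpredictability argument. (The grid size, grain count, and seed below are arbitrary illustrative choices, not parameters from any earthquake model.)

```python
import random

def drop_and_topple(grid, n, x, y):
    """Add one grain at (x, y), relax the n-by-n grid, return avalanche size."""
    grid[x][y] += 1
    avalanche = 0
    unstable = [(x, y)] if grid[x][y] >= 4 else []
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:      # may have been queued twice; skip if stable
            continue
        grid[i][j] -= 4
        avalanche += 1
        for a, b in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= a < n and 0 <= b < n:   # grains at the edge fall off
                grid[a][b] += 1
                if grid[a][b] >= 4:
                    unstable.append((a, b))
        if grid[i][j] >= 4:
            unstable.append((i, j))
    return avalanche

def simulate(n=20, grains=5000, seed=1):
    """Drop grains at random sites; return the list of avalanche sizes."""
    random.seed(seed)
    grid = [[0] * n for _ in range(n)]
    return [drop_and_topple(grid, n,
                            random.randrange(n), random.randrange(n))
            for _ in range(grains)]

sizes = simulate()
print("largest avalanche:", max(sizes))
```

Most drops cause no toppling at all, yet occasionally one grain triggers an avalanche spanning much of the grid; nothing in the local rule signals which drop will be the big one, which is the analogy to earthquakes.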

A second is the likelihood of other intelligent life in the universe. Stephen Jay Gould argues that it's extremely improbable because so many unlikely accidents have to happen. He believes if the tape were to be replayed, we would not have evolved. My Santa Fe Institute colleague Stuart Kauffman argues that order comes for free and that we are "at home in the universe". Other than by discovering intelligent life forms elsewhere, can this question be resolved?

A third is whether we can understand consciousness. Some believe this to be unknowable, while others believe the answer can be found within science. I have a 16-month-old grandson, and as I watched him while he was still pre-verbal I wondered about his cognitive learning processes. He sure was busy crawling, looking, and manipulating, but what was going on inside? And what did he learn in the fetal stage?

My final question is whether we can keep the U.S. economy, let alone the international economies with which we are so interdependent, on an even keel. Basically, things have been pretty good since the Great Depression. Right now, it looks good. But we have coupled nonlinear systems, and we know that such systems often exhibit chaos. Can the system be kept going for a "long" period of time without dreadful economic disasters? How might this be achieved?
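As a toy illustration of why chaotic nonlinear systems resist long-range prediction, here is the standard logistic-map example; it is a textbook demonstration, not a model of any actual economy, and the parameter r = 4 and the starting points are arbitrary choices. Two trajectories started a hair apart decorrelate within a few dozen steps:

```python
# Iterate the logistic map x -> r*x*(1-x) in its chaotic regime (r = 4)
# from two initial conditions differing by one part in ten billion.
# The gap grows roughly exponentially: sensitive dependence on initial
# conditions, the hallmark of chaos.

def logistic_orbit(x0, r=4.0, steps=50):
    """Return the orbit [x0, x1, ..., x_steps] of the logistic map."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-10)
for t in (0, 10, 20, 30, 40):
    print(t, abs(a[t] - b[t]))
```

By around step 35 the two trajectories bear no useful relation to each other, even though the rule generating them is a one-line deterministic formula; measurement error in the initial state puts a hard horizon on forecasting.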

There are so many wonderful questions. I guess I've been very lucky. I got interested in computers over forty years ago and I keep expecting to run out of interesting questions. But I never have to strip-mine. I just walk along and pick up diamonds.


Paolo Pignatelli: Further Questions for Joseph LeDoux

From: Paolo Pignatelli
To: Joseph LeDoux
Submitted: 3/10/97


Thank you for your detailed and illuminating reply to my previous questions. I have a few more, if you could indulge my curiosity about this subject, of which I unfortunately know so little.

You say, "I think it's safe to say fear behavior preceded fear feelings in evolution." If fear behavior precedes fear feeling in neurological differentiation, is fear feeling an evolutionary descendant of it, with fear behavior at the (relative) apex and fear feeling a few levels below? Or is the structure a hybrid of fear behavior and other "centers"? (That was the reason for introducing the idea that polymorphism as understood in the computing community is similar to polymorphism as understood in the biological community.) When I first read your fascinating interview, the famous sheep-clone experiment had not yet hit the press, but the experiment did in a way clarify some things and bring other questions to mind. The interesting thing from my point of view was how the experimenters succeeded in making the cell revert to a more "primitive" communication and organizational structure. If we take those clues and apply them to the brain, how would we "starve" one part of the brain so that it could revert to a more primitive "mode" and then re-build itself (from amygdala to hippocampus, for example)? I was further interested in the dissipation of information that would occur in that process: where would the lost energy (energy used to store something) end up? Could the brain then be made, using principles of computing polymorphism, to store the "stack" being unloaded during that trip up and then down the evolutionary "tree" in some "parallel structure" it created (by manipulation similar to the clone one)?

In your reply you say, "While the amygdala was out, the rats, which were otherwise fully awake, underwent a standard fear conditioning procedure."

Were there differences in the amount of work necessary to induce neuronal potentiation in the amygdala as opposed to the hippocampus? By work I mean the force or energy necessary to change the structure in the desired direction (the final structure as affected by learning, through potentiation).

As may become apparent, I am very interested in what the next generation of computers will be like. What would they need to be like in order to be much more "friendly"? How could we imbue them with "common sense", or the understanding of a language? Science is slowly breaking down barriers (many of them based, in my opinion, on language and its artifacts) that may have prevented us from delving into some truly interesting questions, for example, whether self-awareness is possible in a machine (yes, I believe, since I read that certain lesions can lead to a loss of at least physical self-awareness). We may then see that a mechanistic interpretation of Man need not be in contrast with "humanity".

You mention the support of the biological-based companies for your work. Have the computer companies shown much interest?

Paolo Pignatelli

Copyright ©1997 by Edge Foundation, Inc.

