THE REALITY CLUB
"One Half Of A Manifesto" by Jaron Lanier
Part I



From: George Dyson
Date: September 21, 2000

Without taking one side of Jaron's dogma or another (place me somewhere else entirely) I would disagree strongly with his "Argument from Software" — which is as flawed as Bishop Wilberforce's Argument from Design.

Back in the days when programs could be debugged but processing could not be counted on from one kilocycle to the next, John von Neumann wrote his final paper in computer theory: "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components" [in Claude Shannon and John McCarthy, eds., Automata Studies (1956), pp. 43-99]. It makes no difference whether you have reliable code running on lousy hardware, or lousy code running on reliable hardware. Same results.

What should reassure the technophiles, and unsettle the technophobes, is our world of lousy code. Because it is lousy code that is bringing the digital universe to life, rather than leaving us stuck in some programmed, deterministic universe devoid of life. It is that primordial soup of archaic subroutines, ambiguous DLLs, crashing Windows, and living-fossil operating systems that is driving the push towards the sort of fault-embracing, template-based addressing that proved so successful in molecular biology, with us — and our computers — as one of its strangest results.

Let us praise sloppy instructions, as we also praise the Lord.


From: Freeman Dyson
Date: September 21, 2000

Dear George, your reply to Lanier is brilliant, profound, and also true. I remember that I wrote, at the end of Origins of Life, that the evolution of complex organisms became possible when the essential sloppiness and error-tolerance of life were transferred from the hardware to the software, from the metabolic apparatus to the genes. And now you are saying that exactly the same thing happened in the evolution of complex computer systems. Obviously, that's the direction you have to go if you want to combine robustness with creativity. All I can say is, why didn't I think of that?


From: Cliff Barney
Date: September 21, 2000

Jaron Lanier argues persuasively, but in a social vacuum. Cyberarmageddon, feasible or not by 2020, would be not a technological but a social phenomenon. Lanier argues that it won't happen because it can't, computers being what they are; possibly true but irrelevant to the great social upheavals that are occurring today in 2000 as information technology develops. These changes, as much as Moore's Law, will determine how technology develops. "Society doesn't work technologically," says Manuel Castells; "technology is used and reused and adapted by society."

The Dalai Lama put it another way: "Technology is not the basis of our society, compassion is the basis of society."

We do in fact have one million-fold increase in computer power to look at: the jump between 1968, when Doug Engelbart invented the mouse, and 1998, when he and his many friends lamented at Stanford that it hadn't changed the world as much as they imagined it would (see www.netfront.to/Engel1.html). Thirty years, doubling every year and a half, gives us the millionfold power increase, and we went from mainframes with bales of wire hanging out the back to palmtops and satellites. What were the social changes?
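
Spelling out the arithmetic behind that millionfold figure: thirty years at one doubling every eighteen months is

    $30 / 1.5 = 20$ doublings, and $2^{20} = 1{,}048{,}576 \approx 10^{6}$.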

Castells has catalogued these, and offered a hypothesis for understanding the change, in his three-volume survey The Information Age, in which he describes the Network Society that has emerged in the past 30 years. We can see some of the results in the post-Seattle streets, as individuals attempt to find an identity vis-à-vis a global economic network. This is Moore's Law, social edition.

In this respect I wish Lanier had written the other half of his manifesto — the part about the "lovely global flowering of computer culture already in place." This is more likely to affect Armageddon than Dr. Moore's relentlessly shrinking etchings.


From: Bruce Sterling
Date: September 22, 2000

Jaron has written a very beautiful work. This screed is truly a native document of the year 2000 AD. I felt very privileged and happy to read this. It really floods the mind with its clarity and insight. It's very musical.

I've been thinking a long time about the "eschatological cataclysm" detailed in "Belief #6." This is known in my trade as the "Vingean Singularity," and us undignified pulp science fiction writers consider it particularly galling, because this is a point at which our craft breaks down. It's a Lanierian software traffic jam for science fiction, really, where our ability to generate and scatter mindblowing concepts outruns the inherent limitations of our merely human frontal lobes.

I have now rubbed up against the stark cosmic horror of Belief #6 long enough to get rather chummy and cozy with it, and would like to offer a new, brief set of corollary beliefs.

1. There is no one Singularity. Any area of scientific inquiry, pushed far enough, could provide its own native version of a cataclysm: biological, cognitive, mechanical, cybernetic, you could name it. If man is the measure of all things, then there probably is no measure by which we can't be made more than human.

2. A Singularity ends the human condition (because that is its definition), but it resolves nothing else. It would almost certainly be followed by a rapid, massive explosion of following Singularities. These ultra-cataclysmic events would disrupt the first Singularity even more than the first Singularity disrupted the human condition.

3. The posthuman condition is banal. It is crypto-theological, and astounding, and apocalyptic, and eschatological, and ontological, but only by human standards. Oh sure, we become as gods (or something does), but the thrill fades fast, because that thrill is merely human and parochial. By the new, post-Singularity standards, posthumans are just as bored and frustrated as humans ever were. They are not magic, they are still quotidian entities in a gritty, rules-based physical universe. They will find themselves swiftly and bruisingly brought up against the limits of their own conditions, whatever those limits and conditions may be.

4. Messy, embarrassing, reversible, goofy, catch-as-catch-can posthumanism is politically preferable to sleek, streamlined, sudden, utter, Final Solution posthumanism. The best way to encounter a Singularity would be to nick over the event horizon for a minute or two and have somebody else yank you back. Then the rest of us would be able to debrief you, and see if you could still write as well as Jaron Lanier.

From: Rodney Brooks
Date: September 25, 2000

I do not at all agree with Moravec and Kurzweil's predictions for an eschatological cataclysm, just in time for their own memories and thoughts and personhood to be preserved before they might otherwise die. I do not discount that the logical consequence of some version of cybernetic totalism might ultimately happen. I just happen to think it is going to be somewhat different in form than the version discussed by Lanier, and probably will happen much more gradually over some centuries, with no visible cataclysm, and no real eschatological division between before and after. And I agree entirely with Lanier that the particular arguments of Moravec and Kurzweil seem to rely too much on it all happening just because there will be lots more Moore's-law computer power. Neither Moravec nor Kurzweil ever gives a hint of what technical innovations need to be made to get to intelligent machines that will be able to do all the things they predict. Lanier, however, seems to deny that any such thing could ever happen, and his arguments largely boil down to an inbuilt fear of losing a last bastion of human specialness.

But first let me complain about one particular technical view expressed in Lanier's manifesto. Occasionally emerging is a fear of nanotechnology. It is not clear exactly which version of nanotechnology he fears, but I have become increasingly annoyed at the hyping up of concerns about "strong" nanotechnology that Bill Joy and others have recently engaged in. Lanier's super-nanobots in his conclusion certainly smack of strong nanotechnology. Strong nanotechnology, the version that is most popular in science fiction, has molecular machines which can manipulate matter, disassemble arbitrary raw materials atom by atom, and build copies of themselves. We do not know whether the physics of our universe allows such machines to exist, or whether self-reproducing machines need to use the molecular mechanisms of biology and must be on the order of billions of atoms in size. What we have seen so far in nanotechnology is the ability for us to manipulate single atoms in carefully controlled conditions using multi-kilogram machines. We have no evidence that non-biological nanotechnology machines will even in principle be able to manage energy supplies, manipulate single atoms in arbitrary ways, break down raw materials, both decode and copy a description of themselves, implement the computational resources necessary to control their behavior, and avoid being ripped asunder by the presence of other nearby matter. We have no clue when we will be able to answer whether such machines can exist, even in principle. Worrying about whether nanotechnology machines might "get away" from us and eat the fabric of our world, or evolve to do so, seems to me to be on a par with worrying about how the world will fare with the screwups in temporal consistency that will occur once we have figured out how to build time travel machines, another topic popular in science fiction.

Now to the main disagreement I have with Lanier.

The first problem I have is with his dismissal of Artificial Intelligence as being based on an intellectual mistake. His argument is all smoke and mirrors with no viable logic. He uses the Turing test as the touchstone for AI, and argues that besides the computer getting as smart as a person, the Turing test could also be passed by a computer if the people get dumber. He claims the second is happening, and with a flourish worthy of a stage magician draws attention away from the first possibility, in effect negating that it might ever happen, just because he has anecdotal frustrations with business software systems illustrating cases of the second. This is no argument!

Then we get to Lanier's real failure. He turns out to be a closet Searlean. He "experiences" life, and no computer, he implicitly argues, can ever "experience" life. Why not? More smoke and mirrors... he has talked to philosophers who do not tackle his argument head on. If we accept that living systems are made up of physical molecules, and nothing non-tangible external to an understanding of the physics of the world, no essence, no immortal soul, no elixir of life, then we humans are machines and we humans do "experience" life. I do. A lot of the time. I see no reason, therefore, that other machines, which don't happen to have the same biological history as me, can not also "experience" life. Searle argues that an atom-for-atom reproduction of me will act like me but will not really "experience" life. Lanier does not get into this level of detail, but clearly he (and Searle) and I have different dogmatic understandings of the universe. He requires some implicit specialness for biological people; I require that in principle non-biological machines can "experience" life. I do not quite know how to build such machines yet in detail, but it is perhaps no more of a stretch than explaining the heart as a pump delivering oxygenated blood to the body before the structure of hemoglobin was understood. The explanation certainly seemed right, even before the details were known.

Mankind, and probably Lanier, has had to give up the notion that the earth is special and the center of the universe, has had to give up the notion that God created animals and humans in fundamentally different ways and accept instead that both were produced by evolution and natural selection, and has had to give up the notion that we are vastly different from yeast in our fundamental biochemical pathways. What is left for us proud humans is that we are different from machines in some fundamental, ineffable way. Lanier does not want to give this up. I am willing to.

I'll take the null hypothesis. We are machines until proven otherwise, rather than just wished otherwise. Whether people are smart enough to build machines that "experience" the world is another question. But in principle it can surely be done, and hence the cybernetic totalism that Lanier so irrationally, and tribally, fears.


From: Henry Warwick
Date: September 25, 2000

Responding to all of Mr Lanier's lengthy Manifesto would make for an enormous essay several times longer than his Manifesto. Rather than engage in a lengthy interplay of point-by-point analysis, my contribution to this discussion will first set out what I believe/perceive to be true, then go into my own prognosis of the future, specifically the anti-utopian vision of what Mr Lanier calls Cybernetic Totalism. I call it delusional technocratic arrogance, but I won't quibble about that. In deference to his essay, I'll refer to it as "CT"...

Mr Lanier sets out (what he believes) are six component beliefs of CT. I think it's actually much simpler than that, and it fundamentally breaks down into a basic core group of related beliefs/predictions:

  1. Someday, soon, we will either replace ourselves or be replaced by robots/computers.
  2. Failing (or in addition to) that, we will divide the human genome into an enhanced variety and the rest ["archaic"] of humanity.
  3. Related to #2, we might also divide the race off as bio-mechanical creatures, what I call the "Borg Fantasy."

Point 1 will never happen — because it can't.

Point 2 will happen, but the results will probably be different than we envision, and the timing on it will likely be much later than sooner.

Point 3 won't happen, as the extreme variety envisioned by various contemporary fantasies like Star Trek's Borg is just plain stupid, and while future technologies will help us in many ways, especially in terms of communications, incorporating them as body parts seems inherently dimwitted given Moore's Law. Attaching or putting machines into ourselves just doesn't make a whole lot of sense.

The rest of Mr Lanier's discussion is spent blasting their theoretic superstructure. As admirable as such an effort may be, I see it as unnecessary, much as it is unnecessary for a democrat to argue Courtly Manners with a monarchist. The point is the sham of the divine right of kings, not whether bowing is bad for your back.

So, directly to a basic point — beneath the CT position is a fundamental and unspoken axiom — the Pythagorean Conjecture that the universe is mathematical, and deeper still, that the universe is fundamentally understandable by humans. Pythagoras took it to a numerological extreme, but the fundamental myth still obtains with many people who work in science — everyone is looking for the Equation/Theory/axiomatic system that will explain Everything Forever. The CT position depends on this assumption. Yet, we have never had, nor do we have now, any conclusive proof that the universe is humanly understandable in the first place, much less representable in some reductivist symbology of mathematics or any other language for that matter. Indeed, with Gödel et al., we have a number of theories demonstrating the very limitations of such endeavors in the first place.

The CT position assumes that the world is computable and their thinking machine project logically follows — logical machines for a logical universe.

My thinking is this: The Universe is beyond human comprehension,

[Re: Haldane: "The Universe is not only weirder than you think — it's weirder than you can think" and Brockman: "Nobody knows and you can't find out."]

and is therefore not computable.

However — because of our inquisitive nature and history of inquiry and Inquisition, we have to continue the effort of the Scientific Project — just because there is no possibility of coming to a complete understanding and total knowledge of everything doesn't mean that we can't come to understandings that are useful and provide a reasonable and coherent sense of the universe and its workings, given our limited capacities to understand it. For example — cultural anthropology is a program that can never be finished, because there are cultures that would be changed irrevocably or destroyed just upon their being observed by the Western Cultural Anthropology Industry. Does this mean that anthropologists should pack up their tents and surrender? Of course not. The same goes for all the other disciplines of science. The Scientific Method works extremely well — we should keep that — but we should be more humble in matters regarding our actual abilities, as we use the Scientific Method to expand such abilities.

Once we abandon this obsessive fanaticism of absolute complete knowledge, we can continue on with our process of discovery without the headache of a deadline. Knowing there are actual limits allows us to push our perceived limits in a way that we can pick and choose our battles with the Great Unknown. When we give up on knowing every detail of the Mind of some imaginary Friend, we might acquire some of our Friend's imagination and wisdom.

Now, regarding the "We Will Have A Sentient Machine by 2030 and We Will All Be Replaced By Robots or Evolve Into The BORG" nonsense —

First off, the notion is so blatantly Millennialist and stupendously lacking in imagination, I find it rather sad that otherwise intelligent and reasonable people actually hold such paranoid malarkey as a position worth defending with as much vigor as they do. I dismiss the CT prediction wholesale directly for that general reason. They remind me of an old encyclopedia I found in the trash as a young boy. It was from 1927, and when I found it, it was 40 years old and looked 100. It contained some illustrations of what a city in the year 2000 would look like — giant skyscrapers separated by 20 lane highways, dozens of dirigibles and hundreds of airplanes flying between the buildings. Factories were invisible, and there wasn't a toxic waste dump in sight. I see the CT predictions in much the same way. Yes, air travel expanded a lot since 1927, and while we don't have many dirigibles, the NJ Turnpike and the 5 in LA certainly qualify as giant highways. The same will go for AI in 2030. To the chagrin of the CTs, machines won't think (because they can't) but thanks to the tireless efforts of the CTs, computers will do a lot of useful work for us, even more so than now, and will invent entire new categories of productive labor for humans.

In general, I find the CT position laughable and tragic. In specific, there are other points regarding their philosophic superstructure they have erected to defend their position that should be addressed.

First the Turing Test.

My objection to the Turing Test is this:

  1. The very basis of the Turing test is one that knows that machines don't think — the whole thing is based on deception.
  2. Who is to do the judging?

As far as the judges go — I'm sure as hell not going to trust the geeks who make the first "thinking machine" to tell me it's really thinking. I may be eccentric and a bit deranged, but I'm not that stupid. Yet.

Regarding point 1 — Deception:

There's one thing the CT crowd really doesn't want to accept — that they are deceiving themselves, and the Turing Test is the tool of their deception.

Fact: Machines don't and can't think. Existence precedes essence. Computers pass voltages. Period. They don't remember anything. They don't think about anything. Everything we discuss or sense about them is secondary and something we bring to it. Saying that computers think is like discussing the political persuasions of rock formations.

Once we see the Turing Test for what it really is, the real CT/Turing Test project is now revealed:

Can we make machines operate in such a way that we can — deceive — ourselves into thinking there's actually a sentient human in it? And can we deceive/bludgeon others into agreeing with us?

The Turing Testers know that machines aren't sentient, as they wait for the next rev of some machine to trick them. And once "tricked" — what makes them think they or anyone else wouldn't know it's a trick — every time?

"Gee — last week, the HAL 9000 passed the Turing test. Well wuddya know — that last algorithm really did the trick. Let's check it out now. UhOh. Today it's not passing the Turing test…so I guess it isn't sentient anymore…"

That we so deceive ourselves does not mean the condition of sentience is or ever was actually present — it simply means that the required conditions to our test have been met at a particular historical juncture — on a given day, the machine has "fooled" us into thinking it can think. It's been programmed in such a way that we are led to believe it has a mind. This doesn't mean it actually has one. With the Turing Test, the machine must simply be able to do what we expect of a human within a certain range of activity. But is it Sentient? Hell No. It doesn't take Albert Einstein to see how nekkid that Emperor is.

Another objection I have to the Turing Program is — why bother? Humans are such a contemptible lot of petty, ignorant, messy, obscene and violent whiners, why would we ever want to make a computer act like one? I find the idea of simulating human behavior so ludicrous, it's appalling that it has occupied so much airtime for the past several decades. It's a sad testament to our ignorance and vanity.

However, this doesn't mean that machines that attempt to simulate awareness can't do useful work. On the contrary, I am firmly convinced that they can and should do the work that humans are simply not designed for — space exploration, deep sea work, and a thousand other extremely dangerous but mission critical activities.

Lanier is right on the money with his circle of empathy. The computer might whine, complain, threaten violence, whatever. Just unplug the thing. It's not a person, it's not sentient, and people who think it is need some guidance on personal boundary conditions.

So, the Robot/Computer Mind isn't going to happen, because it can't. We will have machines that can do some amazing things, but sentience ain't one of ‘em.

Now, for the other points — the biological and the biomechanical.

Assuming homo sapiens doesn't go extinct without issue, homo futurus is inevitable. It's not a question of if; it's a matter of when and how. If civilization goes completely belly up, and we're all reduced to wandering bands of nomads in a world filled with toxic waste dumps and highly oxidized metal particles, homo futurus will be a hardy and tough human built for life on the run hunting the giant rattus futurus for food and avoiding the roving psychotic packs of rotweilerus futurus. It'll be a tough life, but not without its rewards. We will evolve to adapt to such circumstances.

If we get some collective sense in our skulls, and reduce our numbers to a sustainable value (200-400 million?), with our science we can eventually biologically enhance ourselves into a homo futurus — a creature of our own design. Socially speaking, the introduction of such technology would be simple enough — if someone said,

"Mr and Mrs Warwick — the next girl you have will live to be about 140 and die with the body of a 50 year old, have 20/10 eyesight including some sensitivity in the infrared and UV spectra, have hearing between 5 Hz and 60kHz and she'd look like a buff cross between Marilyn Monroe and Katherine Hepburn who never sunburns, and be able to dance better than Ginger Rogers and have an IQ in the high 4 digits all for only $260,000 please sign at the bottom in ink please."

We'd be there signing paper with my Pelikan in a New York Second. We'd also be in hock for the rest of our comparatively short lives, but we'd do it in a heartbeat, and I think many other people would too.

So, the biological working of the species will, I believe, be inevitable as we learn more and more about the human genome. However, I don't think this level of understanding will come any time soon. If we're diligent and work hard — maybe we'll have it in a few hundred years. I imagine there will be a bunch of people opposed to it on "ethical" grounds, and I can't imagine what the test trials would be like, but eventually it will happen if DNA technology keeps a pace even vaguely resembling Moore's Law.

I'm not too concerned about having two species around, either — as these treatments become more commonplace, they'll go down in price, and if different companies compete, we could have the most enhancement for the lowest price, and most every genetic line/family will eventually be able to have their progeny continue into the next phase of human evolution. In fact the later adopters might even have some advantages compared to the early adopters. Like IQ in the high FIVE figure range…cool…

As I said — barring extinction of sapiens, homo futurus is inevitable. It's not a matter of if; it's just a question of when and how, and it could be a Very Good Thing — not a future to fear. There's also a non — zero chance homo futurus will be wearing deerskin and chasing bunnies for dinner, but that's not something under our immediate control.

As far as the biomechanical future goes, I think that is a dubious future, as the "Borg model" is absurd, paranoid, and juvenile. It's an irrational fantasy based more in Cold War politics than honest and reasonable technological conjecture. There's been a lot of discussion about nanotechnology and technological implants since Drexler et al. back in the 1980s, but so far it's been mostly just that — a lot of discussion with only a few scattered developments, including a fellow with my last name in the UK. I don't think the research is bad — I just don't have any faith in its applications.

On the other hand, I do believe biomechanical devices could be of some use, especially if they are wearable — with technologies in the near future, email and voice communications could be as simple as wearing two small transducers that stick to the bony points behind your ears. Implants? They're so messy and atavistic.

And Finally —

There will be no Cybernetic Cataclysm in 2030, just like there was no Armageddon in 1999. Short of a wayward asteroid coming to visit and ruin an afternoon for a few million years, things generally don't work that way around here. We are far too good and earnest to deserve some techno Hell, and we're way too selfish and myopic to understand Heaven. The machines, as cheery and responsive as we might make them, aren't and won't be sentient. So, we're stuck here, alone but for our chimp cousins, on this little green planet. It's a nice place. We need to take care of it a lot better than we have been. We need to clean it up and invent a fun, clean, future. Send the machines into space — they can tell us of other planets. Maybe a few of us will go check out the nicer ones. Maybe even bring a few of our chimp cousins.

Don't worry about the Borg or the Forbin Project taking over. That only happens in lame Hollywood movies written for 15-year-old boys. Frankly, I'm a lot more concerned about the very human Supreme Court repealing the Bill of Rights on the altar of the Permanent Wartime Economy, and how we're going to come up with the energy needed to run our machines, heat our homes, and cook our food, when the oil runs out.


From: Kevin Kelly
Date: September 26, 2000

Jaron doesn't have to worry about the cybernetic metaphor, because he says his main concern is that it has become the sole metaphor of our time, or at least the sole metaphor of our tribe. If that were really true, I'd worry too. But it isn't.

The cybernetic metaphor is an extreme perspective, an inverted perspective that will eventually play out its usefulness. It is similar (and related) to Richard Dawkins' famous view of the selfish gene. Dawkins says that you can understand a lot that is new, and re-understand a lot of the old orthodoxy, by looking at the world from the view of genes. In fact you can begin to look at everything that way, and for a while wherever you look, the world looks different. This view can unleash new understandings. What is important to remember is that while Dawkins looks at the world that way, this is not the only way he looks at it. In his daily life he adopts a quite ordinary view of the world. I have looked at the world in Dawkins' selfish-gene way, and then the next minute I have looked at the world in Jaron's way. Most of the time (but not all!) I see more new things via Dawkins' way. I might also look at the world via Freud's way, or Marx's way, but I usually don't see much interesting to me that way.

The new cybernetic metaphor, on the other hand, is very powerful. We can look at almost anything now, from physics, to emotions, to nature, to experience itself, and find new things when we imagine it as computation. We can imagine people as robots and learn all kinds of things. I can do that one minute and then the next minute I can play with my little 4-year-old boy, and see him only through the eyes of a naked primate. Eventually we (as a culture) will finish examining everything via the cybernetic metaphor, and then we'll get bored. But the important thing is that right now almost anything we examine will yield up new insights by imagining it as computer code. And -- this is important -- while you re-examine the world in this way, it is vital that you take the metaphor seriously. It should be the only metaphor you see while you are looking through it. The next minute we can adopt another view.

I think we have not come close to exhausting this metaphor, and as my earlier essay on it (called the Computer Metaphor) suggests, I think it will overturn our current ideas of physics and culture first, before we abandon it. It is dangerous, but not because it is our only tool.


From: Margaret Wertheim
Date: September 27, 2000

I'd like to applaud Jaron's demi-manifesto. I heartily agree that what he called "cybernetic totalism" needs to be exposed. This indeed was one of the major themes of my own recent book The Pearly Gates of Cyberspace. I liked Jaron's analysis of what is wrong with cybernetic totalism very much; what was missing, I think, was a historical dimension as to why this way of thinking has evolved. Jaron rightly notes that this kind of thinking goes back to the dawn of the computer project with the work of Wiener and Shannon etc., but in fact this whole style of what I would label "techno-eschatology" has a much deeper history, going back to at least the Middle Ages. Throughout Western history — since at least the twelfth century — there has been a very deeply ingrained tendency to link technology (in whatever is its recent mode) to an eschatological vision. Anyone interested in this subject should certainly read historian David Noble's book The Religion of Technology, which traces the linking of technology to religious visions for the last millennium. In my own book I focus particularly on what might be briefly summarized as the religiosity inherent in our concepts of space, revealing the long historical roots of the belief in a transcendent "heavenly space" and the contemporary idea that cyberspace can be a new/ultimate realm of transcendence. Jaron is right that modern information theory has underlain the emergence of the belief that everything can be dissolved into information, but in parallel with this has also been a belief that beyond the mundane physical realm there exists an idealized "Platonic" realm of pure forms, pure data, pure knowledge. This is also a critical dimension of cybernetic totalism, one which also has a long history in our culture.

What we need to understand, I suggest, is that the current iteration of techno-eschatology is nothing new in Western culture, that the techno/scientific culture of the West has indeed been pervaded by this spirit from the beginning. Which is not to say, of course, that all scientists and technologists think this way, only that there has always been a large contingent of our community who do. Like Jaron I believe we need to challenge this ideology — and it is an ideology — an especially pernicious one, I would argue. Like Jaron, I believe this ideology is crippling the advancement of science and technology (for this spirit inheres in much of the scientific community as well). It is also, as Jaron suggests, a force for exacerbating, not diminishing, social inequity. I am delighted to see this challenge being presented on the Edge, for on occasion I think that our community too has been too heavily pervaded with a techno-eschatological spirit.


From: John Baez
Date: September 27, 2000

I found Jaron Lanier's half-manifesto very interesting. I doubt my friends in the academic humanities know people who are actively worrying about (or looking forward to) nanotech or a Vingean "singularity". They would probably dismiss such ideas as nuts. But as a scientist, I know quite a few such people: Extropians, folks associated with the Foresight Institute, cypherpunks, fans of cryonics, and so on. So it makes sense to think seriously about what they are saying. I'm glad Jaron is doing this.

I don't have much to add except a couple of random remarks:

1) "The coming cybernetic cataclysm" takes various forms in the literature. Bill Joy's idea that autonomous machines will take over the world is actually a rather optimistic version. It assumes that machines, possibly with the help of "evil humans", will get good enough to beat us at our own game. My cybernetic totalist friends don't seem to worry about this scenario much. In fact, they may even relish the prospect! (Perhaps they are among those "evil humans" Joy talks about.)

What they worry about more is a "gray goo scenario" where, due to some screwup, self-replicating unintelligent nanotech gets loose and eats the entire biosphere. Myself, I'm not sure if this is a paranoid fantasy or a realistic possibility, since I don't know whether the biosphere is operating near maximal efficiency or whether a small, simple new entity could manage to eat everything in its path without being eaten itself. But in general, I'm more afraid of stupid mistakes than an attack of superintelligent beings. The gray goo scenario is just one of many possible mistakes we might make. The really dangerous ones are the ones we won't think of until they've already happened.

2) I liked Jaron's remark about physicists being the "alpha-academics" for most of the last century. It's curious being a mathematical physicist now that this era is over. I went into this field as a kid because I thought it was the coolest thing around. Gradually I realized that it's not — at least, not as measured by the standards of money and power! At first this was a bit of a letdown, but now it seems liberating in some respects. I don't have to worry that my research on quantum gravity will be used to create a super-bomb or destroy the universe — at least not in the near future — because we have no way of accessing the energy scales needed to wreak havoc in that way. Besides, we can blow ourselves up quite nicely already. Now it's the computer science, biotech and nanotech people who have to shoulder the responsibility of doing science that seriously affects human lives, while I enjoy playing around with my equations.

Yes, I'm being a bit sarcastic here, but it is very interesting how these things change.


From: Lee Smolin
Date: September 27, 2000

Jaron is raising some very important points that deserve closer examination and discussion. Among them is his challenge to the idea that the optimization of present-day computers could produce anything with the capabilities of living, intelligent animals (cats, let alone people). I think Jaron is right to point out that the arguments for this thesis rest on incorrect assumptions. I believe that Jaron's argument can be strengthened and I would like to explain how. The following is just a sketch, but I hope it suffices to stimulate the debate.

The problems to be addressed are: 1) what kinds of problems computers can solve, and whether they differ in kind from the kinds of problems humans solve; and 2) what kind of problem it is to design a computer, and whether it differs in kind from the problem of designing a human, or a creature with equal capabilities.

To approach these questions it helps to begin with the idea that some design problems involve searching a space of possible design parameters. We know that in these cases there are simple optimization algorithms that will find the local extrema in whatever basin of attraction one happens to be in. However, optimization is a small part of design because it can be used reliably to solve only a small subset of possible design problems. To talk about this we may distinguish five classes of design problems.

CLASS 1: Local optimization problems: problems which can be solved with standard hill-climbing techniques.

CLASS 2: Locate a pretty good, but not necessarily global, extremum in a configuration space with many local extrema and many basins of attraction.

CLASS 3: Locate the global extremum in a configuration space with many local extrema and many basins of attraction.

CLASS 4: Find local extrema in a landscape which changes unpredictably on the same time scale it takes to find local optima.

CLASS 5: Find local extrema in cases in which the computation time required to construct the configuration space and/or calculate the fitness function is either infinite or much longer than the time available. This is the class of problems which have to be invented or discovered before they can be solved, as there is no algorithm that can lead to their formulation or complete specification.

Let us first discuss the first question. At least so far, computers are very good at solving CLASS 1 problems, and there are decent algorithms for simple CLASS 2 problems. But we do not have good methods for finding global extrema and hence solving CLASS 3 problems. To my knowledge computers can do decently at some simple CLASS 4 problems, but can easily fail when these become more complex. By definition computers have trouble with CLASS 5 problems, as the computation time to set up the extremization problem is prohibitive. However, humans can often solve CLASS 3 problems and are also quite good at CLASS 4 problems. This should be no surprise; it is part of our biological specialization. This is what is required to flourish in a new environment, domesticate a new species, become farmers, populate almost all the ecological zones on the planet and so forth.
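
To make the distinction concrete, here is a minimal sketch in Python (my own illustration, not Smolin's; the two-peaked fitness function is a made-up toy): a standard hill climber reliably finds the extremum of whichever basin of attraction it starts in, which is exactly why a CLASS 1 method alone cannot be trusted on a CLASS 2 or CLASS 3 landscape.

    # Toy landscape with a local peak near x = -1 and a higher, global peak near x = 2.
    import math
    import random

    def fitness(x):
        return math.exp(-(x + 1) ** 2) + 2 * math.exp(-(x - 2) ** 2)

    def hill_climb(x, step=0.05, iterations=2000):
        for _ in range(iterations):
            candidate = x + random.uniform(-step, step)
            if fitness(candidate) > fitness(x):   # accept only uphill moves
                x = candidate
        return x

    random.seed(0)
    print(hill_climb(-1.5))   # starts in the left basin: stalls near x = -1, the local extremum
    print(hill_climb(1.5))    # starts in the right basin: converges near x = 2, the global extremum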

But humans can do even better than that; we can both invent and solve CLASS 5 problems. This is what poetry, art, music and science are about. We invent the forms and traditions and then we master them. We can thrive in a domain in which we create optimal versions of things that did not even exist a short time before. We are not extremizing in a landscape; we are building the landscape on the same time scale that we master it.

One correspondent suggested that anyone who thinks people are different from machines is a naive romantic. This is not true; we are different because we have vastly different capabilities. It is irrelevant to talk of the universality of Turing machines, for Turing machines are entities that run programs that must be written by an external entity. So far at least, the only entities we know of who can function as those external programmers are humans. Humans are intelligent creatures that do not need to be programmed by any external agency. Turing machines are designed; we are the result of natural selection. We need then to examine the second question, whether designing or programming a computer is in the same CLASS of problems as the problems natural selection solved in the course of evolution.

Of course inventing the idea of a digital computer was a CLASS 5 problem. But once we had the idea, the optimization of digital computers is mainly a CLASS 1 problem. This is what Moore's law is about: it tells us how quickly local optimization can work when ample resources are available. One of the points Jaron is making is that the problems of designing software that does justice to the exponentially increasing capabilities of our machines are not CLASS 1 problems. Moore's law tells us that the fitness landscape for software is changing on a time scale comparable to the time required to write and debug software. Thus writing software involves problems of at least CLASS 4. This is of course just a different way of making one of Jaron's arguments.
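
A small variation on the previous sketch (again my own illustration, with made-up numbers) shows why this matters: the same hill climber that is essentially optimal on a static landscape falls steadily behind when the peak drifts on the same time scale as the search, which is the CLASS 4 situation just described for software under Moore's law.

    # The peak of the fitness landscape drifts while the climber works: a toy CLASS 4 problem.
    import math
    import random

    def fitness(x, t, drift):
        peak = drift * t                     # the optimum moves as time passes
        return math.exp(-(x - peak) ** 2)

    def final_fitness(drift, steps=5000, step_size=0.02):
        x = 0.0
        for t in range(steps):
            candidate = x + random.uniform(-step_size, step_size)
            if fitness(candidate, t, drift) > fitness(x, t, drift):
                x = candidate
        return fitness(x, steps, drift)      # how close to the (moved) peak did we end up?

    random.seed(0)
    print(final_fitness(drift=0.0))    # static landscape: final fitness stays at 1.0
    print(final_fitness(drift=0.01))   # drifting landscape: the climber falls hopelessly behind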

For there to be a danger of robots taking over, or even being able to do a decent job entertaining us, replacing songwriters and singers, artists, scientists and comedians, one of two things has to happen. Either we will be able to design a machine that could replace us, which means a machine that can solve problems of CLASS 5, or we will be able to design a machine that could in turn design a machine that could solve CLASS 5 problems.

But while we can solve problems up to CLASS 5, so far we have only been able to design machines that can solve CLASS 2 problems reliably. And so far machines are not able to design other machines to solve even CLASS 1 problems. When one puts it this way it is clear that it is not just a matter of Moore's law; designing one of us is a very different kind of problem than optimizing a programmable digital computer.

What kind of problem is it to design an entity that can solve CLASS 5 problems? We know we were created by natural selection, acting on not only us but the whole collection of living species. This is at least a CLASS 4 problem, but it is very likely at least a CLASS 5 problem. The interactions among many species as they evolve under the rules of natural selection is a CLASS 4 problem, as is shown by models of Bak and Sneppen, Kauffman, Solé and others. But there are good arguments, summarized in Stuart Kauffman's forthcoming book, that natural selection and cultural evolution are really CLASS 5 problems. He argues that they are problems in which the construction of the fitness landscape itself is so computationally intensive that it is not correct to separate the specification of the fitness landscape from its optimization. Instead, both take place together. This means really that the metaphor of optimization has broken down completely. Whatever evolution is doing cannot, he argues, be conceptualized as extremization on a pre-existing fitness landscape.

Thus, the problem of designing an entity that can solve CLASS 5 problems is at least a CLASS 4 problem, and very likely is a CLASS 5 problem. But is it only this hard, or harder still? Humans can solve some CLASS 4 and 5 problems, but it is not at all obvious that the problems of these kinds that we can solve are comparable to the problems that natural selection has solved in designing us. At the very least, it is likely that solving the problem of designing us may take a great deal longer than the time it takes to solve the CLASS 4 and 5 problems we have so far dealt with. It took natural selection 4 billion years to design us. Let us assume that we could do it much faster. How much faster? Let us assume that we could use genetic engineering to engineer an artificial speciation in an animal. Speciation is a process that takes on the order of 100,000 years. Given very optimistic assumptions it is possible to imagine that some years from now this is something we will be able to accomplish in on the order of 100 years. It could certainly not be less than that, as we cannot do it faster than the time it takes for several generations to grow to maturity. (Because the interaction of an animal and its environment is a CLASS 5 problem, we are not likely to be able to simulate it reliably enough to replace the phase where we grow the animal and observe what happens.) This would mean that we had the tools to speed up natural selection by a factor of 1,000. Even with this fantastic increase of speed it would still take us a million years to invent something like ourselves, starting from scratch. (Note that this is true even if we skip the pre-Cambrian stages of evolution and begin with creatures whose cell biology and biochemistry are far in advance of anything we have so far designed. Note also that many biologists working in parallel won't help, as natural selection also works in parallel.)
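
Put as a back-of-the-envelope calculation (the post-Cambrian figure of roughly $5 \times 10^{8}$ years is my own gloss on "skipping the pre-Cambrian stages"):

    $4 \times 10^{9} / 10^{3} = 4 \times 10^{6}$ years starting from scratch, and roughly $5 \times 10^{8} / 10^{3} = 5 \times 10^{5}$ years starting from the Cambrian;

either way, on the order of a million years.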

This is on the order of the lifetime of a species. A problem like this, whose minimum time for solution is on the order of the lifetime of a whole species of creatures that can solve CLASS 5 problems deserves a separate class. So we may call this a CLASS 6 problem.

Is it possible that there is a way to do it much faster, by taking a route that natural selection could not have? One cannot say this is impossible, but all this means is that so little is known about the problem that it is in a class of problems we have no idea how to solve.

To summarize: the claim that optimization of present computer designs could produce something that is "as powerful" as humans requires that there is only one kind of intelligent entity, and that they all live in a fixed landscape with a single local extremum. But not only are we not in the same basin of attraction as present-day computers; it is not even obvious that the problem of constructing us has anything in common with problems we have so far solved. This is not to deny that someday humans may learn how to solve the problem of designing creatures that can themselves solve CLASS 5 problems. The point is only that there is no rational basis for predicting when or even whether this may happen, as the solution to this problem is not closely related to the kind of optimization problems that human designers have so far learned to solve.


From: Stewart Brand
Date: September 28, 2000

What a juicy piece of work by Jaron!

For me, one ancillary proof of much of his thesis is the phenomenon of Libertarian politics, which I've considered to be algorithmic political pseudoscience and now, thanks to Jaron, consider to be an offshoot of Cybernetic Totalism. Libertarian thinking is a common (though certainly not universal) affliction of working computeroids and their followers. Struck dumb by the cybernetic marvel of the marketplace, with its self-balancing and even fractal Invisible Hand, Libertarians seem unwilling to consider the equally marvelous cybernetic structure of the US Constitution or to consider that the sheer messiness of democracy in action is part of the system's long-term health.

Libertarians get caught up in simplistic analyses such as that, since police departments require crime in order to exist, they are therefore incented to make sure that crime is never "solved," creating it themselves if necessary. Or, more subtly, that since competition forces competitors to become more alike, police will come to resemble criminals so much that they are, in fact, criminals after a while. Both ideas are helpful, but there is no place in such analyses for trans-logical concepts like "honor" or "service," and those are what drive a huge part of effective police work.


From: Rodney Brooks
Date: October 1, 2000

Lee Smolin wrote:

"One correspondent suggested that anyone who thinks people are different from machines are naive romantics. This is not true, we are different because we have vastly different capabilities. It is irrelevant to talk of the universality of Turing machines, for Turing machines are entities that run programs that must be written by an external entity."

This is exactly the sort of naive romanticism to which I was referring. I was not comparing humans to a PC running Windows 2000. I am saying that people are machines in the sense that there is, as far as we have any scientific knowledge at this time, nothing in them outside the laws of physics of the universe which govern all matter. People are made of matter and that matter obeys the physical laws of the universe. Unless one hypothesizes an eternal soul, an elixir of life, an ineffable essence, or some other extra-physicalness to humans (and also to other animals, all the way down to bacteria?), then humans are machines. It has absolutely nothing to do with Turing machines, or programming computers.

Get over your fear of being a machine. We are not the center of the universe, and God does not exist. That is what this disagreement boils down to.


From: Lee Smolin
Date: October 2, 2000

In reply to Rodney Brooks:

I believe strongly that our entire existence is as part of the natural world. I am not afraid of this; my book, The Life of the Cosmos, is a kind of homage to that idea. My guess is that we agree broadly on metaphysics, but my comment had nothing to do with God, cosmology, consciousness or any kind of romanticism. I was trying to make a point about science, one that is well within the boundaries of our shared metaphysics.

In my comment I raised two issues: first, whether everything that is part of the physical universe can be described in terms of a Turing machine; second, whether the way that living animals process information is enough like how digital computers work that it is rational to hope to construct a reasoning animal based on models of digital computers. As these seem to be very open issues given the present evidence, it seems far from clear that the metaphor of a machine will in the end be very helpful to us in understanding, in physical terms, what animals are. In addition there is a problem with using the word machine in this context, which is that it carries with it the implication that something was made by human beings. This is not just semantics, because ignoring the deep differences, as physical systems, between living animals and human-made machines has led to some predictions for the future of machines that may not be consistent with our developing understanding of what life is.

To expand on this last point, I do believe that we will someday understand what we are in terms of physics. But before we do that we must first understand what a living thing is in terms of the laws of physics. We have made a lot of progress towards this in recent years and I believe more will be made shortly. Everything we have learned suggests that there are important differences, expressible in completely physical terms (more particularly, in terms of statistical physics), between systems that are made and systems that arise by a spontaneous process of self-organization. Both may process information, but they may do so in different ways, so that they are generally able to solve different classes of problems.

A related point is made by Stuart Kauffman in recent papers and a forthcoming book: there is a fundamental difference between a physical system that can be termed an "autonomous agent" and one that cannot be. Part of Kauffman's definition of an autonomous agent is that it is a self-reproducing system, able to carry out at least one thermodynamic work cycle. Computers are not autonomous agents to the extent that they are constructed and programmed. But computers are Turing machines, which is why that idea is useful for this discussion.

Living animals are autonomous agents. They are not, so far as has been shown, Turing machines. There is no obvious relationship between the definition of a Turing machine and the definition of an autonomous agent; it is certainly very unlikely that they are equivalent. Thus, while it is of course possible that we may some day be able ourselves to make living things, there does not seem to be any good reason to expect that such artificial animals will have a strong resemblance structurally or functionally to computers. (The fact that one can model certain aspects of life in computer software does not change this.)

Computers are wonderful tools and fantastic toys. But if machine is to mean anything at all besides "something found in the universe" (remember that we have the same metaphysics) then computers are machines, and animals are not.


From: Daniel C. Dennett
Date: October 4, 2000

A friendly alert to Jaron Lanier

Unalloyed enthusiasm for anything is bound to be a mistake, so thank goodness for the critics, the skeptics, the second-thought-havers, and even the outright apostates. Apparently the price one must pay for jumping off a fast moving bandwagon is missing the target somewhat, since it seems that apostates usually overstate the case and land somewhere rather far from where they aimed. Reading Jaron Lanier’s half a manifesto, I was reminded of an earlier critic of digital dreams, Joseph Weizenbaum, whose 1976 book, Computer Power and Human Reason, was an uneven mix of serious criticism in the tradition of Norbert Wiener and ill-developed jeremiads. Weizenbaum, in spite of my efforts (for which I was fulsomely thanked in his preface), could never figure out if he was trying to say that AI was impossible, or all-too-possible but evil. Was AI something we couldn’t develop or shouldn’t develop? Entirely different cases, requiring different arguments. There is a similar tension in Lanier’s writing: are the Cybernetic Totalists just hopelessly wrong—their dream is, for deep reasons, impossible—or are they cheerleaders we must not follow—because we/they might succeed? There is an interesting middle course, combining both options in a coherent possibility, and I take it that this is the best reading of Lanier’s manifesto: the Cybernetic Totalists are wrong and if we take them seriously we will end up creating something—not what they dream of, but something else—that is evil.

But who are the Cybernetic Totalists? I’m glad that Lanier entertains the hunch that Dawkins and I (and Hofstadter and others) "see some flaw in logic that insulates [our] thinking from the eschatological implications" drawn by Kurzweil and Moravec. He’s right. I, for one, do see such a flaw, and I expect Dawkins and Hofstadter would say the same. My reason has always been that the visionaries who imagine self-reproducing robots taking over in the near future have bizarrely underestimated the complexities of life. Consider the parallel flaw in the following passage from truth to foolishness:

 

TRUE: living bodies are made up of nothing but millions of varieties of organic molecules organized by the trillions into complex dynamic structures such as cells and larger assemblies (there is no élan vital, in other words).

FOOLISH CONCLUSION: therefore we shall soon achieve immortality; all we have to do is direct all our research and development into molecular biology with the goal of replacing those individual molecules, one at a time, as they break or wear out.

You don’t have to be a vitalist to reject this technocratic fantasy, and you don’t have to be a dualist, an anti-mechanist, to reject simplistic visions of some AI utopia just around the corner. Lanier is wistful about the possibility "that in rational thought the brain does some as yet unarticulated thing that might have originated in a Darwinian process, but that cannot be explained by it [my italics]," but why should it matter? Lanier is too clever to ask for a skyhook, but he can’t keep himself from yearning for . . . . half a skyhook.

It is ironic that when Lanier succumbs to temptation and indulges in a bit of cybernetic totalism of his own, he’s pretty good at it. His speculative analysis of the inevitability of what might be called legacy inertia, creating diminishing returns that will always blunt Moore’s law, is insightful, and I welcome these new reasons his essay gives me for my skepticism about the cybernetic future. But I wish he didn’t also indulge in so much presumptive caricature of those positions he finds threatening. He apparently doesn’t want there to be subtle, nuanced, modest versions of the theses he resists, since those would be so hard to sweep away, so he follows the example of one of his heroes, Stephen Jay Gould, and stoops to the demagogic stunt of creating strawpeople and then blasting away at them. He’s got me wrong, and Dawkins, and Thornhill and Palmer, to name the most obvious cases. It’s child’s play to hoot at parodies of me on consciousness, Dawkins on memes, Thornhill and Palmer on rape. Grow up and do some real criticism, worth responding to. We’re not the bad guys; we hold positions that are entirely congenial to his trenchant criticisms of simplistic thinking about computation and evolution.

Joseph Weizenbaum soon found himself drowning under a wave of fans, the darling of a sloppy-thinking gaggle of Euro-intellectuals who struck fashionable Luddite poses while comprehending almost nothing about the technology engulfing them. Weizenbaum had important, reasoned criticisms to offer, but all they heard was a Voice on Our Side against the Godless Machines. Jaron, these folks will love your message, but they are not your friends. Aren’t your criticisms worthy of the attention of people who actually will try to understand them?


From: Philip W. Anderson
Date: October 16, 2000

I was very happy to see Jaron Lanier's paper, in that it was saying a lot of things I had felt to be true, and saying them from within the digital world. The twenty-year prediction for conscious robots reminds me, for instance, of the twenty years since Stephen Hawking's prediction that in the year 2000 there would be no more theoretical physicists, only computers. What has actually happened has been that the currently fashionable field in theoretical physics, superstring theory, is an almost entirely analytical development. Computers can't even yet do respectable field theory for simple systems in four dimensions, much less 10 or more. What is happening in the rest of theoretical physics is even more depressing — if you find that depressing, that is — which is that the government agencies have been sold a bill of goods by you digerati, and will fund happily only theoretical physics done by computer; whereas the real problems are those which have not yet been conceptualised and simplified enough to use a computer.

I ran across a quote from, oddly, G. K. Chesterton, which makes one of the points nicely: "Life is a trap for logicians; it looks just a little more mathematical and regular than it is. Its exactitude is obvious, but its inexactitude is hidden; its wildness lies in wait."

I also felt a resonance with another story. I read recently a review of a book in which it is shown that the Alexandrians had reached a level of scientific sophistication by 150 BC which was close to that of 17th century England; for instance, that much of Newton's Principia borrowed ideas from Greek texts. The science was lost, he claimed, not by the Church, although it sure helped, but by the practical engineering bent of the Romans, who took the useful engineering rules of thumb like Ptolemy's cycles and ignored the science behind them. In other words, exactly the "dumbing down" process that Lanier describes.

I am also fascinated by Lanier's idea that there is something between simple digital representations of input data and the "qualia" of a dualistic animalcule. The architecture of the brain attaches a very complex structure to each region of the visual or tactile field, a kind of a minibrain connected to all the other minibrains. Presumably this minibrain doesn't tell all the others that its part of the field has thus-and-so spectrum, it tells them that it's red (or at least redder than their part).

In general, I think there is much too much of a tendency to think that a representation of the world in terms of bit strings is a satisfactory one (even if complete). If this is so, why does the quantum computer do new things? Why is complexity theory such a poor guide to the real world of problems?

A decade ago I reviewed a book about Ed Fredkin (among others) in which he expressed the opinion that even the ultimate fine structure of space-time was digital. This bad idea was later taken up by John Wheeler (he calls it "it from bit") as well as a number of other less able physicists. The problem with it is that all of our success with particle physics — the Standard Model — is based upon continuous symmetries to which a digital picture is maximally unsuited. Modern quantum gravity actually claims to be seeing the scale at which it all stops, and if you can believe their picture it sure doesn't look digital at all. (They describe it as all the theories seguing into each other, kind of, but none of them are discrete.)

I guess the problem I have is that discrete mathematics feels too anthropomorphic — too much creating the world in our own image. No matter how far Moore's law carries us, it is still digital. I am not agreeing with Penrose, nor do I believe we are anything but a machine — but are we a digital machine? To put it less mystically, is a digital representation practical?




