THE REALITY CLUB
"One Half Of A Manifesto" by Jaron Lanier
Part II

Jaron Lanier responds to Reality Club comments on the .5 Manifesto by George Dyson, Freeman Dyson, Cliff Barney, Bruce Sterling, Rodney Brooks, Henry Warwick, Kevin Kelly, Margaret Wertheim, John Baez, Lee Smolin, Stewart Brand, Daniel C. Dennett, and Philip W. Anderson

Lanier's postscript on Ray Kurzweil


Jaron Lanier
Date: November 11, 2000

Hello to two generations of Dysons, Freeman and George, both of whom I admire. I must say that it is immediately apparent that our priorities are different. As I hope my essay makes clear, I am more concerned with how people design technology and relate to it psychologically than with the long-term fate of the machines themselves. Whether or not George Dyson's critique is technically correct, in my opinion it is esthetically, ethically, and politically misguided, in that he is looking at questions solely from the perspective of the machines rather than from the perspective of people. I see that I have genuinely failed to communicate this most essential point in my essay across a cultural chasm, and it saddens me. My failure is made more plain by the flip theological references in George Dyson's note; he is apparently more comfortable deifying software than recognizing the value of human aspirations to rational design.

If a future develops in which Dyson would perceive new life forms to have arisen from adaptations of messy software, I would perceive instead a lot of anti-human programming and design resulting in opaque user interfaces, i.e. machines that no longer made sense to people. I would also perceive a loss of human drive to achieve elegance in software design and an abandonment of rational planning. The most important point in my essay is that our two differing interpretations would each be reasonably applicable to the same outcome. I am advocating one interpretation over the other for reasons that arise from human, rather than technical concerns.

The argument that the Dysons do address is a secondary one in my mind: to what degree messiness limits or enhances the future of software. The key question here is whether different kinds of unreliability are effectively interchangeable. George Dyson equates the failure modes of primordial chemistry with failure modes seen in contemporary software. This shouldn't be understood as a comparison between hardware and software per se, but between elements whose connections can only be described by statistics, like molecules, or indeed physical gates in a computer, versus elements that connect by Platonic logic.

Certainly the Dysons are correct to a degree, in the sense that error recovery algorithms can grant a "soft knee" to software failure modes that is reminiscent of the type of "statistical binding" seen in natural systems. Real computers as we know them are not built this way, of course. A thought experiment is different from a real-world viable machine.

In George Dyson's original posting, he said, "It is that primordial soup of archaic subroutines ... that is driving the push towards the sort of fault embracing template-based addressing that proved so successful in molecular biology".

If the question is framed in the future tense, then I understand what conversation we are having. (We're asking if evolving machines could hypothetically come to be in the future, perhaps the very far future.) I think this idea can be examined, and as I hope I made clear, I am open minded about it, although I maintain that an excessive emphasis on this possibility has negative effects on contemporary technology design and culture.

In more recent correspondence, George said quite plainly that, with regard to gaining autonomy through evolution, machines "have done so *already*".

This I truly cannot accept. If people stopped maintaining today's machines, they would not only cease to change, they would cease to operate entirely. I'm sure George must agree with that: evolution based on small variations (mutations) allowed by error correction is not a possibility in machines as they exist today. So George must be talking about a system made of people and computers together. And here, certainly, I think we must agree that there is room for alternate interpretations: that one person's autonomous machine might equally well be another's machine with an inscrutable user interface. If we can agree on this chain of reasoning, then I would hope to discuss whether there are pragmatic reasons to favor one interpretation over the other in specific circumstances, such as our own.

In correspondence, George suggested that we should start to think of the internet as already being somewhat autonomous, since it runs even though people don't fully understand it anymore. (I hope I'm doing justice in my paraphrasing.)

My experience of current digital tools is that while there are certainly numerous instances in which people no longer understand the tools, it is also true that these are precisely the same instances in which the tools fail, in which they crash. The changes that result from a human observing a crash are usually not incremental mutations, blindly searching a space for a better configuration, but rather analysis-driven adjustments that force the machine to conform to a rational plan that was written down prior to testing. The plan might change, of course, but only on the human side of the system. I am not claiming that this is always the way that debugging happens (in fact I love to make little virtual worlds with quirky bugs I don't quite understand), but I am claiming that it is more true the larger a system gets.

The fact that Y2K bugs didn't destroy the world as feared is one piece of evidence that we are actually in charge of our machines, even though we like to fantasize that we aren't.

The examples I gave of people "making themselves stupid" in order to make software seem smart, as in the credit rating system, are ones in which people most definitely do understand the machines, to a fault.

The Internet as a machine seems comprehensible to me. At Advanced Network and Services, where the Internet2 Engineering Office is located, and which is my primary perch these days, there's a fine project to measure activity on the net with probes all over the world, and the data are useful for rationally improving performance. No alien communication signals have appeared.

The failure modes of practical software are quite different from what is seen in chemical/biological systems. When a computer crashes (and I mean a real computer, not a thought experiment in a math journal), nothing else happens. There is no more processing. When an organism crashes, it turns into food for other organisms. Its information is not entirely lost from the system. I recognize that this point will probably fall on deaf ears for respondents who think of computers as already being autonomous and biological in some sense. I think a careful examination of computers as they are in the real world will show that all the "biological" properties of digital technology are brought to the table by the people who maintain the technology.

I don't think we know enough yet to say definitively whether the two kinds of unreliability (digital and biological/statistical) are ultimately, at some extreme of scaling, interchangeable.

I also don't perceive the evolution that George does in some of the examples he suggested in correspondence. In what ways have operating systems gotten better since the 70s? There are a few, but far fewer than anyone in the field ever imagined there would be. UNIX was, to a remarkable degree in retrospect, pretty much there at the start. I suppose it comes down to a subjective evaluation of how important various modifications since then have been.

The internet might provide better examples of the kinds of ongoing "evolution" George is talking about. There are still opportunities to create useful new subsystems, along the lines of the one operated by Akamai, for example. As another example, the TCP/IP protocol is probably the most common "soft failure mode" protocol in use, and it has improved over time, most notably with the advent of "slow start". But this happened when a human, Van Jacobson, had one of those thus far inscrutable "aha!" moments.
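
To give a flavor of what a "soft failure mode" protocol means in practice, here is a deliberately simplified Python sketch of the slow-start idea: the congestion window grows exponentially until a loss is detected, then backs off and grows cautiously. It is an illustration of the concept only, not of any real TCP implementation; the function name, parameters, and thresholds are invented for the example.

```python
# A toy illustration of TCP-style "slow start" congestion control.
# Conceptual sketch only; real TCP stacks are far more involved
# (timeouts, fast retransmit, selective acknowledgments, and so on).

def simulate_slow_start(rounds, capacity):
    """Grow a congestion window exponentially until a (pretend) loss,
    then back off and grow linearly, as in congestion avoidance."""
    cwnd = 1          # congestion window, in segments
    ssthresh = None   # slow-start threshold, unset until the first loss
    history = []
    for _ in range(rounds):
        history.append(cwnd)
        if cwnd > capacity:            # pretend a packet loss was detected
            ssthresh = max(cwnd // 2, 1)
            cwnd = 1                   # restart slow start from one segment
        elif ssthresh is None or cwnd < ssthresh:
            cwnd *= 2                  # exponential growth phase
        else:
            cwnd += 1                  # congestion avoidance: linear growth
    return history

print(simulate_slow_start(rounds=20, capacity=64))
```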

Ironically, I have for a long time nurtured a scheme to build an operating system out of components that would bind together using a pattern recognition approach (with so-called "neural nets") instead of literal reference, as part of my own war against "brittleness". Such a system, if I could ever get it to work, and I've tried, believe me, would be more in line with the Dysons' take on software than other architectures I am aware of out there in the real world today. (One sub-project of the Tele-Immersion Initiative, bearing the acronym SOFT, created in the last two years at the Computer Science Department of Brown University, could perhaps be seen as an early example of a "soft binding" architecture.)
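
As a very rough illustration of what such "soft binding" might look like, here is a toy Python sketch in which components advertise feature vectors and a caller binds to the closest match rather than to a literal name. Everything here (the registry, the made-up feature axes, the nearest-match rule) is hypothetical and invented for the example; it is not a description of the SOFT project or any real system.

```python
# Toy sketch of "soft binding": components are looked up by similarity
# to a requested feature pattern instead of by an exact, literal name.
# Purely illustrative; not based on any real system's architecture.

import math

registry = {}  # component name -> feature vector describing what it does

def register(name, features):
    registry[name] = features

def soft_bind(request):
    """Return the name of the component whose features best match the request."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(registry, key=lambda name: distance(registry[name], request))

# Hypothetical components described along three invented axes,
# e.g. (handles_text, handles_images, needs_realtime).
register("text_renderer",  (1.0, 0.1, 0.2))
register("image_decoder",  (0.1, 1.0, 0.3))
register("video_pipeline", (0.0, 0.9, 1.0))

# A caller asks for "something image-like and realtime" and is bound to
# the nearest component, even though it never named one explicitly.
print(soft_bind((0.0, 0.8, 0.9)))   # -> video_pipeline
```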

To Cliff Barney:

Hey, I'm thinking as socially as I can. Wish it were social enough for you!

I gave the closing talk at Stanford University's Engelbart event that you mention. I presented a condensed version of the "missing half" of the manifesto there, and it's available on video (see http://unrev.stanford.edu/index.html). My preternaturally angelic and patient publishers are confident that I will somehow, someday soon finish the long overdue book that will unite both halves.

Human society didn't change all THAT much during the course of the million-fold increase in computer power that you identify, from 1968 to roughly the end of the century. Certainly society changed more (as a result of technological provocation) in the previous 30 years, which saw the introductions of television, the birth control pill, factory-based genocide, the atomic bomb, LSD, the electric guitar, suburbia, the freeway, the middle class, and so much more. Globalism isn't all that new either. You can read passages in Marx on the internationalization of capital that sound exactly like dot com press releases from the recent boom years.

The last thirty years have seen such things as the rise of Gay rights and working moms, but it seems to me that many of these changes are most easily interpreted as extensions of processes that began before 1968. (As an example, I'm amazed that so much of today's teenage culture is as similar as it is to that of the 1950s and 1960s. The (white) music even sounds about the same as it did in the 1960s. The music of 1968 sounded quite different from the music of 1938.)

People talk about digital technology more than they use it. They tend to overstate how much they have been affected by it. I don't say this as a criticism. It's a most fascinating thing to talk about. Here I am doing it.

I think what's going on is that digital technology does not affect the lives of people until new culture, expressed both in software implementations and in changing human habits, is invented for it. Non-digital technologies, on the other hand, present instant opportunities for meaningful events to take place. Point a movie camera at the world and that world is changed forever, even if an initial subject is nothing more than an approaching train. Digital technology is different because an intensely time-consuming process must precede its efficacy. An excessive degree of conscious forethought (thwarting pretensions to Dionysian digital flights of fancy) and cumulative boredom characterize digital culture more than surprising revelation. The tedium gets to us all once in a while, and I think intellectual positions such as George Dyson's might serve as psychic comfort.

I am a true believer in the long term, lovely improvement of the human condition to be brought about by digital technology, but it's going to be a slow ride, because we have to build the code, piece by piece.

To Bruce Sterling:

A warm, brotherly bear hug for you!

To Rodney Brooks:

Your way of thinking is all too familiar, the standard issue point of view found in elite computer science departments. Glad you showed up, just in case anyone might have wondered if I was making up a straw man.

I made no claim as to whether machines could in theory become conscious or not. Instead I argued that such ultimate questions are not answerable, at least by anyone in our contemporary conversation.

I maintain, once again, that the most useful conversations we can have on such topics must be motivated by pragmatic, esthetic, and moral considerations.

Your certainty that you alone can identify the one true null hypothesis is a religious claim.

I hope it's clear that I was being snide and flip when I brought up nanobots. They are actors in a thought experiment, no more meaningful than artificial intelligence, and no more useful in thinking about how to design real machines, societies, and philosophies.

To Henry Warwick:

I'd like to address a plea to you and to other people who largely agree with me. Would you consider becoming immersed for a time in the other side's arguments, if only for the sake of dialog? They aren't stupid ideas, they're just wrong, and they deserve respect as smart, wrong ideas. If we humanists aren't willing to engage the CT crowd on their own terms once in a while, we can hardly expect them to invest in understanding our terms.

I'd also suggest decoupling such questions as whether the universe is deeply "mathematical", or whether it can be fully understood, from the design, legal, esthetic, and social levels where the ideas that root in the heads of technologists come to matter. The deep questions might never be answered. They must be asked, of course, but it is best to ask them separately. The pragmatic questions can not only be answered, but will be answered by our collective actions, whether we like it or not.

To Kevin Kelly:

I wrote the essay for my colleagues in the technology world, such as Rodney Brooks. Whether any of them are persuaded by it remains to be seen. My sense of this world is that it is currently not benefiting from a variegated ecology of metaphors, but rather is locked into a standard release of one metaphor.

To Margaret Wertheim:

I agree. Once Western culture defined itself as being on a ramp, the ramp had to go somewhere. The "other half" of the manifesto will be concerned with alternate ways of conceiving of the ramp's destination.

To John Baez:

Thank you for pointing out that a lot of folks in the "extropian" crowd seem to actually like the idea of goo taking over. I have come across this sentiment again and again. It is interesting in its own right, completely aside from whether Genghis Goo is a realistic scenario or not.

To Lee Smolin:

Thank you for this fascinating post.

I wish Stuart Kauffman would name his objects something other than "autonomous agents", since that is almost the same language CTers use to describe such things as the idiotic dancing paper clip that confuses users of Windows.

I'd like to encourage other respondents to address your ideas directly, instead of dragging the conversation down once again into eternal imponderables.

Some of the next deep (askable) questions: Will we someday be able to estimate how efficient natural evolution has been, in comparison to a theoretical ideal? Is evolution close to being as fast as it could be in searching the configuration spaces at hand, in the way that retinas are almost as sensitive to visible light as they could possibly be, or is there a lot of room for making evolutionary machines that would search practical configuration spaces much more quickly?
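
As a purely illustrative rendering of what "searching a configuration space" means here, the following Python sketch runs a mutation-driven search over bit strings. The fitness function and parameters are invented, and nothing about it measures real biological efficiency; it only makes the question concrete.

```python
# Toy sketch of mutation-based search over a configuration space of
# bit strings. Illustrative only; it says nothing about real biology.

import random

def fitness(genome):
    return sum(genome)          # invented target: maximize the number of 1s

def evolve(length=64, generations=2000, seed=0):
    rng = random.Random(seed)
    genome = [rng.randint(0, 1) for _ in range(length)]
    for generation in range(generations):
        candidate = genome[:]
        candidate[rng.randrange(length)] ^= 1      # single-point mutation
        if fitness(candidate) >= fitness(genome):  # keep it if no worse
            genome = candidate
        if fitness(genome) == length:              # reached the optimum
            return generation
    return generations

# One crude "efficiency" question: how many generations did blind
# mutation need, compared with the best any search could possibly do?
print(evolve())
```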

I'm also struck by how much more past computation is implied in some configurations than in others, and therefore wonder how your ontology relates to the various definitions of "information". Irreducible overhead in optimizing a configuration space (including legacy effects) might also be treated as a fundamental "distance" between configurations, and might serve as a basis for formal definitions of such things as species boundaries. This type of distance is also similar to some ideas about physical distance in recent computational quantum gravity models.

To Stewart Brand:

Yes, yes, yes! This is the explanation for the preponderance of exceedingly strident expressions of libertarian ideals in digital culture.

To Daniel Dennett:

You'll be happy to know I turned down Harper's Magazine and instead accepted Wired's offer to print the .5 Manifesto. I assure you I am in no danger of drowning in a friendly tsunami of Euro-admirers, for the simple reason that I am also a composer, and therefore the class of professional culture critics is sworn by blood oath to make my life difficult.

I'd like to be able to assert that neither of us understands something without being accused by CTers of sentimental, softheaded, retrograde religious dependency. I made no claim that there could never be an explanation for how people think, just that Darwin alone might not provide the framework for an explanation. No "half a skyhook", just an unsolved problem.

Straw men?

Read Rodney Brooks' posts and you'll see what I'm up against.

The rape book is silly, you just have to admit it. I could have quoted from dozens of clunkers in this odd text. There was a great passage about a woman raped by an orangutan; her husband (the woman's, that is), as well as the woman herself, reported less consternation than they would have expected to experience had she been raped by a person. No control group, sample size of one, reliance on subjective reportage, suspicious story; you could hardly come up with a lousier experiment. And yet this example was used to reinforce the idea that the real reason rape is disliked is selfish genes; that bestiality is relatively delightful because it doesn't interrupt human mating schemes. I'm not saying, and have never said, that the ideas in this book are completely or exactly wrong, but rather that the book is inept. I sympathize with your position. You're a little like a member of a political party who has to defend an incompetent candidate. The important question to ask here is whether the CT community is too self-satisfied. I haven't met the authors of the rape book, but I imagine they must be intelligent and well meaning, and that perhaps the giddy team spirit of CT blinded them and made them sloppy.

I didn't attack Dawkins in the piece, and in fact a genial debate between him and me has been published. He is, as I have pointed out in past writings, not a meme totalist, even though he spawned a generation of them. As for you on consciousness, I am gently teasing you, and you must admit that you have been quite a rough player in your own writings in the past.

To Philip W. Anderson:

Thank you for your provocative note.

An interesting thought experiment is to imagine what the history of science and civilization might have been like if digital computers had become practical before Newton. This is not an unimaginable sequence. The ancient Alexandrians or Chinese might have done it if fortune had granted either of them a millennium or so of tranquility. The Chinese scenario might be more likely, since they weren't thinking in terms of mathematical proof, but were very good at coming up with clever technologies and building massive works. They would perhaps have built stylish city block-sized medieval computers out of electromechanical switches. These would have emitted marvelous rhythms, and perhaps there would have been dancing on the sidewalks around them.

I suspect our counterfactual predecessors could have gotten to the moon, but not built semiconductors or an atomic bomb. They wouldn't have been forced to notice the problems that led us to understand relativity and quantum mechanics.

I think there would have been less of a divide between the sciences and the mainstream of society, because it is easier to write fresh and fun computer programs than it is to do original work in continuous mathematics. Instead of being shrouded in esoteric mystery, science and engineering would have seemed more accessible to the lay person. Kant or his equivalent would have built huge simulations of competing metaphysics instead of seeking proofs.

Back to the present: Computers might yet yield important new physics. Stephen Hawking simply made the usual error of underestimating the time it takes to figure out how to write good software. We shouldn't expect deep understanding of software to improve any faster than deep understanding of other things. Think of the time it took to move from Newton to Einstein. Intellectual progress is not governed by Moore's Law.


Postscript:

Re: Ray Kurzweil

Much to my surprise, Ray Kurzweil and I spoke in succession (in Atlanta, at one of Vanguard's events) just as I was writing these responses. We see the world quite differently. He would certainly reject my last claim above, that fundamental intellectual achievement isn't inexorably speeding up.

I see punctuated equilibria in the history of science. Right now we're in the midst of an explosion of new biology. Around the turn of the last century there was an explosion of data and insight about physics. Physics is now searching for its next explosion but hasn't found it yet.

I also see a distinction between quantity and quality that Ray doesn't. I see computers getting bigger and faster, but it doesn't directly follow that computer science is also improving exponentially.

Ray sees everything as speeding up, including the speed of the speedup. In Atlanta, he collected varied graphic portrayals of exponential historical processes in a slide show, and labeled these a "countdown" to the singularity he predicts will arrive about a quarter of the way into the new century.

His exponential histories blend what others might think of as varied phenomena together into categories without differentiation. For instance, he showed a slide about Moore's Law, but with the timeframe not limited to the era of the silicon chip. Instead, he defines chips as just one of five technological phases that have upheld the exponential speedup of computation that started with the earliest mechanical calculation devices. He infers that the curve will be continued with nanotechnological or other devices once the limits of chip technology are reached, in perhaps twelve years. Likewise he showed a grand exponential account of the history of life on Earth that started with items like the Cambrian Explosion at the foot of the curve and soared to modern technological marvels at its heights, as if these were all of a kind.
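
For readers who want to see the mechanics behind such a chart, here is a small Python sketch that fits an exponential trend to a handful of points and extrapolates it forward. The data points are invented for illustration; they are not Ray's figures, and the sketch takes no position on whether the extrapolation is justified.

```python
# Sketch of the curve-fitting behind exponential "countdown" charts:
# fit a straight line to (year, log10(value)) and extrapolate.
# The data points below are invented purely for illustration.

import math

# (year, operations per second of a hypothetical machine)
points = [(1950, 1e3), (1965, 1e5), (1980, 1e7), (1995, 1e9)]

xs = [year for year, _ in points]
ys = [math.log10(value) for _, value in points]
n = len(points)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

def projected(year):
    """Value the fitted exponential predicts for a given year."""
    return 10 ** (slope * year + intercept)

# Extrapolating far beyond the fitted range is exactly the move that
# the surrounding text questions.
for year in (2010, 2030, 2050):
    print(year, f"{projected(year):.3g}")
```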

I hope I can avoid being cast as the person who precisely disagrees with Ray, since I think we agree on many things. There are exponential phenomena at work, of course, but I feel they have robust contrarian company. I believe our human story is not best defined by a smooth curve, even at a large scale (although I try to make one exception, which I'll describe below). If there was ever a complex, chaotic phenomenon, we are it.

One question I have about Ray's exponential theory of history is whether he is stacking the deck by choosing points that fit the curves he wants to find. A technological pessimist could demonstrate a slow-down in space exploration, for instance, by starting with Sputnik, then proceeding to the Apollo and space shuttle programs, and then to the recent bad luck with Mars missions. Projecting this curve into the future could serve as a basis for arguing that space exploration will inexorably wind down. I've actually heard such reasoning put forward by antagonists of NASA's budget. I don't think it's a meaningful extrapolation, but it's essentially similar to Ray's arguments for technological hyper-optimism.

It's also possible that evolutionary processes might display local exponential features at only some scales. Evolution might be a grand scale "configuration space search" that periodically exhibits exponential growth as it finds an insulated cul-de-sac of the space that can be quickly explored. These are regions of the configuration space where the vanguard of evolutionary mutation experimentation comes upon a limited theater within which it can play out exponential games like arms races and population explosions. I suspect you can always find exponential sub processes in the history of evolution, but they don't give form to the biggest picture.

Here's one example: The dinosaurs were apparently "scaled" (maybe in both the traditional and Silicon Valley senses of the word!) by an "arms race", leading to larger and larger animals. Dinosaurs were not the only creatures at the time that relied on gigantism as a strategy. Much of the animal kingdom was becoming huger at once. I doubt the size competition proceeded at a linear rate. Arms races rarely do.

If we were dinosaurs debating this question, the Kurzweilosaurus might argue that our descendants would soon be big enough to stand on their toes and touch the moon, and not long after that become as big as the universe. (Tribute is due, as always, to Mark Twain and his erectile Mississippi.)

The race to bigness came to a halt, perhaps because of a spaceborne cataclysm. Whatever the reason for the dinosaurs' disappearance, they could not have become bigger without bounds. Furthermore, the race to bigness did not inexorably reappear, but was replaced by other races. The mere appearance of an exponential sequence does not mean that it will not encounter an impassable boundary, or become untraceable as other processes exert their influences.
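
The point that a locally exponential process can run into a boundary can be made with a one-screen sketch: logistic growth looks exponential at first and then saturates. The parameters below are arbitrary, chosen only to show the shape.

```python
# Sketch: a logistic process looks exponential early on, then saturates.
# The parameters are arbitrary; the point is the shape, not the numbers.

def logistic_series(steps, rate=0.5, capacity=1000.0, start=1.0):
    """Discrete logistic growth: near-exponential while far below capacity."""
    x = start
    values = []
    for _ in range(steps):
        values.append(x)
        x = x + rate * x * (1.0 - x / capacity)   # growth slows near capacity
    return values

series = logistic_series(40)
for step, value in enumerate(series):
    if step % 5 == 0:
        print(step, round(value, 1))
# Early values grow by roughly a factor of 1.5 per step (exponential),
# but the curve flattens out as it approaches the carrying capacity.
```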

I see a scattered distribution of local, bounded exponential processes in the history of life, while Ray sees these processes all focusing like a coherent laser on a point in time we will likely live to see.

Smart people can be fooled by trends. For instance, in 1966, when technological optimism was perhaps even more pronounced than it is today (when space exploration seemed to be progressing exponentially, for instance), Time Magazine presented what it thought was a sober prediction: that by the year 2000 technology would have advanced to the point that no one in America would work for a living. Automation would take the drudgery out of life. Each American citizen would receive a healthy middle class stipend in the mail every month simply for being American. A specific dollar amount ($30,000-$40,000 in 1966 dollars) was even projected for the stipend. (Thanks to GBN's Eamonn Kelly for pointing out this example.)

Time Magazine was making what it saw as a perfectly reasonable extrapolation based on legitimate data. What went wrong with Time's prediction? There's no doubt that technology continued to improve in the second half of the twentieth century, and by most interpretations it did so at an exponential clip. Productivity faithfully increased on an exponential curve as well.

Here are a few candidate failings: Public rejection of key predicted technologies such as nuclear energy; "lock in" of such things as cars and freeways, which did not scale cheaply or elegantly; population explosions; increasingly unequal distributions of wealth; entrenchment in law and habit of the work ethic; and perhaps even the beginning of the "planet of helpdesks" scenario that made a cameo appearance in the .5 manifesto. This last possibility provides an alternate way to think about the growing "knowledge economy".

Note that some of these countervailing elements are exponential in their own right. Population growth is a classic example of an exponential process that can absorb an exponential increase in available resources. This is what has happened with high yield agriculture in India.

What's really tricky is figuring out when one process will outrun its surroundings for a while in a meaningful way, as the Internet has grown at a faster rate than the population or the larger economy.

I have to admit that I want to believe in one particular large scale, smooth, ascending curve as a governor of mankind's history. Specifically, I want to believe that moral progress has been real, and continues today. This is not an easy thing to believe in. I formed my desire to believe in it at about the same time that Time Magazine made its prediction about the end of work.

I remember being a child in the 1960s, and there was a giddy feeling in the air of accelerating social change. While the language was different, the idea wasn't that different from today's digital eschatology. It felt like the world was on an exponential course of change, approaching a singularity.

The evidence was there. You could have plotted the points on a graph and seen one of Ray's curves, but no one thought to do it explicitly at the time. 1776, Civil War, Women's Suffrage, Civil Rights Struggle, Anti-war movement, Women's lib, Gay Rights, Animal rights... You could plot all these on a graph and see an exponential rate of expansion of the "Circle of Empathy" I wrote about in the .5 Manifesto. This process seemed to be destined to zoom into a singularity around 1969 or so, when I was nine years old. People were quite depressed when the singularity did not happen. Younger people today might not realize how deeply that singularity's no-show marked the lives of a vast number of Baby Boomers.

Dinosaurs did not become as large as the universe, work did not disappear in 2000 (at least not by November, 2000, as I write this), and love did not conquer all in 1969. All the trends were real, but were either interrupted, outran their own internal logics, ran out of world to expand into, or were balanced or consumed by other processes.


Back to "One Half of a Manifesto by Jaron Lanier; Reality Club Comments

Re: Ray Kurtzweil

Much to my surprise, Ray Kurtzweil and I spoke in succession (in Atlanta, at one of Vanguard's events) just as I was writing these responses. We see the world quite differently. He would certainly reject my last claim above, that fundamental intellectual achievement isn't inexorably speeding up.

I see punctuated equilibria in the history of science. Right now we're in the midst of an explosion of new biology. Around the turn of the last century there was an explosion of data and insight about physics. Physics is now searching for its next explosion but hasn't found it yet.

I also see a distinction between quantity and quality that Ray doesn't. I see computers getting bigger and faster, but it doesn't directly follow that computer science is also improving exponentially.

Ray sees everything as speeding up, including the speed of the speedup. In Atlanta, he collected varied graphic portrayals of exponential historical processes in a slide show, and labeled these a "countdown" to the singularity he predicts will arrive about a quarter of the way into the new century.

His exponential histories blend what others might think of as varied phenomena together into categories without differentiation. For instance, he showed a slide about Moore's Law, but with the timeframe not limited to the era of the silicon chip. Instead, he defines chips as just one of five technological phases that have upheld the exponential speedup of computation that started with the earliest mechanical calculation devices. He infers that the curve will be continued with nanotechnological or other devices once the limits of chip technology are reached, in perhaps twelve years. Likewise he showed a grand exponential account of the history of life on Earth that started with items like the Cambrian Explosion at the foot of the curve and soared to modern technological marvels at its heights, as if these were all of a kind.

I hope I can avoid being cast as the person who precisely disagrees with Ray, since I think we agree on many things. There ARE exponential phenomena at work, of course, but I feel they have robust contrarian company. I believe our human story is not best defined by a smooth curve, even at a large scale (although I try to make one exception, which I'll describe below). If there was ever a complex, chaotic phenomenon, we are it.

One question I have about Ray's exponential theory of history is whether he is stacking the deck by choosing points that fit the curves he wants to find. A technological pessimist could demonstrate a slow-down in space exploration, for instance, by starting with sputnik, and then proceeding to the Apollo and the space shuttle programs and then to the recent bad luck with Mars missions. Projecting this curve into the future could serve as a basis for arguing that space exploration will inexorably wind down. I've actually heard such reasoning put forward by antagonists of NASA's budget. I don't think it's a meaningful extrapolation, but it's essentially similar to Ray's arguments for technological hyper-optimism.

It's also possible that evolutionary processes might display local exponential features at only some scales. Evolution might be a grand scale "configuration space search" that periodically exhibits exponential growth as it finds an insulated cul-de-sac of the space that can be quickly explored. These are regions of the configuration space where the vanguard of evolutionary mutation experimentation comes upon a limited theater within which it can play out exponential games like arms races and population explosions. I suspect you can always find exponential sub processes in the history of evolution, but they don't give form to the biggest picture.

Here's one example: The dinosaurs were apparently "scaled" (maybe in both the traditional and Silicon Valley senses of the word!) by an "arms race", leading to larger and larger animals. Dinosaurs were not the only creatures at the time that relied on gigantism as a strategy. Much of the animal kingdom was becoming huger at once. I doubt the size competition proceeded at a linear rate. Arms races rarely do.

If we were dinosaurs debating this question, the Kurtzweilosaurus might argue that our descendants would soon be big enough to stand on their toes and touch the moon, and not long after that become as big as the universe. (Tribute is due, as always, to Mark Twain and his erectile Mississippi.)

The race to bigness came to a halt, perhaps because of a spaceborne cataclysm. Whatever the reason for the dinosaurs' disappearance, they could not have become bigger without bounds. Furthermore, the race to bigness did not inexorably reappear, but was replaced by other races. The mere appearance of an exponential sequence does not mean that it will not encounter an impassable boundary, or become untraceable as other processes exert their influences.

I see a scattered distribution of local, bounded exponential processes in the history of life, while Ray sees these processes all focusing like a coherent laser on a point in time we will likely live to see.

Smart people can be fooled by trends. For instance, in 1666, when technological optimism was perhaps even more pronounced than it is today (when space exploration seemed to be progressing exponentially, for instance), Time Magazine presented what it thought was a sober prediction: That by the year 2000 technology would have advanced to the point that no one in America would work for a living. Automation would take the drudgery out of life. Each American citizen would receive a healthy middle class stipend in the mail every month simply for being American. A specific dollar amount ($30-$40,000 in 1966 dollars) was even projected for the stipend. (Thanks to GBN's Eamonn Kelly for pointing out this example.)

Time Magazine was making what it saw as a perfectly reasonable extrapolation based on legitimate data. What went wrong with Time's prediction? There's no doubt that technology continued to improve in the second half of the twentieth century, and by most interpretations it did so at an exponential clip. Productivity faithfully increased on an exponential curve as well.

Here are a few candidate failings: Public rejection of key predicted technologies such as nuclear energy; "lock in" of such things as cars and freeways, which did not scale cheaply or elegantly; population explosions; increasingly unequal distributions of wealth; entrenchment in law and habit of the work ethic; and perhaps even the beginning of the "planet of helpdesks" scenario that made a cameo appearance in the .5 manifesto. This last possibility provides an alternate way to think about the growing "knowledge economy".

Note that some of these countervailing elements are exponential in their own right. Population growth is a classic example of an exponential process that can absorb an exponential increase in available resources. This is what has happened with high yield agriculture in India.

What's really tricky is figuring out when one process will outrun its surroundings for a while in a meaningful way, as the Internet has grown at a faster rate than the population or the larger economy.

I have to admit that I want to believe in one particular large scale, smooth, ascending curve as a governor of mankind's history. Specifically, I want to believe that moral progress has been real, and continues today. This is not an easy thing to believe in. I formed my desire to believe in it at about the same that Time Magazine made it's prediction about the end of work.

I remember being a child in the 1960s, and there was a giddy feeling in the air of accelerating social change. While the language was different, the idea wasn't that different from today's digital eschatology. It felt like the world was on an exponential course of change, approaching a singularity.

The evidence was there. You could have plotted the points on a graph and seen one of Ray's curves, but no one thought to do it explicitly at the time. 1776, Civil War, Women's Suffrage, Civil Rights Struggle, Anti-war movement, Women's lib, Gay Rights, Animal rightsŠ You could plot all these on a graph and see an exponential rate of expansion of the "Circle of Empathy" I wrote about in the .5 Manifesto. This process seemed to be destined to zoom into a singularity around 1969 or so, when I was nine years old. People were quite depressed when the singularity did not happen. Younger people today might not realize how deeply that singularity's no-show marked the lives of a vast number of Baby Boomers.

Dinosaurs did not become as large as the universe, work did not disappear in 2000 (at least not by November, 2000, as I write this), and love did not conquer all in 1969. All the trends were real, but were either interrupted, outran their own internal logics, ran out of world to expand into, or were balanced or consumed by other processes.





| Top |