Edge: ONE HALF AN ARGUMENT


In his "Postscript regarding Ray Kurzweil," Jaron asks the rhetorical question "about Ray's exponential theory of history . . . [is he] stacking the deck by choosing points that fit the curves he wants to find?" I can assure Jaron that the more points we add to the dozens of exponential graphs I presented to him and the rest of the audience in Atlanta, the clearer the exponential trends become. Does he really imagine that there is some circa 1901 calculating device with better price-performance than our circa 2001 devices? Or even a 1995 device that is competitive with a 2001 device? In fact, what we do see as more points (representing specific devices) are collected is a cascade of "S-curves," in which each S-curve represents a specific technological paradigm. Each S-curve (which looks like an "S" whose top portion is stretched out to the right) starts out with gradual and then extreme exponential growth, subsequently leveling off as the potential of that paradigm is exhausted. But what turns each S-curve into an ongoing exponential is the shift to another paradigm, and thus to another S-curve, i.e., innovation. The pressure to explore and discover a new paradigm increases as the limits of each current paradigm become apparent.
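To make the cascade concrete, here is a minimal numerical sketch of my own devising (an illustration, not a model fitted to the graphs shown in Atlanta): each paradigm is represented as a logistic S-curve, each new paradigm is assumed to have a ceiling roughly thirty times its predecessor's and to arrive about twenty "years" later, and overall capability simply tracks whichever paradigm currently delivers the most. All of the ceilings, timings, and steepness values are hypothetical, chosen only to show how a succession of individually saturating curves traces an exponential envelope.

import math

def s_curve(t, ceiling, midpoint, steepness=1.0):
    """Logistic S-curve: slow start, rapid rise, then leveling off at 'ceiling'."""
    return ceiling / (1.0 + math.exp(-steepness * (t - midpoint)))

def capability(t, paradigms):
    """Overall capability is dominated by whichever paradigm delivers the most at time t."""
    return max(s_curve(t, c, m, k) for (c, m, k) in paradigms)

# Five hypothetical paradigms: each has a ceiling ~30x higher than the last
# and reaches its midpoint about 20 "years" later. These numbers are
# illustrative assumptions, not data from the graphs discussed above.
paradigms = [(30.0 ** n, 20.0 * n, 0.5) for n in range(5)]

for t in range(0, 100, 10):
    print(f"t = {t:3d}   capability = {capability(t, paradigms):12.1f}")

Running this prints a capability figure that climbs by roughly six orders of magnitude over the ninety simulated years, even though every individual curve flattens out, which is the stair-step-into-exponential pattern described above.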

When it became impossible to shrink vacuum tubes any further and still maintain the requisite vacuum, transistors came along, which are not merely small vacuum tubes. We've been through five computing paradigms in the past century (electromechanical calculators, relay-based computers, vacuum-tube-based computing, discrete transistors, and then integrated circuits, on which Moore's law is based). As the limits of flat integrated circuits are now within sight (one to one and a half decades away), there are already dozens of projects underway to pioneer the sixth paradigm of computing, namely computing in three dimensions, and several of them have demonstrated small-scale working systems.

It is specifically the processing and movement of information that is growing exponentially. So one reason that an area such as transportation is resting at the top of an S-curve is that many if not most of the purposes of transportation have been satisfied by exponentially growing communication technologies. My own organization has colleagues in different parts of the country, and most of our needs that in times past would have required a person or a package to be transported can be met through the increasingly viable virtual meetings made possible by a panoply of communication technologies, some of which Jaron is himself working to advance. Having said that, I do believe we will see new paradigms in transportation. However, with increasingly realistic, high-resolution, full-immersion forms of virtual reality continuing to emerge, our needs to be together will increasingly be met through computation and communication.

Jaron's concept of "lock-in" is not the primary obstacle to advancing transportation. If the existence of a complex support system necessarily caused lock-in, then why don't we see lock-in preventing the ongoing expansion of every aspect of the Internet? After all, the Internet certainly requires an enormous and complex infrastructure. The primary reason that transportation is under little pressure for a paradigm shift is that the underlying need for transportation has been increasingly met through communication technologies that are expanding exponentially.

One of Jaron's primary themes is to distinguish between quantitative and qualitative trends, saying in essence that perhaps certain brute-force capabilities such as memory capacity, processor speed, and communications bandwidth are expanding exponentially, but the qualitative aspects are not. To this end, Jaron complains of a multiplicity of software frustrations (many, incidentally, having to do with Windows) that plague both users and, in particular, software developers like himself. This is the hardware-versus-software challenge, and it is an important one. Jaron never mentions my primary thesis, which has to do with the software of intelligence. Jaron characterizes my position, and that of other so-called "cybernetic totalists," as being that we'll just figure it out in some unspecified way, what he refers to as a software "Deus ex Machina." I have a specific and detailed scenario for achieving the software of intelligence, which concerns the reverse engineering of the human brain, an undertaking that is much further along than most people realize. I'll return to this in a moment, but first I would like to address some other basic misconceptions about the so-called lack of progress in software.

Jaron calls software inherently "unwieldy" and "brittle" and writes at great length on a variety of frustrations that he encounters in the world of software. He writes that "getting computers to perform specific tasks of significant complexity in a reliable but modifiable way, without crashes or security breaches, is essentially impossible." I certainly don't want to put myself in the position of defending all software (any more than I would care to characterize all people as wonderful). But it's not the case that complex software is necessarily brittle and prone to catastrophic breakdown. There are many examples of complex mission-critical software that operates with few, if any, breakdowns, for example, the sophisticated software that controls an increasing fraction of airplane landings, or the software that monitors patients in critical care facilities. I am not aware of any airplane crashes that have been caused by automated landing software; the same, however, cannot be said for human reliability.

Jaron says that "Computer user interfaces tend to respond more slowly to user interface events, such as a keypress, than they did fifteen years ago . . . What's gone wrong?" To this I would invite Jaron to try using an old computer today. Even if we put aside the difficulty of setting one up (which is a different issue), Jaron has forgotten just how unresponsive, unwieldy, and limited they were. Try getting some real work done to today's standards with a fifteen-year-old personal computer. It's simply not true that the old software was better in any qualitative or quantitative sense. If you believe that, then go use one.

Although it's always possible to find poor-quality design, the primary reason for user interface response delays is user demand for more sophisticated functionality. If users were willing to freeze the functionality of their software, then the ongoing exponential growth of computing speed and memory would quickly eliminate software response delays. But they're not. So functionality always stays on the edge of what's feasible (personally, I'm waiting for the Teleimmersion upgrade to my videoconferencing software).
