Forrest Sawyer, John McCarthy respond to Ray Kurzweil; Kurzweil answers McCarthy

[most recent first]

From: Ray Kurzweil
Date: 3.29.02

I'd like to take exception to McCarthy's statement that Hans Moravec and I are "not active" in solving the technical problems of AI. I won't attempt here to evaluate the results of my efforts; I leave that to others (including McCarthy). But I have now been active for forty years in research and development in pattern recognition, natural language technology, and related fields. I can be more direct concerning Moravec: he is widely regarded as having made major contributions to three-dimensional machine vision and robotics.

The only way that McCarthy's statement makes sense is if one excludes the fields of pattern recognition, language technologies, machine vision, and robotics from the "AI" field (a field, incidentally, that McCarthy himself helped name when he organized the 1956 Dartmouth Conference). Indeed, that was the narrow view of many AI practitioners and observers in the 1980s. This "narrow view," which was never my perspective, equates AI with rule-based (i.e., "expert") systems. Rule-based systems are an important and useful technique, and one that I've often incorporated as an element in larger systems, but it is my view that developing AI, whether today's "narrow" domain-specific variety or future "strong" (human-level) systems, will require a panoply of approaches. Indeed, what we've found in the last half century is that rule-based methods tend to be brittle, and that pattern recognition (which tends to be characterized by massively parallel, self-organizing methods) lies at the heart of many important human cognitive abilities.

With regard to Eric Drexler, he is also widely acknowledged as having laid the theoretical foundations of nanotechnology. This field, although not directly related to current AI research, will ultimately yield the three-dimensional molecular computers that will supply at least the hardware required for human-level AI.

It is also a restricted view to regard the nineteenth century as having had more impact than the twentieth century. It is always the case that later innovations build on earlier ones, so earlier developments may appear to be more fundamental (e.g., consider the world without fire or the wheel). It requires a methodical approach to clearly see the acceleration of technology and its impact. Human life has changed far more in the twentieth century than in the nineteenth century. Human life expectancy went from 39 years in 1800 to 47 years in 1900, a 20% increase. It then increased to 77 years by 2000, a 64% increase. Although the railroad was unquestionably important, very few people were impacted by electricity, electric light, telephones, or cars by 1900. These innovations took decades to be adopted by even a quarter of the U.S. population. Adoption of innovations today is faster by an order of magnitude. Real average incomes today are six times greater (after adjustment for inflation) than a century ago. It's always possible to pick isolated examples that belie a pervasive trend.
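The percentage comparison above is simple arithmetic; a quick sketch (using only the 39-, 47-, and 77-year life-expectancy figures cited in the text) confirms the rounded 20% and 64% increases:

```python
# Life-expectancy figures as cited in the text (years).
expectancy = {1800: 39, 1900: 47, 2000: 77}

def pct_increase(old: float, new: float) -> float:
    """Percentage increase from old to new."""
    return (new - old) / old * 100

gain_19th = pct_increase(expectancy[1800], expectancy[1900])
gain_20th = pct_increase(expectancy[1900], expectancy[2000])
print(f"19th century: +{gain_19th:.1f}%")  # ~20.5%
print(f"20th century: +{gain_20th:.1f}%")  # ~63.8%
```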

I have studied these trends for over twenty years, and the acceleration of technology and its adoption is undeniable [1]. The acceleration of computation is not a recent phenomenon and goes back to the roots of computation in the late nineteenth century. We've had over a century of (double) exponential growth in the price-performance of computation through five paradigms (electromagnetic calculators, relay-based computers, vacuum tubes, discrete transistors, and finally, integrated circuits and Moore's Law). This phenomenon of acceleration and exponential growth is not limited to computation, but applies also to communication bandwidths, biological technologies and knowledge (e.g., the price-performance of DNA sequencing), brain scanning bandwidths and resolution, human brain reverse-engineering, the size of networks (e.g., the Internet), and even the size of technology (i.e., we're shrinking technology at an exponential rate, by a factor of about 5.6 per linear dimension per decade). When one paradigm runs out of steam, it gets replaced by another paradigm, which starts another "S-curve" (i.e., exponential growth followed by saturation) and thereby continues the exponential growth. We'll see that for the sixth time in computation with the advent of three-dimensional molecular computing. In the past year alone, there have been multiple advances in the area of carbon nanotube-based electronics.
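The compound effect of the shrink rate cited above can be sketched in a few lines (the 5.6-per-linear-dimension-per-decade figure is taken from the text; the derived volume and multi-decade numbers are simple arithmetic consequences, not claims from the source):

```python
SHRINK_PER_DECADE = 5.6  # linear-dimension shrink factor per decade (as cited)

def linear_shrink(decades: float) -> float:
    """Overall linear shrink factor after the given number of decades."""
    return SHRINK_PER_DECADE ** decades

def volume_shrink(decades: float) -> float:
    """Volume shrinks with the cube of the linear dimension."""
    return SHRINK_PER_DECADE ** (3 * decades)

print(linear_shrink(1))  # 5.6x per dimension in one decade
print(volume_shrink(1))  # ~175.6x by volume in one decade
print(volume_shrink(2))  # ~30,841x by volume over two decades
```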

With regard to putting a date on "strong AI": I understand John's hesitation, after several embarrassing predictions by leading AI practitioners (e.g., Herbert Simon's 1965 prediction that by 1985 "machines will be capable of doing any work a man can do"). Although I don't claim to have an infallible crystal ball, there is a methodology to my technology forecasting, and I can say that I'm not embarrassed by the predictions I made during the 1980s about the 1990s and early 2000s based on these models, which I continue to refine.

There will be a variety of sources for the "fundamental conceptual advances" that McCarthy refers to, but a critical one is the effort to reverse-engineer the human brain, an endeavor well under way and further along than most observers realize. Recall that predictions in 1985 that the human genome would be sequenced by 2002 were considered extreme, because at that time we could only sequence about one ten-thousandth of the genome in a year. But the ongoing acceleration of genetic sequencing (basically a doubling of its price-performance annually) enabled the prediction to be realized. As I mentioned above, we see similar exponential growth in technologies such as human brain scanning and neuronal modeling, as well as the modeling of specific brain regions. It is a conservative projection to say that we will have detailed mathematical models of all neuron types and brain regions by the mid-2020s. We already routinely use transformations from these neurophysiological models in our pattern recognition work (e.g., in the front end of speech recognition systems). The insights from the accelerating efforts to reverse-engineer the human brain will be an important source of the conceptual advances required to create the software of human intelligence. With regard to the hardware, that is a more straightforward proposition, and I doubt that McCarthy would disagree that we will have the requisite hardware well before 2029.
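The genome arithmetic above can be made concrete with a small sketch. The assumptions are taken from the text (one ten-thousandth of the genome sequenced in the first year, with throughput doubling annually); the loop simply accumulates the yearly fractions:

```python
def years_to_complete(initial_fraction: float = 1e-4, doubling: float = 2.0) -> int:
    """Years until the cumulative sequenced fraction of the genome reaches 1.0."""
    done, rate, years = 0.0, initial_fraction, 0
    while done < 1.0:
        done += rate      # fraction sequenced this year
        rate *= doubling  # price-performance doubles annually
        years += 1
    return years

# Starting from 1985's pace, annual doubling finishes the job in about 14 years;
# most of the genome is sequenced in the last few doublings.
print(years_to_complete())  # 14
```

Note how the final two doublings account for the bulk of the output, which is why the early years made the prediction look extreme.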

[1] See charts in "The Law of Accelerating Returns"

From: John McCarthy
Date: 3.25.02

1. I agree that human level AI will be developed and that it will revolutionize human society, though in more complicated ways than anyone has yet worked out.

2. I disagree with Kurzweil's putting a date on it. Fundamental conceptual advances are required to reach human level AI. Maybe we'll have it in five years, maybe it will take 500 years, although I doubt it will take that long.

3. I'm disappointed that Kurzweil makes these cheerleading predictions while not himself active in solving the technical problems. As a cheerleader he joins Hans Moravec and K. Eric Drexler.

4. By the way, Kurzweil is mistaken in claiming that progress is accelerating and that a "singularity" is being approached. As it happens, progress affecting human life was faster in the 19th century than in the 20th century. At the beginning of the 19th century, it took an expedition, the Lewis and Clark expedition, to journey overland from the States to the Pacific coast; by 1869 there was a railroad across the country, the first railroad anywhere having been built in 1828. In 1815, the Battle of New Orleans was fought six weeks after the war was over; the telegraph came in 1840 and the transatlantic cable by 1870. Anesthesia for operations came in the 1840s. The germ theory of infectious diseases was established in the 1860s. In the 1890s we got electric lights in the home, and automobiles and refrigerators were being manufactured. Shortly after the turn of the 20th century we got radio and the airplane.

The 20th century inventions were important but had somewhat lesser effects. These include mass air travel, antibiotics, the pill, TV, nuclear energy, the beginning of space travel, remote operation of equipment of all kinds, the computer, the personal computer, and the Internet. It is much less stressful to do without the products of 20th-century invention for a month than those of 19th-century invention. There's no law of diminishing returns here; this is just how it worked out.

Merging biology with electronic computation will happen, but we can't yet say when it will affect daily life. Human level AI will also be revolutionary, but we don't know when.

From: Forrest Sawyer
Date: 3.25.02

This latest missive struck a nerve, and I thought I'd pass the twinge along. Ray argues that "it is part of our destiny and part of the destiny of evolution to continue to progress ever faster, and to grow the power of intelligence exponentially." This is much closer to religious dogma than to a statement of fact about the nature of evolution. We do tend to be anthropocentric when studying biological systems, and we do tend to extrapolate from past patterns to the future. But there is no reason to believe that, from a purely evolutionary standpoint, we are any more successful than viruses or bacteria. In fact, based on biomass and the likelihood of long-term survival, the opposite could easily be argued. The evolutionary jury is out on whether primate intelligence is in it for the long run. Judging from the environmental impact of our staggeringly recent population surge, the odds are growing slimmer.

To suggest that it is our "destiny" to grow ever more intelligent, merging with machines along the way, is to fail to understand the fragility of our present position. It is not destiny but a series of decisions (or lack of them) that will shape the near future. We can also be sure that massive environmental perturbations will play a role along the way as well...a quick review of the iridium layer at the end of the Cretaceous can assure us of that.

It may make us feel better to believe we are destined for what we envision to be greatness, but the systems at work here are far too large and complex for us to control with such happy precision. The fact that we exist is hardly satisfactory evidence that nature loves intelligence, and it is not evidence that "progress" is inevitable. Progress is, after all, a relative term, and indeed our progress has spelled disaster for most of the world's large animals and many ecosystems. A little humility in the face of the challenges before us might help us decide our next steps more carefully...and help us gain a clearer understanding of the biological systems of which we are a part.


John Brockman, Editor and Publisher
contact: [email protected]
Copyright © 2002 by Edge Foundation, Inc. All Rights Reserved.