Edge: ONE HALF OF AN ARGUMENT

However, on balance, I view the progression of evolution as a good thing, indeed as a spiritual direction. What we see in evolution is a progression towards greater intelligence, greater creativity, greater beauty, and greater subtlety (that is, the emergence of entities capable of emotion, such as the ability to love, and therefore greater love). And "God" has been described as an ideal of an infinite level of these same attributes. Evolution, even in its exponential growth, never reaches infinite levels, but it is moving rapidly in that direction. So we could say that this evolutionary process is moving in a spiritual direction.

However, the story of the twenty-first century has not yet been written. So it's not my view that any particular story is inevitable, only that evolution, which has been inherently accelerating since the dawn of biological evolution, will continue its exponential pace.

Jaron writes that "the whole enterprise of Artificial Intelligence is based on an intellectual mistake." Until such time as computers at least match human intelligence in every dimension, it will always remain possible for skeptics to say the glass is half empty. Every new achievement of AI can be dismissed by pointing out other goals that have not yet been accomplished. Indeed, this is the frustration of the AI practitioner: once an AI goal is achieved, it is no longer considered AI and becomes just a useful technique. AI is inherently the set of problems we have not yet solved.

Yet machines are indeed growing in intelligence, and the range of tasks that machines can accomplish that previously required intelligent human attention is rapidly growing. There are hundreds of examples of narrow AI today (e.g., computers evaluating electrocardiograms and blood cell images, making medical diagnoses, guiding cruise missiles, making financial investment decisions, not to mention intelligently routing emails and cell phone connections), and the domains are becoming broader. Until such time as the entire range of human intellectual capability is emulated, it will always be possible to minimize what machines are capable of doing.

I will point out that once we have achieved complete models of human intelligence, machines will be capable of combining the flexible, subtle, human levels of pattern recognition with the natural advantages of machine intelligence. For example, machines can instantly share knowledge, whereas we have no quick downloading ports for our patterns of interneuronal connections and neurotransmitter concentrations. Machines are also much faster (as I mentioned, contemporary electronics is already ten million times faster than the electrochemical information processing used in our brains) and have far more prodigious and accurate memories.

Jaron refers to the annual "Turing test" contest that Loebner runs, and maintains that "we have caused the Turing test to be passed." These are misconceptions. I used to be on the prize committee of this contest until a political conflict caused most of the committee members to quit. Be that as it may, this contest is not really a Turing test, as we are not yet at that stage. It is a "narrow Turing test," which deals with domain-specific dialogues, not the unrestricted dialogue that Turing envisioned. With regard to the Turing test as Turing described it, it is generally accepted that it has not yet been passed.

Returning to Jaron's nice phrase "circle of empathy," he writes that his "personal choice is to not place computers inside the circle." But would he put neurons inside that circle? We've already shown that a neuron or even a substantial cluster of neurons can be emulated in great detail and accuracy by computers. So where on that slippery slope does Jaron find a stable footing? As Rodney Brooks says in his September 25, 2000 commentary on Jaron's "Half of a Manifesto," Jaron "turns out to be a closet Searlean." He just assumes that a computer cannot be as subtle, or as conscious, as the hundreds of neural regions we call the human brain. Like Searle, Jaron just assumes his conclusion. (For a more complete discussion of Searle and his theories, see my essay "Locked in his Chinese Room, Response to John Searle" in the forthcoming book Are We Spiritual Machines?: Ray Kurzweil vs. the Critics of Strong AI, Discovery Institute Press, 2001. This entire book will be posted on http://www.KurzweilAI.net.)

Near the end of Jaron's essay, he worries about the "terrifying" possibility that through these technologies the rich may obtain certain opportunities that the rest of humankind does not have access to. This, of course, would be nothing new, but I would point out that because of the ongoing exponential growth of price-performance, all of these technologies quickly become so inexpensive as to become almost free. Look at the extraordinary amount of high-quality information available at no cost on the Web today that did not exist at all just a few years ago. And if one wants to point out that only a small fraction of the world today has Web access, keep in mind that the explosion of the Web is still in its infancy.

At the end of his "Half of a Manifesto," Jaron writes that "the ideology of cybernetic totalist intellectuals [may] be amplified from novelty into a force that could cause suffering for millions of people." I don't believe this fearful conclusion follows from Jaron's half of an argument. The bottom line is that technology is power, and this power is rapidly increasing. Technology may result in suffering or liberation, and we've certainly seen both in the twentieth century. I would argue that we've seen more of the latter, but nevertheless neither Jaron nor I wish to see the amplification of destructiveness that we have witnessed in the past one hundred years. As I mentioned above, the story of the twenty-first century has not yet been written. I think Jaron would agree with me that our destiny is in our hands. However, I regard "our hands" to include our technology, which is properly part of the human-machine civilization.
