Edge: ONE HALF AN ARGUMENT


This romancing of software from years or decades ago is comparable to people's idyllic view of life hundreds of years ago, when we were unencumbered by the frustrations of machines. Life was unencumbered, perhaps, but it was also short (life expectancy was less than half of today's), labor-intensive (just preparing the evening meal took many hours of hard work), poverty-stricken, and prone to disease and disaster.

With regard to the price-performance of software, the comparisons in virtually every area are dramatic. For example, in 1985, $5,000 bought you a speech recognition software package that provided a 1,000-word vocabulary, offered no continuous-speech capability, required three hours of training, and had relatively poor accuracy. Today, for only $50, you can purchase a speech recognition package with a 100,000-word vocabulary and continuous-speech capability that requires only five minutes of training, offers dramatically improved accuracy and natural-language understanding (for editing commands and other purposes), and includes many other features.
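To make that comparison concrete, here is a back-of-envelope calculation using the figures just cited. Vocabulary size per dollar is a crude proxy chosen only for illustration; it ignores the simultaneous gains in accuracy, training time, and continuous-speech capability.

```python
# Back-of-envelope comparison of the two speech recognition packages cited
# above, using vocabulary size per dollar as a crude proxy for
# price-performance (it ignores accuracy, training time, and continuous
# speech, all of which also improved).

package_1985 = {"price_usd": 5_000, "vocabulary_words": 1_000}
package_today = {"price_usd": 50, "vocabulary_words": 100_000}

def words_per_dollar(pkg):
    return pkg["vocabulary_words"] / pkg["price_usd"]

improvement = words_per_dollar(package_today) / words_per_dollar(package_1985)
print(f"Vocabulary per dollar improved roughly {improvement:,.0f}x")  # ~10,000x
```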

How about software development itself? I've been developing software myself for forty years, so I have some perspective on this. It's clear that the growth in productivity of software development has a lower exponent, but it is nonetheless exponential. The development tools, class libraries, and support systems available today are dramatically more effective than those of decades ago. Today I have small teams of just three or four people who achieve in a few months objectives comparable to what a team of a dozen or more people could accomplish in a year or more 25 years ago. I estimate the doubling time of software productivity to be approximately six years, slower than the doubling time for processor price-performance, which is approximately one year today. Nonetheless, software productivity is growing exponentially.
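A quick sanity check of those doubling times, under the simplifying assumption of smooth exponential growth and using illustrative team sizes drawn from the anecdote above:

```python
# Sanity check on the doubling-time claims above. A 6-year doubling time
# over 25 years implies roughly a 2**(25/6) ~ 18x productivity gain, which
# is in the same ballpark as the team-size anecdote (a dozen people for a
# year versus three or four people for a few months).

def growth_factor(years, doubling_time_years):
    """Fold improvement after `years` of exponential growth."""
    return 2 ** (years / doubling_time_years)

software = growth_factor(25, 6)   # software productivity, ~6-year doubling
hardware = growth_factor(25, 1)   # processor price-performance, ~1-year doubling

print(f"Software productivity over 25 years: ~{software:.0f}x")
print(f"Processor price-performance over 25 years: ~{hardware:,.0f}x")

# Rough person-month comparison from the anecdote (illustrative numbers):
then = 12 * 12   # a dozen people for roughly a year
now = 4 * 3      # four people for roughly three months
print(f"Person-month ratio in the anecdote: ~{then / now:.0f}x")
```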

The most important point to be made here is that there is a specific game plan for achieving human-level intelligence in a machine. I agree that achieving the requisite hardware capacity is a necessary but not sufficient condition. As I mentioned above, we have a resource for understanding how to program the methods of human intelligence given hardware that is up to the task, and that resource is the human brain itself.

Here again, if you speak to some of the neurobiologists who are diligently creating detailed mathematical models of the hundreds of types of neurons found in the brain, or who are modeling the patterns of connections found in different regions, you will in at least a few cases encounter the same sort of engineer's (or scientist's) myopia that results from being immersed in the specifics of one aspect of a large challenge. However, having tracked the progress being made in accumulating our (yes, exponentially increasing) knowledge of the human brain and its algorithms, I believe it is a conservative scenario to expect that within thirty years we will have detailed models of the several hundred information-processing organs we collectively call the human brain.

For example, Lloyd Watts has successfully synthesized (that is, assembled and integrated) detailed models of the neurons and interconnections in more than a dozen regions of the brain having to do with auditory processing. He has a detailed model of the information transformations that take place in these regions and of how this information is encoded, and he has implemented these models in software. The performance of Watts' software matches the intricacies that have been revealed in subtle experiments on human hearing and auditory discrimination. Most interestingly, using Watts' models as the front end in speech recognition has demonstrated the ability to pick out one speaker against a backdrop of competing sounds, an impressive feat that humans are capable of and that, up until Watts' work, had not been feasible in automated speech recognition systems.
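The following is a purely hypothetical sketch of the pipeline structure being described, not Watts' actual models: the class names and the stand-in signal processing are invented for illustration, and the point is only the architecture of a biologically modeled front end feeding a conventional recognizer.

```python
# Hypothetical sketch (not Watts' code) of the architecture described above:
# an auditory-model front-end producing features that a downstream
# recognizer consumes, in place of a conventional spectral front-end.
# All names here are illustrative only.

import numpy as np

class AuditoryFrontEnd:
    """Stand-in for a biologically modeled front end (cochlea and auditory
    brainstem regions) that converts raw audio into per-channel features."""
    def __init__(self, num_channels: int = 64):
        self.num_channels = num_channels

    def extract_features(self, audio: np.ndarray) -> np.ndarray:
        # Placeholder: a real model would apply cochlear filtering and the
        # subsequent neural transformations. Here we just frame the signal
        # and take magnitude spectra as a stand-in.
        frames = audio[: len(audio) // 160 * 160].reshape(-1, 160)
        return np.abs(np.fft.rfft(frames))[:, : self.num_channels]

class Recognizer:
    """Stand-in for the statistical recognizer that consumes the features."""
    def decode(self, features: np.ndarray) -> str:
        return f"<decoded {features.shape[0]} frames>"

# The pipeline: audio -> auditory model -> features -> recognizer.
audio = np.random.randn(16_000)          # one second of fake 16 kHz audio
features = AuditoryFrontEnd().extract_features(audio)
print(Recognizer().decode(features))
```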

The brain is not one big neural net. It consists of hundreds of regions, each of which is organized differently, with different types of neurons, different types of signaling, and different patterns of interconnection. By and large, its algorithms are not the sequential, logical methods commonly used in digital computing. The brain tends to use self-organizing, chaotic, holographic (i.e., information is stored not in one place but distributed throughout a region), massively parallel, and digitally controlled analog methods. However, we have demonstrated in a wide range of projects the ability to understand these methods and to extract them from the rapidly escalating knowledge of the brain and its organization.
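As a cartoon of what "holographic" storage means in practice, the toy Hopfield-style network below (an illustrative choice, not a claim about any specific brain region) stores a pattern across its entire weight matrix, so random damage to the weights degrades recall gracefully rather than erasing the memory.

```python
# Toy illustration of distributed ("holographic") storage: a small Hopfield
# network keeps a pattern spread across its whole weight matrix, so zeroing
# out a fraction of the weights degrades recall gracefully instead of
# erasing the memory. A cartoon only of the distributed, self-organizing
# style described in the text.

import numpy as np

rng = np.random.default_rng(0)
n = 100
pattern = rng.choice([-1, 1], size=n)

# Hebbian storage: every weight carries a little of the pattern.
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)

# Knock out 30% of the weights at random ("damage" the region).
damage = rng.random(W.shape) < 0.30
W[damage] = 0

# Recall from a noisy cue by iterating the update rule.
cue = pattern.copy()
cue[rng.choice(n, size=20, replace=False)] *= -1   # flip 20 bits of noise
state = cue
for _ in range(5):
    state = np.sign(W @ state)
    state[state == 0] = 1

overlap = np.mean(state == pattern)
print(f"Recovered {overlap:.0%} of the stored pattern despite damage and noise")
```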

The speed, cost-effectiveness, and bandwidth of human brain scanning are also growing exponentially, doubling every year. Our knowledge of human neuron models is also rapidly growing. The size of the neuron clusters we have successfully recreated in terms of functional equivalence is likewise scaling up exponentially.

I am not saying that this process of reverse engineering the human brain is the only route to "strong" AI. It is, however, a critical source of knowledge that is feeding into our overall research activities where these methods are integrated with other approaches.

Also, it is not the case that the complexity of software, and therefore its "brittleness," needs to scale up dramatically in order to emulate the human brain, even when we get to emulating its full functionality. My own area of technical interest is pattern recognition, and the methods we typically use are self-organizing methods such as neural nets, Markov models, and genetic algorithms. When set up in the right way, these methods can often display subtle and complex behaviors that are not predictable by the designer who put them into practice. I'm not saying that such self-organizing methods are an easy shortcut to creating complex and intelligent behavior, but they do represent one important way in which the complexity of a system can be increased without the brittleness of explicitly programmed logical systems.
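Here is a minimal sketch of one such self-organizing method, a toy genetic algorithm: no line of the program encodes the solution, yet the population converges on the target through selection, crossover, and mutation alone. It is illustrative only, not production pattern-recognition code.

```python
# Toy genetic algorithm (OneMax): evolve a population of random bit strings
# toward the all-ones string. The solution is never written into the
# program; it emerges from selection, crossover, and mutation.

import random

TARGET_LEN = 40
POP_SIZE = 60
MUTATION_RATE = 0.01

def fitness(individual):
    return sum(individual)                      # count of 1-bits

def crossover(a, b):
    cut = random.randrange(1, TARGET_LEN)       # single-point crossover
    return a[:cut] + b[cut:]

def mutate(individual):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in individual]

population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == TARGET_LEN:
        break
    parents = population[: POP_SIZE // 2]       # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print(f"Best fitness {fitness(population[0])}/{TARGET_LEN} "
      f"after {generation + 1} generations")
```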
