ONE HALF AN ARGUMENT


Consider that the brain itself is created from a genome with only 23 million bytes of useful information (that's what remains of the 800-million-byte genome once you eliminate all the redundancies, e.g., the Alu sequence, which is repeated hundreds of thousands of times). Twenty-three million bytes is not that much information (it's less than the code for Microsoft Word). How is it, then, that the human brain, with its 100 trillion connections, can result from a genome that is so small? I have estimated that the interconnection data alone needed to characterize the human brain is a million times greater than the information in the genome.
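
To make that gap concrete, here is a back-of-the-envelope version of the estimate; the per-connection figure is an illustrative assumption (a bare neuron-address per connection), not a measured value:

```python
import math

# Back-of-the-envelope comparison: information in the compressed genome
# vs. the data needed to explicitly list the brain's interconnections.
# Per-connection figures are illustrative assumptions, not measurements.
GENOME_BYTES = 23e6    # compressed genome, per the estimate above
NEURONS = 1e11         # roughly 100 billion neurons
CONNECTIONS = 1e14     # roughly 100 trillion connections, as cited above

# Merely naming one target neuron among 10^11 takes log2(10^11) ~ 37 bits.
bytes_per_connection = math.log2(NEURONS) / 8   # ~4.6 bytes, address only

connectome_bytes = CONNECTIONS * bytes_per_connection
print(f"explicit wiring list: {connectome_bytes:.1e} bytes")
print(f"ratio to genome:      {connectome_bytes / GENOME_BYTES:.1e}x")
# -> roughly 2e7; even generous compression leaves a gap of a
#    million-fold or more, consistent with the estimate above.
```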

The answer is that the genome specifies a set of processes, each of which utilizes chaotic methods (i.e., initial randomness followed by self-organization) to increase the amount of information represented. It is known, for example, that the wiring of the interconnections follows a plan that includes a great deal of randomness. As the individual encounters her environment, the connections and the patterns of neurotransmitter levels self-organize to better represent the world, but the initial design is specified by a program of comparatively modest complexity.
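
A minimal sketch of that principle, using Oja's Hebbian learning rule (my choice of illustration, not a claim about the brain's actual algorithm): the "program" is a few lines, the initial wiring is random, and the structure that ends up in the weights comes from the statistics of the environment.

```python
import numpy as np

# Oja's rule: a tiny "program" whose weights start random and
# self-organize to reflect the statistics of the environment.
rng = np.random.default_rng(0)

# Environment: 2-D inputs correlated along the direction (1, 1).
direction = np.array([1.0, 1.0]) / np.sqrt(2)
inputs = rng.normal(size=(5000, 2)) * 0.1 + \
         rng.normal(size=(5000, 1)) * direction

w = rng.normal(size=2)          # random initial "wiring"
eta = 0.01                      # learning rate
for x in inputs:
    y = w @ x
    w += eta * y * (x - y * w)  # Hebbian term with a stabilizing decay

print("learned weights:", w)    # converges to the input's principal axis
```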

It is not my position that we will program human intelligence link by link, as in some huge CYC-like expert system. Nor is it the case that we will simply set up a huge genetic (i.e., evolutionary) algorithm and have human-level intelligence automatically evolve itself. Jaron worries, correctly, that any such approach would inevitably get stuck in a local minimum. He also interestingly points out how biological evolution "missed the wheel." Actually, that's not entirely accurate: there are small wheel-like structures at the protein level, although it's true that their primary function is not vehicle transportation. Wheels are not very useful, of course, without roads. Biological evolution did, however, create a species that created wheels (and roads), so it did succeed in creating a lot of wheels, albeit indirectly (and there's nothing wrong with indirect methods; we use them in engineering all the time).
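
The local-optimum trap Jaron describes is easy to reproduce. The sketch below is a deliberately naive evolutionary search on a two-peak fitness landscape (a toy of my own construction, not Jaron's example); seeded near the lower peak, it never discovers the higher one.

```python
import random

# A naive evolutionary search on a two-peak fitness landscape.
def fitness(x):
    # Local peak of height 1 at x=2; global peak of height 2 at x=8.
    return max(0.0, 1 - (x - 2) ** 2) + max(0.0, 2 - (x - 8) ** 2)

random.seed(1)
population = [random.uniform(0, 4) for _ in range(20)]  # seeded near x=2

for _ in range(200):
    # Keep the fitter half, then mutate slightly to refill the population.
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = parents + [p + random.gauss(0, 0.1) for p in parents]

best = max(population, key=fitness)
print(f"best x = {best:.2f}, fitness = {fitness(best):.2f}")
# -> converges to x ~ 2 (fitness ~ 1): stuck on the local peak,
#    never crossing the dead zone to the higher peak near x = 8.
```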

With regard to creating human levels of intelligence in our machines, we will integrate the insights and models gained from reverse engineering the human brain, which involves hundreds of regions, each with different methods, many of which do involve self-organizing paradigms at different levels. The feasibility of this reverse-engineering project, and of implementing the revealed methods, has already been clearly demonstrated. I don't have room in this response to describe the methodology and status of brain reverse engineering in detail, but I will point out that the concept is not necessarily limited to neuromorphic modeling of each neuron. We can model substantial neural clusters by implementing parallel algorithms that are functionally equivalent, which often substantially reduces the computational requirements, as Lloyd Watts and Carver Mead have shown.
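
The sketch below is not a reproduction of Watts's or Mead's models; it merely illustrates the general idea under toy assumptions. Simulating a cluster of N noisy spiking units costs O(N) work per time step, while a single population-rate equation that is functionally equivalent at the cluster level costs O(1).

```python
import numpy as np

# Two models of a neural cluster responding to a step input. Figures
# and time constants are illustrative, not physiological measurements.
N, dt, tau = 10_000, 1e-3, 0.02
stimulus = np.concatenate([np.zeros(100), np.ones(200)])  # step at 100 ms
rng = np.random.default_rng(0)

# 1) Neuron by neuron: N noisy leaky units, O(N) work per time step.
v = np.zeros(N)
fraction_active = []
for s in stimulus:
    v += dt / tau * (s - v) + rng.normal(0, 0.02, N)
    fraction_active.append((v > 0.5).mean())

# 2) Cluster level: one rate variable with the same dynamics, O(1).
r, rate = 0.0, []
for s in stimulus:
    r += dt / tau * (s - r)
    rate.append(r)

# The single equation tracks the mean behavior of the 10,000-unit
# simulation at a ten-thousandth of the cost per time step.
print("per-neuron model, final activity:", fraction_active[-1])
print("rate model,       final activity:", rate[-1])
```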

Jaron writes that "if there ever was a complex, chaotic phenomenon, we are it." I agree with that, but don't see this as an obstacle. My own area of interest is chaotic computing, which is how we do pattern recognition, which in turn is the heart of human intelligence. Chaos is part of the process of pattern recognition, it drives the process, and there is no reason that we cannot harness these methods in our machines just as they are utilized in our brains.

Jaron writes that "evolution has evolved, introducing sex, for instance, but evolution has never found a way to be any speed but very slow." But he is ignoring the essential nature of an evolutionary process, which is that it accelerates because each stage introduces more powerful methods for creating the next stage. Biological evolution started out extremely slow, and the first halting steps took billions of years. The design of the principal body plans was faster, requiring only tens of millions of years. The process of biological evolution has accelerated, with each stage faster than the stage before it. Later key steps, such as the emergence of Homo Sapiens, took only hundreds of thousands of years. Human technology, which is evolution continued indirectly (created by a species created by evolution), continued this acceleration. The first steps took tens of thousands of years, outpacing biological evolution, and has accelerated from there. The World Wide Web emerged in only a few years, distinctly faster than, say, the Cambrian explosion.

Jaron complains that "surprisingly few of the most essential algorithms have overheads that scale at a merely linear rate." Without taking several pages to analyze this statement in detail, I will point out that the brain does what it does in its own real time, using interneuronal connections (where most of our thinking takes place) that operate at least ten million times more slowly than contemporary electronic circuits. We can observe the brain's massively parallel methods in detail, ultimately scan and understand all of its tens of trillions of connections, and replicate its methods. As I've mentioned, we are well down that path.
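
The arithmetic behind that observation, using round illustrative figures (a couple of hundred transactions per second per connection, versus gigahertz electronics):

```python
# Illustrative round numbers for a throughput comparison.
CONNECTIONS = 1e14          # ~100 trillion interneuronal connections
NEURAL_RATE = 200           # transactions/second per connection (rough)
ELECTRONIC_RATE = 2e9       # ~2 GHz digital circuit

brain_ops = CONNECTIONS * NEURAL_RATE          # massively parallel
print(f"aggregate brain throughput: ~{brain_ops:.0e} transactions/s")
print(f"speed gap per connection:   ~{ELECTRONIC_RATE / NEURAL_RATE:.0e}x")
# -> ~2e16 transactions/s in aggregate, even though each connection is
#    about ten million times slower than an electronic circuit.
```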

To correct a few of Jaron's statements regarding (my) time frames: it's not my position that the "singularity" will "arrive a quarter of the way into the new century" or that a "new criticality" will be "achieved in about the year 2020." Just so the record is straight, my view is that we will have the requisite hardware capability to emulate the human brain for $1,000 of computation (which won't be organized in the rectangular forms we see today, such as notebooks and palmtops, but rather embedded in our environment) by 2020. The software will take longer, to around 2030. The "singularity" has divergent definitions, but for our purposes here we can consider it to be the time when nonbiological forms of intelligence dominate purely biological forms, albeit being derivative of them. This takes us beyond 2030, to perhaps 2040 or 2050.

Jaron calls this an "immanent doom" and "an eschatological cataclysm," as if it were clear on its face that such a development would be undesirable. I view these developments as simply the continuation of the evolutionary process, and as neither utopian nor dystopian. It's true, on the one hand, that nanotechnology and strong AI, and particularly the two together, have the potential to solve age-old problems of poverty and human suffering, not to mention clean up the messes we're creating today with some of our more primitive technologies. On the other hand, profound new problems and dangers will emerge as well. I have always considered technology to be a double-edged sword: it amplifies both our creative and our destructive natures, and we don't have to look further than today to see that.
