There is also the more fundamental issue of whether or not ethical debates are going to stop the developments that I'm talking about. It's all very good to have these mathematical models and these trends, but the question is whether they are going to hit a wall because people, for one reason or another, through war or through ethical debates such as the stem cell controversy, thwart this ongoing exponential development.
I strongly believe that's not the case. These ethical debates are like stones in a stream. The water runs around them. You haven't seen any of these biological technologies held up for one week by any of these debates. To some extent, they may have to find other ways around some of the limitations, but there are so many developments going on. There are dozens of very exciting ideas about how to use genomic information and proteomic information. Although the controversies may attach themselves to one idea here or there, there's such a river of advances. The concept of technological advance is so deeply ingrained in our society that it's an enormous imperative. Bill Joy has gone around correctly talking about the dangers, and I agree that the dangers are there, but you can't stop ongoing development.
The kinds of scenarios I'm talking about 20 or 30 years from now are not being developed because there's one laboratory sitting there creating a human-level intelligence in a machine. They're happening because they're the inevitable end result of thousands of little steps. Each little step is conservative, not radical, and makes perfect sense. Each one is just the next generation of some company's products. If you take thousands of those little steps, which are getting faster and faster, you end up with some remarkable changes 10, 20, or 30 years from now. You don't see Sun Microsystems saying the future implication of these technologies is so dangerous that they're going to stop creating more intelligent networks and more powerful computers. Sun can't do that. No company can do that, because it would be out of business. There's an enormous economic imperative.
There is also a tremendous moral imperative. We still have not millions but billions of people who are suffering from disease and poverty, and we have the opportunity to overcome those problems through these technological advances. You can't tell the millions of people who are suffering from cancer that we're really on the verge of great breakthroughs that will save their lives, but we're cancelling all that because terrorists might use that same knowledge to create a bioengineered pathogen.
This is a true and valid concern, but we're not going to do that. There's a tremendous belief in society in the benefits of continued economic and technological advance. Still, it does raise the question of the dangers of these technologies, and we can talk about that as well, because that's also a valid concern.
Another aspect of all of these changes is that they force us to re-evaluate our concept of what it means to be human. There is a common viewpoint that reacts against the advance of technology and its implications for humanity. The objection goes like this: we'll have very powerful computers but we haven't solved the software problem. And because the software's so incredibly complex, we can't manage it.
I address this objection by saying that the software required to emulate human intelligence is actually not beyond our current capability. We have to use different techniques, different self-organizing methods, that are biologically inspired. The brain is complicated, but it's not that complicated. You have to keep in mind that it is characterized by a genome of only about 23 million bytes of compressed information. The genome is six billion bits, that's eight hundred million bytes, and there are massive redundancies; one fairly long sequence called Alu is repeated 300 thousand times. If you apply conventional data compression to the genome, you get about 23 million bytes, a small fraction of the size of Microsoft Word, and that's a level of complexity we can handle. But we don't have that information yet.
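The arithmetic behind these figures can be sketched in a few lines. All the numbers below are the talk's round figures, not measured values; note that six billion bits works out to 750 million bytes, which the talk rounds to "eight hundred million":

```python
# Back-of-the-envelope arithmetic for the genome-size claim in the talk.
# All figures are the speaker's round numbers, not exact measurements.

base_pairs = 3_000_000_000        # ~3 billion base pairs in the human genome
bits_per_base = 2                 # A, C, G, T -> 2 bits each

total_bits = base_pairs * bits_per_base   # "six billion bits"
total_bytes = total_bits // 8             # 750 million bytes (~"eight hundred million")

# The talk's estimate of the size after compressing away the redundancy.
compressed_bytes = 23_000_000

compression_ratio = total_bytes / compressed_bytes
print(total_bits)                 # 6000000000
print(total_bytes)                # 750000000
print(round(compression_ratio))   # 33, i.e. roughly a 30x reduction
```

The point of the exercise is only scale: 23 million bytes is small by modern software standards, which is what grounds the claim that the brain's design is "a level of complexity we can handle."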
You might wonder how something with 23 million bytes can create a human brain that's a million times more complicated than itself. That's not hard to understand. The genome creates a process of wiring a region of the human brain that involves a lot of randomness. Then, when the fetus becomes a baby and interacts with a very complicated world, there's an evolutionary process within the brain in which a lot of the connections die out, others get reinforced, and it self-organizes to represent knowledge about the world. It's a very clever system, and we don't understand it yet, but we will, because it's not a level of complexity beyond what we're capable of engineering.
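The process just described, a compact rule generating random wiring that experience then prunes, can be caricatured in a toy simulation. Everything here (neuron count, connection probability, usage model, pruning threshold) is an arbitrary illustration, not a model of real neural development:

```python
import random

random.seed(0)

# The "genome" specifies only a compact wiring rule, not individual connections.
n_neurons = 100
connect_prob = 0.1     # rule: wire any pair of neurons with 10% probability

# Step 1: random initial wiring, producing far more connections than will survive.
connections = {(i, j): 0
               for i in range(n_neurons) for j in range(n_neurons)
               if i != j and random.random() < connect_prob}

# Step 2: "experience" - each time a connection is used, it is reinforced.
for _ in range(5000):
    edge = random.choice(list(connections))
    connections[edge] += 1

# Step 3: pruning - rarely used connections die out; reinforced ones remain.
surviving = {e: w for e, w in connections.items() if w >= 5}

print(len(connections), "initial random connections")
print(len(surviving), "survive after use-based pruning")
```

The structure that remains is determined by interaction with the "world" (here, the random usage pattern), not by the wiring rule itself, which is the sense in which a small genome can specify a vastly more complicated brain.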
In my view there is something special about human beings that's different from what we see in any of the other animals. By a happenstance of evolution we were the first species able to create technology. Actually there were others, but we are the only one that survived in this ecological niche. We combine a rational faculty, the ability to think logically, to create abstractions, and to create models of the world in our own minds, with the ability to manipulate the world. We have opposable thumbs so that we can create technology, but technology is not just tools. Other animals have used primitive tools; the difference is that technology embodies a body of knowledge that changes and evolves from generation to generation. The knowledge that the human species possesses is another one of those exponential trends.