RAY KURZWEIL: THE SINGULARITY

RAY KURZWEIL: My interest in the future really stems from my interest in being an inventor. I've had the idea of being an inventor since I was five years old, and I quickly realized that you had to have a good idea of the future if you're going to succeed as an inventor. It's a little bit like surfing; you have to catch a wave at the right time. I realized early on that by the time you finally get something done, the world has become a different place than it was when you started. Most inventors fail not because they can't get something to work, but because the market's enabling forces are not all in place at the right time.

So I became a student of technology trends, and have developed mathematical models of how technology evolves in different areas: computers, electronics in general, communications, storage devices, biological technologies like genetic scanning, reverse engineering of the human brain, miniaturization (the shrinking size of technology), and the pace of paradigm shifts. This helped guide me as an entrepreneur and as a technology creator, so that I could catch the wave at the right time.

This interest in technology trends took on a life of its own, and I began to project some of them into future periods using what I call the law of accelerating returns, which I believe underlies technology evolution. I did that in a book I wrote in the 1980s, which had a road map of what the 1990s and the early 2000s would be like, and that worked out quite well. I've now refined these mathematical models and have begun to really examine what the 21st century will be like. It allows me to be inventive with the technologies of the 21st century, because I have a conception of what computation, communications, the size of technology, and our knowledge of the human brain will be like in 2010, 2020, or 2030. If I can come up with scenarios using those technologies, I can be inventive with the technologies of the future. I can't actually create these technologies yet, but I can write about them.

One thing I'd say is that, if anything, the future will be more remarkable than any of us can imagine, because while any one of us can only apply so much imagination, there will be thousands or millions of people using their imaginations to create new capabilities with these future technological powers. I've come to a view of the future that doesn't stem from a preconceived notion, but rather falls out of these models, which I believe are valid both for theoretical reasons and because they match the empirical data of the 20th century.

One thing that observers don't fully recognize, and that a lot of otherwise thoughtful people fail to take into consideration adequately, is the fact that the pace of change itself has accelerated. Centuries ago people didn't think that the world was changing at all. Their grandparents had the same lives that they did, and they expected their grandchildren would do the same, and that expectation was largely fulfilled.

Today it's an axiom that life is changing and that technology is affecting the nature of society. But what's not fully understood is that the pace of change is itself accelerating, and the last 20 years are not a good guide to the next 20 years. We're doubling the paradigm-shift rate, the rate of progress, every decade. A couple of decades of progress at today's rate will actually match the amount of progress we made in the whole 20th century, because the 20th century was accelerating up to this point: it was like 25 years of change at today's rate of change. In the next 25 years we'll make four times the progress we saw in the 20th century. And we'll make 20,000 years of progress in the 21st century, which is almost a thousand times more technical change than we saw in the 20th century.
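
To make the arithmetic behind these figures concrete, here is a minimal sketch assuming only that the rate of progress doubles every decade, with everything measured in "years of progress at the year-2000 rate." The choice of baseline is an assumption, so the raw numbers come out somewhat below the rounded figures quoted above, but the ratios (a 20th century worth a couple of decades of current-rate progress, and a 21st century roughly a thousand times larger) follow directly.

```python
import math

DOUBLING_YEARS = 10.0                    # assumed: the rate of progress doubles every decade
K = math.log(2) / DOUBLING_YEARS

def equivalent_years(start, end, reference=2000):
    """Progress accumulated from `start` to `end`, measured in years of progress
    at the `reference`-year rate, assuming the rate grows as 2**((t - reference) / 10).
    This is just the closed-form integral of that exponential rate."""
    rate = lambda t: 2 ** ((t - reference) / DOUBLING_YEARS)
    return (rate(end) - rate(start)) / K

c20    = equivalent_years(1900, 2000)    # ~14 "year-2000" years for the whole 20th century
next25 = equivalent_years(2000, 2025)    # ~67: several times the 20th century's total
c21    = equivalent_years(2000, 2100)    # ~14,800: on the order of the 20,000 quoted above
print(round(c20, 1), round(next25, 1), round(c21))
print(round(c21 / c20))                  # 1024: "almost a thousand times" the 20th century
```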

Specifically, computation is growing exponentially. The one exponential trend that people are aware of is called Moore's Law. But Moore's Law itself is just one method for bringing exponential growth to computers. People are aware that we can put twice as many transistors on an integrated circuit every two years. But those transistors also run twice as fast, so both capacity and speed double, which means the power of computation quadruples every two years, a doubling roughly every 12 months.
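
A quick sanity check of that compounding, under the idealized assumption that transistor count and clock speed each double every 24 months:

```python
# Idealized compounding: transistor count and clock speed each double every
# 24 months, so raw power (count * speed) quadruples every 24 months,
# which is the same as doubling every 12 months.
count, speed = 1.0, 1.0
for year in range(1, 7):
    count *= 2 ** 0.5                      # x2 over two years -> x sqrt(2) per year
    speed *= 2 ** 0.5
    print(year, round(count * speed, 1))   # 2, 4, 8, 16, 32, 64: doubling every year
```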

What's not fully realized is that Moore's Law was not the first but the fifth paradigm to bring exponential growth to computers. We had electro-mechanical calculators, relay-based computers, vacuum tubes, and transistors. Every time one paradigm ran out of steam another took over. For a while there were shrinking vacuum tubes, and finally they couldn't make them any smaller and still keep the vacuum, so a whole different method came along. They weren't just tiny vacuum tubes, but transistors, which constitute a whole different approach. There's been a lot of discussion about Moore's Law running out of steam in about 12 years because by that time the transistors will only be a few atoms in width and we won't be able to shrink them any more. And that's true, so that particular paradigm will run out of steam.

We'll then go to the sixth paradigm, which is massively parallel computing in three dimensions. We live in a three-dimensional world, and our brains are organized in three dimensions, so we might as well compute in three dimensions. The brain processes information using an electrochemical method that's ten million times slower than electronics, but it makes up for this by being three-dimensional: every interneuronal connection computes simultaneously, so you have a hundred trillion things going on at the same time. And that's the direction we're going to go in. Right now chips, even though they're very dense, are flat. Fifteen or twenty years from now computers will be massively parallel and will be based on biologically inspired models, which we will devise largely by understanding how the brain works.
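
A rough back-of-the-envelope illustration of why that parallelism compensates for slow individual operations. The per-connection rate below (about 200 operations per second) is an assumption, chosen because it is roughly ten million times slower than a ~2 GHz processor, consistent with the comparison above, and it yields the brain-capacity figure quoted in the next paragraph.

```python
connections = 100e12        # "a hundred trillion" interneuronal connections (from the text)
ops_per_connection = 200    # assumed per-connection rate, ~10 million times slower than ~2 GHz
serial_cpu = 2e9            # a ~2 GHz serial processor, for comparison

brain_total = connections * ops_per_connection
print(f"{brain_total:.1e} calculations per second")                        # 2.0e+16: "20 million billion"
print(f"slowdown per operation: {serial_cpu / ops_per_connection:.0e}x")   # ~1e+07: "ten million times slower"
print(f"aggregate advantage over one serial CPU: {brain_total / serial_cpu:.0e}x")  # still ~1e+07 ahead
```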

Computer design is already being significantly influenced by our growing knowledge of how the brain works. It's generally recognized, or at least accepted by a lot of observers, that we'll have the hardware to emulate human intelligence within a brief period of time, I'd say about twenty years. A thousand dollars of computation will equal the 20 million billion calculations per second of the human brain. What's more controversial is whether we will have the software. People acknowledge that we'll have very fast computers that could in theory emulate the human brain, but argue that we don't really know how the brain works, and so we won't have the software, the methods, or the knowledge to create a human level of intelligence. Without those, you just have an extremely fast calculator.
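
As a purely illustrative projection of that "about twenty years" estimate: if the price-performance of computation keeps doubling roughly every year, as in the Moore's Law discussion above, the time to reach brain-scale hardware for $1,000 is just a logarithm. The starting point below (about a billion calculations per second per $1,000, roughly a commodity PC of this era) is an assumed, hypothetical baseline, not a figure from the text.

```python
import math

brain_cps = 2e16          # "20 million billion calculations per second" (from the text)
per_1000_dollars = 1e9    # ASSUMED baseline: ~1e9 calculations/sec per $1,000 today
doubling_years = 1.0      # price-performance doubling roughly every 12 months (see above)

years = math.log2(brain_cps / per_1000_dollars) * doubling_years
print(f"~{years:.0f} years")   # ~24 years: the same ballpark as "about twenty years"
```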

But our knowledge of how the brain works is also growing exponentially. The brain is not of infinite complexity. It's a very complex entity, and we're not going to achieve a total understanding through one simple breakthrough, but we're further along in understanding the principles of operation of the human brain than most people realize. The technology for scanning the human brain is growing exponentially, our ability to actually see the internal connection patterns is growing, and we're developing more and more detailed mathematical models of biological neurons. We actually have very detailed mathematical models of several dozen regions of the human brain and how they work, and have recreated their methods using conventional computation. The results of those re-engineered, synthetic models match the behavior of the corresponding brain regions very closely.
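
As one small, generic example of the kind of "mathematical model of a biological neuron" being referred to, here is a minimal leaky integrate-and-fire neuron. This is a standard textbook model with conventional example parameters, not any of the specific region models mentioned above; it simply illustrates how a neuron's behavior can be captured by a compact equation and re-implemented with conventional computation.

```python
# Leaky integrate-and-fire neuron: the membrane voltage leaks toward rest,
# integrates injected current, and emits a spike whenever it crosses threshold.
dt       = 0.1e-3      # time step: 0.1 ms
tau      = 20e-3       # membrane time constant: 20 ms
v_rest   = -70e-3      # resting potential: -70 mV
v_thresh = -54e-3      # spike threshold: -54 mV
v_reset  = -80e-3      # reset potential after a spike: -80 mV
r_m      = 10e6        # membrane resistance: 10 MOhm

v = v_rest
spikes = []
for step in range(int(0.5 / dt)):            # simulate 500 ms
    i_inj = 2e-9 if step * dt > 0.1 else 0   # inject 2 nA of current after 100 ms
    # Leaky integration: relax toward rest plus the injected drive
    v += dt / tau * (v_rest - v + r_m * i_inj)
    if v >= v_thresh:                        # threshold crossing -> record a spike
        spikes.append(step * dt)
        v = v_reset
print(f"{len(spikes)} spikes in 0.5 s")
```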
