DANIEL C. DENNETT: THE COMPUTATIONAL PERSPECTIVE

[DANIEL C. DENNETT:] If you go back 20 years, or if you go back 200 years, 300 years, you see that there was one family of phenomena that people just had no clue about, and those were mental phenomena — that is, the very idea of thinking, perception, dreaming, sensing. We didn't have any model for how that was done physically at all. Descartes and Leibniz, great scientists in their own right, simply drew a blank when it came to trying to figure these things out. And it's only really with the ideas of computation that we now have some clear and manageable ideas about what could possibly be going on. We don't have the right story yet, but we've got some good ideas. And at least one can now see how the job can be done.

Coming to understand our own understanding, and seeing what kinds of parts it can be made of, is one of the great breakthroughs in the history of human understanding. If you compare it, say, with our understanding of life itself, or reproduction and growth, those were deep and mysterious processes a hundred years ago and forever before that. Now we have a pretty clear idea of how it's possible for things to reproduce, how it's possible for them to grow, to repair themselves, to fuel themselves, to have a metabolism. All of these otherwise stunningly mysterious phenomena are falling into place.

And when you look at them you see that at a very fundamental level they're basically computational. That is to say, there are algorithms for growth, development, and reproduction. The central binding idea of all of these phenomena is that you can put together not billions, but trillions of moving parts and get these entirely novel, emergent, higher-level effects. And the best explanation for what governs those effects is at the level of software, the level of algorithms. If you want to understand how orderly development, growth, and cognition take place, you need to have a high-level understanding of how these billions or trillions of pieces interact with each other.
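
A toy example, not part of Dennett's remarks, makes this "simple parts, emergent higher-level effects" point concrete: Conway's Game of Life, sketched below in Python. The single local rule is pure push-pull mechanism, yet patterns such as gliders emerge that are most naturally described at the level of the pattern, the "software," rather than at the level of the individual cells.

    # Illustrative sketch (not from the talk): Conway's Game of Life.
    # One simple, local, mechanical rule per cell; higher-level patterns
    # such as gliders emerge and are best described at the pattern level.
    from collections import Counter

    def step(live_cells):
        """Advance one generation; live_cells is a set of (x, y) pairs."""
        # Count the live neighbours of every cell adjacent to a live cell.
        neighbour_counts = Counter(
            (x + dx, y + dy)
            for (x, y) in live_cells
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # The entire "physics": a cell is alive next generation if it has
        # exactly 3 live neighbours, or 2 and was already alive.
        return {
            cell
            for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live_cells)
        }

    if __name__ == "__main__":
        # A glider: five cells whose overall shape travels diagonally,
        # a higher-level "object" nowhere mentioned in the rule above.
        world = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
        for generation in range(4):
            print(f"generation {generation}: {sorted(world)}")
            world = step(world)

Running the sketch prints the glider's cells drifting diagonally, generation by generation; nothing in the rule mentions gliders, which is the sense in which the interesting regularities live at the higher, algorithmic level.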

We never had the tools before to understand what happens when you put a trillion cells together and have them interact. Now we're getting these tools, and even the lowly laptop gives us hints, because we see phenomena happening right on our desks that would just astound Newton or Descartes, or Darwin for that matter, phenomena that would look like sheer magic. We know it isn't magic. There's not a thing that's magical about a computer. One of the most brilliant things about a computer is that there's nothing up its sleeve. We know to a moral certainty there are no morphic resonances, psionic waves, or spooky interactions; it's good old push-pull, traditional, material causation. And when you put it together by the trillions, with software, with a program, you get all of this magic that's not really magic.

The idea of computation is a murky one, and it's a mistake to think that we have a clear, unified, unproblematic concept of what counts as computation. Even computer scientists have only a fuzzy grip on what they actually mean by computation; it's one of those things we recognize when we see it. But it seems to me that the idea of computation itself is probably less clearly defined than the idea of matter, or the ideas of energy or time in physics, for instance. The fundamental idea is still, in some regards, a bit murky. But that doesn't mean we can't have good theories of computation. The question is just where to draw the line that says this is computation and this isn't; it's not so clear. Almost any process can be interpreted through the lens of computational ideas, and usually — not always — that's a fruitful exercise of reinterpretation. We can see features of the phenomena through that lens that are essentially invisible through any other lens, as far as we know.
