THE COMPUTATIONAL PERSPECTIVE: A TALK WITH DANIEL C. DENNETT [11.19.01]

"There are going to be things that meet those conditions that are not interestingly computational by anybody's standards, and there are things that are going to fail to meet the standards, which nevertheless you see are significantly like the things that you want to consider computational. So how do you deal with that? By ignoring it, by ignoring the issue of definition, that's my suggestion. Same as with life! You don't want to argue about whether viruses are alive or not; in some ways they're alive, in some ways they're not. Some processes are obviously computational. Others are obviously not computational. Where does the computational perspective illuminate? Well, that depends on who's looking at the illumination."


Introduction


A philosopher by training, Daniel C. Dennett is known as the leading proponent of the computational model of the mind. He has made significant contributions in fields as diverse as evolutionary theory, artificial intelligence, cognitive science, animal studies, and computer science. Never one to avoid a good fight, he has clashed with such noted thinkers as John Searle, Roger Penrose, and Stephen Jay Gould. In this regard, Dennett is emblematic of the third culture intellectual. The strength of the third culture is precisely that it can tolerate disagreements about which ideas are to be taken seriously. There is no canon or accredited list of acceptable ideas. Unlike previous intellectual pursuits, the achievements of the third culture are not the marginal disputes of a quarrelsome mandarin class: they affect the lives of everybody on the planet.

"Dan Dennett is living proof that philosophy is not, as many think, airy speculation and effete musing.," notes Steven Pinker. "Time and again Dan has worked as a razor-sharp cognitive scientist, analyzing the implications of research more thoroughly than the researchers did themselves. His elucidation of different explanatory "stances" (physical, intentional, design) provided the key ideas behind mental modules (or multiple intelligences) for different domains of knowledge. His analyses of behaviorism, artificial intelligence, imagery, consciousness, free will, and evolutionary psychology just brim with insight and original ideas. And it doesn't seem fair that someone with such serious and important ideas should be so much fun to read!"

Marc D. Hauser credits Dennett (along with Jerry Fodor) as one of the two empirical philosophers — those who use data to drive philosophical discussion — who have had an extraordinary impact on evolutionary studies of the mind. Although the two often hold radically different positions, each has contributed in important ways to our understanding of the mind, and of how psychological findings bear on profound philosophical distinctions.

According to Hauser, "Dennett has had a significant impact on studies of animal cognition due in part to his work on the intentional stance and his intuitions about the kinds of inferences that humans and nonhuman animals might make with respect to other minds. When Dan laid out, in his typically lucid and playful fashion, how ethologists might go about studying intentionality from a Gricean perspective (I know that you know that I want that banana hidden from view from our fearless leader), this opened the door to a series of studies and analyses of animal behavior.

"Most crucially, Dan's insight into the problem of other minds, and of using studies of false belief to test for such mental states, set forth a cottage industry of research in animals and human infants. It is the combination of Dan's playfulness and creativity that makes him an asset to those of us working on animal cognition. One is almost tempted to say that in the same way that imaging provides a tool for understanding the neurobiological and functional architecture of the human mind, Dennett represents a tool for those of us studying animal minds."

JB



DANIEL C. DENNETT is Distinguished Arts and Sciences Professor, Professor of Philosophy, and Director of the Center for Cognitive Studies at Tufts University. He is the author of Content and Consciousness; Brainstorms; Elbow Room; The Intentional Stance; Consciousness Explained; Darwin's Dangerous Idea; Kinds of Minds; and Brainchildren: A Collection of Essays. He co-edited The Mind's I with Douglas Hofstadter and is the author of over a hundred scholarly articles on various aspects of the mind, published in journals ranging from Artificial Intelligence and Behavioral and Brain Sciences to Poetics Today and the Journal of Aesthetics and Art Criticism.


THE REALITY CLUB: Jaron Lanier responds to Dan Dennett


THE COMPUTATIONAL PERSPECTIVE: A TALK WITH DANIEL C. DENNETT

If you go back 20 years, or if you go back 200 years, 300 years, you see that there was one family of phenomena that people just had no clue about, and those were mental phenomena — that is, the very idea of thinking, perception, dreaming, sensing. We didn't have any model for how that was done physically at all. Descartes and Leibniz, great scientists in their own right, simply drew a blank when it came to trying to figure these things out. And it's only really with the ideas of computation that we now have some clear and manageable ideas about what could possibly be going on. We don't have the right story yet, but we've got some good ideas. And at least one can now see how the job can be done.

Coming to understand our own understanding, and seeing what kinds of parts it can be made of, is one of the great breakthroughs in the history of human understanding. If you compare it, say, with our understanding of life itself, or reproduction and growth, those were deep and mysterious processes a hundred years ago and forever before that. Now we have a pretty clear idea of how it's possible for things to reproduce, how it's possible for them to grow, to repair themselves, to fuel themselves, to have a metabolism. All of these otherwise stunningly mysterious phenomena are falling into place.

And when you look at them you see that at a very fundamental level they're basically computational. That is to say, there are algorithms for growth, development, and reproduction. The central binding idea of all of these phenomena is that you can put together not billions, but trillions of moving parts and get these entirely novel, emergent, higher-level effects. And the best explanation for what governs those effects is at the level of software, the level of algorithms. If you want to understand how orderly development, growth, and cognition take place, you need to have a high-level understanding of how these billions or trillions of pieces interact with each other.
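
To make the emergence point concrete, here is a minimal sketch (an editor's illustration, not part of Dennett's remarks) using Conway's Game of Life: one trivial local rule, applied to every cell over and over, yields robust higher-level patterns, such as the famous "glider," that are best described at the level of the algorithm rather than at the level of any individual cell.

    # Conway's Game of Life: a toy case of many-parts emergence.
    # Each cell obeys one local rule; the "glider" that results is a
    # robust higher-level pattern with no counterpart in the rule itself.
    from collections import Counter

    def life_step(live):
        """Advance one generation; `live` is a set of (x, y) cells."""
        # Count the live neighbors of every cell adjacent to a live cell.
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        # Birth on exactly 3 live neighbors; survival on 2 or 3.
        return {c for c, n in counts.items()
                if n == 3 or (n == 2 and c in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = life_step(glider)
    print(sorted(glider))  # the same shape, shifted by (1, 1): it "moved"

The glider is exactly the kind of thing that is invisible in the physics of any one cell and obvious at the level of the software.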

We never had the tools before to understand what happens when you put a trillion cells together and have them interact. Now we're getting these tools, and even the lowly laptop gives us hints, because we see phenomena happening right on our desks that would just astound Newton or Descartes, or Darwin for that matter, that would look like sheer magic. We know it isn't magic. There's not a thing that's magical about a computer. One of the most brilliant things about a computer is that there's nothing up its sleeve. We know to a moral certainty there are no morphic resonances, psionic waves, spooky interactions; it's good old push-pull, traditional, material causation. And when you put it together by the trillions, with software, with a program, you get all of this magic that's not really magic.

The idea of computation is a murky idea and it's a mistake to think that we have a clear, unified, unproblematic concept of what counts as computation. Even computer scientists have only a fuzzy grip on what they actually mean by computation; it's one of those things that we recognize when we see it. But it seems to me that probably the idea of computation, itself, is less clearly defined than the idea of matter, or the ideas of energy or time in physics, for instance. The fundamental idea is itself still in some regards a bit murky. But that doesn't mean that we can't have good theories of computation. The question is just where to draw the line that says this is computation, this isn't computation. It's not so clear. Almost any process can be interpreted through the lens of computational ideas, and usually — not always — that's a fruitful exercise of reinterpretation. We can see features of the phenomena through that lens that are essentially invisible through any other lens, as far as we know.

Human culture is the environment that we live in. There's the brute physical environment, the streets and the air we breathe, the water we drink, and the cars we travel in, and then there's all the communication going on around us in many different media: everyday conversation, newspapers, books, radios and television, and the Internet. Pigeons live in that world too, but they're simply oblivious to most of it; they don't care what's written in the newspaper that they find their crumbs on. It's immaterial to them what the content, what the information, is. For us it's different; the information is really important.

So if we just think about the informational world that we as a species now live in, we see that, in fact, it's got a lot of structure. It's not amorphous. Everything is not connected to everything else. There are lots of barriers, and there's an architecture to this world of communication. And that architecture is changing very rapidly, and in ways that we don't understand yet.

Let me give you a really simple example of this. You tune in the Super Bowl, and you find that there are these dot-com companies that are pouring an embarrassingly large amount of their initial capitalization into one ad on the Super Bowl; they're trying to get jump-started with this ad. And this is curious so you ask yourself, "If this is an Internet company, why aren't they using the Internet? Why are they doing this retrograde thing of going back and advertising on regular broadcast TV instead of using the Internet?" And the answer, of course, is that there's a fundamental difference in the conceptual architecture of these different media.

When you watch the Super Bowl you are part of a large simultaneous community — and you know it. You know that you are one of millions, hundreds of millions of people. You're all having the same experience at once, and you know that you are. And it's that second fact, it's that reflexive fact, that's so important. You go to a website, and there might be a hundred million people looking at that website, but you don't know that. You may have read that somewhere but you're not sure, you don't know. The sense that you have when you're communicating on the Web is a much more private sense than when you're watching something on network television. And this has huge ramifications for the credibility conditions. An ad that will work well on television falls flat on the Web, because the people that view it, that read it, that listen to it, don't know what audience they're a part of. They don't know how big a room they are in, whether this is a private communication or a public communication. We don't know yet what kind of fragmentation of the world's audiences is going to be occasioned by the Internet. It brings people together, but it also creates isolation in a way that we haven't begun to assess.

There's a system of what you might call landmarks, and then there's a system of filters. Everybody needs them. That sense of being utterly lost that neophytes have when they first get on the Web — choosing search engines, knowing what to trust, where home is, whom to believe, what sites to go to — is due to the fact that everybody is thirsting for reliable informants or signposts.

This is something that was established over centuries in the traditional media. You went to the Times and you read it there, and that had a certain authority for you. Or you went to the public library and you read it in the Encyclopedia Britannica. And all of these institutions had their own character, and also their own reputations, and their reputations were shared communally. It was very important that your friends also knew that the Times or the Encyclopedia Britannica was an important place to look. Suppose somebody writes and publishes a volume called "Sammy's Encyclopedia of the World's Information"; it might be the best encyclopedia in the world, but if people in general don't realize it, nobody's going to trust what's in there. It's this credibility issue which, as far as I can see, has not yet even begun to crystallize on the Web. So we're entering uncharted waters there. What comes out of this is very hard to predict. All I know is that we are, indeed, in a period where the whole architecture of our culture is being shifted under our feet, and we don't know where it's going.

We've changed human experience tremendously in the last century and in the last decade. For instance, I'd guess that the average Western-world teenager has heard more professionally played music than Mozart heard in his whole life (not counting his own playing and composing, and rehearsal time!). It used to be that hearing professional musicians in performance was a very special thing. Now not hearing professional musicians is a special thing. There's a soundtrack almost everywhere we go. It's a huge change in the auditory structure of the world we live in. The other arts are similarly positioned. There was a time when just seeing some written words was a pretty big thing. Now of course everything has words on it. People can stand in the shower and read the back of the shampoo bottle. We are completely surrounded by the technology of communication. And that's a new thing. The species, of course, has no adaptations for it, so we're winging it. We're responding to it the best we can.

There are lots of patterns in the world. Some of them are governed by the law of gravity, some by other physical principles. And some of them are governed by software. That is to say, the robustness of the pattern, the fact that it's salient, the fact that you can identify it, the fact that it keeps reproducing itself, that it can be found here, there, and elsewhere, the fact that you can predict it, is not because there's a fundamental law like the law of gravity that governs it, but because these are the patterns that occur wherever there are ultimately computational devices, wherever you have organisms that process information. They preserve, restore and repair the patterns, and keep the patterns going. And that really is a fundamental, new feature of the universe. If you went to a lifeless planet and did an inventory of all the patterns that were on that planet, these wouldn't be there. They're the patterns you can find in DNA — those are the ur-patterns, the ones that make all the rest of the patterns possible. They're also the patterns that you find in texts. They're the patterns that folks try to hide by encryption, and that really clever cryptographers nevertheless uncover. These are, of course, patterns to be discovered in the layout of physical matter; they have to have some physical embodiment in nucleotides or ink marks or particles and charges. But what explains their very existence in the universe is computation, is the algorithmic quality of all things that reproduce and that have meaning, and that make meaning.
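
One rough, operational way to cash out "pattern" here (an editor's gloss, not Dennett's) is compressibility: a sequence with a real pattern can be described by something shorter than itself, which is exactly what a compressor, like the cryptographer, hunts for. A minimal sketch in Python:

    # Patterned data compresses; patternless data doesn't. zlib stands in
    # here for the cryptographer's pattern-hunting.
    import os
    import zlib

    patterned = b"ATCG" * 250       # a highly repetitive stand-in "genome"
    patternless = os.urandom(1000)  # random bytes: nothing to discover

    print(len(zlib.compress(patterned)))    # small: the pattern was found
    print(len(zlib.compress(patternless)))  # ~1000 or more: incompressible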

These patterns are not, in one sense, reducible to the laws of physics, although they are based in physical reality, although they are patterns in the activities and arrangements of physical particles. The explanation of why they form the patterns they do has to go on at a higher level. Doug Hofstadter once gave a very elegant, simple example of this. We come across a computer and it's chugging along, chugging along; it's not stopping. And our question is: Why doesn't it stop? What fact explains the fact that this particular computer at this time doesn't stop? And in Doug's example, the answer is that the reason it doesn't stop is that pi is irrational! What? Well, the number pi is an irrational number, which means it's a never-ending decimal, and this particular computer program is generating the decimal expansion of pi, a process that will never stop. Of course, the computer may break. Somebody may come along with an ax and cut the cord so it doesn't have any more power, but as long as it stays powered, it's going to go on generating these digits forever. That's a simple concrete fact that can be detected in the world, the explanation of which cites an abstract mathematical fact about a particular number, namely that it is irrational.
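
Hofstadter's non-stopping computer is easy to conjure up today. The sketch below (an editor's illustration, using Gibbons' unbounded spigot algorithm, not anything from the talk) streams the decimal digits of pi one at a time; the mathematical fact that pi's expansion never ends is precisely why the loop, power and axes permitting, never exits.

    # Gibbons' unbounded spigot algorithm: yields the digits of pi forever.
    # The loop has no exit condition; the explanation is that pi is irrational.
    def pi_digits():
        q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
        while True:
            if 4 * q + r - t < n * t:
                yield n
                q, r, n = (10 * q, 10 * (r - n * t),
                           (10 * (3 * q + r)) // t - 10 * n)
            else:
                q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                    (q * (7 * k + 2) + r * l) // (t * l), l + 2)

    digits = pi_digits()
    print([next(digits) for _ in range(10)])  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]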

Now, there are many other patterns like this in the world, which are not as arcane and that have to do with the meaning we attach to things. Why is he blushing? Blushing is a matter of the suffusion of blood through the skin of the face for goodness sake — there's a perfectly good explanation of what the process of blushing is — but why is he blushing? He's blushing because he thinks she knows some fact about him that he wishes that she didn't know. That's a complex, higher order intentional state, one that's only visible when you go to the higher, intentional level. You can't see that by looking at the individual states of the neurons in his brain. You have to go to the level at which you're talking about what this man knows, what he believes, and what he wants.

The intentional level is what I call the intentional stance. It's a strategy you can try whenever you're confronted with something complex in nature — it doesn't always work. The idea is to interpret that complexity as one or more intelligent, rational agents that have agendas, beliefs, and desires, and that are interacting. When you go up to the intentional level, you discover patterns that are highly predictive, that are robust, and that are not reducible in any meaningful sense to the lower-level patterns at the physical level. In between the intentional stance and the physical stance is what I call the design stance. That's the level of software.

The idea of abstraction has been around for a long time, and 200 years ago you could enliven a philosophical imagination by asking what Mozart's Haffner Symphony is made of. It's ink on pieces of paper, it's a sequence of sounds as played by people with various stringed instruments and other instruments. It's an abstract thing. It's a symphony. Stradivarius made violins; Mozart made symphonies, which depend on a physical realization, but don't depend on any particular one. They have an independent existence, which can shift from one medium to another and back.

We've had that idea for a long time but we've recently become much more comfortable with it, living as we do in a world of abstract artifacts, where they now jump promiscuously from medium to medium. It's no longer a big deal to go from the score to the music that you hear live to the recorded version of the music. You can jump back and forth between media very rapidly now. It's become a fact of life. It never used to be like this. It used to be hard work to get things from one form to another. It's not hard work any more, it's automatic. You eliminate the middle man. You no longer have to have the musician to read the score, to produce the music. This removal of all the hard work in translating from one medium to another makes it all the more natural to populate your world with abstractions, because you find it's hard to keep track of what medium they're in. It doesn't matter much any more. You're interested in the abstraction, not the medium. Where'd you get that software? Did you go to a store and buy a physical CD and put it in your computer, or did you just download it off the Web? It's the same software one way or another. It doesn't really matter. This idea of medium neutrality is one of the essential ideas of software, or of algorithms in general. And it's one that we're becoming familiar with, but it's amazing to me how much friction there still is, how much resistance there still is, to this idea.
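
To put medium neutrality in one concrete frame (an editor's illustration, not Dennett's): the identity of a piece of software is a fact about its bit pattern, not about its carrier, which is why the same content hash comes out whether the bits sit in memory or on disk.

    # The same abstract bit pattern is the "same software" whether it lives
    # in memory or on disk; a content hash identifies the abstraction and is
    # indifferent to the medium.
    import hashlib
    import os
    import tempfile

    program = b"print('hello, world')"  # the abstraction, realized as bytes

    in_memory = hashlib.sha256(program).hexdigest()

    # Move the same abstraction to a different medium: a file on disk.
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(program)
        path = f.name
    with open(path, "rb") as f:
        on_disk = hashlib.sha256(f.read()).hexdigest()
    os.unlink(path)

    print(in_memory == on_disk)  # True: same software, different medium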

An algorithm is an abstract process that can be defined over a finite set of fundamental procedures, an instruction set. It is a structured array of such procedures. That's a very generous notion of algorithm—more generous than many mathematicians would like, because I would include by that definition algorithms that may be in some regards defective. Consider your laptop. There's an instruction set for that laptop, consisting of all the different basic things that your laptop's CPU can do; each basic operation has a digital name or code, and every time that bit-sequence occurs, the CPU tries to execute that operation. You can take any bit sequence at all, and feed it to your laptop, as if it were a program. Almost certainly, any sequence that isn't designed to be a program to run on that laptop won't do anything at all — it'll just crash. Still, there's utility in thinking that any sequence of instructions, however buggy, however stupid, however pointless, should be considered an algorithm, because one person's buggy, dumb sequence is another person's useful device for some weird purpose, and we don't want to prejudge that question. (Maybe that "nonsense" was included in order to get the laptop to crash at just the point it crashed!) One can define a more proper algorithm as one which runs without crashing. The only trouble is that if you define algorithm that way, then probably you don't have any on your laptop, because there's almost certainly a way to make almost every program on your laptop crash. You just haven't found it yet. Bug-free software is an ideal that's almost never achieved.
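
Here is a toy rendering of that generous notion (an editor's sketch with an invented four-instruction machine, not any real CPU): on this definition every byte string counts as an algorithm for the machine, and, as predicted, almost every randomly chosen one crashes.

    # A one-accumulator machine with four legal opcodes. Any byte string is
    # a "program"; a byte outside the instruction set is a crash.
    import random

    def run(program: bytes, max_steps: int = 100) -> int:
        """Interpret `program`; return the accumulator, or raise on a crash."""
        acc, pc, steps = 0, 0, 0
        while pc < len(program) and steps < max_steps:
            op = program[pc]
            steps += 1
            if op == 0x01:                  # INC: add one to the accumulator
                acc += 1
            elif op == 0x02:                # DEC: subtract one
                acc -= 1
            elif op == 0x03:                # JMP: jump to the address in acc
                pc = acc % len(program)
                continue
            elif op == 0x04:                # HALT: stop normally
                return acc
            else:                           # any other byte: a "crash"
                raise ValueError(f"illegal opcode {op:#04x} at {pc}")
            pc += 1
        return acc

    random.seed(1)
    crashes = 0
    for _ in range(1000):
        program = bytes(random.randrange(256) for _ in range(8))
        try:
            run(program)
        except ValueError:
            crashes += 1
    print(f"{crashes} of 1000 random 8-byte programs crashed")

On the stricter, crash-free definition, almost none of these byte strings would qualify, which is just the point about where to draw the line.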

Looking at the world as if everything is a computational process is becoming fashionable. Here one encounters not an issue of fact, but an issue of strategy. The question isn't, "What's the truth?" The question is, "What's the most fruitful strategy?" You don't want to abandon standards and count everything as computational, because then the idea loses its sense. It doesn't have any grip any more. How do you deal with that? One way is to try to define, in a rigid centralist way, some threshold that has to be passed, and say we're not going to call it computational unless it has properties A, B, C, D, and E. That's fine, you can do that in any number of different ways, and that will save you the embarrassment of having to say that everything is computational. The trouble is that anything you choose as a set of defining conditions is going to be too rigid. There are going to be things that meet those conditions that are not interestingly computational by anybody's standards, and there are things that are going to fail to meet the standards, which nevertheless you see are significantly like the things that you want to consider computational. So how do you deal with that? By ignoring it, by ignoring the issue of definition, that's my suggestion. Same as with life! You don't want to argue about whether viruses are alive or not; in some ways they're alive, in some ways they're not. Some processes are obviously computational. Others are obviously not computational. Where does the computational perspective illuminate? Well, that depends on who's looking at the illumination.

I describe three stances for looking at reality: the physical stance, the design stance, and the intentional stance. The physical stance is where the physicists are, it's matter and motion. The design stance is where you start looking at the software, at the patterns that are maintained, because these are designed things that are fending off their own dissolution. That is to say, they are bulwarks against the second law of thermodynamics. This applies to all living things, and also to all artifacts. Above that is the intentional stance, which is the way we treat that specific set of organisms and artifacts that are themselves rational information processing agents. In some regards you can treat Mother Nature — that is, the whole process of evolution by natural selection — from the intentional stance, as an agent, but we understand that that's a façon de parler, a useful shortcut for getting at features of the design processes that are unfolding over eons of time. Once we get to the intentional stance, we have rational agents, we have minds, creators, authors, inventors, discoverers — and everyday folks — interacting on the basis of their take on the world.

Is there anything above that? Well, in one sense there is. People, or persons, as moral agents, are a specialized subset of the intentional systems. All animals are intentional systems. Parts of you are intentional systems. You're made up of lots of lesser intentional systems — homunculi of sorts — but unless you've got multiple personality disorder, there's only one person there. A person is a moral agent, not just a cognitive agent, not just a rational agent, but a moral agent. And this is the highest level that I can make sense of. And why it exists at all, how it exists, the conditions for its maintenance, are very interesting problems. We can look at game theory as applied to the growth of trees — they compete for sunlight — it's a game in which there are winners and losers. But when we look at game theory as applied not just to rational agents, but to people with a moral outlook, we see some important differences. People have free will. Trees don't. It's not an issue for trees in the way it is for people.

What I like about the idea is that it agrees with a philosophical tradition (including Aristotle and Descartes, for instance) that maintains that people are different — that people aren't just animals. It completely disagrees, of course, on what that difference consists in. Although it's a naturalization of the idea of people, it does say they're different. And this, I discover, is the thing that most entices and upsets people about my view. There are those who want people to be more different than I'm allowing. They want people to have souls, to be Cartesian people. And there are those who are afraid that I'm trying to differentiate people too much from the other animals with my claim that human beings really are, because of culture, an importantly different sort of thing. Some scientists view this with skepticism, as if I'm trying to salvage for philosophy something that should fall to science. But in fact my view about what is different about people is a scientific theory—it stands or falls as an implication of a scientific theory, in any case.

In terms of my own role in cognitive science, as to whether I consider myself a philosopher or a scientist, I think I'm good at discovering the blockades of imagination, the bad habits of thought, that infect how theorists think about the problem of consciousness. When I go off to a workshop or conference and give a talk, I'm actually doing research, because the howls and screeches and frowns that I get from people, the way in which they react to what I suggest, is often diagnostic of how they are picturing the problems in their own minds. And, in fact, people have very different covert images about what the mind is and how the mind works. The trick is to expose these, to bring them up into public view, and then correct them. That is what I specialize in.

My demolition of the Cartesian theater, of Cartesian materialism, is just one of these campaigns of exposure. People often pay lip service to the idea that there isn't any privileged medium in the brain which is playing the role that Descartes assigned to the non-physical mind as the theater of consciousness. Nevertheless, if you look closely at what they are thinking and saying, their views only really make sense if you interpret them as covertly still presupposing a Cartesian theater somewhere in their model. So teasing this out, bringing this up to the surface and then showing what you might replace it with turns out to be, to me, very interesting work. Happily, some people have come to appreciate that this is a valuable service that somebody like me, a philosopher, can perform: getting them to confront the hidden assumptions of their own thinking, and see how those hidden assumptions are blinding them to opportunities for explaining what they want to explain.

I've come to respect the cautious conservatism that many people express — and some even live by — which says that the environmental impact of these new ideas is not yet clear and that you should be very careful about how you introduce them. Don't fix what isn't broke. Don't let your enthusiasm for new ideas blind you to the possibility that maybe they will undo something of long standing that is really valuable. That's an idea that is seldom articulated carefully, but that, in fact, drives many people. And it's an entirely honorable motivation to be concerned that some of our traditional ideas are deeply threatened by these innovations of outlook, and to be cautious about just trading in the old for the new. Indeed I think that's wise. Environmental impact statements for scientific and philosophical advances should be taken seriously. There might be a case of letting the cat out of the bag in a way that would really, in the long run, be unfortunate. Anybody who appreciates the power of ideas realizes that even a true, or well founded, idea can do harm if it is presented in an unfortunate context. What I mainly object to is the way some people take it unto themselves to decide just which ideas are dangerous, and then decide that they're justified in going out and beating those ideas up with whatever it takes: misleading descriptions, misrepresentations, character assassinations and so forth.

In terms of which individuals have the big ideas today, after Turing and von Neumann we don't have any giants, and we don't need any giants. More and more what we're seeing is that there are many good ideas out there, and people put them together in different ways. In a sense, every paper worth reading actually has 500 co-authors, but the tradition says that we don't treat it that way, that we try to assign authorship to particular individuals. To me it makes less and less sense to try to do that. Distributed invention is a much more salient fact. This is especially true in philosophy, since philosophers don't, in general, do experiments or conduct empirical investigations, so they all share the same data. Priority disputes in science sometimes have real substance, but philosophers arguing about who gets credit for a "new" argument or objection or philosophical "theory" is like sailors arguing about who first noticed that the wind had come up. They all noticed it at about the same time.

John Brockman, Editor and Publisher
contact: [email protected]
Copyright © 2001 by Edge Foundation, Inc. All Rights Reserved.
