Chapter 10

DANIEL C. DENNETT

"Intuition Pumps"



Marvin Minsky: Dan Dennett is our best current philosopher. He is the next Bertrand Russell. Unlike traditional philosophers, Dan is a student of neuroscience, linguistics, artificial intelligence, computer science, and psychology. He's redefining and reforming the role of the philosopher. Of course, Dan doesn't understand my society-of-mind theory, but nobody's perfect.

__________


DANIEL C. DENNETT
is a philosopher; director of the Center for Cognitive Studies and Distinguished Arts and Sciences Professor at Tufts University; author of Content and Consciousness (1969), Brainstorms (1978), (with Douglas R. Hofstadter) The Mind's I (1981), Elbow Room: The Varieties of Free Will Worth Wanting (1984), The Intentional Stance (1987), Consciousness Explained (1991), Darwin's Dangerous Idea (1995), and Kinds of Minds (1996).



Daniel C. Dennett: If you look at the history of philosophy, you see that all the great and influential stuff has been technically full of holes but utterly memorable and vivid. They are what I call "intuition pumps" — lovely thought experiments. Like Plato's cave, and Descartes's evil demon, and Hobbes' vision of the state of nature and the social contract, and even Kant's idea of the categorical imperative. I don't know of any philosopher who thinks any one of those is a logically sound argument for anything. But they're wonderful imagination grabbers, jungle gyms for the imagination. They structure the way you think about a problem. These are the real legacy of the history of philosophy. A lot of philosophers have forgotten that, but I like to make intuition pumps.

I like to think I'm drifting back to what philosophy used to be, which has been forgotten in many quarters in philosophy during the last thirty or forty years, when philosophy has become a sometimes ridiculously technical and dry, logic-chopping subject for a lot of people — applied logic, applied mathematics. There's always a place for that, but it's nowhere near as big a place as a lot of people think.

I coined the term "intuition pump," and its first use was derogatory. I applied it to John Searle's "Chinese room," which I said was not a proper argument but just an intuition pump. I went on to say that intuition pumps are fine if they're used correctly, but they can also be misused. They're not arguments, they're stories. Instead of having a conclusion, they pump an intuition. They get you to say "Aha! Oh, I get it!"

The idea of consciousness as a virtual machine is a nice intuition pump. It takes a while to set up, because a lot of the jargon of artificial intelligence and computer science is unfamiliar to philosophers or other people. But if you have the patience to set some of these ideas up, then you can say, "Hey! Try thinking about the idea that what we have in our heads is software. It's a virtual machine, in the same way that a word processor is a virtual machine." Suddenly, bells start ringing, and people start seeing things from a slightly different perspective.

Among the most appealing ideas in artificial intelligence are the variations on Oliver Selfridge's original Pandemonium idea. Way back in the earliest days of AI, he did a lovely program called Pandemonium, which was very well named, because it was a bunch of demons. Pan-demonium. In his system, there were a lot of semi-independent demons, and when a problem arose, they would all jump up and down and say, in effect: "Me! me! me! Let me do it! I can do it!" There would be a brief struggle, and one of them would win and would get to tackle the problem. If it didn't work, then other demons could take over.
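A minimal sketch of that control structure, in Python, assuming a toy domain; the demon names, their little specialties, and the shouting scores are invented here purely for illustration, and Selfridge's actual 1958 program was a letter-recognition system rather than this toy.

import random

class Demon:
    def __init__(self, name, specialty):
        self.name = name
        self.specialty = specialty  # the pattern this demon is tuned to handle

    def shout(self, problem):
        # Each demon shouts in proportion to how well the problem matches
        # its own specialty, plus a little noise.
        return len(set(self.specialty) & set(problem)) + random.random() * 0.1

    def attempt(self, problem):
        # A demon succeeds only if its specialty actually covers the problem.
        return set(problem) <= set(self.specialty)

def pandemonium(problem, demons):
    # All the demons jump up and down at once; the loudest gets the first try,
    # and if it fails, the next-loudest takes over, and so on.
    for demon in sorted(demons, key=lambda d: d.shout(problem), reverse=True):
        if demon.attempt(problem):
            return demon.name
    return None

demons = [Demon("vertical-line demon", "il|"),
          Demon("curve demon", "ocs"),
          Demon("angle demon", "vwx")]
print(pandemonium("o", demons))  # the curve demon wins this one

The only point of the sketch is the control structure: nothing assigns the problem from above; the loudest bidder gets the first try, and failures hand it down the line.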

In a way, that was the first connectionist program. Ever since then, there have been waves of enthusiasm in AI for what are, ultimately, evolutionary models. Connectionist models are ultimately evolutionary. They involve the evolution of connection strengths over time. You get lots of things happening in parallel, and what's important about them is that, from a Calvinist perspective, they look wasteful. They look like a crazy way to build anything, because there are all these different demons working on their own little projects; they start building things and then they tear them apart. It seems to be very wasteful. It's also a great way of getting something really good built — to have lots of building going on in a semicontrolled way, and then have a competition to see which one makes it through to the finals.

The AI researcher Douglas Hofstadter's Jumbo architecture is a very nice model that exhibits those features. The physicist Stephen Wolfram has some nice models, although they're not considered AI. These architectures are very different from good old-fashioned AI models, which, you might say, were bureaucratic, with a chain of command and a boss and a sub-boss and a bunch of sub-sub-bosses, and delegation of responsibility, and no waste. Hofstadter once commented that the trouble with those models is that the job descriptions don't leave room for fooling around. There aren't any featherbedders. There aren't any people just sitting around, or making trouble. Mother Nature doesn't design things that way. When Mother Nature designs a system, it's "the more the merrier, let's all have a big party, and somehow, we'll build this thing." That's a very different organizational structure. My task, in a way, is to show how, if you impose those ideas — of a plethora of semi-independent agents acting in an only partly organized way with lots of "waste motion" — on the brain, all sorts of things begin to fall into place, and you get a different view of consciousness.

As technology changes, we change. As computers evolve, our philosophical approach to thinking about the brain will evolve. In the history of thinking about the brain, as each new technology has come along it's been enthusiastically exploited: clockwork and wires and pulleys back in Descartes's day, then steam engines and dynamos and electricity came in, and then the telephone switchboard. We should go back earlier. The most pervasive of all of the technological metaphors people have used to explain what goes on in the brain is writing — the idea that we think about the things happening in the brain as signals, as messages being passed. You don't have to think about telegraphy or telephones, you just have to think about writing messages.

The idea that memory is a storehouse of things written is already a metaphor, and even a bad metaphor. The very idea that there has to be a language of thought doesn't make sense unless you think of it as a written language of thought. A spoken language of thought won't get you much of anything. One of the themes that interests me is the idea of talking before you know you're talking, before you know what talking is, which we all do. Children do it. There's a big difference between talking and self-conscious talking, which, if you get clear about it, helps with the theory of language.

People couldn't think of the brain as a storehouse at all before there was a written language. There wasn't a mind/body problem, and there weren't any theories of mind. Even if you go back to the ancient Greeks, even Plato and Aristotle, you find nothing much in the way of what looks like theorizing about this, and what they did say was rather bad.

The basic idea of computation, as formulated by the mathematicians John von Neumann and Alan Turing, is in a class by itself as a breakthrough idea. It's the only idea that begins to eliminate the middleman. What was wrong with the telephone-switchboard idea of consciousness was that you have these wires that connect what's going on out at the eyeballs into some sort of control panel. But then you still have that clever homunculus sitting at the control panel doing all the work.

If you go back further, David Hume theorized about impressions and ideas. Impressions were like slides in a slide show, and ideas were faint copies — poor-quality Xerox copies — of the original pictures. He tried to dream up a chemistry theory, a phony theory of valences which would suggest how one idea could bring the next one along. I explained this idea to a student one day who said that Hume was trying to get the ideas to think for themselves. That's exactly what Hume was trying to do. He was trying to get rid of the thinker, because he realized that that was a dead end. If you still have that middleman in there doing all the work, you haven't made any progress. Hume's idea was to put little valence bonds between the ideas, so that each one could think itself and then get the next one to think itself, and so forth — getting rid of the middleman. But it didn't work.

The only idea anyone has ever had which demonstrably does get rid of the middleman is the idea of computers. Homunculi are now O.K., because we know how to discharge them. We know how to take a homunculus and break it down into smaller and smaller homunculi, eventually getting down to a homunculus that you can easily replace with a machine. We've opened up a huge space of designs — not just von Neumannesque, old-fashioned computer designs but the designs of artificial life, the massively parallel designs.

Right now I'm working on how you get rid of the Central Meaner, which is one of the worst homunculi. The Central Meaner is the one who does the meaning. Suppose I say, "Please repeat the following sentence in a loud clear voice: 'Life has no meaning, and I'm thinking of killing myself.'" You might say it, but I don't think you'd mean it, because — some people would be tempted to say — even though your body uttered the words, your Central Meaner wasn't endorsing it, wasn't in there saying, in effect, "This is a real speech act. I mean it!"

I've recently been looking at the literature on psycholinguistics, and sure enough, they have a terrible time dealing with production of speech. All their theories are about how people understand speech, how they comprehend it, how they take it in. But there isn't much at all about how people generate speech. If you look at the best model that anyone's come up with to date, the Dutch psycholinguist Willem Levelt's model, he's got a "blueprint" for a speaker — the basic model, you might say — and right there in the upper-left-hand corner of the blueprint he's got something called the Conceptualizer. The Conceptualizer figures out what the system's got to say and delegates that job to the guys down in the scene shop, who then put the words together and figure out the grammatical relations. The Conceptualizer is the boss, who sets the specs for what's going to be said. Levelt writes a whole book showing how to fit all the results into a framework in which there's this initial Conceptualizer giving the rest of the system a preverbal message. The Conceptualizer decides, "O.K., what we have to do is insult this guy. Tell this bozo that his feet are too big." That gives the job to the rest of the team, and they put the words together and out it comes: "Your feet are too big!"

The problem is, How did the Conceptualizer figure out what to tell the language system to say? The linguists finesse the whole problem. They've left the Central Meaner in there, and all they've got is somebody who translates the message from mentalese into English — not a very interesting theory. The way around this, once again, is to have one of these Pandemonium models, where there is no Central Meaner; instead, there are all these little bits of language saying, "Let me do it, let me do it!" Most of them lose, because they want to say things like "You big meanie!" and "Have you read any good books lately?" and other inappropriate things. There's this background struggle of parallel processors, and something wins. In this case, "Your feet are too big!" wins, and out it comes.
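A toy sketch of that alternative, again in Python; the candidate utterances, their weights, and the "context boost" are all made up for the example, and this is not Levelt's model or anything Dennett specifies, just the shape of a production system with no Central Meaner.

import random

# Bits of language bidding to be uttered, with invented baseline activations.
candidates = {
    "Your feet are too big!": 0.9,
    "You big meanie!": 0.4,
    "Have you read any good books lately?": 0.1,
}

def speak(candidates, context_boost):
    # No Conceptualizer hands down a preverbal message; each candidate's score
    # is just its fit to the current situation plus noise, and the strongest
    # candidate is what gets said.
    scores = {utterance: weight * context_boost.get(utterance, 1.0) + random.random() * 0.05
              for utterance, weight in candidates.items()}
    return max(scores, key=scores.get)

# The current situation happens to favor the insult.
print(speak(candidates, context_boost={"Your feet are too big!": 1.2}))

Whichever candidate wins the background struggle is what comes out; asking afterward whether the system "meant it" finds no separate endorser to consult, only the coalition that won.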

What about the person who said it? Did he mean it? Well, ask him. The person who said it will say, "Well, yeah, I meant it. I said it. I didn't correct it. My ears aren't burning. I'm not blushing. I must have meant it." He has no more access into whether he meant it in any deep, deep sense than you do. As E.M. Forster once remarked, "How do I know what I think until I see what I say?" The illusion of the Central Meaner is still there, because we listen to ourselves and we endorse what we find ourselves saying. Right now, all sorts of words are coming out of my mouth and I'm fairly happy with how it's going; every now and then I correct myself a bit, and if you ask me whether I mean what I say, sure I do — not because there's a subpart of me, a little subsystem, which is the Central Meaner, giving the marching orders to a bunch of lip-flappers. That's a terrible model for language.

Pandemonium makes a better model: Right now, all my little demons are conspiring; they've formed a coalition, and they're saying, "Yeah, yeah, basically the big guy is telling the truth!"

Since publishing Consciousness Explained, I've turned my attention to Darwinian thinking. If I were to give an award for the single best idea anyone has ever had, I'd have to give it to Darwin, ahead of Newton and Einstein and everyone else. It's not just a wonderful scientific idea; it's a dangerous idea. It overthrows, or at least unsettles, some of the deepest beliefs and yearnings in the human psyche. Whenever the topic of Darwin's idea comes up, the temperature rises, and people start trying to divert their own attention from the real issues, eagerly squabbling about superficial controversies. People get anxious and rush to take sides whenever they hear the approaching rumble of evolution.

A familiar diagnosis of the danger of Darwin's idea is that it pulls the rug out from under the best argument for the existence of God that any theologian or philosopher has ever devised: the Argument from Design. What else could account for the fantastic and ingenious design to be found in nature? It must be the work of a supremely intelligent God. Like most arguments that depend on a rhetorical question, this isn't rock-solid, by any stretch of the imagination, but it was remarkably persuasive until Darwin proposed a modest answer to the rhetorical question: natural selection. Religion has never been the same.

At least in the eyes of academics, science has won and religion has lost. Darwin's idea has banished the Book of Genesis to the limbo of quaint mythology. Sophisticated believers in God have adapted by reconceiving God as a less anthropomorphic, more abstract entity — a sort of blank, unknowable Source of Meaning and Goodness. Some unsophisticated believers have tried desperately to hold their ground by concocting creation science, which is a pathetic imitation of science, a ludicrous parade of self-delusions and pious nonsense. Stephen Jay Gould and many other scientists have rightly exposed and condemned the fallacies of creationism. Darwin's idea is triumphant, and it deserves to be.

And yet, and yet. All is not well. There are good and bad Darwinians, it seems, and nothing so outrages the authorities as the "abuse" of Darwin's idea. When the smoke screens are blown away, the disputes can all be seen to have a common theme: the fear that if Darwin is right, there's no room left in the universe for genuine meaning. This is a mistake, but it hasn't been properly exposed yet.

When Steve Gould exhorts his fellow evolutionists to abandon "adaptationism" and "gradualism" in favor of "exaptation" and "punctuated equilibrium," the issues are clearly not just scientific but political, moral, and philosophical. Gould is working vigorously, even desperately, to protect a certain vision of Darwin's idea. But why?

Sociobiologists claim to have deduced from Darwin's theory important generalizations about human culture, and particularly about the origin and status of our most deeply held ethical principles. When Gould and others mount their attacks on the "specter of sociobiology," the issue is presented as political: scientists on the left attacking pseudoscientists on the right. The creationists are obvious pseudoscientists. The sociobiologists are more pernicious, according to Gould et al., because it's not so obvious that what they say is nonsense. There's some truth to this, but at the heart of the controversy lies something deeper. Why do these critics want so passionately to believe that sociobiology could not be good science?

Some people hate Darwin's idea, but it often seems that even we who love it want to exempt ourselves from its dominion: "Darwin's theory is true of every living thing in the universe — except us human beings, of course." Darwin himself fully realized that unless he confronted head on the descent of man, and particularly man's mind, his account of the origin of the other species would be in jeopardy. His followers, from the outset, exhibited the same range of conflicts visible in today's controversies, and some of their most important contributions to the theory of evolution were made in spite of the philosophical and religious axes they were grinding.

I'm not purporting to advance either revolution or reform of Darwinian theory. I'm trying to explain what Darwinian theory is and why it's such an upsetting idea.


Roger Penrose: Dan Dennett is obviously somebody who'll listen to arguments. The title of his book Consciousness Explained is overstated, however. I certainly don't believe that those ideas explain consciousness. He's exploring what I call "point of view A," in the list of four viewpoints I discuss in Shadows of the Mind, which I call A, B, C, and D. A is the strong artificial-intelligence viewpoint: that is, that mentality is to be understood in terms of computation. It doesn't matter what's doing the computation; a computer or a biological structure would be equally good.

Point of view B — which is more the philosopher John Searle's viewpoint, as I understand it — is that you could simulate the action of the brain, but the simulation wouldn't have mental attributes, so there's something other than computation involved in conscious thinking. That differs from point of view C, which is my own point of view and asserts that you can't even simulate conscious activity. What's going on in conscious thinking is something you couldn't properly imitate at all by computer, according to C.

Point of view D asserts that you can't understand mentality in terms of science at all. So I'm saying, "Yes, it's science, but it's science of a kind that eludes computation." Dennett belongs to the A point of view, of which he's one of the best exponents. Another person representing this point of view is Hans Moravec, who's written an interesting book, where he takes this point of view to its extreme and argues that within thirty-five years or so the computer will achieve our level and then race beyond us.

There are at least two different kinds of arguments you can use against A. One is the John Searle type of argument, which is that just because something carries out computations, that doesn't make it capable of being aware of anything. That argument has quite a lot of power to it. But my argument is different, because it argues against both A and B. It's a stronger argument, because it says that you can't even properly simulate conscious actions. If something behaves as though it's conscious, do you say it is conscious? People argue endlessly about that. Some people would say, "Well, you've got to take the operational viewpoint; we don't know what consciousness is. How do you judge whether a person is conscious or not? Only by the way they act. You apply the same criterion to a computer or a computer-controlled robot." Other people would say, "No, you can't say it feels something merely because it behaves as though it feels something." My view is different from both those views. The robot wouldn't even behave convincingly as though it was conscious unless it really was — which I say it couldn't be, if it's entirely computationally controlled.

Roger Schank: Dan Dennett is the AI person's dream philosopher. We had all those years of putting up with philosophers like Hubert Dreyfus, who felt the need to attack AI without any attempt to understand it. Dan has made a real effort to understand AI and cognitive science, and he is the consummate philosopher in our world. I always enjoy listening to him; he always says clever things; he's one of the great fun people in our field.

What philosophers are doing is trying to put into perspective things that other people have thought. Dan does more than that, of course. He has his own thoughts, too. But it's not as if there's stuff that an AI person is likely to learn from a philosopher that will help them in AI. It's interesting to read philosophy, but it doesn't give you something you could somehow put into a program.

Nicholas Humphrey: Dan's a purist, who can be tough-minded to a fault. He's wedded to the way of looking at things he learned from Gilbert Ryle, at Oxford. Its roots are in logical positivism and behaviorism. Basically it prescribes what you can talk about and what you can't: the meaning of statements lies in the way you would verify them by observation, and if you can't offer any sort of verification, forget it. Dan got trapped by the beauty of this approach. And if it meant denying the reality of things we all know are important — like sensations, raw feelings, all the qualitative aspects of consciousness — too bad. You have to be brave to be a philosopher. You have to follow where your arguments take you, until you get proved wrong. And since no one has proved Dan wrong, he's still out there.

Of course, part of Dan is uneasy about where his theories have taken him. He's much too sensitive not to be. He realizes there's something missing. When his critics point out what they think are the weaknesses, he hates it and demands they say just what they mean. Often as not, they're reduced to mumbling, because it really is very hard to fault Dan's theory on its own terms. But I suspect that if anyone is aware of the problems, Dan himself is. It's just that he's not going to surrender to people who haven't understood him to begin with. He's not going to give way to people who challenge him on wishy-washy metaphysical grounds.

Dan's book Consciousness Explained is tremendously original, and it's already having a huge impact on cognitive psychology. He's produced the best account yet — a brilliant, funny, beautifully written description of the inner processes underlying thought. But while it's so good on the question of thinking, it's much less good on the question of feeling.

If you're going to explain "consciousness," you have to come to grips with the kind of consciousness that really counts with ordinary people. What do people want to have explained? What do they mean by consciousness? Or rather — since they may mean different things at different times — what is it they really care about?

If you listen to the kinds of questions people ask about consciousness — "Are babies conscious?" "Will I be conscious during the operation?" "How does my consciousness compare with yours?" and so on — you find again and again that the central issue isn't thinking but feeling. What concerns people is not so much the stream of thoughts that may or may not be running through their heads as the sense they have of being alive in the first place: alive, that is, as embodied beings interacting with an external world at their own body surfaces and subject to a spectrum of sensations — pain in their feet, taste on their tongue, color at their eyes.

What matters in particular is the subjective quality of these sensations: the peculiar painfulness of a thorn, the saltiness of an anchovy, the redness of an apple, the "What it's like" for us when the stimuli from these external objects meet our bodies and we respond. Thoughts may come and thoughts may go. A person can be conscious without thinking anything. But a person simply cannot be conscious without feeling.

Here's the paradox, though. What figures so strongly in ordinary people's conception of what matters about consciousness figures hardly at all in Dan's account of it. In Consciousness Explained, there's hardly anything about sensory phenomenology. Once when I said that in print, Dan pointed out to me in no uncertain terms that I'd ignored the several passages in the book where he does talk about sensations. Well, O.K., it's there if you look for it. There are some passages where he talks about sensations and feelings as complex behavioral dispositions (which is, I think, on the right lines, provided you allow that their complexity may mean that they're qualitatively in a different league from anything else). But my point is that for Dan, the question of sensory phenomenology is no more than a side issue, never the central mystery it is for me.

Francisco Varela: While Dan focuses on the cognitive level, my own approach is to think about all levels, perhaps because I'm influenced by the broad idea of nonrepresentationalist knowledge. In my reality, knowledge coevolves with the knower and not as an outside, objective representation.

Dan is against the idea of experience bearing on science. I'm not very fond of doing psychological readings of people. I do have this distinct impression from a long discussion with Dennett, who, unlike Minsky, is somebody you can engage in conversation and who will read the other person's point of view. It's a delight to have a debate with him. For reasons I still don't understand, he has an absolute panic about bringing experience and the subjective element into the field of explaining consciousness.

Dennett doesn't deny that people have minds. He says those minds can be useful only if you treat them as overt behavior, as an anthropologist does with a foreign culture. You take them at face value. If you tell me you're in pain, I believe you. Then I note it down in my book. Then I consider it as overt behavior. That's what he calls heterophenomenology — or, more classically, the intentional stance. He treats you as if you're something capable of intentionality.

I find that far too weak to support a theory of consciousness, because it's just one leg. The other leg — the real phenomenology, that is, the "as is," firsthand, direct account of the quality of experience — is irreducible. To the extent that it's irreducible, his whole enterprise just falls short of getting down to the tack we need to get down to. On the positive side, what Dennett has done, probably better than anybody else, in terms of theorizing and writing, is to eliminate what he calls the ghost of the Cartesian Theater. He argues that you have to appeal to this distributed phenomenon, the emergent properties of the brain, in order to account for consciousness. In that sense, it's quite brilliant.

The other thing I credit him for is that he's introduced into the philosophy of mind a style of discussion that's very rich: he comes into it with a philosopher's discipline, but he takes into account results from empirical research. You can't say that of people like John Searle, who keeps talking about philosophy of mind in a very dry, abstract, armchair way. I like Dennett's rolling up his sleeves and going into the lab with people. He's done something quite revolutionary, which is to steep himself in the scientific literature.

W. Daniel Hillis: Dan Dennett is my favorite philosopher, because he takes the trouble to understand things. I get annoyed at the traditional school of philosophy, whose members believe that they already understand things, so they pontificate on artificial intelligence without having the slightest idea of what the work is. Dennett, although I often disagree with him, takes the trouble to read the technical literature, and understand what people are doing in areas like linguistics, artificial intelligence, biology. His philosophical ideas are informed. They are sometimes wrong, but at least they are informed wrong as opposed to uninformed wrong.

Dennett sometimes is a sucker for a reductionist theory that seems to explain something. Maybe he inherits that disease from biologists. For instance, I think he's been suckered a little bit by Richard Dawkins' view of genes as the central player, because it appears to explain things. People might even argue that he's been suckered by the simple theories of AI into believing that they explain too much about the mind. Fundamentally, he's a reductionist, and he does believe that the phenomena we see in the mind are the result of fundamental physical principles. That's a philosophical standpoint I'm basically comfortable with. It maybe makes him more popular among the scientists than among the philosophers, because if he's right then all philosophy is just a matter of science that hasn't been done yet.

Dennett's ideas are compatible with the notion of science that there's a reality out there; it's understandable; it's based on some simple underlying laws, and we just need to understand what those laws are and the connection between them and what we see. Philosophers have always felt that there's a set of things that don't fit that paradigm. People used to say, "Well, the laws may apply on Earth, but they're not true for the heavenly bodies." Then, after Galileo, they said, "Well, that might be true for physical bodies, but it's not true for biological organisms." After Darwin, they said, "Well, that might be true for our bodies, but it's not true for our minds." And so on. We are backing the philosophers into a corner and giving them less and less to talk about. In some sense, Dennett is cooperating with the enemy by helping us back the philosophers into a smaller and smaller corner, and I like that.

Brian Goodwin: Dennett's concept of relational order in relation to the brain is something I find extremely interesting. He suggests that the properties of mind aren't material properties, they're relational properties. That leads to the strong AI position. I tend to take a similar view with respect to artificial life — a view similar to the strong AI position, the idea that you can actually get intelligence in systems that aren't constituted of molecules and cells. You can get life in computers.

Steven Pinker: I've always been interested in Dennett's work, because he's interested in the main scientific questions I deal with — namely, how the mind is engineered; how the kinds of abilities we all take for granted, like recognizing a face or using common sense, get executed by mental software. His perspective of seeing psychology as reverse engineering is one I share in my day-to-day experimental work.

In forward engineering, you start off with an idea of what your machine is supposed to do and then you go out and design the machine. Biology, including cognitive science, is a kind of reverse engineering: you start off with a machine — namely, the human being — and you have to figure out what purpose it was designed for. The main impediment in getting other scientists to understand the complexity of intelligence is that people have minds that work so well that they're apt not to be suitably impressed by what their minds are doing, in the same way that they're apt to be unaware of what's going on when they digest food.

I enjoyed, but disagree in some ways with, Dan's discussion of consciousness in Consciousness Explained. I like it because Dan challenges us to come up with an argument for why we should believe that there exist some kind of raw feelings, or qualia, or subjective experience. He argues that there isn't any substance to the idea: a person with what we think of as consciousness and a zombie who behaved in the same way would be indistinguishable, as far as science is concerned.

I agree that the qualitative experience is not the key to understanding intelligence from a scientist's point of view. The scientifically tractable aspect of consciousness is not the fact that there are people or animals subjectively experiencing it, but the fact that some kinds of information are mutually accessible and others are not, and that there is therefore a portion of mental information processing that has a different status than the rest of it. That's one sense of consciousness: information that's accessible to a particular body of information-processing involved with the current environment, and which in humans can interface with the verbal apparatus. So one can study, for example, why some information — say, an overlearned skill, like operating a stick shift when you're an experienced driver — is beneath the level of consciousness, whereas other kinds of mental processing (like operating a stick shift the first time you're learning to drive, where you have to reason things out consciously, step by step) are not.

Many people say that Dan's book should have been called Consciousness Explained Away instead of Consciousness Explained. (People congratulate themselves for that supposedly telling witticism, not realizing that Dan used it as a heading in the book!) The reason the book is in some way unsatisfying is that there's another aspect of the problem of consciousness for which no one has yet come up with a satisfactory explanation: why there's a clear intuitive difference between one organism that feels pain and another organism that acts as if it feels pain but doesn't really feel it, or one organism or system that has the experience of seeing red when a red object is in front of it, whereas another acts identically in every way but does not have the experience. Until one addresses the problem of why that's so compelling an intuition, one isn't going to have a completely satisfying account of consciousness.

I read Dan as saying that we've been misled into thinking there's a real question there. According to Dan, there isn't. That's where I disagree: I suspect there's a real question and that it's not just an error in the way we conceptualize the problem. Perhaps our minds are simply not designed to be able to formulate or grasp the answer — a suggestion of Chomsky's that I know Dan hates. But the intuition that qualia exist is real, and as yet irreducible and inexplicable. For one thing, all our intuitions about ethics crucially presuppose the distinction between a sentient being and a numb zombie. Putting a sentient being's thumb in a thumbscrew is unethical, but putting a robot's thumb in a thumbscrew is something else. And this isn't just a thought experiment; the debates over animal rights, euthanasia, and the use of anesthetics in infant surgery depend on it.

One other area in which I disagree with Dan is the explanation of human intelligence in an evolutionary context. Dan makes heavy use of Richard Dawkins' concept of the meme — an idea that replicates, mutates, and differentially spreads in the medium of brains in the same way that a gene replicates, mutates, and differentially spreads in the medium of bodies. This is Dan's main way of placing cognition in the context of evolution, rather than having it appear by magic; thoughts are created by a process analogous to the process of natural selection. But there are many other ways of explaining the emergence of human intelligence in a nonmiraculous way. I think it's much more plausible that evolution designed a brain that's a kind of computer that can generate complex ideas, in ways that need not be analogous to the operation of natural selection itself.

There's a big difference between gene selection in the design of organisms and meme selection in the design of mind and culture. For organisms, undirected variation followed by selection is the explanation, and the only explanation, for complex design. In contrast, because the brain is a complex machine that was itself designed by selection, "mutations," or ideas, are virtually always directed, and meme dynamics need not be the design source (though I agree it plays a big role in the demography of ideas: how many copies are out there). Memes such as the theory of relativity are not the cumulative product of selection of millions of random, undirected mutations of some original idea, but each brain in the chain of production added huge dollops of value to the product in a nonrandom way.

Here's another way of putting it. I think Dan thinks that the parallelism between genetics and memetics is profound: that it's the key to exorcising the hated idea that the mind came from nowhere, that it's a magical, miraculous "skyhook," hanging in midair. According to Dan, the power of the theory of selection of replicators is that it can explain organisms and culture in the same way. Maybe so (in my view, that would be an interesting coincidence), but then again, maybe not, and if not, it shouldn't matter to Dan's larger argument — namely, that the mind is a product of evolution. Perhaps, as the anthropologist Dan Sperber argues, the formal mechanisms that explain cultural evolution are from epidemiology, not population genetics — ideas spread like contagious diseases, not like genes.

Richard Dawkins: I think of Dan Dennett as a great fountain of ideas, and he's like a fireworks display for me. On every page of his you read, you constantly put ticks in the margin. I'm never quite sure why he's classified as a philosopher rather than as a scientist; he seems to me to do the same kind of thing I do in a somewhat different field, and I greatly admire the way he thinks, the way he uses metaphors to try to get his points across. And they're elegant metaphors; they really make you feel he's hit the nail on the head. My complaint about him is that his books set you thinking so hard that you have trouble turning to the next page, because you're so busy thinking about what's on the current one.



Excerpted from The Third Culture: Beyond the Scientific Revolution by John Brockman (Simon & Schuster, 1995). Copyright 1995 by John Brockman. All rights reserved.