Chapter 10 "INTUITION PUMPS"

Chapter 10 "INTUITION PUMPS"

Marvin Minsky: Dan Dennett is our best current philosopher. He is the next Bertrand Russell. Unlike traditional philosophers, Dan is a student of neuroscience, linguistics, artificial intelligence, computer science, and psychology. He's redefining and reforming the role of the philosopher. Of course, Dan doesn't understand my society-of-mind theory, but nobody's perfect.

__________

DANIEL C. DENNETT is a philosopher; director of the Center for Cognitive Studies and Distinguished Arts and Sciences Professor at Tufts University; author of Content and Consciousness (1969), Brainstorms (1978), (with Douglas R. Hofstadter) The Mind's I (1981), Elbow Room: The Varieties of Free Will Worth Wanting (1984), The Intentional Stance (1987), Consciousness Explained (1991), Darwin's Dangerous Idea (1995), and Kinds of Minds (1996). 

[Daniel C. Dennett:] If you look at the history of philosophy, you see that all the great and influential stuff has been technically full of holes but utterly memorable and vivid: what I call "intuition pumps," lovely thought experiments like Plato's cave, Descartes's evil demon, Hobbes's vision of the state of nature and the social contract, and even Kant's idea of the categorical imperative. I don't know of any philosopher who thinks any one of those is a logically sound argument for anything. But they're wonderful imagination grabbers, jungle gyms for the imagination. They structure the way you think about a problem. These are the real legacy of the history of philosophy. A lot of philosophers have forgotten that, but I like to make intuition pumps.

I like to think I'm drifting back to what philosophy used to be, something forgotten in many quarters during the last thirty or forty years, as the field became, for a lot of people, a ridiculously technical and dry, logic-chopping subject: applied logic, applied mathematics. There's always a place for that, but it's nowhere near as big a place as a lot of people think.

I coined the term "intuition pump," and its first use was derogatory. I applied it to John Searle's "Chinese room," which I said was not a proper argument but just an intuition pump. I went on to say that intuition pumps are fine if they're used correctly, but they can also be misused. They're not arguments, they're stories. Instead of having a conclusion, they pump an intuition. They get you to say "Aha! Oh, I get it!"

The idea of consciousness as a virtual machine is a nice intuition pump. It takes a while to set up, because a lot of the jargon of artificial intelligence and computer science is unfamiliar to philosophers or other people. But if you have the patience to set some of these ideas up, then you can say, "Hey! Try thinking about the idea that what we have in our heads is software. It's a virtual machine, in the same way that a word processor is a virtual machine." Suddenly, bells start ringing, and people start seeing things from a slightly different perspective.
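
To make the jargon concrete, here is a minimal sketch of a virtual machine in Python. It's a toy of my own, not anything from Dennett, and the tiny instruction set is invented for illustration; the point is just that the "machine" is a pattern of software activity, not a piece of hardware:

    # A toy stack machine: a serial rule-follower that exists only as a
    # pattern of activity in the software running it. The instruction set
    # ("push", "add", "print") is invented for this illustration.

    def run(program):
        stack = []
        for op, arg in program:
            if op == "push":
                stack.append(arg)
            elif op == "add":
                stack.append(stack.pop() + stack.pop())
            elif op == "print":
                print(stack.pop())

    # The "machine" that adds 2 and 3 is nothing over and above the
    # process interpreting it, just as a word processor is nothing over
    # and above the hardware it runs on.
    run([("push", 2), ("push", 3), ("add", None), ("print", None)])  # prints 5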

Among the most appealing ideas in artificial intelligence are the variations on Oliver Selfridge's original Pandemonium idea. Way back in the earliest days of AI, he did a lovely program called Pandemonium, which was very well named, because it was a bunch of demons. Pan-demonium. In his system, there were a lot of semi-independent demons, and when a problem arose, they would all jump up and down and say, in effect: "Me! me! me! Let me do it! I can do it!" There would be a brief struggle, and one of them would win and would get to tackle the problem. If it didn't work, then other demons could take over.
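
A minimal sketch of that shouting contest might look like the following. This is not Selfridge's actual program; the demon names, the scoring rule, and the retry loop are all invented for illustration:

    import random

    class Demon:
        # A semi-independent specialist with some skill at various problems.
        def __init__(self, name, skills):
            self.name = name
            self.skills = skills

        def shout(self, problem):
            # "Me! me! me!" - louder shouts signal more confidence.
            return self.skills.get(problem, 0.0) + random.uniform(0.0, 0.1)

        def attempt(self, problem):
            # Success is proportional to actual skill at the problem.
            return random.random() < self.skills.get(problem, 0.0)

    def pandemonium(demons, problem):
        # The loudest demon gets first crack at the problem; if it
        # fails, the next-loudest takes over, and so on down the line.
        for demon in sorted(demons, key=lambda d: d.shout(problem), reverse=True):
            if demon.attempt(problem):
                return demon.name
        return None

    demons = [
        Demon("edge-finder",   {"find-edges": 0.9, "read-letter": 0.2}),
        Demon("letter-reader", {"read-letter": 0.8}),
        Demon("generalist",    {"find-edges": 0.3, "read-letter": 0.3}),
    ]
    print(pandemonium(demons, "read-letter"))  # usually "letter-reader"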

In a way, that was the first connectionist program. Ever since then, there have been waves of enthusiasm in AI for what are, ultimately, evolutionary models. Connectionist models are ultimately evolutionary. They involve the evolution of connection strengths over time. You get lots of things happening in parallel, and what's important about them is that, from a Calvinist perspective, they look wasteful. They look like a crazy way to build anything, because there are all these different demons working on their own little projects; they start building things and then they tear them apart. It seems to be very wasteful. It's also a great way of getting something really good built — to have lots of building going on in a semicontrolled way, and then have a competition to see which one makes it through to the finals.
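
A toy version of that wasteful-but-effective style might look like this (the target pattern, population size, and mutation rate are arbitrary assumptions of mine, not anything from the text): build many candidates in parallel, tear most of them apart, and let the connection strengths drift toward whatever survives the competition.

    import random

    def fitness(weights):
        # How close have these connection strengths evolved toward a
        # (purely illustrative) target pattern?
        target = [0.2, 0.8, 0.5]
        return -sum((w - t) ** 2 for w, t in zip(weights, target))

    # Build a whole population of candidate networks; most will be "wasted".
    population = [[random.random() for _ in range(3)] for _ in range(50)]

    for generation in range(200):
        # The competition: only the fittest quarter reach the finals.
        population.sort(key=fitness, reverse=True)
        finalists = population[:12]
        # Tear the rest down and rebuild them as mutated copies of winners.
        population = [
            [w + random.gauss(0.0, 0.05) for w in random.choice(finalists)]
            for _ in range(50)
        ]

    print(max(population, key=fitness))  # drifts toward [0.2, 0.8, 0.5]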

The AI researcher Douglas Hofstadter's Jumbo architecture is a very nice model that exhibits those features. The physicist Stephen Wolfram has some nice models, although they're not considered AI. These architectures are very different from good old-fashioned AI models, which, you might say, were bureaucratic, with a chain of command and a boss and a sub-boss and a bunch of sub-sub-bosses, and delegation of responsibility, and no waste. Hofstadter once commented that the trouble with those models is that the job descriptions don't leave room for fooling around. There aren't any featherbedders. There aren't any people just sitting around, or making trouble. Mother Nature doesn't design things that way. When Mother Nature designs a system, it's "the more the merrier, let's all have a big party, and somehow, we'll build this thing." That's a very different organizational structure. My task, in a way, is to show how, if you impose those ideas — of a plethora of semi-independent agents acting in an only partly organized way with lots of "waste motion" — on the brain, all sorts of things begin to fall into place, and you get a different view of consciousness.

As technology changes, we change. As computers evolve, our philosophical approach to thinking about the brain will evolve. In the history of thinking about the brain, as each new technology has come along it's been enthusiastically exploited: clockwork and wires and pulleys back in Descartes's day, then steam engines and dynamos and electricity came in, and then the telephone switchboard. We should go back earlier. The most pervasive of all of the technological metaphors people have used to explain what goes on in the brain is writing — the idea that we think about the things happening in the brain as signals, as messages being passed. You don't have to think about telegraphy or telephones, you just have to think about writing messages.

The idea that memory is a storehouse of things written is already a metaphor, and even a bad metaphor. The very idea that there has to be a language of thought doesn't make sense unless you think of it as a written language of thought. A spoken language of thought won't get you much of anything. One of the themes that interests me is the idea of talking before you know you're talking, before you know what talking is, which we all do. Children do it. There's a big difference between talking and self-conscious talking, which, if you get clear about it, helps with the theory of language.

People couldn't think of the brain as a storehouse at all before there was a written language. There wasn't a mind/body problem, and there weren't any theories of mind. Even if you go back to the ancient Greeks, even Plato and Aristotle, you find nothing much in the way of what looks like theorizing about this, and what they did say was rather bad.

The basic idea of computation, as formulated by the mathematicians John von Neumann and Alan Turing, is in a class by itself as a breakthrough idea. It's the only idea that begins to eliminate the middleman. What was wrong with the telephone-switchboard idea of consciousness was that you have these wires that connect what's going on out at the eyeballs into some sort of control panel. But then you still have that clever homunculus sitting at the control panel doing all the work.

If you go back further, David Hume theorized about impressions and ideas. Impressions were like slides in a slide show, and ideas were faint copies — poor-quality Xerox copies — of the original pictures. He tried to dream up a chemistry theory, a phony theory of valences which would suggest how one idea could bring the next one along. I explained this idea to a student one day, and the student said that Hume was trying to get the ideas to think for themselves. That's exactly what Hume was trying to do. He was trying to get rid of the thinker, because he realized that that was a dead end. If you still have that middleman in there doing all the work, you haven't made any progress. Hume's idea was to put little valence bonds between the ideas, so that each one could think itself and then get the next one to think itself, and so forth — getting rid of the middleman. But it didn't work.

The only idea anyone has ever had which demonstrably does get rid of the middleman is the idea of computers. Homunculi are now O.K., because we know how to discharge them. We know how to take a homunculus and break it down into smaller and smaller homunculi, eventually getting down to a homunculus that you can easily replace with a machine. We've opened up a huge space of designs — not just von Neumannesque, old-fashioned computer designs but the designs of artificial life, the massively parallel designs.
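
As a sketch of what "discharging" a homunculus looks like in practice (the adder example is mine, not from the text), here is a seemingly clever demon decomposed into ever-dumber demons, bottoming out in one so stupid that a machine trivially replaces it:

    def nand(a, b):
        # The bottom-level homunculus: so dumb it is obviously a machine.
        return 0 if (a and b) else 1

    # Slightly smarter homunculi, each built only from dumber ones below.
    def not_(a):
        return nand(a, a)

    def and_(a, b):
        return not_(nand(a, b))

    def xor(a, b):
        t = nand(a, b)
        return nand(nand(a, t), nand(b, t))

    def half_adder(a, b):
        # The "adding demon" at the top: apparently clever, yet nothing
        # over and above the dumb gates it is made of.
        return xor(a, b), and_(a, b)

    print(half_adder(1, 1))  # (0, 1): sum bit 0, carry bit 1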

Right now I'm working on how you get rid of the Central Meaner, which is one of the worst homunculi. The Central Meaner is the one who does the meaning. Suppose I say, "Please repeat the following sentence in a loud clear voice: 'Life has no meaning, and I'm thinking of killing myself.'" You might say it, but I don't think you'd mean it, because — some people would be tempted to say — even though your body uttered the words, your Central Meaner wasn't endorsing it, wasn't in there saying, in effect, "This is a real speech act. I mean it!"

I've recently been looking at the literature on psycholinguistics, and sure enough, they have a terrible time dealing with production of speech. All their theories are about how people understand speech, how they comprehend it, how they take it in. But there isn't much at all about how people generate speech. If you look at the best model that anyone's come up with to date, the Dutch psycholinguist Willem Levelt's model, he's got a "blueprint" for a speaker — the basic model, you might say — and right there in the upper-left-hand corner of the blueprint he's got something called the Conceptualizer. The Conceptualizer figures out what the system's got to say and delegates that job to the guys down in the scene shop, who then put the words together and figure out the grammatical relations. The Conceptualizer is the boss, who sets the specs for what's going to be said. Levelt writes a whole book showing how to fit all the results into a framework in which there's this initial Conceptualizer giving the rest of the system a preverbal message. The Conceptualizer decides, "O.K., what we have to do is insult this guy. Tell this bozo that his feet are too big." That gives the job to the rest of the team, and they put the words together and out it comes: "Your feet are too big!"
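
To caricature that bureaucratic blueprint in a few lines of Python (my own caricature, not Levelt's actual model; the stages and the message format are invented for illustration):

    def conceptualizer(situation):
        # The boss decides WHAT to say. Notice that all the interesting
        # work hides inside this function: whoever writes it is playing
        # the Central Meaner.
        if situation == "insult-the-bozo":
            return {"topic": "feet", "attribute": "too big"}

    def formulator(message):
        # The scene shop: puts the words together and fixes the grammar.
        return "Your {} are {}".format(message["topic"], message["attribute"])

    def articulator(sentence):
        # The lip-flappers: produce the actual utterance.
        return sentence + "!"

    print(articulator(formulator(conceptualizer("insult-the-bozo"))))
    # -> Your feet are too big!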

The problem is, How did the Conceptualizer figure out what to tell the language system to say? The linguists finesse the whole problem. They've left the Central Meaner in there, and all they've got is somebody who translates the message from mentalese into English — not a very interesting theory. The way around this, once again, is to have one of these Pandemonium models, where there is no Central Meaner; instead, there are all these little bits of language saying, "Let me do it, let me do it!" Most of them lose, because they want to say things like "You big meanie!" and "Have you read any good books lately?" and other inappropriate things. There's this background struggle of parallel processors, and something wins. In this case, "Your feet are too big!" wins, and out it comes.
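
By contrast, here is a pandemonium-style sketch of the same exchange (again my own illustration; the candidate phrases and their weights are invented): no one decides what gets said; the competition itself settles it.

    import random

    # Each phrase-demon has an activation reflecting how well it fits
    # the current situation; these numbers are invented for the demo.
    candidates = {
        "Your feet are too big!": 0.9,
        "You big meanie!": 0.4,
        "Have you read any good books lately?": 0.1,
    }

    def speak(candidates):
        # Every demon shouts with noisy strength; there is no Central
        # Meaner picking a winner, just the struggle itself.
        shouts = {p: w + random.uniform(0.0, 0.2) for p, w in candidates.items()}
        return max(shouts, key=shouts.get)

    print(speak(candidates))  # usually: Your feet are too big!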

What about the person who said it? Did he mean it? Well, ask him. The person who said it will say, "Well, yeah, I meant it. I said it. I didn't correct it. My ears aren't burning. I'm not blushing. I must have meant it." He has no more access to whether he meant it in any deep, deep sense than you do. As E.M. Forster once remarked, "How do I know what I think until I see what I say?" The illusion of the Central Meaner is still there, because we listen to ourselves and we endorse what we find ourselves saying. Right now, all sorts of words are coming out of my mouth and I'm fairly happy with how it's going; every now and then I correct myself a bit, and if you ask me whether I mean what I say, sure I do — not because there's a subpart of me, a little subsystem, which is the Central Meaner, giving the marching orders to a bunch of lip-flappers. That's a terrible model for language.

Pandemonium makes a better model: Right now, all my little demons are conspiring; they've formed a coalition, and they're saying, "Yeah, yeah, basically the big guy is telling the truth!"

Since publishing Consciousness Explained, I've turned my attention to Darwinian thinking. If I were to give an award for the single best idea anyone has ever had, I'd have to give it to Darwin, ahead of Newton and Einstein and everyone else. It's not just a wonderful scientific idea; it's a dangerous idea. It overthrows, or at least unsettles, some of the deepest beliefs and yearnings in the human psyche. Whenever the topic of Darwin's idea comes up, the temperature rises, and people start trying to divert their own attention from the real issues, eagerly squabbling about superficial controversies. People get anxious and rush to take sides whenever they hear the approaching rumble of evolution.

A familiar diagnosis of the danger of Darwin's idea is that it pulls the rug out from under the best argument for the existence of God that any theologian or philosopher has ever devised: the Argument from Design. What else could account for the fantastic and ingenious design to be found in nature? It must be the work of a supremely intelligent God. Like most arguments that depend on a rhetorical question, this isn't rock-solid, by any stretch of the imagination, but it was remarkably persuasive until Darwin proposed a modest answer to the rhetorical question: natural selection. Religion has never been the same.

At least in the eyes of academics, science has won and religion has lost. Darwin's idea has banished the Book of Genesis to the limbo of quaint mythology. Sophisticated believers in God have adapted by reconceiving God as a less anthropomorphic, more abstract entity — a sort of blank, unknowable Source of Meaning and Goodness. Some unsophisticated believers have tried desperately to hold their ground by concocting creation science, which is a pathetic imitation of science, a ludicrous parade of self-delusions and pious nonsense. Stephen Jay Gould and many other scientists have rightly exposed and condemned the fallacies of creationism. Darwin's idea is triumphant, and it deserves to be.

And yet, and yet. All is not well. There are good and bad Darwinians, it seems, and nothing so outrages the authorities as the "abuse" of Darwin's idea. When the smoke screens are blown away, the objections can all be seen to have a common theme: the fear that if Darwin is right, there's no room left in the universe for genuine meaning. This is a mistake, but it hasn't been properly exposed yet.

When Steve Gould exhorts his fellow evolutionists to abandon "adaptationism" and "gradualism" in favor of "exaptation" and "punctuated equilibrium," the issues are clearly not just scientific but political, moral, and philosophical. Gould is working vigorously, even desperately, to protect a certain vision of Darwin's idea. But why?

Sociobiologists claim to have deduced from Darwin's theory important generalizations about human culture, and particularly about the origin and status of our most deeply held ethical principles. When Gould and others mount their attacks on the "specter of sociobiology," the issue is presented as political: scientists on the left attacking pseudoscientists on the right. The creationists are obvious pseudoscientists. The sociobiologists are more pernicious, according to Gould et al., because it's not so obvious that what they say is nonsense. There's some truth to this, but at the heart of the controversy lies something deeper. Why do these critics want so passionately to believe that sociobiology could not be good science?

Some people hate Darwin's idea, but it often seems that even we who love it want to exempt ourselves from its dominion: "Darwin's theory is true of every living thing in the universe — except us human beings, of course." Darwin himself fully realized that unless he confronted head on the descent of man, and particularly man's mind, his account of the origin of the other species would be in jeopardy. His followers, from the outset, exhibited the same range of conflicts visible in today's controversies, and some of their most important contributions to the theory of evolution were made in spite of the philosophical and religious axes they were grinding.

I'm not purporting to advance either revolution or reform of Darwinian theory. I'm trying to explain what Darwinian theory is and why it's such an upsetting idea.

 


Excerpted from The Third Culture: Beyond the Scientific Revolution by John Brockman (Simon & Schuster, 1995). Copyright © 1995 by John Brockman. All rights reserved.