2008: WHAT HAVE YOU CHANGED YOUR MIND ABOUT? WHY?

Philosopher; Austin B. Fletcher Professor of Philosophy, Co-Director, Center for Cognitive Studies, Tufts University; Author, From Bacteria to Bach and Back
Competition in the brain

I've changed my mind about how to handle the homunculus temptation: the almost irresistible urge to install a "little man in the brain" to be the Boss, the Central Meaner, the Enjoyer of pleasures and the Sufferer of pains. In Brainstorms (1978) I described and defended the classic GOFAI (Good Old Fashioned AI) strategy that came to be known as "homuncular functionalism," replacing the little man with a committee.

The AI programmer begins with an intentionally characterized problem, and thus frankly views the computer anthropomorphically: if he solves the problem he will say he has designed a computer that can [e.g.,] understand questions in English. His first and highest level of design breaks the computer down into subsystems, each of which is given intentionally characterized tasks; he composes a flow chart of evaluators, rememberers, discriminators, overseers and the like. These are homunculi with a vengeance. . . . Each homunculus in turn is analyzed into smaller homunculi, but, more important, into less clever homunculi. When the level is reached where the homunculi are no more than adders and subtractors, by the time they need only the intelligence to pick the larger of two numbers when directed to, they have been reduced to functionaries "who can be replaced by a machine." (p. 80)
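To make that decomposition concrete, here is a minimal sketch in Python (my illustration, not anything from Brainstorms) of a "clever" top-level task discharged entirely by progressively stupider homunculi, bottoming out in one that needs only the intelligence to pick the larger of two numbers:

    def pick_larger(a, b):
        # Bottom-level homunculus: so dumb it can "be replaced by a machine".
        return a if a > b else b

    def find_max(numbers):
        # Mid-level homunculus: delegates every decision to pick_larger.
        best = numbers[0]
        for n in numbers[1:]:
            best = pick_larger(best, n)
        return best

    def answer_which_is_biggest(numbers):
        # Top-level homunculus with an intentionally characterized task
        # ("understand the question"); its apparent cleverness is nothing
        # over and above the dumber functionaries it delegates to.
        return "The biggest is {}.".format(find_max(numbers))

    print(answer_which_is_biggest([3, 41, 7]))  # The biggest is 41.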

I still think that this is basically right, but I have recently come to regret–and reject–some of the connotations of two of the terms I used: committee and machine. The cooperative bureaucracy suggested by the former, with its clear reporting relationships (an image enhanced by the no-nonsense flow charts of classical cognitive science models) was fine for the sorts of computer hardware–and also the levels of software, the virtual machines–that embodied GOFAI, but it suggested a sort of efficiency that was profoundly unbiological. And while I am still happy to insist that an individual neuron, like those adders and subtractors in the silicon computer, "can be replaced by a machine," neurons are bio-machines profoundly unlike computer components in several regards.

Notice that computers have been designed to keep needs and job performance almost entirely independent. Down in the hardware, the electric power is doled out evenhandedly and abundantly; no circuit risks starving. At the software level, a benevolent scheduler doles out machine cycles to whatever process has highest priority, and although there may be a bidding mechanism of one sort or another that determines which processes get priority, this is an orderly queue, not a struggle for life. (As Marx would have it, "from each according to his abilities, to each according to his needs.") It is a dim appreciation of this fact that probably underlies the common folk intuition that a computer could never "care" about anything. Not because it is made out of the wrong materials — why should silicon be any less suitable a substrate for caring than organic molecules? — but because its internal economy has no built-in risks or opportunities, so it doesn't have to care.
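The contrast is easy to caricature in code. The following toy scheduler (hypothetical; the process names and priorities are invented for illustration) doles out cycles to whichever process currently has the highest priority and then requeues it. It is an orderly queue in which nothing has to fight:

    import heapq

    def run(processes, total_cycles):
        # heapq is a min-heap, so priorities are stored negated to make
        # the highest-priority process come out first.
        queue = [(-priority, name) for name, priority in processes]
        heapq.heapify(queue)
        for _ in range(total_cycles):
            neg_priority, name = heapq.heappop(queue)
            print("cycle -> {} (priority {})".format(name, -neg_priority))
            # Requeue with slightly reduced priority, so even the lowliest
            # process is eventually served: benevolence, not struggle.
            heapq.heappush(queue, (neg_priority + 1, name))

    run([("vision", 5), ("planning", 3), ("housekeeping", 1)], 8)

No process here risks anything; its "needs" are met by fiat, which is precisely the disanalogy with biology that the next paragraphs develop.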

Neurons, I have come to believe, are not like this. My mistake was that I had stopped the finite regress of homunculi at least one step too early! The general run of the cells that compose our bodies are probably just willing slaves–rather like the selfless, sterile worker ants in a colony, doing stereotypic jobs and living out their lives in a relatively non-competitive ("Marxist") environment. But brain cells — I now think — must compete vigorously in a marketplace. For what?

What could a neuron "want"? The energy and raw materials it needs to thrive–just like its unicellular eukaryote ancestors and more distant cousins, the bacteria and archaea. Neurons are robots; they are certainly not conscious in any rich sense–remember, they are eukaryotic cells, akin to yeast cells or fungi. If individual neurons are conscious, then so is athlete’s foot. But neurons are, like these mindless but intentional cousins, highly competent agents in a life-or-death struggle, not in the environment between your toes, but in the demanding environment of the brain, where the victories go to those cells that can network more effectively and contribute to more influential trends at the virtual-machine levels where large-scale human purposes and urges are discernible.

I now think, then, that the opponent-process dynamics of emotions, and the roles they play in controlling our minds, are underpinned by an "economy" of neurochemistry that harnesses the competitive talents of individual neurons. (Note that the idea is that neurons are still good team players within the larger economy, unlike the more radically selfish cancer cells. Recalling François Jacob's dictum that the dream of every cell is to become two cells, neurons vie to stay active and to be influential, but do not dream of multiplying.)

Intelligent control of an animal’s behavior is still a computational process, but the neurons are "selfish neurons," as Sebastian Seung has said, striving to maximize their intake of the different currencies of reward we have found in the brain. And what do neurons "buy" with their dopamine, their serotonin or oxytocin, etc.? Greater influence in the networks in which they participate.
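A toy model makes the proposal concrete. In the sketch below (my own illustration, not a model from Seung or Dennett; the single scalar "reward" standing in for dopamine, serotonin, oxytocin and the rest is an assumption, as are all the names), neurons earn currency roughly in proportion to their current influence plus luck, and spend it on a larger share of a conserved pool of network influence, so that one neuron's gain is another's loss:

    import random

    class Neuron:
        def __init__(self, name):
            self.name = name
            self.influence = 1.0  # share of the network's total "weight"
            self.reward = 0.0     # accumulated neurochemical currency

    def step(neurons):
        # Earning: reward arrives in proportion to current influence, with
        # noise standing in for how useful each neuron happened to be to
        # the larger trends it participated in.
        for n in neurons:
            n.reward += n.influence * random.random()
        # Spending: influence is renormalized so the total is conserved;
        # a marketplace, not the computer's orderly queue.
        total = sum(n.reward for n in neurons)
        for n in neurons:
            n.influence = len(neurons) * n.reward / total

    neurons = [Neuron("n{}".format(i)) for i in range(5)]
    for _ in range(20):
        step(neurons)
    for n in sorted(neurons, key=lambda x: -x.influence):
        print("{}: influence {:.2f}".format(n.name, n.influence))

Run it a few times and a rich-get-richer distribution emerges: early lucky winners convert reward into influence and influence into more reward. That, and not the tidy bureaucracy of the committee metaphor, is the flavor of competition the essay is after.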