EDGE


EDGE 30 — December 7, 1997

THE REALITY CLUB

Robert Provine, Douglas Rushkoff, Thomas De Zengotita, and Margaret Wertheim on Rodney Brooks

(ROBERT PROVINE:) Rodney Brooks reminds us not to leave our bodies and computers lost in thought. Breakthroughs in both computer and neurocognitive domains are likely to come when we move beyond the philosophically driven thinking that shapes so much work in these areas and respond to the phylogenetic evidence supporting action-based systems.

(DOUGLAS RUSHKOFF:) I've been combatting the idea that human beings, in society, need a singular god or driving ethical template in order to peacefully co-exist. I'd like to believe that "what feels good, is good," so to speak, and that our uninhibited organic responses to stimuli are not a "lower" or dangerous set of behaviors, but a trait that is developed only after passing through an externally, artificially, or hierarchically directed coordination.

(THOMAS DE ZENGOTITA:) If a mobile robot could be made so it had to replenish itself at intervals, and somehow had to perform the procedure privately, and had to arrange for that privacy in varying circumstances — that would be interesting...

(MARGARET WERTHEIM:) The whole issue of embodiment and cognition is one that I think is central right now. By coincidence I have just written a couple of articles on the corollary question of whether a computer intelligence could ever develop a "soul" (one piece is for the Christmas issue of New Scientist), and in both pieces I discuss the Cog project. Rodney Brooks has actually appointed a theological advisor to that project, who has considered just this question — which I found fascinating.

George Lakoff responds to Steven Pinker

I am encouraged by Pinker's present dismissal of the Computer Program Theory of Mind, even though he previously espoused it in his books. With the field developing this rapidly, changes in position are natural. There is no reason for us to disagree on this matter, given that we both recognize the need for conceptual and linguistic structuring, and given that structured neural modeling provides that structure in a biologically responsible way. I hope Pinker's dismissal of the Computer Program Theory means that he has given up on the Two Minds Theory and has adopted the sensible alternative that also best fits the facts—the Neural Computational Theory of Mind. The evidence warrants it.

Life after Death—Response to Horgan by Joseph LeDoux

We have no idea how our brains make us who we are. There is as yet no neuroscience of personality. We have little understanding of how art and history are experienced by the brain. The meltdown of mental life in psychosis is still a mystery. In short, we have yet to come up with a theory that can put all this together. We haven't yet had a Darwin, Newton, or Einstein.


(6,942 words)


John Brockman, Editor and Publisher | Kip Parent, Webmaster


THE REALITY CLUB


Robert Provine, Douglas Rushkoff, Thomas De Zengotita, and Margaret Wertheim on Rodney Brooks


From: Robert Provine
Submitted: 11.22.97

Rodney Brooks focuses on an important element missing in most contemporary analyses of cognitive, psychological, and computer systems — the central role of lower-level motor processes. Too often we forget that consciousness, sensing, and learning evolved in the service of guiding movement. Without movement these capacities would never have emerged. Yet how many cognitive and computer scientists ponder the ramifications of this fact? Pure motor systems can be adaptive (imagine an "eating machine" gobbling algae on a pond bottom), yet a cognitive endowment would be useless to a non-moving entity. Of course, the hypothetical eating machine would do even better if it could use sensors to more efficiently encounter algae, or to develop strategies based on past experience to increase further its feeding efficiency.

Additional support for a motor-driven evolutionary process comes from comparative and developmental analyses. Motor regions of the central nervous system often develop before they receive input from sensory regions. And the sensory and neuronal components of some marine filter feeders degenerate after they pass from free-swimming larval stages to immobile adulthood.

The process of natural selection works efficiently to sculpt the neurologically driven illusion that we call "physical reality." Natural selection is the engine that links us to the wider world noted by Rodney Brooks. Our body and the brain that propels it are both adequate, if not ideal, matches for the environments that created them. The fragile, hypothetical nature of our "reality" becomes painfully obvious in brain damage, when cognitive capacity not only degrades but often fragments.

Rodney Brooks reminds us not to leave our bodies and computers lost in thought. Breakthroughs in both computer and neurocognitive domains are likely to come when we move beyond the philosophically driven thinking that shapes so much work in these areas and respond to the phylogenetic evidence supporting action-based systems.

Best wishes,

Robert Provine

ROBERT R. PROVINE, Professor of Neurobiology and Psychology at the University of Maryland is the author of Quest for Laughter (Little, Brown). His findings on laughter have been featured in dozens of articles worldwide, including pieces in The New York Times, The Wall Street Journal, The Daily Telegraph, New Scientist, Science News, Discover, and The Los Angeles Times.


From: Douglas Rushkoff
Submitted: 11.23.97

Great interview with Rodney Brooks — particularly your steadfastness in attempting to extract a workable new metaphor for living systems from him, and his equally steadfast determination to refuse you one, at least for the time being.

The very interaction between the two of you seemed to crystallize the two-headed dynamic he's trying to tackle. The bottom-up development of relational toolsets — in living things and robots alike — requires a sort of anti-discipline. One must refuse to surrender to the notion that there's a need for a static, predetermined command line at all. This is scary stuff, and we resist it — on both theoretical and practical levels — because we're deeply afraid of what we would do if we were literally "left to our devices."

On an interpersonal level, it calls to mind theories of transactional and transpersonal therapy, where the patient is never isolated but considered part of the living relationship between himself, his therapist, and his environment. On a cultural level, though, it's even more far-reaching.

I've been combatting the idea that human beings, in society, need a singular god or driving ethical template in order to peacefully co-exist. I'd like to believe that "what feels good, is good," so to speak, and that our uninhibited organic responses to stimuli are not a "lower" or dangerous set of behaviors, but a trait that is developed only after passing through an externally, artificially, or hierarchically directed coordination.

I'd like to ask Brooks if he's considered the moral and social implications of the bottom-up models he's working with, and whether he believes the rejection of top-down, overarching command sets in models for robotics and biology is somehow analogous to an evolutionary step where civilization learns to interact cooperatively by employing fewer codes rather than more.

Douglas Rushkoff

DOUGLAS RUSHKOFF is the author of Cyberia, Media Virus, Free Rides, Playing the Future, The GenX Reader, and The Ecstasy Club. His monthly column "Screen Spirit" appears in Time Digital. He also writes a syndicated column for The New York Times Syndicate and The Guardian of London.


From: Tom de Zengotita
Submitted: 11.26.97

I come at this from a phenomenological point of view, so I have no idea what the practical considerations are. But I can make a crucial point about consciousness in a simple way, one that moves us far from neural models and computer analogies...

If a mobile robot could be made so it had to replenish itself at intervals, and somehow had to perform the procedure privately, and had to arrange for that privacy in varying circumstances — that would be interesting...

See Sartre on "the look."

THOMAS DE ZENGOTITA teaches philosophy and anthropology at The Dalton School and at the Draper Graduate Program at New York University. He holds a BA, MA, MPhil, and PhD in Anthropology from Columbia University. Publications include "On Wittgenstein's 'Remarks on Frazer's Golden Bough'" in Cultural Anthropology (4:4, 1989), "Speakers of Being: Romantic Refusion in Cultural Anthropology" in Romantic Motives: Essays in Anthropological Sensibility (George Stocking, ed., 1991), and "Irony, Celebrity and You" in The Nation, December 2, 1996.


From: Margaret Wertheim
Submitted: 11.26.97

Dear John

I just read the newest Edge piece on Rodney Brooks — which I found extremely interesting. I like very much his approach to robotics and his insistence that intelligence is necessarily an embodied phenomenon. [BTW: the book by Brian Rotman on mathematics that I mentioned last time also insists that numbers have no existence outside of embodied beings. You may be interested to see that the current issue of The Sciences has an article by Rotman about his work on this.] The whole issue of embodiment and cognition is one that I think is central right now. By coincidence I have just written a couple of articles on the corollary question of whether a computer intelligence could ever develop a "soul" (one piece is for the Christmas issue of New Scientist), and in both pieces I discuss the Cog project. Rodney Brooks has actually appointed a theological advisor to that project, who has considered just this question — which I found fascinating.

best wishes

margaret wertheim

MARGARET WERTHEIM is the author of Pythagoras' Trousers (Times Books, 1995), a history of the relationship between physics, religion, and women. She is just completing a new book, The Pearly Gates of Cyberspace, a cultural history of space from Dante to the Internet (for W.W. Norton). Wertheim is an Australian science writer now based in Berkeley, CA. She has written extensively about science, technology, and culture for magazines, television, and radio. She writes for New Scientist, The Sciences, The New York Times, The Australian Review of Books, 21C, World Art, HQ, and others. She is also currently producing "Faith and Reason," a television documentary about science and religion for PBS. She regularly lectures on this subject at colleges and universities here and abroad.


George Lakoff responds to Steven Pinker


From: George Lakoff
Submitted: 11.12.97

Reply to Pinker

I am absolutely delighted to hear that Steve Pinker believes that the Computer Program Theory of Mind is "mad." I agree with him completely. It is mad.

However, the discussion I cited from "How The Mind Works" might lead other readers to interpret Pinker as saying something that he does not believe. If I misread Pinker (as I hope I have), other readers may misread him too. This is a fine opportunity to set the record straight.

The issue needs a bit of elaboration. One possible source of confusion is that there is not one "Computational Theory of Mind" but two, with variations on each. Those two principal computational theories are at odds with one another and the disagreement defines one of the major divisions within contemporary cognitive science. Here are the two computational theories of mind:

1. The Neural Computational Theory of Mind.

The neural structure of the brain is conceptualized as "circuitry," with axons and dendrites seen as "connections," and with activation and inhibition as positive and negative numerical values. Neural cell bodies are conceptualized as "units" that can do basic numerical computations such as adding, multiplying, etc. Synapses are seen as points of contact between connections and units. Chemical action at the synapses determines a "synaptic weight," a multiplicative factor. Learning is modeled as change in these synaptic weights. Neural "firing" is modeled in terms of a "threshold," a number indicating the amount of charge required for the "neural unit" to fire. The computations are all numerical.
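The numerical picture described above can be sketched in a few lines of code. This is purely illustrative (the weights, inputs, and threshold values are made up, not from any model in the text): each input is multiplied by its synaptic weight, the unit sums the results, and it "fires" only if the total reaches its threshold; learning is modeled as adjustment of the weights.

```python
# Illustrative sketch of a single "neural unit" under the Neural
# Computational Theory: inputs arrive over weighted connections,
# the unit sums the weighted charge, and fires if it meets threshold.
# All values here are hypothetical.

def unit_output(inputs, weights, threshold):
    """Numerical computation of a threshold unit's output (1 = fires)."""
    charge = sum(x * w for x, w in zip(inputs, weights))
    return 1 if charge >= threshold else 0

def learn(weights, inputs, error, rate=0.1):
    """Model learning as change in synaptic weights (a simple delta rule)."""
    return [w + rate * error * x for w, x in zip(weights, inputs)]

# Two excitatory connections and one inhibitory (negative-weight) connection.
weights = [0.8, 0.6, -0.5]
print(unit_output([1, 1, 0], weights, threshold=1.0))  # fires: 0.8 + 0.6 = 1.4
print(unit_output([1, 0, 1], weights, threshold=1.0))  # silent: 0.8 - 0.5 = 0.3
```

Note that everything here is a numerical operation on activations and weights; nothing in the computation is a manipulation of uninterpreted symbols, which is exactly the contrast Lakoff draws with the second theory below.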

The Neural Computational Theory comes in a number of flavors, each reflecting research programs that focus on modeling different kinds of phenomena: (1) highly structured, special-purpose neural circuits that describe low-level functions, e.g., topographic maps of the visual field or assemblies of center-surround structures that form line detectors; (2) highly structured, sparsely connected, special-purpose neural circuits that model higher-level functions, e.g., high-level motor control, spatial relations, abstract reasoning, language, etc. (the so-called "structured connectionist models"); and (3) layered, densely connected neural circuits for modeling general learning mechanisms (the so-called "PDP connectionist models"). These are not necessarily mutually exclusive approaches. Given the complexity of the brain, it would not be surprising if each were used in different regions for different purposes.

The fundamental claim is that "higher level" rational functions like language and thought are carried out in the same way as "lower-level" descriptions of the details of the visual system, of motor synergies, etc.

The Neural Computational Theory of Mind states that the mind is constituted by neural computations carried out by the brain, and that those neural computations are the ONLY computations involved in the characterization of mind. The result is a Brain-Mind, a single entity characterized by (1) the specific detailed neural architecture of the brain, (2) the neural connections between the brain and the rest of the body, and (3) neural computation.

The connections between the brain and the rest of the body are crucial to all this. The brain, after all, is structured to function in combination with a body. Its specific neural architectures, which are central to the neural computational theory, are there to perform bodily functions — movement, vision, audition, olfaction, and so on, and their structures have evolved to work with the bodies we have and with the kinds of systems that neurochemistry allows (e.g., topographic maps). Thus, the Neural Computational Theory is inherently an embodied theory.

Patricia Churchland and Terry Sejnowski's wonderful book, The Computational Brain, is about the Neural Computational Theory of Mind.

2. The Computer Program Theory of Mind (aka "The Symbolic Computational Theory of Mind")

In the Symbolic Computational Theory, a "mind" is characterized via the manipulation of uninterpreted symbols, as in a computer program. The "symbols" are arbitrary: they could be strings of zeroes and ones, or letters of some alphabet, or any other symbols as long as they can be distinguished from one another. Nothing the symbols "mean" can enter into the computations. Computations are performed by strictly stated formal rules that convert one sequence of symbols into another.
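A toy example may make this concrete. The rules and symbols below are entirely hypothetical (they come from no actual theory in the text); the point is only to show what "uninterpreted symbol manipulation" means: formal rules rewrite one sequence of symbols into another, and the symbols' shapes, not their meanings, are all that the computation ever consults.

```python
# Illustrative sketch of symbolic computation: strictly stated formal
# rules convert one sequence of uninterpreted symbols into another.
# The symbols "A" and "B" mean nothing; only their shapes matter.

RULES = [
    ("A B", "B"),  # wherever "A B" occurs, rewrite it as "B"
    ("B B", "A"),  # wherever "B B" occurs, rewrite it as "A"
]

def rewrite(sequence):
    """Apply the first matching rule once; return None when no rule applies."""
    for pattern, replacement in RULES:
        if pattern in sequence:
            return sequence.replace(pattern, replacement, 1)
    return None

# Run the "program" to a halt: A B B  ->  B B  ->  A
seq = "A B B"
while seq is not None:
    print(seq)
    seq = rewrite(seq)
```

Because nothing in the computation depends on what the symbols stand for, the same "program" could run on any substrate that can distinguish the symbols, which is precisely the implementation-independence claim discussed in the paragraphs that follow.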

The symbols and the computations are abstract mathematical entities. In the general case, this kind of symbolic computational "mind" is disembodied, and nothing about real human bodies or brains is needed to define what symbols or rules can be. A mind is conceptualized as a large computer program, written in symbols. It is an abstract, disembodied entity with a computational structure of its own.

The symbol system becomes physical when it is "implemented" in some physical system, such as a computer made of silicon chips or (it is often claimed) a human brain. The manner of "implementation" doesn't matter to the characterization of mind; only the symbolic program does. This kind of "mind" thus has an existence independent of how it is implemented. It is in the form of software that can be run on any suitable hardware.

Of course, it is possible to MODEL a neural computational model of brain structure using such a general symbol system, and it is done all the time: you just impose severe limitations, modeling only neural units, connections, levels of activation, weights, thresholds, delays, firing frequencies, etc., and computing only the numerical functions that the neural units compute. The result is a symbolic model of a very specific type of model of a physical system, the brain.

But this fact is not really germane to the Symbolic Computational Theory of Mind. Such models of how the physical BRAIN computes are not what the Symbolic Computational Theory claims a MIND is. Minds are to be characterized by symbolic computations that are supposed to characterize reasoning, for example, the kind of "reasoning" carried out by the pure manipulations of symbols in symbolic logic or in "problem solving" programs.

A special case of the Computer Program Theory of Mind is obtained by adding a constraint, namely, that the program be implementable by a human brain. Let us call this the Brain-Implementable version of the Computer Program Theory. In a Brain-implementable Computer Program Theory, the program is LIMITED by what a brain could implement, but nothing in it is DETERMINED by the structure of the brain. Its computations are not brain computations—they are still computer software that can presumably be "run on brain hardware." Naturally, such a brain-implementable computer program theory would allow the program to also be implementable on all kinds of hardware other than a brain. The "mind" defined by the computations of the program would be unaffected by how the program was implemented.

There is in addition a Two Minds theory, in which the mind is separated into two parts: one part of the mind works by the Neural Computational Theory and the other part of the mind works by the Symbolic Computational (or Computer Program) theory. The Two Minds Theory separates mind and body: it posits a form of faculty psychology in which there is a rational faculty governing thought and a language faculty governing language, which are autonomous and distinct from bodily faculties governing perception, motor activity, emotions, and all other bodily activities. In the Two Minds Theory, the Neural Computational Theory is reserved for the bodily functions: low-level vision, motor synergies, the governing of heartbeat rate, and so on are left to neural computation alone. But the "higher" faculties of mind and language are characterized by the Brain-implementable version of the Computer Program Theory which works by symbolic computation. The Computer Program parts of the mind in this theory—the rational faculties and language—are characterized in a disembodied way, with no structure imposed by the brain, and can be implemented on either brain or nonbrain hardware.

Reading Pinker, I was (I hope mistakenly) led to believe that he had accepted the Computer Program Theory in the Brain-implementable version for rational functions and language. Here are some passages from both The Language Instinct and How The Mind Works that led me to the conclusion that he held such a theory.

In The Language Instinct, there is a chapter called "Mentalese." The title is from Jerry Fodor's Language of Thought theory of mind, which is a version of the computer program theory. On pages 73-77, Pinker describes a Turing machine, an instance of the Computer Program Theory of Mind, as "intelligent" (p. 76). On p. 77, he describes how the abstract symbolic representations might be implemented neurally. At this point he adds: "Or the whole thing might be done in silicon chips. . . Add an eye that might detect certain contours in the world and turn on representations that symbolize them, and muscles that can act on the world whenever certain representations symbolizing goals are turned on, and you have a behaving organism (or add a TV camera and a set of levers and wheels, and you have a robot)."

"This, in a nutshell, is the theory of thinking called "the physical symbol system hypothesis" or the "computational" or "representational" theory of mind. It is as fundamental to cognitive science as the cell doctrine is to biology. . . The representations that one posits in the mind have to be arrangements of the symbols."

There are also passages in How The Mind Works that sound as if Pinker is advocating a version of the Computer Program Theory. On page 24, Pinker says,

"This book is about the brain, but I will not say much about neurons . . . The brain's special status comes from a special thing the brain does . . . information processing, or computation."

One might think that here Pinker was leading up to the Neural Computational Theory of Mind, but then he says:

"Information and computation reside in patterns of data and in relations of logic that are independent of the physical medium that carries them." He describes how a message might be carried by neurons, and continues, "Likewise a given program can run on computers made of vacuum tubes, electromagnetic switches, transistors, integrated circuits, or well-trained pigeons, and it accomplishes the same things for the same reasons . . . The computational theory of mind . . . says that beliefs and desires are information, incarnated as configurations of symbols. The symbols are physical states of bits of matter, like chips in the computer or neurons in the brain."

This sure sounds as though Pinker is accepting the Computer Program Theory of Mind in its Brain-implementable version. Other readers may not have been as badly misled on this matter as I was, but it will be useful to hear from Pinker why these passages are not versions of the Computer Program Theory of Mind (aka The Symbolic Computational Theory) in its brain implementable version.

Indeed, later in the book (p. 112), Pinker seems to be advocating the Two Minds Theory:

"Where do the rules and representations in mentalese leave off and the neural networks begin? Most cognitive scientists agree on the extremes. At the highest level of cognition, where we consciously plod through steps and invoke rules we learned in school or discovered ourselves, the mind is something like a production system, with symbolic inscriptions in memory and demons that carry out procedures. At a lower level, the inscriptions and rules are implemented in something like neural networks, which respond to familiar patterns and associate them with other patterns. But the boundary is in dispute. Do simple neural networks handle the bulk of everyday thought, leaving only the products of book learning to be handled by explicit rules and propositions? ... The other view—which I favor— is that those neural networks alone cannot do the job. It is the structuring of networks into programs for manipulating symbols that explains much of human intelligence. That's not all of cognition, but it is a lot of it; it's everything we can talk about to ourselves and others."

This sure sounds like the Two Minds Theory with the Computer Program Theory of Mind applying to rational thought and language — "everything we can talk about to ourselves and others."

At this point, the Dehaene book becomes relevant. Since mathematics is part of rational thought, part of "everything we can talk about to ourselves and others," it would seem that Pinker is implicitly claiming that mathematical cognition too is to be characterized not by the Neural Computational Theory of Mind, but by the Computer Program (or Symbolic Computational) Theory. If so, this would seem to directly contradict Dehaene, who claims that very elementary arithmetic is characterized by neural circuitry in the brain, not by a symbol manipulation system. Again, I may be misreading Pinker and he can explain the apparent disparity. Dehaene's research seems to contradict what Pinker is taking as his basic beliefs.

The issue, of course, is not just who advocates what position, but what the evidence is. What kind of evidence could separate out the Neural Computational Theory from the Two Minds Theory in which concepts, reason, and language are all characterized by the Computer Program theory (aka Symbolic Computation) in its Brain-instantiable version, while the bodily functions are characterized by the Neural Computation Theory? There is such evidence, and it comes down on the side of the pure Neural Computational Theory.

The argument hinges on the Two Minds Theory's use of faculty psychology, in which visual perception, mental imagery, motor activity, and so on are NOT part of the rational/linguistic faculty (or faculties). Neither Pinker nor anyone else these days proposes that human visual and motor systems work by symbolic rather than neural computation. So, if we can assume that the visual and motor systems work according to the Neural Computational Theory of Mind, can we show that the conceptual system, including human reason and language, makes use of aspects of the motor and visual system that use neural computation not symbolic computation?

The first evidence for such a view came in the mid-1970s, when Eleanor Rosch showed that basic-level categories in the conceptual system—categories like Car and Chair—made essential use of mental imagery, gestalt perception, and motor programs. (For discussion, see my Women, Fire, and Dangerous Things, pp. 46-52.) Similarly, research on the neuroscience of color vision indicated that the linguistic and conceptual properties of color concepts were consequences of the neural structure of color vision. More recently, contemporary neuroscience research has shown that visual and motor areas are active during linguistic activity.

Recent neural modeling research also supports the idea that the sensory-motor system enters into CONCEPTS and LANGUAGE. Terry Regier has argued that models of topographic maps of the visual field, orientation-sensitive cell assemblies, and center-surround receptive fields are necessary to characterize and learn spatial relations CONCEPTS and linguistic expressions. (See discussion in Regier's The Human Semantic Potential, MIT Press, 1995, especially Chapter 5, pp. 81-120.) In the past year, David Bailey and Srini Narayanan in their Berkeley dissertations have provided further arguments. Bailey demonstrated that verbs of hand motion in various of the world's languages, and hand-motion CONCEPTS, can be defined and learned on the basis of the motor characteristics of the hand — neural motor schemas and motor synergies. Narayanan, even more dramatically, showed that the semantics of aspect (event structure) in the world's languages, and its logic, arise from motor control systems, and that the same neural control system involved in moving your body can perform abstract reasoning about the structure of events. (For details, the dissertations can be found on the website of the Neural Theory of Language group at the International Computer Science Institute at Berkeley: www.icsi.berkeley.edu/NTL.)

These results should not be surprising. Our spatial relations concepts are about space, and it is not surprising that our neural systems for vision and for negotiating space should shape those CONCEPTS, their LOGIC, and the LANGUAGE that expresses them. Nor should it be surprising that our CONCEPTS about bodily movement, and their LOGIC and LANGUAGE, should be shaped by our actual motor schemas and motor parameters. And one should not have been surprised to learn that our aspectual concepts, that is, our conceptual system for structuring, reasoning about, and talking about actions and events in general, are shaped by the most important actions we perform, moving our bodies, and that general neural motor control schemas should be used for structuring and reasoning about events in general. Furthermore, given that conceptual metaphor maps body-based concepts onto abstract concepts, preserving their logic and often their language, it should be no surprise that the Neural Computational Theory governing the detailed structures of our sensory-motor system should apply not only to sensory-motor concepts but also to abstract concepts based on them. This is exactly what has been confirmed in studies over two decades.

Dehaene's book presents an important piece of that evidence, that the rational activity of basic arithmetic is neural in character and to be characterized by the Neural Computational Theory of Mind. Dehaene's work fits perfectly with recent work on conceptual systems and language in cognitive linguistics and structured neural modeling. The research that Dehaene cites—by himself, Changeux, and others—seems to disconfirm the Two Minds Theory and the idea from faculty psychology that there is an autonomous faculty of reason that humans have entirely and that animals have none of, with mathematics as an example of that faculty of reason.

If these results about basic arithmetic and body-based concepts are correct, as they seem to be, then the assumed faculty psychology is wrong. There are no separate faculties of reason and language that are fully autonomous and independent of the visual and motor systems. Instead, CONCEPTS, REASONING, and LANGUAGE make use of parts of the visual and motor systems. Since these must be characterized using the Neural Computational Theory, it follows that the Neural Computational Theory must be used in concepts, reasoning, and language. If Dehaene is right, as he seems to be, then the Neural Computational Theory needed to characterize the structure of basic arithmetic is also used in REASONING about basic arithmetic, which is a rational capacity. For this reason, the rational and language capacities cannot be characterized purely in terms of the Symbolic Computational Theory. Therefore, it would appear that the evidence falls on the side of the pure Neural Computational Theory of Mind. The Two Minds Theory does not work. What makes the Symbolic Theory of Mind for reason and language a "mad theory" (in Pinker's terminology) is that it does not fit the facts. Read the sources and make up your own minds.

Despite Pinker's writings advocating the Symbolic Computational Theory for reason and language, Pinker really ought to like the version of the Neural Computational Theory of Mind coming out of the Berkeley group, Regier's Chicago group, and other groups. In that version of the theory, neural modeling is done by highly STRUCTURED connectionist models (rather than PDP connectionist models). We agree with Pinker that conceptual structure, reasoning, and language require structure, and that is just what structured neural models of the sort we and others have been developing over the past couple of decades provide.

The field of neural modeling is evolving very quickly. At the time Pinker was writing How The Mind Works, Regier's book had not yet been published and some of the more important recent research on structured neural models of mind and language had not been completed. Perhaps Pinker was under the impression that the Neural Computational Theory could not characterize the kinds of conceptual and linguistic structures we now know it can characterize. Perhaps Pinker, correctly seeing that important parts of the structure in thought and language cannot be characterized by PDP connectionist models, and not being aware of structured neural models, was driven to what he saw as the only alternative: the "mad" Two Minds Theory, with Symbolic Computation providing the structure to thought and language.

I am encouraged by Pinker's present dismissal of the Computer Program Theory of Mind, even though he previously espoused it in his books. With the field developing this rapidly, changes in position are natural. There is no reason for us to disagree on this matter, given that we both recognize the need for conceptual and linguistic structuring, and given that structured neural modeling provides that structure in a biologically responsible way. I hope Pinker's dismissal of the Computer Program Theory means that he has given up on the Two Minds Theory and has adopted the sensible alternative that also best fits the facts—the Neural Computational Theory of Mind. The evidence warrants it.

GEORGE LAKOFF previously taught at Harvard and the University of Michigan and since 1972 has been Professor of Linguistics at the University of California at Berkeley, where he is on the faculty of the Institute of Cognitive Studies. He has been a member of the Governing Board of the Cognitive Science Society, President of the International Cognitive Linguistics Association, and a member of the Science Board of the Santa Fe Institute. He is the author of Metaphors We Live By (with Mark Johnson), Women, Fire and Dangerous Things: What Categories Reveal About the Mind, More Than Cool Reason: A Field Guide to Poetic Metaphor (with Mark Turner), and most recently, Moral Politics, an application of cognitive science to the study of the conceptual systems of liberals and conservatives. He has just completed (with Mark Johnson) Philosophy In The Flesh, a re-evaluation of Western Philosophy on the basis of empirical results about the nature of mind, and is now working with Rafael Nunez on a book tentatively titled The Mathematical Body, a study of the conceptual structure of mathematics.


Life after Death — Response to Horgan by Joseph LeDoux


From: Joseph LeDoux
Submitted: 11.22.97

Life after Death — Response to Horgan:

I can't quite warm up to Horgan's death sentence for science. My chills come not from a deep need to protect the concept of science but from something much more specific: I just can't see what he's getting at when he talks about my field — neuroscience.

The main argument about the end of science, if I've got it right, is that progress has been so great that we have nothing big left to figure out. But when it comes to neuroscience, where he says we face one of the biggest problems — the mind — the claim is that our field is ending because we'll never figure the big one out.

Can't we get a youth discount? Neuroscience is infantile. We can't have a paradigm shift since we don't have a paradigm. Maybe we'll never have one. But maybe it's too soon to tell. Either way, it seems terribly closed-minded to say we'll never figure out how the mind works. If we give up at this early stage, we'll certainly never get there.

Still, it's important to point out that the study of consciousness is just a minute part of neuroscience (though the only part of the field he discusses in his book). There is, after all, the question of how the brain works, in addition to the one about the mind. In fact, many neuroscientists think they are working on the brain rather than the mind. The brain does many non-mental things that are important, like keeping our lungs inhaling and exhaling at the right speed, making sure the heart pumps away, controlling posture and locomotion, regulating digestion, and on and on. Though certainly less sexy than consciousness, these are more important for survival. We can live a long time without a belief, but not very long without a breath.

But even if we go back to the mind, there's much more to figure out than consciousness. Most of the mind works unconsciously. That's not to say its operations are repressed or otherwise hidden from consciousness. Instead, it means that consciousness (at least what we humans refer to when we talk about consciousness) is something that was added to the brain recently (in evolutionary terms). It was layered on top of all the other stuff that was already there. Consciousness has access to some of that stuff, but not all of it. In fact, most of it is inaccessible. Much of our brain's functions operate unconsciously simply because those processes are not available (neurally) to consciousness. And many of these processes fall into the domain we call "mental." Speaking grammatically, for example, is done without willful participation of consciousness, as is our initial response to danger or beauty. And the breakdown of mental life that occurs in mental disorders is due at least as much to changes in these implicitly operative systems as to alterations in consciously controlled processes.

The foregoing implies that we know a lot about the brain; otherwise, how could I say so much about it? But sadly we know very little. We have no idea how our brains make us who we are. There is as yet no neuroscience of personality. We have little understanding of how art and history are experienced by the brain. The meltdown of mental life in psychosis is still a mystery. In short, we have yet to come up with a theory that can put all this together. We haven't yet had a Darwin, Newton, or Einstein.

Don't get me wrong, I'm not proud that our field has yet to achieve a grand theory. On the other hand, I'm not even sure that we need one. Maybe what we need most are lots of little theories. It would be great to know how anxiety or depression works, even if we don't have a theory of mental illness. And wouldn't it be wonderful to know how we experience a piece of music (be it Bach or rock), even in the absence of a theory of perception? And to understand fear or love in the absence of a theory of emotion in general wouldn't be so bad either. The field of neuroscience is in a position to make progress on these problems, even if it doesn't come up with a theory of mind and brain. In Horgan's terms, this may mean the field is dead. If so, then we can look forward to a long and wonderful life after death.

Horgan will surely have something clever to say in response. I'm prepared to be ripped to shreds. But I'm not prepared to concede infant mortality to neuroscience.

JOSEPH LEDOUX is a Professor at the Center for Neural Science, New York University. He is the author of the recently published The Emotional Brain: The Mysterious Underpinnings of Emotional Life, coauthor (with Michael Gazzaniga) of The Integrated Mind, and editor with W. Hirst of Mind and Brain: Dialogues in Cognitive Neuroscience.



Copyright ©1997 by Edge Foundation, Inc.
