| Home | About Edge| Features | Edge Editions | Press | Reality Club | Third Culture | Digerati | Edge:Feed | Edge Search |
From: Marc D. Hauser
If metaphors are so possible, and so transforming, then Lakoff and Johnson, I
presume, would argue that the human mind is fundamentally transformed by
presume would argue that the human mind is fundamentally transformed by
the acquisition of language, and the young child, lacking language, has
absolutely different conceptual representations than children with language.
If this is the case, it goes against many of the findings in current developmental
psychology and evolutionary psychology that argue for a core set of representational
systems. Moreover, many of these representational systems are present
in animals, lacking language and metaphor.
From: Stanislas Dehaene
George Lakoff's statement that "You cannot think anything without using the neural system of your brain" will be completely obvious to any neuroscientist. What is more difficult is to find clear evidence that the structure of our brains imposes a sharp limit on the patterns of our thoughts. I very much like the idea that much of mathematics is based on metaphors between space, time, number, sets, games, etc. But the metaphor idea remains underspecified. Is the brain so flexible that almost any metaphor is possible? If so, the presumed limits are essentially nonexistent, and one might as well be a functionalist or a dualist.
Like Lakoff, I am convinced that cognitive studies of mathematics will ultimately provide beautiful examples of the limits that our brains impose on our thoughts. As I tried to show in The Number Sense, we have very strong intuitions about small numbers and magnitudes, which are provided to us by a specific cerebral network with a long evolutionary history. But one could probably write another book describing the limits on our mathematical intuitions. Take topology, for instance. At home, I have a small collection of extremely simple topological brainteasers. Some of them (essentially made from a metal ring and a piece of string) are strikingly counter-intuitive: our first reaction is that it is simply impossible to remove the ring, but of course it can be done in a few moves. Thus, our sense of topology is extremely poor. Yet it's easy enough to imagine a different species that would have evolved a cerebral area for "topo-sense", and for which all of my brainteasers would be trivial.
So my answer to Lakoff would be: Sure, our thoughts are grounded in our brains. But the real challenge is to find empirical domains in which the constraints linking brain and mind can be tracked down in a convincing manner.
What I would like to throw into this discussion is a socio-political dimension. In his interview, Lakoff did not address the question of why the idea of disembodied intelligence has been so popular in our culture. I would like to suggest that this is not purely because the idea is intellectually appealing, but also because deep cultural and political forces have supported it.
For at least 3000 years Western culture has been deeply dualistic in its philosophical outlook, constructing a duality between mind and body that was seen to mirror the duality of male and female. Part of the appeal of the idea of a disembodied mind was that it suggested to many philosophers (mostly men) that it was possible to realize a purely "masculine" state of being. A "pure" mind without a body was regarded in the Pythagorean/Platonic tradition as a purely male entity.
The long philosophical contempt for the body in Western thinking comes ultimately not from Christianity, but from ancient Greek philosophy. It is precisely this contempt that is still expressed today by proponents of the classical AI paradigm. This contempt for the body (and the desire to escape it by disembodying the mind) cannot be fully understood unless we also understand its historical linkage with a deep Western ambivalence towards women.
From: Arnold Trehub
In response to Lakoff, Stanislas Dehaene asserts "...the real challenge is to find empirical domains in which the constraints linking brain and mind can be tracked down in a convincing manner."
What might guide our search for such domains, and what kinds of constraints should be the focus of investigation? We have an abundance of empirical evidence showing that damage to particular areas of the brain results in particular sensory and cognitive deficits. Recent work in brain imaging reveals selective localized patterns of heightened neuronal activity associated with particular cognitive tasks. So, in this sense, having intact and healthy brain tissue in these areas is a constraint on their correlated aspects of mind. But these findings shed little light on how the brain does its cognitive work. I think this is the real challenge. It is doubtful that we will get much of a handle on the constraints linking brain and mind without the formulation of minimal and plausible neuronal models that can be shown to perform competently over a range of cognitive tasks.
A particularly important aspect of mind that plays a role in many different cognitive functions is the representation of space. In The Cognitive Brain, I proposed a detailed neuronal mechanism (called the retinoid system) that can account for our veridical and imaginary representation of objects and their relationships in 3-D space, as well as our sense of self-location in egocentric space. A model of this kind might help explain Dehaene's observation about the difficulty people have in solving his topological puzzles.
Neuroscience does not yet have the tools that might enable us to lay bare the operative microscopic machinery of the human brain's cognitive systems. But explicit models of biologically plausible mechanisms can be tested for their ability to perform in a way consistent with human cognitive performance. The structure and dynamics of these models can then shed light on the constraints that operate on the brain/mind.
A case in point is the explanation of the seeing-more-than-is-there (SMTT) phenomenon by the retinoid model. The SMTT illusion is experienced if a figure is moved back and forth laterally behind an occluding screen pierced by a very narrow, vertically oriented slit tall enough to span the vertical extent of the almost completely occluded figure. As fixation is maintained on the center of the slit, one perceives a complete but horizontally contracted image of the hidden figure. Even though the retinal stimulus consists only of tiny, intermittently appearing line segments moving up and down on the vertical meridian, there is a vivid visual perception of the whole figure moving left and right, well beyond the narrow aperture. This striking illusory experience has puzzled investigators since it was first observed, but it can now be explained as a natural consequence of the neuronal structure and dynamics of the putative retinoid system. A unified spatial representation of the hidden stimulus is assembled in the brain from the sequence of its fragmentary inputs, which are registered on the retina in the narrow aperture region and then shifted postretinally across adjacent retinoid cells, with the direction and velocity of translation driven by the detection of lateral motion in the aperture.
The retinoid mechanism imposes several constraints on how this phenomenon is experienced. The principal cells in the retinoid array that represent the hidden stimulus are autaptic neurons with short-term memory properties. (An autaptic neuron is a neuron that can restimulate itself for a brief period after its initial spike discharge by means of a synaptic connection from a branch of its own axon to its own dendrite.) Cells of this type require relatively rapid refreshing by direct stimulation if they are to maintain their discharge. Thus the coherent perception of a whole object suddenly breaks down to a pattern of vertically oscillating dots if the lateral oscillation of the occluded object is slower than approximately 2 cycles/sec (each translation phase approximately 250 ms). In addition, because there is a relatively fixed integration time for the interneurons effecting stimulus translation across the retinoid array, as the occluded figure moves faster, its perceived horizontal dimension becomes shorter.
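The decay constraint described above can be illustrated with a toy simulation. This is only a sketch of the general idea, not Trehub's actual retinoid model: cells with autaptic-style short-term memory hold each column of the figure sampled through the slit, their activity fading exponentially between refreshes. The assembled figure stays coherent only if the sweep revisits every column before its stored activity drops below threshold. The triangular figure, the decay constant, and the 0.5 threshold are all illustrative assumptions.

```python
import numpy as np

def assemble(speed, width=40, height=20, decay=0.98, steps=400):
    """Toy SMTT sketch: a hidden figure oscillates behind a one-column
    slit; cells with autaptic-style short-term memory hold each sampled
    column, fading by `decay` per time step. Returns the fraction of the
    figure whose stored activity is still >= 0.5 at the end of the run."""
    # hypothetical hidden figure: a filled right triangle
    figure = np.zeros((height, width), dtype=bool)
    for r in range(height):
        figure[r, : 1 + (r * width) // height] = True

    buf = np.zeros((height, width))        # retinoid-like memory array
    pos, direction = 0, 1                  # slit position within the figure
    for _ in range(steps):
        buf *= decay                       # unrefreshed activity fades
        for _ in range(speed):             # `speed` columns swept per time step
            buf[:, pos] = figure[:, pos]   # slit refreshes one column
            pos += direction
            if pos <= 0 or pos >= width - 1:
                direction = -direction     # figure reverses at sweep ends
                pos = max(0, min(pos, width - 1))
    return float((buf[figure] >= 0.5).mean())

fast = assemble(speed=4)   # rapid oscillation: columns refreshed in time
slow = assemble(speed=1)   # slow oscillation: memory fades between visits
print(fast, slow)
```

With these illustrative numbers, the fast sweep keeps the entire figure above threshold, while the slow sweep loses the columns visited longest ago, mirroring the breakdown of coherent perception at slow oscillation rates.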
I think that we must reconcile ourselves to the idea that detailed minimal models of cognitively relevant neuronal mechanisms are required if we are to understand the constraints operating on the brain/mind.
From: George Lakoff
Reply to comments
I want to thank Stanislas Dehaene, Sandy Blakeslee, Marc Hauser, and Arnold Trehub for their comments.
As Professor Dehaene observed, it is obvious to any neuroscientist that you have to use the neural system of your brain to think. But that has two interpretations.
1. The peculiarities of the brain, including those of the sensory-motor system, structure concepts and "abstract" thought.
2. The brain merely instantiates some general symbol-processing system, and thought is independent of those peculiarities.
Two neural modeling results bear on the choice between these interpretations:
a) Spatial relations concepts. Here we cite Terry Regier's hypothesis in The Human Semantic Potential (MIT Press) that primitive spatial relations concepts arise from neural structures that make use of topographic maps of the visual field, orientation-sensitive cells, center-surround receptive fields, etc.
b) Event Structure Concepts (or "aspect" to linguists): Srini Narayanan's modeling results implicitly make reference to Rizzolatti's mirror neurons. Narayanan argues that there is a single high-level neural control system for motor control and perception of motor movements, and that it characterizes "abstract" event structure (or aspectual) concepts.
These of course are NTL (Neural Theory of Language) neural modeling results, not results from neuroscience. That is, they are "how" results (how the neural computation works), not "where" results (where the neural computation is done). These are cases where it appears that the structure of the brain imposes "sharp limits" on conceptual structure.
These results are very much in the spirit of Professor Trehub's very interesting observations and his claim that "detailed minimal models of cognitively relevant neuronal mechanisms are required if we are to understand the constraints operating on the brain/mind." That is just what we are finding from the computational neural modeling perspective: the details of conceptual structure can only be computed by neural networks of a very limited kind with specific structures. As Professor Dehaene observes, his own research indicates that our concepts of small numbers and magnitudes are constituted by a specific cerebral network with a long evolutionary history.
All of this points to a strong embodiment of mind hypothesis: Mind is not just any kind of symbol-manipulation that happens to be instantiated somehow in the brain. Instead, the possibilities for concepts and for thought are shaped in very special ways by the body and the brain that evolved to control it, especially the sensory-motor system.
Metaphor plays a major role in this account: Conceptual metaphors appear to be neural maps that link sensory-motor domains in the brain to regions where more abstract reasoning is done. This allows sensory-motor structures to play a role in abstract reason.
I'm glad that Professor Dehaene likes the idea that abstract mathematics is based on metaphors linking number with space, actions, sets, and so on. But he is incorrect that the theory of conceptual metaphor is so "underspecified" that "one might as well be a functionalist or a dualist." It is not the case that almost any metaphor is possible. Possible metaphors are constrained in many ways, as discussed in Philosophy in the Flesh, Chapter 4. The possibilities for what Joe Grady has called "primary metaphor" are constrained by (a) sensory-motor and other source-domain inferential mechanisms; (b) regularly repeated real-world experiences, especially in the early years, in which source and target domains are systematically correlated; and (c) mechanisms of recruitment learning. Our empirical studies show that conceptual metaphors around the world seem to be quite limited in ways that such constraints would predict. The wide variety of complex conceptual metaphors is predicted by the possibilities for co-activation of neural metaphorical maps.
What results is not possible in a dualist or functionalist system, since
many actual inferential mechanisms are in the sensory-motor system. Narayanan's
modeling results indicate that abstract reason can be carried out by sensory-motor
neural mechanisms. Very non-dualist and non-functionalist. Non-dualist
because bodily control mechanisms are being used in abstract reason. That
does not allow a mind-body split. The results are nonfunctionalist because
Narayanan's inferential mechanisms have the properties of neural systems.
"Functionalist" systems are general symbol manipulation systems and they do not have these properties.
Some of Marc Hauser's comments are based on a lack of familiarity with results on metaphorical thought over the past two decades. Professor Hauser incorrectly presumes that Johnson and I "would argue that the human mind is fundamentally transformed by the acquisition of language, and the young child, lacking language, has absolutely different conceptual representations than children with language. If this is the case, it goes against many of the findings in current developmental psychology and evolutionary psychology that argue for a core set of representational systems. Moreover, many of these representational systems are present in animals, lacking language and metaphor."
Conceptual metaphorical mappings are not primarily matters of language; they are part of our conceptual systems: cross-domain mappings that allow us to use sensory-motor concepts and reasoning in the service of abstract reason. Children acquire conceptual metaphorical mappings automatically and unconsciously via their everyday functioning in the world. See Chapter 4 of Philosophy in the Flesh. Thus it is not the case that "the young child, lacking language, has absolutely different conceptual representations than children with language." Our results are very much in accord with child language acquisition. Indeed, Chris Johnson's research on polysemy acquisition supports our account.
Finally, I'd like to turn to Professor Hauser's "questions/challenges." He writes:
If our brains are structured on the basis of the input from the body, then how can Lakoff and Johnson explain the phantom limb results that Ramachandran has obtained with mirrors. Here, simply seeing the intact arm in the mirror provides the necessary input to the brain to show that the phantom can be relieved of pain. Nothing is happening at the body surface. It is a visual image of the good arm in the place of the missing arm. Seeing this image apparently tricks the brain into thinking that the pain can be relieved. This is an elegant example, it seems to me, of modularity, and the encapsulation of information within one system.
Johnson and I accept (and applaud) Ramachandran's account. It is entirely consistent with ours as discussed in Philosophy in the Flesh, Chapter 3 and Appendix. The brain is structured so as to run a body and has very specific connections to and from the body. The Neural Theory of Language is based on empirical results about the details of body-linked brain structure. Recall that in order to have a phantom limb, you have to have had a real limb linked to the brain before you lost it.
Ramachandran's results, so far as I can tell, show nothing about "modularity". They only show that there are constraints on where information can flow in the brain, but that is anything but surprising.
I try to avoid using the word "modularity" because of its wide misuse in linguistics. When "modularity" is taken to mean that there are places in the brain where neural computation is done using circuitry specific to that place, then there is no problem. This is just localized neural computation of a specialized kind performed by specialized circuitry with normal neural inputs and outputs. There is no question that this exists, and our group makes extensive use of it in our neural modeling enterprise.
However, there is a strange use of the word "module" that is current in linguistics and does not mean this at all. This is the Chomskyan "syntax module" or "syntax box," which has outputs but no input. There is nothing in a brain like this. You can see why I would avoid the word "module." No neuroscientist I know uses the word in such a sense. For discussion, see the chapter on Chomsky's philosophically based linguistics in Philosophy in the Flesh.
Again, I would like to thank those who wrote. I hope that reading Philosophy in the Flesh will clarify these issues.
It would be interesting to hear what philosophers have to say about all this. Our book surveys many of the vast changes that would result if philosophy were to conform to the empirical results of neuroscience and cognitive science (especially cognitive linguistics). We hope that most philosophers will not close their minds to the sciences of the brain and the mind, which bear so centrally on the philosophical enterprise.