| Home | About Edge| Features | Edge Editions | Press | Reality Club | Third Culture | Digerati | Edge:Feed | Edge Search |
Daniel Goleman, William H. Calvin, Douglas Rushkoff, Paolo Pignatelli, W. Daniel Hillis, Steven Pinker, Joseph LeDoux, Paolo Pignatelli (2), and Joseph LeDoux (2) on Parallel Memories by Joseph LeDoux.
From: Daniel Goleman
AI: A Wake Up Call From the Amygdala.
LeDoux's groundbreaking work has, understandably, been embraced by neuroscientists, even psychoanalysts, as the best insight so far into the workings of the emotional brain and the role of unconscious processing of emotions in mental life. But the field that should be standing up and paying attention is artificial intelligence. LeDoux has shown the neural circuitry that accounts for the fact -- established in separate research by cognitive/social psychology -- that the act of cognition entails emotion. That is, as we comprehend that this or that is an 'X', we already have an opinion about it -- as we realize what it is, we like it or we don't. And this affective processing occurs in the very first phase of the act of cognition -- we actually have the emotional reaction many milliseconds before we know exactly what it is we're reacting to.
Couple this intrinsic role of feeling in cognition with Antonio Damasio's work, showing that sound decision-making and information-processing in life depends on intact circuitry to the amygdala, and the conclusion is that we need to know our feelings about our thoughts, or we have no preferences. The only realm where feelings don't matter is the purely abstract and cognitive -- mathematics, for example. But only mathematics for technicians -- not for theoretical work or creative insight: there we're back to the need for feelings -- for intuition, for the "feel" of what's "right".
All this means that to model the mind in a significant way, you need the cyber equivalent of an amygdala. All else is a shallow version of the mind at work.
From: William H. Calvin
Let me tackle queries about feelings from another angle. Behavior is only the tip of the iceberg -- and especially for humans, because we have such an extensive fantasy life, imagining all sorts of things that might happen and subconsciously rating them. The outcomes of those ratings are a big part of our "emotional feelings." Like Joe LeDoux, I tend to doubt that there are a few primal emotions, conveniently forming the axes of an emotional coordinate space, but clearly our feelings involve combinations: we simultaneously feel fear and desire and hope, just as we can simultaneously rate an apple's appearance and, separately, its taste.
While most rating scales operate on memories of actual happenings rather than offline simulations of future courses of action, we can even incorporate memories of imagined courses of action into our future emotional base -- as when we realize that something is a bad idea, and soon have a gut feeling about it that biases our future ratings (an important source of ethical behavior, lacking in other animals). But even if you aren't speculating (as I assume many animals aren't), emotional ratings are enormously important in biasing neural operations. I often talk (e.g., in How Brains Think) of the four major diffusely-broadcast neurotransmitters as the "mood music" of the neocortex, the various proportions of acetylcholine, serotonin, and dopamine being something like the colors that are produced by various proportions of red, green, and blue photoreceptors -- but augmented by norepinephrine's drumroll announcing a sudden happening in the external world. Regional variations of neuromodulators are part of the stage-setting for even simple neural circuits (my wife, Katherine Graubard, works on exactly this problem in a 30-cell minibrain in the crab).
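Calvin's "mood music" analogy -- neuromodulator proportions read the way the visual system reads proportions of red, green, and blue -- can be sketched in a few lines of code. This is a toy illustration of the analogy only, not a neural model; all names and numbers here are invented for the example.

```python
# Toy sketch: treat broadcast neuromodulator levels as mixing proportions,
# the way RGB proportions mix into one perceived color.

def normalize(levels):
    """Scale raw levels so they sum to 1, like mixing proportions."""
    total = sum(levels.values())
    return {k: v / total for k, v in levels.items()}

# Hypothetical baseline broadcast levels (arbitrary units).
tone = normalize({"acetylcholine": 0.4, "serotonin": 0.35, "dopamine": 0.25})

def startle(tone, norepinephrine_burst=0.5):
    """Norepinephrine's 'drumroll': a transient burst re-weights the mix."""
    mixed = dict(tone)
    mixed["norepinephrine"] = norepinephrine_burst
    return normalize(mixed)

print(startle(tone))
```

The point of the sketch is that no single transmitter carries the "mood"; only the relative proportions do, and a sudden norepinephrine burst changes every other proportion at once.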
I tend to look on emotion as part of the memorized environment that biases a Darwinian process, busy bootstrapping quality in our speculations about what to say next, using a lot of parallel computation. Explicit memories are also part of that memorized environment, but the broad strokes of past "emotional" judgments are what set the stage, what focus your attention on some fine details rather than others.
I have a caution about Joe LeDoux's hypothetical example of false memories of stress: remember that it assumes, as a teaching tactic, that there could be a complete shutdown of the hippocampus by stress (and thus failure of episodic memories), but with augmentation of the amygdala's emotional memories. Though never observed, that's a useful extreme example to show how different the systems are. There's no evidence that false memories, as they are studied by cognitive psychologists such as Beth Loftus, or as they appear in child abuse lawsuits that hinge on supposedly repressed memories, have anything to do with LeDoux's teaching example. Human repressed memories have to be judged on the empirical evidence, such as how susceptible normal people are to memory errors (we're often in error as eyewitnesses, and some people can be persuaded that they participated in events that never happened), and how stress might change those tendencies to error (a problem for future research).
From: Douglas Rushkoff
While LeDoux's biological, quasi-reductionist model of emotions might work for survival mechanisms like fear, pain, fight, flight, or even reproductive urges, I have a feeling it would prove less successful in analyzing more complex mechanisms like the quest for intimacy, community, and comprehension.
I have always understood biology as a yearning for complexity in the face of entropy. That is, life itself is a force that resists reductionism and statistical probability (thus, for example, the preponderance of right-handed molecules against statistical reason). Evolution, a tool of living systems, might not be the result of chance mutation but rather an almost conscious striving towards dimensionality. It defies the odds. This is why I believe that the laws of Newtonian physics and entropy don't particularly apply to consciousness.
Survival instincts are merely our interactions with the force of entropy -- our resistance to death -- so of course they will be subject to the traditional laws of reductionist sciences. They are what we use to push off. But to infer from the torture of animals that all emotion has a base in survival instincts and the mechanistic world in which they occur is to reduce the enterprise of life to a determinist and spiritless happenstance within a downward spiral of entropic inevitability.
No, the universe itself is a battle between entropy and life -- and, because it gets smarter over time, life is sure to win.
- Douglas Rushkoff
From: Paolo Pignatelli
As I understand it, in early development there is a differentiation of neurons, yet under some circumstances differentiation is not such that these neurons' evolutionary changes are irreversible back to some initial or earlier stage (the whole concept of polymorphism). If we count the (average) number of generations, with n=1 the first undifferentiated, homogeneous generation and n=p the present specialized generation, at what approximate point are the differentiations such that one type of memory unit can no longer revert to the other, even when necessary (for survival, or whatever highest criterion we may choose)?
When we say that a fear is "ingrained" we usually mean that it has been there for some time. Time is one parameter of the generation number (the name I give that number between n=1 and n=p). If there is an irreversible point for neuron re-differentiation (which implies that neurons can be considered to have a memory themselves), and that seems to be the case given the results of stress on memory that you talk about here, especially the dendrite shriveling in the hippocampus, then what is the nature of the potentially "deadly" neuronal mutations (in a computational framework) that end, for example, with hippocampal degeneration?
If we imagine both the amygdala and the hippocampus as computing machines (Turing machines), and the output as behavior, are we saying that there is not another Turing machine that can take the output from one and transform it for the other? If that is the case, then how would the Turing models be defective with respect to the neuronal models? Naturally, that would be equivalent to asking if emotional memory can be transformed into "memory" memory through some other area of the brain, which you say cannot be done -- but is that because we have not seen it, or is it impossible? I will go now to your website and read more about your research.
From: W. Daniel Hillis
How many emotions are there? It's a simple question, but as far as I know there is not enough scientific knowledge about emotions to answer it. Since I have been spending my time lately in the entertainment industry I have been struck by how little science there is that tells us anything useful about human emotion. The entertainment business is all about emotion, but there is not much science that informs its primary mission. Major decisions are made by a combination of analogy, habit, and superstition. Imagine, for example, what the chemical industry would be like if there were no science of chemistry: a few recipes that work, some people who have a successful record of mixing things together, a combination of many failures and a few unexplainable successes. It would be a lot like Hollywood.
I am very sympathetic to LeDoux's approach to finding a biological basis for emotion. It is interesting that in large massively parallel computers engineers have also adopted the general strategy of providing two different communications mechanisms for controlling computations. One mechanism is local, specific, and point-to-point; the other is a slow, global system that generally coordinates the action of the local components. I have often thought of these as corresponding roughly to the electrical and chemical signaling systems in the brain. What appeals about LeDoux's work is that it may someday lead to a precise description of how the chemical signaling system works. We might finally progress from today's alchemy to an understanding of the chemistry of emotions.
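Hillis's two-channel scheme can be sketched concretely: a fast channel delivers addressed messages between specific units, while a slow channel re-tunes every unit at once. The classes and numbers below are illustrative assumptions, not the design of any particular machine.

```python
# Minimal sketch of two communication mechanisms in a parallel machine:
# fast point-to-point messages (the "electrical" channel) and a slow
# global broadcast that modulates every unit (the "chemical" channel).

class Unit:
    def __init__(self, name):
        self.name = name
        self.gain = 1.0        # set by the slow global channel
        self.inbox = []        # filled by the fast local channel

    def receive(self, value):
        # Local, specific, point-to-point delivery, scaled by current gain.
        self.inbox.append(value * self.gain)

class Machine:
    def __init__(self, names):
        self.units = {n: Unit(n) for n in names}

    def send(self, src, dst, value):
        # Fast channel: an addressed message from one unit to another.
        self.units[dst].receive(value)

    def broadcast_gain(self, gain):
        # Slow channel: one scalar re-tunes the whole machine at once.
        for u in self.units.values():
            u.gain = gain

m = Machine(["a", "b", "c"])
m.send("a", "b", 1.0)          # delivered at baseline gain
m.broadcast_gain(0.5)          # global modulation, like a neuromodulator
m.send("a", "b", 1.0)          # the same message now lands damped
print(m.units["b"].inbox)      # [1.0, 0.5]
```

The key property the sketch shows is that the global channel never carries a specific message; it only changes how all the specific messages are received.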
From: Steven Pinker
I like Joseph LeDoux's approach, in which he studies a single system at many levels of analysis, from psychology to molecules. Decades ago, Tinbergen suggested that anything in psychology has to be understood at four levels -- mechanism, development, function, and phylogeny. LeDoux's intensive analysis of one psychological faculty may present a special opportunity to realize Tinbergen's ideal. I'd like to throw out a couple of questions that speak to the integration of the four levels; they come from my own approach to fear in How the Mind Works.
Of course, there are alternative explanations -- conceivably there's some kind of evolutionary inertia or developmental constraint making it impossible to grow symmetrical connections between amygdala and cortex. It strikes me that LeDoux's project offers the promise of allowing us to decide between the accounts someday. Perhaps one could look at the development of the connections, and see if the growth factors (or other molecular guides) and the genes controlling them are the kinds of things that can easily be modulated in development, with different parts of the brain, or the brains of different species or even different individuals, showing fine-tuned control of the degree of symmetry or asymmetry of that kind of connection. We know that fearfulness shows variation across life stages, species, breeds (e.g., of dogs), and individuals; perhaps one could compare the size of their amygdalas or the strength or asymmetry of their connections to the rest of the brain. One could even selectively breed animals for different kinds of fear response and look at their brains. If there were fine-tuned, systematic variation in the neuroanatomy across these comparisons, it would suggest that there is no inviolable neurodevelopmental constraint but that natural selection had considerable freedom in selecting the genes that wire up animals' brains, presumably in the service of the survival demands of their niche and social system.
From: Joseph LeDoux
To: Daniel Goleman, Douglas Rushkoff, William Calvin, Paolo Pignatelli, and Steven Pinker
Computation and Emotion
There are lots of good reasons why cognitive science ignored emotion at the beginning. Emotion just wasn't what this approach was about. When cognitive science started claiming to be the new science of the mind, instead of a science of cognition, though, emotion and related topics should no longer have been ignored. But they were, and, for the most part, still are. As Dan Goleman points out, this leads to a shallow view of how the mind works.
To be fair, though, many of the leading researchers in AI and cognitive science have from time to time recognized this. The problem is that this recognition hasn't led the rank and file of cognitive science to embrace emotions with open arms. For the most part, emotion is still one of those messy things you need to control in a good cognitive experiment, rather than something you should study.
It's worth noting, though, that there's a growing interest in emotions in AI. There is, for example, some work in AI that has tried to bridge the cognition-emotion gap: models of emotional face perception and emotional language recognition, and an expert system that simulates stimulus evaluation processes. Also, there's an internet "Cognition-Affect Group" that functions as a clearinghouse for announcing and describing new findings or just plain asking questions of people who have an interest in the subject. Good starts, but more is needed.
My own efforts in this area have involved the development of a connectionist model of the fear conditioning network. Rather than starting with a psychological theory of emotion (a tough task to accomplish), I start with an understanding of the neural system that underlies one kind of emotion -- fear (as modeled by fear conditioning). This work, by the way, is done in collaboration with connectionist modelers. The model is anatomically constrained. These constraints allow the model to discover many aspects of the behavior and physiology of fear conditioning that we've identified through animal studies. The model has even made novel, non-intuitive predictions about the effects of brain lesions. We've then gone back and tested the predictions in the rat, and the model was right.
Another point about the model is that it is modular. This means that we can pick one structure, like the amygdala, and fill it with simulated neurons that have real physiological and biochemical properties, while leaving the rest of the model to function at the systems level (linked structures with generic neuron-like units). These kinds of hybrid models that include both neural network and detailed neuronal compartmental components are potentially very powerful. The modular nature of the model means that we can also take other brain regions that have been simulated with some success, like the hippocampus or areas of neocortex, and integrate those with our model. We're doing this to try to understand how cognitive functions such as attention and contextual processing influence and are influenced by fear arousal.
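The modular-hybrid idea -- most regions stay generic systems-level nodes while one module can be swapped for a more detailed implementation -- can be sketched as pluggable components. This is a rough structural illustration under my own assumptions, not LeDoux's actual model; the module names and thresholds are invented.

```python
# Sketch of a modular network: the amygdala slot accepts either a generic
# systems-level node or a more detailed stand-in, without changing the
# rest of the network.

class GenericModule:
    """Systems-level node: a single generic neuron-like activation."""
    def process(self, x):
        return max(0.0, x)           # simple rectification

class DetailedAmygdala:
    """Stand-in for a module with more biophysical detail."""
    def __init__(self, threshold=0.3):
        self.threshold = threshold
    def process(self, x):
        # Fires only above threshold, mimicking a spiking nonlinearity.
        return x if x >= self.threshold else 0.0

class FearNetwork:
    def __init__(self, amygdala):
        self.thalamus = GenericModule()
        self.amygdala = amygdala     # pluggable module
    def respond(self, stimulus):
        relayed = self.thalamus.process(stimulus)
        return self.amygdala.process(relayed)

coarse = FearNetwork(GenericModule())
hybrid = FearNetwork(DetailedAmygdala())
print(coarse.respond(0.2), hybrid.respond(0.2))   # 0.2 0.0
```

Because both networks share the same interface, a weak stimulus that passes through the generic module is filtered out by the detailed one -- the kind of behavioral difference that swapping in a richer module is meant to expose.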
Have we explained all of emotion through animal studies?
Douglas Rushkoff has the feeling that my ideas about emotion might work for survival mechanisms like fear, but not for other more complex emotions. I would be the first to admit this. One of my main points is that we need to study each emotion on its own terms, and I make no claim that I've got anything to say about emotion in general. I'm happy if I can contribute something to our understanding of fear. Fear is, after all, the emotion that most often goes wrong and accounts for most so-called emotional disorders. Remember, Freud's whole theory was based on anxiety.
Memory and Stress
Hippocampal functions (including explicit, declarative memory) can be impaired by high levels of steroid hormones (stress hormones) floating around in the blood. This is a well-documented finding supported by studies of hippocampal physiology in animals and by studies of hippocampal-dependent memory in stressed animals and in humans with high levels of steroids (such as in Cushing's disease and in people with severe depression). These facts are documented in The Emotional Brain (http://www.cns.nyu.edu/home/ledoux/book.html).
William Calvin calls this a teaching example. He's of course right when he says that there's no way to know whether a stress-induced impairment in hippocampal function might account for "false memories" in traumatized people. I didn't mean to imply that stress effects on hippocampus could account for false memories. I only meant that this phenomenon might help explain why there is sometimes an amnesia for traumatic experiences. False memory is something else that comes later to fill in the gaps left by the amnesia.
This brings us to Paolo Pignatelli's question about why the amygdala can't make up for the hippocampal impairment in cases of stress-induced amnesia. I argued that in stressful situations the amygdala and hippocampus each form their own memories. The amygdala forms unconscious emotional memories, and the hippocampus conscious memories about emotional situations. The amygdala memories lead to bodily responses, while the hippocampal memories just lead to thoughts. When the hippocampus is "shut down" by stress the amygdala continues to form its unconscious memories, and may form even stronger ones.
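The parallel-memory claim above -- one event recorded independently by two systems, with stress silencing only one of them -- can be made concrete in a short sketch. The classes below are schematic illustrations of the logic, not a model of the biology.

```python
# Schematic sketch: one event drives two independent memory systems.
# "Shutting down" the explicit store leaves the implicit store intact.

class AmygdalaStore:
    """Implicit store: keeps only stimulus -> response pairings."""
    def __init__(self):
        self.pairings = {}
    def record(self, stimulus):
        self.pairings[stimulus] = "bodily fear response"

class HippocampusStore:
    """Explicit store: keeps a declarative record, unless shut down."""
    def __init__(self):
        self.episodes = []
        self.online = True
    def record(self, stimulus):
        if self.online:
            self.episodes.append(f"I encountered {stimulus}")

amygdala, hippocampus = AmygdalaStore(), HippocampusStore()
hippocampus.online = False       # severe stress: hippocampus "shut down"
for store in (amygdala, hippocampus):
    store.record("tone")

# The event still drives responses but leaves no conscious record.
print("tone" in amygdala.pairings, hippocampus.episodes)   # True []
```

The resulting state -- a response pairing with no matching episode -- is the schematic version of an amnesia for a traumatic experience that nonetheless keeps producing bodily reactions.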
Pignatelli asks, why can't the hippocampus later get the information from the amygdala, converting the amygdala's unconscious memory into a conscious one? Actually, he's not the first to want to know this. Many proponents of "recovered memory" have tried to make this case. Pignatelli comes at it from a perfectly good theoretical position. If the amygdala and hippocampus are both computing machines, by which he means Universal Turing Machines, then shouldn't there be a way for one to read the output of the other?
I think Pignatelli has raised an interesting point. I don't really know enough about the theory of computation to say whether the amygdala and hippocampus, or any other two brain regions, satisfy the requirements of Universal Turing Machines. However, it seems to me that they are best thought of as special purpose rather than universal computers. I don't know of any evidence that would support the idea that the memories of these two systems are shared, but there is some evidence that they are not. The evidence comes from a study Jeff Muller did in my lab last year. The study wasn't done with Pignatelli's question in mind, so it doesn't completely answer his question, but it goes pretty far.
We temporarily put the amygdala to sleep in rats. This is done by injecting a drug directly into the amygdala that shuts down neural activity, but only for a short while. While the amygdala was out, the rats, which were otherwise fully awake, underwent a standard fear conditioning procedure. The next day, after the drug wore off, we tested the rats. They showed no sign of having learned. Presumably the hippocampus was awake and storing information. But the amygdala couldn't later use that information to express learned fear responses. We know that the amygdala injection didn't affect the hippocampus since even permanent amygdala lesions don't interfere with hippocampal-dependent memory in rats or humans. This is pretty good evidence that the amygdala can't convert a hippocampal memory into something it can use to do its work. It doesn't answer the specific question (whether the hippocampus can read amygdala memories) but it does answer negatively the reverse question, and seems to suggest that brain areas are specific rather than universal computers.
Maturation vs. Learning
I think many of the issues raised by Steve Pinker about the evolutionary and maturational basis of fears and phobias are addressed in The Emotional Brain. The evolutionary and developmental researchers that Pinker cites are covered and endorsed there. I tend to emphasize learning because I want to see how far the fear conditioning model can go rather than because I think it can explain everything. Pinker asks whether my account can easily include the idea that some adult fears and phobias are developmentally programmed childhood fears that never went away. I think this kind of idea can fit right in. I view fear learning as something that a fear detection network does. This network can detect dangers that are programmed in by evolution, as well as those we've learned about. Many of the things that make otherwise normal people afraid are things we've learned about, whereas many phobias seem to involve things that were dangerous to our ancestors. In either case the stimulus goes to the amygdala, which unleashes the responses. The only difference between a learned and an innate releaser of fear responses is how the amygdala was programmed to detect the stimulus.
While I thus agree with Pinker's point that evolutionary and maturational factors are important, I'd like to add a couple of additional points of support for the role of learning. First, there are some instances of phobia where there is a triggering event, even if this isn't always the case. Second, certain evolutionarily programmed fears have to be massaged by experience, as when baby monkeys need to observe snake fear in others once in order to express it themselves. Third, not all learning is associative in nature. Jake Jacobs and Lynn Nadel have in fact raised the possibility that non-associative processes might be important in getting phobias going in some cases. A related notion comes from Jay Schulkin and Jeff Rosen, who are proposing that amygdala kindling might be a kind of non-associative neural trigger of phobias. Finally, some phobias involve non-biologically relevant stimuli, like cars and elevators. Here experience (associative or non-associative) would seem to be key.
Pinker also asks about the maturation of the amygdala. There's not a lot of work on this. Just as emotion has been ignored by cognitive science, the amygdala has been ignored by neurobiology. The neocortex and hippocampus, the king and queen of cognition, have been preferred targets of study. However, here's what the available data suggest. The amygdala matures relatively early after birth, whereas the hippocampus comes in much later. Jacobs and Nadel have suggested that the late development of the hippocampus accounts for infantile amnesia, the inability to consciously remember what happened to you in early childhood. However, because the amygdala is up and running, it can form its memories. A child that's abused will then have unconscious (fear conditioned) memories throughout life, but will never be able to verbalize why he reacts the way he does. That the amygdala never forgets is suggested by lots of evidence, again documented in The Emotional Brain. The amygdala's memories can be inhibited (by extinction or therapy processes) but these extinguished responses (unconscious implicit memories) can usually be brought back. They return as implicit, not as conscious, memories.
Pinker's final point is about why the cortex and amygdala are asymmetrically connected -- the connections from the amygdala to the cortex are stronger than the other way around. He suggests that this may have a purpose -- lack of cognitive control over emotion can be an asset in strategic conflict, since it makes threats more credible. His alternative is that there's some kind of evolutionary inertia that prevents the development of symmetrical connections. I prefer the second choice over the first, mainly because it's grounded in "brain" rather than "mind" evolution. In any event, as Pinker points out, now that we've pinpointed the kinds of circuits that are involved in fear, we can begin to ask the really interesting questions about how that system relates to the rest of the brain, and that's going to help us understand the cognitive/emotional mind, or, in other words, the real mind, as opposed to the one that much of cognitive science has been stuck on.
FURTHER QUESTIONS FOR JOSEPH LEDOUX
From: Paolo Pignatelli
Thank you for your detailed and illuminating reply to my previous questions. I have a few more, if you could indulge my curiosity about this subject, of which I unfortunately know so little.
You say, "I think it's safe to say fear behavior preceded fear feelings in evolution." If fear behavior precedes fear feeling in neurological differentiation, is fear feeling an evolutionary descendant of it, with fear behavior at the (relative) apex and fear feeling a few levels below? Or is the structure a hybrid of fear behavior and other "centers"? (That was the reason for introducing the idea of the similarity of polymorphism as understood in the computing community with polymorphism as understood in the biological community.)
When I first read your fascinating interview, the famous sheep clone experiment had not yet hit the press, but the experiment did in a way clarify some things and bring other questions to mind. The interesting thing from my point of view was how the experimenters succeeded in making the cell revert back to a more "primitive" communication and organizational structure. If we take those clues and apply them to the brain, how would we "starve" one part of the brain so that it could revert to a more primitive "mode" and then re-build itself (from amygdala to hippocampus, for example)? I was further interested in the dissipation of information that would occur in that process: where would the lost energy (energy used to store something) end up? Could the brain then be made, using principles of computing polymorphism, to store the "stack" that was being unloaded in order to make that trip up and then down the evolutionary "tree", in some "parallel structure" it created (by manipulation similar to the clone one)?
In your reply you say, "While the amygdala was out, the rats, which were otherwise fully awake, underwent a standard fear conditioning procedure."
Were there differences in the amount of work necessary to induce the neuronal potentiation in the amygdala as opposed to the hippocampus? By work I mean the force or energy necessary to change the structure in the desired direction (the final structure as affected by learning through potentiation).
As may become apparent, I am very interested in what a next generation of computers would be like: what would they need to be like in order to be much more "friendly", and how could we imbue in them "common sense", or the understanding of a language? As science slowly breaks down the barriers -- many based, in my opinion, on language and its artifacts -- that may have prevented us from delving into some truly interesting questions of science (for example, is self-awareness possible in a machine? Yes, I believe, since I read that certain lesions can lead to a loss of at least physical self-awareness), we may see that a mechanistic interpretation of Man need not be in contrast with "humanity".
You mention the support of the biological-based companies for your work. Have the computer companies shown much interest?
From: Joseph LeDoux
The additional questions are even harder than the first. I'm afraid I don't have anything illuminating to say in response to several of the points you raised, so it's best I say nothing at all about them. I can, however, comment on a couple of things.
When I said that I thought that fear behavior preceded fear feelings in evolutionary time, I was suggesting a couple of things that are made much clearer in The Emotional Brain. One of these is that the capacity to feel afraid is not necessarily what underlies our ability to respond to danger behaviorally. Fear feelings, in my view, are just what happens when you take the fear system, which maybe is better called a defensive behavior system, and put it in a brain, like the human brain, that can be conscious of its own activities. What this means is that emotional behaviors, like defensive behavior and sexual behavior, are mediated by specific systems, but that feelings are mediated by a general-purpose system of consciousness.
I don't think I understand one of your arguments. You said that since awareness could be eliminated by brain lesions then awareness could be programmed into a machine. I don't think your example really supports that. I happen to believe that consciousness has a material basis, but suppose it doesn't. Suppose consciousness is, as Descartes said, immaterial, but it needs material stuff for its expression. Take away the brain and you lose the expression of the soul, but you don't lose the soul. Like I said, I don't necessarily believe that, but your argument doesn't rule out this kind of an explanation, at least as I understood it.
Finally, the answer to the last question is no. Computer companies have not consulted with me about how to make emotional machines. However, I believe that there is an emotional machine group at MIT that is trying to do just that. I'd be interested in talking with them, but haven't heard from them yet. For that matter, I haven't heard much from the drug companies either. I said I thought they should be interested in what we are finding out, not that they had shown interest.
Thanks, again, and sorry I couldn't answer all your questions. By the way, I'm leaving NY in a couple of days and will not be back for some time (months). I assume I will be able to get email, but I may be cut off for a while.