LANGUAGE, BIOLOGY, AND THE MIND

Gary Marcus [1.26.04]

Introduction

Gary Marcus is a young research psychologist whose interest in the literature of biology has resulted in new and interesting ideas about the biological basis of mind. He believes that "the mechanisms that build our brains are just a special case of the mechanisms that build the rest of our body. The initial structure of the mind, like the initial structure of the rest of the body, is a product of our genes."

His goal is twofold: (a) "to track closely the progress in genetics, and try to think about the question of how a tiny number of genes can lead you from an ancestral chimpanzee view of the world to a human view of the world"; and (b) "to rethink linguistics as a question of adapting from primate systems that are already in place. Instead of assuming that everything about language is sui generis—independent of the rest of the cognitive system—or the opposite extreme, which the anti-nativists might assume—that there's nothing special about language—I'm assuming there's something special about language, but that it's a variation on a theme."

Noam Chomsky appreciates Marcus's "wonderful contribution to our understanding of the biological basis for higher mental processes." Steven Pinker notes his "new ideas for how to integrate what we know about the thinking, talking person". Howard Gardner reads him for an understanding of the "space between genes and the mind". Read on....

—JB

GARY F. MARCUS is Associate Professor at the Department of Psychology at New York University and Director of the NYU Infant Language Center. His research on language acquisition and computational modeling has been published in journals such as Science, Cognition, and Cognitive Psychology. He is the author of The Algebraic Mind: Integrating Connectionism and Cognitive Science and The Birth of the Mind: How a Tiny Number of Genes Creates the Complexities of Human Thought.


From The Birth of the Mind: How a Tiny Number of Genes Creates the Complexities of Human Thought by Gary Marcus:

"It is popular in some quarters to claim that the human brain is largely unstructured at birth; it is tempting to believe that our minds float free of our genomes. But such beliefs are completely at odds with everything that scientists have learned in molecular biology over the last decade. Rather than leaving everything to chance or the vicissitudes of experience, nature has taken everything it has developed for growing the body and put it towards the problem of growing the brain. From cell division to cell differentiation, every process that is used in the development of the body is also used in the development of the brain. Genes do for the brain the same things as they do for the rest of the body: they guide the fates of cells by guiding the production of proteins within those cells. The one thing that is truly special about the development of the brain—the physical basis of the mind—is its "wiring", the critical connections between neurons, but even there, as we will see in the next chapter, genes play a critical role.

"This idea that the brain might be assembled in much the same way as the rest of the body—on the basis of the action of thousands of autonomous but interacting genes (shaped by natural selection)—is an anathema to our deeply held feeling that our minds are special, somehow separate from the material world. Yet at the same time, it is a continuation, perhaps the culmination, of a long trend, a growing-up for the human species that for too long has overestimated its own centrality in the universe. Copernicus showed us that our planet is not the center of the universe. William Harvey showed that our heart is a mechanical pump. John Dalton and the 19th century chemists showed that our bodies are, like all other matter, made up of atoms. Watson and Crick showed us how genes emerged from chains of carbon, hydrogen, oxygen, nitrogen and phosphorus. In the 1990s, the Decade of the Brain, cognitive neuroscientists showed that our minds are the product of our brains. Early returns from this century are showing that the mechanisms that build our brains are just a special case of the mechanisms that build the rest of our body. The initial structure of the mind, like the initial structure of the rest of the body, is a product of our genes.'


LANGUAGE, BIOLOGY, AND THE MIND

(GARY MARCUS:) For a long time the fields of biology and psychology have been quite separate, and only in the last few years have people started thinking about brain imaging and about how the brain and mind relate. But they haven't really thought that much about another part of biology: developmental biology. Brain imaging tells you something about how the brain works, but it doesn't tell you anything about how the brain gets to be the way that it is. Of course, we also have the human genome sequence and have made enormous advances in genetics and related fields, and what I've been trying to do in the last few years is to relate all of the advances in biology to what people have been finding out in cognitive development and language acquisition.

Here the question has always been, what's innate? What does the child start with? What do children have at birth? There have always been two sides to that debate, a nature side and a nurture side. The nativists say that there's lots of stuff built into the mind that is the product of adaptation and natural selection. Others say, no, you don't really need all that stuff; you just need some general ability to learn, because the world is a very rich place. We get lots of information from the world, and you don't need to have anything built in at all. I've always been closer to the nativist side, thinking that there probably are sophisticated mechanisms built in. I've been persuaded by scientists like Chomsky and Pinker that we start with something interesting in the mind. We don't just start with a blank slate.

But for me that's always left a question open: if there is something built in at birth, how does it get there? Obviously you have to turn to biology to answer that question. If you really want to understand what the child has at birth, one way to go is through psychological evidence. You can do experiments with babies, as I've done in my lab and as many other people like Liz Spelke have done in their labs, and you can see that babies are very sophisticated by the time they're seven months old. Some people have done newborn studies and have shown that children know something even as soon as they're born: for example, they can recognize the difference between a face and a scrambled version of a face. What's been unsatisfying for me is that we haven't understood how the brain gets to be that way. How is it that we go from a fertilized egg to this complicated brain that at birth is already starting the process of language acquisition, and already tackling the problem of analyzing the world?

I was trained as a psychologist, not a biologist, but I started reading the literature of biology. If I had done that twenty years ago I wouldn't have found it very interesting, and wouldn't have really understood enough about biology to have any impact on psychology. But that's really changed. Just in the last five years scientists have developed incredible new genetic techniques that allow them to do things like splice together genes. They can customize what individual genes do, or switch them on in a particular part of the brain and not somewhere else. These are just amazing techniques.

At the same time we now actually have a sequence of the human genome, which is something people wouldn't have imagined 20 years ago. We're well on our way to having a chimpanzee genome, so if you want to understand what's special about language you can compare across the species. There's one gene, called FOXP2, that we've already identified as having a tiny but potentially important difference between the human version and the chimpanzee version. There's this enormous wealth of new biological data that's telling us something about how the brain is constructed, and it lets us get a new angle on these nature-nurture questions.

One thing these advances help us understand is the level of precision: how much, and how precisely, can the genome specify the development of the brain? Some people say that there are only 30,000 genes; we used to think there were 100,000. Does that make Chomsky's idea of a built-in language acquisition device three times more wrong than it ever was before? Of course not; it doesn't mean that at all.

The genome is really more like a compression scheme. It's not a blueprint that shows you this neuron goes here and this neuron goes there. It's like an MP3 file or a Zip file that can store a lot of information in a compact way. It stores a recipe for building the brain, and it turns out that the genome is capable of building the brain with enormous precision. Using these new techniques people have been able to do things like re-route particular individual neurons, making them go to different destinations. Essentially they've figured out—not for the whole brain, or the whole genome, but for parts of it—the particular code by which particular neurons connect themselves up. They've figured out the general form of a solution to the problem of how to get the precision that you need.
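
To make the recipe idea concrete, here is a toy sketch (purely illustrative, not real developmental biology): a recursive growth rule a few lines long specifies a structure thousands of cells large, much as a small Zip file unpacks into something far bigger.

```python
# A toy "recipe": a tiny recursive growth rule builds a large branching
# structure, standing in for how a compact genome can specify a big
# brain without listing where every neuron goes.

def grow(depth, branches=2):
    """Each round, every growing tip sprouts `branches` new tips."""
    if depth == 0:
        return "cell"
    return [grow(depth - 1, branches) for _ in range(branches)]

def count_cells(tree):
    """Count the 'cells' at the tips of the grown structure."""
    if tree == "cell":
        return 1
    return sum(count_cells(child) for child in tree)

structure = grow(depth=12)     # twelve rounds of division
print(count_cells(structure))  # 4096 cells from a five-line recipe
```

Nothing in the rule says where cell number 3,000 goes, yet the outcome is completely determined: compact, but precise.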

At the same time there's a misapprehension about what the genome really does among researchers in psychology (and among the general public as well). Most people are conditioned into thinking that it's a blueprint. They think of the genome as giving dictation, as giving orders. Many people think that if there's something in the genome that controls our lives we're slaves to it.

This is the wrong way of thinking about genes. A gene is really not a dictator, but an opportunity, because each gene actually has two parts. Everybody knows a gene constructs a protein, but not everybody realizes that the other half of every gene is essentially what's called a regulatory region. It's essentially like an "if" in a computer program. Each gene is really like an if/then statement. There's a "then" that says, build this particular protein. It could be insulin in the pancreas, it could be hemoglobin in precursors to red blood cells, or it could be a particular protein for building a neuron in the brain. But when it does that, it is controlled by the "if" part of each gene. So there's an "if" and a "then."

This seemingly very simple idea, a tiny twist on the usual idea of a gene as coding for a particular protein, means that every gene has a way to respond to the environment, either inside the cell or outside the cell. The "if" that controls whether a gene is turned on is responsive to chemical signals around a particular cell, and those chemical signals can be used for things like telling the cell where it is in the growing body, so that if it moves around it can adopt a new plan according to its new location. It also means that the external environment can, in principle, modify gene expression. Each gene becomes like a switch.
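
A minimal sketch of the if/then picture, with hypothetical signal and protein names (the real regulatory logic is biochemistry, not literal code):

```python
# A gene as an if/then statement (names here are hypothetical).
# The regulatory region is the "if"; the coding region is the "then".

def insulin_gene(signals):
    """IF pancreas-specific signals are present, THEN produce insulin."""
    if signals.get("pancreas_marker") and signals.get("glucose", 0) > 0.5:
        return "insulin"  # the "then": build this particular protein
    return None           # otherwise the gene stays switched off

# The same gene responds differently in different cellular environments:
print(insulin_gene({"pancreas_marker": True, "glucose": 0.9}))   # 'insulin'
print(insulin_gene({"pancreas_marker": False, "glucose": 0.9}))  # None
```

The gene is a switch rather than a dictator: what it does depends on the signals around the cell.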

One example I really like is the Bicyclus butterfly. The butterfly actually grows into two different forms depending on what season it's raised in. This butterfly is found in Africa, and in the summer, the rainy season, the butterfly will be green, shiny, and interesting, while in the fall it'll grow into a dull brown form. It doesn't do this by looking around at the other butterflies; it just does this on the basis of the temperature. If you take a single butterfly, raise it by itself in a lab, and control the temperature, you control which genetic switch gets turned on. So the butterfly will go one way in one environment and another way in another. The genome is giving the butterfly two different choices, two different opportunities. It's not dictating, "You must take this form"; it's saying, "If you're in this situation you can take this form; if you're in this other situation you can take this other form."

When you reflect on that and think about human psychology, it means that our genes don't dictate that we have to be a particular way; they specify different ways in which we might interact with the environment. One of the most interesting studies I read recently concerns a gene that can influence violence. It turns out that people who have this gene are more prone to violence than people who lack it. This is maybe no big surprise, but what's really interesting is that it seems to be a predisposition that depends on a particular environment. If you're raised in an abusive family and you have this gene, you tend to become more violent. If you're raised in a non-abusive family and you have this gene, you don't tend to become more violent. So maybe what's going on is that this gene switches how people respond to different kinds of environments. The adaptation here is not towards violence or towards being less aggressive—the adaptation is towards giving the organism ways of coping with different kinds of environments.

~~~

I got into this field because I was a computer geek, a computer nerd. Back when computers were just starting to become available I had a friend who had a TRS-80 and I wanted to learn how to program. I started reading the artificial intelligence literature, and when I was in high school I wrote a computer program that would translate a semester's worth of Latin into English. What I realized early into that endeavor was that we were never really going to be able to build machines that could do things like understand language without understanding how people understand language. I was able to hack things together based on my own intuitions, and got a little bit of Latin translated, but I couldn't really build a program to do it properly. Eventually that led me towards trying to understand psychology rather than just building computer programs based on my own intuitions.

And so I went to Hampshire College, a little-known experimental school in Amherst, Massachusetts, where they were actually interested in cognitive science even back when I was going to college. I designed my own major. I was a cognitive science major, and I worked on human reasoning. Then I went to graduate school and worked with Steven Pinker. I got involved with a project to study how children acquire the past tense of English verbs, which has become a touchstone in the field of cognitive science. Everyone is trying to figure out how the child understands the difference between a verb like sing/sang, which is irregular, and a verb like walk/walked, which is regular. Most of the verbs in English add -ed, but some of them are idiosyncratic. My dissertation looked at eleven and a half thousand utterances by children to see when they got the past-tense form right, when they got it wrong, and what the circumstances were. That led me into the debate about connectionism, which is about how you might build a neural network model of the mind. How could you get a set of relatively simple-minded neurons together to do something that is cognitively interesting? David Rumelhart and James McClelland are probably the best-known connectionists. Geoff Hinton is another really important connectionist. Jeff Elman is another.

I continued to do experimental work, but I got very interested in these computational models. In this case you have a set of neurons and you're trying to put them together in a way that will do something, like learn the past tense of a verb, in a way that matches the experimental data that you get from children. I discovered fairly early that the models really weren't as good as they looked. They were able to get a first-order approximation of the data, and they were able to get a little bit of what the child can do, but they couldn't really get the generalizations that a child would make. A child, for example, can hear a new verb that he has never heard before and realize that the default way of forming the past tense in English is to add -ed to it.

Here's an example: an adult can say, "Yeltsin out-Gorbacheved Gorbachev." You make up this new verb. You've never heard Gorbachev used as a verb before, but you know that if it's in the past tense you add -ed. Children know that too, but these models couldn't really do it because they were basically just working by analogy and similarity. If the verb sounded like something they had heard before they could handle it, and if it didn't they couldn't. That got me interested in the question of what the basic components of cognition are, and led me to argue for a kind of mental algebra, where the way that you form the past tense is that you just have an equation in your head that says, "To form the past tense of a verb you take the stem of the verb and you add -ed to it." It's just like a line in a computer program—a concatenation operation—you put together the stem of the verb and -ed, and that's how you form the past tense of a verb.
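
That equation can be written down directly. Here is a sketch of the mental-algebra view (toy lexicon, illustrative only): stored exceptions are checked first, and when no exception applies, the default rule concatenates the stem with -ed, which is exactly why it extends to a verb you have never heard before.

```python
# The "mental algebra" view of the English past tense: irregulars are
# stored exceptions; everything else, including brand-new verbs, falls
# through to the default rule "stem + -ed". (Toy lexicon.)

IRREGULARS = {"sing": "sang", "go": "went", "bring": "brought"}

def past_tense(stem):
    """Use a stored exception if there is one; otherwise apply the rule."""
    return IRREGULARS.get(stem, stem + "ed")

print(past_tense("walk"))           # walked (regular verb)
print(past_tense("sing"))           # sang (stored irregular)
print(past_tense("out-gorbachev"))  # out-gorbacheved (novel verb, rule)
```

A model that works purely by similarity has nothing to say about "out-gorbachev"; a rule over a variable applies to any stem whatsoever.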

I tried to argue that you need a kind of mental algebra to do that. I got involved in experiments with infants to try to show that the ability to do this mental algebra actually starts very early in life. I ran experiments with 7-month-old infants in which we gave them an artificial grammar. They would hear sentences like la-ta-la or ga-na-ga—sentences with an A-B-A grammar—and then we would expose them to new sentences made of new words: wo-fay-wo had the same grammar; wo-fay-fay had a different grammar. Using a paradigm where you see whether the child gets bored if it keeps hearing the same thing, we were able to show that the infants could tell the difference between the new grammar and the old grammar. All the words were new but the grammar either changed or didn't, and the infants could tell the difference. The only way to do that, I argued, is with this kind of mental algebra. I also looked at other aspects of the basic circuits out of which you might build a mind.
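
The abstraction the infants seem to extract can itself be stated as a rule over variables, indifferent to the particular syllables. A sketch (assuming syllables are separated by hyphens):

```python
# The pattern behind the habituation experiment: classify a
# three-syllable sentence by its grammar, not by its syllables.

def grammar_of(sentence):
    """Return 'ABA', 'ABB', or 'other' for a hyphenated sentence."""
    a, b, c = sentence.split("-")
    if a == c and a != b:
        return "ABA"
    if b == c and a != b:
        return "ABB"
    return "other"

print(grammar_of("la-ta-la"))    # ABA (a habituation item)
print(grammar_of("wo-fay-wo"))   # ABA: new syllables, same grammar
print(grammar_of("wo-fay-fay"))  # ABB: new syllables, new grammar
```

The comparisons a == c and a != b range over variables, and that is precisely the generalization the infants made to syllables they had never heard.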

So the way I think about this whole neural network debate—the connectionism debate—is that people are right that neurons are the simplest units you've got. But people in the field who take an empiricist, anti-Chomskyan approach are missing the fact that you need an intermediate level of circuitry. It's not just that the brain is made up of neurons; they're put together into circuits that do things like process rules or structured representations. What are the components out of which you could build a brain? That's what I worked on for a number of years. And then, having reached what I thought was a satisfactory answer to that question, I became interested in these questions about how biology builds these things. I have a hypothesis about what the basic components in the brain are, and I've been pursuing the question of how on earth a biological system could actually grow the components that you need in order to build complexity.

I focused on three fundamental components. It's not that I think they're the only ones, but they're the ones that I think are critical for language. They're not necessarily unique to humans, but they're absolutely essential to building a cognitive system that can be as flexible and powerful as a human being.

One of them is this idea of a mental algebra: you have to be able to represent equations in your head. You've got to have something like variables: you can say X or Y, and you need some way of setting a variable to a particular value, as in X = 7. You also have to have a way of doing operations on those variables, as in Y = X + 2. You have to be able to manipulate these variables, just like in grade-school algebra. That doesn't mean any of this is available to consciousness: our brains can, in effect, do the calculus needed to figure out where falling objects will land without our being aware of any calculation at all. But it's probably absolutely fundamental to language and to thought.

Another part of my hypothesis is about having structured representations. For example, in language you can put together any two elements to form a bigger element, and then take that element to make it a part of something else. You can say, "the book," or you can say, "the book that is on the table," or "the book that is on the table that's in the living room," or "the book that's on the table that's in the living room that's in my house," and so forth. It's essential for language that you can always put together simple elements to make more complicated elements.
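
A sketch of that combinatorial property: represent a phrase as either a word or a pairing of smaller phrases, so that the output of one combination can always serve as the input to the next.

```python
# Structured representations by recursive combination: any element,
# simple or complex, can be merged into a bigger element.

def combine(element, modifier):
    """Merge an element and a modifier into a larger element."""
    return (element, modifier)

phrase = "the book"
phrase = combine(phrase, "that is on the table")
phrase = combine(phrase, "that's in the living room")
phrase = combine(phrase, "that's in my house")

print(phrase)
# ((('the book', 'that is on the table'), "that's in the living room"),
#  "that's in my house")   <- output wrapped here for readability
```

The nesting in the printed structure is the point: each bigger element literally contains the smaller elements it was built from, with no limit in principle.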

And then another thing that's absolutely fundamental is the difference between individuals and kinds. We have to be able not only to represent, say, bottles in general, but also this particular bottle, and my coffee cup as opposed to coffee cups in general.
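
Programming languages happen to draw the same distinction, which makes for a convenient sketch: a class stands for the kind, and each instance is a particular individual of that kind.

```python
# Kinds versus individuals: the class is the kind "coffee cup";
# each instance is one particular cup.

class CoffeeCup:
    def __init__(self, owner):
        self.owner = owner  # a property of this individual cup

my_cup = CoffeeCup(owner="me")
your_cup = CoffeeCup(owner="you")

print(isinstance(my_cup, CoffeeCup))  # True: both belong to the kind
print(my_cup is your_cup)             # False: distinct individuals
```

A system that represents only kinds cannot say "my coffee cup is on the table"; a system that represents only individuals cannot generalize about cups.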

It turns out that in the simple neural networks that people have built it's not easy to capture any of these things. They seem very basic—mental algebra, structured representations, and the distinction between individuals and kinds—but it's not easy to figure out how the brain does them. I'm pretty sure that we do them, in the way that I'm pretty sure that the brain does short-term memory. But just as we don't yet know the circuit for short-term memory, we don't yet know exactly how the brain pulls these tasks off. My job has been to try to figure out where we should focus in the search for the neural substrates. A lot of neuroscientists tend to work from the bottom up. They look at a particular group of cells and try to guess what those cells have been doing, but it's also important to work from the top down—Mendel gave us the idea of a gene, and then Watson and Crick figured out what its instantiation was. I'm trying to figure out what the basic components of thought are, hoping that that will help neuroscientists figure out what circuits might support these kinds of things.

~~~

The field of connectionism has lost a bit of its strength. It's a little bit like artificial intelligence. The neural network community and the original AI community oversold themselves. They said, "We're going to solve all these problems and we'll have the answers for you in ten years," and interesting problems actually take longer than that.

There's a sort of sociological accident in the field. The people who for the most part have been working with neural network models ask, "How do I build cognition out of neurons?" In general, these are people like McClelland and Rumelhart, and they all happen to be empiricists. They happen to be on the side of the nature-nurture debate that says there's not much there to begin with. The place that I fit in is a little bit closer to Chomsky and Pinker, since I think that there's a lot of interesting stuff there to begin with. In some sense what I'm trying to do to the field of connectionism is to execute a hostile takeover. You've got all these people who are working with neural network models, asking, "How do I put these neurons together to build cognition?" but their idea of cognition is a blank slate, and that's the wrong idea. The right idea of cognition is that there's a lot of interesting initial structure. That's not all there is—the initial structure is in part about learning mechanisms that take you to the next step—but the average connectionist model is about associationism or something like that. It's a modern version of Skinner's theory where you're pairing things. But the learning mechanisms in a songbird or in a human being are far more sophisticated than just associationism. We may be able to associate things together, but learning language, for example, requires very complicated machinery. Starting with a blank slate is never going to get you to the complexities of human language or human thought.

That doesn't mean that we should just forget about the whole neural network enterprise—it means that we should change it. People have built neural networks to give you blank-slate empiricism, but that doesn't mean that you have to build them that way. One of my projects is to try to figure out how you could reconcile the fact that neurons are relatively simple beasts with the fact that when you put them all together you get complicated cognition, and what I've been saying is that we need certain intermediate steps in order to do that. You can't just go throwing all the neurons into a jar and hope they're going to figure things out. What we need to do is actually figure out how those neurons are structured. The salient fact about any biological system, whether you're talking about the heart or about the brain, is that there's an enormous amount of structure. There's a lot of structure in the brain circuitry that supports the ability to learn language, and that means that when you want to build neural networks you don't want an unstructured mess; you want something that's systematic, and that reflects the complexity of all of the elements.

Even if you look at something as simple as the retina, there are 55 different kinds of neurons, whereas in your average neural network there is one kind of neuron. There's obviously something going on in biology to give it a great deal of complexity in terms of individual units, in terms of how they're connected, and so forth. McClelland and Rumelhart missed that. I think Chomsky and Pinker would agree with me that there's going to be a lot of complexity. They haven't been as interested as I have in this particular question of how you put these neurons together to get complicated cognition, but in that sense I would say that I fit much better with the Pinker and Chomsky school than with McClelland and Rumelhart.

There are also some very interesting political issues that this brings up. In The Blank Slate some of Pinker's political ideas come through. There's an interesting question of how Chomsky and Pinker seem to come to different political interpretations of the biology itself. Pinker allows less room for improving the human condition than I would. I don't think we disagree a whole lot about the nature of the facts, but Pinker tends to put his emphasis on the ways in which the biology constrains us in one direction or another, and he puts less emphasis on ways in which learning can change those things. I would say that the ability to learn is actually one of the things that humans are really good at. One of our unique talents is an incredible facility for learning, an incredible flexibility in learning, that even some of our closest primate cousins don't have. Our miraculous abilities to learn actually open up lots of possibilities, and by not stressing this, Pinker in his latest book paints a somewhat darker picture of human nature than I would.

I don't think that Chomsky has of late directly related his politics to his beliefs about linguistics. He did early on in the 60s, but he's mostly given up on that. People can speculate about what he thinks about the relation between his linguistics and his politics, but he's been more cautious of late, saying they're just the way they are.

One of the most interesting things that's been going on in the field of linguistics lately comes from a paper that Marc Hauser, Noam Chomsky, and Tecumseh Fitch wrote together—it came out in Science about a year ago—arguing that there isn't that much difference between humans and chimpanzees, and that the language faculty might derive from relatively little innovation. One could even read their paper as saying the only thing that's special is the ability to put together smaller parts of sentences into larger parts of sentences, using a process known as recursion. Compared to earlier work in which Chomsky said there's lots and lots of stuff about language that's special, this seems like a really radical change. Not everybody agrees with the paper, but there's something to it. They may overstate the case, but they're right to reconsider the classic questions about innateness and the relation between language and the rest of the cognitive system in light of all the advances in understanding our primate cousins. There's genetic evidence, now that we realize our genomes are just not that different. There's neuroanatomical evidence: we might have thought that chimpanzees don't have a Broca's area, but they do. And there's all kinds of experimental evidence on primates—Hauser's cotton-top tamarins and so forth—revealing interesting cognitive abilities. So it's time to rethink how different language and the rest of cognition really are.

Pinker and Jackendoff think that there's a lot that's special about language. They argue that language is an adaptation, which is something that Chomsky, to my puzzlement, has never really agreed with. Chomsky is still of the opinion that language could have arisen by accident, for some reason besides communication. Pinker and Jackendoff are convinced that language arose through natural selection for the purposes of communication. In their paper they try to outline all the ways in which they think language really is different from the other cognitive systems. For example, they look at the relation between language and the auditory system that we share with other mammals and other primates. One of the reasons they do that is that there's a long history of saying that one of the things special about humans is the way they understand the sounds involved in language. It turns out that chinchillas and cotton-top tamarins can do a lot of the same things. You might have thought that the ability to distinguish between "ba" and "pa" was special to humans, because humans use a lot of sounds like "ba" and "pa" in their words; it turns out that it's not that hard to train a chinchilla or a cotton-top tamarin to do that. It even turns out that the cotton-top tamarin can tell the difference between forward speech and backward speech. It's clear now that our ability to understand the sounds of language has something to do with an auditory system that's conserved across vertebrates, mammals, and primates—it's not something that's unique to humans. On the other hand, as Pinker and Jackendoff emphasize, there are probably subtle ways in which the human ability to understand the sounds of language is different. So you've got some system that was already in place in primates, and it's been changed in particular ways, presumably to make communication more efficient.

There are two things I want to do next. One of them is to track closely the progress in genetics, and try to really think about the question of how a tiny number of genes can lead you from an ancestral chimpanzee view of the world to a human view of the world. How is it that a tiny number of changes in the genome can give you species as different as a human being and a chimpanzee? I have collaborators who are doing some of the genetic work, and I'm also building computer models of my own that are simulations of how neural networks develop. The question is: if I change the genotype—your set of genes—in a small way, how can I get a big change in the phenotype, the set of behaviors an organism has? Change one gene, or change 100 genes, and you can get a radically different system.
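
To give the flavor of that kind of simulation (this toy is mine, not a description of Marcus's actual models), treat the "genome" as a couple of numbers controlling an iterated growth rule, and watch a small change to one of them compound into a large difference in what gets built:

```python
# Toy genotype-to-phenotype sketch (illustrative only): a small change
# in one "gene" compounds over development into a big phenotypic change.

def develop(genome, rounds=20):
    """Grow a cell population: each round, cells divide at rate
    genome['divide'] and are pruned at rate genome['prune']."""
    cells = 1.0
    for _ in range(rounds):
        cells *= (1 + genome["divide"]) * (1 - genome["prune"])
    return cells

ancestral = {"divide": 0.50, "prune": 0.10}
mutant = {"divide": 0.58, "prune": 0.10}  # one small change to one "gene"

print(round(develop(ancestral)))  # ~404 cells
print(round(develop(mutant)))     # ~1143 cells: nearly 3x, from one tweak
```

Because development is iterative, the genome does not need many changes to move the phenotype a long way; a tweak early in the recipe reshapes everything built on top of it.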

The other, which is closely related, is to rethink linguistics as a question of adapting from primate systems that are already in place. Instead of assuming that everything about language is sui generis—independent of the rest of the cognitive system—or the opposite extreme, which the anti-nativists might assume—that there's nothing special about language—I'm assuming there's something special about language, but that it's a variation on a theme.

A metaphor I've been thinking about a lot lately is the relation between the hand and the foot. They're physically distinct; they do different jobs, they obviously evolved down different paths, but it wasn't that long ago that we had some ancestral limb that gave rise to these kinds of things. It's clear that you have a common program that's tweaked in different ways to build a hand and a foot; they're products of the same kind of developmental system even though they're applied to different jobs. You might think of language as being a little bit like that—as a tweak, a variation on the cognitive system.

François Jacob had a great saying that evolution is like a "tinkerer" who fools around with whatever is there, and builds something new out of it. Language is going to be understood in that fashion, as variations on existing themes, putting together kinds of circuits that are already there and making new copies of them, twiddling with them in different ways to make them more efficient, and connecting them in different ways.

You're going to see the hallmarks of cognition throughout language, because the general cognitive system is at least built in much the same way as the language system. They may have different parts of the brain devoted to them, but even if this is the case, they're built in common ways. That means that the cognitive system can tell us something about the language system.

This is exciting, in part because the problem in linguistics is that there are too many different theories that people have about how to account for the grammar of this or that language, and there hasn't been a way to choose between them. But if we reflect on our primate heritage, on the rest of the cognitive system, we might finally have some ways of choosing between competing frameworks for how we represent linguistic knowledge.