Edge 72 July 26, 2000
THE THIRD CULTURE
Hans Moravec has been thinking about machines thinking since he was a child in the 1950s, building his first robot, a construct of tin cans, batteries, lights and a motor, at age ten. In high school he won two science fair prizes for a light-following electronic turtle and a tape-controlled robot hand.
As an undergraduate he designed a computer to control fancier robots, and experimented with learning and automatic programming on commercial machines. During his master's work he built a small robot with whiskers and photoelectric eyes controlled by a minicomputer, and wrote a thesis on a computer language for artificial intelligence. He received a PhD from Stanford in 1980 for a TV-equipped robot, remote controlled by a large computer, that negotiated cluttered obstacle courses.
Since 1980 he has been director of the Carnegie Mellon University Mobile Robot Laboratory, birthplace of mobile robots deriving 3D spatial awareness from cameras, sonars, and other sensors.
His books consider the future prospects for humans, robots and intelligence. He has published many papers in robotics, computer graphics, multiprocessors, space travel and other speculative areas.
HANS MORAVEC is a Principal Research Scientist in the Robotics Institute of Carnegie Mellon University and the author of Mind Children: The Future of Robot and Human Intelligence, and Robot: Mere Machine to Transcendent Mind.
Ripples and Puddles
By Hans Moravec
Computers were invented recently to mechanize tedious manual informational procedures. Such procedures were themselves invented only during the last ten millennia, as agricultural civilizations outgrew village-scale social instincts. The instincts arose in our hominid ancestors during several million years of life in the wild, and were themselves built on perceptual and motor mechanisms that had evolved in a vertebrate lineage spanning hundreds of millions of years.
Bookkeeping and its elaborations exploit ancestral faculties for manipulating objects and following instructions. We recognize written symbols in the way our ancestors identified berries and mushrooms, operate pencils like they wielded hunting sticks, and learn to multiply and integrate by parts as they acquired village procedures for cooking and tentmaking.
Paperwork uses evolved skills, but in an unnaturally narrow and unforgiving way. Where our ancestors worked in complex visual, tactile and social settings, alert to subtle opportunities or threats, a clerk manipulates a handful of simple symbols on a featureless field. And while a dropped berry is of little consequence to a gatherer, a missed digit can invalidate a whole calculation.
The peripheral alertness by which our ancestors survived is a distraction to a clerk. Attention to the texture of the paper, the smell of the ink, the shape of the symbols, the feel of the chair, the noise down the hall, digestive rumblings, family worries and so on can derail a procedure. Clerking is hard work more because of the preponderance of human mentation it must suppress than the tiny bit it uses effectively.
Like little ripples on the surface of a deep, turbulent pool, calculation and other kinds of procedural thought are possible only when the turbulence is quelled. Humans achieve quiescence imperfectly by intense concentration. Much easier to discard the pesky abyss altogether: ripples are safer in a shallow pan. Numbers are better manipulated as calculus stones or abacus beads than in human memory. A few cogwheels in Blaise Pascal's seventeenth century calculator perform the entire procedure of addition better and faster than a human mind. Charles Babbage's nineteenth century Analytical Engine would have outcalculated dozens of human computers and eliminated their errors. Such devices are effective because they encode the bits of surface information used in calculation, and not the millions of distracting processes churning the depths of the human brain.
The deep processes sometimes help. We guess quotient digits in long divisions with a sense of proportion our ancestors perhaps used to divide food among mouths. Mechanical calculators, unable to guess, plod through repeated subtractions. More momentously, geometric proofs are guided (and motivated!) by our deep ability to see points, lines, shapes and their symmetries, similarities and congruences. And true creative work is shaped more by upwellings from the deep than by overt procedure.
Calculators gave way to Alan Turing's universal computers, which grew to thousands, then millions, and are now approaching billions of storage locations and procedure steps per second. In doing so they transcended their paperwork origins and acquired their own murky depths. For instance, without great care, one computer process can spoil another, like a clerk derailed by stray thoughts. On the plus side, superhumanly huge searches, table lookups and the like can sometimes function like human deep processes. In 1956 the massive searches of Allen Newell, Herbert Simon and John Shaw's Logic Theorist found proofs like a novice human logician. Herbert Gelernter's 1963 Geometry Theorem Prover used large searches and Cartesian coordinate arithmetic to equal a fair human geometer's visual intuitions. Expert systems' large compilations of inference rules and combinatorial searches match human experience in narrow fields. Deep Blue's giga-scale search, opening and endgame books and carefully-tuned board evaluations defeated the top human chess player in 1997.
Despite such isolated soundings, computers remain shallow bowls. No reasoning program even approaches the sensory and mental depths habitually manifest at the surface of human thought. Doug Lenat's common-sense encoding Cyc, begun in the 1980s and about the most ambitious, would capture broad verbal knowledge yet still lack visual, auditory, tactile or abstract understanding.
Many critics contrast computers' superiority in rote work with their deficits of comprehension to conclude that computers are prodigiously powerful, but universal computation lacks some human mental principle (of physical, situational or supernatural kind, per taste). Some Artificial Intelligence practitioners profess a related view: computer hardware is sufficient, but difficult unsolved conceptual problems keep us from programming true intelligence.
The latter premise can seem plausible for reasoning, but it is preposterous for sensing. The sounds and images processed by human ears and eyes represent megabytes per second of raw data, itself enough to overwhelm computers past and present. Text, speech and vision programs derive meaning from snippets of such data by weighing and reweighing thousands or millions of hypotheses in its light. At least some of the human brain works similarly. Roughly ten times per second at each of the retina's million effective pixels, dozens of neurons weigh the hypothesis that a static or moving boundary is visible then and there. The visual cortex's ten billion neurons elaborate those results, each moment appraising possible orientations and colors at all the image locations. Efficient computer vision programs require over 100 calculations each to make similar assessments. Most of the brain remains mysterious, but all its neurons seem to work about as diligently as those in the visual system. Elsewhere I've detailed the retinal calculation to conclude that it would take on the order of 100 trillion calculations per second of computing -- about a million present-day PCs -- to match the brain's functionality.
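The arithmetic behind that estimate can be reconstructed in a few lines. The retina figures below are the ones quoted in the passage; the brain-to-retina scale factor and the per-PC rate are assumptions standing in for the detailed comparison made elsewhere, chosen to be consistent with the totals stated here:

```python
# Back-of-envelope sketch of the essay's brain-power estimate.
# Retina figures are from the text; the brain-to-retina ratio and
# the year-2000 PC rate are assumptions consistent with its totals.

retina_pixels = 1_000_000        # effective pixels in the retina
updates_per_second = 10          # boundary assessments per pixel per second
calcs_per_assessment = 100       # computer calculations to match one assessment

retina_calcs = retina_pixels * updates_per_second * calcs_per_assessment
print(retina_calcs)              # 1000000000: ~1e9 calc/s for the retina alone

brain_to_retina = 100_000        # assumed whole-brain vs. retina scale factor
brain_calcs = retina_calcs * brain_to_retina
print(brain_calcs)               # 100000000000000: the "100 trillion" figure

pc_calcs = 100_000_000           # assumed ~1e8 calc/s for a year-2000 PC
print(brain_calcs // pc_calcs)   # 1000000: "about a million present-day PCs"
```

The point of the sketch is only that the headline numbers are mutually consistent, not that any single figure is precise.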
That number presumes an emulation of the brain at the scale of image edge detectors: a few hundred thousand calculations per second doing the job of a few hundred neurons. The computational requirements would increase (maybe a lot) if we demanded emulation at a finer grain, say explicit representation of each neuron. By insisting on a fine grain we constrain the solution space and outlaw global optimizations. On the plus side, by constraining the space we simplify the search! No need to find efficient algorithms for edge detection and other hundred-neuron-scale nervous system functions. If we had good models for neurons and a wiring diagram of a brain, we could emulate it as a straightforward network simulation. The problems of Artificial Intelligence would be reduced to merely instrumentally- and computationally-daunting work.
Alternatively we could try to implement the brain's function at much larger than edge-detector grain. The solution space expands and with it the difficulty of finding globally efficient algorithms, but their computational requirements decrease. Perhaps programs implementing humanlike intelligence in a highly abstract way are possible on existing computers, as AI traditionalists imagine. Perhaps, as they also imagine, devising such programs requires lifetimes of work by world-class geniuses.
But it may not be so easy. The most efficient programs exhibiting human intelligence might exceed the power and memory of present PCs manyfold, and devising them might be superhumanly difficult. We don't know: the pool is extremely murky below the ripples, and has not been fathomed.
(Very powerful optimizing compilers could conceivably blur grain sizes by transforming neuron-level brain simulation programs into super-efficient code that preserves input-output behavior but resembles traditional AI programs. Such compilers would surely need superhuman mental power (they would be singlehandedly solving the AI problem, after all), but perhaps of a relatively simple, idiot-savant, kind.)
Each approach to matching human performance is interesting intellectually and has immediate pragmatic benefits. Reasoning programs outperform humans at important tasks, and many already earn their keep. Neural modeling is of great biological interest, and may have medical uses. Efficient perception programs are somewhat interesting to biologists, and useful in automating factory processes and data entry.
But which will succeed first? The answer is surely a combination of all those techniques and others, but I believe the perception route, currently an underdog, will play the largest role.
Reasoning-type programs are superb for consciously explicable tasks, but become unwieldy when applied to deeper processes. In part this is simply because the tasks deep in the subconscious murk elude observation. But also, the deeper processes are quantitatively different. A few bits of problem data ripple across the conscious surface, but billions of noisy neural signals seethe below. Reasoning programs will become more powerful and useful in coming decades, but I think comprehensive verbal common sense, let alone sensory understanding, will continue to elude them.
Entire animal nervous systems, hormonal signals and interconnection plasticity included, may become simulable in coming decades, as imaging instrumentation and computational resources rapidly improve. Such simulations will greatly accelerate neurobiological understanding, but I think not rapidly enough to win the race. Valentino Braitenberg, who analyses small nervous systems and has designed artificial ones, notes the rule of "downhill synthesis and uphill analysis" -- it is usually easier to compose a circuit with certain behaviors than to describe how an existing circuit manages to achieve them. Meager understanding (and thus limited means to modify designs), the cost of simulating at a very fine grain, and ethical hurdles as simulations approach human scale will all slow the applications of neural simulations. But robot toys following in Aibo's pawprints should be interesting!
No human-scale intelligence (as far as we know) ever developed from conscious reasoning down, nor from simulations of neural processes, and we really don't know how hard doing either may be. But the third approach is familiar ground.
Multicellular animals with cells specialized for signaling emerged in the Cambrian explosion a half-billion years ago. In a game of evolutionary one-upmanship (there's always room at the top!) maximum nervous system masses doubled about every 15 million years, from fractional micrograms then to several kilograms now (with several abrupt retreats, often followed by accelerated redevelopment, when catastrophic events eliminated the largest animals).
Our gadgets, too, are growing exponentially more complex, but 10 million times as fast: human foresight and culture enable bigger, quicker steps than blind Darwinian evolution. The power of new personal computers has doubled annually since the mid 1990s. The "edge operator" estimate makes today's PCs comparable only to milligram nervous systems, as of insects or the smallest vertebrates (e.g. the 1 cm dwarf goby fish), but humanlike power is just thirty years away. A sufficiently vigorous development with well-chosen selection criteria should be able to incrementally mold that growing power in stages analogous to those of vertebrate mental evolution. I believe a certain kind of robot industry will do this very naturally. No great intellectual leaps should be required: when insight fails, Darwinian trial and error will suffice -- each ancestor along the lineage from tiny first vertebrates to ourselves became such by being a survivor in its time, and similarly ongoing commercial viability will select intermediate robot minds.
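These growth figures are easy to sanity-check. The inputs below are the ones given in the text (the doubling times, the half-billion-year span, the edge-operator PC estimate, the 100-trillion-calc/s human figure); the result is a rough consistency check, not a derivation:

```python
import math

# Evolution vs. technology: nervous-system mass doubled every ~15 million
# years; PC power has doubled roughly annually since the mid-1990s.
speedup = 15_000_000 / 1
print(int(speedup))            # 15000000 -- "10 million times as fast", to order of magnitude

# Half a billion years of doublings, starting from a fractional microgram:
doublings = 500_000_000 / 15_000_000   # ~33 doublings
mass_kg = 1e-9 * 2**doublings          # 1e-9 kg = one microgram
print(mass_kg)                         # on the order of 10 kg: consistent with "several kilograms"

# From an insect-scale PC (~1e8 calc/s by the edge-operator estimate) to the
# 100-trillion-calc/s human figure, at one doubling per year:
years = math.log2(1e14 / 1e8)
print(round(years))                    # ~20 annual doublings, within the essay's thirty-year horizon
```

The figures line up to within the factor-of-a-few slack the essay itself allows.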
Building intelligent machines by this route is like slowly flooding puddles to make pools. Existing robot control and perception programs seem muddy puddles because they compete in areas of deepest human and animal expertise. Reasoning programs, though equally shallow, comparatively shine by efficiently performing tasks humans do awkwardly and animals not at all. But if we keep pouring, the puddles will surely become deeper. That may not be true for reasoning programs: can pools be filled surface down?
Many of our sensory, spatial and intellectual abilities evolved to deal with a mobile lifestyle: an animal on the move confronts a relentless stream of novel opportunities and dangers. Other skills arose to meet the challenges of cooperation and competition in social groups.

Elsewhere I've outlined a plan for commercial robot development that provides similar challenges. It will require a large, vigorous industry to search for analogous solutions. Today the industry is tiny. Advanced robots have insectlike mentalities, besting human labor only rarely, in exceptionally repetitive or dangerous work. But I expect a mass market to emerge this decade. The first widely usable products will be guidance systems for industrial transport and cleaning machines that three-dimensionally map and competently navigate unfamiliar spaces, and can be quickly taught new routes by ordinary workers. I have been developing programs that do this. They need about a billion calculations per second, like the brainpower of a guppy!

Industrial machines will be followed by mass-marketed utility robots for homes. The first may be a small, very autonomous robot vacuum cleaner that maps a residence, plans its own routes and schedules, keeps itself charged and empties its dustbag when necessary into a larger container. Larger machines with manipulator arms and the ability to perform several different tasks may follow, culminating eventually in human-scale "universal" robots that can run application programs for most simple chores. Their 10-billion-calculation-per-second lizard-scale minds would execute application programs with reptilian inflexibility.
This path to machine intelligence, incremental, reactive, opportunistic and market-driven, does not require a long-range map, but has one in our own evolution. In the decades following the first universal robots, I expect a second generation with mammallike brainpower and cognitive ability. They will have a conditioned learning mechanism, and steer among alternative paths in their application programs on the basis of past experience, gradually adapting to their special circumstances. A third generation will think like small primates and maintain physical, cultural and psychological models of their world to mentally rehearse and optimize tasks before physically performing them. A fourth, humanlike, generation will abstract and reason from the world model. I expect the reasoning systems will be adopted from the traditional AI approach maligned earlier in this essay. The puddles will have reached the ripples.
Robotics should become the largest industry on the planet early in this evolution, eclipsing the information industry. The latter achieved its exalted status by automating marginal tasks we used to call paperwork. Robotics will automate everything else!
THE REALITY CLUB
Hans Moravec stops his speculations at the "fourth, humanlike, generation of robots." However, most interesting to me is the fifth generation that surpasses humans in every conceivable (and inconceivable) endeavor including art, philosophy, and science. What new realms of thought and reality will these robots, "the Transcendent," explore? Many of them will probably escape into virtual realities superior in many respects to ordinary reality. By this time, humans will have become obsolete, ancient ants trapped in amber, compared to the Transcendent. Eventually these robots will have little need for bodies and will consist of software entities alone. They may love, explore, and create, much as we do, but they will also ask questions about the universe that we cannot possibly ask, limited as we are by our three pounds of wet brain. Some of the Transcendent will choose to interact with physical reality to find ways to ward off the eventual destruction of the Earth and the fading of the universe as it expands. Others will submerge into artificial worlds separated from us by the filmiest of veils. Will their virtual realms be shared? Or will each of the Transcendent create a private pocket universe sheltered from others but containing other intelligences of their own design? I don't know the answer, but whatever the Transcendent become, they will live immensely long lives. I wish I could become one of the Transcendent rather than turn to dust with my organic friends.
Robots Yes, But Cyborgs Too.
Whilst applauding much of Professor Moravec's vision, I want to question the blunt assertion that robotics will eclipse the information industry, and the attendant image of that industry as simply about the automation of marginal tasks.
The true value of paperwork, text and symbols, it seems to me, lay in the way they actively transformed certain kinds of (otherwise daunting) tasks into ones that brains like ours could easily cope with. Think of the way pen and paper enabled us to store, share, and re-inspect our own thoughts and arguments. Think of the way learning symbols for complex relations (even number, odd number etc.) enabled us to discover even more complex relations (prime number and so on), a process quite analogous to the way that the creation of stocks and shares opened up the spaces for trading in futures, then options on futures, and so on. The cascades of thought and transformations of problems made possible by external symbols and paperwork are not, I think, just the icing on the contemporary cognitive cake. They are the source of much of its distinctive character and power. A biological brain, equipped with pen and paper, can think thoughts, and make discoveries, that the unaugmented human brain would never be able to construct.
The major value of the so-called information industry, I thus suggest, likewise lies not in its capacity to automate what we can pretty much already do, but in its own potential to once again profoundly transform the spaces of human reason, by making available new tools to structure and refine our thinking. I have in mind, for example, the use of games like SimCity which teach young minds to think better about complex, decentralized systems, or the use of various kinds of computer-aided design and drafting techniques, or the provision of more and more faintly intelligent and inter-communicating devices to aid our work and leisure, or the use of personalized, continuously running software agents to search the web for items of interest, and so on and so on. Thus where Professor Moravec sees only a future in which robots become increasingly intelligent, I foresee one in which we, too, become more intelligent courtesy of various kinds of complex coupling between human brains and the technological prostheses we create: prostheses which in turn create us.
The information industry, I suggest, is not simply about the automation of dumb and marginal tasks. It is about the creation of new ways to literally upgrade human mindware, it is about the planting of new seeds for future kinds of human-machine symbiosis. Our brains, more than those of any other creature on the planet, are plastic and labile, ever-ready to dovetail some of their modes of operation to the particular transformations, tricks and resources made available by the technologies that silently surround them and inform their growth.
Robots yes. But Cyborgs too. And the Cyborgs are us.
Web Robots not Physical Robots are the Key to AI
Moravec gives a generally sound analysis of the challenges of AI and the strengths and weaknesses of various approaches to overcoming them. But he stumbles, in my view, on one crucial point: his emphasis on robotics as the magic solution to AI. This focus on the physical world may have seemed obvious in the 70s, but here in the 21st century, as communication networks expand and the era of virtual reality dawns, it's a severely limited perspective.
It's absolutely correct that true intelligence requires embodiment. Physical-world robotics provides a familiar approach to embodied intelligence. But embodying an AI mind in an Internet agent sidesteps a lot of nasty mechanical and electrical engineering issues, and allows one to focus on mind design and mind engineering.
I believe that the magic solution to AI, insofar as there is one, is not robotics but the Net. Web robots and more sophisticated Internet agents, not physical robots, are the ideal bodies for the first generation of real AI systems.
Of course, embodying AI in Internet agents doesn't solve all the hard problems of AI. It only solves two of them: where to find a big enough brain for an AI system, and how to embody an AI system within a world that it can fluidly perceive and manipulate. This still leaves the problem of how to actually structure a digital mind: how to build the software (or specialized hardware). In this respect, all the Net offers is a metaphor: the mind as a self-organizing network.
The "Internet as AI brain" is a fairly simple point, but Moravec chooses not to emphasize it. He points out, correctly, that simulating the detailed functioning of the human brain on contemporary computer hardware is very difficult, requiring a scale of processing power equal to millions of PC's. But he doesn't note that, through distributed processing across the Internet, it's possible to actually harness the power of millions of PC's, right now. Distributed.net and SETI@home started using the latent computing power of the Net; various start-up firms are now following in their footsteps, and this is only the beginning.
The "mind as network" metaphor is a powerful one. Mind is a massively parallel self-organizing system of interacting, intertransforming actors, many of them specialized to particular domains or particular processes. It demands a complex-systems-theoretic analysis. If a sufficiently deep and careful analysis of mental processes is carried out, in this vein, one discovers that the division between reasoning-based AI and neural-net based AI is largely bogus; reasoning emerges in a clear and detailed way as a statistical emergent from neural net dynamics. The network approach cuts through the apparently unresolvable knots set up by traditional AI theorists.
Moravec suspects that "devising such programs requires lifetimes of work by world-class geniuses." My claim, on the other hand, is that these lifetimes of work have already been done by very clever computer scientists working outside the accepted mainstream of AI. In my view, the task he describes can be accomplished using existing ideas from systems theory, complexity science, and out-of-the-mainstream AI. Implementing such a system will require a core "mind network" of hundreds of PC's, not millions, though this core mind network may enhance its intelligence by dispatching hundreds of small learning problems to millions of distributed PC's operating in a peer-to-peer network.
In short, Moravec foresees a path to AI that begins with simple robots like robotic lawnmowers and vacuum cleaners, and progresses eventually to human-level intelligence. I say: Sure, this can work, but it's an unnecessarily long and difficult path. There is another, shorter one, which is going to be followed first. The incremental development of intelligent robots which Moravec describes will take place in the context of an increasingly intelligent population of Internet minds.
A minor (or not so minor) quibble. Hans Moravec offers "humanlike performance" as the acme of intelligent behavior. Maybe it's the best model of general-intelligence behavior we have now, but its flaws are serious, possibly catastrophic. Moreover, he doesn't distinguish between collective human intelligent performance (a cumulative culture, a team of engineers) and an individual's performance. If he means individual human performance, we should ask whose? The average member of congress? The average theoretical physicist? The average dry cleaner?
What I'm sorry to see left out here is a point of view that says insisting on "humanlike performance" from our artificial intelligences is a bit like Columbus insisting that he reached "the Indies." He didn't. The place he did reach (the U. S. Bureau of Indian Affairs notwithstanding) was at least as interesting as the Indies, and changed history. Artificial intelligence is growing as interesting as human intelligence, and has already begun to change history.
Rafael Nunez, William H. Calvin on MIRROR NEURONS and imitation learning as the driving force behind "the great leap forward" in human evolution by V.S. Ramachandran
From: Rafael Nunez
Date: July 16, 2000
For a number of years I have admired Ramachandran's work. However, after reading his essay on mirror neurons I feel quite disappointed (... and, yes, I am aware that his piece was written for a chatty environment and not for an academic journal. My comments are also written in this spirit).
To avoid confusions, I must first say that I have no problems with Ramachandran's opening questions. Whether they are original or not, I find them interesting. I don't have problems either with his closing remarks ("So it makes no more sense to ask 'Why did sophisticated tool use and art emerge only 40,000 years ago even though the brain had all the required latent ability 100,000 years earlier?' than to ask 'Why did space travel occur only a few decades ago, even though our brains were pre-adapted for space travel at least as far back as Cro Magnons?'"). In fact, George Lakoff and I have defended a similar position in our work on the embodied nature of mathematics, and the astonishing development of mathematics in the last century.
So, I have problems neither with Ramachandran's opening questions, nor with his closing remarks (other than the very last sentence: "I regard Rizzolati's discovery ... as the most important unreported story of the last decade"). But as it happens with sandwiches, what matters is often the stuff in the middle. That is where I have problems digesting Ramachandran's piece. Here is why.
1) His central prediction can be quite harmful.
Ramachandran opens with an enthusiastic prediction: "mirror neurons will do for psychology what DNA did for biology". I understand his enthusiasm, especially when seen from the perspective of his field, "visual psychophysics" (as he calls it), where the operationalization of relevant variables pushes to (sometimes extreme) reductionism.
But I think, as a scientist, one should be more cautious. Ramachandran's prediction in fact has the potential to be quite harmful. It makes me think of those enthusiastic predictions made in the sixties by Herbert Simon, Marvin Minsky, and others, regarding the wonders of Artificial Intelligence, of General Problem Solving theory, and so on. The problem then was not that those influential predictions turned out to be false (in fact they never even came close to being true!). The problem was that in the meantime they did a lot of harm to the study of the richness, the subtleties, the dynamism, and the complexity of the human mind. In many ways we are still paying the price of thinking through the eyes of those predictions.
Although I don't think Ramachandran's prediction by itself can be that harmful, I think that reductionistic and simplistic predictions of this sort, when made by "prominent" people, can be quite dangerous. Paradoxically, I have to admit, in some (depressing) sense Ramachandran's prediction may be right. We may now, in psychology, spend futile time studying the sophistication of culture and mirror neurons in the way a number of biologists wasted their time looking for the genes responsible for being a criminal or a great basketball player.
2) His "necessary but not sufficient" condition is ambiguous.
Ramachandran's piece is articulated around the centrality (perhaps primacy?) of mirror neurons and imitation learning as "the driving force behind 'the great leap forward' in human evolution". His opening tone is quite radical. In fact, right after asking his introductory questions he says "The solution to many of these riddles comes from an unlikely source: the study of single neurons in the brain of monkeys". Notice that he does not say "some preliminary suggestions come from", or "some pieces of the puzzle may be provided by", or anything like that. He categorically says "The Solution ... comes from". Later in the essay he washes out this dramatic statement by saying that "mirror neurons obviously cannot be the only answer to all these riddles of evolution" (good to hear that!), and that "mirror neurons are necessary but not sufficient".
My impression is that, in order to be consistent with his enthusiastic prediction, he would love to see mirror neurons as being the necessary and sufficient condition, or perhaps the main necessary condition required to address his questions (in particular if it is supposed to be "the most important unreported story of the last decade"), but he realizes that things are not that simple. The result is ambiguity.
3) Necessary but not sufficient, ... but how necessary?
When specifying that mirror neurons are necessary but not sufficient, Ramachandran mentions what for me is the real (and perhaps most) relevant issue: "After all rhesus monkeys and apes have them [mirror neurons], yet they lack the cultural sophistication of humans". From this, it seems clear to me that if we want to understand the "great leap", the emergence of language, and of cultural sophistication, etc., we can't start by saying that "the solution" to these questions comes from the study of single neurons in the brains of monkeys. To me, this is analogous to what I call the base-ten-arithmetic-because-of-ten-fingers argument: We developed our usual arithmetical base ten system, because we have ten fingers. Well, rhesus monkeys and apes also have ten fingers, and as far as we can tell they haven't developed anything even close to an arithmetic digital system.
In fact, over tens of thousands of years, there have been literally millions of fellow members of our species Homo sapiens who, although having ten fingers, never operated with base-ten arithmetic (or with any sophisticated arithmetic at all!). So where mirror neurons are concerned, I would give them no more relevance in explaining Ramachandran's questions than, say, the fully opposable thumb (and its neuromuscular complexity) of humans. Unlike mirror neurons, a fully opposable thumb is something other primates lack. Moreover, there are a number of plausible accounts of how the uniqueness of the human thumb may have shaped the human brain, language, and culture. The scientific question then remains open: What allowed modern human primates, and not rhesus monkeys and apes, to achieve cultural sophistication? ... certainly not mirror neurons.
4) The survival value of language is not obvious.
As Jean-Louis Dessalles argues in his recent book Aux Origines du Langage, many scholars such as Steven Pinker, Philip Lieberman, Derek Bickerton, and others assume that the survival value of language is "obvious". Ramachandran joins the list when he says "Unlike many other human traits such as humor, art, dancing or music the survival value of language is obvious: it helps us communicate our thoughts and intentions". But is it really obvious? It is certainly easier to say that something is obvious when we don't know how to actually explain it. If it is so obvious, why is it that thousands of other species didn't develop language? Why didn't even the common chimp or the bonobos make it, even though their DNA differs from ours by only about 1.6%?
From an anthropocentric view in which we see our species as the ultimate achievement of evolution, other species look somewhat "incomplete". Language then is seen as an essential feature of being complete, or at least more "advanced". When seen from this perspective the advantages of having language seem obvious. But telling the story from the perspective of what we know now, in which events are seen as inevitably leading to our present state, is completely misleading. By assuming that the survival value of language is obvious we hide the real scientific problem of the evolution of language (for an interesting discussion see Dessalles). In fact, today there is no scientific theory capable of satisfactorily explaining the mysterious problem of the evolution of language.
In short, the discovery of mirror neurons is certainly an extraordinary achievement (as was the discovery of DNA). It may even be the case that it is indeed an important unreported scientific story, ... but I am afraid its importance is not supported by Ramachandran's speculations on their key role in human evolution.
From: William H. Calvin
Date: July 22, 2000
For a half century, we have been aware of a big puzzle in human evolution. The Upper Paleolithic art and tools speak, as Richard Leakey said, "of a mental world we readily recognize as our own." It bursts on the scene rather suddenly about 40,000 years ago.
Ian Tattersall noted that this "stands in dramatic contrast to the relative monotony of human evolution throughout the five million years that preceded it. For prior to the Cro-Magnons, innovation was sporadic at best." Adding to the puzzle is the fact that anatomically-modern Homo sapiens (big brain and all) was around for about 100,000 years before this efflorescence of cave art and fine toolmaking that signals the emergence of behaviorally-modern Homo sapiens.
Mirror neurons in the frontal lobe are the latest intriguing possibility for explaining the "Mind's Big Bang." They are neurons, located in the monkey's version of Broca's area, which might be involved in mirroring, that tendency of two people in conversation to mimic one another's postures and gestures. As such, they are candidates for what might be involved in the cultural spread of communicative gestures and perhaps even vocalizations.
First, imagine a neuron which buzzes away during certain actions and not others, say when the monkey picks up an object with its finger tips (but not when digging it out of a hole, when other neurons might buzz away instead). Some such neurons (the ones called mirror neurons) also buzz away when the monkey merely sees another monkey (or even human) do the same thing, often only out of the corner of its eye.
It isn't just the visual motion that stirs up these neurons; making the grasping gesture in mid-air won't work. It seems to require an object as well, much as most verbs require a noun for an associated theme to be expressed at the same time. And object movement alone won't stir the mirror neuron, as when the experimenter picks up the object with forceps instead of fingers. Sure sounds verb-like to me, and the mirror neurons are in roughly the part of the brain that, in humans, lights up during find-the-right-verb tasks.
The mirror neurons would seem to be just what you need for mirroring gestures, the confluence of the particular sensory representation and the particular movement-sequence production. And why has this stirred so many of us into enthusiastic extrapolations? Because humans are extraordinary mimics, compared to monkeys and apes, and mimicry surely helps in the cultural spread of language and toolmaking. So have we found the "seat of the meme"?
The Italian neurophysiologist who discovered the monkey mirror neurons, Giacomo Rizzolatti, does not claim so much. Indeed, he takes pains to bring flights of fancy back to the more limited hard data. At a recent conference on mirror neurons in Germany, every time someone mentioned mirror neurons as part of a neural circuitry for "see one, do one" mimicry, Rizzolatti would point out that they could equally be involved in simply understanding. Just as in language, where we know that there is a big difference between understanding a sentence and being able to construct and pronounce it, so mirror neurons might just be in the understanding business rather than the movement mimicry business. I take Rizzolatti's point, but temptation remains, so let me address mirroring's alternatives.
The British anthropologist Kenneth Oakley suggested in 1951 that the art-and-tools efflorescence might have been when fully modern language appeared on the scene. Was this because our ancestors got a lot more mirror neurons 40,000 years ago? Or did an already augmented human mirroring system merely help spread another biological or cultural invention in a profound way?
The latter would likely suffice because the profound improvement in language concerns structuring, not words or phrases per se. Mirroring had likely been aiding in the gradual development of novel vocalizations and words for a million years, and likely even short sentences (for which you don't need structural support). A great deal of what's important for language doesn't involve syntax, but structuring really makes long sentences fly, and likely complicated thoughts as well. Let me explain.
There are certainly major predecessors to structured language. Body postures communicate mood and intention (dogs communicate dozens), and arm or face posture sequences provide even more bandwidth for broadcasting your feelings and intentions. Species-specific vocalizations get a big addition from culturally-defined "words" (whether signed or spoken) whose learned meaning depends much more on context. Next come word combinations, such as short sentences. So far we're mostly talking about what Derek Bickerton in 1990 called "protolanguage," and this unstructured language (you can guess the meaning without any help from word order or inflections) is what you see in toddlers and speakers of pidgins.
Long sentences, however, are too ambiguous without some mutually understood conventions about internal structuring into phrases and clauses. A clause may be embedded in a phrase, and vice versa, ad infinitum. Such conventions constitute syntax and each dialect has a different way of doing things. "Universal grammar" is simply the tendency of all human groups to use a restricted set of structuring possibilities; not every scheme is possible, and that likely has something to do with the way in which the human brain is wired.
Once you have a syntax (kids pick it up between 18 and 36 months), you can convey complicated thoughts. And hopefully think them first, so as to avoid that blues lament of Mose Allison, about when "Your mind is on vacation but your mouth is working overtime."
It is this last step up to syntax that is the usual candidate for the mind's big bang, not the language-lower-case stuff that, though essential, falls short of structuring per se. And not augmented cultural spread, as mirror neurons might help with. But I think that it isn't syntax per se. Rather, it may be all the higher intellectual functions that emerged so dramatically 50,000 years ago; the same neural machinery is likely used by syntax, planning, multipart music, chains of logic, and our fascination with discovering hidden patterns in our sensory environment. Improve one, improve all of structured thought.
But while I'd guess that augmented structuring of thought was likely responsible for the suddenness of the 40,000-years-ago transition, you've still got to spread it around. Was it a matter of spreading genes or memes? You can't rule out genes at this point, given that newly-discovered Y chromosome bottleneck at about the same time, but culture might well have sufficed. Mirroring novel sequences, as in learning how to dance, may have been the key to spreading structured thought around the world so quickly as a cultural conquest.