EDGE


EDGE 55 — May 22, 1999

THE THIRD CULTURE

THE MILLION-DOLLAR SCIENCE PRIZE

Richard Dawkins, Stewart Brand, Paul C. Davies, Marc D. Hauser, Steven Pinker, Nicholas Humphrey, Philip W. Anderson

Perhaps the prize would be given for the discovery of a new planet, a new element, the synthesis of life in a test-tube, the first contact with extraterrestrial intelligence, or a cure for cancer. Let us also say that the million-dollar prize will be given in your name. What would you give it for, and why?

WHAT IS THE MOST IMPORTANT INVENTION OF THE PAST 2,000 YEARS?

A book spin-off of the continuing EDGE Inventions project will be published globally in January 2000. In the meantime, new and noteworthy commentaries continue to arrive. Here are Gino Segre (the lens), Colin Tudge (the plough), Murray Gell-Mann (disbelief in the supernatural), and Patrick Bateson (leaping monks).

THE REALITY CLUB

Pamela McCorduck on Marc D. Hauser

However, I'm puzzled by what he calls Natalie Angier's misunderstanding of evolutionary psychology based on the basic biology that leads to sexual promiscuity in humans, what Hauser calls "a nasty asymmetry" — that sperm are cheap to produce and eggs are expensive, and that therefore men have the freedom to be promiscuous (it's cheap), and women do not (it's costly).

Arnold Trehub responds to Stanislas Dehaene

Neuroscience does not yet have the tools which might enable us to lay bare the operant microscopic machinery of the human brain's cognitive systems. But explicit models of biologically plausible mechanisms can be tested for their ability to perform in a way consistent with human cognitive performance. The structure and dynamics of these models can then shed light on the constraints that operate on the brain/mind.

George Lakoff responds to Dehaene, Blakeslee, Hauser, and Trehub

Conceptual metaphorical mappings are not primarily matters of language; they are part of our conceptual systems, cross-domain mappings allowing us to use sensory-motor concepts and reasoning in the service of abstract reason. Children acquire conceptual metaphorical mappings automatically and unconsciously via their everyday functioning in the world.

[8,729 words]


THE THIRD CULTURE


THE MILLION-DOLLAR SCIENCE PRIZE

Richard Dawkins, Stewart Brand, Paul C. Davies, Marc D. Hauser, Steven Pinker, Nicholas Humphrey, Philip W. Anderson

For the third in a series of EDGE special events, which began with "The World Question Center" and was followed by "What Is The Most Important Invention Of The Past 2,000 Years?", I am asking for your participation in a new project: "The Million-Dollar Science Prize."

Let us say for purposes of this exercise that a nonprofit foundation is interested in establishing a million-dollar science prize to be given for discoveries, accomplishments, etc. that are easily and simply verified and for which no interpretation is necessary. The award process will be objective. No committees will have to decide whether or not a hurdle has been cleared.

Perhaps the prize would be given for the discovery of a new planet, a new element, the synthesis of life in a test-tube, the first contact with extraterrestrial intelligence, or a cure for cancer. Let us also say that the million-dollar prize will be given in your name.

What would you give it for, and why?

I asked a few EDGE regulars to jump-start the project and help set the tone. Responses from Richard Dawkins, Stewart Brand, Paul Davies, Marc Hauser, Steven Pinker, Nicholas Humphrey, and Philip W. Anderson are below.

I look forward to hearing from you.

JB


RICHARD DAWKINS

I have made the following minor suggestion:

Let us say for purposes of this exercise that a nonprofit foundation is interested in establishing a million-dollar science prize to be given.....

(Change:) ..... for discoveries, accomplishments, etc. that are easily and simply verified and for which no interpretation is necessary. The award process will be objective. No committees will have to decide whether or not a hurdle has been cleared.

(To:) ..... to the first person to achieve a particular discovery or accomplishment, defined sufficiently clearly that value judgements by panels of judges will be superfluous.

RICHARD DAWKINS is an evolutionary biologist and the Charles Simonyi Professor of the Public Understanding of Science at Oxford University; Fellow of New College; author of The Selfish Gene, The Extended Phenotype, The Blind Watchmaker, River Out of Eden, Climbing Mount Improbable, and the recently published Unweaving the Rainbow.


STEWART BRAND

The idea and the letter are good. It does need to be defined in terms of time... Now? In the next few years? The next 10 years? The next century?

For instance, if it was now I would award the prize to Michael West for his work on telomerase and embryonic stem cells, leading toward near-immortality in large mammals such as we.

STEWART BRAND is founder and original editor of The Whole Earth Catalog, cofounder of The Well, cofounder of Global Business Network, and cofounder and president of The Long Now Foundation. He is the author of The Media Lab: Inventing The Future At MIT, How Buildings Learn, and The Clock Of The Long Now: Time and Responsibility (MasterMinds Series).


PAUL C. DAVIES

I'm not sure if you want me to nominate what I would give the prize for, or simply comment on the wording of the question. The problem with a goal-specific prize is that if the goal is already spectacular, a million-dollar prize won't be much incentive. For example, a cure for cancer would bring in a billion dollars in patents. Likewise a new spacecraft propulsion system. Most clear-cut advances would also secure the Nobel prize.

The only goal-specific science/engineering prizes I know about are either issued for PR reasons for something that will never be achieved (e.g. the Skeptics' $100,000 prize for the first clear demonstration of a repeatable paranormal phenomenon), or awarded for a sport-related activity (e.g. the man-powered flight prize).

So a million-dollar science prize could be used in the same vein. Giving it for contacting ET might spur research and thinking in this area, but in my opinion it would fall into the "paranormal" category, i.e. be unlikely ever to be collected. Choosing a fascinating but peripheral topic is another possibility. Examples might be the first nanomachine (according to some criterion), the first superconductor to work above x degrees, the discovery of the first microbe that can survive for one hour at y degrees (150C?), a proof of Goldbach's conjecture in mathematics... All these are worthy challenges in their own right, but are not mainstream scientific advances. The question is, who would see value in instituting such a prize? Some years ago an oddball character named Babson founded a gravity prize. He was hoping someone would invent antigravity. Now it is awarded to worthy but dull essays on more conventional gravitational topics.

My own choice is: award it to the first person to derive the fine structure constant to all measured decimal places from a credible physical theory.
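[For reference, an editorial addition rather than part of Davies's text: the fine structure constant is the dimensionless combination of fundamental constants

\alpha = \frac{e^{2}}{4 \pi \varepsilon_{0} \hbar c} \approx \frac{1}{137.036}

Its value is known from experiment to many decimal places; Davies's challenge is to obtain that number from a credible theory rather than from measurement.]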


PAUL C. DAVIES is a physicist, writer and broadcaster, now based in South Australia. He is the author of some twenty books, including Other Worlds, God And The New Physics, The Edge Of Infinity, The Mind Of God, The Cosmic Blueprint, Are We Alone? and About Time. Davies was awarded the Templeton Prize for progress in religion, the world's largest prize for intellectual endeavor.


MARC D. HAUSER

I am at home, sitting on my deck, looking at our new pear and apple trees; it is 70 degrees out and sunny. The million-dollar prize, hmmmm?

My own view is that the prize should be given in an area that currently lacks such prizes. Thus, for example, we should exclude topics that fall within the Nobel. I also think it should be a prize that will provide additional creative freedom, whether this be in the form of increasing time by decreasing academic responsibilities, funding for equipment, or what have you. In this sense, I very much like the spirit of the MacArthur prize. This prize is beautiful because it looks at new people, perhaps on the fringe, who have already begun to make contributions, both theoretical and empirical. I like the idea, therefore, of a million-dollar prize for a young non-traditionalist whose work sits on the edge of several fields and who could use the funds to push an idea or line of research. I, for one, would love to buy my own monkey island!

MARC D. HAUSER is an evolutionary psychologist, and a professor at Harvard University where he is a fellow of the Mind, Brain, and Behavior Program. He is a professor in the departments of Anthropology and Psychology, as well as the Program in Neurosciences. He is the author of The Evolution Of Communication, and Wild Minds: What Animals Think (forthcoming).


STEVEN PINKER

I agree with Marc — as worded, the danger is that the million-dollar prize would either never be offered, or be offered to someone who has already won the Nobel, making it pointless. A sum of money like that should really be offered to someone who would not otherwise get recognized or rewarded. One possibility is for a really good theoretician (one whose work is closely tied to empirical matters, not just a system-builder), since granting agencies, prize associations, and many universities fail to give them proper recognition. Paul Ewald, Ray Jackendoff, and Robert Trivers are the kind of people I have in mind.

Another, and perhaps even better possibility, is to reward someone who is studying an underfunded, crucially important, and possibly short-lived topic. Examples include:

• Human biodiversity (e.g., the human genome diversity project, getting DNA from aboriginal peoples)

• Studies of great ape behavior in the wild

• Moribund indigenous languages (languages are going extinct at an alarming rate)

• Hunter-gatherer studies, before the last hunter-gatherers disappear

• Hominid fossil-hunters

• Germ-hunters for diseases that might be caused by pathogens that no one has ever looked for (heart disease, schizophrenia, manic-depressive disorder)

The key thing is not to compete with, upstage, be upstaged by, or add nothing to areas and people already rewarded with big-name prizes.

STEVEN PINKER is professor in the Department of Brain and Cognitive Sciences at MIT; director of the McDonnell-Pew Center for Cognitive Neuroscience at MIT; author of Language Learnability and Language Development, Learnability and Cognition, The Language Instinct, and How The Mind Works.


NICHOLAS HUMPHREY

I love the idea of a new prize. In general I think "prizes" provide a great stimulus to creativity — exploiting people's deep-seated motivation to be first among their peers.

But I have to say I think the criteria for prize-winning work that you're suggesting are far too strict. There's hardly ever been in history a revolutionary discovery that was at the time "easily and simply verified and for which no interpretation was necessary." None of the greatest theoretical leaps-forward would have qualified. Galileo, Darwin, Einstein, Freud, would all have been excluded as prize-winners.

I'd suggest a bit of reverse engineering. Let's think whom we'd most like to see receiving the million-dollar prize, and then ask what it is about these individuals that makes them so special. In my own field, for example, Chomsky, Kauffman, Dennett, Hamilton, ... Surely no "objective, opinion-free" process could be counted on to home in on these stars. Instead we ought to trust our finely-trained human judgement about where we *now believe* that truth and beauty lie.

An alternative Nobel Prize committee is hardly what is wanted. But I don't doubt that John Brockman himself could put together a group of third culture intellectuals who would know a million-dollar scientist when they saw her or him.

I'm all for a return to non-objective, un-democratic elitism in science. It's what makes the world go round.

NICHOLAS HUMPHREY is a theoretical psychologist; professor at the New School for Social Research, New York; author of Consciousness Regained, The Inner Eye, A History of the Mind, and Soul Searching.


PHILIP W. ANDERSON

I have to be pessimistic.

There have been a number of cases of races toward a specified goal, and more often than not the result has been extraordinarily ambiguous. A claims it but only with 3.5-sigma statistics, while B waits to get greater accuracy — it has happened again and again in high energy physics. But also in condensed matter — flux quantisation came out a dead heat; and liquid N2 superconductivity is a wonderful mess — Mueller found a 35-degree material by absolutely pure dumb luck, but the jump to 90 degrees was made independently in no less than 4 laboratories, with Chu winning by at most a few hours, if that.

Parity violation was another dead heat settled by who was closest to the journal editor; and another contender, the sainted Miss Wu, neatly finessed her collaborators out of the picture by her close relations with the theorists who happened to be Chinese.

The other thing is that the important ideas, to my mind, are more often the answers to questions nobody yet knows how to phrase. A cure for cancer? You would either have to award it already or wait indefinitely — it will surely be some quite subtle concept if it ever comes. Understanding of consciousness? Minsky and Dennett think they are there already but they haven't convinced the rest of us. What would you have done about Darwin — which Darwin?

Committees are surprisingly necessary — they make mistakes but there is very little that is that simple. And the Nobel committee has at least been at it long enough to have acknowledged some of their mistakes.

Finally, I have one more carp — a million isn't nearly enough these days. It would not change the lifestyle of the kind of person likely to get it.

PHILIP W. ANDERSON is a Nobel laureate physicist at Princeton and one of the leading theorists on superconductivity. He is the author of A Career in Theoretical Physics, and Economy as a Complex Evolving System.


WHAT IS THE MOST IMPORTANT INVENTION OF THE PAST 2,000 YEARS?

Gino Segre, Colin Tudge, Murray Gell-Mann, Patrick Bateson

A book spin-off of the continuing EDGE Inventions project will be published globally in January 2000. In the meantime, new and noteworthy commentaries continue to arrive. Here are Gino Segre (the lens), Colin Tudge (the plough), Murray Gell-Mann (disbelief in the supernatural), and Patrick Bateson (leaping monks).

 


GINO SEGRE (the lens)

My choice for the greatest invention of the past 2,000 years is the lens. First of all, without lenses, you might not even be able to read this piece; even worse, you might never have been able to read at all if your vision had not been corrected. I remember Teddy Roosevelt's description of getting his first pair of glasses and suddenly having the world come into focus. Seeing clearly is of course no small matter, but it seems limited to pick eyeglasses as the greatest invention of the last 2,000 years, so my vote is for lenses big and small, alone and combined. The lenses we use to read the Universe or the intricacies of life are variations of those we use to absorb the written word.

I am going to start, however, with plain old spectacles. We don't really know when they first started being used. They were not uncommon in fourteenth-century Italy, and by 1600 there were specialized artisans who carefully ground lenses, keeping their tricks secret. One of them, a Dutch spectacle maker named Lippershey, noticed that a combination of two lenses made distant objects bigger. He tried to use this to get rich. He didn't succeed, but several of his two-lens devices were made. By 1609 one of the devices reached a transplanted Florentine named Galileo Galilei, who was teaching at the University of Padova. He pointed his device, or telescope as it was later called, at the night sky and looked out. He took his telescope apart, rebuilt it, improved it and looked some more. What he saw changed our view of the world. The Sun rotated around its axis, Venus revolved around the Sun, the Moon had mountains and valleys, Jupiter had four moons and the Milky Way was made up of vast numbers of stars. It was crystal clear that the old Ptolemaic vision of the Universe was wrong. Copernicus and Kepler were right, the Earth was not the center of the Universe and there was no going back. We were launched on our exploration of outer space.

It is a short journey from the telescope to the microscope. Not surprisingly, they were discovered at around the same time. After all, they are both just the simple piecing together of the right two lenses in correct positions. Galileo used the telescope brilliantly, but he also peered through a microscope of sorts. He saw flies the size of sheep and spots of dirt that looked like rocks, but he did not know what to make of it. In 1665 Robert Hooke published a best-seller called Micrographia. The book had a series of beautiful plates in it, Hooke's rendering of what he had seen with his microscope. There was a fly's eye, mold on the leaf of a rose, a picture of a louse and so on. All very pretty, but it did not lead to anything. The microscope was a tool in search of a problem. The problem eventually did develop, and it was nothing less than understanding the origins of life and of disease. This first came into focus, no pun intended, when Anton van Leeuwenhoek in 1678 made a lens good enough to get a magnifying power close to five hundred. At that point a whole rich substructure was revealed. A drop of pond water turned out to be filled with little "animalcules" swimming in it. Van Leeuwenhoek had discovered bacteria. It took another two hundred years to really understand what he had seen, but then it also took three hundred years to understand that the Milky Way was just one of many galaxies.

I have been saying the lens is the greatest invention of the past 2,000 years but an excellent lens had already been perfected over the course of millions of years by creatures so primitive they didn't even know how to make a fire. Despite this comparative ignorance, their lenses are as good as anything we can dream of making in the lab today. Of course I am talking about our own ancestors and the lens I am describing is our eye's lens. It was developed by that diabolically clever builder we call evolution. There are many places and ways to learn just what a good job evolution did, but my favorite is offered by Richard Feynman in a physics course he taught at Caltech. Given who Feynman was, none of the course is ordinary and some of it is extraordinary, the work of a true genius. He describes how light rays enter our eye and are immediately bent and focused toward the retina by a surface lens we call the cornea. After the first focusing the rays travel through a chamber filled with fluid and then meet the second focuser, known simply as the lens. This lens is exquisite, a thing of beauty. It is built up like an onion with transparent layers, slightly flatter toward the edges and with slowly varying bending power of light, all designed for optimal focusing. The curvature of the lens can be adjusted by muscles on the side and with a little luck the lens forms the perfect image on the best of all screens, the retina.

The retina is wired to the visual cortex in the brain and, voila, we see the picture. I have been implying that the brain and the retina are two separate things, but it may make more sense to talk of the retina as a piece of the brain because it does a lot of the information processing before sending on its results through the optic nerve to the cortex.

My answer for the greatest invention of the last 2,000 years is still the lens, but the greatest invention of all times is the brain which, incidentally, has managed to figure out how to use the lens it is already hooked up to and the lens it has learned how to build in its never-ending attempt to understand the Universe.

GINO SEGRE is a professor of Physics and Astronomy at the University of Pennsylvania. He was born in Florence, Italy, and raised in Florence and New York City. He has been a visiting professor at M.I.T. and Oxford, chair of the Physics and Astronomy Department of the University of Pennsylvania from 1987 until 1992, and Director of Theoretical Physics at the National Science Foundation in 1995. He is the author of Einstein's Refrigerator: Tales of the Hot and the Cold (forthcoming).


COLIN TUDGE (the plough)

I would very much like to add The Plough (or the digging stick — the principle is the same) to your list of "Inventions".

My thesis is that farming really began at least 40,000 years ago (not 10,000 years ago in the "Neolithic Revolution" as is generally supposed), but that for tens of thousands of years people 'merely' managed the environment in various ways, while at the same time getting a large proportion of their food by hunting and gathering. This management I have called "protofarming".

In fact you can do all of horticulture (which in effect means cultivation of individual plants, though the etymology means gardens) and pastoralism in protofarming mode. The economic switch came when people came to rely on
farming more than on hunting and gathering. The most significant technological shift was the act of breaking the soil — i.e., cultivating the field ("agri" — culture) as opposed to the ad hoc management of individual plants and herds of animals, in an essentially wild environment. It is when you break the soil that you really start to re-create the landscape, to produce crops on a mass scale, and to eliminate all other species (which increases reliance on farming still further).

Ploughing (or soil-breaking in general) leads in short to arable farming, which primarily means mass growing of cereals. The Old Testament shows how people hated this (arable farmers have an extremely bad press in the OT) and indeed regarded the breaking of soil as blasphemous. Cain was the arable farmer — perceived as the murderer (of the pastoral Abel) whose gift of corn was rejected by God. The "Neolithic Revolution" is not about the origin of farming; but it does reflect the birth of arable farming (i.e., agriculture in the strict etymological sense).

Ploughing has given us a world population of 6 billion, and transformed the world's landscape.

The plough is the most significant human invention of all.

(Together with the spear of course — which enables human beings to kill at a distance!).

 


MURRAY GELL-MANN (disbelief in the supernatural)

I thought about your question and came up with an answer right away, but I am not sure if my answer is suitable. For one thing, I don't know if it really refers to an invention of the last two thousand years. Most likely there were many people who thought about it before the year 2 BCE, and we may well have documentary evidence of that, although there might easily have been discoverers who were afraid to discuss it publicly.

In any case, the most important invention I can think of is disbelief in the supernatural, the realization that we are part of a universe governed entirely by law and chance. (Of course, the fundamental role of chance was not fully appreciated before the discovery of quantum mechanics.)

The deism to which some of our U.S. founding fathers subscribed was not altogether different, in that it involved a supernatural being that set the orderly universe in motion and then left it alone. In its pure form, though, what I am discussing is the complete elimination of the supernatural from our world picture.

MURRAY GELL-MANN is a theoretical physicist; Robert Andrews Millikan Professor Emeritus of Theoretical Physics at the California Institute of Technology; winner of the 1969 Nobel Prize in physics; a cofounder of the Santa Fe Institute, where he is a professor and cochairman of the science board; a director of the J.D. and C.T. MacArthur Foundation; author of The Quark and the Jaguar: Adventures in the Simple and the Complex.


PATRICK BATESON (leaping monks)

As I sit at my computer writing this whimsy, I realise how much of my life is spent peering at its pale screen. So much of my working life has been transformed by the user-friendly software that is now available. As an inveterate reviser, when I write by hand, I start to change my prose almost immediately after I have written something. Large chunks are crossed out, word orders are changed, sentences rearranged, paragraphs moved about. Before long the manuscript looks like a bird's nest. Producing a tidy typewritten copy is not at all easy after so many afterthoughts. The editing facilities of modern word-processing packages are so straightforward that manuscript bird's nests are a part of my past. The new technologies have been truly liberating. So my first thought was that the invention of friendly word processors was my candidate for this symposium. But wait a minute.

A good principle used by historians of technologies is to ask what had to be known in order for a particular development to have occurred. It is doubtful, for example, if desktop computers of the power and flexibility we now have would have been possible without the invention of the silicon chip. This approach to emerging technologies produces a fan of necessary developments or, more aptly, a root system branching outwards as the historian moves backwards in time. Some of these roots are undoubtedly more important than others, some certainly more enabling. Consider the computer on my desk again. It is inconceivable that such a machine would have been possible without electricity.

To be sure, Charles Babbage developed plans in the 1830s for what he called an analytical engine. His idea was that the machine would perform any arithmetical operation on the basis of instructions from punched cards, and would have a memory unit in which to store numbers, sequential control, and most of the other basic elements of the present-day computer. The analytical engine was not built according to Babbage's specifications for another 150 years. Its mechanical components meant that it was bulky, and the modern outgrowth of a Babbage machine would exclude both my desk and me from the room in which it sat - and it would do a fraction of what my liberating machine does. So, my candidate for the greatest invention of the last two thousand years is the harnessing of electricity.

The first device that could store large amounts of electric charge was the Leyden jar invented in 1745 by Pieter van Musschenbroek, a Leyden physicist. The jar was partially filled with water and contained a thick wire capable of storing a substantial amount of charge. One end of this wire protruded through the cork sealing the jar and was connected to a device generating friction and static electricity. Soon after the invention "electricians" were earning their living all over Europe killing animals with electric shock and devising other spectacles. In one demonstration in France a wire made of iron connected a row of Carthusian monks; when a Leyden jar was discharged, the white-robed monks leapt simultaneously into the air. The frivolities led to thought. Thanks to Ben Franklin in the United States and Joseph Priestley in England, experiments and theorising proceeded apace and, by the mid-19th century, the study of electricity had become a precise, quantitative science which paved the way for the technologies we now all take for granted.

We need electricity for keeping us cool in summer and warm in winter - though our ancestors would have been flabbergasted by the profligate way in which we do so. We use electricity for cooking much of our food and for freezing what we intend to eat later. We depend on it for transport, for communication, for entertainment, for running lives that bear no relation to the rising and setting of the sun. Of the major human appetites, only sex it seems is likely to be served by a power cut.

PATRICK BATESON is Professor of Ethology (the biological study of behaviour) at Cambridge University, the Provost of King's College Cambridge, a Fellow and the Biological Secretary of the Royal Society of London. He co-authored with Paul Martin Measuring Behaviour and Design For A Life: How Behaviour Develops (forthcoming, 1999). He has also edited or co-edited several books, including Mate Choice, The Development and Integration of Behaviour, Behavioural Mechanisms in Evolutionary Perspective and the series Perspectives in Ethology.


THE REALITY CLUB

Pamela McCorduck on Marc Hauser

Arnold Trehub on Stanislas Dehaene

George Lakoff replies to comments by Dehaene, Blakeslee, Hauser, and Trehub


From: Pamela McCorduck
Submitted: 4.21.99

Marc Hauser's interview was both a delight and a provocation. His work with animals brings fresh approaches to some of the most vexing questions facing the field of human cognition, and for that it was a joy to read.

However, I'm puzzled by what he calls Natalie Angier's misunderstanding of evolutionary psychology based on the basic biology that leads to sexual promiscuity in humans, what Hauser calls "a nasty asymmetry" — that sperm are cheap to produce and eggs are expensive, and that therefore men have the freedom to be promiscuous (it's cheap), and women do not (it's costly).

Does Hauser — do evolutionary biologists or psychologists — impute to humans some instinct that has told us about this cheap/costly ratio (real evidence of which must have come very recently indeed in our evolution)? Is it that we have therefore selected for females "naturally" choosing monogamy, and males "naturally" choosing promiscuity?

It's possible, I suppose — we've certainly heard the argument ad infinitum from guys who ought to know — but I take Angier's point to be that other interpretations of our mating patterns are at least as plausible. These alternative interpretations have the advantage of not confusing libido with procreation (connected of course, but by no means the same thing); nor confusing science with the social convenience of the long-dominant sex. In short, right or wrong, Angier brings to that particular issue the same kind of fresh and persuasive thinking that Hauser brings to cognition, and I'm surprised he doesn't see that.

..................................................................................

We have read your manuscript with boundless delight. If we were to publish your paper, it would be impossible for us to publish any work of lower standard. And as it is unthinkable that in the next thousand years we shall see its equal, we are, to our regret, compelled to return your divine composition, and to beg you a thousand times to overlook our short sight and timidity.

Rejection slip from a Chinese economics journal,
quoted in The Financial Times

PAMELA McCORDUCK is a writer; author of Machines Who Think; The Universal Machine; The Rise Of The Expert Company; Aaron's Code; and coauthor of The Fifth Generation; and The Futures Of Women.

 


Arnold Trehub responds to Stanislas Dehaene


From: Arnold Trehub
Submitted: 4.23.99

In response to Lakoff, Stanislas Dehaene asserts "...the real challenge is to find empirical domains in which the constraints linking brain and mind can be tracked down in a convincing manner."

What might guide our search for such domains, and what kinds of constraints should be the focus of investigation? We have an abundance of empirical evidence showing that damage to particular areas of the brain results in particular sensory and cognitive deficits. Recent work in brain imaging reveals selective localized patterns of heightened neuronal activity associated with particular cognitive tasks. So, in this sense, having intact and healthy brain tissue in these areas is a constraint on their correlated aspects of mind. But these findings shed little light on how the brain does its cognitive work. I think this is the real challenge. It is doubtful that we will get much of a handle on the constraints linking brain and mind without the formulation of minimal and plausible neuronal models that can be shown to perform competently over a range of cognitive tasks.

A particularly important aspect of mind that plays a role in many different cognitive functions is the representation of space. In The Cognitive Brain, I proposed a detailed neuronal mechanism (called the retinoid system) that can account for our veridical and imaginary representation of objects and their relationships in 3-D space, as well as our sense of self-location in egocentric space. A model of this kind might help explain Dehaene's observation about the difficulty people have in solving his topological puzzles.

Neuroscience does not yet have the tools which might enable us to lay bare the operant microscopic machinery of the human brain's cognitive systems. But explicit models of biologically plausible mechanisms can be tested for their ability to perform in a way consistent with human cognitive performance. The structure and dynamics of these models can then shed light on the constraints that operate on the brain/mind.

A case in point is the explanation of the seeing-more-than-is-there (SMTT) phenomenon by the retinoid model. The SMTT illusion is experienced if a figure is moved back and forth laterally behind an occluding screen with a very narrow vertically oriented aperture slit high enough to span the vertical dimension of the almost completely occluded figure behind the screen. As fixation is maintained on the center of the slit, one perceives a complete but horizontally contracted image of the hidden figure. Even though the retinal stimulus consists only of tiny, intermittently appearing line segments moving up and down on the vertical meridian, there is a vivid visual perception of the whole figure moving left and right, well beyond the narrow aperture. This striking illusory experience has puzzled investigators since it was first observed, but it can now be explained as a natural consequence of the neuronal structure and dynamics of the putative retinoid system. A unified spatial representation of the hidden stimulus is assembled in the brain from the sequence of its fragmentary inputs that are registered on the retina in the narrow aperture region and then shifted postretinally across adjacent retinoid cells, with the direction and velocity of translation driven by the detection of lateral motion in the aperture.

The retinoid mechanism imposes several constraints on how this phenomenon is experienced. The principal cells in the retinoid array that represent the hidden stimulus are autaptic neurons with short-term memory properties. (An autaptic neuron is a neuron that can restimulate itself for a brief period after its initial spike discharge by means of a synaptic connection from a branch of its own axon to its own dendrite.) Cells of this type require relatively rapid refreshing of direct stimulation if they are to maintain their discharge. Thus the coherent perception of a whole object suddenly breaks down to a pattern of vertically oscillating dots if the lateral oscillation of the occluded object is slower than approximately 2 cycles/sec (each translation phase approximately 250 ms). In addition, because there is a relatively fixed integration time for the interneurons effecting stimulus translation across the retinoid array, as the occluded figure moves faster, its perceived horizontal dimension becomes shorter.
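[An editorial sketch may make these two constraints concrete. The toy simulation below is not Trehub's actual retinoid model; the parameter values are assumptions chosen for illustration (a 0.25-second activity persistence matching the ~250 ms translation phase quoted above, a 20 ms interneuron integration time, and a fixed retinoid shift per integration step).]

# Toy illustration (assumed parameters, not Trehub's model) of two retinoid constraints
# on the seeing-more-than-is-there illusion.

def simulate_pass(figure_width_deg, speed_deg_per_s,
                  persistence_s=0.25,    # assumed autaptic short-term memory span (~250 ms)
                  integration_s=0.02,    # assumed translation-interneuron integration time
                  step_deg=0.1):         # assumed retinoid shift per integration step
    """One lateral pass of the hidden figure behind the aperture slit."""
    pass_duration = figure_width_deg / speed_deg_per_s           # time to sweep past the slit
    n_steps = int(pass_duration / integration_s)                 # translation steps available
    perceived_width = min(figure_width_deg, n_steps * step_deg)  # capped lateral spread in the retinoid
    coherent = pass_duration <= persistence_s                    # are the earliest-activated cells still firing?
    return perceived_width, coherent

for speed in (4.0, 16.0, 64.0):   # degrees of visual angle per second
    width, coherent = simulate_pass(figure_width_deg=4.0, speed_deg_per_s=speed)
    print("speed %5.1f deg/s -> perceived width %.1f deg, coherent percept: %s"
          % (speed, width, coherent))

[Under these assumed numbers the slow pass loses coherence while the fast passes remain coherent but horizontally contracted, which is the qualitative pattern Trehub describes.]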

I think that we must reconcile ourselves to the idea that detailed minimal models of cognitively relevant neuronal mechanisms are required if we are to understand the constraints operating on the brain/mind.

ARNOLD TREHUB is adjunct professor of psychology, University of Massachusetts at Amherst, and the author of The Cognitive Brain.


George Lakoff responds to Dehaene, Blakeslee, Hauser, and Trehub


From: George Lakoff
Submitted: 4.24.99

Reply to comments by Dehaene, Blakeslee, Hauser, and Trehub

I want to thank Stanislas Dehaene, Sandy Blakeslee, Marc Hauser, and Arnold Trehub for their comments.

As Professor Dehaene observed, it is obvious to any neuroscientist that you have to use the neural system of your brain to think. But that has two interpretations.

1. The peculiarities of the brain, including those of the sensory-motor system, structure concepts and "abstract" thought.

2. The brain merely instantiates any symbol-processing system and thought is symbol-manipulation.

Position 2 is still held by many philosophers and even some cognitive scientists (e.g. Dennett, Pinker, etc.). Position 1 is what Mark Johnson and I argue for in our recent book, Philosophy in the Flesh. It is of course clear in the case of color concepts. The neuroscience of color vision makes clear that it is neural structures hooked up to color cones in the retina that make possible the color concepts we have and the internal structure of those concepts. The examples Mark and I discuss are:

a) Spatial relations concepts. Here we cite Terry Regier's hypothesis in The Human Semantic Potential (MIT Press) that primitive spatial relations concepts arise from neural structures that make use of topographic maps of the visual field, orientation-sensitive cells, center-surround receptive fields, etc.

b) Event Structure Concepts (or "aspect" to linguists): Srini Narayanan's modeling results implicitly make reference to Rizzolatti's mirror neurons. Narayanan argues that there is a single high-level neural control system for motor control and perception of motor movements, and that it characterizes "abstract" event structure (or aspectual) concepts.

These of course are NTL (Neural Theory of Language) neural modeling results, not results from neuroscience. That is, they are "how" results (how the neural computation works), not "where" results (where the neural computation is done). These are cases where it appears that the structure of the brain imposes "sharp limits" on conceptual structure.

These results are very much in the spirit of Professor Trehub's very interesting observations and his claim that "detailed minimal models of cognitively relevant neuronal mechanisms are required if we are to understand the constraints operating on the brain/mind." That is just what we are finding from the computational neural modeling perspective: The details of conceptual structure can only be computed by neural networks of a very limited kind with specific structures. As Professor Dehaene observes, his own research indicates that our concepts of small numbers and magnitudes are constituted by a specific cerebral network with a long evolutionary history.

All of this points to a strong embodiment of mind hypothesis: Mind is not just any kind of symbol-manipulation that happens to be instantiated somehow in the brain. Instead, the possibilities for concepts and for thought are shaped in very special ways by the body and the brain that evolved to control it, especially the sensory-motor system.

Metaphor plays a major role in this account: Conceptual metaphors appear to be neural maps that link sensory-motor domains in the brain to regions where more abstract reasoning is done. This allows sensory-motor structures to play a role in abstract reason.

I'm glad that Professor Dehaene likes the idea that abstract mathematics is based on metaphors linking number with space, actions, sets, and so on. But he is incorrect that the theory of conceptual metaphor is so "underspecified" that "one might as well be a functionalist or a dualist." It is not the case that almost any metaphor is possible. Possible metaphors are constrained in many ways, as discussed in Philosophy in the Flesh, Chapter 4. The possibilities for what Joe Grady has called "primary metaphor" are constrained by (a) sensory-motor and other source-domain inferential mechanisms; (b) regularly repeated real-world experiences, especially in the early years, in which source and target domains are systematically correlated; (c) mechanisms of recruitment learning. Our empirical studies show that conceptual metaphors around the world seem to be quite limited in ways that such constraints would predict. The wide variety of complex conceptual metaphors is predicted by the possibilities for coactivation of neural metaphorical maps.

What results is not possible in a dualist or functionalist system, since many actual inferential mechanisms are in the sensory-motor system. Narayanan's modeling results indicate that abstract reason can be carried out by sensory-motor neural mechanisms. Very non-dualist and non-functionalist. Non-dualist because bodily control mechanisms are being used in abstract reason. That does not allow a mind-body split. The results are nonfunctionalist because Narayanan's inferential mechanisms have the properties of neural systems.

Here is what we mean by a computational model that "has properties of neural systems."

1. It uses spreading activation, with degrees of activation, thresholds, and so on.

2. It operates in parallel, with (potential) connections providing feedback across levels.

3. It has no central control.

4. It is a constraint satisfaction system that "combines evidence," that is, it optimizes by computing best fits among competing analyses.

5. It is consistent with psycholinguistic data on processing and developmental data on learning.

6. It is "embodied" in that it makes use of brains structures that have evolved to control the body.

7. It is adaptive; the same input may not have the same output.

8. Its generalizations arise by principles of neural optimization.

9. Via neural optimization, it may develop "shortcuts" around general processes.

10. Binding and learning work via "recruitment" of connections, short-term and long-term.

11. It is "path-dependent" in that prior structure is used in extensions (e.g., radial categories, metaphors, and so on).

12. It is neurally plausible with respect to the following:

a. resource constraints

b. binding possibilities

c. speed

d. number of connections

e. number of units

f. learning capability

g. fault tolerance

h. structures must be plausible brain structures

"Functionalist" systems are general symbol manipulation systems and they do not have these properties.

Some of Marc Hauser's comments are based on a lack of familiarity with results on metaphorical thought over the past two decades. Professor Hauser incorrectly presumes that Johnson and I "would argue that the human mind is fundamentally transformed by the acquisition of language, and the young child, lacking language, has absolutely different conceptual representations than children with language. If this is the case, it goes against many of the findings in current developmental psychology and evolutionary psychology that argue for a core set of representational systems. Moreover, many of these representational systems are present in animals, lacking language and metaphor."

Conceptual metaphorical mappings are not primarily matters of language; they are part of our conceptual systems, cross-domain mappings allowing us to use sensory-motor concepts and reasoning in the service of abstract reason. Children acquire conceptual metaphorical mappings automatically and unconsciously via their everyday functioning in the world. See Chapter 4 of Philosophy in the Flesh. Thus it is not the case that "the young child, lacking language, has absolutely different conceptual representations than children with language." Our results are very much in accord with child language acquisition. Indeed, Chris Johnson's research on polysemy acquisition supports our account.

Finally, I'd like to turn to Professor Hauser's "questions/challenges." He writes:
If our brains are structured on the basis of the input from the body, then how can Lakoff and Johnson explain the phantom limb results that Ramachandran has obtained with mirrors. Here, simply seeing the intact arm in the mirror provides the necessary input to the brain to show that the phantom can be relieved of pain. Nothing is happening at the body surface. It is a visual image of the good arm in the place of the missing arm. Seeing this image apparently tricks the brain into thinking that the pain can be relieved. This is an elegant example, it seems to me, of modularity, and the encapsulation of information within one system.

Johnson and I accept (and applaud) Ramachandran's account. It is entirely consistent with ours as discussed in Philosophy in the Flesh, Chapter 3 and Appendix. The brain is structured so as to run a body and has very specific connections to and from the body. The Neural Theory of Language is based on empirical results about the details of body-linked brain structure. Recall that in order to have a phantom limb, you have to have had a real limb linked to the brain before you lost it.

Ramachandran's results, so far as I can tell, show nothing about "modularity". They only show that there are constraints on where information can flow in the brain, but that is anything but surprising.

I try to avoid using the word "modularity" because of its wide misuse in linguistics. When "modularity" is taken to mean that there are places in the brain where there is neural computation done using circuitry specific to that place, then there is no problem. This is just localized neural computation of a specialized kind performed by specialized circuitry with normal neural inputs and outputs. There is no question that this exists, and our group makes use of it extensively in our neural modeling enterprise.

However, there is a strange use of the word "module" that is current in linguistics that does not mean this at all. This is the Chomskyan "syntax module" or "syntax box," which has outputs but no input. There is nothing in a brain like this. You can see why I would avoid the word "module." No neuroscientist I know uses the word in such a sense. For discussion, see the chapter on Chomsky's philosophically-based linguistics in Philosophy in the Flesh.

Again, I would like to thank those who wrote. I hope that reading Philosophy in the Flesh will clarify these issues.

It would be interesting to hear what philosophers have to say about all this. Our book surveys many of the vast changes that would result if philosophy were to conform to the empirical results of neuroscience and cognitive science (especially cognitive linguistics). We hope that most philosophers will not close their minds to the sciences of the brain and the mind, which bear so centrally on the philosophical enterprise.

GEORGE LAKOFF is Professor of Linguistics at the University of California at Berkeley, where he is on the faculty of the Institute of Cognitive Studies. He is the author of Metaphors We Live By (with Mark Johnson), Women, Fire and Dangerous Things: What Categories Reveal About the Mind, More Than Cool Reason: A Field Guide to Poetic Metaphor (with Mark Turner), and Moral Politics. His most recent book is Philosophy in the Flesh (with Mark Johnson).


John Brockman, Editor and Publisher | Kip Parent, Webmaster

 

Copyright ©1999 by Edge Foundation, Inc.
