EDGE 29 November 19, 1997

THE THIRD CULTURE
"THE DEEP QUESTION"
A Talk with Rodney Brooks
The central idea that I've been playing with for the last 12-15
years is that what we are and what biological systems are, are not
what's in the head, it's in their interaction with the world. You
can't view it as the head, and the body hanging off the head, being
directed by the brain, and the world being something else out there.
It's a complete system, coupled together.
THE REALITY CLUB
Marc D. Hauser Re: Edge University
George Dyson On Bionumerical Paleontology
Rafael Nunez, Margaret Wertheim, Howard Gardner, Joseph Traub,
Steven Pinker, and Charles Simonyi on Stanislas Dehaene's "What
Are Numbers, Really? A Cerebral Basis For Number Sense"
Stanislas Dehaene Responds
(11,827 words)
THE THIRD CULTURE
"THE DEEP QUESTION"
A Talk with Rodney Brooks
ROD BROOKS built computers as a kid. There was only one computer
in the Australian city where he grew up, and there wasn't much technology.
He spent his childhood building computers from whatever he could
manufacture. There were no computer science departments in the colleges
in Australia when he started, so he did pure mathematics. He was
going to become a pure mathematician, and then discovered that research
assistantships were available in American universities. He received
a Ph.D. at Stanford in computer science, in John McCarthy's artificial
intelligence lab, and then came to MIT where he thinks about biological
systems and their interaction with the world.
JB
ROD BROOKS is director of the AI Lab at MIT.
"THE DEEP QUESTION"
A Talk with Rodney Brooks
BROOKS: The thing that puzzles me is that we've got all these biological
metaphors that we're playing around with - artificial immune
systems, robots that appear lifelike - but none of
them come close to real biological systems in robustness and in
performance. They look a little like it, but they're not really
like biological systems. What I'm worrying about is that perhaps
in looking at biological systems we're missing something that's
always in there. You might be tempted to call it an essence of life,
but I'm not talking about anything outside of biology or chemistry.
A good analogy is the idea of computation. Once Turing came up
with a formalism for computation we were able to make great progress
fairly quickly. Now if you take any late 19th-century mathematicians,
you could explain the fundamental ideas of computation to them in
two or three days, lead them through the theorems, they could understand
it and they wouldn't find it mind boggling in any way. It follows
on from 19th-century mathematics. Once you have that notion of computation,
you are able to do a lot with it. The question then is whether there is
something else, besides computation, in all life processes. We need
a conceptual framework, such as computation, that doesn't involve
any new physics or chemistry, a framework that gives us a different
way of thinking about the stuff that's there. Maybe this is wishful
thinking, but maybe there really is something that we're missing.
We see the biological systems, we see how they operate, but we don't
have the right explanatory modes to explain what's going on and
therefore we can't reproduce all these sorts of biological processes.
That to me right now is the deep question. The bad news is that
it may not have an answer.
JB: How are you exploring it?
BROOKS: You can take a few different models. You can try to do
it by analogy, you can try to see how people made leaps before and
see if you can set yourself up in the right framework for that leap
to happen. You can look at all the biological systems, look at different
pieces that we're not able to replicate very well and look for some
commonality. Or you can try to develop a mathematics of distributed
systems, which all these biological systems seem to be. There are
lots of little processes running with no central control, although
they appear to have central control. I'm poking at all three.
JB: I've very little use for the digital technology metaphor as
related to the human body.
BROOKS: The fact that we use the technology as a metaphor for
ourselves really locks us into the way we think. We think about
human intelligence as these neurons with these electrical signals.
When I was a kid the brain was a telephone switching network, then
it became a digital computer, then it became a massively parallel
digital computer. I'm sure there's a book out there now for kids
that says the brain is the World Wide Web, and everything's cross-linked.
JB: In 1964 we used to talk about the mind we all share, the global
utilities network.
BROOKS: We get stuck in that mode. Everyone starts thinking in
those terms. I go to interdisciplinary science conferences and neuroscientists
get up and say the brain is a computer and thoughts are software,
and this leads people astray, because we understand that technical
thing very well, but something very different is happening. We're
at a point now where we're limited by those metaphors.
JB: How do you get beyond thinking about it that way?
BROOKS: The best I can hope for is to get another metaphor that
leads us further, but it won't be the ultimate one either. So no,
in no sense are we going to get to an absolute understanding.
JB: Do you have any hints as to what might be the way to go?
BROOKS: Inklings, but it's still a question I don't have
the answer to. I don't know the next step. For the last 15 years I've
been plodding along and doing things based on these metaphors -
getting things distributed, and not having central control, and
showing that we can really do stuff with that, we can build real
systems. But there's probably something else that we're missing.
That's the thing that puzzles me now and that I want to spend time
figuring out. But I can't just sit and abstractly think about it.
For me the thing I need to do is keep pushing and building on more
complex systems, and then try to abstract out from that what is
going on that makes those systems work or fail.
JB: Talk about what you're doing.
BROOKS: The central idea that I've been playing with for the last
12-15 years is that what we are and what biological systems are
is not what's in the head; it's in their interaction with the
world. You can't view it as the head, and the body hanging off the
head, being directed by the brain, and the world being something
else out there. It's a complete system, coupled together. This idea
has been around a long time in various forms - Ross Ashby, in Design
for a Brain in the '50s; that's what the mathematics he was
trying to develop was about. Putting that in the digital age has
led to being able to get machines to do things with very little
computation, frighteningly little computation, when we have our
metaphors of digital computers. That we're claiming and showing
that we can do things without lots of computation has gotten a lot
of people upset, because it really is a mixture of computation and
physics and placement in the world. Herb Simon talked about this
back in The Sciences of the Artificial, in 1966 or '68 - his ant
walking along the beach. The complexity of the path that the ant
takes is forced on it by the grains of sand, not by the ant's intelligence.
And he says on the same page that the same is probably true of humans -
the complexity of their behavior is largely determined by
the environment. But then he veers off and does crypto-arithmetic
puzzles as the key to intelligence.
What we've been able to do is build robots that operate in the
world, and operate in unstructured environments, and do pretty well,
because they use whatever structure there is in the world to get
the tasks done. And the hypothesis is that that's largely what humans
are; that humans are not the centralized controllers. We're trying
to implement this, and see what sorts of behavior we can get. It's
like Society of Mind, which is coming top-down, and we're coming
bottom-up, building with pieces and actually putting them together.
JB: How do you define the word "robot"?
BROOKS: I mean a physical robot made of metal, not a soft-bot.
Most recently I've been working on Cog, a humanoid robot which has
human size and form, and sits and acts in the world. There are a few
curious things that have come out of that which I didn't quite expect.
One is that the system, with very little content inside it, seems
eerily human to people. We get its vision system running, with its
eyes and its head moving, and it hears a sound and it saccades to
that, and then the head moves to follow the eyes to get them back
into roughly the center of their range of motion. When it does that,
people feel it has a human presence. Even to the graduate students
who designed the thing, and know that there's nothing in there,
it feels eerily human.
The second thing is that people are starting to interact with
it in a human-like way. They will lead it on, so it will do something
interesting in the world. People come up and play with it. And the
robot is now doing things that we haven't programmed it to do; not
that it's really doing them. The human is taking advantage of those
little pieces of dynamics and leading it through a series of sub-interactions,
so that to a naive observer it looks like the robot is doing a lot
of stuff that it's not doing. The easiest example of this is turn-taking.
One of my graduate students, Cynthia Ferrell, who had been the major
designer of the robot, had been doing something with the robot so
we could videotape how the arms interacted in physical space. When
we looked at the videotape, Cynthia and it were taking turns playing
with a whiteboard eraser, picking it up, putting it down, the two
of them going back and forth. But we didn't put turn taking into
the robot, it was Cynthia pumping the available dynamics, much like
a human mother leads a child through a series of things. The child
is never as good as the human mother thinks it is. The mother keeps
doing things with the kid, that the kid isn't quite capable of.
She's using whatever little pieces of dynamics are there, getting
them into this more complex behavior, and then the kid learns from
that experience, and learns those behaviors. We found that humans
can't help themselves; that's what they do with these systems such
as kids and robots. The adults unconsciously put pieces of the kid's
or robot's dynamics together without thinking. That was a surprise
to us: we didn't have to have a trained teacher for the
robot. Humans just do that. So it seems to me that what makes us
human, besides our genetic makeup, is this cultural transferral
that keeps making us human, again and again, generation to generation.
Of course it's involved with genetics somehow, but it's missing
from the great apes. Naturally raised chimpanzees are very different
from chimpanzees that have been raised in human households. My hypothesis
here is that the humans engage in this activity, and drag the chimpanzee
up beyond the fixed point solution in chimpanzee space of chimpanzee
to chimpanzee transfer of culture. They transfer a bit of human
culture into that chimpanzee and pull him/her along to a slightly
different level. The chimpanzees almost have the stuff there, but
they don't quite have enough. But they have enough that humans can
pull them along a bit further. And humans have enough of this stuff
that now a self-satisfying set of equations gets transferred from
generation to generation. Perhaps with better nurturing humans could
be dragged a little further too.
JB: Let's talk about your background.
BROOKS: I have always been fascinated by building physical things
and spent my childhood doing that. When I first came to MIT as a
post-doc I got involved in what I call classical robotics, applying
artificial intelligence to automating manufacturing. Earlier, my
Ph.D. thesis had been on getting computers to see 3-dimensional objects.
At MIT I started to worry about how to move a robot arm around without
collisions, and you had to have a model, you had to know where everything
was. It built up more and more mathematical structures where the
computations were just getting way out of hand - tons of symbolic
algebra, in real time, to try and make some decision about moving
the fingers around. And at some point I
decided that it just couldn't be right. We had a very complex mathematical
approach, but that couldn't be what was going on with animals moving
their limbs about. Look at an insect: it can fly around and navigate
with just a hundred thousand neurons. It can't be doing these very
complex symbolic mathematical computations. There must be something
different going on.
I was married at the time to a woman from Thailand, the mother
of my three kids, and I got stuck in a house on stilts in a river
in southern Thailand for a month, not allowed out of the house because
it was supposedly dangerous outside, and the relatives couldn't
speak a word of English. I sat in this house on stilts for a month,
with no one talking to me because they were all gossiping in Thai
the whole time. I had nothing to do but think and I realized that
I had to invert the whole thing. Instead of having top-down decomposition
into pieces that had sensing going into a world model, then the
planner looks at that world model and builds a plan, then a task
execution module takes steps of the plan one by one and almost blindly
executes them, etc., what must be happening in biological systems
is that sensing is connected to action very quickly. The connectivity
diameter of the brain is only five or six neurons, if you view it
as a graph, so there must be all these quick connections of sensing
to action, and evolution must have built on having those early ones
there doing something very simple, then evolution added more stuff.
It didn't take it all apart at each step and rewire it, because
you wouldn't be able to get from a viable creature to a viable creature
at each generation. You can only do it by accretion, and modulation
of existing neural circuitry. This idea came out of thinking of
having sensors and actuators and having very quick connections between
them, but lots of them. As a very approximate hand-waving model
of evolution, things get built up and accreted over time, and maybe
new accretions interfere with the lower levels. That's how I came
to this approach of using very small amounts of computation
in the systems that I build to operate in the real world.
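To make the flavor of this layered, sensing-to-action approach concrete, here is a minimal sketch in Python. It is purely illustrative: the layer names, the sensor format, and the simple fixed-priority rule standing in for real suppression wiring are assumptions made for this example, not Brooks' actual robot code.

    # Minimal, hypothetical sketch of behavior-based control in the spirit of
    # what Brooks describes: each "layer" maps sensing directly to action, and
    # a fixed-priority rule stands in for the suppression wiring of a real
    # subsumption-style controller. All names and numbers are invented.

    import random

    def avoid(sonar_readings):
        """Reflex layer: turn away if any sonar reading is too close (meters)."""
        if min(sonar_readings) < 0.3:
            return ("turn", random.choice([-90, 90]))
        return None  # no opinion; let another layer act

    def wander(sonar_readings):
        """Background layer: just keep moving when nothing is urgent."""
        return ("forward", round(random.uniform(0.1, 0.5), 2))

    LAYERS = [avoid, wander]  # earlier entries take priority

    def control_step(sonar_readings):
        """One sense-act cycle: the first layer that wants control wins."""
        for layer in LAYERS:
            command = layer(sonar_readings)
            if command is not None:
                return command

    if __name__ == "__main__":
        print(control_step([0.2, 1.5, 2.0]))  # obstacle close: e.g. ('turn', 90)
        print(control_step([1.2, 1.5, 2.0]))  # clear: e.g. ('forward', 0.34)

The point of the sketch is only that each behavior couples sensing to action directly, with no world model or planner in between.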
JB: What happened after you had your epiphany?
BROOKS: We built our first robot, and it worked fantastically
well. At the time there were just a very few mobile robots in the
world; they all assumed a static environment, they would do some
sensing, they would compute for 15 minutes, move a couple of feet,
do some sensing, compute for 15 minutes, etc., and that on a mainframe
computer. Our system ran on a tiny processor on board. The first
day we switched it on, the robot wandered around without hitting
anything, people walked up to it, it avoided them, so immediately
it was in a dynamic environment. We got all that for free, and this
system was operating fantastically better than the conventional
systems. But the reaction was that "this can't be right!" I gave
the first talk on this at a robotics seminar held in Chantilly,
France, and I've heard since that Georges Giralt, head of robotics
in France, and Ruzena Bajcsy, who at the time was head of the computer
science department at the University of Pennsylvania, were sitting
at the back of the room saying to each other, "what is this young
boy doing, throwing away his career?" They have told me since that
they thought I was nuts. They thought I'd gone off the deep end
because I threw out all the mathematical modeling, all the geometric
reconstruction by saying you can do it directly, connecting sensors
to actuators. I had videotape showing the thing working better than
any other robot at the time. But the reaction was, it can't be right.
It's a parlor trick, this won't work anywhere else, this is just
a one-shot thing.
JB: What do biologists think?
BROOKS: Largely the reaction from biologists and ethologists is
that this is obvious, this is the way to do it, how could anyone
have thought to do it differently? I had my strongest support early
on from biologists and ethologists. Now it's become mainstream in
the following weak sense. The new classical AI approach is, you've
got this Brooks-like stuff going on down at the bottom, and then you have
the standard old AI system sitting on top modulating its behavior.
Before there was the standard AI system controlling everything;
now this new stuff has crept in below. Of course, I say that you
just have this new stuff, and nothing else. The standard methodology
is you have both right now. So everyone uses my approach, or some
variation, at the bottom level now. Pathfinder on Mars is using
that at the bottom level. I want to push this all the way, and people
are sort of holding onto the old approach on top of it, because
that's what they know how to deal with.
JB: Where is this going to go?
BROOKS: Certainly these approaches are going to get out there
into the real world, and will be in consumer products within a very
small number of years. And it's going to come through the toy industry
because that's already happening.
JB: What kind of consumer applications?
BROOKS: Things that appear frivolous. Let me give you an analogy
on the frivolousness of things. Imagine you've got a time machine
and you go back to the Moore School, University of Pennsylvania,
around 1950, and they've spent 4 million dollars in four years and
they've got Eniac working. And you say, by the way, in less than
fifty years you'll be able to buy a computer with the equivalent
power to this for 40 cents. The engineers would look at you like
you were crazy, this whole thing with 18,000 vacuum tubes, for 40
cents? Then if they asked you what will people use these computers
for? You say "Oh, to tell the time." Totally frivolous. How could
you be so crazy as to have such a thing of complexity, such a tool,
telling the time? It's the same thing in consumer products. What
it relies on is getting a variety of components that you can plug
together to rapidly build up many different frivolous products,
by just adding a little bit of bottom-up intelligence. For instance,
one component is being able to track where a person is. With that
you can have a thing that knows where you are in the room and dynamically
adjusts the balance in your stereo system for you. If you're in
the room you always have perfect stereo. Or it's attached to a motor
and it's a vanity mirror that follows you around in the bathroom,
so it's always at exactly the right angle for you to see your face.
For kids they can set up a guard for their bedroom such that when
someone comes by it shoots them with a water pistol. Once the components
are available there'll be lots of other frivolous applications.
But that's what's going to be out there, and that's what people will
buy.
THE REALITY CLUB
Marc D. Hauser Re: Edge University
From: Marc D. Hauser
Submitted: 11.4.97
Re: EDGE University
Dear John,
Irv Devore and I teach the "Evolution of Human Behavior" class,
a Core Course at Harvard with 500 undergraduate students. The interdisciplinary
course, "Science B29" (nickname: "The Sex Course"), has been running
for 30 years, was started by Devore and Robert Trivers, and is the
second most popular course on campus, behind "Econ 10". Section
teachers over the years comprise a who's who of leading thinkers
and include people such as John Tooby and Leda Cosmides, and Sarah
B. Hrdy.
Nearly every EDGE edition features people and ideas we teach in
the course. Moreover, the main focus of the course is to provide students
with radically new ways of thinking about the world. In essence,
we want the students to play with ideas and challenge some of their
most precious beliefs. I would thus be very interested in seriously
exploring an official connection whereby EDGE is utilized as part
of the course. In this regard the students would receive the EDGE
editions, which would be discussed in the weekly section sessions.
The students, both those at Harvard and at the other participating
colleges and universities, would have their own area on the EDGE
Website for "Reality Club" discussions. In turn, you would encourage
the presenters on EDGE to take some time each week to participate
in the student forums. For your interest, here is the course description:
The Faculty of Arts and Sciences
Science B-29. Human Behavioral Biology Catalog Number: 0152
Website: http://icg.fas.harvard.edu/~scib29/
Irven DeVore and Marc D. Hauser
half course (spring term)
VI
M., W., F., at 1, and a 90-minute weekly section/laboratory to be
arranged. Additional meeting times for two required film showings
to be announced.
Human biology and behavior are considered in a broad evolutionary
context, showing how the facts of development, physiology, neurobiology,
reproduction, cognition, and especially behavior are informed by
evolutionary theory and comparative evidence. Field and experimental
data on other species are introduced with the aim of illuminating
human behavior. Behavior is traced from its evolutionary function
as adaptation, through its physiological basis and associated psychological
mechanisms, to its expression. The role of ecology and social life
in shaping human behavior is examined through the use of ethnographies
and cross-cultural materials on a variety of human cultures. Topics
include basic genetics, neural and neuroendocrine systems, behavioral
development, sex differences, kinship and mating systems, ecology,
language, and cognition. Enrollment: Limited to 400.
There are two other similar courses which might be appropriate
for EDGE University. At UCLA, Robert Boyd and Joan Silk teach an
interdisciplinary course; at Stanford, Anne Fernald runs the Human
Biology Program. I suggest you contact them.
Best regards,
Marc
MARC D. HAUSER is an evolutionary psychologist and an associate
professor at Harvard University where he is a fellow of the Mind,
Brain, and Behavior Program. With wide-ranging post-doctoral experience
in neuroscience, linguistics, cognitive science, and evolutionary
biology, he is a professor in the departments of Anthropology and
Psychology, as well as the Program in Neurosciences. Dr. Hauser
works with both captive and wild monkeys and apes, as well as doing collaborative
work on human infants. His research focuses on problems of acoustic
perception, the generation of beliefs, the neurobiology of acoustic
and visual signal processing, and the evolution of communication.
He is the author of The Evolution of Communication (MIT Press),
and What The Serpent Said: How Animals Think And What They Think
About (Henry Holt, forthcoming).
George Dyson On Bionumerical Paleontology
From: George Dyson
Submitted: 11.12.97
BIONUMERICAL PALEONTOLOGY, continued
Recently, I visited the Institute for Advanced Study in Princeton,
New Jersey, to examine the surviving records of the Electronic Computer
Project (1945-1958) for evidence of Nils Barricelli's bionumeric
evolution experiments (see EDGE 21). The Electronic Computer Project's
Monthly Progress Report (Contract No. DA-36-034-ORD-1023-RD) for
March 1953 explains:
"A code has been written and preliminary runs made which attempts
to present a mathematical model for symbiogenetical processes...
According to the theory of symbiosis of genes, the genes were originally
independent, virus-like organisms which by symbiotic association
formed more complex units. According to this theory a similar evolution
should be possible with any kind of elements having the necessary
fundamental properties... A series of numerical experiments are
being made with the aim of verifying the possibility of an evolution
similar to that of living organisms taking place in an artificially
created universe."
Although none of the original coding appears to have survived,
Barricelli's initial report, *Experiments in Bionumeric Evolution
Executed by the Electronic Computer at Princeton, N. J.* (August
1953) provides many details--from seeding the empty universe by
means of a deck of playing cards to an analysis of the first series
of experiments, keyed to large-format photo-mosaics of the punched
cards representing its results. The machine log-books, kept by the
engineers, provide a cryptic record of the obstacles presented by
equipment (especially the temperamental cathode-ray-tube memory)
that was working reliably only for very short periods of time. From
the Operating Log, 7 April 1953: "Dr. Barricelli claims machine
is wrong. Code is right."
Barricelli had no illusions about the difficulty of evolving self-sustaining
numerical organisms, and in the first published report on the Institute
experiments ["Esempi numerici di processi di evoluzione," *Methodos,*
vol. 6 no. 21-22 (1954, received 30 December 1953); draft translation
by Verena Huber-Dyson, "Numerical Models of Evolutionary Processes"]
he offered some sound advice:
"Small wonder that from Darwin's time to this day a somewhat gratuitous
optimism has prevailed in many quarters about the possibility of
settling that 'ultimate detail' needed for completion of a theory
of the nature of living organisms."
"But a question that might embarrass the optimists is the following:"
"'If it's that easy to create living organisms, why don't you
create a few yourself?'"
I was able to speak briefly with Julian Bigelow, von Neumann's
lead architect and original chief engineer. He remembered the numerical
evolution experiments and commented that "Barricelli was the only
person who really understood the path toward genuine artificial
intelligence at that time." Barricelli's view of intelligence was
unusually broad, and encompassed both biological and non-biological
systems, epitomized by the provocative title of his last published
paper (1987) on bionumeric evolution: "Suggestions for the starting
of numeric evolution processes intended to evolve symbioorganisms
capable of developing a language and technology of their own."
On the final day of my brief visit, Nathan Myhrvold and Charles
Simonyi (Microsoft Research Directors, and IAS Trustees) happened
to show up for a meeting, and, at 4:45 on Friday afternoon, James
Fein, the Institute archivist, was persuaded to bring up the seven
cardboard boxes so that we could take a look. If there are any documents,
anywhere, that can be said to represent the "Big Bang" of the modern
digital universe, these papers (including the minutes of the first
meeting of the Electronic Computer committee, held 12 November 1945
in Vladimir Zworykin's office at RCA) are high on the list.
In 1947, Arthur Burks, Herman Goldstine, and John von Neumann
commandeered a couple of offices (including an annex to Kurt Gödel's)
among the pure mathematicians and mathematical physicists at the
Institute for Advanced Study, and started writing code. In 1997,
Nathan Myhrvold and a small band of followers commandeered a few
offices among the programmers at Microsoft, and are starting to
do pure mathematical research.
On my flight from New Jersey to Seattle, I found myself pondering
this symmetry, fifty years removed.
GEORGE DYSON, the leading authority in the field of Russian Aleut
kayaks, has been a subject of the PBS television show Scientific
American Frontiers. He is the author of Baidarka, and
the forthcoming Darwin Among the Machines: The Evolution of Global
Intelligence.
Rafael Nunez, Margaret Wertheim, Howard Gardner, Joseph Traub,
Steven Pinker, And Charles Simonyi On Stanislas Dehaene's "What
Are Numbers, Really? A Cerebral Basis For Number Sense "
From: Rafael Nunez
Submitted: 10.31.97
Commentary on S. Dehaene's La Bosse des Maths.
Several months ago I had the chance to read the original French
version of S. Dehaene's book La Bosse des Maths. Not only
did I deeply enjoy the manner in which he presents an amazing amount
of experimental findings, but I also learned quite a bit about neuroscience,
methodological issues, and even some interesting anecdotes. The
book is certainly a rich contribution to the study of the cognitive
origins of numbers and their philosophical implications.
The truth is that much of what I would like to say about Dehaene's
work has already been said by George Lakoff in his comment (this
issue). This is not surprising, since Lakoff explicitly refers to
the work he and I have been doing on the cognitive science of mathematics,
and to the book we are currently writing, tentatively titled
The Mathematical Body. So I will not repeat how much we consider
Dehaene's work important. I would like to insist, however, that
both Dehaene's work and ours, in a complementary way, show that
(1) Platonism and transcendental objectivism, (2) the correspondence
theory of truth, and (3) functionalism, are untenable when it comes
to explaining the mind.
But despite the fact that I endorse most of Dehaene's claims,
and I agree with most of his conclusions, I still would like to
make a few comments. I have to confess that his book left me with
the sensation that very relevant things were said, but that there
were important parts missing in the account of "la bosse des maths".
These are some of them:
1) Mathematics is really not just numbers.
I am sure Dehaene knows this very well. This is not a mystery.
After all, there are journals of Mathematics where the only numbers
you see are those of the pages. However, in his book (as in his
article) Dehaene draws too many conclusions about "mathematical
objects", "mathematical talents", "mathematics education", "mathematical
knowledge", "mathematical thinking", and so on. Based mainly on
the work done in the neuroscience of quantity discrimination, counting,
and simple arithmetic operations with small numbers, he draws
conclusions about mathematics as a whole. After reading the book
I didn't know, for instance, what to do with the mathematical cognition
of those very good mathematicians who are quite bad at dealing with
everyday numbers and arithmetic operations. This, of course, does
not invalidate his work. But it is an unfortunate fast extrapolation
that has important theoretical consequences. In fact, in jumping
too fast to conclusions about mathematics as a whole, one overlooks
fundamental cognitive phenomena underlying what mathematics is -
phenomena that do not have anything to do with quantities (or
numbers) as such (and that may have conferred evolutionary advantages
as well). The universe of bodily grounded experiences is much richer
than that of quantities, and as Lakoff and I have found, some
of them are essential in providing grounding for what mathematics
is (see Lakoff's comment). The task is to find the neural and bodily
basis of this grounding as well.
2) What is REALLY "..." or "etc"?
Even if one doesn't intend to draw conclusions about mathematics
as a whole, and stays in the domain of small everyday "numbers",
one should be very careful in making fast extrapolations. In fact
when we refer to the "natural numbers" or to "1, 2, 3, 4, etc" we
are doing something that cognitively is much more complex than what
is shown in Dehaene's book. It is true that we may be building on
the kind of phenomena Dehaene describes very well, but the "etc"
hides a very complex cognitive universe that is missing in Dehaene's
account. In fact, with the "etc" we implicitly invoke a whole universe
of entities that have never been explicitly thought of or written
down by anybody in the whole history of humankind (e.g., really huge
integers)! Still, some cognitive mechanisms allow us to know that
these entities preserve certain properties, such as order or induction,
that they share some kind of inferential structure such that we
put them in the same cognitive category, and so on. All that is
hidden when we say "integers" or we write "..." or "etc" after 1,
2, 3. This is an extraordinary cognitive accomplishment that
is not in Dehaene's account of the "number" sense. That cognitive
activity requires other fundamental mechanisms that are not intrinsically
related to quantity and that can't be overlooked.
3) Brains in living bodies.
After building up the "number sense" Dehaene is right in referring
to language and culture in order to explain more abstract and sophisticated
mathematics. This move is crucial, since if one simply believes
that everything happens in the brain, then one would expect Euclid
or Archimedes to be theorizing about Cantor sets or strange
diffeomorphisms. It is clear that there is a historical process
with semantic change that we need to account for. But in Dehaene's
account it is not clear how this is grounded in the brain he claims
to be essential. Marc Hauser pointed this out in his comment:
"of course language contributes in some way to our numerical sense,
but in precisely what way?" Well, it is here where genuine embodiment
plays a vital role. Brains, although extremeley relevant in giving
accounts of cognitive phenomena, evolved in and with
bodies, not in isolation. Moreover, these human brain-bodies have
been permanently in interaction, in an ongoing way, throughout evolution.
Language and Culture emerged from these processes and are manifestations
of non-arbitrary extensions of shared bodily grounded phenomena.
This means that big cultural differences don't necessarily mean
dramatic changes in brains. When looking at mathematics, one should
look at these embodied extensions. As Lakoff says in his comment,
an important part of these extensions are conceptual metaphorical
mappings, that is, cross-domain mappings that preserve inference.
When one studies these extensions empirically it is possible to
observe, for instance, that the different concepts of continuity
before and after the 19th century don't necessarily correspond to
a dramatic change in the brains of the mathematicians, but rather
to different extensions of bodily grounded primitives. In the case
of continuity, Lakoff and I have postulated that before the 19th
century, mathematicians such as Euler, Fourier, Newton, and Leibniz
thought of continuity using the cognitive mechanisms we use today
when in everyday activity we think of something as being continuous
(Euler referred to continuity as "freely leading the hand"). That
is, such an idea is realized through simple bodily grounding mechanisms
that have to do with motion, wholeness, and source-path-goal schemata.
But with Weierstrass' work in the 19th century, continuity came to be
conceived through radically different cognitive primitives: static
entities, discreteness, and container-schemata. Does this mean that
the brains of the mathematicians changed dramatically at the end
of the last century? No. Weierstrass-continuity requires, among other things,
mappings that allow us to understand a line as being composed of
points (sets of points), and static preservation of closeness as
the dynamic approach to a limit. It is only under these cognitive
extensions of other bodily grounded mechanisms that one can do post
Weierstrass mathematics, a kind of mathematics that Euler, although
having a relatively similar brain, couldn't do. The point is that
language and culture are also brain-bodily grounded, and there are
techniques to study these phenomena empirically without forcing
ourselves to remain at the level of isolated brains, but rather
extending from it.
But in any case, all this being said, I am sure that
I will enjoy the English version of La Bosse des Maths, which
I hope to have in my hands soon.
RAFAEL E. NUNEZ is a cognitive scientist, now at the University
of California at Berkeley. He studied mathematics and psychology
in Chile, completed his PhD in Switzerland and did his post-doctoral
work at Stanford University and at the University of California at Berkeley
under a fellowship of the Swiss National Science Foundation. He
is currently writing a book with George Lakoff on the conceptual
structure of mathematics, which is tentatively titled The Mathematical
Body.
From: Margaret Wertheim
Submitted: 11.4.97
Dear John
I read with interest the recent piece by Stanislas Dehaene on
"what are numbers". The metaphysical and ontological status of mathematics
is a subject of great concern to me, largely because of its
relevance to the question of the ontological status of the mathematical
relations we find in the physical world, i.e. the so-called
"laws of nature".
While I found Dehaene's piece interesting, I would like to draw
the attention of the group to a radically different approach to
the same issue. The philosopher of mathematics Brian Rotman has
written a superb book on just this question. His book is Ad Infinitum
(Stanford University Press). Here, Rotman (who was a mathematician
for 20 years before turning to the philosophy of mathematics and
computing) puts forward what he calls a "post-Euclidean philosophy
of number". Essentially he repudiates Platonism and presents instead
a semiotic model of the ontology of number. Basically Rotman argues
that numbers emerge semiotically from the act of counting. His careful
and intricate treatment of this subject is a truly superb example
of intellectual integration between philosophy, culture and science,
and I would strongly recommend that all those interested in this subject
take a look at this work. It is not an easy read, but then
it is not an easy subject. FYI: there will be a piece about Rotman's
work in a forthcoming issue of The Sciences.
I would like also to make a comment on the recent issue
raised by Natalie Angier and Carl Djerassi re the under-representation
of women in the EDGE Salon. I agree with the poster who suggested
that women scientists are perhaps too busy to participate in such
"extra-curricula" activities being tied up with consolidating
their positions as scientists. This is indeed a very real issue
for many female scientists, at the same time I suggest there are
other factors that ought also to be considered.
First, I would ask whether anyone has actually tried
to solicit participation by many women. I notice that in the list
of EDGE contributors well over 90% of them are men. Someone presumably
approached many of them and asked them to participate. It is my
belief that the under-representation of women scientists in the
public sphere ultimately does science great harm; perhaps
the EDGE group could take a pro-active stance towards seeking out
more women. Many potential candidates may not even know that the
Salon exists.
A final point re the intersection of science and culture, and
how to get such discussions "out there" in the wider world. In fact
I have been writing about this subject for over ten years and I
lecture about it at universities all over the country. In my experience,
there is a great desire "out there" both among students and among
the public at large to have such discussions; the hard question
is to find ways to integrate them into existing curricula and subject
categories. Many people are in fact desperate for interdisciplinary
dialog about science, but what they also want is for the culture
part of the equation to be taken seriously as well. In other words,
what we need is not just a discussion by scientists but one that
really is a dialog between the sciences and the humanities. That
dialog is what drives my own work and I'd be very interested to
participate in any projects along those lines generated by the EDGE
group.
Best wishes
margaret wertheim
MARGARET WERTHEIM is the author of Pythagoras' Trousers
(Times Books 1995), a history of the relationship between physics,
religion, and women. She is just completing a new book, The Pearly
Gates of Cyberspace, a cultural history of space from Dante
to the Internet (for W.W. Norton). Wertheim is an Australian science
writer now based in Berkeley, CA. She has written extensively about
science, technology and culture for magazines, television, and radio.
She writes for New Scientist, The Sciences, The New York Times,
The Australian Review of Books, 21C, World Art, HQ, and others.
She is also currently producing "Faith and Reason", a television
documentary about science and religion for PBS. She regularly lectures
on this subject at colleges and universities here and abroad.
From: Howard Gardner
Submitted: 11.3.97
Hi. Here are some comments on the piece by Stan Dehaene, for inclusion
on your website.
I admire Stan Dehaene's very clear exposition of recent findings
about the number sense. Of special value is his excellent synthesis
of findings from psychology, brain research, computational models,
and cross-cultural investigations. Our understanding of human cognitive
achievements is most likely to be advanced by efforts to integrate
the sometimes disparate data from these several disciplines.
I take issue with Dehaene's argument that prodigies are like the
rest of us or that, at the very least, they have the same
initial endowment. Having reviewed the literature on prodigies in
various domains (e.g., music, mathematics, chess) I have reached
a quite different conclusion.
As Ellen Winner argues in Gifted Children (Basic Books,
1996), prodigies can be distinguished from an early age from their
peers. Prodigies show a fascination (bordering on obsession) with
a certain content (e.g., numbers, visual patterns, auditory musical
patterns) and they have a rage to master the domains that deal with
such specific content. While they may have parental support, this
support is reactive rather than initiating. Moreover, prodigies,
unlike the rest of us, do not simply follow the conventional
educational pattern. They pose new questions, and they often solve
domain issues wholly or largely on their own. Philosopher Saul Kripke
conjectures that if algebra had not existed when he was in elementary
school, he would have invented it; and this kind of comment (whatever
its truth value in the specific case) captures quite accurately
the mental attitudes and powers of prodigies.
No one understands the origins of prodigies. We simply have to
generate satisfying ways of thinking about them. I find it useful
to think of prodigies as having the same strategies and parameters
with reference to their chosen content domain that all normal individuals
have with respect to the mastery of one natural language. (In other
words, we are all linguistic prodigies, while prodigies in other
domains are rare). The prodigy seems "pretuned" to discover patterns
in the domain, including ones that have eluded others. Perhaps,
if it is to result in achievements that are valued by the adult
society, this gift has eventually to be wedded to strong motivation
(to succeed, to master) and to be creative (to step out in new directions);
and, if it is to be distinguished from the mechanistic ability of
the savant, it has eventually to be linked to wider problems, including
issues from other domains. Dean Keith Simonton has written interestingly
about the possibility that genius involves the very occasional concatenation
of these disparate human proclivities and talents.
I think that one is far more likely to understand Mozart, Bobby
Fischer, or Ramanujan if one assumes that they differ in fundamental
ways from the rest of the population than if one has to gerrymander
an explanation that simply builds on the general abilities of the
general public. Whether Ramanujan may have recalled an earlier feat
of calculation, and whether the rest of us could also recognize
the special features of the number 1729, is beside the point. Ramanujan
is honored because he covered several hundred years of mathematics
on his own in India and then made original contributions to number
theory after he joined G. H. Hardy in Cambridge.
My disagreement with Dehaene is of more than academic interest.
If parents believe that they can convert their children into the
next Carl Gauss, John Stuart Mill, Garry Kasparov, or Felix Mendelssohn,
they are likely to subject them to training regimes that are inappropriate
and even cruel. Recognition that all of us can become numerate and
that many of us can eventually master the calculus is an achievement
in itself, one that Dehaene has done much to foster; it is
neither necessary nor desirable to imply that the average individual
can exhibit prodigious achievements.
HOWARD GARDNER, the major proponent of the theory of multiple intelligences,
is Professor of Education at Harvard University and holds research
appointments at the Boston Veteran's Administration Medical Center
and Boston University School of Medicine. His numerous books include
Leading Minds, Frames of Mind, The Mind's New Science:
A History of the Cognitive Revolution, To Open Minds, and Extraordinary
Minds: Portraits of Four Exceptional Individuals. He has received
both a MacArthur Prize Fellowship and the Louisville Grawemeyer
Award.
From: Joseph Traub
Submitted: 11.5.97
I like the experimental research being done by Stanislas Dehaene.
Some questions for Stanislas:
* What is the relation between the intuitive notion of number
and number representations? For example, the Anasazi of Chaco Canyon
must have had an intuitive notion of number to engage in trade,
do astronomy, and build villages like Pueblo Bonito. As far as I
know, they did not have a written language or number representation.
(Archaeologists who read this, please correct me if I'm wrong.)
* What determines whether a civilization achieves a written language
or number representation? Again, the Anasazi provide an example
of a complex civilization which didn't have those achievements.
Best,
Joe
JOSEPH TRAUB is the Edwin Howard Armstrong Professor of Computer
Science at Columbia University and External Professor at the Santa
Fe Institute. He was founding Chairman of the Computer Science Department
at Columbia University from 1979 to 1989, and founding chair of
the Computer Science and Telecommunications Board of the National
Academy of Sciences from 1986 to 1992. From 1971 to 1979 he was
Head of the Computer Science Department at Carnegie-Mellon University.
Traub is the founding editor of the Journal of Complexity
and an associate editor of Complexity. A Festschrift in celebration
of his sixtieth birthday was recently published. He is currently
writing his ninth book, Information and Complexity, Cambridge
University Press, 1998.
From: Steve Pinker
Submitted: 11.6.97
George Lakoff and I share an admiration for Dehaene's excellent
book, and we also agree in many ways about the cognitive basis for
mathematical reasoning, as I point out in How the Mind Works.
Lakoff claims to disagree with one of the foundations of my book,
the computational theory of mind, which he repeatedly misidentifies
as "the computer program theory of the mind." The misidentification
does not rest with the name, however; Lakoff describes the theory
in a way that I explicitly disavow:
"all aspects of mind can be characterized adequately without looking
at the brain."
"This computer program mind is not shaped by the details of the
brain."
"The concept of number is part of a computer program that is not
shaped or determined by the peculiarities of the physical brain
at all, and that we can know everything about number without knowing
anything about the brain."
Nowhere do I espouse these mad beliefs. I do believe that thinking
is a form of computation, as does Pat Churchland, co-author
of a book entitled "The Computational Brain." I also believe that
to understand the mind, one must look at what the mind computes,
at how it computes those things, and at the brain structures that
carry out the computations. The gratuitous bits about "not looking
at the brain," "not shaped by the details of the brain," and "not
knowing anything about the brain" are inventions by Lakoff.
Steve Pinker
STEVEN PINKER is professor in the Department of Brain and Cognitive
Sciences at MIT; director of the McDonnell-Pew Center for Cognitive
Neuroscience at MIT; author of Language Learnability and Language
Development, Learnability and Cognition, The Language Instinct,
and How the Mind Works (Norton).
From: Charles Simonyi
Submitted: 11.7.97
Of course I enjoyed the interview with Dehaene and I am also looking
forward to the forthcoming Hungarian (or English?) translation.
In the spirit of brainstorming I will make some comments on the
comments.
I am very much in agreement with Prof. Gardner on prodigies: my
belief is also that they use a special encoding for a domain, an
encoding that is most similar to the natural-language sense
that every normal child has. It even seems possible to me that
prodigies subvert their language organ for certain language-like,
that is, abstract, domains: chess, music, mathematics. The quality
of "fluency" is an excellent description of how prodigies navigate
in the domain; it is easy to imagine Mozart as the native
speaker of music making fun of Salieri, the earnest foreign student.
I have a friend, Mr. A, in his sixties, who is a language phenomenon:
he has the ability to learn languages to native level as far as
grammar, fluency, and pronunciation are concerned; his vocabulary
and cultural knowledge are still proportional to the time he spends
with the language. Proof: real natives regarded him as a retarded
compatriot rather than a skilled foreigner. I had many discussions
with him about his methods. My point is that studying adults who
are particularly successful with languages might give us insight
into unusually effective encodings, which may be
the same ones that the language organ employs. Mr. A believes,
for example, in "basis vectors", that is, random prototype sentences
which he fancies, learns by heart, and repeats frequently with
great precision and great fluency ("You should have come by the
way of the Opera square") and which he then subverts to specific
purposes ("You should have counted the change").
Number sense is not very language-like, as many comments noted.
Perhaps this is why numerical prodigies are rare and are idiot savants:
they have to subvert their language organ beyond what is harmless.
As to the Ramanujan number 1729: has anybody noticed the same
number appearing in Feynman's book ("Surely You're Joking...")
when he competes using mental arithmetic with the Japanese abacus
master? After losing on simple arithmetic, the problem given was
to calculate the cube root of 1729! Feynman apparently had not
heard the Ramanujan story, because he does not mention it at all
in the book: his insight was to remember from his wartime engineering
days that one cubic foot equals 1728 cubic inches! So as the Japanese
master sweated on the abacus, he slowly emitted the obvious digits 1, 2,
0 while feverishly trying to approximate a little bit of the rest
using a power series. Also, the octal representation of this number,
3301, was for a long time the secret password to the central computer
of Xerox PARC. Maybe this will spur someone on to more insights.
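For readers who want to check the arithmetic in this story, here is a small illustrative Python snippet (not taken from any of the sources mentioned): since 12 cubed is 1728, the cube root of 1729 is just over 12, a first-order power-series correction supplies the next digits, and the octal form of 1729 is indeed 3301.

    # Checking the numbers in the anecdote. Since 12**3 == 1728, the cube root
    # of 1729 is just over 12; a first-order expansion (1 + x)**(1/3) ~ 1 + x/3
    # gives the leading digits while the rest is refined.

    exact = 1729 ** (1 / 3)              # 12.00231...
    approx = 12 * (1 + (1 / 1728) / 3)   # 12.00231... (first-order correction)
    print(exact, approx)

    print(oct(1729))                     # 0o3301 -- octal 3301, as recalled above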
On Steve Pinker's response to George Lakoff: I thought I was a
believer in Steve's "computational theory of mind" but when I originally
read them I did not find Lakoff's exaggerations shocking, I thought
they were explanatory. Without agreeing with Lakoff's conclusions,
is it not the case, Steve, that "all aspects of the mind can be
characterized adequately without looking at the brain" if we take
this statement as a test that a future goal will have been reached
(the goal of very strong AI) rather than as a description of an absurd
("mad") process, which I do not believe was intended? I also interpreted
"Looking at" as "referenced in any way except for the fact that
none of this would have existed without the prior existence of many
brains both looking and looked at" that is "looking at the time
that the test is successful and very strong AI has been reached".
By very strong AI I mean consciousness without brain/body.
CHARLES SIMONYI, Chief Architect, Microsoft Corporation, joined
Microsoft in 1981 to start the development of microcomputer application
programs. He hired and managed teams who developed Microsoft Excel,
Multiplan, Word, and other applications. In 1991, he moved on to
Microsoft Research where he focused on Intentional Programming,
an "ecology for abstractions" which strives for maximal reuse of
components by separating high level intentions from implementation
detail.
Dehaene Responds
From: Stanislas Dehaene
Submitted: November 14, 1997
Many thanks to George Lakoff, Marc Hauser, Jaron Lanier, Rafael
Nunez, Margaret Wertheim, Howard Gardner, Joseph Traub, Steve Pinker,
and Charles Simonyi for taking the time to write careful commentaries
and addenda to my "number sense" paper. Their comments often address
additional points that I did not have room to comment on in Edge,
but which are discussed at greater length in my book. Unfortunately,
the commentaries themselves are now two to three times as long as
my original article! In my reply, I will confine myself to
a few general remarks and rejoinders.
Higher mathematics (comments by George Lakoff and Rafael Nunez)
I am delighted to hear that George Lakoff and Rafael Nunez think
that they can now address the cognitive structure of higher mathematics
using an approach somewhat similar to the one I was using for number,
and I'm looking forward to reading their forthcoming book. There
is no doubt that arithmetic is but a very small aspect of mathematics.
If I am right, however, much of the mathematical edifice will be
found to have the same structure and origin as arithmetic. Namely,
mathematics is a construction on the basis of raw intuitions or
primary cerebral representations that have been engraved in our
brains through evolution. Lakoff and Nunez suggest that, besides
number, spatial relations such as containment, contact, etc. are
also basic foundations of mathematics. That sounds very likely.
They also suggest that the mathematical edifice is built on "conceptual
metaphors", so that structures from one domain (e.g. space) can
be mapped metaphorically to another domain (e.g. number), giving
rise to novel mathematics in the process. This is a genuinely novel
and interesting idea which, if formalized, could flesh out the notion
of a "construction of mathematics".
Number processing and the brain-computer metaphor (comments by
George Lakoff and Steve Pinker)
My own view stands in between the contrasting or even clashing
views held by George Lakoff and Steve Pinker. I would grant, with
Steve, that computational models of the mind/brain are key to our
current rapid progress in cognitive science. I also think, however,
that the computational metaphor for brain function, which is currently
the best metaphor we have, has led some to an extreme form of functionalism
according to which studying the brain has absolutely nothing to
do with how the mind works. This extreme form of functionalism, which
is not the one held by Steve Pinker, can be found in the writings
of Jerry Fodor or Philip Johnson-Laird (e.g."The physical nature
[of the brain] places no constraints on the pattern of thought").
This is, I think, a rather absurd statement.
I concur with George Lakoff in thinking that by looking at the
organization of the brain, we will be able to discover novel ways
of doing computations that are radically different from those in
digital computers. My own research refutes a simple-minded brain-computer
metaphor, and in fact the last chapter of my book is largely dedicated
to this point. As far as number crunching is concerned, the brain
is unlike any digital computer that we know of: it is very
poor at exact symbolic processing, and seems to process numerical
quantities in an analog rather than a digital fashion.
Linguistic versus non-linguistic representations of number (comments
by Marc Hauser, Joseph Traub and Jaron Lanier)
To answer Joseph Traub's specific query, I do not have information
about the Anasazi representation of numbers. It is quite possible,
however, to have no written or spoken symbols for number and yet
to develop accurate arithmetic systems. Many tribes in New Guinea
use a body-pointing system to represent integers and to calculate.
The abacus is also an excellent, non-arbitrary way of representing
and computing with numbers. The exact relation to the infant and
animal "number sense" is unclear. One idea, which was defended by
UCLA psychologists Rochel Gelman and Randy Gallistel, is that the
preverbal quantity representation provides strong principles that
guide the acquisition of verbal counting and of symbolic notation.
Marc Hauser states that "in most animal interactions, there is
never really a situation where a more versus less distinction fails".
Is that really true? There certainly are many experiments in which
animals perform barely above chance when dealing with two close
numbers such as 6 and 7. It might be argued that such failures,
even if minor, provide at least some evolutionary pressure for the
development of exact number systems. It seems to me equally plausible,
however, that human language and symbolic abilities emerged through
entirely non-numerical evolutionary pressures, and the number system
merely exploited and pre-empted those systems later on. Indeed,
the connection between verbal numerals and preverbal numerical quantities
is one that is very difficult for children to acquire. In my book,
I suggest that this divide, and the ensuing blind execution of symbolic
algorithms without consideration of their quantitative meaning,
is the main reason for "innumeracy".
I disagree with Jaron Lanier, who thinks that "we find no occasion
when it is desirable or even acceptable to confuse 1006 with 1007
or pi with 3". Of course we do: approximation is basic to
the way we talk about or work with numbers. All languages throughout
the world are full of expressions for approximation (e.g. I have
ten or twenty books). Our ability to approximate calculations is
certainly what distinguishes us most clearly from current digital computers,
which can compute pi to the umpteenth decimal and yet have no idea
what the result means. We know what it means, I think, partly because
we can relate to other quantities using our sense of approximation.
Certainly, our adult brain eventually must comprise multiple representations
of numbers (quantities, dates, the number line, arithmetic facts, etc.).
Indeed, in some brain-lesioned patients these various representations
may dissociate. Exactly how many of these representations are available
to an adult is, I think, highly variable from culture to culture
and even from individual to individual. We all start with a core
"number line" for approximate numerical quantities, but our end
state almost certainly is very different depending on our level
of mathematical achievement. This need not mean that the basic architecture
of the brain is transformed (here I concur with Rafael Nunez). Rather,
we seem to have a limited ability to recycle our pre-existing cerebral
networks for a different use than the one they were initially selected
for. For instance, we seem to use a cortico-subcortical circuit
initially dedicated to the storage of routine motor behaviors in
order to store rote multiplication facts. Perhaps George Lakoff's
notion of a "conceptual metaphor" will help us understand this recycling
process (which does not mean that the brain is fully "plastic").
It is interesting to speculate on the role of music in the acquisition
of higher mathematics (thanks to Jaron Lanier for pointing this out).
It is often claimed that mathematicians are "gifted" for music.
I believe that the causal relation may well be the converse. Children
who take music courses early on are exposed to what amounts to highly
abstract mathematical concepts far earlier in life
than they would be if they were only following a regular school curriculum.
For instance, fractions and powers of two are fundamental to musical
notation. Thus, early musical training may provide early mathematics
training as a bonus.
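A small worked example of the fractions alluded to above (my own
illustration, using Python's fractions module): standard note values
are powers of two, and reading a dotted rhythm or a time signature
amounts to adding them.

    from fractions import Fraction

    # Note values as fractions of a whole note: powers of two, as in notation.
    whole, half, quarter, eighth = (Fraction(1, 2 ** k) for k in range(4))

    dotted_quarter = quarter + eighth   # a dot adds half the note's value: 3/8
    bar_of_3_4 = 3 * quarter            # a 3/4 bar holds three quarter notes: 3/4

    print(dotted_quarter, bar_of_3_4)   # 3/8 3/4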
Great mathematicians and prodigies (comments by Howard Gardner
and Jaron Lanier)
One final thought about prodigies. I concur with Howard Gardner
in thinking that "No one understands the origins of prodigies".
However, I wanted to draw attention to the fact that there is really
very little evidence that prodigies start in life with distinct
neural hardware. Perhaps the most crucial difference lies in the
amount of attention that they are willing to dedicate to a narrow
domain such as numbers. Indeed, this is how retarded autistic children
eventually manage to learn about arithmetic or the calendar, after
thousands of hours of training themselves. My belief is that they
have little or no special talent for numbers or the calendar
to begin with: indeed, how could neural networks be pretuned
to the structure of the calendar, which is a very recent cultural
invention? Whatever little evidence there is suggests that training
alone can turn normal subjects into apparent "prodigies", if one
is willing to dedicate enough time to it. Also I think that it's
important to debunk the idea that prodigies use radically different
algorithms from our own: they simply don't. Rather, they use
simple variations or shortcuts that anyone can learn.
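Dehaene does not name particular shortcuts, so the following is only
a hedged illustration of the kind of learnable trick he means: the
classic rule for squaring a number that ends in 5.

    def square_of_number_ending_in_5(n):
        # Take the leading digits t, multiply t by t + 1, then append 25.
        # Example: 85 -> 8 * 9 = 72 -> 7225.
        # Why it works: (10*t + 5)**2 = 100*t*(t + 1) + 25.
        assert n % 10 == 5
        t = n // 10
        return int(str(t * (t + 1)) + "25")

    assert square_of_number_ending_in_5(85) == 85 ** 2     # 7225
    assert square_of_number_ending_in_5(115) == 115 ** 2   # 13225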
"Perhaps Einstein started an alternate neural numerical scratch
pad", says Lanier. Who knows? There is a real chicken-and-egg problem
here. Research has seemingly shown that Einstein's brain had an
abnormally high concentration of glial cells in the inferior parietal
region. Is this because he was born with this "malformation", however,
or because he developed it through putting that brain region to
frequent use? Understanding the neural bases of talent is bound
to be among the most difficult questions that cognitive neuroscience
will have to address.
One final word on Charles Simonyi's remarks
I would like to thank Charles Simonyi for drawing my attention
to new anecdotes surrounding the number 1729. The story about Richard
Feynman remembering that 1728 is 12x12x12 is particularly interesting,
because it strengthens the idea that, indeed, 1728 is a very recognizable
number, and that many people --including at least Ramanujan, Feynman,
Simonyi, myself, and surely scores of others-- have it stored in
their mental number lexicon. This does not in the least diminish
my admiration for Ramanujan's feats, but it does make them seem more
human and understandable.
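For readers who want to check the arithmetic behind the anecdote,
here is a small sketch (mine, not Simonyi's or Dehaene's): 1728 is
indeed 12 cubed, and 1729 is the famous number Ramanujan recognized
as the smallest expressible as a sum of two positive cubes in two
different ways; the search bound of 20 is an assumption that is
sufficient for this case.

    from itertools import combinations

    assert 12 ** 3 == 1728          # Feynman's observation: 1728 is 12 cubed

    def two_cube_sums(n, limit=20):
        # All ways of writing n as a sum of two distinct positive cubes
        # with both bases below the given limit.
        return [(a, b) for a, b in combinations(range(1, limit), 2)
                if a ** 3 + b ** 3 == n]

    print(two_cube_sums(1729))      # [(1, 12), (9, 10)]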
Charles Simonyi's notion that prodigies "navigate fluently" in their
domain of predilection is very similar to the metaphor I use in
my book: I talk about a landscape of numbers (or more abstract
mathematical objects) within which prodigies and top mathematicians
can move freely. These people claim to experience numbers in a phenomenal
way, often within a spatial setting, and they claim that numbers
and their properties immediately pop to mind. Furthermore, many
claim to experience strong pleasure associated with this:
some go so far as to prefer the company of numbers to that of their
fellow humans! ("Numbers are friends to me, more or less," says Wim
Klein. "It doesn't mean the same for you, does it, 3844? For you
it's just a three and an eight and a four and a four. But I say:
'Hi, 62 squared!'").
It seems plausible, indeed, as Charles Simonyi suggests, that these people
have "subverted" several of their mental organs to mathematical
purposes: certainly their numerosity system, but also perhaps
part of their spatial navigation system, their language system,
and their internal reward system. Perhaps someday we shall be able
to see this "recycling" of brain areas for a different cultural
purpose using brain imaging. There are already a few examples of
this in non-mathematical domains, for instance the recycling of
visual cortex for Braille processing in blind subjects, or the recycling
of part of the infero-temporal object recognition system for word
identification in literate people.
My own view of prodigies is that their brain structure is initially
not very different from the rest of us mortals. What is special
is their ability to focus an amazing amount of attention on a narrow
domain (such as arithmetic) and to get strong pleasure out of it.
This gets them started, very early on in life, on a learning curve
where they acquire more and more memories and automatisms about
a domain, possibly subverting several initially unrelated brain
systems to their passion, and eventually turning into experts. In
a nutshell: passion breeds talent.
Stanislas Dehaene
STANISLAS DEHAENE is a researcher at the Institut National de la
Santé et de la Recherche Médicale (INSERM), where he studies the
cognitive neuropsychology of language and number processing in the
human brain. He was awarded a master's
degree in applied mathematics and computer science from the University
of Paris in 1985 and then earned a doctoral degree in cognitive
psychology in 1989 at the Ecole des Hautes Etudes en Sciences Sociales
in Paris. He is the author of The Number Sense: How Mathematical
Knowledge is Embedded in Our Brains (Oxford University Press).