2008

"WHAT HAVE YOU CHANGED YOUR MIND ABOUT?"

CHRIS DIBONA
Open Source Programs Manager, Google


Oversight and Programmer productivity

Over the last three years, we've run a project called the Summer of Code, in which we pair up university-level software developers with open source software projects. If the student succeeds in fulfilling the goals set forth in their application (which the project has accepted), they are paid a sum of $4,500. We wanted a program that would keep software developers coding over the summer while also helping out our friends in the world of open source software development.

The passing rate last year was 81%, which means some 700+ students completed their projects to the satisfaction of their mentors.

This last year, we did a cursory study of the code produced by these students and reviewed, among other things, how many lines of code each student produced. Lines of code have been done to death in the computer industry and are a terrible measure of programmer productivity. But they are one of the few metrics we have, and since we assume that a student who passes has written code that passed muster with the project, the line count becomes somewhat more meaningful than it normally would be.

Over the summer the average student produced 4,000 lines of code, with some students producing as much as 10, 14 and even 20 thousand lines. This is an insane amount of code, whether you measure by time or by money. By some measures it means the students are anywhere between 5 and 40 times as productive as your 'average' employed programmer. This code, mind you, was written by students who were almost always geographically separated from their mentors by at least three time zones and almost never had a face-to-face meeting with them.
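To make the arithmetic behind that claim concrete, here is a rough back-of-the-envelope sketch in Python. The baseline output assumed for an 'average' employed programmer is an illustrative guess, not a figure from the essay, and the resulting multiple swings entirely with that assumption.

    # Back-of-the-envelope sketch of the productivity comparison above.
    # The industry baseline is an illustrative assumption, not a figure
    # from the essay or from any particular study.
    summer_weeks = 12                      # roughly one Summer of Code term
    stipend_usd = 4_500
    student_loc = [4_000, 10_000, 20_000]  # average and high-end figures cited above

    baseline_loc_per_year = 3_000          # assumed annual output of an 'average' programmer
    baseline_loc_per_summer = baseline_loc_per_year * summer_weeks / 52

    for loc in student_loc:
        ratio = loc / baseline_loc_per_summer
        print(f"{loc:>6} lines in a summer is roughly {ratio:.0f}x the assumed "
              f"baseline, at ${stipend_usd / loc:.2f} per line")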

This is an absurd amount of productivity for a student or, heck, anyone writing code, and it is often production- and user-ready code that these students produce. It's really something to see, and it made me revise what I consider a productive developer to be and what I should expect from the engineers who work for and with me.

But, and here's the thing I changed my mind about, is the tradeoff for such absurdly high productivity that I have to run my projects the way we run the Summer of Code? Maybe. Can I keep my hands off and let things run their course? Is the team strong enough to provide this kind of mentoring to one another? I now think the answer is yes: they can run each other better than I can run them. So let's see what letting go looks like. Ask me next year. Let's hope next year's question is 'What chance are you least regretful you took?' so I can talk about this then!


BEATRICE GOLOMB, MD, PhD
Associate Professor of Medicine & Associate Professor of Family and Preventive Medicine at UCSD

Reasoning from Evidence: A Call for Education

Rather than choose a personal example of a change in mind, I reflect on instances in which my field, medicine, has apparently changed "its" mind based on changes in evidence. In my experience major reversals in belief (as opposed to simply progressions, or changes in course) typically arise when there are serious flaws in evaluation of evidence or inference leading to the old view, the new view, or both.

To be committed to a view based on facts, and later find the view wrong, either the facts had to be wrong or the interpretation of them had to extend beyond what the facts actually implied. The "facts" can be wrong in a range of settings: academic fraud, selective documentation of research methods, and selective publication of favorable results — among many. But in my experience more often it is the interpretation of the facts that is amiss.

Hormone replacement therapy ("HRT") is a case in point. You may recall HRT was widely hailed as slashing heart disease and dementia risk in women. After all, in observational studies — with large samples — women who took HRT had lower rates of heart disease and Alzheimer's than women who did not.

I was not among those advising patients that HRT had the benefits alleged. Women who took HRT (indeed any preventive medication) differed from those who did not. These differences include characteristics that might be expected to produce the appearance of a protective association, through "confounding." For instance, people who get preventive health measures have better education and higher income — which predict less dementia and better health, irrespective of any actual effect of the treatment. (Efforts to adjust for such factors can never be trusted to capture differences sufficiently.) These disparities made it impossible to infer from such "observational" data alone whether the "true" causal relationship of hormone replacement to brain function and heart events was favorable, neutral, or adverse.

When controlled trials were finally conducted — with random allocation to HRT or placebo ensuring that the compared groups were actually otherwise similar — HRT was found rather to increase rates of heart-related events and dementia. But the lessons from that experience have not been well learned, and new, similarly flawed "findings" continue to be published — without suitable caveats.
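For readers who want to see the confounding problem in miniature, here is a toy simulation sketch in Python, with every parameter invented for illustration: a treatment whose true effect is slightly harmful looks protective in observational data because healthier people are more likely to take it, while random allocation recovers the true effect.

    # Toy simulation of confounding (all parameters invented for illustration).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000

    health = rng.normal(size=n)        # confounder: baseline health, education, income
    true_effect = -0.2                 # the treatment is actually slightly harmful

    # Observational world: healthier people are more likely to take the treatment.
    treated_obs = rng.random(n) < 1 / (1 + np.exp(-2 * health))
    outcome_obs = health + true_effect * treated_obs + rng.normal(size=n)

    # Randomized trial: treatment assigned by coin flip, independent of health.
    treated_rct = rng.random(n) < 0.5
    outcome_rct = health + true_effect * treated_rct + rng.normal(size=n)

    def naive_effect(outcome, treated):
        return outcome[treated].mean() - outcome[~treated].mean()

    print("observational estimate:", round(naive_effect(outcome_obs, treated_obs), 2))
    print("randomized estimate:   ", round(naive_effect(outcome_rct, treated_rct), 2))
    # The observational comparison shows a sizeable apparent benefit;
    # the randomized comparison recovers the true, harmful effect of about -0.2.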

It is tempting to provide a raft of other examples in which a range of errors in reasoning from evidence were present, recognition of which should have curbed enthusiasm for a conclusion, but I will spare the reader.

Stunningly, there is little mandated training in evaluation of evidence and inference in medical school — nor indeed in graduate school in the sciences. (Nor are the medical practice guidelines on which your care is grounded generated by people chosen for this expertise.)

Even available elective coursework is piecemeal. Statistics and probability courses each cover some domains of relevance, such as study "power," or distinguishing posterior from a priori probabilities: it may commonly be that if a wife who was beaten is murdered, the spouse is the culprit (a posteriori), even if it is uncommon for wife beaters to murder their spouses (a priori). Epidemiology-methods courses address confounding and many species of bias. Yet instruction in logical fallacies, for instance, was completely absent from the available course armamentarium. Each of these domains, and others, is critical to sound reasoning from evidence.
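The spousal example above turns on the distinction between a prior and a posterior probability, and a toy calculation makes it concrete. All numbers below are invented purely for illustration; only the structure of the argument matters.

    # Toy Bayes calculation (all numbers invented) for the battering example.
    p_batterer_murders = 1 / 2500    # a priori: battering only rarely ends in murder
    p_other_murders = 1 / 20000      # chance the wife is murdered by someone else

    # A posteriori: GIVEN that a battered wife has been murdered,
    # how likely is it that the batterer is the culprit?
    p_culprit = p_batterer_murders / (p_batterer_murders + p_other_murders)
    print(round(p_culprit, 2))       # about 0.89 with these illustrative numbers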

Preventive treatments should mandate a high standard of evidence. But throughout the "real world" decisions are required despite incomplete evidence. At a level of information that should not propel a strong evidence-driven "belief," a decision may nonetheless be called for: How to allocate limited resources; whom to vote for; and whether to launch a program, incentive, or law. Even in these domains, where the luxury of tightly controlled convergent evidence is unattainable — or perhaps especially in these domains — understanding which evidence has what implications remains key, and may propel better decisions, and restrain unintended consequences.

The haphazard approach to training in reasoning from evidence in our society is, in its way, astounding — and merits a call to action. Better reasoning from evidence is substantially teachable but seldom directly taught, much less required. It should be central to curriculum requirements, at both graduate (advanced) and undergraduate (basic) levels — if not sooner.

After all, sound reasoning from evidence is (or ought to be) fundamental for persons in each arena in which decisions should be rendered, or inferences drawn, on evidence: not just doctors and scientists, but journalists, policy makers — and indeed citizens, whose determinations affect not solely their own lives but others', each time they parent, serve on juries, and vote.


STEPHON ALEXANDER
Assistant Professor of Physics, Penn State


The Light Side of Locality

Before I entered the intellectual funnel of graduate school, I used to cook up thought experiments to explain coincidences, such as running into a person immediately after a random thought of them. This secretive thinking was good mental entertainment, but the demands of forging a serious career in physics research forced me to make peace with such wild speculations. In my theory of coincidences, non-local interactions as well as a dark form of energy were necessary; absolute science fiction! Fifteen years later, we now have overwhelming evidence of a 'fifth force' mediated by an invisible substance that the community has dubbed 'dark energy'. In hindsight, it is no coincidence that I have changed my mind and now think that nature is non-local.

Non-local correlations are not part of our common experience, so they are difficult both to imagine and to accept. Research in theoretical physics often encourages me to keep an open mind, and not to get too attached to ideas that I am deluded into thinking should be correct. While this has been a constant struggle for me in my scientific career thus far, I have experienced the value of this weaning from theoretical ideology. After years of wrestling with some of the outstanding problems in the field of elementary particle physics and cosmology, I have been forced to change my mind about a predisposition that was silently passed on to me by my physics predecessors: that the laws of physics are, for the most part, local.

During my first year in graduate school, I came across the famous Einstein, Podolsky and Rosen (EPR) thought experiment, which succinctly argues for the inevitability of 'spooky action at a distance' in quantum mechanics. Then came the Aspect experiment, which measured the non-local entanglement of photon polarizations, confirming EPR's expectation that there exist non-local correlations in nature enabled by quantum mechanics (with a caveat, of course). This piece of knowledge had a short life in my education and research career.

Non-locality exited the door of my brain once and for all after I approached one of my professors, an accomplished quantum field theorist. He convinced me that non-locality goes away once quantum mechanics properly incorporates causality through a unification with special relativity, i.e. the theory known as quantum field theory. With the promise of a sounder career path, I welcomed these then-comforting words and attempted to master quantum field theory. Besides, even if non-locality happens, such processes are exceptional events created under special conditions, while most physics is completely local. Quantum field theory works, and it became my new religion. I have since remained on this comfortable path.

Now that I specialize in the physics of the early universe, I have witnessed first-hand the great predictive and explanatory power of Einstein's general relativity, married with quantum field theory, to explain both the complete history and the physical mechanism for the origin of structure in the universe, all in a seemingly local and causal fashion. We call this paradigm cosmic inflation, and it is deceptively simple. The universe started out immediately after the big bang from a microscopically tiny piece of space, which then inflated 'faster than the speed of light'. Inflation is able to explain the entire complexity of the observed universe with the economy of a few equations involving general relativity and quantum field theory.

Despite its great success, inflation has been plagued with conceptual and technical problems. These problems created thesis projects and, inevitably, jobs for a number of young theorists like myself. Time after time, publication after publication, like rats on a wheel, we are running out of steam as the problems of inflation just reappear in some other form. I have now convinced myself that the problems associated with inflation won't go away unless we somehow include non-locality.

Ironically, inflation gets ignited by the same form of dark energy that we see permeating the fabric of the cosmos today, except in much greater abundance fourteen billion years ago.  Where did most of the dark energy go after inflation ended?  Why is some of it still around?  Is this omniscient dark energy the culprit behind non-local activity in physical processes?  I don't know exactly how non-locality in cosmology will play itself out, but by its very nature, the physics underlying it will affect 'local' processes.  I still haven't changed my mind on coincidences though.


GEORGE JOHNSON
Science writer; Author, Miss Leavitt's Stars

I used to think that the most fascinating thing about physics was theory — and that the best was still to come. But as physics has grown vanishingly abstract I've been drawn in the opposite direction, to the great experiments of the past.

First I determined to show myself that electrons really exist. Firing up a beautiful old apparatus I found on eBay — a bulbous vacuum tube big as a melon mounted between two coils — I replayed J. J. Thomson's famous experiment of 1897 in which he measured the charge-to-mass ratio of an electron beam. It was thrilling to see the bluish-green cathode ray dive into a circle as I energized the electromagnets. Even better, when I measured the curve and plugged all the numbers into Thomson's equation, my answer was off by only a factor of two. Pretty good for a journalist. I had less success with the stubborn Millikan oil-drop experiment. Mastering it, I concluded, would be like learning to play the violin.
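For readers who want to try the arithmetic themselves, here is a sketch of the standard calculation for a bulb-and-Helmholtz-coil e/m apparatus of the kind described. Whether it matches the author's exact procedure is an assumption on my part, and every 'reading' below is invented for illustration.

    # Sketch of a Thomson-style e/m calculation for a bulb between Helmholtz coils.
    # All 'measured' values are invented; the essay does not give its readings.
    import math

    mu0 = 4e-7 * math.pi                 # vacuum permeability, T*m/A

    turns, coil_radius = 130, 0.15       # hypothetical coil: turns per coil, metres
    current = 1.5                        # amperes through the coils
    accel_voltage = 250.0                # volts accelerating the electron beam
    beam_radius = 0.05                   # metres, radius of the curved beam

    # Field at the centre of a Helmholtz pair
    B = (4 / 5) ** 1.5 * mu0 * turns * current / coil_radius

    # Energy balance eV = m v^2 / 2 and circular motion e v B = m v^2 / r
    # combine to give the charge-to-mass ratio e/m = 2V / (B r)^2.
    e_over_m = 2 * accel_voltage / (B * beam_radius) ** 2

    print(f"B = {B:.2e} T, e/m = {e_over_m:.2e} C/kg (accepted value ~1.76e11)")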

Electricity in the raw is as mysterious as superstrings. I turn down the lights and make my Geissler tubes glow with the touch of a high-voltage wand energized by a brass-and-mahogany Ruhmkorff coil. I coax the ectoplasmic rays in my de la Rive tube to rotate around a magnetized pole.

Maybe in a year or two, the Large Hadron Collider will make this century's physics interesting again. Meanwhile, as soon as I find a nice spinthariscope, I'm ready to go nuclear.


GEOFFREY MILLER
Evolutionary Psychologist, University of New Mexico; Author, The Mating Mind

Asking for directions

Guys lost on unfamiliar streets often avoid asking for directions from locals.  We try to tough it out with map and compass.  Admitting being lost feels like admitting stupidity.  This is a stereotype, but it has a large grain of truth.  It's also a good metaphor for a big overlooked problem in the human sciences. 

We're trying to find our way around the dark continent of human nature.  We scientists are being paid to be the bus-driving tour guides for the rest of humanity.  They expect us to know our way around the human mind, but we don't.

So we try to fake it, without asking the locals for directions.  We try to find our way from first principles of geography ('theory'), and from maps of our own making ('empirical research').  The roadside is crowded with locals, and their brains are crowded with local knowledge, but we are too arrogant and embarrassed to ask the way.  Besides, they look strange and might not speak our language.  So we drive around in circles, inventing and rejecting successive hypotheses about where to find the scenic vistas that would entertain and enlighten the tourists ('lay people', a.k.a. 'tax-payers').  Eventually, our bus-load starts grumbling about tour-guide rip-offs in boring countries.  We drive faster, make more frantic observations, and promise magnificent sights just around the next bend. 

I used to think that this was the best we could do as behavioural scientists.  I figured that the intricacies of human nature were not just dark, but depopulated — that a few exploratory novelists and artists had sought the sources of our cognitive Amazons and emotional Niles, but that nobody actually lived there.

Now, I've changed my mind — there are local experts about almost all aspects of human nature, and the human sciences should find their way by asking them for directions.  These 'locals' are the thousands or millions of bright professionals and practitioners in each of thousands of different occupations.  They are the people who went to our high schools and colleges, but who found careers with higher pay and shorter hours than academic science.  Almost all of them know important things about human nature that behavioural scientists have not yet described, much less understood.  Marine drill sergeants know a lot about aggression and dominance.  Master chess players know a lot about if-then reasoning.  Prostitutes know a lot about male sexual psychology.  School teachers know a lot about child development.  Trial lawyers know a lot about social influence.  The dark continent of human nature is already richly populated with autochthonous tribes, but we scientists don't bother to talk to these experts. 

My suggestion is that whenever we try to understand human nature in some domain, we should identify several groups of people who are likely to know a lot about that domain already, from personal, practical, or professional experience.  We should seek out the most intelligent, articulate, and experienced locals — the veteran workers, managers, and trainers. Then, we should talk with them, face-to-face, expert-to-expert, as collaborating peers, not as researchers 'running subjects' or 'interviewing informants'.  We may not be able to reimburse them at their professional hourly wage, but we can offer other forms of prestige, such as co-authorship on research papers.

For example, suppose a psychology Ph.D. student wants to study emotional adaptations, such as fear and panic, that evolved for avoiding predators.  She learns about the existing research (mostly by Clark Barrett at UCLA), but doesn't have any great ideas for her dissertation research.  The usual response is three years of depressed soul-searching, random speculation, and fruitless literature reviews.  This phase of idea-generation could progress much more happily if she just picked up the telephone and called some of the people who spend their whole professional lives thinking about how to induce fear and panic.  Anyone involved in horror movie production would be a good start: script-writers, monster designers, special effects technicians, directors, and editors.  Other possibilities would include talking with:
  • Halloween mask designers,
  • horror genre novelists,
  • designers of 'first person shooter' computer games,
  • clinicians specializing in animal phobias and panic attacks,
  • Kruger Park safari guides,
  • circus lion-tamers,
  • dog-catchers,
  • bull-fighters,
  • survivors of wild animal attacks, and
  • zoo-keepers who interact with big cats, snakes, and raptors.

A few hours of chatting with such folks would probably be more valuable in sparking some dissertation ideas than months of library research. 

The division of labor generates wondrous prosperity, and an awesome diversity of knowledge about human nature in different occupations.  Psychology could continue trying to rediscover all this knowledge from scratch.  Or, it could learn some humility, and start listening to the real expertise about human nature already acquired by every bright worker in every factory, office, and mall.  


STEVE CONNOR
Science Editor, The Independent in London

I was born in the second half of the 20th Century and for most of my life I grew up in the perhaps naïve belief that the 21st Century would be somehow better, shinier and brighter than the last. We even used it as a positive adjective and talked about "21st Century healthcare", "a 21st Century car" or even "a 21st Century way of life". Over the past decade or so my opinion has gradually changed. I now believe that however bad the 20th Century has been — and it brought us the horrors of the Holocaust and nuclear proliferation — this coming century will be far worse.

Writing about science as a career takes you on an extraordinary journey of progression that gives the illusion that everything is on an unfaltering course of improvement. Many other specialisms in daily journalism — politics, arts, legal affairs, crime, education etc. — seem to follow a circular path of reporting, which means that the same types of stories come round time and time again. But science is all about standing on the shoulders of the giants who came before, and with it the inverted pyramid of scientific knowledge continues its exponential growth. And so it seems self-evident that things can only get better as more questions can be answered and more problems solved.

History too supports the idea of a progressively better world. Vaccines, drugs, better hygiene and housing, clean water and other general improvements in health and wellbeing are now taken for granted. Today, people in developed countries live longer and healthier lives than any previous generation — and so often without the pain that went with living in the age before science. Anyone who doubts the improvements in medical science should read Claire Tomalin's biography of Samuel Pepys where she describes in some detail how surgeons removed a bladder stone through his penis without the benefit of anaesthetic. (Amazingly, he survived.)

But as the first decade of the 21st Century enters its final years, my optimism for the remaining nine has waned. I no longer see the phrase "21st Century" as being synonymous with progression and betterment. There is no single event or fact that has led to this change of mind, but if pressed I would blame two mutually interacting phenomena — global warming and the inexorable growth in the human population.

This century will see both effects come into deadly play. By mid-century there will be half as many people again on the planet as there are now — some 9 billion or more — and the resources available to support them will be severely degraded, even without the help of climate change. But we know that the world will be warmer, perhaps significantly so, by mid to late century, and this will put intolerable pressure on the only life-support system we know — planet Earth.

I have also changed my mind about the assessments of the Intergovernmental Panel on Climate Change. They are much too conservative and have underestimated the future impact of melting polar ice sheets and rising sea levels. The biggest influence on changing my mind on this has been James Hansen, director of the Goddard Institute for Space Studies, who in 2007 co-authored a 29-page scientific paper, published in the Philosophical Transactions of the Royal Society, detailing why the scale of the threat has put the Earth in imminent peril. Hansen believes that nothing short of a planetary rescue will save us from global environmental cataclysm and that we have less than 10 years to act.

The sea ice of the Arctic is melting far faster than anyone had predicted and the record minimum seen in summer 2007 (which followed the previous record minimum of 2005) has shocked even the most seasoned Arctic observers. The stability of the Greenland ice sheet and the West Antarctic Ice Sheet in the southern hemisphere, which both have the potential to raise sea levels by many metres, is far more precarious than any IPCC report has hitherto suggested. Given that many hundreds of millions of people live within a few metres of sea level, and many of them are already competing for ever-more limited supplies of freshwater, the issue of impending sea level rise will become one of the most pressing problems facing humanity this century.

Added to this is the issue of positive feedbacks within the climate system — the factors that will make climate change far worse as carbon dioxide levels continue to rise. As Hansen and others have pointed out, there seem to be many more positive reinforcers of climate change than negative feedbacks that could help to limit the damage. In short, we are tinkering with a global climate system that could go dangerously out of control, and at a far faster rate than anyone has imagined as they peer into the crystal balls of their computer models. If it happens at all, the positive feedbacks will begin to exert their global influence early in the 21st Century.

James Lovelock, the veteran Earth scientist and inventor of the Gaia theory, has said that the four horsemen of the apocalypse will ride again this century as climate change triggers a wave of mass migrations, pandemics and violent conflicts. I would very much like to believe he is wrong, that we can somehow act in international unison as a common federation of humanity to address overpopulation and climate change. I wish I could believe that we have the resolve to tackle the two issues that could end the civilised progress of science and culture. Unfortunately, at this moment in time, I'm not ready to change my mind on that.


BARRY SMITH
Philosopher, School of Advanced Study, University of London; Coeditor, Knowing Our Own Minds

The Experience of the Normally Functioning Mind is the Exception

For a long time I regarded neuroscience as a fascinating source of information about the workings of the visual system and its dual pathways for sight and action, the fear system in humans and animals, and the numerous puzzling pathology cases arising from site-specific lesions.

Yet, despite the interest of these findings, I had little faith that the profusion of fMRI studies of different cortical regions would tell us much about the problems that had preoccupied philosophers for centuries. After all, some of the greatest minds in history had long pondered the nature of consciousness, the self, and the relation between self and others, only to produce a greater realisation of how hard it was to say something illuminating about any of these phenomena. The more one is immersed in neural mechanisms, the less one seems to be talking about consciousness, and the more one attends to the qualities of conscious experience, the less easy it is to connect with the mechanisms of the brain. In despair, some philosophers suggested that we must reduce or eliminate the everyday way of speaking about our mental lives to arrive at a science of mind. There appeared to be a growing gulf between how things appeared to us and how reductionist neuroscience told us they were.

However, I have changed my mind about the relevance of neuroscience to philosophers' questions, and vice versa. Why? Well, firstly because the most interesting findings in cognitive neuroscience are not in the least reductionist. On the contrary, neuroscientists rely on subjects' reports of their experiences in familiar terms to target the states they wish to correlate with increased activity in the cortex. Researchers disrupt specific cortical areas with TMS to discover how subjects' experiences or cognitive capacities are altered.

This search for the neural correlates of specific states and abilities has proved far more successful than any reductionist programme; the aim being to explain precisely which neural areas are responsible for sustaining the experiences we typically have as human subjects. And what we are discovering is just how many sub-systems cooperate to maintain a unified and coherent field of conscious experience in us. When any of these systems is damaged what results are bizarre pathologies of mind we find it hard to comprehend. It is here that neuroscientists seek the help of philosophers in analysing the character of normal experience and describing the nature of the altered states. Reciprocally, what philosophers are learning from neuroscience is leading to revisions in cherished philosophical views; mostly for the better. For example, the early stages of sensory processing show considerable cross-modal influence of one sense on another: the nose smells what the eye sees, the tongue tastes what the ear hears, the recognition of voice is enhanced by, and enhances, facial recognition in the fusiform face area; all of which leads us to conclude that the five senses are not nearly as separate as common sense, and most philosophers, have always assumed.

Similar breakthroughs in understanding how our sense of self depends on the somatosensory system are leading to revised philosophical thinking about the nature of self. And while philosophers have wondered how individuals come to know about the minds of others, neuroscience assumes the problem to have been partly solved by the discovery of the mirror neuron system, which suggests an elementary, almost bodily, level of intersubjective connection between individuals from which the more sophisticated notions of self and other may develop. We don't start, like Descartes, with the self and bridge to our knowledge of other minds. We start instead with primitive social interactions from which the notions of self and other are constructed.

Neuroscientists present us with strange phenomena like patients with lesions in the right parietal region who are convinced that their left arm does not belong to them. Some still feel sensations of pain in their hand but do not believe that it is their pain that is felt: something philosophers previously believed to be conceptually impossible.

I think the startling conclusion should be just how precarious the typical experience of the normally functioning mind really is. We should not find it strange to come across people who do not believe their hand belongs to them, or who believe that it acts under someone else's command. Instead, we should think how remarkable it is that this assembly of sub-systems that keeps track of our limbs, our volitions, our position in space, and our recognition of others should cooperate to sustain the sense of self and the feeling of a coherent and unified experience of the world, so familiar to us that philosophers have believed it to be the most certain thing we know. It isn't the pathology cases of cognitive neuropsychology that are exceptional: it is the normally functioning mind that we should find the most surprising.


JESSE BERING
Director of the Institute of Cognition and Culture, Queen's University, Belfast

I Have No Destiny (and Neither Do You)

If asked years ago whether I believed in God, my answer would have gone something like this: "I believe there's something…" This response leaves enough wiggle room for a few quasi-religious notions to slip comfortably through. I no longer believe that my soul is immortal, that the universe sends me messages every now and then, or that my life story will unfold according to some inscrutable plan. But it is more like knowing how and why a perceptual illusion is deceiving my evolved senses than it is becoming immune to the illusion altogether.
Here's a snapshot of how these particular illusions work:

Psychological Immortality
There's a scene in Gide's The Counterfeiters where a suicidal man puts a pistol to his temple but hesitates for fear of the noise from the blast. Similarly, a group of college students who rejected the idea that consciousness survives death nonetheless told me that someone who'd died in a car accident would know he was dead. "There's no afterlife," one participant said. "He sees that now."
In wondering what it's like to be dead, our psychology responds by running mental simulations using previous states of consciousness. The trouble is that death is not like anything we've ever experienced — or can experience. (What's it like to be conscious yet unconscious at the same time?) I doubt you'd find anyone who believes less in the afterlife than I do, yet I have a very real fear of ghosts and I feel guilty for not visiting my mother's grave more often.

Symbolic Meaning of Natural Events
Psychologist Becky Parker and I told a seven-year-old that an invisible princess was in the room with her. The task was to find a hidden ball by placing her hand on top of the box she thought it was inside. If you change your mind, we said, just move your hand to the other box. Now, Princess Alice likes you and she's going to help you find the ball. "I don't know how she's going to tell you," said Becky, "but somehow she'll tell you if you pick the wrong box."

The child picked a box, held her hand there, and after 15 seconds the box opened to reveal the ball (there were two identical balls). On the second trial, as soon as the girl chose a box, a picture crashed to the ground, and the child moved her hand to the other box. In doing so, she responded just like most other seven-year-olds we tested. They didn't need to believe in Princess Alice to see the picture falling as a sign. In fact, if scepticism can be operationally measured by the degree of tilt in rolling eyes, many of them could be called sceptics.
More surprising was that slightly younger children, the credulous five-year-olds, didn't move their hands, and when asked why the picture fell, they said things like "I don't know why she did it, she just did it." They saw Princess Alice as running about making things happen, not as a communicative partner. To them, the events had nothing to do with their behaviour. Finally, the three-year-olds we tested simply shrugged their shoulders and said that the picture was broken. Princess Alice who?

Seeing signs in natural events is a developmental accomplishment rather than the result of a gap in scientific knowledge. To experience an illusion, the psychological infrastructure must first be in place. Whenever I hear mayors blaming hurricanes on drug use or evangelicals attributing tsunamis to homosexuality, I think of Princess Alice. Still, after receiving bad news my first impulse is to ask myself "why?"  Even for someone like me, scientific explanations just don't scratch the itch like supernatural ones.

Personal Destiny
Jean-Paul Sartre, the atheistic existentialist, observed that he couldn't help but feel as though a divine hand had guided his life. "It contradicts many of my other ideas," he said. "But it is there, floating vaguely. And when I think of myself I often think rather in this way, for want of being able to think otherwise."

My own atheism is not as organic as was Sartre's. Only scientific evidence and eternal vigilance have enabled me to step outside of this particular illusion of personal destiny. Psychologists now know that human beings intuitively reason as though natural categories exist for an intelligently designed purpose. Clouds don't just exist, say kindergartners, they're there for raining.

Erring this way about clouds is one thing, but when it colours our reasoning about our own existence, that's where this teleo-functional bias gets really interesting. The illusion of personal destiny is intricately woven together with other quasi-religious illusions in a complex web that researchers have not even begun to pull apart. My own private thoughts remain curiously saturated with doubts about whether I'm doing what I'm "meant" for.

Some beliefs are arrived at so easily, held so deeply, and divorced so painfully that it seems unnatural to give them up. Such beliefs can be abandoned when the illusions giving rise to them are punctured by scientific knowledge, but a mind designed by nature cannot be changed fundamentally. I stopped believing in God long ago, but he still casts a long shadow.


ROGER BINGHAM
Cofounder and Director, The Science Network; Neuroscience Researcher, Center for Brain and Cognition, UCSD; Coauthor, The Origin of Minds; Creator, PBS Science Programs

Changing My Religion

I was once a devout member of the Church of Evolutionary Psychology.

I believed in modules — lots of them. I believed that the mind could be thought of as a confederation of hundreds, possibly thousands, of information-processing neural adaptations. I believed that each of these mental modules had been fashioned by the relentless winnowing of natural selection as a solution to problems encountered by our hunter-gatherer ancestors in the Pleistocene. I believe I actually said that we were living in the Space Age with brains from the Stone Age. Which was clever — but not, it turned out, particularly wise.

Along with the Church Elders, I believed that this was our universal evolutionary heritage; that if you added together a whole host of these domain-specific mini-computers — a face recognition module, a spatial relations module, a rigid object mechanics module, a tool-use module, a social exchange module, a child-care module, a kin-oriented motivation module, a sexual attraction module, a grammar acquisition module and so on — then you had the neurocognitive architecture that comprises the human mind. Along with them, I believed that what made the human mind special was not fewer of these 'instincts', but more of them.

I was so enchanted by this view of life that I used it as the conceptual scaffolding upon which to build a multi-million-dollar, critically-acclaimed PBS series that I created and hosted in 1996.

And then I changed my mind.

Actually, I prefer to say that I experienced a conversion. My conversion — literally, a turning around, the adoption of new beliefs — was prompted primarily by conversations: first and foremost with an apostate from the Church of Evolutionary Psychology's inner sanctum (Peggy La Cerra), then with a group of colleagues including neuroscientists, evolutionary biologists and philosophers. Two years later, La Cerra and I published in PNAS an alternative model of the mind and followed that with a book in 2002.

Although this is not the place to detail the arguments, we suggested that the selective pressures of navigating ancestral environments — particularly the social world — would have required an adaptively flexible, on-line information-processing system and would have driven the evolution of the neocortex.  We claimed that the ultimate function of the mind is to devise behavior that wards off the depredations of entropy and keeps our energy bank balance in the black. So our universal evolutionary heritage is not a bundle of instincts, but a self-adapting system that is responsive to environmental stimuli, constantly analyzing bioenergetic costs and benefits, creating a customized database of experiences and outcomes, and generating minds that are unique by design.     

We also explained the construction of selves, how our systems adapt to different 'marketplaces', and the importance of reputation effects — a richly nuanced story, which explains why the phrase "I changed my mind" is, with all due respect, the kind of rather simplistic folk psychological language that I hope we will eventually clean up. I think it was Mallarmé who said it was the duty of the poet to purify the language of the tribe. That task now falls also to the scientist.

This model of the mind that I have now subscribed to for about a decade is the bible at the Church of Theoretical Evolutionary Neuroscience (of which I am a co-founder). It was created in alignment with both the adaptationist principles of evolutionary biologists and psychologists (who, at the time, tended to pay little attention to the actual workings of the brain at the implementation level of neurons) and the constructivist principles of neuroscientists (who tended to pay little attention to adaptationism). It would be unrealistic, however, to claim that the two perspectives have yet been satisfactorily reconciled.

And this time, I am not so devout.

Some Evolutionary Psychologists promoted their ideas with a fervor that has been described as evangelical. To a certain extent, that seems to go with the evolutionary territory: think of the ideological feuds surrounding sociobiology, the renewed debates about levels of selection and so on. Of course, it could be argued that the latest subfields of neuroscience (like neuroeconomics and social cognitive neuroscience) are not immune to these enthusiasms (the word comes from the Greek enthousiasmos: inspired or possessed by a god or gods). Think of the fMRI-mediated  neophrenological explosion of areas said to be the neural correlate of some characteristic or other; or whether the mirror neuron system can possibly carry all the conceptual freight currently being assigned to it.

Even in science, a seductive story will sometimes, at least for a while, outpace the data. Maybe that's inevitable in the pioneering phase of a fledgling discipline. But that's when caution is most necessary — when the engine of discovery is running more on faith than facts. That's the time to remember that hubris is a sin in science as well as religion.


RICHARD DAWKINS
Evolutionary Biologist; Charles Simonyi Professor for the Understanding of Science, Oxford University; Author, The God Delusion

A flip-flop should be no handicap

When a politician changes his mind, he is a 'flip-flopper.' Politicians will do almost anything to disown the virtue — as some of us might see it — of flexibility. Margaret Thatcher said, "The lady is not for turning." Tony Blair said, "I don't have a reverse gear." Leading Democratic Presidential candidates, whose original decision to vote in favour of invading Iraq had been based on information believed in good faith but now known to be false, still stand by their earlier error for fear of the dread accusation: 'flip-flopper'. How very different is the world of science. Scientists actually gain kudos through changing their minds. If a scientist cannot come up with an example where he has changed his mind during his career, he is hidebound, rigid, inflexible, dogmatic! It is not really all that paradoxical, when you think about it further, that prestige in politics and science should push in opposite directions.

I have changed my mind, as it happens, about a highly paradoxical theory of prestige, in my own field of evolutionary biology. That theory is the Handicap Principle suggested by the Israeli zoologist Amotz Zahavi. I thought it was nonsense and said so in my first book, The Selfish Gene. In the Second Edition I changed my mind, as the result of some brilliant theoretical modelling by my Oxford colleague Alan Grafen.

Zahavi originally proposed his Handicap Principle in the context of sexual advertisement by male animals to females. The long tail of a cock pheasant is a handicap. It endangers the male's own survival. Other theories of sexual selection reasoned — plausibly enough — that the long tail is favoured in spite of its being a handicap. Zahavi's maddeningly contrary suggestion was that females prefer long tailed males, not in spite of the handicap but precisely because of it. To use Zahavi's own preferred style of anthropomorphic whimsy, the male pheasant is saying to the female, "Look what a fine pheasant I must be, for I have survived in spite of lugging this incapacitating burden around behind me."

For Zahavi, the handicap has to be a genuine one, authentically costly. A fake burden — the equivalent of the padded shoulder as counterfeit of physical strength — would be rumbled by the females. In Darwinian terms, natural selection would favour females who scorn padded males and choose instead males who demonstrate genuine physical strength in a costly, and therefore, unfakeable way. For Zahavi, cost is paramount. The male has to pay a genuine cost, or females would be selected to favour a rival male who does so.

Zahavi generalized his theory from sexual selection to all spheres in which animals communicate with one another. He himself studies Arabian Babblers, little brown birds of communal habit, who often 'altruistically' feed each other. Conventional 'selfish gene' theory would seek an explanation in terms of kin selection or reciprocation. Indeed, such explanations are usually right (I haven't changed my mind about that). But Zahavi noticed that the most generous babblers are the socially dominant individuals, and he interpreted this in handicap terms. Translating, as ever, from bird to human language, he put it into the mouth of a donor bird like this: "Look how superior I am to you, I can even afford to give you food." Similarly, some individuals act as 'sentinels', sitting conspicuously in a high tree and not feeding, watching for hawks and warning the rest of the flock who are therefore able to get on with feeding. Again eschewing kin selection and other manifestations of conventional selfish genery, Zahavi's explanation followed his own paradoxical logic: "Look what a great bird I am, I can afford to risk my life sitting high in a tree watching out for hawks, saving your miserable skins for you and allowing you to feed while I don't." What the sentinel pays out in personal cost he gains in social prestige, which translates into reproductive success. Natural selection favours conspicuous and costly generosity.

You can see why I was sceptical. It is all very well to pay a high cost to gain social prestige; maybe the raised prestige does indeed translate into Darwinian fitness; but the cost itself still has to be paid, and that will wipe out the fitness gain. Don't evade the issue by saying that the cost is only partial and will only partially wipe out the fitness gain. After all, won't a rival individual come along and out-compete you in the prestige stakes by paying a greater cost? And won't the cost therefore escalate until the point where it exactly wipes out the alleged fitness gain?

Verbal arguments of this kind can take us only so far. Mathematical models are needed, and various people supplied them, notably John Maynard Smith who concluded that Zahavi's idea, though interesting, just wouldn't work. Or, to be more precise, Maynard Smith couldn't find a mathematical model that led to the conclusion that Zahavi's theory might work. He left open the possibility that somebody else might come along later with a better model. That is exactly what Alan Grafen did, and now we all have to change our minds.

I translated Grafen's mathematical model back into words, in the Second Edition of The Selfish Gene (pp 309-313), and I shall not repeat myself here. In one sentence, Grafen found an evolutionarily stable combination of male advertising strategy and female credulity strategy that turned out to be unmistakeably Zahavian. I was wrong to dismiss Zahavi, and so were a lot of other people.
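For readers who want a feel for the logic without the mathematics, here is a toy numerical sketch, emphatically not Grafen's model and with made-up numbers: when the same ornament is relatively cheaper for a genuinely high-quality male, the fitness-maximising signal grows with quality, so a costly signal can remain honest.

    # Toy sketch of honest costly signalling (not Grafen's actual model).
    def fitness(quality, signal):
        mating_benefit = signal                # females reward bigger signals
        survival_cost = signal ** 2 / quality  # the same signal costs a low-quality male more
        return mating_benefit - survival_cost

    for quality in (2.0, 0.5):                 # a high-quality and a low-quality male
        best = max((s / 5 for s in range(11)), key=lambda s: fitness(quality, s))
        print(f"quality {quality}: best signal level = {best}")
    # The high-quality male optimally produces the larger ornament, so ornament
    # size honestly indicates quality despite (indeed because of) its cost.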

Nevertheless, a word of caution. Grafen's role in this story is of the utmost importance. Zahavi advanced a wildly paradoxical and implausible idea, which — as Grafen was able to show — eventually turned out to be right. But we must not fall into the trap of thinking that, therefore, the next time somebody comes up with a wildly paradoxical and implausible idea, that one too will turn out to be right. Most implausible ideas are implausible for a good reason. Although I was wrong in my scepticism, and I have now changed my mind, I was still right to have been sceptical in the first place! We need our sceptics, and we need our Grafens to go to the trouble of proving them wrong.

 

