"WHAT HAVE YOU CHANGED YOUR MIND ABOUT?"
STEWART BRAND
Founder, Whole Earth Catalog; cofounder, The Well; cofounder, Global Business Network; Author, How Buildings Learn
Good Old Stuff Sucks
In the '90s I was praising the remarkable grassroots success of the building preservation movement. Keep the fabric and continuity of the old buildings and neighborhoods alive! Revive those sash windows.
As a landlocked youth in Illinois I mooned over the yacht sales pictures in the back of sailboat books. I knew what I wanted — a gaff-rigged ketch! Wood, of course.
The Christmas mail order catalog people know what my age group wants (I'm 69). We want to give a child wooden blocks, Monopoly or Clue, a Lionel train. We want to give ourselves a bomber jacket, a fancy leather belt, a fine cotton shirt. We study the Restoration Hardware catalog. My own Whole Earth Catalog, back when, pushed no end of retro stuff in a back-to-basics agenda.
Well, I bought a sequence of wooden sailboats. Their gaff rigs couldn't sail to windward. Their leaky wood hulls and decks were a maintenance nightmare. I learned that the fiberglass hulls we'd all sneered at were superior in every way to wood.
Remodeling an old farmhouse two years ago and replacing its sash windows, I discovered the current state of window technology. A standard Andersen window, factory-made exactly to the dimensions you want, has superb insulation qualities; superb hinges, crank, and lock; a flick-in, flick-out screen; and it looks great. The same goes for the new kinds of doors, kitchen cabinetry, and even furniture feet that are available — all drastically improved.
The message finally got through. Good old stuff sucks. Sticking with the fine old whatevers is like wearing 100% cotton in the mountains; it's just stupid.
Give me 100% not-cotton clothing, genetically modified food (from a farmers' market, preferably), this-year's laptop, cutting-edge dentistry and drugs.
The Precautionary Principle tells me I should worry about everything new because it might have hidden dangers. The handwringers should worry more about the old stuff. It's mostly crap.
(New stuff is mostly crap too, of course. But the best new stuff is invariably better than the best old stuff.)
OLIVER MORTON
News and Features Editor, Nature; Author, Mapping Mars
I have, falteringly and with various intermediary about-faces
and caveats, changed my mind about human spaceflight. I am
of the generation to have had its childhood imagination stoked
by the sight of Apollo missions on the television — I
can't put hand on heart and say I remember the Eagle landing,
but I remember the sights of the moon relayed to our homes.
I was fascinated by space and only through that, by way of
the science fiction that a fascination with space inexorably
led to, by science. And astronauts were what space was about.
I was not, as I grew older, uncritical of human spaceflight — I
remember my anger at the Challenger explosion, my sense that
if people were going to die, it should be for something grander
than just another shuttle mission. But I was still struck by
its romance, and by the way its romance touched some of the unlikeliest
people. By all logic The Economist should have been, when I worked
there, highly dubious about the aspirations of human spaceflight,
as it is today. But the then editor would hear not a word against
the undertaking, at least not against its principle. With some
relief at this I became, while the magazine's science editor, a
sort of critical apologist — critical of the human space
programme there actually was, but sensitive to the possibility
that a better space programme was possible.
I bought into, at least at some level, the argument that a joint
US-Russian programme offered advantages in terms of aerospace
employment in the former USSR. I bought into the argument that
continuity of effort was needed — that so much would be
lost if a programme was dismantled it might not be possible to
reassemble it. I bought into the crucial safety-net argument — that
it would not be possible to cancel the US programme anyway, so
strong were the interests of the military industrial complex
and so broad, if shallow, the support of the public. (Like the
Powder River, a mile wide, an inch deep and rolling uphill all
the way from Texas.) And I could see science it would offer that
was unavailable by any other means.
Now, though, I can no longer find much to respect in those arguments.
US-Russian cooperation seems to have brought little benefit. The
idea of continuous effort seems at best unproven — and
indeed perhaps worth checking. Leaving a technology fallow for
a few decades and coming back with new people, tools and mindsets
is not such a bad idea. And at least one serious presidential
candidate is talking about actually freezing the American programme,
cancelling the shuttle without in the short term developing its
successor. Whether Obama will get elected or be willing or able
to carry through the idea remains to be seen — but if politicians
are talking like this the "it will never happen so why worry" argument
becomes far more suspect.
And the crucial idea (crucial to me) that human exploration of Mars
might answer great questions about life in the universe no longer
seems as plausible or as likely to pay off in my lifetime as
once it did. I increasingly think that life in a Martian deep
biosphere, if there is any, will be related to earth life and
teach us relatively little that's new. At the same time it will
be fiendishly hard to reach without contamination. Mars continues
to fascinate me — but it has ever less need of a putative
future human presence in order to do so.
My excitement at the idea of life in the universe — excitement
undoubtedly spurred by Apollo and the works of Clarke, Heinlein
and Roddenberry that followed on from it in my education — is
now more engaged with exoplanets, to which human spaceflight
is entirely irrelevant (though post-human spaceflight may be
a different kettle of lobsters). If we want to understand the
depth of the various relations between life and planets, which
is what I want to understand, it is by studying other planets
with vibrant biospheres, as well as this one, that we will do
so. A world with a spartan $100 billion moonbase but no ability
to measure spectra and lightcurves from earthlike planets around
distant stars is not the world for me.
In general, I try to avoid arguing from my own interests. But in
this case it seems to me that all the other arguments against
human spaceflight are so strong that to be against it merely
meant realising that an atavistic part of me had failed to understand
what those interests are. I'm interested in how life works on
astronomical scales, and that interest has nothing to do, in
the short term, with human spaceflight. And I see no reason beyond
my own interests to suggest that it is something worth spending
so much money on. It does not make the world a better place in
any objective way that can be measured, or in any subjective
way that compels respect.
It is possibly also the case that seeing human spaceflight reduced
to a matter of suborbital hops for the rich, or even low earth
orbit hotels, has hardened my heart further against it. I hope
this is not a manifestation of the politics of envy, though I
fear that in part it could be.
JUDITH RICH HARRIS
Investigator and Theoretician; Author, No
Two Alike: Human Nature and Human Individuality
Anyone who has taken a course in introductory psychology has
heard the story of how the behaviorist John B. Watson produced "conditioned
fear" of a white rat — or was it a white rabbit? — in
an unfortunate infant called Little Albert, and how Albert "generalized" that
fear to other white, furry things (including, in some accounts,
his mother's coat). It was a vividly convincing story and, like
my fellow students, I saw no reason to doubt it. Nor did I see
any reason, until many years later, to read Watson's original
account of the experiment, published in 1920. What a mess! You
could find better methodology at a high school science fair.
Not surprisingly — at least it doesn't surprise me now
— Watson's experiment has not stood up well to attempts
to replicate it. But the failures to replicate are seldom mentioned
in the introductory textbooks.
The idea of generalization is a very basic one in psychology.
Psychologists of every stripe take it for granted that learned
responses — behaviors, emotions, expectations, and so on — generalize
readily and automatically to other stimuli of the same general
type. It is assumed, for example, that once the baby has learned
that his mother is dependable and his brother is aggressive,
he will expect other adults to be dependable and other children
to be aggressive.
I now believe that generalization is the exception, not the rule.
Careful research has shown that babies arrive in the world with
a bias against generalizing. This is true for learned motor skills
and it is also true for expectations about people. Babies are
born with the desire to learn about the beings who populate their
world and the ability to store information about each individual
separately. They do not expect all adults to behave like their
mother or all children to behave like their siblings. Children
who quarrel incessantly with their brothers and sisters generally
get along much better with their peers. A firstborn who is accustomed
to dominating his younger siblings at home is no more likely
than a laterborn to try to dominate his schoolmates on the playground.
A boy's relationship with his father does not form the template
for his later relationship with his boss.
I am not, of course, the only one in the world who has given up
the belief in ubiquitous generalization, but if we formed a club,
we could probably hold meetings in my kitchen. Confirmation bias — the
tendency to notice things that support one's assumptions and
to ignore or explain away anything that doesn't fit — keeps
most people faithful to what they learned in intro psych. They
observe that the child who is agreeable or timid or conscientious
at home tends, to a certain extent, to behave in a similar manner
outside the home, and they interpret this correlation as evidence
that the child learns patterns of behavior at home which she
then carries along with her to other situations.
The mistake they are making is to ignore the effects of genes. Studies
using advanced methods of data analysis have shown that the similarities
in behavior from one context to another are due chiefly to genetic
influences. Our inborn predispositions to behave in certain ways
go with us wherever we go, but learned behaviors are tailored
to the situation. The fact that genetic predispositions tend
to show up early is the reason why some psychologists also make
the mistake of attributing too much importance to early experiences.
What changed my mind about these things was the realization that if
I tossed out the assumption about generalization, some hitherto
puzzling findings about human behavior suddenly made more sense.
I was 56 years old at the time but fairly new to the field of
child development, and I had no stake in maintaining the status
quo. It is a luxury to have the freedom to change one's mind.
GEORGE CHURCH
Professor of Genetics, Harvard Medical School; Director,
Center for Computational Genetics
Evolution of Faith In Experiments
Why does my mind change based on thinking, faith, and science? One of the main functions of a mind is to change — constantly — to repair damage and add new thoughts, or to gradually replace old thoughts with new ones in a zero-sum game.
When I first heard about the century-old 4-color map conjecture as a boy, I noted how well it fit a few anecdotal scribbles and then took a leap of faith that 4 colors were always enough. A decade later, when Appel, Haken and a computer proved it, you could say that my boyish opinion was intact, but my mind was changed — by "facts" (the exhaustive computer search), by "thinking" (the mathematicians, computer and me collectively), and by "faith" (that the program had no bugs and that the basic idea of proofs is reasonable). There were false proofs before that, and shorter confirmatory proofs since then, but the best proof is still too complex for even experts to check by hand.
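Those boyhood scribbles can be reproduced with a short search. Here is a minimal sketch (my own toy example, with no relation to the Appel-Haken proof): a backtracking routine that 4-colors a small planar "wheel" map, one that genuinely needs all four colors.

```python
# Toy spot-check of the 4-color conjecture: a backtracking search that
# tries to color a small planar map so no two adjacent regions match.
# The wheel map below is a made-up example, not from the essay.

def color_map(adjacency, num_colors=4):
    """Return a region->color dict, or None if num_colors is too few."""
    regions = list(adjacency)
    coloring = {}

    def backtrack(i):
        if i == len(regions):
            return True
        region = regions[i]
        for c in range(num_colors):
            if all(coloring.get(nbr) != c for nbr in adjacency[region]):
                coloring[region] = c
                if backtrack(i + 1):
                    return True
                del coloring[region]
        return False

    return coloring if backtrack(0) else None

# A "wheel": a hub region touching five regions arranged in a ring.
# The odd ring forces three colors, and the hub forces a fourth.
wheel = {
    "hub": {"a", "b", "c", "d", "e"},
    "a": {"hub", "b", "e"},
    "b": {"hub", "a", "c"},
    "c": {"hub", "b", "d"},
    "d": {"hub", "c", "e"},
    "e": {"hub", "d", "a"},
}
```

Four colors succeed on this map while three cannot; an exhaustive version of the same idea, run over a catalog of reducible configurations, is what made the 1976 computer proof possible.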
While I rarely change my mind from one strongly held belief to its opposite, I do often change from no opinion to acceptance. Perhaps my acquiescence is too easy — rarely confirming the experiments with my own hands. Like many scientists, I form some opinions without reading the primary data (especially if outside of my field). Often, the key experiments could be done, but aren't.
A depressingly small bit of medical practice is based on randomized,
placebo-controlled, double-blind studies. In this age of vast
electronic documentation, is there a list of which medical "facts" have achieved this level and which have not? Other times experiments in the usual sense can't be done; e.g., huge fractions of astronomical and biological evolution are far away in space and time. Nevertheless, both of these fields can inspire experiments. I've done hands-on measurements of gravitation with massive lab spheres and mutation/selection evolution in lab bacteria. Such microcosmic simulacra "changed my mind" subtly and allowed me to connect to the larger-scale (non-experimental) facts.
All of this still adds up to a lot of faith and delegated thinking
among scientists. The system works because of a trusted network
with feedback from practical outcomes. Researchers who stray
away from standard protocols (especially core evidentiary and
epistemological beliefs) or question too many useful facts had
better have some utility close at hand or they will be ignored — until
someone comes along who can both challenge and deliver. In 1992
Pope John Paul II acquitted Galileo, of his indictment in 1632,
of heretical support for Copernicus's heliocentrism. In
1996 John Paul made a similarly accepting statement about Darwinian
evolution. So religion and science do overlap, and societal minds do change.
Even the most fundamentalist creationists accept a huge part
of Darwinism, i.e. micro-evolution — which
was by no means obvious in the early 19th century. Their remaining
doubt is whether largish (macro) changes in morphology or function
emerge from the sum of random steps — accepting that small
(micro) changes can do so. What happens as we see increasingly
dramatic and useful examples of experimental macro-evolution?
We've recently seen the selection of various enzyme catalysts from
billions of random RNA sequences. Increasingly biotechnology
depends on lab evolution of new, complex synthetic-biology functions
and shapes. Admittedly these experiments do involve 'design',
but as the lab evolution achieved gets more macro with less intervention,
perhaps minds will change about how much intervention is needed
in natural macro-evolution.
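The micro-versus-macro question has a classic toy demonstration: Dawkins' "weasel" program, in which small random copying errors, retained only when they help, accumulate into a large coordinated change that one-shot chance would essentially never hit. A sketch of that cumulative-selection idea (an illustration only; it models none of the lab experiments described here):

```python
# Cumulative selection toy ("weasel" program): random micro-mutations,
# filtered by selection each generation, add up to a macro outcome.
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    """Number of positions matching the target string."""
    return sum(a == b for a, b in zip(s, TARGET))

def evolve(pop_size=100, mutation_rate=0.05, seed=1):
    """Generations needed for mutation + selection to reach TARGET."""
    rng = random.Random(seed)
    parent = "".join(rng.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while parent != TARGET:
        offspring = [
            "".join(c if rng.random() > mutation_rate else rng.choice(ALPHABET)
                    for c in parent)
            for _ in range(pop_size)
        ]
        # keep the best of parent and offspring: selection of micro-steps
        parent = max(offspring + [parent], key=fitness)
        generations += 1
    return generations
```

A 28-character match has roughly 27^28 (about 10^40) equally likely one-shot outcomes, yet selection over small retained steps reaches it in a modest number of generations.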
At least as profound as getting function from randomness, is evolving clever speech from mute brutes. We've made huge progress in revealing the communication potential of chimp, gorilla and African Grey parrot. We've also found genes like FOXP2, which affects vocalization in humans and mice and a variation that separates humans from chimps — but not from Neanderthal genomes. (Yes — extinct Neanderthals are being sequenced!) As we test combinations of such DNA differences in primates, will we discover just how few genetic changes might separate us functionally from chimps? What human blind-spots will be unearthed by talking with other species?
And how fast should we change our mind? Did our 'leap' to agriculture lead to malaria? Did our leap to DDT lead to loss of birds? We'll try DDT again, this time restricted to homes and we'll try transgenic malaria-resistant mosquitoes...and that will lead to what? Arguably faith and spirituality are needed to buffer and govern our technological progress so we don't leap too fast, or look too superficially. Many micro mind-changes add up to macro mind-changes eventually. What's the rush?
TERRENCE SEJNOWSKI
Computational Neuroscientist, Salk Institute; Coauthor, The Computational Brain
I have changed my mind about cortical neurons and now think that they are far more capable than we ever imagined.
How is it that insects manage to get by on many fewer neurons than we have? A fly brain has a few hundred thousand neurons, compared to the few hundred billion in our brains, a million times more neurons. Flies are quite successful in their niche. They can see, find food, mate, and create the next generation of flies. The traditional view is that unique neurons evolved in the brain of the fly to perform specific tasks, in contrast to the mammalian strategy of creating many more neurons of the same type, working together in a collective fashion. This view was bolstered when it became possible to record from single cortical neurons, which responded to sensory stimuli with highly variable spike trains from trial to trial. Reliability could be achieved only by averaging the responses of many neurons.
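The averaging point can be illustrated numerically. A sketch (my own toy, with made-up Poisson rates rather than data from any experiment): single-neuron spike counts fluctuate widely from trial to trial, but the average over N independent neurons fluctuates roughly 1/sqrt(N) as much.

```python
# Trial-to-trial variability of Poisson-like spike counts shrinks when
# responses are averaged across a population. Rates are made-up numbers.
import math
import random
import statistics

def trial_counts(n_neurons, rate=10.0, n_trials=2000, seed=0):
    """Population-averaged spike count on each of n_trials trials."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's method: count uniform draws until the product < exp(-lam)
        limit, k, p = math.exp(-lam), 0, 1.0
        while p > limit:
            k += 1
            p *= rng.random()
        return k - 1

    return [statistics.mean(poisson(rate) for _ in range(n_neurons))
            for _ in range(n_trials)]

single = statistics.stdev(trial_counts(1))    # one noisy neuron
pooled = statistics.stdev(trial_counts(100))  # average of 100 neurons
```

With a mean count of 10, a single neuron's trial-to-trial standard deviation sits near sqrt(10), about 3.2, while the 100-neuron average comes in around a tenth of that.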
Theoretical analysis of neural signals in large networks assumed statistical randomness in the responses of neurons. These theories used the average firing rates of neurons as the primary statistical variable. Individual spikes and the times when they occurred were not relevant in these theories. In contrast, the timing of single spikes in flies has been shown to carry specific information about sensory stimuli important for guiding the behavior of flies, and in mammals the timing of spikes in the peripheral auditory system carried information about the spatial locations of sound sources. However, cortical neurons did not seem to care about the timing of spikes.
Two important experimental results pointed me in this direction. First, if you repeatedly inject the same fluctuating current into a neuron in a cortical slice, to mimic the inputs that occur in an intact piece of tissue, the spike times are highly reproducible from trial to trial. This shows that cortical neurons are capable of initiating spikes with millisecond precision. Second, if you arrange for a single synapse to be stimulated a few milliseconds just before or just after a spike in the neuron, the synaptic strength will increase or decrease, respectively. This tells us that the machinery in the cortex is every bit as capable as a fly brain, but what is it being used for?
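The before/after timing rule in the second result is what neuroscientists call spike-timing-dependent plasticity (STDP). A minimal sketch of the standard exponential-window form (the amplitudes and the 20 ms time constant are common textbook choices, not values from the essay):

```python
# Spike-timing-dependent plasticity: inputs arriving just before the
# neuron's spike are strengthened, just after are weakened, with an
# effect that decays exponentially as the interval grows.
import math

def stdp_dw(dt_ms, a_plus=0.010, a_minus=0.012, tau_ms=20.0):
    """Weight change for the interval dt_ms = t_post - t_pre (ms)."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)    # pre before post: potentiate
    if dt_ms < 0:
        return -a_minus * math.exp(dt_ms / tau_ms)   # pre after post: depress
    return 0.0
```

Here stdp_dw(+5) is positive and stdp_dw(-5) negative, and both shrink toward zero as the interval stretches to tens of milliseconds, matching the "few milliseconds" window described above.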
The cerebral cortex is constantly being bombarded by sensory inputs and has to sort through the myriad of signals for those that are the most important and to respond selectively to them. The cortex also needs to organize the signals being generated internally, in the absence of sensory inputs. The hypothesis that I have been pursuing over the last decade is that spike timing in cortical neurons is used internally as a way of controlling the flow of communication between neurons. This is different from the traditional view that spike times code sensory information, as occurs in the periphery. Rather, spike timing and the synchronous firing of large numbers of cortical neurons may be used to enhance the salience of sensory inputs, as occurs during focal attention, and to decide what information is worth saving for future use. According to this view, the firing rates of neurons are used as an internal representation of the world but the timing of spikes is used to regulate the communication of signals between cortical areas.
The way that neuroscientists perform experiments is biased by their theoretical views. If cortical neurons use rate coding you only need to record, and report, their average firing rates. But to find out if spike timing is important new experiments need to be designed and new types of analysis need to be performed on the data. Neuroscientists have begun to pursue these new experiments and we should know before too long where they will lead us.
JONATHAN HAIDT
Psychologist, University of Virginia; Author, The Happiness Hypothesis
Sports and fraternities are not so bad
I was born without the neural cluster that makes boys find pleasure in moving balls and pucks around through space, and in talking endlessly about men who get paid to do such things. I always knew I could never join a fraternity or the military because I wouldn't be able to fake the sports talk. By the time I became a professor I had developed the contempt that I think is widespread in academe for any institution that brings young men together to do groupish things. Primitive tribalism, I thought. Initiation rites, alcohol, sports, sexism, and baseball caps turn decent boys into knuckleheads. I'd have gladly voted to ban fraternities, ROTC, and most sports teams from my university.
But not anymore. Three books convinced me that I had misunderstood such institutions because I had too individualistic a view of human nature. The first book was David Sloan Wilson's Darwin's Cathedral, which argued that human beings were shaped by natural selection operating simultaneously at multiple levels, including the group level. Humans went through a major transition in evolution when we developed religiously inclined minds and religious institutions that activated those minds, binding people into groups capable of extraordinary cooperation without kinship.
The second book was William McNeill's Keeping Together in Time, about the historical prevalence and cultural importance of synchronized dance, marching, and other forms of movement. McNeill argued that such "muscular bonding" was an evolutionary innovation, an "indefinitely expansible basis for social cohesion among any and every group that keeps together in time." The third book was Barbara Ehrenreich's Dancing in the Streets, which made the same argument as McNeill but with much more attention to recent history, and to the concept of communitas or group love. Most traditional societies had group dance rituals that functioned to soften structure and hierarchy and to increase trust, love, and cohesion. Westerners too have a need for communitas, Ehrenreich argues, but our society makes it hard to satisfy it, and our social scientists have little to say about it.
These three books gave me a new outlook on human nature. I began to see us not just as chimpanzees with symbolic lives but also as bees without hives. When we made the transition over the last 200 years from tight communities (Gemeinschaft) to free and mobile societies (Gesellschaft), we escaped from bonds that were sometimes oppressive, yes, but into a world so free that it left many of us gasping for connection, purpose, and meaning. I began to think about the many ways that people, particularly young people, have found to combat this isolation. Rave parties and the Burning Man festival are spectacular examples of new ways to satisfy the ancient longing for communitas. But suddenly sports teams, fraternities, and even the military made a lot more sense.
I now believe that such groups do great things for their members, and that they often create social capital and other benefits that spread beyond their borders. The strong school spirit and alumni loyalty we all benefit from at the University of Virginia would drop sharply if fraternities and major sports were eliminated. If my son grows up to be a sports-playing fraternity brother, a part of me may still be disappointed. But I'll give him my blessing, along with three great books to read.
PATRICK BATESON
Professor of Ethology, Cambridge University; Author, Design for a Life
Changing my Mind
Near the end of his life Charles Darwin invited for lunch at Down House Dr Ludwig Büchner, President of the Congress of the International Federation of Freethinkers, and Edward Aveling, a self-proclaimed and active atheist. The invitation was at their request. Emma Darwin, devout as ever, was appalled by the thought of entertaining such guests and at table insulated herself from the atheists with an old family friend, the Rev. Brodie Innes, on her right and with her grandson and his friends on her left. After lunch Darwin and his son Frank smoked cigarettes with the two visitors in Darwin's old study. Darwin asked them with surprising directness: "Why do you call yourselves atheists?" He said that he preferred the word agnostic. While Darwin agreed that Christianity was not supported by evidence, he felt that atheist was too aggressive a term to describe his own position.
For many years what had been good enough for Darwin was good enough for me. I too described myself as an agnostic. I had been brought up in a Christian culture and some of the most rational humanists I knew were believers. I loved the music and art that had been inspired by a belief in God and saw no hypocrisy in participating in the great carol services held in the Chapel of King's College Cambridge. I did not accept the views of some of my scientific colleagues that the march of science had disposed of religion. The wish that I and many biologists had to understand biological evolution was not the same as the wish of those with deep religious conviction to understand the meaning of life.
I had, however, led a sheltered life and had never met anybody who was aggressively religious. I hated, of course, what I had read about the ugly fanaticism of all forms of religious fundamentalism or what I had seen of it on television. However, such wickedness did not seem to be simply correlated with religious belief since many non-believers were just as totalitarian in their behaviour as the believers. My unwillingness to be involved in religious debates was shaken at a grand dinner party. The woman sitting next to me asked me what I did and I told her that I am a biologist. "Oh well," she said, "then we have plenty to talk about, because I believe that every word of the Bible is literally true." My heart sank.
As things turned out, we didn't have a great deal to talk about because she wasn't going to be persuaded by any argument that I could throw at her. She did not seem to wonder about the inconsistencies between the gospels of the New Testament or those between the first and second chapters of Genesis. Nor was she concerned about where Cain's wife came from. The Victorians were delicate about such matters and were not going to entertain the thought that Cain married an unnamed sister or, horrors, that his own mother bore his children, his grandchildren and so on down the line of descendants until other women became available. Nevertheless, the devout Victorians were obviously troubled by the question and they speculated on the existence of pre-Adamite people, angels probably, who would have furnished Cain with his wife.
My creationist dinner companion was not worried by such trivialities and dismissed my lack of politesse as the problem of a scientist being too literal. However, being too literal was not my problem, it was hers and those of her fellow creationists. She was hoist on her own petard. In any event, it was quite simply stupid to try to take on science on its own terms by appealing to the intelligence implicit in natural design. Science provides orderly methods for examining the natural world. One of those methods is to develop theories that integrate as much as possible of what we know about the phenomena encompassed by the theory. The theories provide frameworks for testing the characteristics of the world — and though some theorists may not wish to believe it, their theories are eminently disposable. Facts are widely shared opinions and, every so often, the consensus breaks — and minds change. Nevertheless it is crying for the moon to hope that the enormous bodies of thought that have been built up about cosmology, geology and biological evolution are all due to fall apart. No serious theologian would rest his or her beliefs on such a hope. If faith rests on the supposed implausibility of a current scientific explanation, it is vulnerable to the appearance of a plausible one. To build on such sand is a crass mistake.
Not long after that dreadful dinner, Richard Dawkins wrote to me to ask whether I would publicly affirm my atheism. I could see no reason why not. One of the clear definitions of an atheist is a lack of a belief in a God. That certainly described my position, even though I am disinclined to attack the beliefs of the sincere and thoughtful people with strong religious beliefs whom I continue to meet. I completed the questionnaire that Richard had sent to me. I had changed my mind. A dear friend, Peter Lipton, who died suddenly in November 2007, had been assiduous in maintaining Jewish customs in his own home and in his public defence of Israel. After he died I was surprised to discover that he described himself as a religious atheist. I should not have been surprised.
ALAN ALDA
Actor, writer, director, and host of the PBS program "Scientific American Frontiers."
So far, I've changed my mind twice about God.
Until I was twenty I was sure there was a being who could see everything I did and who didn't like most of it. He seemed to care about minute aspects of my life, like on what day of the week I ate a piece of meat. And yet, he let earthquakes and mudslides take out whole communities, apparently ignoring the saints among them who ate their meat on the assigned days. Eventually, I realized that I didn't believe there was such a being. It didn't seem reasonable. And I assumed that I was an atheist.
As I understood the word, it meant that I was someone who didn't believe in a God; I was without a God. I didn't broadcast this in public because I noticed that people who do believe in a god get upset to hear that others don't. (Why this is so is one of the most pressing of human questions, and I wish a few of the bright people in this conversation would try to answer it through research.)
But, slowly I realized that in the popular mind the word atheist was
coming to mean something more: a statement that there couldn't be a God. God was, in this formulation, not possible, and this was something that could be proved. But I had been changed by eleven years of interviewing six or seven hundred scientists around the world on the television program Scientific American Frontiers. And that change was reflected in how I would now identify myself.
The most striking thing about the scientists I met was their complete
dedication to evidence. It reminded me of the wonderfully plainspoken
words of Richard Feynman who felt it was better not to know than
to know something that was wrong. The problem for me was that just
as I couldn't find any evidence that there was a god, I couldn't
find any that there wasn't a god. I would have to call myself an agnostic. At first, this seemed a little wimpy, but after a while I began to hope it might be an example of Feynman's heroic willingness to accept, even glory in, uncertainty.
I still don't like the word agnostic. It's too fancy. I'm simply not a believer. But, as simple as this notion is, it confuses some people. Someone wrote a Wikipedia entry about me, identifying me as an atheist because I'd said in a book I wrote that I wasn't a believer. I guess in a world uncomfortable with uncertainty, an unbeliever must be an atheist, and possibly an infidel. This gets us back to that most pressing of human questions: why do people worry so much about other people's holding beliefs other than their own? This is the question that makes the subject over which I changed my mind something of global importance, and not just a personal, semantic dalliance.
Do our beliefs identify us the way our language, foods and customs do? Is this why people who think the universe chugs along on its own are as repellent to some as people who eat live monkey brains are to others? Are we saying, you threaten my identity with your infidelity to my beliefs? You're trying to kill me with your thoughts, so I'll get you first with this stone? And, if so, is this really something that can be resolved through reasonable discourse?
Maybe this is an even more difficult problem; one that's written in the letters that spell out our DNA. Why is the belief in God and Gods so ubiquitous? Does belief in a higher power confer some slight health benefit, and has natural selection favored those who are genetically inclined to believe in such a power — and is that why so many of us are inclined to believe? (Whether or not a God actually exists, the tendency to believe we'll be saved might give us the strength to escape sickness and disaster and live the extra few minutes it takes to replicate ourselves.)
These are wild speculations, of course, and they're probably based on a desperate belief I once had that we could one day understand ourselves.
But, I might have changed my mind on that one, too.
Psychologist, Harvard University; Author, The Stuff of Thought
Have Humans Stopped Evolving?
Ten years ago, I wrote:
For ninety-nine percent of human existence, people lived as foragers in small nomadic bands. Our brains are adapted to that long-vanished way of life, not to brand-new agricultural and industrial civilizations. They are not wired to cope with anonymous crowds, schooling, written language, government, police, courts, armies, modern medicine, formal social institutions, high technology, and other newcomers to the human experience.
Are we still evolving? Biologically, probably not much. Evolution has no momentum, so we will not turn into the creepy bloat-heads of science fiction. The modern human condition is not conducive to real evolution either. We infest the whole habitable and not-so-habitable earth, migrate at will, and zigzag from lifestyle to lifestyle. This makes us a nebulous, moving target for natural selection. If the species is evolving at all, it is happening too slowly and unpredictably for us to know the direction. (How the Mind Works)
Though I stand by a lot of those statements, I've had to question the overall assumption that human evolution pretty much stopped by the time of the agricultural revolution. When I wrote these passages, completion of the Human Genome Project was several years away, and so was the use of statistical techniques that test for signs of selection in the genome. Some of these searches for "Darwin's Fingerprint," as the technique has been called, have confirmed predictions I had made. For example, the modern version of a gene associated with language and speech has been under selection for several hundred thousand years, and has even been extracted from a Neanderthal bone, consistent with my hypothesis (with Paul Bloom) that language is a product of gradual natural selection. But the assumption of no recent human evolution has not fared as well.
New results from the labs of Jonathan Pritchard, Robert Moyzis, Pardis Sabeti, and others have suggested that thousands of genes, perhaps as much as ten percent of the human genome, have been under strong recent selection, and the selection may even have accelerated during the past several thousand years. The numbers are comparable to those for maize, which has been artificially selected beyond recognition during the past few millennia.
If these results hold up, and apply to psychologically relevant brain function (as opposed to disease resistance, skin color, and digestion, which we already know have evolved in recent millennia), then the field of evolutionary psychology might have to reconsider the simplifying assumption that biological evolution was pretty much over and done with 10,000 to 50,000 years ago.
And if so, the result could be evolutionary psychology on steroids. Humans might have evolutionary adaptations not just to the conditions that prevailed for hundreds of thousands of years, but also to some of the conditions that have prevailed only for millennia or even centuries. Currently, evolutionary psychology assumes that any adaptations to post-agricultural ways of life are 100% cultural.
Though I suspect some revisions will be called for, I doubt they will be radical, for two reasons. One is that many aspects of the human (and ape) environments have been constant for a much longer time than the period in which selection has recently been claimed to operate. Examples include dangerous animals and insects, toxins and pathogens in spoiled food and other animal products, dependent children, sexual dimorphism, risks of cuckoldry and desertion, parent-offspring conflict, the risk of cheaters in cooperation, fitness variation among potential mates, the causal laws governing solid bodies, the presence of conspecifics with minds, and many others. Recent adaptations would have to be icing on this cake: quantitative variations within complex emotional and cognitive systems.
The other is the empirical fact that human races and ethnic groups are psychologically highly similar, if not identical. People everywhere use language, get jealous, are selective in choosing mates, find their children cute, are afraid of heights and the dark, experience anger and disgust, learn names for local species, and so on. If you adopt children from a technologically undeveloped part of the world, they will fit into modern society just fine. To the extent that this is true, there can't have been a whole lot of uneven psychological evolution postdating the split among the races 50,000 to 100,000 years ago (though there could have been parallel evolution in all the branches).
Physicist, Arizona State University; Author, The
I used to be a committed Platonist.
For most of my career, I believed that the bedrock of physical reality lay with the laws of physics — magnificent, immutable, transcendent, universal, infinitely precise mathematical relationships that rule the universe with as sure a hand as that of any god. And I had orthodoxy on my side, for most of my physicist colleagues also believe that these perfect laws are the levitating superturtle that holds up the mighty edifice we call nature, as disclosed through science. About three years ago, however, it dawned on me that such laws are an extraordinary and unjustified idealization.
How can we be sure that the laws are infinitely precise? How do we know they are immutable, and apply without the slightest change from the beginning to the end of time? Furthermore, the laws themselves remain unexplained. Where do they come from? Why do they have the form that they do? Indeed, why do they exist at all? And if there are many possible such laws, then, as Stephen Hawking has expressed it, what is it that "breathes fire" into a particular set of laws and makes a universe for them to govern?
I did a U-turn and embraced the notion of laws as emergent with the universe rather than stamped on it from without, like a maker's mark. The "inherent" laws I now espouse are not absolute and perfect but intrinsically fuzzy and flexible, although for almost all practical purposes we don't notice the tiny flaws.
Why did I change my mind? I am not content merely to accept the laws of physics as a brute fact. Rather, I want to explain the laws, or at least explain the form they have, as part of the scientific enterprise. One of the oddities about the laws is the well-known fact that they are weirdly well suited to the emergence of life in the universe. Had they been slightly different, chances are there would be no sentient beings around to discover them.
The fashionable explanation for this — that there is a multiplicity of laws in a multiplicity of parallel universes, with each set of laws fixed and perfect within its host universe — is a nice try, but still leaves a lot unexplained. And simply saying that the laws "just are" seems no better than declaring "God made them that way."
The orthodox view of perfect physical laws is a thinly veiled vestige of monotheism, the reigning world view that prevailed at the birth of modern science. If we want to explain the laws, however, we have to abandon the theological legacy that the laws are fixed and absolute, and replace them with the notion that the states of the world and the laws that link them form a dynamic interdependent