Howard Gardner
Hobbs Professor of Cognition and Education, Harvard Graduate School of Education; Author, A Synthesizing Mind

Like many other college students, I turned to the study of psychology for personal reasons. I wanted to understand myself better. And so I read the works of Freud; and I was privileged to have as my undergraduate tutor the psychoanalyst Erik Erikson, himself a sometime pupil of Freud. But once I learned about new trends in psychology, through contacts with another mentor, Jerome Bruner, I turned my attention to the operation of the mind in a cognitive sense — and I've remained at that post ever since.

The giant at the time — the middle 1960s — was Jean Piaget. Though I met and interviewed him a few times, Piaget really functioned for me as a paragon. In the term of Dean Keith Simonton, a paragon is someone whom one does not know personally but who serves as a virtual teacher and point of reference. I thought that Piaget had identified the most important question in cognitive psychology — how does the mind develop; developed brilliant methods of observation and experimentation; and put forth a convincing picture of development — a set of general cognitive operations that unfold in the course of essentially lockstep, universally occurring stages. I wrote my first books about Piaget; saw myself as carrying on the Piagetian tradition in my own studies of artistic and symbolic development (two areas that he had not focused on); and even defended Piaget vigorously in print against those who would critique his approach and claims.

Yet, now forty years later, I have come to realize that the bulk of my scholarly career has been a critique of the principal claims that Piaget put forth. As to the specifics of how I changed my mind:

Piaget believed in general stages of development that cut across contents (space, time, number); I now believe that each area of content has its own rules and operations, and I am dubious about the existence of general stages and structures.

Piaget believed that intelligence was a single general capacity that developed pretty much in the same way across individuals; I now believe that humans possess a number of relatively independent intelligences, and that these can function and interact in idiosyncratic ways.

Piaget was not interested in individual differences; he studied the 'epistemic subject.' Most of my work has focused on individual differences, with particular attention to those with special talents or deficits, and unusual profiles of abilities and disabilities.

Piaget assumed that the newborn had a few basic biological capacities — like sucking and looking — and two major processes of acquiring knowledge, which he called assimilation and accommodation. Nowadays, with many others, I assume that human beings possess considerable innate or easily elicited cognitive capacities, and that Piaget greatly underestimated the power of this inborn cognitive architecture.

Piaget downplayed the importance of historical and cultural factors — cognitive development consisted of the growing child experimenting largely on his own with the physical (and, minimally, the social) world. I see development as permeated from the first by contingent forces pervading the time and place of origin.

Finally, Piaget saw language and other symbol systems (graphic, musical, bodily, etc.) as manifestations, almost epiphenomena, of a single cognitive motor; I see each of these systems as having its own origins and being heavily colored by the particular uses to which a system is put in one's own culture and one's own time.

Why I changed my mind is an issue principally of biography: some of the change has to do with my own choices (I worked for 20 years with brain-damaged patients); and some with the Zeitgeist (I was strongly influenced by the ideas of Noam Chomsky and Jerry Fodor, on the one hand, and by empirical discoveries in psychology and biology on the other).

Still, I consider Piaget to be the giant of the field. He raised the right questions; he developed exquisite methods; and his observations of phenomena have turned out to be robust. It's a tribute to Piaget that we continue to ponder these questions, even as many of us are now far more critical than we once were. Any serious scientist or scholar will change his or her mind; put differently, we will come to agree with those with whom we used to disagree, and vice versa. We differ in whether we are open or secretive about such "changes of mind," and in whether we choose to attack, ignore, or continue to celebrate those with whose views we are no longer in agreement.

Donald D. Hoffman
Cognitive Scientist, UC, Irvine; Author, The Case Against Reality

I have changed my mind about the nature of perception. I thought that a goal of perception is to estimate properties of an objective physical world, and that perception is useful precisely to the extent that its estimates are veridical. After all, incorrect perceptions beget incorrect actions, and incorrect actions beget fewer offspring than correct actions. Hence, on evolutionary grounds, veridical perceptions should proliferate.

Although the image at the eye, for instance, contains insufficient information by itself to recover the true state of the world, natural selection has built into the visual system the correct prior assumptions about the world, and about how it projects onto our retinas, so that our visual estimates are, in general, veridical. And we can verify that this is the case by deducing those prior assumptions from psychological experiments and comparing them with the world. Vision scientists are now succeeding in this enterprise. But we need not wait for their final report to conclude with confidence that perception is veridical. All we need is the obvious rhetorical question: Of what possible use is non-veridical perception?
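The logic of "correct prior assumptions" is Bayesian, and a toy estimator makes it concrete. The sketch below is my own illustration, not a model from the vision-science work this passage summarizes: a noisy measurement combined with an innate Gaussian prior over world states yields estimates that are, on average, closer to the truth than the raw measurement alone.

```python
import random

def posterior_mean(measurement, meas_var, prior_mean, prior_var):
    """Posterior-mean estimate for a Gaussian prior and Gaussian sensor noise:
    a precision-weighted average of the measurement and the prior."""
    w_meas = 1.0 / meas_var
    w_prior = 1.0 / prior_var
    return (w_meas * measurement + w_prior * prior_mean) / (w_meas + w_prior)

random.seed(0)
PRIOR_MEAN, PRIOR_VAR = 0.0, 1.0   # the world's actual statistics
MEAS_VAR = 4.0                     # a noisy retina

raw_err = bayes_err = 0.0
N = 20000
for _ in range(N):
    truth = random.gauss(PRIOR_MEAN, PRIOR_VAR ** 0.5)  # true state of the world
    m = truth + random.gauss(0.0, MEAS_VAR ** 0.5)      # what the sensor reports
    est = posterior_mean(m, MEAS_VAR, PRIOR_MEAN, PRIOR_VAR)
    raw_err += (m - truth) ** 2
    bayes_err += (est - truth) ** 2

# The prior-equipped estimate has a lower mean squared error than the raw reading.
print(raw_err / N, bayes_err / N)
```

The prior helps only because it matches the world's statistics; that match is exactly what the "veridical" view assumes natural selection supplies.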

I now think that perception is useful because it is not veridical. The argument that evolution favors veridical perceptions is wrong, both theoretically and empirically. It is wrong in theory, because natural selection hinges on reproductive fitness, not on truth, and the two are not the same: Reproductive fitness in a particular niche might, for instance, be enhanced by reducing expenditures of time and energy in perception; true perceptions, in consequence, might be less fit than niche-specific shortcuts. It is wrong empirically: mimicry, camouflage, mating errors and supernormal stimuli are ubiquitous in nature, and all are predicated on non-veridical perceptions. The cockroach, we suspect, sees little of the truth, but is quite fit, though easily fooled, with its niche-specific perceptual hacks. Moreover, computational simulations based on evolutionary game theory, in which virtual animals that perceive the truth compete with others that sacrifice truth for speed and energy-efficiency, find that true perception generally goes extinct.
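The simulations referred to above can be caricatured in a few lines of code. What follows is my own toy version, not the published model: a "Truth" strategy perceives resource quantity exactly, assumes more is better, and pays a small cost for its detailed perception, while an "Interface" strategy sees only fitness-tuned signals. Under replicator dynamics the truth-seers die out.

```python
import math
import random

def fitness(q):
    # Fitness is non-monotonic in resource quantity: too little or too much
    # is bad, a moderate amount is best.
    return math.exp(-((q - 0.5) ** 2) / (2 * 0.1 ** 2))

def avg_payoffs(n_trials, truth_cost):
    """Average payoff per strategy when choosing between two random patches."""
    t_pay = i_pay = 0.0
    for _ in range(n_trials):
        q1, q2 = random.random(), random.random()
        # Truth: sees quantities exactly, prefers more, pays for the detail.
        t_pay += fitness(max(q1, q2)) - truth_cost
        # Interface: sees only fitness-tuned signals, picks the fitter patch.
        i_pay += max(fitness(q1), fitness(q2))
    return t_pay / n_trials, i_pay / n_trials

random.seed(1)
truth_share = 0.5
for _ in range(60):  # replicator dynamics: a share grows with its average payoff
    wt, wi = avg_payoffs(2000, truth_cost=0.05)
    wt = max(wt, 0.0)
    total = truth_share * wt + (1 - truth_share) * wi
    truth_share = truth_share * wt / total

print(round(truth_share, 4))  # effectively zero: "truth" has gone extinct
```

The point survives the caricature: when fitness is not monotonic in the true variable, an observer tuned to fitness beats an observer tuned to truth.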

It used to be hard to imagine how perceptions could possibly be useful if they were not true. Now, thanks to technology, we have a metaphor that makes it clear — the windows interface of the personal computer. This interface sports colorful geometric icons on a two-dimensional screen. The colors, shapes and positions of the icons on the screen are not true depictions of what they represent inside the computer. And that is why the interface is useful. It hides the complexity of the diodes, resistors, voltages and magnetic fields inside the computer. It allows us to effectively interact with the truth because it hides the truth.

It has not been easy for me to change my mind about the nature of perception. The culprit, I think, is natural selection. I have been shaped by it to take my perceptions seriously. After all, those of our predecessors who did not, for instance, take their tiger or viper or cliff perceptions seriously had less chance of becoming our ancestors. It is apparently a small step, though not a logical one, from taking perception seriously to taking it literally. 

Unfortunately our ancestors faced no selective pressures that would prevent them from conflating the serious with the literal: One who takes the cliff both seriously and literally avoids harm just as much as one who takes the cliff seriously but not literally. Hence our collective history of believing in flat earth, geocentric cosmology, and veridical perception. I should very much like to join Samuel Johnson in rejecting the claim that perception is not veridical, by kicking a stone and exclaiming "I refute it thus." But even as my foot ached from the ill-advised kick, I would still harbor the skeptical thought, "Yes, you should have taken that rock more seriously, but should you take it literally?"

Michael Shermer
Publisher, Skeptic magazine; Monthly Columnist, Scientific American; Presidential Fellow, Chapman University; Author, Heavens on Earth

When I was a graduate student in experimental psychology I cut my teeth in a Skinnerian behavioral laboratory. As a behaviorist I believed that human nature was largely a blank slate on which we could impose positive and negative reinforcements (and punishments if necessary) to shape people and society into almost anything we want. As a young college professor I taught psychology from this perspective and even created a new course on the history and psychology of war, in which I argued that people are by nature peaceful and nonviolent, and that wars were thus a byproduct of corrupt governments and misguided societies.

The data from evolutionary psychology have now convinced me that we evolved a dual set of moral sentiments: within groups we tend to be pro-social and cooperative, but between groups we are tribal and xenophobic. Archaeological evidence indicates that Paleolithic humans were anything but noble savages, and that civilization has gradually but ineluctably reduced the amount of within-group aggression and between-group violence. And behavior genetics has erased the tabula rasa and replaced it with a highly constrained biological template upon which the environment can act.

I have thus changed my mind about this theory of human nature in its extreme form. Human nature is more evolutionarily determined, more cognitively irrational, and more morally complex than I thought.

James J. O'Donnell
Classics Scholar, University Librarian, ASU; Author, Pagans

Sometimes the later Roman empire seems very long ago and far away, but at other times, when we explore Edward Gibbon's famous claim to have described the triumph of "barbarism and religion", it can seem as fresh as next week. And we always know that we're supposed to root for the Romans. When I began my career as a historian thirty years ago, I was all in favor of those who were fighting to preserve the old order. "I'd rather be Belisarius than Stilicho," I said to my classes often enough that they heard it as a mantra of my attitude — preferring the empire-restoring Roman general of the sixth century to the barbarian general who served Rome and sought compromise and adjustment with neighbors in the fourth.

But a career as a historian means growth, development, and change. I did what the historian — as much a scientist as any biochemist, as the German use of the single word Wissenschaft for what both practice suggests — should do: I studied the primary evidence, and I listened to and participated in the debates of the scholars. I had moments when a new book blew me away, and others when I read the incisive critique of the book that had blown me away and thought through the issues again. I've been back and forth over a range of about four centuries of late Roman history many times now, looking at events, people, ideas, and evidence in different lights and moods.

What I have found is that the closer historical examination comes to the lived moment of the past, the harder it is to take sides with anybody. And it is a real fact that the ancient past (I'm talking now about the period from 300 to 700 CE) draws closer and closer to us all the time. There is a surprisingly large body of material that survives and really only a handful of hardy scholars sorting through it. Much remains to be done: The sophist Libanius of Antioch in the late fourth century, partisan for the renegade 'pagan' emperor Julian, left behind a ton of personal letters and essays that few have read and of which only a handful have been translated, and so only a few scholars have really worked through his career and thought — but I'd love to read, and even more dearly love to write, a good book about him someday. In addition to the books, there is a growing body of archaeological evidence as diggers fan out across the Mediterranean, Near East, and Europe, and we are beginning to see new kinds of quantitative evidence as well — climate change measured from tree-ring dating, even genetic analysis that suggests that my O'Donnell ancestors came from one of the most seriously inbred populations (Ireland) on the planet — and right now the argument is going on about the genetic evidence for the size of the Anglo-Saxon migrations to Britain. We know more than we ever did, and we are learning more all the time, and with each decade, we get closer and closer to even the remote past.

When you do that, you find that the past is more a tissue of choices and chances than we had imagined, that fifty or a hundred years of bad times can happen — and can end and be replaced by the united work of people with heads and hearts that make society peaceful and prosperous again; or the opportunity can be kicked away.

And we should remember that when we root for the Romans, there are contradictory impulses at work. Rome brought the ancient world a secure environment (Pompey cleaning up the pirates in the Mediterranean was a real service), a standard currency, and a huge free trade zone. Its taxes were heavy, but the wealth it taxed was so immense that it could support a huge bureaucracy for a long time without damaging local prosperity. Fine: but it was an empire by conquest, ruled as a military dictatorship, fundamentally dependent on a slave economy, and with no clue whatever about the realities of economic development and management. A prosperous emperor was one who managed by conquest or taxation to bring a flood of wealth into the capital city and squander it as ostentatiously as possible. Rome "fell", if that's the right word for it, partly because it ran out of ideas for new peoples to plunder, and fell into a funk of outrage at the thought that some of the neighboring peoples preferred to move inside the empire's borders, settle down, buy fixer-upper houses, send their kids to the local schools, and generally enjoy the benefits of civilization. (The real barbarians stayed outside.) Much of the worst damage to Rome was done by Roman emperors and armies thrashing about, thinking they were preserving what they were in fact destroying.

So now I have a new mantra for my students:  "two hundred years is a long time."  When we talk about Shakespeare's time or the Crusades or the Roman Empire or the ancient Israelites, it's all too easy to talk about centuries as objects, a habit we bring even closer to our own time, but real human beings live in the short window of a generation, and with ancient lifespans shorter than our own, that window was brief.  We need to understand and respect just how much possibility was there and how much accomplishment was achieved if we are to understand as well the opportunities that were squandered.  Learning to do that, learning to sift the finest grains of evidence with care, learning to learn from and debate with others — that's how history gets done.  

The excitement begins when you discover that the past is constantly changing.

Colin Tudge
Biologist; Author, Six Steps Back to the Land

I have changed my mind about the omniscience and omnipotence of science. I now realize that science is strictly limited, and that it is extremely dangerous not to appreciate this.

Science proceeds in general by being reductionist. This term is used in different ways in different contexts but here I take it to mean that scientists begin by observing a world that seems infinitely complex and inchoate, and in order to make sense of it they first "reduce" it to a series of bite-sized problems, each of which can then be made the subject of testable hypotheses which, as far as possible, take mathematical form.

Fair enough. The approach is obviously powerful, and it is hard to see how solid progress of a factual kind could be made in any other way. It produces answers of the kind known as "robust". "Robust" does not of course mean "unequivocally true" and still less does it meet the lawyers' criteria — "the whole truth, and nothing but the truth". But robustness is pretty good; certainly good enough to be going on with.

The limitation is obvious, however. Scientists produce robust answers only because they take great care to tailor the questions. As Sir Peter Medawar said, "Science is the art of the soluble" (within the time and with the tools available). 

Clearly it is a huge mistake to assume that what is soluble is all there is — but some scientists make this mistake routinely.

Or to put the matter another way: they tend conveniently to forget that they arrived at their "robust" conclusions by ignoring as a matter of strategy all the complexities of a kind that seemed inconvenient. But all too often, scientists then are apt to extrapolate from the conclusions they have drawn from their strategically simplified view of the world, to the whole, real world.

Two examples of a quite different kind will suffice: 

1: In the 19th century the study of animal psychology was a mess. On the one hand we had some studies of nerve function by a few physiologists, and on the other we had reams of wondrous but intractable natural history which George Romanes in particular tried to put into some kind of order. But there was nothing much in between. The behaviourists of the 20th century did much to sort out the mess by focusing on the one manifestation of animal psychology that is directly observable and measurable — their behaviour.

Fair enough. But when I was at university in the early 1960s behaviourism ruled everything. Concepts such as "mind" and "consciousness" were banished. B F Skinner even tried to explain the human acquisition of language in terms of his "operant conditioning".

Since then the behaviourist agenda has largely been put in its place. Its methods are still useful (still helping to provide "robust" results) but discussions now are far broader. "Consciousness", "feeling", even "mind" are back on the agenda.

Of course you can argue that in this instance science proved itself to be self-correcting — although this historically is not quite true. Noam Chomsky, not generally recognized as a scientist, did much to dent behaviourist confidence through his own analysis of language.

But for decades the confident assertions of the behaviourists ruled and, I reckon, they were in many ways immensely damaging. In particular they reinforced the Cartesian notion that animals are mere machines, and can be treated as such. Animals such as chimpanzees were routinely regarded simply as useful physiological "models" of human beings who could be more readily abused than humans can. Jane Goodall in particular provided the corrective to this — but she had difficulty getting published at first precisely because she refused to toe the hard-nosed Cartesian (behaviourist-inspired) line. The causes of animal welfare and conservation are still bedeviled by the attitude that animals are simply "machines" and by the crude belief that modern science has "proved" that this is so.

2: In the matter of GMOs we are seeing the crude simplifications still in their uncorrected form. By genetic engineering it is possible (sometimes) to increase crop yield. Other things being equal, high yields are better than low yields. Ergo (the argument goes) GMOs must be good and anyone who says differently must be a fool (unable to understand the science) or wicked (some kind of elitist, trying to hold the peasants back).

But anyone who knows anything about farming in the real world (as opposed to the cosseted experimental fields of the English home counties and of California) knows that yield is by no means the be-all and end-all. Inter alia, high yields require high inputs of resources and capital — the very things that are often lacking. Yield typically matters far less than long-term security — acceptable yields in bad years rather than bumper yields in the best conditions. Security requires individual toughness and variety — neither of which necessarily correlates with super-crop status. In a time of climate change, resilience is obviously of paramount importance — but this is not, alas, obvious to the people who make policy. Bumper crops in good years cause glut — unless the market is regulated; and glut in the current economic climate (though not necessarily in the real world of the US and the EU) depresses prices and puts farmers out of work.

Eventually the penny may drop — that the benison of the trial plot over a few years cannot necessarily be transferred to real farms in the world as a whole. But by that time the traditional crops that could have carried humanity through will be gone, and the people who know how to farm them will be living and dying in urban slums (which, says the UN, are now home to a billion people).

Behind all this nonsense and horror lies the simplistic belief, held by a lot of scientists (though by no means all, to be fair) and by politicians and captains of industry, that science understands all (i.e., is omniscient, or soon will be) and that its high technologies can dig us out of any hole we may dig ourselves into (i.e., is omnipotent).

Absolutely not.

Martin Seligman
Professor and Director, Positive Psychology Center, University of Pennsylvania; Author, Flourish

If my math had been better, I would have become an astronomer rather than a psychologist. I was after the very greatest questions, and finding life elsewhere in the universe seemed the greatest of them all. Understanding thinking, emotion, and mental health was second best — science for weaker minds like mine.

Carl Sagan and I were close colleagues in the late 1960s when we both taught at Cornell. I devoured his thrilling book with I.I. Shklovskii (Intelligent Life in the Universe, 1966) in one twenty-four-hour sitting, and I came away convinced that intelligent life was commonplace across our galaxy.

The book, as most readers know, estimates a handful of parameters necessary to intelligent life, such as the probability that an advanced technical civilization will in short order destroy itself and the number of "sol-like" stars in the galaxy. Their conclusion is that there are between 10,000 and two million advanced technical civilizations hereabouts. Some of my happiest memories are of discussing all this with Carl, our colleagues, and our students into the wee hours of many a chill Ithaca night.

And this made the universe a less chilly place as well. What consolation! That Homo sapiens might really partake of something larger, that there really might be numerous civilizations out there populated by more intelligent beings than we are, wiser because they had outlived the dangers of premature self-destruction. What's more, we might contact them and learn from them.
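The kind of estimate Shklovskii and Sagan made is essentially what is now called the Drake equation: a star-formation rate multiplied by a chain of fractions and an average civilization lifetime. Here is a sketch with round, illustrative parameter values of my own choosing, not their actual figures:

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Drake equation: expected number of detectable civilizations in the galaxy.

    R_star: rate of star formation (stars/year)
    f_p:    fraction of stars with planets
    n_e:    habitable planets per planetary system
    f_l:    fraction of those on which life arises
    f_i:    fraction of those that evolve intelligence
    f_c:    fraction of those that become detectable technical civilizations
    L:      average lifetime of such a civilization (years)
    """
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Illustrative guesses only; the product is exquisitely sensitive to each factor.
N = drake(R_star=10, f_p=0.5, n_e=2, f_l=1.0, f_i=0.1, f_c=0.1, L=10_000)
print(N)  # on the order of a thousand with these guesses
```

Notice that the lifetime L, the factor the essay worries about (does a technical civilization destroy itself?), scales the answer directly: cut L from 10,000 years to 100 and the estimate drops a hundredfold.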

A fledgling program of listening for intelligent radio signals from out there was starting up. Homo sapiens was just taking its first balky steps off the planet; we exuberantly watched the moon landing together at the faculty club. We worked on the question of how we would respond if humans actually heard an intelligent signal. What would our first "words" be? We worked on what would be inscribed on the almost immortal Voyager plaque that would leave our solar system just about now — allowing the sentient beings who cadged it epochs hence to surmise who we were, where we were, when we were, and what we were. (Should the man and woman be holding hands? No, they might think we were one conjoined organism.)

SETI (the Search for Extraterrestrial Intelligence) and its forerunners are almost forty years old. They scan the heavens for intelligent radio signals, with three million participants using their home computers to analyze the input. The result has been zilch. There are plenty of excuses for zilch, however, and lots of reason to hope: only a small fraction of the sky has been scanned, and larger, more efficient arrays are coming on line. Maybe really advanced civilizations don't use communication techniques that produce waves we can pick up.

Maybe intelligent life is so unimaginably different from us that we are looking in all the wrong "places." Maybe really intelligent life forms hide their presence.

So I changed my mind. I now take the null hypothesis very seriously: that Sagan and Shklovskii were wrong; that the number of advanced technical civilizations in our galaxy is exactly one; that the number of advanced technical civilizations in the universe is exactly one.

What is the implication of the possibility, mounting a bit every day, that we are alone in the universe? It reverses the millennial progression from a geocentric to a heliocentric to a Milky Way centered universe, back to, of all things, a geocentric universe. We are the solitary point of light in a darkness without end. It means that we are precious, infinitely so. It means that nuclear or environmental cataclysm is an infinitely worse fate than we thought.

It means that we have a job to do, a mission that will last all our ages to come: to seed and then to shepherd intelligent life beyond this pale blue dot.

Irene Pepperberg
Research Associate & Lecturer, Harvard; Author, Alex & Me

I've begun to rethink the way we teach students to engage in scientific research. I was trained, as a chemist, to use the classic scientific method: Devise a testable hypothesis, and then design an experiment to see if the hypothesis is correct or not. And I was told that this method is equally valid for the social sciences. I've changed my mind that this is the best way to do science. I have three reasons for this change of mind.

First, and probably most importantly, I've learned that one often needs simply to sit and observe and learn about one's subject before even attempting to devise a testable hypothesis. What are the physical capacities of the subject? What is the social and ecological structure in which it lives? Does some anecdotal evidence suggest the form that the hypothesis should take? Few granting agencies are willing to provide support for this step, but it is critical to the scientific process, particularly for truly innovative research. Often, a proposal to gain observational experience is dismissed as being a "fishing expedition"…but how can one devise a workable hypothesis to test without first acquiring basic knowledge of the system, and how better to obtain such basic knowledge than to observe the system without any preconceived notions?

Second, I've learned that truly interesting questions often can't be reduced to a single testable hypothesis, at least not without becoming somewhat absurd. "Can a parrot label objects?" may be a testable hypothesis, but it isn't actually very interesting… what is interesting, for example, is how that labeling compares to the behavior of a young child, exactly what type of training enables such learning and what type is useless, how far such labeling transfers across exemplars, and… well, you get the picture… the exciting part is the series of interrelated questions that arise and expand almost indefinitely.

Third, I've learned that the scientific community's emphasis on hypothesis-based research leads too many scientists to devise experiments to prove, rather than test, their hypotheses. Many journal submissions lack any discussion of alternative competing hypotheses: Researchers don't seem to realize that collecting data that are consistent with their original hypothesis doesn't mean that it is unconditionally true. Alternatively, they buy into the fallacy that absence of evidence for something is always evidence of its absence.

I'm all for rigor in scientific research — but let's emphasize the gathering of knowledge rather than the proving of a point.

Joseph LeDoux
Professor of Neural Science, Psychology, Psychiatry, and Child and Adolescent Psychiatry, NYU; Director Emotional Brain Institute; Author, Anxious

Like many scientists in the field of memory, I used to think that a memory is something stored in the brain and then accessed when used. Then, in 2000, a researcher in my lab, Karim Nader, did an experiment that convinced me, and many others, that our usual way of thinking was wrong. In a nutshell, what Karim showed was that each time a memory is used, it has to be restored as a new memory in order to be accessible later. The old memory is either not there or is inaccessible. In short, your memory about something is only as good as your last memory about it. This is why people who witness crimes testify about what they read in the paper rather than what they witnessed. Research on this topic, called reconsolidation, has become the basis of a possible treatment for post-traumatic stress disorder, drug addiction, and any other disorder that is based on learning.

That Karim's study changed my mind is clear from the fact that I told him, when he proposed to do the study, that it was a waste of time. I'm not swayed by arguments based on faith, can be moved by good logic, but am always swayed by a good experiment, even if it goes against my scientific beliefs. I might not give up on a scientific belief after one experiment, but when the evidence mounts over multiple studies, I change my mind.

Marcelo Gleiser
Appleton Professor of Natural Philosophy, Dartmouth College; Author, The Island of Knowledge

I grew up infused with the idea of unification. It came first from religion, from my Jewish background. God was all over, was all-powerful, and had a knack for interfering with human affairs, at least in the Old Testament. He then appeared to have decided to be a bit shyer, sending a Son instead, and only revealing Himself through visions and prophecies. Needless to say, when, as a teenager, I started to get interested in science, this vision of an all-pervading God, stories of floods, commandments, and plagues, started to look very suspect. I turned to physics, idolizing Einstein and his science; here was a Jew who saw further, who found a way of translating this old monotheistic tradition into the universal language of science.

As I started my research career, I had absolutely no doubt that I wanted to become a theoretical physicist working on particle physics and cosmology. Why the choice? Simple: it was the joining of the two worlds, of the very large and the very small, that offered the best hope for finding a unified theory of all Nature, one that brought together matter and forces into one single magnificent formulation, the final Platonist triumph. This was what Einstein tried to do for the last three decades of his life, although in his day it was more a search for unifying only half of the forces of Nature, gravity and electromagnetism.

I wrote dozens of papers related to the subject of unification; even my Ph.D. dissertation was on the topic. I was fascinated by the modern approaches to the idea: supersymmetry, superstrings, a space with extra, hidden dimensions. A part of me still is. But then, a few years ago, something snapped. It was probably brought on by a combination of factors, among them a deeper understanding of the historical and cultural processes that shape scientific ideas. I started to doubt unification, finding it to be the scientific equivalent of a monotheistic formulation of reality, a search for God revealed in equations. Of course, had we the slightest experimental evidence in favor of unification, of supersymmetry and superstrings, I'd be the first popping the champagne open. But it's been over twenty years, and all attempts so far have failed. Nothing in particle accelerators, nothing in cryogenic dark matter detectors, no magnetic monopoles, no proton decay, all tell-tale signs of unification predicted over the years. Even our wonderful Standard Model of particle physics, where we formulate the unification of electromagnetism and the weak nuclear interactions, is not really a true unification: the theory retains information from both interactions in the form of their strengths or, in more technical jargon, their coupling constants. A true unification should have a single coupling constant, a single interaction.
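The point about coupling constants can be made quantitative. In the Standard Model the three gauge couplings "run" with energy; at one loop their inverse strengths change linearly in the logarithm of the energy scale, and extrapolated to a typical grand-unification scale they approach one another but do not meet at a single point. A rough sketch, using approximate textbook values for the measured couplings at the Z mass (the inputs are my illustrative round numbers, not precision fits):

```python
import math

# Approximate inverse gauge couplings at the Z mass (GUT-normalized U(1)),
# and the one-loop Standard Model beta coefficients b_i.
ALPHA_INV_MZ = {"U(1)": 59.0, "SU(2)": 29.6, "SU(3)": 8.5}
B = {"U(1)": 41.0 / 10.0, "SU(2)": -19.0 / 6.0, "SU(3)": -7.0}
M_Z = 91.2  # GeV

def alpha_inv(group, mu):
    """One-loop running: alpha_i^-1(mu) = alpha_i^-1(M_Z) - (b_i / 2pi) ln(mu / M_Z)."""
    return ALPHA_INV_MZ[group] - (B[group] / (2 * math.pi)) * math.log(mu / M_Z)

mu_gut = 1e16  # a typical grand-unification scale, in GeV
values = {g: alpha_inv(g, mu_gut) for g in ALPHA_INV_MZ}
spread = max(values.values()) - min(values.values())
print(values, spread)  # the three lines draw closer but miss a common point
```

With supersymmetric beta coefficients the three lines famously come much closer to meeting, which is a large part of supersymmetry's aesthetic appeal; nothing in the data so far compels that picture.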

All of my recent anti-unification convictions could crumble during the next few years, after our big new machine, the Large Hadron Collider, is turned on. Many colleagues hope that supersymmetry will finally show its face. Others even bet that signs of extra dimensions will be revealed. However, I have a feeling things won't turn out so nicely. Unification, the model that is so aesthetically appealing, may be simply that: an aesthetically appealing description of Nature which, unfortunately, doesn't correspond to physical reality. Nature doesn't share our myths. The stakes are high indeed. But being a mild agnostic, I don't believe until there is evidence. And then there is no need to believe any longer, which is precisely the beauty of science.

karl_sabbagh's picture
Producer; Founder, Managing Director, Skyscraper Productions; Author, The Antisemitism Wars: How the British Media Failed Their Public

I used to believe that there were experts and non-experts and that, on the whole, the judgment of experts is more accurate, more valid, and more correct than my own judgment. But over the years, thinking — and, I should add, experience — has changed my mind. What experts have that I don't is knowledge and experience in some specialized area. What, as a class, they don't have any more than I do are the skills of judgment, rational thinking and wisdom. And I've come to believe that some highly 'qualified' people have less of those than I do.

I now believe that the people I know who are wise are not necessarily knowledgeable; the people I know who are knowledgeable are not necessarily wise. Most of us confuse expertise with judgment. Even in politics, where the only qualities politicians have that the rest of us lack are knowledge of the procedures of parliament or congress, and of how government works, occasionally combined with specific knowledge of economics or foreign affairs, we tend to look to such people for wisdom and decision-making of a high order.

Many people enroll in MBA programs to become more successful businessmen. An article in Fortune magazine a couple of years ago compared the academic qualifications of people in business and found that the qualification that correlated most highly with success was a philosophy degree. When I ran a television production company and was approached for a job by budding directors or producers, I never employed anyone with a degree in media studies. But I did employ lots of intelligent people with good judgment who knew nothing about television to start with but could make good decisions. The results justified that approach.

Scientists — with a few eccentric exceptions — are, perhaps, the one group of experts who have never claimed for themselves wisdom outside the narrow confines of their specialties. Paradoxically, they are the one group who are blamed for the mistakes of others. Science and scientists are criticized for judgments about weapons, stem cells, global warming, nuclear power, when the decisions are made by people who are not scientists.

As a result of changing my mind about this, I now view the judgments of others, however distinguished or expert they are, as no more valid than my own. If someone who is a 'specialist' in the field disagrees with me about a book idea, the solution to the Middle East's problems, the non-existence of the paranormal or nuclear power, I am now entirely comfortable with the disagreement, because I know I'm just as likely to be right as they are.

david_bodanis's picture
Writer; Futurist; Author, Einstein's Greatest Mistake

When I was very little the question was easy. I simply assumed the whole Bible was true, albeit in a mysterious, grown-up sort of way. But once I learned something of science, at school and then at university, that unquestioning belief slid away.

Mathematics was especially important here, and I remember how entranced I was when I first saw the power of axiomatic systems. These were logical structures as beautiful as complex crystals — but far, far clearer. If there was one inaccuracy at any point in the system, you could trace it, like a scarcely visible crack stretching through the whole crystal; you could see exactly how it had to undermine the validity of far distant parts as well. Since there are obvious factual inaccuracies in the Bible, as well as repugnant moral commands, then — just as with any tight axiomatic system — huge other parts of it had to be wrong as well. In my mind that discredited it all.

What I've come to see more recently is that the Bible isn't monolithic in that way. It's built up in many, often quite distinct layers. For example, the book of Joshua describes a merciless killing of Jericho's inhabitants, after that city's walls were destroyed. But archaeology shows that when this was supposed to be happening, there was no large city with walls there to be destroyed. On the contrary, careful dating of artifacts, as well as translations from documents of the great empires in surrounding regions, shows that the bloodthirsty Joshua story was quite likely written by one particular group, centuries later, trying to give some validity to a particular royal line in 7th century BC Jerusalem, which wanted to show its rights to the entire country around it. Yet when that Joshua layer is stripped away, other layers in the Bible remain. They can stand, or be judged, on their own.

A few of those remaining layers have survived only because they were taken up by narrow power structures concerned with aggrandizing themselves, in the style of Philip Pullman's excellent books. But others have survived across the millennia for different reasons. Some speak to the human condition with poetry of aching beauty. And others — well, there's a further reason I began to doubt that everything I couldn't understand was inane.

A child of three, however intelligent, and however tightly he or she squinches his or her face in concentration, still won't be able to grasp notions that are easy for us, such as 'century' or 'henceforth', let alone greater subtleties which 20th century science has clarified, such as 'simultaneity' or 'causality'. True and important things exist which young children can't comprehend. It seems odd to be sure that we, adult humans, existing at this one particular moment in evolution, have no such limits.

I realized that the world isn't divided into science on the one hand, and nonsense or arbitrary biases on the other. And I wonder now what might be worth looking for, hidden there, fleetingly in-between.

freeman_dyson's picture
Physicist, Institute of Advanced Study; Author, Disturbing the Universe; Maker of Patterns

When facts change your mind, that's not always science. It may be history. I changed my mind about an important historical question: did the nuclear bombings of Hiroshima and Nagasaki bring World War Two to an end? Until this year I used to say, perhaps. Now, because of new facts, I say no. This question is important, because the myth of the nuclear bombs bringing the war to an end is widely believed. To demolish this myth may be a useful first step toward ridding the world of nuclear weapons.

Until the last few years, the best summary of evidence concerning this question was a book, "Japan's Decision to Surrender", by Robert Butow, published in 1954. Butow interviewed the surviving Japanese leaders who had been directly involved in the decision. He asked them whether Japan would have surrendered if the nuclear bombs had not been dropped. His conclusion, "The Japanese leaders themselves do not know the answer to that question, and if they cannot answer it, neither can I". Until recently, I believed what the Japanese leaders said to Butow, and I concluded that the answer to the question was unknowable.

Facts causing me to change my mind were brought to my attention by Ward Wilson. Wilson summarized the facts in an article, "The Winning Weapon? Rethinking Nuclear Weapons in the Light of Hiroshima", in the Spring 2007 issue of the magazine, "International Security". He gives references to primary source documents and to analyses published by other historians, in particular by Robert Pape and Tsuyoshi Hasegawa. The facts are as follows:

1. Members of the Supreme Council, which customarily met with the Emperor to take important decisions, learned of the nuclear bombing of Hiroshima on the morning of August 6, 1945. Although Foreign Minister Togo asked for a meeting, no meeting was held for three days.

2. A surviving diary records a conversation of Navy Minister Yonai, who was a member of the Supreme Council, with his deputy on August 8. The Hiroshima bombing is mentioned only incidentally. More attention is given to the fact that the rice ration in Tokyo is to be reduced by ten percent.

3. On the morning of August 9, Soviet troops invaded Manchuria. Six hours after hearing this news, the Supreme Council was in session. News of the Nagasaki bombing, which happened the same morning, only reached the Council after the session started.

4. The August 9 session of the Supreme Council resulted in the decision to surrender.

5. The Emperor, in his rescript to the military forces ordering their surrender, does not mention the nuclear bombs but emphasizes the historical analogy between the situation in 1945 and the situation at the end of the Sino-Japanese war in 1895. In 1895 Japan had defeated China, but accepted a humiliating peace when European powers led by Russia moved into Manchuria and the Russians occupied Port Arthur. By making peace, the emperor Meiji had kept the Russians out of Japan. Emperor Hirohito had this analogy in his mind when he ordered the surrender.

6. The Japanese leaders had two good reasons for lying when they spoke to Robert Butow. The first reason was explained afterwards by Lord Privy Seal Kido, another member of the Supreme Council: "If military leaders could convince themselves that they were defeated by the power of science but not by lack of spiritual power or strategic errors, they could save face to some extent". The second reason was that they were telling the Americans what the Americans wanted to hear, and the Americans did not want to hear that the Soviet invasion of Manchuria brought the war to an end.

In addition to the myth of two nuclear bombs bringing the war to an end, there are other myths that need to be demolished. There is the myth that, if Hitler had acquired nuclear weapons before we did, he could have used them to conquer the world. There is the myth that the invention of the hydrogen bomb changed the nature of nuclear warfare. There is the myth that international agreements to abolish weapons without perfect verification are worthless. All these myths are false. After they are demolished, dramatic moves toward a world without nuclear weapons may become possible.

douglas_rushkoff's picture
Media Analyst; Documentary Writer; Author, Throwing Rocks at the Google Bus

I thought the Internet would change people. I thought it would allow us to build a new world through which we could model new behaviors, values, and relationships. In the 90's, I thought the experience of going online for the first time would change a person's consciousness as much as if they had dropped acid in the 60's.

I thought Amazon.com was a ridiculous idea, and that the Internet would shrug off business as easily as it did its original Defense Department minders.

For now, at least, it's turned out to be different.

Virtual worlds like Second Life have been reduced to market opportunities: advertisers from banks to soft drinks purchase space and create fake characters, while kids (and Chinese digital sweatshop laborers) earn "play money" in the game only to sell it to lazier players on eBay for real cash.

The businesspeople running Facebook and MySpace are rivaled only by the members of these online "communities" in their willingness to surrender their identities and ideals for a buck, a click-through, or a better market valuation.

The open source ethos has been reinterpreted through the lens of corporatism as "crowdsourcing" — meaning just another way to get people to do work for no compensation. And even "file-sharing" has been reduced to a frenzy of acquisition that has less to do with music than with the ever-expanding hard drives of successive iPods.

Sadly, cyberspace has become just another place to do business. The question is no longer how browsing the Internet changes the way we look at the world; it's which browser we'll be using to buy and sell stuff in the same old world.

haim_harari's picture
Physicist, former President, Weizmann Institute of Science; Author, A View from the Eye of the Storm

I used to think that if something is clear and simple, it must also be provable or at least well defined, and if something is well defined, it might be relatively simple. It isn't so.

If you hear about sightings of a weird glow approaching us in the night sky, it might be explained as a meteorite or as little green men arriving in a spaceship from another galaxy. In most specific cases, both hypotheses can be neither proved nor disproved, rigorously. Nothing is well defined here. Yet, it is clear that the meteorite hypothesis is scientifically much more likely.

When you hear about a new perpetual motion machine or about yet another claim of cold fusion, you raise an eyebrow, you are willing to bet against it and, in your guts, you know it is wrong, but it is not always easy to disprove it rigorously.

The reliability of forecasts regarding weather, stock markets and astrology descends in that order. All of them are based on guesses, with or without historical data. Most of them are rarely revisited by the media after the fact, thus avoiding being exposed as unreliable. In most cases, predicting that the immediate future will be the same as the immediate past has a higher probability of being correct than the predictions of the gurus. Yet we, as scientists, have considerable faith in weather predictions; much less faith in predicting peaks and dips of the stock market; and no faith at all in astrology. We can explain why, and we are certainly right, but we cannot prove why. Proving it by historical success data is as convincing (for the future) as the predictions themselves.

Richard Feynman in his famous Lectures on Physics provided the ultimate physics definition of Energy: It is that quantity which is conserved. Any Lawyer, Mathematician or Accountant would have laughed at this statement. Energy is perhaps the most useful, clear and common concept in all of science, and Feynman is telling us, correctly and shamelessly, that it has no proper rigorous and logical definition.

How much is five thousand plus two? Not so simple. Sometimes it is five thousand and two (as in your bank statement) and sometimes it is actually five thousand (as in the case of the Cairo tour guide who said, "This pyramid is 5002 years old; when I started working here two years ago, I was told it was 5000 years old").
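The tour guide's arithmetic can be sketched in a few lines: when one operand carries a large uncertainty, adding an exact small number to it changes nothing meaningful. This is an illustrative sketch, not from the essay; the function name and the ±100-year uncertainty on the pyramid's age are my own assumptions.

```python
import math

def add_with_precision(value, uncertainty, exact_addend):
    """Add an exact number to an uncertain one, then round the
    result back to the level of precision the uncertainty allows."""
    # Decimal places implied by the uncertainty (negative for
    # uncertainties of 10, 100, ...).
    digits = -int(math.floor(math.log10(uncertainty)))
    return round(value + exact_addend, digits)

# Bank statement: both numbers are exact, so 5000 + 2 really is 5002.
print(5000 + 2)                          # 5002

# Pyramid age: a ~100-year uncertainty swallows the extra 2 years.
print(add_with_precision(5000, 100, 2))  # 5000
```

Both answers are "scientific": the difference lies not in the arithmetic but in what the numbers are taken to mean.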

The public thinks, incorrectly, that science is a very accurate discipline where everything is well defined. Not so. But the beauty of it is that all of the above statements are scientific, obvious and useful, without being precisely defined. That is as much a part of the scientific method as verifying a theory by an experiment (which is always accurate only to a point).

To speak and to understand the language of science is, among other things, to understand this "clear vagueness". It exists, of course, in other areas of life. Every normal language possesses numerous such examples, and so do all fields of social science.

Judaism is a religion and I am an atheist. Nevertheless, it is clear that I am Jewish. It would take a volume to explain why, and the explanation will remain rather obscure and ill defined. But the fact is simple, clear, well understood and undeniable.

Somehow, it is acceptable to face such situations in nonscientific matters, but most people think, incorrectly, that the quantitative natural sciences must be different. They are different, in many ways, but not in this way.

Common sense has as much place as logic, in scientific research. Intuition often leads to more insight than algorithmic thinking. Familiarity with previous failed attempts to solve a problem may be detrimental, rather than helpful. This may explain why almost all important physics breakthroughs are made by people under forty. This also explains why, in science, asking the right question is at least as important as being able to solve a well posed problem.

You might say that the above kind of thinking is prejudiced and inaccurate, and that it might hinder new discoveries and new scientific ideas. Not so. Good scientists know very well how to treat and use all of these "fuzzy" statements. They also know how to reconsider them, when there is a good reason to do so, based on new solid facts or on a new original line of thinking. This is one of the beautiful features of science.

ed_regis's picture
Science writer; Author, Monsters

I used to think you could predict the future.  In "Profiles of the Future," Arthur C. Clarke made it seem so easy.  And so did all those other experts who confidently predicted the paperless office, the artificial intelligentsia who for decades predicted "human equivalence in ten years," the nanotechnology prophets who kept foreseeing major advances toward molecular manufacturing within fifteen years, and so on. 

Mostly, the predictions of science and technology types were wonderful: space colonies, flying cars in everyone's garage, the conquest (or even reversal) of aging.  (There were of course the doomsayers, too, such as the population-bomb theorists who said the world would run out of food by the turn of the century.) 

But at last, after watching all those forecasts not come true, and in fact become falsified in a crashing, breathtaking manner, I began to question the entire business of making predictions.  I mean, if even Nobel prizewinning scientists such as Ernest Rutherford, who gave us essentially the modern concept of the nuclear atom, could say, as he did in 1933, that "We cannot control atomic energy to an extent which would be of any value commercially, and I believe we are not likely ever to be able to do so," and be so spectacularly wrong about it, what hope was there for the rest of us? 

And then I finally decided that I knew the source of this incredible mismatch between confident forecast and actual result.  The universe is a complex system in which countless causal chains are acting and interacting independently and simultaneously (the ultimate nature of some of them unknown to science even today).  There are in fact so many causal sequences and forces at work, all of them running in parallel, and each of them often affecting the course of the others, that it is hopeless to try to specify in advance what's going to happen as they jointly work themselves out.  In the face of that complexity, it becomes difficult if not impossible to know with any assurance the future state of the system except in those comparatively few cases in which the system is governed by ironclad laws of nature such as those that allow us to predict the  phases of the moon, the tides, or the position of Jupiter in tomorrow night's sky.  Otherwise, forget it. 

Further, it's an illusion to think that supercomputer modeling is up to the task of truly reliable crystal-ball gazing.  It isn't.  Witness the epidemiologists who predicted that last year's influenza season would be severe (in fact it was mild); the professional hurricane-forecasters whose models told them that the last two hurricane seasons would be monsters (whereas instead they were wimps).  Certain systems in nature, it seems, are computationally irreducible phenomena, meaning that there is no way of knowing the outcome short of waiting for it to happen. 

Formerly, when I heard or read a prediction, I believed it.  Nowadays I just roll my eyes, shake my head, and turn the page.

piet_hut's picture
professor of astrophysics at the Institute for Advanced Study, in Princeton

I used to pride myself on the fact that I could explain almost anything to anyone, on a simple enough level, using analogies. No matter how abstract an idea in physics may be, there always seems to be some way in which we can get at least some part of the idea across. If colleagues shrugged and said, oh, well, that idea is too complicated or too abstract to be explained in simple terms, I thought they were either lazy or not very skilled in thinking creatively around a problem. I could not imagine a form of knowledge that could not be communicated in some limited but valid approximation or other.

However, I've changed my mind, in what was for me a rather unexpected way. I still think I was right that any type of insight can be summarized to some degree, in what is clearly a correct first approximation when judged by someone who shares the insight. My mistake was that, for a long time, I had not realized how totally wrong this first approximation can come across to someone who does not share the original insight.

Quantum mechanics offers a striking example. When someone hears that there is a limit on how accurately you can simultaneously measure various properties of an object, it is tempting to think that the limitations lie in the measuring procedure, and that the object itself somehow can be held to have exact values for each of those properties, even if they cannot be measured. Surprisingly, that interpretation is wrong: John Bell showed that such a 'hidden variables' picture is actually in clear disagreement with quantum mechanics. An initial attempt at explaining the measurement problem in quantum mechanics can be more misleading than not saying anything at all.
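Bell's result can be stated compactly in its CHSH form (a standard textbook formulation, added here as a sketch; it is not part of the original essay). Writing E(a,b) for the correlation between measurements made with detector settings a and b:

```latex
% CHSH form of Bell's inequality.
S = E(a,b) - E(a,b') + E(a',b) + E(a',b')
% Any local hidden-variables theory obeys
\lvert S \rvert \le 2,
% while quantum mechanics permits violations up to
\lvert S \rvert \le 2\sqrt{2} \approx 2.83.
```

Experiments measure values of S above 2, which is why the tempting "exact values that merely can't be measured" picture cannot be rescued.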

So for each insight there is at least some explanation possible, but the same explanation may then be given for radically different insights. There is nothing that cannot be explained, but there are wrong insights that can lead to explanations that are identical to the explanation for a correct but rather subtle insight.

timothy_taylor's picture
Jan Eisner Professor of Archaeology, Comenius University in Bratislava; Author, The Artificial Ape

Where once I would have striven to see Incan child sacrifice 'in their terms', I am increasingly committed to seeing it in ours. Where once I would have directed attention to understanding a past cosmology of equal validity to my own, I now feel the urgency to go beyond a culturally-attuned explanation and reveal cold sadism, deployed as a means of social control by a burgeoning imperial power.

In Cambridge at the end of the 70s, I began to be inculcated with the idea that understanding the internal logic and value system of a past culture was the best way to do archaeology and anthropology. The challenge was to achieve this through sensitivity to context, classification and symbolism. A pot was no longer just a pot, but a polyvalent signifier, with a range of case-sensitive meanings. A rubbish pit was no longer an unproblematic heap of trash, but a semiotic entity embodying concepts of contagion and purity, sacred and profane. A ritual killing was not to be judged bad, but as having validity within a different worldview.

Using such 'contextual' thinking, a lump of slag found in a 5000 BC female grave in Serbia was no longer seen as a chance contaminant — by-product garbage from making copper jewelry. Rather, it was a kind of poetic statement bearing on the relationship between biological and cultural reproduction. Just as births in the Vinča culture were attended by midwives who also delivered the warm but useless slab of afterbirth, so Vinča culture ore was heated in a clay furnace that gave birth to metal. From the furnace — known from many ethnographies to have projecting clay breasts and a graphically vulvic stoking opening — the smelters delivered technology's baby. With it came a warm but useless lump of slag. Thus the slag in a Vinča woman's grave, far from being accidental trash, hinted at a complex symbolism of gender, death and rebirth.

So far, so good: relativism worked as a way towards understanding that our industrial waste was not theirs, and their idea of how a woman should be appropriately buried not ours. But what happens when relativism says that our concepts of right and wrong, good and evil, kindness and cruelty, are inherently inapplicable? Relativism self-consciously divests itself of a series of anthropocentric and anachronistic skins — modern, white, western, male-focused, individualist, scientific (or 'scientistic') — to say that the recognition of such value-concepts is radically unstable, the 'objective' outsider opinion a worthless myth.

My colleague Andy Wilson and our team have recently examined the hair of sacrificed children found on some of the high peaks of the Andes. Contrary to historic chronicles that claim that being ritually killed to join the mountain gods was an honour that the Incan rulers accorded only to their own privileged offspring, diachronic isotopic analyses along the scalp hairs of victims indicate that it was peasant children, who, twelve months before death, were given the outward trappings of high status and a much improved diet to make them acceptable offerings. Thus we see past the self-serving accounts of those of the indigenous elite who survived on into Spanish rule. We now understand that the central command in Cuzco engineered the high-visibility sacrifice of children drawn from newly subject populations. And we can guess that this was a means to social control during the massive, 'shock & awe' style imperial expansion southwards into what became Argentina.

But the relativists demur from this understanding, and have painted us as culturally insensitive, ignorant scientists (the last label a clear pejorative). For them, our isotope work is informative only as it reveals 'the inner fantasy life of, mostly, Euro-American archaeologists, who can't possibly access the inner cognitive/cultural life of those Others.' The capital 'O' is significant. Here we have what the journalist Julie Burchill mordantly unpacked as 'the ever-estimable Other' — the albatross that post-Enlightenment and, more importantly, post-colonial scholarship must wear round its neck as a sign of penance.

We need relativism as an aid to understanding past cultural logic, but it does not free us from a duty to discriminate morally and to understand that there are regularities in the negatives of human behaviour as well as in its positives. In this case, it seeks to ignore what Victor Nell has described as 'the historical and cross-cultural stability of the uses of cruelty for punishment, amusement, and social control.' By denying the basis for a consistent underlying algebra of positive and negative, yet consistently claiming the necessary rightness of the internal cultural conduct of 'the Other', relativism steps away from logic into incoherence.

david_brin's picture
Scientist; Speaker; Technical Consultant; Author, Existence

Sometimes you are glad to discover you were wrong. My best example of that kind of pleasant surprise is India. I'm delighted to see its recent rise, on (tentative) course toward economic, intellectual and social success. If these trends continue, it will matter a lot to Earth civilization, as a whole. The factors that fostered this trend appear to have been atypical — at least according to common preconceptions like "west and east" or "right vs left." I learned a lesson, about questioning my assumptions. 

Alas, there have been darker surprises. The biggest example has been America's slide into what could be diagnosed as bona fide Future Shock. 

Alvin Toffler appears to have sussed it. Back in 1999, while we were fretting over a silly "Y2K bug" in ancient COBOL code, something else happened, at a deeper level. Our weird governance issues are only surface symptoms of what may have been a culture-wide crisis of confidence upon the arrival of that "2" in the millennium column. Yes, people seemed to take the shift complacently, going about their business. But underneath all the blithe shrugs, millions have turned their backs upon the future, even as a topic of discussion or interest.

Other than the tenacious grip of Culture War, what evidence can I offer? Well, in my own fields, let me point to a decline in the futurist-punditry industry. (A recent turnaround offers hope.) And a plummet in the popularity of science fiction literature (as opposed to feudal-retro fantasy). John B. has already shown us how little draw science books offer, in the public imagination — an observation that not only matches my own, but also reflects the anti-modernist fervor displayed by all dogmatic movements.

One casualty: the assertive, pragmatic approach to negotiation and human-wrought progress that used to be mother's milk to this civilization. 

Yes, there were initial signs of all this, even in the 1990s. But the extent of future-anomie and distaste for science took me completely by surprise. It makes me wonder why Toffler gets mentioned so seldom. 

Let me close with a final surprise, that's more of a disappointment. 

I certainly expected that, by now, online tools for conversation, work, collaboration and discourse would have become far more useful, sophisticated and effective than they currently are. I know I'm pretty well alone here, but all the glossy avatars and video and social network sites conceal a trivialization of interaction, dragging it down to the level of single-sentence grunts, flirtation and ROTFL [rolling on the floor laughing], at a time when we need discussion and argument to be more effective than ever. 

Indeed, most adults won't have anything to do with all the wondrous gloss that fills the synchronous online world, preferring by far the older, asynchronous modes, like web sites, email, downloads etc. 

This isn't grouchy old-fart testiness toward the new. In fact, there are dozens of discourse-elevating tools just waiting out there to be born. Everybody is still banging rocks together, while bragging about the colors. Meanwhile, half of the tricks that human beings normally use, in real world conversation, have never even been tried online.

leon_m_lederman's picture
Director emeritus of Fermi National Accelerator Laboratory

My academic experience, mainly at Columbia University from 1946-1978, instilled the following firm beliefs:

The role of the Professor, reflecting the mission of the University, is research and the dissemination of the knowledge gained. However, the Professor has many citizenship obligations as well: to his community, State and Nation, to his University, to his field of research (e.g. physics), and to his students. In the latter case, one must add, to the content knowledge transferred, the moral and ethical concerns that science brings to society. So scientists have an obligation to communicate their knowledge, to popularize it, and, whenever relevant, to bring it to bear on the issues of the time. Additionally, scientists play a large role in advisory boards and systems, from the President's advisory system all the way down to local school boards and PTAs. I always believed that the above menu more or less covered all the obligations and responsibilities of the scientist, whose most sacred obligation is to continue to do science. Now I know that I was dead wrong.

Taking even a cursory stock of current events, I am driven to the ultimately wise advice of my Columbia mentor, I.I. Rabi, who, in our many corridor bull sessions, urged his students to run for public office and get elected. He insisted that being an advisor (he advised Oppenheimer at Los Alamos, and later Eisenhower and the AEC) was ultimately an exercise in futility, and that the power belonged to those who are elected. Then, we thought the old man was bonkers. But today...

Just look at our national and international dilemmas: global climate change (the U.S. booed in Bali); nuclear weapons (seventeen years after the end of the Cold War, the U.S. has over 7,000 nuclear weapons, many poised for instant flight. Who decided?); stem cell research (still hobbled by White House obstacles). Basic research and science education are rated several nations below "Lower Slobovenia"; our national deficit will burden the nation for generations; and a wave of religious fundamentalism, an endless war in Iraq, and growing security restrictions on our privacy and freedom (excused by an even more endless and mindless war on terrorism) seem to be paralyzing the Congress. We need to elect people who can think critically.

A Congress which is overwhelmingly dominated by lawyers and MBAs makes no sense in this 21st century in which almost all issues have a science and technology aspect. We need a national movement to seek out scientists and engineers who have demonstrated the required management and communication skills. And we need a strong consensus of mentors that the need for wisdom and knowledge in the Congress must have a huge priority.

rudy_rucker's picture
Mathematician; Computer Scientist; Cyberpunk Pioneer; Novelist, Infinity and the Mind, Postsingular, and (with Bruce Sterling) Transreal Cyberpunk.

Studying mathematical logic in the 1970s, I believed it was possible to put together a convincing argument that no computer program can fully emulate a human mind. Although nobody had quite gotten the argument right, I hoped to straighten it out.

My belief in this will-o'-the-wisp was motivated by a gut feeling that people have numinous inner qualities that will not be found in machines.  For one thing, our self-awareness lets us reflect on ourselves and get into endless mental regresses: "I know that I know that I know..."  For another, we have moments of mystical illumination when we seem to be in contact, if not with God, then with some higher cosmic mind.  I felt that surely no machine could be self-aware or experience the divine light.

At that point, I'd never actually touched a computer — they were still inaccessible, stygian tools of the establishment.  Three decades rolled by, and I'd morphed into a Silicon Valley computer scientist, in constant contact with nimble chips.  Setting aside my old prejudices, I changed my mind — and came to believe that we can in fact create human-like computer programs.

Although writing out such a program is in some sense beyond the abilities of any one person, we can set up simulated worlds in which such computer programs evolve.  I feel confident that some relatively simple set-up will, in time, produce a human-like program capable of emulating all known intelligent human behaviors: writing books, painting pictures, designing machines, creating scientific theories, discussing philosophy, and even falling in love.  More than that, we will be able to generate an unlimited number of such programs, each with its own particular style and personality. 

What of the old-style attacks from the quarters of mathematical logic?  Roughly speaking, these arguments always hinged upon a spurious belief that we can somehow distinguish between, on the one hand, human-like systems which are fully reliable and, on the other hand, human-like systems fated to begin spouting gibberish.  But the correct deduction from mathematical logic is that there is absolutely no way to separate the sheep from the goats.  Note that this is already our situation vis-à-vis real humans: you have no way to tell if and when a friend or a loved one will forever stop making sense.

With the rise of new practical strategies for creating human-like programs and the collapse of the old a priori logical arguments against this endeavor, I have to reconsider my former reasons for believing humans to be different from machines.   Might robots become self-aware?  And — not to put too fine a point on it — might they see God?  I believe both answers are yes. 

Consciousness probably isn't that big a deal.  A simple pair of facing mirrors exhibits a kind of endlessly regressing self-awareness, and this type of pattern can readily be turned into computer code.
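As an illustrative toy only (not, of course, a claim that this is consciousness), the facing-mirrors regress really can be put into a few lines of code, with the recursion truncated at some finite depth the way a physical reflection fades:

```python
def reflect(depth: int) -> str:
    """Build the nested 'I know that I know that...' self-representation.

    Each level contains a representation of the level below it, like one
    mirror reflecting another; depth plays the role of the fading image.
    """
    if depth == 0:
        return "I know"
    return f"I know that {reflect(depth - 1)}"

print(reflect(3))
# I know that I know that I know that I know
```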

And what about basking in the divine light?  Certainly if we take a reductionistic view that mystical illumination is just a bath of intoxicating brain chemicals, then there seems to be no reason that machines couldn't occasionally be nudged into exceptional states as well.  But I prefer to suppose that mystical experiences involve an objective union with a higher level of mind, possibly mediated by offbeat physics such as quantum entanglement, dark matter, or higher dimensions. 

Might a robot enjoy these true mystical experiences?  Based on my studies of the essential complexity of simple systems, I feel that any physical object at all must be equally capable of enlightenment.  As the Zen apothegm has it, "The universal rain moistens all creatures." 

So, yes, I now think that robots can see God.

dan_sperber's picture
Social and Cognitive Scientist; CEU Budapest and CNRS Paris; Co-author (with Deirdre Wilson), Meaning and Relevance; and (with Hugo Mercier), The Enigma of Reason

As a student, I was influenced by Claude Lévi-Strauss and even more by Noam Chomsky. Both of them dared talk about "human nature" when the received view was that there was no such thing. In my own work, I argued for a naturalistic approach in the social sciences. I took for granted that human cognitive dispositions were shaped by biological evolution and more specifically by Darwinian selection. While I did occasionally toy with evolutionary speculations, I failed to see at the time how they could play more than a quite marginal role in the study of human psychology and culture.

Luckily, in 1987, I was asked by Jacques Mehler, the founder and editor of Cognition, to review a very long article intriguingly entitled "The logic of social exchange: Has natural selection shaped how humans reason?" In most experimental psychology articles the theoretical sections are short and relatively shallow. Here, on the other hand, the young author, Leda Cosmides, was arguing in an altogether novel way for an ambitious theoretical claim. The forms of cooperation unique to and characteristic of humans could only have evolved, she maintained, if there had also been, at a psychological level, the evolution of a mental mechanism tailored to understand and manage social exchanges and in particular to detect cheaters. Moreover, this mechanism could be investigated by means of standard reasoning experiments.

This is not the place to go into the details of the theoretical argument — which I found and still find remarkably insightful — or of the experimental evidence — which I have criticized in detail with experiments of my own as inadequate. Whatever its shortcomings, this was an extraordinarily stimulating paper, and I strongly recommended acceptance of a revised version. The article was published in 1989 and the controversies it stirred have not yet abated.

Reading the work of Leda Cosmides and of John Tooby, her collaborator (and husband), meeting them shortly after, and initiating a conversation with them that has never ceased made me change my mind. I had known that we could reflect on the mental capacities of our ancestors on the basis of what we know of our minds; I now understood that we can also draw fundamental insights about our present minds through reflecting on the environmental problems and opportunities that have exerted selective pressure on our Paleolithic ancestors.

Ever since, I have tried to contribute to the development of evolutionary psychology, to the surprise and dismay of some of my more standard-social-science friends and also of some evolutionary psychologists who see me more as a heretic than a genuine convert. True, I have no taste or talent for orthodoxy. Moreover, I find much of the work done so far under the label "evolutionary psychology" rather disappointing. Evolutionary psychology will succeed to the extent that it causes cognitive psychologists to rethink central aspects of human cognition in an evolutionary perspective, to the extent, that is, that psychology in general becomes evolutionary.

The human species is exceptional in its massive investment in cognition, and in forms of cognitive activity — language, higher-order thinking, abstraction — that are as unique to humans as echolocation is to bats. Yet more than half of all work done in evolutionary psychology today is about mate choice, a mental activity found in a great many species. There is nothing intrinsically wrong in studying mate choice, of course, and some of the work done in this area is outstanding.

However, the promise of evolutionary psychology is first and foremost to help explain aspects of human psychology that are genuinely exceptional among earthly species and that in turn help explain the exceptional character of human culture and ecology. This is what has to be achieved to a much greater extent than has been the case so far if we want more skeptical cognitive and social scientists to change their minds too.

nick_bostrom's picture
Professor, Oxford University; Director, Future of Humanity Institute; Author, Superintelligence: Paths, Dangers, Strategies

For me, belief is not an all or nothing thing — believe or disbelieve, accept or reject.  Instead, I have degrees of belief, a subjective probability distribution over different possible ways the world could be.  This means that I am constantly changing my mind about all sorts of things, as I reflect or gain more evidence.  While I don't always think explicitly in terms of probabilities, I often do so when I give careful consideration to some matter.  And when I reflect on my own cognitive processes, I must acknowledge the graduated nature of my beliefs.

The commonest way in which I change my mind is by concentrating my credence function on a narrower set of possibilities than before.  This occurs every time I learn a new piece of information.  Since I started my life knowing virtually nothing, I have changed my mind about virtually everything.  For example, not knowing a friend's birthday, I assign a 1/365 chance (approximately) of it being the 11th of August.  After she tells me that the 11th of August is her birthday, I assign that date a probability of close to 100%.  (Never exactly 100%, for there is always a non-zero probability of miscommunication, deception, or other error.) 

It can also happen that I change my mind by smearing out my credence function over a wider set of possibilities.  I might forget the exact date of my friend's birthday but remember that it is sometime in the summer.  The forgetting changes my credence function, from being almost entirely concentrated on 11th of August to being spread out more or less evenly over all the summer months.  After this change of mind, I might assign a 1% probability to my friend's birthday being on the 11th of August.
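Both kinds of update can be sketched in a few lines, treating the credence function as a discrete distribution over days of the year. The numbers here are just the essay's own illustration, and the helper names are mine:

```python
def uniform(outcomes):
    """Credence function that spreads belief evenly over all outcomes."""
    p = 1 / len(outcomes)
    return {o: p for o in outcomes}

def condition(credence, possible):
    """Narrow the credence function to outcomes consistent with new evidence."""
    total = sum(p for o, p in credence.items() if o in possible)
    return {o: p / total for o, p in credence.items() if o in possible}

days = [f"day-{i}" for i in range(1, 366)]       # day-223 is 11 August
credence = uniform(days)                          # knowing nothing: ~1/365 each
print(credence["day-223"])

credence = condition(credence, {"day-223"})       # she tells me her birthday
print(credence["day-223"])                        # concentrated near 1

summer = {f"day-{i}" for i in range(152, 244)}    # roughly June through August
credence = condition(uniform(days), summer)       # forgetting smears it out
print(credence["day-223"])                        # ~1%, spread over the summer
```

The idealized `condition` step assigns exactly 1 to the reported date; as the essay notes, a real credence should stop just short of that, reserving some probability for miscommunication or error.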

My credence function can become more smeared out not only by forgetting but also by learning — learning that what I previously took to be strong evidence for some hypothesis is in fact weak or misleading evidence.  (This type of belief change can often be mathematically modeled as a narrowing rather than a broadening of credence function, but the technicalities of this are not relevant here.)

For example, over the years I have become moderately more uncertain about the benefits of medicine, nutritional supplements, and much conventional health wisdom.  This belief change has come about as a result of several factors.  One of the factors is that I have read some papers that cast doubt on the reliability of the standard methodological protocols used in medical studies and their reporting.  Another factor is my own experience of following up on MEDLINE some of the exciting medical findings reported in the media — almost always, the search of the source literature reveals a much more complicated picture with many studies showing a positive effect, many showing a negative effect, and many showing no effect.  A third factor is the arguments of a health economist friend of mine, who holds a dim view of the marginal benefits of medical care. 

Typically, my beliefs about big issues change in small steps.  Ideally, these steps should approximate a random walk, like the stock market.  It should be impossible for me to predict how my beliefs on some topic will change in the future.  If I believed that a year hence I will assign a higher probability to some hypothesis than I do today — why, in that case I could raise the probability right away.  Given knowledge of what I will believe in the future, I would defer to the beliefs of my future self, provided that I think my future self will be better informed than I am now and at least as rational. 

I have no crystal ball to show me what my future self will believe.  But I do have access to many other selves, who are better informed than I am on many topics.  I can defer to experts.  Provided they are unbiased and are giving me their honest opinion, I should perhaps always defer to people who have more information than I do — or to some weighted average of expert opinion if there is no consensus.  Of course, the proviso is a very big one: often I have reason to disbelieve that other people are unbiased or that they are giving me their honest opinion.  However, it is also possible that I am biased and self-deceiving.  An important unresolved question is how much epistemic weight a wannabe Bayesian thinker should give to the opinions of others.  I'm looking forward to changing my mind on that issue, hopefully by my credence function becoming concentrated on the correct answer.

thomas_metzinger's picture
Professor of Theoretical Philosophy, Johannes Gutenberg-Universität Mainz; Adjunct Fellow, Frankfurt Institute for Advanced Study; Author, The Ego Tunnel

I have become convinced that it would be of fundamental importance to know what a good state of consciousness is. Are there forms of subjective experience which — in a strictly normative sense — are better than others? Or worse? What states of consciousness should be illegal? What states of consciousness do we want to foster and cultivate and integrate into our societies? What states of consciousness can we force upon animals — for instance, in consciousness research itself? What states of consciousness do we want to show our children? And what state of consciousness do we eventually die in ourselves?

2007 has seen the rise of an important new discipline: "neuroethics". This is not simply a new branch of applied ethics for neuroscience — it raises deeper issues about selfhood, society and the image of man.  Neuroscience is now quickly being transformed into neurotechnology. I predict that parts of neurotechnology will turn into consciousness technology. In 2002, out-of-body experiences were, for the first time, induced with an electrode in the brain of an epileptic patient.  In 2007 we saw the first two studies, published in Science, demonstrating how the conscious self can be transposed, non-invasively and in healthy subjects, to a location outside of the experienced physical body. Cognitive enhancers are on the rise. The conscious experience of will has been experimentally constructed and manipulated in a number of ways. Acute episodes of depression can be caused by direct interventions in the brain, and they have also been successfully blocked in previously treatment-resistant patients. And so on.

Whenever we understand the specific neural dynamics underlying a specific form of conscious content, we can in principle delete, amplify or modulate this content in our minds. So shouldn’t we have a new ethics of consciousness — one that does not ask what a good action is, but that goes directly to the heart of the matter, asks what we want to do with all this new knowledge and what the moral value of states of subjective experience is?

Here is where I have changed my mind. There are no moral facts. Moral sentences have no truth-values. The world itself is silent; it just doesn’t speak to us in normative affairs — nothing in the physical universe tells us what makes an action a good action or a specific brain-state a desirable one. Sure, we all would like to know what a good neurophenomenological configuration really is, and how we should optimize our conscious minds in the future. But it looks like, in a more rigorous and serious sense, there is just no ethical knowledge to be had. We are alone. And if that is true, all we have to go by are the contingent moral intuitions evolution has hard-wired into our emotional self-model. If we choose to simply go by what feels good, then our future is easy to predict: It will be primitive hedonism and organized religion.

gino_segre's picture
Professor of Physics & Astronomy, University of Pennsylvania; Author, The Pope of Physics: Enrico Fermi and the Birth of the Atomic Age

The first topic you treat in freshman physics is showing how a ball shot straight up out of the mouth of a cannon will reach a maximum height and then fall back to Earth, unless its initial velocity, known as the escape velocity, is great enough that it breaks out of the Earth's gravitational field. Even in that case, its final velocity is always less than its initial one. Calculating escape velocity may not be very relevant for cannonballs, but it certainly is for rocket ships.
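The freshman calculation alluded to here fits in a few lines. Equating the launch kinetic energy with the gravitational potential energy to be overcome gives v_esc = sqrt(2GM/R); the constants below are standard values, and the function name is mine:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the Earth, kg
R_EARTH = 6.371e6    # radius of the Earth, m

def escape_velocity(mass_kg: float, radius_m: float) -> float:
    """Speed needed to escape a body's gravity from distance radius_m."""
    return math.sqrt(2 * G * mass_kg / radius_m)

v = escape_velocity(M_EARTH, R_EARTH)
print(f"Earth escape velocity: {v / 1000:.1f} km/s")  # about 11.2 km/s
```

The same formula, with the Universe's mean density in place of M and R, is essentially the "escape velocity of the Universe" calculation the next paragraph describes.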

The situation with the explosion we call the Big Bang is obviously more complicated, but really not that different, or so I thought. The standard picture said that there was an initial explosion, space began to expand and galaxies moved away from one another. The density of matter in the Universe determined whether the Big Bang would eventually be followed by a Big Crunch or whether the celestial objects would continue to move away from one another with ever-decreasing speed. In other words, one could calculate the Universe's escape velocity.  Admittedly the discovery of Dark Matter, an unknown quantity seemingly five times as abundant as known matter, seriously altered the framework, but not in a fundamental way, since Dark Matter was after all still matter, even if its identity is unknown.

This picture changed in 1998 with the announcement by two teams, working independently, that the Universe's expansion was accelerating, not decelerating. It was as if freshman physics' cannonball miraculously moved faster and faster as it left the Earth. There was no possibility of a Big Crunch, in which the Universe would collapse back on itself. The groups' analyses, based on observations of distant supernovae of known luminosity (type 1a), were solid. Science magazine dubbed it 1998's Discovery of the Year.

The cause of this apparent gravitational repulsion is not known. Called Dark Energy to distinguish it from Dark Matter, it appears to be the dominant force in the Universe's expansion, roughly three times as abundant as its Dark Matter counterpart. The prime candidate for its identity is the so-called Cosmological Constant, a term first introduced into the cosmic gravitation equations by Einstein to neutralize expansion, but done away with by him when Hubble reported that the Universe was in fact expanding.

Finding a theory that will successfully calculate the magnitude of this cosmological constant, assuming this is the cause of the accelerating expansion, is perhaps the outstanding problem in the conjoined areas of cosmology and elementary particle physics. Despite many attempts, success does not seem to be in sight. If the cosmological constant is not the answer, an alternate explanation of the Dark Energy would be equally exciting. 

Furthermore the apparent present equality, to within a factor of three, of matter density and the cosmological constant has raised a series of important questions. Since matter density decreases rapidly as the Universe expands (matter per volume decreases as volume increases) and the cosmological constant does not, we seem to be living in that privileged moment of the Universe's history when the two factors are roughly equal. Is this simply an accident? Will the distant future really be one in which, with Dark Energy increasingly important, celestial objects have moved so far apart so quickly as to fade from sight? 

The discovery of Dark Energy has radically changed our view of the Universe. Future keenly awaited findings, such as the identities of Dark Matter and Dark Energy, will do so again.

Darwin is the man, and like so many biologists, I have benefited from his prescient insights, handed to us 150 years ago. The logic of adaptation has been a guiding engine of my research and my view of life. In fact, it has been difficult to view the world through any other filter. I can still recall with great vividness the day I arrived in Cambridge, in June 1992, a few months before starting my job as an assistant professor at Harvard. I was standing on a street corner, waiting for a bus to arrive, and noticed a group of pigeons on the sidewalk. There were several males displaying, head bobbing and cooing, attempting to seduce the females. The females, however, were not paying attention. They were all turned, in Prussian soldier formation, out toward the street, looking at the middle of the intersection where traffic was whizzing by. There, in the intersection, was one male pigeon, displaying his heart out. Was this guy insane? Hadn’t he read the handbook of natural selection? Dude, it’s about survival. Get out of the street!!!

Further reflection provided the solution to the puzzle of this apparently mutant male pigeon. The logic of adaptation requires us to ask about the costs and benefits of behavior, trying to understand what the fitness payoffs might be. Even for behaviors that appear absurdly deleterious, there is often a benefit lurking. In the case of our apparently suicidal male pigeon, there was a benefit, and it was lurking in the females’ voyeurism, their rubbernecking. The females were oriented toward this male, as opposed to the conservative guys on the sidewalk, because he was playing with danger, showing off, proving that even in the face of heavy traffic, he could fly like a butterfly and sting like a bee, jabbing and jiving like the great Muhammad Ali.

The theory comes from the evolutionary biologist Amotz Zahavi, who proposed that even costly behaviors that challenge survival can evolve if they have payoffs to genetic fitness; these payoffs arrive in the currency of more matings, and ultimately, more babies. Our male pigeon was showing off his handicap. He was advertising to the females that even in the face of potential costs from Hummers and Beamers and Buses, he was still walking the walk and talking the talk. The females were hooked, mesmerized by this extraordinarily macho male. Handicaps evolve because they are honest indicators of fitness. And Zahavi’s theory represents the intellectual descendant of Darwin’s original proposal.

I must admit, however, that in recent years, I have made less use of Darwin’s adaptive logic. It is not because I think that the adaptive program has failed, or that it can’t continue to account for a wide variety of human and animal behavior. But with respect to questions of human and animal mind, and especially some of the unique products of the human mind — language, morality, music, mathematics — I have, well, changed my mind about the power of Darwinian reasoning.

Let me be clear about the claim here. I am not rejecting Darwin’s emphasis on comparative approaches, that is, the use of phylogenetic or historical data. I still practice this approach, contrasting the abilities of humans and animals in the service of understanding what is uniquely human and what is shared. And I still think our cognitive prowess evolved, and that the human brain and mind can be studied in some of the same ways that we study other bits of anatomy and behavior. But where I have lost the faith, so to speak, is in the power of the adaptive program to explain or predict particular design features of human thought.

Although it is certainly reasonable to say that language, morality and music have design features that are adaptive, that would enhance reproduction and survival, evidence for such claims is sorely missing. Further, for those who wish to argue that the evidence comes from the complexity of the behavior itself, and the absurdly low odds of constructing such complexity by chance, these arguments just don’t cut it with respect to explaining or predicting the intricacies of language, morality, music or many other domains of knowledge.

In fact, I would say that although Darwin’s theory has been around, readily available for the taking, for 150 years, it has not advanced the fields of linguistics, ethics, or mathematics. This is not to say that it can’t advance these fields. But unlike the areas of economic decision making, mate choice, and social relationships, where the adaptive program has fundamentally transformed our understanding, the same cannot be said for linguistics, ethics, and mathematics. What has transformed these disciplines is our growing understanding of mechanism, that is, how the mind represents the world, how physiological processes generate these representations, and how the child grows these systems of knowledge.

Bidding Darwin adieu is not easy. My old friend has served me well. And perhaps one day he will again. Until then, farewell.

arnold_trehub's picture
Psychologist, University of Massachusetts, Amherst; Author, The Cognitive Brain

I have never questioned the conventional view that a good grounding in the physical sciences is needed for a deep understanding of the biological sciences. It did not occur to me that the opposite view might also be true. If someone were to have asked me if biological knowledge might significantly influence my understanding of our basic physical sciences, I would have denied it.

Now I am convinced that the future understanding of our most important physical principles will be profoundly shaped by what we learn in the living realm of biology. What have changed my mind are the relatively recent developments in the theoretical constructs and empirical findings in the sciences of the brain — the biological foundation of all thought. Progress here can cast new light on the fundamental subjective factors that constrain our scientific formulations in what we take to be an objective enterprise.

robert_provine's picture
Professor Emeritus, University of Maryland, Baltimore County; Author, Curious Behavior: Yawning, Laughing, Hiccupping, and Beyond

Mentors, paper referees and grant reviewers have warned me on occasion about scientific "fishing expeditions," the conduct of empirical research that does not test a specific hypothesis or is not guided by theory. Such "blind empiricism" was said to be unscientific, to waste time and produce useless data. Although I have never been completely convinced of the hazards of fishing, I now reject them outright, with a few reservations.

I'm not advocating the collection of random facts, but the use of broad-based descriptive studies to learn what to study and how to study it. Those who fish learn where the fish are, their species, number and habits. Without the guidance of preliminary descriptive studies, hypothesis testing can be inefficient and misguided. Hypothesis testing is a powerful means of rejecting error — of trimming the dead limbs from the scientific tree — but it does not generate hypotheses or signify which are worthy of test. I'll provide two examples from my experience.

In graduate school, I became intrigued with neuroembryology and wanted to introduce it to developmental psychology, a discipline that essentially starts at birth. My dissertation was a fishing expedition that described embryonic behavior and its neurophysiological mechanism. I was exploring uncharted waters and sought advice by observing the ultimate expert, the embryo. In this and related work, I discovered that prenatal movement is the product of seizure-like discharges in the spinal cord (not the brain), that the spinal discharges occurred spontaneously (not as a response to sensory stimuli), and that the function of movement was to sculpt joints (not to shape postnatal behavior such as walking) and to regulate the number of motor neurons. Remarkable!

But decades later, this and similar work is largely unknown to developmental psychologists who have no category for it. The traditional psychological specialties of perception, learning, memory, motivation and the like, are not relevant during most of the prenatal period. The finding that embryos are profoundly unpsychological beings guided by unique developmental priorities and processes is not appreciated by theory-driven developmental psychologists. When the fishing expedition indicates that there is no appropriate spot in the scientific filing cabinet, it may be time to add another drawer. 

Years later and unrepentant, I embarked on a new fishing expedition, this time in pursuit of the human universal of laughter — what it is, when we do it, and what it means. In the spirit of my embryonic research, I wanted the expert to define my agenda—a laughing person. Explorations about research funding with administrators at a federal agency were unpromising. One linguist patiently explained that my project "had no obvious implications for any of the major theoretical issues in linguistics."  Another, a speech scientist, noted that "laughter isn't speech, and therefore had no relevance to my agency's mission." 

Ultimately, this atheoretical and largely descriptive work provided many surprises and counterintuitive findings. For example, laughter, like crying, is not consciously controlled, contrary to literature suggesting that we speak ha-ha as we would choose a word in speech. Most laughter is not a response to humor. Laughter and speech are controlled by different brain mechanisms, with speech dominating laughter. Contagious laughter is the product of neurologically programmed social behavior. Contrasts between chimpanzee and human laughter reveal why chimpanzees can't talk (inadequate breath control), and the evolutionary event necessary for the selection for human speech (bipedality).

Whether embryonic behavior or laughter, fishing expeditions guided me down the appropriate empirical path, provided unanticipated insights, and prevented flights of theoretical fancy. Contrary to lifelong advice, when planning a new research project, I always start by going fishing.

mark_pagel's picture
Professor of Evolutionary Biology, Reading University, UK; Fellow, Royal Society; Author, Wired for Culture

The last thirty to forty years of social science has brought an overbearing censorship to the way we are allowed to think and talk about the diversity of people on Earth. People of Siberian descent, New Guinean Highlanders, those from the Indian sub-continent, Caucasians, Australian aborigines, Polynesians, Africans — we are, officially, all the same: there are no races. 

Flawed as the old ideas about race are, modern genomic studies reveal a surprising, compelling and different picture of human genetic diversity.  We are on average about 99.5% similar to each other genetically. This is a new figure, down from the previous estimate of 99.9%. To put what may seem like minuscule differences in perspective, we are somewhere around 98.5% similar, maybe more, to chimpanzees, our nearest evolutionary relatives.

The new figure for us, then, is significant. It derives, among other things, from many small genetic differences that have emerged from studies comparing human populations. Some confer the ability among adults to digest milk, others to withstand equatorial sun; others yet confer differences in body shape or size, resistance to particular diseases, tolerance to hot or cold, how many offspring a female might eventually produce, and even the production of endorphins — those internal opiate-like compounds. We also differ by surprising amounts in the numbers of copies of some genes we have.

Modern humans spread out of Africa only within the last 60-70,000 years, little more than the blink of an eye when stacked against the 6 million or so years that separate us from our Great Ape ancestors. The genetic differences amongst us reveal a species with a propensity to form small and relatively isolated groups on which natural selection has often acted strongly to promote genetic adaptations to particular environments. 

We differ genetically more than we thought, but we should have expected this: how else but through isolation can we explain a single species that speaks at least 7,000 mutually unintelligible languages around the world?

What this all means is that, like it or not, there may be many genetic differences among human populations — including differences that may even correspond to old categories of 'race' — that are real differences in the sense of making one group better than another at responding to some particular environmental problem. This in no way says one group is in general 'superior' to another, or that one group should be preferred over another.  But it warns us that we must be prepared to discuss genetic differences among human populations.

todd_e_feinberg's picture
M.D.; Associate Professor of Neurology and Psychiatry, Albert Einstein College of Medicine; Chief, Yarmon Neurobehavior and Alzheimer's Disease Center, Beth Israel Medical Center, New York City

For most of my life I viewed any notion of the "soul" as a fanciful religious invention. I agreed with the view of the late Nobel Laureate Francis Crick, who in his book The Astonishing Hypothesis claimed "A modern neurobiologist sees no need for the religious concept of a soul to explain the behavior of humans and other animals." But is the idea of a soul really so crazy and beyond the limits of scientific reason?

From the standpoint of neuroscience, it is easy to make the claim that Descartes is simply wrong about the separateness of brain and mind. The plain fact is that there is no scientific evidence that a self, an individual mind, or a soul could exist without a physical brain. However, there are persisting reasons why the self and the mind do not appear to be identical with, or entirely reducible to, the brain.

Consider the claims of the Massachusetts physician Dr. Duncan MacDougall, who estimated through his experiments on dying humans that approximately 21 grams of matter — the presumed weight of the human soul — was lost upon death (The New York Times, "Soul Has Weight, Physician Thinks," March 11, 1907). In spite of such claims, the mind, unlike the brain, cannot be objectively observed, but only subjectively experienced. The subject that represents the "I" in the statement "I think therefore I am" cannot be directly observed, weighed, or measured. And the experiences of that self, its pains and pleasures, sights and sounds, possess an objective reality only to the one who experiences them. In other words, as the philosopher John Searle puts it, the mind is "irreducibly first-person."

On the other hand, although there are many perplexing properties about the brain, mind, and the self that remain to be scientifically explained — subjectivity among them — this does not mean that there must be an immaterial entity at work that explains these mysterious features. Nonetheless, I have come to believe that an individual consciousness represents an entity that is so personal and ontologically unique that it qualifies as something that we might as well call "a soul."

I am not suggesting that anything like a soul survives the death of the brain. Indeed, the link between the life of the brain and the life of the mind is irreducible, the one completely dependent upon the other. The danger of capturing the beauty and mystery of a personal consciousness and identity with the somewhat metaphorical designation "soul" is the tendency for the grandiose metaphor to obscure the actual accomplishments of the brain. The soul is not a "thing" independent of the living brain; it is part and parcel of it, its most remarkable feature, but nonetheless inextricably bound to its life and death.

kenneth_w_ford's picture
Retired Director of the American Institute of Physics

I used to believe that the ethos of science, the very nature of science, guaranteed the ethical behavior of its practitioners. As a student and a young researcher, I could not conceive of cheating, claiming credit for the work of others, or fabricating data. Among my mentors and my colleagues, I saw no evidence that anyone else believed otherwise. And I didn't know enough of the history of my own subject to be aware of ethical lapses by earlier scientists. There was, I sensed, a wonderful purity to science. Looking back, I have to count naiveté as among my virtues as a scientist.

Now I have changed my mind, and I have changed it because of evidence, which is what we scientists are supposed to do. Various examples of cheating, some of them quite serious, have come to light in the last few decades, and misbehaviors in earlier times have been reported as well. Scientists are, as the saying goes, "only human," which, in my opinion, is neither an excuse nor an adequate explanation. Unfortunately, scientists are now subjected to greater competitive pressures, financial and otherwise, than was typical when I was starting out. Some — a few — succumb.

We do need to teach ethics as essential to the conduct of science, and we need to teach the simple lesson that in science crime doesn't pay. But above all, we need to demonstrate by example that the highest ethical standards should, and often do, come naturally.

paul_j_steinhardt's picture
Albert Einstein Professor in Science, Departments of Physics and Astrophysical Sciences, Princeton University; Coauthor, Endless Universe

Asked what explains the smoothness, flatness and structure of the universe, most cosmologists would say the answer is "inflation," and, until recently, I would have been among them. But "facts have changed my mind" — and I now feel compelled to seek a new explanation that may or may not incorporate inflation.

The idea always seemed incredibly simple. Inflation is a period of rapid accelerated expansion that can transform the chaos emerging from the big bang into the smooth, flat homogeneity observed by astronomers. If one likens the conditions following the bang to a wrinkled and twisted sheet of perfectly elastic rubber, then inflation corresponds to stretching the sheet at faster-than-light speeds until no vestige of its initial state remains. The "inflationary energy" driving the accelerated expansion then decays into the matter and radiation seen today, and the stretching slows to a modest pace that allows the matter to condense into atoms, molecules, dust, planets, stars and galaxies.

I would describe this version as the "classical view" of inflation in two senses. First, this is the historic picture of inflation first introduced and now appearing in most popular descriptions. Second, this picture is founded on the laws of classical physics, assuming quantum physics plays a minor role. Unfortunately, this classical view is dead wrong. Quantum physics turns out to play an absolutely dominant role in shaping the inflationary universe. In fact, inflation amplifies the randomness inherent in quantum physics to produce a universe that is random and unpredictable.

This realization has come slowly. Ironically, the role of quantum physics was believed to be a boon to the inflationary paradigm when it was first considered twenty-five years ago by several theorists, including myself. The classical picture of inflation could not be strictly true, we recognized, or else the universe would be so smooth after inflation that galaxies and other large-scale structures would never form. However, inflation ends through the quantum decay of inflationary energy into matter and radiation. The quantum decay is analogous to the decay of radioactive uranium, in which there is some mean rate of decay but inherent unpredictability as to when any particular uranium nucleus will decay. Long after most uranium nuclei have decayed, there remain some nuclei that have yet to fission.

Similarly, inflationary energy decays at slightly different times in different places, leading to spatial variations in the temperature and matter density after inflation ends. The "average" statistical pattern appears to agree beautifully with the pattern of microwave background radiation emanating from the earliest stages of the universe and to produce just the pattern of non-uniformities needed to explain the evolution and distribution of galaxies. The agreement between theoretical calculation and observations is a celebrated triumph of the inflationary picture. 

But is this really a triumph? Only if the classical view were correct. In the quantum view, it makes no sense to talk about an "average" pattern. The problem is that, as in the case of uranium nuclei, there always remain some regions of space in which the inflationary energy has not yet decayed into matter and radiation at all. Although one might have guessed the undecayed regions are rare, they expand so much faster than those that have decayed that they soon overtake the volume of the universe. The patches where inflationary energy has decayed and galaxies and stars have evolved become the oddity — rare pockets surrounded by space that continues to inflate away.
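The volume argument above can be made concrete with a toy bookkeeping exercise. The growth and decay rates here are invented for illustration; the point is only that if undecayed regions expand fast enough, they dominate the total volume no matter how likely decay is per time step:

```python
# Toy model of eternal inflation (illustrative rates, not real cosmology).
# Each step, a fraction DECAY of the inflating volume converts into
# matter-filled "pockets"; the rest keeps inflating, expanding by GROWTH.
GROWTH = 20.0   # volume expansion factor per step for inflating regions (assumed)
DECAY = 0.5     # fraction of inflating volume that decays each step (assumed)

inflating = 1.0
pockets = 0.0
for _ in range(30):
    decayed_now = inflating * DECAY
    pockets += decayed_now                       # decayed regions stop inflating
    inflating = (inflating - decayed_now) * GROWTH

# Even with a 50% decay chance per step, the still-inflating volume
# vastly exceeds the accumulated pocket volume.
print(inflating > pockets)  # → True
```

With these numbers the inflating volume grows tenfold each step, while the pockets, which no longer inflate, fall ever further behind — the "rare pockets surrounded by space that continues to inflate away."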

The process repeats itself over and over, with the number of pockets and the volume of surrounding space increasing from moment to moment. Due to random quantum fluctuations, pockets with all kinds of properties are produced — some flat, but some curved; some with variations in temperature and density like what we observe, but some not; some with forces and physical laws like those we experience, but some with different laws. The alarming result is that there are an infinite number of pockets of each type and, despite over a decade of attempts to avoid the situation, no mathematical way of deciding which is more probable has been shown to exist.

Curiously, this unpredictable "quantum view" of inflation has not yet found its way into the consciousness of many astronomers working in the field, let alone the greater scientific community or the public at large.

One often reads that recent measurements of the cosmic microwave background or the large-scale structure of the universe have verified a prediction of inflation. This invariably refers to a prediction based on the naïve classical view. But had the measurements come out differently, they could not have ruled out inflation: according to the quantum view, there are invariably pockets with matching properties.

And what of the theorists who have been developing the inflationary theory for the last twenty-five years? Some, like me, have been in denial, harboring the hope that a way can be found to tame the quantum effects and restore the classical view. Others have embraced the idea that cosmology may be inherently unpredictable, although this group is also vociferous in pointing out how observations agree with the (classical) predictions of inflation.

Speaking for myself, it may have taken me longer than it should have to accept inflation's quantum nature, but, now that facts have changed my mind, I cannot go back again. Inflation does not explain the structure of the universe. Perhaps some enhancement can explain why the classical view works so well, but then it will be that enhancement, rather than inflation itself, that explains the structure of the universe. Or maybe the answer lies beyond the big bang. Some of us are considering the possibility that the evolution of the universe is cyclic and that the structure was set by events that occurred before the big bang. One of the draws of this picture is that quantum physics does not play the same dominant role, so there is no escaping the cyclic picture's predictions of the uniformity, flatness and structure of the universe.

marcel_kinsbourne's picture
Neurologist and Cognitive Neuroscientist, The New School; Co-author, Children's Learning and Attention Problems

When "mirror neurons" (neurons that fire both when a specific action is perceived and when it is intended) were first reported, I was impressed by the research but skeptical about its significance. Specifically, I doubted, and continue to doubt, that these circuits are specific adaptations for purposes of various higher mental functions. I saw mirror neurons as simple units in circuits that represent specific actions, oblivious as to whether they had been viewed when performed by someone else, or represented as the goal of one's own intended action (so-called reafference copy). Why have two separate representations of the same thing when one will do? Activity elsewhere in the brain represents who the agent is, self or another. I still think that this is the most economical interpretation. But from a broader perspective I have come to realize that mirror neurons are not only less than meets the eye but also more. Instead of being a specific specialization, they play their role as part of a fundamental design characteristic of the brain; that is, when percepts are activated, relevant intentions, memories and feelings automatically fall into place.

External events are "represented" by the patterns of neuronal activity that they engender in sensory cortex. These representations also incorporate the actions that the percepts potentially afford. This "enactive coding" or "common coding" of input implies a propensity in the observer's brain to imitate the actions of others (consciously or unconsciously). This propensity need not result in overt imitation; prefrontal cortex is thought to hold these impulses to imitate in check. Nonetheless, the fact that these action circuits have been activated lowers their threshold by subtle increments as the experience in question is repeated over and over again, and the relative loading of synaptic weights in brain circuitry becomes correspondingly adjusted. Mirror neurons exemplify this type of functioning, which extends far beyond individual circuits to all cell assemblies that can form representations.

That an individual is likely to act in the same ways that others act is seen in the documented benefit for sports training of watching experts perform. "Emotional contagion" occurs when someone witnesses the emotional expressions of another person and therefore experiences that mood state oneself. People's viewpoints can subtly and unconsciously converge when their patterns of neural activation match, in the total absence of argument or attempts at persuasion. When people entrain with each other in gatherings, crowds, assemblies and mobs, diverse individual views reduce into a unified group viewpoint. An extreme example of gradual convergence might be the "Stockholm Syndrome"; captives gradually adopt the worldview of their captors. In general, interacting with others makes one converge to their point of view (and vice versa). Much ink has been spilled on the topic of the lamentable limitations of human rationality. Here is one reason why.

People's views are surreptitiously shaped by their experiences, and rationality comes limping after, downgraded to rationalization. Once opinions are established, they engender corresponding anticipations. People actively seek those experiences that corroborate their own self-serving expectations. This may be why as we grow older, we become ever more like ourselves. Insights become consolidated and biases reinforced when one only pays attention to confirming evidence. Diverse mutually contradictory "firm convictions" are the result. Science does take account of the negative instance as well as the positive instance. It therefore has the potential to help us understand ourselves, and each other.

If I am correct in my changed views as to what mirror neurons stand for and how representation routinely merges perception, action, memory and affect into dynamic reciprocal interaction, these views would have a bearing on currently disputed issues. Whether an effect is due to the brain or the environment would be moot if environmental causes indeed become brain causes, as the impressionable brain resonates with changing circumstances. What we experience contributes mightily to what we are and what we become. An act of kindness has consequences for the beneficiary far beyond the immediate benefit. Acts of violence inculcate violence and contaminate the minds of those who stand by and watch. Not only our private experiences, but also the experiences that are imposed on us by the media, transform our predispositions, whether we want them to or not. The implications for child rearing are obvious, but the same implications apply beyond childhood to the end of personal time.

What people experience indeed changes their brain, for better and for worse. In turn, the changed brain changes what is experienced. Regardless of its apparent stability over time, the brain is in constant flux, and constantly remodels. Heraclitus was right: "You shall not go down twice to the same river". The river will not be the same, but for that matter, neither will you. We are never the same person twice. The past is etched into the neural network, biasing what the brain is and does in the present. William Faulkner recognized this: "The past is never dead. In fact, it's not even past".

The question presupposes a well-defined "you," and an implied ability, under "your" control, to change your "mind." The "you," I now believe, is distributed among others (family, friends, hierarchical structures); suicide bombers, for instance, believe their sacrifice is for the other parts of their "you." The question also carries with it an assumption of intention that I believe is out of one's control. My mind changed as a result of its interaction with its environment. Why? Because it is a part of it.

charles_seife's picture
Professor of Journalism, New York University; Former Journalist, Science Magazine; Author, Hawking Hawking

I used to think that a modern, democratic society had to be a scientific society. After all, the scientific revolution and the American Revolution were forged in the same flames of the enlightenment. Naturally, I thought, a society that embraces the freedom of thought and expression of a democracy would also embrace science.

However, when I first started reporting on science, I quickly realized that science didn't spring up naturally in the fertile soil of the young American democracy. Americans were extraordinary innovators — wonderful tinkerers and engineers — but you can count the great 19th century American physicists on one hand and have two fingers left over. The United States owes its scientific tradition to aristocratic Europe's universities (and to its refugees), not to any native drive.

In fact, science clashes with the democratic ideal. Though it is meritocratic, it is practiced in the elite and effete world of academe, leaving the vast majority of citizens unable to contribute to it in any meaningful way. Science is about freedom of thought, yet at the same time it imposes a tyranny of ideas.

In a democracy, ideas are protected. It's the sacred right of a citizen to hold — and to disseminate — beliefs that the majority disagrees with, ideas that are abhorrent, ideas that are wrong. However, scientists are not free to be completely open-minded; a scientist stops being a scientist if he clings to discredited notions. The basic scientific urge to falsify, to disprove, to discredit ideas clashes with the democratic drive to tolerate and protect them.

This is why even those politicians who accept evolution will never attack those politicians who don't; at least publicly, they cast evolutionary theory as a mere personal belief. Attempting to squelch creationism smacks of elitism and intolerance — it would be political suicide. Yet this is exactly what biologists are compelled to do; they exorcise falsehoods and drive them from the realm of public discourse.

We've been lucky that the transplant of science has flourished so beautifully on American soil. But I no longer take it for granted that this will continue; our democratic tendencies might get the best of us in the end.

kevin_kelly's picture
Senior Maverick, Wired; Author, What Technology Wants and The Inevitable

Much of what I believed about human nature, and the nature of knowledge, has been upended by the Wikipedia. I knew that the human propensity for mischief among the young and bored — of which there were many online — would make an encyclopedia editable by anyone an impossibility. I also knew that even among the responsible contributors, the temptation to exaggerate and misremember what we think we know was inescapable, adding to the impossibility of a reliable text. I knew from my own 20-year experience online that you could not rely on what you read in a random posting, and believed that an aggregation of random contributions would be a total mess. Even unedited web pages created by experts failed to impress me, so an entire encyclopedia written by unedited amateurs, not to mention ignoramuses, seemed destined to be junk.

Everything I knew about the structure of information convinced me that knowledge would not spontaneously emerge from data, without a lot of energy and intelligence deliberately directed to transforming it. All the attempts at headless collective writing I had been involved with in the past only generated forgettable trash. Why would anything online be any different?

So when the first incarnation of the Wikipedia launched in 2000 (then called Nupedia) I gave it a look, and was not surprised that it never took off. There was a laborious process of top-down editing and re-writing that discouraged a would-be random contributor. When the back-office wiki created to facilitate the administration of the Nupedia text became the main event and anyone could edit as well as post an article, I expected even less from the effort, now re-named Wikipedia.

How wrong I was. The success of the Wikipedia keeps surpassing my expectations. Despite the flaws of human nature, it keeps getting better. Both the weaknesses and virtues of individuals are transformed into common wealth, with a minimum of rules and elites. It turns out that with the right tools it is easier to restore damaged text (the revert function on Wikipedia) than to create damage (vandalism) in the first place, and so the good-enough article prospers and continues. With the right tools, it turns out, a collaborative community can outpace the same number of ambitious individuals competing.
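The asymmetry between reverting and vandalizing can be sketched as a toy revision history. This is a hypothetical illustration of the idea, not MediaWiki's actual implementation:

```python
# Toy wiki page: every revision is kept, so undoing damage is one step.
class WikiPage:
    def __init__(self, text):
        self.history = [text]          # full revision history

    def edit(self, new_text):
        self.history.append(new_text)  # any edit, good or bad, is recorded

    def revert(self):
        # Restoring the previous text is a single operation,
        # however extensive the damage was.
        if len(self.history) > 1:
            self.history.pop()

    @property
    def text(self):
        return self.history[-1]        # current revision

page = WikiPage("Good article.")
page.edit("VANDALIZED!!!")   # damaging the page takes effort to compose...
page.revert()                # ...but repairing it takes one step
print(page.text)  # → Good article.
```

Because destruction costs at least as much effort as a single restore, patient defenders of an article hold a structural advantage over vandals.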

It has always been clear that collectives amplify power — that is what cities and civilizations are — but what's been the big surprise for me is how minimal the tools and oversight needed are. The bureaucracy of Wikipedia is so small as to be practically invisible. It's the wiki's embedded, code-based governance, rather than manager-based governance, that is the real news. Yet the greatest surprise brought by the Wikipedia is that we still don't know how far this power can go. We haven't seen the limits of wiki-ized intelligence. Can it make textbooks, music and movies? What about law and political governance?

Before we say, "Impossible!" I say, let's see. I know all the reasons why law can never be written by know-nothing amateurs. But having already changed my mind once on this, I am slow to jump to conclusions again. The Wikipedia is impossible, but here it is. It is one of those things impossible in theory, but possible in practice. Once you confront the fact that it works, you have to shift your expectation of what else that is impossible in theory might work in practice.

I am not the only one who has had his mind changed about this. The reality of a working Wikipedia has made a type of communitarian socialism not only thinkable, but desirable. Along with other tools such as open-source software and open-source everything, this communitarian bias runs deep in the online world.

In other words it runs deep in this young next generation. It may take several decades for this shifting world perspective to show its full colors.  When you grow up knowing rather than admitting that such a thing as the Wikipedia works; when it is obvious to you that open source software is better; when you are certain that sharing your photos and other data yields more than safeguarding them — then these assumptions will become a platform for a yet more radical embrace of the commonwealth. I hate to say it but there is a new type of communism or socialism loose in the world, although neither of these outdated and tinged terms can accurately capture what is new about it.

The Wikipedia has changed the mind of this fairly steady individualist and led me toward this new social sphere. I am now much more interested in both the new power of the collective, and the new obligations of individuals toward the collective. In addition to expanding civil rights, I want to expand civil duties. I am convinced that the full impact of the Wikipedia is still subterranean, and that its mind-changing power is working subconsciously on the global millennial generation, providing them with an existence proof of a beneficial hive mind, and an appreciation for believing in the impossible.

That's what it's done for me.

keith_devlin's picture
Mathematician; Executive Director, H-STAR Institute, Stanford; Author, Finding Fibonacci

What is the nature of mathematics? Becoming a mathematician in the 1960s, I swallowed hook, line, and sinker the Platonistic philosophy dominant at the time, that the objects of mathematics (the numbers, the geometric figures, the topological spaces, and so forth) had a form of existence in some abstract ("Platonic") realm. Their existence was independent of our existence as living, cognitive creatures, and searching for new mathematical knowledge was a process of explorative discovery not unlike geographic exploration or sending out probes to distant planets.

I now see mathematics as something entirely different: the creation of the (collective) human mind. As such, mathematics says as much about us as it does about the external universe we inhabit. Mathematical facts are not eternal truths about the external universe, which held before we entered the picture and will endure long after we are gone. Rather, they are based on, and reflect, our interactions with that external environment.

This is not to say that mathematics is something we have freedom to invent. It's not like literature or music, where there are constraints on the form but writers and musicians exercise great creative freedom within those constraints. From the perspective of the individual human mathematician, mathematics is indeed a process of discovery. But what is being discovered is a product of the human (species)-environment interaction.

This view raises the fascinating possibility that other cognitive creatures in another part of the universe might have different mathematics. Of course, as a human, I cannot begin to imagine what that might mean. It would classify as "mathematics" only insofar as it amounted to that species analyzing the abstract structures that arose from their interactions with their environment.

This shift in philosophy has influenced the way I teach, in that I now stress social aspects of mathematics. But when I'm giving a specific lecture on, say, calculus or topology, my approach is entirely platonistic. We do our mathematics using a physical brain that evolved over hundreds of thousands of years by a process of natural selection to handle the physical and more recently the social environments in which our ancestors found themselves. As a result, the only way for the brain to actually do mathematics is to approach it "platonistically," treating mathematical abstractions as physical objects that exist.

A platonistic standpoint is essential to doing mathematics, just as Cartesian dualism is virtually impossible to dispense with in doing science or just plain communicating with one another ("one another"?). But ultimately, our mathematics is just that: our mathematics, not the universe's.

rodney_a_brooks's picture
Panasonic Professor of Robotics (emeritus); Former Director, MIT Computer Science and Artificial Intelligence Lab (1997-2007); Founder, CTO, Robust.AI; Author, Flesh and Machines

Our science, including mine, treats living systems as mechanisms at multiple levels of abstraction.  As we talk about how one bio-molecule docks with another our explanations are purely mechanistic and our science never invokes "and then the soul intercedes and gets them to link up". The underlying assumption of molecular biologists is that their level of mechanistic explanation is ultimately adequate for high level mechanistic descriptions such as physiology and neuroscience to build on as a foundation.

Those of us who are computer scientists by training, and I'm afraid many collaterally damaged scientists of other stripes, tend to use computation as the mechanistic level of explanation for how living systems behave and "think". I originally gleefully embraced the computational metaphor.

If we look back over recent centuries we will see the brain described as a hydrodynamic machine, as clockwork, and as a steam engine. When I was a child in the 1950s I read that the human brain was a telephone switching network. Later it became a digital computer, and then a massively parallel digital computer. A few years ago someone put up their hand after a talk I had given at the University of Utah and asked a question I had been expecting for a couple of years: "Isn't the human brain just like the world wide web?" The brain always seems to be one of the most advanced technologies that we humans currently have.

The metaphors we have used in the past for the brain have not stood the test of time.  I doubt that our current metaphor of the brain as a network of computers doing computations is going to stand for all eternity either.

Note that I do not doubt that there are mechanistic explanations for how we think, and I certainly proceed with my work of trying to build intelligent robots using computation as a primary tool for expressing mechanisms within those robots.

But I have relatively recently come to question computation as the ultimate metaphor to be used in both the understanding of living systems and as the only important design tool for engineering intelligent artifacts.

Some of my colleagues have managed to recast Pluto's orbital behavior as the body itself carrying out computations on the forces that apply to it. I think we are perhaps better off using Newtonian mechanics (with a little Einstein thrown in) to understand and predict the orbits of planets and other bodies. It is so much simpler.

Likewise we can think about spike trains as codes and worry about neural coding.  We can think about human memory as data storage and retrieval.  And we can think about walking over rough terrain as computing the optimal place to put down each of our feet.  But I suspect that somewhere down the line we are going to come up with better, less computational metaphors.  The entities we use for metaphors may be more complex but the useful ones will lead to simpler explanations.

Just as the notion of computation is only a short step beyond discrete mathematics, but opens up vast new territories of questions and technologies, these new metaphors might well be just a few steps beyond where we are now in understanding organizational dynamics, but they may have rich and far reaching implications in our abilities to understand the natural world and to engineer new creations.

david_g_myers's picture
Professor of Psychology, Hope College; Co-author, Psychology, 11th Edition

Reading and reporting on psychological science has changed my mind many times, leading me now to believe that 

• newborns are not the blank slates I once presumed,
• electroconvulsive therapy often alleviates intractable depression,
• economic growth has not improved our morale,
• the automatic unconscious mind dwarfs the controlled conscious mind,
• traumatic experiences rarely get repressed,
• personality is unrelated to birth order, 
• most folks have high self-esteem (which sometimes causes problems), 
• opposites do not attract,
• sexual orientation is a natural, enduring disposition (most clearly so for men), not a choice.

In this era of science-religion conflict, such revelations underscore our need for what science and religion jointly mandate: humility. Humility, I remind my student audience, is fundamental to the empirical spirit advocated long ago by Moses: "If a prophet speaks in the name of the Lord and what he says does not come true, then it is not the Lord's message." Ergo, if our or anyone's ideas survive being put to the test, so much the better for them. If they crash against a wall of evidence, it is time to rethink.

robert_trivers's picture
Evolutionary Biologist; Professor of Anthropology and Biological Sciences, Rutgers University; Author, Wild Life: Adventures of an Evolutionary Biologist

When I first saw the possibility (some 30 years ago) of grounding a science of human self-deception in evolutionary logic (based on its value in furthering deception of others), I imagined joining evolutionary theory with animal behavior and with those parts of psychology worth preserving. The latter I regarded as a formidable hurdle since so much of psychology (depth and social) appeared to be pure crap, or more generously put, without any foundation in reality or logic.

Now after a couple of years of intensive study of the subject, I am surprised at the number of areas of biology that are important, if not key, to the subject yet are relatively undeveloped by biologists. I am also surprised that many of the important new findings in this regard have been made by psychologists and not biologists.

It was always obvious that when neurophysiology actually became a science (which it did when it learned to measure on-going mental activity) it would be relevant to deceit and self-deception and this is becoming more apparent every day. Also, endocrinology could scarcely be irrelevant and Richard Wrangham has recently argued for an intimate connection between testosterone and self-deception in men but the connections must be much deeper still. The proper way to conceptualize the endocrine system (as David Haig has pointed out to me) is as a series of signals with varying half-lives which give relevant information to organs downstream and many such signals may be relevant to deceit and self-deception and to selves-deception, as defined below.

One thing I never imagined was that the immune system would be a vital component of any science of self-deception, yet two lines of work within psychology make this clear. Richard Davidson and co-workers have shown that relatively positive, up, approach-seeking people are more likely to be left-brain activated (as measured by EEG) and show stronger immune responses to a novel challenge (flu vaccine) than are avoidance, negative emotion (depression, anxiety) right-brained people.  At the same time, James Pennebaker and colleagues have shown that the very act of repressing information from consciousness lowers immune function while sharing information with others (or even a diary) has the opposite effect. Why should the immune system be so important and why should it react in this way?

A key variable in my mind is that the immune system is an extremely expensive one—we produce a grapefruit-sized set of tissue every two weeks—and we can thus borrow against it, apparently in part for brain function. But this immediately raises the larger question of how much we can borrow against any given system—yes fat for energy, bone and teeth when necessary (as for a child in utero), muscle when not used and so on—but with what effects? Why immune function and repression?

While genetics is, in principle, important to all of biology, I thought it would be irrelevant to the study of self-deception until way into the distant future. Yet the 1980s produced the striking discovery that the maternal half of our genome could act against the paternal, and vice-versa, discoveries beautifully exploited in the 90’s and 00’s by David Haig to produce a range of expected (and demonstrated) internal conflicts which must inevitably interact with self-deception directed toward others. Put differently, internal genetic conflict leads to a quite novel possibility: selves-deception, equally powerful maternal and paternal halves selected to deceive each other (with unknown effects on deception of others).

And consider one of the great mysteries of mental biology. The human brain consumes about 20% of resting metabolic rate come rain or shine, whether depressed or happy, asleep or awake. Why? And why is the brain so quick to die when deprived of this energy? What is the cellular basis for all of this? How exactly does borrowing from other systems, such as immune, interact with this basic metabolic cost? Biologists have been very slow to see the larger picture and to see that fundamental discoveries within psychobiology require a deeper understanding of many fundamental biological processes, especially the logic of energy borrowed from various sources.

Finally, let me express a surprise about psychology. It has led the way in most of the areas mentioned, e.g. immune effects, neurophysiology, brain metabolism. Also, while classical depth psychology (Freud and sundries) can safely be thrown overboard almost in its entirety, social psychology has produced some very clever and hopeful methods, as well as a body of secure results on biased human mentation, from perception, to organization of data, to analysis, to further propagation. Daniel Gilbert gives a well-appreciated lecture in which he likens the human mind to a bad scientist, everything from biased exposure to data and biased analysis of information to outright forgery. Hidden here is a deeper point. Science progresses precisely because it has a series of anti-deceit-and-self-deception devices built into it, from full description of experiments permitting exact replication, to explicit statement of theory permitting precise counter-arguments, to the preference for exploring alternative working hypotheses, to a statistical apparatus able to weed out the effects of chance, and so on.

daniel_l_everett's picture
Linguistic Researcher; Dean of Arts and Sciences, Bentley University; Author, How Language Began

I have wondered why some authors claim that people rarely if ever change their mind. I have changed my mind many times. This could be because I have weak character, because I have no self-defining philosophy, or because I like change. Whatever the reason, I enjoy changing my mind. I have occasionally irritated colleagues with my seeming motto of 'If it ain't broke, break it.'

At the same time, I adhered to a value common in the day-to-day business of scientific research, namely, that changing one's mind is alright for little matters but is suspect when it comes to big questions. Take a theory that is compatible with either conclusion 'x' or conclusion 'y'. First you believed 'x'. Then you received new information and you believed 'y'. This is a little change. And it is a natural form of learning - a change in behavior resulting from exposure to new information.

But change your mind, say, about the general theory that you work with, at least in some fields, and you are looked upon as a kind of maverick, a person without proper research priorities, a pot-stirrer. Why is that, I wonder?

I think that the stigma against major mind changes in science results from what I call 'homeopathic bias' - scientific knowledge is built up bit by little bit as we move cumulatively towards the truth.

This bias can lead researchers to avoid concluding that their work undermines the dominant theory in any significant way. Non-homeopathic doses of criticism can be considered not merely inappropriate, but even arrogant — implying somehow that the researcher is superior to his or her colleagues, whose unifying conceptual scheme is now judged to be weaker than they have noticed or have been willing to concede.

So any scientist publishing an article or a book about a non-homeopathic mind-change could be committing a career-endangering act. But I love to read these kinds of books. They bother people. They bother me.

I changed my mind about this homeopathic bias. I think it is myopic for the most part. And I changed my mind on this because I changed my mind regarding the largest question of my field - where language comes from. This change taught me about the empirical issues that had led to my shift and about the forces that can hold science and scientists in check if we aren't aware of them.

I believed at one time that culture and language were largely independent. Yet there is a growing body of research that suggests the opposite - deep reflexes from culture are to be found in grammar.

But if culture can exercise major effects on grammar, then the theory I had committed most of my research career to - the theory that grammar is part of the human genome and that the variations in the grammars of the world's languages are largely insignificant, was dead wrong. There did not have to be a specific genetic capacity for grammar - the biological basis of grammar could also be the basis of gourmet cooking, of mathematical reasoning, and of medical advances - human reasoning.

Grammar had once seemed to me too complicated to derive from any general human cognitive properties. It appeared to cry out for a specialized component of the brain, or what some linguists call the language organ. But such an organ becomes implausible if we can show that it is not needed because there are other forces that can explain language as both an ontogenetic and a phylogenetic fact.

Many researchers have discussed the kinds of things that hunters and gatherers needed to talk about and how these influenced language evolution. Our ancestors had to talk about things and events, about relative quantities, and about the contents of the minds of their conspecifics, among other things. If you can't talk about things and what happens to them (events) or what they are like (states), you can't talk about anything. So all languages need verbs and nouns. But I have been convinced by the research of others, as well as my own, that if a language has these, then the basic skeleton of the grammar largely follows. The meanings of verbs require a certain number of nouns and those nouns plus the verb make simple sentences, ordered in logically restricted ways. Other permutations of this foundational grammar follow from culture, contextual prominence, and modification of nouns and verbs. There are other components to grammar, but not all that many. Put like this, as I began to see things, there really doesn't seem to be much need for grammar proper to be part of the human genome as it were. Perhaps there is even much less need for grammar as an independent entity than we might have once thought.

laurence_c_smith's picture
Professor of Environmental Studies, Brown University; Author, Rivers of Power

The year 2007 marked three memorable events in climate science:  Release of the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR4); a decade of drought in the American West and the arrival of severe drought in the American Southeast; and the disappearance of nearly half of the polar sea-ice floating over the Arctic Ocean. The IPCC report (a three-volume, three-thousand page synthesis of current scientific knowledge written for policymakers) and the American droughts merely hardened my conviction that anthropogenic climate warming is real and just getting going — a view shared, in the case of the IPCC, a few weeks ago by the Nobel Foundation. The sea-ice collapse, however, changed my mind that it will be decades before we see the real impacts of the warming. I now believe they will happen much sooner.

Let's put the 2007 sea-ice year into context. In the 1970's, when NASA first began mapping sea ice from microwave satellites, its annual minimum extent (in September, at summer's end) hovered close to 8 million square kilometers, about the area of the conterminous United States minus Ohio. In September 2007 it dropped abruptly to 4.3 million square kilometers, the area of the conterminous United States minus Ohio and all the other twenty-four states east of the Mississippi, as well as North Dakota, Minnesota, Missouri, Arkansas, Louisiana, and Iowa. Canada's Northwest Passage was freed of ice for the first time in human memory. From Bering Strait where the U.S. and Russia brush lips, open blue water stretched almost to the North Pole.

What makes the 2007 sea-ice collapse so unnerving is that it happened too soon.  The ensemble averages of our most sophisticated climate model predictions, put forth in the IPCC AR4 report and various other model intercomparison studies, don't predict a downwards lurch of that magnitude for another fifty years. Even the aggressive models — the National Center for Atmospheric Research (NCAR) CCSM3 and the Centre National de Recherches Meteorologiques (CNRM) CM3 simulations, for example — must whittle ice until 2035 or later before the 2007 conditions can be replicated.  Put simply, the models are too slow to match reality. Geophysicists, accustomed to non-linearities and hard to impress after a decade of 'unprecedented' events, are stunned by the totter:  Apparently, the climate system can move even faster than we thought.  This has decidedly recalibrated scientists' attitudes — including my own — to the possibility that even the direst IPCC scenario predictions for the end of this century — 10 to 24 inch higher global sea levels, for example — may be too conservative.

What does all this say to us about the future? The first lesson is that rapid climate change — a nonlinearity that occurs when a climate forcing reaches a threshold beyond which little additional forcing is needed to trigger a large impact — is a distinct threat not well captured in our current generation of computer models. This situation will doubtless improve — as the underlying physics of the 2007 ice event and others such as the American Southeast drought are dissected, understood, and codified — but in the meantime, policymakers must work from the IPCC blueprint, which seems almost staid after the events of this summer and fall.  The second lesson is that it now seems probable that the northern hemisphere will lose its ice lid far sooner than we ever thought possible.  Over the past three years experts have shifted from 2050, to 2035, to 2013 as plausible dates for an ice-free Arctic Ocean — estimates at first guided by models, then revised by reality.

The broader significance of vanishing sea ice extends far beyond suffering polar bears, new shipping routes, or even development of vast Arctic energy reserves. It is absolutely unequivocal that the disappearance of summer sea ice — regardless of exactly which year it arrives — will profoundly alter the northern hemisphere climate, particularly through amplified winter warming of at least twice the global average rate. Its further impacts on the world's precipitation and pressure systems are under study but are likely significant. Effects both positive and negative, from reduced heating oil consumption to outbreaks of fire and disease, will propagate far southward into the United States, Canada, Russia and Scandinavia. Scientists have expected such things as an eventuality — but in 2007 we learned they may already be upon us.

david_dalrymple's picture
Research affiliate, MIT Media Lab

Not that long ago, I was under the impression that the basic problem of computer architecture had been solved. After all, computers got faster every year, and gradually whole new application domains emerged. There was constantly more memory available, and software hungrily consumed it. Each new computer had a bigger power supply, and more airflow to extract the increasing heat from the processor. 

Now, clock speeds aren't rising quite as quickly, and the progress that is made doesn't seem to help our computers start up or run any faster. The traditions of the computing industry, some going as far back as the first digital computers built by John von Neumann in the 1950s, are starting to grow obsolete. The slower computers seem to get faster, and the more deeply I understand the way things actually work, the more these problems become apparent to me. They really come to light when you think about a computer as a business.

Imagine if your company or organization had one fellow [the CPU] who sat in an isolated office, and refused to talk with anyone except his two most trusted deputies [the Northbridge and Southbridge], through which all the actual work the company does must be funneled. Because this one man — let's call him Bob — is so overloaded doing all the work of the entire company, he has several assistants [memory controllers] who remember everything for him. They do this through a complex system [virtual memory] of file cabinets of various sizes [physical memories], the organization over which they have strictly limited autonomy.

Because it is faster to find things in the smaller cabinets [RAM], where there is less to sift through, Bob asks them to put the most commonly used information there. But since he is constantly switching between different tasks, the assistants must swap in and out the files in the smaller cabinets with those in the larger ones whenever Bob works on something different ["thrashing"]. The largest file cabinet is humongous, and rotates slowly in front of a narrow slit [magnetic storage]. The assistant in charge of it must simply wait for the right folder to appear in front of him before passing it along [disk latency].

Any communication with customers must be handled through a team of receptionists [I/O controllers] who don't take the initiative to relay requests to one of Bob's deputies. When Bob needs customer input to continue on a difficult problem, he drops what he is doing to chase after his deputy to chase after a receptionist to chase down the customer, thus preventing work for other customers from being done in that time.

This model is clearly horrendous for numerous reasons. If any staff member goes out to lunch, the whole operation is likely to grind to a halt. Tasks that ought to be quite simple turn out to take a lot of time, since Bob must re-acquaint himself with the issues in question. If a spy gains Bob's trust, all is lost. The only way to make the model any better without giving up and starting over is to hire people who just do their work faster and spend more hours in the office. And yet, this is the way almost every computer in the world operates today.

It is much more sane to hire a large pool of individuals, and, depending on slow-changing customer needs, organize them into business units and assign them to customer accounts. Each person keeps track of his own small workload, and everyone can work on a separate task simultaneously. If the company suddenly acquires new customers, it can recruit more staff instead of forcing Bob to work overtime. If a certain customer demands more attention than was foreseen, more people can be devoted to the effort. And perhaps most importantly, collaboration with other businesses becomes far more meaningful than the highly coded, formal game of telephone that Bob must play with Frank, who works in a similar position at another corporation [a server]. Essentially, this is a business model problem as much as a computer science one.
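The contrast between the overloaded "Bob" and a pool of workers can be caricatured in a few lines of Python. This is only a toy sketch of the analogy, not Dalrymple's actual proposal; the `handle_request` function and the eight "customers" are invented for illustration, with a sleep standing in for slow, independent work.

```python
# Toy sketch: one serial "Bob" versus a pool of workers,
# each owning its own small workload.
from concurrent.futures import ThreadPoolExecutor
import time

def handle_request(customer_id):
    """Simulate a slow, independent customer request (e.g. waiting on I/O)."""
    time.sleep(0.05)
    return customer_id * 2

customers = list(range(8))

# Centralized model: one overloaded Bob serves every customer in turn.
start = time.perf_counter()
serial_results = [handle_request(c) for c in customers]
serial_time = time.perf_counter() - start

# Distributed model: a pool of workers handles customers simultaneously.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    pooled_results = list(pool.map(handle_request, customers))
pooled_time = time.perf_counter() - start

assert serial_results == pooled_results  # same answers either way
print(f"serial: {serial_time:.2f}s, pooled: {pooled_time:.2f}s")
```

The answers are identical; only the organization differs, which is Dalrymple's point — the gain comes from restructuring who holds the work, not from making any one worker faster.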

These complaints only scratch the surface of the design flaws of today's computers. On an extremely low level, with voltages, charge, and transistors, energy is handled recklessly, causing tremendous heat, which would melt the parts in a matter of seconds were it not for the noisy cooling systems we find in most computers. And on a high level, software engineers have constructed a city of competing abstractions based on the fundamentally flawed "CPU" idea.

So I have changed my mind. I used to believe that computers were on the right track, but now I think the right thing to do is to move forward from our 1950s models to a ground-up, fundamentally distributed computing architecture. I started to use computers at 17 months of age and started programming them at 5, so I took the model for granted. But the present stagnation of perceptual computer performance, and the counter-intuitiveness of programming languages, led me to question what I was born into and wonder if there's a better way. Now I'm eager to help make it happen. When discontent changes your mind, that's innovation.

lee_m_silver's picture
professor at Princeton University in the Department of Molecular Biology

In an interview with the New York Times, shortly before he died, Francis Crick told a reporter, "the view of ourselves as [ensouled] 'persons' is just as erroneous as the view that the Sun goes around the Earth. This sort of language will disappear in a few hundred years. In the fullness of time, educated people will believe there is no soul independent of the body, and hence no life after death."

Like the vast majority of academic scientists and philosophers alive today, I accept Crick's philosophical assertion — that when your body dies, you cease to exist — without any reservations. I also used to agree with Crick's psychosocial prognosis — that modern education would inevitably give rise to a populace that rejected the idea of a supernatural soul. But on this point, I have changed my mind.

Underlying Crick's psychosocial claim is a common assumption: the minds of all intelligent people must operate according to the same universal principles of human nature. Of course, anyone who makes this assumption will naturally believe that their own mind-type is the universal one. In the case of Crick and most other molecular biologists, the assumed universal mind-type is highly receptive to the persuasive power of pure logic and rational analysis.

Once upon a time, my own worldview was similarly informed. I was convinced that scientific facts and rational argument alone could win the day with people who were sufficiently intelligent and educated. To my mind, the rejection of rational thought by such people was a sign of disingenuousness to serve political or ideological goals.

My mind began to change one evening in November 2003. I had given a lecture at a small liberal arts college along with a member of The President's Council on Bioethics, whose views on human embryo research are diametrically opposed to my own. Surrounded by students at the wine and cheese reception that followed our lectures, the two of us began an informal debate about the true meaning and significance of changes in gene expression and DNA methylation during embryonic development. Six hours later, at 4:00 am, long after the last student had crept off to sleep, we were both still convinced that with just one more round of debate, we'd get the other to capitulate. It didn't happen.

Since this experience, I have purposely engaged other well-educated defenders of the irrational, as well as numerous students at my university, in spontaneous one-on-one debates about a host of contentious biological subjects including evolution, organic farming, homeopathy, cloned animals, "chemicals" in our food, and genetic engineering. Much to my chagrin, even after politics, ideology, economics, and other cultural issues have been put aside, there is often a refusal to accept the scientific implications of rational argumentation.

While its mode of expression may change across cultures and time, irrationality and mysticism seem to be an integral part of normal human nature, even among highly educated people. No matter what scientific and technological advances are made in the future, I now doubt that supernatural beliefs will ever be eradicated from the human species.

max_tegmark's picture
Physicist, MIT; Researcher, Precision Cosmology; Scientific Director, Foundational Questions Institute; President, Future of Life Institute; Author, Life 3.0

Do we need to understand consciousness to understand physics?  I used to answer "yes", thinking that we could never figure out the elusive "theory of everything" for our external physical reality without first understanding the distorting mental lens through which we perceive it.

After all, physical reality has turned out to be very different from how it seems, and I feel that most of our notions about it have turned out to be illusions. The world looks like it has three primary colors, but that number three tells us nothing about the world out there, merely something about our senses: that our retina has three kinds of cone cells. The world looks like it has impenetrably solid and stationary objects, but all except a quadrillionth of the volume of a rock is empty space between particles in restless schizophrenic vibration. The world feels like a three-dimensional stage where events unfold over time, but Einstein's work suggests that change is an illusion, time being merely the fourth dimension of an unchanging space-time that just is, never created and never destroyed, containing our cosmic history like a DVD contains a movie. The quantum world feels random, but Everett's work suggests that randomness too is an illusion, being simply the way our minds feel when cloned into diverging parallel universes.

The ultimate triumph of physics would be to start with a mathematical description of the world from the "bird's eye view" of a mathematician studying the equations (which are ideally simple enough to fit on her T-shirt) and to derive from them the "frog's eye view" of the world, the way her mind subjectively perceives it. However, there is also a third and intermediate "consensus view" of the world. From your subjectively perceived frog perspective, the world turns upside down when you stand on your head and disappears when you close your eyes, yet you subconsciously interpret your sensory inputs as though there is an external reality that is independent of your orientation, your location and your state of mind. It is striking that although this third view involves censorship (like rejecting dreams), interpolation (as between eye-blinks) and extrapolation (like attributing existence to unseen cities) of your frog's eye view, independent observers nonetheless appear to share this consensus view. Although the frog's eye view looks black-and-white to a cat, iridescent to a bird seeing four primary colors, and still more different to a bee seeing polarized light, a bat using sonar, a blind person with keener touch and hearing, or the latest robotic vacuum cleaner, all agree on whether the door is open.

This reconstructed consensus view of the world that humans, cats, aliens and future robots would all agree on is not free from some of the above-mentioned shared illusions. However, it is by definition free from illusions that are unique to biological minds, and therefore decouples from the issue of how our human consciousness works. This is why I've changed my mind: although understanding the detailed nature of human consciousness is a fascinating challenge in its own right, it is not necessary for a fundamental theory of physics, which need "only" derive the consensus view from its equations.

In other words, what Douglas Adams called "the ultimate question of life, the universe and everything" splits cleanly into two parts that can be tackled separately: the challenge for physics is deriving the consensus view from the bird's eye view, and the challenge for cognitive science is to derive the frog's eye view from the consensus view. These are two great challenges for the third millennium. They are each daunting in their own right, and I'm relieved that we need not solve them simultaneously.

gary_marcus's picture
Professor of Psychology, Director NYU Center for Language and Music; Author, Guitar Zero

When I was in graduate school, in the early 1990s, I learned two important things: that the human capacity for language was innate, and that the machinery that allowed human beings to learn language was "special", in the sense of being separate from the rest of the human mind.

Both ideas sounded great at the time. But (as far as I can tell now) only one of them turns out to be true.

I still think that I was right to believe in "innateness", the idea that the human mind arrives, fresh from the factory, with a considerable amount of elaborate machinery. When a human embryo emerges from the womb, it has almost all the neurons it will ever have. All of the basic neural structures are already in place, and most or all of the basic neural pathways are established. There is, to be sure, lots of learning yet to come — an infant's brain is more rough draft than final product — but anybody who still imagines the infant human mind to be little more than an empty sponge isn't in touch with the realities of modern genetics and neuroscience. Almost half our genome is dedicated to the development of brain function, and those ten or fifteen thousand brain-related genes choreograph an enormous amount of biological sophistication.  Chomsky (whose classes I sat in on while in graduate school) was absolutely right to insist, all these years, that language has its origins in the built-in structure of the mind.

But now I believe that I was wrong to accept the idea that language was separate from the rest of the human mind. It's always been clear that we can talk about what we think about, but when I was in graduate school it was popular to talk about language as being acquired by a separate "module" or "instinct" from the rest of cognition, by what Chomsky called a  "Language Acquisition Device" (or LAD). Its mission in life was to acquire language, and nothing else. 

In keeping with the idea of language as the product of a specialized inborn mechanism, we noted how quickly human toddlers acquired language, and how determined they were to do so; all normal human children acquire language, not just a select few raised in privileged environments, and they manage to do so rapidly, learning most of what they need to know in the first few years of life.  (The average adult, in contrast, often gives up around the time they have to face their fourth list of irregular verbs.)  Combine that with the fact that some children with normal intelligence couldn't learn language and that others with normal language lacked normal cognitive function, and I was convinced. Humans acquired language because they had a built-in module that was uniquely dedicated to that function.

Or so I thought then. By the late 1990s, I started looking beyond the walls of my own field (developmental psycholinguistics) and out towards a whole host of other fields, including genetics, neuroscience, and evolutionary biology.

The idea that most impressed me — and did the most to shake me of the belief that language was separate from the rest of the mind — goes back to Darwin. Not "survival of the fittest" (a phrase actually coined by Herbert Spencer) but his notion, now amply confirmed at the molecular level, that all biology is the product of what he called "descent with modification". Every species, and every biological system evolves through a combination of inheritance (descent) and change (modification). Nothing, no matter how original it may appear, emerges from scratch.

Language, I ultimately realized, must be no different: it emerged quickly, in the space of a few hundred thousand years, and with comparatively little genetic change. It suddenly dawned on me that the striking fact that our genomes overlap almost 99% with those of chimpanzees must be telling us something: language couldn't possibly have started from scratch. There isn't enough room in the genome, or in our evolutionary history, for it to be plausible that language is completely separate from what came before.

Instead, I have now come to believe, language must be, largely, a recombination of spare parts, a kind of jury-rigged kluge built largely out of cognitive machinery that evolved for other purposes, long before there was such a thing as language. If there's something special about language, it is not the parts from which it is composed, but the way in which they are put together.

Neuroimaging studies seem to bear this out. Whereas we once imagined language to be produced and comprehended almost entirely by two purpose-built regions — Broca's area and Wernicke's area — we now see that many other parts of the brain are involved (e.g. the cerebellum and basal ganglia) and that the classic language areas (i.e. Broca's and Wernicke's) participate in other aspects of mental life (e.g., music and motor control) and have counterparts in other apes.

At the narrowest level, this means that psycholinguists and cognitive neuroscientists need to rethink their theories about what language is. But if there is a broader lesson, it is this: although we humans in many ways differ radically from any other species, our greatest gifts are built upon a genomic bedrock that we share with the many other apes that walk the earth.

Robert Sapolsky
Neuroscientist, Stanford University; Author, Behave

Well, my biggest change of mind came only a few years ago. It was the outcome of a painful journey of self-discovery, where my wife and children stood behind me and made it possible, where I struggled with all my soul, and all my heart and all my might. But that had to do with my realizing that Broadway musicals are not cultural travesties, so it's a little tangential here. Instead I'll focus on science.
I'm both a neurobiologist and a primatologist, and I've changed my mind about plenty of things in both of these realms. But the most fundamental change is one that transcends either of those disciplines — this was my realizing that the most interesting and important things in the life sciences are not going to be explained with sheer reductionism. 

A specific change of mind concerned my work as a neurobiologist. 

This came about 15 years ago, and it challenged neurobiological dogma that I had learned in pre-school, namely that the adult brain does not make new neurons. This fact had always been a point of weird pride in the field — hey, the brain is SO fancy and amazing that its elements are irreplaceable, not like some dumb-ass simplistic liver that's so totally fungible that it can regrow itself. And what this fact also reinforced, in passing, was the dogma that the brain is set in stone very early on in life, that there are all sorts of things that can't be changed once a certain time-window has passed. 

Starting in the 1960's, a handful of crackpot scientists had been crying in the wilderness about how the adult brain does make new neurons. At best, their unorthodoxy was ignored; at worst, they were punished for it. But by the 1990's, it had become clear that they were right. And "adult neurogenesis" has turned into the hottest subject in the field — the brain makes new neurons, makes them under interesting circumstances, fails to under other interesting ones. 

The new neurons function, are integrated into circuits, might even be required for certain types of learning. And the phenomenon is a cornerstone of a new type of neurobiological chauvinism — part of the very complexity and magnificence of the brain is how it can rebuild itself in response to the world around it. 

So, I'll admit, this business about new neurons was a tough one for me to assimilate. I wasn't invested enough in the whole business to be in the crowd indignantly saying, No, this can't be true. Instead, I just tried to ignore it. "New neurons", christ, I can't deal with this, turn the page. And after an embarrassingly long time, enough evidence had piled up that I had to change my mind and decide that I needed to deal with it after all. And it's now one of the things that my lab studies.

The other change concerned my life as a primatologist, where I have been studying male baboons in East Africa. This also came in the early 90's. I study what social behavior has to do with health, and my shtick always was that if you want to know which baboons are going to be festering with stress-related disease, look at the low-ranking ones.  Rank is physiological destiny, and if you have a choice in the matter, you want to win some critical fights and become a dominant male, because you'll be healthier. And my change of mind involved two pieces.

The first was realizing, from my own data and that of others, that being dominant has far less to do with winning fights than with social intelligence and impulse control. The other was realizing that while health has something to do with social rank, it has far more to do with personality and social affiliation — if you want to be a healthy baboon, don't be a socially isolated one. This particular shift has something to do with the accretion of new facts, new statistical techniques for analyzing data, blah blah. Probably most importantly, it has to do with the fact that I was once a hermetic 22-year old studying baboons and now, 30 years later, I've changed my mind about a lot of things in my own life.

John McCarthy
Professor of Computer Science at Stanford University

I have a collection of web pages on the sustainability of material progress that treats many problems that have been proposed as possible stoppers. I get email about the pages, both unfavorable and favorable, mostly the latter.

I had believed that the email would concern specific problems or would raise new ones, e.g. "What about erosion of agricultural land?"

There's some of that, but overwhelmingly the email, both pro and con, concerns my attitude,  not my (alleged) facts. "How can you be so blithely cornucopian when everybody knows ..." or "I'm glad someone has the courage to take on all those doomsters."

It seems, to my surprise, that people's attitude toward the future stems at least as much from personality as from opinions about facts. People look for facts to support their attitudes — which have earlier antecedents.

Lee Smolin
Physicist, Perimeter Institute; Author, Einstein's Unfinished Revolution

Although I have changed my mind about several ideas and theories, my longest struggle has been with the concept of time.  The most obvious and universal aspect about reality, as we experience it, is that it is structured as a succession of moments, each of which comes into being, supplanting what was just present and is now past.  But, as soon as we describe nature in terms of mathematical equations, the present moment and the flow of time seem to disappear, and time becomes just a number, a reading on an instrument,  like any other.

Consequently, many philosophers and physicists argue that time is an illusion, that reality consists of the whole four dimensional history of the universe, as represented in Einstein's theory of general relativity.  Some, like Julian Barbour, go further and argue that, when quantum theory is unified with gravity, time disappears completely.  The world is just a vast collection of moments which are represented by the "wave-function of the universe."  Time is not real; it is just an "emergent quantity" that is helpful for organizing our observations of the universe when it is big and complex.

Other physicists argue that aspects of time are real, such as the relationships of causality, that record which events were the necessary causes of others. Penrose, Sorkin and Markopoulou have proposed models of quantum spacetime in which everything real reduces to these relationships of causality.

In my own thinking, I first embraced the view that quantum reality is timeless.  In our work on loop quantum gravity we were able to take this idea more seriously than people before us could, because we could construct and study exact wave-functions of the universe. Carlo Rovelli, Bianca Dittrich and others worked out in detail how time would "emerge" from the study of the question of what quantities of the theory are observable.

But, somehow, the more this view was worked out in detail the less I was convinced. This was partly due to technical challenges in realizing the emergence of time, and partly because some naïve part of me could never understand conceptually how the basic experience of the passage of time could emerge from a world without time.

So in the late 90s I embraced the view that time, as causality, is real. This fit best the next stage of development of loop quantum gravity, which was based on quantum spacetime histories.  However, even as we continued to make progress on the technical side of these studies, I found myself worrying that the present moment and the flow of time were still nowhere represented.  And I had another motivation, which was to make sense of the idea that laws of nature could evolve in time.

Back in the early 90s I had formulated a view of laws evolving on a landscape of theories along with the universe they govern.  This had been initially ignored, but in the last few years there has been much study of dynamics on landscapes of theories. Most of these are framed in the timeless language of the "wavefunction of the universe," in contrast to my original presentation, in which theories evolved in real time. As these studies progressed, it became clear that only those in which time played a role could generate testable predictions — and this made me want to think more deeply about time.

It is becoming clear to me that the mystery of the nature of time is connected with other fundamental questions such as the nature of truth in mathematics and whether there must be timeless laws of nature. Rather than being an illusion, time may be the only aspect of our present understanding of nature that is not temporary and emergent.

Tor Nørretranders
Writer; Speaker; Thinker, Copenhagen, Denmark

I have changed my mind about my body. I used to think of it as a kind of hardware on which my mental and behavioral software was running. Now, I primarily think of my body as software. 

My body is not like a typical material object, a stable thing.  It is more like a flame, a river or an eddy. Matter is flowing through it all the time. The constituents are being replaced over and over again.

A chair or a table is stable because the atoms stay where they are. The stability of a river stems from the constant flow of water through it.

98 percent of the atoms in the body are replaced every year. 98 percent! Water molecules stay in your body for two weeks (and for an even shorter time in a hot climate); the atoms in your bones stay there for a few months. Some atoms stay for years. But hardly a single atom stays with you in your body from cradle to grave.

What is constant in you is not material. An average person takes in 1.5 tons of matter every year as food, drink and oxygen. All this matter has to learn to be you. Every year. New atoms will have to learn to remember your childhood.
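Taken at face value, the turnover figures above imply a strikingly short residence time for a typical atom. A minimal back-of-the-envelope sketch, assuming (as a simplification) continuous exponential replacement calibrated to the 98-percent-per-year figure:

```python
import math

# Assumption: atoms are replaced continuously at a constant rate, so the
# fraction of original atoms remaining after t years is exp(-rate * t).
# Calibrate the rate so that only 2% of today's atoms remain after one year.
fraction_remaining_after_one_year = 0.02
rate_per_year = -math.log(fraction_remaining_after_one_year)  # about 3.9 per year

# Half-life: time until half of today's atoms have been swapped out.
half_life_days = 365 * math.log(2) / rate_per_year

print(f"replacement rate: {rate_per_year:.2f} per year")
print(f"half-life of a typical atom in the body: {half_life_days:.0f} days")
```

On this crude model, half of today's atoms are gone in roughly two months, which sits comfortably between the two weeks for water and the few months for bone quoted above.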

These numbers have been known for half a century or more, mostly from studies of radioactive isotopes. The physicist Richard Feynman said in 1955: "Last week's potatoes! They now can remember what was going on in your mind a year ago."

But why is this simple insight not on the all-time Top 10 list of important discoveries? Perhaps because it tastes a little like spiritualism and idealism? Only the ghosts are for real? Wandering souls? 

But digital media now make it possible to think of all this in a simple way. The music I danced to as a teenager has been moved from vinyl LPs to magnetic audio tapes to CDs to iPods and whatnot. The physical representation can change and is not important — as long as it is there. The music can jump from medium to medium, but it is lost if it does not have a representation. This physics of information was sorted out by Rolf Landauer in the 1960s. Likewise, our memories can move from potato-atoms to burger-atoms to banana-atoms. But the moment they are on their own, they are lost.

We reincarnate ourselves all the time. We constantly give our personality new flesh. I keep my mental life alive by making it jump from atom to atom. A constant flow. Never the same atoms, always the same river. No flow, no river. No flow, no me.

This is what I call permanent reincarnation: Software replacing its hardware all the time. Atoms replacing atoms all the time. Life. This is very different from religious reincarnation with souls jumping from body to body (and souls sitting out there waiting for a body to take home in).

There has to be material continuity for permanent reincarnation to be possible. The software is what is preserved, but it cannot live on its own. It has to jump from molecule to molecule, always in carnation.

I have changed my mind about the stability of my body: It keeps changing all the time. Or I could not stay the same.

Ernst Pöppel
Head of Research Group Systems, Neuroscience and Cognitive Research, Ludwig-Maximilians-University Munich, Germany; Guest Professor, Peking University, China

When I look at something, when I talk to somebody, when I write a few sentences about "what I have changed my mind about and why", the neuronal network in my brain changes all the time, and there are even structural changes in the brain. Why is it that these changes don't come to mind all the time but remain subthreshold?  Certainly, if everything that goes on in the brain came to mind, and if there were not an efficient mechanism of informational garbage disposal, we would end up in mental chaos (which sometimes happens in unfortunate cases of neuronal dysfunction). It is only sometimes that certain events produce so much neuronal energy and catch so much attention that a conscious representation is made possible.

As most neuronal information processing remains in mental darkness, i.e. happens on an implicit level, it is in my view impossible to make a clear statement about why somebody changed his or her mind about something. If somebody gives an explicit reason for having changed their mind about something, I am very suspicious. As "it thinks" all the time in my brain, and as these processes are beyond voluntary control, I am much less transparent to myself than I might want to be, and this is true for everybody. Thus, I cannot give a good reason why I changed my mind about a strong hypothesis, or even a belief or perhaps a prejudice, in my scientific work, one which I held until several years ago.

A sentence of Ludwig Wittgenstein from his Tractatus Logico-Philosophicus (5.6) was like a dogma for me: "Die Grenzen meiner Sprache bedeuten die Grenzen meiner Welt. — The limits of my language signify the limits of my world " (my translation). Now I react to this sentence with an emphatic "No!".

As a neuroscientist I have to stay away from the language trap. In our research we are easily misguided by words. Without too much thinking we refer to "consciousness", to "free will", to "thoughts", to "attention", to the "self", etc., and we give an ontological status to these terms. Some people even start to look for the potential site of consciousness or of free will in the brain, or they ask the "what is ..." question that can never find an answer. The prototypical "what is ..." question was formulated 1600 years ago by Augustinus, who said in the 11th book of his Confessions: "Quid est ergo tempus? Si nemo ex me quaerat scio, si quaerenti explicare velim nescio. — What is time? If nobody asks me, I know it, but if I have to explain it to somebody, I don't know it" (my translation).

Interestingly, Augustinus made a nice categorical mistake by referring to "knowing" first on an implicit, and then on an explicit, level. This categorical mistake is still with us when we ask questions like "What is consciousness, free will, ...?"; one knows, but one does not. As neuroscientists we have to focus on processes in the brain, which rarely or perhaps never map directly onto such terms as we use them. Complexity reduction in brains is necessary and it happens all the time, but the goal of this reductive process is not such terms, which might be useful for our communication, but efficient action. This is what I think today, but why I came to this conclusion I don't know; it was probably several reasons that finally resulted in a shift of mind, i.e. overcoming Wittgenstein's straitjacket.

Antony Garrett Lisi
Theoretical physicist

As a scientist, I am motivated to build an objective model of reality. Since we always have incomplete information, it is eminently rational to construct a Bayesian network of likelihoods — assigning a probability for each possibility, supported by a chain of priors. When new facts arise, or if new conditional relationships are discovered, these probabilities are adjusted accordingly — our minds should change. When judgment or action is required, it is based on knowledge of these probabilities. This method of logical inference and prediction is the sine qua non of rational thought, and the method all scientists aspire to employ. However, the ambivalence associated with an even probability distribution makes it terribly difficult for an ideal scientist to decide where to go for dinner.
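The probability bookkeeping described above, assigning a prior and adjusting it when a new fact arrives, amounts to a single application of Bayes' rule per observation. A minimal sketch, with purely hypothetical numbers chosen only to show how evidence shifts a prior:

```python
def bayes_update(prior, likelihood, likelihood_given_not):
    """Posterior probability of a hypothesis H after observing evidence E.

    prior                -- P(H) before seeing the evidence
    likelihood           -- P(E | H)
    likelihood_given_not -- P(E | not H)
    """
    # Total probability of seeing the evidence at all, then Bayes' rule.
    evidence = likelihood * prior + likelihood_given_not * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical example: a scientist gives a theory a 30% prior, then observes
# a result the theory predicts with probability 0.9, but which would have
# occurred anyway with probability 0.2.
posterior = bayes_update(prior=0.30, likelihood=0.9, likelihood_given_not=0.2)
print(f"belief after the evidence: {posterior:.2f}")
```

Under these made-up numbers the belief rises from 0.30 to about 0.66; a contradictory observation would push it down by exactly the same rule, which is the sense in which an ideal scientist's mind should change.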

Even though I strive to achieve an impartial assessment of probabilities for the purpose of making predictions, I cannot consider my assessments to be unbiased. In fact, I no longer think humans are naturally inclined to work this way. When I casually consider the beliefs I hold, I am not readily able to assign them numerical probabilities. If pressed, I can manufacture these numbers, but this seems more akin to rationalization than rational thought. Also, when I learn something new, I do not immediately erase the information I knew before, even if it is contradictory. Instead, the new model of reality is stacked atop the old. And it is in this sense that a mind doesn't change; vestigial knowledge may fade over a long period of time, but it isn't simply replaced. This model of learning matches a parable from Douglas Adams, relayed by Richard Dawkins:

A man didn't understand how televisions work, and was convinced that there must be lots of little men inside the box, manipulating images at high speed. An engineer explained to him about high frequency modulations of the electromagnetic spectrum, about transmitters and receivers, about amplifiers and cathode ray tubes, about scan lines moving across and down a phosphorescent screen. The man listened to the engineer with careful attention, nodding his head at every step of the argument. At the end he pronounced himself satisfied. He really did now understand how televisions work. "But I expect there are just a few little men in there, aren't there?"

As humans, we are inefficient inference engines — we are attached to our "little men," some dormant and some active. To a degree, these imperfect probability assessments and pet beliefs provide scientists with the emotional conviction necessary to motivate the hard work of science. Without the hope that an improbable line of research may succeed where others have failed, difficult challenges would go unmet. People should be encouraged to take long shots in science, since, with so many possibilities, the probability of something improbable happening is very high. At the same time, this emotional optimism must be tempered by a rational estimation of the chance of success — we must not be so optimistic as to delude ourselves. In science, we must test every step, trying to prove our ideas wrong, because nature is merciless. To have a chance of understanding nature, we must challenge our predispositions. And even if we can't fundamentally change our minds, we can acknowledge that others working in science may make progress along their own lines of research. By accommodating a diverse variety of approaches to any existing problem, the scientific community will progress expeditiously in unlocking nature's secrets.

Helen Fisher
Biological Anthropologist, Rutgers University; Author, Why Him? Why Her? How to Find and Keep Lasting Love

When asked why all of her marriages failed, anthropologist Margaret Mead apparently replied, "I beg your pardon, I have had three marriages and none of them was a failure."  There are many people like Mead.  Some 90% of Americans marry by middle age.  And when I looked at United Nations data on 97 other societies, I found that more than 90% of men and women eventually wed in the vast majority of these cultures, too.  Moreover, most human beings around the world marry one person at a time: monogamy.  Yet, almost everywhere people have devised social or legal means to untie the knot.  And where they can divorce — and remarry — many do. 

So I had long suspected this human habit of "serial monogamy" had evolved for some biological purpose.  Planned obsolescence of the pairbond?  Perhaps the mythological "seven-year itch" evolved millions of years ago to enable a bonded pair to rear two children through infancy together.  If each departed after about seven years to seek "fresh features," as poet Lord Byron put it, both would have ostensibly reproduced themselves and both could breed again — creating more genetic variety in their young. 

So I began to cull divorce data on 58 societies collected since 1947 by the Statistical Office of the United Nations.  My mission: to prove that the "seven year itch" was a worldwide biological phenomenon associated in some way with rearing young.  

Not to be.  My intellectual transformation came while I was poring over these divorce statistics in a rambling cottage, a shack really, on the Massachusetts coast one August morning.  I regularly got up around 5:30, went to a tiny desk that overlooked the deep woods, and pored over the pages I had Xeroxed from the United Nations Demographic Yearbooks.  But in country after country, and decade after decade, divorces tended to peak (the divorce mode) during and around the fourth year of marriage.  There were variations, of course.  Americans tended to divorce between the second and third year of marriage, for example.  Interestingly, this corresponds with the normal duration of intense, early stage romantic love — often about 18 months to 3 years.  Indeed, in a 2007 Harris poll, 47% of American respondents said they would depart an unhappy marriage when the romance wore off, unless they had conceived a child.

Nevertheless, there was no denying it:  Among these hundreds of millions of people from vastly different cultures, three patterns kept emerging.  Divorces regularly peaked during and around the fourth year after wedding.  Divorces peaked among couples in their late twenties.  And the more children a couple had, the less likely they were to divorce: some 39% of worldwide divorces occurred among couples with no dependent children; 26% occurred among those with one child; 19% occurred among couples with two children; and 7% of divorces occurred among couples with three young. 

I was so disappointed.  I mulled this over endlessly.  My friend used to wave his hand over my face, saying, "Earth to Helen; earth to Helen."  Why do so many men and women divorce during and around the 4-year mark; at the height of their reproductive years; and often with a single child?  It seemed like such an unstable reproductive strategy.  Then suddenly I got that "ah-ha" moment:  Women in hunting and gathering societies breastfeed around the clock, eat a low-fat diet and get a lot of exercise — habits that tend to inhibit ovulation.  As a result, they regularly space their children about four years apart.  Thus, the modern duration of many marriages—about four years—conforms to the traditional period of human birth spacing: four years. 

Perhaps human parental bonds originally evolved to last only long enough to raise a single child through infancy, about four years, unless a second infant was conceived.  By age five, a youngster could be reared by mother and a host of relatives. Equally important, both parents could choose a new partner and bear more varied young.

My new theory fit nicely with data on other species.  Only about three percent of mammals form a pairbond to rear their young.  Take foxes.  The vixen's milk is low in fat and protein; she must feed her kits constantly; and she will starve unless the dog fox brings her food.  So foxes pair in February and rear their young together.  But when the kits leave the den in mid summer, the pairbond breaks up.  Among foxes, the partnership lasts only through the breeding season.  This pattern is common in birds.  Among the more than 8,000 avian species, some 90% form a pairbond to rear their young.  But most do not pair for life. A male and female robin, for example, form a bond in the early spring and rear one or more broods together.  But when the last of the fledglings fly away, the pairbond breaks up.  

Like many other pair-bonding creatures, humans have probably inherited a tendency to love and love again—to create more genetic variety in our young.  We aren't puppets on a string of DNA, of course.  Today some 57% of American marriages last for life.  But deep in the human spirit is a restlessness in long relationships, born of a time long gone, as poet John Dryden put it, "when wild in wood the noble savage ran."

Scott Sampson
President & CEO, Science World British Columbia; Dinosaur paleontologist and science communicator; Author, How To Raise A Wild Child

An asteroid did it . . . .

Ok, so this may not seem like news to you. The father-son team of Luis and Walter Alvarez first put forth the asteroid hypothesis in 1980 to account for the extinction of dinosaurs and many other lifeforms at the end of the Mesozoic (about 65.5 million years ago). According to this now familiar scenario, an asteroid about 10 km in diameter slammed into the planet at about 100,000 km/hour. Upon impact, the bolide disintegrated, vaporizing a chunk of the earth's crust and propelling a gargantuan cloud of gas and dust high into the atmosphere. This airborne matter circulated around the globe, blocking out the sun and halting photosynthesis for a period of weeks or months. If turning the lights out wasn't bad enough, massive wild fires and copious amounts of acid rain apparently ensued. 
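The sheer violence of that scenario follows from simple kinetic-energy arithmetic on the figures above. A back-of-the-envelope sketch; the density used here is an assumed value typical of a rocky asteroid, not a figure from the text:

```python
import math

# Parameters from the impact scenario; density is an assumption (~3000 kg/m^3
# is a common ballpark for a rocky asteroid).
diameter_m = 10_000          # 10 km
speed_m_s = 100_000 / 3.6    # 100,000 km/h, roughly 27.8 km/s
density = 3000               # kg/m^3 (assumed)

# Treat the bolide as a sphere and compute its kinetic energy.
radius = diameter_m / 2
mass = density * (4 / 3) * math.pi * radius**3   # kg
energy_joules = 0.5 * mass * speed_m_s**2

# Express in megatons of TNT (1 Mt = 4.184e15 J) for comparison.
energy_megatons = energy_joules / 4.184e15
print(f"impact energy: {energy_joules:.1e} J ({energy_megatons:.1e} Mt TNT)")
```

Under these assumptions the impact releases on the order of 10^23 joules, roughly a hundred million megatons of TNT equivalent, which is why a global dust cloud, wildfires and acid rain are plausible consequences.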

Put simply, it was hell on Earth. Species succumbed in great numbers and food webs collapsed the world over, ultimately wiping out about half of the planet's biodiversity. Key geologic evidence includes remnants of the murder weapon itself: iridium, an element that occurs in small amounts in the Earth's crust but is abundant in asteroids, was found by the Alvarez team to be anomalously abundant in a thin layer within Cretaceous-Tertiary (K-T) boundary sediments at various sites around the world. In 1990 came the announcement of the discovery of the actual impact crater in the Gulf of Mexico. It seemed as if arguably the most enduring mystery in prehistory had finally been solved. Unsurprisingly, this hypothesis was also a media darling, providing a tidy yet incredibly violent explanation for one of paleontology's most perplexing problems, with the added bonus of a possible repeat performance, this time with humans on the roster of victims.

To some paleontologists, however, the whole idea seemed just a bit too tidy.

Ever since the Alvarezes proposed the asteroid, or "impact winter," hypothesis, many (at times the bulk of) dinosaur paleontologists have argued for an alternative scenario to account for the K-T extinction. I have long counted myself amongst the ranks of doubters. It is not so much that I and my colleagues have questioned the occurrence of an asteroid impact; supporting evidence for this catastrophic event has been firmly established for some time. At issue has been the timing of the event. Whereas the impact hypothesis invokes a rapid extinction—on the order of weeks to years—others argue for a more gradual dying that spanned from one million to several million years. Evidence cited in support of the latter view includes an end-Cretaceous drop in global sea levels and a multi-million year bout of volcanism that makes Mount St. Helens look like a brushfire. 

Thus, at present the debate has effectively been reduced to two alternatives. First is the Alvarez scenario, which proposes that the K-T extinction was a sudden event triggered by a single extraterrestrial bullet. Second is the gradualist view, which proposes that the asteroid impact was accompanied by two other global-scale perturbations (volcanism and decreasing sea-level), and that it was only this combination of factors acting in concert that decimated the end-Mesozoic biosphere.

Paleontologists of the gradualist ilk have argued that dinosaurs (and certain other groups) were already on their way out well before the K-T "big bang" occurred. Unfortunately, the fossil record of dinosaurs is relatively poor for the last stage of the Mesozoic and only one place on Earth — a small swath of badlands in the Western Interior of North America — has been investigated in detail. Several authors have argued that the latest Cretaceous Hell Creek fauna, as it's called (best known from eastern Montana), was depauperate relative to earlier dinosaur faunas. In particular, comparisons have often been made with the ca. 75 million year old Late Cretaceous Dinosaur Park Formation of southern Alberta, which has yielded a bewildering array of herbivorous and carnivorous dinosaurs.

For a long time, I regarded myself a card-carrying member of the gradualist camp. However, at least two lines of evidence have persuaded me to change my mind and join the ranks of the sudden-extinction-precipitated-by-an-asteroid group. 

First is a growing database indicating that the terminal Cretaceous world was not stressed to the breaking point, awaiting arrival of the coup de grâce from outer space. With regard to dinosaurs in particular, recent work has demonstrated that the Hell Creek fauna was much more diverse than previously realized. Second, new and improved stratigraphic age controls for dinosaurs and other Late Cretaceous vertebrates in the Western Interior indicate that ecosystems like those preserved in the Dinosaur Park Formation were not nearly as diverse as previously supposed. 

Instead, many dinosaur species appear to have existed for relatively short durations (< 1 million years), with some geologic units preserving a succession of relatively short-lived faunas. So, even within the well sampled Western Interior of North America (let alone the rest of the world, for which we currently have little hard data), I see no grounds for arguing that dinosaurs were undergoing a slow, attritional demise. Other groups, like plants, also seem to have been doing fine in the interval leading up to that fateful day 65.5 million years ago. Finally, extraordinary events demand extraordinary explanations, and it does not seem parsimonious to make an argument for a lethal cascade of agents when compelling evidence exists for a single agent capable of doing the job on its own.

So yes, as far as I'm concerned (at least for now), the asteroid did it.

John Baez
Mathematical Physicist, U.C. Riverside

One of the big problems in physics — perhaps the biggest! — is figuring out how our two current best theories fit together. On the one hand we have the Standard Model, which tries to explain all the forces except gravity, and takes quantum mechanics into account.  On the other hand we have General Relativity, which tries to explain gravity, and does not take quantum mechanics into account. Both theories seem to be more or less on the right track — but until we somehow fit them together, or completely discard one or both, our picture of the world will be deeply schizophrenic.

It seems plausible that as a step in the right direction we should figure out a theory of gravity that takes quantum mechanics into account, but reduces to General Relativity when we ignore quantum effects (which should be small in many situations). This is what people mean by "quantum gravity" — the quest for such a theory.

The most popular approach to quantum gravity is string theory.  Despite decades of hard work by many very smart people, it's far from clear that this theory is successful. It's made no predictions that have been confirmed by experiment.  In fact, it's made few predictions that we have any hope of testing anytime soon!  Finding certain sorts of particles at the big new particle accelerator near Geneva would count as partial confirmation, but string theory says very little about the details of what we should expect. In fact, thanks to the vast "landscape" of string theory models that researchers are uncovering, it keeps getting harder to squeeze specific predictions out of this theory.

When I was a postdoc, back in the 1980s, I decided I wanted to work on quantum gravity. The appeal of this big puzzle seemed irresistible.  String theory was very popular back then, but I was skeptical of it.  I became excited when I learned of an alternative approach pioneered by Ashtekar, Rovelli and Smolin, called loop quantum gravity.

Loop quantum gravity was less ambitious than string theory. Instead of a "theory of everything", it only sought to be a theory of something: namely, a theory of quantum gravity.

So, I jumped aboard this train, and for about a decade I was very happy with the progress we were making. A beautiful picture emerged, in which spacetime resembles a random "foam" at very short distance scales, following the laws of quantum mechanics.

We can write down lots of theories of this general sort. However, we have never yet found one for which we can show that General Relativity emerges as a good approximation at large distance scales — the quantum soap suds approximating a smooth surface when viewed from afar, as it were.

I helped my colleagues Dan Christensen and Greg Egan do a lot of computer simulations to study this problem. Most of our results went completely against what everyone had expected.  But worse, the more work we did, the more I realized I didn't know what questions we should be asking!  It's hard to know what to compute to check that a quantum foam is doing its best to mimic General Relativity.

Around this time, string theorists took note of criticism from loop quantum gravity researchers and others — in part thanks to Peter Woit's blog, his book Not Even Wrong, and Lee Smolin's book The Trouble with Physics.  String theorists weren't used to criticism like this.  A kind of "string-loop war" began.  There was a lot of pressure for physicists to take sides for one theory or the other. Tempers ran high.

Jaron Lanier put it this way: "One gets the impression that some physicists have gone for so long without any experimental data that might resolve the quantum-gravity debates that they are going a little crazy."  But even more depressing was that as this debate raged on, cosmologists were making wonderful discoveries left and right, getting precise data about dark energy, dark matter and inflation.  None of this data could resolve the string-loop war! Why?  Because neither of the contending theories could make predictions about the numbers the cosmologists were measuring! Both theories were too flexible.

I realized I didn't have enough confidence in either theory to engage in these heated debates.  I also realized that there were other questions to work on: questions where I could actually tell when I was on the right track, questions where researchers cooperate more and fight less.  So, I eventually decided to quit working on quantum gravity.

It was very painful to do this, since quantum gravity had been my holy grail for decades.  After you've convinced yourself that some problem is the one you want to spend your life working on, it's hard to change your mind.  But when I finally did, it was tremendously liberating.

I wouldn't urge anyone else to quit working on quantum gravity. Someday, someone is going to make real progress.  When this happens, I may even rejoin the subject.  But for now, I'm thinking about other things.  And, I'm making more real progress understanding the universe than I ever did before.

steve_nadis's picture
Contributing Editor to Astronomy Magazine and a freelance writer

When I was 21, I began working for the Union of Concerned Scientists (UCS) in Cambridge, Massachusetts. I was still an undergraduate at the time, planning on doing a brief research stint in energy policy before finishing college and heading to graduate school in physics. That "brief research stint" lasted about seven years, off and on, and I never did make it to graduate school. But the experience was instructive nevertheless.

When I started at UCS in the 1970s, nuclear power safety was a hot topic, and I squared off in many debates against nuclear proponents from utility companies, nuclear engineering departments, and so forth regarding reactor safety, radioactive wastes, and the viability of renewable energy alternatives. The next issue I took on for UCS was the nuclear arms race, which was equally polarized. (The neocons of that day weren't "neo" back then; they were just cons.) As with nuclear safety, there was essentially no common ground between the two sides. Each faction was invariably trying to do the other in, through oral rhetoric and tendentious prose, always looking for new material to buttress their case or undermine that of their opponents.

Even though the organization I worked for was called the Union of Concerned Scientists, and even though many of the staff members there referred to me as a "scientist" (despite my lack of academic credentials), I knew that what I was doing was not science. (Nor were the many physics PhDs in arms control and energy policy doing science.) In the back of my head, I always assumed that "real science" was different — that scientists are guided by facts rather than by ideological positions, personal rivalries, and whatnot.

In the decades since, I've learned that while this may be true in many instances, oftentimes it's not. When it comes to the biggest, most contentious issues in physics and cosmology — such as the validity of inflationary theory, string theory, or the multiverse/landscape scenario — the image of the objective truth seeker, standing above the fray, calmly sifting through the evidence without preconceptions or prejudice, may be less accurate than the adversarial model of our justice system. Both sides, to the extent there are sides on these matters, are constantly assembling their briefs, trying to convince themselves as well as the jury at large, while at the same time looking for flaws in the arguments of the opposing counsel.

This factionalization may stem from scientific intuition, political or philosophical differences, personal grudges, or pure academic competition. It's not surprising that this happens, nor is it necessarily a bad thing. In fact, it's my impression that this approach works pretty well in the law and in science too. It means that, on the big things at least, science will be vetted; it has to withstand scrutiny, pass muster.

But it's not a cold, passionless exercise either. At its heart, science is a human endeavor, carried out by people. When the questions are truly ambitious, it takes a great personal commitment to make any headway — a big investment in energy and in emotion as well. I know from having met with many of the lead researchers that the debates can get heated, sometimes uncomfortably so. More importantly, when you're engaged in an epic struggle like this — trying, for instance, to put together a theory of broad sweep — it may be difficult, if not impossible, to keep an "open mind" because you may be well beyond that stage, having long since cast your lot with a particular line of reasoning. And after making an investment over the course of many years, it's natural to want to protect it. That doesn't mean you can't change your mind — and I know of several cases where this has occurred — but, no matter what you do, it's never easy to shift from forward to reverse.

Although I haven't worked as a scientist in any of these areas, I have written about many of the "big questions" and know how hard it is to get all the facts lined up so that they fit together into something resembling an organic whole. Doing that, even as a mere scribe, involves periods of single-minded exertion, and during that process the issues can almost take on a life of their own, at least while you're actively thinking about them. Before long, of course, you've moved on to the next story and the excitement of the former recedes. As the urgency fades, you start wondering why you felt so strongly about the landscape or eternal inflation or whatever it was that had taken over your desk some months ago.

It's different, of course, for researchers who may stake out an entire career — or at least big chunks thereof — in a certain field. You're obliged to keep abreast of all that's going on of note, which means your interest is continually renewed. As new data comes in, you try to see how it fits in with the pieces of the puzzle you're already grappling with. Or if something significant emerges from the opposing camp, you may instinctively seek out the weak spots, trying to see how those guys messed up this time.

It's possible, of course, that a day may come when, try as you might, you can't find the weak spots in the other guy's story. After many attempts and an equal number of setbacks, you may ultimately have to accede to the view of an intellectual, if not personal, rival. Not that you want to but rather because you can't see any way around it. On the one hand, you might chalk it up as a defeat, something that will hopefully build character down the road. But in the grand scheme of things, it's more of a victory — a sign that sometimes our adversarial system of science actually works.

peter_schwartz's picture
Futurist; Senior Vice President for Global Government Relations and Strategic Planning, Salesforce.com; Author, Inevitable Surprises

In the last few years I have changed my mind about nuclear power. I used to believe that expanding nuclear power was too risky. Now I believe that the risks of climate change are much greater than the risks of nuclear power. As a result we need to move urgently toward a new generation of nuclear reactors.  

What led to the change of view? First I came to believe that the likelihood of major climate-related catastrophes was increasing rapidly and that they were likely to occur much sooner than the simple linear models of the IPCC indicated. My analysis developed as a result of work we did for the defense and intelligence community on the national security implications of climate change. Many regions of the Earth are likely to experience an increasing frequency of extreme weather events. These catastrophic events include megastorms, super tornados, torrential rains and floods, extended droughts, and ecosystem disruptions, all added to steadily rising sea levels. It also became clear that human-induced climate change is ever more at the causal center of the story.

Research by climatologists like William Ruddiman indicates that the climate is sensitive to changes in human societies, ranging from agricultural practices like forest clearing and irrigated rice growing, to major plagues, to the use of fossil fuels. Human societies have often gone to war as a result of the ecological exhaustion of their local environments. So it becomes an issue of war and peace. Will Vietnam simply roll over and die when the Chinese dam what remains of the trickle of the Mekong as an extended drought develops at its source in the Tibetan highlands?

Even allowing for much greater efficiency and a huge expansion of renewable energy, the real fuel of the future is coal, especially in the US, China and India. If all three go ahead with their current plans for building coal-fired electric generating plants, then over the next two decades that alone will double all the CO2 that humankind has put into the atmosphere since the industrial revolution began more than two hundred years ago. And the only meaningful alternative to coal is nuclear power. It is true that we can hope that our ability to capture the CO2 from coal burning and sequester it in various ways will grow, but it will take a decade or more before that technology reaches commercial maturity.

At the same time I also came to believe that the risks of nuclear power are less than we feared. That shift began with a trip to visit the proposed nuclear waste repository at Yucca Mountain in Nevada. A number of Edge folk went, including Stewart Brand, Kevin Kelly, Danny Hillis, and Pierre Omidyar. When it became clear that very long term storage of waste (e.g. 10,000 to 250,000 years) is a silly idea and not meaningfully realistic, we began to question many of the assumptions about the future of nuclear power. The right answer to nuclear waste is temporary storage for perhaps decades and then recycling the fuel, as much of the world already does, not sticking it underground for millennia. We will likely need the fuel we can extract from the waste.

There are emerging technologies for both nuclear power and waste reprocessing that will reduce safety risk, the amount of waste, and most especially the risk of nuclear weapons proliferation, as the new fuel cycle produces no plutonium, the offending substance of concern. And the economics are increasingly favorable, as the French have demonstrated for decades; as a result, the average French citizen produces 70% less CO2 than the average American. We have also learned that the long-term consequences of the worst nuclear accident in history, Chernobyl, were much less than feared.

So the conclusion is that the risks of climate change are far greater than the risks of nuclear power. Furthermore, human skill and knowledge in managing a nuclear system are only likely to grow with time, while the risks of climate change will grow as billions more people get rich and change the face of the planet with their demands for more stuff. Nuclear power is the only source of electricity we can now see that is likely to enable the next three or four billion people who want what we all have to get what they want without radically changing the climate of the Earth.

patrick_bateson's picture
Professor of Ethology, Cambridge University; Co-author, Design for a Life (*Deceased)

Near the end of his life Charles Darwin invited for lunch at Down House Dr Ludwig Büchner, President of the Congress of the International Federation of Freethinkers, and Edward Aveling, a self-proclaimed and active atheist.  The invitation was at their request.  Emma Darwin, devout as ever, was appalled by the thought of entertaining such guests and at table insulated herself from the atheists with an old family friend, the Rev. Brodie Innes, on her right and with her grandson and his friends on her left.  After lunch Darwin and his son Frank smoked cigarettes with the two visitors in Darwin's old study.  Darwin asked them with surprising directness: "Why do you call yourselves atheists?"  He said that he preferred the word agnostic.  While Darwin agreed that Christianity was not supported by evidence, he felt that atheist was too aggressive a term to describe his own position.

For many years what had been good enough for Darwin was good enough for me.  I too described myself as an agnostic.  I had been brought up in a Christian culture and some of the most rational humanists I knew were believers.  I loved the music and art that had been inspired by a belief in God and saw no hypocrisy in participating in the great carol services held in the Chapel of King's College, Cambridge. I did not accept the view of some of my scientific colleagues that the march of science has disposed of religion.  The wish that I and many biologists had to understand biological evolution was not the same as the wish of those with deep religious conviction to understand the meaning of life.

I had, however, led a sheltered life and had never met anybody who was aggressively religious.  I hated, of course, what I had read about the ugly fanaticism of all forms of religious fundamentalism or what I had seen of it on television.  However, such wickedness did not seem to be simply correlated with religious belief since many non-believers were just as totalitarian in their behaviour as the believers.   My unwillingness to be involved in religious debates was shaken at a grand dinner party.  The woman sitting next to me asked me what I did and I told her that I am a biologist.  "Oh well," she said, "then we have plenty to talk about, because I believe that every word of the Bible is literally true."  My heart sank.

As things turned out, we didn't have a great deal to talk about because she wasn't going to be persuaded by any argument that I could throw at her.  She did not seem to wonder about the inconsistencies between the gospels of the New Testament or those between the first and second chapters of Genesis.  Nor was she concerned about where Cain's wife came from.  The Victorians were delicate about such matters and were not going to entertain the thought that Cain married an unnamed sister or, horrors, that his own mother bore his children, his grandchildren and so on down the line of descendants until other women became available.  Nevertheless, the devout Victorians were obviously troubled by the question and they speculated on the existence of pre-Adamite people, angels probably, who would have furnished Cain with his wife.

My creationist dinner companion was not worried by such trivialities and dismissed my lack of politesse as the problem of a scientist being too literal.  However, being too literal was not my problem, it was hers and that of her fellow creationists.   She was hoist with her own petard.  In any event, it was quite simply stupid to try to take on science on its own terms by appealing to the intelligence implicit in natural design.  Science provides orderly methods for examining the natural world.  One of those methods is to develop theories that integrate as much as possible of what we know about the phenomena encompassed by the theory.  The theories provide frameworks for testing the characteristics of the world — and though some theorists may not wish to believe it, their theories are eminently disposable.  Facts are widely shared opinions and, every so often, the consensus breaks — and minds change.  Nevertheless it is crying for the moon to hope that the enormous bodies of thought that have been built up about cosmology, geology and biological evolution are all due to fall apart.  No serious theologian would rest his or her beliefs on such a hope. If faith rests on the supposed implausibility of a current scientific explanation, it is vulnerable to the appearance of a plausible one.  To build on such sand is a crass mistake.

Not long after that dreadful dinner, Richard Dawkins wrote to me to ask whether I would publicly affirm my atheism.  I could see no reason why not.  One of the clear definitions of an atheist is the lack of belief in a God.  That certainly described my position, even though I am disinclined to attack the beliefs of the sincere and thoughtful people with strong religious beliefs whom I continue to meet.  I completed the questionnaire that Richard had sent to me.  I had changed my mind. A dear friend, Peter Lipton, who died suddenly in November 2007, had been assiduous in maintaining Jewish customs in his own home and in his public defence of Israel.  After he died I was surprised to discover that he described himself as a religious atheist.  I should not have been surprised.

john_horgan's picture
Director, Center for Science Writings, Stevens Institute of Technology

A decade ago, I thought the mind-body problem would never be solved, but I've recently, tentatively, changed my mind.

Philosophers and scientists have long puzzled over how matter — more specifically, gray matter — makes mind, and some have concluded that we'll never find the answer. In 1991 the philosopher Owen Flanagan called these pessimists "mysterians," a term he borrowed from the 1960s rock group Question Mark and the Mysterians.

One of the earliest mysterians was the German genius Leibniz, who wrote: "Suppose that there be a machine, the structure of which produces thinking, feeling, and perceiving; imagine this machine enlarged but preserving the same proportions, so that you could enter it as if it were a mill… What would you observe there? Nothing but parts which push and move each other, and never anything that could explain perception."

A decade ago I was a hard-core mysterian, because I couldn't imagine what form a solution to the mind-body problem might take. Now I can. If there is a solution, it will come in the form of a neural code, an algorithm, set of rules or syntax that transforms the electrochemical pulses emitted by brain cells into perceptions, memories, decisions, thoughts.

Until recently, a complete decoding of the brain seemed impossibly remote, because technologies for probing living brains were so crude. But over the past decade the temporal and spatial resolution of magnetic resonance imaging, electroencephalography and other external scanning methods has leaped forward. Even more importantly, researchers keep improving the design of microelectrode arrays that can be embedded in the brain to receive messages from — and transmit them to — thousands of individual neurons simultaneously.

Scientists are gleaning information about neural coding not only from non-human animals but also from patients who have had electrodes implanted in their brains to treat epilepsy, paralysis, psychiatric illnesses and other brain disorders. Given these advances, I'm cautiously optimistic that scientists will crack the neural code within the next few decades.

The neural code may resemble relativity and quantum mechanics, in the following sense. These fundamental theories have not resolved all our questions about physical reality. Far from it. Phenomena such as gravity and light still remain profoundly puzzling. Physicists have nonetheless embraced relativity and quantum mechanics because they allow us to predict and manipulate physical reality with extraordinary precision. Relativity and quantum mechanics work.

In the same way, the neural code is unlikely to resolve the mind-body problem to everyone's satisfaction. When it comes to consciousness, many of us seek not an explanation but a revelation, which dispels mystery like sun burning off a morning fog. And yet we will embrace a neural-code theory of mind if it works — that is, if it helps us predict, heal and enhance ourselves. If we can control our minds, who cares if we still cannot comprehend them?

scott_atran's picture
Anthropologist; Emeritus Research Director, Centre National de la Recherche Scientifique, Institut Jean Nicod, Paris; Co-Founder, Centre for the Resolution of Intractable Conflict, University of Oxford; Author, Talking to the Enemy

I am an anthropologist who has traveled to many places and met many different kinds of people. I try to know what it is like to be someone very different from me in order to better understand what it means to be human. But it is only in the last few years that my thinking has deeply changed on what drives major differences between animal and human behavior, such as willingness to kill and die for a cause.

I once thought that individual cognition and personality, influences from broad socio-economic factors, and degree of devotion to religious or political ideology were determinant. Now I see friendship and other aspects of small group dynamics, especially acting together, trumping almost everything else.

Here's an anecdote that kick-started my thinking about this.

While preparing a psychological experiment on the limits of rational choice with Muslim mujahedin on the Indonesian island of Sulawesi, I noticed tears welling in the eyes of my traveling companion and bodyguard, Farhin (who had earlier hosted 9/11 mastermind Khalid Sheikh Muhammed in Jakarta and helped to blow up the Philippine ambassador's residence). Farhin had just heard that a young man had recently been killed in a skirmish with Christian fighters.

"Farhin," I asked, "you knew the boy?"

"No," he said, "but he was only in the jihad a few weeks. I've been fighting since Afghanistan [late 1980s] and still not a martyr."

I tried consoling him, voicing my own disbelief: "But you love your wife and children."

"Yes," he nodded sadly, "God has given this, and I must have faith in His way."

I had come to the limits of my understanding of the other. There was something in Farhin that was incalculably different from me yet almost everything else was not.

"Farhin, in all those years, after you and the others came back from Afghanistan, how did you stay a part of the Jihad?" I asked.

I expected him to tell me about his religious fervor and devotion to a Great Cause.

"The (Indonesian) Afghan Alumni never stopped playing soccer together," he replied matter-of-factly, "that's when we were closest together in the camp." He smiled, "except when we went on vacation to fight the communists, we played soccer and remained brothers."

Maybe people don't kill and die simply for a cause. They do it for friends — campmates, schoolmates, workmates, soccer buddies, body-building buddies, pin-ball buddies — who share a cause. Some die for dreams of jihad — of justice and glory — but nearly all in devotion to a family-like group of friends and mentors, of "fictive kin."

Then it became embarrassingly obvious: it is no accident that nearly all religious and political movements express allegiance through the idiom of the family — Brothers and Sisters, Children of God, Fatherland, Motherland, Homeland, and the like. Nearly all such movements require subordination, or at least assimilation, of any real family (genetic kinship) to the larger imagined community of "Brothers and Sisters." Indeed, the complete subordination of biological loyalty to ideological loyalty for the Ikhwan, the "Brotherhood" of the Prophet, is Islam's original meaning, "Submission."

My research team has analyzed every attack by Farhin and his friends, who belong to Southeast Asia's Jemaah Islamiyah (JI). I have interviewed key JI operatives (including co-founder Abu Bakar Ba'asyir) and counterterrorism officials who track JI. Our data show that support for suicide actions is triggered by moral outrage at perceived attacks against Islam and sacred values, but this is converted to action as a result of small-world factors. Out of millions who express sympathy with global jihad, only a few thousand show willingness to commit violence. They tend to go to violence in small groups consisting mostly of friends, and some kin. These groups arise within specific "scenes": neighborhoods, schools (classes, dorms), workplaces and common leisure activities (soccer, mosque, barbershop, café, online chat-rooms).

Three other examples:

1. In Al Qaeda, about 70 percent join with friends, 20 percent with kin. Our interviews with friends of the 9/11 suicide pilots reveal they weren't "recruited" into Qaeda. They were Middle Eastern Arabs isolated in a Moroccan Islamic community in a Hamburg suburb. Seeking friendship, they started hanging out after mosque services, in local restaurants and barbershops, eventually living together when they self-radicalized. They wanted to go to Chechnya, then Kosovo, only landing in a Qaeda camp in Afghanistan as a distant third choice. 

2. Five of the seven plotters in the 2004 Madrid train bombing who blew themselves up when cornered by police grew up in the tumble-down neighborhood of Jemaa Mezuaq in Tetuan, Morocco. In 2006, at least five more young Mezuaq men went to Iraq on "martyrdom missions." One in the Madrid group was related to one in the Iraq group by marriage; each group included a pair of brothers. All went to the same elementary school, all but one to the same high school. They played soccer as friends, went to the same mosque, mingled in the same cafes. 

3. Hamas's most sustained suicide bombing campaign in 2003 (Hamas suspended bombings in 2004) involved seven soccer buddies from Hebron's Abu Katila neighborhood, including four kin (Kawasmeh clan).

Social psychology tends to support the finding that "groupthink" often trumps individual volition and knowledge, whether in our society or any other. But for Americans bred on a constant diet of individualism the group is not where one generally looks for explanation. This was particularly true for me, but the data caused me to change my mind.

alan_alda's picture
Actor; Writer; Director; Host, PBS program Brains on Trial; Author, Things I Overheard While Talking to Myself

Until I was twenty I was sure there was a being who could see everything I did and who didn't like most of it. He seemed to care about minute aspects of my life, like on what day of the week I ate a piece of meat. And yet, he let earthquakes and mudslides take out whole communities, apparently ignoring the saints among them who ate their meat on the assigned days.  Eventually, I realized that I didn't believe there was such a being. It didn't seem reasonable. And I assumed that I was an atheist. 

As I understood the word, it meant that I was someone who didn't believe in a God; I was without a God. I didn't broadcast this in public because I noticed that people who do believe in a god get upset to hear that others don't. (Why this is so is one of the most pressing of human questions, and I wish a few of the bright people in this conversation would try to answer it through research.) 

But, slowly I realized that in the popular mind the word atheist was coming to mean something more: a statement that there couldn't be a God. God was, in this formulation, not possible, and this was something that could be proved. But I had been changed by eleven years of interviewing six or seven hundred scientists around the world on the television program Scientific American Frontiers. And that change was reflected in how I would now identify myself. 

The most striking thing about the scientists I met was their complete dedication to evidence. It reminded me of the wonderfully plainspoken words of Richard Feynman, who felt it was better not to know than to know something that was wrong. The problem for me was that just as I couldn't find any evidence that there was a god, I couldn't find any that there wasn't a god. I would have to call myself an agnostic. At first, this seemed a little wimpy, but after a while I began to hope it might be an example of Feynman's heroic willingness to accept, even glory in, uncertainty.

I still don't like the word agnostic. It's too fancy. I'm simply not a believer. But, as simple as this notion is, it confuses some people. Someone wrote a Wikipedia entry about me, identifying me as an atheist because I'd said in a book I wrote that I wasn't a believer. I guess in a world uncomfortable with uncertainty, an unbeliever must be an atheist, and possibly an infidel. This gets us back to that most pressing of human questions: why do people worry so much about other people's holding beliefs other than their own? This is the question that makes the subject over which I changed my mind something of global importance, and not just a personal, semantic dalliance.

Do our beliefs identify us the way our language, foods and customs do? Is this why people who think the universe chugs along on its own are as repellent to some as people who eat live monkey brains are to others? Are we saying, you threaten my identity with your infidelity to my beliefs? You're trying to kill me with your thoughts, so I'll get you first with this stone? And, if so, is this really something that can be resolved through reasonable discourse? 

Maybe this is an even more difficult problem; one that's written in the letters that spell out our DNA. Why is the belief in God and Gods so ubiquitous? Does belief in a higher power confer some slight health benefit, and has natural selection favored those who are genetically inclined to believe in such a power — and is that why so many of us are inclined to believe? (Whether or not a God actually exists, the tendency to believe we'll be saved might give us the strength to escape sickness and disaster and live the extra few minutes it takes to replicate ourselves.)

These are wild speculations, of course, and they're probably based on a desperate belief I once had that we could one day understand ourselves. 

But, I might have changed my mind on that one, too.

sherry_turkle's picture
Abby Rockefeller Mauzé Professor of the Social Studies of Science and Technology, MIT; Internet Culture Researcher; Author, The Empathy Diaries

Throughout my academic career – when I was studying the relationship between psychoanalysis and society and when I moved to the social and psychological studies of technology – I've seen myself as a cultural critic. I don't mention this to stress how lofty a job I gave myself, but rather to note that I saw the job as theoretical in its essence. Technologists designed things; I was able to offer insights about the nature of people's connections to them, the mix of feelings in their thoughts, the way passions mixed with cognition. Trained in psychoanalysis, I didn't see my stance as therapeutic, but it did borrow from the reticence of that discipline. I was not there to meddle. I was there to listen and interpret. Over the past year, I've changed my mind: our current relationship with technology calls forth a more meddlesome me.

In the past, because I didn't criticize but tried to analyze, some of my colleagues found me complicit with the agenda of technology-builders. I didn't like that much, but understood that this was perhaps the price to pay for maintaining my distance, as Red Riding Hood's wolf would say, "the better to hear them with." This year I realized that I had changed my stance. In studying reactions to advanced robots, robots that look you in the eye, remember your name, and track your motions, I found more and more people who were considering such robots as friends, confidants, and, as they imagined technical improvements, even as lovers. I became less distanced. I began to think about technological promiscuity. Are we so lonely that we will really love whatever is put in front of us?

I kept listening for what stood behind the new promiscuity – my habit of listening didn't change – and I began to get evidence of a certain fatigue with the difficulties of dealing with people. A female graduate student came up to me after a lecture and told me that she would gladly trade in her boyfriend for a sophisticated humanoid robot as long as the robot could produce what she called "caring behavior." She told me that she needed "the feeling of civility in the house" and did not want to be alone. She said: "If the robot could provide a civil environment, I would be happy to help produce the illusion that there is somebody really with me." What she was looking for, she told me, was a "no-risk relationship" that would stave off loneliness; a responsive robot, even if it was just exhibiting scripted behavior, seemed better to her than a demanding boyfriend. I thought she was joking. She was not.

In a way, I should not have been surprised. For a decade I had studied the appeal of sociable robots. They push our Darwinian buttons. They are programmed to exhibit the kind of behavior we have come to associate with sentience and empathy, which leads us to think of them as creatures with intentions, emotions, and autonomy. Once people see robots as creatures, they feel a desire to nurture them. With this feeling comes the fantasy of reciprocation. As you begin to care for these creatures, you want them to care about you.

And yet, in the past, I had found that people approached computational intelligence with a certain "romantic reaction." Their basic position was that simulated thinking might be thinking, but simulated feeling was never feeling and simulated love was never love. Now, I was hearing something new. People were more likely to tell me that human beings might be "simulating" their feelings, or as one woman put it: "How do I know that my lover is not just simulating everything he says he feels?" Everyone I spoke with was busier than ever with their e-mail and virtual friendships, with their social networking and always-on/always-on-you PDAs. Someone once said that loneliness is failed solitude. Could no one stand to be alone anymore before they turned to a device? Were cyberconnections paving the way to think that a robotic one might be sufficient unto the day? I was not left contemplating the cleverness of engineering but the vulnerabilities of people.

Last spring I had a public exchange in which a colleague wrote about the "I-Thou" dyad of people and robots and I could only see Martin Buber spinning in his grave. The "I" was the person in the relationship, but how could the robot be the "Thou"? In the past, I would have approached such an interchange with discipline, interested only in the projection of feeling onto the robot. But I had taken that position when robots seemed only an evocative object for better understanding people's hopes and frustrations. Now, people were doing more than fantasizing. There was a new earnestness. They saw the robot in the wings and were excited to welcome it onstage.

It seemed no time at all before a book came out called Love and Sex with Robots and a reporter from Scientific American was interviewing me about the psychology of robot marriage. The conversation was memorable, and I warned my interviewer that I would use it as data. He asked me if my opposition to people marrying robots put me in the same camp as those who oppose the marriage of lesbians or gay men. I tried to explain that just because I didn't think people could marry machines didn't mean that I objected to any mix of people with people. He accused me of species chauvinism. Wasn't this the kind of talk that homophobes once used, not considering gays as "real" people? Right there I changed my mind about my vocation. I changed my mind about where my energies were most needed. I was turning in my card as the kind of cultural critic I had always envisaged being. Now I was a different kind of cultural critic. I wasn't neutral; I was very sad.

marco_iacoboni's picture
Neuroscientist; Professor of Psychiatry & Biobehavioral Sciences, David Geffen School of Medicine, UCLA; Author, Mirroring People

Some time ago I thought that rational, enlightened thinking would eventually eradicate irrational thinking and supernatural beliefs. How could it be otherwise? Scientists and enlightened people have facts and logical arguments on their side, whereas people 'on the other side' have only unprovable beliefs and bad reasoning. I guess I was wrong, way wrong. Thirty years later, irrational thinking and supernatural beliefs are much stronger than they used to be; they permeate our society and others, and it does not seem they will go away any time soon. How is it possible? Shouldn't 'history' always move forward? What went wrong? What can we do to fix this backward movement toward the irrational?

The problem is that science still has a marginal role in our public discourse. Indeed, there are no science books on the New York Times 100 Notable Books of the Year list, no science category in the Economist's Books of the Year 2007, and only Oliver Sacks in the New Yorker's list of Books From Our Pages.

Why does science have such a marginal role? I think there is more than one reason. First, scientists tend to confine themselves within well-defined, narrow boundaries. They tend not to claim any wisdom outside the confines of their specialties. By doing so, they marginalize themselves and make it difficult for science to have an impact on society. It is high time for scientists to step up and claim wisdom outside their specialty.

There are also other ways, however, to have an impact on our society. For instance, by making some changes in scientific practices. In these days, scientific practices are dominated by the 'hypothesis testing' paradigm. While there is nothing wrong with hypothesis testing, it is definitely wrong to confine all science only to hypothesis testing. This approach precludes the study of complex, real world phenomena, the phenomena that are important to people outside academia. It is time to perform more broad-based descriptive studies on issues that are highly relevant to our society.

Another dominant practice in science (definitely in neuroscience, my own field) is to study phenomena from an atemporal perspective. Only the timeless seems to matter to most neuroscientists. Even time itself tends to be studied from this 'platonic ideal' perspective. I guess this approach stems from the general tendency of science to adopt the detached 'view from nowhere,' as Thomas Nagel puts it. If there is one major thing we have learned from modern science, however, it is that there is no such thing; there is no 'view from nowhere.' It is time for scientists (especially neuroscientists) to commit to the study of the finite and temporal. The issues that matter 'here and now' are the issues that people relate to.

How should we do all this? One way of disseminating the scientific method in our public discourse is to use the tools and approaches of science to investigate issues that are salient to the general public. In neuroscience, we have now powerful tools that let us do this. We can study how people make decisions and form affiliations not from a timeless perspective, but from the perspective of what is salient 'here and now.' These are the kind of studies that naturally engage people. While they read about these studies, people are more likely to learn scientific facts (even the 'atemporal' ones) and to absorb the scientific method and reasoning. My hope is that by being exposed to and engaged by scientific facts, methods, and reasoning, people will eventually find it difficult to believe unprovable things.

steven_pinker's picture
Johnstone Family Professor, Department of Psychology; Harvard University; Author, Rationality

Ten years ago, I wrote:

For ninety-nine percent of human existence, people lived as foragers in small nomadic bands. Our brains are adapted to that long-vanished way of life, not to brand-new agricultural and industrial civilizations. They are not wired to cope with anonymous crowds, schooling, written language, government, police, courts, armies, modern medicine, formal social institutions, high technology, and other newcomers to the human experience.


Are we still evolving? Biologically, probably not much. Evolution has no momentum, so we will not turn into the creepy bloat-heads of science fiction. The modern human condition is not conducive to real evolution either. We infest the whole habitable and not-so-habitable earth, migrate at will, and zigzag from lifestyle to lifestyle. This makes us a nebulous, moving target for natural selection. If the species is evolving at all, it is happening too slowly and unpredictably for us to know the direction. (How the Mind Works)

Though I stand by a lot of those statements, I've had to question the overall assumption that human evolution pretty much stopped by the time of the agricultural revolution. When I wrote these passages, completion of the Human Genome Project was several years away, and so was the use of statistical techniques that test for signs of selection in the genome. Some of these searches for "Darwin's Fingerprint," as the technique has been called, have confirmed predictions I had made. For example, the modern version of the gene associated with language and speech has been under selection for several hundred thousand years, and has even been extracted from a Neanderthal bone, consistent with my hypothesis (with Paul Bloom) that language is a product of gradual natural selection. But the assumption of no recent human evolution has not fared as well.

New results from the labs of Jonathan Pritchard, Robert Moyzis, Pardis Sabeti, and others have suggested that thousands of genes, perhaps as much as ten percent of the human genome, have been under strong recent selection, and the selection may even have accelerated during the past several thousand years. The numbers are comparable to those for maize, which has been artificially selected beyond recognition during the past few millennia.

If these results hold up, and apply to psychologically relevant brain function (as opposed to disease resistance, skin color, and digestion, which we already know have evolved in recent millennia), then the field of evolutionary psychology might have to reconsider the simplifying assumption that biological evolution was pretty much over and done with 10,000 to 50,000 years ago.

And if so, the result could be evolutionary psychology on steroids. Humans might have evolutionary adaptations not just to the conditions that prevailed for hundreds of thousands of years, but also to some of the conditions that have prevailed only for millennia or even centuries. Currently, evolutionary psychology assumes that any adaptations to post-agricultural ways of life are 100% cultural.

Though I suspect some revisions will be called for, I doubt they will be radical, for two reasons. One is that many aspects of the human (and ape) environments have been constant for a much longer time than the period in which selection has recently been claimed to operate. Examples include dangerous animals and insects, toxins and pathogens in spoiled food and other animal products, dependent children, sexual dimorphism, risks of cuckoldry and desertion, parent-offspring conflict, risk of cheaters in cooperation, fitness variation among potential mates, causal laws governing solid bodies, presence of conspecifics with minds, and many others. Recent adaptations would have to be icing on this cake: quantitative variations within complex emotional and cognitive systems.

The other is the empirical fact that human races and ethnic groups are psychologically highly similar, if not identical. People everywhere use language, get jealous, are selective in choosing mates, find their children cute, are afraid of heights and the dark, experience anger and disgust, learn names for local species, and so on. If you adopt children from a technologically undeveloped part of the world, they will fit into modern society just fine. To the extent that this is true, there can't have been a whole lot of uneven psychological evolution postdating the split among the races 50,000 to 100,000 years ago (though there could have been parallel evolution in all the branches).

daniel_gilbert's picture
Professor of Psychology at Harvard University

Six years ago, I changed my mind about the benefit of being able to change my mind.

In 2002, Jane Ebert and I discovered that people are generally happier with decisions when they can't undo them. When subjects in our experiments were able to undo their decisions they tended to consider both the positive and negative features of the decisions they had made, but when they couldn't undo their decisions they tended to concentrate on the good features and ignore the bad. As such, they were more satisfied when they made irrevocable than revocable decisions. Ironically, subjects did not realize this would happen and strongly preferred to have the opportunity to change their minds.

Now up until this point I had always believed that love causes marriage. But these experiments suggested to me that marriage could also cause love. If you take data seriously you act on it, so when these results came in I went home and proposed to the woman I was living with. She said yes, and it turned out that the data were right: I love my wife more than I loved my girlfriend.

The willingness to change one's mind is a sign of intelligence, but the freedom to do so comes at a cost.

richard_wrangham's picture
Ruth Moore Professor of Biological Anthropology, Curator of Primate Behavioral Biology at Harvard University; Author, Catching Fire: How Cooking Made Us Human

Like many people since before Darwin, I used to think that human origins were explained by meat-eating. But three epiphanies have changed my mind. I now think that cooking was the major advance that made us human.

First, an improved fossil record has shown that meat-eating arose too early to explain human origins. Significant meat-eating by our ancestors is first attested in the pre-human world of 2.6 million years ago, when hominids began to flake stones into simple knives. Around the same time there appears a fossil species variously called Australopithecus habilis or Homo habilis. These habilis presumably made the stone knives, but they were not human. They were Calibans, missing links with an intricate mixture of advanced and primitive traits. Their brains, being twice the size of ape brains, tell of incipient humanity; but as Bernard Wood has stressed, their chimpanzee-sized bodies, long arms, big guts and jutting faces made them ape-like. Meat-eating likely explains the origin of habilis.

Humans emerged almost a million years later, when habilis evolved into Homo erectus. At 1.6 million years ago Homo erectus were the size and shape of people today. Their brains were bigger than those of habilis, and they walked and ran as fluently as we do. Their mouths were small and their teeth relatively dwarfed — a pygmy-faced hominoid, just like all later humans. To judge from the reduced flaring of their rib-cage, they had lost the capacious guts that allow great apes and habilis to eat large volumes of plant food. Equally strange for a "helpless and defenceless" species, they had also lost their climbing ability, forcing them to sleep on the ground — a surprising commitment in a continent full of big cats, sabretooths, hyenas, rhinos and elephants.

So the question of what made us human is the question of why a population of habilis became Homo erectus. My second epiphany was a double insight: humans are biologically adapted to eating cooked diets, and the signs of this adaptation start with Homo erectus. Cooked food is the signature feature of human diet. It not only makes our food safe and easy to eat, but it also grants us large amounts of energy compared to a raw diet, obviating the need to ingest big meals. Cooking softens food too, thereby making eating so speedy that as eaters of cooked food, we are granted many extra hours of free time every day.

So cooked food allows our guts, teeth and mouths to be small, while giving us abundant food energy and freeing our time. Cooked food, of course, requires the control of fire; and a fire at night explains how Homo erectus dared sleep on the ground.

 Cooked food has so many important biological effects that its adoption should be clearly marked in the fossil record by signals of a reduced digestive system and increased energy use. While such signs are clear at the origin of Homo erectus, they are not found later in human evolution. The match between the biological merits of cooked food and the evolutionary changes in Homo erectus is thus so obvious that except for a scientific obstacle, I believe it would have been noticed long ago. The obstacle is the insistence of archaeologists that the control of fire is not firmly evidenced before about a quarter of a million years ago. As a result of this archaeological caution, the idea that humans could have used fire before about 250,000 to 500,000 years ago has long been sidelined. 

But I finally realized that the archaeological record decays so steadily that it gives us no information about when fire was first controlled. The fire record is better at 10,000 years than at 20,000 years; at 50,000 years than 100,000 years; at 250,000 years than 500,000 years; and so on. Evidence for the control of fire is always better when it is closer to the present, but during the course of human evolution it never completely goes away. There is only one date beyond which no evidence for the control of fire has been found: 1.6 million years ago, around the time when Homo erectus evolved. Between now and then, the erratic record tells us only one thing: the archaeological evidence is incapable of telling us when fire was first controlled. The biological evidence is more helpful. That was my third epiphany.
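The decay argument can be made vivid with a toy survival model: if each trace of ancient fire has a fixed chance of surviving per unit of elapsed time, the expected number of surviving traces falls off exponentially with age. Under that assumption (the half-life and site rate below are arbitrary illustrative values, not empirical estimates), sparse evidence at great age is exactly what we should expect even if fire was in continuous use:

```python
HALF_LIFE_YEARS = 50_000     # assumed half-life of a surviving fire trace
SITES_PER_PERIOD = 100.0     # assumed constant rate of fire use through time

def expected_traces(age_years: float) -> float:
    """Expected number of traces surviving from sites of a given age,
    if archaeological evidence decays exponentially with time."""
    return SITES_PER_PERIOD * 0.5 ** (age_years / HALF_LIFE_YEARS)

for age in (10_000, 100_000, 500_000, 1_600_000):
    print(f"{age:>9,} years ago: {expected_traces(age):.2e} expected surviving traces")
# The record thins steadily with age: even under uninterrupted fire use,
# traces from 1.6 million years ago are all but gone.
```

On this model the absence of very old fire evidence carries almost no information about when fire was first controlled, which is precisely the point of the paragraph above.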

The origin of Homo erectus is too late for meat-eating; the adoption of cooking solves the problem; and archaeology does not gainsay it. In a roast potato and a hunk of beef we have a new theory of what made us human.

paul_davies's picture
Theoretical physicist; cosmologist; astro-biologist; co-Director of BEYOND, Arizona State University; principal investigator, Center for the Convergence of Physical Sciences and Cancer Biology; Author, The Eerie Silence and The Cosmic Jackpot

For most of my career, I believed that the bedrock of physical reality lay with the laws of physics — magnificent, immutable, transcendent, universal, infinitely-precise mathematical relationships that rule the universe with as sure a hand as that of any god. And I had orthodoxy on my side, for most of my physicist colleagues also believe that these perfect laws are the levitating superturtle that holds up the mighty edifice we call nature, as disclosed through science. About three years ago, however, it dawned on me that such laws are an extraordinary and unjustified idealization.

How can we be sure that the laws are infinitely precise? How do we know they are immutable, and apply without the slightest change from the beginning to the end of time? Furthermore, the laws themselves remain unexplained. Where do they come from? Why do they have the form that they do? Indeed, why do they exist at all? And if there are many possible such laws, then, as Stephen Hawking has expressed it, what is it that "breathes fire" into a particular set of laws and makes a universe for them to govern?

So I did a U-turn and embraced the notion of laws as emergent with the universe rather than stamped on it from without like a maker's mark. The "inherent" laws I now espouse are not absolute and perfect, but are intrinsically fuzzy and flexible, although for almost all practical purposes we don't notice the tiny flaws.

Why did I change my mind? I am not content to merely accept the laws of physics as a brute fact. Rather, I want to explain the laws, or at least explain the form they have, as part of the scientific enterprise. One of the oddities about the laws is the well known fact that they are weirdly well-suited to the emergence of life in the universe. Had they been slightly different, chances are there would be no sentient beings around to discover them.

The fashionable explanation for this — that there is a multiplicity of laws in a multiplicity of parallel universes, with each set of laws fixed and perfect within its host universe — is a nice try, but still leaves a lot unexplained. And simply saying that the laws "just are" seems no better than declaring "God made them that way."

The orthodox view of perfect physical laws is a thinly-veiled vestige of monotheism, the reigning world view that prevailed at the birth of modern science. If we want to explain the laws, however, we have to abandon the theological legacy that the laws are fixed and absolute, and replace it with the notion that the states of the world and the laws that link them form a dynamic interdependent unity.

lawrence_m_krauss's picture
Theoretical Physicist; Foundation Professor, School of Earth and Space Exploration and Physics Department, ASU; Author, The Greatest Story Ever Told . . . So Far

Like 99% of particle physicists, and 95% of cosmologists (perhaps 98% of theorists and 90% of observers, to be more specific), I was relatively certain that there was precisely enough matter in the universe to make it geometrically flat. What does geometrically flat mean? Well, according to general relativity it means there is a precise balance between the positive kinetic energy associated with the expansion of space and the negative potential energy associated with the gravitational attraction of matter in the universe, so that the total energy is precisely zero. This is not only mathematically attractive; the only theory we have that explains why the universe looks the way it does tends to predict a flat universe today.
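The "geometrically flat" balance described above can be put in numbers: in the standard Friedmann picture, a flat universe is one whose total density equals the critical density, rho_c = 3 H0^2 / (8 pi G). A minimal sketch of that calculation (the Hubble constant here is an assumed round value for illustration):

```python
import math

G = 6.674e-11            # Newton's gravitational constant, m^3 kg^-1 s^-2
H0_KM_S_MPC = 70.0       # assumed Hubble constant, km/s per megaparsec
M_PER_MPC = 3.0857e22    # metres in one megaparsec

H0 = H0_KM_S_MPC * 1_000 / M_PER_MPC   # Hubble constant in s^-1

# Critical density: the density at which the expansion's kinetic energy
# exactly balances gravitational attraction, giving zero total energy.
rho_crit = 3 * H0**2 / (8 * math.pi * G)   # kg/m^3

print(f"critical density ~ {rho_crit:.2e} kg/m^3")
# Roughly 9e-27 kg/m^3: a few hydrogen atoms per cubic metre.
```

A universe whose average density exceeds this value is closed; one that falls short is open; the flat case the essay describes sits exactly on the boundary.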

Now, the only problem with this prediction is that visible matter in the universe accounts for only a few percent of the total amount of matter required to make the universe flat. Happily, however, during the period from 1970 or so to the early 1990's it had become abundantly clear that our galaxy, and indeed all galaxies, are dominated by 'dark matter'... material that does not shine or, as far as we can tell, interact electromagnetically. This material, which we think is made up of a new type of elementary particle, accounts for at least 10 times as much matter as can be accounted for in stars, hot gas, etc. With the inference that dark matter existed in such profusion, it was natural to suspect that there was enough of it to account for a flat universe.

The only problem was that the more our observations of the universe improved, the less evidence there appeared to be that there was enough dark matter to result in a flat universe. Moreover, all other indicators of cosmology, from the age of the universe to the data on large-scale structure, began to suggest that a flat universe dominated by dark matter was inconsistent with observation. In 1995, this led my colleague Mike Turner and me to suggest that the only way a flat universe could be consistent with observation was if most of the energy, indeed almost 75% of the total energy, was contributed not by matter, but by empty space!
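The arithmetic behind that 1995 suggestion can be sketched as a toy energy budget: if matter of all kinds supplies only about a quarter of the critical density, flatness forces the remaining three quarters to come from something else, here assigned to empty space. The rounded Omega values below are illustrative, not the precise published numbers:

```python
# Energy budget as fractions of the critical density (Omega values).
omega_visible = 0.02     # stars and hot gas: a few percent at most
omega_dark = 0.23        # inferred dark matter, over ten times the visible

omega_matter = omega_visible + omega_dark   # all matter: about 0.25

# Flatness requires the Omegas to sum to exactly 1;
# the shortfall is assigned to the energy of empty space.
omega_vacuum = 1.0 - omega_matter

print(f"matter: {omega_matter:.2f}  vacuum: {omega_vacuum:.2f}")
# matter: 0.25  vacuum: 0.75
```

The design of the argument is simply subtraction: measure the matter, hold the total fixed at 1 by fiat of flatness, and whatever is left over must live in empty space.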

As heretical as our suggestion was, to be fair, I think we were being more provocative than anything, because the one thing that everyone knew was that the energy of empty space had to be precisely zero. The alternative would have resulted in something very much like the 'Cosmological Constant' first proposed by Einstein when he incorrectly thought the universe was static and needed some exotic new adjustment to his equations of general relativity, so that the attractive force of gravity could be balanced by a repulsive force associated with empty space. That alternative was just too ugly to imagine.

And then, in 1998, two teams measuring the recession velocities of distant galaxies, using observations of exploding stars within them to probe their distances from us, simultaneously discovered something amazing. The expansion of the universe seemed to be speeding up with time, not slowing down, as any sensible universe should be doing! Moreover, if one assumed this acceleration was caused by a new repulsive force that would arise throughout empty space if the energy of empty space was not precisely zero, then the amount of extra energy needed to produce the observed acceleration was precisely the amount needed to account for a flat universe!

Now here is the really weird thing. Within a year after the observation of an accelerating universe, even though the data were not yet definitive, I and pretty well everyone else in the community who had previously thought there was enough dark matter to result in a flat universe, and who had previously thought the energy of empty space must be precisely zero, had completely changed our minds... All of the signals were just too overwhelming to continue to hold on to our previous rosy picture... even if the alternative was so crazy that none of our fundamental theories could yet account for it.

So we are now pretty sure that the dominant energy-stuff in our universe isn't normal matter, and isn't dark matter, but rather is associated with empty space!  And what is worse (or better, depending upon your viewpoint) is that our whole picture of the possible future of the universe has changed.  An accelerating universe will carry away almost everything we now see, so that in the far future our galaxy will exist alone in a dark, and seemingly endless, void...

And that is what I find so satisfying about science.  Not just that I could change my own mind because the evidence of reality forced me to... but that the whole community could throw out a cherished notion, and so quickly!  That is what makes science different from religion, and that is what makes it worth continuing to ask questions about the universe... because it never fails to surprise us.

sean_carroll's picture
Theoretical Physicist, Caltech; Author, Something Deeply Hidden

Growing up as a young proto-scientist, I was always strongly anti-establishmentarian, looking forward to overthrowing the System as our generation's new Galileo.  Now I spend a substantial fraction of my time explaining and defending the status quo to outsiders.  It's very depressing.

As an undergraduate astronomy student I was involved in a novel and exciting test of Einstein's general relativity — measuring the precession of orbits, just like Mercury's in the Solar System, but using massive eclipsing binary stars.  What made it truly exciting was that the data disagreed with the theory!  (Which they still do, by the way.)  How thrilling is it to have the chance to overthrow Einstein himself?  Of course there are more mundane explanations — the stars are tilted, or there is an invisible companion star perturbing their orbits, and these hypotheses were duly considered.  But I wasn't very patient with such boring possibilities — it was obvious to me that we had dealt a crushing blow to a cornerstone of modern physics, and the Establishment was just too hidebound to admit it.

Now I know better.  Physicists who are experts in the field tend to be skeptical of experimental claims that contradict general relativity, not because they are hopelessly encumbered by tradition, but because Einstein's theory has passed a startlingly diverse array of experimental tests.  Indeed, it turns out to be almost impossible to change general relativity in a way that would be important for those binary stars, but which would not have already shown up in the Solar System.  Experiments and theories don't exist in isolation — they form a tightly connected web, in which changes to any one piece tend to reverberate through various others.

So now I find myself cast as a defender of scientific orthodoxy — from classics like relativity and natural selection, to modern wrinkles like dark matter and dark energy.  In science, no orthodoxy is sacred, or above question — there should always be a healthy exploration of alternatives, and I have always enjoyed inventing new theories of gravity or cosmology, keeping in mind the variety of evidence in favor of the standard picture.  But there is also an unhealthy brand of skepticism, proceeding from ignorance rather than expertise, which insists that any consensus must flow from a reluctance to face up to the truth, rather than an appreciation of the evidence.  It's that kind of skepticism that keeps showing up in my email.  Unsolicited.

Heresy is more romantic than orthodoxy.  Nobody roots for Goliath, as Wilt Chamberlain was fond of saying.  But in science, ideas tend to grow into orthodoxy for good reasons.  They fit the data better than the alternatives.  Many casual heretics can't be bothered with all the detailed theoretical arguments and experimental tests that support the models they hope to overthrow — they have a feeling about how the universe should work, and are convinced that history will eventually vindicate them, just as it did Galileo.

What they fail to appreciate is that, scientifically speaking, Galileo overthrew the system from within.  He understood the reigning orthodoxy of his time better than anyone, so he was better able to see beyond it.  Our present theories are not complete, and nobody believes they are the final word on how Nature works.  But finding the precise way to make progress, to pinpoint the subtle shift of perspective that will illuminate a new way of looking at the world, will require an intimate familiarity with our current ideas, and a respectful appreciation of the evidence supporting them. 

Being a heretic can be fun; but being a successful heretic is mostly hard work.

Marti Hearst
Computer Scientist, UC Berkeley, School of Information; Author, Search User Interfaces

To me, having my worldview entirely altered is among the most fun parts of science. One mind-altering event occurred during graduate school. I was studying the field of Artificial Intelligence with a focus on Natural Language Processing. At that time there were intense arguments amongst computer scientists, psychologists, and philosophers about how to represent concepts and knowledge in computers, and whether those representations reflected, in any realistic way, how people represent knowledge. Most researchers thought that language and concepts should be represented in a diffuse manner, distributed across myriad brain cells in a complex network. But some researchers talked about the existence of a "grandmother cell," meaning that one neuron in the brain (or perhaps a concentrated group of neurons) was entirely responsible for representing the concept of, say, your grandmother. I thought this latter view was hogwash.

But one day in the early 90's I heard a story on National Public Radio about children who had Wernicke's aphasia, meaning that a particular region of their brains was damaged. This damage left the children with the ability to form complicated sentences with correct grammatical structure and natural sounding rhythms, but with content that was entirely meaningless. This story was a revelation to me: it seemed like irrefutable proof that different aspects of language were located in distinct regions of the brain, and that therefore perhaps the grandmother cell could exist. (Steven Pinker subsequently wrote his masterpiece, "The Language Instinct," on this topic.)

Shortly after this, the field of Natural Language Processing was radically changed by an entirely new approach. As I mentioned above, in the early 90's most researchers were introspecting about language use and were trying to hand-code knowledge into computers. So people would enter in data like "when you go to a restaurant, someone shows you to a table. You and your dining partners sit on chairs at your selected table. A waiter or waitress walks up to you and hands you a menu. You read the menu and eventually the waiter comes back and asks for your order. The waiter takes this information back to the kitchen." And so on, in painstaking detail.

But as large volumes of text started to become available online, people started developing algorithms to solve seemingly difficult natural language processing problems using very simple techniques. For example, how hard is it to write a program that can tell which language a stretch of text is written in? Sibun and Reynar found that all you need to do is record how often pairs of characters tend to co-occur in each language, and you only need to extract about a sentence from a piece of text to classify it with 99% accuracy into one of 18 languages! Another wild example is that of author identification. Back in the early 60's, Mosteller and Wallace showed that they could identify which of the disputed Federalist Papers were written by Hamilton vs. those written by Madison, simply by looking at counts of the function words (small structural words like "by", "from", and "to") that each author used.
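The character-pair idea behind the language-identification result is simple enough to sketch in a few lines. This is a toy illustration, not Sibun and Reynar's actual system: the "training" texts here are invented stand-ins, where a real classifier would be trained on large corpora of each language.

```python
# Toy character-bigram language identifier: score a sentence against
# smoothed bigram statistics for each language and pick the best fit.
from collections import Counter
import math

def bigram_model(text):
    """Return (log-probabilities of character pairs, log-prob for unseen pairs)."""
    pairs = Counter(text[i:i+2] for i in range(len(text) - 1))
    total = sum(pairs.values())
    vocab = len(pairs) + 1  # add-one smoothing over observed pairs plus "unknown"
    probs = {p: math.log((c + 1) / (total + vocab)) for p, c in pairs.items()}
    return probs, math.log(1 / (total + vocab))

def score(text, model):
    probs, unknown = model
    return sum(probs.get(text[i:i+2], unknown) for i in range(len(text) - 1))

# Invented miniature "corpora"; real systems use far more text.
training = {
    "english": "the quick brown fox jumps over the lazy dog and then the other dog",
    "german":  "der schnelle braune fuchs springt ueber den faulen hund und dann",
}
models = {lang: bigram_model(txt) for lang, txt in training.items()}

sentence = "the dog and the fox"
best = max(models, key=lambda lang: score(sentence, models[lang]))
```

Even with these tiny samples, the English model assigns the test sentence a far higher score, because bigrams like "th" and "og" are common in the English text and absent from the German one. Scaled up to real corpora, this crude statistic is what makes the reported 99% accuracy plausible.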

The field as a whole is chipping away at the hard problems of natural language processing by using statistics derived from that mother-of-all-text-corpora, the Web. For example, how do you write a program to figure out the difference between a "student protest" and a "war protest"? The former is a demonstration against something, done by students, but the latter is not a demonstration done by a war.

In the old days, we would try to code all the information we could about the words in the noun compounds and try to anticipate how they interact. But today we use statistics drawn from counts of simple patterns on the web. Recently my PhD student Preslav Nakov has shown that we can often determine what the intended relationship between two nouns is by simply counting the verbs that fall between the two nouns, if we first reverse their order. So if we search the web for patterns like:

"protests that are * by students"

we find out the important verbs are "draw, involve, galvanize, affect, carried out by" and so on, whereas for "war protests" we find verbs such as "spread by, catalyzed by, precede", and so on.
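The verb-tallying step can be sketched in a few lines. The snippets below are invented examples standing in for web search hits; the actual work queried a search engine rather than a hard-coded list, and used more elaborate patterns.

```python
# Toy sketch of counting the verbs that fill the wildcard in
# "protests that are * by students", over hypothetical search hits.
import re
from collections import Counter

snippets = [
    "the protests that are carried out by students each spring",
    "protests that are organized by students at the university",
    "those protests that are organized by students and faculty",
]

# Capture the verb (optionally with a particle like "out") between the nouns.
pattern = re.compile(r"protests that are (\w+(?: out)?) by students")
verbs = Counter(m.group(1) for s in snippets for m in pattern.finditer(s))
```

Running this over the toy snippets tallies "organized" twice and "carried out" once; over real web-scale hit counts, the most frequent verbs characterize the relationship between the two nouns.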

The lesson we see over and over again is that simple statistics computed over very large text collections can do better at difficult language processing tasks than more complex, elaborate algorithms.

Stephen M. Kosslyn
Founding Dean, Minerva Schools at the Keck Graduate Institute

I used to believe that we could understand psychology at different levels of analysis, and events at any one of the levels could be studied independently of events at the other levels. For example, one could study events at the level of the brain (and seek answers in terms of biological mechanisms), the level of the person (and seek answers in terms of the contents of thoughts, beliefs, knowledge, and so forth), or the level of the group (and seek answers in terms of social interactions). This approach seemed reasonable; the strategy of "divide and conquer" is a cornerstone in all of science, isn't it? In fact, virtually all introductory psychology textbooks are written as if events at the different levels are largely independent, with separate chapters (that only rarely include cross-references to each other) on the brain, perception, memory, personality, social psychology, and so on. 

I've changed my mind. I don't think it's possible to understand events at any one level of analysis without taking into account what occurs at other levels. In particular, I'm now convinced that at least some aspects of the structure and function of the brain can only be understood by situating the brain in a specific cultural context. I'm not simply saying that the brain has evolved to function in a specific type of environment (an idea that forms a mainstay of evolutionary psychology and some areas of computer vision, where statistics of the natural environment are used to guide processing). Rather, I'm saying that to understand how any specific brain functions, we need to understand how that person was raised, and currently functions, in the surrounding culture. 

Here's my line of reasoning. Let's begin with a fundamental fact: The genes, of which we have perhaps only some 30,000, cannot program us to function equally effectively in every possible environment. Hence, evolution has licensed the environment to set up and configure each individual's brain, so that it can work well in that context. For example, consider stereovision. We all know about stereo in audition; the sound from each of two loudspeakers has slightly different phases, so the listener's brain glues them together to provide the sense of an auditory panorama. Something similar is at work in vision. In stereovision, the slight disparity in the images that reach the two eyes is a cue for how far away objects are. If you're focused on an object directly in front of you, your eyes will converge slightly. Aside from the exact point of focus, the rest of the image will strike slightly different places on the two retinas (at the back of the eye, which converts light into neural impulses), and the brain uses the slight disparities to figure out how far away something is.
There are two important points here. First, this stereo process — of computing depth on the basis of the disparities in where images strike the two retinas — depends on the distance between the eyes. And second, and this is absolutely critical, there's no way to know at the moment of conception how far apart a person's eyes are going to be, because that depends on bone growth — and bone growth depends partly on the mother's diet and partly on the infant's diet. 
So, given that bone growth depends partly on the environment, how could the genes set up stereovision circuits in the brain? What the genes did is really clever: Young children (peaking at about age 18 months) have more connections among neurons than do adults; in fact, until about eight years old, children have about twice as many neural connections as adults do. But only some of these connections provide useful information. For example, when the infant reaches, only the connections from some neurons will correctly guide reaching. The brain uses a process called pruning to get rid of the useless connections. The connections that turn out to work, with the distance between the eyes the infant happens to have, would not be the ones that would work if the mother did not have enough calcium, or the infant hadn't had enough of various dietary supplements.
This is a really elegant solution to the problem that the genes can't know in advance how far apart the eyes will be. To cope with this problem, the genes overpopulate the brain, giving us options for different environments (where the distance between eyes and length of the arms are part of the brain's "environment," in this sense), and then the environment selects which connections are appropriate. In other words, the genes take advantage of the environment to configure the brain. 

This overpopulate-and-select mechanism is not limited to stereovision. In general, the environment sets up the brain (above and beyond any role it may have had in the evolution of the species), configuring it to work well in the world a person inhabits. And by environment I'm including everything outside the brain — including the social environment. For example, it's well known that children can learn multiple languages without an accent and with good grammar, if they are exposed to the language before puberty. But after puberty, it's very difficult to learn a second language so well. Similarly, when I first went to Japan, I was told not even to bother trying to bow, that there were something like a dozen different bows and I was always going to "bow with an accent" — and in my case the accent was so thick that it was impenetrable.  
The notion is that a variety of factors in our environment, including in our social environment, configure our brains. It's true for language, and I bet it's true for politeness as well as a raft of other kinds of phenomena. The genes result in a profusion of connections among neurons, which provide a playing field for the world to select and configure so that we fit the environment we inhabit. The world comes into our head, configuring us. The brain and its surrounding environment are not as separate as they might appear.

This perspective leads me to wonder whether we can assume that the brains of people living in different cultures process information in precisely the same ways. Yes, people the world over have much in common (we are members of the same species, after all), but even small changes in the wiring may lead us to use the common machinery in different ways. If so, then people from different cultures may have unique perspectives on common problems, and be poised to make unique contributions toward solving such problems. 

Changing my mind about the relationship between events at different levels of analysis has led me to change fundamental beliefs. In particular, I now believe that understanding how the surrounding culture affects the brain may be of more than merely "academic interest."

Stewart Brand
Founder, the Whole Earth Catalog; Co-founder, The Well; Co-Founder, The Long Now Foundation, and Revive & Restore; Author, Whole Earth Discipline

In the 90's I was praising the remarkable grassroots success of the building preservation movement. Keep the fabric and continuity of the old buildings and neighborhoods alive! Revive those sash windows.

As a landlocked youth in Illinois I mooned over the yacht sales pictures in the back of sailboat books. I knew what I wanted — a gaff-rigged ketch! Wood, of course.

The Christmas mail order catalog people know what my age group wants (I'm 69). We want to give a child wooden blocks, Monopoly or Clue, a Lionel train. We want to give ourselves a bomber jacket, a fancy leather belt, a fine cotton shirt. We study the Restoration Hardware catalog. My own Whole Earth Catalog, back when, pushed no end of retro stuff in a back-to-basics agenda.

Well, I bought a sequence of wooden sailboats. Their gaff rigs couldn't sail to windward. Their leaky wood hulls and decks were a maintenance nightmare. I learned that the fiberglass hulls we'd all sneered at were superior in every way to wood.

Remodeling an old farmhouse two years ago and replacing its sash windows, I discovered the current state of window technology. A standard Andersen window, factory-made exactly to the dimensions you want, has superb insulation qualities; superb hinges, crank, and lock; a flick-in, flick-out screen; and it looks great. The same goes for the new kinds of doors, kitchen cabinetry, and even furniture feet that are available — all drastically improved.

The message finally got through. Good old stuff sucks. Sticking with the fine old whatevers is like wearing 100% cotton in the mountains; it's just stupid.

Give me 100% not-cotton clothing, genetically modified food (from a farmers' market, preferably), this-year's laptop, cutting-edge dentistry and drugs.

The Precautionary Principle tells me I should worry about everything new because it might have hidden dangers. The handwringers should worry more about the old stuff. It's mostly crap.

(New stuff is mostly crap too, of course. But the best new stuff is invariably better than the best old stuff.)

Alan Kay
Founding member of Xerox PARC; President of Viewpoints Research Institute, Inc

At age 10 in 1950, one of the department stores had a pneumatic tube system for moving receipts and money from counters to the cashier's office. I loved this and tried to figure out how it worked. The clerks in the store knew all about it. "Vacuum", they said, "Vacuum sucks the canisters, just like your mom's vacuum cleaner". But how does it work, I asked? "Vacuum", they said, "Vacuum does it all". This was what adults called "an explanation"!

So I took apart my mom's Hoover vacuum cleaner to find out how it worked. There was an electric motor in there, which I had expected, but the only other thing in there was a fan! How could a fan produce a vacuum, and how could it suck?

We had a room fan and I looked at it more closely. I knew that it worked like the propeller of an airplane, but I'd never thought about how those worked. I picked up a board and moved it. This moved air just fine. So the blades of the propeller and the fan were just boards that the motor kept on moving to push air.

But what about the vacuum? I found that a sheet of paper would stick to the back of the fan. But why? I "knew" that air was supposed to be made up of particles too small to be seen. So it was clear why you got a gust of breeze by moving a board — you were knocking little particles one way and not another. But where did the sucking of the paper on the fan and in the vacuum cleaner come from?

Suddenly it occurred to me that the air particles must be already moving very quickly and bumping into each other. When the board or fan blades moved air particles away from the fan there were fewer near the fan, and the already moving particles would have less to bump into and would thus move towards the fan. They didn't know about the fan, but they appeared to.

The "suck" of the vacuum cleaner was not a suck at all. What was happening is that things went into the vacuum cleaner because they were being "blown in" by the air particles' normal movement, which were not being opposed by the usual pressure of air particles inside the fan!

When my physiologist father came home that evening I exclaimed "Dad, the air particles must be moving at least a hundred miles an hour!". I told him what I'd found out and he looked in his physics book. In there was a formula to compute the speed of various air molecules at various temperatures. It turned out that at room temperature ordinary air molecules were moving much faster than I had guessed: more like 1500 miles an hour! This completely blew my mind!
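The calculation his father looked up is presumably a Maxwell-Boltzmann speed formula; a minimal sketch for nitrogen, the main component of air, at room temperature. The exact figure depends on which average one takes (root-mean-square, mean, or most probable), and many molecules in the distribution's tail move well above it.

```python
# Rough check of the physics-book calculation: root-mean-square speed
# of nitrogen molecules at room temperature, v_rms = sqrt(3RT/M).
import math

R = 8.314    # molar gas constant, J/(mol*K)
T = 293.0    # room temperature, K
M = 0.028    # molar mass of N2, kg/mol

v_rms = math.sqrt(3 * R * T / M)     # metres per second, ~500 m/s
v_mph = v_rms * 3600 / 1609.34      # convert to miles per hour
```

The rms speed comes out a little over 500 m/s, which is upwards of 1,100 miles an hour: the same mind-blowing order of magnitude as the figure he remembers.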

Then I got worried because even small things were clearly not moving that fast going into the vacuum cleaner (nor in the pneumatic tubes). By putting my hand out the window of the car I could feel that the air was probably going into the vacuum cleaner closer to 50 or 60 miles an hour. Another conversation with my Dad led to two ideas: (a) the fan was probably not very efficient at moving particles away, and (b) the particles themselves were going in every direction and bumping into each other (this is why it takes a while for perfume from an open bottle to be smelled across a room).

This experience was a big deal for me because I had thought one way using a metaphor and a story about "sucking", and then I suddenly thought just the opposite because of an experiment and non-story thinking. The world was not as it seemed! Or as most adults thought and claimed! I never trusted "just a story" again.

Gary Klein
Senior Scientist, MacroCognition LLC; Author, Seeing What Others Don't: The Remarkable Ways We Gain Insights

It's generally a bad idea to change your mind and an even worse idea to do it publicly. Politicians who get caught changing their minds are labeled "flip-floppers." When managers change their minds about what they want they risk losing credibility and they create frustration in subordinates who find that much of their work has now been wasted. Researchers who change their minds may be regarded as sloppy, shooting from the hip rather than delaying publication until they nail down all the loose ends in their data.

Clearly the Edge Annual Question for 2008 carries with it some dangers in disclosure:  "What have you changed your mind about? Why?" Nevertheless, I'll take the bait and describe a case where I changed my mind about the nature of the phenomenon I was studying.

My colleagues Roberta Calderwood, Anne Clinton-Cirocco, and I were investigating how people make decisions under time pressure. Obviously, under time pressure people can't canvass all the relevant possibilities and compare them along a common set of dimensions. So what are they doing instead?

I thought I knew what happened. Peer Soelberg had investigated the job-choice strategy of students. In most cases they quickly identified a favorite job option and evaluated it by comparing it to another option, a choice comparison, trying to show that their favorite option was as good as or better than this comparison case on every relevant dimension. This strategy seemed like a very useful way to handle time pressure. Instead of systematically assessing a large number of options, you only have to compare two options until you're satisfied that your favorite dominates the other.

To demonstrate that people used this strategy to handle time pressure I studied fireground commanders. Unhappily, the firefighters had not read the script. We conducted interviews with them about tough cases, probing them about the options they considered. And in the great majority of cases (about 81%), they insisted that they only considered one option.

The evidence obviously didn't support my hypothesis. Still, I wasn't convinced that my hypothesis was wrong. Perhaps we hadn't phrased the questions appropriately. Perhaps the firefighters' memories were inaccurate. At this point I hadn't changed my mind. I had just conducted a study that didn't work out.

People are very good at deflecting inconvenient evidence. There are very few facts that can't be explained away. Facts rarely force us to change our minds.

Eventually my frustration about not getting the results I wanted was replaced by a different emotion: curiosity. If the firefighters weren't comparing options just what were they doing?

They described how they usually knew what to do once they sized up the situation. This claim generated two mysteries:  How could the first option they considered have such a high likelihood of succeeding?  And how could they evaluate an option except by comparing it to another?

Going back over the data we resolved each of these mysteries. They were using their years of experience to rapidly size up situations. The patterns they had acquired suggested typical ways of reacting. But they still needed to evaluate the options they identified. They did so by imagining what might happen if they carried out the action in the context of their situation. If it worked, they proceeded. If it almost worked then they looked for ways to repair any weaknesses or else looked at other typical reactions until they found one that satisfied them.

Together, this forms a recognition-primed decision strategy that is based on pattern recognition but tests the results using deliberate mental simulation. This strategy is very different from the original hypothesis about comparing the favorite versus a choice comparison.

I had an advantage in that I had never received any formal training in decision research. One of my specialty areas was the nature of expertise. Therefore, the conceptual shift I made was about  peripheral constructs, rather than core constructs about how decisions are made. The notions of Peer Soelberg that I was testing weren't central to my understanding of skilled performance.

Changing one's mind isn't merely revising the numerical value of a fact in a mental data base or changing the beliefs we hold. Changing my mind also means changing the way I will then use my mind to search for and interpret facts. When I changed my understanding of how the fireground commanders were making decisions I altered the way I viewed experts and decision makers. I altered the ways I collected and analyzed data in later studies. As a result, I began looking at events with a different mind, one that I had exchanged for the mind I previously had been using.

Oliver Morton
Chief News and Features Editor

I have, falteringly and with various intermediary about-faces and caveats, changed my mind about human spaceflight. I am of the generation to have had its childhood imagination stoked by the sight of Apollo missions on the television — I can't put hand on heart and say I remember the Eagle landing, but I remember the sights of the moon relayed to our homes. I was fascinated by space and only through that, by way of the science fiction that a fascination with space inexorably led to, by science. And astronauts were what space was about.

I was not, as I grew older, uncritical of human spaceflight — I remember my anger at the Challenger explosion, my sense that if people were going to die, it should be for something grander than just another shuttle mission. But I was still struck by its romance, and by the way its romance touched some of the unlikeliest people. By all logic The Economist should have been, when I worked there, highly dubious about the aspirations of human spaceflight, as it is today. But the then editor would hear not a word against the undertaking, at least not against its principle. With some relief at this, while the magazine's science editor I became a sort of critical apologist — critical of the human space programme there actually was, but sensitive to the possibility that a better space programme was possible.

I bought into, at least at some level, the argument that a joint US-Russian programme offered advantages in terms of aerospace employment in the former USSR. I bought into the argument that continuity of effort was needed — that so much would be lost if a programme was dismantled it might not be possible to reassemble it. I bought into the crucial safety-net argument — that it would not be possible to cancel the US programme anyway, so strong were the interests of the military industrial complex and so broad, if shallow, the support of the public. (Like the Powder River, a mile wide, an inch deep and rolling uphill all the way from Texas.) And I could see science it would offer that was unavailable by any other means.

Now, though, I can no longer find much to respect in those arguments. US-Russian cooperation seems to have brought little benefit. The idea of continuous effort seems at best unproven — and indeed perhaps worth checking. Leaving a technology fallow for a few decades and coming back with new people, tools and mindsets is not such a bad idea. And at least one serious presidential candidate is talking about actually freezing the American programme, cancelling the shuttle without in the short term developing its successor. Whether Obama will get elected or be willing or able to carry through the idea remains to be seen — but if politicians are talking like this the "it will never happen so why worry" argument becomes far more suspect.

And the crucial idea (crucial to me) that human exploration of Mars might answer great questions about life in the universe no longer seems as plausible or as likely to pay off in my lifetime as once it did. I increasingly think that life in a Martian deep biosphere, if there is any, will be related to earth life and teach us relatively little that's new. At the same time it will be fiendishly hard to reach without contamination. Mars continues to fascinate me — but it has ever less need of a putative future human presence in order to do so.

My excitement at the idea of life in the universe — excitement undoubtedly spurred by Apollo and the works of Clarke, Heinlein and Roddenberry that followed on from it in my education — is now more engaged with exoplanets, to which human spaceflight is entirely irrelevant (though post-human spaceflight may be a different kettle of lobsters). If we want to understand the depth of the various relations between life and planets, which is what I want to understand, it is by studying other planets with vibrant biospheres, as well as this one, that we will do so. A world with a spartan $100 billion moonbase but no ability to measure spectra and lightcurves from earthlike planets around distant stars is not the world for me.

In general, I try to avoid arguing from my own interests. But in this case it seems to me that all the other arguments against human spaceflight are so strong that to be against it merely meant realising that an atavistic part of me had failed to understand what those interests are. I'm interested in how life works on astronomical scales, and that interest has nothing to do, in the short term, with human spaceflight. And I see no reason beyond my own interests to suggest that it is something worth spending so much money on. It does not make the world a better place in any objective way that can be measured, or in any subjective way that compels respect.

It is possibly also the case that seeing human spaceflight reduced to a matter of suborbital hops for the rich, or even low earth orbit hotels, has hardened my heart further against it. I hope this is not a manifestation of the politics of envy, though I fear that in part it could be.

Diane F. Halpern
Professor, Claremont McKenna College; Past-president, American Psychological Association; Author, Sex Differences in Cognitive Abilities

Why are men underrepresented in teaching, child care, and related fields and women underrepresented in engineering, physics, and related fields? I used to know the answer, but that was before I spent several decades reviewing almost everything written about this question. As with most enduring questions, the responses have grown more contentious, and even less is "settled" now that we have mountains of research designed to answer them. At some point, my own answer changed from what I believed to be the simple truth to a convoluted statement complete with qualifiers, hedge terms, and caveats. I guess this shift in my own thinking represents progress, but it doesn't feel or look that way.

I am a feminist, a product of the 60s, who believed that group differences in intelligence or most any other trait are mostly traceable to the lifetime of experiences that mold us into the people we are and will be. Of course, I never doubted the basic premises of evolution, but the lessons that I learned from evolution favor the idea that the brain and behavior are adaptable. Hunter-gatherers never solved calculus problems or traveled to the moon, so I find little in our ancient past to explain these modern-day achievements.

There is also the disturbing fact that evolutionary theories can easily explain almost any outcome, so I never found them to be a useful framework for understanding behavior. Even when I knew the simple truth about sex differences in cognitive abilities, I never doubted that heritability plays a role in cognitive development, but like many others, I believed that once the potential to develop an ability exceeded some threshold value, heritability was of little importance. Now I am less sure about any single answer, and nothing is simple any more.

The literature on sex differences in cognitive abilities is filled with inconsistent findings, contradictory theories, and emotional claims that are unsupported by the research. Yet, despite all of the noise in the data, clear and consistent messages can be heard. There are real, and in some cases sizable, sex differences with respect to some cognitive abilities.

Socialization practices are undoubtedly important, but there is also good evidence that biological sex differences play a role in establishing and maintaining cognitive sex differences, a conclusion that I wasn't prepared to make when I began reviewing the relevant literature. I could not ignore or explain away repeated findings about (small) variations over the menstrual cycle, the effects of exogenously administered sex hormones on cognition, a variety of anomalies that allow us to separate prenatal hormone effects on later development, failed attempts to alter the sex roles of a biological male after an accident that destroyed his penis, differences in preferred modes of thought, international data on the achievement of females and males, to name just a few types of evidence that demand the conclusion that there is some biological basis for sex-typed cognitive development.

My thinking about this controversial topic has changed. I have come to understand that nature needs nurture, and that the dichotomization of these two influences on development is the wrong way to conceptualize their mutual influences on each other. Our brain structures and functions reflect and direct our life experiences, which create feedback loops that alter the hormones we secrete and how we select environments. Learning is a biological and environmental phenomenon.

And so, what had been a simple truth morphed into a complicated answer for the deceptively simple question about why there are sex differences in cognitive abilities. There is nothing in my new understanding that justifies discrimination or predicts the continuation of the status quo. There is plenty of room for motivation, self-regulation, and persistence to make the question about the underrepresentation of women and men in different academic areas moot in coming years.

Like all complex questions, the question about why men and women achieve in different academic areas depends on a laundry list of influences that do not fall neatly into categories labeled biology or environment. It is time to give up this tired way of thinking about nature and nurture as two independent variables and their interaction and recognize how they exert mutual influences on each other. No single number can capture the extent to which one type of variable is important because they do not operate independently. Nature and nurture do not just interact; they fundamentally change each other. The answer that I give today is far more complicated than the simple truth that I used to believe, but we have no reason to expect that complex phenomena like cognitive development have simple answers.

seth_lloyd's picture
Professor of Quantum Mechanical Engineering, MIT; Author, Programming the Universe

I have changed my mind about technology.

I used to take a dim view of technology. One should live one's life in a simple, low-tech fashion, I thought. No cell phone, keep off the computer, don't drive. No nukes, no remote control, no DVD, no TV. Walk, read, think — that was the proper path to follow.

What a fool I was! A dozen years ago or so, by some bizarre accident, I became a professor of Mechanical Engineering at MIT. I had never had any training, experience, or education in engineering. My sole claim to engineering expertise was some work on complex systems and a few designs for quantum computers. Quantum-mechanical engineering was in its early days then, however, and MIT needed a quantum mechanic. I was ready to answer the call.

It was not my fellow professors who converted me to technology, uber-techno-nerds though they were. Indeed, my colleagues in Mech. E. were by and large somewhat suspicious of me, justifiably so. I was wary of them in turn, as one often is of co-workers who are hugely more knowledgeable than one is oneself. (Outside of the Mechanical Engineering department, by contrast, I found large numbers of kindred souls: MIT was full of people whose quanta needed fixing, and as a certified quantum mechanic, I was glad to oblige.) No, it was not the brilliant technologists who filled the faculty lunchroom who changed my mind. Rather, it was the students who had come to have me teach them about engineering who taught me to value technology.

Your average MIT undergraduate is pretty technologically adept. In the old days, freshmen used to arrive at MIT having disassembled and reassembled tractors and cars; slightly later on, they arrived having built ham radios and guitar amplifiers; more recently, freshmen and fresh women were showing up with a scary facility with computers. Nowadays, few of them have used a screwdriver (except maybe to install some more memory in their laptop), but they are eager to learn how robots work, and raring to build one themselves.

When I stepped into my first undergraduate classroom, a controls laboratory, I knew just about as little about how to build a robot as the nineteen- and twenty-year-olds who were sitting expectantly, waiting for me to teach them how. I was terrified. Within half an hour, the basis for my terror was confirmed. Not only did I know as little as the students, in many cases I knew significantly less: about a quarter of the students knew demonstrably more about robotics than I did, and were happy to display their knowledge. I emerged from the first lab session a sweaty mess, having managed to demonstrate my ignorance and incompetence in a startling variety of ways.

I emerged from the second lab session a little cooler. There is no better way to learn, and learn fast, than to teach. Humility actually turns out to have its virtues, too. It is rather fun to admit one's ignorance, if that admission takes the form of an appeal to the knowledge of all assembled. In fact, it turned out that, either through my training in math and physics, or through a previous incarnation, I possessed more intuitive knowledge of control theory than I had any right to, given my lack of formal education on the subject. Finally, no student is more empowered than the one who has just correctly told her professor that he is wrong, and shown him why her solution is the right one.

In the end, the experience of teaching the technology that I did not know was one of the most intellectually powerful of my life. In my mental ferment of trying to learn the material faster and deeper than my students, I began to grasp concepts and ways of looking at the world, of whose existence I had no previous notion. One of the primary features of the lab was a set of analog computers, boxy things festooned with dials and plugs, and full of amplifiers, capacitors, and resistors, that were used to simulate, or construct an analog of, the motors and loads that we were trying to control. In my feverish attempt to understand analog computers, I constructed a model for a quantum-mechanical analog computer that would operate at the level of individual atoms. This model resulted in one of my best scientific papers. In the end, scarily enough, my student evaluations gave me the highest possible marks for knowledge of the material taught.

And technology? Hey, it's not so bad. When it comes to walking in the rain, Goretex and fleece beat oilskin and wool hollow. If we're not going to swamp our world in greenhouse gases, we damn well better design dramatically more efficient cars and power plants. And if I could contribute to technology by designing and helping to build quantum computers and quantum communication systems, so much the better. Properly conceived and constructed technology does not hinder the simple life, but helps it.

OK. So I was wrong about technology. What's my next misconception? Religion? God forbid.

judith_rich_harris's picture
Independent Investigator and Theoretician; Author, The Nurture Assumption; No Two Alike: Human Nature and Human Individuality

Anyone who has taken a course in introductory psychology has heard the story of how the behaviorist John B. Watson produced "conditioned fear" of a white rat — or was it a white rabbit? — in an unfortunate infant called Little Albert, and how Albert "generalized" that fear to other white, furry things (including, in some accounts, his mother's coat). It was a vividly convincing story and, like my fellow students, I saw no reason to doubt it. Nor did I see any reason, until many years later, to read Watson's original account of the experiment, published in 1920. What a mess! You could find better methodology at a high school science fair. Not surprisingly — at least it doesn't surprise me now — Watson's experiment has not stood up well to attempts to replicate it. But the failures to replicate are seldom mentioned in the introductory textbooks.

The idea of generalization is a very basic one in psychology. Psychologists of every stripe take it for granted that learned responses — behaviors, emotions, expectations, and so on — generalize readily and automatically to other stimuli of the same general type. It is assumed, for example, that once the baby has learned that his mother is dependable and his brother is aggressive, he will expect other adults to be dependable and other children to be aggressive.

I now believe that generalization is the exception, not the rule. Careful research has shown that babies arrive in the world with a bias against generalizing. This is true for learned motor skills and it is also true for expectations about people. Babies are born with the desire to learn about the beings who populate their world and the ability to store information about each individual separately. They do not expect all adults to behave like their mother or all children to behave like their siblings. Children who quarrel incessantly with their brothers and sisters generally get along much better with their peers. A firstborn who is accustomed to dominating his younger siblings at home is no more likely than a laterborn to try to dominate his schoolmates on the playground. A boy's relationship with his father does not form the template for his later relationship with his boss.

I am not, of course, the only one in the world who has given up the belief in ubiquitous generalization, but if we formed a club, we could probably hold meetings in my kitchen. Confirmation bias — the tendency to notice things that support one's assumptions and to ignore or explain away anything that doesn't fit — keeps most people faithful to what they learned in intro psych. They observe that the child who is agreeable or timid or conscientious at home tends, to a certain extent, to behave in a similar manner outside the home, and they interpret this correlation as evidence that the child learns patterns of behavior at home which she then carries along with her to other situations.

The mistake they are making is to ignore the effects of genes. Studies using advanced methods of data analysis have shown that the similarities in behavior from one context to another are due chiefly to genetic influences. Our inborn predispositions to behave in certain ways go with us wherever we go, but learned behaviors are tailored to the situation. The fact that genetic predispositions tend to show up early is the reason why some psychologists also make the mistake of attributing too much importance to early experiences.

What changed my mind about these things was the realization that if I tossed out the assumption about generalization, some hitherto puzzling findings about human behavior suddenly made more sense. I was 56 years old at the time but fairly new to the field of child development, and I had no stake in maintaining the status quo. It is a luxury to have the freedom to change one's mind.

stephen_schneider's picture
Climatologist; Professor, Department of Biological Sciences, Stanford University

In public appearances about global warming, even these days, I often hear: "I don't believe in global warming" and I then typically get asked why I do "when all the evidence is not in". "Global warming is not about beliefs", I typically retort, "but an accumulation of evidence over decades so that we can now say the vast preponderance of evidence — and its consistency with basic climate theory — supports global warming as well established, not that all aspects are fully known, an impossibility in any complex systems science".

But it hasn't always been that way, especially for me at the outset of my career in 1971, when I co-authored a controversial paper calculating that the cooling effects of a shroud of atmospheric dust and smoke — aerosols — from human emissions at a global scale appeared to dominate the opposing warming effect of the growing atmospheric concentrations of the greenhouse gas carbon dioxide. Measurements at the time showed both warming and cooling emissions were on the rise, so a calculation of the net balance was essential — though controlling the aerosols made sense with or without climate side effects, since they posed — and still pose — serious health risks to vulnerable populations. In fact, for the latter reason, laws to clean up the air in most rich countries were just being negotiated at about that time.

When I traveled the globe in the early 1970s to explain our calculations, what I slowly learned from those out there making measurements was that two facts had only recently come to light, and together they made me consider flipping the sign from cooling to warming as the most likely direction of climatic change from humans using the atmosphere as a free sewer for some of our volatile industrial and agricultural wastes. These facts were that human-injected aerosols, which we had assumed were global in scale in our cooling calculation, were in fact concentrated primarily in industrial regions and biomass-burning areas of the globe — about 20% of the Earth's surface — whereas we already knew that CO2 emissions are global in extent and that about half of the emitted CO2 lasts for a century or more in the air.

But there were new facts that were even more convincing: not only is CO2 an important human-emitted greenhouse gas, but so too are methane, nitrous oxide and chlorofluorocarbons (many of the latter gases now banned because they also deplete stratospheric ozone), and together with CO2 these other greenhouse gases make up an enhanced global set of warming factors. Aerosols, on the other hand, were primarily regional in extent and thus could not overcome the warming effects of the combined global-scale greenhouse gases.

I was very proud to have published, in the mid-1970s, what was wrong with my early calculations — well before the so-called "contrarians" — climate change deniers still all too prevalent even today — understood the issues, let alone incorporated these new facts into updated models to make more credible projections. Of course, today the dominance of warming over cooling agents is well established in the climatology community, but our remaining inability to say precisely how much warming the planet can expect to have to deal with is in large part still an uncertainty over the partially counteracting cooling effects of aerosols — enough to offset a significant, even if largely unknown, amount of the warming. So although we are very confident in the existence of human-caused warming in the past several decades from greenhouse gases, we are still working hard to pin down much more precisely how much aerosols offset this warming. Facts on that offset still lag the critical need for better estimates of our impacts on climate before they become potentially irreversible.

The sad part of this story is not about science, but about the misinterpretation of it in the political world. I still have to endure polemical blogs from contrarian columnists and others about how, as one put it in a grand polemic, "Schneider is an environmentalist for all temperatures" — citing my early calculations. This famous columnist somehow forgot to bring up the faulty assumptions I later corrected, or to mention that the 1971 calculation was based on not-yet-gathered facts. Simply getting the sign wrong was cited, ipso facto, as somehow damning of my current credibility.

Ironically, inside the scientific world, this switch of sign of projected effects is viewed as precisely what responsible scientists must do when the facts change. Not only did I change my mind, but published almost immediately what had changed and how that played out over time. Scientists have no crystal ball, but we do have modeling methods that are the closest approximation available. They can't give us truth, but they can tell us the logical consequences of explicit assumptions. Those who update their conclusions explicitly as facts evolve are much more likely to be a credible source than those who stick to old stories for political consistency. Two cheers for the scientific method!

george_church's picture
Professor, Harvard University; Director, Personal Genome Project; Co-author (with Ed Regis), Regenesis

Why does my mind change based on thinking, faith, and science? One of the main functions of a mind is to change — constantly — to repair damage and add new thoughts, or to gradually replace old thoughts with new ones in a zero-sum game.

When I first heard about the century-old 4-color map conjecture as a boy, I noted how well it fit a few anecdotal scribbles and then took a leap of faith that 4 colors were always enough. A decade later, when Appel, Haken and a computer proved it, you could say that my boyish opinion was intact, but my mind was changed — by "facts" (the exhaustive computer search), by "thinking" (the mathematicians, the computer and me collectively), and by "faith" (that the program had no bugs and that the basic idea of such proofs is reasonable). There were false proofs before that, and shorter confirmatory proofs since, but the best proof is still too complex for even experts to check by hand.

While I rarely change my mind from one strongly held belief to its opposite, I do often change from no opinion to acceptance. Perhaps my acquiescence is too easy — I rarely confirm the experiments with my own hands. Like many scientists, I form some opinions without reading the primary data (especially outside my field). Often, the key experiments could be done, but aren't.

A depressingly small part of medical practice is based on randomized, placebo-controlled, double-blind studies. In this age of vast electronic documentation, is there a list of which medical "facts" have achieved this level of support and which have not? Other times, experiments in the usual sense can't be done — e.g., huge fractions of astronomical and biological evolution are far away in space and time. Nevertheless, both of these fields can inspire experiments. I've done hands-on measurements of gravitation with massive lab spheres and of mutation/selection evolution in lab bacteria. Such microcosmic simulacra "changed my mind" subtly and allowed me to connect to the larger-scale (non-experimental) facts.

All of this still adds up to a lot of faith and delegated thinking among scientists. The system works because of a trusted network with feedback from practical outcomes. Researchers who stray from standard protocols (especially core evidentiary and epistemological beliefs) or question too many useful facts had better have some utility close at hand, or they will be ignored — until someone comes along who can both challenge and deliver. In 1993 Pope John Paul II acquitted Galileo of his 1632 indictment for heretical support of Copernicus's heliocentrism. In 1996 John Paul made a similarly accepting statement about Darwinian evolution.

Clearly religion and science do overlap, and societal minds do change. Even the most fundamentalist creationists accept a huge part of Darwinism, i.e. micro-evolution — which was by no means obvious in the early 19th century. Their remaining doubt is whether largish (macro) changes in morphology or function can emerge from the sum of random steps, while accepting that small (micro) changes can do so. What happens as we see increasingly dramatic and useful examples of experimental macro-evolution?

We've recently seen the selection of various enzyme catalysts from billions of random RNA sequences. Increasingly, biotechnology depends on lab evolution of new, complex synthetic-biology functions and shapes. Admittedly these experiments do involve 'design', but as the evolution achieved in the lab gets more macro with less intervention, perhaps minds will change about how much intervention is needed in natural macro-evolution.

At least as profound as getting function from randomness, is evolving clever speech from mute brutes. We've made huge progress in revealing the communication potential of chimp, gorilla and African Grey parrot. We've also found genes like FOXP2, which affects vocalization in humans and mice and a variation that separates humans from chimps — but not from Neanderthal genomes. (Yes — extinct Neanderthals are being sequenced!) As we test combinations of such DNA differences in primates, will we discover just how few genetic changes might separate us functionally from chimps? What human blind-spots will be unearthed by talking with other species?

And how fast should we change our mind? Did our 'leap' to agriculture lead to malaria? Did our leap to DDT lead to loss of birds? We'll try DDT again, this time restricted to homes and we'll try transgenic malaria-resistant mosquitoes...and that will lead to what? Arguably faith and spirituality are needed to buffer and govern our technological progress so we don't leap too fast, or look too superficially. Many micro mind-changes add up to macro mind-changes eventually. What's the rush?

xeni_jardin's picture
Tech Culture Journalist; Partner, Contributor, Co-editor, Boing Boing; Executive Producer, host, Boing Boing Video

I changed my mind about online community this year.

I co-edit a blog that attracts a large number of daily visitors, many of whom have something to say back to us about whatever we write or produce in video. When our audience was small in the early days, interacting was simple: we tacked a little href tag to an open comments thread at the end of each post: Link, Discuss. No moderation, no complication, come as you are, anonymity's fine. Every once in a while, a thread accumulated more noise than signal, but the balance mostly worked.

But then, the audience grew. Fast. And with that, grew the number of antisocial actors, "drive-by trolls," people for whom dialogue wasn't the point. It doesn't take many of them to ruin the experience for much larger numbers of participants acting in good faith.

Some of the more grotesque attacks were pointed at me, and the new experience of being on the receiving end of that much personally-directed nastiness was upsetting. I dreaded hitting the "publish" button on posts, because I knew what would now follow.

The noise on the blog grew, the interaction ceased to be fun for anyone, and with much regret, we removed the comments feature entirely.

I grew to believe that the easier it is to post a drive-by comment, and the easier it is to remain faceless, reputation-less, and real-world-less while doing so, the greater the volume of antisocial behavior that follows. I decided that no online community could remain civil after it grew too large, and gave up on that aspect of internet life.

My co-editors and I debated, we brainstormed, we observed other big sites that included some kind of community forum or comments feature. Some relied on voting systems to "score" whether a comment is of value — this felt clinical, cold, like grading what a friend says to you in conversation. Dialogue shouldn't be a beauty contest. Other sites used other automated systems to rank the relevance of a speech thread. None of this felt natural to us, or like an effective way to prevent the toxic sludge from building up. So we stalled for years, and our blog remained more monologue than dialogue. That felt unnatural, too.

Finally, this year, we resurrected comments on the blog, with the one thing that did feel natural. Human hands.

We hired a community manager, and equipped our comments system with a secret weapon: the "disemvoweller." If someone's misbehaving, she can remove all the vowels from their screed with one click. The dialogue stays, but the misanthrope looks ridiculous, and the emotional sting is neutralized.
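The mechanics of disemvowelling are easy to sketch. A minimal version — an illustrative guess at the idea, not Boing Boing's actual implementation — is a one-line regular-expression substitution:

```python
import re

def disemvowel(comment: str) -> str:
    """Strip all vowels from a comment, leaving a readable-but-ridiculous skeleton."""
    return re.sub(r"[aeiouAEIOU]", "", comment)

print(disemvowel("You people are all wrong"))  # prints "Y ppl r ll wrng"
```

Because spaces and punctuation survive, the thread still reads as a conversation; only the offending text is degraded, which is exactly why the emotional sting is neutralized without deleting anything.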

Now, once again, the balance mostly works. I still believe that there is no fully automated system capable of managing the complexities of online human interaction — no software fix I know of. But I'd underestimated the power of dedicated human attention.

Plucking one early weed from a bed of germinating seeds changes everything. Small actions by focused participants change the tone of the whole. It is possible to maintain big healthy gardens online. The solution isn't cheap, or easy, or hands-free. Few things of value are.

terrence_j_sejnowski's picture
Computational Neuroscientist; Francis Crick Professor, the Salk Institute; Investigator, Howard Hughes Medical Institute; Co-author (with Patricia Churchland), The Computational Brain

How is it that insects manage to get by on many fewer neurons than we have? A fly brain has a few hundred thousand neurons, compared to the few hundred billion in our brains, a million times more neurons. Flies are quite successful in their niche. They can see, find food, mate, and create the next generation of flies. The traditional view is that unique neurons evolved in the brain of the fly to perform specific tasks, in contrast to the mammalian strategy of creating many more neurons of the same type, working together in a collective fashion. This view was bolstered when it became possible to record from single cortical neurons, which responded to sensory stimuli with highly variable spike trains from trial to trial. Reliability could be achieved only by averaging the responses of many neurons.

Theoretical analysis of neural signals in large networks assumed statistical randomness in the responses of neurons. These theories used the average firing rates of neurons as the primary statistical variable; individual spikes, and the times at which they occurred, were not relevant. In contrast, the timing of single spikes in flies has been shown to carry specific information about sensory stimuli important for guiding behavior, and in mammals the timing of spikes in the peripheral auditory system carries information about the spatial locations of sound sources. However, cortical neurons did not seem to care about the timing of spikes.

I have changed my mind about cortical neurons and now think that they are far more capable than we ever imagined. Two important experimental results pointed me in this direction. First, if you repeatedly inject the same fluctuating current into a neuron in a cortical slice, to mimic the inputs that occur in an intact piece of tissue, the spike times are highly reproducible from trial to trial. This shows that cortical neurons are capable of initiating spikes with millisecond precision. Second, if you arrange for a single synapse to be stimulated a few milliseconds before or after a spike in the neuron, the synaptic strength will increase or decrease, respectively. This tells us that the machinery in the cortex is every bit as capable as a fly brain — but what is it being used for?
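That pairing rule is known as spike-timing-dependent plasticity, and its shape is easy to sketch in code. The exponential windows below follow the standard textbook form, but the time constant and learning rates are illustrative assumptions, not measured cortical values:

```python
import math

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Return updated synaptic weight w given dt = t_post - t_pre in milliseconds.

    Pre-before-post (dt > 0) potentiates; post-before-pre (dt < 0) depresses.
    The exponential windows mean that pairings only a few ms apart change the
    synapse strongly, while widely separated spikes barely matter.
    Parameters are illustrative, not fits to data.
    """
    if dt > 0:
        w += a_plus * math.exp(-dt / tau)   # pre leads post: strengthen
    elif dt < 0:
        w -= a_minus * math.exp(dt / tau)   # post leads pre: weaken
    return max(0.0, w)                      # keep the weight non-negative

w = 0.5
w = stdp_update(w, dt=5.0)    # synapse fired 5 ms before the spike: w goes up
w = stdp_update(w, dt=-5.0)   # synapse fired 5 ms after the spike: w goes down
```

The asymmetry around dt = 0 is the point: the synapse is sensitive to millisecond-scale order, which is exactly the precision the slice experiments show cortical neurons can deliver.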

The cerebral cortex is constantly being bombarded by sensory inputs and has to sort through the myriad of signals for those that are the most important and to respond selectively to them. The cortex also needs to organize the signals being generated internally, in the absence of sensory inputs. The hypothesis that I have been pursuing over the last decade is that spike timing in cortical neurons is used internally as a way of controlling the flow of communication between neurons. This is different from the traditional view that spike times code sensory information, as occurs in the periphery. Rather, spike timing and the synchronous firing of large numbers of cortical neurons may be used to enhance the salience of sensory inputs, as occurs during focal attention, and to decide what information is worth saving for future use. According to this view, the firing rates of neurons are used as an internal representation of the world but the timing of spikes is used to regulate the communication of signals between cortical areas.

The way that neuroscientists perform experiments is biased by their theoretical views. If cortical neurons use rate coding you only need to record, and report, their average firing rates. But to find out if spike timing is important new experiments need to be designed and new types of analysis need to be performed on the data. Neuroscientists have begun to pursue these new experiments and we should know before too long where they will lead us.

carlo_rovelli's picture
Theoretical Physicist; Aix-Marseille University, in the Centre de Physique Théorique, Marseille, France; Author, Helgoland; There Are Places in the World Where Rules Are Less Important Than Kindness

I learned quantum mechanics as a young man, first from the book by Dirac, and then from a multitude of other excellent textbooks. The theory appeared bizarre and marvelous, but it made perfect sense to me. The world, as Shakespeare put it, is "strange and admirable", but it is coherent. I could not understand why people remained unhappy with such a clear and rational theory. In particular, I could not understand why some people wasted their time on a non-problem called the "interpretation of quantum mechanics".

I remained of this opinion for many years. Then I moved to Pittsburgh, to work in the group of Ted Newman, a great relativist and one of the most brilliant minds of the generation before mine. While I was there, the experiments by the team of Alain Aspect at Orsay, in France, which spectacularly confirmed some of the strangest predictions of quantum mechanics, prompted a long period of discussion in our group. Basically, Ted claimed that quantum theory made no sense. I claimed that it made perfect sense, since it is able to predict unambiguously the probability distribution of any conceivable observation.

A long time has passed, and I have changed my mind. Ted's arguments have finally convinced me: I was wrong, and he was right. I have slowly come to realize that in its most common textbook version, quantum mechanics makes sense as a theory of a small portion of the universe, a "system", only under the assumption that something else in the universe fails to obey quantum mechanics. Hence it becomes self-contradictory, in its usual version, if we take it as a general description of all the physical systems of the universe. Or, at least, there is still something key to understand about it.

This change of opinion motivated me to start a novel line of investigation, which I have called "relational quantum mechanics". It has also substantially affected my work in quantum gravity, leading me to consider a different sort of observable quantities as natural probes of quantum spacetime.

I am now sure that quantum theory still has much to tell us about the deep structure of the world. Unless I change my mind again, of course.

jonathan_haidt's picture
Social Psychologist; Thomas Cooley Professor of Ethical Leadership, New York University Stern School of Business; Author, The Righteous Mind

I was born without the neural cluster that makes boys find pleasure in moving balls and pucks around through space, and in talking endlessly about men who get paid to do such things. I always knew I could never join a fraternity or the military because I wouldn't be able to fake the sports talk. By the time I became a professor I had developed the contempt that I think is widespread in academe for any institution that brings young men together to do groupish things. Primitive tribalism, I thought. Initiation rites, alcohol, sports, sexism, and baseball caps turn decent boys into knuckleheads. I'd have gladly voted to ban fraternities, ROTC, and most sports teams from my university.

But not anymore. Three books convinced me that I had misunderstood such institutions because I had too individualistic a view of human nature. The first book was David Sloan Wilson's Darwin's Cathedral, which argued that human beings were shaped by natural selection operating simultaneously at multiple levels, including the group level. Humans went through a major transition in evolution when we developed religiously inclined minds and religious institutions that activated those minds, binding people into groups capable of extraordinary cooperation without kinship. 

The second book was William McNeill's Keeping Together in Time, about the historical prevalence and cultural importance of synchronized dance, marching, and other forms of movement. McNeill argued that such "muscular bonding" was an evolutionary innovation, an "indefinitely expansible basis for social cohesion among any and every group that keeps together in time." The third book was Barbara Ehrenreich's Dancing in the Streets, which made the same argument as McNeill but with much more attention to recent history, and to the concept of communitas, or group love. Most traditional societies had group dance rituals that functioned to soften structure and hierarchy and to increase trust, love, and cohesion. Westerners too have a need for communitas, Ehrenreich argues, but our society makes it hard to satisfy, and our social scientists have little to say about it.

These three books gave me a new outlook on human nature. I began to see us not just as chimpanzees with symbolic lives but also as bees without hives. When we made the transition over the last 200 years from tight communities (Gemeinschaft) to free and mobile societies (Gesellschaft), we escaped from bonds that were sometimes oppressive, yes, but into a world so free that it left many of us gasping for connection, purpose, and meaning. I began to think about the many ways that people, particularly young people, have found to combat this isolation. Rave parties and the Burning Man festival are spectacular examples of new ways to satisfy the ancient longing for communitas. But suddenly sports teams, fraternities, and even the military made a lot more sense.

I now believe that such groups do great things for their members, and that they often create social capital and other benefits that spread beyond their borders. The strong school spirit and alumni loyalty we all benefit from at the University of Virginia would drop sharply if fraternities and major sports were eliminated. If my son grows up to be a sports-playing fraternity brother, a part of me may still be disappointed. But I'll give him my blessing, along with three great books to read.

roger_schank's picture
CEO, Socratic Arts Inc.; John Evans Professor Emeritus of Computer Science, Psychology and Education, Northwestern University; Author, Make School Meaningful-And Fun!

When reporters interviewed me in the '70s and '80s about the possibilities for Artificial Intelligence I would always say that we would have machines that are as smart as we are within my lifetime. It seemed a safe answer since no one could ever tell me I was wrong. But I no longer believe that will happen. One reason is that I am a lot older and we are barely closer to creating smart machines.

I have not soured on AI. I still believe that we can create very intelligent machines. But I no longer believe that those machines will be like us. Perhaps it was the movies that led us to believe that we would have intelligent robots as companions. (I was certainly influenced early on by 2001.) Certainly most AI researchers believed that creating machines that were our intellectual equals or better was a real possibility. Early AI workers sought out intelligent behaviors to focus on, like chess or problem solving, and tried to build machines that could equal human beings in those same endeavors. While this was an understandable approach it was, in retrospect, wrong-headed. Chess playing is not really a typical intelligent human activity. Only some of us are good at it, and it seems to entail a level of cognitive processing that, while impressive, seems quite at odds with what makes humans smart. Chess players are methodical planners. Human beings are not.

Humans are constantly learning. We spend years learning some seemingly simple stuff. Every new experience changes what we know and how we see the world. Getting reminded of our previous experiences helps us process new experiences better than we did the time before. Doing that depends upon an unconscious indexing method that all people learn to do without quite realizing they are learning it. We spend twenty years (or more) learning how to speak properly and learning how to make good decisions and establish good relationships. But we tend not to know what we know. We can speak properly without knowing how we do it. We don't know how we comprehend. We just do.

All this poses a problem for AI. How can we imitate what humans are doing when humans don't know what they are doing when they do it? This conundrum led to a major failure in AI: expert systems, which relied upon rules that were supposed to characterize expert knowledge. But the major characteristic of experts is that they get faster as they know more, while more rules made systems slower. The flaw was relying upon specific, consciously stated knowledge instead of trying to figure out what people mean when they say they just knew it when they saw it, or that they had a gut feeling.

People give reasons for their behaviors, but they are typically figuring that stuff out after the fact. We reason non-consciously and explain rationally later. Humans dream. There obviously is some important utility in dreaming. Even if we don't understand precisely what the consequences of dreaming are, it is safe to assume that it is an important part of our unconscious reasoning process that drives our decision making. So, an intelligent machine would have to dream because it needed to, would have to have intuitions that proved to be good insights, and would have to have a set of driving goals that made it see the world in a way that a different entity with different goals would not. In other words, it would need a personality, and not one that was artificially installed but one that came with the territory of what it was about as an intelligent entity.

What AI can and should build are intelligent special-purpose entities. (We can call them Specialized Intelligences, or SIs.) Smart computers will indeed be created. But they will arrive in the form of SIs: ones that make lousy companions but know every shipping accident that ever happened and why (the shipping industry's SI), or that are experts on sales (a business-world SI). The sales SI, because sales is all it ever thought about, would be able to recite every interesting sales story that had ever happened and the lessons to be learned from each. For a salesman about to call on a customer, for example, this SI would be quite fascinating. We can expect a foreign policy SI that helps future presidents learn about the past in a timely fashion and helps them make decisions, because it knows every decision the government has ever made and has cleverly indexed them so as to be able to apply what it knows to current situations.

So AI, in the traditional sense, will not happen in my lifetime nor in my grandson's lifetime. Perhaps a new kind of machine intelligence will one day evolve and be smarter than us, but we are a really long way from that.

leo_m_chalupa's picture
Neurobiologist; Professor of Pharmacology and Physiology, George Washington University

The hottest topic in neuroscience today is brain plasticity. This catchphrase refers to the fact that various types of experience can significantly modify key attributes of the brain. This field began decades ago by focusing on how different aspects of the developing brain could be impacted by early rearing conditions.  

More recently, the field of brain plasticity has shifted to studies demonstrating a remarkable degree of change in the connections and functional properties of mature and even aged brains. Thousands of published papers have now appeared on this topic, many by reputable scientists, and this has led to a host of books, programs and even commercial enterprises touting the malleability of the brain with “proper” training. One is practically made to feel guilty for not taking advantage of this thriving store of information to improve one’s own brain or those of one’s children and grandchildren.

My field of research is developmental neurobiology and I used to be a proponent of the potential benefits documented by brain plasticity studies. I am still of the opinion that brain plasticity is a real phenomenon, one that deserves further study and one that could be utilized to better human welfare. But my careful reading of this literature has tempered my initial enthusiasm.

For one thing, those selling a commercial product are making many of the major claims for the benefits of brain exercise regimes. It is also the case that my experiences outside the laboratory have caused me to question the limitless potential of brain plasticity advocated by some devotees.

Point in fact: Recently I had the chance to meet someone I had not seen since childhood. The person had changed physically beyond all recognition, as might be expected.  Yet after spending some time with this individual, his personality traits of long ago became apparent, including a rather peculiar laugh I remember from grade school.

Point in fact: A close colleague had a near-fatal car accident, one that caused him to be in a coma for many days and in intensive care for weeks thereafter. Shortly after returning from his ordeal, this Type A personality changed into a seemingly mellow and serene person. But in less than two months, even before the physical scars of his accident had healed, he was back to his old driven self.

For a working scientist to invoke anecdotal experience to question a scientific field of endeavor is akin to heresy. But it seems to me that it is foolish to simply ignore what one has learned from a lifetime of experiences. The older I get the more my personal interactions convince me that a person’s core remains remarkably stable in spite of huge experiential variations. With all the recent emphasis on brain plasticity, there has been virtually no attempt to explain the stability of the individual’s core attributes, values and beliefs. 

Here is a real puzzle to ponder: Every cell in your body, including all 100 billion neurons in your brain, is in a constant process of breakdown and renewal. Your brain is different from the one you had a year or even a month ago, even without special brain exercises. So how is the constancy of one’s persona maintained? The answer to that question offers a far greater challenge to our understanding of the brain than the currently in-vogue field of brain plasticity.

frank_wilczek's picture
Physicist, MIT; Recipient, 2004 Nobel Prize in Physics; Author, Fundamentals

I was an earnest student in Catechism class. The climax of our early training, as thirteen year-olds, was an intense retreat in preparation for the sacrament of Confirmation. Even now I vividly remember the rapture of belief, the glow everyday events acquired when I felt that they reflected a grand scheme of the universe, in which I had a personal place. Soon afterward, though, came disillusionment. As I learned more about science, some of the concepts and explanations in the ancient sacred texts came to seem clearly wrong; and as I learned more about history and how it is recorded, some of the stories in those texts came to seem very doubtful.

What I found most disillusioning, however, was not that the sacred texts contained errors, but that they suffered by comparison. Compared to what I was learning in science, they offered few truly surprising and powerful insights. Where was there a vision to rival the concepts of infinite space, of vast expanses of time, of distant stars that rivaled and surpassed our Sun? Or of hidden forces and new, invisible forms of "light"? Or of tremendous energies that humans could, by understanding natural processes, learn to liberate and control? I came to think that if God exists, He (or She, or They, or It) did a much more impressive job revealing Himself in the world than in the old books; and that the power of faith and prayer is elusive and unreliable, compared to the everyday miracles of medicine and technology.

For many years, like some of my colleagues and some recent bestselling authors, I thought that active, aggressive debunking might be in order. I've changed my mind. One factor was my study of intellectual history. Many of my greatest heroes in physics, including Galileo, Newton, Faraday, Maxwell, and Planck, were deeply religious people. They truly believed that what they were doing, in their scientific studies, was discovering the mind of God. Many of Bach's and Mozart's most awesome productions are religiously inspired. Saint Augustine's writings display one of the most impressive intellects ever. And so on. Can you imagine hectoring this group? And what would be the point? Did their religious beliefs make them stupid, or stifle their creativity?

Also, debunking hasn't worked very well. David Hume already set out the main arguments for religious skepticism in the early eighteenth century. Bertrand Russell and many others have augmented them since. Textual criticism reduces fundamentalism to absurdity. Modern molecular biology, rooted in physics and chemistry, demonstrates that life is a natural process; Darwinian evolution illuminates its natural origin. These insights have been highly publicized for many decades, yet religious doctrines that contradict some or all of them have not merely survived, but prospered.

Why? Part of the answer is social. People tend to stay with the religion of their birth, for the same sorts of reasons that they stay loyal to their clan, or their country.

But beyond that, religion addresses some deep concerns that science does not yet, for most people, touch. The human yearning for meaningful understanding, our fear of death — these deep motivations are not going to vanish. 

Understanding, of course, is what science is all about. Many people imagine, however, that scientific understanding is dry and mundane, with no scope for wonder and amazement. That is simply ignorant. Looking for wonder and amazement? Try some quantum theory!

Beyond understanding inter-connected facts, people want to discover their significance or meaning. Neuroscientists are beginning to map human motivations and drives at the molecular level. As this work advances, we will attain a deeper understanding of the meaning of meaning. Freud's theories had enormous impact, not because they are correct, but because they "explained" why people feel and act as they do. Correct and powerful theories that address these issues are sure to have much greater impact.

Meanwhile, medical science is taking a deep look at aging. Within the next century, it may be possible for people to prolong youth and good health for many years — perhaps indefinitely. This would, of course, profoundly change our relationship with death. So to me the important challenge is not to debunk religion, but to address its issues in better ways.

brian_goodwin's picture
Professor of Biology at Schumacher College

I have changed my mind about the general validity of the mechanical worldview that underlies the modern scientific understanding of natural processes. Trained in biology and mathematics, I have used the scientific approach to the explanation of natural phenomena during most of my career. The basic assumption is that whatever properties and behaviours have emerged naturally during cosmic evolution can all be understood in terms of the motions and interactions of inanimate entities such as elementary particles, atoms, molecules, membranes and organelles, cells, organs, organisms, and so on.

Modelling natural processes on the basis of these assumptions has provided explanations for myriad natural phenomena ranging from planetary motion and electromagnetic phenomena to the properties and behaviour of nerve cells and the dynamic patterns that emerge in ant colonies or flocks of birds. There appeared to be no limit to the power of this explanatory procedure, which enchanted me and kept me busy throughout most of my scientific career in biology.

However, I have now come to the conclusion that this method of explaining natural phenomena has serious limitations, and that these come from the basic assumptions on which it is based. The crunch came for me with the "explanation" of qualitative experience in humans and other organisms. By this I mean the experience of pain or pleasure or wellbeing, or any other of the qualities that are very familiar to us.

These are described as "subjective", that is, experienced by a living organism, because they cannot be isolated from the subject experiencing them and measured quantitatively. What is often suggested as an explanation of this is evolutionary complexity: when an organism has a nervous system of sufficient complexity, subjective experience and feelings can arise. This implies that something totally new and qualitatively different can emerge from the interaction of "dead", unfeeling components such as cell membranes, molecules and electrical currents.

But this implies getting something from nothing, which violates what I have learned about emergent properties: there is always a precursor property for any phenomenon, and you cannot just introduce a new dimension into the phase space of your model to explain the result. Qualities are different from quantities and cannot be reduced to them.

So what is the precursor of the subjective experience that evolves in organisms? There must be some property of neurones or membranes or charged ions producing the electrical activity that is associated with the experience of feeling that emerges in the organism.

One possibility is to acknowledge that the world isn't what modern science assumes it to be, mechanical and "dead", but that everything has some basic properties relating to experience or feeling. Philosophers and scientists have been down this route before, and have called this pan-sentience or pan-psychism: the world is impregnated with some form of feeling in every one of its constituents. This makes it possible for the evolution of complex organised beings such as organisms to develop feelings and for qualities to be as real as quantities.

Pan-sentience shifts science into radically new territory. Science can now be about qualities as well as quantities, helping us to recover quality of life, to heal our relationship to the natural world, and to undo the damage we are causing to the earth's capacity to continue its evolution with us. It could help us to recover our place as participants in a world that is not ours to control, but is ours to contribute to creatively, along with all the other diverse members of our living, feeling, planetary society.

janna_levin's picture
Professor of Physics and Astronomy, Barnard College of Columbia University; Author, Black Hole Survival Guide; Director of Sciences, Pioneer Works

I used to take for granted an assumption that the universe is infinite. There are innumerable little things about which I've changed my mind but the size of the universe is literally the biggest physical attribute that has inspired a radical change in my thinking. I won't claim I "believe" the universe is finite, just that I recognize that a finite universe is a realistic possibility for our cosmos.

The general theory of relativity describes local curves in spacetime due to matter and energy. This model of gravity as a warped spacetime has seen countless successes beginning with a confirmation of an anomaly in the orbit of Mercury and continuing with the predictions of the existence of black holes, the expansion of spacetime, and the creation of the universe in a big bang. However, general relativity says very little about the global shape and size of the universe. Two spaces can have the same curvature locally but very different global properties. A flat space, for instance, can be infinite but there is another possibility, that it is finite and edgeless, wrapped back onto itself like a doughnut — but still flat. And there are an infinite number of ways of folding spacetime into finite, edgeless shapes, a kind of cosmic origami.

I grew up believing the universe was infinite. It was never taught to me in the sense that no one ever tried to prove to me the universe was infinite. It just seemed a natural assumption based on simplicity. That sense of simplicity no longer resonates as true once we have confronted that there must be a theory of gravity beyond General Relativity that involves the quantization, the discretization, of spacetime itself. In cosmology we have become accustomed to models of the universe that invoke extra dimensions, all of which are finite and it seems fair to imagine a universe born with all of its dimensions finite and compact. Then we are left with the mystery of why only three dimensions become so incredibly huge while the others remain curled up and small. We even hope to test models of extra dimensions in imminent laboratory experiments. These ideas are not remote and fantastical. They are testable.

People have said to me they were very surprised (disappointed) that I suggested the universe was finite. The infinite universe, they believed, was full of infinite potential and so philosophically (emotionally) so much richer and more thrilling. I explained that my suggestion of a finite universe was not a moral failing on my part, nor a consequence of diminished imagination. More thrilling was the knowledge that it does not matter what I believe. It does not matter if I prefer an infinite universe or a finite universe. Nature is not designed to satisfy our personal longings. Nature is what she is and it's a privilege merely to be privy to her mathematical codes.

I don't know that the universe is finite and so I don't believe that it is finite. I don't know that the universe is infinite and so I don't believe that it is infinite. I do see, however, that our mathematical reasoning has led to remarkable and sometimes psychologically uncomfortable discoveries. And I do believe that it is a realistic possibility that one day we may discover the shape of the entire universe. If the universe is too vast for us to ever observe the extent of space, we may still discover the size and shape of internal dimensions. From small extra dimensions we might possibly infer the size and shape of the large dimensions. Until then, I won't make up my mind.

hans_ulrich_obrist's picture
Curator, Serpentine Gallery, London; Editor: A Brief History of Curating; Formulas for Now; Co-author (with Rem Koolhas), Project Japan: Metabolism Talks

The 20th century was obsessed with the idea of the object and with hopes of architectural and artistic permanence, which nobody questioned more thoroughly than the late Cedric Price. The 21st century will increasingly question this fetishization of the object.

Which architectural and artistic contributions are going to endure? Not only those that have a built physical form. It's not only a question of objects but a question of ideas and scores.

In a conversation I had with her some months ago, Doris Lessing questioned the future of museums. It's not that she's fundamentally opposed to these institutions, but she's worried that their prioritisation of material objects from the past may not be enough to convey functional meaning to tomorrow's generations. Her 1999 novel, Mara and Dann, is premised on the aftermath of an ice age thousands of years into the future that has eradicated the entirety of life in the northern hemisphere. Her protagonists, long since confined to the other side of the globe, embark upon a journey but they are at a loss with the cultural remnants; they have no grounding in its artefacts and cities.

This is pure fiction, but she nevertheless warns that 'our entire culture is extremely fragile'. In light of this, Lessing urges us to pause and to reconsider the capacity of our language and cultural systems to proffer knowledge to those outside of our immediate public.

philip_campbell's picture
Editor-in-Chief of Nature since 1995; Beginning summer 2018, he will become Editor-in-Chief of Springer Nature’s portfolio of journals, books and magazines

I've changed my mind about the use of enhancement drugs by healthy people. A year ago, if asked, I'd have been against the idea, whereas now I think there's much to be said for it.

The ultimate test of such a change of mind is how I'd feel if my offspring (both adults) went down that road, and my answer is that with tolerable risks of side effects and zero risk of addiction, then I'd feel OK if there was an appropriate purpose to it. 'Appropriate purposes' exclude gaining an unfair advantage or unwillingly following the demands of others, but include gaining a better return on an investment of study or of developing a skill.

I became interested in the issues surrounding cognitive enhancement as one example of debates about human enhancement — debates that can only get more vigorous in future. It's also an example of a topic in which both natural and social sciences can contribute to better regulation — another theme that interests me. Thinking about the issues and looking at the evidence-based literatures made me realise how shallow was my own instinctive aversion to the use of such drugs by healthy people. It also led to a thoughtful article by Barbara Sahakian and Sharon Morein-Zamir in Nature (20 December 2007) that triggered many blog discussions.

Social scientists report that a small but significant proportion of students on at least some campuses are using prescription drugs in order to help their studies — drugs such as modafinil (prescribed for narcolepsy) and methylphenidate (prescribed for attention-deficit hyperactivity disorder). I've not seen studies that quantify similar use by academic faculty, or by people in other non-military walks of life, though there is no doubt that it is happening. There are anecdotal accounts and experimental small-scale trials showing that such drugs do indeed improve performance to a modest degree under particular circumstances.

New cognitive enhancing drugs are being developed, officially for therapy. And the therapeutic importance — both current and potential — of such drugs is indeed significant. But manufacturers won't turn away the significant revenues from illegal use by the healthy.

That word 'illegal' is the rub. Off-prescription use is illegal in the United States, at least. But that illegality reflects an official drugs culture that is highly questionable. It's a culture in which the Food and Drug Administration seems generally reluctant to embrace the regulation of enhancement for the healthy, though it is empowered to do so. It is also a culture that is rightly concerned about risk but wrongly founded in the idea that drugs used by healthy people are by definition a Bad Thing. That in turn reflects instinctive attitudes to do with 'naturalness' and 'cheating on yourself' that don't stand up to rational consideration. Perhaps more to the point, they don't stand up to behavioral consideration, as Viagra has shown.

Research and societal discussions are necessary before cognitive enhancement drugs should be made legally available for the healthy, but I now believe that that is the right direction in which to head.

With reference to the precursor statements of this year's annual question, there are facts behind that change of mind, some thinking, and some secular faith in humans, too.

lisa_randall's picture
Physicist, Harvard University; Author, Dark Matter and the Dinosaurs

When I first heard about the solar neutrino puzzle, I had a little trouble taking it seriously. We know that the sun is powered by a chain of nuclear reactions and that in addition to emitting energy these reactions lead to the emission of neutrinos (uncharged fundamental particles that interact only via the weak nuclear force). The original solar neutrino puzzle was that when physicists devised experiments to find these neutrinos, none were detected. But by the time I learned about the puzzle, physicists had in fact observed solar neutrinos — only the amount they found was about 1/3 to 1/2 of the amount that other physicists had predicted. But I was skeptical that this deficit was really a problem — how could we make such an accurate prediction about the sun, an object 93 million miles away about which we can measure only so much? To give one example, the prediction for the neutrino flux was strongly temperature-dependent. Did we really know the temperature sufficiently accurately? Were we sure we understood heat transport inside the sun well enough to trust this prediction?

But I ended up changing my mind (along with many other initially skeptical physicists). The solar neutrino puzzle turned out to be a clue to some very interesting physics. It turns out that neutrinos mix. Every neutrino is labeled by the charged lepton with which it interacts via the weak nuclear force. (Charged leptons are particles like electrons — there are two heavier versions known as muons and taus.) It turns out the neutrinos have a bit of an identity crisis and can convert into each other as they travel through the sun and as they make their way to Earth. An electron neutrino can change into a tau neutrino. Since detectors were looking only for electron neutrinos, they missed the ones that had converted. And that was the very elegant solution to the solar neutrino puzzle. The predictions based on what we knew about the Standard Model of particle physics (which tells us what the fundamental particles and forces are) had been correct — hence change of mind #1. But the prediction had been inaccurate because no one had yet measured the masses and mixing angles of neutrinos. Subsequent experiments have searched for all types of neutrinos — not just electron neutrinos — and found the different neutrino types, thereby confirming the mixing.

And that leads me to a second thing I changed my mind about (along with much of the particle physics community). These neutrino mixing angles turned out to be big. That is, a significant fraction of electron neutrinos turn into muon neutrinos, and a big fraction of muon neutrinos turn into tau neutrinos (here it was neutrinos in the atmosphere that had gone missing).  Few physicists had thought these mixing angles would be big. That is because similar angles in the quark sector (quarks are particles such as the up and down quarks inside protons and neutrons that interact via the strong nuclear force) are much smaller. Everyone based their guess on what was already known. These big neutrino mixing angles were a real surprise — perhaps the biggest surprise from particle physics measurements since I started studying the field.

Why are these angles important? First of all, neutrino mixing does in fact explain the missing neutrinos from the sun and from the atmosphere. But these angles are also an important clue as to the nature of the fundamental particles of which all known matter is made. One of the chief open questions about these particles is why there are three "copies" of the known particle types — that is, heavier versions with identical charges. Another is why these different versions have different masses. And a third question is why these particles mix in the way they have been measured to do. When we understand the answers to these questions we will have a much greater insight into the fundamental nature of all known matter. We don't know yet if we'll get the right answers, but these questions pose important challenges. And when we find the answers, it is likely at this point that neutrinos will provide a clue.

martin_rees's picture
Former President, The Royal Society; Emeritus Professor of Cosmology & Astrophysics, University of Cambridge; Fellow, Trinity College; Author, From Here to Infinity

Public discourse on very long-term planning is riddled with inconsistencies. Mostly we discount the future very heavily — investment decisions are expected to pay off within a decade or two. But when we do look further ahead — in discussions of energy policy, global warming and so forth — we underestimate the possible pace of transformational change. In particular, we need to keep our minds open — or at least ajar — to the possibility that humans themselves could change drastically within a few centuries.

Our medieval forebears in Europe had a cosmic perspective that was a million-fold more constricted than ours. Their entire cosmology — from creation to apocalypse — spanned only a few thousand years. Today, the stupendous time spans of the evolutionary past are part of common culture — except among some creationists and fundamentalists. Moreover, we are mindful of immense future potential. It seems absurd to regard humans as the culmination of the evolutionary tree. Any creatures witnessing the Sun's demise 6 billion years hence won't be human — they could be as different from us as we are from slime mould.

But, despite these hugely stretched conceptual horizons, the timescale on which we can sensibly plan, or make confident forecasts, has got shorter rather than longer. Medieval people, despite their constricted cosmology, did not expect drastic changes within a human life; they devotedly added bricks to cathedrals that would take a century to finish. For us, unlike for them, the next century will surely be drastically different from the present. There is a huge disjunction between the ever-shortening timescales of historical and technical change, and the near-infinite time spans over which the cosmos itself evolves.

Human-induced changes are occurring with runaway speed. It's hard to predict a mere century from now, because what will happen depends on us — this is the first century where humans can collectively transform, or even ravage, the entire biosphere. Humanity will soon itself be malleable, to an extent that's qualitatively new in the history of our species. New drugs (and perhaps even implants into our brains) could change human character; the cyberworld has potential that is both exhilarating and frightening. We can't confidently guess lifestyles, attitudes, social structures, or population sizes a century hence. Indeed, it's not even clear for how long our descendants would remain distinctively 'human'. Darwin himself noted that "not one living species will transmit its unaltered likeness to a distant futurity". Our own species will surely change and diversify faster than any predecessor — via human-induced modifications (whether intelligently controlled or unintended), not by natural selection alone. Just how fast this could happen is disputed by experts, but the post-human era may be only centuries away.

These thoughts might seem irrelevant to practical discussions — and best left to speculative academics and cosmologists. I used to think this. But humans are now, individually and collectively, so greatly empowered by rapidly changing technology that we can, by design or as an unintended consequence, engender global changes that resonate for centuries. And, sometimes at least, policy-makers indeed do think far ahead.

The global warming induced by fossil fuels burnt in the next fifty years could trigger gradual sea level rises that continue for a millennium or more. And in assessing sites for radioactive waste disposal, governments impose the requirement that they be secure for ten thousand years.

It's real political progress that these long-term challenges are higher on the international agenda, and that planners seriously worry about what might happen more than a century hence.

But in such planning, we need to be mindful that it may not be people like us who confront the consequences of our actions today. We are custodians of a 'posthuman' future — here on Earth and perhaps beyond — that can't just be left to writers of science fiction.

brian_eno's picture
Artist; Composer; Recording Producer: U2, Coldplay, Talking Heads, Paul Simon; Recording Artist

Experimental art and experimental politics have traditionally been convivial bedfellows, though usually, in my opinion, with very little benefit to each other. George Bernard Shaw and his circle fervently supported Stalin against the mounting tide of evidence; the Mitfords supported Hitler, and numerous gifted Italian poets and artists were persuaded by Fascism. Similarly, in the late sixties and early seventies the avant garde art scene in London was overwhelmed with admiration for Chairman Mao.

As a young artist I was part of that scene, and though never a hardcore Maoist, I was impressed by some of his ideas: that intellectuals shouldn't be separated off from workers, for example, and that art should somehow serve working class society. I was sick of 'Art for Art's sake' and the insularity of the English art-world. I liked too the idea that professors should spend a month each year farming, or that designers should find out how it feels to work in a steel foundry. It sounded so benign from a distance. I felt, like many people felt at the time, that my society was by comparison stagnant, class-bound, stuck in history, and I admired Mao and the Chinese for their courage in reinventing themselves so dramatically.

Of course, the Americans were saying how dreadful it all was, but I thought "Well they would, wouldn't they?" In fact their criticism increased its credibility, for I believed America had gone fundamentally wrong, and her enemies must therefore be my friends. I assumed the US sensed the winds of change issuing from China, and was digging her heels in, resisting the future with all her might.

And then, bit by bit, I started to find out what had actually happened, what Maoism meant. I resisted for a while, but I had to admit it: I'd been willingly propagandised, just like Shaw and Mitford and d'Annunzio and countless others. I'd allowed my prejudices to dominate my reason. Those professors working in the countryside were being bludgeoned and humiliated. Those designers were put in the steel-foundries as 'class enemies' — for the workers to vent their frustrations upon.  I started to realise what a monstrosity Maoism had been, and that it had failed in every sense.

Thus began for me a long process of re-evaluation. I had to accept that I was susceptible to propaganda, and that propaganda comes from all sides — not just the one I happen to dislike. I realised that I was not by any means a neutral observer, that I came with my own set of prejudices which could be easily tweaked.

I realised too that I had to learn to evaluate opinions separately from those who were giving them: the truth might sometimes come out of a mouth I disliked, but that didn't automatically mean it wasn't the truth.

Maoism, or my disappointment with it, also changed my feelings about how politics should be done. I went from revolutionary to evolutionary. I no longer wanted to see radical change dictated from the top — even if that top claimed to be the bottom, the 'voice of the people'. I lost faith in the idea that there were quick solutions, that everyone would simultaneously see the light and things would suddenly flip over into a wonderful new reality. I started to believe it was always going to be slow, messy, compromised, unglamorous, bureaucratic, endlessly negotiated — or else extremely dangerous, chaotic and capricious. In fact I've lost faith in the idea of ideological politics altogether: I want instead to see politics as the articulation and management of a changing society in a changing world, trying to do a half-decent job for as many people as possible, trying to set things up a little better for the future.

Perhaps this is why I've increasingly come to regard the determinedly non-ideological, ecumenical EU as the signal political experiment of our time…

tim_oreilly's picture
Founder and CEO, O'Reilly Media, Inc.; Author, WTF?: What’s the Future and Why It’s Up to Us

In November 2002, Clay Shirky organized a "social software summit," based on the premise that we were entering a "golden age of social software... greatly extending the ability of groups to self-organize."

I was skeptical of the term "social software" at the time. The explicit social software of the day, applications like Friendster and Meetup, was interesting but didn't seem likely to be the seed of the next big Silicon Valley revolution.

I preferred to focus instead on the related ideas that I eventually formulated as "Web 2.0," namely that the internet is displacing Microsoft Windows as the dominant software development platform, and that the competitive edge on that platform comes from aggregating the collective intelligence of everyone who uses it. The common thread linking Google's PageRank, eBay's marketplace, Amazon's user reviews, Wikipedia's user-generated encyclopedia, and Craigslist's self-service classified advertising pointed to a phenomenon too broad to be successfully captured by the term "social software." (This is also my complaint about the term "user generated content.") By framing the phenomenon too narrowly, you exclude the exemplars that help us understand its true nature. I was looking for a bigger metaphor, one that would tie together everything from open source software to the rise of web applications.

You wouldn't think to describe Google as social software, yet Google's search results are profoundly shaped by its collective interactions with its users: every time someone makes a link on the web, Google follows that link to find the new site. It weights the value of the link based on a kind of implicit social graph (a link from site A is more authoritative than one from site B, based in part on the size and quality of the network that in turn references either A or B). When someone makes a search, they also benefit from the data Google has mined from the choices millions of other people have made when following links provided as the result of previous searches.
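The link-weighting idea described above can be sketched as a toy PageRank iteration. The web graph, site names, damping factor, and iteration count below are all invented for illustration; Google's actual algorithm is vastly more elaborate.

```python
# Toy PageRank: a link from a well-referenced site transfers more
# authority than a link from an obscure one.
def pagerank(links, damping=0.85, iterations=50):
    """links maps each site to the list of sites it links to."""
    sites = set(links) | {t for targets in links.values() for t in targets}
    rank = {s: 1.0 / len(sites) for s in sites}
    for _ in range(iterations):
        new_rank = {s: (1 - damping) / len(sites) for s in sites}
        for site, targets in links.items():
            # each site shares its current authority among its outlinks
            for t in targets:
                new_rank[t] += damping * rank[site] / len(targets)
        rank = new_rank
    return rank

# Site A is linked to by both B and C, so it ends up most authoritative.
ranks = pagerank({"A": ["B"], "B": ["A"], "C": ["A"]})
```

The point of the sketch is the implicit social graph: no user ever declares "I endorse site A," yet the pattern of links, aggregated, produces exactly that judgment.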

You wouldn't describe eBay or Craigslist or Wikipedia as social software either, yet each of them is the product of a passionate community, without which none of those sites would exist, and from which they draw their strength, like Antaeus touching mother earth. The photo-sharing site Flickr and the bookmark-sharing site del.icio.us (both now owned by Yahoo!) also exploit the power of an internet community to build a collective work that is more valuable than any individual contributor could provide. But again, the social aspect is implicit — harnessed and applied, but never the featured act.

Now, five years after Clay's social software summit, Facebook, an application that explicitly explores the notion of the social network, has captured the imagination of those looking for the next internet frontier. I find myself ruefully remembering my skeptical comments to Clay after the summit, and wondering if he's saying "I told you so."

Mark Zuckerberg, Facebook's young founder and CEO, woke up the industry when he began speaking of "the social graph" — that's computer-science-speak for the mathematical structure that maps the relationships between people participating in Facebook — as the core of his platform. There is real power in thinking of today's leading internet applications explicitly as social software.
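In code, a social graph is nothing more exotic than an adjacency structure mapping people to the people they are connected to. A minimal sketch, with invented names and queries (this is not Facebook's API):

```python
from collections import defaultdict

class SocialGraph:
    """An undirected graph: each person maps to the set of people
    they are connected to."""
    def __init__(self):
        self.friends = defaultdict(set)

    def connect(self, a, b):
        # friendship is symmetric, so record the edge in both directions
        self.friends[a].add(b)
        self.friends[b].add(a)

    def mutual_friends(self, a, b):
        return self.friends[a] & self.friends[b]

g = SocialGraph()
g.connect("alice", "bob")
g.connect("alice", "carol")
g.connect("bob", "carol")
g.connect("carol", "dave")
# Queries like "who do alice and dave both know?" fall straight
# out of the structure.
common = g.mutual_friends("alice", "dave")
```

The power of treating this structure as a platform is that every application built on top of it can answer relationship queries directly, rather than reconstructing the graph from scratch.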

Mark's insight that the opportunity is not just about building a "social networking site" but rather building a platform based on the social graph itself provides a lens through which to re-think countless other applications. Products like xobni (inbox spelled backwards) and MarkLogic's MarkMail explore the social graph hidden in our email communications; Google and Yahoo! have both announced projects around this same idea. Google also acquired Jaiku, a pioneer in building a social-graph enabled address book for the phone.

This is not to say that the idea of the social graph as the next big thing invalidates the other insights I was working with. Instead, it clarifies and expands them:

  • Massive collections of data and the software that manipulates those collections, not software alone, are the heart of the next generation of applications.
  • The social graph is only one instance of a class of data structure that will prove increasingly important as we build applications powered by data at internet scale. You can think of the mapping of people, businesses, and events to places as the "location graph", or the relationship of search queries to results and advertisements as the "question-answer graph."
  • The graph exists outside of any particular application; multiple applications may explore and expose parts of it, gradually building a model of relationships that exist in the real world.
  • As these various data graphs become the indispensable foundation of the next generation "internet operating system," we face one of two outcomes: either the data will be shared by interoperable applications, or the company that first gets to a critical mass of useful data will become the supplier to other applications, and ultimately the master of that domain.

So have I really changed my mind? As you can see, I'm incorporating "social software" into my own ongoing explanations of the future of computer applications.

It's curious to look back at the notes from that first Social Software summit. Many core insights are there, but the details are all wrong. Many of the projects and companies mentioned have disappeared, while the ideas have moved beyond that small group of 30 or so people, and in the process have become clearer and more focused, imperceptibly shifting from what we thought then to what we think now.

Both Clay, who thought then that "social software" was a meaningful metaphor, and I, who found it less useful then than I do today, have changed our minds. A concept is a frame, an organizing principle, a tool that helps us see. It seems to me that we all change our minds every day through the accretion of new facts, new ideas, new circumstances. We constantly retell the story of the past as seen through the lens of the present, and only sometimes are the changes profound enough to require a complete repudiation of what went before.

Ideas themselves are perhaps the ultimate social software, evolving via the conversations we have with each other, the artifacts we create, and the stories we tell to explain them.

Yes, if facts change our mind, that's science. But when ideas change our minds, we see those facts afresh, and that's history, culture, science, and philosophy all in one.

esther_dyson's picture
Investor; Chairman, EDventure Holdings; Executive Founder, Wellville; Author: Release 2.0

What have I changed my mind about? Online privacy.

For a long time, I thought that people would rise to the challenge and start effectively protecting their own privacy online, using tools and services that the market would provide. Many companies offered such services, and almost none of them succeeded (at least not with their original business plans). People simply weren't interested: They were both paranoid and careless, and took little trouble to inform themselves. (Of course, if you've ever attempted to read an online privacy statement, you'll understand why.)

But now I've changed my mind and realized that the whole question needs reframing - which Facebook et al. are in the process of doing. Users never learned to say no to marketers who want their data... but they are getting into the habit of controlling that data themselves, because Facebook is teaching them that this is a natural thing to do.

Yes, Facebook certainly managed to draw attention to the whole "privacy" question with its Beacon tracking tool, but for most Facebook users the big question is how many people they can get to see their feed. They are happy to share their information with friends, and they consider it the most natural thing in the world to distinguish among friends (see new Facebook add-on applications such as Top Friends and Cliquey) and to manage their privacy settings to determine who can see which parts of their profile. So why shouldn't they do the same thing vis a vis marketers?

For example, I fly a lot, and I use various applications to let certain friends know where I am and plan to be. I'd be delighted to share that information with certain airlines and hotels if I knew they would send me special offers. (In fact, United Airlines once asked me to send in my frequent flyer statements from up to three competing airlines in exchange for 2000 bonus miles each. I gladly did so, and would have done it for free. I *want* United to know what a good customer I am...and how much more of my business they could win if they offered me even better deals.)

In short, for many users the Web is becoming a mirror, with users in control, rather than a heavily surveilled stage. The question isn't how to protect users' privacy, but rather how to give them better tools to control their own data - not by selling privacy or by getting them to "sell" their data, but by feeding their natural fascination with themselves and allowing them to manage their own presence. What once seemed like an onerous, weird task becomes akin to self-grooming online.

This begs a lot of questions, I know, including real, coercive invasions of privacy by government agencies, but I think the in-control users of the future will be better equipped to fight back. Give them a little time and a few bad experiences, and they'll start to make the distinction between an airline selling seats and a government that simply won't allow you to take it off your buddy list.

nicholas_g_carr's picture
Author, Utopia is Creepy

In January of 2007, China's president, Hu Jintao, gave a speech before a group of Communist Party officials. His subject was the Internet. "Strengthening network culture construction and management," he assured the assembled bureaucrats, "will help extend the battlefront of propaganda and ideological work. It is good for increasing the radiant power and infectiousness of socialist spiritual growth."

If I had read those words a few years earlier, they would have struck me as ludicrous. It seemed so obvious that the Internet stood in opposition to the kind of centralized power symbolized by China's regime. A vast array of autonomous nodes, not just decentralized but centerless, the Net was a technology of personal liberation, a force for freedom.

I now see that I was naive. Like many others, I mistakenly interpreted a technical structure as a metaphor for human liberty. In recent years, we have seen clear signs that while the Net may be a decentralized communications system, its technical and commercial workings actually promote the centralization of power and control. Look, for instance, at the growing concentration of web traffic. During the five years from 2002 through 2006, the number of Internet sites nearly doubled, yet the concentration of traffic at the ten most popular sites nonetheless grew substantially, from 31% to 40% of all page views, according to the research firm Compete.

Or look at how Google continues to expand its hegemony over web searching. In March 2006, the company's search engine was used to process a whopping 58% of all searches in the United States, according to Hitwise. By November 2007, the figure had increased yet again, to 65%. The results of searches are also becoming more, not less, homogeneous. Do a search for any common subject, and you're almost guaranteed to find Wikipedia at or near the top of the list of results. 

It's not hard to understand how the Net promotes centralization. For one thing, its prevailing navigational aids, such as search engine algorithms, form feedback loops. By directing people to the most popular sites, they make those sites even more popular. On the web as elsewhere, people stream down the paths of least resistance.
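That feedback loop is easy to demonstrate with a toy preferential-attachment simulation. All the numbers below are invented for illustration (they are not Compete's measurements): visitors who mostly follow the crowd concentrate traffic on a handful of sites even when every site starts out identical.

```python
import random

def simulate_traffic(num_sites=100, num_visits=100_000,
                     follow_popularity=0.85, seed=42):
    random.seed(seed)
    visits = [1] * num_sites  # every site starts with one visit
    for _ in range(num_visits):
        if random.random() < follow_popularity:
            # navigational aids steer this visitor toward already-popular sites
            site = random.choices(range(num_sites), weights=visits)[0]
        else:
            site = random.randrange(num_sites)  # independent browsing
        visits[site] += 1
    return visits

visits = simulate_traffic()
top10_share = sum(sorted(visits, reverse=True)[:10]) / sum(visits)
# With identical sites, the top 10 still end up with far more than
# their 10% "fair share" of traffic.
```

Which sites end up on top is pure chance here; that they end up with a disproportionate share is not. The feedback loop, not any intrinsic merit, produces the concentration.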

The predominant means of making money on the Net — collecting small sums from small transactions — also promotes centralization. It is only by aggregating vast quantities of content, data, and traffic that businesses can turn large profits. That's why companies like Microsoft and Google have been so aggressive in buying up smaller web properties. Google, which has been acquiring companies at the rate of about one a week, has disclosed that its ultimate goal is to "store 100% of user data."

As the dominant web companies grow, they are able to gain ever larger economies of scale through massive capital investments in the "server farms" that store and process online data. That, too, promotes consolidation and centralization. Executives of Yahoo and Sun Microsystems have recently predicted that control over the net's computing infrastructure will ultimately lie in the hands of five or six organizations.

To what end will the web giants deploy their power? They will, of course, seek to further their own commercial or political interests by monitoring, analyzing, and manipulating the behavior of "users." The connection of previously untethered computers into a single programmable system has created "a new apparatus of control," to quote NYU's Alexander Galloway. Even though the Internet has no center, technically speaking, control can be wielded, through software code, from anywhere. What's different, in comparison to the physical world, is that acts of control are more difficult to detect.

So it's not Hu Jintao who is deluded in believing that the net might serve as a powerful tool for central control. It is those who assume otherwise. I used to count myself among them. But I've changed my mind.

seirian_sumner's picture
Reader, Behavioral Ecology, University College London

I have been a true disciple of kin-selection theory ever since I discovered the wonders of social evolution as a young graduate student. Kin selection theory emphasizes the importance of relatedness (i.e. kin) in the evolution of social behavior. The essence of social living lies in sharing tasks amongst group members; for example some individuals may end up monopolizing reproduction (dominants or queens), whilst others defend or forage for the group (helpers or altruists). The key to understanding how sociality evolves rests on finding a watertight explanation for altruism. Why should any individual sacrifice their reproductive rights in order to help another individual reproduce?

When groups consist of families, there is an intuitive basis for the evolution of altruism. Helping relatives, with whom I share many of my genes, is potentially a lucrative strategy for passing my genes on to future generations. This reasoning led W.D. Hamilton to his theory of inclusive fitness, or kin selection, in 1964: a social action evolves if the benefit (b) weighted by the relatedness (r) between group members exceeds the costs (c) of that action (i.e. br>c). Evolution is satisfyingly parsimonious, so it is only natural that an apparently complicated thing like sociality can be explained in such simple terms.

I am certain I speak for many students of sociality in being eternally grateful to Hamilton for providing such an elegant theory with such clear predictions to test. Off we went, armed with Hamilton's Rule, on our quest to understand what makes an animal social. There are three things to measure: relatedness between group members (or actors and recipients), costs (to the actor in being altruistic) and benefits (to the recipient in receiving help). Happily, the molecular revolution has brought gene-level analytical tools to behavioral ecologists, allowing relatedness to be quantified accurately. Costs and benefits are more problematic to quantify, as they might vary over an individual's lifetime. Relatedness, therefore, could be a fast-track route to unlocking the secrets of sociality in a kin-selected context.

The social Hymenoptera (ants, bees and wasps) are an excellent group for studying sociality because they live in large groups and have a peculiar genetic sex-determination system (haplodiploidy), engendering high levels of relatedness. If relatedness predisposes any animal to be social, it will be the Hymenoptera. As an altruist in a social insect colony there are several ways by which you could favor your most closely related group members. You could selectively feed sibling brood that share the same parents as you. Or, you could eat the eggs laid by your siblings in preference to those laid by your mother (the queen). On planning an elopement from the homestead to start a new colony, you might choose full-sisters as your companions rather than a random relative. The predictions are elegant, simple and depend on the kin structure of a specific colony.

With insect cadavers mounting up in university freezers all over the world, we raced to test these predictions. The results were disheartening: worker wasps headed by multiply mated mother queens were not maximizing their indirect fitness by laying heaps of parthenogenetic male eggs; worker ants were unable to optimally manipulate the brood sex ratios (and hence their inclusive fitness) in relation to how many times their mother had mated; social wasps were feeding larvae in regard to their need rather than relatedness; swarming wasps were indiscriminate in who they founded new colonies with.

On the back of robust experiments like these, I have changed my mind about relatedness being the primary dictator of social evolution. Insects are unable to discriminate relatedness on an individual level. Instead, relatedness may act at the colony/population level, or simply in distinguishing kin from non-kin. This makes sense. An individual-level kin-discrimination mechanism is vulnerable to invasion by an occasional nepotist, who would favor its closest relatives over others. As the gene for nepotism spreads, the variation on which kin discrimination is based (e.g. chemical or visual cues) will disappear and individuals will no longer be able to tell kin from non-kin, let alone full siblings from half siblings: sociality breaks down. We knew this long before many of the kin-discrimination experiments were done, but optimism perseveres until enough evidence pervades.

Does this mean kin selection theory is wrong? Absolutely not! The reason for this is that relatedness is only one (albeit important) component of kin selection theory. The key is likely to be the interaction of a high (and variable) benefit to cost ratio from helping, and a positive relatedness between actors and recipients: relatedness does not have to be high for altruism to evolve, it just needs to be greater than the population average. I still believe you cannot hope to understand sociality unless you put relatedness at the top of your list. But, we need to complement the huge amount of data generated by the molecular hamster wheel with some serious estimates of the costs and benefits of social actions.

james_geary's picture
Deputy Curator, Nieman Foundation for Journalism at Harvard; Author, Wit's End

Often a new field comes along purporting to offer bold new insights into questions that have long vexed us. And often, after the initial excitement dies down, that field turns out to really only offer a bunch of new names for stuff we basically already knew. I used to think neuroeconomics was such a field. But I was wrong.

Neuroeconomics mixes brain science with the dismal science — throwing in some evolutionary psychology and elements of prospect theory as developed by Daniel Kahneman and Amos Tversky — to explain the emotional and psychological quirks of human economic behavior. To take a common example — playing the stock market. Our brains are always prospecting for pattern. Researchers at Duke University showed people randomly generated sequences of circles and squares. Whenever two consecutive circles or squares appeared, the subjects' nucleus accumbens — the part of the brain that's active whenever a stimulus repeats itself — went into overdrive, suggesting the participants expected a third circle or square to continue the sequence.

The stock market is filled with patterns. But the vast majority of those patterns are meaningless, at least in the short term. The hourly variance of a stock price, for example, is far less significant than its annual variance. When you're checking your portfolio every hour, the noise in those statistics drowns out any real information. But our brains evolved to detect patterns of immediate significance, and the nucleus accumbens sends a jolt of pleasure into the investor who thinks he's spotted a winner. Yet studies consistently show that people who follow their investments closely earn lower returns than those who don't pay much attention at all. Why? Because their nucleus accumbens isn't prompting them to make impulsive decisions based on momentary patterns they think they've detected.
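A toy random-walk calculation makes the variance point concrete. The drift and volatility figures below are invented, and returns are treated as independent normals for simplicity: over a year, a stock's upward drift dominates the noise; hour by hour, the drift is swamped, so nearly half of all checks show a loss.

```python
import random

def fraction_up(drift, volatility, trials=10_000, seed=1):
    """Fraction of simulated intervals showing a gain, with
    normally distributed returns."""
    random.seed(seed)
    return sum(random.gauss(drift, volatility) > 0
               for _ in range(trials)) / trials

HOURS_PER_YEAR = 252 * 6.5  # approximate trading hours in a year

# Hypothetical stock: 8% annual drift, 20% annual volatility.
# Drift scales linearly with time; volatility scales with the square
# root of time, so it shrinks far more slowly over short intervals.
annual = fraction_up(0.08, 0.20)
hourly = fraction_up(0.08 / HOURS_PER_YEAR,
                     0.20 / HOURS_PER_YEAR ** 0.5)
# Yearly checks mostly show gains; hourly checks are near a coin flip.
```

Checked hourly, the same stock delivers roughly as many losses as gains, and (given loss aversion) each loss stings about twice as much as each gain pleases. The frequent checker is simply buying himself more pain and more spurious patterns.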

The beauty of neuroeconomics is that it's easily verified by personal experience. A while back, I had stock options that I had to exercise within a specific period of time. So I started paying attention to the markets on a daily basis, something I normally never do. I was mildly encouraged every time the stock price ratcheted up a notch or two, smugly satisfied that I hadn't yet cashed in my options. But I was devastated when the price dropped back down again, recriminating myself for missing a golden opportunity. (This was Kahneman and Tversky's "loss aversion" — the tendency to strongly prefer to avoid a loss rather than to acquire a gain — kicking in. Some studies suggest that the fear of a loss has twice the psychological impact as the lure of a gain.) I eventually exercised my options after the price hit a level it hadn't reached for several years. I was pretty pleased with myself — until the firm sold some of its businesses a few weeks later and the stock price shot up by several dollars.

Neuroeconomics really does explain the non-rational aspects of human economic behavior; it is not just another way of saying there's a sucker born every minute. And now, thanks to this new field, I can blame my bad investment decisions on my nucleus accumbens rather than my own stupidity.

anton_zeilinger's picture
Nobel laureate (2022 - Physics); Physicist, University of Vienna; Scientific Director, Institute of Quantum Optics and Quantum Information; President, Austrian Academy of Sciences; Author, Dance of the Photons: From Einstein to Quantum Teleportation

When journalists asked me about 20 years ago what the use of my research is, I proudly told them that it has no use whatsoever. I saw an analogy to the usefulness of astronomy or of a Beethoven symphony. We don't do these things, I said, for their use; we do them because they are part of what it means to be human. In the same way, I said, we do basic science, in my case experiments on the foundations of quantum physics. It is part of being human to be curious, to want to know more about the world. There are always some of us who are just curious, and they follow their nose and investigate with no idea in mind what it might be useful for. Some of us are even more attracted to a question the more useless it appears. I did my work only because I was attracted by both the mathematical beauty of quantum physics and by the counterintuitive conceptual questions it raises. That is what I told them, time and again, up to the early 1990s.

Then a surprising new development started. The scientific community discovered that the same fundamental phenomena of quantum physics suddenly became relevant for more and more novel ways of transmitting and processing information. We now have the completely new field of quantum information science, where some of the basic concepts are quantum cryptography, quantum computation and even quantum teleportation. All this points us towards a new information technology where the same strange fundamental phenomena which attracted me initially to the field are essential. Quantum randomness makes it possible for us in quantum cryptography to send messages such that they are secure against unauthorized third parties. Quantum entanglement, called by Einstein "spooky action at a distance", makes quantum teleportation possible. And quantum computation builds on all the counterintuitive features of the quantum world together. When journalists ask me today what the use of my research is, I proudly tell them of my conviction that we will have a full quantum information technology in the future, though its specific features are still very much to be developed. So, never say that your research is "useless".

aubrey_de_grey's picture
Gerontologist; Chief Science Officer, SENS Foundation; Author, Ending Aging

The words "science" and "technology," or equivalently the words "research" and "development," are used in the same breath so readily that one might easily presume that they are joined at the hip: that their goals are indistinguishable, and that those who are good at one are, if not necessarily equally good at the other, at least quite good at evaluating the quality of work in the other. I grew up with this assumption, but the longer I work at the interface between science and technology the more I find myself having to accept that it is false — that most scientists are rather poor at the type of thinking that identifies efficient new ways to get things done, and that, likewise, technologists are mostly not terribly good at identifying efficient ways to find things out.

I've come to feel that there are several reasons underlying this divide.

A major one is the divergent approaches of scientists and technologists to the use of evidence. In basic research, it is exceptionally easy to be seduced by one's data — to see a natural interpretation of it and to overlook the existence of other, comparably economical interpretations of it that lead to dramatically different conclusions. It therefore makes sense for scientists to give the greatest weight, when evaluating the evidence for and against a given hypothesis, to the most direct observational or experimental evidence at hand.

Technologists, on the other hand, succeed best when they stand back from the task before them, thinking laterally about ways in which ostensibly irrelevant techniques might be applied to solve one or another component of the problem. The technologist's approach, when applied to science, is likely to result all too often in wasted time, as experiments are performed that contain too many departures from previous work to allow the drawing of firm conclusions either way concerning the hypothesis of interest.

Conversely, applying the scientist's methodology to technological endeavours can also result in wasted time, resulting from overly small steps away from techniques already known to be futile, like trying to fly by flapping mechanical wings.

But there's another difference between the characteristic mindsets of scientists and technologists, and I've come to view it as the most problematic. Scientists are avowedly "curiosity-driven" rather than "goal-directed" — they are spurred by the knowledge that, throughout the history of civilisation, innumerable useful technologies have become possible not through the stepwise execution of a predefined plan, but rather through the purposely undirected quest for knowledge, letting a dynamically-determined sequence of experiments lead where it may.

That logic is as true as it ever was, and any technologist who doubts it need only examine the recent history of science to change his mind. However, it can be — and, in my view, all too often is — taken too far. A curiosity-driven sequence of experiments is useful not because of the sequence, but because of the technological opportunities that emerge at the end of the sequence. The sequence is not an end in itself. And this is rather important to keep in mind. Any scientist, on completing an experiment, is spoilt for choice concerning what experiment to do next — or, more prosaically, concerning what experiment to apply for funding to do next.

The natural criterion for making this choice is the likelihood that the experiment will generate a wide range of answers to technologically important questions, thereby providing new technological opportunities. But an altogether more frequently adopted criterion, in practice, is that the experiment will generate a wide range of new questions — new reasons to do more experiments. This is only indirectly useful, and I believe that in practice it is indeed less frequently useful than programs of research designed with one eye on the potential for eventual technological utility.

Why, then, is it the norm? Simply because it is the more attractive to those who are making these decisions — the curiosity-driven scientists (whether the grant applicants or the grant reviewers) themselves. Curiosity is addictive: both emotionally and in their own enlightened self-interest, scientists want reasons to do more science, not more technology. But as a society we need science to be as useful as possible, as quickly as possible, and this addiction slows us down.

paul_w_ewald's picture
Professor of Biology, Amherst College; Author, Plague Time

At the end of The Structure of Scientific Revolutions, Thomas Kuhn suggested that it is reasonable to trust the general consensus of experts instead of a revolutionary idea, even when the revolutionary idea is consistent with a finding that could not be explained by the general consensus. He reasoned that the general consensus was reached by drawing together countless bits of evidence, and even though it could not explain everything, it had passed through a gauntlet to which the revolutionary idea had not yet been subjected.

Kuhn's idea seemed sufficiently plausible to lead me to generally trust the consensus of experts in disciplines outside my area of expertise. I still think that it is wise to trust the experts when their profession has a good understanding of the processes under consideration. This situation applies to experts on car maintenance, for example, because cars were made by people who shared their knowledge about the function of car parts, and top-notch car mechanics learn this information. It also applies generally to the main principles of mechanical and electrical engineering, biology, physics, and chemistry, because these principles are tested directly or indirectly by countless studies.

I am becoming convinced, however, that the opposite view is often true when the expert opinion pertains to the unknown: the longer and more widely the accepted wisdom has been held, the more hesitant we should be to trust it, especially if the experts have been studying the question intensively during this period of acceptance and contradictory findings or logic have been presented. The reason is simple. If an explanation has been widely and broadly accepted and convincing evidence still cannot be mustered, then it is quite reasonable to expect that the experts are barking up the wrong, albeit cherished, trees. That is, its acceptance has more to do with the limitations of intellectual ingenuity than with evidence.

This argument provides a clear guideline for allocating trust to experts: distrust expert opinion in accordance with what is not known about the subject.  This guideline is, of course, difficult to apply because one has to first ascertain whether a discipline actually has valid answers for a given area of inquiry.  Consider something as simple as a sprained ankle.  Evolutionary considerations suggest that the inflammation and pain associated with sprained ankles are adaptive responses to promote healing, and that suppressing them would be detrimental to long-term functionality of the joint.   I have searched the literature to find out whether any evidence indicates that treatment of sprained ankles with ice, compression, anti-inflammatories, and analgesics promotes or hinders healing and long-term functionality of the joint. In particular, I have been looking for comparisons of treated individuals with untreated controls.  I have not found any and am coming to the conclusion that this widely advocated expert opinion is a detrimental holdover from ancient Greek medicine, which often confused the return of the body to a more healthy appearance with the return of the body to a state of health.  

More generally, I am coming to the disquieting realization that much of scientific opinion and even more of medical opinion falls into the murky area circumscribed by a lack of adequate knowledge about the processes at hand.  This means that I must invoke broadly the guideline to distrust expert opinion in proportion to the lack of knowledge in the area. Although this has made me more objectionable, it has also been of great value intellectually and practically, as when, for example, I sprain my ankle.

daniel_goleman's picture
Psychologist; Author (with Richard Davidson), Altered Traits

One of my most basic assumptions about the relationship between mental effort and brain function has begun to crumble. Here's why.

My earliest research interests as a psychologist were in the ways mental training can shape biological systems.  My doctoral dissertation was a psychophysiological study of meditation as an intervention in stress reactivity; I found (as have many others since) that the practice of meditation seems to speed the rate of physiological recovery from a stressor.

My guiding assumptions included the standard premise that the mind-body relationship operates according to orderly, understandable principles.  One such might be called the "dose-response" rule, that the more time put into a given method of training, the greater the result in the targeted biological system.  This is a basic correlate of neuroplasticity, the mechanism through which repeated experience shapes the brain.

For example, a string of research has now established that more experienced meditators recover more quickly from stress-induced physiological arousal than do novices. Nothing remarkable there. The dose-response rule would predict this is so. Similarly, brain imaging studies show that the brain areas for spatial memory in London taxi drivers become enhanced during the first six months they spend driving around that city's winding streets; likewise, the area for thumb movement in the motor cortex becomes more robust in violinists as they continue to practice over many months.

This relationship has been confirmed in many varieties of mental training. A seminal 2004 article in the Proceedings of the National Academy of Sciences found that, compared to novices, highly adept meditators generated far more high-amplitude gamma wave activity — which reflects finely focused attention — in areas of the prefrontal cortex while meditating.

The seasoned meditators in this study — all Tibetan lamas — had undergone cumulative levels of mental training akin to the amount of lifetime sports practice put in by Olympic athletes: 10,000 to 50,000 hours. Novices tended to increase gamma activity by around 10 to 15 percent in the key brain area, while most experts had increases on the order of 100 percent from baseline. What caught my eye in this data was not this difference between novices and experts (which might be explained in any number of ways, including a self-selection bias), but rather a discrepancy in the data among the group of Olympic-level meditators.

Although the experts' average boost in gamma was around 100 percent, two lamas were "outliers": their gamma levels leapt 700 to 800 percent. This goes far beyond an orderly dose-response relationship — these jumps in high-amplitude gamma activity are the highest ever reported in the scientific literature apart from pathological conditions like seizures. Yet the lamas were voluntarily inducing this extraordinarily heightened brain activity for just a few minutes at a time — and by meditating on "pure compassion," no less.

I have no explanation for this data, but plenty of questions. At the higher reaches of contemplative expertise, do principles apply (as the Dalai Lama has suggested in dialogues with neuroscientists) that we do not yet grasp? If so, what might these be? In truth, I have no idea. But these puzzling data points have pried open my mind a bit as I've had to question what had been a rock-solid assumption of my own.

simon_baron_cohen's picture
Professor of Developmental Psychopathology, University of Cambridge; Fellow, Trinity College, Cambridge; Director, Autism Research Centre, Cambridge; Author, The Pattern Seekers

When I was young I believed in equality as a guiding principle in life. It's not such a bad idea, when you think about it. If we treat everyone else as being our equals, no one feels inferior. And as an added bonus, no one feels superior. Whilst it is a wonderfully cosy, warm, feel-good idea, I have changed my mind about equality. There seemed to be two moments in my thinking about this principle that revealed some cracks in the perfect idea. Let me describe how these two moments changed my mind.

The first moment was in thinking about economic equality. Living on a kibbutz was an interesting opportunity to see that if you want everyone to have exactly the same amount of money, exactly the same possessions, or exactly the same luxuries, the only way to achieve this is by legislation. In a small community like a kibbutz, or in an Amish community, where there is an opportunity for all members of the community to decide on their lifestyles collectively and where the legislation is the result of consensual discussion, economic equality might just be possible.

But in the large towns and cities in which most of us live, and with the unbounded opportunities to see how other people live, through travel, television and the web, it is patently untenable to expect complete strangers to accept economic equality if it is forced onto them. So, for small groups of people who know each other and choose to live together, economic equality might be an achievable principle. But for large groups of strangers, I think we have to accept this is an unrealistic principle. Economic equality presumes pre-existing relationships based on trust, mutual respect, and choice, which are hard to achieve when you hardly know your neighbours and feel alienated from how your community is run.

The second moment was in thinking about how to square equality with individual differences. Equality is easy to believe in if you believe everyone is basically the same. The problem is that it is patently obvious that we are not all the same. Once you accept the existence of individual differences, this opens the door to some varieties of difference being better than others. 

Let's take the thorny subject of sex differences. If males have more testosterone than females, and if testosterone causes not only your beard to grow but also your muscles to grow stronger, it is just naïve to hold onto the idea that women and men are going to be starting on a level playing field in competitive sports where strength matters. This is just one example of how individual differences in hormonal levels can play havoc with the idea of biological equality.

Our new research suggests hormones like prenatal testosterone also affect how the mind develops. Higher levels of prenatal testosterone are associated with slower social and language development and reduced empathy. Higher levels of prenatal testosterone are also associated with more autistic traits, stronger interests in systems, and greater attention to detail. A few more drops of this molecule seem to be associated with important differences in how our minds work.

So, biology has little time for equality. This conclusion should come as no surprise, since Darwin's theory of evolution was premised on the existence of individual differences, upon which natural selection operates. In modern Darwinism such individual differences are the result of genetic differences, either mutations or polymorphisms in the DNA sequence. Given how hormones and genes (which are not mutually exclusive, genetic differences being one way in which differences in hormone levels come about) can put us onto very different paths in development, how can we believe in equality in all respects?

The other way in which biology is patently unequal is in the likelihood of developing different medical conditions. Males are sometimes referred to as the weaker sex because they are more likely to develop a whole host of conditions, among which are autism (four boys for every one girl) or Asperger Syndrome (nine boys for every one girl). Given these risks, it becomes almost comical to believe in equality.

I still believe in some aspects of the idea of equality, but I can no longer accept the whole package. The question is, is it worth holding on to some elements of the idea if you've given up other elements? Does it make sense to have a partial belief in equality? Do you have to either believe in all of it, or none of it? My mind has been changed from my youthful starting point where I might have hoped that equality could be followed in all areas of life, but I still see value in holding on to some aspects of the principle. Striving to give people equality of social opportunity is still a value system worth defending, even if in the realm of biology, we have to accept equality has no place.

gerd_gigerenzer's picture
Psychologist; Director, Harding Center for Risk Literacy, Max Planck Institute for Human Development; Author, How to Stay Smart in a Smart World

In a 2007 radio advertisement, former NYC mayor Rudy Giuliani said, "I had prostate cancer, five, six years ago. My chances of surviving prostate cancer — and thank God I was cured of it — in the United States: 82 percent. My chances of surviving prostate cancer in England: only 44 percent under socialized medicine." Giuliani was lucky to be living in New York, and not in York — true?

In World Brain (1938), H. G. Wells predicted that for an educated citizenship in a modern democracy, statistical thinking would be as indispensable as reading and writing. At the beginning of the 21st century, we have succeeded in teaching millions how to read and write, but many of us still don't know how to reason statistically — how to understand risks and uncertainties in our technological world.

Giuliani is a case in point. One basic concept that everyone should understand is the 5-year survival rate. Giuliani used survival rates from the year 2000, when 49 Britons per 100,000 were diagnosed with prostate cancer, of whom 28 died within 5 years — about 44 percent. Is it true that his chances of surviving cancer are about twice as high in what Giuliani believes is the best health care system in the world? Not at all. Survival rates are not the same as mortality rates. The U.S. has in fact about the same prostate cancer mortality rate as the U.K. But far more Americans participate in PSA screening (although its effect on mortality reduction has not been proven). As a consequence, more Americans are diagnosed with prostate cancer, which skyrockets the 5-year survival rate to more than 80%, although no life is saved. Screening detects many "silent" prostate cancers that the patient would have never noticed during his lifetime. Americans live longer with the diagnosis, but they do not live longer. Yet many Americans end up incontinent or impotent for the rest of their lives, due to unnecessary aggressive surgery or radiation therapy, believing that their life has been saved.
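
The arithmetic behind the survival/mortality distinction is easy to check. A minimal sketch, using the British figures above; the assumption that screening triples the number of diagnoses is purely illustrative, not a figure from the text:

```python
# Five-year survival = (diagnosed - dead within 5 years) / diagnosed.
# UK circa 2000: roughly 49 prostate-cancer diagnoses per 100,000 men,
# of whom 28 died within five years.
diagnosed_uk, deaths_uk = 49, 28
survival_uk = (diagnosed_uk - deaths_uk) / diagnosed_uk  # roughly 0.43

# Now suppose (illustratively) that screening triples the number of
# diagnoses, the extra cases being "silent" cancers that would never
# have killed, while the number of deaths stays exactly the same.
diagnosed_screened, deaths_screened = 3 * diagnosed_uk, deaths_uk
survival_screened = (diagnosed_screened - deaths_screened) / diagnosed_screened  # roughly 0.81

# The survival rate "skyrockets" although mortality is unchanged.
mortality_unchanged = (deaths_uk == deaths_screened)
```

The survival rate jumps from the low forties to over 80 percent without a single life being saved, which is exactly the trap in comparing survival rates across countries with different screening intensities.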

Giuliani is not an exception to the prevailing confusion about how to evaluate health statistics. For instance, my research shows that 80% to 90% of German physicians do not understand what a positive screening test means — such as PSA, HIV, or mammography — and most do not know how to explain to the patient the potential benefits and harms. Patients, however, falsely assume that their doctors know and understand the relevant medical research. In most medical schools, education in understanding health statistics is currently lacking or ineffective.

The fact that statistical illiteracy among physicians, patients, and politicians is still not well known, much less addressed, made me pessimistic about the chances of any improvement. Statistical illiteracy in health matters turns the ideals of informed consent and shared decision-making into science fiction. Yet I have begun to change my mind. Here are a few reasons why I'm more optimistic.

Consider the concept of relative risks. You may have heard that mammography screening reduces breast cancer mortality by 25%! Impressive, isn't it? Many believe that if 100 women participate, the lives of 25 will be saved. But don't be taken in again. The number is based on studies showing that out of every 1,000 women who do not participate in mammography screening, 4 will die of breast cancer within about 10 years, whereas among those who participate in screening this number decreases to 3. This difference can be expressed as an absolute risk: one fewer woman out of every 1,000 dies of breast cancer, which is clear and transparent. But it can also be phrased in terms of a relative risk: a 25% benefit. I have asked hundreds of gynecologists to explain what this benefit figure means. The good news is that two-thirds understood that 25% means 1 in 1,000. Yet one-third overestimated the benefit by one or more orders of magnitude. Thus, better training in medical school is still needed.
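
The two framings can be computed side by side. A minimal sketch using the numbers above (4 versus 3 deaths per 1,000 women over roughly ten years):

```python
# Ten-year breast cancer deaths per 1,000 women, with and without screening.
group_size = 1000
deaths_without_screening = 4
deaths_with_screening = 3

saved = deaths_without_screening - deaths_with_screening  # 1 woman per 1,000

# Relative risk reduction: the headline "25%" figure.
relative_risk_reduction = saved / deaths_without_screening  # 0.25

# Absolute risk reduction: the transparent "1 in 1,000" figure.
absolute_risk_reduction = saved / group_size  # 0.001
```

The same single saved life yields either "25%" or "0.1%", depending on the denominator chosen, which is why the relative framing is so much more persuasive and so much less informative.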

What makes me optimistic is the reaction of some 1,000 gynecologists I have trained in understanding risks and uncertainties as part of their continuing education. First, learning how to communicate risks was a top topic on their wish list. Second, despite the fact that most had little statistical training, they learned quickly. Consider the situation of a woman who tests positive in a screening mammogram and asks her doctor whether she has cancer for certain, or what her chances are. She has a right to get the best answer from medical science: out of ten women who test positive, only one has breast cancer; the other nine cases are false alarms. Most women are never informed about this relevant fact, and react with panic and fear. Mammography is not a very reliable test. Before the training, the majority of gynecologists mistakenly believed that about 9 out of 10 women who test positive have cancer, as opposed to only one. After the training, however, almost all physicians understood how to read this kind of health statistic. That's real progress, and I didn't expect so much, so soon.
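
The "1 in 10" answer falls out of a natural-frequency calculation. The prevalence, detection, and false-alarm figures below are illustrative assumptions in the spirit of such teaching examples, not numbers given in the text:

```python
# Illustrative natural frequencies for screening mammography
# (assumed, roughly realistic): out of 1,000 women screened,
# about 10 have breast cancer.
women = 1000
with_cancer = 10
true_positives = 9    # the test detects ~9 of the 10 cancers
false_positives = 89  # but also alarms ~89 of the 990 healthy women

# Of all women who test positive, what fraction actually has cancer?
positive_predictive_value = true_positives / (true_positives + false_positives)
# roughly 0.09, i.e. about 1 in 10
```

Framed as counts of concrete women rather than as conditional probabilities, the result that nine out of ten positive tests are false alarms becomes visible at a glance, which is presumably why the gynecologists learned it so quickly.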

What makes me less optimistic is resistance to transparency in health from government institutions. A few years ago, I presented the program of transparent risk communication to the National Cancer Institute in Bethesda. Two officials took me aside afterwards and lauded the program for its potential to make health care more rational. I asked if they intended to implement it. Their answer was no. Why not? As they explained, transparency in this form was bad news for the government — a benefit of only 0.1% instead of 25% would make poor headlines for the upcoming election! In addition, their board was appointed by the presidential administration, for which transparency in health care is not a priority.

Win some, lose some. But I think the tide is turning. Statistics may still be woefully absent from most school curricula, including medical schools. That could soon change in the realm of health, however, if physicians and patients make common cause, eventually forcing politicians to do their homework.

helena_cronin's picture
Co-director of LSE's Centre for Philosophy of Natural and Social Science; Author, The Ant and the Peacock: Altruism and Sexual Selection from Darwin to Today

What gives rise to the most salient, contested and misunderstood of sex differences… differences that see men persistently walk off with the top positions and prizes, whether influence or income, whether heads of state or CEOs… differences that infuriate feminists, preoccupy policy-makers, galvanize legislators and spawn 'diversity' committees and degrees in gender studies?

I used to think that these patterns of sex differences resulted mainly from average differences between men and women in innate talents, tastes and temperaments. After all, in talents men are on average more mathematical, more technically minded, women more verbal; in tastes, men are more interested in things, women in people; in temperaments, men are more competitive, risk-taking, single-minded, status-conscious, women far less so. And therefore, even where such differences are modest, the distribution of these 3 Ts among males will necessarily be different from that among females — and so will give rise to notable differences between the two groups. Add to this some bias and barriers — a sexist attitude here, a lack of child-care there. And the sex differences are explained. Or so I thought.

But I have now changed my mind. Talents, tastes and temperaments play fundamental roles. But they alone don't fully explain the differences. It is a fourth T that most decisively shapes the distinctive structure of male-female differences. That T is Tails — the tails of these statistical distributions. Females are much of a muchness, clustering round the mean. But, among males, the variance — the difference between the most and the least, the best and the worst — can be vast. So males are almost bound to be over-represented both at the bottom and at the top. I think of this as 'more dumbbells but more Nobels'.

Consider the mathematics sections in the USA's National Academy of Sciences: 95% male. Which contributes most to this predominance — higher means or larger variance? One calculation yields the following answer. If the sex difference between the means was obliterated but the variance was left intact, male membership would drop modestly to 91%. But if the means were left intact but the difference in the variance was obliterated, male membership would plummet to 64%. The overwhelming male predominance stems largely from greater variance.
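
The interplay of means and variance at the tail can be sketched with overlapping normal distributions. The mean gap (0.2 SD) and the standard-deviation ratio (1.15) below are illustrative assumptions, not the inputs to the Academy calculation:

```python
import math

def tail(t, mu=0.0, sigma=1.0):
    """P(X > t) for a normal distribution N(mu, sigma^2)."""
    return 0.5 * math.erfc((t - mu) / (sigma * math.sqrt(2)))

def male_share(t, mean_gap=0.0, sd_ratio=1.0):
    """Fraction male among those above threshold t (in female SD units),
    given a male mean advantage and a male/female SD ratio."""
    m = tail(t, mu=mean_gap, sigma=sd_ratio)
    f = tail(t)
    return m / (m + f)

t = 3.0  # three female standard deviations above the female mean
share_mean_only = male_share(t, mean_gap=0.2)                # means differ, variances equal
share_var_only = male_share(t, sd_ratio=1.15)                # means equal, males more variable
share_both = male_share(t, mean_gap=0.2, sd_ratio=1.15)      # both effects combined
```

In this sketch a modest variance difference alone already outweighs a modest mean difference this far out in the tail, and the imbalance grows as the threshold moves further right, which is Cronin's point in numerical form.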

Similarly, consider the most intellectually gifted of the USA population, an elite 1%. The difference between their bottom and top quartiles is so wide that it encompasses one-third of the entire ability range in the American population, from IQs above 137 to IQs beyond 200. And who's overwhelmingly in the top quartile? Males. Look, for instance, at the boy:girl ratios among adolescents for scores in mathematical-reasoning tests: scores of at least 500, 2:1; scores of at least 600, 4:1; scores of at least 700, 13:1.

Admittedly, those examples are writ large — exceptionally high aptitude and a talent that strongly favours males and with a notably long right-hand tail. Nevertheless, the same combined causes — the forces of natural selection and the facts of statistical distribution — ensure that this is the default template for male-female differences.

Let's look at those causes. The legacy of natural selection is twofold: mean differences in the 3 Ts and males generally being more variable; these two features hold for most sex differences in our species and, as Darwin noted, greater male variance is ubiquitous across the entire animal kingdom. As to the facts of statistical distribution, they are three-fold … and watch what happens at the end of the right tail: first, for overlapping bell-curves, even with only a small difference in the means, the ratios become more inflated as one goes further out along the tail; second, where there's greater variance, there's likely to be a dumbbells-and-Nobels effect; and third, when one group has both greater mean and greater variance, that group becomes even more over-represented at the far end of the right tail.

The upshot? When we're dealing with evolved sex differences, we should expect that the further out we go along the right curve, the more we will find men predominating. So there we are: whether or not there are more male dumbbells, there will certainly be — both figuratively and actually — more male Nobels.

Unfortunately, however, this is not the prevailing perspective in current debates, particularly where policy is concerned. On the contrary, discussions standardly zoom in on the means and blithely ignore the tails. So sex differences are judged to be small. And thus it seems that there's a gaping discrepancy: if women are as good on average as men, why are men overwhelmingly at the top? The answer must be systematic unfairness — bias and barriers. Therefore, so the argument runs, it is to bias and barriers that policy should be directed. And so the results of straightforward facts of statistical distribution get treated as political problems — as 'evidence' of bias and barriers that keep women back and sweep men to the top. (Though how this explains the men at the bottom is an unacknowledged mystery.)

But science has given us biological insights, statistical rules and empirical findings … surely sufficient reason to change one's mind about men at the top.

nicholas_humphrey's picture
Emeritus Professor of Psychology, London School of Economics; Visiting Professor of Philosophy, New College of the Humanities; Senior Member, Darwin College, Cambridge; Author, Soul Dust

The economist John Maynard Keynes, when criticised for shifting his position on monetary policy, retorted: "When the facts change, I change my mind. What do you do, sir?" Point taken. Yet, despite the way the Edge 2008 Question has been framed, in science it does not always take new facts to change people's minds. Instead, as Thomas Kuhn recognised, at major turning points in the history of science, theorists who have previously found themselves struggling to make sense of "known facts" sometimes undergo a radical change in perspective, such that they see these same facts in a quite different light. Where people earlier saw the rabbit, they now see the duck.

In my own research on consciousness, I have changed my mind more than once. I expect it will happen again. But it has not — at least so far — been because I learned any new facts. Contrary to the hopes of neuroscientists on one side, quantum physicists on the other, I'm pretty sure all the facts that we need to solve the hard problem are already familiar to us — if only we could see them for what they are. No magic bullet is going to emerge from the lab, from brain imaging or particle accelerators. Instead, what we are waiting for is merely (!) a revolutionary new way of thinking about things that we all, as conscious creatures, already know — perhaps a way of making those same facts unfamiliar.

What is the hard problem? The problem is to explain the mysterious out-of-this-world qualities of conscious experience — the felt redness of red, the felt sharpness of pain. I once believed that the answer lay in introspection: "thoughts about thoughts", I reckoned, could yield the requisite magical properties as an emergent property. But I later realised on logical (not factual) grounds that this idea was empty. Magic doesn't simply emerge, it has to be constructed. So, since then, I've been working on a constructivist theory of consciousness. And my most promising line yet (as I see it) has been to turn the problem round and to imagine that the hardness of the problem may actually be the key to its solution.

Just suppose that the "Cartesian theatre of consciousness", about which modern philosophers are generally so sceptical, is in fact a biological reality. Suppose, indeed, that Nature has designed our brains to contain a mental theatre, designed for the very purpose of staging the qualia-rich spectacle on which we set such store. Suppose, in short, that consciousness exists primarily for our entertainment and amazement.

I can tell you that, with this changed mind-set, I already see the facts quite differently. I hope it does the same for you, sir.

andrian_kreye's picture
Editor-at-large of the German Daily Newspaper, Sueddeutsche Zeitung, Munich

I had witnessed the destructive power of faith more than a few times. As a reporter I had seen how evangelists supported a ruthless war in Guatemala, where the ruling general and evangelical minister Rios Montt had set out to eradicate the remnants of Mayan culture in the name of God. I had spent a month with Hamas in the refugee camps of Gaza, where fathers and mothers would praise their dead sons' suicide missions against Israel as spiritual quests. Long before 9/11 I had attended a religious conference in Khartoum, where the spiritual leader of Sudan, Hassan al-Turabi, found common intellectual ground for such diverse people as Cardinal Arinze, Reverend Moon and the future brothers in arms Osama bin Laden and Ayman al-Zawahiri. There the Catholic Church, dubious cults and Islamist terror declared war on secularism and rational thought.

It didn't have to be outright war, though. Many a time I saw faith paralyse thinking and intellect. I had listened to evangelical scholar Kent Hovind explain to children how Darwin was wrong, because dinosaurs and man roamed the earth together. I had spent time with the Amish, a stubborn, backwards people romanticized by Luddite sentiment. I had visited the Church of Scientology's Celebrity Center in Hollywood, where a strange and stifling dogma is glamorized by movie and pop stars.

It was during my work on faith in the US that I came across the New Religious Movements studies of David B. Barrett's "World Christian Encyclopedia". Barrett and his fellow researchers George T. Kurian and Todd M. Johnson had come to the conclusion that of all centuries, it was the 20th century, the alleged pinnacle of secular thought, that had brought forth the most new religions in the history of civilization. They had counted 9,900 full-fledged religions around the world. The success of new religions, they explained, came with the disintegration of traditional structures like the family, the tribe and the village. In the rootless world of megacities and American suburbia alike, religious groups provide the very social fabric that society can no longer supply.

It was hard facts against hard facts. At first, the visceral experience out in the field overpowered the raw data of the massive body of scientific research. Still, it forced me to rethink my hardened stand towards faith. It was hard to let go of the empirical data of experience and accept the hard facts of science. This is a route normally leading from faith to rational thought. No, it hasn't brought me to faith, but I had to acknowledge its persistence and cultural power. First and foremost, it demonstrated that the empirical data of journalism are no match for the bigger picture of science.

austin_dacey's picture
Representative to the United Nations for the Center for Inquiry in New York City

As a teenager growing up in the rural American Midwest, I played in a Christian rock band. We wrote worship songs, with texts on religious themes, and experienced the jubilation and transport of the music as a visitation by the Holy Spirit. Then one day, as my faith was beginning to waver, I wrote a song with an explicitly nonreligious theme. To my surprise I discovered that when I performed it, I was overcome by the same feelings, and it dawned on me that maybe what we had experienced all along was our own spirits, that what had called to us was the power of music itself.

In truth, I wasn't thinking through much at the time. Later, as a graduate student of philosophy, I did start to think a lot about science and ethics, and I began to undergo a parallel shift of outlook. Having embraced a thoroughly naturalistic, materialistic worldview, I wondered: If everything is just matter, how could anything really matter? How could values be among the objective furniture of the universe? It is not as if anyone expected physicists to discover, alongside electrons, protons, and neutrons, a new fundamental moral particle--the moron?--which would show up in high magnitudes whenever people did nice things.

Then there was J. L. Mackie's famous argument from "queerness": objective values would have to be such that merely coming to appreciate them would motivate you to pursue them. But given everything we know about how ordinary natural facts behave (they seem to ask nothing of us), how could there possibly be states of affairs with this strange to-be-pursuedness built into them, and how could we come to appreciate them?

At the same time, I was taken in by the promises found in some early sociobiology that a new evolutionary science of human nature would supplant empty talk about objective values. As Michael Ruse once put it, "morality is just an aid to survival and reproduction," and so "any deeper meaning is illusory." Niceness may seem self-evidently right to us, but things could easily have been the other way around, had nastiness paid off more often for our ancestors.

I have since been convinced that I was looking at all of this in the wrong way. Not only are values a part of nature; we couldn't avoid them if we tried.

There is no doubt that had we evolved differently, we would value different things. However, that alone does not show that values are subjective. After all, hearing is accomplished by psychological mechanisms that evolved under natural selection. But it does not follow that the things we hear are any less real. Rather, the reality of the things around us helps to explain why we have the faculty to detect them. The evolved can put us in touch with the objective.

In fact, we are all intimately familiar with entities which are such that to recognize them is to be moved by them. We call them reasons, where a reason is just a consideration that weighs in favor of an action or belief. As separate lines of research by psychologist Daniel Wegner and psychiatrist George Ainslie (as synthesized and interpreted by Daniel Dennett) strongly suggest, our reasons aren't all "in the head," and we cannot help but heed their call.

At some point in our evolution, the behavioral repertoire of our ancestors became complex enough to involve the review and evaluation of numerous possible courses of action and the formation of intentions on the basis of their projected outcomes. In a word, we got options. However, as an ultrasocial species, for whom survival and reproduction depended on close coordination of behaviors over time, we needed to manage these options in a way that could be communicated to our neighbors. That supervisor and communicator of our mental economy is the self, the more-or-less stable "I" that persists through time and feels like it is the author of action. After all, if you want to be able to make reliable threats or credible promises, you need to keep track of who you are, were, and will be. According to this perspective, reasons are a human organism's way of taking responsibility for some of the happenings in its body and environment. As such, they are inherently public and shareable. Reasons are biological adaptations, every bit as real as our hands, eyes, and ears.

I do not expect (and we do not need) a "science of good and evil." However, scientific evidence can show how it is that things matter objectively. I cannot doubt the power of reasons without presupposing the power of reasons (for doubting). That cannot be said for the power of the Holy Spirit.

p_z_myers's picture
Biologist; Associate Professor, University of Minnesota, Morris

I always change my mind about everything, and I never change my mind about anything.

That flexibility is intrinsic to being human — more, to being conscious. We are (or should be) constantly learning new things, absorbing new information, and reacting to new ideas, so of course we are changing our minds. In the most trivial sense, learning and memory involve a constant remodeling of the fine details of the brain, and the only time the circuitry will stop changing is when we're dead. And in a more profound sense, our major ideas change over time: my 5-year-old self, my 15-year-old self, and my 25-year-old self were very different people with different priorities and different understandings of the world around them than my current 50-year-old self. This is simply in the nature of our existence.

In the context of pursuing science, however, there is a substantive context in which we do not change our minds: we have a commitment to following the evidence wherever it leads. We have a kind of overriding metaphysic that says that we should set out to find data that will change our minds about a subject — every good research program has as its goal the execution of observations and experiments that will challenge our assumptions — and about that all-important foundation of the scientific enterprise I have never changed my mind, nor can I, without abandoning science altogether.

In my own personal intellectual history, I began my academic career with a focus on neuroscience; I shifted to developmental neurobiology; I later got caught up in developmental biology as a whole; I am now most interested in the confluence of evolution and development. Have I ever changed my mind? I don't think that I have, in any significant way — I have instead applied a consistent attitude towards a series of problems.

If I embark on a voyage of exploration, and I set as my goals the willingness to follow any lead, to pursue any interesting observation, to overcome any difficulties, and I end up in some unpredicted, exotic locale that might be very different from my predictions prior to setting out, have I changed my destination in any way? I would say not; the sine qua non of science is not the conclusions we reach but the process we use to arrive at them, and that is the unchanging pole star by which we navigate.

george_dyson's picture
Science Historian; Author, Analogia

Russians arrived on the western shores of North America after crossing their Eastern Ocean in 1741. After an initial period of exploration, they settled down for a full century until relinquishing their colonies to the United States. From 1799 to 1867, the colonies were governed by the Russian-American Company, a for-profit monopoly chartered under the deathbed instructions of Catherine the Great.

The Russian-American period has been treated unkindly by historians from both sides. Soviet-era accounts, though acknowledging the skill and courage of Russian adventurers, saw this Tsarist experiment at building a capitalist, American society as fundamentally flawed, casting the native Aleuts as exploited serfs. American accounts, glossing over our own subsequent exploitation of Alaska's indigenous population and natural resources, sought to emphasize that we liberated Alaska from Russian overseers who were worse, and would never be coming back.

Careful study of primary sources has convinced me that these interpretations are not supported by the facts. The Aleutian archipelago was a spectacularly rich environment with an unusually dense, thriving population whose physical and cultural well-being was devastated by contact with European invaders. But, as permanent colonists, the Russians were not so bad. The results were closer to the settlement of Greenland by Denmark than to our own settlement of the American West.

Although during the initial decades leading up to the consolidation of the Russian-American Company there was sporadic conflict (frequently disastrous to the poorly-armed and vastly-outnumbered Russians) with the native population, the colonies soon entered a relatively stable state based on cooperation, intermarriage, and official policies that provided social status, education, and professional training to children of mixed Aleut-Russian birth. Within a generation or two the day-to-day administration of the Russian-American colonies was largely in the hands of native-born Alaskans. As exemplified by the Russian adoption and adaptation of the Aleut kayak, or baidarka, many indigenous traditions and technologies (including sea otter hunting techniques, and the working of native copper deposits) were adopted by the new arrivals, reversing the usual trend in colonization, when indigenous technologies are replaced.

The Russians instituted public education, preservation of the Aleut language through transliteration of religious and other texts into Aleut via an adaptation of the Cyrillic alphabet, vaccination of the native population against smallpox, and science-based sea mammal conservation policies that were far ahead of their time. There were no such things as "reservations" for the native population in Russian America, and we owe as much to the Russians as to the Alaska Native Claims Settlement Act of 1971 that this remains true today.

The lack of support for the colonies by the home government (St. Petersburg was half a world away, and Empress Catherine's instructions a fading memory) eventually forced the sale to the United States, but also necessitated the resourcefulness and local autonomy that made the venture a success.

Russian America was a social and technological experiment that worked, until political compromises brought the experiment to a halt.

adam_bly's picture
Head of Advanced Analytics, Spotify

When I started Seed, I had a fairly strong aversion to technology. Somehow, sometime, science and technology had become science-and-technology. Two worlds, dissimilar in many respects, likely linked in speech for the practical goal of raising funding and attention for basic research by showing the direct, immediate correlation with usable things. I felt then that the 'and' made science more perfunctory and less romantic. And that this was a bad thing on multiple counts.

In the last year, I've come to see the relationship between science and technology very differently. We have reached the point, in physics and cosmology, neuroscience, and genetics at least, where technology is essential to advancement. Technology is not merely making the practice of science faster, less mundane or, as with microscopes, helping us see the otherwise unseeable; it is a distinct yet complementary landscape from which we can advance our knowledge of the natural world.

A physicist at CERN said to me recently that they likely wouldn't have built a new $8 billion collider if there were a better way of moving the field forward. The Blue Brain Project is using supercomputers to construct a mind because the neuroscientists involved believe it is the best way of attaining an overall understanding of the brain. Robots, I now appreciate, are not simply novelty items or tools of automation, but can be a way of gaining unique insight into humans. From simulation to supercomputing, technology is now (or at least I now see) one of science's very best friends (I could say the same for the arts). And the design, magnitude, and complexity of these technological feats satisfy my (and our) need for romance in our pursuit of truth.

The sum total of all information produced in 2008 will likely exceed the amount of information generated by humans over the past 40,000 years. Science is getting literally bigger, but as these and other major experiments churn out petabytes of data, how do we ensure that we are actually learning more? Visualization, and more generally a strong relationship between science and design, will be essential to deriving knowledge from all this information.

linda_stone's picture
Hi-Tech Industry Consultant; Former Executive at Apple Computer and Microsoft Corporation

In the past few years, I have been thinking and writing about "attention", and specifically, "continuous partial attention". The impetus came from my years of working at Apple, and then, Microsoft, where I thought a lot about user interface as well as our relationship to the tools we create.

I believe that attention is the most powerful tool of the human spirit and that we can enhance or augment our attention with practices like meditation and exercise, diffuse it with technologies like email and Blackberries, or alter it with pharmaceuticals. 

But lately I have observed that the way in which many of us interact with our personal technologies makes it impossible to use this extraordinary tool of attention to our advantage.

In observing others — in their offices, their homes, at cafes — I have found that the vast majority of people hold their breath, especially when they first begin responding to email. On cell phones, especially when talking and walking, people tend to hyper-ventilate or over-breathe. Either of these breathing patterns disturbs the oxygen and CO2 balance.

Research conducted by two NIH scientists, Margaret Chesney and David Anderson, demonstrates that breath holding can contribute significantly to stress-related diseases. The body becomes acidic, the kidneys begin to re-absorb sodium, and as the oxygen and CO2 balance is undermined, our biochemistry is thrown off.

Around this same time, I became very interested in the vagus nerve and the role it plays. The vagus nerve is one of the major cranial nerves, and wanders from the head to the neck, chest and abdomen. Its primary job is to mediate the autonomic nervous system, which includes the sympathetic ("fight or flight") and parasympathetic ("rest and digest") nervous systems.

The parasympathetic nervous system governs our sense of hunger and satiety, flow of saliva and digestive enzymes, the relaxation response, and many aspects of healthy organ function. Focusing on diaphragmatic breathing enables us to down-regulate the sympathetic nervous system, which then allows the parasympathetic nervous system to become dominant. Shallow breathing, breath holding and hyper-ventilating trigger the sympathetic nervous system in a "fight or flight" response.

The activated sympathetic nervous system causes the liver to dump glucose and cholesterol into our blood, our heart rate increases, we don't have a sense of satiety, and our bodies anticipate and resource for the physical activity that, historically, accompanied a physical fight or flight response.  Meanwhile, when the only physical activity is sitting and  responding to email, we're sort of "all dressed up with nowhere to go."    

Some breathing patterns favor our body's move toward parasympathetic functions and other breathing  patterns favor a sympathetic nervous system response.  Buteyko (breathing techniques developed by a Russian M.D.), Andy Weil's breathing exercises, diaphragmatic breathing, certain yoga breathing techniques, all have the potential to soothe us, and to help our bodies differentiate when fight or flight is really necessary and when we can rest and digest. 

I've changed my mind about how much attention to pay to my breathing patterns and how important it is to remember to breathe when I'm using a computer, PDA or cell phone. 

I've discovered that the more consistently I tune in to healthy breathing patterns, the clearer it is to me when I'm hungry or not, the more easily I fall asleep and rest peacefully at night, and the more my outlook is consistently positive. 

I've come to believe that, within the next 5-7 years, breathing exercises will be a significant part of any fitness regime.

Columnist, Slate

Two years ago I watched the Dalai Lama address thousands of laboratory biologists at the Society for Neuroscience meeting in Washington, D.C. At the end of his speech, someone asked about the use of animals in lab research: "That's difficult," replied His Holiness. "Always stress the importance of compassion ... In highly necessary experiments, try to minimize pain."

The first two words of his answer provided most of the moral insight.
Universities already have cumbersome animal research protocols in place to eliminate unnecessary suffering, and few lab workers would do anything but try to minimize pain.

When I first entered graduate school, this Western-cum-Buddhist policy seemed like a neat compromise between protecting animals and supporting the advance of knowledge. But after I'd spent several years cutting up mice, birds, kittens, and monkeys, my mind was changed.

Not because I was any less dedicated to the notion of animal research--I still believe it's necessary to sacrifice living things in the name of scientific progress. But I saw how institutional safeguards served to offload the moral burden from the researchers themselves.

Rank-and-file biologists are rarely asked to consider the key ethical questions on which these policies are based. True, the NIH has for almost 20 years required that graduate training institutions offer a course in responsible research conduct. But in the class I took, we received PR advice rather than moral guidance: What's the best way to keep your animal research out of the public eye?

In practice, I found that scientists were far from monolithic in their attitudes towards animal work. (Drosophila researchers had misgivings about the lab across the hall, where technicians perfused the still-beating hearts of mice with chemical fixative; mouse researchers didn't want to implant titanium posts in the skulls of water-starved monkeys.) They weren't animal rights zealots, of course--they had nothing but contempt for the PETA protestors who passed out fliers in front of the lab buildings. But they did have real misgivings about the extent to which biology research might go in its exploitation of living things.

At the same time, very few of us took the time to consider whether or how we might sacrifice fewer animals (or no animals at all). Why bother, when the Institutional Animal Care and Use Committee had already signed off on the research? The hard part of this work isn't convincing an IACUC board to sanction the killing. It's making sure you've exhausted every possible alternative.

susan_blackmore's picture
Psychologist; Visiting Professor, University of Plymouth; Author, Consciousness: An Introduction

Imagine me, if you will, in the Oxford of 1970; a new undergraduate, thrilled by the intellectual atmosphere, the hippy clothes, joss-stick filled rooms, late nights, early morning lectures, and mind-opening cannabis. 

I joined the Society for Psychical Research and became fascinated with occultism, mediumship and the paranormal — ideas that clashed tantalisingly with the physiology and psychology I was studying. Then late one night something very strange happened. I was sitting around with friends, smoking, listening to music, and enjoying the vivid imagery of rushing down a dark tunnel towards a bright light, when my friend spoke. I couldn't reply.

"Where are you Sue?" he asked, and suddenly I seemed to be on the ceiling looking down.

"Astral projection!" I thought and then I (or some imagined flying "I") set off across Oxford, over the country, and way beyond. For more than two hours I fell through strange scenes and mystical states, losing space and time, and ultimately my self. It was an extraordinary and life-changing experience. Everything seemed brighter, more real, and more meaningful than anything in ordinary life, and I longed to understand it.

But I jumped to all the wrong conclusions. Perhaps understandably, I assumed that my spirit had left my body and that this proved all manner of things — life after death, telepathy, clairvoyance, and much, much more. I decided, with splendid, youthful over-confidence, to become a parapsychologist and prove all my closed-minded science lecturers wrong. I found a PhD place, funded myself by teaching, and began to test my memory theory of ESP. And this is where my change of mind — and heart, and everything else — came about.

I did the experiments. I tested telepathy, precognition, and clairvoyance; I got only chance results. I trained fellow students in imagery techniques and tested them again; chance results. I tested twins in pairs; chance results. I worked in play groups and nursery schools with very young children (their naturally telepathic minds are not yet warped by education, you see); chance results. I trained as a Tarot reader and tested the readings; chance results.

Occasionally I got a significant result. Oh the excitement! I responded as I think any scientist should, by checking for errors, recalculating the statistics, and repeating the experiments. But every time I either found the error responsible, or failed to repeat the results. When my enthusiasm waned, or I began to doubt my original beliefs, there was always another corner to turn — always someone saying "But you must try xxx". It was probably three or four years before I ran out of xxxs.
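The arithmetic behind those occasional hits is worth spelling out: run enough experiments at the conventional p < 0.05 threshold and chance alone guarantees a steady trickle of "significant" results. A minimal simulation sketch in Python (the experiment sizes here are invented for illustration, not Blackmore's actual protocols):

```python
import math
import random

random.seed(42)

def one_sided_p(hits, trials, p_chance=0.5):
    """Exact binomial tail: P(X >= hits) under pure guessing."""
    return sum(math.comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
               for k in range(hits, trials + 1))

# 100 independent "ESP" experiments, each 50 two-choice trials,
# with no telepathy at work: every subject guesses at 50%.
significant = 0
for _ in range(100):
    hits = sum(random.random() < 0.5 for _ in range(50))
    if one_sided_p(hits, 50) < 0.05:
        significant += 1

print(f"Experiments 'significant' at p < 0.05: {significant} of 100")
```

Even with no real effect anywhere, a handful of the hundred runs clears the threshold, which is why checking for errors and, above all, repeating the experiment is the right response to an isolated positive result.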

I remember the very moment when something snapped (or should I say "I seem to …" in case it's a false flash-bulb memory). I was lying in the bath trying to fit my latest null results into paranormal theory, when it occurred to me for the very first time that I might have been completely wrong, and my tutors right. Perhaps there were no paranormal phenomena at all.

As far as I can remember, this scary thought took some time to sink in. I did more experiments, and got more chance results. Parapsychologists called me a "psi-inhibitory experimenter", meaning that I didn't get paranormal results because I didn't believe strongly enough. I studied other people's results and found more errors and even outright fraud. By the time my PhD was completed, I had become a sceptic.

Until then, my whole identity had been bound up with the paranormal. I had shunned a sensible PhD place, and ruined my chances of a career in academia (as my tutor at Oxford liked to say). I had hunted ghosts and poltergeists, trained as a witch, attended spiritualist churches, and stared into crystal balls. But all of that had to go.

Once the decision was made it was actually quite easy. Like many big changes in life this one was terrifying in prospect but easy in retrospect. I soon became "rentasceptic", appearing on TV shows to explain how the illusions work, why there is no telepathy, and how to explain near-death experiences by events in the brain.

What remains now is a kind of openness to evidence. However firmly I believe in some theory (on consciousness, memes or whatever); however closely I might be identified with some position or claim, I know that the world won't fall apart if I have to change my mind.

juan_enriquez's picture
Managing Director, Excel Venture Management; Co-author (with Steve Gullans), Evolving Ourselves

Having grown up in Mexico, it took me a long time to understand what true power is and where it comes from. You see, I had great role models; perhaps that was part of the problem.  Throughout the developing world, as in Lansing and Iowa, there are many smart, tough, hardworking folk who still think power resides in today’s politics, unions, punditry, faith, painting, poetry, literature, agriculture, manufacturing, or architecture. Each sphere can immortalize or destroy individuals or regions. But, long term, to paraphrase Darwin, the only way for a country to survive and thrive is to adapt and adopt. 

The problem is that there is a winner’s bias. To paraphrase a winemaker, tradition is an experiment that worked. A religion originally thrives because adherents find it improves their lives. Sometimes faith is a way to cope with the horror of the present and to improve health and survival. (Might there be a regional basis for the coincidences in Kosher and Halal restrictions?) But most religions, and most countries, forget they became powerful by continuously experimenting, learning, tweaking, improving. They begin to ossify myths and traditions. As others grow and thrive, they begin to fear change. They celebrate the past, becoming more nativist and xenophobic.

The rest of the world does not wait. It keeps gathering data. It keeps changing minds and methods. Successful religions and countries evolve new testaments and beliefs from the foundations of the old. The alternative is to merge, fragment, become irrelevant, go extinct. Museum basements are full of statues of once all-powerful emperors and gods that demanded, and got, blood sacrifices.

Who is truly powerful over the long term? Those running most of the successful experiments.  And nowhere does this happen faster, more effectively, and more often today than in science-related endeavors. True power flows primarily from science and knowledge.

As we double the amount of data generated by our species over the course of the next five years, universities and science-driven businesses are the key drivers of new experiments. You can find many examples of fast-growing countries that are rich and poor, North and South, communist, capitalist, and socialist. But all have a core of great science schools and tech entrepreneurs. Meanwhile much of Latin America has few Silicon Valley wannabes. Start-ups are rare. Serial entrepreneurs sell cheap Chinese imports on sidewalks instead of dreaming up IPOs. Scientists often earn less than accountants. Real growth has been absent for decades.

Few governments understand how quickly they must change, adopt, teach, and celebrate the new. (Never mind religions.) It is no coincidence that some of the fastest growing regions today were either isolated or largely irrelevant a few decades ago. Singapore, Ireland, China, India and Korea were considered basket cases. But those with little to lose sometimes risk a new strategy.

Who eventually survives will be largely driven by understanding and applying digital and life code, by creating robots and nanomaterials, by working with inconceivably large data sets and upgrading our brains. Meanwhile many U.S. leaders proudly proclaim no belief in evolution and little knowledge of science. They reflect a core of scared voters experiencing massive disruption and declining wages; that core fears elite education, science, immigrants, open borders, and above all rapid change. As income and knowledge gaps widen, many fall further and further behind; many grow to hate an open, knowledge-driven economy. Change is rejected, blocked, vilified.

It took me a long time to shift focus from the politics, art, literature, and concerns of today towards the applied science of tomorrow. But had I not done that, I would have found it much harder to understand which countries are likely to succeed and which could disappear. And the disappearance and fragmentation of whole nations is an ever more common phenomenon. Without the ability to adapt and adopt in the face of science-driven change, no matter what type of government, geography, ethnicity, or historic tradition you have, you will find that power devolves… even in the most powerful of empires.

daniel_c_dennett's picture
Philosopher; Austin B. Fletcher Professor of Philosophy, Co-Director, Center for Cognitive Studies, Tufts University; Author, From Bacteria to Bach and Back

I've changed my mind about how to handle the homunculus temptation: the almost irresistible urge to install a "little man in the brain" to be the Boss, the Central Meaner, the Enjoyer of pleasures and the Sufferer of pains. In Brainstorms (1978) I described and defended the classic GOFAI (Good Old Fashioned AI) strategy that came to be known as "homuncular functionalism," replacing the little man with a committee.

The AI programmer begins with an intentionally characterized problem, and thus frankly views the computer anthropomorphically: if he solves the problem he will say he has designed a computer that can [e.g.,] understand questions in English. His first and highest level of design breaks the computer down into subsystems, each of which is given intentionally characterized tasks; he composes a flow chart of evaluators, rememberers, discriminators, overseers and the like. These are homunculi with a vengeance. . . . Each homunculus in turn is analyzed into smaller homunculi, but, more important, into less clever homunculi. When the level is reached where the homunculi are no more than adders and subtractors, by the time they need only the intelligence to pick the larger of two numbers when directed to, they have been reduced to functionaries "who can be replaced by a machine." (p. 80)

I still think that this is basically right, but I have recently come to regret–and reject–some of the connotations of two of the terms I used: committee and machine. The cooperative bureaucracy suggested by the former, with its clear reporting relationships (an image enhanced by the no-nonsense flow charts of classical cognitive science models) was fine for the sorts of computer hardware–and also the levels of software, the virtual machines–that embodied GOFAI, but it suggested a sort of efficiency that was profoundly unbiological. And while I am still happy to insist that an individual neuron, like those adders and subtractors in the silicon computer, "can be replaced by a machine," neurons are bio-machines profoundly unlike computer components in several regards.

Notice that computers have been designed to keep needs and job performance almost entirely independent. Down in the hardware, the electric power is doled out evenhandedly and abundantly; no circuit risks starving. At the software level, a benevolent scheduler doles out machine cycles to whatever process has highest priority, and although there may be a bidding mechanism of one sort or another that determines which processes get priority, this is an orderly queue, not a struggle for life. (As Marx would have it, "from each according to his abilities, to each according to his needs.") It is a dim appreciation of this fact that probably underlies the common folk intuition that a computer could never "care" about anything. Not because it is made out of the wrong materials — why should silicon be any less suitable a substrate for caring than organic molecules? — but because its internal economy has no built-in risks or opportunities, so it doesn't have to care.

Neurons, I have come to believe, are not like this. My mistake was that I had stopped the finite regress of homunculi at least one step too early! The general run of the cells that compose our bodies are probably just willing slaves–rather like the selfless, sterile worker ants in a colony, doing stereotypic jobs and living out their lives in a relatively non-competitive ("Marxist") environment. But brain cells — I now think — must compete vigorously in a marketplace. For what?

What could a neuron "want"? The energy and raw materials it needs to thrive–just like its unicellular eukaryote ancestors and more distant cousins, the bacteria and archaea. Neurons are robots; they are certainly not conscious in any rich sense–remember, they are eukaryotic cells, akin to yeast cells or fungi. If individual neurons are conscious then so is athlete’s foot. But neurons are, like these mindless but intentional cousins, highly competent agents in a life-or-death struggle, not in the environment between your toes, but in the demanding environment of the brain, where the victories go to those cells that can network more effectively, contribute to more influential trends at the virtual machine levels where large-scale human purposes and urges are discernible.

I now think, then, that the opponent-process dynamics of emotions, and the roles they play in controlling our minds, are underpinned by an "economy" of neurochemistry that harnesses the competitive talents of individual neurons. (Note that the idea is that neurons are still good team players within the larger economy, unlike the more radically selfish cancer cells. Recalling Francois Jacob’s dictum that the dream of every cell is to become two cells, neurons vie to stay active and to be influential, but do not dream of multiplying.)

Intelligent control of an animal’s behavior is still a computational process, but the neurons are "selfish neurons," as Sebastian Seung has said, striving to maximize their intake of the different currencies of reward we have found in the brain. And what do neurons "buy" with their dopamine, their serotonin or oxytocin, etc.? Greater influence in the networks in which they participate.

stanislas_dehaene's picture
Neuroscientist; Collège de France, Paris; Author, How We Learn

What made me change my mind isn't a new fact, but a new theory.

Although much of my work is dedicated to modelling the brain, I always thought that this enterprise would remain rather limited in scope. Unlike physics, neuroscience would never create a single, major, simple yet encompassing theory of how the brain works. There would never be a single "Schrödinger's equation for the brain".

The vast majority of neuroscientists, I believe, share this pessimistic view. The reason is simple: the brain is the outcome of five hundred million years of tinkering. It consists of millions of distinct pieces, each evolved to solve a distinct problem important for our survival. Its overall properties result from an unlikely combination of thousands of receptor types, ad-hoc molecular mechanisms, a great variety of categories of neurons and, above all, a million billion connections criss-crossing the white matter in all directions. How could such a jumble be captured by a single mathematical law?

Well, I wouldn't claim that anyone has achieved that yet… but I have changed my mind about the very possibility that such a law might exist.

For many theoretical neuroscientists, it all started twenty-five years ago, when John Hopfield made us realize that a network of neurons could operate as an attractor network, driven to optimize an overall energy function which could be designed to accomplish object recognition or memory completion. Then came Geoff Hinton's Boltzmann machine — again, the brain was seen as an optimizing machine that could solve complex probabilistic inferences. Yet both proposals were frameworks rather than laws. Each individual network realization still required the set-up of thousands of ad-hoc connection weights.

Very recently, however, Karl Friston, from UCL in London, has presented two extraordinarily ambitious and demanding papers in which he proposes "a theory of cortical responses".  Friston's theory rests on a single, amazingly compact premise: the brain optimizes a free energy function. This function measures how closely the brain's internal representation of the world approximates the true state of the real world. From this simple postulate, Friston spins off an enormous variety of predictions: the multiple layers of cortex, the hierarchical organization of cortical areas, their reciprocal connection with distinct feedforward and feedback properties, the existence of adaptation and repetition suppression… even the type of learning rule — Hebb's rule, or the more sophisticated spike-timing dependent plasticity — can be deduced, no longer postulated, from this single overarching law.

The theory fits easily within what has become a major area of research — the Bayesian Brain, or the extent to which brains perform optimal inferences and take optimal decisions based on the rules of probabilistic logic. Alex Pouget, for instance, recently showed how neurons might encode probability distributions of parameters of the outside world, a mechanism that could be usefully harnessed by Fristonian optimization. And the physiologist Mike Shadlen has discovered that some neurons closely approximate the log-likelihood ratio in favor of a motor decision, a key element of Bayesian decision making. My colleagues and I have shown that the resulting random-walk decision process nicely accounts for the duration of a central decision stage, present in all human cognitive tasks, which might correspond to the slow, serial phase in which we consciously commit to a single decision. During non-conscious processing, my proposal is that we also perform Bayesian accumulation of evidence, but without attaining the final commitment stage. Thus, Bayesian theory is bringing us increasingly closer to the holy grail of neuroscience — a theory of consciousness.
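The accumulation process described above can be sketched as a toy sequential test in Python: a textbook log-likelihood-ratio random walk to a decision bound (sequential probability ratio test), not a model of any actual neuron. The coin probabilities and the bound are illustrative assumptions.

```python
import math

def accumulate_to_bound(p_h1, p_h0, samples, bound=3.0):
    """Toy sequential decision: sum log-likelihood ratios over incoming
    observations until the total crosses +bound (decide H1) or -bound
    (decide H0). Returns (decision, number of samples consumed)."""
    total = 0.0
    for n, x in enumerate(samples, start=1):
        total += math.log(p_h1(x) / p_h0(x))   # evidence from one observation
        if total >= bound:
            return "H1", n
        if total <= -bound:
            return "H0", n
    return "undecided", len(samples)

# Biased-coin example: under H1 heads have probability 0.7, under H0 only 0.3.
heads = [1] * 10
decision, n = accumulate_to_bound(lambda x: 0.7 if x else 0.3,
                                  lambda x: 0.3 if x else 0.7,
                                  heads)
print(decision, n)   # -> H1 4  (each head adds log(0.7/0.3) ≈ 0.85)
```

The duration of the "central decision stage" in such a model is simply the number of samples needed to reach the bound, which is why noisier evidence, or a higher bound, lengthens the decision.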

Another reason why I am excited about Friston's law is, paradoxically, that it isn't simple. It seems to have just the right level of distance from the raw facts. Much like Schrödinger's equation cannot easily be turned into specific predictions, even for an object as simple as a single hydrogen atom, Friston's theory requires heavy mathematical derivations before it ultimately provides useful outcomes. Not that it is inapplicable. On the contrary, it readily applies to motion perception, audio-visual integration, mirror neurons, and thousands of other domains — but in each case, a rather involved calculation is needed.

It will take us years to decide whether Friston's theory is the true inheritor of Helmholtz's view of "perception as inference". What is certain, however, is that neuroscience now has a wealth of beautiful theories that should attract the attention of top-notch mathematicians — we will need them!

roger_highfield's picture
Director, External Affairs, Science Museum Group; Co-author (with Martin Nowak), SuperCooperators

I am a heretic. I have come to question the key assumption behind this survey: "When facts change your mind, that's science." This idea that science is an objective fact-driven pursuit is laudable, seductive and - alas - a mirage.

Science is a never-ending dialogue between theorists and experimenters. But people are central to that dialogue. And people ignore facts. They distort them or select the ones that suit their cause, depending on how they interpret their meaning. Or they don't ask the right questions to obtain the relevant facts.

Contrary to the myth of the ugly fact that can topple a beautiful theory - and against the grain of our lofty expectations - scientists sometimes play fast and loose with data, highlighting what suits them and ignoring what doesn't.

The harsh spotlight of the media often encourages them to strike a confident pose even when the facts don't warrant one. I am often struck by how facts are ignored, insufficient or even abused. I back well-designed animal research but am puzzled by how scientists choose to ignore the basic fact that vivisection is so inefficient at generating cures for human disease. Intelligent design is for cretins but, despite the endless proselytizing about the success of Darwin - assuming that evolution is a fact - I could still see it being superseded, rather as Einstein's ideas replaced Newton's law of gravity. I believe in man-made global warming but computer-projected facts that purportedly say what is in store for the Earth in the next century leave me cold.

I support embryo research but was irritated by one oft-cited fact in the recent British debate on the manufacture of animal-human hybrid embryos: "only a tiny fraction" of the hybrid made by the Dolly cloning method (nuclear transfer) contains animal DNA. Given that this DNA features in the mitochondria, which are central to a range of diseases; given that a single spelling mistake in DNA can be catastrophic; and given that no one really understands what nuclear transfer does, this "fact" was propaganda.

Some of the most exotic and prestigious parts of modern science are unfettered by facts. I have recently written about whether our very ability to study the heavens may have shortened the inferred lifetime of the cosmos, whether there are two dimensions of time, even the prospect that time itself could cease to be in billions of years. The field of cosmology is in desperate need of more facts, as highlighted by the aphorisms made at its expense ("There is speculation, pure speculation and cosmology"; "cosmologists are often in error, never in doubt").

Scientists have to make judgements about the merits of new facts. Ignoring them in the light of strong intuition is the mark of a great scientist. Take Einstein, for example: when Kaufmann claimed to have experimental facts that refuted special relativity, he stuck to his guns and was proved right. Equally, Einstein's intuition misled him in his last three decades, when he pursued a fruitless quest for a unified field theory that was not helped by his lack of interest in novel facts - the new theoretical ideas, particles and interactions that had emerged at this time.

When it comes to work in progress, in particular, many scientists treat science like a religion - the facts should be made to fit the creed. However, facts are necessary for science but not sufficient. Science is when, in the face of extreme scepticism, enough facts accrue to change lots of minds.

Our rising and now excessive faith in facts alone can be seen in a change in the translation of the motto of the world's oldest academy of science, the Royal Society. Nullius in Verba was once taken as 'on the word of no one' to highlight the extraordinary power that empirical evidence bestowed upon science. The message was that experimental evidence trumped personal authority.

Today the Society talks of the need to 'verify all statements by an appeal to facts determined by experiment'. But whose facts? Was it a well-designed experiment? And are we getting all the relevant facts? The Society should adopt the snappier version that captures its original spirit: 'Take nobody's word for it'.

david_m_buss's picture
Professor of Psychology, University of Texas, Austin; Author, When Men Behave Badly

I have never thought that female sexual psychology was simple.  But I've changed my mind about the magnitude of its complexity and consequently revamped the scope and orchestration of my entire research program.  I once focused my research on two primary sexual strategies — long-term and short-term.  Empirical work has revealed a deeper, richer repertoire: serial mating, friends with benefits, one-night stands, brief affairs, enduring affairs, polyamory, polyandry, sexual mate poaching, mate expulsion, mate switching, and various combinations of these throughout life.  Women implement their sexual strategies through an astonishing array of tactics.  Scientists have documented at least 34 distinct tactics for promoting short-term sexual encounters and nearly double that for attracting a long-term romantic partner.  

Researchers discovered 28 tactics women use to derogate sexual competitors, from pointing out that her rival's thighs are heavy to telling others that the rival has a sexually transmitted disease.  Women's sexual strategies include at least 19 tactics of mate retention, ranging from vigilance to violence, and 29 tactics of ridding themselves of unwanted mates, including having sex as a way to say good-bye.  Some women use sexual infidelity as a means of getting benefits from two or more men.  Others use it as a means of exiting one relationship in order to enter another.  When a woman wants a man who is already in a relationship, she can use at least 19 tactics of mate poaching to lure him away, from befriending both members of the couple in order to disarm her unsuspecting rival to insidiously sowing seeds of doubt about her rival's fidelity or level of desirability. 

Ovulation and orgasm are yielding scientific insights into female sexuality unimagined five years ago.  The hidden rhythms of the ovulation cycle, for example, have profound effects on women's sexual desire. Women married to men lower in mate value experience an upsurge in sexual fantasies about other men, but mainly during the fertile phase of their cycle.  They are sexually attracted to men with masculine faces, but especially so in the five days leading up to ovulation.  Women's sense of smell spikes around ovulation.  Sexual scents, long thought unimportant in human sexuality, in fact convey information to women about a man's genetic quality.  The female orgasm, once thought by many scientists to be functionless, may turn out to have several distinct adaptive benefits.  And those don't even include the potential gains from faking orgasm.  Some women mislead about their sexual satisfaction in order to get a man to leave; others to deceive him about his paternity in "his" child.  

Female sexual psychology touches every facet of human affairs, from cooperative alliances through strategies of hierarchy negotiation.  Some women use sex to get along.  Some use sex to get ahead.  Sexual motives pervade murder.  Failure in sexual unions sometimes triggers suicidal ideation.  I thought the complexity of women's sexual psychology was finally starting to be captured when recent research revealed 237 reasons why women have sex, ranging from "to get rid of a headache" to "to get closer to God," from "to become emotionally connected with my partner" to "to break up a rival's relationship."  Within a month of that publication, however, researchers discovered another 44 reasons why women have sex ranging from "because life is short and we could die at any moment" to "to get my boyfriend to shut up," bringing the sexual motivation total to 281 and still counting (obviously, trying to pin down exact numbers is a bit of a joke, but scientists work through quantification).

Yet with all these scientific discoveries, I feel that we are still at the beginning of the exploration and humbled by how little we still know.  As a researcher focusing on female sexuality, I'm inherently limited by virtue of possessing a male brain.  Consequently, I've teamed up with brilliant female research scientists, recruited a team of talented female graduate students, and marshaled much of my research to explore the complexities of female sexual psychology.  They have led me to see things previously invisible to my male-blinkered brain.  Female sexual psychology is more complex than I previously thought by several orders of magnitude.  And still I may be underestimating.

rebecca_newberger_goldstein's picture
Philosopher, Novelist; Recipient, 2014 National Humanities Medal; Author, Plato at the Googleplex; 36 Arguments for the Existence of God: A Work of Fiction

Edge’s question this year wittily refers to a way of demarcating science from philosophy and religion.  “When thinking changes your mind, that’s philosophy . . . .  When facts change your mind, that’s science.” Behind the witticism lies the important claim that science—or more precisely, scientific theories—can be clearly distinguished from all other theories, that scientific theories bear a special mark, and what this mark is is falsifiability. Said Popper:  The criterion of the scientific status of a theory is its falsifiability.  

For most scientists, this is all they need to know about the philosophy of science. It was bracing to come upon such a clear and precise criterion for identifying  scientific theories. And it was gratifying to see how Popper used it to discredit the claims that  psychoanalysis and Marxism are scientific theories. It had long seemed to me that the falsifiability test was basically right and enormously useful.

But then I started to read Popper’s work carefully, to teach him in my philosophy of science classes, and to look to scientific practice to see whether his theory survives the test of falsifiability (at least as a description of how successful science gets done). And I’ve changed my mind.

For one thing, Popper’s characterization of how science is practiced—as a cycle of conjecture and refutation—bears little relation to what goes on in the labs and journals. He describes science as if it were skeet-shooting, as if the only goal of science is to prove that one theory after another is false. But just open a copy of Science.  To pick a random example: “In a probabilistic learning task, A1-allele carriers with reduced dopamine D2 receptor densities learned to avoid actions with negative consequences less efficiently.” Not, “We tried to falsify the hypothesis that A1 carriers are less efficient learners, and failed.” Scientists rarely write the way that Popper says they should, and a good Popperian should recognize that the Master may have over-simplified the logic of theory testing.

Also, scientists don’t, and shouldn’t, jettison a theory as soon as a disconfirming datum comes in. As Francis Crick once said, “Any theory that can account for all of the facts is wrong, because some of the facts are always wrong.” Scientists rightly question a datum that appears to falsify an elegant and well-supported theory, and they rightly add assumptions and qualifications and complications to a theory as they learn more about the world. As Imre Lakatos, a less-cited (but more subtle) philosopher of science, points out, all scientific theories are unfalsifiable. The ones we take seriously are those that lead to “progressive” research programs, where a small change accommodates a large swath of past and future data. And the ones we abandon are those that lead to “degenerate” ones, where the theory gets patched and re-patched at the same rate as new facts come in.

Another problem with the falsifiability criterion is that I have seen it  become a blunt instrument, unthinkingly applied. Popper tried to use it to discredit not only Marxism and Freudianism as scientific theories but also Darwin’s theory of natural selection—a position that only a creationist could hold today. I have seen scientists claim that major theories in contemporary cosmology and physics are not “science” because they can’t think of a simple test that would falsify them. You’d think that when they are faced with a conflict between what scientists really do and their memorized Popperian sound-bite about how science ought to be done, they might question the sound bite, and go back and learn more than a single sentence from the philosophy of science. But such is the godlike authority of Popper that his is the one theory that can never be falsified!

Finally, I’ve come to think that identifying scientificality with falsifiability lets certain non-scientific theories off the hook, by saying that we should try to find good reasons to believe whether a theory is true or false only when that theory is called “science.” It allows believers to protect their pet theories by saying that they can’t be, and shouldn’t be, subject to falsification, just because they’re clearly not scientific theories. Take the theory that there’s an omnipotent, omniscient, beneficent God. It may not be a scientific hypothesis, but it seems to me to be eminently falsifiable; in fact, it seems to have been amply falsified.   But because falsifiability is seen as demarcating the scientific, and since theism is so clearly not scientific, believers in religious ideologies get a free pass. The same is true for many political ideologies. The parity between scientific and nonscientific ideas is concealed by thinking that there’s a simple test that distinguishes science from nonscience, and that that test is falsifiability.

nicholas_a_christakis's picture
Sterling Professor of Social and Natural Science, Yale University; Co-author, Connected: The Surprising Power of Our Social Networks and How They Shape Our Lives

I work in a borderland between social science and medicine, and I therefore often find myself trying to reconcile conflicting facts and perspectives about human biology and behavior.  There are fellow travelers at this border, of course, heading in both directions, or just dawdling, but the border is both sparsely populated and chaotic.  The border is also, strangely, well patrolled, and it is often quite hard to get authorities on both sides to coordinate activities.  Once in a while, however, I find that my passport (never quite in order, according to officials) has acquired a new visa.  For me, this past year, I acquired the conviction that human evolution may proceed much faster than I had thought, and that humans themselves may be responsible. 

In short, I have changed my mind about how people come literally to embody the social world around them.  I once thought that we internalized cultural factors by forming memories, acquiring language, or bearing emotional and physical marks (of poverty, of conquest).  I thought that this was the limit of the ways in which our bodies were shaped by our social environment.  In particular, I thought that our genes were historically immutable, and that it was not possible to imagine a conversation between culture and genetics.  I thought that we as a species evolved over time frames far too long to be influenced by human actions. 

I now think this is wrong, and that the alternative — that we are evolving in real time, under the pressure of discernable social and historical forces — is true.  Rather than a monologue of genetics, or a soliloquy of culture, there is a dialectic between genetics and culture.

Evidence has been mounting for a decade. The best example so far is the evolution of lactose tolerance in adults.  The ability of adults to digest lactose (a sugar in milk) confers evolutionary advantages only when a stable supply of milk is available, such as after milk-producing animals (sheep, cattle, goats) have been domesticated.  The advantages are several, ranging from a source of valuable calories to a source of necessary hydration during times of water shortage or spoilage.  Amazingly, over just the last three to nine thousand years, there have been several adaptive mutations in widely separated populations in Africa and Europe, all conferring the ability to digest lactose (as shown by Sarah Tishkoff and others).  These mutations are principally seen in populations who are herders, and not in nearby populations who have retained a hunter/gatherer lifestyle. This trait is sufficiently advantageous that those who carry it have markedly more descendants than those who do not.
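How quickly such an advantage can spread is easy to see with a standard one-locus selection recursion, sketched here in Python. This is a deterministic toy model, not fitted to any of Tishkoff's data: it assumes the favored allele is dominant and grants an illustrative 5% fitness advantage.

```python
def simulate_selection(p0=0.01, s=0.05, generations=300):
    """Track the frequency p of a dominant, selectively favored allele A.
    Genotype fitnesses: AA and Aa = 1 + s; aa = 1. Returns the full
    trajectory of p, one value per generation."""
    p = p0
    history = [p]
    for _ in range(generations):
        q = 1 - p
        # population mean fitness
        w_bar = (p * p + 2 * p * q) * (1 + s) + q * q * 1.0
        # frequency of A-bearing gametes after selection
        p = ((p * p + p * q) * (1 + s)) / w_bar
        history.append(p)
    return history

hist = simulate_selection()
# a 1% allele with a modest advantage climbs toward fixation in a few
# hundred generations -- i.e., a few thousand years of human history
print(round(hist[0], 3), round(hist[-1], 3))
```

The point of the arithmetic is the timescale: even a small, sustained fitness difference moves an allele from rare to common within the historical window the lactase studies describe.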

A similar story can be told about mutations that have arisen in the relatively recent historical past that confer advantages in terms of surviving epidemic diseases such as typhoid.  Since these diseases were made more likely when the density of human settlements increased and far-flung trade became possible, here we have another example of how culture may affect our genes.

But this past year, a paper by John Hawks and colleagues in PNAS functioned like the staccato plunk of a customs agent stamping my documents and waving me on.  The paper showed that the human genome may have been changing at an accelerating rate over the past 80,000 years, and that this change may be in response not only to population growth and adaptation to new environments, but also to cultural developments that have made it possible for humans to sustain such population growth or survive in such environments.

Our biology and our culture have always been in conversation of course — just not (I had thought) on the genetic level.  For example, rising socio-economic status with industrial development results in people becoming taller (a biological effect of a cultural development) and taller people require architecture to change (a cultural effect of a biological development).  Anyone marveling at the small size of beds in colonial-era houses knows this firsthand.  Similarly, an epidemic may induce large-scale social changes, modifying kinship systems or political power.  But genetic change over short time periods?  Yes.

Why does this matter?  Because it is hard to know where this would stop.  There may be genetic variants that favor survival in cities, that favor saving for retirement, that favor consumption of alcohol, or that favor a preference for complicated social networks.  There may be genetic variants (based on altruistic genes that are a part of our hominid heritage) that favor living in a democratic society, others that favor living among computers, still others that favor certain kinds of visual perception (maybe we are all more myopic as a result of Medieval lens grinders).  Modern cultural forms may favor some traits over others.  Maybe even the more complex world we live in nowadays really is making us smarter. 

This has been very difficult for me to accept because, unfortunately, this also means that it may be the case that particular ways of living create advantages for some, but not all, members of our species.  Certain groups may acquire (admittedly, over centuries) certain advantages, and there might be positive or negative feedback loops between genetics and culture.  Maybe some of us really are better able to cope with modernity than others.  The idea that what we choose to do with our world modifies what kind of offspring we have is as amazing as it is troubling.

mary_catherine_bateson's picture
Professor Emerita, George Mason University; Visiting Scholar, Sloan Center on Aging & Work, Boston College; Author, Composing a Further Life

We do not so much change our minds about facts, although we necessarily correct and rearrange them in changing contexts. But we do change our minds about the significance of those facts.

I can remember, as a young woman, first grasping the danger of environmental destruction at a conference in 1968.  The context was the intricate interconnection within all living systems, a concept that applied to ecosystems like forests and tide pools and equally well to human communities and to the planet as a whole, the sense of an extraordinary interweaving of life, beautiful and fragile, and threatened by human hubris. It was at that conference also that I first heard of the greenhouse effect, the mechanism that underlies global warming. A few years later, however, I heard of the Gaia Hypothesis (proposed by James Lovelock in 1970), which proposed that the same systemic interconnectivity gives the planet its resilience and a capacity for self-correction that might survive human tampering. Some environmentalists welcomed the Gaia hypothesis, while others warned that it might lead to complacency in the face of real and present danger. With each passing year, our knowledge of how things are connected is enriched but the significance of these observations is still debated.

J.B.S. Haldane was asked once what the natural world suggested about the mind of its Creator, and he replied "an inordinate fondness for beetles." This observation also plays differently for different listeners—a delight in diversity, perhaps, as if the Creator might have spent the first sabbath afternoon resting from his work by playfully exploring the possible ramifications of a single idea (beetles make up roughly one fifth of all known species on the planet, some 350,000 of them)—or a humbling (or humiliating?) lack of preoccupation with our own unique kind, which might prove to be a temporary afterthought, survived only by cockroaches.

These two ways of looking at what we observe seem to recur, like the glass half full and the glass half empty.  The more we know of the detail of living systems, the more we seem torn between anxiety and denial on the one hand and wonder and delight on the other as we try to understand the significance of our knowledge. Science has radically altered our awareness of the scale and age of the universe, but this changing awareness seems to stimulate humility in some—our planet a tiny speck dominated by flea-like bipeds—and a sort of megalomania in others who see all of this as directed toward us, our species, as its predestined masters. Similarly, the exploration of human diversity in the twentieth century expanded for some the sense of plasticity and variability and for others reinforced the sense of human unity. Even within these divergent emphases, for some the recognition of human unity includes a capacity for mutual recognition and adaptation while for others it suggests innate tendencies toward violence and xenophobia. As we have slowly explored the mechanisms of memory and learning, we have seen examples of (fragile) human communities demoralized by exposure to other cultures and (resilient) examples of extraordinary adaptability. At one moment humans are depicted as potential stewards of the biosphere, at another as a cancer or a dangerous infestation. The growing awareness of a shared and interconnected destiny has a shadow side, the version of globalization that looks primarily for profit.

We are having much the same sort of debate at present between those who see religion primarily as a source of conflict between groups and others who see the world's religions as potentially convergent systems that have knit peoples together and laid the groundwork for contemporary ideas of human rights and civil society. Some believers feel called to treasure and respect the creation, including the many human cultures that have grown within it, while others regard differences of belief as sinful and the world we know as transitory or illusory. Each of the great religions, with different language and different emphases, offers the basis for environmental responsibility and for peaceful coexistence and compassion, but believers differ in what they choose to emphasize, all too many choosing the apocalyptic over the ethical texts. Nevertheless, major shifts have been occurring in the interpretation of information about climate change, most recently within the evangelical Christian community.   

My guess is that many people have tilted first one way and then the other over the past fifty years, as we have become increasingly aware of diverse understandings—surprised by accounts of human creativity and adaptation on the one hand, and distressed at the resurgence of ancient quarrels and loss of tolerance and mutual respect. Some people are growing away from irresponsible consumerism while others are having their first taste of affluence. Responses are probably partly based on temperament—generalized optimism vs. pessimism—so the tension will not be resolved by scientific findings. But these responses are also based on the decisions we make, on making up our minds about which interpretations we choose to believe. The world's historic religions deal in different ways with loss and the need for sacrifice, but the materials are there for working together, just as they are there for stoking conflict and competition.  We are most likely to survive this century if we decide to approach the choices and potential losses ahead with an awareness of the risks we face but at the same time with an awareness of the natural wonders around us and a determination to deal with each other with respect and faith in the possibility of cooperation and responsibility.

francesco_depretis's picture
Journalist, La Stampa; Italy Correspondent, Science Magazine

I was on a train back from the seaside. The summer was gone, and our philosophy teacher (I was in high school at the time) had assigned us a book to read.  The title was something like "The Birth of Modern Science in Europe". I started to leaf through it, without expecting anything special.

Until then, I had a purist vision of science: I supposed that the development of science was, in some way, a deterministic process, in which scientists proceeded linearly through their experiments and theories arose in the scientific community by common agreement.

Well... my vision of science was dramatically different from the one I experienced some years later! With surprise and astonishment, I discovered that Sir Isaac Newton had an unconcealed passion for alchemy (probably the furthest thing from science I could imagine), that Nicolaus Copernicus wrote to the Pope begging him to accept his theories, and that Galileo and other scientists fought not only against the Roman Church and Aristotle's thought but, perhaps more often, against one another simply to prevail.

Within two weeks I had finished the book, and my way of thinking changed. I understood that science was not only a pursuit of knowledge but also a social process, with its own rules and tricks: a never-ending tale, like human life itself. I have never forgotten that lesson, and since then my curiosity and passion for science have only grown. That book, without question, changed my mind.

joseph_yossi_vardi's picture
chairman of International Technologies

As a young graduate student in operations research, I was an avid believer in the power of scientific modelling. How wrong I was. Too often, modelling is a matter of searching for the coin under the lamplight of science, rather than where it actually lies.

First, all models involve a certain degree of approximation. Second, at least in operations research, human factors, human error, randomness, unexpected developments, and social behaviors all influence the results, or go unaccounted for in the modelling. Over time I became more skeptical and doubtful.

It helps if you can maintain your sense of humor. In the end, it is not about the data or the changes in the data, but how you see it, and how much you believe in it.

eduardo_punset's picture
Scientist; Spanish Television Presenter; Author, The Happiness Trip

All right, we knew it. But now we have the whole picture of the molecular process through which past and future link; how the germinal soul, rooted in brain matter and memory, allows for new perceptions, for the future, to emerge. It is both simple and terrifying at the same time.

When the mind is challenged from the outside universe, it searches its accumulated archives in order to make sense of the new stimulus. This screening of our memory (of our past) produces an immediate answer: the new stimulus either leaves us indifferent, or else it blooms into an emotion of love, of pleasure, or of sheer curiosity. These are the three touchstones of creativity. So basically, science has discovered that, at the very beginning at least, only the past matters. And that holds true of our future creativity as well.

Then a process more akin to alchemy than science sparks off and develops into social intelligence. The imitation process, based on mirror neurons, interacts with the corpus of accumulated knowledge (of one's own species, and of others) which, combined with a good stock of well-preserved individual memory, explodes into new thinking.

Until very recently, we were missing a fundamental step in the process of knowledge: namely, how short-term memory is transformed into long-term knowledge. At last we are taking into account the detailed machinery of durability, the specific proteins without which there is no learning and affection in childhood, no schooling at a later stage, no socialization in adult life. The roots are in the past; but there is no knowledge if we hide in a cave alone, with no windows to peer from and no shadows dancing outside.

The past has to be worked upon from the outside in order to be transformed into the future, and this realization has brought about the second main discovery in the molecular process of creativity. The so-called "technology transfer" from old to new generations is a two-way process: matter, mind, soul, past, memory, future, and also startling new ways of looking at old things are all marvellously intertwined in the evolutionary process.

rupert_sheldrake's picture
biologist and author

I used to think of skepticism as a primary intellectual virtue, whose goal was truth. I have changed my mind. I now see it as a weapon.

Creationists opened my eyes. They use the techniques of critical thinking to expose weaknesses in the evidence for natural selection, gaps in the fossil record and problems with evolutionary theory. Is this because they are seeking truth? No. They believe they already know the truth. Skepticism is a weapon to defend their beliefs by attacking their opponents.

Skepticism is also an important weapon in the defence of commercial self-interest. According to David Michaels, who was assistant secretary for environment, safety and health in the US Department of Energy in the 1990s, the strategy used by the tobacco industry to create doubt about inconvenient evidence has now been adopted by corporations making toxic products such as lead, mercury, vinyl chloride, and benzene. When confronted with evidence that their activities are causing harm, the standard response is to hire researchers to muddy the waters, branding findings that go against the industry's interests as "junk science." As Michaels noted, "Their conclusions are almost always the same: the evidence is ambiguous, so regulatory action is unwarranted." Climate change skeptics use similar techniques.

In a penetrating essay called "The Skepticism of Believers", Sir Leslie Stephen, a pioneering agnostic (and the father of Virginia Woolf), argued that skepticism is inevitably partial. "In regard to the great bulk of ordinary beliefs, the so-called skeptics are just as much believers as their opponents." Then, as now, those who proclaimed themselves skeptics had strong beliefs of their own. As Stephen put it in 1893, "The thinkers generally charged with skepticism are equally charged with an excessive belief in the constancy and certainty of the so-called 'laws of nature'. They assign a natural cause to certain phenomena as confidently as their opponents assign a supernatural cause."

Skepticism has even deeper roots in religion than in science. The Old Testament prophets were withering in their scorn for the rival religions of the Holy Land. Psalm 115 mocks those who make idols of silver and gold: "They have mouths, and speak not: eyes have they, and see not." At the Reformation, the Protestants deployed the full force of biblical scholarship and critical thinking against the veneration of relics, cults of saints and other "superstitions" of the Catholic Church. Atheists take religious skepticism to its ultimate limits; but they are defending another faith, a faith in science.

In practice, the goal of skepticism is not the discovery of truth, but the exposure of other people's errors. It plays a useful role in science, religion, scholarship, and common sense. But we need to remember that it is a weapon serving belief or self-interest; we need to be skeptical of skeptics. The more militant the skeptic, the stronger the belief.

william_h_calvin's picture
Theoretical Neurobiologist; Affiliate Professor Emeritus, University of Washington; Author, Global Fever

Back in 1968, when I first heard about global warming while visiting the Scripps Institution of Oceanography, almost everyone thought that serious problems were several centuries in the future. That's because no one realized how ravenous the world's appetite for coal and oil would become during a mere 40 years. They also thought that problems would develop slowly. Wrong again.

I tuned into abrupt climate change about 1984, when the Greenland ice cores showed big jumps in temperature and snowfall, stepping up and down in a mere decade but lasting centuries. I worried about global warming setting off another flip, but I still didn't revise my notions about a slow time scale for the present greenhouse warming.

Greenland changed my mind. About 2004, the speedup of the Greenland glaciers made a lot of climate scientists revise their notions about how fast things were changing. When the summer earthquakes associated with glacial movement doubled and then redoubled in a mere ten years, it made me feel as if I were standing on shaky ground, as if bigger things could happen at any time.

Then I saw the data on major floods and fires: steep increases every decade since 1950, and on all continents. That's not trouble moving around. It is global climate change. It may not be abrupt, but it has been fast.

For drought, which had been averaging about 15 percent of the world's land surface at any one time, there was a step up to a new baseline of 25 percent, which occurred with the 1982 El Niño. That's not gradual change but an abrupt shift to a new global climate.

But the most sobering realization came when I was going through the Amazon drought data on the big El Niños of 1972, 1982, and 1997. Ten years ago, we nearly lost two of the world's three major tropical rain forests to fires. If that mega El Niño had lasted two years instead of one, we could have seen the atmosphere's excess CO2 rise 40 percent over a few years — and likely an even bigger increase in our climate troubles. Furthermore, without all of those green leaves to remove CO2 from the air, the annual bump up in CO2 concentration would have become half again as large. That's like the movie shifting into fast forward.

And we're not even back-paddling as fast as we can, just drifting toward the falls. If I were a student or young professional, seeing my future being trashed, I'd be mad as hell. And hell is a pretty good metaphor for where we are heading if we don't get our act together. Quickly.

dimitar_d_sasselov's picture
Professor of Astronomy, Harvard University; Director, Harvard Origins of Life Initiative; Author, The Life of Super-Earths

I change my mind all the time — keeping an open mind in science is a good thing. Most often these are rather unremarkable occasions; most often it is acceptance of something I had been unconvinced or unsure about. But then there is this one time …

October 4th, 1995 was a warm day. Florence was overrun by tourists – and a few scientists from a conference I was attending. The next day one of my older and esteemed colleagues from Geneva was going to announce a curious find – a star that seemed to have a very small companion, as small as a planet like Saturn or Jupiter. Such claims had come and gone in the decades past, but this time the data seemed very good. He was keeping the details to himself until the next day, but when I asked, he told me the orbital period of the new planet. I was incredulous – the period was so short it was measured in days, not years. I told my wife back in the hotel that night: just 400 days!

I was not a planetary scientist – stars were my specialty – but I knew my planetary basics: a planet like Jupiter could not possibly exist so close to its star as to have a period of 400 days. Some of this I had learned as far back as my last year of high school. I did not question it; instead, I questioned my colleague's claim. He was the first to speak the next day, and he began by showing the orbital period of the new planet – it was 4.2 days! The night before, I must have heard "4.2 days", but the number was so incredibly foreign to my preconceptions that my brain had "translated" it into a more "reasonable" 420 days, or roughly 400. Deeply held preconceptions can be very powerful.

My Florentine experience took some time to sink in. But when it did, it was sobering and inspiring. It made me curious and motivated to find the answers to those questions that just days before I had taken for granted. And I ended up helping develop the new field of extrasolar planets research.

sam_harris's picture
Neuroscientist; Philosopher; Author, Making Sense

Like many people, I once trusted in the wisdom of Nature. I imagined that there were real boundaries between the natural and the artificial, between one species and another, and thought that, with the advent of genetic engineering, we would be tinkering with life at our peril. I now believe that this romantic view of Nature is a stultifying and dangerous mythology.

Every 100 million years or so, an asteroid or comet the size of a mountain smashes into the earth, killing nearly everything that lives. If ever we needed proof of Nature's indifference to the welfare of complex organisms such as ourselves, there it is. The history of life on this planet has been one of merciless destruction and blind, lurching renewal.

The fossil record suggests that individual species survive, on average, between one and ten million years. The concept of a "species" is misleading, however, and it tempts us to think that we, as Homo sapiens, have arrived at some well-defined position in the natural order. The term "species" merely designates a population of organisms that can interbreed and produce fertile offspring; it cannot be aptly applied to the boundaries between species (to what are often called "intermediate" or "transitional" forms). There was, for instance, no first member of the human species, and there are no canonical members now. Life is a continuous flux. Our nonhuman ancestors bred, generation after generation, and incrementally begat what we now deem to be the species Homo sapiens — ourselves. There is nothing about our ancestral line or about our current biology that dictates how we will evolve in the future. Nothing in the natural order demands that our descendants resemble us in any particular way. Very likely, they will not resemble us. We will almost certainly transform ourselves, likely beyond recognition, in the generations to come.

Will this be a good thing? The question presupposes that we have a viable alternative. But what is the alternative to our taking charge of our biological destiny? Might we be better off just leaving things to the wisdom of Nature? I once believed this. But we know that Nature has no concern for individuals or for species. Those that survive do so despite Her indifference. While the process of natural selection has sculpted our genome to its present state, it has not acted to maximize human happiness; nor has it necessarily conferred any advantage upon us beyond the capacity to raise the next generation to child-bearing age. In fact, there may be nothing about human life after the age of forty (the average lifespan until the 20th century) that has been selected by evolution at all. And with a few exceptions (e.g. the gene for lactose tolerance), we probably haven't adapted to our environment much since the Pleistocene.

But our environment and our needs — to say nothing of our desires — have changed radically in the meantime. We are in many respects ill-suited to the task of building a global civilization. This is not a surprise. From the point of view of evolution, much of human culture, along with its cognitive and emotional underpinnings, must be epiphenomenal. Nature cannot "see" most of what we are doing, or hope to do, and has done nothing to prepare us for many of the challenges we now face.

These concerns cannot be waved aside with adages like "if it ain't broke, don't fix it." There are innumerable perspectives from which our current state of functioning can be aptly described as "broke." Speaking personally, it seems to me that everything I do picks out some point on a spectrum of disability: I was always decent at math, for instance, but this is simply to say that I am like a great mathematician who has been gored in the head by a bull; my musical ability resembles that of a Mozart or a Bach, it is true, though after a near-fatal incident on skis; if Tiger Woods awoke from surgery to find that he now possessed (or was possessed by) my golf swing, rest assured that a crushing lawsuit for medical malpractice would be in the offing.

Considering humanity as a whole, there is nothing about natural selection that suggests our optimal design. We are probably not even optimized for the Paleolithic, much less for life in the 21st century. And yet, we are now acquiring the tools that will enable us to attempt our own optimization. Many people think this project is fraught with risk. But is it riskier than doing nothing? There may be current threats to civilization that we cannot even perceive, much less resolve, at our current level of intelligence. Could any rational strategy be more dangerous than following the whims of Nature? This is not to say that our growing capacity to meddle with the human genome couldn't present some moments of Faustian over-reach. But our fears on this front must be tempered by a sober understanding of how we got here. Mother Nature is not now, nor has she ever been, looking out for us.

john_allen_paulos's picture
professor of mathematics at Temple University

I've changed my mind about countless matters, but most if not all such changes have been vaguely akin to switching from brand A to brand B. In some deep sense, however, I feel that I've never really changed how I think about or evaluate things. This may sound like a severe case of cerebral stenosis, but I think the condition is universal. Although people change their minds, they do so in an invariant, convergent sort of way, and I find this to be of more interest than the brand switches, important as they sometimes are.

I take heart that this stance can be viewed as something other than stubborn rigidity, thanks to the so-called Agreement Theorem of the Nobel Prize-winning game theorist Robert Aumann. His theorem can be roughly paraphrased as follows: two rational individuals cannot forever agree to disagree.

An important definition allows for a slightly fuller statement. Information is termed "common knowledge" among a group of people if all parties know it, know that the others know it, know that the others know they know it, and so on. It is much more than "mutual knowledge," which requires only that the parties know the particular bit of information, not that they be aware of the others' knowledge.

Aumann showed that as agents' beliefs, formed in rational response to different bits of private information, gradually become common knowledge, their beliefs change and eventually coincide.

Thus in whatever rational ways each of us comes to change his or her mind, in the long run the rest of us will follow suit. Of course, as Keynes observed, in the long run, we're all dead.

Another problem is that Aumann's result says nothing about the convergence of irrational agents.
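Aumann's theorem even has a constructive face: Geanakoplos and Polemarchakis showed that if two such agents take turns announcing their posterior probabilities for some event, each announcement becomes common knowledge and refines what both parties know, until the announcements must coincide. Here is a minimal sketch of that dialogue; the uniform prior, the nine-state toy world, and the two partitions used below are my own illustrative choices, not anything from the essay or from Aumann's paper.

```python
from fractions import Fraction

def posterior(event, info):
    """P(event | info) under a uniform prior on the states."""
    return Fraction(len(event & info), len(info)) if info else Fraction(0)

def dialogue(states, partitions, event, true_state, max_rounds=20):
    """Geanakoplos-Polemarchakis agreement dialogue: two agents take turns
    announcing P(event | what they know); every announcement becomes common
    knowledge, shrinking the public information set, until they agree."""
    cell = lambda i, w: next(c for c in partitions[i] if w in c)
    public = set(states)      # states consistent with all announcements so far
    said = []
    for r in range(max_rounds):
        i = r % 2             # whose turn it is
        q = posterior(event, cell(i, true_state) & public)
        said.append(q)
        # everyone learns which states would have led agent i to announce q
        public = {w for w in public if posterior(event, cell(i, w) & public) == q}
        if len(said) >= 2 and said[-1] == said[-2]:
            break             # announcements coincide: disagreement cannot persist
    return said
```

Run on nine equally likely states, with agent 1 partitioning them as {1,2,3},{4,5,6},{7,8,9}, agent 2 as {1,2,3,4},{5,6,7,8},{9}, the event {3,4}, and true state 1, the announcements go 1/3, 1/2, 1/3, 1/3: the agents briefly disagree, each announcement teaches the other something, and they converge, just as the theorem promises.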

christopher_j_anderson's picture
Curator, TED Conferences; Author, TED Talks

Aside from whether Apple matters (whoops!), the biggest thing I've changed my mind about is climate change. There was no one thing that convinced me to flip from wait-and-see to the-time-for-debate-is-over. Instead, there were three things, which combined for me in early 2006. There was, of course, the scientific evidence, which kept getting more persuasive. There was also economics, and the recognition that moving to alternative, sustainable energy was going to be cheaper over the long run as oil got more expensive. And finally there was geopolitics, with ample evidence of how top-down oil riches destabilized a region and then the world. No one reason was enough to win me over to total energy regime change, but together they seemed win-win-win.

Now I see the entire energy and environmental picture through a carbon lens. It's very clarifying. Put CO2 above everything else, and suddenly you can make reasonable economic calculations about risks and benefits, without getting caught up in the knotty politics of full-spectrum environmentalism. I was a climate skeptic and now I'm a carbon zealot. I seem to annoy traditional environmentalists just as much, but I like to think that I've moved from behind to in front.

carolyn_porco's picture
Planetary Scientist; Cassini Imaging Team Leader; Director, CICLOPS, Space Science Institute, Boulder, Colorado

I've changed my mind about the manner in which our future on this planet might evolve.

I used to think that the power of science to dissect, inform, illuminate and clarify, its venerable record in improving the human condition, and its role in enabling the technological progress of the modern world were all so glaringly obvious that no one could reasonably question its hallowed position in human culture as the pre-eminent device for separating truth from falsehood.

I used to think that the edifice of knowledge constructed from thousands of years of scientific thought by various cultures all over the globe, and in particular the insights earned over the last 400 years from modern scientific methods, were so universally revered that we could feel comfortably assured of having permanently left our philistine days behind us.

And while I've always appreciated the need for care and perseverance in guiding public evaluation of the complexities of scientific discourse and its findings, I never expected that we would, at this stage in our development, have to justify and defend the scientific process itself.

Yet, that appears to be the case today. And now I'm no longer sure that scientific inquiry and the cultural value it places on verifiable truth can survive without constant protection; its ebb and flow over the course of human history affirms this. We have been beset in the past by dark ages, when scientific truths and the ideas that logically spring from them were systematically destroyed or made otherwise unavailable, when the practitioners of science were discredited, imprisoned, and even murdered. Periods of human enlightenment have been the exception throughout time, not the rule, and our language has acknowledged this: 'Two steps forward, one step back' neatly captures the nonmonotonic stagger inherent in any reading of human history.

And, if we're not mindful, we could stagger again. When the truth becomes problematic, when intellectual honesty clashes with political expediency, when the voices of reason are silenced to a mere whisper, when fear alloys with ignorance to promote might over intelligence, integrity, and wisdom, the very practice of science can find itself imperiled. At that point, can darkness be far behind?

To avoid so dangerous a tipping point requires us, first and foremost, to recognize the distasteful possibility that it could happen again, at any time. I now suspect the danger will be forever present, the need for vigilance forever great.

jaron_lanier's picture
Computer Scientist; Musician; Author, Who Owns The Future?

Here's a happy example of me being wrong. Other researchers interested in Virtual Reality had been proposing as early as twenty years ago that VR would someday be useful for the treatment of psychological disorders such as post-traumatic stress disorder.

I did not agree. In fact, I had strong arguments as to why this ought not to work. There was evidence that the brain created distinct "homuncular personas" for virtual world experiences, and reasons to believe that these personas were tied to increasingly distinct bundles of emotional patterns. Therefore, emotional patterns attached to real world situations would, I surmised, remain attached to those situations. The earliest research on PTSD treatment in VR seemed awfully shaky to me, and I was not very encouraging to younger researchers who were interested in it.

The idea of using VR for PTSD treatment seemed less likely to work than various other therapeutic applications of VR, which were more centered around somatic processes. For instance, VR can be used as an enhanced physical training environment. The first example, from the 1980s, involved juggling. If virtual juggling balls fly more slowly than real balls, then they are easier to juggle. You can then gradually increase the speed, in order to provide a more gradual path for improving skills than would be available in physical reality. (This idea came about initially because it was so hard to make early VR systems go as fast as the reality they were emulating. In the old VPL Research lab, where a lot of VR tools were initially prototyped, we were motivated to be alert for potential virtues hiding within the limitations of the era.) Variations on this strategy have become well established. For instance, patients are learning to use prosthetic limbs more quickly by using VR these days.

Beyond rational argument, I was biased in other ways: The therapeutic use of VR seemed "too cute," and sounded too much like a press release in waiting.

Well, I was wrong. PTSD treatment in VR is now a well-established field with its own conferences, journals publishing well-replicated results, and clinical practitioners. Sadly, the Iraq war has provided all too many patients, and has also motivated increased funding for research in this subfield of VR applications.

One of the reasons I was wrong is that I didn't see that the same tactic we used on juggling balls (of gradually adapting the content and design of a virtual world to the instantaneous state of the user/inhabitant) could be applied in a less somatic way. For instance, in some clinical protocols, a traumatic event is represented in VR with gradually changing levels of realism as part of the course of treatment.

Maybe I was locked into seeing VR through the filters of the limitations of its earliest years. Maybe I was too concerned about the cuteness factor. At any rate, I'm glad there was a diversity of mindsets in the research community so that others could see where I didn't.

I'm concerned that diversity of thought in some of the microclimates of the scientific community is narrowing these days instead of broadening. I blame the nature of certain online tools. Tools like Wikipedia encourage the false worldview that we already know enough to agree on a single account of reality, and anonymous blog comment threads can bring out mob-like behaviors in young scientists who use them.

At any rate, one of the consolations of science is that being wrong on occasion lets you know you don't know everything and motivates renewed curiosity. Being aware of being wrong once in a while keeps you young.

robert_shapiro's picture
Professor Emeritus of Chemistry and Senior Research Scientist, New York University; Author, Planetary Dreams

I used to view the scientific literature as a collective human effort to build an enduring and expanding structure of knowledge. Each new publication in a respected, refereed journal would be digested and debated with the thoroughness that religious groups devote to the Talmud, Bible or Koran. In science, of course, new papers can challenge widely held beliefs, so publication does not mean acceptance. The alternative is criticism, which usually provokes a new round of experiments. As a result, the new idea might end up on the scrap heap, perhaps becoming a historical curiosity. Cold fusion seems to have followed this path, as did, in my own field, the suggestion that the two chains of DNA lie side by side instead of being intertwined in a double helix.

But once it has passed scrutiny, a new contribution would be absorbed into the edifice of science, expanding and enhancing it, while providing a fragment of immortality to the authors.

My perception was wrong. New scientific ideas can be smothered with silence.

I was aware earlier of the case of Gregor Mendel.  His fundamental genetic experiments with peas were ignored for a third of a century. But he had published them in an obscure journal, in an age when meetings and libraries were fewer, and journals were circulated by land mail. When his ideas were rediscovered at the start of the twentieth century, Thomas Hunt Morgan set out to disprove them, and ended up performing experiments that greatly strengthened their case. A Nobel Prize was his reward. He wrote in a textbook: "The investigator must… cultivate also a skeptical state of mind toward all hypotheses — especially his own — and be ready to abandon them the moment the evidence pointed the other way."

Morgan's attitude still has a place in science but I no longer believe that it is standard practice. Another strategy has emerged by which some scientists deal with ideas that they dislike. They act as if the discussion or data had never been published, and proceed about their business without mentioning it.

One example involves the use of a technique called "prebiotic synthesis" to support the most prevalent idea about the origin of life. This theory proposes that life began on this planet with the accidental formation of an elaborate self-copying molecule, RNA or a close relative. The chemist Graham Cairns-Smith argued in a 1982 book that the technique was flawed and that life's origin by such an event was extremely improbable. He proposed an imaginative alternative. His alternative was debated, but the practice of prebiotic synthesis continued without discussion.

As I felt that his case was sound, I took up this cause and extended the arguments against prebiotic synthesis. I published a book, and a series of papers in refereed journals, including one devoted entirely to the origin of life. I expected rebuttals, and hoped that new control experiments would be run that would resolve the issue. The rebuttals did not appear, and citations of my work in the field were sparse. When citations were made, they were usually accompanied by a comment that the RNA-first theory had some problems that were not yet resolved. The resolution would take place by further applications of prebiotic synthesis. A blanket of silence has remained in place in the scientific literature concerning the validity of this technique. Ironically, my ideas have been welcomed by creationists, who advocate a supernatural solution to the origin-of-life problem.

The smother-by-silence practice may be fairly common in science.  Professor Kendric Smith of Stanford University has noted a similar pattern in the field of DNA repair, where the contribution of recombination to the repair of damage by ultraviolet radiation has been ignored in key papers. For a moral judgment on this practice, I cannot improve upon Smith's closing quote in his letter to ASBMB Today:

"In religion one can often be forgiven for one's sins but no one should be forgiven for sins against science."

paul_saffo's picture
Technology Forecaster; Consulting Associate Professor, Stanford University

When I began my career as a forecaster over two decades ago, it was a given that the core of futures research lay beyond the reach of traditional quantitative forecasting and its mathematical tools. This meant that futures researchers would not enjoy the full labor-saving benefits of number-crunching computers, but at least it guaranteed job security. Economists and financial analysts might one day wake up to discover that their computer tools were stealing their jobs, but futurists would not see machines muscling their way into the world of qualitative forecasting anytime soon.

I was mistaken.  I now believe that in the not too distant future, the best forecasters will not be people, but machines: ever more capable "prediction engines" probing ever deeper into stochastic spaces.  Indicators of this trend are everywhere from the rise of quantitative analysis in the financial sector, to the emergence of computer-based horizon scanning systems in use by governments around the world, and of course the relentless advance of computer systems along the upward-sweeping curve of Moore's Law.

We already have human-computer hybrids at work in the discovery/forecasting space, from Amazon's Mechanical Turk, to the myriad online prediction markets.  In time, we will recognize that these systems are an intermediate step towards prediction engines in much the same way that human "computers" who once performed the mathematical calculations on complex projects were replaced by general-purpose electronic digital computers.

The eventual appearance of prediction engines will also be enabled by the steady uploading of reality into cyberspace, from the growth of web-based social activities to the steady accretion of sensor data sucked up by an exponentially growing number of devices observing and, increasingly, manipulating the physical world. The result is an unimaginably vast corpus of raw material, grist for the prediction engines as they sift and sort and peer ahead. These prediction engines won't ever exhibit perfect foresight, but as they and the underlying data they work on co-evolve, it is a sure bet that they will do far better than mere humans.

david_gelernter's picture
Computer Scientist, Yale University; Chief Scientist, Mirror Worlds Technologies; Author, America-Lite: How Imperial Academia Dismantled our Culture (and ushered in the Obamacrats)

What I've changed my mind about is the idea that the public is wedded to obsolete 1970s GUIs & info mgmt forever — PARC's desktop & Bell Labs' Unix file system. I'll give two examples from my own experience. Both concern long-term ideas of mine and might seem like self-promotion, but my point is that as a society we don't have the patience to develop fully those big ideas that need time to soak in.

I first described a GUI called "lifestreams" in the Washington Post in 1994. By the early 2000s, I thought this system was dead in the water, destined to be resurrected in a grad student's footnote around the 29th century. The problem was (I thought) that Lifestreams was too unfamiliar, insufficiently "evolutionary" and too "revolutionary" (as the good folks at ARPA like to say [or something like that]); you need to go step-by-step with the public and the industry or you lose.

But today "lifestreams" are all over the net (take a look yourself), and I'm told that "lifestreaming" has turned into a verb at some recent Internet conferences. According to ZDnet.com, "Basically what's important about the OLPC [one laptop per child], has nothing to do with its nominal purposes and everything to do with its interface. Ultimately traceable to David Gelernter's 'Lifestreams' model, this is not just a remake of Apple's evolution of the original work at Palo Alto, but something new."

Moral: the public may be cautious but is not reactionary.

In a 1991 book called Mirror Worlds, I predicted that everyone would be putting his personal stuff in the Cybersphere (AKA "the clouds"); I said the same in a 2000 manifesto on Edge called "The 2nd Coming", & in various other pieces in between. By 2005 or so, I assumed that once again I'd jumped the gun, by too many years to learn the results pre-posthumously — but once again this (of all topics) turns out to be hot and all over the place nowadays. "Cloud computing" is the next big thing. What does this all prove? If you're patient, good ideas find audiences. But you have to be very patient.

And if you expect to cash in on long-term ideas in the United States, you're certifiable.

This last point is a lesson I teach my students, and on this item I haven't (and don't expect to) change my mind. But what the hell? It's New Year's, and there are worse things than being proved right once in a while, even if it's too late to count.

roger_bingham's picture
Cofounder and Director, The Science Network; Neuroscience Researcher, Center for Brain and Cognition, UCSD; Coauthor, The Origin of Minds; Creator PBS Science Programs

I was once a devout member of the Church of Evolutionary Psychology.

I believed in modules — lots of them. I believed that the mind could be thought of as a confederation of hundreds, possibly thousands, of information-processing neural adaptations. I believed that each of these mental modules had been fashioned by the relentless winnowing of natural selection as a solution to problems encountered by our hunter-gatherer ancestors in the Pleistocene. I believe I actually said that we were living in the Space Age with brains from the Stone Age. Which was clever — but not, it turned out, particularly wise.

Along with the Church Elders, I believed that this was our universal evolutionary heritage; that if you added together a whole host of these
domain-specific mini-computers — a face recognition module, a spatial relations module, a rigid object mechanics module, a tool-use module, a social exchange module, a child-care module, a kin-oriented motivation module, a sexual attraction module, a grammar acquisition module and so on —  then you had the neurocognitive architecture that comprises the human mind. Along with them, I believed that what made the human mind special was not fewer of these 'instincts', but more of them.

I was so enchanted by this view of life that I used it as the conceptual scaffolding upon which to build a multi-million-dollar, critically acclaimed PBS series that I created and hosted in 1996.

And then I changed my mind.

Actually, I prefer to say that I experienced a conversion. My conversion — literally, a turning around — to new beliefs was prompted primarily by conversations: first and foremost with an apostate from the Church of Evolutionary Psychology's inner sanctum (Peggy La Cerra), then with a group of colleagues including neuroscientists, evolutionary biologists and philosophers. Two years later, La Cerra and I published in PNAS an alternative model of the mind and followed that with a book in 2002.

Although this is not the place to detail the arguments, we suggested that the selective pressures of navigating ancestral environments — particularly the social world — would have required an adaptively flexible, on-line information-processing system and would have driven the evolution of the neocortex.  We claimed that the ultimate function of the mind is to devise behavior that wards off the depredations of entropy and keeps our energy bank balance in the black. So our universal evolutionary heritage is not a bundle of instincts, but a self-adapting system that is responsive to environmental stimuli, constantly analyzing bioenergetic costs and benefits, creating a customized database of experiences and outcomes, and generating minds that are unique by design.     

We also explained the construction of selves, how our systems adapt to different 'marketplaces', and the importance of reputation effects — a richly nuanced story, which explains why the phrase "I changed my mind" is, with all due respect, the kind of rather simplistic folk psychological language that I hope we will eventually clean up. I think it was Mallarmé who said it was the duty of the poet to purify the language of the tribe. That task now falls also to the scientist.

This model of the mind that I have now subscribed to for about a decade is the bible at the Church of Theoretical Evolutionary Neuroscience (of which I am a co-founder). It was created in alignment with both the adaptationist principles of evolutionary biologists and psychologists (who, at the time, tended to pay little attention to the actual workings of the brain at the implementation level of neurons) and the constructivist principles of neuroscientists (who tended to pay little attention to adaptationism). It would be unrealistic, however, to claim that the two perspectives have yet been satisfactorily reconciled. 

And this time, I am not so devout.

Some Evolutionary Psychologists promoted their ideas with a fervor that has been described as evangelical. To a certain extent, that seems to go with the evolutionary territory: think of the ideological feuds surrounding sociobiology, the renewed debates about levels of selection and so on. Of course, it could be argued that the latest subfields of neuroscience (like neuroeconomics and social cognitive neuroscience) are not immune to these enthusiasms (the word comes from the Greek enthousiasmos: inspired or possessed by a god or gods). Think of the fMRI-mediated neophrenological explosion of areas said to be the neural correlate of some characteristic or other; or whether the mirror neuron system can possibly carry all the conceptual freight currently being assigned to it.

Even in science, a seductive story will sometimes, at least for a while, outpace the data. Maybe that's inevitable in the pioneering phase of a fledgling discipline. But that's when caution is most necessary — when the engine of discovery is running more on faith than facts. That's the time to remember that hubris is a sin in science as well as religion.

neil_gershenfeld's picture
Physicist, Director, MIT's Center for Bits and Atoms; Co-author, Designing Reality

I've long thought of myself as working at the boundary between physical science and computer science; I now believe that that boundary is a historical accident and does not really exist.

There's a sense in which technological progress has turned it into a tautological statement. It's now possible to store data in atomic nuclei and use electron bonds as logic gates. In such a computer the number of information-bearing degrees of freedom is on the same order as the number of physical ones; it's no longer feasible to account for them independently. This means that computer programs can, and I'd argue must, look more like physical models, including spatial and temporal variables in the density and velocity of information propagation and interaction. That shouldn't be surprising; the canon of computer science emerged a few decades ago to describe the available computing technology, while the canon of physics emerged a few centuries ago to describe the accessible aspects of nature. Computing technology has changed more than nature has; progress in the former is reaching the limits of the latter.
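As an illustrative sketch (mine, not the essay's) of what a program that "looks like a physical model" might mean: a 1-D diffusion update on a periodic lattice, in which information propagates at most one site per time step and the local rule conserves the total quantity, as a physical law would.

```python
def diffuse(field, steps, alpha=0.25):
    """Explicit update of 1-D diffusion on a periodic lattice.

    Each site exchanges only with its two neighbors, so information
    moves at most one site per step (a finite propagation velocity),
    and the update rule conserves the sum over the lattice.
    """
    n = len(field)
    for _ in range(steps):
        field = [field[i] + alpha * (field[(i - 1) % n]
                                     + field[(i + 1) % n]
                                     - 2 * field[i])
                 for i in range(n)]
    return field

# A point disturbance spreads outward within a "light cone" of one
# site per step; sites farther away remain exactly untouched.
spike = [0.0] * 16
spike[8] = 1.0
out = diffuse(spike, 3)
```

Space, time, a propagation speed, and a conservation law all appear as explicit program variables, which is the shape of model the passage above argues computing is being pushed toward.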

Conversely, it makes less and less sense to define physical theories by the information technology of the last millennium (a pencil and piece of paper); a computational model is every bit as fundamental as one written with calculus. This is seen in frontiers of research in nonlinear dynamics, quantum field theories, and black hole thermodynamics, which look more and more like massively parallel programming models. However, the organization of research has not yet caught up with this content; many of the pioneers doing the work are in neither Physics nor Computer Science departments, but are scattered around (and off) campus. Rather than trying to distinguish between programming nature and the nature of programming, I think that it makes more sense to recognize not just a technology or theory of information, but a single science.

bart_kosko's picture
Information Scientist and Professor of Electrical Engineering and Law, University of Southern California; Author, Noise, Fuzzy Thinking

I have changed my mind about using the sample mean as the best way to combine measurements into a single predictive value.  Sometimes it is the best way to combine data but in general you do not know that in advance.  So it is not the one number from or about a data set that I would want to know in the face of total uncertainty if my life depended on the predicted outcome.

Using the sample mean always seemed like the natural thing to do.  Just add up the numerical data and divide by the number of data.  I do not recall ever doubting that procedure until my college years.  Even then I kept running into the mean in science classes and even in philosophy classes where the discussion of ethics sometimes revolved around Aristotle's theory of the "golden mean." There were occasional mentions of medians and modes and other measures of central tendency but they were only occasional.      

The sample mean also kept emerging as the optimal way to combine data in many formal settings.  At least it did given what appeared to be the reasonable criterion of minimizing the squared errors of the observations.  The sample mean falls out from just one quick application of the differential calculus.  So the sample mean had on its side both mathematical proof and the resulting prominence of appearing in hundreds if not thousands of textbooks and journal articles.  It was and remains the evidentiary workhorse of modern applied science and engineering.  The sample mean summarizes test scores and gets plotted in trend lines and centers confidence intervals among numerous other applications.
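That "one quick application of the differential calculus" can be written out explicitly: setting the derivative of the total squared error to zero yields the sample mean, while the same exercise with absolute error (anticipating the discussion below) yields the sample median.

```latex
% Squared error: the minimizer is the sample mean.
\[
  \frac{d}{dm}\sum_{i=1}^{n}(x_i-m)^2
  = -2\sum_{i=1}^{n}(x_i-m) = 0
  \quad\Longrightarrow\quad
  m^{*} = \frac{1}{n}\sum_{i=1}^{n} x_i .
\]
% Absolute error: the minimizer balances points above and below.
\[
  \frac{d}{dm}\sum_{i=1}^{n}\lvert x_i-m\rvert
  = \sum_{i=1}^{n}\operatorname{sgn}(m-x_i) = 0
  \quad\Longrightarrow\quad
  m^{*} = \operatorname{median}(x_1,\dots,x_n).
\]
```

The second condition says the optimum sits where as many data points lie above as below, which is exactly the median.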

Then I ran into the counter-example of Cauchy data.  These data come from bell curves with tails just slightly thicker than the familiar "normal" bell curve.  Cauchy bell curves also describe "normal" events that correspond to the main bell of the curves.  But Cauchy bell curves have thicker tails than normal bell curves have and these thicker tails allow for many more "outliers" or rare events.  And Cauchy bell curves arise in a variety of real and theoretical cases.  The counter-example is that the sample mean of Cauchy data does not improve no matter how many samples you combine.  This result contrasts with the usual result from sampling theory that the variance of the sample mean falls with each new measurement and hence predictive accuracy improves with sample size (assuming that the square-based variance term measures dispersion and that such a mathematical construct always produces a finite value — which it need not produce in general).  The sample mean of ten thousand Cauchy data points has no more predictive power than does the sample mean of ten such data points.  Indeed the sample mean of Cauchy data has no more predictive power than does any one of the data points picked at random.  This counter-example is but one of the anomalous effects that arise from averaging data from many real-world probability curves that deviate from the normal bell curve or from the twenty or so other closed-form probability curves that have found their way into the literature in the last century.
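This behavior is easy to reproduce numerically. Below is a minimal standard-library Python sketch (the function names are mine); it draws standard Cauchy variates by the inverse-CDF transform tan(pi*(u - 1/2)) and compares the sample mean with the sample median as the sample grows.

```python
import math
import random

def cauchy_sample(n, rng):
    """Draw n standard Cauchy variates via the inverse CDF."""
    return [math.tan(math.pi * (rng.random() - 0.5)) for _ in range(n)]

def sample_mean(xs):
    return sum(xs) / len(xs)

def sample_median(xs):
    ys = sorted(xs)
    mid = len(ys) // 2
    return ys[mid] if len(ys) % 2 else 0.5 * (ys[mid - 1] + ys[mid])

rng = random.Random(42)
for n in (10, 100, 10_000):
    xs = cauchy_sample(n, rng)
    print(n, sample_mean(xs), sample_median(xs))
# The means wander: the mean of n standard Cauchy variates is itself
# standard Cauchy, so more data buys no accuracy.  The medians, by
# contrast, tighten around the true center 0 as n grows.
```

Run it with different seeds and the medians keep converging while the means refuse to settle, which is the counter-example in miniature.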

Nor have scientists always used the sample mean.  Historians of mathematics have pointed to the late sixteenth century and the introduction of the decimal system for the start of the modern practice of computing the sample mean of data sets to estimate typical parameters.  Before then the mean apparently meant the arithmetic average of just two numbers as it did with Aristotle.  So Hernan Cortes may well have had a clear idea about the typical height of an adult male Aztec in the early sixteenth century.  But he quite likely did not arrive at his estimate of the typical height by adding measured heights of Aztec males and then dividing by the number added.  We have no reason to believe that Cortes would have resorted to such a computation if the Church or King Charles had pressed him to justify his estimate.  He might just as well have lined up a large number of Aztec adult males from shortest to tallest and then reported the height of the one in the middle.

There was a related and deeper problem with the sample mean:  It is not robust.  Extremely small or large values distort it.  This rotten-apple property stems from working not with measurement errors but with squared errors.  The squaring operation exaggerates extreme data even though it greatly simplifies the calculus when trying to find the estimate that minimizes the observed errors.  That estimate turns out to be the sample mean when one minimizes squared error, but not in general when one works with the raw error itself or with other measures.  The statistical surprise of sorts is that using the raw or absolute error of the data gives the sample median as the optimal estimate.

The sample median is robust against outliers.  If you throw away the largest and smallest values in a data set then the median does not change but the sample mean does (the result is the more robust "trimmed" mean, used in combining the judging scores in figure skating and elsewhere to remove judging bias).  Realtors have long stated typical housing prices as sample medians rather than sample means because a few mansions can so easily skew the sample mean.  The sample median would not change even if the price of the most expensive house rose to infinity.  The median would still be the middle-ranked house if the number of houses were odd.  But this robustness is not a free lunch.  It comes at the cost of ignoring some of the information in the numerical magnitudes of the data and has its own complexities for multidimensional data.
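A few lines of Python make the rotten-apple property concrete (an illustrative sketch: the prices are invented, and the trimming convention of dropping just the single lowest and highest value is one of several in use).

```python
def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    ys = sorted(xs)
    mid = len(ys) // 2
    return ys[mid] if len(ys) % 2 else 0.5 * (ys[mid - 1] + ys[mid])

def trimmed_mean(xs):
    """Mean after discarding the single smallest and largest value."""
    return mean(sorted(xs)[1:-1])

# Four modest houses and one mansion (invented prices, in thousands).
prices = [200, 220, 240, 260, 5000]
print(mean(prices))          # 1184.0 -- dragged upward by the mansion
print(median(prices))        # 240 -- unmoved by the mansion
print(trimmed_mean(prices))  # 240.0 -- extremes dropped before averaging
```

Raise the mansion's price to any figure at all and the median stays 240, which is the free-lunch-with-a-cost trade the paragraph above describes.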

Other evidence pointed to using the sample median rather than the sample mean.  Statisticians have computed the so-called breakdown point of these and other statistical measures of central tendency.  The breakdown point measures the largest proportion of data outliers that a statistic can endure before it breaks down in a formal sense of producing very large deviations.  The sample median achieves the theoretical maximum breakdown point.  The sample mean does not come close.  The sample median also turns out to be the optimal estimate for certain types of data (such as Laplacian data) found in many problems of image processing and elsewhere — if the criterion is maximizing the probability or likelihood of the observed data.  And the sample median can also center confidence intervals.  So it too gives rise to hypothesis tests and does so while making fewer assumptions about the data than the sample mean often requires for the same task.

The clincher was the increasing use of adaptive or neural-type algorithms in engineering and especially in signal processing.  These algorithms cancel echoes and noise on phone lines as well as steer antennas and dampen vibrations in control systems.  The whole point of using an adaptive algorithm is that the engineer cannot reasonably foresee all the statistical patterns of noise and signals that will bombard the system over its lifetime.  No type of lifetime average will give the kind of performance that real-time adaptation will give if the adaptive algorithm is sufficiently sensitive and responsive to its measured environment.  The trouble is that most of the standard adaptive algorithms derive from the same old and non-robust assumptions about minimizing squared errors and thus they result in the use of sample means or related non-robust quantities.  So real-world gusts of data wind tend to destabilize them.  That is a high price to pay just because in effect it makes nineteenth-century calculus computations easy and because such easy computations still hold sway in so much of the engineering curriculum.  It is an unreasonably high price to pay in many cases where a comparable robust median-based system or its kin both avoids such destabilization and performs similarly in good data weather and does so for only a slightly higher computational cost.  There is a growing trend toward using robust algorithms.  But engineers still have launched thousands of these non-robust adaptive systems into the stream of commerce in recent years.  We do not know whether the social costs involved from using these non-robust algorithms are negligible or substantial.
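The difference between the two error criteria shows up directly in the update rules. The sketch below contrasts the classic LMS update (the gradient step for squared error) with the sign-error LMS update (the gradient step for absolute error) while tracking a constant signal hit by one impulsive outlier; the signal, step size, and outlier are all invented for illustration.

```python
def lms(data, mu=0.1):
    """Least-mean-squares: w += mu * error (minimizes squared error)."""
    w, path = 0.0, []
    for x in data:
        w += mu * (x - w)
        path.append(w)
    return path

def sign_lms(data, mu=0.1):
    """Sign-error LMS: w += mu * sign(error) (minimizes absolute error)."""
    w, path = 0.0, []
    for x in data:
        e = x - w
        w += mu * (1 if e > 0 else -1 if e < 0 else 0)
        path.append(w)
    return path

# A steady signal at 1.0 with one impulsive outlier: a "gust of data wind."
data = [1.0] * 30 + [100.0] + [1.0] * 30
print(max(lms(data)))       # the outlier hurls the LMS estimate far off
print(max(sign_lms(data)))  # sign-LMS can move at most mu per step
```

The squared-error update scales its step by the raw error, so one bad sample destabilizes it; the absolute-error update caps every step at mu, which is the robustness-for-a-small-cost trade described above (the sign update does chatter by about mu around the target, the slight price it pays).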

So if under total uncertainty I had to pick a predictive number from a set of measured data and if my life depended on it — I would now pick the median.

jesse_bering's picture
Psychologist; Associate Professor, Centre for Science Communication, University of Otago, New Zealand; Author, Perv

If asked years ago whether I believed in God, my answer would have gone something like this: "I believe there's something…" This response leaves enough wiggle room for a few quasi-religious notions to slip comfortably through. I no longer believe that my soul is immortal, that the universe sends me messages every now and then, or that my life story will unfold according to some inscrutable plan. But losing these beliefs is more like knowing how and why a perceptual illusion is deceiving my evolved senses than it is becoming immune to the illusion altogether.
Here's a snapshot of how these particular illusions work:

Psychological Immortality
There's a scene in Gide's The Counterfeiters where a suicidal man puts a pistol to his temple but hesitates for fear of the noise from the blast. Similarly, a group of college students who rejected the idea that consciousness survives death nonetheless told me that someone who'd died in a car accident would know he was dead. "There's no afterlife," one participant said. "He sees that now." 
In wondering what it's like to be dead, our psychology responds by running mental simulations using previous states of consciousness. The trouble is that death is not like anything we've ever experienced — or can experience. (What's it like to be conscious yet unconscious at the same time?) I doubt you'd find anyone who believes less in the afterlife, yet I have a very real fear of ghosts and I feel guilty for not visiting my mother's grave more often.

Symbolic Meaning of Natural Events
Psychologist Becky Parker and I told a seven-year-old that an invisible princess was in the room with her. The task was to find a hidden ball by placing her hand on top of the box she thought it was inside. If you change your mind, we said, just move your hand to the other box. Now, Princess Alice likes you and she's going to help you find the ball. "I don't know how she's going to tell you," said Becky, "but somehow she'll tell you if you pick the wrong box."

The child picked a box, held her hand there, and after 15 seconds the box opened to reveal the ball (there were two identical balls). On the second trial, as soon as the girl chose a box, a picture crashed to the ground, and the child moved her hand to the other box. In doing so, she responded just like most other seven-year-olds we tested. They didn't need to believe in Princess Alice to see the picture falling as a sign. In fact, if scepticism can be operationally measured by the degree of tilt in rolling eyes, many of them could be called sceptics.
More surprising was that slightly younger children, the credulous five-year-olds, didn't move their hands, and when asked why the picture fell, they said things like "I don't know why she did it, she just did it." They saw Princess Alice as running about making things happen, not as a communicative partner. To them, the events had nothing to do with their behaviour. Finally, the three-year-olds we tested simply shrugged their shoulders and said that the picture was broken. Princess Alice who?

Seeing signs in natural events is a developmental accomplishment rather than the result of a gap in scientific knowledge. To experience an illusion, the psychological infrastructure must first be in place. Whenever I hear mayors blaming hurricanes on drug use or evangelicals attributing tsunamis to homosexuality, I think of Princess Alice. Still, after receiving bad news my first impulse is to ask myself "why?"  Even for someone like me, scientific explanations just don't scratch the itch like supernatural ones. 

Personal Destiny
Jean-Paul Sartre, the atheistic existentialist, observed that he couldn't help but feel as though a divine hand had guided his life. "It contradicts many of my other ideas," he said. "But it is there, floating vaguely. And when I think of myself I often think rather in this way, for want of being able to think otherwise."

My own atheism is not as organic as was Sartre's. Only scientific evidence and eternal vigilance have enabled me to step outside of this particular illusion of personal destiny. Psychologists now know that human beings intuitively reason as though natural categories exist for an intelligently designed purpose. Clouds don't just exist, say kindergartners; they're there for raining.

Erring this way about clouds is one thing, but when it colours our reasoning about our own existence, that's where this teleo-functional bias gets really interesting. The illusion of personal destiny is intricately woven together with other quasi-religious illusions in a complex web that researchers have not even begun to pull apart. My own private thoughts remain curiously saturated with doubts about whether I'm doing what I'm "meant" for.

Some beliefs are arrived at so easily, held so deeply, and divorced so painfully that it seems unnatural to give them up. Such beliefs can be abandoned when the illusions giving rise to them are punctured by scientific knowledge, but a mind designed by nature cannot be changed fundamentally. I stopped believing in God long ago, but he still casts a long shadow.

j_craig_venter's picture
A leading scientist of the 21st century for Genomic Sciences; Co-Founder, Chairman, Synthetic Genomics, Inc.; Founder, J. Craig Venter Institute; Author, A Life Decoded

Like many or perhaps most I wanted to believe that our oceans and atmosphere were basically unlimited sinks with an endless capacity to absorb the waste products of human existence.  I wanted to believe that solving the carbon fuel problem was for future generations and that the big concern was the limited supply of oil, not the rate of adding carbon to the atmosphere. The data are irrefutable: carbon dioxide concentrations have been steadily increasing in our atmosphere as a result of human activity since the earliest measurements began. We know that on the order of 4.1 billion tons of carbon are being added to and staying in our atmosphere each year.  We know that burning fossil fuels and deforestation are the principal contributors to the increasing carbon dioxide concentrations in our atmosphere. Eleven of the last twelve years rank among the warmest years since 1850.  While no one knows for certain the consequences of this continuing unchecked warming, some have argued it could result in catastrophic changes, such as the disruption of the Gulf Stream, which keeps the UK out of the ice age, or even the possibility of the Greenland ice sheet sliding into the Atlantic Ocean.  Whether or not these devastating changes occur, we are conducting a dangerous experiment with our planet, one we need to stop.

The developed world, including the United States, England and Europe, contributes disproportionately to the environmental carbon, but the developing world is rapidly catching up.  As the world population increases from 6.5 billion people to 9 billion over the next 45 years and countries like India and China continue to industrialize, some estimates indicate that we will be adding over 20 billion tons of carbon a year to the atmosphere. Continued greenhouse gas emissions at or above current rates would cause further warming and induce many changes to the global climate that could be more extreme than those observed to date. This means we can expect more climate change: more ice-cap melting, rising sea levels, warmer oceans and therefore greater storms, as well as more droughts and floods, all of which compromise food and fresh water production.

It required close to 100,000 years for the human population to reach 1 billion people on Earth in 1804.  In 1960 the world population passed 3 billion and now we are likely to go from 6.5 billion to 9 billion over the next 45 years.  I was born in 1946 when there were only about 2.4 billion of us on the planet, today there are almost three people for each one of us in 1946 and there will soon be four.

Our planet is in crisis, and we need to mobilize all of our intellectual forces to save it. One part of the solution lies in building a scientifically literate society. There are those who like to believe that the future of life on Earth will continue as it has in the past, but unfortunately for humanity, the natural world around us does not care what we believe. But believing that we can do something to change our situation using our knowledge can very much affect the environment in which we live.

randolph_nesse's picture
Research Professor of Life Sciences, Director (2014-2019), Center for Evolution and Medicine, Arizona State University; Author, Good Reasons for Bad Feelings

I used to believe that you could find out what is true by finding the smartest people and finding out what they think. However, the most brilliant people keep turning out to be wrong.  Linus Pauling's ideas about Vitamin C are fresh in mind, but the famous physicist Lord Kelvin did more harm in 1900 with calculations based on the rate of earth's cooling that seemed to show that there had not been enough time for evolution to take place. A lot of the belief that smart people are right is an illusion caused by smart people being very convincing… even when they are wrong.

I also used to believe that you could find out what is true by relying on experts — smart experts — who devote themselves to a topic.  But most of us remember being told to eat margarine because it is safer than butter — then it turned out that trans-fats are worse.  Doctors told women they must use hormone replacement therapy (HRT) to prevent heart attacks — but HRT turned out to increase heart attacks.  Even when they are not wrong, expert reports often don't tell you what is true.  For instance, read reviews by experts about antidepressants; they provide reams of data, but you won't often find the simple conclusion that these drugs are not all that helpful for most patients.  It is not just others; I shudder to think about all the false beliefs I have unknowingly but confidently passed on to my patients, thanks to my trust in experts. Everyone should read the article by Ioannidis, "Why Most Published Research Findings Are False."

Finally, I used to believe that truth had a special home in universities.  After all, universities are supposed to be devoted to finding out what is true, and teaching students what we know and how to find out for themselves. Universities may be the best show in town for truth pursuers, but most stifle innovation and constructive engagement with real controversies, not just sometimes, but most of the time, systematically.

How can this be? Everyone is trying so hard to encourage innovation!  The Regents take great pains to find a President who supports integrity and creativity, the President chooses exemplary Deans, who mount massive searches for the best Chairs. Those Chairs often hire supporters who work in their own areas, but what if one wants to hire someone doing truly innovative work, someone who might challenge established opinions?  Faculty committees intervene to ensure that most positions go to people just about like themselves, and the Dean asks how much grant overhead funding a new faculty member will bring in.  No one with new ideas, much less work in a new area or critical of established dogmas, can hope to get through this fine sieve.  If they do, review committees are waiting. And so, by a process of unintentional selection, diversity of thought and topic is excluded.  If it still sneaks in, it is purged.  The disciplines become ever more insular. And universities find themselves unwittingly inhibiting progress and genuine intellectual engagement.  University leaders recognize this and hate it, so they are constantly creating new initiatives to foster innovative interdisciplinary work.  These have the same lovely sincerity as new diets for the New Year, and the same blindness to the structural factors responsible for the problems.

Where can we look to find what is true?  Smart experts in universities are a place to start, but if we could acknowledge how hard it is for truth and its pursuers to find safe university lodgings, and how hard it is for even the smartest experts to offer objective conclusions, we could begin to design new social structures that would support real intellectual innovation and engagement.

barry_c_smith's picture
Professor & Director, Institute of Philosophy School of Advanced Study University of London

For a long time I regarded neuroscience as a fascinating source of information about the workings of the visual system and its dual pathways for sight and action; the fear system in humans and animals, and numerous puzzling pathology cases arising from site-specific lesions.

Yet, despite the interest of these findings, I had little faith that the profusion of fMRI studies of different cortical regions would tell us much about the problems that had preoccupied philosophers for centuries. After all, some of the greatest minds of history had long pondered the nature of consciousness, the self, the relation between self and others, only to produce a greater realisation of how hard it was to say something illuminating about any of these phenomena. The more one is immersed in neural mechanisms the less one seems to be talking about consciousness, and the more one attends to the qualities of conscious experience the less easy it is to connect them with the mechanisms of the brain. In despair, some philosophers suggested that we must reduce or eliminate the everyday way of speaking about our mental lives to arrive at a science of mind. There appeared to be a growing gulf between how things appeared to us and how reductionist neuroscience told us they were.

However, I have changed my mind about the relevance of neuroscience to philosophers' questions, and vice versa. Why? Well, firstly because the most interesting findings in cognitive neuroscience are not in the least reductionist. On the contrary, neuroscientists rely on subjects' reports of their experiences in familiar terms to target the states they wish to correlate with increased activity in the cortex. Researchers disrupt specific cortical areas with TMS to discover how subjects' experiences or cognitive capacities are altered.

This search for the neural correlates of specific states and abilities has proved far more successful than any reductionist programme; the aim being to explain precisely which neural areas are responsible for sustaining the experiences we typically have as human subjects. And what we are discovering is just how many sub-systems cooperate to maintain a unified and coherent field of conscious experience in us. When any of these systems is damaged, the result is bizarre pathologies of mind that we find hard to comprehend. It is here that neuroscientists seek the help of philosophers in analysing the character of normal experience and describing the nature of the altered states. Reciprocally, what philosophers are learning from neuroscience is leading to revisions in cherished philosophical views; mostly for the better. For example, the early stages of sensory processing show considerable cross-modal influence of one sense on another: the nose smells what the eye sees, the tongue tastes what the ear hears, the recognition of voice is enhanced by, and enhances, facial recognition in the fusiform face area; all of which leads us to conclude that the five senses are not nearly as separate as common sense, and most philosophers, have always assumed.

Similar breakthroughs in understanding how our sense of self depends on the somatosensory system are leading to revised philosophical thinking about the nature of self. And while philosophers have wondered how individuals come to know about the minds of others, neuroscience assumes the problem to have been partly solved by the discovery of the mirror neuron system, which suggests an elementary, almost bodily, level of intersubjective connection between individuals from which the more sophisticated notions of self and other may develop. We don't start, like Descartes, with the self and bridge to our knowledge of other minds. We start instead with primitive social interactions from which the notions of self and other are constructed.

Neuroscientists present us with strange phenomena like patients with lesions in the right parietal region who are convinced that their left arm does not belong to them. Some still feel sensations of pain in their hand but do not believe that it is their pain that is being felt: something philosophers previously believed to be conceptually impossible.

I think the startling conclusion should be just how precarious the typical experience of the normally functioning mind really is. We should not find it strange to come across people who do not believe their hand belongs to them, or that it acts under someone else's command. Instead, we should think how remarkable it is that this assembly of sub-systems that keeps track of our limbs, our volitions, our position in space, and our recognition of others should cooperate to sustain the sense of self and the feeling of a coherent and unified experience of the world, so familiar to us that philosophers have believed it to be the most certain thing we know. It isn't the pathology cases of cognitive neuropsychology that are exceptional: it is the normally functioning mind that we should find the most surprising.

david_sloan_wilson's picture
SUNY distinguished professor of biology and anthropology, Binghamton University; Editor-in-Chief of Evolution: This View of Life

In 1975, as a newly minted PhD who had just published my first paper on group selection, I was invited by Science magazine to review a book by Michael Gilpin titled Group Selection in Predator-Prey Communities. Gilpin was one of the first biologists to appreciate the importance of what Stuart Kauffman would call "the sciences of complexity." In his book, he was claiming that complex interactions could make group selection a more important evolutionary force than the vast majority of biologists had concluded on the basis of simpler mathematical models.

Some background: Group selection refers to the evolution of traits that increase the fitness of whole groups, compared to other groups. These traits are often selectively disadvantageous within groups, creating a conflict between levels of selection. Group selection requires the standard ingredients of natural selection: a population of groups that vary in their phenotypic properties in a heritable fashion, with consequences for collective survival and reproduction. Standard population genetics models give the impression that groups are unlikely to vary unless they are initiated by small numbers of individuals with minimal migration among groups during their existence. This kind of reasoning turned group selection into a pariah concept in the 1960s, taught primarily as an example of how not to think. I had become convinced that group selection could be revived for smaller, more ephemeral groups that I called "trait groups." Gilpin was suggesting that group selection could also be revived for larger, geographically isolated groups on the basis of complex interactions.

Gilpin focused on the most famous conjecture about group selection, advanced by V.C. Wynne-Edwards in 1962, that animals evolve to avoid overexploiting their resources. Wynne-Edwards had become an icon for everything that was wrong and naïve about group selection. Gilpin boldly proposed that animals could indeed evolve to "manage" their resources, based on non-linearities inherent in predator-prey interactions. As resource exploitation evolves by within-group selection, there is not a gradual increase in the probability of extinction. Instead, there is a tipping point that suddenly destabilizes the predator-prey interaction, like falling off a cliff. This discontinuity increases the importance of group selection, keeping the predator-prey interaction in the zone of stability.

I didn't get it. To me, Gilpin's model required a house of cards of assumptions, a common criticism leveled against earlier models of group selection. I therefore wrote a tepid review of Gilpin's book. I was probably also influenced by a touch of professional jealousy, as someone who was myself trying to acquire a reputation for reviving group selection!

I didn't get the complexity revolution until I read James Gleick's Chaos: Making a New Science, which I regard as one of the best books ever written about science for a general audience. Suddenly I realized that as complex systems, higher-level biological units such as groups, communities, ecosystems, and human cultures would almost certainly vary in their phenotypic properties and that some of this phenotypic variation might be heritable. Complexity theory became a central theme in my own research.

As one experimental demonstration, William Swenson (then my graduate student) created a population of microbial ecosystems by adding 1 ml of pond water from a single, well-mixed source to test tubes containing 29 ml of sterilized growth medium. This amount of pond water includes millions of microbes, so the initial variation among the test tubes, based on sampling error, was vanishingly small. Nevertheless, within four days (which amounts to many microbial generations) the test tubes varied greatly in their composition and phenotypic properties, such as the degradation of a toxic compound that was added to each test tube. Moreover, when the test tubes were selected on the basis of their properties to create a new generation of microbial ecosystems, there was a response to selection. We could select whole ecosystems for their phenotypic properties (in our case, to degrade a toxic compound), in exactly the same way that animal and plant breeders are accustomed to selecting individual organisms!

These results are mystifying in terms of models that assume simple interactions but make perfect sense in terms of complex interactions. Most people have heard about the famous "butterfly effect" whereby an infinitesimal change in initial conditions becomes amplified over the course of time for a complex physical system such as the weather. Something similar to the butterfly effect was occurring in our experiment, amplifying infinitesimal initial differences among our test tubes into substantial variation over time. A response to selection in the experiments is proof that variation caused by complex interactions can be heritable.
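The amplification at work here is easy to demonstrate with a toy model. The sketch below uses the logistic map, a standard chaotic system (not Swenson's actual ecosystems), with illustrative parameter values: two trajectories whose starting points differ by one part in ten billion end up macroscopically far apart.

```python
# Butterfly effect in a toy chaotic system: two trajectories of the
# logistic map that start an infinitesimal distance apart diverge
# to macroscopic separation within a few dozen iterations.

def logistic(x, r=3.9):
    """One step of the logistic map, which is chaotic at r = 3.9."""
    return r * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-10   # initial conditions differ by only 10^-10
max_gap = 0.0
for step in range(100):
    x, y = logistic(x), logistic(y)
    max_gap = max(max_gap, abs(x - y))

print(f"largest separation after 100 steps: {max_gap:.3f}")
```

The separation grows from 10^-10 to order one, just as infinitesimal differences among the test tubes grew into substantial variation over many microbial generations.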

Thanks in large part to complexity theory, evolutionary biologists are once again studying evolution as a multi-level process that can evolve adaptations above the level of individual organisms. I welcome this opportunity to credit Michael Gilpin for the original insight.

linda_s_gottfredson's picture
Project for the Study of Intelligence and Society

For an empiricist, science brings many surprises. It has continued to change my thinking about many phenomena by challenging my presumptions about them. Among the first of my assumptions to be felled by evidence was that career choice proceeds in adolescence by identifying one's most preferred options; it actually begins early in childhood as a taken-for-granted process of eliminating the least acceptable from further consideration. Another mistaken presumption was that different abilities would be important for performing well in different occupations. The notion that any single ability (e.g., IQ or g) could predict performance to an appreciable degree in all jobs seemed far-fetched the first time I heard it, but that's just what my own attempt to catalog the predictors of job performance would help confirm. My root error had been to assume that different cognitive abilities (verbal, quantitative, etc.) are independent—in today's terms, that there are "multiple intelligences." Empirical evidence says otherwise.     

The most difficult ideas to change are those which seem so obviously true that we can scarcely imagine otherwise until confronted with unambiguous disconfirmation. For example, even behavior geneticists had long presumed that non-genetic influences on intelligence and other human traits grow with age while genetic ones weaken. Evidence reveals the opposite for intelligence and perhaps other human traits as well: heritabilities actually increase with age. My attempt to explain the evolution of high human intelligence has also led me to question another such "obvious truth," namely, that human evolution ceased when man took control of his environment. I now suspect that precisely the opposite occurred. Here is why.

Human innovation itself may explain the rapid increase in human intelligence during the last 500,000 years. Although it has improved the average lot of mankind, innovation creates evolutionarily novel hazards that put the less intelligent members of a group at relatively greater risk of accidental injury and death. Consider the first and perhaps most important human innovation, the controlled use of fire. It is still a major cause of death worldwide, as are falls from man-made structures and injuries from tools, weapons, vehicles, and domesticated animals. Much of humankind has indeed escaped from its environment of evolutionary adaptation (EEA), but only by fabricating new and increasingly complicated physical ecologies. Brighter individuals are better able not only to extract the benefits of successive innovations, but also to avoid the novel threats to life and limb that they create. Unintentional injuries and deaths have such a large chance component and their causes are so varied that we tend to dismiss them as merely "accidental," as if they were uncontrollable. Yet all are to some extent preventable with foresight or effective response, which gives an edge to more intelligent individuals. Evolution requires only tiny such differences in odds of survival in order to ratchet up intelligence over thousands of generations. If human innovation fueled human evolution in the past, then it likely still does today.
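The ratchet is easy to put in numbers. The sketch below uses the standard one-locus haploid selection recursion with an invented selection coefficient; none of the figures come from the essay.

```python
# A tiny, consistent survival edge compounds over generations.
# Standard one-locus haploid selection: carriers of an allele
# survive at rate (1 + s) relative to non-carriers.
s = 0.001        # a 0.1% edge: invisible in any single generation
p = 0.01         # the allele starts rare, at 1% of the population

for generation in range(10_000):
    p = p * (1 + s) / (1 + p * s)   # allele frequency after selection

print(f"frequency after 10,000 generations: {p:.3f}")
```

A survival advantage far too small to detect within one generation carries the allele from 1% to near fixation over ten thousand generations.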

Another of my presumptions bit the dust, but in the process exposed a more fundamental, long-brewing challenge to my thinking about scientific explanation. At least in the social sciences, we seek big effects when predicting human behavior, whether we are trying to explain differences in happiness, job performance, depression, health, or income. "Effect size" (percentage of variance explained, standardized mean difference, etc.) has become our yardstick for judging the substantive importance of potential causes. Yet, while strong correlations between individuals' attributes and their fates may signal causal importance, small correlations do not necessarily signal unimportance.

Evolution provides an obvious example. Like the house in a gambling casino, evolution realizes big gains by playing small odds over myriad players and long stretches of time. The small-is-inconsequential presumption is so ingrained and reflexive, however, that even those of us who seek to explain the evolution of human intelligence over the eons have often rejected hypothesized mechanisms (say, superior hunting skills) when they could not explain differential survival or reproductive success within a single generation.

IQ tests provide a useful analogy for understanding the power of small but consistent effects. No single IQ test item measures intelligence well or has much predictive power. Yet, with enough items, one gets an excellent test of general intelligence (g) from only weakly g-loaded items. How? When test items are considered one by one, the role of chance dominates in determining who answers the item correctly. When test takers' responses to many such items are added together, however, the random effects tend to cancel each other out, and g's small contribution to all answers piles up. The result is a test that measures almost nothing but g.
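The aggregation argument can be checked with a small simulation. The item loadings, item count, and sample size below are invented for illustration only.

```python
import math
import random

random.seed(1)

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

n_people, n_items, loading = 2000, 100, 0.2
g = [random.gauss(0, 1) for _ in range(n_people)]

# Each item is answered correctly when a weak g signal plus a large
# dose of item-specific chance clears a threshold.
items = [[1 if loading * g[i] + random.gauss(0, 1) > 0 else 0
          for i in range(n_people)] for _ in range(n_items)]

item_rs = [pearson(item, g) for item in items]          # each is small
totals = [sum(item[i] for item in items) for i in range(n_people)]

print(f"mean single-item correlation with g: {sum(item_rs) / n_items:.2f}")
print(f"total-score correlation with g:      {pearson(totals, g):.2f}")
```

The chance components cancel out in the sum while the small g contribution accumulates, so the total score correlates far more strongly with g than any single item does.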

I have come to suspect that some of the most important forces shaping human populations work in this inconspicuous but inexorable manner. When seen operating in individual instances, their impact is so small as to seem inconsequential, yet their consistent impact over events or individuals produces marked effects. To take a specific example, only the calculus of small but consistent tendencies in health behavior over a lifetime seems likely to explain many demographic disparities in morbidity and mortality, not just accidental death.

Developing techniques to identify, trace, and quantify such influences will be a challenge. It currently bedevils behavior geneticists who, having failed to find any genes with substantial influence on intelligence (within the normal range of variation), are now formulating strategies to identify genes that may account for at most only 0.5% of the variance in intelligence.

geoffrey_miller's picture
Evolutionary psychologist, NYU Stern Business School and University of New Mexico; author of The Mating Mind and Spent

Guys lost on unfamiliar streets often avoid asking for directions from locals.  We try to tough it out with map and compass.  Admitting being lost feels like admitting stupidity.  This is a stereotype, but it has a large grain of truth.  It's also a good metaphor for a big overlooked problem in the human sciences. 

We're trying to find our way around the dark continent of human nature.  We scientists are being paid to be the bus-driving tour guides for the rest of humanity.  They expect us to know our way around the human mind, but we don't.

So we try to fake it, without asking the locals for directions.  We try to find our way from first principles of geography ('theory'), and from maps of our own making ('empirical research').  The roadside is crowded with locals, and their brains are crowded with local knowledge, but we are too arrogant and embarrassed to ask the way.  Besides, they look strange and might not speak our language.  So we drive around in circles, inventing and rejecting successive hypotheses about where to find the scenic vistas that would entertain and enlighten the tourists ('lay people', a.k.a. 'tax-payers').  Eventually, our bus-load starts grumbling about tour-guide rip-offs in boring countries.  We drive faster, make more frantic observations, and promise magnificent sights just around the next bend. 

I used to think that this was the best we could do as behavioural scientists.  I figured that the intricacies of human nature were not just dark, but depopulated — that a few exploratory novelists and artists had sought the sources of our cognitive Amazons and emotional Niles, but that nobody actually lived there.

Now, I've changed my mind — there are local experts about almost all aspects of human nature, and the human sciences should find their way by asking them for directions.  These 'locals' are the thousands or millions of bright professionals and practitioners in each of thousands of different occupations.  They are the people who went to our high schools and colleges, but who found careers with higher pay and shorter hours than academic science.  Almost all of them know important things about human nature that behavioural scientists have not yet described, much less understood.  Marine drill sergeants know a lot about aggression and dominance.  Master chess players know a lot about if-then reasoning.  Prostitutes know a lot about male sexual psychology.  School teachers know a lot about child development.  Trial lawyers know a lot about social influence.  The dark continent of human nature is already richly populated with autochthonous tribes, but we scientists don't bother to talk to these experts. 

My suggestion is that whenever we try to understand human nature in some domain, we should identify several groups of people who are likely to know a lot about that domain already, from personal, practical, or professional experience.  We should seek out the most intelligent, articulate, and experienced locals — the veteran workers, managers, and trainers. Then, we should talk with them, face-to-face, expert-to-expert, as collaborating peers, not as researchers 'running subjects' or 'interviewing informants'.  We may not be able to reimburse them at their professional hourly wage, but we can offer other forms of prestige, such as co-authorship on research papers.

For example, suppose a psychology Ph.D. student wants to study emotional adaptations, such as fear and panic, that evolved for avoiding predators. She learns about the existing research (mostly by Clark Barrett at UCLA), but doesn't have any great ideas for her dissertation research. The usual response is three years of depressed soul-searching, random speculation, and fruitless literature reviews. This phase of idea-generation could progress much more happily if she just picked up the telephone and called some of the people who spend their whole professional lives thinking about how to induce fear and panic. Anyone involved in horror movie production would be a good start: script-writers, monster designers, special effects technicians, directors, and editors. Other possibilities would include talking with:

  • Halloween mask designers,
  • horror genre novelists,
  • designers of 'first person shooter' computer games,
  • clinicians specializing in animal phobias and panic attacks,
  • Kruger Park safari guides,
  • circus lion-tamers,
  • dog-catchers,
  • bull-fighters,
  • survivors of wild animal attacks, and
  • zoo-keepers who interact with big cats, snakes, and raptors.

A few hours of chatting with such folks would probably be more valuable in sparking some dissertation ideas than months of library research. 

The division of labor generates wondrous prosperity, and an awesome diversity of knowledge about human nature in different occupations.  Psychology could continue trying to rediscover all this knowledge from scratch.  Or, it could learn some humility, and start listening to the real expertise about human nature already acquired by every bright worker in every factory, office, and mall.

daniel_kahneman's picture
Recipient, Nobel Prize in Economics, 2002; Eugene Higgins Professor of Psychology Emeritus, Princeton; Author, Thinking, Fast and Slow and Co-Author of Noise: A Flaw in Human Judgment

The central question for students of well-being is the extent to which people adapt to circumstances.  Ten years ago the generally accepted position was that there is considerable hedonic adaptation to life conditions. The effects of circumstances on life satisfaction appeared surprisingly small: the rich were only slightly more satisfied with their lives than the poor, the married were happier than the unmarried but not by much, and neither age nor moderately poor health diminished life satisfaction.  Evidence that people adapt — though not completely — to becoming paraplegic or winning the lottery supported the idea of a "hedonic treadmill": we move but we remain in place.  The famous "Easterlin paradox" seemed to nail it down:  Self-reported life satisfaction has changed very little in prosperous countries over the last fifty years, in spite of large increases in the standard of living.

Hedonic adaptation is a troubling concept, regardless of where you stand on the political spectrum.  If you believe that economic growth is the key to increased well-being, the Easterlin paradox is bad news.  If you are a compassionate liberal, the finding that the sick and the poor are not very miserable takes wind from your sails.   And if you hope to use a measure of well-being to guide social policy you need an index that will pick up permanent effects of good policies on the happiness of the population. 

About ten years ago I had an idea that seemed to solve these difficulties: perhaps people's satisfaction with their life is not the right measure of well-being.  The idea took shape in discussions with my wife Anne Treisman, who was (and remains) convinced that people are happier in California (or at least Northern California) than in most other places.  The evidence showed that Californians are not particularly satisfied with their life, but Anne was unimpressed.  She argued that Californians are accustomed to a pleasant life and come to expect more pleasure than the unfortunate residents of other states.  Because they have a high standard for what life should be, Californians are not more satisfied than others, although they are actually happier.  This idea included a treadmill, but it was not hedonic – it was an aspiration treadmill: happy people have high aspirations.  

The aspiration treadmill offered an appealing solution to the puzzles of adaptation: it suggested that measures of life satisfaction underestimate the well-being benefits of life circumstances such as income, marital status or living in California. The hope was that measures of experienced happiness would be more sensitive. I eventually assembled an interdisciplinary team to develop a measure of experienced happiness (Kahneman, Krueger, Schkade, Stone and Schwarz, 2004) and we set out to demonstrate the aspiration treadmill. Over several years we asked substantial samples of women to reconstruct a day of their life in detail. They indicated the feelings they had experienced during each episode, and we computed a measure of experienced happiness: the average quality of affective experience during the day. Our hypothesis was that differences in life circumstances would have more impact on this measure than on life satisfaction. We were so convinced that when we got our first batch of data, comparing teachers in top-rated schools to teachers in inferior schools, we actually misread the results as confirming our hypothesis. In fact, they showed the opposite: the groups of teachers differed more in their work satisfaction than in their affective experience at work. This was the first of many such findings: income, marital status and education all influence experienced happiness less than satisfaction, and we could show that the difference is not a statistical artifact. Measuring experienced happiness turned out to be interesting and useful, but not in the way we had expected. We had simply been wrong.

Experienced happiness, we learned, depends mainly on personality and on the hedonic value of the activities to which people allocate their time.  Life circumstances influence the allocation of time, and the hedonic outcome is often mixed: high-income women have more enjoyable activities than the poor, but they also spend more time engaged in work that they do not enjoy; married women spend less time alone, but more time doing tedious chores.  Conditions that make people satisfied with their life do not necessarily make them happy.  

Social scientists rarely change their minds, although they often adjust their position to accommodate inconvenient facts. But it is rare for a hypothesis to be so thoroughly falsified.  Merely adjusting my position would not do; although I still find the idea of an aspiration treadmill attractive, I had to give it up.

To compound the irony, recent findings from the Gallup World Poll raise doubts about the puzzle itself. The most dramatic result is that when the entire range of human living standards is considered, the effects of income on a measure of life satisfaction (the "ladder of life") are not small at all. We had thought income effects were small because we were looking within countries. The GDP differences between countries are enormous, and highly predictive of differences in life satisfaction. In a sample of over 130,000 people from 126 countries, the correlation between the life satisfaction of individuals and the GDP of the country in which they live was over .40 – an exceptionally high value in social science. Humans everywhere, from Norway to Sierra Leone, apparently evaluate their life by a common standard of material prosperity, which changes as GDP increases. The implied conclusion, that citizens of different countries do not adapt to their level of prosperity, flies in the face of everything we thought we knew ten years ago. We have been wrong and now we know it. I suppose this means that there is a science of well-being, even if we are not doing it very well.

kai_krause's picture
Software Pioneer; Philosopher; Author, A Realtime Literature Explorer

It is a charming concept that humans are in fact able "to change their mind" in the first place. Not that it necessarily implies a change for the better, but at least it does have that positive ring of supposing a Free Will to perform this feat at all. Better, in any case, to be the originator of the changing, rather than having it done to you, in the much less applaudable form of brain washing.

For me, in my own life as I passed the half-century mark, with almost exactly half the time spent in the US and the other half in Europe, in between circling the globe a few times, I can look back on what now seems like multiple lifetimes' worth of mind changing.

Here then is a brief point, musing about the field I spent 20 years in: Computer Software. And it is deeper than it may seem at first glance.

I used to think "Software Design" is an art form.

I now believe that I was half-right:
it is indeed an art, but it has a rather short half-life: 
Software is merely a performance art!

A momentary flash of brilliance, doomed to be overtaken by the next wave, or maybe even by its own sequel. Eaten alive by its successors. And time...

This is not to denigrate the genre of performance art: anamorphic sidewalk chalk drawings, Goldsworthy pebble piles or Norwegian carved-ice-hotels are admirable feats of human ingenuity, but they all share that ephemeral time limit: the first rain, wind or heat will dissolve the beauty, and the artist must be well aware of its fleeting glory.

For many years I have discussed this with friends who are writers, musicians and painters, and the simple truth emerged: one can still read the words, hear the music and look at the images....

Their value and their appeal remain, and in some cases even gain by familiarity: like a good wine, they can improve over time. You can hum a tune you once liked, years later. You can read words or look at a painting from 300 years ago and still appreciate its truth and beauty today, as if brand new. Software, by that comparison, is more like soufflé: enjoy it now, today, for tomorrow it has already collapsed on itself. Soufflé 1.1 is the thing to have, Version 2.0 is on the horizon.

It is a simple fact: hardly any of my software even still runs at all!

Back in 1982 I started with a high-school buddy in a garage in the Hollywood hills. With ludicrous limitations we conjured up dreams: three-dimensional charting, displaying sound as time-slice mountains of frequency spectrum data, annotated with perspective lettering... and all that in 32K of RAM on a 0.2 MHz processor. And we did it... a few years later it fed 30 people.

The next level of dreaming up new frontiers with a talented tight team was complex algorithms for generating fractals, smooth color gradients, multi-layer bump-mapped textures and dozens of image filters, realtime liquid image effects, and on and on... and that too worked, and this time fed over 300 people. Fifteen products sold many millions of copies - and a few of them still persist to this day, in version 9 or 10 or 11... but for me, I realized, I no longer see myself as a software designer - I changed my mind.

Today, if you have a very large task at hand, one that you calculate might take two years or three... it has actually become cheaper to wait for a couple of generation changes in the hardware and do the whole thing then - ten times faster.

In other words: sit by the beach with umbrella drinks for 15 months and then finish it all at once with some weird Beowulf cluster of machinery and still beat the original team by leaps and bounds. At the start, all we were given was the starting address in RAM where video memory began, and a POKE to FC001101 would put a dot on the screen. Just one dot.

Then you figured out how to draw a line. How to connect lines into polygons. How to fill those with patterns. All on a screen of 192x128 (which is now just "an icon").

Uphill in the snow, both ways.

Now the GPUs are blasting billions of pixels per second and all they will ask is "does it blend?" Pico, Femto, Atto, Zepto, Yocto cycles stored in Giga, Tera, Peta, Exa, Zetta, Yotta cells.

I rest my case about those umbrella drinks.

Do I really just drop all technology and leave computing? Nahh. QuadHD screens are just around the corner, and as a tool for words, music and images there are fantastic new horizons for me. I am more engaged in it all than ever — alas: the actual coding and designing itself is no longer where I see my contribution. But the point is deeper than just one man's path:

The new role of technology is a serious philosophical point in the long-range outlook for mankind. Most decision makers worldwide, affecting the entire planet, are technophobes, luddites and noobs beyond belief. They have no vision for the potential, nor proper respect for the risks, nor simple estimation of the immediate value for quality of life that technology could bring.
Maybe one can change their mind?

I remembered that I once wrote something about this very topic... 
and I found it: 

I changed my mind mostly about changing my mind: 
I used to be all for 'being against it', 
then I was all against 'being for it', 
until I realized: that's the same thing....never mind. 
It's a 'limerickety' little thing from some keynote 12 years ago, but...see.... it still runs : )

george_johnson's picture
Author; The Cancer Chronicles, The Ten Most Beautiful Experiments; Columnist, The New York Times

I used to think that the most fascinating thing about physics was theory — and that the best was still to come. But as physics has grown vanishingly abstract I've been drawn in the opposite direction, to the great experiments of the past.

First I determined to show myself that electrons really exist. Firing up a beautiful old apparatus I found on eBay — a bulbous vacuum tube big as a melon mounted between two coils — I replayed J. J. Thomson's famous experiment of 1897 in which he measured the charge-to-mass ratio of an electron beam. It was thrilling to see the bluish-green cathode ray dive into a circle as I energized the electromagnets. Even better, when I measured the curve and plugged all the numbers into Thomson's equation, my answer was off by only a factor of two. Pretty good for a journalist. I had less success with the stubborn Millikan oil-drop experiment. Mastering it, I concluded, would be like learning to play the violin.
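For the curious, here is the arithmetic behind "plugging all the numbers into Thomson's equation." The bench readings below are invented, since the essay gives none; the formula e/m = 2V/(B²r²) is the standard teaching-lab form of Thomson's result.

```python
# Thomson-style charge-to-mass estimate: accelerate electrons through
# a potential V, then bend the beam into a circle of radius r with a
# magnetic field B.  From eV = (1/2) m v^2 and r = m v / (e B):
#     e/m = 2 V / (B^2 r^2)
V = 250.0       # accelerating voltage (volts)  -- illustrative reading
B = 1.07e-3     # magnetic field (tesla)        -- illustrative reading
r = 0.05        # beam radius (metres)          -- illustrative reading

e_over_m = 2 * V / (B**2 * r**2)
accepted = 1.759e11  # accepted value for the electron, C/kg

print(f"measured e/m: {e_over_m:.3e} C/kg")
print(f"off by a factor of "
      f"{max(e_over_m, accepted) / min(e_over_m, accepted):.2f}")
```

With careless readings of B and r, being off by only a factor of two is indeed a respectable result: the error in r enters squared.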

Electricity in the raw is as mysterious as superstrings. I turn down the lights and make my Geissler tubes glow with the touch of a high-voltage wand energized by a brass-and-mahogany Ruhmkorff coil. I coax the ectoplasmic rays in my de la Rive tube to rotate around a magnetized pole.

Maybe in a year or two, the Large Hadron Collider will make this century's physics interesting again. Meanwhile, as soon as I find a nice spinthariscope, I'm ready to go nuclear.

nassim_nicholas_taleb's picture
Distinguished Professor of Risk Engineering, New York University School of Engineering; Author, Incerto (Antifragile, The Black Swan...)

I spent a long time believing in the centrality of probability in life and advocating that we should express everything in terms of degrees of credence, with unitary probabilities as a special case for total certainties, and null for total implausibility. Critical thinking, knowledge, beliefs, everything needed to be probabilized. Until I came to realize, twelve years ago, that I was wrong in this notion that the calculus of probability could be a guide to life and help society. Indeed, it is only in very rare circumstances that probability (by itself) is a guide to decision making. It is a clumsy academic construction, extremely artificial, and nonobservable. Probability is backed out of decisions; it is not a construct to be handled in a standalone way in real-life decision-making. It has caused harm in many fields.

Consider the following statement. "I think that this book is going to be a flop. But I would be very happy to publish it." Is the statement incoherent? Of course not: even if the book was very likely to be a flop, it may make economic sense to publish it (for someone with deep pockets and the right appetite) since one cannot ignore the small possibility of a handsome windfall, or the even smaller possibility of a huge windfall. We can easily see that when it comes to small odds, decision making no longer depends on the probability alone. It is the pair probability times payoff (or a series of payoffs), the expectation, that matters. On occasion, the potential payoff can be so vast that it dwarfs the probability — and these are usually real world situations in which probability is not computable.
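The publishing example can be made concrete. The numbers below are purely illustrative (they appear nowhere in the text): even at 90% odds of a flop, the small chance of a windfall can make the expectation positive.

```python
# Hypothetical payoffs for publishing a book (illustrative numbers only):
# 90% chance it flops and loses $50k, 9% chance of a modest $200k profit,
# 1% chance of a $5M windfall.
outcomes = [(-50_000, 0.90), (200_000, 0.09), (5_000_000, 0.01)]

p_flop = sum(p for payoff, p in outcomes if payoff < 0)
expectation = sum(payoff * p for payoff, p in outcomes)

print(p_flop)       # 0.9 -- the book is very likely a flop...
print(expectation)  # 23000.0 -- ...yet publishing has positive expected value
```

The decision flips on the pair probability-times-payoff, never on the 90% alone.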

Consequently, there is a difference between knowledge and action. You cannot naively rely on scientific statistical knowledge (as they define it) or what the epistemologists call "justified true belief" for non-textbook decisions. Statistically oriented modern science is typically based on Right/Wrong with a set confidence level, stripped of consequences. Would you take a headache pill if it was deemed effective at a 95% confidence level? Most certainly. But would you take the pill if it is established that it is "not lethal" at a 95% confidence level? I hope not.

When I discuss the impact of the highly improbable ("black swans"), people make the automatic mistake of thinking that the message is that these "black swans" are necessarily more probable than assumed by conventional methods. They are mostly less probable. Consider that, in a winner-take-all environment such as the one in the arts, the odds of success are low, since there are fewer successful people, but the payoff is disproportionately high. So, in a fat tailed environment, what I call "Extremistan", rare events are less frequent (their probability is lower), but they are so effective that their contribution to the total pie is more substantial.

[Technical note: the distinction is, simply, between raw probability, P[x>K], i.e. the probability of exceeding K, and E[x|x>K], the expectation of x conditional on x>K. It is the difference between the zeroth moment and the first moment. The latter is what usually matters for decisions. And it is the (conditional) first moment that needs to be the core of decision making. What I saw in 1995 was that an out-of-the-money option value increases when the probability of the event decreases, making me feel that everything I thought until then was wrong.]

What causes severe mistakes is that, outside the special cases of casinos and lotteries, you almost never face a single probability with a single (and known) payoff. You may face, say, a 5% probability of an earthquake of magnitude 3 or higher, a 2% probability of one of 4 or higher, etc. The same with wars: you have a risk of different levels of damage, each with a different probability. "What is the probability of war?" is a meaningless question for risk assessment.

So it is wrong to just look at a single probability of a single event in cases of richer possibilities (like focusing on such questions as "what is the probability of losing a million dollars?" while ignoring that, conditional on losing more than a million dollars, you may have an expected loss of twenty million, one hundred million, or just one million). Once again, real life is not a casino with simple bets. This is the error that helps the banking system go bust with astonishing regularity — I've shown that institutions exposed to negative black swans, like banks and some classes of insurance ventures, have almost never been profitable over long periods. The problem with the current, illustrative subprime mess is not so much that the "quants" and other pseudo-experts in bank risk management were wrong about the probabilities (they were), but that they were severely wrong about the different layers of depth of potential negative outcomes. For instance, Morgan Stanley lost about ten billion dollars (so far) while allegedly having foreseen a subprime crisis and executed hedges against it — they just did not realize how deep it would go and had open exposure to the big tail risks. This is routine: a friend who went bust during the crash of 1987 told me: "I was betting that it would happen but I did not know it would go that far".

The point is mathematically simple but does not register easily. I've enjoyed giving math students the following quiz (to be answered intuitively, on the spot). In a Gaussian world, the probability of exceeding one standard deviation is ~16%. What are the odds of exceeding it under a distribution with fatter tails (with the same mean and variance)? The right answer: lower, not higher — the number of deviations drops, but the few that take place matter more. It was entertaining to see that most of the graduate students got it wrong. Those who are untrained in the calculus of probability have a far better intuition of these matters.
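The quiz can be checked numerically. The sketch below uses a Student-t with 3 degrees of freedom, rescaled to unit variance, as the fat-tailed stand-in; that distribution is my choice for illustration, not one specified in the text.

```python
import math
import random

random.seed(42)
N = 200_000

def student_t(df):
    """Standard Student-t sample: N(0,1) / sqrt(chi-squared_df / df)."""
    chi2 = sum(random.gauss(0, 1) ** 2 for _ in range(df))
    return random.gauss(0, 1) / math.sqrt(chi2 / df)

df = 3
scale = math.sqrt((df - 2) / df)  # t(3) has variance df/(df-2) = 3; rescale to 1

gauss = [random.gauss(0, 1) for _ in range(N)]
fat = [student_t(df) * scale for _ in range(N)]  # fat tails, same mean and variance

# Zeroth moment: probability of exceeding one standard deviation
p_gauss = sum(x > 1 for x in gauss) / N  # ~0.16, the familiar Gaussian answer
p_fat = sum(x > 1 for x in fat) / N      # LOWER (~0.09): fewer exceedances

# First moment: average size of an exceedance, E[x | x > 1]
e_gauss = sum(x for x in gauss if x > 1) / sum(x > 1 for x in gauss)
e_fat = sum(x for x in fat if x > 1) / sum(x > 1 for x in fat)  # larger

print(f"P[x>1]: gauss={p_gauss:.3f} fat={p_fat:.3f}")
print(f"E[x|x>1]: gauss={e_gauss:.2f} fat={e_fat:.2f}")
```

The fat-tailed sample crosses one standard deviation less often, but its crossings are larger on average: the zeroth moment falls while the conditional first moment rises, which is exactly the quiz's answer.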

Another complication is that, just as probability and payoff are inseparable, one cannot extract another complicated component, utility, from the decision-making equation. Fortunately the ancients, with all their tricks and accumulated wisdom in decision-making, knew a lot of that, at least better than modern-day probability theorists. Let us stop systematically treating them as idiots. Most texts blame the ancients for their ignorance of the calculus of probability — the Babylonians, Egyptians, and Romans in spite of their engineering sophistication, and the Arabs, in spite of their taste for mathematics, were blamed for not having produced a calculus of probability (the latter being, incidentally, a myth, since Umayyad scholars used relative word frequencies to determine authorships of holy texts and decrypt messages). The reason was foolishly attributed to theology, lack of sophistication, lack of something people call the "scientific method", or belief in fate. The ancients just made decisions in a more ecologically sophisticated manner than modern epistemology-minded people. They integrated skeptical Pyrrhonian empiricism into decision making. As I said, belief (i.e., epistemology) and action (i.e., decision-making), as they are practiced, are largely not consistent with one another.

Let us apply the point to the current debate on carbon emissions and climate change. Correspondents keep asking me whether the climate worriers are basing their claims on shoddy science, and whether, owing to nonlinearities, their forecasts are marred by such potential error that we should ignore them. Now, even if I agreed that it were shoddy science; even if I agreed with the statement that the climate folks were most probably wrong, I would still opt for the most ecologically conservative stance — leave planet earth the way we found it. Consider the consequences of the very remote possibility that they may be right, or, worse, the even more remote possibility that they may be extremely right.

clay_shirky's picture
Social & Technology Network Topology Researcher; Adjunct Professor, NYU Graduate School of Interactive Telecommunications Program (ITP); Author, Cognitive Surplus

I was a science geek with a religious upbringing, an Episcopalian upbringing, to be precise, which is pretty weak tea as far as pious fervor goes. Raised in this tradition I learned, without ever being explicitly taught, that religion and science were compatible. My people had no truck with Young Earth Creationism or anti-evolutionary cant, thank you very much, and if some people's views clashed with scientific discovery, well, that was their fault for being so fundamentalist.

Since we couldn't rely on the literal truth of the Bible, we needed a fallback position to guide our views on religion and science. That position was what I'll call the Doctrine of Joint Belief: "Noted Scientist X has accepted Jesus as Lord and Savior. Therefore, religion and science are compatible." (Substitute deity to taste.) You can still see this argument today, where the beliefs of Francis Collins or Freeman Dyson, both accomplished scientists, are held up as evidence of such compatibility.

Belief in compatibility is different from belief in God. Even after I stopped believing, I thought religious dogma, though incorrect, was not directly incompatible with science (a view sketched out by Stephen Gould as "non-overlapping magisteria".)  I've now changed my mind, for the obvious reason: I was wrong. The idea that religious scientists prove that religion and science are compatible is ridiculous, and I'm embarrassed that I ever believed it. Having believed for so long, however, I understand its attraction, and its fatal weaknesses.

The Doctrine of Joint Belief isn't evidence of harmony between two systems of thought. It simply offers permission to ignore the clash between them. Skeptics aren't convinced by the doctrine, unsurprisingly, because it offers no testable proposition. What is surprising is that its supposed adherents don't believe it either. If joint beliefs were compatible beliefs, there could be no such thing as heresy. Christianity would be compatible not just with science, but with astrology (roughly as many Americans believe in astrology as evolution), with racism (because of the number of churches that use the "Curse of Ham" to justify racial segregation), and on through the list of every pair of beliefs held by practicing Christians.

To get around this, one could declare that, for some arbitrary reason, the co-existence of beliefs is relevant only to questions of religion and science, but not to astrology or anything else. Such a stricture doesn't strengthen the argument, however, because an appeal to the particular religious beliefs of scientists means having to explain why the majority of them are atheists. (See the 1998 Larson and Witham study for the numbers.) Picking out the minority who aren't atheists and holding only them up as exemplars, is simply special pleading (not to mention lousy statistics.)

The works that changed my mind about compatibility were Pascal Boyer's Religion Explained, and Scott Atran's In Gods We Trust, which lay out the ways religious belief is a special kind of thought, incompatible with the kind of skepticism that makes science work. In Boyer and Atran's views, religious thought doesn't simply happen to be false — being false is the point, the thing that makes belief both memorable and effective. Psychologically, we overcommit to the ascription of agency, even when dealing with random events (confirmation can be had in any casino). Belief in God rides in on that mental eagerness, in the same way optical illusions ride in on our tendency to overinterpret ambiguous visual cues. Sociologically, the adherence to what Atran diplomatically calls 'counter-factual beliefs' serves both to create and advertise in-group commitment among adherents. Anybody can believe in things that are true, but it takes a lot of coordinated effort to get people to believe in virgin birth or resurrection of the dead.

We are early in one of the periodic paroxysms of conflict between faith and evidence. I suspect this conflict will restructure society, as after Galileo, rather than leading to a quick truce, as after Scopes, not least because the global tribe of atheists now have a medium in which they can discover one another and refine and communicate their message.

One of the key battles is to insist on the incompatibility of beliefs based on evidence and beliefs that ignore evidence. Saying that the mental lives of a Francis Collins or a Freeman Dyson prove that religion and science are compatible is like saying that the sex lives of Bill Clinton or Ted Haggard prove that marriage and adultery are compatible. The people we need to watch out for in this part of the debate aren't the fundamentalists, they're the moderates, the ones who think that if religious belief is made metaphorical enough,  incompatibility with science can be waved away. It can't be, and we need to say so, especially to the people like me, before I changed my mind.

stephon_h_alexander's picture
Professor of Physics at Brown University; Author, The Jazz of Physics

Before I entered the intellectual funnel of graduate school, I used to cook up thought experiments to explain coincidences, such as running into a person immediately after a random thought of them. This secretive thinking was good mental entertainment, but the demands of forging a serious career in physics research forced me to make peace with such wild speculations. In my theory of coincidences, non-local interactions as well as a dark form of energy were necessary; absolute science fiction! Fifteen years later, we now have overwhelming evidence of a 'fifth force' mediating an invisible substance that the community has dubbed 'dark energy'. In hindsight, it is no coincidence that I have changed my mind: nature, I now believe, is non-local.

Non-local correlations are not part of our common experience, and so they are difficult both to imagine and to accept. Often, research in theoretical physics encourages me to keep an open mind, and not to get too attached to ideas that I am deluded into thinking should be correct. While this has been a constant struggle throughout my scientific career thus far, I have experienced the value of this weaning from theoretical ideology. After years of wrestling with some of the outstanding problems in elementary particle physics and cosmology, I have been forced to change my mind about a predisposition silently passed on to me by my physics predecessors: that the laws of physics are, for the most part, local.

During my first year in graduate school, I came across the famous Einstein, Podolsky and Rosen (EPR) thought experiment, which succinctly argues for the inevitability of 'spooky action at a distance' in quantum mechanics. Then came the Aspect experiment, which measured the non-local entanglement of photon polarizations, confirming EPR's expectation that there exist non-local correlations in nature enabled by quantum mechanics (with a caveat, of course). This piece of knowledge had a short life in my education and research career.

Non-locality exited the door of my brain once and for all after I approached one of my professors, an accomplished quantum field theorist. He convinced me that non-locality goes away once quantum mechanics properly incorporates causality, through a unification with special relativity; i.e., the theory known as quantum field theory. With the promise of a sounder career path, I welcomed these then-comforting words and attempted to master quantum field theory. Besides, even if non-locality happens, such processes are exceptional events created under special conditions, while most physics is completely local. Quantum field theory works, and it became my new religion. I have since remained on this comfortable path.

Now that I specialize in the physics of the early universe, I have witnessed first-hand the great predictive power and precise explanatory reach of Einstein's general relativity, married with quantum field theory, in explaining both the complete history and the physical mechanism for the origin of structure in the universe, all in a seemingly local and causal fashion. We call this paradigm cosmic inflation, and it is deceptively simple. The universe started out, immediately after the big bang, as a microscopically tiny piece of space, then inflated 'faster than the speed of light'. Inflation is able to explain the entire complexity of our observed universe with the economy of a few equations involving general relativity and quantum field theory.

Despite its great success, inflation has been plagued with conceptual and technical problems. These problems created thesis projects and, inevitably, jobs for a number of young theorists like myself. Time after time, publication after publication, like rats on a wheel, we run out of steam as the problems of inflation simply reappear in some other form. I have now convinced myself that the problems associated with inflation won't go away unless we somehow include non-locality.

Ironically, inflation gets ignited by the same form of dark energy that we see permeating the fabric of the cosmos today, except in much greater abundance fourteen billion years ago. Where did most of the dark energy go after inflation ended? Why is some of it still around? Is this omnipresent dark energy the culprit behind non-local activity in physical processes? I don't know exactly how non-locality in cosmology will play itself out, but by its very nature, the physics underlying it will affect 'local' processes. I still haven't changed my mind on coincidences, though.

w_daniel_hillis's picture
Physicist, Computer Scientist, Co-Founder, Applied Invention.; Author, The Pattern on the Stone

As a child, I was told that hot water freezes faster than cold water. This was easy to refute in principle, so I did not believe it.

Many years later I learned that Aristotle had described the effect in his Meteorologica,

"The fact that the water has previously been warmed contributes to its freezing quickly: for so it cools sooner. Hence many people, when they want to cool hot water quickly, begin by putting it in the sun. So the inhabitants of Pontus when they encamp on the ice to fish (they cut a hole in the ice and then fish) pour warm water round their reeds that it may freeze the quicker, for they use the ice like lead to fix the reeds. " (E. W. Webster translation)

I was impressed, as always, by Aristotle's clarity, confidence and specificity. Of course, I do not expect you to be convinced that it is true simply because Aristotle said so, especially since his explanation is that "warm and cold react upon one another by recoil." (Aristotle, like us, was very good at making up explanations to justify his beliefs.) Instead, I hope that you will have the pleasure of being convinced, as I was, by trying the experiment yourself.

denis_dutton's picture
Philosopher; Founder and Editor, Arts & Letters Daily; Author, The Art Instinct

The appeal of Darwin's theory of evolution — and the horror of it, for some theists — is that it expunges from biology the concept of purpose, of teleology, thereby converting biology into a mechanistic, canonical science. In this respect, the author of The Origin of Species may be said to be the combined Copernicus, Galileo, and Kepler of biology. Just as these astronomers gave us a view of the heavens in which no angels were required to propel the planets in their orbs and the earth was no longer the center of the celestial system, so Darwin showed that no God was needed to design the spider's intricate web and that man is in truth but another animal.

That's how the standard story goes, and it is pretty much what I used to believe, until I read Darwin's later book, his treatise on the evolution of the mental life of animals, including the human species: The Descent of Man. This is the work in which Darwin introduces one of the most powerful ideas in the study of human nature, one that can explain why the capacities of the human mind so extravagantly exceed what would have been required for hunter-gatherer survival on the Pleistocene savannahs. The idea is sexual selection, the process by which men and women in the Pleistocene chose mates according to varied physical and mental attributes, and in so doing "built" the human mind and body as we know it.

In Darwin's account, human sexual selection comes out looking like a kind of domestication. Just as human beings domesticated dogs and alpacas, roses and cabbages, through selective breeding, they also domesticated themselves as a species through the long process of mate selection. Describing sexual selection as human self-domestication should not seem strange. Every direct prehistoric ancestor of every person alive today at times faced critical survival choices: whether to run or hold ground against a predator, which road to take toward a green valley, whether to slake an intense thirst by drinking from some brackish pool. These choices were frequently instantaneous and intuitive and, needless to say, our direct ancestors were the ones with the better intuitions.

However, there was another kind of crucial intuitive choice faced by our ancestors: whether to choose this man or that woman as a mate with whom to rear children and share a life of mutual support. It is inconceivable that decisions of such emotional intimacy and magnitude were not made with an eye toward the character of the prospective mate, and that these decisions did not therefore figure in the evolution of the human personality — with its tastes, values, and interests.  Our actual direct ancestors, male and female, were the ones who were chosen by each other.

Darwin's theory of sexual selection has disquieted and irritated many otherwise sympathetic evolutionary theorists because, I suspect, it allows purposes and intentions back into evolution through an unlocked side door. The slogan memorized by generations of students of natural selection is random mutation and selective retention. The "retention" in natural selection is strictly non-teleological, a matter of brute, physical survival. The retention process of sexual selection, however, is with human beings in large measure purposive and intentional. We may puzzle about whether, say, peahens have "purposes" in selecting peacocks with the largest tails. But other animals aside, it is absolutely clear that with the human race, sexual selection describes a revived evolutionary teleology. Though it is directed toward other human beings, it is as purposive as the domestication of those wolf descendants that became familiar household pets.

Every Pleistocene man who chose to bed, protect, and provision a woman because she struck him as, say, witty and healthy, and because her eyes lit up in the presence of children, along with every woman who chose a man because of his hunting skills, fine sense of humor, and generosity, was making a rational, intentional choice that in the end built much of the human personality as we now know it.

Darwinian evolution is therefore structured across a continuum. At one end are purely natural selective processes that give us, for instance, the internal organs and the autonomic processes that regulate our bodies. At the other end are rational decisions — adaptive and species-altering across tens of thousands of generations in prehistoric epochs.  It is at this end of the continuum, where rational choice and innate intuitions can overlap and reinforce one another, that we find important adaptations that are relevant to understanding the human personality, including the innate value systems implicit in morality, sociality, politics, religion, and the arts. Prehistoric choices honed the human virtues as we now know them: the admiration of altruism, skill, strength, intelligence, industriousness, courage, imagination, eloquence, diligence, kindness, and so forth.

The revelations of Darwin's later work — beautifully explicated as well in books by Helena Cronin, Amotz and Avishag Zahavi, and Geoffrey Miller — have completely altered my thinking about the development of culture. It is not just survival in a natural environment that has made human beings what they are. In terms of our personalities we are, strange to say, a self-made species. For me this is a genuine revelation, as it puts in a new genetic light many human values that have hitherto been regarded as purely cultural.

beatrice_golomb's picture
Professor of Medicine at UCSD

Rather than choose a personal example of a change in mind, I reflect on instances in which my field, medicine, has apparently changed "its" mind based on changes in evidence. In my experience major reversals in belief (as opposed to simply progressions, or changes in course) typically arise when there are serious flaws in evaluation of evidence or inference leading to the old view, the new view, or both.

To be committed to a view based on facts, and later find the view wrong, either the facts had to be wrong or the interpretation of them had to extend beyond what the facts actually implied. The "facts" can be wrong in a range of settings: academic fraud, selective documentation of research methods, and selective publication of favorable results — among many. But in my experience more often it is the interpretation of the facts that is amiss.

Hormone replacement therapy ("HRT") is a case in point. You may recall HRT was widely hailed as slashing heart disease and dementia risk in women. After all, in observational studies — with large samples — women who took HRT had lower rates of heart disease and Alzheimer's than women who did not.

I was not among those advising patients that HRT had the benefits alleged. Women who took HRT (indeed any preventive medication) differed from those who did not. These differences include characteristics that might be expected to produce the appearance of a protective association, through "confounding." For instance, people who get preventive health measures have better education and higher income — which predict less dementia and better health, irrespective of any actual effect of the treatment. (Efforts to adjust for such factors can never be trusted to capture differences sufficiently.) These disparities made it impossible to infer from such "observational" data alone whether the "true" causal relationship of hormone replacement to brain function and heart events was favorable, neutral, or adverse.

When controlled trials were finally conducted — with random allocation to HRT or placebo ensuring that the compared groups were actually otherwise similar — HRT was found instead to increase rates of heart-related events and dementia. But the lessons from that experience have not been well learned, and new, similarly flawed "findings" continue to be published — without suitable caveats.
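The confounding mechanism described above is easy to demonstrate in a toy simulation. All numbers are hypothetical: the drug has no true effect, but a "healthy" confounder raises both the chance of taking the drug and the chance of a good outcome, so the observational comparison manufactures a benefit that randomization erases.

```python
import random

random.seed(0)
N = 100_000

def falls_ill(healthy):
    # Disease risk depends ONLY on the confounder; the drug has NO true effect.
    return random.random() < (0.10 if healthy else 0.20)

def study(randomized):
    events = {True: 0, False: 0}
    counts = {True: 0, False: 0}
    for _ in range(N):
        healthy = random.random() < 0.5
        if randomized:
            treated = random.random() < 0.5  # coin-flip allocation
        else:
            # Self-selection: healthier people take the drug more often
            treated = random.random() < (0.8 if healthy else 0.2)
        counts[treated] += 1
        events[treated] += falls_ill(healthy)
    return events[True] / counts[True], events[False] / counts[False]

obs_t, obs_c = study(randomized=False)
rct_t, rct_c = study(randomized=True)

print(f"observational: treated {obs_t:.3f} vs control {obs_c:.3f}")  # spurious 'benefit'
print(f"randomized:    treated {rct_t:.3f} vs control {rct_c:.3f}")  # no difference
```

The observational arm shows the treated group doing markedly better even though the drug does nothing; the randomized arms come out essentially identical.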

It is tempting to provide a raft of other examples in which a range of errors in reasoning from evidence were present, recognition of which should have curbed enthusiasm for a conclusion, but I will spare the reader.

Stunningly, there is little mandated training in evaluation of evidence and inference in medical school — nor indeed in graduate school in the sciences. (Nor are the medical practice guidelines on which your care is grounded generated by people chosen for this expertise.)

Even available elective coursework is piecemeal. Statistics and probability courses each cover some domains of relevance, such as study "power," or distinguishing posterior from a priori probabilities: it may be that, when a wife who was beaten is murdered, the spouse is commonly the culprit (a posteriori), even though it is uncommon for wife beaters to murder their spouses (a priori). Epidemiology-methods courses address confounding and many species of bias. Yet instruction in logical fallacies, for instance, was absent completely from the available course armamentarium. Each of these domains, and others, is critical to sound reasoning from evidence.
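The beaten-wife example is Bayes' rule in miniature. With purely illustrative rates (my numbers, chosen only to show the shape of the calculation):

```python
# Hypothetical rates, in a given year:
p_batterer_murders = 1 / 2500    # a priori: wife-beaters who go on to murder (rare)
p_murdered_by_other = 1 / 20000  # battered woman murdered by someone else (rarer)

# A posteriori: GIVEN that a battered woman was murdered, probability the
# batterer did it -- Bayes' rule over the two competing causes:
p_spouse_guilty = p_batterer_murders / (p_batterer_murders + p_murdered_by_other)
print(round(p_spouse_guilty, 2))  # 0.89 -- common, though batterers rarely murder
```

The a priori rarity of the event does not contradict the high a posteriori probability, which is exactly the distinction the coursework covers only piecemeal.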

Preventive treatments should mandate a high standard of evidence. But throughout the "real world" decisions are required despite incomplete evidence. At a level of information that should not propel a strong evidence-driven "belief," a decision may nonetheless be called for: How to allocate limited resources; whom to vote for; and whether to launch a program, incentive, or law. Even in these domains, where the luxury of tightly controlled convergent evidence is unattainable — or perhaps especially in these domains — understanding which evidence has what implications remains key, and may propel better decisions, and restrain unintended consequences.

The haphazard approach to training in reasoning from evidence in our society is, in its way, astounding — and merits a call to action. Better reasoning from evidence is substantially teachable but seldom directly taught, much less required. It should be central to curriculum requirements, at both graduate (advanced) and undergraduate (basic) levels — if not sooner.

After all, sound reasoning from evidence is (or ought to be) fundamental for persons in each arena in which decisions should be rendered, or inferences drawn, on evidence: not just doctors and scientists, but journalists, policy makers — and indeed citizens, whose determinations affect not solely their own lives but others', each time they parent, serve on juries, and vote.

david_goodhart's picture
Founder and Editor of Prospect magazine

Disdain for the nation state was a central part of liberal baby-boomer common sense when I was growing up, especially if you came from a (still) dominant country like Britain. Moreover, to show any sense of national feeling — apart from contempt for your national traditions — was a sign that you lacked political sophistication.

I now believe this is mainly nonsense. Nationalism can, of course, be a destructive force and we were growing up in the shadow of its 19th and 20th century excesses. In reaction to that most of the civilized world had, by the mid 20th century, signed up to a liberal universalism (as embodied in the UN charter) that stressed the moral equality of all humans. I am happy to sign up to that too, of course, but I now no longer see that commitment as necessarily conflicting with belief in the nation state. Indeed I think many anti-national liberals make a sort of category error — belief in the moral equality of all humans does not mean that we have the same obligations to all humans. Membership of the political community of a modern nation state places quite onerous duties on us to obey laws and pay taxes, but also grants us many rights and freedoms — and they make our fellow citizens politically "special" to us in a way that citizens of other countries are not. This "specialness" of national citizenship is most vividly illustrated in the factoid that every year in Britain we spend 25 times more on the National Health Service than we do on development aid.

Moreover if the nation state can be a destructive force it is also at the root of what many liberals hold dear: representative democracy, accountability, the welfare state, redistribution of wealth and the very idea of equal citizenship. None of these things have worked to any significant extent beyond the confines of the nation state, which is not to say that they couldn't at some point in the future (indeed they already do so to a small extent in the EU). If you look around at the daily news — contested elections in Kenya, death in Pakistan — most of the bad news these days comes from too little nation state not too much. And why was rapid economic development possible in the Asian tigers but not in Africa? Surely the existence of well functioning nation states and a strong sense of national solidarity in the tigers had something to do with it.

And in rich western countries as other forms of human solidarity — social class, religion, ethnicity and so on — have been replaced by individualism and narrower group identities, holding on to some sense of national solidarity remains more important than ever to the good society. A feeling of empathy towards strangers who are fellow citizens (and with whom one shares history, institutions and social and political obligations) underpins successful modern states, but this need not be a feeling that stands in the way of empathy towards all humans. It just remains true that charity begins at home.

jamshed_bharucha's picture
Psychologist; President Emeritus, Cooper Union

I used to believe that a paramount purpose of a liberal education was threefold:

1) Stretch your mind, reach beyond your preconceptions; learn to think of things in ways you have never thought before.

2) Acquire tools with which to critically examine and evaluate new ideas, including your own cherished ones.

3) Settle eventually on a framework or set of frameworks that organize what you know and believe and that guide your life as an individual and a leader.

I still believe #1 and #2. I have changed my mind about #3. I now believe in a new version of #3, which replaces the above with the following:

a) Learn new frameworks, and be guided by them.

b) But never get so comfortable as to believe that your frameworks are the final word, recognizing the strong psychological tendencies that favor sticking to your worldview. Learn to keep stretching your mind, keep stepping outside your comfort zone, keep venturing beyond the familiar, keep trying to put yourself in the shoes of others whose frameworks or cultures are alien to you, and have an open mind to different ways of parsing the world. Before you critique a new idea, or another culture, master it to the point at which its proponents or members recognize that you get it.

Settling into a framework is easy. The brain is built to perceive the world through structured lenses — cognitive scaffolds on which we hang our knowledge and belief systems.

Stretching your mind is hard. Once we've settled on a worldview that suits us, we tend to hold on. New information is bent to fit, information that doesn't fit is discounted, and new views are resisted.

By 'framework' I mean any one of a range of conceptual or belief systems — either explicitly articulated or implicitly followed. These include narratives, paradigms, theories, models, schemas, frames, scripts, stereotypes, and categories; they include philosophies of life, ideologies, moral systems, ethical codes, worldviews, and political, religious or cultural affiliations. These are all systems that organize human cognition and behavior by parsing, integrating, simplifying or packaging knowledge or belief. They tend to be built on loose configurations of seemingly core features, patterns, beliefs, commitments, preferences or attitudes that have a foundational and unifying quality in one's mind or in the collective behavior of a community. When they involve the perception of people (including oneself), they foster a sense of affiliation that may trump essential features or beliefs.

What changed my mind was the overwhelming evidence of biases in favor of perpetuating prior worldviews. The brain maps information onto a small set of organizing structures, which serve as cognitive lenses, skewing how we process or seek new information. These structures drive a range of phenomena, including the perception of coherent patterns (sometimes where none exists), the perception of causality (sometimes where none exists), and the perception of people in stereotyped ways.

Another family of perceptual biases stems from our being social animals (even scientists!), susceptible to the dynamics of in-group versus out-group affiliation. A well known bias of group membership is the over-attribution effect, according to which we tend to explain the behavior of people from other groups in dispositional terms ("that's just the way they are"), but our own behavior in much more complex ways, including a greater consideration of the circumstances. Group attributions are also asymmetrical with respect to good versus bad behavior. For groups that you like, including your own, positive behaviors reflect inherent traits ("we're basically good people") and negative behaviors are either blamed on circumstances ("I was under a lot of pressure") or discounted ("mistakes were made"). In contrast, for groups that you dislike, negative behaviors reflect inherent traits ("they can't be trusted") and positive behaviors reflect exceptions ("he's different from the rest"). Related to attribution biases is the tendency (perhaps based on having more experience with your own group) to believe that individuals within another group are similar to each other ("they're all alike"), whereas your own group contains a spectrum of different individuals (including "a few bad apples"). When two groups accept bedrock commitments that are fundamentally opposed, the result is conflict — or war.

Fortunately, the brain has other systems that allow us to counteract these tendencies to some extent. This requires conscious effort, the application of critical reasoning tools, and practice. The plasticity of the brain permits change - within limits.

To assess genuine understanding of an idea one is inclined to resist, I propose a version of Turing's Test tailored for this purpose: You understand something you are inclined to resist only if you can fool its proponents into thinking you get it. Few critics can pass this test. I would also propose a cross-cultural Turing Test for would-be cultural critics (a Golden Rule of cross-group understanding): before critiquing a culture or aspect thereof, you should be able to navigate seamlessly within that culture as judged by members of that group.

By rejecting #3, you give up certainty. Certainty feels good and is a powerful force in leadership. The challenge, as Bertrand Russell put it in A History of Western Philosophy, is "To teach how to live without certainty, and yet without being paralyzed by hesitation".

chris_dibona's picture
Open Source and Public Sector, Google

Over the last three years, we've run a project called the Summer of Code, in which we pair up university-level software developers with open source software projects. If the student succeeds in fulfilling the goals set forth in their application (which the project has accepted), then they are paid a sum of $4,500. We wanted a program that would keep software developers coding over the summer and that would also help out our friends in the world of open source software development.

The passing rate last year was 81%, which means some 700+ students completed their projects to the satisfaction of their mentors.

This last year, we did a cursory study of the code produced by these students and reviewed, among other items, how many lines of code each student produced. The lines-of-code metric has been done to death in the computer industry, and it's a terrible measure of programmer productivity. But it is one of the few metrics we have, and since we assume that a student who passes the project has written code that passed muster, the line count becomes somewhat more meaningful than it would normally be.

Over the summer the average student produced 4,000 lines of code, with some students producing as much as 10, 14 and even 20 thousand lines. This is an insane amount of code whether you measure by time or by money. By some measures this means the students are anywhere between 5 and 40 times as productive as your 'average' employed programmer. This code, mind you, was written by a student who is almost always geographically separated from their mentor by at least 3 time zones and almost never has a face-to-face meeting with their mentor.
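The arithmetic behind "5 to 40 times as productive" can be sketched. The 12-week summer and the 10-50 shipped-lines-per-day industry baseline below are folklore assumptions of mine, not figures from the program:

```python
# Toy productivity comparison for Summer of Code students.
# Assumptions (mine, not the program's): a 12-week summer of
# 5-day weeks, and a folklore industry baseline of 10-50
# shipped lines of code per programmer-day.
SUMMER_DAYS = 12 * 5

def loc_per_day(total_loc, days=SUMMER_DAYS):
    """Average lines of code produced per working day."""
    return total_loc / days

def productivity_ratio(student_loc, baseline_loc_per_day):
    """How many times the baseline rate a student achieved."""
    return loc_per_day(student_loc) / baseline_loc_per_day

# The average student: 4,000 lines over the summer.
avg_rate = loc_per_day(4_000)            # ~67 lines/day
# Against optimistic (50/day) and pessimistic (10/day) baselines:
low = productivity_ratio(4_000, 50)      # ~1.3x
high = productivity_ratio(20_000, 10)    # ~33x
```

Different baseline assumptions move the multiplier around, which is exactly why lines of code is such a slippery metric.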

This is an absurd amount of productivity for a student or, heck, anyone writing code. And it is often production- and user-ready code these students are producing. It's really something to see, and it made me revise what I consider a productive developer to be and what indeed I should expect from the engineers who work for and with me.

But, and here's the thing I changed my mind about: is the tradeoff for such absurdly high productivity that I have to run my projects the way we run the Summer of Code? Maybe. Can I keep my hands off and let things run their course? Is the team strong enough to provide this kind of mentoring to each other? I now think the answer is yes: they can run each other better than I can run them. So let's see what letting go looks like. Ask me next year? Let's hope next year's question is 'What chance are you least regretful you took?' so I can talk about this then!

mark_henderson's picture
Head of Communications, Wellcome Trust; Author, The Geek Manifesto

I used to take the view that public consultations about science policy were pointless. While the idea of asking ordinary people's opinions about controversial research sounds quite reasonable, it is astonishingly difficult to do well.

When governments canvass about issues such as biotechnology or embryo research, what usually happens is that the whole exercise gets captured by special interests.

A vocal minority with strong opinions that are already widely known and impervious to argument — think Greenpeace and the embryo-rights lobby — get their responses in early and often. The much larger proportion of people who consider themselves neutral, open to persuasion, uninformed or uninterested rarely bother to take part. Public opinion is then deemed to have spoken, without reflecting true public opinion at all. Wouldn't it be better, I thought, to let scientists get on with their research, subject to occasional oversight by specialist panels with appropriate ethical expertise?

Well, to a point. Public consultations can indeed be worse than useless, particularly when the British Government has done the consulting: its exercises on GM crops and embryo research laws were particularly ill-judged. As Sir David King said recently, they have taught us what not to do.

Their failure, though, has stimulated some interesting thinking that has convinced me that it is possible to engage ordinary people in quite complex scientific issues, without letting the usual suspects shout everybody else down.

The Human Fertilisation and Embryology Authority's recent work on cytoplasmic hybrid embryos is a case in point. The traditional part of the exercise had familiar results: pro-lifers and anti-genetic engineering groups mobilised, so 494 of the 810 written submissions were hostile. Careful questioning, however, established that almost all these came from people who oppose all embryo research in all circumstances.

A more scientific poll found 61 per cent backing for interspecies embryos, if these were to be used for medical research. Detailed deliberative workshops revealed that once the rationale for the experiments was properly explained, large majorities overcame "instinctive repulsion" and supported the work.

If consultations are properly run in this way, there is a lot to be said for them. They can actually build public understanding of potentially controversial research, and shoot the fox of science's shrillest critics.

In many ways, they are rather more helpful than seeking advice from bioethicists, whose importance to ethical research I've increasingly come to doubt. It's not that philosophy of science is not a worthwhile academic discipline — it can be stimulating and thought-provoking. The problem is that a bioethicist can almost always be found to support any position.

Leon Kass and John Harris are both eminent bioethicists, yet the counsel you would expect them to give on embryo research laws is going to be rather different. Politicians — or scientists — can and do deliberately appoint ethicists according to their pre-existing world views, then trumpet their advice as somehow independent and authoritative, as if their subject were physics.

If specialist bioethics has a role to play in regulation of science, it is in framing the questions that researchers and the public at large should consider. It can't just be a fig leaf for decisions people were always going to make anyway.

lera_boroditsky's picture
Assistant Professor of Cognitive Science, UCSD

I used to think that languages and cultures shape the ways we think. I suspected they shaped the ways we reason and interpret information. But I didn't think languages could shape the nuts and bolts of perception, the way we actually see the world. That part of cognition seemed too low-level, too hard-wired, too constrained by the constants of physics and physiology to be affected by language.

Then studies started coming out claiming to find cross-linguistic differences in color memory.  For example, it was shown that if your language makes a distinction between blue and green (as in English), then you're less likely to confuse a blue color chip for a green one in memory.  In a study like this you would see a color chip, it would then be taken away, and then after a delay you would have to decide whether another color chip was identical to the one you saw or not.

Of course, showing that language plays a role in memory is different from showing that it plays a role in perception. Things often get confused in memory, and it's not surprising that people may rely on information available in language as a second resort. But it doesn't mean that speakers of different languages actually see the colors differently as they are looking at them. I thought that if you designed a task where people could see all the colors as they were making their decisions, then there wouldn't be any cross-linguistic differences.

I was so sure of the fact that language couldn't shape perception that I went ahead and designed a set of experiments to demonstrate this.  In my lab we jokingly referred to this line of work as "Operation Perceptual Freedom."  Our mission: to free perception from the corrupting influences of language.

We did one experiment after another, and each time to my surprise and annoyance, we found consistent cross-linguistic differences.  They were there even when people could see all the colors at the same time when making their decisions.  They were there even when people had to make objective perceptual judgments.  They were there when no language was involved or necessary in the task at all.  They were there when people had to reply very quickly.  We just kept seeing them over and over again, and the only way to get the cross-linguistic differences to go away was to disrupt the language system.  If we stopped people from being able to fluently access their language, then the cross-linguistic differences in perception went away.

I set out to show that language didn't affect perception, but I found exactly the opposite.  It turns out that languages meddle in very low-level aspects of perception, and without our knowledge or consent shape the very nuts and bolts of how we see the world.

jordan_pollack's picture
Professor and Chairman of Computer Science, Brandeis University

I've changed my mind about electronic mail. When I first used email in graduate school in 1980, it was a dream. It was the most marvelous and practical invention of computer science. A text message quickly typed and reliably delivered (or an error returned) allowed a new kind of asynchronous communication. It was cheaper (free), faster, and much more efficient than mail, phone, or fax, with a roundtrip in minutes. Only your colleagues had your address, but you could find people at other places using "finger". Colleagues started sharing text-formatted data tables, where 50K bytes was a big email message!

Then came attachments. This hack to insert 8-bit binary files, bloated by 33%, inside 7-bit text email opened a Pandora's box. Suddenly anyone had the right to send any size package for FREE, like a Socialized United Parcel Service. Microsoft Outlook made it "drag 'n drop" easy for bureaucrats to send Word documents. Many computer scientists saw the future and screamed "JUST SEND TEXT" but it was too late. Microsoft kept tweaking its proprietary file formats, forcing anyone with email to upgrade Microsoft Office. (I thought they finally stopped with Office 97, but now I am getting DOC-X-files, which might as well be in Martian!)
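The 33% figure is the Base64 overhead that MIME uses to squeeze 8-bit attachments into 7-bit mail: every 3 raw bytes become 4 ASCII characters. A quick check:

```python
import base64
import os

# MIME attachments are Base64-encoded: each group of 3 raw bytes
# becomes 4 ASCII characters, so the payload grows by one third
# (real messages add line breaks and headers on top of that).
raw = os.urandom(30_000)          # a stand-in "binary attachment"
encoded = base64.b64encode(raw)

bloat = len(encoded) / len(raw)   # -> 4/3, i.e. ~33% larger
```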

We faced AOL newbies, mailing lists, free webmail, hotmail spam, RTF mail, chain letters, html mail, musical mail, flash mail, javascript mail, viruses, spybits, faked URLs, phishing, Nigerian cons, Powerpoint arms races, spam-blocking spam, viral videos, Plaxo updates, Facebook friendings, ad nauseam.

The worst part is the legal precedent that your employer "owns" the email sent out over the network they provide. It is as if they owned the sound waves emitted from your throat over the phone. An idiot judgment leads to two Kafkaesque absurdities:

First, if you send email with an ethnic slur, or receive email with a picture of a naked child or a copyrighted MP3, you can be fired. Use email to organize a union? Fugget about it! Second, all email sent and received must now be archived as critical business documents to comply with Sarbanes-Oxley. And Homeland Security wants the right to monitor ISP data streams and stores, and hopes no warrants are needed for data older than 90 days.

Free Speech in the Information Age isn't your right to post anonymously on a soapbox blog or newspaper story. It means that, if we agree, I should be able to send any data in any file format, with any encryption, from a computer I am using to one you are on, provided we pay for the broadband freight. There is no reason that any government, carrier, or corporation should have any right to store, read, or interpret our digital communications. Show just cause and get a warrant, even if you think an employee is spying or a student is pirating music.

Email is now a nightmare that we have to wake up from. I don't have a solution yet, but I believe the key to re-imagine email is to realize that our computers and phones are "always on" the net. So we can begin with synchronous messaging (both sender and receiver are online) — a cross between file sharing, SMS texting, and instant messaging — and then add grid storage mechanisms for asynchronous delivery, multiple recipients, and reliability.

Until then, call me.

ray_kurzweil's picture
Principal Developer of the first omni-font optical character recognition

I've come to reject the common "SETI" (search for extraterrestrial intelligence) wisdom that there must be millions of technology-capable civilizations within our "light sphere" (the region of the Universe accessible to us by electromagnetic communication). The Drake formula provides a means to estimate the number of intelligent civilizations in a galaxy or in the universe. Essentially, the likelihood of any one planet evolving biological life that goes on to create sophisticated technology is tiny, but there are so many star systems that there should still be many millions of such civilizations. Carl Sagan's analysis of the Drake formula concluded that there should be around a million civilizations with advanced technology in our galaxy, while Frank Drake estimated around 10,000. And there are many billions of galaxies. Yet we don't notice any of these intelligent civilizations, hence the paradox that Fermi described in his famous comment. So where is everyone?
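The Drake formula is just a chain of multiplied factors; the parameter values below are illustrative optimistic choices in the spirit of Sagan's reading, not figures from this essay:

```python
# Drake equation: N = R* x fp x ne x fl x fi x fc x L
def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Estimated number of communicating civilizations in the galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# Illustrative optimistic parameters (assumed for this sketch):
# 10 stars formed/year, half with planets, 2 habitable worlds each,
# life always arising, intelligence half the time, technology a
# fifth of the time, and million-year civilization lifetimes.
n = drake(r_star=10, f_p=0.5, n_e=2,
          f_l=1.0, f_i=0.5, f_c=0.2, lifetime=1_000_000)
# n -> about 1,000,000, the order of Sagan's "million civilizations"
```

Shrinking any single factor by a few orders of magnitude collapses the estimate, which is why the formula settles nothing by itself.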

We can readily explain why any one of these civilizations might be quiet. Perhaps it destroyed itself. Perhaps it is following the Star Trek ethical guideline to avoid interference with primitive civilizations (such as ours). These explanations make sense for any one civilization, but it is not credible, in my view, that every one of the billions of technology-capable civilizations that should exist has destroyed itself or decided to remain quiet.

The SETI project is sometimes described as trying to find a needle (evidence of a technical civilization) in a haystack (all the natural signals in the universe). But actually, any technologically sophisticated civilization would be generating trillions of trillions of needles (noticeably intelligent signals). Even if it had switched away from electromagnetic transmissions as a primary form of communication, there would still be vast artifacts of electromagnetic phenomena generated by all of the many computational and communication processes that such a civilization would need to engage in.

Now let's factor in what I call the "law of accelerating returns" (the inherent exponential growth of information technology). The common wisdom (based on what I call the intuitive linear perspective) is that it would take many thousands, if not millions of years, for an early technological civilization to become capable of technology that spanned a solar system. But because of the explosive nature of exponential growth, it will only take a quarter of a millennium (in our own case) to go from sending messages on horseback to saturating the matter and energy in our solar system with sublimely intelligent processes.

The price-performance of computation went from 10^-5 to 10^8 cps per thousand dollars in the 20th century. We also went from about a million dollars to a trillion dollars in the amount of capital devoted to computation, so overall progress in nonbiological intelligence went from 10^-2 to 10^17 cps in the 20th century, which is still short of the human biological figure of 10^26 cps. By my calculations, however, we will achieve around 10^69 cps by the end of the 21st century, thereby greatly multiplying the intellectual capability of our human-machine civilization. Even if we find communication methods superior to electromagnetic transmissions we will nonetheless be generating an enormous number of intelligent electromagnetic signals.
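Those growth figures imply a doubling time, which is the quantitative content of the "law of accelerating returns". Taking the quoted endpoints (10^-5 to 10^8 cps per thousand dollars, and 10^-2 to 10^17 cps overall) at face value:

```python
import math

# Exponential-growth arithmetic behind the "law of accelerating
# returns": convert an orders-of-magnitude climb over a century
# into an implied doubling time.
def doubling_time(start, end, years):
    """Implied doubling time given start/end values over a period."""
    doublings = math.log2(end / start)
    return years / doublings

# 10^-5 -> 10^8 cps per $1,000 over the 20th century:
per_dollar = doubling_time(1e-5, 1e8, 100)   # ~2.3 years per doubling

# Folding in capital growth ($1e6 -> $1e12): 10^-2 -> 10^17 cps overall.
overall = doubling_time(1e-2, 1e17, 100)     # ~1.6 years per doubling
```

A steady ~2-year doubling compounded over another century is what pushes the projection up by dozens of orders of magnitude.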

According to most analyses of the Drake equation, there should be billions of civilizations, and a substantial fraction of these should be ahead of us by millions of years. That's enough time for many of them to be capable of vast galaxy-wide technologies. So how can it be that we haven't noticed any of the trillions of trillions of "needles" that each of these billions of advanced civilizations should be creating?

My own conclusion is that they don't exist. If it seems unlikely that we would be in the lead in the universe, here on the third planet of a humble star in an otherwise undistinguished galaxy, it's no more perplexing than the existence of our universe with its ever so precisely tuned formulas to allow life to evolve in the first place.

gregory_benford's picture
Emeritus Professor of Physics and Astronomy, UC-Irvine; Novelist, The Berlin Project

Richard Feynman held that philosophy of science is as useful to scientists as ornithology is to birds. Often this is so. But the unavoidable question about physics is — where do the laws come from?

Einstein hoped that God had no choice in making the universe. But philosophical issues seem unavoidable when we hear of the "landscape" of possible string theory models. As now conjectured, the theory leads to 10^500 solution universes — a horrid violation of Occam's Razor we might term "Einstein's nightmare."

I once thought that the laws of our universe were unquestionable, in that there was no way for science to address the question. Now I'm not so sure. Can we hope to construct a model of how laws themselves arise?

Many scientists dislike even the idea of doing this, perhaps because it's hard to know where to start. Perhaps ideas from the currently chic technology, computers, are a place to start. Suppose we treat the universe as a substrate carrying out computations, a meta-computer.

Suppose that precise laws require computation, which can never be infinitely exact. Such a limitation might be explained by counting the computational capacity of a sphere around an "experiment" that tries to measure outcomes of those laws. The sphere expands at the speed of light, say, so longer experiment times give greater precision. Thinking mathematically, this sets a limit on how sharp differentials can be in our equations. A partial derivative with respect to time can be no better than the time it takes to compute it.
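One way to make "counting the computational capacity of a sphere" concrete is an estimate in the style of Seth Lloyd, via the Margolus-Levitin bound (at most 2E/πħ elementary operations per second for energy E). The mean cosmic density and the crude rate-times-time total below are my assumptions, not the essay's:

```python
import math

# Toy upper bound on operations available inside an expanding light
# sphere, using the Margolus-Levitin bound: ops/sec <= 2E / (pi*hbar).
C = 2.998e8          # speed of light, m/s
HBAR = 1.055e-34     # reduced Planck constant, J*s
RHO = 9.9e-27        # assumed mean cosmic mass density, kg/m^3

def ops_within_light_sphere(t_seconds):
    """Crude bound on total operations inside a sphere of radius c*t."""
    radius = C * t_seconds
    mass = RHO * (4 / 3) * math.pi * radius**3
    energy = mass * C**2                      # E = mc^2
    rate = 2 * energy / (math.pi * HBAR)      # max ops per second
    return rate * t_seconds                   # crude: rate x elapsed time

year = 3.15e7  # seconds
ops = ops_within_light_sphere(year)  # astronomically large, but finite
```

The point is only that the number is finite and grows with experiment time, which is what would cap the achievable precision of any measured law.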

In a sense, there may be an ultimate limit on how well known any law can be, especially one that must describe all of space-time, like classical relativity. It can't be better than the total computational capacity of the universe, or the capacity within the light sphere we can see.

I wonder if this idea can somehow define the nature of laws, beyond the issue of their precision? For example, laws with higher derivatives will be less descriptive because their operations cannot be carried out in a given volume over a finite time.

Perhaps the infinite discreteness required for formulating any mathematical system could be the limiting bound on such discussions. There should be energy bounds, too, within a finite volume, and thus limits on processing power set by the laws of thermodynamics. Still, I don't see how these arguments tell us enough to derive, say, general relativity.

Perhaps we need more ideas to derive a Law of Laws. Can we use the ideas of evolution? Perhaps invoke selection among laws, penalizing those that lead to singularities — and thus take those regions of space-time out of the game? Lee Smolin tried a limited form of this by supposing universes reproduce through black hole collapses. Ingenious, but that didn't seem to lead very far. He imagined some variation in the reproduction of budded-off generations of universes, so their fundamental parameters varied a bit. Then selection could work.

In a novel of a decade ago, Cosm, I invoked intelligent life, rather than singularities, to determine selection for universes that can foster intelligence, as ours seems to. (I didn't know about Lee's ideas at the time.) The idea is that a universe hosting intelligence evolves creatures that find ways in the laboratory to make more universes, which bud off and can further engender more intelligence, and thus more experiments that make more universes. This avoids the problem of how the first universe started, of course. Maybe the Law of Laws could answer that, too?

alison_gopnik's picture
Psychologist, UC, Berkeley; Author, The Gardener and the Carpenter

Recently, I've had to change my mind about the very nature of knowledge because of an obvious, but extremely weird fact about children - they pretend all the time. Walk into any preschool and you'll be surrounded by small princesses and superheroes in overalls - three-year-olds literally spend more waking hours in imaginary worlds than in the real one. Why? Learning about the real world has obvious evolutionary advantages and kids do it better than anyone else. But why spend so much time thinking about wildly, flagrantly unreal worlds? The mystery about pretend play is connected to a mystery about adult humans - especially vivid for an English professor's daughter like me. Why do we love obviously false plays and novels and movies?

The greatest success of cognitive science has been our account of the visual system. There's a world out there sending information to our eyes, and our brains are beautifully designed to recover the nature of that world from that information. I've always thought that science, and children's learning, worked the same way. Fundamental capacities for causal inference and learning let scientists, and children, get an accurate picture of the world around them - a theory. Cognition was the way we got the world into our minds.

But fiction doesn't fit that picture - it's easy to see why we want the truth, but why do we work so hard telling lies? I thought that kids' pretend play, and grown-up fiction, must be a sort of spandrel, a side-effect of some other more functional ability. I said as much in a review in Science and got floods of e-mail back from distinguished novel-reading scientists. They were all sure fiction was a Good Thing - me too, of course - but didn't seem any closer than I was to figuring out why.

So the anomaly of pretend play has been bugging me all this time. But finally, trying to figure it out has made me change my mind about the very nature of cognition itself.

I still think that we're designed to find out about the world, but that's not our most important gift. For human beings the really important evolutionary advantage is our ability to create new worlds. Look around the room you're sitting in. Every object in that room - the right angle table, the book, the paper, the computer screen, the ceramic cup - was once imaginary. Not a thing in the room existed in the Pleistocene. Every one of them started out as an imaginary fantasy in someone's mind. And that's even more true of people - all the things I am, a scientist, a philosopher, an atheist, a feminist, all those kinds of people started out as imaginary ideas too. I'm not making some relativist post-modern point here; right now the computer and the cup and the scientist and the feminist are as real as anything can be. But that's just what our human minds do best - take the imaginary and make it real. I think now that cognition is also a way we impose our minds on the world.

In fact, I think now that the two abilities - finding the truth about the world and creating new worlds - are two sides of the same coin. Theories, in science or childhood, don't just tell us what's true - they tell us what's possible, and they tell us how to get to those possibilities from where we are now. When children learn and when they pretend they use their knowledge of the world to create new possibilities. So do we, whether we are doing science or writing novels. I don't think anymore that Science and Fiction are just both Good Things that complement each other. I think they are, quite literally, the same thing.

lewis_wolpert's picture
Professor of Biology

I have for many years worked on pattern formation in the developing embryo, which is the development of spatial organization as seen, for example, in the arm and hand. My main model for pattern formation is based on cells acquiring a positional value. The model proposes that cells have their position specified as in a co-ordinate system, and this determines, depending on their developmental history and their genetic constitution, what they do.

The development of the chick limb illustrates some of the problems. As the wing grows out from the flank, there is at its tip a thickened ridge of the covering sheet of cells that secretes special proteins, which we think (and this is controversial) specify a region in the cells beneath the ridge which we call the progress zone. At the posterior margin of the limb is the polarising region, which secretes a protein, Sonic Hedgehog. This is a signaling molecule used again and again in the development of the embryo. The normal pattern of digits in the chick wing is 2, 3, and 4. If another polarising region is grafted to the anterior margin, the pattern of digits is 4, 3, 2, 2, 3, 4.

The interpretation is that Sonic Hedgehog sets up a gradient which specifies position, and that with the graft there is a mirror-image gradient. Crick had suggested that such gradients could be set up by diffusion of a molecule, like Sonic Hedgehog. We have worked hard to show that this model is correct.

The best evidence that it may be a gradient is that if just a small amount of Sonic Hedgehog is placed in the anterior margin, you get just an extra digit 2. If one puts a little bit more, you get a 3, 2. But is there really a diffusible gradient of Sonic Hedgehog specifying position? The situation is much more complex.

We now think that the model is wrong, as diffusion of a molecule is far too unreliable to specify positional values accurately. The reason we think diffusion cannot work is that there is now good evidence that a diffusing molecule has to go between and even into cells, and interact with extracellular molecules, making it totally unreliable. A more attractive model might be based on interactions at cell contacts, as in the polarity models proposed by others. Position would be specified by cells talking to each other.
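The positional-value idea, and the reliability worry about a diffusible gradient, can be caricatured in a toy simulation; the gradient shape, thresholds, and noise level below are all invented for illustration, not taken from the biology:

```python
import math
import random

# Toy "positional information" sketch (not the actual model): a source
# at position 0 sets up an exponentially decaying morphogen gradient,
# and each cell reads the local concentration against thresholds to
# choose a digit identity. All numbers are invented.
def concentration(x, noise=0.0):
    c = math.exp(-x / 5.0)                    # idealized diffusion gradient
    return c * (1 + random.gauss(0, noise))   # multiplicative noise

def identity(c):
    if c > 0.5:
        return "digit 4"   # nearest the polarising region
    if c > 0.2:
        return "digit 3"
    return "digit 2"

positions = range(12)
clean = [identity(concentration(x)) for x in positions]
# With a noisy gradient, the boundaries between digit territories
# blur, which is the unreliability complained about above:
noisy = [identity(concentration(x, noise=0.3)) for x in positions]
```

With a clean gradient the digit territories come out in orderly blocks; adding noise scrambles cells near the threshold boundaries, illustrating why a purely diffusive readout seems too fragile.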

This is a serious change in my thinking.

richard_dawkins's picture
Evolutionary Biologist; Emeritus Professor of the Public Understanding of Science, Oxford; Author, Books Do Furnish a Life

When a politician changes his mind, he is a 'flip-flopper.' Politicians will do almost anything to disown the virtue — as some of us might see it — of flexibility. Margaret Thatcher said, "The lady is not for turning." Tony Blair said, "I don't have a reverse gear." Leading Democratic Presidential candidates, whose original decision to vote in favour of invading Iraq had been based on information believed in good faith but now known to be false, still stand by their earlier error for fear of the dread accusation: 'flip-flopper'. How very different is the world of science. Scientists actually gain kudos through changing their minds. If a scientist cannot come up with an example where he has changed his mind during his career, he is hidebound, rigid, inflexible, dogmatic! It is not really all that paradoxical, when you think about it further, that prestige in politics and science should push in opposite directions.

I have changed my mind, as it happens, about a highly paradoxical theory of prestige, in my own field of evolutionary biology. That theory is the Handicap Principle suggested by the Israeli zoologist Amotz Zahavi. I thought it was nonsense and said so in my first book, The Selfish Gene. In the Second Edition I changed my mind, as the result of some brilliant theoretical modelling by my Oxford colleague Alan Grafen.

Zahavi originally proposed his Handicap Principle in the context of sexual advertisement by male animals to females. The long tail of a cock pheasant is a handicap. It endangers the male's own survival. Other theories of sexual selection reasoned — plausibly enough — that the long tail is favoured in spite of its being a handicap. Zahavi's maddeningly contrary suggestion was that females prefer long tailed males, not in spite of the handicap but precisely because of it. To use Zahavi's own preferred style of anthropomorphic whimsy, the male pheasant is saying to the female, "Look what a fine pheasant I must be, for I have survived in spite of lugging this incapacitating burden around behind me."

For Zahavi, the handicap has to be a genuine one, authentically costly. A fake burden — the equivalent of the padded shoulder as counterfeit of physical strength — would be rumbled by the females. In Darwinian terms, natural selection would favour females who scorn padded males and choose instead males who demonstrate genuine physical strength in a costly, and therefore, unfakeable way. For Zahavi, cost is paramount. The male has to pay a genuine cost, or females would be selected to favour a rival male who does so.

Zahavi generalized his theory from sexual selection to all spheres in which animals communicate with one another. He himself studies Arabian Babblers, little brown birds of communal habit, who often 'altruistically' feed each other. Conventional 'selfish gene' theory would seek an explanation in terms of kin selection or reciprocation. Indeed, such explanations are usually right (I haven't changed my mind about that). But Zahavi noticed that the most generous babblers are the socially dominant individuals, and he interpreted this in handicap terms. Translating, as ever, from bird to human language, he put it into the mouth of a donor bird like this: "Look how superior I am to you, I can even afford to give you food." Similarly, some individuals act as 'sentinels', sitting conspicuously in a high tree and not feeding, watching for hawks and warning the rest of the flock who are therefore able to get on with feeding. Again eschewing kin selection and other manifestations of conventional selfish genery, Zahavi's explanation followed his own paradoxical logic: "Look what a great bird I am, I can afford to risk my life sitting high in a tree watching out for hawks, saving your miserable skins for you and allowing you to feed while I don't." What the sentinel pays out in personal cost he gains in social prestige, which translates into reproductive success. Natural selection favours conspicuous and costly generosity.

You can see why I was sceptical. It is all very well to pay a high cost to gain social prestige; maybe the raised prestige does indeed translate into Darwinian fitness; but the cost itself still has to be paid, and that will wipe out the fitness gain. Don't evade the issue by saying that the cost is only partial and will only partially wipe out the fitness gain. After all, won't a rival individual come along and out-compete you in the prestige stakes by paying a greater cost? And won't the cost therefore escalate until the point where it exactly wipes out the alleged fitness gain?

Verbal arguments of this kind can take us only so far. Mathematical models are needed, and various people supplied them, notably John Maynard Smith who concluded that Zahavi's idea, though interesting, just wouldn't work. Or, to be more precise, Maynard Smith couldn't find a mathematical model that led to the conclusion that Zahavi's theory might work. He left open the possibility that somebody else might come along later with a better model. That is exactly what Alan Grafen did, and now we all have to change our minds.

I translated Grafen's mathematical model back into words, in the Second Edition of The Selfish Gene (pp 309-313), and I shall not repeat myself here. In one sentence, Grafen found an evolutionarily stable combination of male advertising strategy and female credulity strategy that turned out to be unmistakeably Zahavian. I was wrong to dismiss Zahavi, and so were a lot of other people.
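The flavour of Grafen's result can be conveyed with a toy payoff calculation. To be clear, this is my own caricature, not Grafen's actual model: the benefit and cost numbers are invented. The key ingredient is condition-dependent cost: the same handicap costs a genuinely fit male less than an unfit one, so only fit males find advertising worthwhile, and the signal stays honest.

```python
# Toy caricature of condition-dependent costly signalling (not Grafen's
# actual model). The handicap costs a fit male less than an unfit one,
# so honest advertisement pays for the fit and not for the unfit.

MATING_BENEFIT = 1.0          # invented fitness gain if females accept the advert

def cost_of_handicap(quality):
    # Invented cost function: the burden is cheaper for a high-quality male.
    # quality lies in [0, 1].
    return 1.5 - quality

def payoff(quality, advertise):
    """Net fitness payoff for a male of given quality, with or without the handicap."""
    if advertise:
        return MATING_BENEFIT - cost_of_handicap(quality)
    return 0.0

fit, unfit = 0.9, 0.2
print(payoff(fit, True), payoff(fit, False))      # fit male does better by advertising
print(payoff(unfit, True), payoff(unfit, False))  # unfit male does better by not
```

Because cheating does not pay for low-quality males, females can safely believe the advert, which is the qualitative shape of the evolutionarily stable outcome Grafen derived.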

Nevertheless, a word of caution. Grafen's role in this story is of the utmost importance. Zahavi advanced a wildly paradoxical and implausible idea, which — as Grafen was able to show — eventually turned out to be right. But we must not fall into the trap of thinking that, therefore, the next time somebody comes up with a wildly paradoxical and implausible idea, that one too will turn out to be right. Most implausible ideas are implausible for a good reason. Although I was wrong in my scepticism, and I have now changed my mind, I was still right to have been sceptical in the first place! We need our sceptics, and we need our Grafens to go to the trouble of proving them wrong.