
The flat earth and geocentric world are examples of wrong scientific beliefs that were held for long periods. Can you name your favorite example and for extra credit why it was believed to be true?

An Edge Special Event!

To selected Edge contributors:

I am doing research for a new book and would hope to elicit informed responses to the following question:

The flat earth and geocentric world are examples of wrong scientific beliefs that were held for long periods. Can you name your favorite example and for extra credit why it was believed to be true?

Please note that I am interested in things we once thought were true and took forever to unlearn. I am looking for wrong scientific beliefs that we've already learned were wrong, rather than those the respondent is predicting will be wrong, which makes this different from the usual Edge prediction sort of question.

Several responders pointed out that the phrase "scientific belief" in my question was not well defined. Did I mean beliefs held by scientists, or beliefs held by the lay public about science? The answer is that I am interested in both, though I should stress that this is not at all what my next book will be about. I do not know enough about science to write anything about the subject. However, for the book I am thinking about stuff that we get wrong, often for long periods of time, and am doing some wondering about whether there are some principles defining when such mistakes are more likely to happen.

This exercise has been fantastically interesting, and if anyone is prompted by this to send in more ideas please do. I am also interested if anyone has thoughts about what the principles might be, if, indeed there are any.

Richard Thaler

RICHARD H. THALER, Director of the Center for Decision Research at the University of Chicago Graduate School of Business, is the father of Behavioral Economics. He is coauthor (with Cass Sunstein) of Nudge: Improving Decisions About Health, Wealth, and Happiness, and he writes a column that appears in the Business section of The Sunday New York Times.

Richard Thaler's Edge Bio Page


Thaler on Edge: "A Short Course In Behavioral Economics with Richard Thaler, Daniel Kahneman, Sendhil Mullainathan", The Edge Master Class 2008



65 Contributors: Neil Shubin, Garrett Lisi, Peter Schwartz, David Deutsch, Haim Harari, Alun Anderson, Irene Pepperberg, John Holland, Derek Lowe, Charles Simonyi, Nathan Myhrvold, Lawrence Krauss, Steven Strogatz, Cesar Hidalgo, Eric Topol, Christian Keysers, Simona Morini, Ross Anderson, James Croak, Rob Kurzban, Lewis Wolpert, Howard Gardner, Ed Regis, Robert Trivers, Frank Tipler, Joan Chaio, Jeremy Bernstein, Matthew Ritchie, Clay Shirky, Roger Schank, Gary Klein, Gregory Cochran, Eric Weinstein, Geoffrey Carr, James O'Donnell, Lane Greene, Jonathan Haidt, Juan Enriquez, Scott Atran, Rupert Sheldrake, Emanuel Derman, Charles Seife, Milford H. Wolpoff, Robert Shapiro, Judith Harris, Jordan Pollack, Sue Blackmore, Nicholas G. Carr, Lee Smolin, Marti Hearst, Gino Segre, Carl Zimmer, Gregory Paul, Alison Gopnik, George Dyson, Mark Pagel, Timothy Taylor, David Berreby, Zenon Pylyshyn, Michael Shermer, George Lakoff, Eduardo Salcedo-Albarán, Garniss Curtis, Marcel Kinsbourne, Paul Kedrosky

David Berreby on November 27, 2010

If you were a sophisticated and up-to-the-minute science buff in 17th century Europe, you knew that there was only one properly scientific way to explain anything: "the direct contact-action of matter pushing on matter" (as Peter Dear puts it in The Intelligibility of Nature). Superstitious hayseeds thought that one object could influence another without a chain of physical contact, but that was so last century by 1680. Medieval physics had been rife with such notions; modern thought had cast those demons out. To you, then, Newton's theory of gravity looked like a step backwards. It held that the sun influenced the Earth without touching it, even via intermediary objects. At the time, that just sounded less "sciencey" than the theories it eventually replaced.

This came to mind the other day because, over at Edge.org, Richard H. Thaler asked people to nominate examples of "wrong scientific beliefs that were held for long periods." He also asked us to suggest a reason that our nominee held sway for too long. ...


November 24, 2010

By Andrew C. Revkin

There's a fascinating list of scientific ideas that endured for a long time, but were wrong, over at Edge.org, the Web site created by the agent and intellectual impresario John Brockman.

The cautionary tale of the fight over the cause of stomach ulcers, listed by quite a few contributors there, is the kind of saga that gives science journalists (appropriately) sleepless nights. One of my favorites in the list is the offering of Carl Zimmer, the author and science journalist, who discusses some durable misconceptions about the stuff inside our skulls:

"This laxe pithe or marrow in man's head shows no more capacity for thought than a Cake of Sewet or a Bowl of Curds."

This wonderful statement was made in 1652 by Henry More, a prominent seventeenth-century British philosopher. More could not believe that the brain was the source of thought. These were not the ravings of a medieval quack, but the argument of a brilliant scholar who was living through the scientific revolution. At the time, the state of science made it very easy for many people to doubt the capacity of the brain. And if you've ever seen a freshly dissected brain, you can see why. It's just a sack of custard. Yet now, in our brain-centered age, we can't imagine how anyone could think that way.

The list grew out of a query from Richard Thaler, the director of the Center for Decision Research at the University of Chicago Graduate School of Business and coauthor, with Cass Sunstein, of "Nudge: Improving Decisions About Health, Wealth, and Happiness." (He also writes a column for The Times.)

Here's his question:

The flat earth and geocentric world are examples of wrong scientific beliefs that were held for long periods. Can you name your favorite example and for extra credit why it was believed to be true?


November 24, 2010

Maggie Koerth-Baker

Science can contradict itself. And that's OK. It's a fundamental part of how research works. But from what I've seen, it's also one of the hardest parts for the general public to understand. When an old theory dies, it's not because scientists have lied to us and can't be trusted. In fact, exactly the opposite. Those little deaths are casualties of the process of fumbling our way towards Truth*.

Of course, even after the pulse has stopped, the dead can be pretty interesting. Granted, I'm biased. I like dead things enough to have earned a university degree in the sort of anthropology that revolves around exactly that. But I'm not alone. A recent article at the Edge Foundation website asked a broad swath of scientists and thinkers to name their favorite long-held theory, which later turned out to be dead wrong. The responses turn up all sorts of fascinating mistakes of science history—from the supposed stupidity of birds, to the idea that certain, separate parts of the brain controlled nothing but motor and visual skills.

One of my favorites: The idea that complex, urban societies didn't exist in Pre-Columbian Costa Rica, and other areas south of the Maya heartland. In reality, the cities were always there. I took you on a tour of one last January. It's just that the people who lived there built with wood and thatch, rather than stone. The bulk of the structures decayed over time, and what was left was easy to miss, if you were narrowly focused on looking for giant pyramids.

What's your favorite dead theory?

Edge: Wrong Scientific Beliefs That Were Held for Long Periods of Time ...


November 23, 2010


Earlier this week Richard H. Thaler posted a question to selected Edge contributors, asking them for their favorite examples of wrong scientific theories that were held for long periods of time. You know, little ideas like "the earth is flat."

The contributors' responses came from all different fields and thought processes, but there were a few recurring themes. One of the biggest hits was the theory that ulcers were caused by stress; this was discredited by Barry Marshall and Robin Warren, who showed that the bacterium H. pylori brings on the ulcers. Gregory Cochran explains:

One favorite is Helicobacter pylori as the main cause of stomach ulcers. This was repeatedly discovered and then ignored and forgotten: doctors preferred 'stress' as the cause, not least because it was undefinable. Medicine is particularly prone to such shared mistakes. I would say this is the case because human biology is complex, experiments are not always permitted, and MDs are not trained to be puzzle-solvers but instead to follow authority.

Another frequent topic of disbelief among Edge responders was theism and its anti-science offshoots, in particular the belief in intelligent design and the belief that the Earth is only a few thousand years old. Going by current political discussions in America, it may seem that these issues are still under contention and shouldn't be included on the list, but I'm going to have to say differently and agree with Milford Wolpoff:

Creationism's stepsister, intelligent design, and allied beliefs have been held true for some time, even as the mountain of evidence supporting an evolutionary explanation for the history and diversity of life continues to grow. Why has this belief persisted? There are political and religious reasons, of course, but history shows that neither politics nor religion requires a creationist belief in intelligent design. ...


Evolutionary Biologist; Robert R. Bensley Distinguished Service Professor; University of Chicago; Author, Your Inner Fish

One wrong idea in my field was that the map of the earth was fixed...that the continents stayed in one place over time. This notion held despite the fact that anyone, including small children, could see that the coasts of Africa and South America (like many places) fit together like a jigsaw puzzle. Evidence for moving continents piled up (fossils from different places, similar rocks...), but still there was strong resistance. Part of the problem is that nobody could imagine a mechanism for the continents to move about...did they raft like icebreakers through the ocean mushing the sea bottom as they did so? Nobody saw how this could possibly happen.

Independent Theoretical Physicist; Author, "An Exceptionally Simple Theory of Everything"

One wrong scientific belief held by cosmologists until recently was that the expansion of the universe would ultimately cease, or even that the universe would re-contract. Evidence now shows that the expansion of the universe is accelerating. This came as quite a shock, although the previous belief was held on scant evidence. Many physicists liked the idea of a closed universe, and expressed distaste at the idea of galaxies accelerating off to infinity, but nature often contradicts our intuition.

Futurist, Business Strategist; Cofounder, Global Business Network, a Monitor Company; Author, Inevitable Surprises

There are several things we once believed not true and now believe to be true. For example, prions were thought not to exist and are now a major field of study, and quantum entanglement was thought impossible, even by Einstein ("spooky action at a distance," he called it), yet it is now the basis of quantum computing.

Quantum Physicist, Oxford; Author, The Fabric of Reality

Surely the most extreme example is the existence of a force of gravity.

It's hard to say when this belief began but it surely predates Newton. It must have existed from the time when concepts such as force were first formulated until the general theory of relativity superseded it in 1915.

Why did scientists hold that belief for so long? Because it was a very good explanation and there was no rival theory to explain observations such as heavy objects exerting forces on whatever they were resting on. Since 1915 we have known the true explanation, namely that when you hold your arm out horizontally, and think you are feeling it being pulled downwards by a force of gravity, the only force you are actually feeling is the upward force exerted by your own muscles in order to keep your arm accelerating continuously away from a straight path in spacetime. Even today, it is hard to discipline our intuition to conceive of what is happening in that way, but it really is.

Physicist, former President, Weizmann Institute of Science; Author, A View from the Eye of the Storm

The earth is flat and the sun goes around it for the same reason that an apple appears to be more strongly attracted by the earth than a leaf, the same reason that when you add 20% and then subtract 20% you return to the same value, and the same reason that a boat is heavier than water. All of these statements appear to be correct at first sight, and all of them are wrong. The length of time it takes to figure it out is a matter of history and culture. Religion gets into it, psychology, fear of science, and many other factors. I do not believe that there is one parameter that determines how these things are found to be wrong.

The guy who sold me a carpet last month truly insisted that people in Australia are standing on their heads and could not understand how they manage to do it. He still believes that the earth is flat and is ashamed of his belief, but refuses to accept my explanations. I know a union that got a substantial pay raise because a politician did not understand that adding and then subtracting 20% gets you to a different result from the one you started with. Religious people of all religions believe even more ridiculous things than all of the above. These are examples of the last 10 years, not of the middle ages.

Part of the problem is that, in order to find the truth in all of these cases, you need to ask the right question. This is more important, and often more difficult, than finding the answer. The right questions in the above cases are of different levels of complexity.
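Harari's percentage example is simple arithmetic to verify. A minimal sketch in Python (the salary figure is illustrative, not from the anecdote):

```python
# Adding 20% and then subtracting 20% does not return the original value:
# the subtracted 20% is taken of the larger, already-increased amount.
salary = 1000.0
raised = salary * 1.20   # +20% of 1000 -> 1200.0
final = raised * 0.80    # -20% of 1200 -> 960.0
print(final)             # 960.0: a net 4% loss, not a return to 1000.0
```

The net effect of +20% followed by -20% is a factor of 1.2 × 0.8 = 0.96, because the second percentage is taken of a different base than the first.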

Senior Consultant (and former Editor-in-Chief and Publishing Director of New Scientist); Author, After the Ice: Life, Death, and Geopolitics in the New Arctic

The Great Chain of Being is another great example of a long-held, still not fully displaced, false view and also stems from the same kind of "wrongly centered" thinking.

Essentially the view is that humans stand at the pinnacle of creation (or just below God) and all other life forms are less perfect to a varying degree.

Evolutionary theory teaches that all creatures are equally adapted to the niches in which they live; every branch of the tree is thus in a sense equally perfect.

There was a critical moment in the early 1970s when the new view swept into psychology. I was a student at the time, looking at so-called comparative psychology. The dominant view, put forward by ME Bitterman, was that you could classify "learning ability" and arrange animals according to the level they had reached e.g. fish were incapable of "reversal learning" but rats were, or some such. A paper was then published (by Hodos and Campbell 1969) on the false notion of the Great Chain of Being in psychology, arguing that every animal's learning ability fitted the particular use it made of it (e.g. honey bees are brilliant at learning the time of day at which particular flowers produce nectar, a subject I later researched). This change in the way of thinking also reflects a move away from the US Skinnerian school of lab studies of animals to the European ethological school (pioneered by Nobel prize winner Niko Tinbergen, with whom I worked) of studying animals in their own environments.

The view also fits Native American conceptions of a Creator who does not favour any particular one of his creations but is at odds with the Christian view, which is why it lingers on in the US.

Psychologist, Research Associate, Harvard University; Author, Alex and Me

That all birds were stupid.

It was believed to be true because (a) early neurobiologists couldn't find anything in the avian brain that looked like the primate cortex (although the more enlightened did argue that there was a 'striatal' area that seemed to work in a somewhat comparable manner for some birds) and (b) many studies on avian intelligence, using operant conditioning, focused on pigeons — which are not the most intelligent birds — and the pigeon generally never did even as well as the rat on the type of tasks used.

A corollary: That parrots were not only stupid, but also could never learn to do anything more than mimic human speech.

It was believed to be true because the training techniques initially used in laboratories were not appropriate for teaching heterospecific communication.

Professor of Psychology, Computer Science and Engineering, University of Michigan, Ann Arbor; Author, Emergence: From Chaos to Order

From the time of Aristotle onward, natural philosophers believed that the basic law underlying motion was that all objects (eventually) come to rest. It took Newton to lay aside the myriad details (friction, etc.) in order to build an idealized model that requires 'forces' to change direction or velocity. Subsequently, everything from models of fluid flow to slinging satellites to the outer solar system used Newton's model as a starting point.

Medicinal Chemist

My nominees are:

(1) The "four humours" theory of human physiology. That one, coming most prominently from Galen, persisted for centuries. Although the pure humoural theory gradually eroded, it lived on in the shape of contra-therapeutic bloodletting until the 19th century, and you'd have to think that in the vast majority of those cases it was harmful.

Why it persisted so long is the tough part. My guess is that it was hard (it still is!) for both physicians and patients to realize or admit that very little could be done for most physical ailments. Bloodletting might not always work, but it had to be better than just standing there doing nothing, right? And in those cases susceptible to it, bloodletting must have had a pretty strong placebo effect, as dramatic as it is.

(2) The "bad air" theory of infectious disease. This is another one that you can find persisting for centuries. I'd say that it persisted for several reasons: there were indeed such things as poisonous vapors that could make a person feel sick, for one thing. And the environments that were felt to have the worst vapors were often ones that had higher rates of disease due to the real factors (standing water, poor hygiene, overcrowded dwellings, and so on). Finally, there's the factor that's kept all sorts of erroneous beliefs alive — lack of a compelling alternative. The idea of strange-looking living creatures too small to see being the cause of infections wouldn't have gotten much of a hearing, not in the face of more tangible explanations.

That last point brings up another reason that error persists — the inability (or unwillingness) to realize that man is not the measure of all things. Unaided human perceptions on the human scale don't take you very far on the macro-scale of astronomy, or the micro-scale of cell biology (much less that of subatomic physics). To me, the story of science has been the story of augmenting our perceptions, and realizing that they had to be augmented in the first place.

Computer Scientist, Intentional Software; Former Chief Architect and Distinguished Engineer, Microsoft Corporation

One short answer is this: the Peripatetic Mechanics of Aristotle was probably the longest-running wrong scientific idea, persisting from Greek times practically until Newton. The reason for the longevity was that it (namely Aristotle's mechanics) corresponded well to the crude and complicated world around us: with two horses the heavy cart moves indeed faster (without careful measurements we could easily say: two times faster) than with just one horse. When you give a shove to something, it will start moving and then soon stop. Heavy things move down, light things (feathers, smoke) move up. The normal world is just not friendly to the kind of abstraction that allows the setting up of general natural laws like Newton's.

I am of course aware of the currently popular belief that "flat earth" was somehow a widely held "scientific" idea, but I do not know what evidence supports this belief. It was certainly not part of the Antique inheritance (the ancients had pretty good estimates for the diameter of the earth and excellent estimates for the ratio of the Earth's and Moon's diameters); it was not part of Aristotle, or Aquinas, or any of the authorities that the Church relied on. No doubt, there were some creation myths or fanciful publications that might have illustrated the world as being flat, but it is a stretch to call these "scientific" even by standards of the age, when learned men would have been able to refute such a thesis easily — and probably did as part of their exams.

With the geocentric world it is a different matter — geocentrism was indeed scientifically held (with Ptolemy being the best proponent) and it is indeed false — but not to the same extent as the Peripatetic Mechanics. The real issue was precision of prediction — and the complicated system of Ptolemy gave excellent results, indeed better results than Copernicus (which made the breakthrough idea of Copernicus a difficult sell — just put yourself into the shoes of someone in his time.)

Real improvement in precision came only with Kepler and the elliptical orbits, which were arrived at partly by scientific genius and a stickler's insistence on accuracy, and partly by mad superstition (music of the spheres, etc.). From his point of view, putting the coordinate system around the sun simplified his calculations. The final significance of putting the sun into the center was to be able to associate a physical effect — gravitation — with the cause of that effect, namely with the sun. But this did not really matter before Newton.

In all of these cases, a common thread seems to be that the "wrong" scientific ideas were held as long as the difference between "wrong" and "right" did not matter or was not even apparent given the achievable precision, or, in many cases, the differences actually favored the "wrong" theory — because of the complexity of the world, the nomenclature, the abstractions.

I think we are all too quick to label old theories "wrong," and with this we weaken the science of today. People say — with some justification from the facts as given to them — that since the old "right" is now "wrong," the "right" of today might also be tainted. I do not believe this: today's "right" is just fine, because yesterday's "wrong" was also much more nuanced and "more right" than we are often led to believe.

CEO, Managing Director, Intellectual Ventures; Former Director, Microsoft Research and Chief Technology Officer, Microsoft

Here is a short list:

1. Stress theory of ulcers — it turns out they are due to infection with Helicobacter pylori. Barry Marshall and Robin Warren won the Nobel Prize for that.

2. Continental drift was proposed in the 1920s-30s by Alfred Wegener, but was totally dismissed until the 1960s, when it ushered in plate tectonics.

3. Conventional belief was that the eye evolved many, many times. Then they discovered the PAX genes, which regulate eyes and are found throughout the animal kingdom — eyes evolved ONCE.

4. Étienne Geoffroy Saint-Hilaire was a French scientist who had a theory that invertebrates and vertebrates shared a common body plan. He was widely dismissed until the HOX genes were discovered.

Physicist, Director, Origins Initiative, Arizona State University; Author, Hiding in the Mirror

Intelligent design... special creation... the reason being that the age of the earth is so long that people didn't realize that evolution could occur.

Applied mathematician, Cornell University; Author, Sync

Another classic wrong belief is that light propagates through a medium, the "ether," that pervades the universe. This was believed to be true until the early 1900s because all other waves known at that time required a medium in which to propagate. Sound waves travel through air or water; the waves on a plucked guitar string travel down the string itself. Yet on the face of it, light seemed to need no medium — it could travel through seemingly empty space. Theorists concluded that empty space must not really be empty — it must contain a light-bearing medium, the "luminiferous ether".

But the ether was always a very problematic notion. For one thing, it had to be extremely stiff to propagate a wave as fast as light — yet how could empty space be "stiff"?

The existence of the ether was disproved experimentally by the Michelson-Morley experiment, and theoretically by Einstein's special theory of relativity.

Assistant Professor, MIT Media Lab; Faculty Associate, Harvard Center for International Development

The age of the earth... which was believed to be only a few thousand years old, due to biblical calculations, until Charles Lyell (who was a good friend of Darwin) began to come up with estimates of millions of years based on erosion.... the advanced age of the earth was heavily disputed by scientists, particularly by Lord Kelvin, who made calculations of the rate at which the earth must have cooled down and concluded that this could only have happened over a few tens of millions of years... he did not know about the radioactive decay taking place in the earth's core...

The model that was used to explain mountains was based not on tectonic plates, but rather on a shrinking earth, by assuming that as the earth cooled down it shrank and creased up....

The humours theory of disease vs. the germ theory of disease.

Basically... any change of paradigm that went on during the 19th century in England...

Cardiologist; Director, Scripps Translational Science Institute, La Jolla

In medicine there are many of these wrong scientific beliefs (so many it is frankly embarrassing). Here are a couple:

We were taught (in med school and as physicians) that when cells in the body differentiate to become heart muscle or nerve tissue/brain, they can never regenerate and there is no natural way for new cells/tissue to form. Wrong!! Enter the stem cell and regenerative medicine era.

Until the mid 1980s, a heart attack was thought to be a fait accompli — that there was nothing that could ever be done to stop the damage from occurring... just give oxygen, morphine, and say prayers. Then we discovered that we could restore blood supply to the heart and abort the heart attack or prevent much of the damage. The same is now true for stroke. It took almost 80 years for that realization to be made!

Neuroscientist; Scientific Director, Neuroimaging Center, University Medical Center Groningen

For a long time the brain was thought to contain separate parts designed for motor control and visual perception. Only in the 1990s, through the discovery of mirror neurons, did we start to understand that the brain did not work along such divisions, but was instead using motor areas also for perception and perceptual areas also for action.

I believe that this wrong belief was so deeply ingrained because of AI, in which there is no link between what a computer sees a human do and the computer's routines for moving a robot. In the human brain the situation is different: the movements we program for our own body look exactly the same as those other humans make. Hence, our motor programs and body are a match for those we observe, and so afford a strong system for simulating and perceiving the actions of others.

I call this the computer fallacy: thinking of the brain as a computer turned out to harm our understanding of the brain.

Philosopher; Dipartimento delle Arti e del Disegno Industriale, IUAV University Venice

My preference goes to Euclidean geometry. Its axioms were considered true for centuries on the basis of intuition (shall we say prejudice?) about space.

FRS; Professor, Security Engineering, Cambridge Computer Laboratory; Researcher in Security Psychology

In the field of security engineering, a persistent flat-earth belief is 'security by obscurity': the doctrine that security measures should not be disclosed or even discussed.

When Bishop Wilkins wrote the first book on cryptography in English in 1641, he felt the need to justify himself: "If all those useful Inventions that are liable to abuse, should therefore be concealed, there is not any Art or Science which might be lawfully profest". In the nineteenth century, locksmiths objected to the publication of books on their craft; although villains already knew which locks were easy to pick, the locksmiths' customers mostly didn't. In the 1970s, the NSA tried to block academic research in cryptography; in the 1990s, big software firms tried to claim that proprietary software is more secure than its open-source competitors.

Yet we actually have some hard science on this. In the standard reliability growth model, it is a theorem that opening up a system helps attackers and defenders equally; there's an empirical question whether the assumptions of this model apply to a given system, and if they don't then there's a further empirical question of whether open or closed is better.

Indeed, in systems software the evidence supports the view that open is better. Yet the security-industrial complex continues to use the obscurity argument to prevent scrutiny of the systems it sells. Governments are even worse: many of them would still prefer that risk management be a matter of doctrine rather than of science.


The first wrong notion that comes to mind, one that lasted centuries, is from Thales of Miletus, regarded as the "father of science" as he rejected mythology in favor of material explanations. He believed everything was water, a substance that in his experience could be viewed in all three forms: liquid, solid, gas. He further speculated that earthquakes were really waves and that the earth must be floating on water because of this.

The idea that matter is one thing in different appearances is regarded as true even today.

Psychologist, UPenn; Director, Penn Laboratory for Experimental Evolutionary Psychology (PLEEP); Author, Why Everyone (Else) is a Hypocrite

I'm guessing you'll get some of the more obvious ones, so I want to offer an instance a little off the beaten path. I came across it doing research of my own into this issue of closely held beliefs that turn out to be wrong.

There was a court case in New York in 1818 surrounding the question of whether a whale was a fish or a mammal. Obviously, we now know not only that there is a correct answer to this question (for a time this wasn't obvious) but also what that answer is (mammal, obviously). Even after some good work in taxonomy, the idea that a whale was a fish persisted. Why?

This one is probably reasonably clear. Humans assign animals to categories because doing so supports inferences. (There's great work by Ellen Markman and Frank Keil on this.) Usually, shared physical features support inferences about categorization, which then support inferences about form and behavior. In this case, the phylogeny just happens to violate what usually is a very good way to group animals (or plants), leading to the persistence of the incorrect belief.

Biologist, University College London; Author, Six Impossible Things Before Breakfast

That force causes movement (in fact, it causes acceleration). That heavy bodies fall faster than lighter ones.

Psychologist, Harvard University; Author, Changing Minds

Among cognitive psychologists, there is widespread agreement that people learn best when they are actively engaged with a topic, have to actively problem-solve and, as we would put it, 'construct meaning.' Yet, among individuals young and old, all over the world, there is a view that is incredibly difficult to dislodge. To wit: education involves a transmission of knowledge/information from someone who is bigger and older (often called 'the sage on the stage') to someone who is shorter, younger, and lacks that knowledge/information. No matter how many constructivist examples and arguments are marshaled, this view — which I consider a misconception — bounces back. And it seems to be held equally by young and old, by individuals who succeeded in school as well as by individuals who failed miserably.

Now this is not a scientific misconception in the sense of flat earth or six days of creation, but it is an example of a conception that is extraordinarily robust, even though almost no one who has studied cognition seriously believes it holds water.

Let me take this opportunity to express my appreciation for your many contributions to our current thinking.

Science Writer, Author, What Is Life?

Vitalism, the belief that living things embody a special, and not entirely natural, animating force or principle that makes them fundamentally different from nonliving entities. (Although rejected by scientists, I would hazard the guess that vitalism is not entirely dead today among many members of the general public.) This belief's persistence over the ages is explained by the obvious observable differences between life and nonlife.

Living things move about under their own power, they grow, multiply, and ultimately die. Nonliving objects like stones, beer bottles and grains of sand don't do any of that. It's the overwhelming nature of these perceptible differences that accounts for the belief's longevity. In addition, there is still no universally accepted scientific explanation of how life arose, which only adds to the impression that there's something scientifically unexplainable about life.

Evolutionary Biologist, Rutgers University; Coauthor, Genes In Conflict: The Biology of Selfish Genetic Elements

For more than 100 years after Darwin (1859), people believed that evolution favored what was good for the group or the species — even though Darwin explicitly rejected this error.

Probable cause: the false theory was just what you would expect people to propagate in a species whose members are concerned to increase the group-orientation of others.

Professor of Mathematical Physics, Tulane University; Author, The Physics of Christianity

I myself have been working on a book on precisely the same topic, but with a slightly different emphasis: why did scientists not accept the obvious consequences of their own theories?

Here are three examples of false beliefs long accepted:

(1) The false belief that stomach ulcers were caused by stress rather than bacteria. I have some information on this subject that has never been published anywhere. There is a modern Galileo in this story, a scientist convicted of a felony in criminal court in the 1960's because he thought that bacteria caused ulcers.

(2) The false belief that the continents do not move. The drifting continents were an automatic mathematical consequence of the fact that the Earth was at least 100 million years old, and the fact that the Earth formed by the gravitational collapse of a gas and dust cloud. One of Lord Kelvin's students pointed out the essential idea in a series of papers in Nature. This was long before Wegener.

(3) The false belief that energy had to be a continuous variable. James Clerk Maxwell, no less, realized that this was a false belief. The great mathematician Felix Klein, of Klein Bottle fame, discussed with Erwin Schrödinger the question of why the fact of quantized energy was not accepted in the 19th century.

Assistant Professor, Brain, Behavior, and Cognition; Social Psychology; Northwestern University

Early pioneering cultural anthropologists, such as Lewis Morgan, who penned the influential 1877 work Ancient Society, were heavily influenced by Darwinian notions of biological evolution and came to regard human culture as itself evolving linearly in stages.

Morgan in particular proposed the notion that all human cultures evolve in three basic stages, from savagery to barbarism to, finally, civilization, and that technological progress is the key to advancing from one stage to the next. Morgan was by no means an armchair academic; he lived with Native Americans and studied their culture extensively. Through these first-hand experiences, Morgan sought to reconcile what he observed to be vast diversity in human cultural practices, particularly between Native Americans and Europeans, with emerging ideas of Darwinian biological evolution.

Morgan was one of several anthropologists at the time who proposed various forms of unilinear cultural evolution, the idea that human culture evolved in stages from simple to more sophisticated and complex, which ultimately later became tied to colonialist ideology and social Darwinism.

Such dangerous ideas then became the catalyst for Franz Boas and other 20th century anthropologists to challenge Morgan's ideas with concepts such as ethnocentrism. By arguing that belief in the superiority of one's own culture, rather than scientific objectivity per se, had guided anthropological theories of unilinear evolution, Boas and his colleagues exposed an important human and scientific bias in the study of human culture that later gave way to revised theories of cultural evolution, namely multilinear evolution, and to the emergence of cultural relativism.

Professor of Physics, Stevens Institute of Technology; Author, Nuclear Weapons: What You Need to Know

It was generally believed until the work of Hubble that the universe was static and that the Milky Way was everything.


An example of a correct theory that was widely accepted by the public, then displaced by an alternate interpretation, which has since been problematized without resolution.

Although the 19th century idea that the fourth dimension was an extra dimension of space was in many senses correct, it was invalidated in the cultural imagination by Minkowski and Einstein's convincing and influential representation of time as the fourth dimension of space-time.

For example: the polychora in Picasso & Duchamp's early cubist works were far more directly influenced by Hinton's essays "What is the Fourth Dimension?" and "A Plane World" than by Minkowski & Einstein's work — but the general acceptance of Einstein's theory encouraged art historians to interpret cubist work as being directly influenced by the theory of relativity — which was entirely inaccurate. (This is discussed in depth in Henderson's definitive work The Fourth Dimension and Non-Euclidean Geometry in Modern Art.)

Overall, the cultural displacement of the theory of 4-D space has required a series of re-statements of the idea of the fourth dimension — which have so far failed to properly define the nature of the fourth dimension either in time or space to the larger public.

Social & Technology Network Topology Researcher; Adjunct Professor, NYU Graduate School of Interactive Telecommunications Program (ITP); Author, Cognitive Surplus

The existence of ether, the medium through which light (was thought to) travel.

Extra credit: It was believed to be true by analogy — waves propagate through water, and sound waves propagate through air, so light must propagate through X, and the name of this particular X was ether.

It's also my favorite because it illustrates how hard it is to accumulate evidence for deciding something doesn't exist. Ether was both required by 19th century theories and undetectable by 19th century apparatus, so it accumulated a raft of negative characteristics: it was odorless, colorless, inert, and so on.

When Michelson and Morley devised an apparatus sensitive enough to detect characteristic differences in the behavior of light based on the angle through which it traveled through the ether (relative to the earth's motion), and could detect no such differences, they spent considerable time and energy checking their equipment, so sure were they that ether's existence-by-analogy operated as something like proof. (Danny Kahneman calls this 'theory-induced blindness.')
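The size of the effect Michelson and Morley were hunting is easy to put in numbers. Here is a minimal sketch of the classical prediction, assuming round figures for the 1887 apparatus (the 11 m effective arm length and the sodium-light wavelength are approximations supplied for illustration, not taken from the text above):

```python
# Classical (ether-theory) prediction for the Michelson-Morley experiment:
# rotating the interferometer by 90 degrees should shift the interference
# pattern by roughly n = 2 L v^2 / (c^2 * wavelength) fringes.

L = 11.0             # effective arm length in meters (1887 apparatus, approximate)
v = 3.0e4            # Earth's orbital speed through the supposed ether, m/s
c = 3.0e8            # speed of light, m/s
wavelength = 5.9e-7  # sodium light, m

n = 2 * L * v**2 / (c**2 * wavelength)
print(f"expected fringe shift: {n:.2f}")  # a few tenths of a fringe
```

The predicted shift of several tenths of a fringe was well within the instrument's sensitivity; the observed shift was far smaller, which is why the null result was so troubling.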

And of course, the failure of ether to appear opened the intellectual space in which Einstein's work was recognized.

Psychologist; Computer Scientist; AI Researcher. Author, The Future of Decision-Making

The obvious candidate for failed theory in the world of learning is the stimulus-response theory (called behaviorism) that dominated psychology for many years. Yes, my dog gets excited when I make coffee because that is when he knows he will get a treat, but that kind of learned behavior is hardly all there is to learning.

In my first year on the faculty at Stanford, Ken Colby offered to share his time in front of the first-year computer science graduate students. At that time each professor in Artificial Intelligence would have a week of an introductory course in which he would say what he was doing. In their second quarter, students would choose one of those professors to work with in a seminar. Ken invited me to share his time and his seminar.

It took a while to get the "results." The next quarter we met with the students who had signed up for our seminar. While other seminars given by other professors had attracted one or two students, we had gotten about 20. Boy, was my ego fired up. I was succeeding at this new game. At least that was what I thought until all the students went around the room to say why they were there. They were all there because of Ken — none were there because of me.

I wondered what had happened. Ken had given a very glib funny speech without much content. He seemed to be a lightweight, although I knew he wasn't. I, on the other hand, had given a technical speech about my ideas about how language worked and how to get computers to comprehend.

I asked Ken about this and he told me: if you can say everything you know in an hour, you don't know much.

It was some of the best advice I ever got. You can't tell people everything you know without talking way too fast and being incomprehensible. Ken was about hoping to be understood and to be listened to. I was about being serious and right. I never forgot his words of wisdom. These days I am much funnier.

And, I realize that I do know a lot more than can fit into an hour long speech. Maybe then I actually didn't know all that much.

So, I learned a great deal from Ken's just-in-time advice, which I then had to think about. That is one kind of learning. And then, that experience became one of my stories and thus a memory (which is another aspect of learning). Learning is also about constructing explanations of events that were not predicted so that you can predict them better next time. And learning is about constructing and trading stories with others to give you nuances of experiences to ponder, which is a very important part of learning.

Learning has more aspects to it than just those things of course. Stimulus-response doesn't cover much of the turf.

Research Psychologist; Founder, Klein Associates; Author, The Power of Intuition

Here are some of my favorites:

1. Ulcers are created by stress. Some research on monkeys seemed to bear this out, and it fit a comforting stereotype that Type A individuals were burning up inside.

2. Genes are made of protein. This was more reasonable — the complexity of protein molecules seemed to match the complexity of the proteins that genes were building.

3. Yellow Fever is caused by miasma and filth. I think this was sustained by a natural repugnance when entering homes that smelled bad — a feeling of "wow, that can't be good — I need to get out of here as soon as possible." Plus a class judgment that poor people live in less sanitary conditions and are more susceptible. Plus a belief that the mosquito theory had been discredited. (In fact, the study on mosquitoes failed to take into account a 12-day incubation period.)

4. Cholera is caused by miasma and filth. Ditto. The part I can't really understand is why John Snow was so effective in changing this mindset about cholera in England, and why his views spread quickly to the U.S., whereas 50 years later, even after Walter Reed and his staff eliminated Yellow Fever from Cuba, Gorgas (the subordinate who had been in charge of eliminating Yellow Fever in Havana) was so unsuccessful in convincing the authorities when he was subsequently posted to Panama to control Yellow Fever during the building of the canal.

Consultant, Adaptive Optics; Adjunct Professor of Anthropology, University of Utah; Coauthor, The 10,000 Year Explosion: How Civilization Accelerated Human Evolution

Educated types in the western world have known the shape of the Earth for a long time, about 2500 years — the idea that they believed in a flat Earth is a canard. The notion that the Earth was the center of the universe was popular for much longer, largely because stellar parallax was too small to measure, since distances to the stars are enormous compared to the radius of the Earth's orbit. Many people will undoubtedly tell you this.
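The scale of the parallax problem is easy to quantify: even for the nearest stars, the annual parallax is under an arcsecond, far below naked-eye (and early telescopic) measurement precision. A quick sketch, using the modern distance to the Alpha Centauri system (a value supplied here for illustration):

```python
import math

AU = 1.496e11          # Earth-Sun distance, meters
LIGHT_YEAR = 9.461e15  # meters
d = 4.37 * LIGHT_YEAR  # distance to Alpha Centauri, modern value

# Annual parallax: the angle subtended by 1 AU at the star's distance.
parallax_rad = math.atan(AU / d)
parallax_arcsec = math.degrees(parallax_rad) * 3600
print(f"{parallax_arcsec:.2f} arcseconds")  # under one arcsecond
```

A shift this small was not measured until Bessel's work in 1838, which is part of why the geocentric model survived so long: its most direct observational refutation was simply out of reach.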

One favorite is Helicobacter pylori as the main cause of stomach ulcers. This was repeatedly discovered and then ignored and forgotten: doctors preferred 'stress' as the cause, not least because it was undefinable. Medicine is particularly prone to such shared mistakes. I would say this is the case because human biology is complex, experiments are not always permitted, and MDs are not trained to be puzzle-solvers — instead, to follow authority. A lot of this traces back to medical traditions, which developed over long periods during which medicine was an ineffective pseudoscience. Freudian analysis was another such madness of crowds.

I would guess that most basic anthropological doctrine is false — for example, the 'psychic unity of mankind'. But then most practitioners don't really pretend to do science.

One could go on and on!

Mathematician and Economist; Principal, Natron Group

The modern textbook example of groupthink within fundamental physics is likely the so-called Tau-Theta puzzle of the 1950s. The Tau and Theta particles were seen to be as physically indistinguishable as Clark Kent and Superman, except for the ways in which they disintegrated. Yet to suggest that they were the same particle required the mental leap needed to assert that natural law carries a kind of asymmetric beauty mark which could be used to distinguish processes in the real world from their reflections in a pristine mirror. After experimenters at Columbia finally indicated in 1956 that the Tau and Theta were indeed the same particle, physicists came to see that for decades, no one had really bothered to check whether something as profoundly dramatic as an asymmetric universe was hiding within plain sight and easy reach.

An even more compelling example of group blindness drawn from engineering is the bizarre case of the Rollaboard suitcase. In the dawning age of jet travel, it seemed no one could find a way to create stable wheeled luggage. Typical early designs featured leashes and tiny external casters on which horizontal luggage would briefly roll before tipping over. It was only in 1989 that Northwest Airlines pilot Robert Plath solved this problem for travelers with the now ubiquitous vertical design of the Rollaboard, with built-in wheels and telescoping handles. What is fascinating about this example of groupthink is that every recent scientific genius who struggled with luggage while on the lecture circuit had missed this simple, elegant idea, as it required no modern technological advance or domain-specific expertise.

Journalist, The Economist

I assume someone might already have written in to suggest "the belief that physical traits acquired during one's lifetime could be passed on to children" — e.g., that a person who became fat through overeating would thereby have fat children (and not because he had genes for obesity). This was apparently even believed by Darwin, I just read, before the discovery and understanding of genes.

Classicist; Provost, Georgetown University; Author, The Ruin of the Roman Empire

As a classicist, I feel I know too many examples! Ancient medicine and ancient astronomy in particular were full of truths, quite true and valid within the framework within which they were constructed, that now appear as utter nonsense. I would put at the top of my list, however, the science of astronomy — not for the Ptolemaic mathematical workings-out, but for the solid, serious, scientific astrological content. That is to say, it's a beautiful example of a paradigm, in Kuhnian terms, that made perfect sense at the time, that was the foundation for many further advances, that led to undoubtedly serious science, that validated itself by, e.g., the way it allowed you to predict eclipses (how could it not be science?), and that just fell apart at the touch of a serious thought. To compare large with small, I would put it next to the science of ulcer medicine 60 years ago, which made similar perfect sense, was all driven by diet and stress, and was a continually refining science — falling apart more or less instantaneously, what, 25 years ago, with the discovery of the link to H. pylori. What the two have in common is that a focus on phenomena (that is, the things that appear, the surface data) produces science, but each time you go a step beneath phenomena to mechanisms, new science happens. That's when the impossible becomes possible.

Science Editor, The Economist

Believing that people believed the Earth was flat is a good example of a modern myth about ancient scientific belief. Educated people have known it was spherical (and also how big it was) since the time of Eratosthenes. That is pretty close to the beginning of any system of thought that could reasonably merit being called scientific...
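Eratosthenes' size estimate was simple geometry: the noon shadow angle at Alexandria, when the sun stood directly overhead at Syene, was about 7.2 degrees (1/50 of a circle), and the Alexandria-Syene distance was reckoned at about 5,000 stadia. A sketch of the arithmetic, using one conventional modern guess for the stadion's length (the exact length is debated):

```python
shadow_angle_deg = 7.2   # sun's angle from vertical at Alexandria
distance_stadia = 5000   # Alexandria to Syene, as Eratosthenes reckoned it
stadion_km = 0.1575      # one common modern estimate; the true value is uncertain

# The shadow angle is the fraction of a full circle separating the two cities,
# so the circumference is that fraction's reciprocal times the distance.
circumference_stadia = (360 / shadow_angle_deg) * distance_stadia
circumference_km = circumference_stadia * stadion_km
print(f"{circumference_stadia:.0f} stadia = {circumference_km:.0f} km")
```

With this choice of stadion the answer lands within a few percent of the modern figure of about 40,075 km, though how accurate Eratosthenes really was depends entirely on which stadion he meant.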

One that was long thought to be true, but isn't, is the spontaneous generation of life. I've never quite understood how that squared with life being divinely created. But the whole pre-Pasteur thing was definitely a widely held, incorrect belief...

Psychologist, University of Virginia; Author, The Happiness Hypothesis

The closest thing to a persistent flat earth belief in psychology is probably the view that experiences in the first five years of life largely shape the personality of the adult. (The child is father to the man, as Freud said). It's now clear that experiences that affect brain development, such as some viral diseases or some head injuries, can indeed change adult personality. Also, extreme conditions that endure for years or that interfere with the formation of early attachments (e.g., an abusive parent) can also have lasting effects. But the idea that relatively short-lived experiences in the first few years — even traumatic ones, and even short-lived sexual abuse — will have powerful effects on adult personality... this just doesn't seem to be true. (Although such events can leave lasting traces on older children). Personality is shaped by the interaction of genes with experience; psychologists and lay people alike long underestimated the power of genes, and they spent too much time looking at the wrong phase of childhood (early childhood), instead of at the developmental phases that matter more (i.e., the prenatal period, and adolescence).

Why is early childhood such a draw when people try to explain adult personalities? I think it's because we think in terms of stories, and it's almost impossible for us NOT to look back from Act III (adulthood) to early childhood (Act I) when we try to explain how someone turned out to be a hero or a serial killer. In stories, there's usually some foreshadowing in Act I of events to come in Act III. But in real life there is almost never a connection.

Managing Director in Excel Medical Ventures; Chairman and CEO of Biotechonomy LLC; Author, As the Future Catches You

We have acted, with good reason, as if human beings are all alike. Given the history of eugenics, this has been a good and rational position and policy. But we are entering an era where we recognize that there are more and more differences in how a particular medicine affects particular groups of people. The same goes for foods, pollutants, viruses, and bacteria. We are beginning to recognize that we react to, and are at differential risk of catching, diseases like AIDS, malaria, and anemias. And just this month we began to get a glimpse of the first thousand human genomes. These will soon number in the hundreds of thousands. Are we ready should these initial gene maps show that there are real and significant differences between groups of human beings?

Anthropologist; Visiting Professor of Psychology and Public Policy at the University of Michigan; Presidential Scholar in Sociology at the John Jay College of Criminal Justice, New York City; Author, Talking to the Enemy

Anglo-American empiricists and communists alike believed that human minds were almost infinitely malleable, and learned the structure and content of thoughts and ideas based on the frequency of events perceived and on the nearness of events to one another (if one kind of event frequently precedes a second kind of event then the first is likely the cause of the other). Rewards and punishments ("carrots and sticks") supposedly determine which events are attended to.

Many Continental thinkers and Fascists believed that fundamental ideas of science, art and the "higher thoughts" of European civilization were either innate or inherently easy to learn only for a biologically privileged set of human beings. As with most earlier views of human cognition and learning, both of these philosophies and their accompanying pseudo-sciences of the mind were based on social and political considerations that ignored, and indeed effectively banned, reasoned inquiry and evidence as to the nature of the human mind.

That is why, after centuries of science, the study of the mind is still in a foetal stage, and actual progress has been limited to fundamental discoveries that can be counted on one hand (for example, that human linguistic competence — and thus perhaps other fundamental cognitive structures — is universally and innately fairly well-structured; or that human beings do not think like Markov processes, logic machines, or as rational economic and political actors ought to).

Developmental Biologist; Author, The Sense of Being Stared At

In the nineteenth century, many scientists were convinced that the course of nature was totally determinate and in principle predictable in every detail, as in Laplace's famous fantasy of scientific omniscience: "Consider an intelligence which, at any instant, could have a knowledge of all the forces controlling nature together with the momentary conditions of all the entities of which nature consists. If this intelligence were powerful enough to submit all these data to analysis it would be able to embrace in a single formula the movements of the largest bodies in the universe and those of the lightest atoms; for it nothing would be uncertain; the past and future would be equally present for its eyes."

T.H. Huxley even imagined that the course of evolution was predictable: "If the fundamental proposition of evolution is true, that the entire world living and not living, is the result of the mutual interaction, according to definite laws, of the forces possessed by the molecules of which the primitive nebulosity of the universe was composed, it is no less certain that the existing world lay, potentially, in the cosmic vapour, and that a sufficient intellect could, from a knowledge of the properties of the molecules of that vapour, have predicted, say, the state of the fauna of Great Britain in 1869."

With the advent of quantum theory, indeterminacy rendered the belief in determinism untenable, and in the neo-Darwinian theory of evolution (which T.H. Huxley's grandson, Julian, did so much to promote) randomness plays a central role through the chance mutations of genes.

Professor in Columbia University's Industrial Engineering and Operations Research Department; Partner at Prisma Capital Partners; Author, My Life as a Quant

1. For years, running shoe companies have assumed without evidence that more is better, that thicker padded soles are better at preventing injuries in runners. In the 70s, shoe soles grew Brobdingnagian. Now, recent research confirms that running barefoot and landing on your forefoot, on any surface, even one as hard as the road to hell, produces less shock than running and unavoidably landing on your heels in rigid padded stabilized shoes.

2. For years optometrists have given small children spectacles at the first hint of nearsightedness. But ordinary unifocal lenses modify not only their accommodation to distance vision but to near vision too, where they don't need help. Now there is evidence that giving near-sighted kids bifocals that correct only their distance vision and not their close-up vision seems to make their nearsightedness progress less rapidly.

Professor of Journalism at New York University; Author, Proofiness

Caloric, phlogiston, and ether immediately come to mind, but I'm particularly fond of one consequence of Aristotelian mechanics: the assertion that there is no such thing as a vacuum.

The concept of the void conflicted with the way that Aristotle conceived of motion; admitting a void into his universe quite simply broke all of his models about the nature of matter and the way objects move. (A rock, say, suspended in a vacuum, would not be able to fall to its proper place at the center of the world, as his laws said it must.)

In the West, the consequent misconception — that nature so abhors a vacuum that it cannot exist under any circumstance — lasted until Torricelli and Pascal disproved it in the 17th century.

Professor of Anthropology and Adjunct Associate Research Scientist, Museum of Anthropology at the University of Michigan; Author, Race and Human Evolution

Creationism's stepsister, intelligent design, and allied beliefs have been held true for some time, even as the mountain of evidence supporting an evolutionary explanation for the history and diversity of life continues to grow. Why has this belief persisted? There are political and religious reasons, of course, but history shows that neither politics nor religion requires a creationist belief in intelligent design.

I think the deeper answer lies elsewhere: in the way children categorize the world into a hierarchy of types of inanimate and living things (and, for that matter, types of people), and in the rigid categorization this leaves in adults, which stands in the way of accepting biological explanations showing that the hierarchy can develop from natural laws, including randomness, and that categories may originate and change by natural laws within a hierarchical structure. Could a draw poker hand improve without divine intervention? Could Plato's precept of ideals have survived a trip to Art Van's?

Professor Emeritus of Chemistry and Senior Research Scientist at New York University; Author, Planetary Dreams

For many centuries, most scientists and philosophers believed that dead or inanimate matter could quickly transform itself into living beings, just as the reverse can occur quite rapidly. This belief, rapid spontaneous generation, was supported by simple observation of common events. Fireflies emerged from the morning dew, bacteria appeared in sterilized broths and small animals arose from mud at the bottom of streams and ponds.

In Shakespeare's "Antony and Cleopatra" Lepidus told Antony "Your serpent of Egypt is born of the mud, by the action of the Sun, and so is your crocodile." Among the notables who endorsed this theory were Aristotle, Thomas Aquinas, Francis Bacon, Galileo and Copernicus. Many carefully controlled experiments, culminating in the work of Louis Pasteur, were needed to negate this idea.

Author, No Two Alike

The apple doesn't fall far from the tree. In other words, people tend to resemble their parents. They resemble their parents not only in physical appearance but also, to some degree, in psychological characteristics.

The question is: Why? Two competing answers have been offered: nature (the genes that people inherit from their parents) and nurture (the way their parents brought them up). Neither of these extreme positions stood up to scrutiny and they eventually gave way to a compromise solution: nature + nurture. Half nature, half nurture. This compromise is now an accepted belief, widely held by scientists and nonscientists alike.

But the compromise solution is wrong, too. Genes do indeed make people turn out something like their parents, but the way their parents brought them up does not. So nature + nurture is wrong: it's nature + something else.

The evidence has been piling up since the 1970s; by now it's overwhelming. And yet few people outside of psychology know about this evidence, and even within psychology only a minority have come to terms with it.

You asked for "examples of wrong scientific beliefs that we've already learned were wrong." But who is "we"? A few thousand people have learned that the belief in nature + nurture is wrong, but most people haven't.

Computer Science and Complex Systems Professor at Brandeis University

A persistent belief is that human symbolic intelligence is the highest form of intelligence around. This leads directly to both creationism and good old-fashioned AI which seeks to model cognition using Lisp programs.

Evolution can design machines of such great complexity that the space shuttle with half a million parts looks like a tinker toy construction. In order to explain the design intelligence of evolution, most Republicans are convinced that a superintelligent creator was involved. Developmental intelligence which manufactures machines with 10 billion moving parts without any factory supervisors is another area where nature outstrips the best human performance. Immunological Intelligence, telling self from non-self, is another AI-complete problem. And human intelligence itself is so vastly complex that we've made up stories of conscious symbol processing, like logic and grammar, to try to explain what goes on in our heads.

The mind, like the weather, envelops the brain like a planet and requires dynamical and integrated explanations rather than just-so stories.

Psychologist and Ex-Parapsychologist; Author, Consciousness: An Introduction

My favourite example is the hunt for the "élan vital" or life force. People seemed to think that — given living things behave so very differently from non-living things — there must be some special underlying force or substance or energy or something that explains the difference, something that animates a living body and leaves the body when it dies.

Of course many people still believe in various versions of this, such as spirits, souls, subtle energy bodies and astral bodies, but scientists long ago gave up the search once they realised that being alive is a process that we can understand and that needs no special force to make it work.

I think this was believed to be true for two reasons:

1. Explaining how living things work is not trivial — it has required understanding heredity, homeostasis, self-organisation and many other factors.

2. (perhaps more important) Human beings are natural dualists. From an early age children begin thinking of themselves not as a physical body but as something that inhabits a physical body or brain. We feel as though we are an entity that has consciousness and free will even though this is all delusion. I suggest that this delusion of duality is also the underlying cause of the hopeless hunt for the life force.

Author, The Shallows

I think it's particularly fascinating to look at how scientific beliefs about the functioning of the human brain have progressed through a long series of misconceptions.

Aristotle couldn't believe that the brain, an inert grey mass, could have anything to do with thought; he assumed that the heart, hot and pulsing, must be the source of cognition, and that the brain's function was simply to cool the blood.

Descartes assumed that the brain, with its aperture-like "cavities and pores," was, along with the heart, part of an elaborate hydraulic system that controlled the flow of "animal spirits" through the "pipes" of the nerves. More recently, there was a longstanding belief that the cellular structure of the brain was essentially fixed by the time a person hit the age of 20 or so; we now know, through a few decades' worth of neuroplasticity research, that even the adult brain is quite malleable, adapting continually to shifts in circumstances and behavior.

Even more recently there's been a popular conception of the brain as a set of computing modules running, essentially, genetically determined software programs, an idea that is now also being chipped away by new research. Many of these misconceptions can be traced back to the metaphors human beings have used to understand themselves and the world (as Robert Martensen has described in his book The Brain Takes Shape).

Descartes' mechanistic "clockwork" metaphor for explaining existence underpinned his hydraulic brain system and also influenced our more recent conception of the brain as a system of fixed and unchanging parts.

Contemporary models of the brain's functioning draw on the popular metaphorical connection between the brain and the digital computer. My sense is that many scientific misconceptions have their roots in the dominant metaphors of the time. Metaphors are powerful explanatory tools, but they also tend to mislead by oversimplifying.

Founding and Senior Faculty member at Perimeter Institute for Theoretical Physics in Waterloo, Canada; Adjunct Professor of Physics at the University of Waterloo; Author, The Trouble With Physics

Perhaps the most embarrassing example from 20th-century physics of a false but widely held belief was the claim that von Neumann had proved, in his 1932 textbook on the mathematical foundations of quantum mechanics, that hidden variables theories are impossible. These would be theories that give a complete description of individual systems, rather than the statistical view of ensembles described by quantum mechanics. In fact de Broglie had written down a hidden variables theory in 1926 but abandoned work on it because of von Neumann's theorem. For the next two decades no one worked on hidden variables theories.

In the early 1950s David Bohm reinvented de Broglie's theory. When his paper was rejected on the grounds that von Neumann had proved what it claimed was impossible, Bohm read the proof and easily found a fallacy in von Neumann's reasoning. Indeed, at least one paper pointing out the fallacy had been published in the 1930s, only to be ignored. The result was that progress on hidden variables theories in general, and on de Broglie and Bohm's theory in particular, was delayed by several decades.

An example in economics is the notion that a market can usefully be described as having a single, stable equilibrium to which it is driven by market forces. As described by neoclassical models such as the Arrow-Debreu model of general equilibrium, equilibrium is defined as a set of prices at which demand for every good equals supply, as a result of each consumer maximizing their utility and each producer maximizing their profit. A basic result is that such equilibria are Pareto efficient, meaning that no one's utility can be increased without decreasing somebody else's. Furthermore, if the economy is in equilibrium there are no path-dependent effects; moreover, it can be argued that market prices in equilibrium are perfectly rational and reflect all relevant information.

If equilibrium were unique, then one could argue that the most ethical thing to do is to leave markets free and unregulated so that they can find their point of equilibrium, where efficiency and utility are maximized. This kind of thinking to some extent motivated the choice to leave financial markets under-regulated, resulting in the recent economic crisis and current difficulties.

However, it was learned in the 1970s that even if efficiency and equilibrium are useful notions, the idea that equilibria are unique is not true in generic general equilibrium models. The Sonnenschein-Mantel-Debreu theorem of 1972 implies that equilibria are in general highly non-unique, and it is not difficult to invent models in which the number of equilibria scales with the number of producers. But if there are multiple equilibria, most will not be stable. Moreover, supply and demand are balanced in each of the many equilibria, so market forces do not suffice to explain which equilibrium the market is in, or to pick which would be preferred. The theoretical consequence is that path-dependent effects, which determine which of the many equilibria the market is in, must be important; the political consequence is that there is no ethical argument for leaving markets unregulated. Since then, some of the more interesting work in economics studies issues of path dependence and multiple equilibria.
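The flavor of the non-uniqueness point can be seen in a toy sketch (an illustration I am inventing here, not a model from the literature): a single market whose excess demand vanishes at three different prices. Supply equals demand at all three, yet under simple tatonnement dynamics, where price rises while excess demand is positive, only the equilibria at which excess demand is falling are stable, so market forces alone cannot say which one prevails.

```python
# Toy one-good market with three equilibrium prices (chosen arbitrarily
# at p = 1, 2, 3). Excess demand z(p) = demand - supply.
def excess_demand(p):
    return -(p - 1) * (p - 2) * (p - 3)

def is_equilibrium(p):
    # Supply equals demand where excess demand is zero.
    return abs(excess_demand(p)) < 1e-9

def is_stable(p, eps=1e-4):
    # Under tatonnement (dp/dt = z(p)), an equilibrium is stable only
    # if excess demand is decreasing as price rises through it.
    slope = (excess_demand(p + eps) - excess_demand(p - eps)) / (2 * eps)
    return slope < 0

for p in (1.0, 2.0, 3.0):
    print(p, is_equilibrium(p), is_stable(p))
# Prices 1 and 3 are both stable equilibria; price 2 is unstable.
# Nothing in the supply-demand data picks between the two stable points:
# only the path (the price history) determines where the market ends up.
```

The point of the sketch is only that "supply equals demand" holds equally at every equilibrium, so something outside the equilibrium conditions, such as path dependence, must do the selecting.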

I cannot comment on why economists made the mistake of thinking about market equilibrium as if it were unique. I do think I have some insight into why a false belief about the possibility of alternatives to quantum mechanics could persist for more than two decades. During this period there was rapid progress in the application of quantum mechanics to a wide set of phenomena from astrophysics to nuclear and solid state physics.

Meanwhile the most popular interpretation of quantum mechanics was Bohr's, which is now hardly taken seriously by anyone. Those who concentrated on the foundations of the subject were left behind, especially as it was convenient for the progress that was being made, to believe that the foundations were surer than in fact they were. Perhaps there are periods in science where it makes sense for most scientists to sweep foundational worries under the carpet and make progress on applications, postponing the inevitable reckoning with the inconsistencies to a time when there are better hints from experiment.

Associate Professor in the School of Information at UC Berkeley, Affiliate appointment in Computer Science Division

In the early days of the field of Artificial Intelligence, researchers thought that it would not be terribly difficult to implement a vision recognition or language understanding program. According to a July 1966 memo from the MIT Project MAC archive, Seymour Papert announced a summer project whose goal was to construct "a significant portion of the vision system." Other early leaders of AI were also optimistic, including Herbert Simon, who was quoted in 1965 as saying that "machines will be capable, within twenty years, of doing any work a man can do," and Marvin Minsky, who is credited with saying that "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved."

What these misperceptions have in common is an underestimation of the complexity of the brain.

Professor of Physics and Astronomy at the University of Pennsylvania; Author, Ordinary Geniuses

I would not count the flat earth as a wrong theory believed to be true by everybody, since the ancient Greeks, for example, knew the Earth was a sphere and had even measured its circumference.

A classic example of a wrong theory is that of phlogiston, a substance supposed to be released in combustion. There were also variations going by the name of caloric. A second wrong theory is that of the luminiferous aether, a substance through which light was thought to be transmitted. Chemical experiments disproved the first, and the Michelson-Morley experiment, among others, disproved the second.

There are of course also numerous wrong theories/beliefs regarding spontaneous generation of life disproved in the 17th century by Francesco Redi and ultimately by Louis Pasteur in the 19th.

I have a small favorite: the belief that body temperature varied with climate, disproved after the invention of the thermometer in the early 17th century.

Science Writer; Author, Soul Made Flesh

"This laxe pithe or marrow in man's head shows no more capacity for thought than a Cake of Sewet or a Bowl of Curds."

This wonderful statement was made in 1652 by Henry More, a prominent seventeenth-century British philosopher. More could not believe that the brain was the source of thought. These were not the ravings of a medieval quack, but the argument of a brilliant scholar who was living through the scientific revolution. At the time, the state of science made it very easy for many people to doubt the capacity of the brain. And if you've ever seen a freshly dissected brain, you can see why. It's just a sack of custard. Yet now, in our brain-centered age, we can't imagine how anyone could think that way.

Independent Researcher; Author, Dinosaurs of the Air

Richard Thaler seems to think that the concept of a flat earth was widely held for a long time. This is not really correct. Mariners have long understood that the earth is strongly curved and possibly a sphere. Ships disappear down over the horizon (I once saw this effect on the Chesapeake Bay and was shocked at how fast the hull of a giant container ship dropped out of sight while the top of the superstructure was still easily visible). Polaris gets lower on the horizon as one sails south and eventually disappears, and so on. Over 2000 years ago the circumference of the planet was pretty accurately calculated by Eratosthenes using some clever geometry and sun angle measurements. This knowledge may have been lost in the west in the dark ages, but it was well known to European elites after communications with Constantinople, Alexandria, and the rest of the eastern Mediterranean improved following the Crusades.
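Eratosthenes' "clever geometry" reduces to a one-line proportion, which can be sketched with the conventionally reported figures (the 7.2-degree shadow angle, the 5,000-stadia distance, and a stadion of roughly 185 m are the standard textbook account, not anything asserted above):

```python
# Hypothetical reconstruction of Eratosthenes' calculation, using the
# classically reported numbers. At noon on the solstice the sun stood
# directly overhead at Syene, while at Alexandria a vertical gnomon cast
# a shadow at about 7.2 degrees; the cities were reckoned 5000 stadia apart.
shadow_angle_deg = 7.2       # sun's angle from the vertical at Alexandria
distance_stadia = 5000       # Syene to Alexandria, in stadia

# The shadow angle equals the arc of the Earth's circumference spanned by
# the two cities, so the full circle is (360 / angle) times the distance.
circumference_stadia = (360 / shadow_angle_deg) * distance_stadia
print(circumference_stadia)  # the canonical 250,000 stadia

# Taking a stadion of ~185 m gives roughly 46,000 km, in the right
# ballpark of the modern value of about 40,000 km.
circumference_km = circumference_stadia * 185 / 1000
print(circumference_km)
```

The accuracy of the final figure depends entirely on which ancient stadion one assumes, which is why estimates of how close Eratosthenes came still vary.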

When Columbus was trying to get a government to cough up the money for his trip west, he was not trying to convince patrons that the planet was a sphere. The problem was that the experts told the people with the money that the distance from Europe to Asia across the super ocean separating them was 14,000 miles, with no visible means of logistical support during the voyage (the perfect Bible did not mention extra continents being in the way). However, some works had come out claiming that Eratosthenes had messed up and the planet was much smaller (I've heard this was based on Biblical passages, and Columbus was very devout, but am not sure about that). Columbus figured it was 3-4,000 miles to the west, a skip and a hop compared to the horrendous around-Africa route. When the Spanish monarchs finally kicked the last Muslims out of Iberia and were having fun picking on Jews, they decided, what the heck, to see what this Columbus fellow could do; the cost was just three little cargo vessels and their crews.

The story about the crews getting upset about sailing off the edge of the earth is probably a myth, since they knew better. The notion that Columbus was fighting the false knowledge of the flat earth apparently was invented in the late 1800s, in an effort to make him a great American symbol of the progress of science over superstition in connection with the 1892 celebrations.

As for a distinct example of lots of people believing in something that is scientifically wrong, the best examples I can think of are the various creation myths. These arose long before the advent of modern science and continue today in the various forms of creationism.

Psychologist, UC, Berkeley; Author, The Philosophical Baby

There is interesting evidence that many once popular and evidence-resistant scientific belief systems are also developed spontaneously by many children. For example, children seem to develop a "vitalistic" theory of intuitive biology, rather like the Chinese concept of "chi", at around age 5, independently of what they are taught in school. Similarly, school-age children, even those with an explicitly atheist upbringing, develop ideas about God as an explanatory force at about 7, as part of an "intuitive teleology" that explains events in terms of agency.

The psychologist Tania Lombrozo has shown that even Harvard undergraduates who endorse evolution consistently interpret evolutionary claims in a teleological rather than mechanistic way (e.g., giraffes try to reach the high leaves and so develop longer necks). And we have shown that six-year-olds develop a notion of fully autonomous "free will" that is notoriously difficult to overturn. There is also a lot of evidence that scientific theories are built out of these everyday intuitive theories.

If, as we think, children use Bayesian techniques to develop intuitive theories of the world, based on the evidence they see around them, then it might, in some sense, be rational to hold on to these beliefs, which have the weight of accumulated prior experience. Other scientific beliefs, without a history of everyday confirmation, might be easier to overturn based on just scientific evidence alone.
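That asymmetry can be made concrete in a minimal sketch (the counts below are entirely made-up numbers, chosen only to illustrate the idea): treat an intuitive theory as a Bernoulli hypothesis with a Beta prior, where the prior's pseudo-counts stand in for accumulated everyday experience.

```python
# Beta-Bernoulli conjugate updating: the posterior mean is simply the
# confirming count over the total count, prior pseudo-counts included.
def posterior_mean(prior_confirm, prior_disconfirm, new_confirm, new_disconfirm):
    confirm = prior_confirm + new_confirm
    disconfirm = prior_disconfirm + new_disconfirm
    return confirm / (confirm + disconfirm)

# A belief backed by thousands of everyday confirmations barely moves
# after ten disconfirming observations...
entrenched = posterior_mean(5000, 50, 0, 10)

# ...while a belief with no such history is overturned by the same data.
fresh = posterior_mean(5, 5, 0, 10)

print(round(entrenched, 3))  # still about 0.988
print(round(fresh, 3))       # down to 0.25
```

On this picture, holding on to a well-confirmed intuitive theory is not stubbornness but the ordinary weight of a strong prior; beliefs without that history of everyday confirmation move much more readily.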

Science Historian; Author, Darwin Among the Machines

Many (but not all) scientists assumed the far side of the moon would turn out to look much the same as the side we are familiar with. "I was very enthusiastic about getting a picture of the other side of the moon," Herbert York, former advisor to President Eisenhower, told me in 1999. "And there were various ways of doing it, sooner or later. And I argued with Hornig [Donald Hornig, Chairman of the President's Science Advisory Committee] about it and he said, 'Why? It looks just like this side.' And it turned out it didn't."

Evolutionary Biologist, University of Reading; Author, The Oxford Encyclopedia of Evolution

The traditional Judeo-Christian belief is that women have one more rib than men, Adam having given one of his to make Eve. I think this belief survived until the 16th century!

Reader in Archaeology at the University of Bradford, UK; Editor-in-Chief of the Journal of World Prehistory; Author, The Artificial Ape

In Europe it used to be thought that what are now understood to be prehistoric stone tools originated naturally rather than technologically. In line with other objects displaying regular forms, such as ammonites or hailstones, and in view of the fact that things like meteorites do sometimes fall out of the sky, the faceted tear-drops of flint that we now call Acheulian hand axes were believed to coalesce in rain clouds.

By contrast, barbed and tanged arrowheads were imagined to be supernatural – 'elfshot'. The overarching Biblically-based metaphysic of a world not yet 6000 years old denied a plausible timeframe for any radically different human past.

It took two things for the natural / supernatural categories to be replaced.

First, encounters during the 'Age of Discovery' with indigenous peoples without the use or knowledge of metals allowed the mystery objects to be viewed analogously (a perspective latent, yet warped, in the 'elfshot' idea).

Second, the application of uniformitarian principles in geology challenged the diluvial (flood-produced) explanation of landforms and pointed to a very ancient earth. Thus a human antiquity of potentially vast length became imaginable in which prehistoric European counterparts to the ethnographically known 'stone age' societies could be postulated.

Author, Us and Them

The belief (or maybe, more accurately, the unstated intuition) that objects must be in direct contact to influence one another.

In the 17th century, that led skeptics to scoff at Newton's theory of gravity. Proper science was supposed to map how matter pushes against matter to cause various effects. Yet in this theory there was no physical contact, just spooky action at a distance. Almost a century after Newton, rival theories of gravity were still being proposed to remedy this defect.

Similarly, many scientists (including Newton) long theorized about aether, the substance that carries light, in part because, well, if light arrives, it must be borne by something. Perhaps the modern incarnations of this belief are claims that psychological and biological sciences won't be really sound until they can be restated in terms of chemistry or physics, i.e., as statements about matter interacting with itself.

The reason the belief persisted? In my first example, I believe, it was because it sounded, in context, "more scientific" than its predecessors and its challengers. To an advanced 17th-century thinker, for example, the idea that the sun can influence the Earth without physical contact probably smacked of medieval physics. The best and brightest had recently overthrown the notion that objects fall without a material cause, so why go backwards?

I think this is a common reason for the endurance of theories in the face of contrary evidence: The theory has the patina of rigor and seriousness, while alternatives are, rightly or wrongly, associated either with (supposedly) defunct ideas or with intellectual confusion. We don't want to march backwards and we don't want to get lost, so we'd better stick with what we have.

To bring this back round to your own work (and to fend off accusations that I've built a straw man), I have been told more than once that "rational economic man" theories of behavior, flawed as they are, at least have the virtue of coherence and rigor, and therefore can't be jettisoned. "It may be flawed science, but it's the only science we have" — I think that has been a motive for clinging to beliefs long past the point where we, from our vantage point, think they were justified.

Board of Governors Professor of Cognitive Science, Center for Cognitive Science and Department of Psychology, Rutgers University

Why does it take so long to accept new views even when the evidence is clear? Wittgenstein tells the following anecdote (I presume to show that first impressions about why views change are generally wrong):

Two philosophers meet in the hall. One says to the other, Why do you suppose people believed for such a long time that the sun goes around the earth, rather than that the earth rotates? The other philosopher replies, Obviously because it looks as though the sun is going around the earth. To which the first philosopher replies, But what would it look like if it looked like the earth was rotating?

Historian of Science; Publisher, Skeptic magazine; Columnist for Scientific American; Author, The Believing Brain

A splendid example of a wrong idea that persisted for nearly half a century (from 1610 to 1655) is the belief that Saturn was an odd configuration of either one, two, or three bodies orbiting around each other, with no hint that it had rings.

Galileo started the error with his earliest telescopic observations of Saturn in 1610, after which he wrote to Johannes Kepler: "Altissimum planetam tergeminum observavi," "I have observed that the farthest planet is threefold." He then explained what he meant: "This is to say that to my very great amazement Saturn was seen to me to be not a single star, but three together, which almost touch each other."

Galileo's error is instructive for an understanding of the interplay of data and theory, and when it came to Saturn, Galileo lacked them both. Data: Saturn is twice as far away as Jupiter, thus what few photons of light there were streaming through the cloudy glass in his little tube made resolution of the rings problematic at best.

Theory: There was no theory of planetary rings. It is at this intersection of nonexistent theory and nebulous data that the power of belief is at its zenith and the mind fills in the blanks. Challenged by a fellow astronomer who suggested that perhaps it was one oblong object rather than three spheres, Galileo boasted of his own superior observational skills, conceding only that "where perfection is lacking, the shape and distinction of the three stars [may be] imperfectly seen. I, who have observed it a thousand times at different periods with an excellent instrument, can assure you that no change whatever is to be seen in it."

Such is the power of belief that Galileo went to his grave believing not what his eyes actually saw but what his model of the world told him he was seeing. It was literally a case of I wouldn't have seen it if I hadn't believed it. It wasn't until 1659 and the publication of the Dutch astronomer Christiaan Huygens's great work Systema Saturnium that it became common astronomical knowledge that Saturn had rings.

Cognitive Scientist and Linguist; Richard and Rhoda Goldman Distinguished Professor of Cognitive Science and Linguistics, UC Berkeley; Author, The Political Mind

Enlightenment Reason and Classical Rationality have been shown over and over in the cognitive and brain sciences to be false in just about every respect. Yet they are still being taught and used throughout the academic world and in progressive policy circles. Real human reason is very different.

Here are the claims of enlightenment reason, and the realities:

  • Claim: Thought is conscious. But neuroscience shows that thought is about 98 percent unconscious.

  • Claim: Reason is abstract and independent of the body. But reason is embodied in two ways: (1) we think with our brains, and (2) thought is grounded in the sensory-motor system. Because of both, reason is completely embodied.

  • Claim: Reason can fit the world directly. Yet because we think with a brain structured by the body, reason is constrained by what the brain and body allow.

  • Claim: Reason uses formal logic. In reality, reason is frame-based and very largely metaphorical. Basic metaphors arise naturally around the world due to common experiences and the nature of neural learning. The literature on embodied cognition has experimentally verified the reality of metaphorical thought. Real human reason uses frame-based and metaphor-based logics. Behavioral economics is based on this fact.

  • Claim: Emotion gets in the way of reason. Actually, real reason requires emotion. Brain-damaged patients who cannot feel emotion don't know what to want, since "like" and "not like" mean nothing to them and they cannot judge the emotions of others. As a result they cannot make rational decisions.

  • Claim: Reason is universal. Actually, even conservatives and progressives reason differently, and evidence is pouring in that one's native language affects how one reasons.

  • Claim: Language is neutral, and can fit the world directly. Actually language is defined in terms of frames and metaphors, works through the brain and does not fit the world directly. Indeed, many of the concepts named by words (e.g. freedom) are essentially contested and have meanings that vary with value systems.

  • Claim: Mathematics exists objectively and structures the universe. Mathematics has actually been created by mathematicians using their human brains, with frames and metaphors.

  • Claim: Reason serves self-interest. Partly true of course, but to a very large extent reason is based on empathetic connections to others, which work via the mirror neuron systems in our brains.

Given the massive failures of enlightenment reason, widely documented in the brain and cognitive sciences, why is it still taught and widely assumed?

First, it did a great historical job back in the 17th and 18th centuries in overcoming the dominance of the Church and feudalism.

Second, it permitted the rise of science, even though science doesn't really use it.

Third, unconscious mechanisms like frame-based and metaphorical thought are mostly not accessible to consciousness, and thus we cannot really see how we think.

Fourth, applications of formal logic have come into wide use, say in the rational actor model of classical economics (which failed in the economic collapse of 2008).

Fifth, we are taught enlightenment reason in our schools and universities, and its failure is not directly taught, even in neuroscience classes.

Sixth, most people just think and don't pay much attention to the details, especially those that are not conscious.

Much of liberal thought uses enlightenment reason, which claims that if you just tell people the facts about their interests, they will reason to the right conclusion, since reason is supposed to be universal, logical, and based on self-interest. The Obama administration assumed that in its policy discourse, and that assumption led to the debacle of the 2010 elections. Marketers have a better sense of how reason really works, and Republicans have been better at marketing their ideas. The scientific fallacy of enlightenment reason has thus had major real-world effects.

Philosopher; Founder, Manager, Metodo

Phrenology and lobotomy. Even if these never amounted to scientific paradigms, they clearly illustrate how science affects people's lives and morality. For those not engaged in scientific work, it is easy to forget that technology, and a great part of contemporary Western culture, results from science. Yet people tend to interpret scientific principles and findings, from gravity and evolution to physics and pharmacology, as strange matters that have nothing to do with everyday life.

Phrenology held that there is a "scientific" relation between the skull's shape and behavioral traits. It was applied to explain, for example, the genius of Professor Samuel F. B. Morse. But it was also applied in prisons and asylums to explain and predict criminal behavior; indeed, it was assumed that the skull's shape explained an incapacity to act according to the law. If you were spending your life in an asylum or a prison in the 19th century because of a phrenological "proof" or "argument," you could perfectly understand how important science is in your life, even if you were not a scientist. Even more so if you were about to become a lobotomy patient in the past century.

In 1949, Antonio Egas Moniz received the Nobel Prize in Physiology or Medicine for discovering the supposedly great therapeutic value of lobotomy, a surgical procedure that, in its transorbital version, consisted of introducing an ice pick through the eye's orbit to disconnect the prefrontal cortex. Thousands of lobotomies were performed between the 1940s and the early 1960s, with Rosemary Kennedy, sister of President John F. Kennedy, on the list of recipients; all of them carried the scientific seal of a Nobel Prize. Today, half a century later, it seems unthinkable to apply such a "scientific" therapy. I keep asking myself: what if a mistake like this one were adopted today as public health policy?

Science affects people's lives directly. A scientific mistake can send you to jail or break your brain into pieces. It also seems to affect the kinds of moral stances that we adopt. Today it would be morally reprehensible to send someone to jail because of the shape of his head, or to perform a lobotomy; 50 or 100 years ago it was morally acceptable. This is why we should spend more time thinking about how scientific principles, models and predictions can serve as a basis for public health and policy decisions, rather than guessing about what is right or wrong according to God's mind or the unsubstantiated beliefs presented by special interest groups.

Geochronologist Emeritus, University of California, Berkeley; Coauthor, Java Man

For years I believed the Government's insistence that UFOs did not exist, until I saw one under circumstances that could leave no doubt. Subsequently, over many years, I have seen three more. Being a scientist and professor at U.C. Berkeley, I quizzed many graduate students, asking them, if they thought they had seen a UFO, to come to my office and tell me about it. To my surprise, several of them did, and some went on to teach at various universities such as CalTech and Johns Hopkins. They found, as I have, that if a person hasn't seen one, he or she won't believe you. I have convinced only one scientist, and that was by giving him two excellent books on the subject, which he read carefully. He came to me and said, "I am now a believer, but why this government secrecy?" I replied that I didn't know, but that it must be extremely important to some branch of the military.

Neurologist & Cognitive Neuroscientist, The New School; Coauthor, Children's Learning and Attention Problems

Overthrowing a presumption of symmetry, Broca revealed in 1862 that the left forebrain alone subserves language. The left hemisphere’s many specializations were cumulatively uncovered in the course of the next hundred years. Yet the "minor" right hemisphere was not similarly investigated until the 1960s. How could the plentiful specializations of the right hemisphere have been so long overlooked? The problem was not technological. The right hemisphere’s specializations were ultimately revealed by the same methods that had uncovered those of the left. What finally oriented investigators to the profuse specializations of the right hemisphere: spatiotemporal, emotional, interpersonal, creative?

Perhaps it was culture change (Kinsbourne 2000). Rigidly hierarchical thinking about human affairs prevailed: master/slave, boss/worker, king/subject, priest/sinner, God/angel, along a continuum of power, influence and assumed merit, projected onto notions about the brain. Language and the left hemisphere were placed at the peak. Simpler functions, shared with other animals, were conceived as bilaterally represented. Thus the right hemisphere was denied its own specializations.

As the great empires fragmented into patchworks of independent states after World War Two, vertical value rankings no longer seemed self-evident or sufficient. Cultural constructs expanded to encompass horizontally interactive, collaborative organization (Crumley 1995). This upheaval of ideas at last permitted focus on the right hemisphere as a highly specialized collaborator of the left. In time it overthrew the antiquated notion of the cerebral network as a tangle of centers and connections, and recognized it as a heterarchical democracy (McCulloch 1945): uncentered, unsupervised, parallel, and self-organizing.

Unquestioned and even unconscious cultural premises obstructed the natural progress of discovery. Perhaps "self-evident" but false culture-based assumptions still hold us up.

Editor, Infectious Greed; Senior Fellow, Kauffman Foundation

My favorite example is about science itself. For the longest time scientists didn't believe that their own discipline followed rules, per se, but then Imre Lakatos, Thomas Kuhn, Karl Popper and, my favorite, Paul Feyerabend showed how science was sociology, was prone to enthusiasms, fashions, and dogma, and so on. It was one of the most important realizations of my doctoral program.

John Brockman, Editor and Publisher
Russell Weinberger, Associate Publisher

contact: [email protected]
Copyright © 2010 By Edge Foundation, Inc
All Rights Reserved.