"WHAT HAVE YOU CHANGED YOUR MIND ABOUT?"
Research Associate, Psychology, Harvard University;
Author, The Alex Studies
The Fallacy of Hypothesis Testing
I've begun to rethink the way we teach students to engage
in scientific research. I was trained, as a chemist, to use
the classic scientific method: Devise a testable hypothesis,
and then design an experiment to see if the hypothesis is correct
or not. And I was told that this method is equally valid for
the social sciences. I've changed my mind that this is
the best way to do science. I have three reasons for this change.
First, and probably most importantly, I've learned that one often
needs simply to sit and observe and learn about one's subject
before even attempting to devise a testable hypothesis. What
are the physical capacities of the subject? What is the social
and ecological structure in which it lives? Does some anecdotal
evidence suggest the form that the hypothesis should take?
Few granting agencies are willing to provide support for
this step, but it is critical to the scientific process,
particularly for truly innovative research. Often, a proposal
to gain observational experience is dismissed as being a "fishing
expedition"…but how can one devise a workable
hypothesis to test without first acquiring basic knowledge
of the system, and how better to obtain such basic knowledge
than to observe the system without any preconceived notions?
Second, I've learned that truly interesting questions often
can't be reduced to a simple testable hypothesis, at least
not without being somewhat absurd. "Can a parrot label objects?"
may be a testable hypothesis, but actually isn't very interesting…what is interesting,
for example, is how that labeling compares to the behavior
of a young child, exactly what type of training might enable
such learning and what type of training is useless, how far
can such labeling transfer across exemplars, and….Well,
you get the picture…the exciting part is a series of interrelated questions
that arise and expand almost indefinitely.
Third, I've learned that the scientific community's
emphasis on hypothesis-based research leads too many scientists
to devise experiments to prove, rather than test, their hypotheses.
Many journal submissions lack any discussion of alternative
competing hypotheses: Researchers don't seem to realize
that collecting data that are consistent with their original
hypothesis doesn't mean that it is unconditionally true.
Alternatively, they buy into the fallacy that absence of evidence
for something is always evidence of its absence.
I'm all for rigor in scientific research — but let's
emphasize the gathering of knowledge rather than the proving
of a point.
Dartmouth College; Author, The Prophet and the Astronomer
To Unify or Not: That is the Question
I grew up infused with the idea of unification. It came first from
religion, from my Jewish background. God was all over, was all-powerful,
and had a knack for interfering with human affairs, at least
in the Old Testament. He then appeared to have decided to be
a bit shyer, sending a Son instead, and only revealing Himself
through visions and prophecies. Needless to say, when, as a teenager,
I started to get interested in science, this vision of an all-pervading
God, stories of floods, commandments and plagues, began to seem
very suspect. I turned to physics, idolizing Einstein
and his science; here was a Jew who saw further, who found
a way of translating this old monotheistic tradition into the
universal language of science.
As I started my research career, I had absolutely no doubt that I wanted to become a theoretical physicist working on particle physics and cosmology. Why the choice? Simple: it was the joining of the two worlds, the very large and the very small, that offered the best hope for finding a unified theory of all Nature, one that brought matter and forces together into a single magnificent formulation, the final Platonist triumph. This was what Einstein tried to do for the last three decades of his life, although in his day it was more a search for unifying only half of the forces of Nature, gravity and electromagnetism.
I wrote dozens of papers related to the subject of unification; even my Ph.D. dissertation was on the topic. I was fascinated by the modern approaches to the idea: supersymmetry, superstrings, a space with extra, hidden dimensions. A part of me still is. But then, a few years ago, something snapped. It was probably brought about by a combination of factors, chief among them a deeper understanding of the historical and cultural processes that shape scientific ideas. I started to doubt unification, finding it to be the scientific equivalent of a monotheistic formulation of reality, a search for God revealed in equations. Of course, had we the slightest experimental evidence in favor of unification, of supersymmetry and superstrings, I'd be the first to pop open the champagne. But it's been over twenty years, and all attempts so far have failed: nothing in particle accelerators, nothing in cryogenic dark matter detectors, no magnetic monopoles, no proton decay, none of the tell-tale signs of unification predicted over the years. Even our wonderful Standard Model of particle physics, in which we formulate the unification of electromagnetism and the weak nuclear interactions, is not really a true unification: the theory retains information from both interactions in the form of their strengths or, in more technical jargon, their coupling constants. A true unification should have a single coupling constant, a single interaction.
All of my recent anti-unification convictions could crumble during the next few years, after our big new machine, the Large Hadron Collider, is turned on. Many colleagues hope that supersymmetry will finally show its face. Others even bet on possible signs of extra dimensions being revealed. However, I have a feeling things won't turn out so nicely. The model of unification, so aesthetically appealing, may be simply that: an aesthetically appealing description of Nature which, unfortunately, doesn't correspond to physical reality. Nature doesn't share our myths. The stakes are high indeed. But being a mild agnostic, I don't believe until there is evidence. And then there is no need to believe any longer, which is precisely the beauty of science.
Physicist, Institute for Advanced Study; Author, A Many-Colored Glass
When facts change your mind, that's not always science. It may be history. I changed my mind about an important historical question: did the nuclear bombings of Hiroshima and Nagasaki bring World War Two to an end? Until this year I used to say, perhaps. Now, because of new facts, I say no. This question is important, because the myth of the nuclear bombs bringing the war to an end is widely believed. To demolish this myth may be a useful first step toward ridding the world of nuclear weapons.
Until the last few years, the best summary of evidence concerning this question was a book, "Japan's Decision to Surrender", by Robert Butow, published in 1954. Butow interviewed the surviving Japanese leaders who had been directly involved in the decision. He asked them whether Japan would have surrendered if the nuclear bombs had not been dropped. His conclusion, "The Japanese leaders themselves do not know the answer to that question, and if they cannot answer it, neither can I". Until recently, I believed what the Japanese leaders said to Butow, and I concluded that the answer to the question was unknowable.
Facts causing me to change my mind were brought to my attention by Ward Wilson. Wilson summarized the facts in an article, "The Winning Weapon? Rethinking Nuclear Weapons in the Light of Hiroshima", in the Spring 2007 issue of the magazine, "International Security". He gives references to primary source documents and to analyses published by other historians, in particular by Robert Pape and Tsuyoshi Hasegawa. The facts are as follows:
1. Members of the Supreme Council, which customarily met with the
Emperor to take important decisions, learned of the nuclear bombing
of Hiroshima on the morning of August 6, 1945. Although Foreign
Minister Togo asked for a meeting, no meeting was held for three days.
2. A surviving diary records a conversation of Navy Minister Yonai, who was a member of the Supreme Council, with his deputy on August 8. The Hiroshima bombing is mentioned only incidentally. More attention is given to the fact that the rice ration in Tokyo is to be reduced by ten percent.
3. On the morning of August 9, Soviet troops invaded Manchuria. Six
hours after hearing this news, the Supreme Council was in session.
News of the Nagasaki bombing, which happened the same morning, only
reached the Council after the session started.
4. The August 9 session of the Supreme Council resulted in the decision to surrender.
5. The Emperor, in his rescript to the military forces ordering their surrender, does not mention the nuclear bombs but emphasizes the historical analogy between the situation in 1945 and the situation at the end of the Sino-Japanese war in 1895. In 1895 Japan had defeated China, but accepted a humiliating peace when European powers led by Russia moved into Manchuria and the Russians occupied Port Arthur. By making peace, the emperor Meiji had kept the Russians out of Japan. Emperor Hirohito had this analogy in his mind when he ordered the surrender.
6. The Japanese leaders had two good reasons for lying when they spoke to Robert Butow. The first reason was explained afterwards by Lord
Privy Seal Kido, another member of the Supreme Council: "If military
leaders could convince themselves that they were defeated by the
power of science but not by lack of spiritual power or strategic
errors, they could save face to some extent". The second reason was
that they were telling the Americans what the Americans wanted to
hear, and the Americans did not want to hear that the Soviet invasion
of Manchuria brought the war to an end.
In addition to the myth of two nuclear bombs bringing the war to an end, there are other myths that need to be demolished. There is the myth that, if Hitler had acquired nuclear weapons before we did, he could have used them to conquer the world. There is the myth that the invention of the hydrogen bomb changed the nature of nuclear warfare. There is the myth that international agreements to abolish weapons without perfect verification are worthless. All these myths are false. After they are demolished, dramatic moves toward a world without nuclear weapons may become possible.
Science Writer, Author, Nano
Predicting the Future
I used to think you could predict the future. In "Profiles of the Future," Arthur C. Clarke made it seem so easy. And so did all those other experts who confidently predicted the paperless office, the artificial intelligentsia who for decades predicted "human equivalence in ten years," the nanotechnology prophets who kept foreseeing major advances toward molecular manufacturing within fifteen years, and so on.
Mostly, the predictions of science and technology types were wonderful: space colonies, flying cars in everyone's garage, the conquest (or even reversal) of aging. (There were of course the doomsayers, too, such as the population-bomb theorists who said the world would run out of food by the turn of the century.)
But at last, after watching all those forecasts not come true, and in fact become falsified in a crashing, breathtaking manner, I began to question the entire business of making predictions. I mean, if even Nobel prizewinning scientists such as Ernest Rutherford, who gave us essentially the modern concept of the nuclear atom, could say, as he did in 1933, that "We cannot control atomic energy to an extent which would be of any value commercially, and I believe we are not likely ever to be able to do so," and be so spectacularly wrong about it, what hope was there for the rest of us?
And then I finally decided that I knew the source of this incredible mismatch between confident forecast and actual result. The universe is a complex system in which countless causal chains are acting and interacting independently and simultaneously (the ultimate nature of some of them unknown to science even today). There are in fact so many causal sequences and forces at work, all of them running in parallel, and each of them often affecting the course of the others, that it is hopeless to try to specify in advance what's going to happen as they jointly work themselves out. In the face of that complexity, it becomes difficult if not impossible to know with any assurance the future state of the system except in those comparatively few cases in which the system is governed by ironclad laws of nature such as those that allow us to predict the phases of the moon, the tides, or the position of Jupiter in tomorrow night's sky. Otherwise, forget it.
Further, it's an illusion to think that supercomputer modeling is up to the task of truly reliable crystal-ball gazing. It isn't. Witness the epidemiologists who predicted that last year's influenza season would be severe (in fact it was mild); the professional hurricane-forecasters whose models told them that the last two hurricane seasons would be monsters (whereas instead they were wimps). Certain systems in nature, it seems, are computationally irreducible phenomena, meaning that there is no way of knowing the outcome short of waiting for it to happen.
Formerly, when I heard or read a prediction, I believed it. Nowadays I just roll my eyes, shake my head, and turn the page.
Physicist; Technical Consultant; Science Fiction Writer; Author, The Transparent Society
Sometimes you are glad to discover you were wrong. My best example of that kind of pleasant surprise is India. I'm delighted to see its recent rise, on (tentative) course toward economic, intellectual and social success. If these trends continue, it will matter a lot to Earth civilization, as a whole. The factors that fostered this trend appear to have been atypical — at least according to common preconceptions like "west and east" or "right vs left." I learned a lesson, about questioning my assumptions.
Alas, there have been darker surprises. The biggest example has been America's slide into what could be diagnosed as bona fide Future Shock.
Alvin Toffler appears to have sussed it. Back in 1999, while we were fretting over a silly "Y2K Bug" in ancient COBOL code, something else happened at a deeper level. Our weird governance issues are only surface symptoms of what may have been a culture-wide crisis of confidence upon the arrival of that "2" in the millennium column. Yes, people seemed to take the shift complacently, going about their business. But underneath all the blithe shrugs, millions have turned their backs on the future, even as a topic of discussion or interest.
Other than the tenacious grip of Culture War, what evidence can I offer? Well, in my own fields, let me point to a decline in the futurist-punditry industry. (A recent turnaround offers hope.) And a plummet in the popularity of science fiction literature (as opposed to feudal-retro fantasy). John B. has already shown us how little draw science books offer, in the public imagination — an observation that not only matches my own, but also reflects the anti-modernist fervor displayed by all dogmatic movements.
One casualty: the assertive, pragmatic approach to negotiation and human-wrought progress that used to be mother's milk to this civilization.
Yes, there were initial signs of all this, even in the 1990s. But the extent of future-anomie and distaste for science took me completely by surprise. It makes me wonder why Toffler gets mentioned so seldom.
Let me close with a final surprise, that's more of a disappointment.
I certainly expected that, by now, online tools for conversation, work, collaboration and discourse would have become far more useful, sophisticated and effective than they currently are. I know I'm pretty well alone here, but all the glossy avatars and video and social network sites conceal a trivialization of interaction, dragging it down to the level of single-sentence grunts, flirtation and ROTFL [rolling on the floor laughing], at a time when we need discussion and argument to be more effective than ever.
Indeed, most adults won't have anything to do with all the wondrous gloss that fills the synchronous online world, preferring by far the older, asynchronous modes, like web sites, email, downloads etc.
This isn't grouchy old-fart testiness toward the new. In fact, there are dozens of discourse-elevating tools just waiting out there to be born. Everybody is still banging rocks together, while bragging about the colors. Meanwhile, half of the tricks that human beings normally use, in real world conversation, have never even been tried online.
Mathematician, Computer Scientist; Cyberpunk Pioneer; Novelist; Author, The Lifebox,
the Seashell, and the Soul
Can Robots See God?
Studying mathematical logic in the 1970s, I believed it was possible to put together a convincing argument that no computer program can fully emulate a human mind. Although nobody had quite gotten the argument right, I hoped to straighten it out.
My belief in this will-o-the-wisp was motivated by a gut feeling that people have numinous inner qualities that will not be found in machines. For one thing, our self-awareness lets us reflect on ourselves and get into endless mental regresses: "I know that I know that I know..." For another, we have moments of mystical illumination when we seem to be in contact, if not with God, then with some higher cosmic mind. I felt that surely no machine could be self-aware or experience the divine light.
At that point, I'd never actually touched a computer — they were still inaccessible, stygian tools of the establishment. Three decades rolled by, and I'd morphed into a Silicon Valley computer scientist, in constant contact with nimble chips. Setting aside my old prejudices, I changed my mind — and came to believe that we can in fact create human-like computer programs.
Although writing out such a program is in some sense beyond the abilities of any one person, we can set up simulated worlds in which such computer programs evolve. I feel confident that some relatively simple set-up will, in time, produce a human-like program capable of emulating all known intelligent human behaviors: writing books, painting pictures, designing machines, creating scientific theories, discussing philosophy, and even falling in love. More than that, we will be able to generate an unlimited number of such programs, each with its own particular style and personality.
What of the old-style attacks from the quarters of mathematical logic? Roughly speaking, these arguments always hinged upon a spurious belief that we can somehow discern between, on the one hand, human-like systems which are fully reliable and, on the other hand, human-like systems fated to begin spouting gibberish. But the correct deduction from mathematical logic is that there is absolutely no way to separate the sheep from the goats. Note that this is already our situation vis-a-vis real humans: you have no way to tell if and when a friend or a loved one will forever stop making sense.
With the rise of new practical strategies for creating human-like programs and the collapse of the old a priori logical arguments against this endeavor, I have to reconsider my former reasons for believing humans to be different from machines. Might robots become self-aware? And — not to put too fine a point on it — might they see God? I believe both answers are yes.
Consciousness probably isn't that big a deal. A simple pair of facing mirrors exhibits a kind of endlessly regressing self-awareness, and this type of pattern can readily be turned into computer code.
And what about basking in the divine light? Certainly if we take a reductionistic view that mystical illumination is just a bath of intoxicating brain chemicals, then there seems to be no reason that machines couldn't occasionally be nudged into exceptional states as well. But I prefer to suppose that mystical experiences involve an objective union with a higher level of mind, possibly mediated by offbeat physics such as quantum entanglement, dark matter, or higher dimensions.
Might a robot enjoy these true mystical experiences? Based on my studies of the essential complexity of simple systems, I feel that any physical object at all must be equally capable of enlightenment. As the Zen apothegm has it, "The universal rain moistens all creatures."
So, yes, I now think that robots can see God.
Philosopher, University of Oxford
For me, belief is not an all or nothing thing — believe or disbelieve, accept or reject. Instead, I have degrees of belief, a subjective probability distribution over different possible ways the world could be. This means that I am constantly changing my mind about all sorts of things, as I reflect or gain more evidence. While I don't always think explicitly in terms of probabilities, I often do so when I give careful consideration to some matter. And when I reflect on my own cognitive processes, I must acknowledge the graduated nature of my beliefs.
The commonest way in which I change my mind is by concentrating my credence function on a narrower set of possibilities than before. This occurs every time I learn a new piece of information. Since I started my life knowing virtually nothing, I have changed my mind about virtually everything. For example, not knowing a friend's birthday, I assign a 1/365 chance (approximately) of it being the 11th of August. After she tells me that the 11th of August is her birthday, I assign that date a probability of close to 100%. (Never exactly 100%, for there is always a non-zero probability of miscommunication, deception, or other error.)
It can also happen that I change my mind by smearing out my credence function over a wider set of possibilities. I might forget the exact date of my friend's birthday but remember that it is sometime in the summer. The forgetting changes my credence function, from being almost entirely concentrated on 11th of August to being spread out more or less evenly over all the summer months. After this change of mind, I might assign a 1% probability to my friend's birthday being on the 11th of August.
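The two kinds of belief change described above, narrowing on learning and smearing out on forgetting, can be sketched as operations on a discrete credence function. This is only a toy model: the function names and the 1% residue reserved for miscommunication are my illustrative assumptions, not anything the essay specifies.

```python
# Toy model of a credence function over the day of a friend's birthday.
# All numbers (e.g., the 1% chance of miscommunication) are illustrative.

def uniform(days):
    """Initial ignorance: equal credence on every candidate day."""
    return {d: 1.0 / len(days) for d in days}

def narrow(credence, told_day, reliability=0.99):
    """Learning: concentrate credence on the reported day, keeping a
    small residue for miscommunication, deception, or other error."""
    others = [d for d in credence if d != told_day]
    new = {d: (1.0 - reliability) / len(others) for d in others}
    new[told_day] = reliability
    return new

def smear(credence, candidate_days):
    """Forgetting: spread credence evenly over a wider set
    (here, 'sometime in the summer')."""
    return {d: 1.0 / len(candidate_days) if d in candidate_days else 0.0
            for d in credence}

days = [f"day-{i}" for i in range(1, 366)]
c = uniform(days)                               # ~1/365 on each day
c = narrow(c, "day-223")                        # told: August 11 -> ~99%
summer = [f"day-{i}" for i in range(152, 244)]  # roughly June through August
c = smear(c, summer)                            # back to roughly 1% on August 11
```

In each step the credence function still sums to one; only its concentration changes.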
My credence function can become more smeared out not only by forgetting but also by learning — learning that what I previously took to be strong evidence for some hypothesis is in fact weak or misleading evidence. (This type of belief change can often be mathematically modeled as a narrowing rather than a broadening of credence function, but the technicalities of this are not relevant here.)
For example, over the years I have become moderately more uncertain about the benefits of medicine, nutritional supplements, and much conventional health wisdom. This belief change has come about as a result of several factors. One of the factors is that I have read some papers that cast doubt on the reliability of the standard methodological protocols used in medical studies and their reporting. Another factor is my own experience of following up on MEDLINE some of the exciting medical findings reported in the media — almost always, the search of the source literature reveals a much more complicated picture with many studies showing a positive effect, many showing a negative effect, and many showing no effect. A third factor is the arguments of a health economist friend of mine, who holds a dim view of the marginal benefits of medical care.
Typically, my beliefs about big issues change in small steps. Ideally, these steps should approximate a random walk, like the stock market. It should be impossible for me to predict how my beliefs on some topic will change in the future. If I believed that a year hence I will assign a higher probability to some hypothesis than I do today — why, in that case I could raise the probability right away. Given knowledge of what I will believe in the future, I would defer to the beliefs of my future self, provided that I think my future self will be better informed than I am now and at least as rational.
I have no crystal ball to show me what my future self will believe. But I do have access to many other selves, who are better informed than I am on many topics. I can defer to experts. Provided they are unbiased and are giving me their honest opinion, I should perhaps always defer to people who have more information than I do — or to some weighted average of expert opinion if there is no consensus. Of course, the proviso is a very big one: often I have reason to disbelieve that other people are unbiased or that they are giving me their honest opinion. However, it is also possible that I am biased and self-deceiving. An important unresolved question is how much epistemic weight a wannabe Bayesian thinker should give to the opinions of others. I'm looking forward to changing my mind on that issue, hopefully by my credence function becoming concentrated on the correct answer.
Physicist, University of Pennsylvania; Author, Faust in Copenhagen: A Struggle for the Soul of Physics
The Universe's Expansion
The first topic you treat in freshman physics is showing how a ball shot straight up out of the mouth of a cannon will reach a maximum height and then fall back to Earth, unless its initial velocity, known now as escape velocity, is great enough that it breaks out of the Earth's gravitational field. Even in that case, its final velocity is always less than its initial one. Calculating escape velocity may not be very relevant for cannonballs, but it certainly is for rocket ships.
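The freshman calculation alluded to here is a one-liner: setting kinetic energy equal to gravitational binding energy, (1/2)mv² = GMm/R, gives v_esc = √(2GM/R). A minimal sketch, using standard values for Earth's mass and radius:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of Earth, kg
R_EARTH = 6.371e6    # mean radius of Earth, m

def escape_velocity(mass, radius):
    """Speed at which kinetic energy equals gravitational binding energy."""
    return math.sqrt(2 * G * mass / radius)

v = escape_velocity(M_EARTH, R_EARTH)
print(f"Earth escape velocity: {v / 1000:.1f} km/s")  # about 11.2 km/s
```

The same formula, applied to the mass density of the Universe as a whole, is what the next paragraph means by the Universe's "escape velocity."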
The situation with the explosion we call the Big Bang is obviously more complicated, but really not that different, or so I thought. The standard picture said that there was an initial explosion, space began to expand, and galaxies moved away from one another. The density of matter in the Universe determined whether the Big Bang would eventually be followed by a Big Crunch or whether the celestial objects would continue to move away from one another at ever-decreasing speed. In other words, one could calculate the Universe's escape velocity. Admittedly, the discovery of Dark Matter, an unknown quantity seemingly five times as abundant as known matter, seriously altered the framework, but not in a fundamental way, since Dark Matter was after all still matter, even if its identity is unknown.
This picture changed in 1998 with the announcement by two teams, working independently, that the expansion of the Universe was accelerating, not decelerating. It was as if freshman physics' cannonball miraculously moved faster and faster as it left the Earth. There was no possibility of a Big Crunch, in which the Universe would collapse back on itself. The groups' analyses, based on observing distant supernovae of type Ia, exploding stars of known luminosity, were solid. Science magazine dubbed it 1998's Discovery of the Year.
The cause of this apparent gravitational repulsion is not known. Called Dark Energy to distinguish it from Dark Matter, it appears to be the dominant force in the Universe's expansion, roughly three times as abundant as its Dark Matter counterpart. The prime candidate for its identity is the so-called Cosmological Constant, a term first introduced into the cosmic gravitation equations by Einstein to neutralize expansion, but done away with by him when Hubble reported that the Universe was in fact expanding.
Finding a theory that will successfully calculate the magnitude of this cosmological constant, assuming this is the cause of the accelerating expansion, is perhaps the outstanding problem in the conjoined areas of cosmology and elementary particle physics. Despite many attempts, success does not seem to be in sight. If the cosmological constant is not the answer, an alternate explanation of the Dark Energy would be equally exciting.
Furthermore, the apparent present equality, to within a factor of three, of matter density and the cosmological constant has raised a series of important questions. Since matter density decreases rapidly as the Universe expands (matter per volume decreases as volume increases) and the cosmological constant does not, we seem to be living in that privileged moment of the Universe's history when the two factors are roughly equal. Is this simply an accident? Will the distant future really be one in which, with Dark Energy increasingly important, celestial objects have moved so far apart so quickly as to fade from sight?
The discovery of Dark Energy has radically changed our view of the Universe. Future, keenly awaited findings, such as the identities of Dark Matter and Dark Energy, will do so again.
Psychologist, University of Massachusetts, Amherst; Author, The Cognitive Brain
I had never questioned the conventional view that a good grounding in the physical sciences is needed for a deep understanding of the biological sciences. It did not occur to me that the opposite might also be true. If someone had asked me whether biological knowledge might significantly influence my understanding of our basic physical sciences, I would have denied it.
Now I am convinced that the future understanding
of our most important physical principles will be profoundly
shaped by what we learn in the living realm of biology.
What have changed my mind are the relatively recent developments
in the theoretical constructs and empirical findings
in the sciences of the brain — the biological foundation of all thought. Progress here can cast new light on the fundamental subjective factors that constrain our scientific formulations in what we take to be an objective enterprise.
Biologist, Reading University, England
We Differ More Than We Thought
The last thirty to forty years of social
science has brought an overbearing censorship to the
way we are allowed to think and talk about the diversity
of people on Earth. People of Siberian descent, New
Guinean Highlanders, those from the Indian sub-continent,
Caucasians, Australian aborigines, Polynesians, Africans — we
are, officially, all the same: there are no races.
Flawed as the old ideas about race are, modern genomic studies
reveal a surprising, compelling and different picture
of human genetic diversity. We are on average
about 99.5% similar to each other genetically. This
is a new figure, down from the previous estimate of
99.9%. To put what may seem like minuscule differences
in perspective, we are somewhere around 98.5% similar,
maybe more, to chimpanzees, our nearest evolutionary relatives.
The new figure for us, then, is significant.
It derives, among other things, from many small genetic
differences that have emerged from studies that compare
human populations. Some confer the ability among
adults to digest milk, others to withstand equatorial
sun, others yet confer differences in body shape or
size, resistance to particular diseases, tolerance
to hot or cold, how many offspring a female might eventually
produce, and even the production of endorphins — those
internal opiate-like compounds. We also differ
by surprising amounts in the numbers of copies of some
genes we have.
Modern humans spread out of Africa
only within the last 60-70,000 years, little more than
the blink of an eye when stacked against the 6 million
or so years that separate us from our Great Ape ancestors.
The genetic differences amongst us reveal a species with
a propensity to form small and relatively isolated
groups on which natural selection has often acted strongly
to promote genetic adaptations to particular environments.
We differ genetically more than we thought, but we should
have expected this: how else but through isolation
can we explain a single species that speaks at least
7,000 mutually unintelligible languages around the world?
What this all means is that, like it
or not, there may be many genetic differences among
human populations — including differences that
may even correspond to old categories of 'race' — that
are real differences in the sense of making one group
better than another at responding to some particular
environmental problem. This in no way says one
group is in general 'superior' to another, or
that one group should be preferred over another. But
it warns us that we must be prepared to discuss genetic
differences among human populations.