"WHAT HAVE YOU CHANGED YOUR MIND ABOUT?" |
|
KEITH
DEVLIN
Mathematician;
Executive Director, Center for the Study
of Language and Information, Stanford; Author, The

What
is the nature of mathematics? Becoming a mathematician
in the 1960s, I swallowed hook, line, and sinker
the Platonistic philosophy dominant at the time,
that the objects of mathematics (the numbers,
the geometric figures, the topological spaces,
and so forth) had a form of existence in some
abstract ("Platonic") realm. Their
existence was independent of our existence as
living, cognitive creatures, and searching for
new mathematical knowledge was a process of explorative
discovery not unlike geographic exploration or
sending out probes to distant planets.
I
now see mathematics as something entirely different,
as the creation of the (collective) human mind.
As such, mathematics says as much about us
as it does about the external universe we inhabit.
Mathematical facts are not eternal truths about
the external universe, truths that held before we entered
the picture and will endure long after we are gone.
Rather, they are based on, and reflect, our interactions
with that external environment.
This
is not to say that mathematics is something we
have freedom to invent. It's not like literature
or music, where there are constraints on the form
but writers and musicians exercise great creative
freedom within those constraints. From the perspective
of the individual human mathematician, mathematics
is indeed a process of discovery. But what is being
discovered is a product of the human (species)-environment
interaction.
This
view raises the fascinating possibility that other
cognitive creatures in another part of the universe
might have different mathematics. Of course, as
a human, I cannot begin to imagine what that might
mean. It would classify as "mathematics" only
insofar as it amounted to that species analyzing
the abstract structures that arose from their interactions
with their environment.
This
shift in philosophy has influenced the way I teach,
in that I now stress social aspects of mathematics.
But when I'm giving a specific lecture on, say,
calculus or topology, my approach is entirely platonistic.
We do our mathematics using a physical brain that
evolved over hundreds of thousands of years by
a process of natural selection to handle the physical
and more recently the social environments in which
our ancestors found themselves. As a result, the
only way for the brain to actually do mathematics
is to approach it "platonistically," treating
mathematical abstractions as physical objects that
exist.
A
platonistic standpoint is essential to doing mathematics,
just as Cartesian dualism is virtually impossible
to dispense with in doing science or just plain
communicating with one another ("one another"?).
But ultimately, our mathematics is just that: our mathematics,
not the universe's.
DAVID
G. MYERS
Social
psychologist, Hope College; author, Psychology, 8th
edition

Reading
and reporting on psychological science has changed
my mind many times, leading me now to believe that
• newborns
are not the blank slates I once presumed,
• electroconvulsive therapy often alleviates
intractable depression,
• economic growth has not improved our morale,
• the automatic unconscious mind dwarfs the
controlled conscious mind,
• traumatic experiences rarely get repressed,
• personality is unrelated to birth order,
• most folks have high self-esteem (which sometimes
causes problems),
• opposites do not attract,
• sexual orientation is a natural, enduring
disposition (most clearly so for men), not a choice.
In
this era of science-religion conflict, such revelations
underscore our need for what science and religion jointly
mandate: humility. Humility, I remind my student audience,
is fundamental to the empirical spirit advocated long
ago by Moses: "If a prophet speaks in the
name of the Lord and what he says does not come true,
then it is not the Lord's message." Ergo, if our
or anyone's ideas survive being put to the test, so
much the better for them. If they crash against a wall
of evidence, it is time to rethink.
DANIEL
EVERETT
Researcher of Pirahã Culture; Chair
of Languages, Literatures, & Cultures, Professor
of Linguistics and Anthropology, Illinois State
University

Homeopathic Bias and Language Origins
I have wondered why some authors claim that people rarely if ever change their mind. I have changed my mind many times. This could be because I have weak character, because I have no self-defining philosophy, or because I like change. Whatever the reason, I enjoy changing my mind. I have occasionally irritated colleagues with my seeming motto of 'If it ain't broke, break it.'
At the same time, I adhered to a value common in the day-to-day business of scientific research, namely, that changing one's mind is alright for little matters but is suspect when it comes to big questions. Take a theory that is compatible with either conclusion 'x' or conclusion 'y'. First you believed 'x'. Then you received new information and you believed 'y'. This is a little change. And it is a natural form of learning - a change in behavior resulting from exposure to new information.
But change your mind, say, about the general theory that you work with, at least in some fields, and you are looked upon as a kind of maverick, a person without proper research priorities, a pot-stirrer. Why is that, I wonder?
I think that the stigma against major mind changes in science results from what I call 'homeopathic bias' - the assumption that scientific knowledge is built up bit by little bit as we move cumulatively towards the truth.
This bias can lead researchers to avoid concluding that their work undermines the dominant theory in any significant way. Non-homeopathic doses of criticism can be considered not merely inappropriate, but even arrogant - implying somehow that the researcher is superior to his or her colleagues, whose unifying conceptual scheme is now judged to be weaker than they have noticed or have been willing to concede.
So any scientist publishing an article or a book about a non-homeopathic mind-change could be committing a career-endangering act. But I love to read these kinds of books. They bother people. They bother me.
I changed my mind about this homeopathic bias. I think it is myopic for the most part. And I changed my mind on this because I changed my mind regarding the largest question of my field - where language comes from. This change taught me about the empirical issues that had led to my shift and about the forces that can hold science and scientists in check if we aren't aware of them.
I believed at one time that culture and language were largely independent. Yet there is a growing body of research that suggests the opposite - deep reflexes from culture are to be found in grammar.
But if culture can exercise major effects on grammar, then the theory I had committed most of my research career to - the theory that grammar is part of the human genome and that the variations in the grammars of the world's languages are largely insignificant - was dead wrong. There did not have to be a specific genetic capacity for grammar - the biological basis of grammar could also be the basis of gourmet cooking, of mathematical reasoning, and of medical advances - human reasoning.
Grammar had once seemed to me too complicated to derive from any general human cognitive properties. It appeared to cry out for a specialized component of the brain, or what some linguists call the language organ. But such an organ becomes implausible if we can show that it is not needed because there are other forces that can explain language as both ontogenetic and phylogenetic fact.
Many researchers have discussed the kinds of things that hunters and gatherers needed to talk about and how these influenced language evolution. Our ancestors had to talk about things and events, about relative quantities, and about the contents of the minds of their conspecifics, among other things. If you can't talk about things and what happens to them (events) or what they are like (states), you can't talk about anything. So all languages need verbs and nouns. But I have been convinced by the research of others, as well as my own, that if a language has these, then the basic skeleton of the grammar largely follows. The meanings of verbs require a certain number of nouns and those nouns plus the verb make simple sentences, ordered in logically restricted ways. Other permutations of this foundational grammar follow from culture, contextual prominence, and modification of nouns and verbs. There are other components to grammar, but not all that many. Put like this, as I began to see things, there really doesn't seem to be much need for grammar proper to be part of the human genome, as it were. Perhaps there is even much less need for grammar as an independent entity than we might have once thought.
DAVID
DALRYMPLE
Student,
MIT's Center for Bits and Atoms; Researcher, Internet
0, Fab Lab Thinner Clients for South Africa, Conformal
Computing

Maybe
MBAs Should Design Computers After All
Not
that long ago, I was under the impression that the
basic problem of computer architecture had been solved.
After all, computers got faster every year, and gradually
whole new application domains emerged. There was constantly
more memory available, and software hungrily consumed
it. Each new computer had a bigger power supply, and
more airflow to extract the increasing heat from the
processor.
Now, clock speeds aren't rising quite as quickly, and
the progress that is made doesn't seem to help our computers
start up or run any faster. The traditions of the computing
industry, some going as far back as the first digital
computers built by John von Neumann in the 1950s, are
starting to grow obsolete. The more slowly computers seem
to get faster, and the more deeply I understand the way
things actually work, the more apparent these problems
become to me. They really come to light when you think
about a computer as a business.
Imagine
if your company or organization had one fellow [the
CPU] who sat in an isolated office, and refused to
talk with anyone except his two most trusted deputies
[the Northbridge and Southbridge], through which all
the actual work the company does must be funneled.
Because this one man — let's call him Bob — is
so overloaded doing all the work of the entire company,
he has several assistants [memory controllers] who
remember everything for him. They do this through a
complex system [virtual memory] of file cabinets of
various sizes [physical memories], the organization
over which they have strictly limited autonomy.
Because
it is faster to find things in the smaller cabinets
[RAM], where there is less to sift through, Bob asks
them to put the most commonly used information there.
But since he is constantly switching between different
tasks, the assistants must swap in and out the files
in the smaller cabinets with those in the larger ones
whenever Bob works on something different ["thrashing"].
The largest file cabinet is humongous, and rotates
slowly in front of a narrow slit [magnetic storage].
The assistant in charge of it must simply wait for
the right folder to appear in front of him before passing
it along [disk latency].
Any
communication with customers must be handled through
a team of receptionists [I/O controllers] who don't
take the initiative to relay requests to one of Bob's
deputies. When Bob needs customer input to continue
on a difficult problem, he drops what he is doing to
chase after his deputy to chase after a receptionist
to chase down the customer, thus preventing work for
other customers from being done in that time.
This
model is clearly horrendous for numerous reasons. If
any staff member goes out to lunch, the whole operation
is likely to grind to a halt. Tasks that ought to be
quite simple turn out to take a lot of time, since
Bob must re-acquaint himself with the issues in question.
If a spy gains Bob's trust, all is lost. The only way
to make the model any better without giving up and
starting over is to hire people who just do their work
faster and spend more hours in the office. And yet,
this is the way almost every computer in the world
operates today.
It
is much more sane to hire a large pool of individuals,
and, depending on slow-changing customer needs, organize
them into business units and assign them to customer
accounts. Each person keeps track of his own small
workload, and everyone can work on a separate task
simultaneously. If the company suddenly acquires new
customers, it can recruit more staff instead of forcing
Bob to work overtime. If a certain customer demands
more attention than was foreseen, more people can be
devoted to the effort. And perhaps most importantly,
collaboration with other businesses becomes far more
meaningful than the highly coded, formal game of telephone
that Bob must play with Frank, who works in a similar
position at another corporation [a server]. Essentially,
this is a business model problem as much as a computer
science one.
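To make the contrast concrete, here is a minimal sketch in Python, assuming nothing about real hardware: a single worker handling every request in sequence (the "Bob" model) versus a pool of workers sharing the same requests. The function, request count, and delays are invented purely for illustration.

# Toy illustration (not from the essay): one worker versus a worker pool.
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(request_id: int) -> str:
    time.sleep(0.05)  # stand-in for waiting on memory, disk, or a customer
    return f"request {request_id} done"

requests = range(20)

# "Bob" model: a single worker funnels every request through himself.
start = time.perf_counter()
serial_results = [handle_request(r) for r in requests]
print(f"single worker: {time.perf_counter() - start:.2f}s")

# Pool model: many workers each take a share, so waiting on one request
# no longer blocks all the others.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=10) as pool:
    pooled_results = list(pool.map(handle_request, requests))
print(f"worker pool:   {time.perf_counter() - start:.2f}s")

With these made-up numbers the pooled version should finish in roughly a tenth of the time, simply because the waiting overlaps.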
These
complaints only scratch the surface of the design flaws
of today's computers. On an extremely low level, with
voltages, charge, and transistors, energy is handled
recklessly, causing tremendous heat, which would melt
the parts in a matter of seconds were it not for the
noisy cooling systems we find in most computers. And
on a high level, software engineers have constructed
a city of competing abstractions based on the fundamentally
flawed "CPU" idea.
So
I have changed my mind. I used to believe that computers
were on the right track, but now I think the right
thing to do is to move forward from our 1950s models
to a ground-up, fundamentally distributed computing
architecture. I started to use computers at 17 months
of age and started programming them at 5, so I took
the model for granted. But the present stagnation of
perceived computer performance, and the counter-intuitiveness
of programming languages, led me to question what I
was born into and wonder if there's a better way. Now
I'm eager to help make it happen. When discontent changes
your mind, that's innovation.
MAX
TEGMARK
Physicist,
MIT; Researcher, Precision Cosmology

Do
we need to understand consciousness to understand
physics? I used to answer "yes",
thinking that we could never figure out the elusive "theory
of everything" for our external physical reality
without first understanding the distorting mental
lens through which we perceive it.
After
all, physical reality has turned out to be very different
from how it seems, and I feel that most of our notions
about it have turned out to be illusions. The world
looks like it has three primary colors, but that number
three tells us nothing about the world out there, merely
something about our senses: that our retina has three
kinds of cone cells. The world looks like it has impenetrably
solid and stationary objects, but all except a quadrillionth
of the volume of a rock is empty space between particles
in restless schizophrenic vibration. The world feels
like a three-dimensional stage where events unfold
over time, but Einstein's work suggests that change
is an illusion, time being merely the fourth dimension
of an unchanging space-time that just is, never created
and never destroyed, containing our cosmic history
like a DVD contains a movie. The quantum world feels
random, but Everett's work suggests that randomness
too is an illusion, being simply the way our minds
feel when cloned into diverging parallel universes.
The
ultimate triumph of physics would be to start with
a mathematical description of the world from the "bird's
eye view" of a mathematician studying the equations
(which are ideally simple enough to fit on her T-shirt)
and to derive from them the "frog's eye view" of
the world, the way her mind subjectively perceives
it. However, there is also a third and intermediate "consensus
view" of the world. From your subjectively perceived
frog perspective, the world turns upside down when
you stand on your head and disappears when you close
your eyes, yet you subconsciously interpret your sensory
inputs as though there is an external reality that
is independent of your orientation, your location and
your state of mind. It is striking that although this
third view involves censorship (like rejecting
dreams), interpolation (as between eye-blinks), and
extrapolation (like attributing existence to unseen
cities) of your frog's eye view, independent observers
nonetheless appear to share this consensus view. Although
the frog's eye view looks black-and-white to a cat,
iridescent to a bird seeing four primary colors, and
still more different to a bee seeing polarized light,
a bat using sonar, a blind person with keener touch
and hearing, or the latest robotic vacuum cleaner,
all agree on whether the door is open.
This
reconstructed consensus view of the world that humans,
cats, aliens and future robots would all agree on is
not free from some of the above-mentioned shared illusions.
However, it is by definition free from illusions that
are unique to biological minds, and therefore decouples
from the issue of how our human consciousness works.
This is why I've changed my mind: although understanding
the detailed nature of human consciousness is a fascinating
challenge in its own right, it is not necessary
for a fundamental theory of physics, which need "only" derive
the consensus view from its equations.
In
other words, what Douglas Adams called "the ultimate
question of life, the universe and everything" splits
cleanly into two parts that can be tackled separately:
the challenge for physics is deriving the consensus
view from the bird's eye view, and the challenge for
cognitive science is to derive the frog's eye view
from the consensus view. These are two great challenges
for the third millennium. They are each daunting in
their own right, and I'm relieved that we need not
solve them simultaneously.
ROBERT
SAPOLSKY
Neuroscientist, Stanford University, Author, A
Primate's Memoir

Well,
my biggest change of mind came only a few years ago. It
was the outcome of a painful journey of self-discovery,
where my wife and children stood behind me and made it possible,
where I struggled with all my soul, and all my heart and
all my might. But that had to do with my realizing that
Broadway musicals are not cultural travesties, so it's a
little tangential here. Instead I'll focus on science.
I'm both a neurobiologist and a primatologist, and I've changed
my mind about plenty of things in both of these realms.
But the most fundamental change is one that transcends
either of those disciplines — this was my realizing
that the most interesting and important things in the life
sciences are not going to be explained with sheer reductionism.
A specific change of mind concerned my work as a neurobiologist.
This came about 15 years ago, and it challenged neurobiological
dogma that I had learned in pre-school, namely that the adult
brain does not make new neurons. This fact had always been
a point of weird pride in the field — hey, the
brain is SO fancy and amazing that its elements are irreplaceable, not like some
dumb-ass simplistic liver that's so totally fungible that it can regrow itself. And
what this fact also reinforced, in passing, was the dogma
that the brain is set in stone very early on in life, that
there's all sorts of things that can't be changed once a
certain time-window has passed.
Starting in the 1960's, a handful of crackpot scientists
had been crying in the wilderness about how the adult brain
does make new neurons. At best, their
unorthodoxy was ignored; at worst, they were punished for it. But by the
1990's, it had become clear that they were right. And "adult neurogenesis" has
turned into the hottest subject in the field — the
brain makes new neurons, makes them under interesting circumstances,
fails to under other interesting ones.
The new neurons function, are integrated into circuits, might
even be required for certain types of learning. And the phenomenon
is a cornerstone of a new type of neurobiological chauvinism — part
of the very complexity and magnificence of the brain is how
it can rebuild itself in response to the world around it.
So, I'll admit, this business about new neurons was a tough
one for me to assimilate. I
wasn't invested enough in the whole business to be in the crowd indignantly saying,
No, this can't be true. Instead, I just tried to ignore it. "New neurons",
christ, I can't deal with this, turn the page. And after an embarrassingly
long time, enough evidence had piled up that I had to change my mind and decide
that I needed to deal with it after all. And it's now
one of the things that my lab studies.
The
other change concerned my life as a primatologist, where
I have been studying male baboons in East Africa. This
also came in the early 90's. I study what social behavior
has to do with health, and my shtick always was that if
you want to know which baboons are going to be festering
with stress-related disease, look at the low-ranking ones. Rank
is physiological destiny, and if you have a choice in the
matter, you want to win some critical fights and become a
dominant male, because you'll be healthier. And my change
of mind involved two pieces.
The
first was realizing, from my own data and that of others,
that being dominant has far less to do with winning fights
than with social intelligence and impulse control. The
other was realizing that while health has something to
do with social rank, it has far more to do with personality
and social affiliation — if you want to
be a healthy baboon, don't be a socially isolated one.
This particular shift has something to do with the accretion
of new facts, new statistical techniques for analyzing
data, blah blah. Probably most importantly, it has to do
with the fact that I was once a hermetic 22-year-old studying
baboons and now, 30 years later, I've changed my mind about
a lot of things in my own life.
TOR
NØRRETRANDERS
Science
Writer; Consultant; Lecturer, Copenhagen; Author, The
Generous Man

Permanent
Reincarnation
I
have changed my mind about my body. I used to think of
it as a kind of hardware on which my mental and behavioral
software was running. Now, I primarily think of my body
as software.
My
body is not like a typical material object, a stable
thing. It is more like a flame, a river or an eddy.
Matter is flowing through it all the time. The constituents
are being replaced over and over again.
A chair or a table is stable because the atoms stay where
they are. The stability of a river stems from the constant
flow of water through it.
98 percent of the atoms in the body are replaced every
year. 98 percent! Water molecules stay in your body for
two weeks (and for an even shorter time in a hot climate);
the atoms in your bones stay there for a few months. Some
atoms stay for years. But hardly a single atom stays
with you in your body from cradle to grave.
What is constant in you is not material. An average person
takes in 1.5 tons of matter every year as food, drinks and
oxygen. All this matter has to learn to be you. Every year.
New atoms will have to learn to remember your childhood.
These
numbers have been known for half a century or more, mostly
from studies of radioactive isotopes. Physicist Richard
Feynman said in 1955: "Last week's potatoes!
They now can remember what was going on in your mind a
year ago."
But
why is this simple insight not on the all-time Top 10
list of important discoveries? Perhaps because it tastes
a little like spiritualism and idealism? Only the ghosts
are for real? Wandering souls?
But
digital media now makes it possible to think of all this
in a simple way. The music I danced to as a teenager
has been moved from vinyl LPs to magnetic audio tapes
to CDs to iPods and whatnot. The physical representation
can change and is not important — as long as it
is there. The music can jump from medium to medium, but
it is lost if it does not have a representation. This
physics of information was sorted out by Rolf Landauer
in the 1960s. Likewise, our memories can move from
potato-atoms to burger-atoms to banana-atoms. But the
moment they are on their own, they are lost.
We reincarnate ourselves all the time. We constantly give
our personality new flesh. I keep my mental life alive
by making it jump from atom to atom. A constant flow. Never
the same atoms, always the same river. No flow, no river.
No flow, no me.
This is what I call permanent reincarnation: Software
replacing its hardware all the time. Atoms replacing atoms
all the time. Life. This is very different from religious
reincarnation with souls jumping from body to body (and
souls sitting out there waiting for a body to take home
in).
There has to be material continuity for permanent reincarnation
to be possible. The software is what is preserved, but
it cannot live on its own. It has to jump from molecule
to molecule, always in carnation.
I have changed my mind about the stability of my body:
It keeps changing all the time. Otherwise I could not stay the same.
HELEN
FISHER
Research
Professor, Department of Anthropology, Rutgers University; Author, Why
We Love

Planned
Obsolescence? The
Four-Year Itch
When
asked why all of her marriages failed, anthropologist
Margaret Mead apparently replied, "I beg your pardon,
I have had three marriages and none of them was a failure." There
are many people like Mead. Some 90% of Americans
marry by middle age. And when I looked at United
Nations data on 97 other societies, I found that more
than 90% of men and women eventually wed in the vast
majority of these cultures, too. Moreover, most
human beings around the world marry one person at a time:
monogamy. Yet, almost everywhere people have devised
social or legal means to untie the knot. And where
they can divorce — and remarry — many do.
So
I had long suspected this human habit of "serial
monogamy" had evolved for some biological purpose. Planned
obsolescence of the pairbond? Perhaps the mythological "seven-year
itch" evolved millions of years ago to enable a
bonded pair to rear two children through infancy
together. If each departed after about seven years
to seek "fresh features," as poet Lord Byron
put it, both would have ostensibly reproduced themselves
and both could breed again — creating more genetic
variety in their young.
So
I began to cull divorce data on 58 societies collected
since 1947 by the Statistical Office of the United Nations. My
mission: to prove that the "seven year itch" was
a worldwide biological phenomenon associated in some
way with rearing young.
Not
to be. My intellectual transformation came while
I was poring over these divorce statistics in a rambling
cottage, a shack really, on the Massachusetts coast one
August morning. I regularly got up around 5:30,
went to a tiny desk that overlooked the deep woods, and
pored over the pages I had Xeroxed from the United Nations
Demographic Yearbooks. But in country after country,
and decade after decade, divorces tended to peak (the
divorce mode) during and around the fourth year of marriage. There
were variations, of course. Americans tended to
divorce between the second and third year of marriage,
for example. Interestingly, this corresponds with
the normal duration of intense, early stage, romantic
love — often about 18 months to 3 years. Indeed,
in a 2007 Harris poll, 47% of American respondents said
they would depart an unhappy marriage when the romance
wore off, unless they had conceived a child.
Nevertheless,
there was no denying it: Among these hundreds of
millions of people from vastly different cultures, three
patterns kept emerging. Divorces regularly peaked
during and around the fourth year after wedding. Divorces
peaked among couples in their late twenties. And
the more children a couple had, the less likely they
were to divorce: some 39% of worldwide divorces occurred
among couples with no dependent children; 26% occurred
among those with one child; 19% occurred among couples
with two children; and 7% of divorces occurred among
couples with three young.
I
was so disappointed. I mulled this over endlessly. My
friend used to wave his hand over my face, saying, "Earth
to Helen; earth to Helen." Why do so many
men and women divorce during and around the 4-year mark;
at the height of their reproductive years; and
often with a single child? It seemed like such
an unstable reproductive strategy. Then suddenly
I got that "ah-ha" moment: Women in hunting
and gathering societies breastfeed around the clock,
eat a low-fat diet and get a lot of exercise — habits
that tend to inhibit ovulation. As a result, they
regularly space their children about four years apart. Thus,
the modern duration of many marriages—about four
years—conforms to the traditional period of human
birth spacing, four years.
Perhaps
human parental bonds originally evolved to last only
long enough to raise a single child through
infancy, about four years, unless a second infant was
conceived. By age five, a youngster could be reared
by mother and a host of relatives. Equally important,
both parents could choose a new partner and bear more
varied young.
My
new theory fit nicely with data on other species. Only
about three percent of mammals form a pairbond to rear
their young. Take foxes. The vixen's milk
is low in fat and protein; she must feed her kits constantly;
and she will starve unless the dog fox brings her food. So
foxes pair in February and rear their young together. But
when the kits leave the den in mid summer, the pairbond
breaks up. Among foxes, the partnership lasts only
through the breeding season. This pattern is common
in birds. Among the more than 8,000 avian species,
some 90% form a pairbond to rear their young. But
most do not pair for life. A male and female robin, for
example, form a bond in the early spring and rear one
or more broods together. But when the last of the
fledglings fly away, the pairbond breaks up.
Like
many other pair-bonding creatures, humans have probably
inherited a tendency to love and love again—to
create more genetic variety in our young. We aren't
puppets on a string of DNA, of course. Today some
57% of American marriages last for life. But deep
in the human spirit is a restlessness in long relationships,
born of a time long gone, as poet John Dryden put it, "when
wild in wood the noble savage ran."
STEVE
NADIS
Science
writer; Contributing Editor, Astronomy Magazine

The
Myth Of The "Open Mind"
When
I was 21, I began working for the Union of Concerned
Scientists (UCS) in Cambridge, Massachusetts. I was
still an undergraduate at the time, planning on doing
a brief research stint in energy policy before finishing
college and heading to graduate school in physics.
That "brief research stint" lasted about
seven years, off and on, and I never did make it to
graduate school. But the experience was instructive
nevertheless.
When
I started at UCS in the 1970s, nuclear power safety
was a hot topic, and I squared off in many debates
against nuclear proponents from utility companies,
nuclear engineering departments, and so forth regarding
reactor safety, radioactive wastes, and the viability
of renewable energy alternatives. The next issue I
took on for UCS was the nuclear arms race, which was
equally polarized. (The neocons of that day weren't "neo" back
then; they were just cons.) As with nuclear safety,
there was essentially no common ground between the
two sides. Each faction was invariably trying to do
the other in, through oral rhetoric and tendentious
prose, always looking for new material to buttress
their case or undermine that of their opponents.
Even
though the organization I worked for was called the
Union of Concerned Scientists, and even though many of
the staff members there referred to me as a "scientist" (despite
my lack of academic credentials), I knew that what
I was doing was not science. (Nor were the many physics
PhD's in arms control and energy policy doing science
either.) In the back of my head, I always assumed that "real
science" was different — that scientists
are guided by facts rather than by ideological positions,
personal rivalries, and whatnot.
In
the decades since, I've learned that while this may
be true in many instances, oftentimes it's not. When
it comes to the biggest, most contentious issues in
physics and cosmology — such as the validity
of inflationary theory, string theory, or the multiverse/landscape
scenario — the image of the objective truth seeker,
standing above the fray, calmly sifting through the
evidence without preconceptions or prejudice, may be
less accurate than the adversarial model of our justice
system. Both sides, to the extent there are sides on
these matters, are constantly assembling their briefs,
trying to convince themselves as well as the jury at
large, while at the same time looking for flaws in
the arguments of the opposing counsel.
This
fractionalization may stem from scientific intuition,
political or philosophical differences, personal
grudges, or pure academic competition. It's not surprising
that this happens, nor is it necessarily a bad thing.
In fact, it's my impression that this approach works
pretty well in the law and in science too. It means
that, on the big things at least, science will be vetted;
it has to withstand scrutiny, pass muster.
But
it's not a cold, passionless exercise either. At its
heart, science is a human endeavor, carried out by
people. When the questions are truly ambitious, it
takes a great personal commitment to make any headway — a
big investment in energy and in emotion as well. I
know from having met with many of the lead researchers
that the debates can get heated, sometimes uncomfortably
so. More importantly, when you're engaged in an epic
struggle like this — trying, for instance, to
put together a theory of broad sweep — it may
be difficult, if not impossible, to keep an "open
mind" because you may be well beyond that stage,
having long since cast your lot with a particular line
of reasoning. And after making an investment over the
course of many years, it's natural to want to protect
it. That doesn't mean you can't change your mind — and
I know of several cases where this has occurred — but,
no matter what you do, it's never easy to shift from
forward to reverse.
Although
I haven't worked as a scientist in any of these areas,
I have written about many of the "big questions" and
know how hard it is to get all the facts lined up so
that they fit together into something resembling an
organic whole. Doing that, even as a mere scribe, involves
periods of single-minded exertion, and during that
process the issues can almost take on a life of their
own, at least while you're actively thinking about
them. Before long, of course, you've moved on to the
next story and the excitement of the former recedes.
As the urgency fades, you start wondering why you felt
so strongly about the landscape or eternal inflation
or whatever it was that had taken over your desk some
months ago.
It's
different, of course, for researchers who may stake
out an entire career — or at least big chunks
thereof — in a certain field. You're obliged
to keep abreast of all that's going on of note, which
means your interest is continually renewed. As new
data comes in, you try to see how it fits in with the
pieces of the puzzle you're already grappling with.
Or if something significant emerges from the opposing
camp, you may instinctively seek out the weak spots,
trying to see how those guys messed up this time.
It's
possible, of course, that a day may come when, try
as you might, you can't find the weak spots in the
other guy's story. After many attempts and an equal
number of setbacks, you may ultimately have to accede
to the view of an intellectual, if not personal, rival.
Not that you want to but rather because you can't see
any way around it. On the one hand, you might chalk
it up as a defeat, something that will hopefully build
character down the road. But in the grand scheme of
things, it's more of a victory — a sign that
sometimes our adversarial system of science actually
works.
PAUL
STEINHARDT
Physicist;
Albert Einstein Professor of Science, Princeton University;
Coauthor, Endless Universe: A New History of the Cosmos

What
created the structure of the universe?
Most cosmologists would say the answer is "inflation," and,
until recently, I would have been among them. But "facts
have changed my mind" —
and I now feel compelled to seek a new explanation that
may or may not incorporate inflation.
The idea always seemed incredibly simple. Inflation is
a period of rapid accelerated expansion that can transform
the chaos emerging from the big bang into the smooth,
flat homogeneity observed by astronomers. If one likens
the conditions following the bang to a wrinkled and twisted
sheet of perfectly elastic rubber, then inflation corresponds
to stretching the sheet at faster-than-light speeds until
no vestige of its initial state remains. The "inflationary
energy" driving the accelerated expansion then decays
into the matter and radiation seen today and the stretching
slows to a modest pace that allows the matter to condense
into atoms, molecules, dust, planets, stars and galaxies.
I would describe this version as the "classical
view" of inflation in two senses. First, this is
the historic picture of inflation first introduced and
now appearing in most popular descriptions. Second, this
picture is founded on the laws of classical physics,
assuming quantum physics plays a minor role. Unfortunately,
this classical view is dead wrong. Quantum physics turns
out to play an absolutely dominant role in shaping the
inflationary universe. In fact, inflation amplifies the
randomness inherent in quantum physics to produce a
universe that is random and unpredictable.
This realization has come slowly. Ironically, the role
of quantum physics was believed to be a boon to the inflationary
paradigm when it was first considered twenty-five years
ago by several theorists, including myself. The classical
picture of inflation could not be strictly true, we recognized,
or else the universe would be so smooth after inflation
that galaxies and other large-scale structures would
never form. However, inflation ends through the quantum
decay of inflationary energy into matter and radiation.
The quantum decay is analogous to the decay of radioactive
uranium, in which there is some mean rate of decay but
inherent unpredictability as to when any particular uranium
nucleus will decay. Long after most uranium nuclei have
decayed, there remain some nuclei that have yet to fission.
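As a back-of-the-envelope reminder (the textbook exponential-decay formula, not a calculation from the essay), if \Gamma is the mean decay rate, the number of surviving nuclei is

N(t) = N_0 \, e^{-\Gamma t},

which is positive for every finite time t, so some nuclei are always left undecayed.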
Similarly,
inflationary energy decays at slightly different times
in different places, leading to spatial variations in
the temperature and matter density after inflation ends.
The "average" statistical pattern appears to
agree beautifully with the pattern of microwave background
radiation emanating from the earliest stages of the universe
and to produce just the pattern of non-uniformities needed
to explain the evolution and distribution of galaxies.
The agreement between theoretical calculation and observations
is a celebrated triumph of the inflationary picture.
But is this really a triumph? Only if the classical view
were correct. In the quantum view, it makes no sense to
talk about an "average" pattern. The problem
is that, as in the case of uranium nuclei, there always
remain some regions of space in which the inflationary
energy has not yet decayed into matter and radiation at
all. Although one might have guessed the undecayed regions
are rare, they expand so much faster than those that have
decayed that they soon overtake the volume of the universe.
The patches where inflationary energy has decayed and galaxies
and stars have evolved become the oddity — rare pockets
surrounded by space that continues to inflate away.
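A rough way to see why the undecayed regions win, under the simplifying assumptions of a constant expansion (Hubble) rate H and a constant decay rate \Gamma (symbols introduced here only for illustration): the volume that has not yet decayed grows as

V_{\text{undecayed}}(t) \propto e^{3Ht} \, e^{-\Gamma t} = e^{(3H - \Gamma)t},

so whenever decay is slow compared to the expansion (\Gamma < 3H), the still-inflating volume grows without bound, even though any individual region is almost certain to decay eventually.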
The
process repeats itself over and over, with the number
of pockets and the volume of surrounding space increasing
from moment to moment. Due to random quantum fluctuations,
pockets with all kinds of properties are produced
— some flat, but some curved; some with variations
in temperature and density like what we observe, but some
not; some with forces and physical laws like those we experience,
but some with different laws. The alarming result is that
there are an infinite number of pockets of each type and,
despite over a decade of attempts to avoid the situation,
no mathematical way of deciding which is more probable
has been shown to exist.
Curiously,
this unpredictable "quantum view" of inflation
has not yet found its way into the consciousness of many
astronomers working in the field, let alone the greater
scientific community or the public at large.
One
often reads that recent measurements of the cosmic microwave
background or the large-scale structure of the universe
have verified a prediction of inflation. This invariably
refers to a prediction based on the naïve classical
view. But if the measurements ever come out differently,
this could not rule out inflation. According to the quantum
view, there are invariably pockets with matching properties.
And
what of the theorists who have been developing the inflationary
theory for the last twenty-five years? Some, like me,
have been in denial, harboring the hope that a way can
be found to tame the quantum effects and restore the
classical view. Others have embraced the idea that cosmology
may be inherently unpredictable, although this group
is also vociferous in pointing out how observations agree
with the (classical) predictions of inflation.
Speaking
for myself, it may have taken me longer to accept inflation's
quantum nature than it should have, but, now that facts
have changed my mind, I cannot go back again. Inflation
does not explain the structure of the universe. Perhaps
some enhancement can explain why the classical view works
so well, but then it will be that enhancement rather
than inflation itself that explains the structure of
the universe. Or maybe the answer lies beyond the big
bang. Some of us are considering the possibility that
the evolution of the universe is cyclic and that the
structure was set by events that occurred before the
big bang. One of the draws of this picture is that quantum
physics does not play the same dominant role, and there
is no escaping this picture's predictions of the uniformity, flatness
and structure of the universe.