"WHAT HAVE YOU CHANGED YOUR MIND ABOUT?" |
|
GREGORY BENFORD
Physicist, UC Irvine; Author, Deep Time

Evolving the laws of physics
Richard
Feynman held that philosophy of science is as useful to
scientists as ornithology is to birds. Often this is so.
But the unavoidable question about physics is — where
do the laws come from?
Einstein
hoped that God had no choice in making the universe. But
philosophical issues seem unavoidable when we hear of the "landscape" of
possible string theory models. As now conjectured, the
theory leads to 10^500 solution universes — a horrid
violation of Occam's Razor we might term "Einstein's nightmare."
I
once thought that the laws of our universe were unquestionable,
in that there was no way for science to address the question.
Now I'm not so sure. Can we hope to construct a model of
how laws themselves arise?
Many
scientists dislike even the idea of doing this, perhaps
because it's hard to know where to start. Perhaps ideas
from the currently chic technology, computers, are a place
to start. Suppose we treat the universe as a substrate
carrying out computations, a meta-computer.
Suppose
that precise laws require computation, which can never
be infinitely exact. Such a limitation might be explained
by counting the computational capacity of a sphere around
an "experiment" that tries to measure outcomes
of those laws. The sphere expands at the speed of light,
say, so longer experiment times give greater precision.
Thinking mathematically, this sets a limit on how sharp
differentials can be in our equations. A partial derivative
with respect to time cannot be known more finely than the time it takes to compute it.
In
a sense, there may be an ultimate limit on how well known
any law can be, especially one that must describe all of
space-time, like classical relativity. It can't be better
than the total computational capacity of the universe,
or the capacity within the light sphere we can see.
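To make the counting idea concrete, here is a back-of-the-envelope sketch in Python (mine, not Benford's). It combines the standard Margolus-Levitin bound, which says a system of average energy E can perform at most 2E/(pi*hbar) elementary operations per second, with an assumed mass-energy content for the light sphere, taken here to be the cosmological critical density; both ingredients are illustrative choices, not anything the essay commits to.

    import math

    c     = 2.998e8      # speed of light, m/s
    hbar  = 1.055e-34    # reduced Planck constant, J*s
    rho_c = 8.6e-27      # assumed mass density: cosmological critical density, kg/m^3

    def max_operations(t_seconds):
        """Upper bound on elementary operations performed inside a sphere
        that has been expanding at the speed of light for t_seconds."""
        radius = c * t_seconds                            # light-sphere radius, m
        volume = (4.0 / 3.0) * math.pi * radius ** 3      # enclosed volume, m^3
        energy = rho_c * volume * c ** 2                  # enclosed mass-energy, J
        ops_per_second = 2.0 * energy / (math.pi * hbar)  # Margolus-Levitin rate
        return ops_per_second * t_seconds                 # total operations

    year = 3.15e7    # one year, in seconds
    age  = 4.35e17   # ~13.8 billion years, in seconds
    print(f"one-year experiment   : ~{max_operations(year):.1e} operations")
    print(f"whole visible universe: ~{max_operations(age):.1e} operations")

Run over the age of the visible universe, this kind of counting lands in the same ballpark as the often-quoted estimate of roughly 10^120 total operations, while the light sphere of a one-year experiment gets a far smaller, though still astronomical, budget.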
I
wonder if this idea can somehow define the nature of laws,
beyond the issue of their precision? For example, laws
with higher derivatives will be less descriptive because
their operations cannot be carried out in a given volume
over a finite time.
Perhaps
the infinite discreteness required for formulating any
mathematical system could be the limiting bound on such
discussions. There should be energy bounds, too, within
a finite volume, and thus limits on processing power set
by the laws of thermodynamics. Still, I don't see how these
arguments tell us enough to derive, say, general relativity.
Perhaps
we need more ideas to derive a Law of Laws. Can we use
the ideas of evolution? Perhaps invoke selection among
laws, penalizing those that lead to singularities — and
thus taking those regions of space-time out of the game?
Lee Smolin tried a limited form of this by supposing universes
reproduce through black hole collapses. Ingenious, but
that didn't seem to lead very far. He imagined some variation
in reproduction of budded-off generations of universes,
so their fundamental parameters varied a bit. Then selection
could work.
In
a novel of a decade ago, Cosm, I invoked intelligent
life, rather than singularities, to determine selection
for universes that can foster intelligence, as ours seems
to. (I didn't know about Lee's ideas at the time.) The
idea is that a universe hosting intelligence evolves creatures
that find ways in the laboratory to make more universes,
which bud off and can further engender more intelligence,
and thus more experiments that make more universes. This
avoids the problem of how the first universe started, of
course. Maybe the Law of Laws could answer that, too?

LERA BORODITSKY
Cognitive Psychology & Cognitive Neuroscience, Stanford University

Do our languages shape the nuts and bolts of perception, the very way we see the world?
I
used to think that languages and cultures shape the ways
we think. I suspected they shaped the ways we reason and
interpret information. But I didn't think languages
could shape the nuts and bolts of perception, the way we
actually see the world. That part of cognition seemed
too low-level, too hard-wired, too constrained by the constants
of physics and physiology to be affected by language.
Then
studies started coming out claiming to find cross-linguistic
differences in color memory. For example, it was
shown that if your language makes a distinction between
blue and green (as in English), then you're less likely
to confuse a blue color chip for a green one in memory. In
a study like this you would see a color chip, it would
then be taken away, and then after a delay you would have
to decide whether another color chip was identical to the
one you saw or not.
Of
course, showing that language plays a role in memory is
different than showing that it plays a role in perception. Things
often get confused in memory and it's not surprising that
people may rely on information available in language as
a second resort. But it doesn't mean that speakers
of different languages actually see the colors differently
as they are looking at them. I thought that if you
made a task where people could see all the colors as they
were making their decisions, then there wouldn't be any
cross-linguistic differences.
I
was so sure of the fact that language couldn't shape perception
that I went ahead and designed a set of experiments to
demonstrate this. In my lab we jokingly referred
to this line of work as "Operation Perceptual Freedom." Our
mission: to free perception from the corrupting influences
of language.
We
did one experiment after another, and each time to my surprise
and annoyance, we found consistent cross-linguistic differences. They
were there even when people could see all the colors at
the same time when making their decisions. They were
there even when people had to make objective perceptual
judgments. They were there when no language was involved
or necessary in the task at all. They were there
when people had to reply very quickly. We just kept
seeing them over and over again, and the only way to get
the cross-linguistic differences to go away was to disrupt
the language system. If we stopped people from being
able to fluently access their language, then the cross-linguistic
differences in perception went away.
I
set out to show that language didn't affect perception, but
I found exactly the opposite. It turns out that languages
meddle in very low-level aspects of perception, and without
our knowledge or consent shape the very nuts and bolts of how
we see the world.

JAMSHED BHARUCHA
Professor of Psychology, Provost, Senior Vice President, Tufts University

Education as Stretching the Mind
I
used to believe that a paramount purpose of a liberal education
was threefold:
1) Stretch your mind, reach beyond your preconceptions; learn to think of things in ways you have never thought before.
2) Acquire tools with which to critically examine and evaluate new ideas, including your own cherished ones.
3) Settle eventually on a framework or set of frameworks that organize what you know and believe and that guide your life as an individual and a leader.
I
still believe #1 and #2. I have changed my mind about #3.
I now believe in a new version of #3, which replaces the
above with the following:
a) Learn new frameworks, and be guided by them.
b) But never get so comfortable as to believe that your frameworks are the final word, recognizing the strong psychological tendencies that favor sticking to your worldview. Learn to keep stretching your mind, keep stepping outside your comfort zone, keep venturing beyond the familiar, keep trying to put yourself in the shoes of others whose frameworks or cultures are alien to you, and have an open mind to different ways of parsing the world. Before you critique a new idea, or another culture, master it to the point at which its proponents or members recognize that you get it.
Settling
into a framework is easy. The brain is built to perceive
the world through structured lenses — cognitive
scaffolds on which we hang our knowledge and belief systems.
Stretching
your mind is hard. Once we've settled on a worldview that
suits us, we tend to hold on. New information is bent to
fit, information that doesn't fit is discounted, and new
views are resisted.
By
'framework' I mean any one of a range of conceptual or
belief systems — either explicitly articulated or
implicitly followed. These include narratives, paradigms,
theories, models, schemas, frames, scripts, stereotypes,
and categories; they include philosophies of life, ideologies,
moral systems, ethical codes, worldviews, and political,
religious or cultural affiliations. These are all systems
that organize human cognition and behavior by parsing,
integrating, simplifying or packaging knowledge or belief.
They tend to be built on loose configurations of seemingly
core features, patterns, beliefs, commitments, preferences
or attitudes that have a foundational and unifying quality
in one's mind or in the collective behavior of a community.
When they involve the perception of people (including oneself),
they foster a sense of affiliation that may trump essential
features or beliefs.
What
changed my mind was the overwhelming evidence of biases
in favor of perpetuating prior worldviews. The brain maps
information onto a small set of organizing structures,
which serve as cognitive lenses, skewing how we process
or seek new information. These structures drive a range
of phenomena, including the perception of coherent patterns
(sometimes where none exists), the perception of causality
(sometimes where none exists), and the perception of people
in stereotyped ways.
Another
family of perceptual biases stems from our being social
animals (even scientists!), susceptible to the dynamics
of in-group versus out-group affiliation. A well known
bias of group membership is the over-attribution effect,
according to which we tend to explain the behavior of people
from other groups in dispositional terms ("that's just
the way they are"), but our own behavior in much more complex
ways, including a greater consideration of the circumstances.
Group attributions are also asymmetrical with respect to
good versus bad behavior. For groups that you like, including
your own, positive behaviors reflect inherent traits ("we're
basically good people") and negative behaviors are either
blamed on circumstances ("I was under a lot of pressure")
or discounted ("mistakes were made"). In contrast, for
groups that you dislike, negative behaviors reflect inherent
traits ("they can't be trusted") and positive behaviors
reflect exceptions ("he's different from the rest"). Related
to attribution biases is the tendency (perhaps based on
having more experience with your own group) to believe
that individuals within another group are similar to each
other ("they're all alike"), whereas your own group contains
a spectrum of different individuals (including "a few bad
apples"). When two groups accept bedrock commitments that
are fundamentally opposed, the result is conflict — or
war.
Fortunately,
the brain has other systems that allow us to counteract
these tendencies to some extent. This requires conscious
effort, the application of critical reasoning tools, and
practice. The plasticity of the brain permits change -
within limits.
To
assess genuine understanding of an idea one is inclined
to resist, I propose a version of Turing's Test tailored
for this purpose: You understand something you are inclined
to resist only if you can fool its proponents into thinking
you get it. Few critics can pass this test. I would also
propose a cross-cultural Turing Test for would-be cultural
critics (a Golden Rule of cross-group understanding): before
critiquing a culture or aspect thereof, you should be able
to navigate seamlessly within that culture as judged by
members of that group.
By
rejecting #3, you give up certainty. Certainty feels good
and is a powerful force in leadership. The challenge, as
Bertrand Russell puts it in A History of Western
Philosophy, is "To teach how to live without certainty,
and yet without being paralyzed by hesitation".

DENIS DUTTON
Professor of the philosophy of art, University of Canterbury, New Zealand; editor of Philosophy and Literature and Arts & Letters Daily

The Self-Made Species
The
appeal of Darwin's theory of evolution — and the
horror of it, for some theists — is that it expunges
from biology the concept of purpose, of teleology, thereby
converting biology into a mechanistic, canonical science.
In this respect, the author of The Origin of Species may
be said to be the combined Copernicus, Galileo, and Kepler
of biology. Just as these astronomers gave us a view of
the heavens in which no angels were required to propel
the planets in their orbs and the earth was no longer the
center of the celestial system, so Darwin showed that no
God was needed to design the spider's intricate web and
that man is in truth but another animal.
That's
how the standard story goes, and it is pretty much what
I used to believe, until I read Darwin's later book,
his treatise on the evolution of the mental life of animals,
including the human species: The Descent of Man.
This is the work in which Darwin introduces one of the
most powerful ideas in the study of human nature, one that
can explain why the capacities of the human mind so extravagantly
exceed what would have been required for hunter-gatherer
survival on the Pleistocene savannahs. The idea is sexual
selection, the process by which men and women in the
Pleistocene chose mates according to varied physical and
mental attributes, and in so doing "built" the human mind
and body as we know it.
In
Darwin's account, human sexual selection comes out looking
like a kind of domestication. Just as human beings domesticated
dogs and alpacas, roses and cabbages, through selective
breeding, they also domesticated themselves as a species
through the long process of mate selection. Describing
sexual selection as human self-domestication should
not seem strange. Every direct prehistoric ancestor of
every person alive today at times faced critical survival
choices: whether to run or hold ground against a predator,
which road to take toward a green valley, whether to slake
an intense thirst by drinking from some brackish pool.
These choices were frequently instantaneous and intuitive
and, needless to say, our direct ancestors were the ones
with the better intuitions.
However,
there was another kind of crucial intuitive choice faced
by our ancestors: whether to choose this man or that woman
as a mate with whom to rear children and share a life of
mutual support. It is inconceivable that decisions of such
emotional intimacy and magnitude were not made with an
eye toward the character of the prospective mate, and that
these decisions did not therefore figure in the evolution
of the human personality — with its tastes, values,
and interests. Our actual direct ancestors, male
and female, were the ones who were chosen by each other.
Darwin's
theory of sexual selection has disquieted and irritated
many otherwise sympathetic evolutionary theorists because,
I suspect, it allows purposes and intentions back into
evolution through an unlocked side door. The slogan memorized
by generations of students of natural selection is random
mutation and selective retention. The "retention" in
natural selection is strictly non-teleological, a matter
of brute, physical survival. The retention process of sexual
selection, however, is with human beings in large measure
purposive and intentional. We may puzzle about whether,
say, peahens have "purposes" in selecting peacocks with
the largest tails. But other animals aside, it is absolutely
clear that with the human race, sexual selection describes
a revived evolutionary teleology. Though it is directed
toward other human beings, it is as purposive as the domestication
of those wolf descendants that became familiar household
pets.
Every
Pleistocene man who chose to bed, protect, and provision
a woman because she struck him as, say, witty and healthy,
and because her eyes lit up in the presence of children,
along with every woman who chose a man because of his hunting
skills, fine sense of humor, and generosity, was making
a rational, intentional choice that in the end built much
of the human personality as we now know it.
Darwinian
evolution is therefore structured across a continuum. At
one end are purely natural selective processes that give
us, for instance, the internal organs and the autonomic
processes that regulate our bodies. At the other end are
rational decisions — adaptive and species-altering
across tens of thousands of generations in prehistoric
epochs. It is at this end of the continuum, where
rational choice and innate intuitions can overlap and reinforce
one another, that we find important adaptations that are
relevant to understanding the human personality, including
the innate value systems implicit in morality, sociality,
politics, religion, and the arts. Prehistoric choices honed
the human virtues as we now know them: the admiration of
altruism, skill, strength, intelligence, industriousness,
courage, imagination, eloquence, diligence, kindness, and
so forth.
The
revelations of Darwin's later work — beautifully
explicated as well in books by Helena Cronin, Amotz and
Avishag Zahavi, and Geoffrey Miller — have completely
altered my thinking about the development of culture. It
is not just survival in a natural environment that has
made human beings what they are. In terms of our personalities
we are, strange to say, a self-made species. For me this
is a genuine revelation, as it puts in a new genetic light
many human values that have hitherto been regarded as purely
cultural.

CLAY SHIRKY
Social & Technology Network Topology Researcher; Adjunct Professor, NYU Graduate School of Interactive Telecommunications Program (ITP)

Religion and Science
I was a science geek with a religious upbringing, an Episcopalian
upbringing, to be precise, which is pretty weak tea as
far as pious fervor goes. Raised in this tradition I learned,
without ever being explicitly taught, that religion and
science were compatible. My people had no truck with Young
Earth Creationism or anti-evolutionary cant, thank you
very much, and if some people's views clashed with scientific
discovery, well, that was their fault for being so fundamentalist.
Since
we couldn't rely on the literal truth of the Bible, we
needed a fallback position to guide our views on religion
and science. That position was what I'll call the Doctrine
of Joint Belief: "Noted Scientist X has accepted Jesus
as Lord and Savior. Therefore, religion and science are
compatible." (Substitute deity to taste.) You can
still see this argument today, where the beliefs of Francis
Collins or Freeman Dyson, both accomplished scientists,
are held up as evidence of such compatibility.
Belief
in compatibility is different from belief in God. Even
after I stopped believing, I thought religious dogma, though
incorrect, was not directly incompatible with science (a
view sketched out by Stephen Jay Gould as "non-overlapping
magisteria".) I've now changed my mind, for
the obvious reason: I was wrong. The idea that religious
scientists prove that religion and science are compatible
is ridiculous, and I'm embarrassed that I ever believed
it. Having believed for so long, however, I understand
its attraction, and its fatal weaknesses.
The
Doctrine of Joint Belief isn't evidence of harmony between
two systems of thought. It simply offers permission to
ignore the clash between them. Skeptics aren't convinced
by the doctrine, unsurprisingly, because it offers no testable
proposition. What is surprising is that its supposed
adherents don't believe it either. If joint beliefs were
compatible beliefs, there could be no such thing as heresy.
Christianity would be compatible not just with science,
but with astrology (roughly as many Americans believe in
astrology as evolution), with racism (because of the number
of churches who use the "Curse of Ham" to justify
racial segregation), and on through the list of every pair
of beliefs held by practicing Christians.
To
get around this, one could declare that, for some arbitrary
reason, the co-existence of beliefs is relevant only to
questions of religion and science, but not to astrology
or anything else. Such a stricture doesn't strengthen the
argument, however, because an appeal to the particular
religious beliefs of scientists means having to explain
why the majority of them are atheists. (See the 1998 Larson
and Witham study for the numbers.) Picking out the minority
who aren't atheists and holding only them up as exemplars
is simply special pleading (not to mention lousy statistics).
The
works that changed my mind about compatibility were Pascal
Boyer's Religion Explained, and Scott Atran's In
Gods We Trust, which lay out the ways religious belief
is a special kind of thought, incompatible with the kind
of skepticism that makes science work. In Boyer and Atran's
views, religious thought doesn't simply happen to be false
-- being false is the point, the thing that makes belief
both memorable and effective. Psychologically, we overcommit
to the ascription of agency, even when dealing with random
events (confirmation can be had in any casino.) Belief
in God rides in on that mental eagerness, in the same way
optical illusions ride in on our tendency to overinterpret
ambiguous visual cues. Sociologically, the adherence to
what Atran diplomatically calls 'counter-factual beliefs'
serves both to create and advertise in-group commitment
among adherents. Anybody can believe in things that are
true, but it takes a lot of coordinated effort to get people
to believe in virgin birth or resurrection of the dead.
We
are early in one of the periodic paroxysms of conflict
between faith and evidence. I suspect this conflict will
restructure society, as after Galileo, rather than leading
to a quick truce, as after Scopes, not least because the
global tribe of atheists now have a medium in which they
can discover one another and refine and communicate their
message.
One
of the key battles is to insist on the incompatibility
of beliefs based on evidence and beliefs that ignore evidence.
Saying that the mental lives of a Francis Collins or a
Freeman Dyson prove that religion and science are compatible
is like saying that the sex lives of Bill Clinton or Ted
Haggard prove that marriage and adultery are compatible.
The people we need to watch out for in this part of the
debate aren't the fundamentalists, they're the moderates,
the ones who think that if religious belief is made metaphorical
enough, incompatibility with science can be waved
away. It can't be, and we need to say so, especially to
the people like me, before I changed my mind.

KAI KRAUSE
Software and Design Pioneer

Software is merely a Performance Art
It is a charming concept that humans are in
fact able "to change their mind" in the first place.
Not that it necessarily implies a change for the better,
but at least it does have that positive ring of supposing
a Free Will to perform this feat at all. Better, in any case,
to be the originator of the changing, rather than having
it done to you, in the much less applaudable form of brainwashing.
For me, in my own life as I passed the half-century
mark, with almost exactly half the time spent in the US and
the other half in Europe, circling the globe a few times in
between, I can look back on what now seems like multiple lifetimes'
worth of mind changing.
Here then is a brief point, musing about the
field I spent 20 years in: Computer Software. And it is deeper
than it may seem at first glance.
I used to think "Software Design" is
an art form.
I now believe that I was half-right:
it is indeed an art, but it has a rather
short half-life:
Software is merely a performance art!
A momentary flash of brilliance, doomed to
be overtaken by the next wave, or maybe even by its own sequel.
Eaten alive by its successors. And time...
This is not to denigrate the genre of performance
art: anamorphic sidewalk chalk drawings, Goldsworthy pebble
piles or Norwegian carved-ice-hotels are admirable feats
of human ingenuity, but they all share that ephemeral time
limit: the first rain, wind or heat will dissolve the beauty,
and the artist must be well aware of its fleeting glory.
For many years I have discussed this with friends
who are writers, musicians and painters, and the simple truth
emerged: one can still read the words, hear the music and
look at the images....
Their value and their appeal remain, and in some
cases even gain from familiarity: like a good wine, they can improve
over time. You can hum a tune you once liked, years later.
You can read words or look at a painting from 300 years ago
and still appreciate their truth and beauty today, as if brand
new. Software, by that comparison, is more like a soufflé:
enjoy it now, today, for tomorrow it has already collapsed
on itself. Soufflé 1.1 is the thing to have, Version
2.0 is on the horizon.
It is a simple fact: hardly any of my software
even still runs at all!
Back in 1982 I started with a high-school buddy
in a garage in the Hollywood hills. With ludicrous limitations
we conjured up dreams: three-dimensional charting, displaying
sound as time-slice mountains of frequency spectrum data,
annotated with perspective lettering... and all that in 32K
of RAM on a 0.2 MHz processor. And we did it... a few years
later it fed 30 people.
The next level of dreaming up new frontiers
with a talented tight team was complex algorithms for generating
fractals, smooth color gradients, multi-layer bump-mapped
textures and dozens of image filters, realtime liquid image
effects, and on and on... and that too, worked and this time
fed over 300 people. Fifteen products sold many millions
of copies - and a few of them still persist to this day,
in version 9 or 10 or 11... but for me, I realized, I no
longer see myself as a software designer - I changed my mind.
Today, if you have a very large task at hand,
one that you calculate might take two years or three... it
has actually become cheaper to wait for a couple of generation
changes in the hardware and do the whole thing then - ten
times faster.
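A rough sketch of that arithmetic, with purely illustrative numbers (a three-year job, 15 months of waiting, and a tenfold combined speedup from newer hardware plus a bigger cluster; none of these figures come from the essay):

    def finish_after(work_months, wait_months, speedup):
        """Calendar months to finish if we idle for wait_months, then run the
        whole job on machinery that is `speedup` times faster."""
        return wait_months + work_months / speedup

    def breakeven_speedup(work_months, wait_months):
        """Minimum speedup for which waiting still beats starting right away."""
        return work_months / (work_months - wait_months)

    job, wait = 36.0, 15.0   # a three-year job, 15 months of umbrella drinks
    print(f"start now             : {job:.1f} months")
    print(f"wait, then 10x faster : {finish_after(job, wait, 10.0):.1f} months")
    print(f"break-even speedup    : {breakeven_speedup(job, wait):.2f}x")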
In other words: sit by the beach with umbrella
drinks for 15 months and then finish it all at once with
some weird beowulf-cluster of machinery and still beat the
original team by leaps and bounds. At the start, all we were
given was the starting address in RAM where video memory
began, and a POKE to FC001101 would put a dot on the screen.
Just one dot.
Then you figured out how to draw a line. How
to connect them to polygons. How to fill those with patterns.
All on a screen of 192x128 (which is now just "an icon").
Uphill in the snow, both ways.
Now the GPUs are blasting billions of pixels
per second and all they will ask is "does it blend?" Pico,
Femto, Atto, Zepto, Yocto cycles stored in Giga, Tera, Peta,
Exa, Zetta, Yotta cells.
I rest my case about those umbrella drinks.
Do I really just drop all technology and leave
computing ? Nahh. QuadHD screens are just around the corner,
and as a tool for words, music and images there are fantastic
new horizons for me. I am more engaged in it all than ever — alas:
the actual coding and designing itself is no longer where
I see my contribution. But the point is deeper than just
one man's path:
The new role of technology is a serious philosophical
point in the long range outlook for mankind. Most decision
makers worldwide, affecting the entire planet, are technophobes,
luddites and noobs beyond belief. They have no vision for
the potential, nor proper respect for the risks, nor simple
estimation of the immediate value for quality of life that
technology could bring.
Maybe one can change their mind?
I remembered that I once wrote something about
this very topic...
and I found it:
I changed my mind mostly about changing my mind:
I used to be all for 'being against it',
then I was all against 'being for it',
until I realized: that's the same thing... never mind.
It's a 'limerickety' little thing from some keynote 12 years
ago, but... see... it still runs : )

LINDA S. GOTTFREDSON
Sociologist, University of Delaware; Co-director of the Project for the Study of Intelligence and Society

The Calculus of Small but Consistent Effects
For
an empiricist, science brings many surprises. It has continued
to change my thinking about many phenomena by challenging
my presumptions about them. Among the first of my assumptions
to be felled by evidence was that career choice proceeds
in adolescence by identifying one's most preferred options;
it actually begins early in childhood as a taken-for-granted
process of eliminating the least acceptable from further
consideration. Another mistaken presumption was that different
abilities would be important for performing well in different
occupations. The notion that any single ability (e.g., IQ
or g) could predict performance to an appreciable
degree in all jobs seemed far-fetched the first time I heard
it, but that's just what my own attempt to catalog the predictors
of job performance would help confirm. My root error had
been to assume that different cognitive abilities (verbal,
quantitative, etc.) are independent—in today's terms,
that there are "multiple intelligences." Empirical
evidence says otherwise.
The
most difficult ideas to change are those which seem so obviously
true that we can scarcely imagine otherwise until confronted
with unambiguous disconfirmation. For example, even behavior
geneticists had long presumed that non-genetic influences
on intelligence and other human traits grow with age while
genetic ones weaken. Evidence reveals the opposite for intelligence
and perhaps other human traits as well: heritabilities actually increase with
age. My attempt to explain the evolution of high human intelligence
has also led me to question another such "obvious truth," namely,
that human evolution ceased when man took control of his
environment. I now suspect that precisely the opposite occurred.
Here is why.
Human
innovation itself may explain the rapid increase in human
intelligence during the last 500,000 years. Although it has
improved the average lot of mankind, innovation creates evolutionarily
novel hazards that put the less intelligent members of a
group at relatively greater risk of accidental injury and
death. Consider the first and perhaps most important human
innovation, the controlled use of fire. It is still a major
cause of death worldwide, as are falls from man-made structures
and injuries from tools, weapons, vehicles, and domesticated
animals. Much of humankind has indeed escaped from its environment
of evolutionary adaptation (EEA), but only by fabricating
new and increasingly complicated physical ecologies. Brighter
individuals are better able not only to extract the benefits
of successive innovations, but also to avoid the novel threats
to life and limb that they create. Unintentional injuries
and deaths have such a large chance component and their causes
are so varied that we tend to dismiss them as merely "accidental," as
if they were uncontrollable. Yet all are to some extent preventable
with foresight or effective response, which gives an edge
to more intelligent individuals. Evolution requires only
tiny such differences in odds of survival in order to ratchet
up intelligence over thousands of generations. If human innovation
fueled human evolution in the past, then it likely still
does today.
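A minimal sketch of how small such an edge can be and still win out, using a textbook haploid replicator step; the 1% and 0.1% fitness advantages and the 1% starting frequency are illustrative assumptions, not estimates from the essay:

    def generations_to_majority(advantage, p=0.01, target=0.99):
        """Generations for a variant with relative fitness 1 + advantage to
        grow from frequency p to frequency target (haploid replicator step)."""
        gens = 0
        while p < target:
            p = p * (1.0 + advantage) / (p * (1.0 + advantage) + (1.0 - p))
            gens += 1
        return gens

    for s in (0.01, 0.001):   # a 1% and a 0.1% survival/reproduction edge
        print(f"{s:.1%} edge: ~{generations_to_majority(s)} generations to predominate")

With a 1% edge the variant predominates in roughly a thousand generations; with a 0.1% edge, in roughly ten thousand. Both are modest spans against the hundreds of thousands of years the essay discusses.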
Another
of my presumptions bit the dust, but in the process exposed
a more fundamental, long-brewing challenge to my thinking
about scientific explanation. At least in the social sciences,
we seek big effects when predicting human behavior, whether
we are trying to explain differences in happiness, job performance,
depression, health, or income. "Effect size" (percentage
of variance explained, standardized mean difference, etc.)
has become our yardstick for judging the substantive importance
of potential causes. Yet, while strong correlations between
individuals' attributes and their fates may signal causal
importance, small correlations do not necessarily signal
unimportance.
Evolution
provides an obvious example. Like the house in a gambling
casino, evolution realizes big gains by playing small odds
over myriad players and long stretches of time. The small-is-inconsequential
presumption is so ingrained and reflexive, however, that
even those of us who seek to explain the evolution of human
intelligence over the eons have often rejected hypothesized
mechanisms (say, superior hunting skills) when they could
not explain differential survival or reproductive success
within a single generation.
IQ
tests provide a useful analogy for understanding the power
of small but consistent effects. No single IQ test item measures
intelligence well or has much predictive power. Yet, with
enough items, one gets an excellent test of general intelligence
(g) from only weakly g-loaded items. How?
When test items are considered one by one, the role of chance
dominates in determining who answers the item correctly.
When test takers' responses to many such items are added
together, however, the random effects tend to cancel each
other out, and g's small contribution to all answers
piles up. The result is a test that measures almost nothing
but g.
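A small simulation of that logic (my illustration; the 0.15 item loading on g and the test lengths are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(0)
    n_people = 5000
    g = rng.standard_normal(n_people)        # latent general ability

    def test_score(n_items, loading=0.15):
        """Total score on n_items pass/fail items, each driven mostly by
        item-specific noise and only weakly by g."""
        score = np.zeros(n_people)
        for _ in range(n_items):
            latent = loading * g + np.sqrt(1 - loading**2) * rng.standard_normal(n_people)
            score += (latent > 0)            # 1 if the item is answered correctly
        return score

    for n_items in (1, 10, 50, 200):
        r = np.corrcoef(g, test_score(n_items))[0, 1]
        print(f"{n_items:4d} items: correlation of total score with g = {r:.2f}")

In a run like this the correlation climbs from roughly .1 for a single item toward .8 or more for a couple of hundred items, which is the piling-up effect described above.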
I
have come to suspect that some of the most important forces
shaping human populations work in this inconspicuous but
inexorable manner. When seen operating in individual instances,
their impact is so small as to seem inconsequential, yet
their consistent impact over events or individuals produces
marked effects. To take a specific example, only the calculus
of small but consistent tendencies in health behavior over
a lifetime seems likely to explain many demographic disparities
in morbidity and mortality, not just accidental death.
Developing
techniques to identify, trace, and quantify such influences
will be a challenge. It currently bedevils behavior geneticists
who, having failed to find any genes with substantial influence
on intelligence (within the normal range of variation), are
now formulating strategies to identify genes that may account
for at most only 0.5% of the variance in intelligence.

RANDOLPH M. NESSE
Psychiatrist, University of Michigan; Coauthor, Why We Get Sick

Truth does not reside with smart university experts
I used to believe that you could find out
what is true by finding the smartest people and finding
out what they think. However, the most brilliant people
keep turning out to be wrong. Linus Pauling's ideas
about Vitamin C are fresh in mind, but the famous physicist
Lord Kelvin did more harm in 1900 with calculations based
on the rate of earth's cooling that seemed to show that
there had not been enough time for evolution to take place.
A lot of the belief that smart people are right is an illusion
caused by smart people being very convincing… even
when they are wrong.
I also used to believe that you could find
out what is true by relying on experts — smart experts — who
devote themselves to a topic. But most of us remember
being told to eat margarine because it is safer than butter — then
it turned out that trans-fats are worse. Doctors
told women they must use hormone replacement therapy (HRT)
to prevent heart attacks — but HRT turned out to
increase heart attacks. Even when they are not wrong,
expert reports often don't tell you what is true. For
instance, read reviews by experts about antidepressants;
they provide reams of data, but you won't often find the
simple conclusion that these drugs are not all that helpful
for most patients. It is not just others; I shudder
to think about all the false beliefs I have unknowingly
but confidently passed on to my patients, thanks to my
trust in experts. Everyone should read the article by Ioannidis, "Why
most published research findings are false."
Finally, I used to believe that truth had
a special home in universities. After all, universities
are supposed to be devoted to finding out what is true,
and teaching students what we know and how to find out
for themselves. Universities may be the best show in town for
truth pursuers, but most stifle innovation and constructive
engagement of real controversies, not just sometimes, but
most of the time, systematically.
How can this be? Everyone is trying so hard
to encourage innovation! The Regents take great pains
to find a President who supports integrity and creativity,
the President chooses exemplary Deans, who mount massive
searches for the best Chairs. Those Chairs often hire supporters
who work in their own areas, but what if one wants to hire
someone doing truly innovative work, someone who might
challenge established opinions? Faculty committees
intervene to ensure that most positions go to people just
about like themselves, and the Dean asks how much grant
overhead funding a new faculty member will bring in. No
one with new ideas, much less work in a new area or critical
of established dogmas, can hope to get through this fine
sieve. If they do, review committees are waiting.
And so, by a process of unintentional selection, diversity
of thought and topic is excluded. If it still sneaks
in, it is purged. The disciplines become ever more
insular. And universities find themselves unwittingly inhibiting
progress and genuine intellectual engagement. University
leaders recognize this and hate it, so they are constantly
creating new initiatives to foster innovative interdisciplinary
work. These have the same lovely sincerity as new
diets for the New Year, and the same blindness to the structural
factors responsible for the problems.
Where can we look to find what is true? Smart
experts in universities are a place to start, but if we
could acknowledge how hard it is for truth and its pursuers
to find safe university lodgings, and how hard it is for
even the smartest experts to offer objective conclusions,
we could begin to design new social structures that would
support real intellectual innovation and engagement.

BART KOSKO
Information Scientist, USC; Author, Noise

THE SAMPLE MEAN
I have changed my mind about using the sample mean as the best way to combine measurements into a single predictive value. Sometimes it is the best way to combine data but in general you do not know that in advance. So it is not the one number from or about a data set that I would want to know in the face of total uncertainty if my life depended on the predicted outcome.
Using the sample mean always seemed like the natural thing to do. Just add up the numerical data and divide by the number of data. I do not recall ever doubting that procedure until my college years. Even then I kept running into the mean in science classes and even in philosophy classes where the discussion of ethics sometimes revolved around Aristotle's theory of the "golden mean." There were occasional mentions of medians and modes and other measures of central tendency but they were only occasional.
The sample mean also kept emerging as the optimal way to combine data in many formal settings. At least it did given what appeared to be the reasonable criterion of minimizing the squared errors of the observations. The sample mean falls out from just one quick application of the differential calculus. So the sample mean had on its side not only mathematical proof and the resulting prominence of appearing in hundreds if not thousands of textbooks and journal articles. It was and remains the evidentiary workhorse of modern applied science and engineering. The sample mean summarizes test scores and gets plotted in trend lines and centers confidence intervals among numerous other applications.
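For the record, the one quick application of the calculus alluded to above: to minimize the total squared error E(m) = sum_i (x_i - m)^2, set dE/dm = -2 sum_i (x_i - m) = 0, which gives n*m = sum_i x_i and hence m = (1/n) sum_i x_i, the sample mean.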
Then I ran into the counter-example of Cauchy data. These data come from bell curves with tails just slightly thicker than the familiar "normal" bell curve. Cauchy bell curves also describe "normal" events that correspond to the main bell of the curves. But Cauchy bell curves have thicker tails than normal bell curves have and these thicker tails allow for many more "outliers" or rare events. And Cauchy bell curves arise in a variety of real and theoretical cases. The counter-example is that the sample mean of Cauchy data does not improve no matter how many samples you combine. This result contrasts with the usual result from sampling theory that the variance of the sample mean falls with each new measurement and hence predictive accuracy improves with sample size (assuming that the square-based variance term measures dispersion and that such a mathematical construct always produces a finite value — which it need not produce in general). The sample mean of ten thousand Cauchy data points has no more predictive power than does the sample mean of ten such data points. Indeed the sample mean of Cauchy data has no more predictive power than does any one of the data points picked at random. This counter-example is but one of the anomalous effects that arise from averaging data from many real-world probability curves that deviate from the normal bell curve or from the twenty or so other closed-form probability curves that have found their way into the literature in the last century.
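A quick way to see the counter-example for yourself; the sample sizes and random seed below are arbitrary. The running mean of normal data settles down, the running mean of Cauchy data never does, while the running median of the same Cauchy data behaves:

    import numpy as np

    rng = np.random.default_rng(42)
    normal_data = rng.standard_normal(100_000)
    cauchy_data = rng.standard_cauchy(100_000)

    for n in (10, 1_000, 100_000):
        print(f"n = {n:6d}  "
              f"normal mean = {normal_data[:n].mean():8.3f}  "
              f"cauchy mean = {cauchy_data[:n].mean():8.3f}  "
              f"cauchy median = {np.median(cauchy_data[:n]):6.3f}")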
Nor have scientists always used the sample mean. Historians of mathematics have pointed to the late sixteenth century and the introduction of the decimal system for the start of the modern practice of computing the sample mean of data sets to estimate typical parameters. Before then the mean apparently meant the arithmetic average of just two numbers as it did with Aristotle. So Hernan Cortes may well have had a clear idea about the typical height of an adult male Aztec in the early sixteenth century. But he quite likely did not arrive at his estimate of the typical height by adding measured heights of Aztec males and then dividing by the number added. We have no reason to believe that Cortes would have resorted to such a computation if the Church or King Charles had pressed him to justify his estimate. He might just as well have lined up a large number of Aztec adult males from shortest to tallest and then reported the height of the one in the middle.
There was a related and deeper problem with the sample mean: It is not robust. Extremely small or large values distort it. This rotten-apple property stems from working not with measurement errors but with squared errors. The squaring operation exaggerates extreme data even though it greatly simplifies the calculus when trying to find the estimate that minimizes the observed errors. That estimate turns out to be the sample mean but not in general if one works with the raw error itself or other measures. The statistical surprise of sorts is that using the raw or absolute error of the data gives the sample median as the optimal estimate.
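A small numerical check of that statistical surprise, on made-up data that includes one deliberately extreme value: scanning candidate estimates, the summed squared error bottoms out at the sample mean and the summed absolute error at the sample median.

    import numpy as np

    data = np.array([1.0, 2.0, 2.5, 3.0, 50.0])    # note the one extreme value
    candidates = np.linspace(0.0, 60.0, 60_001)    # candidate single-number estimates

    sse = ((data[:, None] - candidates[None, :]) ** 2).sum(axis=0)   # squared errors
    sae = np.abs(data[:, None] - candidates[None, :]).sum(axis=0)    # absolute errors

    print(f"squared error is smallest at  {candidates[sse.argmin()]:.3f}"
          f"  (sample mean   = {data.mean():.3f})")
    print(f"absolute error is smallest at {candidates[sae.argmin()]:.3f}"
          f"  (sample median = {np.median(data):.3f})")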
The sample median is robust against outliers. If
you throw away the largest and smallest values in a data
set then the median does not change but the sample mean
does (and gives a more robust "trimmed" mean
as used in combining the judging scores in figure skating
and elsewhere to remove judging bias). Realtors
have long since stated typical housing prices as sample
medians rather than sample means because a few mansions
can so easily skew the sample mean. The sample median
would not change even if the price of the most expensive
house rose to infinity. The median would still be
the middle-ranked house if the number of houses were odd. But
this robustness is not a free lunch. It comes at
the cost of ignoring some of the information in the numerical
magnitudes of the data and has its own complexities for
multidimensional data.
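The housing-price point in miniature, with made-up prices in thousands: one mansion drags the mean far off, while the median and the throw-away-the-extremes trimmed mean barely notice.

    import numpy as np

    prices = np.array([180, 210, 220, 240, 260, 280, 300, 320, 9000.0])   # one mansion

    def trimmed_mean(x):
        """Mean after throwing away the single largest and smallest values."""
        return float(np.sort(x)[1:-1].mean())

    print(f"mean         : {prices.mean():8.1f}")
    print(f"median       : {np.median(prices):8.1f}")
    print(f"trimmed mean : {trimmed_mean(prices):8.1f}")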
Other evidence pointed to using the sample median rather than the sample mean. Statisticians have computed the so-called breakdown point of these and other statistical measures of central tendency. The breakdown point measures the largest proportion of data outliers that a statistic can endure before it breaks down in a formal sense of producing very large deviations. The sample median achieves the theoretical maximum breakdown point. The sample mean does not come close. The sample median also turns out to be the optimal estimate for certain types of data (such as Laplacian data) found in many problems of image processing and elsewhere — if the criterion is maximizing the probability or likelihood of the observed data. And the sample median can also center confidence intervals. So it too gives rise to hypothesis tests and does so while making fewer assumptions about the data than the sample mean often requires for the same task.
The clincher was the increasing use of adaptive or neural-type algorithms in engineering and especially in signal processing. These algorithms cancel echoes and noise on phone lines as well as steer antennas and dampen vibrations in control systems. The whole point of using an adaptive algorithm is that the engineer cannot reasonably foresee all the statistical patterns of noise and signals that will bombard the system over its lifetime. No type of lifetime average will give the kind of performance that real-time adaptation will give if the adaptive algorithm is sufficiently sensitive and responsive to its measured environment. The trouble is that most of the standard adaptive algorithms derive from the same old and non-robust assumptions about minimizing squared errors and thus they result in the use of sample means or related non-robust quantities. So real-world gusts of data wind tend to destabilize them. That is a high price to pay just because in effect it makes nineteenth-century calculus computations easy and because such easy computations still hold sway in so much of the engineering curriculum. It is an unreasonably high price to pay in many cases where a comparable robust median-based system or its kin both avoids such destabilization and performs similarly in good data weather and does so for only a slightly higher computational cost. There is a growing trend toward using robust algorithms. But engineers still have launched thousands of these non-robust adaptive systems into the stream of commerce in recent years. We do not know whether the social costs involved from using these non-robust algorithms are negligible or substantial.
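To illustrate the point about adaptive filters, here is a toy sketch of mine rather than an example from the essay: a one-weight filter trained by plain LMS, which descends the squared error, against a signed-error variant that in effect descends the absolute error, on data hit by occasional impulsive outliers. The step size, outlier rate, and noise scales are arbitrary, and sign-error LMS stands in for the broader family of robust, median-like algorithms the paragraph alludes to.

    import numpy as np

    rng = np.random.default_rng(1)
    true_w, n_steps, mu, burn = 2.0, 20_000, 0.01, 1_000
    w_lms = w_sign = 0.0
    dev_lms, dev_sign = [], []

    for _ in range(n_steps):
        x = rng.standard_normal()
        noise = 0.1 * rng.standard_normal()
        if rng.random() < 0.01:                 # occasional impulsive outlier
            noise += 50.0 * rng.standard_cauchy()
        d = true_w * x + noise                  # noisy measurement of true_w * x

        e = d - w_lms * x                       # LMS: update proportional to the error
        w_lms += mu * e * x

        e = d - w_sign * x                      # signed-error LMS: only the error's sign
        w_sign += mu * np.sign(e) * x

        dev_lms.append(abs(w_lms - true_w))
        dev_sign.append(abs(w_sign - true_w))

    print(f"plain LMS  : mean |error| = {np.mean(dev_lms[burn:]):.3f}, "
          f"worst = {np.max(dev_lms[burn:]):.3f}")
    print(f"signed LMS : mean |error| = {np.mean(dev_sign[burn:]):.3f}, "
          f"worst = {np.max(dev_sign[burn:]):.3f}")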
So if under total uncertainty I had to pick a predictive number from a set of measured data and if my life depended on it — I would now pick the median.

DAVID GELERNTER
Computer Scientist, Yale University; Chief Scientist, Mirror Worlds Technologies; Author, Drawing Life

Users Are Not Reactionary After All
What
I've changed my mind about is that the public is wedded
to obsolete 1970s GUIs & info mgmt forever — PARC's
desktop & Bell Labs' Unix file system. I'll give two
example from my own experience. Both constitute long term
ideas of mine and might seem like self-promotion, but my
point is that as a society we don't have the patience to
develop fully those big ideas that need time to soak in.
I
first described a GUI called "lifestreams" in
the Washington Post in 1994. By the early 2000s, I thought
this system was dead in the water, destined to be resurrected
in a grad student's footnote around the 29th century. The
problem was (I thought) that Lifestreams was too unfamiliar,
insufficiently
"evolutionary" and too "revolutionary" (as
the good folks at ARPA like to say [or something like that]);
you need to go step-by-step with the public and the industry
or you lose.
But
today "lifestreams" are all over the net (take
a look yourself), and I'm told that "lifestreaming" has
turned into a verb at some recent Internet conferences.
According to ZDnet.com, "Basically what's important
about the OLPC [one laptop per child], has nothing to do
with its nominal purposes and everything to do with its
interface. Ultimately traceable to David Gelernter's 'Lifestreams'
model, this is not just a remake of Apple's evolution of
the original work at Palo Alto, but something new."
Moral:
the public may be cautious but is not reactionary.
In
a 1991 book called Mirror Worlds, I predicted
that everyone would be putting his personal stuff in the
Cybersphere (AKA "the clouds"); I said the same
in a 2000 manifesto on Edge called "The 2nd
Coming", & in various other pieces in between.
By 2005 or so, I assumed that once again I'd jumped the
gun, by too long to learn the results pre-posthumously — but
once again this (of all topics) turns out to be hot and all
over the place nowadays. "Cloud computing" is
the next big thing. What does this all prove? If you're
patient, good ideas find audiences. But you have to be very patient.
And
if you expect to cash in on long-term ideas in the United
States, you're certifiable.
This
last point is a lesson I teach my students, and on this
item I haven't (and don't expect to) change my mind. But
what the hell? It's New Year's, and there are worse things
than being proved right once in a while, even if it's too
late to count.