"WHAT IS YOUR DANGEROUS IDEA?"
Bart Kosko, Professor, Electrical Engineering, USC; Author, Heaven
in a Chip
Most bell curves have thick tails
Any challenge to the normal probability bell curve can have far-reaching
consequences because a great deal of modern science and engineering
rests on this special bell curve. Most of the standard hypothesis
tests in statistics rely on the normal bell curve either directly
or indirectly. These tests permeate the social and medical
sciences and underlie the poll results in the media. Related
tests and assumptions underlie the decision algorithms in radar
and cell phones that decide whether the incoming energy blip
is a 0 or a 1. Management gurus exhort manufacturers to follow
the "six sigma" creed of reducing the variance in
products to only two or three defective products per million
in accord with "sigmas" or standard deviations from
the mean of a normal bell curve. Models for trading stock and
bond derivatives assume an underlying normal bell-curve structure.
Even quantum and signal-processing uncertainty principles or
inequalities involve the normal bell curve as the equality
condition for minimum uncertainty. Deviating even slightly
from the normal bell curve can sometimes produce qualitatively
different results.
My proposed dangerous idea stems from two facts about the normal
bell curve.
The first fact is that the normal bell curve is not the only bell
curve. There are
at least as many different bell curves as there are real numbers.
This simple mathematical fact poses at once a grammatical challenge
to the title of Charles Murray's IQ book The Bell Curve.
Murray should have used the indefinite article "A" instead
of the definite article "The." This is but one of
many examples that suggest that most scientists simply equate
the entire infinite set of probability bell curves with the
normal bell curve of textbooks. Nature need not share the same
practice. Human and non-human behavior can be far more diverse
than the classical normal bell curve allows.
The second fact is that the normal bell curve is a skinny bell curve. It puts most
of its probability mass in the main lobe or bell while the
tails quickly taper off exponentially. So "tail events" appear
rare simply as an artifact of this bell curve's mathematical
structure. This limitation may be fine for approximate descriptions
of "normal" behavior near the center of the distribution.
But it largely rules out or marginalizes the wide range of
phenomena that take place in the tails.
The dangerous idea is that most bell curves have thick tails. Rare events are not so rare
if the bell curve has thicker tails than the normal bell curve
has. Telephone interrupts are more frequent. Lightning flashes
are more frequent and more energetic. Stock market fluctuations
or crashes are more frequent. How much more frequent they are
depends on how thick the tail is — and that is always
an empirical question of fact. Neither logic nor the
assume-the-normal-curve habit can answer the question. Instead
scientists need to carry
their evidentiary burden a step further and apply one of the
many available statistical tests to determine and distinguish
the bell-curve thickness.
One response to this call for tail-thickness sensitivity is that
logic alone can decide the matter because of the so-called
central limit theorem of classical probability theory. This
important "central" result states that some suitably
normalized sums of random terms will converge to a standard
normal random variable and thus have a normal bell curve in
the limit. So Gauss and a lot of other long-dead mathematicians
got it right after all and thus we can continue to assume normal
bell curves with impunity.
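The convergence the theorem promises is easy to watch numerically. Below is a minimal sketch (Python with NumPy; not from the essay): normalized sums of uniform random terms, which have finite variance, settle onto the standard normal bell curve.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sum n finite-variance terms (uniform on [0, 1], variance 1/12),
# then center and scale. The classical central limit theorem says
# the result approaches a standard normal random variable.
n, trials = 500, 5_000
u = rng.random((trials, n))
sums = (u.sum(axis=1) - n * 0.5) / np.sqrt(n / 12.0)

# A standard normal puts about 99.7% of its mass within 3 sigma.
print(f"fraction within 3 sigma: {np.mean(np.abs(sums) < 3):.4f}")
```

The crucial hidden premise, as the next paragraphs explain, is the finite variance of each uniform term.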
This argument fails in general for two reasons.
The first reason it fails is that the classical central limit theorem
result rests on a critical assumption that need not hold and
that often does not hold in practice. The theorem assumes that
the random dispersion about the mean is so comparatively slight
that a particular measure of this dispersion — the variance
or the standard deviation — is finite or does not blow
up to infinity in a mathematical sense. Most bell curves have
infinite or undefined variance even though they have a finite
dispersion about their center point. The error is not in the
bell curves but in the two-hundred-year-old assumption that
variance equals dispersion. It does not in general. Variance
is a convenient but artificial and non-robust measure of dispersion.
It tends to overweight "outliers" in the tail regions
because the variance squares the underlying errors between
the values and the mean. Such squared errors simplify the math
but produce the infinite effects. These effects do not appear
in the classical central limit theorem because the theorem
assumes them away.
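The pathology is easy to see in simulation. A minimal sketch (Python with NumPy; the Cauchy curve stands in for a thick-tailed stable bell curve with undefined variance): the running sample variance of normal data settles down, while that of Cauchy data never does.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

normal = rng.standard_normal(n)   # finite variance (equal to 1)
cauchy = rng.standard_cauchy(n)   # undefined (infinite) variance

for name, x in (("normal", normal), ("cauchy", cauchy)):
    # Sample variance over growing prefixes of the data stream.
    prefix_vars = [x[:m].var() for m in (100, 1_000, 10_000, 100_000)]
    print(name, [round(v, 1) for v in prefix_vars])

# The normal prefix variances hover near 1. The Cauchy ones jump
# erratically and tend to grow, because squaring lets single tail
# events dominate the sum: variance is not the same as dispersion.
```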
The second reason the argument fails is that the central limit
theorem itself is just a special case of a more general result
called the generalized central limit theorem. The generalized
central limit theorem yields convergence to thick-tailed bell
curves in the general case. Indeed it yields convergence to
the thin-tailed normal bell curve only in the special case
of finite variances. These general cases define the infinite
set of the so-called stable probability distributions
and their symmetric versions are bell curves. There are still
other types of thick-tailed bell curves (such as the Laplace
bell curves used in image processing and elsewhere) but the
stable bell curves are the best known and have several nice
mathematical properties. The figure below shows the normal
or Gaussian bell curve superimposed over three thicker-tailed
stable bell curves. The catch in working with stable bell curves
is that their mathematics can be nearly intractable. So far
we have closed-form solutions for only two stable bell curves
(the normal or Gaussian and the very-thick-tailed Cauchy curve)
and so we have to use transform and computer techniques to
generate the rest. Still the exponential growth in computing
power has long since made stable or thick-tailed analysis practical
for many problems of science and engineering.
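Those computer techniques are now routine. As one illustration (a sketch assuming SciPy's `levy_stable` distribution is available; not part of the essay), we can draw samples from symmetric alpha-stable curves of varying tail thickness and watch a robust dispersion measure stay finite even as extreme samples grow:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Draw from symmetric alpha-stable bell curves (skewness beta = 0).
# alpha = 2 is the Gaussian special case and alpha = 1 is the Cauchy;
# values in between give intermediate tail thickness.
for alpha in (2.0, 1.5, 1.0):
    x = stats.levy_stable.rvs(alpha, 0.0, size=5_000, random_state=rng)
    # A robust dispersion measure such as the interquartile range
    # stays finite for every alpha even when the variance does not.
    iqr = np.subtract(*np.percentile(x, [75, 25]))
    print(f"alpha={alpha}: IQR={iqr:.2f}, largest |sample|={np.abs(x).max():.1f}")
```

The interquartile ranges stay comparable across alpha, while the largest samples balloon as the tails thicken.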
This last point shows how competing bell curves offer a new context
for judging whether a given set of data reasonably obey a normal
bell curve. One of the most popular eye-ball tests for normality
is the PP or probability plot of the data. The data should
almost perfectly fit a straight line if the data come from
a normal probability distribution. But this seldom happens
in practice. Instead real data snake all around the ideal straight
line in a PP diagram. So it is easy for the user to shrug and
call any data deviation from the ideal line good enough in
the absence of a direct bell-curve competitor. A fairer test
is to compare the normal PP plot with the best-fitting thick-tailed
or stable PP plot. The data may well line up better in a thick-tailed
PP diagram than they do in the usual normal PP diagram. This
test evidence would reject the normal bell-curve hypothesis
in favor of the thicker-tailed alternative. Ignoring these
thick-tailed alternatives favors accepting the less-accurate
normal bell curve and thus leads to underestimating the occurrence
of tail events.
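One way to run this fairer test, sketched below with SciPy's `probplot` (an assumed tool, not the essay's own code): generate thick-tailed data and compare the straight-line correlation r of a normal fit against a Cauchy fit.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = stats.cauchy.rvs(size=2_000, random_state=rng)

# probplot returns (quantile pairs, (slope, intercept, r)) where r
# measures how well the ordered data fit a straight line against
# the candidate distribution's theoretical quantiles.
_, (_, _, r_normal) = stats.probplot(data, dist=stats.norm)
_, (_, _, r_cauchy) = stats.probplot(data, dist=stats.cauchy)

print(f"normal fit: r = {r_normal:.3f}")
print(f"cauchy fit: r = {r_cauchy:.3f}")
# The thick-tailed competitor lines up better here, which counts as
# evidence against the normal bell-curve hypothesis for this data.
```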
Stable or thick-tailed probability curves continue to turn up as more
scientists and engineers search for them. They tend to accurately
model impulsive phenomena such as noise in telephone lines
or in the atmosphere or in fluctuating economic assets. Skewed
versions appear to best fit the data for the Ethernet traffic
in bit packets. Here again the search is ultimately an empirical
one for the best-fitting tail thickness. Similar searches will
only increase as the math and software of thick-tailed bell
curves work their way into textbooks on elementary probability
and statistics. Much of it is already freely available on the
Internet.
Thick-tailed bell curves also imply that there is not just a single form
of pure white noise. Here too there are at least as many forms
of white noise (or any colored noise) as there are real numbers.
Whiteness just means that the noise spikes or hisses and pops
are independent in time or that they do not correlate with
one another. The noise spikes themselves can come from any
probability distribution and in particular they can come from
any stable or thick-tailed bell curve. The figure below shows
the normal or Gaussian bell curve and three kindred thicker-tailed
bell curves and samples of their corresponding white noise.
The normal curve has the upper-bound alpha parameter of 2 while
the thicker-tailed curves have lower values — tail thickness
increases as the alpha parameter falls. The white noise from
the thicker-tailed bell curves becomes much more impulsive
as their bell narrows and their tails thicken because then
more extreme events or noise spikes occur with greater frequency.
Thick-tailed bell curves: The figure on the left shows four superimposed
symmetric alpha-stable bell curves with different tail thicknesses
while the plots on the right show samples of their corresponding
forms of white noise. The parameter alpha describes
the thickness of a stable bell curve and ranges from 0 to
2. Tails grow thicker as alpha grows
smaller. The white noise grows more impulsive as the tails
grow thicker. The Gaussian or normal bell curve (alpha = 2) has
the thinnest tail of the four stable curves while the Cauchy
bell curve (alpha = 1) has
the thickest tails and thus the most impulsive noise. Note
the different magnitude scales on the vertical axes. All
the bell curves have finite dispersion while only the Gaussian
or normal bell curve has a finite variance or finite standard
deviation.
My colleagues and I have recently shown that most mathematical
models of spiking neurons in the retina can not only benefit
from small amounts of added noise by increasing their Shannon
bit count but they still continue to benefit from added thick-tailed
or "infinite-variance" noise. The same result holds
experimentally for a carbon nanotube transistor that detects
signals in the presence of added electrical noise.
Thick-tailed bell curves further call into question what counts as a statistical "outlier" or
bad data: Is a tail datum an error or a pattern? The line between
extreme and non-extreme data is not just fuzzy but depends
crucially on the underlying tail thickness.
The usual rule of thumb is that a datum is suspect if it lies
outside three or even two standard deviations from the mean.
Such rules of thumb reflect both the tacit assumption that
dispersion equals variance and the classical central-limit
effect that large data sets are not just approximately bell
curves but approximately thin-tailed normal bell curves. An
empirical test of the tails may well justify the latter thin-tailed
assumption in many cases. But the mere assertion of the normal
bell curve does not. So "rare" events may not be
so rare after all.
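How misleading the rule of thumb can be is easy to quantify. A minimal sketch (Python with NumPy; the Cauchy curve again stands in for a thick-tailed stable curve): judge both curves by the same robust dispersion yardstick and count the "three-sigma" tail events.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

def mad_sigma(x):
    # Median absolute deviation, rescaled so it matches one standard
    # deviation on normal data; it stays finite for thick tails too.
    return 1.4826 * np.median(np.abs(x - np.median(x)))

for name, x in (("normal", rng.standard_normal(n)),
                ("cauchy", rng.standard_cauchy(n))):
    s = mad_sigma(x)
    tail = np.mean(np.abs(x) > 3 * s)
    print(f"{name}: fraction beyond 3 dispersion units = {tail:.4f}")

# Normal data reproduce the textbook figure of roughly 0.3%.
# Cauchy data put about 14% of their mass out there: the same
# "outlier" rule flags events that are not rare at all.
```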
Matt Ridley, Science Writer; Founding chairman of the
International Centre for Life; Author, The
Agile Gene: How Nature Turns on Nurture
Government is the problem not the solution
At all times and in all places there has been too much government.
We now know what prosperity is: it is the gradual extension
of the division of labour through the free exchange of goods
and ideas, and the consequent introduction of efficiencies
by the invention of new technologies. This is the process
that has given us health, wealth and wisdom on a scale unimagined
by our ancestors. It not only raises material standards of
living, it also fuels social integration, fairness and charity.
It has never failed yet. No society has grown poorer or more
unequal through trade, exchange and invention. Think of pre-Ming
as opposed to Ming China, seventeenth century Holland as
opposed to imperial Spain, eighteenth century England as
opposed to Louis XIV's France, twentieth century America
as opposed to Stalin's Russia, or post-war Japan, Hong Kong
and Korea as opposed to Ghana, Cuba and Argentina. Think
of the Phoenicians as opposed to the Egyptians, Athens as
opposed to Sparta, the Hanseatic League as opposed to the
Roman Empire. In every case, weak or decentralised government,
but strong free trade led to surges in prosperity for all,
whereas strong, central government led to parasitic, tax-fed
officialdom, a stifling of innovation, relative economic
decline and usually war.
Think of Rome. It prospered because it was a free trade zone. But
it repeatedly invested the proceeds of that prosperity in
too much government and so wasted it in luxury, war, gladiators
and public monuments. The Roman empire's list of innovations
is derisory, even compared with that of the 'dark ages' that
followed it.
In every age and at every time there have been people who say
we need more regulation, more government. Sometimes, they
say we need it to protect exchange from corruption, to set
the standards and police the rules, in which case they have
a point, though often they exaggerate it. Self-policing standards
and rules were developed by free-trading merchants in medieval
Europe long before they were taken over and codified as laws
(and often corrupted) by monarchs and governments.
Sometimes, they say we need it to protect the weak, the victims
of technological change or trade flows. But throughout history
such intervention, though well meant, has usually proved misguided
— because its progenitors refuse to believe in (or find
out about) David Ricardo's Law of Comparative Advantage: even
if China is better at making everything than France, there
will still be a million things it pays China to buy from France
rather than make itself. Why? Because rather than invent, say,
luxury goods or insurance services itself, China will find
it pays to make more T shirts and use the proceeds to import
luxury goods and insurance.
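Ricardo's point can be made concrete with a toy calculation (the productivity numbers below are invented purely for illustration, not taken from the text):

```python
# Hypothetical output per worker-year. China is absolutely better
# at producing both goods in this toy example.
china = {"t_shirts": 100, "insurance": 20}
france = {"t_shirts": 30, "insurance": 10}

# Opportunity cost of one unit of insurance, measured in T-shirts
# forgone: the comparative-advantage test compares these ratios.
cost_china = china["t_shirts"] / china["insurance"]    # 5 shirts
cost_france = france["t_shirts"] / france["insurance"] # 3 shirts

# France gives up fewer shirts per unit of insurance, so it pays
# China to make shirts and buy insurance from France, even though
# China could make both more efficiently in absolute terms.
better_insurer = "France" if cost_france < cost_china else "China"
print(f"{better_insurer} has the comparative advantage in insurance")
```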
Government is a very dangerous toy. It is used to fight wars, impose
ideologies and enrich rulers. True, nowadays, our leaders
do not enrich themselves (at least not on the scale of the
Sun King), but they enrich their clients: they preside over
vast and insatiable parasitic bureaucracies that grow by
Parkinson's Law and live off true wealth creators such as
traders and inventors.
Of course, it is possible to have too little government. Only, that
has not been the world's problem for millennia. After the
century of Mao, Hitler and Stalin, can anybody really say
that the risk of too little government is greater than the
risk of too much? The dangerous idea we all need to learn
is that the more we limit the growth of government, the better
off we will all be.
David Pizarro, Psychologist, Cornell University
What some individuals consider a sacrosanct ability to perceive
moral truths may instead be a hodgepodge of simpler psychological
mechanisms, some of which have evolved for other purposes.
It is increasingly apparent that our moral sense comprises a
fairly loose collection of intuitions, rules of thumb, and
emotional responses that may have emerged to serve a variety
of functions, some of which originally had nothing at all
to do with ethics. These mechanisms, when tossed in with
our general ability to reason, seem to be how humans come
to answer the question of good and evil, right and wrong.
Intuitions about action, intentionality, and control, for
instance, figure heavily into our perception of what constitutes
an immoral act. The emotional reactions of empathy and disgust
likewise figure into our judgments of who deserves moral
protection and who doesn't. But the ability to perceive intentions
probably didn't evolve as a way to determine who deserves
moral blame. And the emotion of disgust most likely evolved
to keep us safe from rotten meat and feces, not to provide
information about who deserves moral protection.
Giving up the belief that our moral sense provides a royal road to
moral truth is an uncomfortable notion. Most people, after
all, are moral realists. They believe acts are objectively
right or wrong, like math problems. The dangerous idea is
that our intuitions may be poor guides to moral truth, and
can easily lead us astray in our everyday moral decisions.
Randolph Nesse, Psychiatrist, University of Michigan; Coauthor (with George
Williams), Why We Get Sick: The New Science of Darwinian Medicine
The idea of promoting dangerous ideas seems dangerous to me.
I spend considerable effort to prevent my ideas from becoming
dangerous, except, that is, to entrenched false beliefs and
to myself. For instance, my idea that bad feelings are useful
for our genes upends much conventional wisdom about depression
and anxiety. I find, however, that I must firmly restrain
journalists who are eager to share the sensational but incorrect
conclusion that depression should not be treated. Similarly,
many people draw dangerous inferences from my work on Darwinian
medicine. For example, just because fever is useful does
not mean that it should not be treated. I now emphasize that
evolutionary theory does not tell you what to do in the clinic;
it just tells you what studies need to be done.
I also feel obligated to prevent my ideas from becoming dangerous
on a larger scale. For instance, many people who hear about
Darwinian medicine assume incorrectly that it implies support
for eugenics. I encourage them to read history as well as
my writings. The record shows how quickly natural selection
was perverted into Social Darwinism, an ideology that seemed
to justify letting poor people starve. Related ideas keep
emerging. We scientists have a responsibility to challenge
dangerous social policies incorrectly derived from evolutionary
theory. Racial superiority is yet another dangerous idea
that hurts real people. More examples come to mind all too
easily and some quickly get complicated. For instance, the
idea that men are inherently different from women has been
used to justify discrimination, but the idea that men and
women have identical abilities and preferences may also cause
harm.
While I don't want to promote ideas dangerous to others, I am fascinated
by ideas that are dangerous to anyone who expresses them.
These are "unspeakable ideas." By unspeakable ideas
I don't mean those whose expression is forbidden in a certain
group. Instead, I propose that there is a class of ideas whose
expression is inherently dangerous everywhere and always
because of the nature of human social groups. Such unspeakable
ideas are anti-memes. Memes, both true and false, spread
fast because they are interesting and give social credit
to those who spread them. Unspeakable ideas, even true important
ones, don't spread at all, because expressing them is dangerous
to those who speak them.
So why, you may ask, is a sensible scientist even bringing the
idea up? Isn't the idea of unspeakable ideas a dangerous
idea? I expect I will find out. My hope is that a thoughtful
exploration of unspeakable ideas should not hurt people in
general, perhaps won't hurt me much, and might unearth some
useful truths.
Generalities cannot substitute for examples, even if providing examples
is risky. So, please gather your own data. Here is an experiment.
The next time you are having a drink with an enthusiastic
fan of your hometown team, say "Well, I think our team
just isn't very good and didn't deserve to win." Or,
moving to more risky territory, when your business group
is trying to deal with a savvy competitor, say, "It
seems to me that their product is superior because they are
smarter than we are." Finally, and I cannot recommend
this but it offers dramatic data, you could respond to your
spouse's difficulties at work by saying, "If they are
complaining about you not doing enough, it is probably because
you just aren't doing your fair share." Most people
do not need to conduct such social experiments to know what
happens when such unspeakable ideas are spoken.
Some broader truths are equally unspeakable. Consider, for instance,
all the articles written about leadership. Most are infused
with admiration and respect for a leader's greatness. Much
rarer are articles about the tendency for leadership positions
to be attained by power-hungry men who use their influence
to further advance their self-interest. Then there are all
the writings about sex and marriage. Most of them suggest
that there is some solution that allows full satisfaction
for both partners while maintaining secure relationships.
Questioning such notions is dangerous, unless you are a comic,
in which case skepticism can be very, very funny.
As a final example, consider the unspeakable idea of unbridled
self-interest. Someone who says, "I will only do what
benefits me," has committed social suicide. Tendencies
to say such things have been selected against, while those
who advocate goodness, honesty and service to others get
wide recognition. This creates an illusion of a moral society
that then, thanks to the combined forces of natural and social
selection, becomes a reality that makes social life vastly
better.
There are many more examples, but I must stop here. To say more
would either get me in trouble or falsify my argument. Will
I ever publish my "Unspeakable Essays"? It
would be risky, wouldn't it?
Gregory Benford, Physicist, UC Irvine; Author, Deep Time
Think outside the Kyoto box
Few economists expect the Kyoto Accords to attain their goals.
With compliance coming only slowly and with three big holdouts — the
US, China and India — it seems unlikely to make much
difference in overall carbon dioxide increases. Yet all the
political pressure is on lessening our fossil fuel burning,
in the face of fast-rising demand.
This pits the industrial powers against the legitimate economic
aspirations of the developing world — a recipe for conflict.
Those who embrace the reality of global climate change mostly
insist that there is only one way out of the greenhouse effect — burn
less fossil fuel, or else. Never mind the economic consequences.
But the planet itself modulates its atmosphere through several
tricks, and we have little considered using most of them. The
overall global problem is simple: we capture more heat from
the sun than we radiate away. Mostly this is a good thing,
else the mean planetary temperature would hover around freezing.
But recent human alterations of the atmosphere have resulted
in too much of a good thing.
Two methods are getting little attention: sequestering carbon
from the air and reflecting sunlight.
Hide the Carbon
There are several schemes to capture carbon dioxide from the
air: promote tree growth; trap carbon dioxide from power plants
in exhausted gas domes; or let carbon-rich organic waste fall
into the deep oceans. Increasing forestation is a good, though
rather limited, step. Capturing carbon dioxide from power plants
costs about 30% of the plant output, so it's an economic nonstarter.
That leaves the third way. Imagine you are standing in a ripe
Kansas cornfield, staring up into a blue summer sky. A transparent
acre-area square around you extends upwards in an air-filled
tunnel, soaring all the way to space. That long tunnel holds
carbon in the form of invisible gas, carbon dioxide — widely
implicated in global climate change. But how much?
Very little, compared with how much we worry about it.
The corn standing as high as an elephant's eye all around
you holds four hundred times as much carbon as there is
in man-made carbon dioxide — our villain — in
the entire column reaching to the top of the atmosphere.
(We have added a few hundred parts per million to our air
by burning.) Inevitably, we must understand and control
the atmosphere, as part of a grand imperative of directing
the entire global ecology. Yearly, we manage through agriculture
far more carbon than is causing our greenhouse dilemma.
We should take advantage of that. The leftover corn cobs and stalks from
our fields can be gathered up, floated down the Mississippi,
and dropped into the ocean, sequestering their carbon. Below about
a kilometer depth, beneath a layer called the thermocline,
nothing gets mixed back into the air for a thousand years
or more. It's not a forever solution, but it would buy us
and our descendants time to find such answers. And it is
inexpensive; cost matters.
The US has large crop residues. It has also ignored the Kyoto
Accord, saying it would cost too much. It would, if we relied
purely on traditional methods, policing energy use and carbon
dioxide emissions. Clinton-era estimates of such costs were
around $100 billion a year — a politically unacceptable
sum, which led Congress to reject the very notion by a unanimous
vote.
But if the US simply used its farm waste to "hide" carbon
dioxide from our air, complying with Kyoto's standard would
cost about $10 billion a year, with no change whatsoever in
our lifestyles.
The whole planet could do the same. Sequestering crop leftovers
could offset about a third of the carbon we put into our air.
The carbon dioxide we add to our air will end up in the oceans,
anyway, from natural absorption, but not nearly quickly enough
to help us.
Hiding carbon from air is only one example of ways the planet
has maintained its perhaps precarious equilibrium throughout
billions of years. Another is our world's ability to edit sunlight,
by changing cloud cover.
As the oceans warm, water evaporates, forming clouds. These
reflect sunlight, reducing the heat below — but just
how much depends on cloud thickness, water droplet size, particulate
density — a forest of detail.
If our climate starts to vary too much, we could consider deliberately
adjusting cloud cover in selected areas, to offset unwanted
heating. It is not actually hard to make clouds; volcanoes
and fossil fuel burning do it all the time by adding microscopic
particles to the air. Cloud cover is a natural mechanism we
can augment, and another area where the possibility of major change
in environmental thinking beckons.
A 1997 US Department of Energy study for Los Angeles showed
that planting trees and making blacktop and rooftops lighter
colored could significantly cool the city in summer. With minimal
costs that get repaid within five years we can reduce summer
midday temperatures by several degrees. This would cut air
conditioning costs for the residents, simultaneously lowering
energy consumption, and lessening the urban heat island effect.
Incoming rain clouds would not rise as much above the heat
blossom of the city, and so would rain on it less. Instead,
clouds would continue inland to drop rain on the rest of Southern
California, promoting plant growth. These methods are now under
way in Los Angeles, a first experiment.
We can combine this with a cloud-forming strategy. Producing
clouds over the tropical oceans is the most effective way to
cool the planet on a global scale, since the dark oceans absorb
the bulk of the sun's heat. This we should explore now, in
case sudden climate changes force us to act quickly.
Yet some environmentalists find all such steps suspect. They
smack of engineering, rather than self-discipline. True enough — and
that's what makes such thinking dangerous, for some.
Yet if Kyoto fails to gather momentum, as seems probable to
many, what else can we do? Turn ourselves into ineffectual
Mommy-cop states, with endless finger-pointing politics, trying
to equally regulate both the rich in their SUVs and Chinese
peasants who burn coal for warmth? Our present conventional
wisdom might be termed The Puritan Solution — Abstain,
sinners! — and is making slow, small progress. The Kyoto
Accord calls for the industrial nations to reduce their carbon
dioxide emissions to 7% below the 1990 level, and globally
we are farther from this goal every year.
These steps are early measures to help us assume our eventual
21st Century role, as true stewards of the Earth, working alongside
Nature. Recently Billy Graham declared that since the Bible
made us stewards of the Earth, we have a holy duty to avert
climate change. True stewards use the Garden's own methods.
Marco Iacoboni, Director, Transcranial Magnetic Stimulation Lab, UCLA
Media Violence Induces Imitative Violence: The Problem With Mirrors
Media violence induces imitative violence. If true, this idea is
dangerous for at least two main reasons. First, because its
implications are highly relevant to the issue of freedom
of speech. Second, because it suggests that our rational
autonomy is much more limited than we like to think. This
idea is especially dangerous now, because we have discovered
a plausible neural mechanism that can explain why observing
violence induces imitative violence. Moreover, the properties
of this neural mechanism — the human mirror neuron
system — suggest that imitative violence may not always
be a consciously mediated process. The argument for protecting
even harmful speech (intended in a broad sense, including
movies and videogames) has typically been that the effects
of speech are always under the mental intermediation of the
listener/viewer. If there is a plausible neurobiological
mechanism that suggests that such an intermediate step can be
by-passed, this argument is no longer valid.
For more than 50 years behavioral data have suggested that media
violence induces violent behavior in the observers. Meta-analyses
show that the effect size of media violence is much larger
than the effect size of calcium intake on bone mass, or of
asbestos exposure on cancer. Still, the behavioral data have
been criticized. How is that possible? Two main types of
data have been invoked. Controlled laboratory experiments
and correlational studies assessing types of media consumed
and violent behavior. The lab data have been criticized as
lacking ecological validity, whereas
the correlational data have been criticized as
having no explanatory power. Here, as a neuroscientist
who is studying the human mirror neuron system and its relations
to imitation, I want to focus on a recent neuroscience discovery
that may explain why the strong imitative tendencies that
humans have may lead them to imitative violence when exposed
to media violence.
Mirror neurons are cells located in the premotor cortex, the part
of the brain relevant to the planning, selection and execution
of actions. In the ventral sector of the premotor cortex
there are cells that fire in relation to specific goal-related
motor acts, such as grasping, holding, tearing, and bringing
to the mouth. Surprisingly, a subset of these cells — what
we call mirror neurons — also fire when we observe
somebody else performing the same action. The behavior of
these cells seems to suggest that the observer is looking
at her/his own actions reflected by a mirror, while watching
somebody else's actions. My group has also shown in several
studies that human mirror neuron areas are critical to imitation.
There is also evidence that the activation of this neural
system is fairly automatic, thus suggesting that it may by-pass
conscious mediation. Moreover, mirror neurons also code the
intention associated with observed actions, even though there
is not a one-to-one mapping between actions and intentions
(I can grasp a cup because I want to drink or because I want
to put it in the dishwasher). This suggests that this system
can indeed code sequences of action (i.e., what happens after
I grasp the cup), even though only one action in the sequence
has been observed.
Some years ago, when we were still a very small group of neuroscientists
studying mirror neurons and were just starting to investigate
the role of mirror neurons in intention understanding, we
discussed the possibility of super mirror neurons. After
all, if you have such a powerful neural system in your brain,
you also want to have some control or modulatory neural mechanisms.
We now have preliminary evidence suggesting that some prefrontal
areas have super mirrors. I think super mirrors come in at
least two flavors. One is inhibition of overt mirroring,
and the other one — the one that might explain why
we imitate violent behavior, which requires a fairly complex
sequence of motor acts — is mirroring of sequences
of motor actions. Super mirror mechanisms may provide a fairly
detailed explanation of imitative violence after being exposed
to media violence.
Barry C. Smith, Birkbeck, University of London; Coeditor, Knowing
Our Own Minds
What We Know May Not Change Us
Human beings, like everything else, are part of the natural world.
The natural world is all there is. But to say that everything
that exists is just part of the one world of nature is not
the same as saying that there is just one theory of nature
that will describe and explain everything that there is.
Reality may be composed of just one kind of stuff, and of the
properties of that stuff, but we need many different kinds of theories
at different levels of description to account for everything
there is. The theories at these different levels may not reduce
one to another. What matters is that they be compatible with one another.
The astronomy Newton gave us was a triumph over supernaturalism
because it united the mechanics of the sub-lunary world with
an account of the heavenly bodies. In a similar way, biology
allowed us to advance from a time when we saw life in terms
of an élan vital. Today, the biggest challenge is to explain
our powers of thinking and imagination, our abilities to
represent and report our thoughts: the very means by which
we engage in scientific theorising. The final triumph of
the natural sciences over supernaturalism will be an account
of the nature of conscious experience. The cognitive and brain
sciences have done much to make that project clearer but
we are still a long way from a fully satisfying theory.
Even if we succeed in producing a theory of human thought
and reason, of perception, of conscious mental life, compatible
with other theories of the natural and biological world,
will we relinquish our cherished commonsense conceptions
of ourselves as human beings, as selves who know ourselves
best, who deliberate and decide freely on what to do and
how to live? There is much evidence that we won't. As humans
we conceive ourselves as centres of experience, self-knowing
and free willing agents. We see ourselves and others as acting
on our beliefs, desires, hopes and fears, and as having
responsibility for much that we do and all that we say. And
even as results in neuroscience begin to show how much more
automated, routinised and pre-conscious much of our behaviour
is, we remain unable to let go of the self-beliefs that
govern our day to day rationalisings and dealings with others.
We are perhaps incapable of treating others as mere machines,
even if that turns out to be what we are. The self-conceptions
we have are firmly in place and sustained in spite of our
best findings, and it may be a fact about human beings that
it will always be so. We are curious about and interested in neuroscientists'
findings and we wonder at them and about their applications
to ourselves, but as the great naturalistic philosopher David
Hume knew, nature is too strong in us, and it will not let
us give up our cherished and familiar ways of thinking for
long. Hume knew that however curious an idea and vision of
ourselves we entertained in our study, or in the lab, when
we returned to the world to dine, make merry with our friends
our most natural beliefs and habits returned and banished
our stranger thoughts and doubts. It is likely, at this end
of the year, that whatever we have learned and whatever we
know about the errors of our thinking and about the fictions
we maintain, they will still remain the most dominant guiding
force in our everyday lives. We may not be comforted by this,
but as creatures with minds who know they have minds — perhaps
the only minded creatures in nature in this position — we
are at least able to understand our own predicament.
Physicist, Princeton University;
Nobel Laureate in Physics 1977; Author, Economy
as a Complex Evolving System
might not exist
try one in cosmology. The universe contains
at least 3 and perhaps 4 very different
kinds of matter, whose origins probably
are physically completely different. There
is the Cosmic Background Radiation (CBR)
which is photons from the later parts of
the Big Bang but is actually the residue
of all the kinds of radiation that were
in the Bang, like flavored hadrons and
mesons which have annihilated and become
photons. You can count them and they tell
you pretty well how many quanta of radiation
there were in the beginning; and observation
tells us that they were pretty uniformly
distributed, in fact very, and still are.
Next is radiant matter — protons, mostly,
and electrons. There are only a billionth
as many of them as quanta of CBR, but early in
the Big Bang there were pretty much the same
number, so all but one out
of a billion combined with an antiparticle
and annihilated. Nonetheless they are much
heavier than the quanta of CBR, so they have,
all told, much more mass, and have some cosmological
effect on slowing down the Hubble expansion.
There was an imbalance — but what caused
that? That imbalance was generated by some
totally independent process, possibly during
the very turbulent inflationary era.
In fact out to a tenth of the Hubble radius, which is as far
as we can see, the protons are very non-uniformly
distributed, in a fractal hierarchical clustering with things
called "Great Walls" and giant near-voids. The
conventional idea is that this is all caused by gravitational
instability acting on tiny primeval fluctuations, and it
barely could be, but in order to justify that you have to
have another kind of matter.
So you need — and actually see, but indirectly — Dark
Matter, which is 30 times as massive, overall, as protons but
you can't see anything but its gravitational effects. No one
has much of a clue as to what it is, but it seems to have to be assumed
it is hadronic, otherwise why would it be anything as close
as a factor 30 to the protons? But really, there is no reason
at all to suppose its origin was related to the other two,
you know only that if it's massive quanta of any kind it is
nowhere near as many as the CBR, and so most of them annihilated
in the early stages. Again, we have no excuse for assuming
that the imbalance in the Dark Matter was uniformly distributed
primevally, even if the protons were, because we don't know
what it is.
And of course there is Dark Energy, that is, if there is. On that
we can't even guess if it is quanta at all, but again we
note that if it is it probably doesn't add up in numbers
to the CBR. The very strange coincidence is that when we
add this in there isn't any total gravitation at all, and
the universe as a whole is flat, as it would be, incidentally,
if all of the heavy parts were distributed everywhere according
to some random, fractal distribution like that of the matter
we can see — because on the largest scale, a fractal's
density extrapolates to zero. That suggestion, implying that
Dark Energy might not exist, is considered very dangerously
radical.
Here is another, which compared to many other people's propositions
isn't so radical. Isn't God very improbable? You can't in
any logical system I can understand disprove the
existence of God, or prove it for that matter. But I think
that in the probability calculus I use He is very improbable.
There are a number of ways of making a formal probability theory
which incorporate Ockham's razor, the principle that one
must not multiply hypotheses unnecessarily. Two are called
Bayesian probability theory, and Maximum Entropy. If you
have been taking data on something, and the data are reasonably
close to a straight line, these methods give us a definable
procedure by which you can estimate the probability that
the straight line is correct, not the polynomial which has
as many parameters as there are points, or some intermediate
complex curve. Ockham's razor is expressed mathematically
as the fact that there is a factor in the probability derived
for a given hypothesis that decreases exponentially in the
number N of parameters that describe your hypothesis — it
is the inverse of the volume of parameter space. People who
are trying to prove the existence of ESP abominate Bayesianism
and this factor because it strongly favors the "Null
hypothesis" and beats them every time.
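The straight-line-versus-polynomial comparison above can be sketched numerically. As a stand-in for the full Bayesian calculation, the sketch below uses the Bayesian Information Criterion (BIC), whose per-parameter penalty plays the role of the exponential Ockham factor; the data, noise level, and polynomial degrees are invented for illustration, not taken from the text.

```python
# A hedged sketch of Ockham's razor in model selection: BIC penalizes
# each extra parameter, standing in for the inverse-volume Ockham factor.
# All data here are invented.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 40)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, x.size)  # data near a straight line

def bic(degree):
    """n*log(RSS/n) + k*log(n); lower is better."""
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
    k = degree + 1  # number of fitted parameters
    n = x.size
    return n * np.log(rss / n) + k * np.log(n)

# The degree-9 polynomial fits the points more closely, but the penalty
# term favours the two-parameter straight line: the simpler hypothesis wins.
print(bic(1) < bic(9))  # True
```

The same mechanism is what "beats" an ESP hypothesis with many free parameters: the closer fit never outweighs the penalty once the data are adequately described by the simpler model.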
Now, imagine how big the parameter space is for God. He could
have a long gray beard or not, be benevolent or malicious
in a lot of different ways and over a wide range of values,
He can have a variety of views on abortion, contraception,
like or abominate human images, like or abominate music,
and the range of dietary prejudices He has been credited
with is as long as your arm. There is the heaven-hell dimension,
the one vs three question, and I haven't even mentioned polytheism.
I think there are certainly as many parameters as sects,
or more. If there is even a sliver of prior probability for
the null hypothesis, the posterior probability of any particular
God is pretty small.
Archaeologist, University of Bradford; Author, The
human brain is a cultural artefact.
humans represent an evolutionary puzzle. Walking on two legs
frees the hands to do new things, like chipping stones to make
modified tools — the first artefacts, dating to 2.7
million years ago — but it also narrows the pelvis
and dramatically limits the possible size of the fetal cranium.
Thus the brain expansion that began after 2 million years
ago should not have happened.
Imagine that, alongside chipped stone tools, one genus of
hominin appropriates the looped entrails of a dead animal,
or learns to tie a simple knot, and invents a sling (chimpanzees
are known to carry water in leaves and gorillas to measure
water depth with sticks, so the practical and abstract thinking
required here can be safely assumed for our human ancestors
by this point).
In its sling, the hominin child can now hip-ride with little
impairment to its parent's hands-free movement. This has
the unexpected and certainly unplanned consequence that it
is no longer important for it to be able to hang on as chimps
do. Although, due to the bio-mechanical constraints of a
bipedal pelvis, the hominin child cannot be born with
a big head (thus large initial brain capacity) it can now
be born underdeveloped. That is to say, the sling frees fetuses
to be born in an ever more ontogenically retarded state.
This trend, which humans do indeed display, is called neoteny.
The retention of earlier features for longer means that the
total developmental sequence is extended in time far beyond
the nine months of natural gestation. Hominin children, born
underdeveloped, could grow their crania outside the womb
in the pseudo-marsupial pouch of an infant-carrying sling.
From this point onwards it is not hard to see how a distinctively
human culture emerges through the extra-uterine formation
of higher cognitive capacities — the phylogenetic and
ontogenic icing on the cake of primate brain function. The
child, carried by the parent into social situations, watches
and hears vocalization. Parental selection for smart features such
as an ability to babble early may well, as others have suggested,
have driven the brain size increases until 250,000 years
ago — a point when the final bio-mechanical limits
of big-headed mammals with narrow pelvises were reached by
two species: Neanderthals and us.
That is the phylogeny side of the case. In terms of ontogeny the
obvious applies — it recapitulates phylogeny. The underdeveloped
brains of hominin infants were culture-prone, and in this
sense, I do not dissent from Dan Sperber's dangerous idea
that 'culture is natural'. But human culture, unlike
the basic culture of learned routines and tool-using observed
in various mammals, is a system of signs — essentially
the association of words with things and the ascription and
recognition of value in relation to this.
As Ernest Gellner once pointed out, taken cross-culturally,
as a species, humans exhibit by far the greatest range of
behavioural variation of any animal. However, within any
on-going community of people, with language, ideology and
a culturally-inherited and developed technology, conformity
has usually been a paramount value, with death often the
price for dissent. My belief is that, due to the malleability
of the neotenic brain, cultural systems are physically built
into the developing tissue of the mind.
Instead of seeing the brain as the genetic hardware into which the
cultural software is loaded, and then arguing about the relative
determining influences of each in areas such as, say, sexual
orientation or mathematical ability (the old nature-nurture
debate), we can conclude that culture (as Richard Dawkins
long ago noted in respect of contraception) acts to subvert
genes, but is also enabled by them. Ontogenic retardation
allowed both environment and the developing milieu of cultural
routines to act on brain hardware construction alongside
the working through of the genetic blueprint. Just because
the modern human brain is coded for by genes does not mean
that the critical self-consciousness for which it (within
its own community of brains) is famous is non-cultural any
more than a barbed-and-tanged arrowhead is non-cultural just
because it is made of flint.
human brain has a capacity to go not just beyond nature,
but beyond culture too, by dissenting from old norms and
establishing others. The emergence of the high arts and science
is part of this process of the human brain, with its instrumental
extra-somatic adaptations and memory stores (books, laboratories,
computers), and is underpinned by the most critical thing
that has been brought into being in the encultured human
brain: free will.
Not all humans, or all human communities, seem capable of
equal levels of free will. In extreme cases they appear to
display none at all. Reasons include genetic incapacity,
but it is also possible for a lack of mental freedom to be
culturally engendered, and sometimes even encouraged. Archaeologically,
the evidence is there from the first farming societies in
Europe: the Neolithic massacre at Talheim, where an entire
community was genocidally wiped out except for the youngest
children, has been taken as evidence (supported by anthropological
analogies) of the re-enculturation of still flexible minds
within the community of the victors, to serve and live out
their orphaned lives as slaves. In the future, one might
surmise that the dark side of the development of virtual
reality machines (described by Clifford Pickover) will be
the infinitely more subtle cultural programming of impressionable
individuals as sophisticated conformists.
The interplay of genes and culture has produced in us potential
for a formidable range of abilities and intelligences. It
is critical that in the future we both fulfil and extend
this potential in the realm of judgment, choice and understanding
in both sciences and arts. But the idea of the brain as a
cultural artefact is dangerous. Those with an interest in
social engineering — tyrants and authoritarian regimes — will
almost certainly attempt to develop it to their advantage.
Free will is threatening to the powerful who, by understanding
its formation, will act to undermine it in sophisticated
ways. The usefulness of cultural artefacts that have the
degree of complexity of human brains makes our own species
the most obvious candidate for the enhanced super-robot of
the future, not just smart factory operatives and docile
consumers, but cunning weapons-delivery systems (suicide
bombers) and conformity-enforcers. At worst, the very special
qualities of human life that have been enabled by our remarkable
natural history, the confluence of genes and culture, could
end up as a realm of freedom for an elite few.
News and Features Editor at Nature; Author, Mapping
planet is not in peril
The truth of this idea is pretty obvious. Environmental crises
are a fundamental part of the history of the earth: there have
been sudden and dramatic temperature excursions, severe glaciations,
vast asteroid and comet impacts. Yet the earth is still here. There
have been mass extinctions associated with some of these events,
while other mass extinctions may well have been triggered by
subtler internal changes to the biosphere. But none of them
seem to have done long-term harm. The first ten million years
of the Triassic may have been a little dull by comparison to
the late Palaeozoic, what with a large number of the more interesting
species being killed in the great mass extinction at the end
of the Permian, but there is no evidence that any fundamentally
important earth processes did not eventually recover. I strongly
suspect that not a single basic biogeochemical innovation — the
sorts of thing that underlie photosynthesis and the carbon
cycle, the nitrogen cycle, the sulphur cycle and so on — has
been lost in the past 4 billion years.
There is an argument to be made that mass extinctions are in
fact a good thing, in that they wipe the slate clean a bit
and thus allow exciting evolutionary innovations. This may
be going a bit far. While the Schumpeter-for-the-earth-system
position seems plausible, it also seems a little crudely progressivist.
While to a mammal the Tertiary seems fairly obviously superior
to the Cretaceous, it's not completely clear to me that there's
an objective basis for that belief. In terms of primary productivity,
for example, the Cretaceous may well have had an edge. But
despite all this, it's hard to imagine that the world would
be a substantially better place if it had not undergone the
mass extinctions of the Phanerozoic.
Against this background, the current carbon/climate crisis seems pretty
small beer. The change in mean global temperatures seems quite
unlikely to be much greater than the regular cyclical change
between glacial and interglacial climates. Land use change
is immense, but it's not clear how long it will last, and there
are rich seedbanks in the soil that will allow restoration.
If fossil fuel use goes unchecked, carbon dioxide levels may
rise as high as they were in the Eocene, and do so at such
a rate that they cause a transient spike in ocean acidity.
But they will not stay at those high levels, and the Eocene
was not such a terrible place.
The earth doesn't need ice caps, or permafrost, or any particular
sea level. Such things come and go and rise and fall as a matter
of course. The planet's living systems adapt and flourish,
sometimes in a way that provides negative feedback, occasionally
with a positive feedback that amplifies the change. A planet
that made it through the massive biogeochemical unpleasantness
of the late Permian is in little danger from a doubling, or
even a quintupling, of the very low carbon dioxide level that
preceded the industrial revolution, or from the loss of a lot
of forests and reefs, or from the demise of half its species,
or from the thinning of its ozone layer at high latitudes.
None of this is to say that we as people should not worry about
global change; we should worry a lot. This is because climate
change may not hurt the planet, but it hurts people. In particular,
it will hurt people who are too poor to adapt. Significant
climate change will change rainfall patterns, and probably
patterns of extreme events as well, in ways that could easily
threaten the food security of hundreds of millions of people
supporting themselves through subsistence agriculture or pastoralism.
It will have a massive effect on the lives of the relatively
small number of people in places where sea ice is an important
part of the environment (and it seems unlikely that anything
we do now can change that). In other, more densely populated
places local environmental and biotic change may have similarly
large effects. Added to this, the loss of species, both known and unknown, will
be experienced by some as a form of damage that goes beyond
any deterioration in ecosystem services. Many people will feel
themselves and their world diminished by such extinctions even
when they have no practical consequences, despite the fact
that they cannot ascribe an objective value to their loss.
One does not have to share the values of these people to recognise
their sense of loss. All of these effects provide excellent reasons to act. And yet
many people in the various green movements feel compelled to
add on the notion that the planet itself is in crisis, or doomed;
that all life on earth is threatened. And in a world where
that rhetoric is common, the idea that this eschatological
approach to the environment is baseless is a dangerous one.
Since the 1970s the environmental movement has based much of
its appeal on personifying the planet and making it seem like
a single entity, then seeking to place it in some ways "in
our care". It is a very powerful notion, and one which
benefits from the hugely influential iconographic backing of
the first pictures of the earth from space; it has inspired
much of the good that the environmental movement has done.
The idea that the planet is not in peril could thus come to
undermine the movement's power. This is one of the reasons
people react against the idea so strongly. One respected and
respectable climate scientist reacted to Andy Revkin's recent
use of the phrase "In fact, the planet has nothing to
worry about from global warming" in the New York Times with
near apoplectic fury.
If the belief that the planet is in peril were merely wrong, there
might be an excuse for ignoring it, though basing one's actions
on lies is an unattractive proposition. But the planet-in-peril
idea is an easy target for those who, for various reasons,
argue against any action on the carbon/climate crisis at all.
In this, bad science is a hostage to fortune. What's worse,
the idea distorts environmental reasoning, too. For example,
laying stress on the non-issue of the health of the planet,
rather than the real issues of effects that harm people, leads
to a general preference for averting change rather than adapting
to it, even though providing the wherewithal for adaptation
will often be the most rational response.
The planet-in-peril idea persists in part simply through widespread
ignorance of earth history. But some environmentalists, and
perhaps some environmental reporters, will argue that the inflated
rhetoric that trades on this error is necessary in order to
keep the show on the road. The idea that people can be more
easily persuaded to save the planet, which is not in danger,
than their fellow human beings, who are, is an unpleasant and
cynical one; another dangerous idea, not least because it may
indeed hold some truth. But if putting the planet at the centre
of the debate is a way of involving everyone, of making us
feel that we're all in this together, then one can't help noticing
that the ploy isn't working out all that well. In the rich
nations, many people may indeed believe that the planet is
in danger — but they don't believe that they are in danger,
and perhaps as a result they're not clamouring for change loud
enough, or in the right way, to bring it about.
There is also a problem of learned helplessness. I suspect people
are flattered, in a rather perverse way, by the idea that their
lifestyle threatens the whole planet, rather than just the
livelihoods of millions of people they have never met. But
the same sense of scale that flatters may also enfeeble. They
may come to think that the problems are too great for them
to do anything about.
Folding carbon/climate issues into the great moral imperative of improving
the lives of the poor, rather than relegating them to the dodgy
rhetorical level of a threat to the planet as a whole, seems
more likely to be a sustainable long-term strategy. The most
important thing about environmental change is that it hurts
people; the basis of our response should be human solidarity.
The planet will take care of itself.