"WHAT
IS YOUR DANGEROUS IDEA?" |
|
KARL SABBAGH
Writer
and Television Producer; Author, The
Riemann Hypothesis

The human brain and its products are incapable of understanding the truths about the universe
Our brains may never be well enough equipped to understand the universe, and we are fooling ourselves if we think they will.
Why should we expect to be able eventually to understand how the
universe originated, evolved, and operates? While human brains
are complex and capable of many amazing things, there is not necessarily
any match between the complexity of the universe and the complexity
of our brains, any more than a dog's brain is capable of understanding every detail of the world of cats and bones, or the dynamics of a thrown stick's trajectory. Dogs get by and so do we, but do we have a right to expect that the harder we puzzle over these things the nearer we will get to the truth? Recently I stood in front of a three-metre-high model of the Ptolemaic universe in the Museum of the History of Science in Florence and I remembered
how well that worked as a representation of the motions of the
planets until Copernicus and Kepler came along.
Nowadays,
no element of the theory of giant interlocking cogwheels at work
is of any use in understanding the motions of the stars and planets
(and indeed Ptolemy himself did not argue that the universe really
was run by giant cogwheels). Occam's Razor is used to compare two theories and allow us to choose which is more likely to be 'true', but hasn't it become a comfort blanket whenever we are faced with aspects of the universe that seem unutterably complex — string theory, for example? But is string theory just the Ptolemaic clockwork de nos jours? Can it be succeeded by some simplification, or
might the truth be even more complex and far beyond the neural
networks of our brain to understand?
The
history of science is littered with examples of two types of
knowledge advancement. There is imperfect understanding that 'sort of' works, and is then modified and replaced by something that works better, without destroying the validity of the earlier theory: Newton's theory of gravitation replaced by Einstein's, for example. Then there is imperfect understanding that is replaced by some new idea which owes nothing to older ones: phlogiston theory, the ether, and so on were replaced by ideas which save the phenomena, lead to predictions, and convince us that they are nearer the
truth. Which of these categories really covers today's science?
Could we be fooling ourselves by playing around with modern phlogiston?
And
even if we are on the right lines in some areas, how much of
what there is to be understood in the universe do we really understand?
Fifty percent? Five percent? The dangerous idea is that perhaps
we understand half a percent and all the brain and computer power
we can muster may take us up to one or two percent in the lifetime
of the human race.
Paradoxically,
we may find that the only justification for pursuing scientific
knowledge is for the practical applications it leads to — a
view that runs contrary to the traditional support of knowledge
for knowledge's sake. And why is this paradoxical? Because the
most important advances in technology have come out of research
that was not seeking to develop those advances but to understand
the universe.
So if my dangerous idea is right — that the human brain and
its products are actually incapable of understanding the truths
about the universe — it will not — and should not — lead
to any diminution at all in our attempts to do so. Which means,
I suppose, that it's not really dangerous at all.
RUPERT SHELDRAKE
Biologist, London; Author of The Presence of the Past

A sense of direction involving new scientific principles
We
don't understand animal navigation.
No one knows how pigeons home, or how swallows migrate, or how green
turtles find Ascension Island from thousands of miles away to
lay their eggs. These kinds of navigation involve more than following
familiar landmarks, or orientating in a particular compass direction;
they involve an ability to move towards a goal.
Why
is this idea dangerous? Don't we just need a bit more time to
explain navigation in terms of standard physics, genes, nerve
impulses and brain chemistry? Perhaps.
But
there is a dangerous possibility that animal navigation may not
be explicable in terms of present-day physics. Over and above
the known senses, some species of animals may have a sense of
direction that depends on their being attracted towards their
goals through direct field-like connections. These spatial attractors
are places with which the animals themselves are already familiar,
or with which their ancestors were familiar.
What
are the facts? We know more about pigeons than about any other species.
Everyone agrees that within familiar territory, especially within
a few miles of their home, pigeons can use landmarks; for example,
they can follow roads. But using familiar landmarks near home
cannot explain how racing pigeons return across unfamiliar terrain
from six hundred miles away, even flying over the sea, as English
pigeons do when they are raced from Spain.
Charles
Darwin, himself a pigeon fancier, was one of the first to suggest
a scientific hypothesis for pigeon homing. He proposed that they
might use a kind of dead reckoning, registering all the twists
and turns of the outward journey. This idea was tested in the
twentieth century by taking pigeons away from their loft in closed
vans by devious routes. They still homed normally. So did birds
transported on rotating turntables, and so did birds that had
been completely anaesthetized during the outward journey.
What
about celestial navigation? One problem for hypothetical solar
or stellar navigation systems is that many animals still navigate
in cloudy weather. Another problem is that celestial navigation
depends on a precise time sense. To test the sun navigation theory,
homing pigeons were clock-shifted by six or twelve hours and
taken many miles from their lofts before being released. On sunny
days, they set off in the wrong direction, as if a clock-dependent
sun compass had been shifted. But in spite of their initial confusion,
the pigeons soon corrected their courses and flew homewards normally.
Two
main hypotheses remain: smell and magnetism. Smelling the home
position from hundreds of miles away is generally agreed to be
implausible. Even the most ardent defenders of the smell hypothesis
(the Italian school of Floriano Papi and his colleagues) concede
that smell navigation is unlikely to work at distances over 30
miles.
That leaves a magnetic sense. A range of animal species can detect
magnetic fields, including termites, bees and migrating birds.
But even if pigeons have a compass sense, this cannot by itself
explain homing. Imagine that you are taken to an unfamiliar place
and given a compass. You will know from the compass where north
is, but not where home is.
The obvious way of dealing with this problem is to postulate complex
interactions between known sensory modalities, with multiple back-up
systems. The complex interaction theory is safe, sounds sophisticated,
and is vague enough to be irrefutable. The idea of a sense of direction
involving new scientific principles is dangerous, but it may be
inevitable.
TOR NØRRETRANDERS
Science Writer; Consultant; Lecturer, Copenhagen; Author, The User Illusion

Social Relativity
Relativity
is my dangerous idea. Well, neither the special nor the general
theory of relativity, but what could be called social relativity: the idea that the only thing that matters to human well-being is how one stands relative to others. That is, only the relative wealth of a person is important; the absolute level does not really matter, once everyone is above the level of having their immediate survival needs fulfilled.
There
is now strong and consistent evidence (from fields such as
microeconomics, experimental economics, psychology, sociology
and primatology) that it doesn't really matter how much you
earn, as long as you earn more than your wife's sister's husband.
Pioneers in these discussions are the late British social thinker
Fred Hirsch and the American economist Robert Frank.
Why
is this idea dangerous? It seems to imply that equality will
never become possible in human societies: The driving force
is always to get ahead of the rest. Nobody will ever settle
down and share.
So
it would seem that we are forever stuck with poverty, disease
and unjust hierarchies. This idea could make the rich and the
smart lean back and forget about the rest of the pack.
But
it shouldn't.
Inequality
may subjectively seem nice to the rich, but objectively it
is not in their interest.
A
huge body of epidemiological evidence points to the fact that
inequality is in fact the prime cause of human disease. Rich people in poor countries are healthier than poor people
in rich countries, even though the latter group has more resources
in absolute terms. Societies with strong gradients of wealth
show higher death rates and more disease, also amongst the
people at the top. Pioneers in these studies are the British
epidemiologists Michael Marmot and Richard Wilkinson.
Poverty means the spread of disease, the degradation of ecosystems, and social violence and crime — which are also bad for the rich.
Inequality means stress to everyone.
Social
relativity then boils down to an illusion: It seems nice to
me to be better off than the rest, but in terms of vitals — survival,
good health — it is not.
Believing
in social relativity can be dangerous to your health.
JOHN HORGAN
Science Writer; Author, Rational Mysticism

We Have No Souls
The Depressing, Dangerous Hypothesis: We Have No Souls.
This
year's Edge question makes me wonder: Which
ideas pose a greater potential danger? False ones
or true ones? Illusions or the lack thereof? As a
believer in and lover of science, I certainly hope
that the truth will set us free, and save us, but
sometimes I'm not so sure.
The
dangerous, probably true idea I'd like to dwell on in this
Holiday season is that we humans have no souls. The soul
is that core of us that supposedly transcends and even persists
beyond our physicality, lending us a fundamental autonomy,
privacy and dignity. In his 1994 book The Astonishing Hypothesis: The
Scientific Search for the Soul, the late, great Francis
Crick argued that the soul is an illusion perpetuated, like
Tinkerbell, only by our belief in it. Crick opened his book
with this manifesto: "'You,' your joys and your sorrows,
your memories and your ambitions, your sense of personal
identity and free will, are in fact no more than the behavior
of a vast assembly of nerve cells and their associated molecules." Note
the quotation marks around "You." The subtitle
of Crick's book was almost comically ironic, since he was
clearly trying not to find the soul but to crush it out of
existence.
I
once told Crick that "The Depressing Hypothesis" would
have been a more accurate title for his book, since he was,
after all, just reiterating the basic, materialist assumption
of modern neurobiology and, more broadly, all of science.
Until recently, it was easy to dismiss this assumption as
moot, because brain researchers had made so little progress
in tracing cognition to specific neural processes. Even self-proclaimed
materialists
— who accept, intellectually, that we are just meat machines
— could harbor a secret, sentimental belief in a soul
of the gaps. But recently the gaps have been closing, as neuroscientists — egged
on by Crick in the last two decades of his life — have begun
unraveling the so-called neural code, the software that transforms
electrochemical pulses in the brain into perceptions, memories,
decisions, emotions, and other constituents of consciousness.
I've
argued elsewhere that the neural code may turn out to be
so complex that it will never be fully deciphered. But 60
years ago, some biologists feared the genetic code was too
complex to crack. Then in 1953 Crick and Watson unraveled
the structure of DNA, and researchers quickly established
that the double helix mediates an astonishingly simple genetic
code governing the heredity of all organisms. Science's success
in deciphering the genetic code, which has culminated in
the Human Genome Project, has been widely acclaimed — and
with good reason, because knowledge of our genetic makeup
could allow us to reshape our innate nature. A solution to
the neural code could give us much greater, more direct control
over ourselves than mere genetic manipulation.
Will
we be liberated or enslaved by this knowledge? Officials
in the Pentagon, the major funder of neural-code research,
have openly broached the prospect of cyborg warriors who
can be remotely controlled via brain implants, like the assassin
in the recent remake of "The Manchurian Candidate." On
the other hand, a cult-like group of self-described "wireheads" looks
forward to the day when implants allow us to create our own
realities and achieve ecstasy on demand.
Either
way, when our minds can be programmed like personal computers,
then, perhaps, we will finally abandon the belief that we
have immortal, inviolable souls, unless, of course, we program
ourselves to believe.
ERIC R. KANDEL
Biochemist and University Professor, Columbia University; Recipient, The Nobel Prize, 2000; Author, Cellular Basis of Behavior

Free will is exercised unconsciously, without awareness
It
is clear that consciousness is central to understanding human
mental processes, and therefore is the holy grail of modern
neuroscience. What is less clear is that many of our mental processes are unconscious and that these unconscious processes are as important as conscious mental processes for understanding the mind. Indeed, most cognitive processes never reach consciousness.
As Sigmund Freud emphasized at the beginning of the 20th century, most of our perceptual and cognitive processes are unconscious, except those that are in the immediate focus of our attention. Based on these insights, Freud argued that unconscious mental processes guide much of human behavior.
Freud's
idea was a natural extension of the notion of unconscious
inference proposed in the 1860s by Hermann Helmholtz,
the German physicist turned neural scientist. Helmholtz was
the first to measure the conduction of electrical signals in
nerves. He had expected it to be as fast as the speed of light, as fast as the conduction of electricity in copper cables, and found to his surprise that it was much slower, only about 90 meters per second. He then examined the reaction time, the time it takes a subject to respond to a consciously perceived stimulus, and found that it was much, much slower than even the combined conduction times required for sensory and motor activities.
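Helmholtz's inference can be made concrete with a rough, illustrative calculation (the specific figures here are assumptions for the sake of the example, not Helmholtz's own measurements):

    nerve path from hand to brain and back: roughly 2 meters
    conduction time at about 90 meters per second: 2 / 90, or roughly 20 milliseconds
    typical reaction time to a simple perceived stimulus: roughly 200 milliseconds

On these numbers, most of the elapsed time is spent not on conduction along the nerves but on processing within the brain, processing that, as Helmholtz argued, occurs largely outside awareness.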
This
caused Helmholtz to argue that a great deal of brain processing
occurred unconsciously prior to conscious perception of an
object. Helmholtz went on to argue that much of what goes on
in the brain is not represented in consciousness and that the
perception of objects depends upon "unconscious inferences" made
by the brain, based on thinking and reasoning without awareness.
This view was not accepted by many brain scientists who believed
that consciousness is necessary for making inferences. However,
in the 1970s a number of experiments began to accumulate in
favor of the idea that most cognitive processes that occur
in the brain never enter consciousness.
Perhaps
the most influential of these experiments were those carried
out by Benjamin Libet in 1986. Libet used as his starting point
a discovery made by the German neurologist Hans Kornhuber.
Kornhuber asked volunteers to move their right index finger.
He then measured this voluntary movement with a strain gauge
while at the same time recording the electrical activity of
the brain by means of an electrode on the skull. After hundreds
of trials, Kornhuber found that, invariably, each movement
was preceded by a little blip in the electrical record from
the brain, a spark of free will! He called this potential in
the brain the "readiness potential" and found that
it occurred one second before the voluntary movement.
Libet
followed up on Kornhuber's finding with an experiment in which
he asked volunteers to lift a finger whenever they felt the
urge to do so. He placed an electrode on a volunteer's skull
and confirmed a readiness potential about one second before
the person lifted his or her finger. He then compared the time
it took for the person to will the movement with the time of
the readiness potential.
Amazingly,
Libet found that the readiness potential appeared not after,
but 200 milliseconds before a person felt the urge
to move his or her finger! Thus by merely observing the electrical
activity of the brain, Libet could predict what a person would
do before the person was actually aware of having decided to
do it.
These
experiments led to the radical insight that by observing another
person's brain activity, one can predict what someone is going
to do before he is aware that he has made the decision to do
it. This finding has caused philosophers of mind to ask: If
the choice is determined in the brain unconsciously before
we decide to act, where is free will?
Are
these choices predetermined? Is our experience of freely willing
our actions only an illusion, a rationalization after the fact
for what has happened? Freud, Helmholtz and Libet would disagree
and argue that the choice is freely made but that it happens
without our awareness. According to their view, the unconscious
inference of Helmholtz also applies to decision-making.
They
would argue that the choice is made freely, but not consciously.
Libet for example proposes that the process of initiating a
voluntary action occurs in an unconscious part of the brain,
but that just before the action is initiated, consciousness
is recruited to approve or veto the action. In the 200 milliseconds
before a finger is lifted, consciousness determines whether
it moves or not.
Whatever
the reasons for the delay between decision and awareness, Libet's
findings now raise the moral question: Is one to be held responsible
for decisions that are made without conscious awareness?
DANIEL GOLEMAN
Psychologist; Author, Emotional Intelligence

Cyber-disinhibition
The
Internet inadvertently undermines the quality of human interaction,
allowing destructive emotional impulses freer rein under specific
circumstances. The reason is a neural fluke that results in
cyber-disinhibition of brain systems that keep our more unruly
urges in check. The tech problem: a major disconnect between
the ways our brains are wired to connect, and the interface
offered in online interactions.
Communication
via the Internet can mislead the brain's social systems. The
key mechanisms are in the prefrontal cortex; these circuits
instantaneously monitor ourselves and the other person during
a live interaction, and automatically guide our responses so
they are appropriate and smooth. A key mechanism for this involves
circuits that ordinarily inhibit impulses for actions that
would be rude or simply inappropriate — or outright dangerous.
In order for this regulatory mechanism to operate well, we depend
on real-time, ongoing feedback from the other person. The Internet
has no means to allow such real-time feedback (other than rarely
used two-way audio/video streams). That puts our inhibitory circuitry
at a loss — there is no signal to monitor from the other
person. This results in disinhibition: impulse unleashed.
Such
disinhibition seems state-specific, and rarely occurs while people are in positive or neutral emotional states. That's
why the Internet works admirably for the vast majority of communication.
Rather, this disinhibition becomes far more likely when people
feel strong, negative emotions. What fails to be inhibited
are the impulses those emotions generate.
This
phenomenon has been recognized since the earliest days of the
Internet (then the Arpanet, used by a small circle of scientists)
as "flaming," the tendency to send abrasive, angry
or otherwise emotionally "off" cyber-messages. The
hallmark of a flame is that the same person would never say
the words in the email to the recipient were they face-to-face.
His inhibitory circuits would not allow it — and so the
interaction would go more smoothly. He might still communicate
the same core information face-to-face, but in a more skillful
manner. Offline and in life, people who flame repeatedly tend
to become friendless, or get fired (unless they already run
the company).
The
greatest danger from cyber-disinhibition may be to young people.
The prefrontal inhibitory circuitry is among the last parts
of the brain to become fully mature, doing so sometime in the
twenties. During adolescence there is a developmental lag,
with teenagers having fragile inhibitory capacities, but fully
ripe emotional impulsivity.
Strengthening
these inhibitory circuits can be seen as the singular task
in neural development of the adolescent years.
One
way this teenage neural gap manifests online is "cyber-bullying," which
has emerged among girls in their early teens. Cliques of girls
post or send cruel, harassing messages to a target girl, who
typically is both reduced to tears and socially humiliated.
The posts and messages are anonymous, though they become widely
known among the target's peers. The anonymity and social distance
of the Internet allow an escalation of such petty cruelty to
levels that are rarely found in person: face-to-face, seeing someone cry typically halts bullying among girls — but that inhibitory signal cannot come via the Internet.
A
more ominous manifestation of cyber-disinhibition can be seen
in the susceptibility of teenagers to being induced to perform sexual acts in front of webcams for an anonymous adult audience who pay to watch and direct. Apparently hundreds of teenagers have been lured into this corner of child pornography, with an equally large audience of pedophiles. The Internet gives strangers access to children in their own homes, and the children are tempted to do things online they would never consider in person.
Cyber-bullying
was reported last week in my local paper. The Webcam teenage
sex circuit was a front-page story in The New York Times two
days later.
As
with any new technology, the Internet is an experiment in progress.
It's time we considered what other such downsides of cyber-disinhibition
may be emerging — and looked for a technological fix,
if possible. The dangerous thought: the Internet may harbor
social perils our inhibitory circuitry was not designed to
handle in evolution.
BRIAN GREENE
Physicist & Mathematician, Columbia University; Author, The Fabric of the Cosmos; Presenter, three-part Nova program, The Elegant Universe

The Multiverse
The
notion that there are universes beyond our own — the
idea that we are but one member of a vast collection of universes
called the multiverse — is highly speculative, but
both exciting and humbling. It's also an idea that suggests
a radically new, but inherently risky approach to certain
scientific problems.
An
essential working assumption in the sciences is that with adequate
ingenuity, technical facility, and hard work, we can explain
what we observe. The impressive progress made over the past
few hundred years is testament to the apparent validity of
this assumption. But if we are part of a multiverse, then our
universe may have properties that are beyond traditional scientific
explanation. Here's why:
Theoretical
studies of the multiverse (within inflationary cosmology and
string theory, for example) suggest that the detailed properties
of the other universes may be significantly different from
our own. In some, the particles making up matter may have different
masses or electric charges; in others, the fundamental forces
may differ in strength and even number from those we experience;
in others still, the very structure of space and time may be
unlike anything we've ever seen.
In
this context, the quest for fundamental explanations of particular
properties of our universe — for example, the observed
strengths of the nuclear and electromagnetic forces — takes
on a very different character. The strengths of these forces
may vary from universe to universe and thus it may simply be
a matter of chance that, in our universe, these forces have
the particular strengths with which we're familiar. More intriguingly,
we can even imagine that in the other universes where their
strengths are different, conditions are not hospitable to our
form of life. (With different force strengths, the processes
giving rise to long-lived stars and stable planetary systems — on
which life can form and evolve — can easily be disrupted.)
In this setting, there would be no deep explanation for the
observed force strengths. Instead, we would find ourselves
living in a universe in which the forces have their familiar
strengths simply because we couldn't survive in any of the
others where the strengths were different.
If
true, the idea of a multiverse would be a Copernican revolution
realized on a cosmic scale. It would be a rich and astounding
upheaval, but one with potentially hazardous consequences.
Beyond the inherent difficulty in assessing its validity, when
should we allow the multiverse framework to be invoked in lieu
of a more traditional scientific explanation? Had this idea
surfaced a hundred years ago, might researchers have chalked
up various mysteries to how things just happen to be in our
corner of the multiverse, and not pressed on to discover all
the wondrous science of the last century?
Thankfully
that's not how the history of science played itself out, at
least not in our universe. But the point is manifest. While
some mysteries may indeed reflect nothing more than the particular universe within the multiverse that we find ourselves inhabiting,
other mysteries are worth struggling with because they are
the result of deep, underlying physical laws. The danger, if
the multiverse idea takes root, is that researchers may too
quickly give up the search for such underlying explanations.
When faced with seemingly inexplicable observations, researchers
may invoke the framework of the multiverse prematurely — proclaiming
some or other phenomenon to merely reflect conditions in our
bubble universe — thereby failing to discover the deeper
understanding that awaits us.
DAVID GELERNTER
Computer Scientist, Yale University; Chief Scientist, Mirror Worlds Technologies; Author, Drawing Life

What are people well-informed about in the Information Age?
Let's
date the Information Age to 1982, when the Internet went into
operation & the PC had just been born. What if people have
been growing less well-informed ever since? What if people
have been growing steadily more ignorant ever since the so-called
Information Age began?
Suppose an average US voter, college teacher, 5th-grade teacher, and 5th-grade student are each less well-informed today than they
5th-grade student are each less well-informed today than they
were in '95, and were less well-informed then than in '85? Suppose,
for that matter, they were less well-informed in '85 than in
'65?
If this is indeed the "information age," what exactly
are people well-informed about? Video games? Clearly
history, literature, philosophy, scholarship in general are not
our specialities. This is some sort of technology age — are
people better informed about science? Not that I can tell. In
previous technology ages, there was interest across the population
in the era's leading technology.
In the 1960s, for example, all sorts of people were interested
in the space program and rocket technology. Lots of people learned
a little about the basics — what a "service module" or "trans-lunar
injection" was, why a Redstone-Mercury vehicle was different
from an Atlas-Mercury — all sorts of grade-school students,
lawyers, housewives, English profs were up on these topics. Today
there is no comparable interest in computers & the
internet, and no comparable knowledge. "TCP/IP," "Routers," "Ethernet
protocol," "cache hits" — these are topics
of no interest whatsoever outside the technical community. The
contrast is striking.
MAHZARIN R. BANAJI
Professor of Psychology, Harvard University

Using science to get to the next and perhaps last frontier
We
do not (and to a large extent, cannot) know who we are through
introspection.
Conscious
awareness is a sliver of the machine that is human intelligence
but it's the only aspect we experience and hence the only aspect
we come to believe exists. Thoughts, feelings, and behavior
operate largely without deliberation or conscious recognition — it's
the routinized, automatic, classically conditioned, pre-compiled
aspects of our thoughts and feelings that make up a large part
of who we are. We don't know what motivates us even though
we are certain we know just why we do the things we do. We
have no idea that our perceptions and judgments are incorrect
(as measured objectively) even when they are. Even more stunning,
our behavior is often discrepant from our own conscious intentions
and goals, not just objective standards or somebody else's
standards.
The
same lack of introspective access that keeps us from seeing
the truth in a visual illusion is the lack of introspective
access that keeps us from seeing the truth of our own minds
and behavior. The "bounds" on our ethical sense rarely
come to light because the input into those decisions is kept
firmly outside our awareness. Or at least, they don't come
to light until science brings them into the light in a way
that no longer permits them to remain in the dark.
It is a fact that human minds have a tendency to categorize and learn in particular ways, and that feelings for one's ingroup and fear of outgroups are part of our evolutionary history. Fearing things that are different from oneself, and holding what's not part of the dominant culture (not American, not male, not White, not college-educated) to be "less good" whether one wants to or not, reflects a part of our history that made sense in a particular time and place, because without it we would not have survived. To know this is to understand the barriers to change honestly and with adequate preparation.
As
everybody's favorite biologist Richard Dawkins said thirty
years ago:
Let
us understand what our own selfish genes are up to, because
we may then at least have a chance to upset their designs,
something that no other species has ever aspired to do.
We
cannot know ourselves without the methods of science. The mind
sciences have made it possible to look into the universe between
the eardrums in ways that were previously unimagined.
Emily Dickinson wrote in a letter to a mentor, asking him to tell her how good a poet she was: "The sailor cannot see the north, but knows the needle can." We have the needle, and it involves direct, concerted effort, using science to get to the next and perhaps last frontier: understanding not just our place among other planets, or our place among other species, but our very nature.
RODNEY BROOKS
Director, MIT Computer Science and Artificial Intelligence Laboratory (CSAIL); Chief Technical Officer of iRobot Corporation; Author, Flesh and Machines

Being alone in the universe
The thing that I worry about most, which may or may not be true, is that the spontaneous transformation from non-living matter to living matter is extraordinarily unlikely. We know that it has happened once. But what if we gain lots of evidence over the next few decades that it happens very rarely?
In
my lifetime we can expect to examine the surface of
Mars, and the moons of the gas giants in some detail.
We can also expect to be able to image extra-solar
planets within a few tens of light years to resolutions
where we would be able to detect evidence of large-scale biological activity.
What
if none of these indicate any life whatsoever? What
does that do to our scientific belief that life did arise spontaneously? It should not change it, but it will make it harder to defend against non-scientific attacks. And wouldn't it sadden us immensely if we were to discover that there is only a vanishingly small probability that life will arise even once in any given galaxy?
Being alone in this solar system will not be such a shock, but being alone in the galaxy, or, worse, alone in the universe, would, I think, drive us to despair, and back towards religion as our salve.