"What
Do You Believe Is True Even Though You Cannot Prove It?"
|
|
BENOIT MANDELBROT
Mathematician, Yale University; Author, The Fractal Geometry of Nature
Wandering through the frontiers of the sciences and the arts, I have always trusted the eye while leaving aside the issues that elude it. It can mislead—of course—therefore I check endlessly and never rush to print. Meanwhile, for over fifty years, I have watched as some disciplines exhaust the "top down" problems they know how to tackle. So they wander around seeking totally new patterns in a dark and deep mess, where an unlit lamp is of little help.
But the eye can continually be trained, and long ago I vowed to follow it and therefore to work "from the bottom up." Like Antaeus of Greek myth, I gather strength and persist by often touching the earth.
A few of the truths the eye told me have been disproven. Let it be. Others have been confirmed by enormous and fruitful effort, and then blossomed, one being the four-thirds conjecture in Brownian motion. Many others remain, one being the MLC conjecture about the Mandelbrot set, in which I believe for no other reason than trust in the eye.
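The object of that last belief invites a look for oneself. Below is a minimal sketch, in Python, of the kind of picture the eye is trusting: an escape-time rendering of the Mandelbrot set, whose local connectivity is precisely what the MLC conjecture asserts. The window, resolution, and iteration count are arbitrary choices of mine.

```python
import numpy as np
import matplotlib.pyplot as plt

# Escape-time rendering: c belongs to the Mandelbrot set if the
# iteration z -> z^2 + c, started at z = 0, stays bounded (|z| <= 2).
w, h, max_iter = 800, 600, 100
re = np.linspace(-2.0, 0.6, w)
im = np.linspace(-1.2, 1.2, h)
c = re[np.newaxis, :] + 1j * im[:, np.newaxis]
z = np.zeros_like(c)
escape = np.full(c.shape, max_iter)

for n in range(max_iter):
    alive = np.abs(z) <= 2.0             # points that have not escaped yet
    z[alive] = z[alive] ** 2 + c[alive]
    escape[alive & (np.abs(z) > 2.0)] = n  # record when a point escapes

plt.imshow(escape, extent=(-2.0, 0.6, -1.2, 1.2), cmap="hot")
plt.title("Mandelbrot set, escape-time view")
plt.show()
```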
STANISLAS DEHAENE
Cognitive Neuropsychology Researcher, Institut National de la Santé, Paris; Author, The Number Sense
I believe (but cannot prove) that we vastly underestimate the differences that set the human brain apart from the brains of other primates.
Certainly, no one can deny that there are important similarities in the overall layout of the human brain and, say, the macaque monkey brain. Our primary sensory and motor cortices are organized in similar ways. Even in higher brain areas, homologies can be found. In the parietal lobe, using brain-imaging methods, my lab has observed plausible human counterparts to several areas of the macaque brain involved in eye movements, hand gestures, and even number processing.
Yet I fear that those early successes in drawing human-monkey homologies tend to mask other massive differences. If we compare the primary visual areas of macaques and humans, there is already a two-fold difference in surface area, but in parietal and frontal areas a twenty- to fifty-fold increase is found. Even such a massive distortion may not suffice to "align" the macaque and human brain. Many of us suspect that, in regions such as the prefrontal and inferior parietal cortices, the changes are so dramatic that they may amount to the addition of new brain areas.
At a more microscopic level, it is already known that there is a new type of neuron which is found in the anterior cingulate region of humans and great apes, but not in other primates. These "spindle cells" send connections throughout the cortex, and thus contribute to a massive increase in long-distance connectivity in the human brain. Indeed, the change in relative white matter volume is perhaps what is most dramatic about the human brain.
I believe that these surface and connectivity changes, although they are in many cases quantitative, have brought about a qualitative revolution in brain function:

Breaking the brain's modularity.
Jean-Pierre Changeux and I have proposed that the increased connectivity of the human brain gives access to a new mode of brain function, characterized by a very flexible communication between distant brain areas. We may possess roughly the same list of specialized cerebral processors as our primate ancestors. However, I speculate that what might be unique about the human brain is its capacity to access the information inside each processor and make it available to almost any other processor through long-distance connections. I believe that we humans have a much more developed conscious workspace—a set of brain areas that can fluidly exchange signals, thus allowing us to internally manipulate information and to perform new mental syntheses. Using the workspace's long-distance connections, we can mobilize, in a top-down manner, essentially any brain area and bring it into consciousness.
Spontaneous activity and the autonomy of consciousness.
Once the internal connectivity of a system exceeds a threshold, it begins to be dominated by self-sustained, reverberating states of activity. I believe that the human workspace system has passed this threshold and has gained considerable autonomy relative to the outside world. The human brain is much less at the mercy of signals from the outside world. Its activity never ceases to reverberate from area to area, thus generating a highly structured spontaneous flow of thoughts that we project on the outside world.
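That threshold claim has a simple quantitative core. In the toy model below (my own illustration, not Dehaene and Changeux's actual workspace model), units in a random network re-excite one another, and activity dies out or self-sustains depending on whether the branching ratio (connections per unit times transmission probability) is below or above one. All parameters are illustrative.

```python
import random

def survival_time(n, k, p, steps=200, seed=0):
    """Steps until activity dies out in a random network of n units,
    each sending k random connections that transmit with probability p."""
    rng = random.Random(seed)
    targets = [[rng.randrange(n) for _ in range(k)] for _ in range(n)]
    active = set(rng.sample(range(n), 10))   # small initial volley
    for t in range(steps):
        active = {j for i in active for j in targets[i]
                  if rng.random() < p}
        if not active:
            return t                         # activity died out
    return steps                             # still reverberating

# The branching ratio k*p crosses 1 between k=3 and k=7 (with p = 0.2):
# below it, activity fizzles; above it, reverberation is self-sustained.
for k in (1, 3, 7, 15):
    print(f"k={k:2d}  active for {survival_time(2000, k, 0.2)} steps")
```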
Of course, spontaneous brain activity is present in all species, but if I am correct we will discover that it is both more evident and more structured in the human brain, at least in higher cortical areas where "workspace" neurons with long-distance axons are denser. Furthermore, if human brain activity can be detached from outside stimulation, we will need to find new paradigms to study it, because bombarding the human brain with stimuli, as we do in most brain-imaging experiments, will not suffice. There is already some evidence for this statement: by directly comparing fMRI activations evoked by the same visual stimuli in humans and macaques, Guy Orban and his colleagues in Leuven have found that prefrontal cortex activity is five times larger in macaques than in humans. In their own words, "there may be more volitional control over visual processing in humans than in monkeys."
The profound influence of culture on the human brain.
The human species is also unique in its ability to expand its functionality by inventing new cultural tools. Writing, arithmetic, and science are all very recent inventions—our brains did not have time to evolve for them, but I speculate that they were made possible because we can mobilize our old areas in novel ways. When we learn to read, we "recycle" a specific region of our visual system, which has become known as the "visual word form area," for the purpose of recognizing strings of letters and connecting them to language areas. When we learn Arabic numerals, likewise, we build a circuit to quickly convert those shapes into quantities, a fast connection from bilateral visual areas to the parietal quantity area. Even an invention as elementary as finger counting dramatically changes our cognitive abilities: Amazonian peoples who have not invented counting are unable to perform exact calculations as simple as 6 − 2.
Crucially, this "cultural recycling" implies that whenever we look at a human brain, the functional architecture that we see results from a complex mixture of biological and cultural constraints. Education is likely to greatly increase the gap between the human brain and that of our primate cousins. Virtually all human brain-imaging experiments today are performed on highly literate volunteers—and therefore, presumably, highly transformed brains. To better understand the differences between the human brain and the monkey brain, we will need to invent new methods, both to decipher the organization of the baby brain prior to education and to study how it changes with education.
TOR NØRRETRANDERS
Science Writer; Consultant; Lecturer, Copenhagen; Author, The User Illusion
I believe in belief—or rather: I have faith in having faith. Yet, I am an atheist (or a "bright," as some would have it). How can that be?
It is important to have faith, but not necessarily in God. Faith is important
far outside the realm of religion: having faith in other people, in oneself,
in the world, in the existence of truth, justice and beauty. There is
a continuum of faith, from the basic everyday trust in others to the
grand devotion to divine entities.
Recent discoveries in the behavioural sciences, such as experimental economics and game theory, show that it is a common human attitude towards the world to have faith. It is vital in human interactions; and it is no coincidence that the importance of anchoring behaviour in risky trust is stressed in worlds as far apart as Søren Kierkegaard's existentialist Christianity and modern theories of bargaining behaviour in economic interactions. Both stress the importance of the inner, subjective conviction as the basis for actions, the feeling of an inner glow.
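One concrete example from that experimental literature is the investment ("trust") game of Berg, Dickhaut and McCabe (1995): an investor may entrust money to a stranger, the amount grows in transit, and the stranger decides how much to return. Pure self-interest predicts that nothing is sent, yet most experimental subjects send substantial amounts. A minimal payoff sketch in Python, with the commonly used laboratory numbers:

```python
def trust_game(sent, returned_fraction, endowment=10, multiplier=3):
    """Payoffs (investor, trustee): the investor sends part of an
    endowment, it is multiplied in transit, and the trustee returns
    a chosen fraction of the enlarged pot."""
    pot = sent * multiplier
    investor = endowment - sent + returned_fraction * pot
    trustee = pot - returned_fraction * pot
    return investor, trustee

# Play under pure self-interest: send nothing, return nothing.
print(trust_game(0, 0.0))   # (10.0, 0.0)
# What experiments typically find instead: real trust and reciprocity.
print(trust_game(5, 0.5))   # (12.5, 7.5): both parties end up better off
```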
One could say that modern behavioral science is re-discovering the importance
of faith that has been known to religions for a long time. And I would
argue that this re-discovery shows us that the activity of having faith
can be decoupled from the belief in divine entities.
So here is what I have faith in: We have a hand backing us, not as a divine foresight or control, but in the very simple and concrete sense that we are all survivors. We are all the result of a very long line of survivors who survived long enough to have offspring. Amoebae, rodents and mammals. We can therefore have confidence that we are experts in survival. We have a wisdom inside, inherited from millions of generations of animals and humans, a knowledge of how to go about life. That does not in any way imply foresight or planning ahead on our behalf. It only implies that we have a reason to trust our ability to deal with whatever challenges we meet. We have inherited such an ability.
Therefore, we can trust each other, ourselves and life itself. We have
no guarantee or promises for eternal life, not at all. The enigma of
death is still there, ineradicable.
But we have a reason to have confidence in ourselves. The basic fact that we are still here—despite snakes, stupidity and nuclear weapons—gives us reason to have confidence in ourselves and each other, to trust others and to trust life. To have faith.
Because we are here, we have reason for having faith in having faith.
STEVE GIDDINGS
Theoretical Physicist, University of California, Santa Barbara
I believe that black holes do not destroy information, as Hawking argued long ago, and the reason is that strong gravitational effects undermine the statement that degrees of freedom inside and outside the black hole are independent.
On the first point, I am far from alone; many string theorists and others now believe that black holes don't destroy information, and thus don't violate quantum mechanics. Hawking himself recently announced that he believes this, and has conceded a famous bet, but has not yet published the work giving a sharp statement of where his original logic went wrong.
The second point I believe, but cannot yet prove to the point of convincing many of my colleagues. While many believe that Hawking was wrong, there is a lot of dissent over where exactly his calculation fails, and none of the arguments previously presented have sharply identified this point of failure. If black holes emit information instead of destroying it, this probably comes from a breakdown of locality. Lowe, Polchinski, Susskind, Thorlacius, and Uglum have argued that the mechanism for locality violation involves the formation of long strings. Horowitz and Maldacena have argued that the singularity at the center of a black hole must be a unique state, in effect squeezing information out in a ghostly way. And others have made other suggestions.
But I believe, and my former student Lippert and I have published arguments, that the breakdown of locality that invalidates Hawking's work involves strong gravitational physics that makes it inconsistent to think of separate and independent degrees of freedom inside and outside the black hole. The assumption that these degrees of freedom are separate is fundamental to Hawking's argument. Our argument for where it fails has a satisfying generality that mirrors the generality of Hawking's original work—neither depends on the specifics of what kind of matter exists in the theory. We base our argument on a principle we call the locality bound. This is a criterion for when physical degrees of freedom can be independent (in technical language, described by vanishing of commutators of corresponding operators). Roughly, a degree of freedom corresponding to a particle at position x with momentum p and another at y with momentum q will be independent only if the separation x − y is large enough that they are outside of a black hole that would form from their mutual energy. I believe this is the beginning of a general criterion (which will ultimately be more precisely formulated) for when locality breaks down in physics. This could be the beginning of a deeper understanding of holography. And it should be relevant to black hole physics because of the large relative energies of the Hawking radiation and degrees of freedom falling into a black hole. But this is not fully proven. Yet.
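The criterion just described can be written schematically. What follows is my own paraphrase in four spacetime dimensions, not Giddings and Lippert's precise formulation:

```latex
% Locality bound (schematic, four dimensions): two degrees of freedom
% can be independent (commuting operators) only if their separation
% exceeds the Schwarzschild radius set by their combined momenta.
\big[\phi(x,p),\,\phi(y,q)\big] \approx 0
\quad\text{only if}\quad
|x - y| \;\gtrsim\; R_s \sim G\,|p + q|
```

Inside that radius, the two excitations would sit within the horizon of the black hole their own energy would create, so treating them as independent, as Hawking's derivation does for degrees of freedom on either side of the horizon, ceases to be consistent.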
HOWARD RHEINGOLD
Communications Expert; Author, Smart Mobs
I believe that we humans, who know so much about cosmology and immunology, lack a framework for thinking about why and how humans cooperate. I believe that part of the reason for this is an old story we tell ourselves about the world: Businesses and nations succeed by competing well. Biology is a war, where only the fit survive. Politics is about winning. Markets grow solely from self-interest. Rooted in the zeitgeist of Adam Smith's and Charles Darwin's eras, the scientific, social, economic, and political stories of the 19th and 20th centuries overwhelmingly emphasized the role of competition as a driver of evolution, progress, commerce, and society.
I believe that the outlines of a new narrative are becoming visible—a story in which cooperative arrangements, interdependencies, and collective action play a more prominent role and the essential (but not all-powerful) story of competition and survival of the fittest shrinks just a bit.
Although new knowledge in biology about the evolution of altruistic behavior and the role of symbiotic relationships, new understandings of economic behavior derived from experiments in game theory and neuroeconomic research, sociological investigations of institutions for collective action, and computation-enabled technologies such as grid computing, mesh networks, and online markets all provide important clues, I don't believe anyone is likely to formulate an algorithm or recipe for human cooperation. I suspect that the complex interdependencies of human thought, behavior, and culture entail an equivalent of the limits Heisenberg found in physics and Gödel established for mathematics.
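One of those game-theoretic clues can be made concrete. The sketch below, a standard setup of my own choosing rather than any particular experiment, shows the iterated prisoner's dilemma, where a reciprocating strategy such as tit-for-tat sustains cooperation that one-shot self-interest rules out:

```python
# Iterated prisoner's dilemma with the standard payoffs (T=5, R=3, P=1, S=0).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):   # cooperate first, then mirror
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(a, b, rounds=100):
    ha, hb, sa, sb = [], [], 0, 0
    for _ in range(rounds):
        ma, mb = a(hb), b(ha)        # each strategy sees the other's past
        pa, pb = PAYOFF[(ma, mb)]
        sa, sb = sa + pa, sb + pb
        ha.append(ma)
        hb.append(mb)
    return sa, sb

print(play(tit_for_tat, tit_for_tat))    # (300, 300): stable cooperation
print(play(always_defect, tit_for_tat))  # (104, 99): defection barely pays
```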
I believe that more knowledge than we have now, together with a conceptual framework that is neither reductionistic nor theological, could lead to better-designed economic and political policies and institutions. The institutional and conceptual barriers to mounting such an effort are as formidable as the methodological barriers. I am reminded of Doug Engelbart's problem in the 1950s. He couldn't convince computer engineers, librarians, or public-policy analysts that computing machinery could be used to augment human thinking, as well as perform scientific calculation and business data processing. Nobody and no institution had ever thought about computing machinery that way, and older ways of thinking about what machines could be designed to do were inadequate. Engelbart had to create "A Framework for Augmenting Human Intellect" before the various hardware, software, and human-interface designers could create the first personal computers and networks.

By necessity, developing useful new understandings of how humans cooperate and fail to cooperate is an interdisciplinary task. I don't believe that the obvious importance of such an effort guarantees that it will be successfully accomplished. All our institutions for gathering and validating knowledge—universities, corporate research laboratories, and foundations—reward and support specialization.
LEO CHALUPA
Ophthalmologist and Neurobiologist, University of California, Davis
Here are three of my unproven beliefs:

(i) The human brain is the most complex entity in the known universe;

(ii) With this marvelous product of evolution we will be successful in eventually discovering all that there is to discover about the physical world, provided, of course, that some catastrophic event doesn't terminate our species; and

(iii) Science provides the best means to attain this ultimate goal.
When the scientific endeavor is considered in relation to the obvious limitations of the human brain, the knowledge we have gained in all fields to date is astonishing. Consider the well-documented variability in the functional properties of neurons. When recordings are made from a single cell—for instance, in the visual cortex in response to a flashing spot of light—one can't help but be amazed by the trial-to-trial variations in the resulting responses.
On one trial this simple stimulus might elicit a high-frequency burst of discharges, while on the next trial there could be just a hint of a response. The same thing is apparent when EEG recordings are made from the human brain. Brain waves change in frequency and amplitude in seemingly random fashion even when the subject is lying in a prone position without any variations in behavior or the environment.
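The scale of that variability is easy to see in a toy model. If a cell fired as a Poisson process at a fixed rate (the rate, duration, and trial count below are my illustrative choices, not data), identical "trials" would still produce widely scattered spike counts:

```python
import random

def poisson_spike_count(rate_hz, duration_s, rng, dt=0.001):
    """Spike count for one trial of a homogeneous Poisson process,
    simulated in 1 ms bins."""
    n_bins = int(duration_s / dt)
    return sum(rng.random() < rate_hz * dt for _ in range(n_bins))

rng = random.Random(1)
counts = [poisson_spike_count(20.0, 0.5, rng) for _ in range(10)]
print("spike counts per trial:", counts)   # same 'stimulus', scattered counts
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
print(f"mean = {mean:.1f}, variance = {var:.1f} (Poisson: variance ~ mean)")
```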
And such variability is also evident when one does brain imaging; the pretty pictures seen in publications are averages of many trials that have been "massaged" by various computer programs.
So how does the brain do it? How can it function as effectively as it does given the "noise" inherent in the system? I don't have a good answer, and neither does anyone else, in spite of the papers that have been published on this problem. But in line with the second of the three beliefs I have listed above, I am certain that someday this question will be answered in a definitive manner.
CARLO ROVELLI
Physicist, Institut Universitaire de France & University of the Mediterraneum; Author, Quantum Gravity
I am convinced, but cannot prove, that time does not exist. I mean that I am convinced that there is a consistent way of thinking about nature that makes no use of the notions of space and time at the fundamental level, and that this way of thinking will turn out to be the useful and convincing one.
I think that the notions of space and time will turn out to be useful only within some approximation. They are similar to a notion like "the surface of the water," which loses meaning when we describe the dynamics of the individual atoms forming water and air: if we look at a very small scale, there isn't really any actual surface down there. I am convinced space and time are like the surface of the water: convenient macroscopic approximations, flimsy but illusory and insufficient screens that our mind uses to organize reality.
In particular, I am convinced that time is an artifact of the approximation in which we disregard the large majority of the degrees of freedom of reality. Thus "time" is just the reflection of our ignorance.
I am also convinced, but cannot prove, that there are no objects, but only relations. By this I mean that I am convinced that there is a consistent way of thinking about nature that refers only to interactions between systems and not to states or changes of individual systems. I am convinced that this way of thinking about nature will turn out to be the useful and natural one in physics.
Beliefs that one cannot prove are often wrong, as shown by the fact that this Edge list contains contradictory beliefs. But they are essential in science and often healthy. Here is a good example from twenty-five centuries ago. Socrates, in Plato's Phaedo, says: "... seems to me very hard to prove, and I think I wouldn't be able to prove it ... but I am convinced ... that the Earth is spherical."
Finally, I am also convinced, but cannot prove, that we humans have an instinct to collaborate, and that we have rational reasons for collaborating. I am convinced that ultimately this rationality and this instinct for collaboration will prevail over the shortsighted egoistic and aggressive instinct that produces exploitation and war. Rationality and the instinct for collaboration have already given us large regions and long periods of peace and prosperity. Ultimately, they will lead us to a planet without countries, without wars, without patriotism, without religions, without poverty, where we will be able to share the world. Actually, maybe I am not sure I truly believe that I believe this; but I do want to believe that I believe this.
JOHN McCARTHY
Computer Scientist; Artificial Intelligence Pioneer, Stanford University
I think, as did Gödel, that the continuum hypothesis is false. No one will ever prove it false from the presently accepted axioms of set theory. Chris Freiling's proposed new (1986) axioms prove it false, but they are not regarded as intuitive.
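For context, here is the standard statement of Freiling's 1986 proposal, the axiom of symmetry, in my own transcription:

```latex
% Freiling's axiom of symmetry: for every function f assigning to each
% real a countable set of reals, some pair of reals avoids each other's
% assigned sets.
\mathrm{AX}:\quad
\forall f : \mathbb{R} \to [\mathbb{R}]^{\le \aleph_0}\;\;
\exists x, y \in \mathbb{R}\;
\bigl( y \notin f(x) \,\wedge\, x \notin f(y) \bigr)
```

Freiling proved in ZFC that AX is equivalent to the negation of the continuum hypothesis. The intuition offered for AX is probabilistic: throw two darts at the real line independently, and each almost surely misses the countable set attached to the other's landing point.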
I think human-level artificial intelligence will be achieved.
JAMES O'DONNELL
Classicist; Cultural Historian; Provost, Georgetown University; Author, Avatars of the Word
What do I believe is true even though I cannot prove it? This question has a double edge and needs two answers.
First, and most simply: "everything." On a strict Popperian reading, all the things I "know" are only propositions that I have not yet falsified. They are best estimates, hypotheses that, so far, make sense of all the data that I possess. I cannot prove that my parents were married on a certain day in a certain year, but I claim to "know" that date quite confidently. Sure, there are documents, but in fact in their case there are different documents that present two different dates, and I recall the story my mother told to explain that, and I believe it, but I cannot "prove" that I am right. I also know Newton's laws and indeed believe them, but I also now know their limitations and imprecisions and suspect that more surprises may lurk in the future.
But that's a generic answer and not much in the forward-looking and optimistic spirit that characterizes Edge. So let me propose this challenge to practitioners of my own historical craft. I believe that there are in principle better descriptions and explanations for the development and sequence of human affairs than human historians are capable of providing. We draw our data mainly from witnesses who share our scale of being, our mortality, and for that matter our viewpoint. And so we explain history in terms of human choices and the behavior of organized social units. The rise of Christianity or the Norman Conquest seem to us to be events we can explain, and we explain them in human-scale terms. But it cannot be excluded or disproved that events can be better explained on a much larger time scale or a much smaller scale of behavior. An outright materialist could argue that all my acts, from the day of my birth, have been a determined result of genetics and environment. It was fashionable a generation ago to argue a Freudian grounding for Luther's revolt, but in principle it could as easily be true and, if we could know it, more persuasive to demonstrate that his acts were determined at the molecular and submolecular level.
The problem with such a notion is, of course, that we are very far from being able to outline such a theory, much less make it persuasive, much less make it something that another human being could comprehend. Understanding even one other person's life at such microscopic detail would take much more than one lifetime.
So what is to be done? Of course historians will constantly struggle to improve their techniques and tools. The advance of dendrochronology (dating wood by its tree rings, and consequently dating buildings and other artifacts far more accurately than ever before) can stand as one example of the way in which technological advance can tell us things we never knew before. But we will also continue to write and to read stories in the old style, because stories are the way human beings most naturally make sense of their world. An awareness of the powerful possibility of whole other orders of possible description and explanation, however, should at least teach us some humility and give us some thoughtful pause when we are tempted to insist too strongly on one version of history—the one we happen to be persuaded is true. Even a Popperian can see that this kind of intuition can have a beneficial effect.
PAMELA McCORDUCK
Writer; Author, Machines Who Think
Although I can't prove it, I believe that thanks to new kinds of social modeling that take into account individual motives as well as group goals, we will soon grasp in a deep way how collective human behavior works, whether it's action by small groups or by nations. Any predictive power this understanding has will be useful, especially with regard to unexpected outcomes and even unintended consequences. But it will not be infallible, because the complexity of such behavior makes exact prediction impossible.
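A classic, tiny instance of such modeling is Schelling's segregation model, in which mildly tolerant individual motives produce an unintended collective outcome. The grid size, vacancy rate, and tolerance threshold below are illustrative choices of mine:

```python
import random

# Schelling's segregation model: two types of agents on a torus grid.
# An agent moves to a random empty cell if fewer than THRESHOLD of its
# occupied neighbors share its type. Even this mild preference tends to
# produce strongly segregated neighborhoods.
SIZE, THRESHOLD = 20, 0.3
rng = random.Random(0)
grid = {(x, y): (rng.choice("AB") if rng.random() > 0.1 else None)
        for x in range(SIZE) for y in range(SIZE)}

def occupied_neighbors(x, y):
    return [grid[(i % SIZE, j % SIZE)]
            for i in (x - 1, x, x + 1) for j in (y - 1, y, y + 1)
            if (i, j) != (x, y) and grid[(i % SIZE, j % SIZE)] is not None]

def similarity(cell):
    nbrs = occupied_neighbors(*cell)
    return sum(n == grid[cell] for n in nbrs) / len(nbrs) if nbrs else 1.0

def average_similarity():
    cells = [c for c in grid if grid[c] is not None]
    return sum(similarity(c) for c in cells) / len(cells)

print(f"before: {average_similarity():.2f}")   # roughly 0.5 for a random mix
for _ in range(50):                            # relocation dynamics
    movers = [c for c in grid if grid[c] is not None
              and similarity(c) < THRESHOLD]
    empties = [c for c in grid if grid[c] is None]
    rng.shuffle(empties)
    for c, e in zip(movers, empties):
        grid[e], grid[c] = grid[c], None
print(f"after:  {average_similarity():.2f}")   # typically far above 0.5
```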
MARTIN REES
Cosmologist, Cambridge University; UK Astronomer Royal; Author, Our Final Hour
I believe that intelligent life may presently be unique to our Earth, but that, even so, it has the potential to spread through the galaxy and beyond—indeed, the emergence of complexity could still be near its beginning. If SETI searches fail, that would not render life a cosmic sideshow. Indeed, it would be a boost to our cosmic self-esteem: terrestrial life, and its fate, would become a matter of cosmic significance. Even if intelligence is now unique to Earth, there's enough time lying ahead for it to spread through the entire Galaxy, evolving into a teeming complexity far beyond what we can even conceive.
There's an unthinking tendency to imagine that humans will be around in 6 billion years, watching the Sun flare up and die. But the forms of life and intelligence that have by then emerged would surely be as different from us as we are from a bacterium. That conclusion would follow even if future evolution proceeded at the rate at which new species have emerged over the 3 or 4 billion years of the geological past. But post-human evolution (whether of organic species or of artefacts) will proceed far faster than the changes that led to our emergence, because it will be intelligently directed rather than being—like pre-human evolution—the gradual outcome of Darwinian natural selection. Changes will drastically accelerate in the present century—through intentional genetic modifications, targeted drugs, perhaps even silicon implants into the brain. Humanity may not persist as a single species for more than a few centuries—especially if communities have by then become established away from the Earth.
But a few centuries is still just a millionth of the Sun's future lifetime—and the entire universe probably has a longer future still. The remote future is squarely in the realm of science fiction. Advanced intelligences billions of years hence might even create new universes. Perhaps they'll be able to choose what physical laws prevail in their creations. Perhaps these beings could achieve the computational capability to simulate a universe as complex as the one we perceive ourselves to be in.
My belief may remain unprovable for billions of years. It could be falsified sooner—for instance, we (or our immediate post-human descendants) may develop theories that reveal inherent limits to complexity. But it's a substitute for religious belief, and I hope it's true.
CAROLYN PORCO
Planetary Scientist; Leader, Cassini Imaging Team; Director, CICLOPS, Space Science Institute, Boulder
This is a treacherous question to ask, and a trivial one to answer. Treacherous because the shoals between the written lines can be navigated by some to the conclusion that truth and religious belief develop by the same means and are therefore equivalent. To those unfamiliar with the process by which scientific hunches and hypotheses are advanced to the level of verifiable fact, and with the exacting standards applied in that process, the impression may be left that the work of the scientist is no different from that of the prophet or the priest.
Of course, nothing could be further from reality.
The whole scientific method relies on the deliberate, high-magnification scrutiny and criticism by other scientists of any mechanisms proposed by any individual to explain the natural world. No matter how fervently a scientist may "believe" something to be true, and unlike religious dogma, his or her belief is not accepted as a true description or even approximation of reality until it passes every test conceivable, executable and reproducible. Nature is the final arbiter, and great minds are great only insofar as they can intuit the way nature works and are shown by subsequent examination and proof to be right.
With that preamble out of the way, I can say that for me, personally, this is a trivial question to answer. Though no one has yet shown that life of any kind, other than Earthly life, exists in the cosmos, I firmly believe that it does. My justification for this belief is a commonly used one, with no strenuous exertion of the intellect or suspension of disbelief required.
Our reconstruction of early solar system history, and the chronology of events that led to the origin of the Earth and Moon and the subsequent development of life on our planet, informs us that self-replicating organisms originated from inanimate materials in a very narrow window of time. The tail end of the accretion of the planets—a period known as "the heavy bombardment"—ended about 3.8 billion years ago, approximately 800 million years after the Earth formed. This is the time of formation and solidification of the big flooded impact basins we readily see on the surface of the Moon, and the time when the last large catastrophe-producing impacts also occurred on the Earth. In other words, the terrestrial surface environment didn't settle down and become conducive to the development of fragile living organisms until nearly a billion years had gone by.
However, the first appearance of life forms on the Earth, the oldest fossils we have discovered so far, occurred shortly after that: around 3.5 billion years ago or even earlier. The interval in between—only 300 million years, and less than the time represented by the rock layers in the walls of the Grand Canyon—is the proverbial blink of the cosmic eye. Despite the enormous complexity of even the simplest biological forms and processes, and the undoubtedly lengthy and complicated chain of chemical events that must have occurred to evolve animated molecular structures from inanimate atoms, it seems an inevitable conclusion that Earthly life developed very quickly, as soon as the coast was clear for long enough.
Evidence is gathering that the events that created the solar system and the Earth, driven predominantly by gravity, are common and pervasive in our galaxy and, by inductive reasoning, in galaxies throughout the cosmos. And the cosmos is very, very big. Consider the overwhelming numbers of galaxies in the visible cosmos alone, all the Sun-like stars in those galaxies, the number of habitable planets likely to be orbiting those stars, and the ease with which life developed on our own habitable planet, and it becomes increasingly unavoidable that life is itself a fundamental feature of our universe ... along with dark matter, supernovae, and black holes.
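The scale of that argument can be put into numbers in the spirit of the Drake equation. Every factor below is an assumption chosen for illustration, not a measurement:

```python
# Order-of-magnitude version of the numbers argument above.
# Every factor is an illustrative assumption, not a measured value.
galaxies           = 1e11   # galaxies in the visible universe (rough)
stars_per_galaxy   = 1e11   # stars per galaxy (rough)
frac_sunlike       = 0.1    # fraction of stars roughly Sun-like
planets_per_star   = 0.1    # habitable-zone planets per such star
frac_life_develops = 1e-3   # chance life arises where it can (unknown!)

habitable = galaxies * stars_per_galaxy * frac_sunlike * planets_per_star
print(f"habitable planets: ~{habitable:.0e}")                    # ~1e+20
print(f"planets with life: ~{habitable * frac_life_develops:.0e}")
# Even with a pessimistic final factor, the expected count is enormous;
# the argument's weak link is that frac_life_develops is truly unknown.
```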
I believe we are not alone. But it doesn't matter what I think, because I can't prove it. It is so beguiling a question, though, that humankind is presently and actively seeking the answer. The search for life and so-called "habitable zones" is becoming increasingly the focus of our planetary explorations, and it may in fact transpire one day that we discover life forms under the ice on some moon orbiting Jupiter or Saturn, or decode the intelligible signals of an advanced, unreachably distant alien organism. That will be a singular day indeed. I only hope I'm still around when it happens.