"What
Do You Believe Is True Even Though You Cannot Prove It?"
GARY MARCUS
Psychologist, New York University; Author, The Birth of the Mind
If computers are made up of hardware and software, transistors and resistors, what are the neural machines we know as minds made up of?
Minds clearly are not made up of transistors and resistors, but I firmly believe that at least one of the most basic elements of computation is shared by man and machine: the ability to represent information in terms of an abstract, algebra-like code.
In a computer, this means that software is made up of hundreds, thousands, even millions of lines that say things like IF X IS GREATER THAN Y, DO Z, or CALCULATE THE VALUE OF Q BY ADDING A, B, AND C. The same kind of abstraction seems to underlie our knowledge of linguistics. For instance, the famous linguistic dictum that a Sentence consists of a Noun Phrase plus a Verb Phrase can apply to an infinite number of possible nouns and verbs, not just a few familiar words. In its open-endedness, it is an example of mental algebra par excellence.
In my lab, we discovered that even infants seem to be able to grasp something quite similar. For example, in the course of just two minutes, a seven-month-old baby can extract the ABA "grammar" inherent in a set of made-up sentences like la ta la, ga na ga, je li je. Or the ABB "grammar" in sentences like la ta ta, ga na na, je li li.
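The kind of open-ended, variable-based rule Marcus describes is easy to state as code. Here is a minimal sketch in Python (my illustration, not from the essay; the function name is invented) that classifies a three-syllable sentence as ABA or ABB no matter which particular syllables fill the A and B slots:

```python
def grammar_of(sentence: str) -> str | None:
    """Classify a three-syllable sentence as ABA or ABB, else None."""
    syllables = sentence.split()
    if len(syllables) != 3:
        return None
    a, b, c = syllables
    if a == c and a != b:
        return "ABA"  # e.g., "la ta la"
    if b == c and a != b:
        return "ABB"  # e.g., "la ta ta"
    return None

# The rule mentions only the abstract slots A and B, so it generalizes
# to syllables it has never seen before.
for s in ["la ta la", "ga na ga", "je li je", "la ta ta", "ga na na"]:
    print(s, "->", grammar_of(s))
```

The point of the sketch is that the rule refers only to the abstract slots A and B, so it applies to arbitrary new syllables, which is what an algebra-like mental code amounts to.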
Of course, this experiment doesn't prove that there is an "algebra" circuit in the brain—psychological techniques alone can't do that. For final proof, we'll need neuroscientific techniques far more sophisticated than contemporary brain imaging, such that we can image the brain at the level of interactions between individual neurons. But every bit of evidence that we can collect now—from babies, from toddlers, from adults, from psychology and from linguistics—seems to confirm the idea that algebra-like abstraction is a fundamental component of thought.
KARL SABBAGH
Writer and Television Producer; Author, The Riemann Hypothesis
I believe it is true that if there is intelligent life elsewhere in the universe, of whatever form, it will be familiar with the same concept of counting numbers.
Some philosophers believe that pure mathematics is human-specific and that it is possible for an entirely different type of mathematics to emerge from a different type of intelligence, a type of mathematics that has nothing in common with ours and may even contradict it. But it is difficult to think of what sort of life-form would not need the counting numbers. The stars in the sky are discrete points and cry out to be counted by beings throughout the universe, but alien life-forms may not have vision.
Intelligent objects with boundaries between being and non-being surely want to be measured—"I'm bigger than you", "I need a size 312 overcoat"—but perhaps there are life-forms which don't have boundaries and are instead continuously varying density changes in some Jovian sea. Intelligent life might be disembodied, or at least lack a discrete body and merely be transmitted between various points in a solid material matrix, so that it was impossible to distinguish one intelligent being from another.
But sooner or later, whether it is to measure the passing of time, the magnitude of distance, or the density of one Jovian being compared with another, numbers will have to be used. And if numbers are used, 2 + 2 must always equal 4, the number of stars in the Pleiades brighter than magnitude 5.7 will always be 11, which will always be a prime number, and two measurements of the speed of light in any units in identical conditions will always be identical. Of course, the fact that I find it difficult to think of beings which won't need our sort of mathematics doesn't mean they don't exist, but that's what I believe without proof.
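One way to illustrate the observer-independence Sabbagh has in mind: his arithmetic claims can be checked by pure computation, with no appeal to the senses at all. A minimal sketch in Lean 4, assuming the Mathlib library is available (my illustration, not part of the essay):

```lean
import Mathlib

-- 2 + 2 = 4 holds by computation alone, with no appeal to observation.
example : 2 + 2 = 4 := rfl

-- 11 is prime; `norm_num` verifies this by computation as well.
example : Nat.Prime 11 := by norm_num
```

Any intelligence that counts at all would be driven to the same verdicts, which is Sabbagh's point.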
SCOTT ATRAN
Anthropologist, University of Michigan; Author, In Gods We Trust
There is no God that has existence apart from people's thoughts of God. There is certainly no Being that can simply suspend the (nomological) laws of the universe in order to satisfy our personal or collective yearnings and whims—like a stage director called on to change and improve a play. But there is a mental (cognitive and emotional) process common to science and religion of suspending belief in what you see and take for obvious fact. Humans have a mental compulsion—perhaps a by-product of the evolution of a hyper-sensitive reasoning device to serve our passions—to situate and understand the present state of mundane affairs within an indefinitely extendable and overarching system of relations between hitherto unconnected elements. In any event, what drives humanity forward in history is this quest for non-apparent truth.
JESSE BERING
Psychologist, University of Arkansas
In 1936, shortly after the outbreak of the Spanish Civil War, the moribund philosopher Miguel de Unamuno, author of the classic existential text Tragic Sense of Life, died alone in his office of heart failure at the age of 72.
Unamuno was no religious sentimentalist. As a rector and Professor of Greek at the University of Salamanca, he was an advocate of rationalist ideals and even died a folk hero for openly denouncing Francisco Franco's fascist regime. He was, however, ridden with a 'spiritual' burden that troubled him nearly all his life. It was the problem of death. Specifically, the problem was his own death, and what, subjectively, it would be "like" for him after his own death: "The effort to comprehend it causes the most tormenting dizziness." I've taken to calling this dilemma "Unamuno's paradox" because I believe that it is a universal problem. It is, quite simply, the materialist understanding that consciousness is snuffed out by death coming into conflict with the human inability to simulate the psychological state of death.
Of course, adopting a parsimonious stance allows one to easily deduce that we as corpses cannot experience mental states, but this theoretical proposition can only be justified by a working scientific knowledge (i.e., that the non-functioning brain is directly equivalent to the cessation of the mind). By stating that psychological states survive death, or even alluding to this possibility, one is committing oneself to a radical form of mind-body dualism. Consider how bizarre it truly is: Death is seen as a transitional event that unbuckles the body from its ephemeral soul, the soul being the conscious personality of the decedent and the once animating force of the now inert physical form. This dualistic view sees the self as being initially contained in bodily mass, as motivating overt action during this occupancy, and as exiting or taking leave of the body at some point after its biological expiration. So what, exactly, does the brain do if mental activities can exist independently of the brain? After all, as John Dewey put it, mind is a verb, not a noun.
And yet this radicalism is especially common. In the United States alone, as much as 95% of the population reportedly believes in life after death. How can so many people be wrong? Quite easily, if you consider that we're all operating with the same standard, blemished psychological hardware. It's tempting to argue, as Freud did, that it's just people's desire for an afterlife that's behind it all. But it would be a mistake to leave it at that. Although there is convincing evidence showing that emotive factors can be powerful contributors to people's belief in life after death, whatever one's motivations for rejecting or endorsing the idea of an immaterial soul that can defy physical death, the ability to form any opinion on the matter would be absent if not for our species' expertise at differentiating unobservable minds from observable bodies.
But here's the rub. The materialist version of death is the ultimate killjoy null hypothesis. The epistemological problem of knowing what it is "like" to be dead can never be resolved. Nevertheless, I think that Unamuno would be proud of recent scientific attempts to address the mechanics of his paradox. In a recent study, for example, I reported that when adult participants were asked to reason about the psychological abilities of a protagonist who had just died in an automobile accident, even participants who later classified themselves as "extinctivists" (i.e., those who endorsed the statement "what we think of as the 'soul,' or conscious personality of a person, ceases permanently when the body dies") nevertheless stated that the dead person knew that he was dead. For example, when asked whether the dead protagonist knew that he was dead (a feat demanding, of course, ongoing cognitive abilities), one young extinctivist's answer was almost comical. "Yeah, he'd know, because I don't believe in the afterlife. It is non-existent; he sees that now." Try as he might to be a good materialist, this subject couldn't help but be a dualist.
How do I explain these findings? Like reasoning about one's past mental states during dreamless sleep or while in other somnambulistic states, consciously representing a final state of non-consciousness poses formidable, if not impassable, cognitive constraints. By relying on simulation strategies to derive information about the minds of dead agents, you would in principle be compelled to "put yourself into the shoes" of such organisms, which is of course an impossible task. These constraints may lead to a number of telltale errors, namely Type I errors (inferring mental states when in fact there are none), regarding the psychological status of dead agents. Several decades ago, the developmental psychologist Gerald Koocher described, for instance, how a group of children tested on death comprehension reflected on what it might be like to be dead "with references to sleeping, feeling 'peaceful,' or simply 'being very dizzy.'" More recently, my colleague David Bjorklund and I found evidence that younger children are more likely to attribute mental states to a dead agent than are older children, which is precisely the opposite pattern that one would expect to find if the origins of such beliefs could be traced exclusively to cultural learning.
It seems that the default cognitive stance is reasoning that human minds are immortal; the steady accretion of scientific facts may throw off this stance a bit, but, as Unamuno found out, even science cannot answer the "big" question. Don't get me wrong. Like Unamuno, I don't believe in the afterlife. Recent findings have led me to believe that it's all a cognitive illusion churned up by a psychological system specially designed to think about unobservable minds. The soul is distinctly human all right. Without our evolved capacity to reason about minds, the soul would never have been. But in this case, the proof isn't in the empirical pudding. It can't be. It's death we're talking about, after all.
IRENE PEPPERBERG
Research Scientist, MIT School of Architecture and Planning; Author, The Alex Studies
I believe, but can't prove, that human language evolved from a combination of gesture and innate vocalizations, via the concomitant evolution of mirror neurons, and that birds will provide the best model for language evolution.
Work on mirror neurons over the past decade has provided intriguing evidence, although no solid proof, for the gestural origins of speech. What can be called the mirror neuron hypothesis (MNH) suggests that only a small re-organization of the nonhuman primate brain was needed to create the wiring that underlies speech acquisition/learning. What is missing from the MNH is a model of the development of language from speech; it is here that I believe that a model based on avian vocalizations is most valuable.
First, some background. Passerine birds can be divided into two groups: the oscines, who learn their songs, and the sub-oscines, who have a limited number of what seem to be innately-specified songs; the former have well-defined neural architectures and mechanisms for song acquisition; the latter lack brain structures for song acquisition, although they obviously have brain and vocal tract structures for producing song. The sub-oscines, in parallel with nonhuman primates, often use various activities or gestures (posture, numbers of repetitions of songs, feather erectness, types of flights, etc.) to provide additional information about the meaning of their utterances. W. John Smith, for example, can predict a flycatcher's actions by the combination of posture, flight, and singing pattern he observes. The songbirds, like human children learning language, will not learn their vocalizations if deafened, and need to hear, babble and practice songs before attaining adult competence; very recent work by Rose et al. demonstrates that even the syntax of their song is learned through early exposure to paired phrases, which are then combined to create the adult vocalizations. Such data, demonstrating how sparrows integrate information about temporally-related events and how they use that information to develop sequential vocal behavior, provide a viable model for human syntax acquisition.
Now, no one knows if any birds have any mirror neurons, or how their mirror neurons would function if they did exist; some neural data on responses to self-song provide intriguing hints but go no further. I predict (a) the existence of such neurons in oscines and (b) that such neurons will have a robust role in oscine song development, but (c) that only more primitively-functioning mirror neurons (akin to the differences separating monkey and human MNs) will be found in sub-oscines.
Now, what about the so-called missing link between learned and unlearned vocal behavior? No one has found such a missing link in the primate line. But Donald Kroodsma has recently discovered a flycatcher (a supposedly sub-oscine bird) that apparently learns its song. The song is simple, but has variations among groups of birds that constitute dialects. No one yet knows if these birds have brain mechanisms for song learning, or what these mechanisms might be. But I predict that Kroodsma's flycatchers will have mirror neurons that function in an intermediate manner, between those of the oscines and sub-oscines, and will provide a model for the missing link between nonhuman primate and human communication.
NASSIM NICHOLAS TALEB
Mathematical trader; Author, Fooled by Randomness
We are good at fitting explanations to the past, all the while living in the illusion of understanding the dynamics of history.
My claim is about the severe overestimation of knowledge in what I call the "ex post" historical disciplines, meaning almost all of social science (economics, sociology, political science) and the humanities, everything that depends on the non-experimental analysis of past data. I am convinced that these disciplines do not provide much understanding of the world or even their own subject matter; they mostly fit a nice-sounding narrative that caters to our desire (even need) to have a story. The implications are quite against conventional wisdom. You do not gain much by reading the newspapers, history books, analyses and economic reports; all you get is misplaced confidence about what you know. The difference between a cab driver and a history professor is only cosmetic, as the latter can express himself in a better way.
There is convincing but only partial empirical evidence of this effect. The evidence can only be seen in the disciplines that offer both quantitative data and quantitative predictions by the experts, such as economics. Economics and finance are an empiricist's dream, as we have a goldmine of data for such testing. In addition there are plenty of "experts", many of whom make more than a million a year, who provide forecasts and publish them for the benefit of their clients. Just check their forecasts against what happens after. Their projections fare hardly better than random, meaning that their "stories" are convincing, beautiful to listen to, but do not seem to help you more than listening to, say, a Chicago cab driver. This extends to inflation, growth, interest rates, balance of payments, etc. (While someone may argue that their forecasts might impact these variables, the mechanism of "self-canceling prophecy" can be taken into account.) Now consider that we depend on these people for governmental economic policy!
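Taleb's test is mechanical: line the forecasts up against the outcomes and compare the error with a naive baseline. The Python sketch below does this on synthetic data invented purely for illustration (nothing here comes from Taleb's essay); on a random walk, the naive no-change forecast is hard to beat.

```python
import random

random.seed(42)

# Synthetic, illustrative series: a random-walk "inflation" rate.
actual = [2.0]
for _ in range(99):
    actual.append(actual[-1] + random.gauss(0, 0.5))

# Hypothetical "expert" forecasts for t+1, made at time t: today's value
# plus a confidently biased guess about the change.
expert = [x + random.gauss(0.3, 0.7) for x in actual[:-1]]

# Naive baseline: tomorrow will look exactly like today.
naive = actual[:-1]

def mae(forecasts, outcomes):
    """Mean absolute error between forecasts and realized values."""
    return sum(abs(f - o) for f, o in zip(forecasts, outcomes)) / len(forecasts)

print("expert MAE:", round(mae(expert, actual[1:]), 3))
print("naive  MAE:", round(mae(naive, actual[1:]), 3))
```

On this toy series the "expert" loses to the no-change rule, which is the pattern Taleb claims the real forecasting records show.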
This implies that whether or not you read the newspapers will not make the slightest difference to your understanding of what can happen in the economy or the markets. Impressive tests on the effect of the news on prices were done by the financial empiricist Victor Niederhoffer in the 60s and have been repeated many times since, with the same results.
If you look closely at the data to check the reasons for this inability to see things coming, you will find that these people tend to guess the regular events (though quite poorly); but they miss the large deviations, these "unusual" events that carry large impacts. These outliers have a disproportionately large contribution to the total effect.
Now I am convinced, yet cannot prove it quantitatively, that such overestimation can be generalized to anything where people give you a narrative-style story from past information, without experimentation. The difference is that the economists got caught, because we have data (and techniques to check the quality of their knowledge), while historians, news analysts, biographers, and "pundits" can hide a little longer. Basically, historians might get a small trend here and there, but they missed the big events of the past centuries and, I am convinced, will not see much coming in the future. It was said: "the wise see things coming". To me the wise persons are the ones who know that they can't see things coming.
TODD FEINBERG, M.D.
Psychiatrist and Neurologist, Albert Einstein College of Medicine; Author, Altered Egos
I believe the human race will never decide that an advanced computer possesses consciousness. Only in science fiction will a person be charged with murder if they unplug a PC. I believe this because I hold, but cannot yet prove, that in order for an entity to be conscious and possess a mind, it has to be a living being.
Being alive, of course, does not guarantee the presence of a mind. For example, a plant carries on the necessary metabolic functions to be alive, but still does not possess a mind. A chimpanzee, on the other hand, is a different story. All the behavioral features we share with chimps in addition to life, such as intelligence, the ability to deceive, mirror self-recognition, and some individual social identity, make chimps seem so much like us that many in the scientific community intuitively grant chimps "beinghood" and consciousness.
In addition to being alive, therefore, it appears that a living thing
must be a being, must possess a self, to possess a mind. But silicon
chips are not alive, and computers are not beings. I argue that this
is so because the particular material substance and arrangement of the
brain is essential to the creation of consciousness and "beinghood." Computers
will never achieve consciousness because in order for a computer to be "conscious
like us" it will need to be made of living stuff like us, to grow
like us, and unfortunately, to be able to die like us.
KAI KRAUSE
Software: Concepts, Artwork & Interface Design; Byteburg Research Lab above the Rhein River
I always felt, but can't prove outright: Zen is wrong. Then is right. Everything is not about the now, as in the "here and now", "living for the moment". On the contrary: I believe everything is about the before then and the back then.
It is about the anticipation of the moment and the memory of the moment, but not the moment.
In German there is a beautiful little word for it: "Vorfreude", which still is a shade different from "delight" or "pleasure" or even "anticipation". It is the "Pre-Delight", the "Before-Joy", or as a little linguistic concoction: the "ForeFun"; a single word trying to express the relationship of time, the pleasure of waiting for the moment to arrive, the can't-wait moments of elation, of hoping for some thing, some one, some event to happen.
Whether it's on a small scale like that special taste of your favorite food, waiting to see a loved one, that one moment in a piece of music, a sequence in a movie... or the larger versions: the expectation of a beautiful vacation, the birth of a baby, your acceptance of an Oscar.
We have been told by wise men, Dalais and Maharishis, that it is supposedly all about those moments, to cherish the second it happens and never mind the continuance of time...
But for me, since early childhood days, I realized somehow: the beauty lies in the time before, the hope for, the waiting for, the imaginary picture painted in perfection of that instant in time. And then, once it passes, in the blink of an eye, it will be the memory which really stays with you, the reflection, the remembrance of that time. Cherish the thought..., remember how....
Nothing ever is as beautiful as its abstraction through the rose-colored glasses of anticipation... The toddler's hope for Santa Claus on Christmas Eve turns out to be a fat guy with a fashion issue. Waiting for the first kiss can give you waves of emotional shivers up your spine, but when it then actually happens, it's a bunch of molecules colliding, a bit of a mess, really. It is not the real moment that matters. In Anticipation the moment will be glorified by innocence, not knowing yet. In Remembrance the moment will be sanctified by memory filters, not knowing any more.
In the Zen version, trying to uphold the beauty of the moment in that moment is in my eyes a sad undertaking. Not so much because it cannot be done; all manner of techniques have been put forth for how to be a happy human by mastering the art of it. But it also implies, by definition, that all those other moments live just as much under the spotlight: the mundane, the lame, the gross, the everyday routines of dealing with life's mere mechanics.
In the Then version, it is quite the opposite: the long phases before and after last hundreds or thousands of times longer than the moment, and drown out the everyday humdrum entirely.
Bluntly put: spend your life in the eternal bliss of always having something to hope for, something to wait for, plans not realized, dreams not come true... Make sure you have new points on the horizon that you purposely create. And at the same time, relive your memories, uphold and cherish them, keep them alive and share them, talk about them.
Make plans and take pictures.
I have no way of proving such a lofty philosophical theory, but I greatly anticipate the moment that I might... and once I have done it, I will, most certainly, never forget.
ELIZABETH SPELKE
Psychologist, Harvard University
I believe, first, that all people have the same fundamental concepts, values, concerns, and commitments, despite our diverse languages, religions, social practices, and expressed beliefs. If defenders and opponents of abortion, Israelis and Palestinians, or Cambridge intellectuals and Amazonian jungle dwellers were to get beyond their surface differences, each would discover that the common ground linking them to members of the other group equals that which binds their own group together. Our common conceptual and moral commitments spring from the core cognitive systems that allow an infant to grow rapidly and spontaneously into a competent participant in any human society.
Second, one of our shared core systems centers on a notion that is false: the notion that members of different human groups differ profoundly in their concepts and values. This notion leads us to interpret the superficial differences between people as signs of deeper differences. It has quite a grip on us: Many people would lay down their lives for perfect strangers from their own community, while looking with suspicion at members of other communities. And all of us are apt to feel a special pull toward those who speak our language and share our ethnic background or religion, relative to those who don't.
Third, the most striking feature of human cognition stems not from our core knowledge systems but from our capacity to rise above them. Humans are capable of discovering that our core conceptions are false, and of replacing them with truer ones. This change has happened dramatically in the domain of astronomy. Core capacities to perceive, act on, and reason about the surface layout predispose us to believe that the earth is a flat, extended surface on which gravity acts as a downward force. This belief has been decisively overturned, however, by the progress of science. Today, every child who plays computer games or watches Star Wars knows that the earth is one sphere among many, and that gravity pulls all these bodies toward one another.
Together, my three beliefs suggest a fourth. If the cognitive sciences are given sufficient time, the truth of the claim of a common human nature eventually will be supported by evidence as strong and convincing as the evidence that the earth is round. As humans are bathed in this evidence, we will overcome our misconceptions of human differences. Ethnic and religious rivalries and conflicts will come to seem as pointless as debates over the turtles that our pancake earth sits upon, and our common need for a stable, sustainable environment for all people will be recognized. But this fourth belief is conditional. Our species is caught in a race between the progress of our science and the escalation both of our intergroup conflicts and of the destructive means to pursue them. Will humans last long enough for our science to win this race?
SAM HARRIS
Neuroscience Researcher; Author, The End of Faith
Twenty-two percent of Americans claim to be certain that Jesus will return to earth to judge the living and the dead sometime in the next fifty years. Another twenty-two percent believe that he is likely to do so. The problem that most interests me at this point, both scientifically and socially, is the problem of belief itself. What does it mean, at the level of the brain, to believe that a proposition is true? The difference between believing and disbelieving a statement—Your spouse is cheating on you; you've just won ten million dollars—is one of the most potent regulators of human behavior and emotion. The instant we accept a given representation of the world as true, it becomes the basis for further thought and action; rejected as false, it remains a string of words.
What I believe, though cannot yet prove, is that belief is a content-independent process. Which is to say that beliefs about God—to the degree that they are really believed—are the same as beliefs about numbers, penguins, tofu, or anything else. This is not to say that all of our representations of the world are acquired through language, or that all linguistic representations are on the same logical footing. And we know that different regions of the brain are involved in judging the truth-value of statements drawn from different content domains. What I do believe, however, is that the neural processes that govern the final acceptance of a statement as "true" rely on more fundamental, reward-related circuitry in our frontal lobes—probably the same regions that judge the pleasantness of tastes and odors. Truth may be beauty, and beauty truth, in more than a metaphorical sense. And false statements may, quite literally, disgust us.
Once the neurology of belief becomes clear, and it stands revealed as an all-purpose emotion arising in a wide variety of contexts (often without warrant), religious faith will be exposed for what it is: a humble species of terrestrial credulity. We will then have additional, scientific reasons to declare that mere feelings of conviction are not enough when it comes time to talk about the way the world is. The only things that guarantee that (sufficiently complex) beliefs actually represent the world are chains of evidence and argument linking them to the world. Only on matters of religious faith do sane men and women regularly dispute this fact. Apart from removing the principal reason we have found to kill one another, a revolution in our thinking about religious belief would clear the way for new approaches to ethics and spiritual experience. Both ethics and spirituality lie at the very heart of what is good about being human, but our thinking on both fronts has been shackled to the preposterous for millennia. Understanding belief at the level of the brain may hold the key to new insights into the nature of our minds, to new rules of discourse, and to new frontiers of human cooperation.
LYNN MARGULIS
Biologist, University of Massachusetts, Amherst; Author, Symbiosis in Cell Evolution
I feel that I know something that will turn out to be correct and eventually proved to be true beyond doubt.
What?
That our ability to perceive signals in the environment evolved directly from our bacterial ancestors. That is, we, like all other mammals including our apish brothers, detect odors, distinguish tastes, hear bird song and drum beats, and we too feel the vibrations of the drums. With our eyes closed we detect the light of the rising sun. These abilities to sense our surroundings are a heritage that preceded the evolution of all primates, all vertebrate animals, indeed all animals. Such sensitivities to wafting plant scents, tasty salted mixtures, police cruiser sirens, loving touches and star light register because of our "sensory cells".
These avant-garde cells of the nasal passages, the taste buds, the inner ear, the touch receptors in the skin and the retinal rods and cones all have in common the presence at their tips of projections ("cell processes") called cilia. Cilia have a recognizable fine structure. With a very high power ("electron") microscope, a precise array of protein tubules is visible: nine, exactly nine, pairs of tubules arranged in a circular array, with two singlet tubules in the center of this array. All sensory cells have this common feature, whether in the light-sensitive retina of the eye or the balance-sensitive semicircular canals of the inner ear. Cross-section slices of the tails of human, mouse and even insect (fruit-fly) sperm all share this same instantly recognizable structure too. Why this peculiar pattern? No one knows for sure, but it provides the evolutionist with a strong argument for common ancestry. The size (diameter) of the circle (0.25 micrometers) and of the constituent tubules (0.024 micrometers) aligned in the circle is identical in the touch receptors of the human finger and the taste buds of the elephant.
What do I feel that I know, in the spirit of Oscar Wilde's quip that "even true things can be proved"?
Not only that the sensory cilia derive from these exact 9-fold symmetrical structures in protists such as the "waving feet" of the paramecium or the tail of the vaginal-itch protist called Trichomonas vaginalis. Indeed, all biologists agree with the claim that sperm tails and all these forms of sensory cilia share a common ancestry.
But I go much farther. I think the common ancestor of the cilium, but not the rest of the cell, was a free-swimming entity, a skinny snake-like bacterium that, 1,500 million years ago, squiggled through muds in a frantic search for food. Attracted by some smells and repelled by others, the bacteria, by themselves, already enjoyed a repertoire of sensory abilities that remain with their descendants to this day. In fact, this bacterial ancestor of the cilium never went extinct; rather, some of its descendants are uncomfortably close to us today. This hypothetical bacterium, ancestor to all the cilia, was no ordinary rod-shaped little dot.
No, this bacterium, which still has many live relatives, entered into symbiotic partnerships with other very different kinds of bacteria. Together this two-component partnership swam and stuck together, and both partners persisted. What kind of bacterium became an attached symbiont that impelled its partner forward? None other than a squirming spirochete bacterium.
The spirochete group of bacteria includes many harmless mud-dwellers, but it also contains a few scary freaks: the treponeme of syphilis and the borrelias of Lyme disease. We animals got our exquisite ability to sense our surroundings—to tell light from dark, noise from silence, motion from stillness and fresh water from brackish brine—from a kind of bacterium whose relatives we despise. Cilia were once free agents, but they became an integral part of all animal cells. Even though the concept that cilia evolved from spirochetes has not been proved, I think it is true. Not only is it true but, given the powerful new techniques of molecular biology, I think the hypothesis will be conclusively proved. In the not-too-distant future people will wonder why so many scientists were so against my idea for so long!
GREGORY BENFORD
Physicist, UC Irvine; Author, Deep Time
Why is there scientific law at all?
We physicists explain the origin and structure of matter and energy, but not the laws that do this. Does the idea of causation apply to where the laws themselves came from? Even Alan Guth's "free lunch" gives us the universe after the laws start acting. We have narrowed down the range of field theories that can yield the big bang universe we live in, but why do the laws that govern it seem to be constant in time, and always at work?
One can imagine a universe in which laws are not truly lawful. Talk of miracles does just this, when God is supposed to make things work. Physics aims to find The Laws and hopes that these will be uniquely constrained, as when Einstein wondered if God had any choice when He made the universe. One fashionable escape hatch from this asserts that there are infinitely many universes, each sealed off from the others, which can obey any sort of law one can imagine, with parameters or assumptions changed. This "multiverse" view represents the failure of our grand agenda, of course, and seems to me contrary to Occam's Razor—solving our lack of understanding by multiplying unseen entities into infinity.
Perhaps it is a similar philosophical failure of imagination to think, as I do, that when we see order, there is usually an ordering principle. But what can constrain the nature of physical law? Evolution gave us our ornately structured biosphere, and perhaps a similar principle operates in selecting universes. Perhaps our universe arises, then, from selection for intelligences that can make fresh universes, perhaps in high energy physics experiments. Or near black holes (as Lee Smolin supposed), where space-time gets contorted into plastic forms that can make new space-times. Then an Ur-universe that had intelligence could make others, and this reproduction, with perhaps slight variation in "genetics", drives the evolution of physical law.
Selection arises because only firm laws can yield constant, benign conditions to form new life. Ed Harrison had similar ideas. Once life forms realize this, they could intentionally make more smart universes with the right, fixed laws, to produce ever more grand structures. There might be observable consequences of this prior evolution. If so, then we are an inevitable consequence of the universe, mirroring intelligences that have come before, in some earlier universe that deliberately chose to create more sustainable order. The fitness of our cosmic environment is then no accident. If we find evidence of fine-tuning, in the Dyson and Rees sense, is this evidence for such views?