"WHAT
IS YOUR DANGEROUS IDEA?" |
|
PAUL W. EWALD
Evolutionary Biologist; Director, Program in Evolutionary Medicine, University of Louisville; Author, Plague Time

A New Golden Age of Medicine
My
dangerous idea is that we have in hand most of the information
we need to facilitate a new golden age of medicine. And what
we don't have in hand we can get fairly readily by wise investment
in targeted research and intervention. In this golden age we
should be able to prevent most debilitating diseases in developed
and undeveloped countries within a relatively short period of
time with much less money than is generally presumed. This is
good news. Why is it dangerous?
One
array of dangers arises because ideas that challenge the status
quo threaten the livelihood of many. When the many are embedded
in powerful places the threat can be stifling, especially when
a lot of money and status are at stake. So it is within the arena
of medical research and practice. Imagine what would happen if
the big diseases — cancers, arteriosclerosis, stroke, diabetes — were
largely prevented.
Big pharmas would become small because the
demand for prescription drugs would drop. The prestige of physicians
would drop because they would no longer be relied upon to prolong
life. The burgeoning industry of biomedical research would shrink
because governmental and private funding for this research would
diminish. Also threatened would be scientists whose sense of
self-worth is built upon the grant dollars they bring in for
discovering minuscule parts of big puzzles. Scientists have been
beneficiaries of the lack of progress in recent decades, which
has caused leaders such as the past head of NIH, Harold Varmus,
to declare that what is needed is more basic research. But basic
research has not generated many great advancements in the prevention
or cure of disease in recent decades.
The major exception is
in the realm of infectious disease where many important advancements
were generated from tiny slices of funding. The discovery that
peptic ulcers are caused by infections that can be cured with
antibiotics is one example. Another is the discovery that liver
cancer can often be prevented by a vaccine against the hepatitis
B virus or by screening blood for hepatitis B and C viruses.
The
track record of the past few decades shows that these examples
are not quirks. They are part of a trend that goes back over
a century to the beginning of the germ theory itself. And the
accumulating evidence supporting infectious causation of big
bad diseases of modern society is following the same pattern
that occurred for diseases that have been recently accepted as
caused by infection.
The process of acceptance typically occurs
over one or more decades and accords with Schopenhauer's generalization
about the establishment of truth: it is first ridiculed, then
violently opposed, and finally accepted as being self-evident.
Just a few groups of pathogens seem to be big players: streptococci, Chlamydia,
some bacteria of the oral cavity, hepatitis viruses, and herpes
viruses. If the correlations between these pathogens and the
big diseases of wealthy countries do in fact reflect infectious
causation, effective vaccines against these pathogens could contribute
in a big way to a new golden age of medicine that could rival
the first half of the 20th century.
The transition to this golden
age, however, requires two things: a shift in research effort
to identifying the pathogens that cause the major diseases and
development of effective interventions against them. The first
would be easy to bring about by restructuring the priorities
of NIH — where money goes, so go the researchers. The second
requires mechanisms for putting in place programs that cannot
be trusted to the free market for the same kinds of reasons that
Adam Smith gave for national defense. The goals of the interventions
do not mesh nicely with the profit motive of the free market.
Vaccines, for example, are not very profitable.
Pharmas cannot
make as much money by selling one vaccine per person to prevent
a disease as they can by selling a patented drug like Vioxx, which
will be administered day after day, year after year to treat
symptoms of an illness that is never cured. And though liability
issues are important for such symptomatic treatment, the pharmas
can argue forcefully that drugs with nasty side effects provide
some benefit even to those who suffer most from the side effects
because the drugs are given not to prevent an illness but rather
to people who already have an illness. This sort of defense is
less convincing when the victim is a child who developed permanent
brain damage from a rare complication of a vaccine that was given
to protect them against a chronic illness that they might have
acquired decades later.
Another
part of this vision of a new golden age will be the ability to
distinguish real threats from pseudo-threats. This ability will
allow us to invest in policy and infrastructure that will protect
people against real threats without squandering resources and
destroying livelihoods in efforts to protect against pseudo-threats.
Our present predicament on this front is far from this ideal.
Today experts on infectious diseases and institutions entrusted
to protect and improve human health sound the alarm in response
to each novel threat. The current fears over a devastating pandemic
of bird flu are a case in point. Some of the loudest voices offer
a simplistic argument: failing to prepare for the worst-case
scenarios is irresponsible and dangerous. This criticism has
been recently leveled at me and others who question expert proclamations,
such as those from the World Health Organization and the Centers
for Disease Control.
These proclamations inform us that H5N1
bird flu virus poses an imminent threat of an influenza pandemic
similar to or even worse than the 1918 pandemic. I have decreased
my popularity in such circles by suggesting that the threat of
this scenario is essentially nonexistent. In brief I argue that
the 1918 influenza viruses evolved their unique combination of
high virulence and high transmissibility in the conditions at
the Western Front of World War I.
By transporting contagious
flu patients into a series of tightly packed groups of susceptible
individuals, personnel fostered transmission from people who
were completely immobilized by their illness. Such conditions
must have favored the predator-like variants of the influenza
virus; these variants would have a competitive edge because they
could ruthlessly exploit a person for their own replication and
still get transmitted to large numbers of susceptible individuals.
These conditions have not recurred in human populations since
then and, accordingly, we have never had any outbreaks of influenza
viruses that have been anywhere near as harmful as those that
emerged at the Western Front. So long as we do not allow such
conditions to occur again, we have little to fear from a re-evolution
of such a predatory virus.
The
fear of a 1918 style pandemic has fueled preparations by a government
which, embarrassed by its failure to deal adequately with the
damage from Katrina, seems determined to prepare for any perceived
threat to save face. I would have no problem with the accusation
of irresponsibility if preparations for a 1918 style pandemic
were cost free. But they are not.
The $7 billion that the Bush
administration is planning as a down payment for pandemic preparedness
has to come from somewhere. If money is spent to prepare for
an imaginary pandemic, our progress could be impeded on other
fronts that could lead to or have already established real improvements
in public health.
Conclusions about responsibility or irresponsibility
of this argument require that the threat from pandemic influenza
be assessed relative to the damage that results from the procurement
of the money from other sources. The only reliable evidence of
the damage from pandemic influenza under normal circumstances
is the experience of the two pandemics that have occurred since
1918, one in 1957 and the other in 1968. The mortality caused
by these pandemics was one-tenth to one-hundredth the death toll
from the 1918 pandemic.
We do need to be prepared for an influenza pandemic of the normal
variety, just as we needed to be prepared for category 5 hurricanes
in the Gulf of Mexico. If possible our preparations should allow
us to stop an incipient pandemic before it materializes. In contrast
with many of the most vocal experts I do not conclude that our
surveillance efforts will be quickly overwhelmed by a highly transmissible
descendant of the influenza virus that has generated the most recent
fright (dubbed H5N1). The transition of the H5N1 virus to a pandemic
virus would require evolutionary change.
The dialogue on this matter,
however, continues to neglect the primary mechanism of the evolutionary
change: natural selection. Instead it is claimed that H5N1 could
mutate to become a full-fledged human virus that is both highly
transmissible and highly lethal. Mutation provides only the variation
on which natural selection acts. We must consider natural selection
if we are to make meaningful assessments of the danger posed by
the H5N1 virus.
The evolution of the 1918 virus was gradual, and
both evidence and theory lead to the conclusion that any evolution
of increased transmissibility of H5N1 from human to human will
be gradual, as it was with SARS. With surveillance we can detect
such changes in humans and intervene to stop further spread as
was done with SARS. We do not need to trash the economy of Southeast
Asia each year to accomplish this.
The
dangerous vision of a golden age does not leave the poor countries
behind. As I have discussed in my articles and books, we should
be able to control much of the damage caused by the major killers
in poor countries by infrastructural improvements that not only
reduce the frequency of infection but also cause the infectious
agents to evolve toward benignity.
This integrated approach offers
the possibility to remodel our current efforts against the major
killers — AIDS, malaria, tuberculosis, dysentery and the
like. We should be able to move from just holding ground to instituting
the changes that created the freedom from acute infectious
diseases that has been enjoyed by inhabitants of rich countries
over the past century.
Dangerous
indeed! Excellent solutions are often dangerous to the status
quo because they work. One measure of danger to some but
success to the general population is the extent to which highly
specialized researchers, physicians, and other health care workers
will need to retrain, and the extent to which hospitals and pharmaceutical
companies will need to downsize. That is what happens when we
introduce excellent solutions to health problems. We need not
be any more concerned about these difficulties than the loss
of the iron lung industry and the retraining of polio therapists
and researchers in the wake of the Salk vaccine.
JESSE BERING
Psychologist, University of Arkansas

Science will never silence God
With
each meticulous turn of the screw in science, with each tightening
up of our understanding of the natural world, we pull more taut
the straps over God's
muzzle. From botany to bioengineering, from physics to psychology,
what is science really but true Revelation — and what is
Revelation but the negation of God? It is a humble pursuit we scientists
engage in: racing to reality. Many of us suffer the harsh glare
of the American theocracy, whose heart still beats loud and strong
in this new year of the 21st century. We bravely favor truth, in
all its wondrous, amoral, and 'meaningless' complexity
over the singularly destructive Truth born of the trembling minds
of our ancestors. But my dangerous idea, I fear, is that no matter
how far our thoughts shall vault into the eternal sky of scientific
progress, no matter how dazzling the effects of this progress,
God will always bite through his muzzle and banish us from the
starry night of humanistic ideals.
Science is an endless series
of binding and rebinding his breath; there will never be a day
when God does not speak for the majority. There will never be a
day even when he does not whisper in the most godless of scientists'
ears. This is because God is not an idea, nor a cultural invention,
nor an 'opiate of the masses' or any such thing; God is a
way of thinking that was rendered permanent by natural selection.
As scientists, we must toil and labor and toil again to silence
God, but ultimately this is like cutting off our ears to hear more
clearly. God too is a biological appendage; until we acknowledge
this fact for what it is, until we rear our children with this
knowledge, he will continue to howl his discontent for all of time.
PHILIP CAMPBELL
Editor-in-Chief, Nature

Scientists and governments developing public engagement about science and technology are missing the point
This turns out to be true in cases where there are collapses in consensus
that have serious societal consequences. Whether in relation to climate
change, GM crops or the UK's triple vaccine for measles, mumps and rubella,
alternative science networks develop amongst people who are neither ignorant
nor irrational, but have perceptions about science, the scientific literature
and its implications that differ from those prevailing in the scientific
community. These perceptions and discussions may be half-baked, but are no
less powerful for all that, and carry influence on the internet and in the
media.
Researchers and governments haven't yet learned how to respond to such
"citizen's science". Should they stop explaining and engaging? No. But they
need also to understand better the influences at work within such
networks — often too dismissively stereotyped — at an early stage in the
debate in order to counter bad science and minimize the impacts of falsehoods.
PAUL BLOOM
Psychologist, Yale University; Author, Descartes' Baby

There are no souls
I am
not concerned here with the radical claim that personal identity,
free will, and consciousness do not exist. Regardless of its merit,
this position is so intuitively outlandish that nobody but a philosopher
could take it seriously, and so it is unlikely to have any real-world
implications, dangerous or otherwise.
Instead
I am interested in the milder position that mental life has a purely
material basis. The dangerous idea, then, is that Cartesian dualism
is false. If what you mean by "soul" is something immaterial
and immortal, something that exists independently of the brain,
then souls do not exist. This is old hat for most psychologists
and philosophers, the stuff of introductory lectures. But the rejection
of the immaterial soul is unintuitive, unpopular, and, for some
people, downright repulsive.
In the
journal "First Things", Patrick Lee and Robert P. George
outline some worries from a religious perspective.
"If
science did show that all human acts, including conceptual thought
and free choice, are just brain processes,... it would mean that
the difference between human beings and other animals is only superficial — a
difference of degree rather than a difference in kind; it would
mean that human beings lack any special dignity worthy of special
respect. Thus, it would undermine the norms that forbid killing
and eating human beings as we kill and eat chickens, or enslaving
them and treating them as beasts of burden as we do horses or oxen."
The
conclusions don't follow. Even if there are no souls, humans might
differ from non-human animals in some other way, perhaps with regard
to the capacity for language or abstract reasoning or emotional
suffering. And even if there were no difference, it would hardly
give us license to do terrible things to human beings. Instead,
as Peter Singer and others have argued, it should make us kinder
to non-human animals. If a chimpanzee turned out to possess the
intelligence and emotions of a human child, for instance, most
of us would agree that it would be wrong to eat, kill, or enslave
it.
Still,
Lee and George are right to worry that giving up on the soul means
giving up on an a priori distinction between humans and other creatures,
something which has very real consequences. It would affect as
well how we think about stem-cell research and abortion, euthanasia,
cloning, and cosmetic psychopharmacology. It would have substantial
implications for the legal realm
— a belief in immaterial souls has led otherwise sophisticated
commentators to defend a distinction between actions that we do and
actions that our brains do. We are responsible only for the former,
motivating the excuse that Michael Gazzaniga has called "My
brain made me do it." It has been proposed, for instance, that
if a pedophile's brain shows a certain pattern of activation while
contemplating sex with a child, he should not be viewed as fully
responsible for his actions. When you give up on the soul, and accept
that all actions correspond to brain activity, this sort of reasoning
goes out the window.
The
rejection of souls is more dangerous than the idea that kept us
so occupied in 2005 — evolution by natural selection. The
battle between evolution and creationism is important for many
reasons; it is
where science takes a stand against superstition. But, like the origin
of the universe, the origin of the species is an issue of great intellectual
importance and little practical relevance. If everyone were to become
a sophisticated Darwinian, our everyday lives would change very little.
In contrast, the widespread rejection of the soul would have profound
moral and legal consequences. It would also require people to rethink
what happens when they die, and give up the idea (held by about 90%
of Americans) that their souls will survive the death of their bodies
and ascend to heaven. It is hard to get more dangerous than that.
DAVID BUSS
Psychologist, University of Texas, Austin; Author, The Murderer Next Door: Why the Mind is Designed to Kill
The Evolution of Evil
When
most people think of torturers, stalkers, robbers, rapists,
and murderers, they imagine crazed drooling monsters with maniacal
Charles Manson-like eyes. The calm, normal-looking image staring
back at you from the bathroom mirror reflects a truer representation.
The dangerous idea is that all of us contain within our large
brains adaptations whose functions are to commit despicable
atrocities against our fellow humans — atrocities most
would label evil.
The
unfortunate fact is that killing has proved to be an effective
solution to an array of adaptive problems in the ruthless evolutionary
games of survival and reproductive competition: Preventing
injury, rape, or death; protecting one's children; eliminating
a crucial antagonist; acquiring a rival's resources; securing
sexual access to a competitor's mate; preventing an interloper
from appropriating one's own mate; and protecting vital resources
needed for reproduction.
The idea that evil has evolved is dangerous on several counts.
If our brains contain psychological circuits that can trigger
murder, genocide, and other forms of malevolence, then perhaps
we can't hold those who commit carnage responsible: "It's
not my client's fault, your honor, his evolved homicide adaptations
made him do it." Understanding causality, however, does
not exonerate murderers, whether the tributaries trace back
to human evolutionary history or to modern exposure to alcoholic
mothers, violent fathers, or the ills of bullying, poverty,
drugs, or computer games. It would be dangerous if the theory
of the evolved murderous mind were misused to let killers
free.
The evolution of evil is dangerous for a more disconcerting
reason. We like to believe that evil can be objectively located
in a particular set of evil deeds, or within the subset of people
who perpetrate horrors on others, regardless of the perspective
of the perpetrator or victim. That is not the case. The perspectives
of the perpetrator and the victim differ profoundly. Many view
killing a member of one's in-group, for example, to be evil,
but take a different view of killing those in the out-group.
Some people point to the biblical commandment "thou
shalt not kill" as an absolute. Closer biblical inspection
reveals that this injunction applied only to murder within
one's group.
Conflict with terrorists provides a modern example. Osama
bin Laden declared: "The ruling to kill the Americans
and their allies — civilians and military — is
an individual duty for every Muslim who can do it in any
country in which it is possible to do it." What is evil
from the perspective of an American who is a potential victim
is an act of responsibility and higher moral good from the
terrorist's perspective. Similarly, when President Bush identified
an "axis of evil," he rendered it moral for Americans
to kill those falling under that axis — a judgment
undoubtedly considered evil by those whose lives have become
imperiled.
At a rough approximation, we view as evil people who inflict
massive evolutionary fitness costs on us, our families, or
our allies. No one summarized these fitness costs better
than the feared conqueror Genghis Khan (1167-1227): "The
greatest pleasure is to vanquish your enemies, to chase them
before you, to rob them of their wealth, to see their near
and dear bathed in tears, to ride their horses and sleep
on the bellies of their wives and daughters."
We can be sure that the families of the victims of Genghis
Khan saw him as evil. We can be just as sure that his many
sons, whose harems he filled with women of the conquered
groups, saw him as a venerated benefactor. In modern times,
we react with horror at Mr. Khan describing the deep psychological
satisfaction he gained from inflicting fitness costs on victims
while purloining fitness fruits for himself. But it is sobering
to realize that perhaps half a percent of the world's population
today are descendants of Genghis Khan.
On reflection, the dangerous idea may not be that murder
historically has been advantageous to the reproductive success
of killers; nor that we all house homicidal circuits within
our brains; nor even that all of us are lineal descendants
of ancestors who murdered. The danger comes from people who
refuse to recognize that there are dark sides of human nature
that cannot be wished away by attributing them to the modern
ills of culture, poverty, pathology, or exposure to media
violence. The danger comes from failing to gaze into the
mirror and come to grips with the capacity for evil in all of
us.
V.S. RAMACHANDRAN
Neuroscientist; Director, Center for Brain and Cognition, University of California, San Diego; Author, A Brief Tour of Human Consciousness

Francis Crick's "Dangerous" Idea
I
am a brain, my dear Watson, and the rest of me is a
mere appendage.
— Sherlock Holmes
An
idea that would be "dangerous if true" is
what Francis Crick referred to as "the astonishing
hypothesis"; the notion that our conscious experience
and sense of self is based entirely on the activity of
a hundred billion bits of jelly — the neurons that constitute
the brain. We take this for granted in these enlightened
times but even so it never ceases to amaze me.
Some scholars have criticized Crick's tongue-in-cheek phrase
(and title of his book) on the grounds that the hypothesis
he refers to is "neither astonishing nor a hypothesis"
(since we already know it to be true). Yet the far-reaching
philosophical, moral and ethical dilemmas posed by his hypothesis
have not been recognized widely enough. It is in many ways the
ultimate dangerous idea. Let's put this in historical perspective.
Freud
once pointed out that the history of ideas in the last
few centuries has been punctuated by "revolutions": major
upheavals of thought that have forever altered our view
of ourselves and our place in the cosmos.
First
there was the Copernican system dethroning the earth
as the center of the cosmos.
Second
was the Darwinian revolution; the idea that far from
being the climax of "intelligent design" we
are merely neotenous apes that happen to be slightly
cleverer than our cousins.
Third,
the Freudian view that even though you claim to be "in
charge" of your life, your behavior
is in fact governed by a cauldron of drives and motives
of which you are largely unconscious.
And
fourth, the discovery of DNA and the genetic code with
its implication (to quote James Watson) that "There are only molecules.
Everything else is sociology".
To
this list we can now add the fifth, the "neuroscience
revolution" and
its corollary pointed out by Crick — the "astonishing
hypothesis" — that even our loftiest thoughts and
aspirations are mere byproducts of neural activity. We
are nothing but a pack of neurons.
If all this seems dehumanizing, you haven't seen anything
yet.
[Editor's
Note: A lengthy essay by Ramachandran on this subject
is scheduled for publication by Edge in January.]
LEO CHALUPA
Ophthalmologist and Neurobiologist, University of California, Davis

A 24-hour period of absolute solitude
Our
brains are constantly subjected to the demands of multi-tasking
and a seemingly endless cacophony of information from diverse
sources. Cell phones, emails, computers, and cable television
are omnipresent, not to mention such archaic venues as books,
newspapers and magazines.
This
induces an unrelenting barrage of neuronal activity that in turn
produces long-lasting structural modification in virtually all
compartments of the nervous system. A fledgling industry touts
the virtues of exercising your brain for self-improvement. Programs
are offered for how to make virtually any region of your neocortex
a more efficient processor. Parents are urged to begin such regimens
in preschool children and adults are told to take advantage of
their brain's plastic properties for professional advancement.
The evidence documenting the veracity of such claims is still
outstanding, but one thing is clear. Even if brain exercise does
work, the subsequent waves of neuronal activities stemming from
simply living a modern lifestyle are likely to eradicate the
presumed hard-earned benefits of brain exercise.
My
dangerous idea is that what's needed to attain optimal brain
performance — with or without prior brain exercise —
is a 24-hour period of absolute solitude. By absolute solitude
I mean no verbal interactions of any kind (written or spoken, live
or recorded) with another human being. I would venture that a significantly
higher proportion of people reading these words have tried skydiving
than experienced one day of absolute solitude.
What
to do to fill the waking hours? That's a question that each person
would need to answer for him/herself. Unless you've spent time
in a monastery or in solitary confinement it's unlikely that
you've had to deal with this issue. The only activity not proscribed
is thinking. Imagine if everyone in this country had the opportunity
to do nothing but engage in uninterrupted thought for one full
day a year!
A national day of absolute solitude would do more to improve the
brains of all Americans than any other one-day program. (I leave
it to the lawmakers to figure out a plan for implementing this
proposal.) The danger stems from the fact that a 24-hour period for uninterrupted
thinking could cause irrevocable upheavals in much of what our
society currently holds sacred. But whether that would improve our
present state of affairs cannot be guaranteed.
J. CRAIG VENTER
Genomics Researcher; Founder & President, J. Craig Venter Science Foundation

Revealing the genetic basis of personality and behavior will create societal conflicts
From our initial analysis of the sequence of the human genome,
particularly with the much smaller than expected number of human
genes, the genetic determinists seemed to have clearly suffered
a setback. After all, those looking for one gene for each human
trait and disease couldn't possibly be accommodated with as few
as twenty-odd thousand genes when hundreds of thousands were
anticipated. Deciphering the genetic basis of human behavior
has been a complex and largely unsatisfying endeavor due to the
limitations of the existing tools of genetic trait analysis, particularly
with complex traits involving multiple genes.
All
this will soon undergo a revolutionary transformation. The rate
of change of DNA sequencing technology is continuing at an exponential
pace. We are approaching the time when we will go from having
a few human genome sequences to complex databases containing
first tens, to hundreds of thousands, of complete genomes, then
millions. Within a decade we will begin rapidly accumulating
the complete genetic code of humans along with the phenotypic
repertoire of the same individuals. By performing multifactorial
analysis of the DNA sequence variations, together with the comprehensive
phenotypic information gleaned from every branch of human investigatory
discipline, for the first time in history, we will be able to
provide answers to quantitative questions of what is genetic
versus what is due to the environment. This is already taking
place in cancer research where we can measure the differences
in genetic mutations inherited from our parents versus those
acquired over our lives from environmental damage. This good
news will help transform the treatment of cancer by allowing
us to know which proteins need to be targeted.
However,
when these new powerful computers and databases are used to help
us analyze who we are as humans, will society at large, largely
ignorant and afraid of science, be ready for the answers we are
likely to get?
For
example, we know from experiments on fruit flies that there are
genes that control many behaviors, including sexual activity.
We sequenced the dog genome a couple of years ago and now an
additional breed has had its genome decoded. The canine world
offers a unique look into the genetic basis of behavior. The
large number of distinct dog breeds originated from the wolf
genome by selective breeding, yet each breed retains only subsets
of the wolf behavior spectrum. We know that there is a genetic
basis not only for the appearance of the breeds, with a 30-fold difference
in weight and a 6-fold difference in height, but also for their inherited actions.
For example border collies can use the power of their stare to
herd sheep instead of freezing them in place prior to devouring
them.
We
attribute behaviors in other mammalian species to genes and genetics
but when it comes to humans we seem to like the notion that we
are all created equal, or that each child is a "blank slate".
As we obtain the sequences of more and more mammalian genomes
including more human sequences, together with basic observations
and some common sense, we will be forced to turn away from the
politically correct interpretations, as our new genomic tool
sets provide the means to allow us to begin to sort out the reality
about nature or nurture. In other words, we are at the threshold
of a realistic biology of humankind.
It
will inevitably be revealed that there are strong genetic components
associated with most aspects of what we attribute to human existence
including personality subtypes, language capabilities, mechanical
abilities, intelligence, sexual activities and preferences, intuitive
thinking, quality of memory, will power, temperament, athletic
abilities, etc. We will find unique manifestations of human activity
linked to genetics associated with isolated and/or inbred populations.
The danger rests with what we already know: that we are not
all created equal. Further danger comes with our ability to
quantify and measure the genetic side of the equation before
we can fully understand the much more difficult task of evaluating
environmental components of human existence. The genetic determinists
will appear to be winning again, but we cannot let them forget
the range of potential of human achievement with our limiting
genetic repertoire.
MARTIN REES
President, The Royal Society; Professor of Cosmology & Astrophysics, Master, Trinity College, University of Cambridge; Author, Our Final Century: The 50/50 Threat to Humanity's Survival

Science may be 'running out of control'
Public
opinion surveys (at least in the UK) reveal a generally positive
attitude to science. However, this is coupled with widespread
worry that science may be 'running out of control'. This latter
idea is, I think, a dangerous one, because if widely believed
it could be self-fulfilling.
In
the 21st century, technology will change the world faster than
ever — the global environment, our lifestyles, even human
nature itself. We are far more empowered by science than any
previous generation was: it offers immense potential — especially
for the developing world — but there could be catastrophic
downsides. We are living in the first century when the greatest
risks come from human actions rather than from nature.
Almost
any scientific discovery has a potential for evil as well as
for good; its applications can be channelled either way, depending
on our personal and political choices; we can't accept the benefits
without also confronting the risks. The decisions that we make,
individually and collectively, will determine whether the outcomes
of 21st century sciences are benign or devastating. But there's
a real danger that, rather than campaigning energetically
for optimum policies, we'll be lulled into inaction by a feeling
of fatalism — a belief that science is advancing so fast,
and is so much influenced by commercial and political pressures,
that nothing we can do makes any difference.
The
present share-out of resources and effort between different sciences
is the outcome of a complicated 'tension' between many extraneous
factors. And the balance is suboptimal. This seems so whether
we judge in purely intellectual terms, or take account of likely
benefit to human welfare. Some subjects have had the 'inside
track' and gained disproportionate resources. Others, such as
environmental research, renewable energy sources, biodiversity
studies and so forth, deserve more effort. Within medical research
the focus is disproportionately on cancer and cardiovascular
studies, the ailments that loom largest in prosperous countries,
rather than on the infectious diseases endemic in the tropics.
Choices
on how science is applied — to medicine, the environment,
and so forth — should be the outcome of debate extending
way beyond the scientific community. Far more research and development
can be done than we actually want or can afford to do; and there
are many applications of science that we should consciously eschew.
Even
if all the world's scientific academies agreed that a specific
type of research had a specially disquieting net 'downside' and
all countries, in unison, imposed a ban, what is the chance that
it could be enforced effectively enough? In view of the failure
to control drug smuggling or homicides, it is unrealistic to
expect that, when the genie is out of the bottle, we can ever
be fully secure against the misuse of science. And in our ever
more interconnected world, commercial pressures are harder to
control and regulate. The challenges and difficulties of 'controlling'
science in this century will indeed be daunting.
Cynics
would go further, and say that anything that is scientifically
and technically possible will be done — somewhere, sometime — despite
ethical and prudential objections, and whatever the regulatory
regime. Whether this idea is true or false, it's an exceedingly
dangerous one, because it engenders despairing pessimism, and
demotivates efforts to secure a safer and fairer world. The future
will best be safeguarded — and science has the best chance
of being applied optimally — through the efforts of people
who are less fatalistic.