"WHAT HAVE YOU CHANGED YOUR MIND ABOUT?" |
|
MARTIN SELIGMAN
Psychologist, University of Pennsylvania; Author, Authentic Happiness

We Are Alone
If
my math had been better, I would have become an astronomer
rather than a psychologist. I was after the very greatest
questions and finding life elsewhere in the universe seemed
the greatest of them all. Understanding thinking, emotion,
and mental health was second best — science for weaker
minds like mine.
Carl Sagan and I were close colleagues in the late 1960s
when we both taught at Cornell. I devoured his thrilling
book with I.I. Shklovskii (Intelligent Life in the Universe,
1966) in one twenty-four hour sitting, and I came away
convinced that intelligent life was commonplace across our
galaxy.
The
book, as most readers know, estimates a handful of parameters
necessary to intelligent life, such as the probability that
an advanced technical civilization will in short order destroy
itself and the number of "sol-like" stars in
the galaxy. Their conclusion is that there are between 10,000
and two million advanced technical civilizations hereabouts.
Some of my happiest memories are of discussing all this with
Carl, our colleagues, and our students into the wee hours
of many a chill Ithaca night.
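
As a gloss on the arithmetic behind such an estimate (the product below is the familiar Drake-style chain of factors; the particular numbers are my own illustrative assumptions, not the values Shklovskii and Sagan used), the count of communicating civilizations is typically written as

\[
N = R_{*} \, f_{p} \, n_{e} \, f_{l} \, f_{i} \, f_{c} \, L
\]

where R_* is the rate of star formation, f_p the fraction of stars with planets, n_e the number of habitable planets per planetary system, f_l, f_i, and f_c the fractions of those on which life, intelligence, and communicating technology arise, and L the lifetime of an advanced technical civilization. Taking, say, R_* = 10 stars per year, f_p = 0.5, n_e = 2, f_l = 0.3, f_i = f_c = 0.1, and L = 10,000 years gives N of roughly 300; letting L range from a thousand to a hundred million years swings N across five orders of magnitude, which is why published estimates span such a wide range.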

And this made the universe a less chilly place as well. What
consolation! That Homo sapiens might really partake of something
larger, that there really might be numerous civilizations
out there populated by more intelligent beings than we are,
wiser because they had outlived the dangers of premature
self-destruction. What's more, we might contact them
and learn from them.
A
fledgling program of listening for intelligent radio signals
from out there was starting up. Homo sapiens was just taking
its first balky steps off the planet; we exuberantly watched
the moon landing together at the faculty club. We worked
on the question of how we would respond if humans actually
heard an intelligent signal. What would our first "words" be?
We worked on what would be inscribed on the almost immortal
Voyager plaque that would leave our solar system just about
now — allowing the sentient beings who cadged it epochs
hence to surmise who we were, where we were, when we were,
and what we were (Should the man and woman be holding
hands? No, they might think we were one conjoined organism.)
SETI
(the Search for Extraterrestrial Intelligence) and its forerunners
are almost forty years old. They scan the heavens for intelligent
radio signals, with three million participants using their
home computers to analyze the input. The result has been
zilch. There are plenty of excuses for zilch, however, and
lots of reason to hope: only a small fraction of the sky
has been scanned, and larger, more efficient arrays are coming
online. Maybe really advanced civilizations don't
use communication techniques that produce waves we can pick
up.
Maybe
intelligent life is so unimaginably different from us that
we are looking in all the wrong "places." Maybe
really intelligent life forms hide their presence.
So
I changed my mind. I now take the null hypothesis very seriously:
that Sagan and Shklovskii were wrong: that the number of
advanced technical civilizations in our galaxy is exactly
one, that the number of advanced technical civilizations
in the universe is exactly one.
What
is the implication of the possibility, mounting a bit every
day, that we are alone in the universe? It reverses the millennial
progression from a geocentric to a heliocentric to a Milky
Way centered universe, back to, of all things, a geocentric
universe. We are the solitary point of light in a darkness
without end. It means that we are precious, infinitely so.
It means that nuclear or environmental cataclysm is an infinitely
worse fate than we thought.
It
means that we have a job to do, a mission that will last
all our ages to come: to seed and then to shepherd intelligent
life beyond this pale blue dot.
JOSEPH LEDOUX
Neuroscientist, New York University; Author, The Synaptic Self

Like
many scientists in the field of memory, I used to think that
a memory is something stored in the brain and then accessed
when used. Then, in 2000, a researcher in my lab, Karim Nader,
did an experiment that convinced me, and many others, that
our usual way of thinking was wrong. In a nutshell, what
Karim showed was that each time a memory is used, it has
to be restored as a new memory in order to be accessible
later. The old memory is either not there or is inaccessible.
In short, your memory about something is only as good as
your last memory about it. This is why people who witness
crimes testify about what they read in the paper rather than
what they witnessed. Research on this topic, called reconsolidation,
has become the basis of a possible treatment for post-traumatic
stress disorder, drug addiction, and any other disorder that
is based on learning.
That
Karim's study changed my mind is clear from the fact that
I told him, when he proposed to do the study, that it was
a waste of time. I'm not swayed by arguments based on faith;
I can be moved by good logic; but I am always swayed by a good
experiment, even if it goes against my scientific beliefs.
I might not give up on a scientific belief after one experiment,
but when the evidence mounts over multiple studies, I change
my mind.
KARL SABBAGH
Writer and Television Producer; Author, The Riemann Hypothesis

I
used to believe that there were experts and non-experts
and that, on the whole, the judgment of experts is more
accurate, more valid, and more correct than my own judgment.
But over the years, thinking — and I should add,
experience — has changed my mind. What experts have
that I don't are knowledge and experience in some
specialized area. What, as a class, they don't have
any more than I do are the skills of judgment, rational
thinking, and wisdom. And I've come to believe that
some highly 'qualified' people have less of
that than I do.
I
now believe that the people I know who are wise are not
necessarily knowledgeable; the people I know who are knowledgeable
are not necessarily wise. Most of us confuse expertise
with judgment. Even in politics, where the only qualities
politicians have that the rest of us lack are knowledge
of the procedures of parliament or congress, and of how
government works, occasionally combined with specific knowledge
of economics or foreign affairs, we tend to look to such
people for wisdom and decision-making of a high order.
Many
people enroll in MBAs to become more successful businessmen.
An article in Fortune magazine a couple of years ago compared
the academic qualifications of people in business and found
the qualification that correlated most highly with success
was a philosophy degree. When I ran a television production
company and was approached for a job by budding directors
or producers, I never employed anyone with a degree in
media studies. But I did employ lots of intelligent people
with good judgment who knew nothing about television to
start with but could make good decisions. The results justified
that approach.
Scientists — with
a few eccentric exceptions — are, perhaps, the one
group of experts who have never claimed for themselves
wisdom outside the narrow confines of their specialties.
Paradoxically, they are the one group who are blamed for
the mistakes of others. Science and scientists are criticized
for judgments about weapons, stem cells, global warming,
nuclear power, when the decisions are made by people who
are not scientists.
As
a result of changing my mind about this, I now view the
judgments of others, however distinguished or expert they
are, as no more valid than my own. If someone who is a 'specialist' in
the field disagrees with me about a book idea, the solution
to the Middle East problems, the non-existence of the paranormal
or nuclear power, I am now entirely comfortable with the
disagreement because I know I'm just as likely to
be right as they are.
DOUGLAS RUSHKOFF
Media Analyst; Documentary Writer; Author, Get Back in the Box: Innovation from the Inside Out

The Internet
I
thought that it would change people. I thought it would allow
us to build a new world through which we could model new
behaviors, values, and relationships. In the '90s, I thought
the experience of going online for the first time would change
a person's consciousness as much as if they had dropped acid
in the '60s.
I
thought Amazon.com was a ridiculous idea, and that the Internet
would shrug off business as easily as it did its original
Defense Department minders.
For
now, at least, it's turned out to be different.
Virtual
worlds like Second Life have been reduced to market opportunities:
advertisers from banks to soft drinks purchase space and
create fake characters, while kids (and Chinese digital sweatshop
laborers) earn "play money" in the game only to
sell it to lazier players on eBay for real cash.
The
businesspeople running Facebook and MySpace are rivaled only
by the members of these online "communities" in
their willingness to surrender their identities and ideals
for a buck, a click-through, or a better market valuation.
The
open source ethos has been reinterpreted through the lens
of corporatism as "crowd sourcing" — meaning just
another way to get people to do work for no compensation.
And even "file-sharing" has been reduced to a frenzy
of acquisition that has less to do with music than it does with
the ever-expanding hard drives of successive iPods.
Sadly,
cyberspace has become just another place to do business.
The question is no longer how browsing the Internet changes
the way we look at the world; it's which browser we'll be
using to buy and sell stuff in the same old world.
PIET HUT
Professor of Astrophysics, Institute for Advanced Study, Princeton

Explanations
I
used to pride myself on the fact that I could explain almost
anything to anyone, on a simple enough level, using analogies.
No matter how abstract an idea in physics may be, there
always seems to be some way in which we can get at least
some part of the idea across. If colleagues shrugged and
said, oh, well, that idea is too complicated or too abstract
to be explained in simple terms, I thought they were either
lazy or not very skilled in thinking creatively around
a problem. I could not imagine a form of knowledge that
could not be communicated in some limited but valid approximation
or other.
However,
I've changed my mind, in what was for me a rather unexpected
way.
I
still think I was right in thinking that any type of insight
can be summarized to some degree, in what is clearly a
correct first approximation when judged by someone who
shares in the insight. For a long time my mistake was that
I had not realized how totally wrong this first approximation
can come across for someone who does not share the original
insight.
Quantum
mechanics offers a striking example. When someone hears
that there is a limit on how accurately you can simultaneously
measure various properties of an object, it is tempting
to think that the limitations lie in the measuring procedure,
and that the object itself somehow can be held to have
exact values for each of those properties, even if they
cannot be measured. Surprisingly, that interpretation
is wrong: John Bell showed that such a 'hidden variables'
picture is actually in clear disagreement with quantum
mechanics. An initial attempt at explaining the measurement
problem in quantum mechanics can be more misleading than
not saying anything at all.
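
To make Bell's point concrete, here is the standard CHSH form of the argument (a textbook statement rather than Bell's original formulation): for two particles measured along settings a, a' on one side and b, b' on the other, any local hidden-variables account of the correlation function E must satisfy

\[
\lvert E(a,b) - E(a,b') + E(a',b) + E(a',b') \rvert \le 2,
\]

whereas quantum mechanics predicts, and experiments confirm, correlations reaching 2\sqrt{2} for suitably entangled states. The picture of pre-existing exact values is not merely unverifiable; it is quantitatively ruled out.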
So
for each insight there is at least some explanation possible,
but the same explanation may then be given for radically
different insights. There is nothing that cannot be explained,
but there are wrong insights that can lead to explanations
that are identical to the explanation for a correct but
rather subtle insight.
HOWARD GARDNER
Psychologist, Harvard University; Author, Changing Minds

Wrestling with Jean Piaget, my Paragon
Like
many other college students, I turned to the study of psychology
for personal reasons. I wanted to understand myself better.
And so I read the works of Freud; and I was privileged
to have as my undergraduate tutor the psychoanalyst Erik
Erikson, himself a sometime pupil of Freud. But once I
learned about new trends in psychology, through contacts
with another mentor, Jerome Bruner, I turned my attention
to the operation of the mind in a cognitive sense — and
I've remained at that post ever since.
The
giant at the time — the middle 1960s — was Jean
Piaget. Though I met and interviewed him a few times, Piaget
really functioned for me as a paragon. To use Dean Keith
Simonton's term, a paragon is someone whom one does not know
personally but who serves as a virtual teacher and point
of reference. I thought that Piaget had identified the most
important question in cognitive psychology — how does
the mind develop; developed brilliant methods of observation
and experimentation; and put forth a convincing picture of
development — a set of general cognitive operations that
unfold in the course of essentially lockstep, universally
occurring stages. I wrote my first books about Piaget; saw
myself as carrying on the Piagetian tradition in my own studies
of artistic and symbolic development (two areas that he had
not focused on); and even defended Piaget vigorously in print
against those who would critique his approach and claims.
Yet, now forty years later, I have come to realize that
the bulk of my scholarly career has been a critique of the
principal claims that Piaget put forth. As to the specifics
of how I changed my mind:
Piaget believed in general stages of development that cut
across contents (space, time, number); I now believe that
each area of content has its own rules and operations and
I am dubious about the existence of general stages and structures.
Piaget believed that intelligence was a single general capacity
that developed pretty much in the same way across individuals:
I now believe that humans possess a number of relatively independent
intelligences, and these can function and interact in idiosyncratic
ways.
Piaget was not interested in individual differences; he
studied the 'epistemic subject.' Most of my work
has focused on individual differences, with particular attention
to those with special talents or deficits, and unusual profiles
of abilities and disabilities.
Piaget
assumed that the newborn had a few basic biological capacities — like sucking and looking — and
two major processes of acquiring knowledge, that he called
assimilation and accommodation. Nowadays, with many others,
I assume that human beings possess considerable innate
or easily elicited cognitive capacities, and that Piaget
way underestimated the power of this inborn cognitive architecture.
Piaget
downplayed the importance of historical and cultural factors — cognitive
development consisted of the growing child experimenting
largely on his own with the physical (and, minimally, the
social) world. I see development as permeated from the
first by contingent forces pervading the time and place
of origin.
Finally,
Piaget saw language and other symbol systems (graphic,
musical, bodily, etc.) as manifestations, almost epiphenomena,
of a single cognitive motor; I see each of these systems
as having its own origins and being heavily colored by the
particular uses to which a system is put in one's
own culture and one's own time.
Why I changed my mind is an issue principally of biography:
some of the change has to do with my own choices (I worked
for 20 years with brain-damaged patients); and some with
the Zeitgeist (I was strongly influenced by the ideas of
Noam Chomsky and Jerry Fodor, on the one hand, and by empirical
discoveries in psychology and biology on the other).
Still,
I consider Piaget to be the giant of the field. He raised
the right questions; he developed exquisite methods; and
his observations of phenomena have turned out to be robust.
It's a tribute to Piaget that we continue to ponder
these questions, even as many of us are now far more critical
than we once were. Any serious scientist or scholar will
change his or her mind; put differently, we will come to
agree with those with whom we used to disagree, and vice
versa. We differ in whether we are open or secretive about
such "changes of mind": and in whether we choose
to attack, ignore, or continue to celebrate those with whose
views we are no longer in agreement.
DONALD HOFFMAN
Cognitive Scientist, UC Irvine; Author, Visual Intelligence

Veridical Perception
I have changed my mind about the nature of perception. I thought
that a goal of perception is to estimate properties of an objective
physical world, and that perception is useful precisely to the
extent that its estimates are veridical. After all, incorrect
perceptions beget incorrect actions, and incorrect actions beget
fewer offspring than correct actions. Hence, on evolutionary
grounds, veridical perceptions should proliferate.
Although
the image at the eye, for instance, contains insufficient information
by itself to recover the true state of the world, natural selection
has built into the visual system the correct prior assumptions
about the world, and about how it projects onto our retinas,
so that our visual estimates are, in general, veridical. And
we can verify that this is the case, by deducing those prior
assumptions from psychological experiments, and comparing them
with the world. Vision scientists are now succeeding in this
enterprise. But we need not wait for their final report to conclude
with confidence that perception is veridical. All we need is
the obvious rhetorical question: Of what possible use is non-veridical
perception?
I now think that perception is useful because it is not veridical.
The argument that evolution favors veridical perceptions is wrong,
both theoretically and empirically. It is wrong in theory, because
natural selection hinges on reproductive fitness, not on truth,
and the two are not the same: Reproductive fitness in a particular
niche might, for instance, be enhanced by reducing expenditures
of time and energy in perception; true perceptions, in consequence,
might be less fit than niche-specific shortcuts. It is wrong
empirically: mimicry, camouflage, mating errors and supernormal
stimuli are ubiquitous in nature, and all are predicated on non-veridical
perceptions. The cockroach, we suspect, sees little of the truth,
but is quite fit, though easily fooled, with its niche-specific
perceptual hacks. Moreover, computational simulations based on
evolutionary game theory, in which virtual animals that perceive
the truth compete with others that sacrifice truth for speed
and energy-efficiency, find that true perception generally goes
extinct.
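
To give a flavor of what such simulations look like, here is a minimal toy sketch in Python. It is not Hoffman's actual evolutionary game theory model; the tent-shaped payoff function, the foraging rule, and every parameter value are assumptions chosen only to illustrate the tradeoff he describes.

```python
import random

# Toy model (illustrative only): two perceptual strategies compete for
# resources whose fitness payoff is a non-monotonic function of the true
# resource quantity. "Truth" agents rank resources by true quantity;
# "interface" agents rank them by payoff alone.

def payoff(quantity):
    # Fitness peaks at an intermediate quantity; too little or too much pays nothing.
    return max(0.0, 1.0 - 4.0 * abs(quantity - 0.5))

def forage(strategy, rng):
    # An agent is offered two random resources and keeps the one it prefers.
    a, b = rng.random(), rng.random()
    if strategy == "truth":
        chosen = max(a, b)               # prefers "more", i.e. the larger true quantity
    else:
        chosen = max(a, b, key=payoff)   # prefers whatever pays more
    return payoff(chosen)

def simulate(generations=200, trials=30, seed=1):
    rng = random.Random(seed)
    share_truth = 0.5  # initial fraction of truth-perceivers in the population
    for _ in range(generations):
        # Average payoff earned by each strategy over a number of foraging bouts.
        w_truth = sum(forage("truth", rng) for _ in range(trials)) / trials
        w_iface = sum(forage("interface", rng) for _ in range(trials)) / trials
        # Discrete replicator update: a strategy's share grows with its relative payoff.
        mean = share_truth * w_truth + (1.0 - share_truth) * w_iface
        if mean > 0:
            share_truth = share_truth * w_truth / mean
    return share_truth

if __name__ == "__main__":
    print(f"final share of truth-perceivers: {simulate():.4f}")
```

Under these assumptions the share of truth-perceivers collapses toward zero within a few dozen generations: the agents that track only fitness make better choices on average, so the replicator dynamics drive the literal perceivers toward extinction.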
It
used to be hard to imagine how perceptions could possibly be
useful if they were not true. Now, thanks to technology, we
have a metaphor that makes it clear — the windows interface of
the personal computer. This interface sports colorful geometric
icons on a two-dimensional screen. The colors, shapes and positions
of the icons on the screen are not true depictions of what they
represent inside the computer. And that is why the interface
is useful. It hides the complexity of the diodes, resistors,
voltages and magnetic fields inside the computer. It allows us
to effectively interact with the truth because it hides the truth.
It
has not been easy for me to change my mind about the nature
of perception. The culprit, I think, is natural selection.
I have been shaped by it to take my perceptions seriously.
After all, those of our predecessors who did not, for instance,
take their tiger or viper or cliff perceptions seriously had
less chance of becoming our ancestors. It is apparently a small
step, though not a logical one, from taking perception seriously to taking it literally.
Unfortunately our ancestors faced no
selective pressures that would prevent them from conflating
the serious with the literal: One who takes the cliff both
seriously and literally avoids harm just as much as one who
takes the cliff seriously but not literally. Hence our collective
history of believing in flat earth, geocentric cosmology, and
veridical perception. I should very much like to join Samuel
Johnson in rejecting the claim that perception is not veridical,
by kicking a stone and exclaiming "I refute it thus." But even
as my foot ached from the ill-advised kick, I would still harbor
the skeptical thought, "Yes, you should have taken that
rock more seriously, but should you take it literally?"
MICHAEL SHERMER
Publisher of Skeptic magazine; monthly columnist for Scientific American; Author, Why Darwin Matters

The Nature of Human Nature
When
I was a graduate student in experimental psychology I cut
my teeth in a Skinnerian behavioral laboratory. As a behaviorist
I believed that human nature was largely a blank slate on
which we could impose positive and negative reinforcements
(and punishments if necessary) to shape people and society
into almost anything we want. As a young college professor
I taught psychology from this perspective and even created
a new course on the history and psychology of war, in which
I argued that people are by nature peaceful and nonviolent,
and that wars were thus a byproduct of corrupt governments
and misguided societies.
The
data from evolutionary psychology have now convinced me that
we evolved a dual set of moral sentiments: within groups
we tend to be pro-social and cooperative, but between groups
we are tribal and xenophobic. Archaeological evidence indicates
that Paleolithic humans were anything but noble savages,
and that civilization has gradually but ineluctably reduced
the amount of within-group aggression and between-group violence.
And behavior genetics has erased the tabula rasa and
replaced it with a highly constrained biological template
upon which the environment can act.
I
have thus changed my mind about this theory of human nature
in its extreme form. Human nature is more evolutionarily
determined, more cognitively irrational, and more morally
complex than I thought.
JAMES O'DONNELL
Classicist; Cultural Historian; Provost, Georgetown University; Author, Augustine: A New Biography

I stopped cheering for the Romans
Sometimes
the later Roman empire seems very long ago and far away,
but at other times, when we explore Edward Gibbon's famous
claim to have described the triumph of "barbarism and
religion", it can seem as fresh as next week. And
we always know that we're supposed to root for the Romans. When
I began my career as a historian thirty years ago, I was all
in favor of those who were fighting to preserve the old order. "I'd
rather be Belisarius than Stilicho," I said to my classes
often enough that they heard it as a mantra of my attitude — preferring
the empire-restoring Roman general of the sixth century to
the barbarian general who served Rome and sought compromise
and adjustment with neighbors in the fourth.
But
a career as a historian means growth, development, and change. I
did what the historian — as much a scientist as any
biochemist, as the German use of the word Wissenschaft for
what both practice suggests — should do: I studied the
primary evidence, I listened to and participated in the debates
of the scholars. I had moments when a new book blew
me away, and others when I read the incisive critique of
the book that had blown me away and thought through the issues
again. I've been back and forth over a range of about
four centuries of late Roman history many times now, looking
at events, people, ideas, and evidence in different lights
and moods.
What I have found is that the closer historical examination
comes to the lived moment of the past, the harder it is to
take sides with anybody. And it is a real fact that the
ancient past (I'm talking now about the period from 300 to 700
CE) draws closer and closer to us all the time. There
is a surprisingly large body of material that survives and
really only a handful of hardy scholars sorting through it. Much
remains to be done: The sophist Libanius of Antioch in
the late fourth century, a partisan of the renegade 'pagan'
emperor Julian, left behind a ton of personal letters and essays
that few have read, of which only a handful have been translated, and
so only a few scholars have really worked through his career
and thought — but I'd love to read, and even more dearly
love to write, a good book about him someday. In addition
to the books, there is a growing body of archaeological evidence
as diggers fan out across the Mediterranean, Near East, and
Europe, and we are beginning to see new kinds of quantitative
evidence as well — climate change measured from tree-ring
dating, even genetic analysis that suggests that my O'Donnell
ancestors came from one of the most seriously inbred populations
(Ireland) on the planet — and right now the argument
is going on about the genetic evidence for the size of the
Anglo-Saxon migrations to Britain. We know more than
we ever did, and we are learning more all the time, and with
each decade, we get closer and closer to even the remote past.
When
you do that, you find that the past is more a tissue of choices
and chances than we had imagined, that fifty or a hundred
years of bad times can happen — and can end and be
replaced by the united work of people with heads and hearts
that makes society peaceful and prosperous again; or the
opportunity can be kicked away.
And
we should remember that when we root for the Romans, there
are contradictory impulses at work. Rome brought the
ancient world a secure environment (Pompey cleaning up the
pirates in the Mediterranean was a real service), a standard
currency, and a huge free trade zone. Its taxes were
heavy, but the wealth it taxed was so immense that it could support
a huge bureaucracy for a long time without damaging local
prosperity. Fine: but it was an empire
by conquest, ruled as a military dictatorship, fundamentally
dependent on a slave economy, and with no clue whatever about
the realities of economic development and management. A
prosperous emperor was one who managed by conquest or taxation
to bring a flood of wealth into the capital city and squander
it as ostentatiously as possible. Rome "fell",
if that's the right word for it, partly because it ran out
of ideas for new peoples to plunder, and fell into a funk
of outrage at the thought that some of the neighboring peoples
preferred to move inside the empire's borders, settle down,
buy fixer-upper houses, send their kids to the local schools,
and generally enjoy the benefits of civilization. (The real barbarians
stayed outside.) Much of the worst damage to Rome was
done by Roman emperors and armies thrashing about, thinking
they were preserving what they were in fact destroying.
So
now I have a new mantra for my students: "two
hundred years is a long time." When we talk about
Shakespeare's time or the Crusades or the Roman Empire or
the ancient Israelites, it's all too easy to talk about centuries
as objects, a habit we bring even closer to our own time,
but real human beings live in the short window of a generation,
and with ancient lifespans shorter than our own, that window
was brief. We need to understand and respect just how
much possibility was there and how much accomplishment was
achieved if we are to understand as well the opportunities
that were squandered. Learning to do that, learning
to sift the finest grains of evidence with care, learning
to learn from and debate with others — that's how history
gets done.
The excitement begins when you discover that the past is constantly
changing.
COLIN TUDGE
Science Writer; Author, The Tree: A Natural History of What Trees Are, How They Live, and Why They Matter

The Omniscience and Omnipotence of Science
I have changed my mind about the omniscience and omnipotence
of science. I now realize that science is strictly limited,
and that it is extremely dangerous not to appreciate this.
Science proceeds in general by being reductionist. This term
is used in different ways in different contexts, but here I
take it to mean that scientists begin by observing a world
that seems infinitely complex and inchoate, and in order to
make sense of it they first "reduce" it to a series
of bite-sized problems, each of which can then be made the
subject of testable hypotheses which, as far as possible, take
mathematical form.
Fair enough. The approach is obviously powerful, and it is
hard to see how solid progress of a factual kind could be made
in any other way. It produces answers of the kind known as "robust". "Robust" does
not of course mean "unequivocally true" and still
less does it meet the lawyers' criteria — "the
whole truth, and nothing but the truth". But robustness
is pretty good; certainly good enough to be going on with.
The
limitation is obvious, however. Scientists produce robust
answers only because they take great care to tailor the questions.
As Sir Peter Medawar said, "Science is the art of the soluble"
(within the time and with the tools available).
Clearly it is a huge mistake to assume that what is soluble
is all there is — but some scientists make this mistake
routinely.
Or to put the matter another way: they tend conveniently to
forget that they arrived at their "robust" conclusions
by ignoring as a matter of strategy all the complexities of
a kind that seemed inconvenient. But all too often, scientists
then are apt to extrapolate from the conclusions they have
drawn from their strategically simplified view of the world,
to the whole, real world.
Two
examples of a quite different kind will suffice:
1: In the 19th century the study of animal psychology was
a mess. On the one hand we had some studies of nerve function
by a few physiologists, and on the other we had reams of wondrous
but intractable natural history which George Romanes in particular
tried to put into some kind of order. But there was nothing
much in between. The behaviourists of the 20th century did
much to sort out the mess by focusing on the one manifestation
of animal psychology that is directly observable and measurable — their
behaviour.
Fair enough. But when I was at university in the early 1960s
behaviourism ruled everything. Concepts such as "mind" and "consciousness" were
banished. B. F. Skinner even tried to explain the human acquisition
of language in terms of his "operant conditioning".
Since then the behaviourist agenda has largely been put in
its place. Its methods are still useful (still helping to provide "robust" results)
but discussions now are far broader. "Consciousness", "feeling",
even "mind" are back on the agenda.
Of course you can argue that in this instance science proved
itself to be self-correcting — although this historically
is not quite true. Noam Chomsky, not generally recognized as
a scientist, did much to dent behaviourist confidence through
his own analysis of language.
But for decades the confident assertions of the behaviourists
ruled and, I reckon, they were in many ways immensely damaging.
In particular they reinforced the Cartesian notion that animals
are mere machines, and can be treated as such. Animals such
as chimpanzees were routinely regarded simply as useful physiological "models" of
human beings that could be more readily abused than humans can be.
Jane Goodall in particular provided the corrective to this — but
she had difficulty getting published at first precisely because
she refused to toe the hard-nosed Cartesian (behaviourist-inspired)
line. The causes of animal welfare and conservation are still
bedeviled by the attitude that animals are simply "machines" and
by the crude belief that modern science has "proved" that
this is so.
2: In the matter of GMOs we are seeing the crude simplifications
still in their uncorrected form. By genetic engineering it
is possible (sometimes) to increase crop yield. Other things
being equal, high yields are better than low yields. Ergo (the
argument goes) GMOs must be good and anyone who says differently
must be a fool (unable to understand the science) or wicked
(some kind of elitist, trying to hold the peasants back).
But
anyone who knows anything about farming in the real world
(as opposed to the cosseted experimental fields of the English
home counties and of California) knows that yield is by no
means the be-all and end-all. Inter alia, high yields require
high inputs of resources and capital — the very things that
are often lacking. Yield typically matters far less than
long-term security — acceptable yields in bad years rather
than bumper yields in the best conditions. Security requires
individual toughness and variety — neither of which necessarily
correlates with super-crop status. In a time of climate change,
resilience is obviously of paramount importance — but this
is not, alas, obvious to the people who make policy. Bumper
crops in good years cause glut — unless the market is regulated; and
glut in the current economic climate (though not necessarily
in the real world of the US and the EU) depresses prices
and puts farmers out of work.
Eventually the penny may drop — that the benison of
the trial plot over a few years cannot necessarily be transferred
to real farms in the world as a whole. But by that time the
traditional crops that could have carried humanity through
will be gone, and the people who know how to farm them will
be living and dying in urban slums (which, says the UN, are
now home to a billion people).
Behind all this nonsense and horror lies the simplistic belief,
of a lot of scientists (though by no means all, to be fair)
and politicians and captains of industry, that science understands
all (i.e., is omniscient, or soon will be) and that its high technologies
can dig us out of any hole we may dig ourselves into (i.e., is
omnipotent).
Absolutely not.