"WHAT
IS YOUR DANGEROUS IDEA?" |
|
JAMSHED BHARUCHA
Professor of Psychology, Provost, Senior Vice President, Tufts University

Education as we know it does not accomplish what we believe it does
The
more we discover about cognition and the brain, the more we
will realize that education as we know it does not accomplish
what we believe it does.
It
is not my purpose to echo familiar critiques of our schools.
My concerns are of a different nature and apply to the full spectrum
of education, including our institutions of higher education,
which arguably are the finest in the world.
Our
understanding of the intersection between genetics and neuroscience
(and their behavioral correlates) is still in its infancy. This
century will bring forth an explosion of new knowledge on the
genetic and environmental determinants of cognition and brain
development, on what and how we learn, on the neural basis of
human interaction in social and political contexts, and on variability
across people.
Are
we prepared to transform our educational institutions if new
science challenges cherished notions of what and how we learn?
As we acquire the ability to trace genetic and environmental
influences on the development of the brain, will we as a society
be able to agree on what our educational objectives should be?
Since
the advent of scientific psychology we have learned a lot about
learning. In the years ahead we will learn a lot more that will
continue to challenge our current assumptions. We will learn
that some things we currently assume are learnable are not (and
vice versa), that some things that are learned successfully don't
have the impact on future thinking and behavior that we imagine,
and that some of the learning that impacts future thinking and
behavior is not what we spend time teaching. We might well discover
that the developmental time course for optimal learning from
infancy through the life span is not reflected in the standard
educational timeline around which society is organized. As we
discover more about the gulf between how we learn and how we
teach, hopefully we will also discover ways to redesign our systems
— but I suspect that the latter will lag behind the former.
Our
institutions of education certify the mastery of spheres of knowledge
valued by society. Several questions will become increasingly
pressing, and are even pertinent today. How much of this learning
persists beyond the time at which acquisition is certified? How
does this learning impact the lives of our students? How central
is it in shaping the thinking and behavior we would like to see
among educated people as they navigate, negotiate and lead in
an increasingly complex world?
We
know that tests and admissions processes are selection devices
that sort people into cohorts on the basis of excellence on various
dimensions. We know less about how much even our finest examples
of teaching contribute to human development over and above selection
and motivation.
Even
current knowledge about cognition (specifically, our understanding
of active learning, memory, attention, and implicit learning)
has not fully penetrated our educational practices, because of
inertia as well as a natural lag in the application of basic
research. For example, educators recognize that active learning
is superior to the passive transmission of knowledge. Yet we
have a long way to go to adapt our educational practices to what
we already know about active learning.
We
know from research on memory that learning trials bunched up
in time produce less long-term retention than the same learning
trials spread over time. Yet we compress learning into discrete
packets called courses, we test learning at the end of a course
of study, and then we move on. Furthermore, memory for both facts
and methods of analytic reasoning is context-dependent. We don't
know how much of this learning endures, how well it transfers
to contexts different from the ones in which the learning occurred,
or how it influences future thinking.
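The spacing claim can be made concrete with a toy forgetting model of my own (not from the essay): recall decays exponentially with delay, and each review strengthens a memory in proportion to the gap it just survived. The numbers are illustrative only, a minimal sketch rather than a calibrated model:

```python
import math

def retention(review_days, test_day, initial_stability=1.0):
    """Toy forgetting model: recall probability decays as
    exp(-delay / stability), and each review raises stability by
    the interval the memory just survived (a crude stand-in for
    the spacing effect)."""
    stability = initial_stability
    last = review_days[0]
    for day in review_days[1:]:
        stability += day - last   # longer gaps strengthen more
        last = day
    return math.exp(-(test_day - last) / stability)

massed = [0, 1, 2, 3]      # four trials bunched into four days
spaced = [0, 7, 14, 21]    # the same four trials spread out
print(f"massed: {retention(massed, 60):.4f}")   # ~0.0000
print(f"spaced: {retention(spaced, 60):.4f}")   # ~0.17
```

Under these assumptions, the same four trials retain far more at a day-60 test when spread over three weeks than when bunched into four days, which is the pattern the spacing research reports.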
At
any given time we attend to only a tiny subset of the information
in our brains or impinging on our senses. We know from research
on attention that information is processed differently by the
brain depending upon whether or not it is attended, and that
many factors — endogenous and exogenous
— control our attention. Educators have been aware of the
role of attention in learning, but we are still far from understanding
how to incorporate this knowledge into educational design. Moreover,
new information presented in a learning situation is interpreted
and encoded in terms of prior knowledge and experience; the increasingly
diverse backgrounds of students placed in the same learning contexts
imply that the same information may vary in its meaningfulness
to different students and may be recalled differently.
Most
of our learning is implicit, acquired automatically and unconsciously
from interactions with the physical and social environment. Yet
language
— and hence explicit, declarative or consciously articulated
knowledge — is the currency of formal education.
Social
psychologists know that what we say about why we think and act
as we do is but the tip of a largely unconscious iceberg that
drives our attitudes and our behavior. Even as cognitive and
social neuroscience reveals the structure of these icebergs under
the surface of consciousness (for example, persistent cognitive
illusions, decision biases and perceptual biases to which even
the best educated can be unwitting victims), it will be less
clear how to shape or redirect them.
Research
in social cognition shows clearly that racial, cultural and other
social biases get encoded automatically by internalizing stereotypes
and cultural norms. While we might learn about this research
in college, we aren't sure how to counteract these factors in
the very minds that have acquired this knowledge.
We
are well aware of the power of non-verbal auditory and visual
information, which, when amplified by electronic media, captures
the attention of our students and sways millions. Future research
should give us a better understanding of nuanced non-verbal forms
of communication, including their universal and culturally based
aspects, as they are manifest in social, political and artistic
contexts.
Even
the acquisition of declarative knowledge through language — the
traditional domain of education — is being usurped by the
internet at our fingertips. Our university libraries and publication
models are responding to the opportunities and challenges of
the information age. But we will need to rethink some of our
methods of instruction too. Will our efforts at teaching be drowned
out by information from sources more powerful than even the best
classroom teacher?
It
is only a matter of time before we have brain-related technologies
that can alter or supplement cognition, influence what and how
we learn, and increase competition for our limited attention.
Imagine the challenges for institutions of education in an environment
in which these technologies are readily available, for better
or worse.
The
brain is a complex organ, and we will discover more of this complexity.
Our physical, social and information environments are also complex
and are becoming more so through globalization and advances in
technology. There will be no simple design principles for how
we structure education in response to these complexities.
As
elite colleges and universities, we see increasing demand for
the branding we confer, but we will also see greater scrutiny
from society for the education we deliver. Those of us in positions
of academic leadership will need wisdom and courage to examine,
transform and justify our objectives and methods as educators.
APRIL GORNIK
Artist, New York City; Danese Gallery
The exact effect of art can't be controlled or fully anticipated
Great
art makes itself vulnerable to interpretation, which is one
reason that it keeps being stimulating and fascinating for
generations. The problem inherent in this is that art could
inspire malevolent behavior, as per the notion popularly expressed
by A Clockwork Orange. When I was young, aspiring
to be a conceptual artist, it disturbed me greatly that I couldn't
control the interpretation of my work. When I began painting,
it was even worse; even I wasn't completely sure of what my
art meant. That seemed dangerous for me, personally, at that
time. I gradually came not only to respect the complexity and
inscrutability of painting and art, but to see how it empowers
the object. I believe that works of art are animated by their
creators, and remain able to generate thoughts, feelings, responses.
However, the fact is that the exact effect of art can't be
controlled or fully anticipated.
PAUL DAVIES
Physicist, Macquarie University, Sydney; Author, How to Build a Time Machine

The fight against global warming is lost
Some
countries, including the United States and Australia, have been
in denial about global warming. They cast doubt on the science
that set alarm bells ringing. Other countries, such as the UK,
are in a panic, and want to make drastic cuts in greenhouse emissions.
Both stances are irrelevant, because the fight is a hopeless
one anyway. In spite of the recent hike in the price of oil,
the stuff is still cheap enough to burn. Human nature being what
it is, people will go on burning it until it starts running out
and simple economics puts the brakes on. Meanwhile the carbon
dioxide levels in the atmosphere will just go on rising. Even
if developed countries rein in their profligate use of fossil
fuels, the emerging Asian giants of China and India will more
than make up the difference. Rich countries, whose own wealth
derives from decades of cheap energy, can hardly preach restraint
to developing nations trying to climb the wealth ladder. And
without the obvious solution — massive investment in nuclear
energy — continued warming looks unstoppable.
Campaigners
for cutting greenhouse emissions try to scare us by proclaiming
that a warmer world is a worse world. My dangerous idea is that
it probably won't be. Some bad things will happen. For example,
the sea level will rise, drowning some heavily populated or fertile
coastal areas. But in compensation Siberia may become the world's
breadbasket. Some deserts may expand, but others may shrink.
Some places will get drier, others wetter. The evidence that
the world will be worse off overall is flimsy. What is certainly
the case is that we will have to adjust, and adjustment is always
painful. Populations will have to move. In 200 years some currently
densely populated regions may be deserted. But the population
movements over the past 200 years have been dramatic too. I doubt
if anything more drastic will be necessary. Once it dawns on
people that, yes, the world really is warming up and that, no,
it doesn't imply Armageddon, then international agreements
like the Kyoto Protocol will fall apart.
The
idea of giving up the global warming struggle is dangerous because
it shouldn't have come to this. Mankind does have the resources
and the technology to cut greenhouse gas emissions. What we lack
is the political will. People pay lip service to environmental
responsibility, but they are rarely prepared to put their money
where their mouth is. Global warming may turn out to be not so
bad after all, but many other acts of environmental vandalism
are manifestly reckless: the depletion of the ozone layer, the
destruction of rain forests, the pollution of the oceans. Giving
up on global warming will set an ugly precedent.
HELEN FISHER
Research Professor, Department of Anthropology, Rutgers University; Author, Why We Love

If patterns of human love subtly change, all sorts of social and political atrocities can escalate
Serotonin-enhancing
antidepressants (such as Prozac and many others) can jeopardize
feelings of romantic love, feelings of attachment to a spouse
or partner, one's fertility and one's genetic future.
I
am working with psychiatrist Andy Thomson on this topic. We base
our hypothesis on patient reports, fMRI studies, and other data
on the brain.
Foremost,
as SSRIs elevate serotonin, they also suppress dopaminergic pathways
in the brain. And because romantic love is associated with elevated
activity in dopaminergic pathways, it follows that SSRIs can
jeopardize feelings of intense romantic love. SSRIs also curb
obsessive thinking and blunt the emotions — central characteristics
of romantic love. One patient described this reaction well, writing: "After
two bouts of depression in 10 years, my therapist recommended
I stay on serotonin-enhancing antidepressants indefinitely. As
appreciative as I was to have regained my health, I found that
my usual enthusiasm for life was replaced with blandness. My
romantic feelings for my wife declined drastically. With the
approval of my therapist, I gradually discontinued my medication.
My enthusiasm returned and our romance is now as strong as ever.
I am prepared to deal with another bout of depression if need
be, but in my case the long-term side effects of antidepressants
render them off limits".
SSRIs
also suppress sexual desire, sexual arousal and orgasm in as
many as 73% of users. These sexual responses evolved to enhance
courtship, mating and parenting. Orgasm produces a flood of oxytocin
and vasopressin, chemicals associated with feelings of attachment
and pairbonding behaviors. Orgasm is also a device by which women
assess potential mates. Women do not reach orgasm with every
coupling and the "fickle" female orgasm is now regarded
as an adaptive mechanism by which women distinguish males who
are willing to expend time and energy to satisfy them. The onset
of female anorgasmia may jeopardize the stability of a long-term
mateship as well.
In men, serotonin-enhancing antidepressants also inhibit evolved
mechanisms for mate selection, partnership formation and marital
stability. The penis stimulates to give pleasure and advertise
the male's psychological and physical fitness; it also deposits
seminal fluid in the vaginal canal, fluid that contains dopamine,
oxytocin, vasopressin, testosterone, estrogen and other chemicals
that most likely influence a female partner's behavior.
These
medications can also influence one's genetic future. Serotonin
increases prolactin by stimulating prolactin-releasing factors.
Prolactin can impair fertility by suppressing hypothalamic GnRH
release, suppressing pituitary FSH and LH release, and/or suppressing
ovarian hormone production. Clomipramine, a strong serotonin-enhancing
antidepressant, adversely affects sperm volume and motility.
I
believe that Homo sapiens has evolved (at least) three primary,
distinct yet overlapping neural systems for reproduction. The
sex drive evolved to motivate ancestral men and women to seek
sexual union with a range of partners; romantic love evolved
to enable them to focus their courtship energy on a preferred
mate, thereby conserving mating time and energy; attachment evolved
to enable them to rear a child through infancy together. The
complex and dynamic interactions between these three brain systems
suggest that any medication that changes their chemical checks
and balances is likely to alter an individual's courting, mating
and parenting tactics, ultimately affecting their fertility and
genetic future.
The
reason this is a dangerous idea is that the huge drug industry
is heavily invested in selling these drugs; millions of people
currently take these medications worldwide; and as these drugs
become generic, many more will soon imbibe — inhibiting
their ability to fall in love and stay in love. And if patterns
of human love subtly change, all sorts of social and political
atrocities can escalate.
JOEL GARREAU
Cultural Revolution Correspondent, Washington Post; Author, Radical Evolution
Suppose Faulkner was right?
In his December 10, 1950, Nobel Prize acceptance speech, William
Faulkner said:
I
decline to accept the end of man. It is easy enough to
say that man is immortal simply because he will endure:
that when the last ding-dong of doom has clanged and faded
from the last worthless rock hanging tideless in the last
red and dying evening, that even then there will still
be one more sound: that of his puny inexhaustible voice,
still talking. I refuse to accept this. I believe that
man will not merely endure: he will prevail.
He is immortal, not because he alone among creatures has an
inexhaustible voice, but because he has a soul, a spirit capable
of compassion and sacrifice and endurance. The poet's, the
writer's, duty is to write about these things. It is his privilege
to help man endure by lifting his heart, by reminding him of
the courage and honor and hope and pride and compassion and
pity and sacrifice which have been the glory of his past. The
poet's voice need not merely be the record of man, it can be
one of the props, the pillars to help him endure and prevail.
It's
easy to dismiss such optimism. The reason I hope Faulkner was
right, however, is that we are at a turning point in history.
For the first time, our technologies are not so much aimed outward
at modifying our environment in the fashion of fire, clothes,
agriculture, cities and space travel. Instead, they are increasingly
aimed inward at modifying our minds, memories, metabolisms, personalities
and progeny. If we can do all that, then we are entering an era
of engineered evolution — radical evolution, if you will — in
which we take control of what it will mean to be human.
This
is not some distant, science-fiction future. This is happening
right now, in our generation, on our watch. The GRIN technologies — the
genetic, robotic, information and nano processes — are
following curves of accelerating technological change, the arithmetic
of which suggests that the last 20 years are not a guide to the
next 20 years. We are more likely to see that magnitude of change
in the next eight. Similarly, the amount of change of the last
half century, going back to the time when Faulkner spoke, may
well be compressed into the next 14.
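The 20-years-into-8 figure follows from compound-growth arithmetic. As a minimal sketch (the doubling time is my assumption; the essay does not state one), suppose the rate of change doubles every D years; then the change accumulated over the past N years is matched in D * log2(2 - 2^(-N/D)) future years:

```python
import math

def years_to_match(past_years: float, doubling_time: float = 10.0) -> float:
    """Future years holding as much change as the past `past_years`,
    if the rate of change doubles every `doubling_time` years.
    Change over an interval is the integral of 2**(t/D), so the past
    and future amounts are equal when 2**(T/D) = 2 - 2**(-N/D)."""
    d = doubling_time
    return d * math.log2(2.0 - 2.0 ** (-past_years / d))

print(round(years_to_match(20), 1))  # ~8.1, close to "the next eight"
print(round(years_to_match(50), 1))  # ~9.8 with this doubling time
```

A 10-year doubling time reproduces the 20-into-8 claim; the essay's 50-into-14 figure implies a somewhat longer doubling time, around 15 years, which is a reminder that these are rhetorical estimates rather than a single consistent curve.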
This raises the question of where we will gain the wisdom to
guide this torrent, and points to what happens if Faulkner
was wrong. If we humans are not so much able to control our
tools, but instead come to be controlled by them, then we will
be heading into a technodeterminist future.
You
can get different versions of what that might mean.
Some
would have you believe that a future in which our creations eliminate
the ills that have plagued mankind for millennia — conquering
pain, suffering, stupidity, ignorance and even death — is
a vision of heaven. Some even welcome the idea that someday soon,
our creations will surpass the pitiful limitations of Version
1.0 humans, themselves becoming a successor race that will conquer
the universe, and care for us benevolently.
Others feel strongly that a life without suffering is a life without
meaning, reducing humankind to ignominious, character-less husks.
They also point to what could happen if such powerful self-replicating
technologies get into the hands of bumblers or madmen. They can
easily imagine a vision of hell in which we wipe out not only our
species, but all of life on earth.
If Faulkner is right, however, there is a third possible future.
That is the one that counts on the ragged human convoy of divergent
perceptions, piqued honor, posturing, insecurity and humor once
again wending its way to glory. It puts a shocking premium on Faulkner's
hope that man will prevail "because he has a soul, a spirit
capable of compassion and sacrifice and endurance." It assumes
that even as change picks up speed, giving us less and less time
to react, we will still be able to rely on the impulse that Churchill
described when he said, "Americans can always be counted on
to do the right thing—after they have exhausted all other
possibilities."
The key measure of such a "prevail" scenario's success
would be an increasing intensity of links between humans, not transistors.
If some sort of transcendence is achieved beyond today's understanding
of human nature, it would not be through some individual becoming
superman. Transcendence would be social, not solitary. The measure
would be the extent to which many transform together.
The very fact that Faulkner's proposition looms so large as we
look into the future does at least illuminate the present.
Referring to Faulkner's breathtaking line, "when the last
ding-dong of doom has clanged and faded from the last worthless
rock hanging tideless in the last red and dying evening, that even
then there will still be one more sound: that of his puny inexhaustible
voice, still talking," the author Bruce Sterling once told
me, "You know, the most interesting part about that speech
is that part right there, where William Faulkner, of all people,
is alluding to H. G. Wells and the last journey of the Traveler
from The Time Machine. It's kind of a completely heartfelt,
probably drunk mishmash of cornball crypto-religious literary humanism
and the stark, bonkers, apocalyptic notions of atomic Armageddon,
human extinction, and deep Darwinian geological time. Man, that
was the 20th century all over."
STANISLAS DEHAENE
Cognitive Neuropsychology Researcher, Institut National de la Santé, Paris; Author, The Number Sense

Touching and pushing the limits of the human brain
From
Copernicus to Darwin to Freud, science has a special way of deflating
human hubris by proposing ideas that are frequently perceived,
at the time, as dangerous or pernicious. Today, cognitive
neuroscience presents us with a new challenging idea, whose accommodation
will require substantial personal and societal effort — the
discovery of the intrinsic limits of the human brain.
Calculation
was one of the first domains where we lost our special status — right
from their inception, computers were faster than the human brain,
and they are now billions of times ahead of us in their speed
and breadth of number crunching. Psychological research shows
that our mental "central executive" is amazingly limited — we
can process only one thought at a time, at a meager rate of five
or ten per second at most. This is rather surprising. Isn't the
human brain supposed to be the most massively parallel machine
on earth? Yes, but its architecture is such that the collective
outcome of this parallel organization, our mind, is a very slow
serial processor. What we can become aware of is intrinsically
limited. Whenever we delve deeply into the processing of one
object, we become literally blind to other items that would require
our attention (the "attentional blink" paradigm). We also suffer
from an "illusion of seeing": we think that we take in a whole
visual scene and see it all at once, but research shows that
major chunks of the image can be changed surreptitiously without
our noticing.
True, relative to other animal species, we do have a special combinatorial
power, which lies at the heart of the remarkable cultural inventions
of mathematics, language, or writing. Yet this combinatorial faculty
only works on the raw materials provided by a small number of core
systems for number, space, time, emotion, conspecifics, and a few
other basic domains. The list is not very long — and within
each domain, we are now discovering lots of little ill-adapted
quirks, evidence of stupid design as expected from a brain arising
from an imperfect evolutionary process (for instance, our number
system only gives us a sense of approximate quantity — good
enough for foraging, but not for exact mathematics). I therefore
do not share Marc Hauser's optimism that our mind has a "universal" or "limitless" expressive
power. The limits are easy to touch in mathematics, in topology
for instance, where we struggle with the simplest objects (is a
curve a knot… or not?).
As we discover the limits of the human brain, we also find new
ways to design machines that go beyond those limits. Thus, we have
to get ready for a society where, more and more, the human mind
will be replaced by better computers and robots — and where
the human operator will be increasingly considered a nuisance rather
than an asset. This is already the case in aeronautics, where flight
stability is ensured by fast cybernetics and where landing and
takeoff will soon be assured by computer, apparently with much
improved safety.
There
are still a few domains where the human brain maintains an apparent
superiority. Visual recognition used to be one — but already,
superb face recognition software is appearing, capable of storing
and recognizing thousands of faces with close to human performance.
Robotics is another. No robot to date is capable of navigating
smoothly through a complicated 3-D world. Yet a third area of
human superiority is high-level semantics and creativity: the
human ability to make sense of a story, to pull out the relevant
knowledge from a vast store of potentially useful facts, remains
unequalled.
Suppose that, for the next 50 years, those are the main areas
in which engineers will remain unable to match the performance
of the human brain. Are we ready for a world in which the human
contributions are binary, either at the highest level (thinkers,
engineers, artists…) or at the lowest level, where human
labor remains cheaper than mechanization? To some extent,
I would argue that this great divide is already here, especially
between North and South, but also within our developed countries,
between upper and lower castes.
What are the solutions? I envisage two of them. The first is
education. The human brain to some extent is changeable. Thanks
to education, we can improve considerably upon the stock of
mental tools provided to us by evolution. In fact, relative
to the large changes that schooling can provide, whatever neurobiological
differences distinguish the sexes or the races are minuscule
(and thus largely irrelevant — contra Steve
Pinker). The crowning achievements of Sir Isaac Newton are
now accessible to any student of physics and algebra — whatever
his or her skin color.
Of course, our learning ability isn't without bounds. It is
itself tightly limited by our genes, which merely allow a fringe
of variability in the laying down of our neuronal networks.
We never gain entirely new abilities — but merely
transform our existing brain networks, a partial and constrained
process that I have called "cultural recycling" or "recyclage".
As we gain knowledge of brain plasticity, a major application
of cognitive neuroscience research should be the improvement
of life-long education, with the goal of optimizing this transformation
of our brains. Consider reading. We now understand much better
how this cultural capacity is laid down. A posterior brain
network, initially evolved to recognize objects and faces,
gets partially recycled for the shapes of letters and words,
and learns to connect these shapes to other temporal areas
for sounds and words. Cultural evolution has modified the shapes
of letters so that they are easily learnable by this brain
network. But, the system remains amazingly imperfect. Reading
still has to go through the lopsided design of the retina,
where the blood vessels are put in front of the photoreceptors,
and where only a small region of the fovea has enough resolution
to recognize small print. Furthermore, both the design of writing
systems and the way in which they are taught are perfectible.
In the end, after years of training, we can only read at an
appalling speed of perhaps 10 words per second, a baud rate
surpassed by any present-day modem.
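The modem comparison is easy to check with rough figures of my own (about five letters plus a space per English word, 8 bits per ASCII character; none of these numbers are from the essay):

```python
# Back-of-the-envelope check of the reading-versus-modem comparison.
# Assumed figures: ~5 letters plus a space per word, 8 bits per
# ASCII character; a dial-up modem moves 56,000 bits per second.
words_per_second = 10
bits_per_word = 6 * 8                 # 6 characters x 8 bits each

reading_bps = words_per_second * bits_per_word
print(reading_bps)                    # 480 bits/s for a fast reader
print(56_000 / reading_bps)           # a 56k modem is ~117x faster
```

Even granting the reader a generous 10 words per second, the raw character stream is under 500 bits per second, two orders of magnitude below late-1990s dial-up.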
Nevertheless, this cultural invention has radically changed
our cognitive abilities, doubling our verbal working memory
for instance. Who knows what other cultural inventions might
lie ahead of us, and might allow us to further push the limits
of our brain biology?
A second, more futuristic solution may lie in technology. Brain-computer
interfaces are already around the corner. They are currently
being developed for therapeutic purposes. Soon, cortical implants
will allow paralyzed patients to move equipment by direct cerebral
command. Will such devices later be applied to the normal human
brain, in the hopes of extending our memory span or the speed
of our access to information? And will we be able to forge
a society in which such tools do not lead to further divisions
between, on the one hand, high-tech brains powered by the best
education and neuro-gear, and on the other hand, low-tech manpower
just good enough for cheap jobs?
ERIC FISCHL
Artist, New York City; Mary Boone Gallery
The unknown becomes known, and is not replaced with a new unknown
Several
years ago I stood in front of a painting by Vermeer. It was
a painting of a woman reading a letter. She stood near the
window for better lighting and behind her hung a map of the
known world. I was stunned by the revelation of this work.
Vermeer understood something so basic to human need it had
gone virtually unnoticed: communication from afar.
Everything
we have done to make us more capable, more powerful, better protected,
more intelligent, has been by overcoming our physical limitations
and enhancing our perceptual abilities and adaptability. When I think of Vermeer's
woman reading the letter I wonder how long it took to get
to her. Then I think, my god, at some time we developed a system
in which one could leave home and send word back! We figured
out a way that we could be heard from far away and then another
system so that we can be seen from far away. Then I start to
marvel at the alchemy of painting and how we have been able to
invest materials with consciousness so that Vermeer can talk
to me across time! I see, too, that he has put me in the position of
not knowing as I am kept from reading the content of the letter.
In this way he has placed me at the edge, the frontier of wanting
to know what I cannot know. I want to know how long this
letter sender has been away and what he was doing all this time.
Is he safe? Does he still love her? Is he on his way home?
Vermeer
puts me into what had been her condition of uncertainty. All
I can do is wonder and wait. This makes me think about how not
knowing is so important. Not knowing makes the world large and
uncertain and our survival tenuous. It is a mystery why humans
roam and still more a mystery why we still need to feel so connected
to the place we have left. The not knowing causes such profound
anxiety that it, in turn, spawns creativity. The impetus for this
creativity is empowerment. Our gadgets, gizmos, networks of
transportation and communication, have all been developed
to explore, utilize or master the unknown territory.
If
the unknown becomes known, and is not replaced with a new unknown,
if the farther we reach outward is connected only to how fast
we can bring it home, if the time between not knowing and knowing
becomes too small, creativity will be daunted. And so I worry,
if we bring the universe more completely, more effortlessly,
into our homes will there be less reason to leave them?
NICHOLAS HUMPHREY
Psychologist, London School of Economics; Author, The Mind Made Flesh
It is undesirable to believe in a proposition when there is no ground whatever for supposing it true
Bertrand
Russell's idea, put forward 80 years ago, is about as dangerous
as they come. I don't think I can better it: "I wish to
propose for the reader's favourable consideration a doctrine
which may, I fear, appear wildly paradoxical and subversive.
The doctrine in question is this: that it is undesirable to
believe in a proposition when there is no ground whatever for
supposing it true." (The opening lines of his Sceptical
essays). |
DAVID BODANIS
Writer, Consultant; Author, The Electric Universe
The hyper-Islamicist critique of the West as a decadent force that is already on a downhill course might be true
I
wonder sometimes if the hyper-Islamicist critique of the West
as a decadent force that is already on a downhill course might
be true. At first it seems impossible: no one's richer than
the US, and no one has as powerful an army; western Europe
has vast wealth and deep university expertise as well.
But
what got me reflecting was the fact that in just four years after
Pearl Harbor, the US had defeated two of the greatest military
forces the world had ever seen. Everyone naturally accepted there
had to be restrictions on gasoline sales, to preserve limited
supplies of gasoline and rubber; profiteers were hated. But the
first four years after 9/11? Detroit automakers find it easy
to continue paying off congressmen to ensure that gasoline-wasting
SUVs aren't restricted in any way.
There
are deep trends behind this. Technology is supposed to be speeding
up, but if you think about it, airplanes have a similar feel
and speed to ones of 30 years ago; cars and oil rigs and credit
cards and the operations of the NYSE might be a bit more efficient
than a few decades ago, but also don't feel fundamentally different.
Aside from the telephones, almost all the objects and daily
habits in Spielberg's 20-year-old film E.T. are about the same
as today.
What
has changed is the possibility of quick change: it's a lot,
lot harder than it was before. Patents for vague, general ideas
are much easier to get than they were before, which slows down
the introduction of new technology; academics in biotech and
other fields are wary about sharing their latest research with
potentially competing colleagues (which slows down the creation
of new technology as well).
Even
more, there's a tension, a fear of falling from the increasingly
fragile higher tiers of society, which means that social barriers
are higher as well. I went to adequate but not extraordinary
public (state) schools in Chicago, but my children go to private
schools. I suspect that many contributors to this site, unless
they live in academic towns where state schools are especially
strong, are in a similar position. This is fine for our children,
but not for children of the same theoretical potential whose
parents cannot afford it.
Sheer
inertia can mask such flaws for quite a while. The National Academy
of Sciences has shown that, once again, the percentage of American-born
university students studying the hard physical sciences has gone
down. At one time that didn't matter, for life in America — and
at the top American universities — was an overwhelming
lure for ambitious youngsters from Seoul and Bangalore. But already
there are signs of that slipping, and who knows what it'll be
like in another decade or two.
There's
another sort of inertia that's coming to an end as well. The
first generation of immigrants from farm to city brings with it
the attitudes of the farm world; the first generation of 'migrants'
from blue-collar city neighborhoods to upper-middle-class professional
life brings similar attitudes of responsibility as well. We ignore
what the media pours out about how we're supposed to live. We're
responsible for parents, even when it's not to our economic advantage;
we vote against our short-term economic interests, because it's
the 'right' thing to do; we engage in philanthropy towards individuals
of very different backgrounds from ourselves. But why? In many
parts of America or Europe, the rules and habits that created those
attitudes no longer exist at all.
When
that finally gets cut away, will what replaces it be strong enough
for us to survive?
SAMUEL BARONDES
Neurobiologist and Psychiatrist, University of California, San Francisco; Author, Better Than Prozac

Using Medications To Change Personality
Personality
— the pattern of thoughts, feelings, and actions that is typical
of each of us — is generally formed by early adulthood. But many
people still want to change. Some, for example, consider themselves
too gloomy and uptight and want to become more cheerful and flexible.
Whatever their aims, they often turn to therapists, self-help
books, and religious practices.
In
the past few decades certain psychiatric medications have become
an additional tool for those seeking control of their lives.
Initially designed to be used for a few months to treat episodic
psychological disturbances such as severe depression, they are
now being widely prescribed for indefinite use to produce sustained
shifts in certain personality traits. Prozac is the best known
of them, but many others are on the market or in development.
By directly affecting brain circuits that control emotions, these
medications can produce desirable effects that may be hard to
replicate by sheer force of will or by behavioral exercises.
Millions keep taking them continuously, year after year, to modulate
personality.
Nevertheless,
despite the testimonials and apparent successes, the sustained
use of such drugs to change personality should still be considered
dangerous. Not because manipulation of brain chemicals is intrinsically
cowardly, immoral, or a threat to the social order. In the opinion
of experienced clinicians medications such as Prozac may actually
have the opposite effect, helping to build character and to increase
personal responsibility. The real danger is that there are no
controlled studies of the effects of these drugs on personality
over the many years or even decades in which some people are
taking them. So we are left with a reliance on opinion and belief.
And this, as in all fields, we know to be dangerous.