"WHAT
ARE YOU OPTIMISTIC ABOUT?" |
|
GEORGE
CHURCH
Professor of Genetics, Harvard Medical School; Director,
Center for Computational Genetics
Personal Genomics Will Arrive This Year, and With It a Revolutionary
Wave Of Volunteerism and Self-Knowledge
A small but crucial set of human pursuits has experienced smooth
exponential growth for many decades—sometimes so smooth
as to be hidden and then revealed with a jolt. These growth
industries involve information—reading and writing complex
artifacts made of electronic and/or DNA parts. The iconic
example is the personal computer, which though traceable back
to 1962, became manifest in 1993 when free web browsers spawned
millions of personal and commercial web pages within one year. I’m
optimistic that something similar is happening to personal genomics
this year. We are in free-fall from a stratospheric $3
billion generic genome sequence (which only an expert could love)
down to a sea level price for our personal genomic data. Early-adopters
are posing and positing how to exploit it, while surrounded by
envious and oblivious bystanders. We can now pinpoint the
1% of our genomes which in concert with our environment influences
the traits that make us different from one another. Ways
to tease out that key 1% coalesce with "next-generation" DNA-reading
technology popping up this year to suddenly bring the
street price down to $3000—about as easy (or hard) to justify
as buying some bleeding-edge electronic gadget at an early stage
when only minimal software is ready.
I
am optimistic that while society is not now ready, it will be
this year. The inevitable initial concerns about techno-downsides,
e.g. the "Genetic Information Nondiscrimination Act of 2005",
are already morphing into concerns about how to make these new
gifts useful and reliable. Witness how, just this August, the
US Senate began consideration of the "Genomics and Personalized
Medicine Act of 2006". Momentum is thus building for
millions of people to volunteer to have their genome data correlated
with their physical-traits to benefit the billions who will hang
back (due to inertia or uncertainty). These volunteers
deserve up-to-the-minute education in genetics, media, and privacy
issues. They deserve protection, encouragement and maybe
even rewards. Many current medical research studies do
not encourage their human research subjects to fully fathom the
potential identifiability of both their personal genome and physical
traits data, nor to learn enough to access and appreciate their
own data. The cost of educating the subjects is far less
than the other costs of such studies and yields benefits far
beyond the immediate need for fully informed consent. In contrast,
other studies, like the Personal Genome Project, emphasize pre-education
sufficient to choose among (1) opting out of the study completely,
(2) de-linking genomic and physical traits, (3) restricting linked
data to qualified researchers, (4) allowing peer-to-peer sharing,
or (5) a fully open public database. The subjects can redact
specific items in their records at any point, realizing that
items used to support conclusions in published work cannot be
easily reversed. The excitement and dedication of these
volunteers are already awesome.
I
am optimistic that millions more will share. Millions already
do share to benefit society (or whatever) in old and new social
phenomena ranging from the Red Cross to Wikipedia, from MySpace/YouTube
to SEC compensation disclosures. We wear ribbons and openly
share personal experiences on topics that were once taboo, hidden
from view, like depression, sexual orientation, and cancer. Rabbis'
daily tasks now include genetic counseling. Our ability
to track disease spread, not just HIV, bird flu, or bioterrorism,
but even the common cold, will benefit from the new technologies
and the new openness—leading to a bio-weather map. We
will learn so much more about ourselves and how we interact with
our environment and our fellow humans. We will be able
to connect with other people who share our traits. I am
optimistic that we will not be de-humanized (continuing the legacy
of feudalism and industrial revolution), but we might be re-humanized,
relieved of a few more ailments, to contemplate our place in
the universe, and transcend our brutal past.
CHRIS
DIBONA
Open Source Programs Manager, Google Inc.; Editor, Open Sources:
Voices From the Open Source Software Revolution and Open Sources
2.0

Widely Available, Constantly Renewing, High Resolution Images
of the Earth Will End Conflict and Ecological Devastation As We
Know It
I am not so much of a fool as to think that war will end, no matter
how much I wish that our shared future could include such a thing.
Nor do I think that people will stop the careless destruction of
flora and fauna for personal, corporate, national or international
gain. I do believe that the advent of rapidly updating, citizenry-available
high resolution imagery will remove the protection of the veil
of ignorance and secrecy from the powerful and exploitative among
us.
One cannot tell us that clear-cutting a forest isn't so bad if
we can see past the half acre of preserved trees into the desert-like
atmosphere of the former rain forest. One cannot tell you
that they are not destroying villages in Sudan if you can view
the burned out carcasses of the homes of the slaughtered. One
cannot intimate that the impact of a dam is minimal as humanity
watches countless villages being submerged in real time. One cannot
paint a war as a simple police action when the results of the carpet
bombing will be available in near real time on the internet.
We have already started down this path, with journalists, bloggers
and photographers taking pictures and in near real time uploading
them to any of a variety of websites for people to see. Secrecy
of this kind is dying, but it needs one last nudge to push our
national and international leadership into a realm of truth unheard
of to date.
With sufficient resolution, many things will be made clear to all:
Troop movements, power plant placement, ill-conceived dumping,
or just your neighbor building a pool. I am optimistic enough to
think that the long-term reaction to this kind of knowledge will
be the recognition of the necessary, or the proper management and
monitored phase-out of the unwanted. I am not as optimistic about
the short term, with those in power opting to suppress this kind
of information access, or worse, acting on the new knowledge by
bringing to a boil the conflicts that have been simmering for
uncountable years.
Can our leaders stand before us and say a thing is not occurring
if we can see via our low earth orbiting eyes that it is in fact
occurring? Only the truly deluded will be unable to see, and then
perhaps we can remove them and their psychopaths from power. A
more honest existence, with humankind understanding the full, global
impact of its decisions, is in our future if we can reach it. It
is likely to be a rough ride.
TERRENCE
SEJNOWSKI
Computational Neuroscientist, Salk Institute,
Coauthor, The
Computational Brain
A
Breakthrough In Understanding Intelligence Is Around The Corner
The
clinically depressed often have a more realistic view of their
problems than those who are optimistic. Without a biological
drive for optimism it might be difficult to motivate humans to take
on difficult problems and face long odds. What optimistic view
of the future drives string theorists in physics working on theories
that are probably hundreds of years ahead of their time? There
is always the hope that a breakthrough is just around the corner.
In 1956
a small group of optimists met for a summer conference at Dartmouth,
inspired by the recent invention of digital computers and breakthroughs
in writing computer programs that could prove mathematical theorems
and play games. Since mathematics was among the highest
levels of human achievement, they thought that engineered intelligence
was imminent. Last summer, 50 years later, another meeting
was held at Dartmouth that brought together the founders of Artificial
Intelligence and a new generation of researchers. Despite all
the evidence to the contrary, the pioneers from the first meeting
were still optimistic and chided the younger generation for having
given up the goal of achieving human level intelligence.
Problems
that seem easy, like seeing, hearing and moving about, are much
more difficult to program than theorem proving and chess. How
could this be? It took hundreds of millions of years to evolve
efficient ways for animals to find food, avoid danger and interact
with one another, but humans have been developing mathematics for
only a few thousand years, probably using bits of our brains that
were meant to do something altogether different. We vastly
underestimated the complexity of our interactions with the world
because we are unaware of the immense computation our brains perform
to make seeing objects and turning doorknobs seem effortless.
The
early pioneers of AI sought logical descriptions that were black
or white and geometric models with a few parameters, but the world
is high dimensional and comes in shades of gray. The new generation
of researchers has made progress by focusing on specific problems
in computer vision, planning, and other areas of AI. Intractable
problems have yielded to probabilistic analysis of large databases
using powerful statistical techniques. The first algorithms
that could handle this complexity were neural networks with many
thousands of parameters that learned to categorize input patterns
from labeled examples. New machine learning algorithms have
been discovered that can extract hidden statistical structure from
large datasets without the need for any labels. Progress is
accelerating now that the internet provides truly large datasets
of text and images. Computational linguists, for example, have
adopted statistical algorithms for parsing sentences and language
translation, having found transformational grammars too impoverished.
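As a toy illustration of the two regimes just described (learning to
categorize from labeled examples versus extracting hidden structure
with no labels), here is a small Python sketch; the one-dimensional
data and the 2-means procedure are illustrative assumptions, not any
system named above:

    # Toy contrast: supervised categorization vs. unsupervised structure finding.
    import random

    random.seed(0)
    # Two hidden clusters, centred near 0.0 and 5.0.
    data = [random.gauss(0.0, 0.5) for _ in range(50)] + \
           [random.gauss(5.0, 0.5) for _ in range(50)]
    labels = [0] * 50 + [1] * 50

    # Learning from labeled examples: the decision boundary is simply
    # the midpoint of the two class means.
    mean0 = sum(x for x, y in zip(data, labels) if y == 0) / 50
    mean1 = sum(x for x, y in zip(data, labels) if y == 1) / 50
    supervised_boundary = (mean0 + mean1) / 2

    # Extracting the hidden structure without labels: 2-means clustering
    # recovers nearly the same boundary from the raw data alone.
    c0, c1 = min(data), max(data)        # crude starting guesses for the two centres
    for _ in range(20):
        group0 = [x for x in data if abs(x - c0) <= abs(x - c1)]
        group1 = [x for x in data if abs(x - c0) > abs(x - c1)]
        c0, c1 = sum(group0) / len(group0), sum(group1) / len(group1)
    unsupervised_boundary = (c0 + c1) / 2

    print(supervised_boundary, unsupervised_boundary)   # both close to 2.5

The point of the sketch is only that the boundary discovered without
labels is essentially the one the labels would have given; real systems
do this in thousands of dimensions rather than one.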
One
of the most impressive learning systems is TD-Gammon, a computer
program that taught itself to play backgammon at the championship
level. Built by Gerald Tesauro at IBM Yorktown Heights, TD-Gammon
started out with little more than the board position and the rules
of the game, and the only feedback was who won. TD-Gammon solved
the temporal credit assignment problem: If after a long string
of choices you win, how do you know which choices were responsible
for the victory? Unlike rule-based game programs, TD-Gammon
discovered better ways to play positions on its own, and developed
a surprisingly subtle sense of when to play safely and when to be
aggressive. This captures some important aspects of human intelligence.
Neuroscientists
have discovered that dopamine neurons, found in the brains of all
vertebrates, are central to reward learning. The
transient responses of dopamine neurons signal to the brain predictions
for future reward, which are used to guide behavior and regulate
synaptic plasticity. The dopamine responses have the same properties
as the temporal difference learning algorithm used in TD-Gammon. Reinforcement
learning was dismissed years ago as too weak a learner to handle
the complexity of cognition. This belief needs to be re-evaluated
in the light of the successes of TD-Gammon and learning algorithms
in other areas of AI.
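To make the temporal-difference idea concrete, here is a minimal TD(0)
value learner in Python; the states, rewards, and learning rate are
illustrative assumptions, and TD-Gammon itself coupled this kind of
update to a neural network evaluating backgammon positions:

    # Minimal TD(0) value learning on a toy episode (illustrative only).
    def td0_update(V, state, next_state, reward, alpha=0.1, gamma=1.0):
        """Nudge V[state] toward reward plus the current estimate for next_state."""
        prediction_error = reward + gamma * V.get(next_state, 0.0) - V.get(state, 0.0)
        V[state] = V.get(state, 0.0) + alpha * prediction_error
        return prediction_error

    # A chain of positions in which the only feedback is the final win.
    V = {}
    episode = ["early", "middle", "late", "terminal"]
    rewards = [0.0, 0.0, 1.0]            # reward arrives only when the game is won
    for _ in range(100):                 # replaying the episode propagates credit backward
        for s, s_next, r in zip(episode, episode[1:], rewards):
            td0_update(V, s, s_next, r)
    print(V)                             # even the earliest position ends up valued near 1

The prediction_error returned on each step is a reward-prediction-error
signal, the same quantity whose time course the dopamine recordings
described above are reported to follow.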
What
would a biological theory of intelligence look like, based on internal
brain states derived from experimental studies rather than introspection? I
am optimistic that we are finally on the right track and that,
before too long, an unexpected breakthrough will occur.
PHILIP
CAMPBELL
Editor-in-Chief, Nature

Optimism
Needs To Have Bite So That Pioneering Work In Early Cancer
Detection Is Championed and Funded
Thinking
of myself as a perennial optimist, I was surprised how challenging
this question turned out to be. I realise that it's because
my optimism is an attitude, rather than founded on careful
estimation, and therefore bears little scrutiny. A corollary
of this is that my optimism makes little difference to what
I manage to achieve in a typical day apart from, importantly,
getting out of bed in the morning.
Turning to a dictionary I confirm that optimism is either "a
doctrine that this world is the best possible world" or "an
inclination to put the most favorable construction upon actions
and events or to anticipate the best possible outcome" (both
from Webster). An attitude, therefore, of questionable
robustness, and idiotically dangerous in some circumstances.
But amongst several similar definitions in the Oxford English
Dictionary I also find something less loaded: "Hopefulness
and confidence about the future or the successful outcome of
something". This permits optimism also to be rational.
And, to get serious, if I look for one aspect of life where both
rationality and hope are essential, and where both seem to be
paying off, it's in the battle against cancer. I focus my optimism
on what currently looks like a peripheral flank in that battle,
but could — and I think eventually will — become
a more central focus of attention. We should of course be delighted
by the few instances of drugs that hit a cancer target, even
when the target wasn't the one originally intended. But just
as important to me is the prospect of the use of proteins or
other markers that permit the early detection and identification
of cancer, hugely increasing the prospects of survival.
An early-detection cancer diagnostic needs to show low rates
of false positive and false negative outcomes, should be able
to distinguish tumours needing therapy from those that will do
no harm, and should be acceptable in terms of cost and practicality.
This combination is a very tall order.
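A back-of-the-envelope calculation in Python shows just how tall; the
prevalence, sensitivity, and false-positive rate below are purely
hypothetical numbers, not figures from any real assay:

    # Illustrative base-rate arithmetic for screening a rare cancer.
    prevalence = 0.005           # assume 0.5% of the screened population has the cancer
    sensitivity = 0.95           # assumed true-positive rate of the test
    false_positive_rate = 0.02   # assume 2% of healthy people still test positive

    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * false_positive_rate
    ppv = true_positives / (true_positives + false_positives)
    print(f"Chance that a positive result is a real cancer: {ppv:.0%}")   # about 19%

With these assumed numbers only about one positive result in five
reflects a real cancer, which is why specificity, and the ability to
tell harmless from dangerous tumours, weigh so heavily in the practical
case for any screening test.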
But hope arises from the unprecedented sensitivities of mass
spectrometers, of single-molecule detection and of DNA amplification,
not to mention the power of high-throughput biological screening.
These bring us the almost unimaginable prospect of successful
discrimination of cancer or — even better — pre-cancer
marker-molecules within the bloodstream. The recent discovery
in mice of genetic pathways underlying progression from precursor
to advanced stages of ovarian cancer is another milestone to
sustain optimism.
Although the US National Institutes of Health has made early
detection a priority, it remains relatively underfunded in most
cancer agencies. It has big challenges of clinical validation
ahead of it. At the policy level, health planners and drug companies
will need to be sure of its societal cost-effectiveness.
These considerations, and the fact that diagnostics are less
scientifically sexy than 'cures', can deter researchers from
pursuing early detection studies. So it's precisely now that
optimism needs to have bite, so that pioneering work in early
cancer detection is championed and funded despite the daunting
obstacles ahead.
GINO
SEGRE
Physicist,
University of Pennsylvania; Author, Faust in Copenhagen:
A Struggle for the Soul of Physics

The
Future Of String Theory
I
am optimistic about the future of our thinking regarding string
theory and the early universe. Until fairly recently I did not
feel this way since string theory seemed to be a community unto
itself, albeit a very talented one. Controversy has created an
important dialogue and strife has erupted. I think this is all
to the good. The basis for the disagreement goes back 30 years.
A
unified understanding or so-called "theory of everything" has
long been sought. The standard model that emerged in the 1970s
provided a very significant step forward but left undetermined
some 20 parameters: the values of the six quark and six lepton
masses, various couplings, etc. Initially it was hoped that string
theory, aside from a unification of forces with quantum gravity,
would determine the values of these parameters. That dream has
not been realized.
A
very significant group of theoretical physicists has now abandoned
the dream. Pointing out that even string theory supports the
view that an essentially infinite number of possibilities can
be realized for a universe, the so-called landscape, they maintain
that we live in one of these choices, the universe where the
20 or so parameters are fixed to be the values we observe. Other
universes, with other values of the parameters, are continuously
emerging and dying and still others live by our side. However
we are limited in the possibility of observations and measurements
to our own universe so that, in a deep sense, the 20 parameters
that determine our world are completely arbitrary. We would not
exist if they were not what they are, but there is no further
understanding of their values.
A
second group maintains that abandoning the dream that set elementary
particle physics on its course a century ago, that of determining
the forces and parameters of the sub-atomic world, is both premature
and intellectually wrong. They maintain this is not science.
There
is an intermediate position that, understandably, has not been
embraced vigorously by either side. Perhaps very few of the 20
or so parameters, some of the mass scales, correspond to the
universe we live in, but the others are set by string theory
or some future theory we have not yet discovered. This could
happen if e.g. the quark and lepton masses are calculable numbers
that multiply a mass given by the particular universe we happen
to live in. In this case both sides would be right. The numbers
would be set by the theory and the mass scale by the choice of
universe. I find the notion intriguing, but it may also be that
both sides are wrong and some other stunning synthesis will emerge.
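In symbols, the intermediate position amounts to something like the
following schematic (an illustration of the idea, not a result from
any actual theory): each quark or lepton mass would take the form

\[ m_i = c_i \, M_\ast , \]

where the dimensionless numbers c_i are calculable from the theory and
the single overall scale M_* is set by which universe in the landscape
we happen to occupy.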
So
why am I optimistic? Because I believe that controversy, with
clearly drawn out opposing positions, galvanizes both sides to
refine their opinions, creates excitement in the field for the
participants, stimulates new ideas, attracts new thinkers to
the fray and finally because it provides the public at large
with an entrée into the world of science at the highest
level, exhibiting for them heated arguments between great minds
differing on questions vital to them. What could be more exciting?
ERNST
PÖPPEL
Neuroscientist, Chairman, Board of Directors,
Human Science Center and Department of Medical Psychology, Munich
University, Germany; Author, Mindworks

"Monocausalitis" —
Pessimistic Optimism To Overcome a Common Disease
Since the question arrived in the "Edge world", my brain has been
working on it implicitly, day and night, asking what I could be
optimistic about (on the explicit level I also had to do some other
things). Frankly speaking, with respect to the "big questions" nothing
came to the surface of my mind. Can I be optimistic, as a scientist or
as a citizen, about questions like: Can we come to sustainable peace?
Will we one day really solve the question of how our brain functions?
Are we going to win the battle against diseases? Will it ever be
possible to be free from prejudices? Etc., etc. The answer is an
emphatic "no". There is no reason to be optimistic about such "big
questions".
On the other hand, I look at myself as an optimistic person; on a
personal level I am optimistic about the future of my children and
grandchildren, about the careers of my doctoral students past and
present, about the realization of some new research projects in the
near future, about my health after some problems in the past, etc.,
etc. Thus, if everybody were optimistic about personal matters (which,
empirically speaking, is unfortunately not the case), possibly in the
big picture there could be a reason for an optimistic attitude towards
others and the world. Such optimism would be an expression of trust,
not a way of solving the problems of humankind.
On the other hand: It would be great if I could be optimistic about
successfully fighting a disease that afflicts all humans, namely
"monocausalitis". (But again: there is no reason to be optimistic; we
had better be realistic.) Humans have the urge to explain everything
in a monocausal way. We are always looking for one reason only. The
philosophical sentence "nothing is without reason" (nihil est sine
ratione) is usually misunderstood as "nothing is without one reason".
Occam's razor, i.e., looking for the simplest solution to a problem,
is fine, as long as the solution is not too simple. We are apparently
victims of our evolutionary heritage, satisfied only if one and only
one cause of a problem is identified (or claimed).
In understanding biological processes, for instance brain processes,
and how they control the "mindworks", we had better free ourselves
from this monocausal trap. I am not only referring to the problem of
the many hidden variables that have to be accepted in any analysis of
a biological process and that create the typical headache of an
experimenter — it is never possible to control every variable — but
also to a structural problem. Biological phenomena can be better
understood if multicausality is accepted as a guiding principle. In
particular, I would like to promote "complementarity as a generative
principle". In quantum mechanics, to the best of my knowledge,
complementarity is a descriptive principle; in biology it is a
creative principle. Just one example: it does not make much sense to
explain human behaviour only on a genetic basis; genetic and
environmental information have to come together to form, for instance,
the matrix of our brain. This and many other examples are so
self-evident that it is even embarrassing to refer to them.
But still, if one looks at the optimism expressed about what may save
our world or give the final insight into Mother Nature's tricks, we
are confronted with monocausal solutions. Possibly, if we accept our
evolutionary heritage, the burden of "monocausalitis", we may overcome
this disease, at least partially.
SETH
LLOYD
Quantum
Mechanical Engineer, MIT, Author, Programming the Universe

Once and Future Optimism
I
am optimistic about the past. It's looking better and better
every day. A couple of hundred years from now, when
the Greenland and Antarctic ice caps have melted and sea levels
have risen by two hundred feet, our genetically engineered descendants
will be sitting by their ocean-front property in Nevada reminiscing
about the high old times in those now submerged cities of New
York, London, and Tokyo. From their perspective, the past
is really going to look good.
I'm
also optimistic about the future. It is well within our power
as a species to avert the environmental catastrophe envisaged
in the previous paragraph. Prudent investment in carbon-conserving
technologies and economic strategies can postpone
or prevent entirely the more extreme consequences of global warming. I
am hopeful that policy makers will realize that relatively small
sacrifices made voluntarily now, can prevent much larger, involuntary
sacrifices later.
Let's
be realistic: we human beings are addicted to damaging ourselves
and others. When one rationale for conflict loses force, we seek
out a new one, no matter how trivial, for prolonging the strife. Nonetheless,
we are capable of pulling back from the brink. During the
cold war, the strategy pursued by the United States and the Soviet
Union was officially called MAD, or Mutually Assured Destruction:
anyone who started a nuclear war was guaranteed to be annihilated
themselves. While risky in the long run — if the
radar confuses a flock of geese with an incoming missile, we're
all dead — the strategy worked for long enough for our leaders
to realize just how mad MAD was, and to begin to disarm. We
are currently on the brink of a major environmental catastrophe;
but there is still time to pull back.
Even
if global warming does flood most of the world's major cities,
human beings will survive and adapt. Just how they will
adapt, we can't predict: but they will. Technology got
us into this mess in the first place by providing the wherewithal
for modern industrial society. I am optimistic that our descendants
will develop technologies to cope with whatever mess we leave
them. The technologies for survival into the twenty-third
century need not be high technologies: simple, low technologies
of water and fuel conservation will suffice. If we're careful
with our basic resources, there should be enough left over to
keep on playing video games.
We
need not even leave the world a mess. The key to using resources
wisely is to distribute them fairly. If only because the
global distribution of resources such as money and energy is
currently so skewed, I am guardedly optimistic that our increasingly
globalized society can make progress towards a world in which
each human being has equal access to food, clean water, education,
and political representation. This optimism is tempered
by the acknowledgement that the world's 'haves' have little motivation
to share the world's resources with its 'have-nots'. We are unaccustomed
to thinking of democracy as a 'technology,' but that is what
it is: a systematic arrangement of human beings into a social
machine that functions better in many respects than the social
machine of totalitarianism. The real technology that we
currently require is not a more fuel-efficient SUV, but rather
a political system that gives each human being on earth a voice
in policy.
Finally,
I am wildly optimistic about the future of scientific ideas.
Wherever I travel in the world — first, second, or third —
I meet young scientists whose ideas blow me away. The internet
distributes cutting edge scientific work much more widely and
cheaply than ever before. As a result, the fundamental
intellectual equality of human beings is asserting itself in
a remarkable way: people are just as smart in Peru and Pakistan
as they are in London and Los Angeles, and those people can now
participate in scientific inquiry with far greater effectiveness
than ever before. Human beings are humanity's greatest
resource, and when those humans start becoming scientists, watch
out!
ELIZABETH
F. LOFTUS
Psychologist, University of California,
Irvine

The
Importance Of Innocence
"I don't think a lot of people realize how important innocence
is to innocent people." These are haunting words spoken in
the film "A Cry in the Dark."
The
wrongful conviction of innocent people has been a serious problem
in our society. It is a problem that we are now becoming
acutely aware of through the release of individuals who were shown
to be actually innocent by DNA testing. One happy consequence
of these sad cases is the advent of a number of "innocence
projects," typically operated out of law schools and dedicated
to the freeing of those who were wrongfully convicted.
I
wish I could say that I was optimistic that the problem of wrongful
convictions will virtually disappear, sort of like polio. I can't.
But I am optimistic that the problem of wrongful convictions will
become smaller than it once was. Here's why. Just
as a plane crash leads to a microscopic analysis of what went
wrong, so these cases of proven wrongful conviction have been dissected
to determine what went wrong. The answer in the majority of cases
is faulty memory. In a recent case, a rape victim misidentified
a man as her attacker — a mistake of faulty memory.
Readers can find out more about how these kinds of errors happen
by reviewing the cases on the website of the Innocence Project.
The
mistaken identification by the rape victim, and others similarly
situated, comes as no surprise to scientists who have studied
eyewitness memory. We have learned a great deal about what it
is about our system that promotes these tragic errors. And finally
our government is listening, a price paid by the hundreds of individuals
who suffered through years of imprisonment and are now free.
The Department of Justice convened a committee to make recommendations
to law enforcement for how witnesses and victims should be handled
to preserve that valuable "memory evidence." Many
states have recently adopted a package of reforms for how witnesses
are interviewed and lineups are conducted. It has been a triumph
of scientific discovery — a science that has taught us much
about the workings of the human mind, and also has made a difference
in the way our world works. But the science has only scratched
the surface, and has layers upon layers to go. During this
period we will see more memory science, more reforms in the justice
system, and we will have fewer errors.
As we invest in the science, and make more progress, society needs
to keep one important idea in mind. Memory, like liberty,
must be cherished, nourished, and protected. Without one,
we can easily lose the other.
MAX
TEGMARK
Physicist, MIT; Researcher, Precision Cosmology

We're
Not Insignificant After All
When
gazing up on a clear night, it's easy to feel insignificant.
Since our earliest ancestors admired the stars, our human egos
have suffered a series of blows. For starters, we're smaller
than we thought. Eratosthenes showed that Earth was larger than
millions of humans, and his Hellenic compatriots realized that
the solar system was thousands of times larger still. Yet for
all its grandeur, our Sun turned out to be merely one rather
ordinary star among hundreds of billions in a galaxy that in
turn is merely one of billions in our observable universe, the
spherical region from which light has had time to reach us during
the 14 billion years since our big bang. Then there are probably
more (perhaps infinitely many) such regions. Our lives are small
temporally as well as spatially: if this 14 billion year cosmic
history were scaled to one year, then 100,000 years of human history
would be 4 minutes and a 100 year life would be 0.2 seconds.
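The scaling arithmetic behind those two figures is easy to check; here
is a quick Python sketch, taking 14 billion years as the age of the
universe, as in the text:

    # Quick check of the cosmic-calendar scaling used above (illustrative only).
    cosmic_history_years = 14e9              # age of the universe assumed in the essay
    seconds_per_year = 365.25 * 24 * 3600
    scale = 1.0 / cosmic_history_years       # compress the whole history into one year

    human_history_minutes = 100_000 * scale * seconds_per_year / 60
    human_life_seconds = 100 * scale * seconds_per_year

    print(f"{human_history_minutes:.1f} minutes")   # ~3.8 minutes for 100,000 years
    print(f"{human_life_seconds:.2f} seconds")      # ~0.23 seconds for a 100 year life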
Further deflating our hubris, we've learned that we're not that
special either. Darwin taught us that we're animals, Freud taught
us that we're irrational, machines now outpower us, and just
last month, Deep Fritz outsmarted our chess champion Vladimir
Kramnik. Adding insult to injury, cosmologists have found that
we're not even made out of the majority substance.
The more I learned about this, the less significant I felt. Yet
in recent years, I've suddenly turned more optimistic about our
cosmic significance. I've come to believe that advanced evolved
life is very rare, yet has huge growth potential, making our place
in space and time remarkably significant.
The
nature of life and consciousness is of course a hotly debated
subject. My guess is that these phenomena can exist much more
generally than in the carbon-based examples we know of.
I believe that consciousness is, essentially, the way information
feels when being processed. Since matter can be arranged to process
information in numerous ways of vastly varying complexity, this
implies a rich variety of levels and types of consciousness. The
particular type of consciousness that we subjectively know is then
a phenomenon that arises in certain highly complex physical systems
that input, process, store and output information. Clearly, if
atoms can be assembled to make humans, the laws of physics also
permit the construction of vastly more advanced forms of sentient
life. Yet such advanced beings can probably only come about in
a two-step process: first intelligent beings evolve through natural
selection, then they choose to pass on the torch of life by building
more advanced consciousness that can further improve itself.
Unshackled by the limitations of our human bodies, such advanced
life could rise up and eventually inhabit much of our observable
universe. Science fiction writers, AI-aficionados and transhumanist
thinkers have long explored this idea, and to me the question isn't
if it can happen, but if it will happen.
My
guess is that evolved life as advanced as ours is very rare.
Our universe contains countless other solar systems, many of
which are billions of years older than ours. Enrico Fermi pointed
out that if advanced civilizations have evolved in many of them,
then some have a vast head start on us — so where are they?
I don't buy the explanation that they're all choosing to keep
a low profile: natural selection operates on all scales, and
as soon as one life form adopts expansionism (sending off rogue
self-replicating interstellar nanoprobes, say), others can't
afford to ignore it. My personal guess is that we're the only
life form in our entire observable universe that has advanced
to the point of building telescopes, so let's explore that hypothesis.
It was the cosmic vastness that made me feel insignificant to
start with. Yet those galaxies are visible and beautiful to us — and
only us. It is only we who give them any meaning, making our
small planet the most significant place in our observable universe.
Moreover, this brief century of ours is arguably the most significant
one in the history of our universe: the one when its meaningful
future gets decided. We'll have the technology to either self-destruct
or to seed our cosmos with life. The situation is so unstable that
I doubt that we can dwell at this fork in the road for more than
another century. If we end up going the life route rather than
the death route, then in a distant future, our cosmos will be teeming
with life that all traces back to what we do here and now. I have
no idea how we'll be thought of, but I'm sure that we won't be
remembered as insignificant.
SIMON
BARON-COHEN
Psychologist,
Autism Research Centre, Cambridge University; Author, The
Essential Difference

The Rise of Autism and The Digital Age
Whichever
country I travel to, attending conferences on the subject of
autism, I hear the same story: autism is on the increase.
Thus in 1978 the rate of autism was 4 in 10,000 children, but
today (according to a Lancet article in 2006) it is 1%. No one
quite knows what this increase is due to, though conservatively
it is put down to better recognition, better services, and broadening
the diagnostic category to include milder cases such as Asperger
Syndrome. It is neither proven nor disproven that the increase
might reflect other factors, such as genetic change or some environmental
(e.g., hormonal) change. And for scientists to answer the question
of what is driving this increase will require imaginative research
comparing historical as well as cross-cultural data.
Some may throw up their hands at this increase in autism and
feel despair and pessimism. They may feel that the future is
bleak for all of these newly diagnosed cases of autism. But I
remain optimistic that for a good proportion of them, it has
never been a better time to have autism.
Why? Because there is a remarkably good fit between the autistic
mind and the digital age. The digital revolution brought us computers,
but this age is remarkably recent. It was only in 1953 that IBM
produced their first computer, but a mere 54 years later many
children now have their own computer.
Computers operate on the basis of extreme precision, and so does
the autistic mind. Computers deal in black and white binary code,
and so does the autistic mind. Computers follow rules, and so
does the autistic mind. Computers are systems, and the autistic
mind is the ultimate systemizer. The autistic mind is only interested
in data that is predictable and lawful. The inherently ambiguous
and unpredictable world of people and emotions is a turn off
for someone with autism, but a rapid series of clicks of the
mouse that leads to the same result every time that sequence
is performed is reassuringly attractive. Many children with autism
develop an intuitive understanding of computers in the same way
that other children develop an intuitive understanding of people.
So, why am I optimistic? For this new generation of children
with autism, I anticipate that many of them will find ways to
blossom, using their skills with digital technology to find employment,
to find friends, and in some cases to innovate. When I think
back to the destiny of children with autism some 50 years ago,
I imagine there were relatively fewer opportunities for such
children. When I think of today's generation of children with
autism, I do not despair. True, many of them will have a rocky
time during their school years, whilst their peer group shuns
them because they cannot socialize easily. But by adulthood,
a good proportion of these individuals will have not only found
a niche in the digital world, but will be exploiting that niche
in ways that may bring economic security and respect from their
peer group, and make the individual feel valued for the contribution
they are able to make.
Of course, such opportunities may only be relevant to those individuals
with autism who have language and otherwise normal intelligence,
but this is no trivial subgroup. For those more severely affected
by language delay and learning difficulties, the digital age
may offer less. Though even for this subgroup I remain optimistic
that new computer-based teaching methods will have an appeal
that can penetrate the wall that separates autism from the social
world. The autistic mind — at any level of IQ — latches onto
those aspects of the environment that provide predictability,
and it is through such channels that we can reach in to help.