"WHAT HAVE YOU CHANGED YOUR MIND ABOUT?" |
|
LAWRENCE
KRAUSS
Physicist,
Case Western Reserve University; Author, Atom

What
is the Universe Made of and How Will it End?
Like
99% of particle physicists and 95% of cosmologists (perhaps
98% of theorists and 90% of observers, to be more specific),
I was relatively certain that there was precisely enough
matter in the universe to make it geometrically flat. What
does geometrically flat mean? Well, according to general
relativity it means there is a precise balance between the
positive kinetic energy associated with the expansion of
space, and the negative potential energy associated with
the gravitational attraction of matter in the universe so
that the total energy is precisely zero. This is not only
mathematically attractive, but in fact the only theory we
have that explains why the universe looks the way it does
today tends to predict a flat universe today.
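To make the balance concrete, here is the standard textbook version of the argument in Newtonian form (a sketch supplied for illustration, not taken from the essay itself). For a shell of matter of mass m at radius R, receding at the Hubble rate so that v = HR, in a universe of average density \rho, demanding that the total energy vanish picks out one special density, and a geometrically flat universe is one whose density equals it:

\[
E \;=\; \tfrac{1}{2}\, m H^{2} R^{2} \;-\; \frac{G m}{R}\,\frac{4}{3}\pi R^{3}\rho \;=\; 0
\quad\Longrightarrow\quad
\rho_{\mathrm{crit}} \;=\; \frac{3H^{2}}{8\pi G},
\qquad
\Omega \;\equiv\; \frac{\rho}{\rho_{\mathrm{crit}}} \;=\; 1 .
\]

In these terms, "precisely enough matter to make the universe geometrically flat" simply means \Omega = 1.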
Now,
the only problem with this prediction is that visible matter
in the universe only accounts for a few percent of the total
amount of matter required to make the universe flat. Happily,
however, during the period from 1970 or so to the early 1990's
it had become abundantly clear that our galaxy, and indeed
all galaxies are dominated by 'dark matter'... material that
does not shine, or, as far as we can tell, interact electromagnetically. This
material, which we think is made up of a new type of elementary
particle, accounts for at least 10 times as much matter as
can be accounted for in stars, hot gas, etc. With the inference
that dark matter existed in such profusion, it was natural
to suspect that there was enough of it to account for a flat
universe.
The
only problem was that the more our observations of the universe
improved, the less evidence there appeared to be that there
was enough dark matter to result in a flat universe. Moreover,
all the other indicators of cosmology, from the age of the universe to the data on large-scale structure, began to suggest that a flat universe dominated by dark matter was inconsistent with observation. In 1995, this led my colleague Mike Turner and me to suggest that the only way a flat universe
could be consistent with observation was if most of the energy,
indeed almost 75% of the total energy, was contributed not
by matter, but by empty space!
As
heretical as our suggestion was, to be fair, I think we were
being more provocative than anything, because the one thing
that everyone knew was that the energy of empty space had
to be precisely zero. The alternative, which would
have resulted in something very much like the 'Cosmological
Constant' first proposed by Einstein when he incorrectly
thought the universe was static and needed some exotic new
adjustment to his equations of general relativity so that
the attractive force of gravity was balanced by a repulsive
force associated with empty space, was just too ugly to imagine.
And
then, in 1998, two teams measuring the recession velocity of distant galaxies, using observations of exploding stars within them to probe their distance from us, simultaneously discovered something amazing. The expansion of the universe seemed to be speeding up with time, not slowing down,
as any sensible universe should be doing! Moreover,
if one assumed this acceleration was caused by a new repulsive
force throughout empty space that would be caused if the
energy of empty space was not precisely zero, then the amount
of extra energy needed to produce the observed acceleration
was precisely the amount needed to account for a flat universe!
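In round numbers (the commonly quoted modern values, given here for illustration rather than drawn from the essay), the budget adds up just as described: a few percent in ordinary matter, roughly a quarter in dark matter, and the remaining seventy-odd percent in the energy of empty space,

\[
\Omega_{\text{ordinary}} \;+\; \Omega_{\text{dark matter}} \;+\; \Omega_{\text{empty space}}
\;\approx\; 0.05 \;+\; 0.25 \;+\; 0.70 \;=\; 1 .
\]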
Now
here is the really weird thing. Within a year after
the observation of an accelerating universe, even though
the data was not yet definitive, I and pretty well everyone
else in the community who had previously thought there was
enough dark matter to result in a flat universe, and who
had previously thought the energy of empty space must be precisely zero, had completely changed our minds...
All of the signals were just too overwhelming to continue
to hold on to our previous rosy picture... even if the alternative
was so crazy that none of our fundamental theories could
yet account for it.
So
we are now pretty sure that the dominant energy-stuff in
our universe isn't normal matter, and isn't dark matter,
but rather is associated with empty space! And what
is worse (or better, depending upon your viewpoint) is that
our whole picture of the possible future of the universe
has changed. An accelerating universe will carry away
almost everything we now see, so that in the far future our
galaxy will exist alone in a dark, and seemingly endless
void....
And
that is what I find so satisfying about science. Not
just that I could change my own mind because the evidence
of reality forced me to... but that the whole community could
throw out a cherished notion, and so quickly! That
is what makes science different than religion, and that is
what makes it worth continuing to ask questions about the
universe ... because it never fails to surprise us. |
STEPHEN
M. KOSSLYN
Psychologist,
Harvard University; Author, Wet Mind

The
World in the Brain
I
used to believe that we could understand psychology at different
levels of analysis, and events at any one of the levels could
be studied independently of events at the other levels. For
example, one could study events at the level of the brain (and
seek answers in terms of biological mechanisms), the level
of the person (and seek answers in terms of the contents of
thoughts, beliefs, knowledge, and so forth), or the level of
the group (and seek answers in terms of social interactions).
This approach seemed reasonable; the strategy of "divide
and conquer" is a cornerstone in all of science, isn't
it? In fact, virtually all introductory psychology textbooks
are written as if events at the different levels are largely
independent, with separate chapters (that only rarely include
cross-references to each other) on the brain, perception, memory,
personality, social psychology, and so on.
I've changed my mind. I don't think it's possible to understand
events at any one level of analysis without taking into account
what occurs at other levels. In particular, I'm now convinced
that at least some aspects of the structure and function of the
brain can only be understood by situating the brain in a specific
cultural context. I'm not simply saying that the brain has evolved
to function in a specific type of environment (an idea that forms
a mainstay of evolutionary psychology and some areas of computer
vision, where statistics of the natural environment are used
to guide processing). Rather, I'm saying that to understand how
any specific brain functions, we need to understand how that
person was raised, and currently functions, in the surrounding
culture.
Here's my line of reasoning. Let's begin with a fundamental fact:
The genes, of which we have perhaps only some 30,000, cannot
program us to function equally effectively in every possible
environment. Hence, evolution has licensed the environment to
set up and configure each individual's brain, so that it can
work well in that context. For example, consider stereovision.
We all know about stereo in audition; the sound from each of
two loudspeakers has slightly different phases, so the listener's
brain glues them together to provide the sense of an auditory
panorama. Something similar is at work in vision. In stereovision,
the slight disparity in the images that reach the two eyes is a cue for how far away objects are. If you're focused on an object directly in front of you, your eyes will converge slightly. Aside from the exact point of focus, the rest of the image will strike slightly different places on the two retinas (the surfaces at the back of each eye that convert light into neural impulses), and the
brain uses the slight disparities to figure out how far away
something is.
There are two important points here. First, this stereo process — of
computing depth on the basis of the disparities in where images
strike the two retinas — depends on the distance between
the eyes. And second, and this is absolutely critical, there's
no way to know at the moment of conception how far apart a person's
eyes are going to be, because that depends on bone growth — and
bone growth depends partly on the mother's diet and partly on
the infant's diet.
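A back-of-the-envelope version of the geometry makes this dependence explicit (a simplified pinhole-style sketch under the usual small-angle assumption, not Kosslyn's own formulation). The disparity d between where an object lands on the two retinas grows with the separation B between the eyes and shrinks with the object's distance Z,

\[
d \;\approx\; \frac{f\,B}{Z}
\qquad\Longrightarrow\qquad
Z \;\approx\; \frac{f\,B}{d},
\]

where f stands for the focal length of the eye. Recovering distance from disparity therefore requires, in effect, knowing B, and B is fixed only after birth, by bone growth.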
So, given that bone growth depends partly on the environment,
how could the genes set up stereovision circuits in the brain?
What the genes did is really clever: Young children (peaking
at about age 18 months) have more connections among neurons than
do adults; in fact, until about age eight, children have about twice as many neural connections as they will have as adults.
But only some of these connections provide useful information.
For example, when the infant reaches, only the connections from
some neurons will correctly guide reaching. The brain uses a
process called pruning to get rid of the useless connections.
The connections that turn out to work, with the distance between
the eyes the infant happens to have, would not be the ones that
would work if the mother did not have enough calcium, or the
infant hadn't had enough of various dietary supplements.
This is a really elegant solution to the problem that the genes
can't know in advance how far apart the eyes will be. To cope
with this problem, the genes overpopulate the brain, giving us
options for different environments (where the distance between
eyes and length of the arms are part of the brain's "environment," in
this sense), and then the environment selects which connections
are appropriate. In other words, the genes take advantage of
the environment to configure the brain.
This overpopulate-and-select mechanism is not limited to stereovision.
In general, the environment sets up the brain (above and beyond
any role it may have had in the evolution of the species), configuring
it to work well in the world a person inhabits. And by environment
I'm including everything outside the brain — including
the social environment. For example, it's well known that children
can learn multiple languages without an accent and with good grammar, if they are exposed to those languages before puberty.
But after puberty, it's very difficult to learn a second language
so well. Similarly, when I first went to Japan, I was told not
even to bother trying to bow, that there were something like
a dozen different bows and I was always going to "bow with
an accent" — and in my case the accent was so thick
that it was impenetrable.
The notion is that a variety of factors in our environment, including
in our social environment, configure our brains. It's true for
language, and I bet it's true for politeness as well as a raft
of other kinds of phenomena. The genes result in a profusion
of connections among neurons, which provide a playing field for
the world to select and configure so that we fit the environment we inhabit. The world comes into our head, configuring
us. The brain and its surrounding environment are not as separate
as they might appear.
This perspective leads me to wonder whether we can assume that
the brains of people living in different cultures process information
in precisely the same ways. Yes, people the world over have much
in common (we are members of the same species, after all), but
even small changes in the wiring may lead us to use the common
machinery in different ways. If so, then people from different
cultures may have unique perspectives on common problems, and
be poised to make unique contributions toward solving such problems.
Changing my mind about the relationship between events at different
levels of analysis has led me to change fundamental beliefs.
In particular, I now believe that understanding how the surrounding
culture affects the brain may be of more than merely "academic
interest." |
GARY
KLEIN
Research
Psychologist; Founder, Klein Associates; Author, The Power
of Intuition

Exchanging
Your Mind
It's
generally a bad idea to change your mind and an even worse
idea to do it publicly. Politicians who get caught changing
their minds are labeled "flip-floppers." When managers
change their minds about what they want they risk losing credibility
and they create frustration in subordinates who find that much
of their work has now been wasted. Researchers who change their
minds may be regarded as sloppy, shooting from the hip rather
than delaying publication until they nail down all the loose
ends in their data.
Clearly
the Edge Annual Question for 2008 carries with it
some dangers in disclosure: "What have you changed
your mind about? Why?" Nevertheless, I'll take the bait
and describe a case where I changed my mind about the nature
of the phenomenon I was studying.
My
colleagues Roberta Calderwood, Anne Clinton-Cirocco, and I
were investigating how people make decisions under time pressure.
Obviously, under time pressure people can't canvass all the
relevant possibilities and compare them along a common set
of dimensions. So what are they doing instead?
I
thought I knew what happened. Peer Soelberg had investigated
the job-choice strategy of students. In most cases they
quickly identified a favorite job option and evaluated it by
comparing it to another option, a choice comparison, trying
to show that their favorite option was as good as or better
than this comparison case on every relevant dimension. This
strategy seemed like a very useful way to handle time pressure. Instead
of systematically assessing a large number of options, you
only have to compare two options until you're satisfied that
your favorite dominates the other.
To
demonstrate that people used this strategy to handle time pressure
I studied fireground commanders. Unhappily, the firefighters
had not read the script. We conducted interviews with them
about tough cases, probing them about the options they considered. And
in the great majority of cases (about 81%), they insisted that
they only considered one option.
The
evidence obviously didn't support my hypothesis. Still, I wasn't
convinced that my hypothesis was wrong. Perhaps we hadn't
phrased the questions appropriately. Perhaps the firefighters'
memories were inaccurate. At this point I hadn't changed my
mind. I had just conducted a study that didn't work out.
People
are very good at deflecting inconvenient evidence. There are
very few facts that can't be explained away. Facts rarely
force us to change our minds.
Eventually
my frustration about not getting the results I wanted was replaced
by a different emotion: curiosity. If the firefighters weren't comparing options, just what were they doing?
They
described how they usually knew what to do once they sized
up the situation. This claim generated two mysteries: How
could the first option they considered have such a high likelihood
of succeeding? And how could they evaluate an option
except by comparing it to another?
Going
back over the data we resolved each of these mysteries. They
were using their years of experience to rapidly size up situations. The
patterns they had acquired suggested typical ways of reacting.
But they still needed to evaluate the options they identified. They
did so by imagining what might happen if they carried out the
action in the context of their situation. If it worked,
they proceeded. If it almost worked then they looked for
ways to repair any weaknesses or else looked at other typical
reactions until they found one that satisfied them.
Together,
this forms a recognition-primed decision strategy that is based
on pattern recognition but tests the results using deliberate
mental simulation. This strategy is very different from
the original hypothesis about comparing the favorite versus
a choice comparison.
I
had an advantage in that I had never received any formal training
in decision research. One of my specialty areas was the nature
of expertise. Therefore, the conceptual shift I made was
about peripheral constructs, rather than core constructs
about how decisions are made. The notions of Peer Soelberg
that I was testing weren't central to my understanding of skilled
performance.
Changing
one's mind isn't merely revising the numerical value of a fact
in a mental database or changing the beliefs we hold. Changing
my mind also means changing the way I will then use my mind
to search for and interpret facts. When I changed my understanding
of how the fireground commanders were making decisions I altered
the way I viewed experts and decision makers. I altered
the ways I collected and analyzed data in later studies. As
a result, I began looking at events with a different mind,
one that I had exchanged for the mind I previously had been
using. |
ALAN
KRUEGER
Bendheim
Professor of Economics and Public Affairs at Princeton University; Author, What Makes a Terrorist: Economics
and the Roots of Terrorism

I
used to think the labor market was very competitive, but
now I think it is better characterized by monopsony, at
least in the short run. |
SETH
LLOYD
Quantum
Mechanical Engineer, MIT, Author, Programming
the Universe
I
have changed my mind about technology.
I
used to take a dim view of technology. One should live
one's life in a simple, low-tech fashion, I thought. No cell
phone, keep off the computer, don't drive. No nukes, no remote
control, no DVD, no TV. Walk, read, think — that was the proper
path to follow.
What
a fool I was! A dozen years ago or so, by some bizarre accident,
I became a professor of Mechanical Engineering at MIT. I had
never had any training, experience, or education in engineering. My
sole claim to engineering expertise was some work on complex
systems and a few designs for quantum computers. Quantum-mechanical
engineering was in its early days then, however, and MIT needed
a quantum mechanic. I was ready to answer the call.
It
was not my fellow professors who converted me to technology,
uber-techno-nerds though they were. Indeed, my colleagues
in Mech. E. were by and large somewhat suspicious of me, justifiably
so. I was wary of them in turn, as one often is of co-workers
who are hugely more knowledgeable than one is oneself. (Outside
of the Mechanical Engineering department, by contrast, I found
large numbers of kindred souls: MIT was full of people whose
quanta needed fixing, and as a certified quantum mechanic,
I was glad to oblige.) No, it was not the brilliant technologists
who filled the faculty lunchroom who changed my mind. Rather,
it was the students who had come to have me teach them about
engineering who taught me to value technology.
Your
average MIT undergraduate is pretty technologically adept.
In the old days, freshmen used to arrive at MIT having disassembled
and reassembled tractors and cars; slightly later on, they
arrived having built ham radios and guitar amplifiers; more
recently, freshmen and fresh women were showing up with a scary
facility with computers. Nowadays, few of them have used a
screwdriver (except maybe to install some more memory in their
laptop), but they are eager to learn how robots work, and raring
to build one themselves.
When
I stepped into my first undergraduate classroom, a controls
laboratory, I knew just about as little about how to build
a robot as the nineteen and twenty year olds who were expectantly
sitting, waiting for me to teach them how. I was terrified.
Within half an hour, the basis for my terror was confirmed.
Not only did I know as little as the students, in many cases
I knew significantly less: about a quarter of the students
knew demonstrably more about robotics than I, and were happy
to display their knowledge. I emerged from the first lab session
a sweaty mess, having managed to demonstrate my ignorance and
incompetence in a startling variety of ways.
I
emerged from the second lab session a little cooler.
There is no better way to learn, and learn fast, than to teach.
Humility actually turns out to have its virtues, too. It
turns out to be rather fun to admit one's ignorance, if that
admission takes the form of an appeal to the knowledge of all
assembled. In fact, it turned out that, either through my training
in math and physics, or through a previous incarnation, I possessed
more intuitive knowledge of control theory than I had any right
to, given my lack of formal education on the subject. Finally,
no student is more empowered than the one who has just correctly
told her professor that he is wrong, and showed him why her
solution is the right one.
In
the end, the experience of teaching the technology that I did
not know was one of the most intellectually powerful of my
life. In my mental ferment of trying to learn the material
faster and deeper than my students, I began to grasp concepts
and ways of looking at the world, of whose existence I had
no previous notion. One of the primary features of the lab
was a set of analog computers, boxy things festooned with dials
and plugs, and full of amplifiers, capacitors, and resistors,
that were used to simulate, or construct an analog of, the motors and loads that we were trying to control. In my feverish attempt to understand analog computers, I constructed a model for a quantum-mechanical analog computer that would operate
at the level of individual atoms. This model resulted in one
of my best scientific papers. In the end, scarily enough, my
student evaluations gave me the highest possible marks for
knowledge of the material taught.
And
technology? Hey, it's not so bad. When it comes to walking
in the rain, Goretex and fleece beat oilskin and wool hollow.
If we're not going to swamp our world in greenhouse gases,
we damn well better design dramatically more efficient cars
and power plants. And if I could contribute to technology
by designing and helping to build quantum computers and quantum
communication systems, so much the better. Properly conceived
and constructed technology does not hinder the simple life,
but helps it.
OK. So
I was wrong about technology. What's my next misconception?
Religion? God forbid. |
JOHN
MCCARTHY
Computer
Scientist; 1st Generation Artificial Intelligence Pioneer, Stanford
University

Attitudes
Trump Facts
I
have a collection of web pages on the sustainability of material
progress that treats many problems that have been proposed
as possible stoppers. I get email about the pages, both
unfavorable and favorable, mostly the latter.
I
had believed that the email would concern specific problems
or would raise new ones, e.g. "What about erosion of agricultural
land?"
There's
some of that, but overwhelmingly the email, both pro and con,
concerns my attitude, not my (alleged) facts. "How
can you be so blithely cornucopian when everybody knows ..." or "I'm
glad someone has the courage to take on all those doomsters."
It
seems, to my surprise, that people's attitude toward the future stems at least as much from personality as from opinions about
facts. People look for facts to support their attitudes — which
have earlier antecedents. |
ERNST
PÖPPEL
Neuroscientist,
Chairman, Board of Directors,
Human Science Center and
Department of Medical Psychology,
Munich University, Germany;
Author, Mindworks

Being
Caught In The Language Trap — Or Wittgenstein's
Straitjacket
When
I look at something, when I talk to somebody, when I write
a few sentences about "what I have changed my mind about
and why", the neuronal network in my brain changes all
the time and there are even structural changes in the brain.
Why is it that these changes don't come to mind all the time
but remain subthreshold? Certainly, if everything that goes on in the brain came to mind, and if there were no efficient mechanism of informational garbage disposal, we would end up in mental chaos (which sometimes happens in unfortunate cases of neuronal dysfunction). It is
only sometimes that certain events produce so much neuronal
energy and catch so much attention that a conscious representation
is made possible.
As
most neuronal information processing remains in mental darkness,
i.e. happens on an implicit level, it is in my view impossible
to make a clear statement why somebody changed his or her
mind about something. If somebody gives an explicit reason
for having changed the mind about something, I am very suspicious.
As "it thinks" all the time in my brain, and as
these processes are beyond voluntary control, I am much less
transparent to myself as I might want, and this is true for
everybody. Thus, I cannot give a good reason why I changed
my mind about a strong hypothesis or even belief or perhaps
a prejudice in my scientific work which I had until several
years ago.
A
sentence of Ludwig Wittgenstein from his Tractatus Logico-Philosophicus
(5.6) was like a dogma for me: "Die Grenzen meiner Sprache
bedeuten die Grenzen meiner Welt. — The limits of my
language signify the limits of my world " (my translation).
Now I react to this sentence with an emphatic "No!".
As
a neuroscientist I have to stay away from the language trap.
In our research we are easily misguided by words. Without
too much thinking we are referring to "consciousness",
to "free will", to "thoughts", to "attention",
to the "self", etc, and we give an ontological
status to these terms. Some people even start to look at
the potential site of consciousness or of free will in the
brain, or some people ask the "what is ..."
question that never can find an answer. The prototypical "what
is ..." question was formulated 1600 years ago by Augustinus
who said in the 11th book of his Confessions: "Quid
est ergo tempus? Si nemo ex me quaerat scio, si quaerenti explicare
velim nescio. — What is time? If nobody asks me, I know
it, but if I have to explain it to somebody, I don't know it" (my
translation).
Interestingly,
Augustinus made a nice categorical mistake by referring to "knowing" first on an implicit and then on an explicit level. This categorical mistake is still with us when we ask questions like "What is consciousness, free will, ...?"; one knows implicitly, but one does not know explicitly. As neuroscientists we have to focus
on processes in the brain which rarely or perhaps never map
directly onto such terms as we use them. Complexity reduction
in brains is necessary and it happens all the time, but the
goal of this reductive process is not such terms, which might be useful for our communication, but efficient action. This is what I think today, but why I came to this conclusion I don't know; there were probably several reasons that finally resulted in a shift of mind, i.e. the overcoming of Wittgenstein's straitjacket. |
SCOTT
SAMPSON
Chief
Curator, Utah Museum of Natural History; Associate Professor,
University of Utah; Host, Dinosaur Planet TV series

The
Death of the Dinosaurs
An
asteroid did it . . . .
Ok,
so this may not seem like news to you. The father-son
team of Luis and Walter Alvarez first put forth the asteroid
hypothesis in 1980 to account for the extinction of dinosaurs
and many other lifeforms at the end of the Mesozoic (about
65.5 million years ago). According to this now familiar
scenario, an asteroid about 10 km in diameter slammed into
the planet at about 100,000 km/hour. Upon impact, the
bolide disintegrated, vaporizing a chunk of the earth's crust
and propelling a gargantuan cloud of gas and dust high into
the atmosphere. This airborne matter circulated around
the globe, blocking out the sun and halting photosynthesis
for a period of weeks or months. If turning the lights
out wasn't bad enough, massive wild fires and copious amounts
of acid rain apparently ensued.
Put
simply, it was hell on Earth. Species succumbed in great
numbers and food webs collapsed the world over, ultimately
wiping out about half of the planet's biodiversity. Key
geologic evidence includes remnants of the murder weapon itself: iridium, an element that occurs in small amounts in the Earth's
crust but is abundant in asteroids, was found by the Alvarez
team to be anomalously abundant in a thin layer within Cretaceous-Tertiary
(K-T) boundary sediments at various sites around the world. In
1990 came the announcement of the discovery of the actual impact crater in the Gulf of Mexico. It seemed as if arguably the most
enduring mystery in prehistory had finally been solved. Unsurprisingly,
this hypothesis was also a media darling, providing a tidy
yet incredibly violent explanation to one of paleontology's
most perplexing problems, with the added bonus of a possible
repeat performance, this time with humans on the roster of
victims.
To
some paleontologists, however, the whole idea seemed just a
bit too tidy.
Ever
since the Alvarezes proposed the asteroid, or "impact
winter,"
hypothesis, many (at times the bulk of) dinosaur paleontologists
have argued for an alternative scenario to account for the K-T
extinction. I have long counted myself amongst the ranks
of doubters. It is not so much that I and my colleagues
have questioned the occurrence of an asteroid impact; supporting
evidence for this catastrophic event has been firmly established
for some time. At issue has been the timing of the event. Whereas
the impact hypothesis invokes a rapid extinction—on the
order of weeks to years—others argue for a more gradual
dying that spanned from one million to several million years. Evidence
cited in support of the latter view includes an end-Cretaceous
drop in global sea levels and a multi-million year bout of volcanism
that makes Mount St. Helens look like a brushfire.
Thus,
at present the debate has effectively been reduced to two alternatives. First
is the Alvarez scenario, which proposes that the K-T extinction
was a sudden event triggered by a single extraterrestrial bullet. Second
is the gradualist view, which proposes that the asteroid impact
was accompanied by two other global-scale perturbations (volcanism
and decreasing sea-level), and that it was only this combination
of factors acting in concert that decimated the end-Mesozoic
biosphere.
Paleontologists
of the gradualist ilk have argued that dinosaurs (and certain
other groups) were already on their way out well before the
K-T "big bang" occurred. Unfortunately, the
fossil record of dinosaurs is relatively poor for the last
stage of the Mesozoic and only one place on Earth — a
small swath of badlands in the Western Interior of North America — has
been investigated in detail. Several authors have argued
that the latest Cretaceous Hell Creek fauna, as it's called
(best known from eastern Montana), was depauperate relative
to earlier dinosaur faunas. In particular, comparisons have often been made with the ca. 75-million-year-old Late Cretaceous
Dinosaur Park Formation of southern Alberta, which has yielded
a bewildering array of herbivorous and carnivorous dinosaurs.
For
a long time, I regarded myself a card-carrying member of the
gradualist camp. However, at least two lines of evidence
have persuaded me to change my mind and join the ranks of the
sudden-extinction-precipitated-by-an-asteroid group.
First
is a growing database indicating that the terminal Cretaceous
world was not stressed to the breaking point, awaiting arrival
of the coup de grâce from outer space. With regard
to dinosaurs in particular, recent work has demonstrated that
the Hell Creek fauna was much more diverse than previously
realized. Second, new and improved stratigraphic age controls
for dinosaurs and other Late Cretaceous vertebrates in the
Western Interior indicate that ecosystems like those preserved in the Dinosaur Park Formation were not nearly as diverse as previously
supposed.
Instead,
many dinosaur species appear to have existed for relatively
short durations (< 1 million years), with some geologic
units preserving a succession of relatively short-lived faunas. So,
even within the well sampled Western Interior of North America
(let alone the rest of the world, for which we currently have
little hard data), I see no grounds for arguing that dinosaurs
were undergoing a slow, attritional demise. Other groups,
like plants, also seem to have been doing fine in the interval
leading up to that fateful day 65.5 million years ago. Finally,
extraordinary events demand extraordinary explanations, and
it does not seem parsimonious to make an argument for a lethal
cascade of agents when compelling evidence exists for a single
agent capable of doing the job on its own.
So
yes, as far as I'm concerned (at least for now), the asteroid
did it. |
PETER
SCHWARTZ
Futurist,
Business Strategist; Cofounder, Global Business Network, a
Monitor Company; Author, The Long Boom

In
the last few years I have changed my mind about nuclear
power. I used to believe that expanding nuclear power was
too risky. Now I believe that the risks of climate change
are much greater than the risks of nuclear power. As a
result we need to move urgently toward a new generation
of nuclear reactors.
What
led to the change of view? First I came to believe that the
likelihood of major climate related catastrophes was increasing
rapidly and that they were likely to occur much sooner than
the simple linear models of the IPCC indicated. My
analysis developed as a result of work we did for the defense
and intelligence community on the national security implications
of climate change. Many regions of the Earth are likely to
experience an increasing frequency of extreme weather events.
These catastrophic events include megastorms, super tornadoes, torrential rains and floods, extended droughts, and ecosystem disruptions, all added to steadily rising sea levels. It also
became clear that human induced climate change is ever more
at the causal center of the story.
Research
by climatologists like William Ruddiman indicates that the climate is more sensitive than previously thought to changes in human societies, ranging from agricultural practices like forest clearing and irrigated rice growing, to major plagues, to the use of fossil fuels.
Human societies have often gone to war as a result of the
ecological exhaustion of their local environments. So it
becomes an issue of war and peace. Will Vietnam simply roll
over and die when the Chinese dam what remains of the trickle
of the Mekong as an extended drought develops at its source
in the Tibetan highlands?
Even
allowing for much greater efficiency and a huge expansion
of renewable energy, the real fuel of the future is coal,
especially in the US, China and India. If all three go ahead
with their current plans on building coal fired electric
generating plants then that alone will over the next two
decades double all the CO2 that humankind has put into the
atmosphere since the industrial revolution began more than
two hundred years ago. And the only meaningful alternative
to coal is nuclear power. It is true that we can hope that
our ability to capture the CO2 from coal burning and sequester
it in various ways will grow, but it will take a decade or
more before that technology reaches commercial maturity.
At
the same time I also came to believe that risks of nuclear
power are less than we feared. That shift began with a trip
to visit the proposed nuclear waste depository at Yucca Mountain
in Nevada. A number of Edge folk went including
Stewart Brand, Kevin Kelly, Danny Hillis, and Pierre Omidyar.
When it became clear that very long term storage of waste
(e.g. 10,000 to 250,000 years) is a silly idea and not meaningfully realistic, we began to question many of the assumptions about
the future of nuclear power. The right answer to nuclear
waste is temporary storage for perhaps decades and then recycling
the fuel as much of the world already does, not sticking
it underground for millennia. We will likely need the fuel
we can extract from the waste.
There
are emerging technologies for both nuclear power and waste
reprocessing that will reduce safety risk, the amount of waste, and most especially the risk of nuclear weapons proliferation, as the new fuel cycle produces no plutonium, the offending
substance of concern. And the economics are increasingly
favorable as the French have demonstrated for decades. The
average French citizen produces 70% less CO2 than the average
American as a result. We have also learned that the long-term consequences of the worst nuclear accident in history, Chernobyl, were much less than feared.
So
the conclusion is that the risks of climate change are far
greater than the risks of nuclear power. Furthermore, human skill and knowledge in managing a nuclear system are only likely to grow with time, while the risks of climate change will grow as billions more people get rich and change the face of the planet with their demands for more stuff. Nuclear
power is the only source of electricity that we can now see
that is likely to enable the next three or four billion who
want what we all have to get what they want without radically
changing the climate of the Earth. |
MARCEL
KINSBOURNE, M.D.
Neurologist
& Cognitive Neuroscientist, The New School; Coauthor, Children's
Learning and Attention Problems

The
Impressionable Brain
When
the phenomenon of "mirror neurons" that fire both
when a specific action is perceived and when it is intended
was first reported, I was impressed by the research but skeptical
about its significance. Specifically, I doubted, and continue
to doubt, that these circuits are specific adaptations for
purposes of various higher mental functions. I saw mirror neurons
as simple units in circuits that represent specific actions,
oblivious as to whether they had been viewed when performed
by someone else, or represented as the goal of one's own intended
action (so-called reafference copy). Why have two separate
representations of the same thing when one will do? Activity
elsewhere in the brain represents who the agent is, self or
another. I still think that this is the most economical interpretation.
But from a broader perspective I have come to realize that
mirror neurons are not only less than meets the eye but also
more. Instead of being a specific specialization, they play
their role as part of a fundamental design characteristic of
the brain; that is, when percepts are activated, relevant intentions,
memories and feelings automatically fall into place.
External
event are "represented" by the patterns of neuronal
activity that they engender in sensory cortex. These representations
also incorporate the actions that the percepts potentially
afford. This "enactive coding"
or "common coding" of input implies a propensity in
the observer's brain to imitate the actions of others (consciously
or unconsciously). This propensity need not result in overt imitation.
Prefrontal cortex is thought to hold these impulses to imitate
in check. Nonetheless, the fact that these action circuits have been activated lowers their threshold by subtle increments as the experience in question is repeated over and over again, and the relative loading of synaptic weights in brain circuitry becomes correspondingly adjusted. Mirror neurons exemplify this type of functioning, which extends far beyond individual circuits to all cell assemblies that can form representations.
That
an individual is likely to act in the same ways that others
act is seen in the documented benefit for sports training of
watching experts perform. "Emotional contagion" occurs when someone witnesses the emotional expressions of another person and therefore experiences that mood state themselves. People's
viewpoints can subtly and unconsciously converge when their
patterns of neural activation match, in the total absence of
argument or attempts at persuasion. When people entrain with
each other in gatherings, crowds, assemblies and mobs, diverse individual views collapse into a unified group viewpoint. An
extreme example of gradual convergence might be the "Stockholm
Syndrome"; captives gradually adopt the worldview of their
captors. In general, interacting with others makes one converge
to their point of view (and vice versa). Much ink has been
spilled on the topic of the lamentable limitations of human
rationality. Here is one reason why.
People's
views are surreptitiously shaped by their experiences, and
rationality comes limping after, downgraded to rationalization.
Once opinions are established, they engender corresponding
anticipations. People actively seek those experiences that
corroborate their own self-serving expectations. This may be
why as we grow older, we become ever more like ourselves. Insights
become consolidated and biases reinforced when one only pays
attention to confirming evidence. Diverse mutually contradictory "firm
convictions" are the result. Science does take account
of the negative instance as well as the positive instance.
It therefore has the potential to help us understand ourselves,
and each other.
If
I am correct in my changed views as to what mirror neurons
stand for and how representation routinely merges perception,
action, memory and affect into dynamic reciprocal interaction,
these views would have a bearing on currently disputed issues.
Whether an effect is due to the brain or the environment would
be moot if environmental causes indeed become brain causes,
as the impressionable brain resonates with changing circumstances.
What we experience contributes mightily to what we are and
what we become. An act of kindness has consequences for the
beneficiary far beyond the immediate benefit. Acts of violence
inculcate violence and contaminate the minds of those who stand
by and watch. Not only our private experiences, but also the
experiences that are imposed on us by the media, transform
our predispositions, whether we want them to or not. The implications
for child rearing are obvious, but the same implications apply
beyond childhood to the end of personal time.
What
people experience indeed changes their brain, for better and
for worse. In turn, the changed brain changes what is experienced.
Regardless of its apparent stability over time, the brain is
in constant flux, and constantly remodels. Heraclitus was right: "You
shall not go down twice to the same river". The river
will not be the same, but for that matter, neither will you.
We are never the same person twice. The past is etched into
the neural network, biasing what the brain is and does in the
present. William Faulkner recognized this: "The past is
never dead. In fact, it's not even past". |
KEVIN
KELLY
Editor-At-Large,
Wired; Author, New Rules for the New Economy

Much
of what I believed about human nature, and the nature of
knowledge, has been upended by the Wikipedia. I knew that
the human propensity for mischief among the young and bored — of
which there were many online — would make an encyclopedia
editable by anyone an impossibility. I also knew that even
among the responsible contributors, the temptation to exaggerate
and misremember what we think we know was inescapable, adding
to the impossibility of a reliable text. I knew from my own
20-year experience online that you could not rely on what
you read in a random posting, and believed that an aggregation
of random contributions would be a total mess. Even unedited
web pages created by experts failed to impress me, so an
entire encyclopedia written by unedited amateurs, not to
mention ignoramuses, seemed destined to be junk.
Everything
I knew about the structure of information convinced me that
knowledge would not spontaneously emerge from data, without
a lot of energy and intelligence deliberately directed to transforming
it. All the attempts at headless collective writing I had been
involved with in the past only generated forgettable trash.
Why would anything online be any different?
So
when the first incarnation of the Wikipedia launched in 2000
(then called Nupedia) I gave it a look, and was not surprised
that it never took off. There was a laborious process of top-down
editing and re-writing that discouraged a would-be random contributor.
When the back-office wiki created to facilitate the administration
of the Nupedia text became the main event and anyone could
edit as well as post an article, I expected even less from
the effort, now re-named Wikipedia.
How
wrong I was. The success of the Wikipedia keeps surpassing
my expectations. Despite the flaws of human nature, it keeps
getting better. Both the weakness and virtues of individuals
are transformed into common wealth, with a minimum of rules
and elites. It turns out that with the right tools it is easier
to restore damaged text (the revert function on Wikipedia) than to create damaged text (vandalism) in the first place, and so the good-enough article prospers and continues. With the right
tools, it turns out the collaborative community can outpace
the same number of ambitious individuals competing.
It
has always been clear that collectives amplify power — that
is what cities and civilizations are — but what's been
the big surprise for me is how minimal the tools and oversight need to be. The bureaucracy of Wikipedia is so small as to be nearly invisible. It's the Wiki's embedded code-based governance, versus manager-based governance, that is the real news. Yet
the greatest surprise brought by the Wikipedia is that we still
don't know how far this power can go. We haven't seen the limits
of wiki-ized intelligence. Can it make textbooks, music and
movies? What about law and political governance?
Before
we say, "Impossible!" I say, let's see. I know all
the reasons why law can never be written by know-nothing amateurs.
But having already changed my mind once on this, I am slow
to jump to conclusions again. The Wikipedia is impossible,
but here it is. It is one of those things impossible in theory,
but possible in practice. Once you confront the fact that it
works, you have to shift your expectation of what else that
is impossible in theory might work in practice.
I
am not the only one who has had his mind changed about this.
The reality of a working Wikipedia has made a type of communitarian
socialism not only thinkable, but desirable. Along with other
tools such as open-source software and open-source everything,
this communitarian bias runs deep in the online world.
In
other words it runs deep in this young next generation. It
may take several decades for this shifting world perspective
to show its full colors. When you grow up knowing rather
than admitting that such a thing as the Wikipedia
works; when it is obvious to you that open source
software is better; when you are certain that sharing your
photos and other data yields more than safeguarding them — then
these assumptions will become a platform for a yet more radical
embrace of the commonwealth. I hate to say it but there is
a new type of communism or socialism loose in the world, although
neither of these outdated and tinged terms can accurately capture
what is new about it.
The
Wikipedia has changed the mind of this fairly steady individualist, and led me toward this new social sphere. I am now much more
interested in both the new power of the collective, and the
new obligations individuals have toward the collective.
In addition to expanding civil rights, I want to expand civil
duties. I am convinced that the full impact of the Wikipedia
is still subterranean, and that its mind-changing power is
working subconsciously on the global millennial generation,
providing them with an existence proof of a beneficial hive
mind, and an appreciation for believing in the impossible.
That's
what it's done for me. |