"What is the difference between men and pigs?"
We ask many questions ...
...about our species, our gender, our friends, lovers and ourselves,
the mystery of who each of us is and where in the ant hill the intelligence
lies, if each ant has no clue.
We search for the variations amongst a set, try to define the
set in its limits and borders against other sets, look for analogies,
anomalies and statistical outliers....
There is in fact an almost universal algorithm, like cats stalking
their prey, to make sense of our nature by boundary conditions,
alas compiled with spotty statistics and messy heuristics, gullible
souls, political machinations, cheats, lies and video tape, in
short: human nature. We search and probe, the literate digerati
confer virtually, each wondering about the other, each looking
at their unique sets of parents, and the impossibility of imagining
them in the act of procreation.
In other words, we still have no idea whatsoever who we really
are, or what mankind as a whole is all about. We have mild inclinations
on where we have been, sort of, and contradictory intentions on
where we may be headed, kind of, but all in all, we are remarkably
clueless.
But that question at least need no longer be asked, and
has indeed vanished after this:
Even when they had way too much to drink, pigs don't turn into men.
KAI KRAUSE is currently building a research lab dubbed "Byteburg"
in a thousand-year-old castle above the Rhein river in the geometric
center of Europe. He asked not to be summed up by previous accomplishments,
titles or awards.
has Darwin gone?"
Darwinism is alive and well in academic discussions and in pop
thinking. Natural selection is a key element in explaining just
about everything we encounter today, from the origin and spread
of AIDS to the realization that our parents didn't "make us do
it," our ancestors did. Ironically, though, Darwinism has disappeared
from the area where it was first and most firmly seated: the evolution
of life, and especially the evolution of humanity. Human evolution
was once pictured as a series of responses to changing environments
coordinated by differences in reproduction and survivorship, as
opportunistic changes taking advantage of the new possibilities
opened up by the cultural inheritance of social information, as
the triumph of technology over brute force, as the organization
of intelligence by language. Evolutionary psychologists and other
behavioralists still view it this way, but this is no longer presented
as the mainstream view of human paleontologists and geneticists
who address paleodemographic problems.
Human evolution is now commonly depicted as the consequence of
species replacements, with a series of species emanating
from different, but usually African, homelands, each sooner or
later replacing the earlier ones. It is not the selection process
that provides the source of human superiority in each successive
replacement, but the random accidents that take place when new
species are formed from small populations of old ones. The process
is seen as being driven by random extinctions, opening up unexpected
opportunities for those new species lucky enough to be in the
right place at the right time.
The origin and evolution of human species are now also addressed
by geneticists studying the variation and distribution of human
genes today (and in a few cases ancient genes from Neandertals).
They use this information to estimate the history of human population
size and the related questions of when the human population might
have been small, where it might have originated, and when it might
have been expanding. It is possible to do this if one can assume
that mutation and genetic drift are the only driving forces of
genetic change, because the effect of drift depends on population
size. But this assumption means that Darwinian selection did not
play any significant role in genetic evolution. Similarly, interpreting
the distribution of ancient DNA as reflecting population history
(rather than the history of the genes studied; the histories are
not necessarily the same) also assumes that selection on the DNA
studied did not play a role in its evolution. In fact, the absence
of Darwinian selection is the underlying assumption for these
types of genetic studies.
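To make that dependence concrete (a textbook population-genetics formula, offered here as an illustration rather than anything these studies state): under drift alone, with no selection or mutation, the expected heterozygosity H of a population of effective size N_e decays each generation as

\[ H_t = H_0 \left( 1 - \frac{1}{2N_e} \right)^{t} , \]

so small populations lose variation quickly and large ones slowly. This is what lets geneticists read today's variation backwards into a history of population size, and it is also why the reading collapses if selection, rather than drift, shaped the genes being measured.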
Human paleontology has taken a giant step away from Darwin; will
it have the courage to follow the lead of the evolutionary behavioralists
and step back?
MILFORD H. WOLPOFF is Professor of Anthropology and Adjunct Associate
Research Scientist, Museum of Anthropology at the University of
Michigan. His work and theories on a "multiregional" model of
human development challenge the popular "Eve" theory. His work
has been covered in The New York Times, New Scientist,
Discover, and Newsweek, among other publications.
He is the author (with Rachel Caspari) of Race and Human Evolution:
A Fatal Attraction
"Why is our sense of beauty and elegance such a useful tool for discriminating
between a good theory and a bad theory?"
During the early 1980s, I had the wonderful fortune to spend a
great deal of time with Richard Feynman, and our innumerable conversations
extended over a very broad range of topics (not always physics!).
At that time, I had just finished re-reading his wonderful book,
The Character of Physical Law, and wanted to discuss an interesting
question with him, not directly addressed by his book:
Why is our sense of beauty and elegance such a useful tool for
discriminating between a good theory and a bad theory?
And a related question:
Why are the fundamental laws of the universe self-similar?
Over lunch, I put the questions to him.
"It's goddam useless to discuss these things. It's a waste of time,"
was Dick's initial response. Dick always had an immediate gut-wrenching
approach to philosophical questions. Nevertheless, I persisted,
because he undeniably had a strong intuitive
sense of the elegance of fundamental theories, and might be able
to provide some insight rather than just philosophizing. It was
also true that this notion was a successful guiding principle
for many great physicists of the twentieth century including Einstein,
Bohr, Dirac, Gell-Mann, etc. Why this was so was interesting in itself.
We spent several hours trying to get at the heart of the problem
and, indeed, trying to determine if it was even a true notion
rather than some romantic representation of science.
We did agree that it was impossible to explain honestly the beauties
of the laws of nature in a way that people can feel, without their
having some deep understanding of mathematics. It wasn't that
mathematics was just another language for physicists; it was a
tool for reasoning by which you could connect one statement with
another. The physicist has meaning to all his phrases. He needs
to have a connection of words to the real world.
Certainly, a beautiful theory meant being able to describe it
very simply in terms of fundamental mathematical quantities. "Simply"
meant compression into a small mathematical expression with tremendous
explanatory powers, which required only a finite amount of interpretation.
In other words, a huge number of relationships between data are
concisely fit into a single statement. Later, Murray Gell-Mann
expressed this point well, when he wrote, "The complexity of what
you have to learn in order to be able to read the statement of
the law is not really very great compared to the apparent complexity
of the data that are being summarized by that law. That apparent
complexity is partly removed when the law is formed."
Another driving principle was that the laws of the universe are
self-similar, in that there are connections between two sets of
phenomena previously thought to be distinct. There seemed to be
a beauty in the inter-relationships, fed perhaps by a prejudice
that at the bottom of it all was a simple unifying law.
It was easy to find numerous examples from the history of modern
science that fit within this framework (Maxwell's equations for
electromagnetism, Einstein's general-relativistic equations for
gravitation, Dirac's relativistic quantum mechanics, etc.), but
Dick and I were still working away at the fringes of the problem.
So far, all we could do was describe the problem and find numerous
examples; we could not answer what provided the feeling for
great intuitive guesses.
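As a concrete instance of that compression (the standard SI form of the equations, supplied by way of illustration, not a transcript of anything Dick and I wrote down): Maxwell's four equations,

\[ \nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad \nabla \cdot \mathbf{B} = 0, \qquad \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} , \]

compress the whole of classical electricity, magnetism, and optics into four short statements requiring only a finite amount of interpretation.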
Perhaps our love of symmetries and patterns is an integral
part of why we embrace certain theories and not others. For
example, for every conservation law, there was a corresponding
symmetry, albeit sometimes these symmetries would be broken. But
this led us to another question: Is symmetry inherent in nature
or is it something we create? When we spoke of symmetries, we
were referring to the symmetry of the mathematical laws of physics,
not to the symmetry of objects commonly found in nature. We felt
that symmetry was inherent in nature, because it was not something
that we expected to find in physics. Another psychological prejudice
was our love for patterns. The simplicity of the patterns in physics
was beautiful. This does not mean simple in action: the
motion of the planets and of atoms can be very complex, but the
basic patterns underneath are simple. This is what is common to
all of our fundamental laws.
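The correspondence we kept circling is what is now taught as Noether's theorem (stated here in its textbook form, as a gloss on the conversation): if a Lagrangian \( L(q, \dot{q}) \) is left unchanged by a continuous transformation \( q \to q + \epsilon K(q) \), then the quantity

\[ \frac{\partial L}{\partial \dot{q}} \, K(q) \]

is constant along every trajectory. Invariance under translation in time gives conservation of energy, under translation in space conservation of momentum, under rotation conservation of angular momentum; each conservation law is the shadow of a symmetry of the mathematical laws themselves, not of any object in nature.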
It should be noted that we could also come up with numerous examples
where one's sense of elegance and beauty led to beautiful theories
that were wrong. A perfect example of a mathematically elegant
theory that turned out to be wrong is Francis Crick's 1957 attempt
at working out the genetic coding problem (Codes without Commas).
It was also true that there were many examples of physical theories
that were pursued on the basis of lovely symmetries and patterns,
and that these also turned out to be false. Usually, these were
false because of some logical inconsistency or the crude fact
that they did not agree with experiment.
The best that Dick and I could come up with was an unscientific
response: given our fondness for patterns and symmetry,
we have a prejudice that nature is simple and therefore beautiful.
Since that time, the question has disappeared from my mind, and
it is fun thinking about it again, but in doing scientific research,
I now have to concern myself with more pragmatic questions.
AL SECKEL is acknowledged as one of the world's leading authorities
on illusions. He has given invited lectures on illusions at Caltech,
Harvard, MIT, Berkeley, Oxford University, University of Cambridge,
UCLA, UCSD, University of Lund, University of Utrecht, and many
other fine institutions. Seckel is currently under contract with
the Brain and Cognitive Division of the MIT Press to author a
comprehensive treatise on illusions, perception, and cognitive
science.
"When will we face another energy crisis, and how will we cope with it?"
This question (or pair of questions) was on everyone's lips in
the 1970s, following the oil shortage and lines at gas stations.
It stimulated a lot of good thinking and good work on alternative
energy sources, renewable energy sources, and energy efficiency.
Although this question is still asked by many knowledgeable and
concerned people, it has disappeared from the public's radar screen
(or, better, television screen). Even the recent escalation of
fuel prices and the electricity shortage in California have not
lent urgency to thinking ahead about energy.
But we should be asking, we should be worrying, and we should
be planning. A real energy crisis is closer now than it was when
the question had high currency. The energy-crisis question is
only part of a larger question: How is humankind going to deal
in the long term with its impact on the physical world we inhabit
(of which the exhaustion of fossil fuels is only a part)? Another
way to phrase the larger question: Are we going to manage more
or less gracefully a transition to a sustainable world, or will
eventual sustainability be what's left, willy nilly, after the
chaos of unplanned, unanticipated change?
Science will provide no miracles (as the Wall Street Journal,
in its justification of inaction, would have us believe), but
science can do a lot to ameliorate the dislocations that this
century will bring. We need to encourage our public figures to
lift their eyes beyond the two-, four-, and six-year time horizons
of their jobs.
KENNETH FORD is a retired physicist who teaches at Germantown
Friends School in Philadelphia. He is the co-author, with John
Wheeler, of Geons, Black Holes, and Quantum Foam: A Life in Physics.
Stephen H. Schneider
"Will the free market finally triumph?"
Despite Seattle and the French farmers, free market advocates
of globalization have largely won; even China is signing
up to be a major player in the international trading and growth-oriented
global political economy. So it is rare to hear this question
anymore, even from so-called "enterprise institutes" dedicated
to protecting property rights.
The problem is, what has been won? My concern is not with the
question no longer asked in this context, but rather with the
companion question not often enough asked: "Is there any such
thing as a free market?"
To be sure, markets are generally efficient ways of allocating
resources and accomplishing economic goals. However, markets are
notorious for leaving out much of what people really value. In
different words, the market price of doing business simply excludes
much of the full costs or benefits of doing business because many
effects aren't measured in traditional monetary units. For example,
the cost of a ton of coal isn't just the extraction costs plus
transportation costs plus profit, but also real expenses to real
people (or real creatures) who happen to be external to the energy
market. Such "externalities" are very real to coastal dwellers
trying to cope with sea level rises likely to be induced from
the global warming driven by massive coal burning.
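In the economist's shorthand (a standard welfare-economics formulation, not notation Schneider uses here), the point is that the full social cost of a ton of coal is

\[ P_{\text{social}} = P_{\text{private}} + \sum_i d_i , \]

where \( P_{\text{private}} \) covers extraction, transport, and profit, and each \( d_i \) is an external damage per ton (flooded coastlines, health effects, lost species) that no market transaction records. The classic remedy, a Pigouvian tax set equal to the marginal external damage, is one way of folding those terms back into the price signal.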
I recall a discussion at the recent international negotiations
to limit emissions of greenhouse gases in which a chieftain from
the tiny Pacific island of Kiribati was being told by an OPEC
supporter opposed to international controls on emissions from
fossil fuels that the summed economies of all the small island
states were only a trivial fraction of the global GDP, and thus
even if sea level rise were to drive them out of national existence,
this was "not sufficient reason to hold back to economic progress
of the planet by constricting the free use of energy markets".
are not ungenerous", he said, so in the "unlikely event" that
you were a victim of sea level rise, "we'll just pay to relocate
all of you and your people to even better homes and jobs than
you have now", and this, he went on, will be much cheaper than
to "halt industrial growth" (THis isn't the forum to refute the
nonsense that controls on emissions will halt industrial growth.)
After hearing this offer, the aging and stately chieftain paused,
scratched his flowing hair, politely thanked the OPEC man for
his thoughtfulness and simply said, "We may be able to move, but
what do I do with the buried bones of my grandfather?"
Economists refer to the units of value in cost-benefit analyses
as "numeraires" dollars per ton carbon emitted in the climate
example, is the numeraire of choice for "free market" advocates.
But what of lives lost per ton of emissions from intensified hurricanes,
or species driven off mountain tops to extinction per ton, or
heritage sites lost per ton? Or what if global GDP indeed goes
up fastest by free markets but 25% of the world gets left further
behind as globally economically efficient markets expand? Is equity
a legitimate numeraire too?
Therefore, while market systems seem indeed to have triumphed,
it is time to phase in a new, multi-part question: "How can free
markets be adjusted to value what is left out of private cost-benefit
calculus but represents real value so we can get the price signals
in markets to reflect all the costs and benefits to society across
all the numeraires, and not simply have market prices rigged to
preserve the status quo in which monetary costs to private parties
are the primary condition?"
I hope the new US president soon transcends all that obligatory
free market rhetoric of the campaign and learns much more about
what constitutes a full market price. It is very likely he'll
get an earful as he jetsets about the planet in Air Force 1 catching
up on the landscapes, political and physical, of the
vastly diverse countries in the world that it is time for him
to visit. Many world leaders are quite worried about just what
we will have won as currently defined free markets triumph.
STEPHEN H. SCHNEIDER is Professor in the Biological Sciences Department
at Stanford University and the former Department Director and
Head of the Advanced Study Project at the National Center for Atmospheric
Research in Boulder. He is internationally recognized as one of the
world's leading experts in atmospheric research and its implications
for environment and society. Dr. Schneider's books include
The Genesis Strategy: Climate Change and Global Survival; The
Coevolution of Climate and Life; Global Warming: Are
We Entering the Greenhouse Century?; and Laboratory Earth:
The Planetary Gamble We Can't Afford to Lose.
"Are subordinate clauses more typical of languages with a long literary
tradition than integral features of human speech?"
Contemporary linguists tend to assume in their work that subordinate
clauses, such as "The boy that I saw yesterday" or "I knew
what happened when she came down the steps", are an integral
part of the innate linguistic endowment, and/or central features
of "human speech" writ large. Most laymen would assume the same
thing. However, the fact is that when we analyze a great many
strictly spoken languages with no written tradition, subordinate
clauses are rare to nonexistent. In many Native American languages,
for example, the only way to express something like "the men
who were members" is a clause which parses approximately as
"The 'membering' men"; the facts are similar in thousands of other
languages largely used orally.
In fact, even in earlier documents in today's "tall building"
literary languages, one generally finds a preference for stringing
simple main clauses together ("she came down the steps,
and I knew what happened") rather than embedding them in one
another along the lines of "when she came down the steps, I
knew what happened". The guilty sense we often have when reading
English of the first half of the last millennium that the writing
is stylistically somewhat "clunky" is due largely to the marginality
of the subordinate clause: here is Thomas Malory in the late fifteenth century:
And thenne they putte on their helmes and departed
and recommaunded them all wholly unto the Quene
and there was wepynge and grete sorowe
Thenne the Quene departed in to her chamber
and helde her
that no man shold perceyue here grete sorowes
Early Russian parses similarly, and crucially, so do the Hebrew
Bible and the Greek of Homer.
At the time that these documents were written, writing conventions
had yet to develop, and thus written language hewed closer to
the way language is actually spoken on the ground. Over time,
subordinate clauses, a sometime thing in speech, were developed
as central features in written speech, their economy being aesthetically
pleasing, and more easily manipulated via the conscious activity
of writing than the spontaneous "on-line" activity of speaking.
Educated people, exposed richly to written speech via education,
tended to incorporate the subordinate clause mania into their
spoken varieties. Hence today we think of subordinate clauses
as "English", as the French do "French", and so on even
though if we listen to a tape recording of ourselves speaking
casually, even we tend to favor main clauses strung together
over the layered sentential constructions of Cicero.
But the "natural" state of language persists in the many which
have had no written tradition. In the 1800s, various linguists
casually speculated as to whether subordinate clauses were largely
artifactual rather than integral to human language, with one (Karl
Brugmann) even going as far as to assert that originally, humans
spoke only with main clauses.
Today, however, linguistics operates under the sway of our enlightened
valuation of "undeveloped" cultures, which has, healthily, included
an acknowledgment of the fact that the languages of "primitive"
peoples are as richly complex as written Western languages. (In
fact, the more National Geographic the culture, the more
fearsomely complex the language tends to be overall.) However,
this sense has discouraged most linguists from treading into the
realm of noting that one aspect of "complexity", subordinate clauses,
is in fact not central to expression in unwritten languages
and is most copiously represented in languages with a long written
tradition. In general, the idea that First World written languages
might exhibit certain complexities atypical of languages spoken
by preliterate cultures has largely been tacitly taboo for decades
in linguistics, generally only treated in passing in obscure venues.
The problem is that this could be argued to bode ill for investigations
of the precise nature of Universal Grammar, which will certainly
require a rigorous separation of the cultural and contingent from
the universal and innate.
JOHN H. MCWHORTER is Assistant Professor of Linguistics at the
University of California at Berkeley. He taught at Cornell University
before entering his current position at Berkeley. He specializes
in pidgin and creole languages, particularly of the Caribbean,
and is the author of Toward a New Model of Creole Genesis
and The Word on the Street: Fact and Fable About American
English. He also teaches black musical theater history at
Berkeley and is currently writing a musical biography of Adam
Clayton Powell, Jr.
"When will you have an artificial intelligence?"
Progress in the domain that Marvin Minsky once characterized as
"making machines do things that would be considered intelligent
if done by people" has not been as dramatic as its founders might
once have hoped, but the penetration of machine cognition into
everyday life (from the computer that plays chess to the computer
that determines if your toast is done) has been broad and deep.
We now use the term "intelligent" to refer to the kind of helpful
smartness embedded in such objects. So the language has shifted
and the question has disappeared. But until recently, there was
a tendency to limit appreciation of machine mental prowess to
the realm of the cognitive. In other words, acceptance of artificial
intelligence came with a certain "romantic reaction." People were
willing to accept that simulated thinking might well be deemed
thinking, but simulated feeling was not feeling. Simulated love
could never be love.
These days, however, the realm of machine emotion has become a
contested terrain. There is research in "affective computing"
and in robotics which produces virtual pets and digital dolls:
objects that present themselves as experiencing subjects.
In artificial intelligence's "essentialist" past, researchers
tried to argue that the machines they had built were "really"
intelligent. In the current business of building machines that
self-present as "creatures," the work of inferring emotion is
left in large part to the user. The new artificial creatures are
designed to push our evolutionary buttons: to respond to
their speech, their gestures, and their demands for nurturance
by experiencing them as sentient, even emotional. And people are
indeed inclined to respond to creatures they teach and nurture
by caring about them, often in spite of themselves. People tell
themselves that the robot dog is a program embodied in plastic,
but they become fond of it all the same. They want to care for
it and they want it to care for them.
In cultural terms, old questions about machine intelligence have
given way to a question not about the machines but about us: What
kind of relationships is it appropriate to have with a machine?
It is significant that this question has become relevant in a
day-to-day sense during a period of unprecedented human redefinition
through genomics and psychopharmacology, fields that along with
robotics, encourage us to ask not only whether machines will be
able to think like people, but whether people have always thought
like machines.
SHERRY TURKLE is a professor of the sociology of science at MIT.
She is the author of Life on the Screen: Identity in the Age
of the Internet; The Second Self: Computers and the Human Spirit;
and Psychoanalytic Politics: Jacques Lacan and Freud's French
Revolution.
"Do computers think?"
This question was at the heart of heated debates for decades during
the recently ended century, and it was at the ambitious origins
of the Artificial Intelligence adventure. It had profound implications
not only for science, but also for philosophy, technology, business,
and even theology. In the 50's and 60's, for instance, it made
a lot of sense to ask the question whether one day a computer
could defeat an international chess master, and if it did, it
was assumed that we would learn a great deal about how human thought
works. Today we know that building such a machine is possible,
but the reach of the issue has dramatically changed. Nowadays
not many would claim that building such a computer actually informs
us in an interesting way about what human thought is and how it
works. Beyond the (indeed impressive) engineering achievements
involved in building such machines, we got from them little (if
any) insight into the mysteries, variability, depth, plasticity,
and richness of human thought. Today, the question "do computers
think?" has become completely uninteresting and it has disappeared
from the cutting edge academic circus, remaining mainly in the
realm of pop science, Hollywood films, and video games.
And why did it disappear?
It disappeared because it was finally answered with categorical
responses that stopped generating fruitful work. The question
became useless and uninspiring, ... boring. What is interesting,
however, is that the question disappeared with no single definitive
answer! It disappeared with categorical "of-course-yes" and "of-course-not"
responses. Of-course-yes people, in general motivated by a technological
goal (i.e., "to design and to build something") and implicitly
based on functionalist views, built their arguments on the amazing
ongoing improvement in the design and development of hardware
and software technologies. For them the question became uninteresting
because it didn't help to design or to build anything anymore.
What became relevant for of-course-yes people was mainly the engineering
challenge, that is, to actually design and to build computers
capable of processing algorithms in a faster, cheaper, and more
flexible manner. (And also, for many, what became relevant was
to build computers for human activities and purposes). Now when
of-course-yes people are presented with serious problems that
challenge their view, they provide the usual response: "just wait
until we get better computers" (once known as the wait-until-the-year-2000
argument). On the other hand, there were the of-course-not people,
who were mainly motivated by a scientific task (i.e., "to describe,
explain, and predict a phenomenon"), which was not necessarily
technology-driven. They mainly dealt with real-time and real-world
biological, psychological, and cultural realities. These people
understood that most of the arrogant predictions made by Artificial
Intelligence researchers in the 60's and 70's hadn't been realized
because of fundamental theoretical problems, not because of the
lack of powerful enough machines. They observed that even the
simplest everyday aspects of human thought, such as common sense,
sense of humor, spontaneous metaphorical thought, use of counterfactuals
in natural language, to mention only a few, were in fact intractable
for the most sophisticated machines. They also observed that the
nature of the brain and other bodily mechanisms that make thinking
and the mind possible was orders of magnitude
more complex than was thought during the heyday of Artificial
Intelligence. Thus, for of-course-not people, the question whether
computers think became uninteresting, since it didn't provide
insights into a genuine understanding of the intricacies of human
thinking. Today the question is dead. The answer had become a
matter of faith.
RAFAEL E. NÚÑEZ, currently at the Department of
Psychology of the University of Freiburg, is a research associate
of the University of California, Berkeley. He has worked for more
than a decade on the foundations of embodied cognition, with special
research into the nature and origin of mathematical concepts.
He has published in several languages in a variety of areas, and
has taught in leading academic institutions in Europe, the United
States, and South America. He is the author (with George Lakoff)
of Where Mathematics Comes From: How the Embodied Mind Brings
Mathematics into Being; and co-editor (with Walter Freeman)
of Reclaiming Cognition: The Primacy of Action, Intention,
and Emotion.
"Looking at the world upside down: what are we enhancing or what is vanishing
in our brains while flat and dormant views of the universe are
disappearing?"
Wrapped like hotdogs full of mustard, snorting in search of air
to breathe from beneath the blanket (like dendrites looking
for the first time for new contacts), the skull plunged
in a floppy pillow and the eyes allowed only to stare at the grey
sky, most of the time too flat and low to enjoy a more diversified
life in three dimensions. What has been the impact on the newly
born brain, positioned mummy-like and tight for generations
in the pram, of this upside-down perception of the Universe?
We do have a point of reference to imagine what life was like
during the first eight or nine months after birth, before the
invention of the anatomically shaped infant car seat that makes
our youngest travel and look around from their earliest age. I'll
come to that later.
First let me insist, for those unaware of radical innovations in
evolutionary psychology, that no baby has ever been found
(there are plenty of very reliable tests for that) who,
after having experienced the glamour of looking at the Universe
face to face, right and left, backwards and forwards, has regretted
the odd way of being carried around by previous generations. Not
only that; no newborn would now accept looking at the
Universe from vantage points other than the high-tech pushchairs,
carriages, and travelling systems for children aged birth to four
years, developed in the mid-80's out of the original baby car
seat invented in America.
Just as monkeys quickly become aware of new inventions and adopt
them without second thoughts, our youngest no longer accept
being carried in prams where they lay flat and dormant. They
have suddenly become aware that they can be taken around in efficiently
designed travelling engines, from which they can look at the world
in movement practically as soon as they open their eyes.
If somebody thinks that the end of looking upside-down at the
Universe during the first eight or nine months of life is not
important enough to be quoted as the end of anything, think of
what neuroscientists are discovering about what happens during
the unborn's first five months after conception.
Professor Beckman at Würzburg University (Germany) has at last convinced
his fellow psychiatrists that neurons' mistakes in their
migration from the limbic to the upper layers of the brain of
the unborn are responsible, to a very large extent, for the 1%
of epileptics and schizophrenics in the world's population. By
the way, the 1% is fixed, no matter how many neuroscientists join
the battle against mental illness. It is like a sort of cosmic
radiation background. The only exception that shows up is whenever
deep malnutrition or feverish influenza in expectant mothers pushes
the rate significantly up.
Likewise, very few scientists would refuse to acknowledge today
that what happens during the first five months of the embryo is
not only relevant in the case of malformations and mental disorders,
but also in the case of levels of intelligence and other reasonable
behavior patterns. How could anybody then discard the tremendous
impact on the newly born brain of interacting with the Universe
face to face during the first eight to nine months?
Surely, if we continue searching for the missing link between
a single gene and a bark (and I deeply hope that we do, now
that molecular biology and genetics have joined forces),
everybody should care about the end of the upside-down perception
of the Universe, and the silent revolution led by babies nurtured
in the latest high-tech travelling system's interactive culture.
Professor EDUARDO PUNSET teaches Economics at the Sarriá
Chemical Institute of Ramon Llull University (Barcelona). He
is Chairman of Planetary Agency, an audiovisual concern for the
public understanding of Science. He was IMF Representative in
the Caribbean, Professor of Innovation & Technology at Madrid
University, and Minister for Relations with the EU.