Architect, Cartographer; Founder, TED Conference; Author, 33: Understanding Change & the Change in Understanding













Physicist; Appleton Professor of Natural Philosophy, Dartmouth College; Author, The Prophet and the Astronomer: Apocalyptic Science and the End of the World

We Are Unique

To improve everybody's cognitive toolkit, the required scientific concept has to be applicable to all humans. It needs to make a difference to us as a species, or, more to the point I am going to make, as a key factor in defining our collective role. This concept must impact the way we perceive who we are and why we are here. Hopefully, it will redefine the way we live our lives and plan for our collective future. This concept must make it clear that we matter.

A concept that might grow into this life-redefining powerhouse is the notion that we humans, on a rare planet, are unique and uniquely important. But what of Copernicanism — the notion that the more we learn about the universe the less important we become? I will argue that modern science, traditionally considered guilty of reducing our existence to a pointless accident in an indifferent universe, is actually saying the opposite. While it does say that we are an accident in an indifferent universe, it also says that we are a rare accident and thus not pointless.

But wait! Isn't it the opposite? Shouldn't we expect life to be common in the cosmos and us to be just one of many creatures out there? After all, as we discover more and more worlds circling other suns, the so-called exoplanets, we find an amazing array of possibilities. Also, given that the laws of physics and chemistry are the same across the universe, we should expect life to be ubiquitous: if it happened here, it must've happened in many other places. So why am I claiming that we are unique?

There is an enormous difference between life and intelligent life. By intelligent life I don't mean clever crows or dolphins, but minds capable of self-awareness and the ability to develop advanced technologies, that is, not just use what is at hand but transform materials into new devices that can perform a multitude of tasks. Keeping this definition in mind, I agree that single-celled life, although dependent on a multitude of physical and biochemical factors, shouldn't be an exclusive property of our planet. First, because life on Earth appeared almost as quickly as it could, no more than a few hundred million years after things quieted down enough; second, due to the existence of extremophiles, life forms capable of surviving in extreme conditions (very hot or cold, very acidic and/or radioactive, no oxygen, etc.), showing that life is very resilient and spreads into every niche that it can.

However, the existence of single-celled organisms doesn't necessarily lead to that of multicellular ones, much less to that of intelligent multicellular ones. Life is in the business of surviving the best way it can in a given environment. If the environment changes, those creatures that can survive under the new conditions will. Nothing in this dynamic supports the notion that once there is life, all you have to do is wait long enough and, poof, out pops a clever creature. (This smells of biological teleology, the concept that life's purpose is to create intelligent life, a notion that seduces many people for obvious reasons: it makes us the special outcome of some grand plan.) The history of life on Earth doesn't support a steady evolution toward intelligence: there have been many transitions toward greater complexity, none of them obvious: prokaryotic to eukaryotic unicellular creatures (and nothing more for 3 billion years!), unicellular to multicellular, sexual reproduction, mammals, intelligent mammals, edge.org... Play the movie differently, and we wouldn't be here.

As we look at planet Earth and the factors that came into play for us to be here, we quickly realize that our planet is very special. Here is a short list: the long-term existence of a protective and oxygen-rich atmosphere; Earth's axial tilt, stabilized by a single large moon; the ozone layer and the magnetic field that jointly protect surface creatures from lethal cosmic radiation; plate tectonics that regulate the levels of carbon dioxide and keep the global temperature stable; the fact that our sun is a smallish, fairly stable star not too prone to releasing huge plasma burps. Consequently, it's rather naive to expect life — at the complexity level that exists here — to be ubiquitous across the universe.

A further point: even if there is intelligent life elsewhere and, of course, we can't rule that out (science is much better at finding things that exist than at ruling out things that don't), it will be so remote that for all practical purposes we are alone. Even if SETI finds evidence of other cosmic intelligences, we are not going to initiate a very intense collaboration. And if we are alone, and alone have the awareness of what it means to be alive and of the importance of remaining alive, we gain a new kind of cosmic centrality, very different from — and much more meaningful than — the religiously inspired one of pre-Copernican days, when Earth was the center of Creation: we matter because we are rare and we know it.

The joint realization that we live in a remarkable cosmic cocoon and that we are able to create languages and rocket ships in an otherwise apparently dumb universe ought to be transformative. Until we find other self-aware intelligences, we are how the universe thinks. We might as well start enjoying each other's company.

Editor, New Scientist; Coauthor, After Dolly

The Snuggle For Existence

Everyone is familiar with the struggle for existence. In the wake of the revolutionary work by Charles Darwin we realized that competition is at the very heart of evolution. The fittest win this endless "struggle for life most severe", as he put it, and all others perish. In consequence, every creature that crawls, swims, and flies today has ancestors that once successfully reproduced more often than their unfortunate competitors.

This is echoed in the way that people see life as competitive. Winners take all. Nice guys finish last. We look after number one. We are motivated by self-interest. Indeed, even our genes are said to be selfish.

Yet competition does not tell the whole story of biology.

I doubt many realise that, paradoxically, one way to win the struggle for existence is to pursue the snuggle for existence: to cooperate.

We already do this to a remarkable extent. Even the simplest activities of everyday life involve much more cooperation than you might think. Consider, for example, stopping at a coffee shop one morning to have a cappuccino and croissant for breakfast. To enjoy that simple pleasure could draw on the labors of a small army of people from at least half a dozen countries. Delivering that snack also relied on a vast number of ideas, which have been widely disseminated around the world down the generations by the medium of language.

Now we have remarkable new insights into what makes us all work together. Building on the work of many others, Martin Nowak of Harvard University has identified at least five basic mechanisms of cooperation. What I find stunning is that he shows the way that we human beings collaborate is as clearly described by mathematics as the descent of the apple that once fell in Newton's garden. The implications of this new understanding are profound.
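One of the mechanisms Nowak identifies, direct reciprocity, is easy to see in miniature. Here is a hedged sketch in Python of an iterated Prisoner's Dilemma; the payoff values and the two strategies are standard textbook assumptions, not Nowak's actual models:

```python
# Direct reciprocity in an iterated Prisoner's Dilemma (toy illustration).
# Payoffs are the conventional textbook values, an assumption for this sketch.

PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's last move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    """Run repeated rounds and return each side's total score."""
    score_a = score_b = 0
    seen_by_a, seen_by_b = [], []  # each side's record of the opponent's moves
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a)
        move_b = strategy_b(seen_by_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300)
print(play(always_defect, always_defect))  # (100, 100)
```

Over many rounds, mutual reciprocators earn 3 points per round while mutual defectors earn only 1: the snuggle outscores the struggle.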

Global human cooperation now teeters on a threshold. The accelerating wealth and industry of Earth's growing population — itself a triumph of cooperation — is exhausting the ability of our home planet to support us all. Many problems that challenge us today can be traced back to a profound tension between what is good and desirable for society as a whole and what is good and desirable for an individual. That conflict can be found in global problems such as climate change, pollution, resource depletion, poverty, hunger, and overpopulation.

As once argued by the American ecologist Garrett Hardin, the biggest issues of all — saving the planet and maximizing the collective lifetime of the species Homo sapiens — cannot be solved by technology alone. If we are to win the struggle for existence, and avoid a precipitous fall, there's no choice but to harness this extraordinary creative force. It is down to all of us to refine and to extend our ability to cooperate.

Nowak's work contains a deeper message. Previously, there were only two basic principles of evolution — mutation and selection — where the former generates genetic diversity and the latter picks the individuals that are best suited to a given environment. We must now accept that cooperation is the third principle. From cooperation can emerge the constructive side of evolution, from genes to organisms to language and the extraordinarily complex social behaviors that underpin modern society.

Journalist; Author, The Tangled Bank: An Introduction to Evolution; Blogger, The Loom

Life As A Side Effect

It's been over 150 years since Charles Darwin published the Origin of Species, but we still have trouble appreciating the simple, brilliant insight at its core. That is, life's diversity does not exist because it is necessary for living things. Birds did not get wings so that they could fly. We do not have eyes so that we can read. Instead, eyes, wings, and the rest of life's wonders have come about as side effects of life itself. Living things struggle to survive, they reproduce, and they don't do a perfect job of replicating themselves. Evolution spins off of that loop, like heat coming off an engine. We're so used to seeing agents behind everything that we struggle to recognize life as a side effect. I think everyone would do well to overcome that urge to see agents where there are none. It would even help us to understand why we are so eager to see agents in the first place.

Professor of Geography and Earth & Space Sciences, UCLA; Author, The World in 2050: Four Forces Shaping Civilization's Northern Future


As scientists, we're sympathetic to this question. We've asked it of ourselves before, many times, after fruitless days lost at the lab bench or computer. If only our brains could find a new way to process the delivered information faster, to interpret it better, to align the world's noisy torrents of data in a crystalline moment of clarity. In a word, for our brains to forgo their familiar thought sequences, and innovate.

To be sure, the word "innovate" has become something of a badly overused cliché. Tenacious CEOs, clever engineers, and restless artists come to mind before the methodical, data-obsessed scientist. But how often do we consider the cognitive role of innovation in the supposedly bone-dry world of hypothesis-testing, mathematical constraints and data-dependent empiricism?

In the world of science, innovation stretches the mind to find an explanation when the universe wants to hold on to its secrets just a little longer. This can-do attitude is made all the more valuable, not less, in a world constrained by ultimate barriers like the conservation of mass and energy, absolute zero, or the Clausius-Clapeyron relation. Innovation is a critical enabler of discovery both of these bounds and of ways around them. It is the occasional architect of that rare, wonderful breakthrough even when the tide of scientific opinion is against you.

A reexamination of this word from the scientific perspective reminds us of the extreme power of this cognitive tool, one that most people possess already. Through innovation, we all can transcend social, professional, political, scientific, and, most importantly, personal limits. Perhaps we might all put it to better and more frequent use.

Science Writer; Consultant; Lecturer, Copenhagen; Author, The Generous Man and The User Illusion


Depth is what you do not see immediately at the surface of things. Depth is what is below that surface: a body of water below the surface of a lake, the rich life of a soil below the dirt or the spectacular line of reasoning behind a simple statement.

Depth is a straightforward aspect of the physical world. Gravity stacks stuff and not everything can be at the top. Below there is more and you can dig for it.

Depth acquired a particular meaning with the rise of complexity science a quarter of a century ago: What is characteristic of something complex? Very orderly things like crystals are not complex. They are simple. Very messy things like a pile of litter are very difficult to describe: They hold a lot of information. Information is a measure of how difficult something is to describe. Disorder has a high information content and order has a low one. All the interesting stuff in life is in-between: Living creatures, thoughts and conversations. Neither a lot of information nor just a little. So information content does not lead us to what is interesting or complex. The marker is rather the information that is not there, but was somehow involved in creating the object of interest. The history of the object is more relevant than the object itself, if we want to pinpoint what is interesting to us.

It is not the informational surface of the thing, but its informational depth that attracts our curiosity. It took a lot to bring it here, before our eyes. It is not what is there, but what used to be there, that matters. Depth is about that.

The concept of depth in complexity science was expressed in different ways: You could talk about the actual amount of physical information that was involved in bringing about something — the thermodynamic depth — or the amount of computation it took to arrive at a result — the logical depth. Both express the notion that the process behind it is more important than the eventual product.

This idea can also be applied to human communication.

When you say "yes" at a wedding it (hopefully) represents a huge amount of conversation, coexistence and fun that you have had with that other person present. And a lot of reflection upon it. There is not a lot of information in the "yes" (one bit, actually), but the statement has depth. Most conversational statements have some kind of depth: There is more than meets the ear, something that happened between the ears of the person talking — before a statement was made. When you understand the statement, the meaning of what is being said, you "dig it", you get the depth, what is below and behind. What is not said, but meant — the exformation content, information processed and thrown away before the actual production of explicit information.

2 + 2 = 4. This is a simple computation. The result, 4, holds less information than the problem, 2 + 2 (essentially because the problem could also have been 3 + 1 and yet the result would still be 4). Computation is wonderful as a method for throwing away information, getting rid of it. You do computations to ignore all the details, to get an overview, an abstraction, a result.
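The way many problems collapse onto one answer can be shown in a few lines of Python (an illustration of the point, not anything from the essay):

```python
# Computation discards information: many distinct problems map onto one
# result, so the result alone cannot tell you which problem produced it.

problems = [(a, b) for a in range(5) for b in range(5)]
results = {}
for a, b in problems:
    results.setdefault(a + b, []).append((a, b))

# The single answer 4 could have come from any of five different sums:
print(results[4])  # [(0, 4), (1, 3), (2, 2), (3, 1), (4, 0)]
```

Going from problem to answer is a function; going from answer back to problem is not, and the lost choices are exactly the exformation.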

What you want is a way to distinguish between a very deep "yes" and a very shallow one: Did the guy actually think about what he said? Was the result 4 actually the result of a meaningful calculation? Is there in fact water below that surface? Does it have depth?

Most human interaction is about that question: Is this bluff or for real? Is there sincere depth in the affection? Does the result stem from intense analysis or is it just an estimate? Is there anything between the lines?

Signaling is all about this question: fake or depth? In biology the past few decades have seen the rise of studies of how animals prove to each other that there is depth behind the signal. The handicap principle of sexual selection is about a way to prove that your signal has depth: If a peacock has long, spectacular feathers it proves that it can survive its predators even though the fancy plumage represents a disadvantage, a handicap. Hence, the peahen can know that the individual displaying the huge tail is a strong one, or else it could not have survived with that extreme tail.

Amongst humans you have what economists call costly signals: Ways to show that you have something of value. The phenomenon of conspicuous consumption was described by the sociologist Thorstein Veblen as early as 1899: If you want to prove that you have a lot of money, you have to waste it. That is: Use it in a way that is absurd and idiotic, because only the rich guy can do so. But do it conspicuously, so that other people will know. Waste is a costly signal of the depth of your pile of money. Poor people have to use their money in a functional way.

Handicaps, costly signals, intense eye contact and rhetorical gestures are all about proving that what seems so simple really has a lot of depth.

That is also the point with abstractions: We want them to be shorthand for a lot of information that was digested in the process leading to the use of the abstraction, but is not present when we use it. Such abstractions have depth. We love them. Other abstractions have no depth. They are shallow and just used to impress the other guy. They do not help us. We hate them.

Intellectual life is very much about the ability to distinguish between the shallow and the deep abstractions. You need to know if there is any depth before you make that headlong dive and jump into it.

Consultant, Adaptive Optics; Adjunct Professor of Anthropology, University of Utah; Coauthor, The 10,000 Year Explosion

The Veeck Effect

There's an invidious rhetorical strategy that we've all seen — and I'm afraid that most of us have inflicted it on others as well. I call it the Veeck effect (of the first kind) — it occurs whenever someone adjusts the standards of evidence in order to favor a preferred outcome.

Why Veeck? Bill Veeck was a flamboyant baseball owner and promoter.
In his autobiography, Veeck — As in Wreck, he described installing a flexible fence in right field at the Milwaukee Brewers' ballpark. At first he only put the fence up when facing a team full of power hitters, but eventually he took it to the limit, moving the fence up when the visitors were at bat and down when his team was.

The history of science is littered with flexible fences. The phlogiston theory predicted that phlogiston would be released when magnesium burned. It looked bad for that theory when experiments showed that burning magnesium became heavier — but its supporters happily explained that phlogiston had negative weight.

Consider Kepler. He came up with the idea that the distances of the six (known) planets could be explained by nesting the five Platonic solids. It almost worked for Earth, Mars, and Venus, but clearly failed for Jupiter. He dismissed the trouble with Jupiter, saying "nobody will wonder at it, considering the great distance". The theory certainly wouldn't have worked with any extra planets, but fortunately for Kepler's peace of mind, Uranus was discovered well after his death.

The Veeckian urge is strong in every field, but it truly flourishes in the human and historical sciences, where the definitive experiments that would quash such nonsense are often impossible, impractical, or illegal. Nowhere is this tendency stronger than among cultural anthropologists, who at times seem to have no reason for being other than refurbishing the reputations of cannibals.

Sometimes this has meant denying a particular case of cannibalism, for example among the Anasazi in the American Southwest. Evidence there has piled up and up — archaeologists have found piles of human bones with muscles scraped off, split open for marrow, polished by stirring in pots. They have even found human feces with traces of digested human tissue. But that's not good enough. For one thing, this implication of ancient cannibalism among the Anasazi is offensive to their Pueblo descendants, and that somehow trumps mounds of bloody evidence. You would think that the same principle would cause cultural anthropologists to embrace the face-saving falsehoods of other ethnic groups — didn't the South really secede over the tariff? But that doesn't seem to happen.

Some anthropologists have carried the effort further, denying that any culture was ever cannibalistic. They don't just deny Anasazi archaeology — they deny every kind of evidence, from archaeology to historical accounts, even reports from people alive today. When Álvaro de Mendaña discovered the Solomon Islands, he reported that a friendly chieftain threw a feast and offered him a quarter of a boy. Made up, surely. The conquistadors described the Aztecs as a cannibal kingdom — can't be right, even if the archaeology supports it. When Papuans in Port Moresby volunteered to have a picnic in the morgue — to attract tourists, of course — they were just showing public spirit.

The Quaternary mass extinction, which wiped out much of the world's megafauna, offers paleontologists a chance to crank up their own fences. The large marsupials, flightless birds and reptiles of Australia disappeared shortly after humans arrived, about 50,000 years ago. The large mammals of North and South America disappeared about 10,000 years ago — again, just after humans showed up. Moas disappeared within two centuries of Polynesian colonization of New Zealand, while giant flightless birds and lemurs disappeared from Madagascar shortly after humans arrived. What does this pattern suggest as the cause? Why, climate change, of course. Couldn't be human hunters — that's unpossible!

The Veeck effect is even more common in everyday life than it is in science. It's just that we expect more from scientists. But scientific examples are clear-cut, easy to see, and understanding the strategy helps you avoid succumbing to it.

Whenever some Administration official says that absence of evidence is not evidence of absence — whenever a psychiatrist argues that Freudian psychotherapy works for some people, even if proven useless on average — Bill Veeck's spirit goes marching on.

*If you're wondering about the second Veeck effect, it's the intellectual equivalent of putting a midget up to bat. And that's another essay.

Associate Professor of Physics, Haverford College

Duality and World Piece

In the northeast Bronx, I walk through a neighborhood that I once feared going into, this time with a big smile on my face. This is because I can quell the bullies with a new slang word in our dictionary: "dual". As I approach the 2-train stop on East 225th Street, the bullies await me. I say, "Yo, what's the dual?" The bullies embrace me with a pound followed by a high five. I make my train.

In physics one of the most beautiful yet underappreciated ideas is that of duality. A duality allows us to describe a physical phenomenon from two different perspectives; often a flash of creative insight is needed to find both. However, the power of the duality goes beyond the apparent redundancy of description. After all, why do I need more than one way to describe the same thing? There are examples in physics where either description of the phenomenon fails to capture its entirety. Properties of the system 'beyond' the individual descriptions 'emerge'. I will provide two beautiful examples of how dualities manage to yield 'emergent' properties and end with a speculation.

Most of us know about the famous wave-particle duality in quantum mechanics, which allows the photon (and the electron) to attain the magical properties that explain all of the wonders of atomic physics and chemical bonding. The duality states that matter (such as the electron) has both wave-like and particle-like properties depending on the context. What's weird is how quantum mechanics manifests the wave-particle duality. According to the traditional Copenhagen interpretation, the wave is a travelling oscillation of possibility that the electron can be realized somewhere as a particle.

Life gets strange in the example of quantum tunneling where the electron can penetrate a barrier only because of its 'wave-like' property. Classical physics tells us that an object will not surmount a barrier (like a hill) if its total kinetic energy is less than the potential energy of the barrier. However quantum mechanics predicts that particles can penetrate (or tunnel) through a barrier even when the kinetic energy is less than the potential energy of the barrier. This effect is used every time you use a flash drive or a CD player!
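The tunneling probability can even be written down. In the standard WKB approximation (a textbook result, added here for illustration), a particle of mass $m$ and energy $E$ crossing a barrier $V(x)$ between the classical turning points $x_1$ and $x_2$ is transmitted with probability roughly

```latex
T \;\approx\; \exp\!\left( -\frac{2}{\hbar} \int_{x_1}^{x_2} \sqrt{2m\,\bigl(V(x) - E\bigr)}\, \mathrm{d}x \right)
```

The exponential shows why tunneling only matters for light particles and thin barriers: the probability falls off sharply with the barrier's width and with the energy deficit $V - E$.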

Most people assume that the conduction of electrons in a metal is a well-understood property of classical physics. But when we look deeper we realize that conduction happens because of the wave-like nature of the electrons. We call the collective electron waves that move through the periodic lattice of a metal Bloch waves. Qualitatively, when the electron's Bloch waves constructively interfere we get conduction. Moreover, the wave-particle duality takes us further, to predict superconductivity: how it is that electrons (and other spin-½ particles like quarks) can conduct without resistance.

Nowadays in my field of quantum gravity and relativistic cosmology, theorists are exploiting another type of duality to address unresolved questions. This holographic duality was pioneered by Leonard Susskind and Gerard 't Hooft, and later it found a home in the form of the AdS/CFT duality by Juan Maldacena.

This posits that the phenomenon of quantum gravity is described on one hand by an ordinary gravitational theory (a beefed-up version of Einstein's general relativity). On the other hand, quantum gravity has a dual description in terms of non-gravitational physics in a space-time of one lower dimension. We are left to wonder, in the spirit of the wave-particle duality, what new physics we would glean from this type of duality.

The holographic duality also seems to persist in other approaches to quantum gravity, such as Loop Quantum Gravity, and researchers are still exploring the true meaning behind holography and its potential predictions for experiments.

Dualities seem to allow us to understand and make use of properties in physics that go beyond a singular lens of analysis. Might we wonder whether duality can transcend its role in physics and reach into other fields? The dual of time will tell.

Cognitive Neuroscientist and Philosopher, Harvard University


There's a lot of stuff in the world: trees, cars, galaxies, benzene, the Baths of Caracalla, your pancreas, Ottawa, ennui, Walter Mondale. How does it all fit together? In a word… Supervenience. (Pronounced soo-per-VEEN-yence. The verb form is to supervene.)

Supervenience is a shorthand abstraction, native to Anglo-American philosophy, that provides a general framework for thinking about how everything relates to everything else. The technical definition of supervenience is somewhat awkward:

Supervenience is a relationship between two sets of properties. Call them Set A and Set B. The Set A properties supervene on the Set B properties if and only if no two things can differ in their A properties without also differing in their B properties.
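For readers who like their definitions symbolic, the same condition has a common first-order rendering (added here for illustration, with $P_A$ and $P_B$ ranging over the Set A and Set B properties):

```latex
A \text{ supervenes on } B \;\iff\;
\forall x\, \forall y\, \Bigl[ \bigl( \forall P_B:\; P_B(x) \leftrightarrow P_B(y) \bigr)
\;\rightarrow\; \bigl( \forall P_A:\; P_A(x) \leftrightarrow P_A(y) \bigr) \Bigr]
```

Read contrapositively, this is exactly the phrasing above: no difference in A properties without a difference in B properties.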

This definition, while admirably precise, makes it hard to see what supervenience is really about, which is the relationships among different levels of reality. Take, for example, a computer screen displaying a picture. At a high level, at the level of images, a screen may depict an image of a dog sitting in a rowboat, curled up next to a life vest. The screen's content can also be described as an arrangement of pixels, a set of locations and corresponding colors. The image supervenes on the pixels. This is because a screen's image-level properties (its dogginess, its rowboatness) cannot differ from another screen's image-level properties unless the two screens also differ in their pixel-level properties.

The pixels and the image are, in a very real sense, the same thing. But — and this is key — their relationship is asymmetrical. The image supervenes on the pixels, but the pixels do not supervene on the image. This is because screens can differ in their pixel-level properties without differing in their image-level properties. For example, the same image may be displayed at two different sizes or resolutions. And if you knock out a few pixels, it's still the same image. (Changing a few pixels will not protect you from charges of copyright infringement.) Perhaps the easiest way to think about the asymmetry of supervenience is in terms of what determines what. Determining the pixels completely determines the image, but determining the image does not completely determine the pixels.
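The asymmetry can be sketched in a few hypothetical lines of Python, where the "image-level" description is deliberately just a function of the pixel grid:

```python
# Toy sketch of supervenience: "pictures" are tuples of pixel rows, and the
# image-level description is computed from — so fully determined by — the pixels.

def image_of(pixels):
    """Image-level property: does the picture contain a dark mark?"""
    return "dark mark" if any("X" in row for row in pixels) else "blank"

small = ("X.",
         "..")
large = ("XX..",
         "XX..",
         "....",
         "....")  # a higher-resolution rendering of the same mark

# Fixing the pixels fixes the image (the image supervenes on the pixels)...
assert image_of(small) == image_of(large) == "dark mark"
# ...but fixing the image does not fix the pixels: two pixel grids, one image.
assert small != large
```

Pixels-to-image is a function; image-to-pixels is not, which is the asymmetry of determination the essay describes.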

The concept of supervenience deserves wider currency because it allows us to think clearly about many things, not just about images and pixels. Supervenience explains, for example, why physics is the most fundamental science and why the things that physicists study are the most fundamental things. To many people, this sounds like a value judgment, but it's not, or need not be. Physics is fundamental because everything in the universe, from your pancreas to Ottawa, supervenes on physical stuff. (Or so "physicalists" like me claim.) If there were a universe physically identical to ours, then it would also include a pancreas just like yours and an Ottawa just like Canada's.

Supervenience is especially helpful when grappling with three contentious and closely related issues: (1) the relationship between science and the humanities, (2) the relationship between the mind and brain, and (3) the relationship between facts and values.

Humanists sometimes perceive science as imperialistic, as aspiring to take over the humanities, to "reduce" everything to electrons, genes, numbers, and neurons, and thus to "explain away" all of the things that make life worth living. Such thoughts are accompanied by disdain or fear, depending on how credible such ambitions are taken to be. Scientists, for their part, sometimes are imperious, dismissing humanists and their pursuits as childish and unworthy of respect. Supervenience can help us think about how science and the humanities fit together, why science is sometimes perceived as encroaching on the humanist's territory, and the extent to which such perceptions are and are not valid.

It would seem that humanists and scientists study different things. Humanists are concerned with things like love, revenge, beauty, cruelty, and our evolving conceptions of such things. Scientists study things like electrons and nucleotides. But sometimes it sounds like scientists are getting greedy. Physicists aspire to construct a complete physical theory, which is sometimes called a "Theory of Everything" (TOE). If humanists and scientists study different things, and if physics covers everything, then what is left for the humanists? (Or, for that matter, non-physicists?)

There is a sense in which a TOE really is a TOE, and there is a sense in which it's not. A TOE is a complete theory of everything upon which everything else supervenes. If two worlds are physically identical, then they are also humanistically identical, containing exactly the same love, revenge, beauty, cruelty, and conceptions thereof. But that does not mean that a TOE puts all other theorizing out of business, not by a long shot. A TOE won't tell you anything interesting about Macbeth or the Boxer Rebellion.

Perhaps the threat from physics was never all that serious. Today, the real threat, if there is one, is from the behavioral sciences, especially the sciences that connect the kind of "hard" science we all studied in high school to humanistic concerns. In my opinion, three sciences stand out in this regard: behavioral genetics, evolutionary psychology, and cognitive neuroscience. I study moral judgment, a classically humanistic topic. I do this in part by scanning people's brains while they make moral judgments. More recently I've started looking at genes, and my work is guided by evolutionary thinking. My work assumes that the mind supervenes on the brain, and I attempt to explain human values — for example the tension between individual rights and the greater good — in terms of competing neural systems.

I can tell you from personal experience that this kind of work makes some humanists uncomfortable. During the discussion following a talk I gave at Harvard's Humanities Center, a prominent professor declared that my talk — not any particular conclusion I'd drawn, but the whole approach — made him physically ill. (Of course, this could just be me!)

The subject matter of the humanities has always supervened on the subject matter of the physical sciences, but in the past a humanist could comfortably ignore the subvening physical details, much as an admirer of a picture can ignore the pixel-level details. Is that still true? Perhaps it is. Perhaps it depends on one's interests. In any case, it's nothing to be worried sick about.

NB: Andrea Heberlein points out that "supervenience" may also refer to exceptional levels of convenience, as in, "New Chinese take-out right around the corner — Supervenient!"

Department of Cognitive Biology, University of Vienna; Author, The Evolution of Language

An Instinct to Learn

One of the most pernicious misconceptions in cognitive science is the belief in a dichotomy between nature and nurture. Many psychologists, linguists and social scientists, along with the popular press, continue to treat nature and nurture as combating ideologies rather than complementary perspectives. For such people, the idea that something is both "innate" and "learned", or both "biological" and "cultural", is an absurdity. Yet most biologists today recognize that understanding behavior requires that we understand the interaction between inborn cognitive processes (e.g., learning and memory) and individual experience. This is particularly true of human behavior, since the capacities for language and culture are among the key adaptations of our species, and involve irreducible elements of both biology and environment, of both nature and nurture.

The antidote to "nature versus nurture" thinking is to recognize the existence, and importance, of "instincts to learn". This phrase was introduced by Peter Marler, one of the fathers of birdsong research. A young songbird, while still in the nest, eagerly listens to adults of its own species sing. Months later, having fledged, it begins singing itself, and shapes its own initial sonic gropings to the template provided by those stored memories.  During this period of "subsong" the bird gradually refines and perfects its own song, until by adulthood it is ready to defend a territory and attract mates with its own, perhaps unique, species-typical song. 

Songbird vocal learning is the classic example of an instinct to learn.  The songbird's drive to listen, and to sing, and to shape its song to that which it heard, is all instinctive.  The bird needs no tutelage, nor feedback from its parents, to go through these stages.  Nonetheless, the actual song that it sings is learned, passed culturally from generation to generation.  Birds have local dialects, varying randomly from region to region.  If the young bird hears no song, it will produce only an impoverished squawking, not a typical song.

Importantly, this capacity for vocal learning is only true of some birds, like songbirds and parrots. Other bird species, like seagulls, chickens or owls, do not learn their vocalizations: rather, their calls develop reliably in the absence of any acoustic input.  The calls of such birds are truly instinctive, rather than learned.  But for those birds capable of vocal learning, the song that an adult bird sings is the result of a complex interplay between instinct (to listen, to rehearse, and to perfect) and learning (matching the songs of adults of its species).

It is interesting, and perhaps surprising, to realize that most mammals do not have a capacity for complex vocal learning of this sort. Current research suggests that, aside from humans, only marine mammals (whales, dolphins, seals…), bats, and elephants have this ability. Among primates, humans appear to be the only species that can hear new sounds in the environment and then reproduce them. Our ability to do this seems to depend on a babbling stage during infancy, a period of vocal playfulness that is as instinctive as the young bird's subsong. During this stage, we appear to fine-tune our vocal control so that, as children, we can hear and reproduce the words and phrases of our adult caregivers.

So is human language an instinct, or learned? The question, presupposing a dichotomy, is intrinsically misleading. Every word that any human speaks, in any of our species' 6,000 languages, has been learned. And yet the capacity to learn that language is a human instinct, something that every normal human child is born with, and that no chimpanzee or gorilla possesses.

The instinct to learn language is, indeed, innate (meaning simply that it reliably develops in our species), even though every language is learned.  As Darwin put it in Descent of Man, "language is an art, like brewing or baking; but … certainly is not a true instinct, for every language has to be learnt.  It differs, however, widely from all ordinary arts, for man has an instinctive tendency to speak, as we see in the babble of our young children; whilst no child has an instinctive tendency to brew, bake, or write."
And what of culture? For many, human culture seems the very antithesis of "instinct". And yet it must be true that language plays a key role in every human culture. Language is the primary medium for passing on the historically accumulated knowledge, tastes, biases, and styles that make each of our human tribes and nations its own unique and precious entity. And if human language is best conceived of as an instinct to learn, why not culture itself?

The past decade has seen a remarkable unveiling of our human genetic and neural makeup, and the coming decade promises even more remarkable breakthroughs. Each of us six billion humans is genetically unique (with the fascinating exception of identical twins). For each of us, our unique genetic makeup influences, but does not determine, what we are.

If we are to grapple earnestly and effectively with the reality of human biology and genetics, we will need to jettison outmoded dichotomies like the traditional distinction between nature and nurture. In their place, we will need to embrace the reality of the many instincts to learn (language, music, dance, culture…) that make us human.

I conclude that the dichotomy-denying phrase "instinct to learn" deserves a place in the cognitive toolkit of everyone who hopes, in the coming age of individual genomes, to understand human culture and human nature in the context of human biology. Human language and human culture are not instincts — but they are instincts to learn.
