


2011

WHAT SCIENTIFIC CONCEPT WOULD IMPROVE EVERYBODY'S COGNITIVE TOOLKIT?

PAUL KEDROSKY
Editor, Infectious Greed; Senior Fellow, Kauffman Foundation

Shifting Baseline Syndrome

When John Cabot came to the Grand Banks off Newfoundland in 1497 he was astonished at what he saw. Fish, so many fish — fish in numbers he could hardly comprehend. According to Farley Mowat, Cabot wrote that the waters were so "swarming with fish [that they] could be taken not only with a net but in baskets let down and [weighted] with a stone."

The fisheries boomed for five hundred years, but by 1992 it was all over. The Grand Banks cod fishery was destroyed, and the Canadian government was forced to close it entirely, putting 30,000 fishers out of work. It has never recovered.

What went wrong? Many things, from factory fishing to inadequate oversight, but much of it was aided and abetted by treating each step toward disaster as normal. The entire path, from plenitude to collapse, was taken as the status quo, right up until the fishery was essentially wiped out.

In 1995 fisheries scientist Daniel Pauly coined a phrase for this troubling ecological obliviousness — he called it "shifting baseline syndrome". Here is how Pauly first described the syndrome: "Each generation of fisheries scientist accepts as baseline the stock situation that occurred at the beginning of their careers, and uses this to evaluate changes. When the next generation starts its career, the stocks have further declined, but it is the stocks at that time that serve as a new baseline. The result obviously is a gradual shift of the baseline, a gradual accommodation of the creeping disappearance of resource species…"

It is blindness, stupidity, intergenerational data obliviousness. Most scientific disciplines have long timelines of data, but many ecological disciplines don't. We are forced to rely on second-hand and anecdotal information — we don't have enough data to know what is normal, so we convince ourselves that this is normal.

But it often isn't normal. Instead, it is a steadily and insidiously shifting baseline, no different than convincing ourselves that winters have always been this warm, or this snowy. Or convincing ourselves that there have always been this many deer in the forests of eastern North America. Or that current levels of energy consumption per capita in the developed world are normal. All of these are shifting baselines, where our data inadequacy, whether personal or scientific, provides dangerous cover for missing important longer-term changes in the world around us.

When you understand shifting baseline syndrome it forces you to continually ask what is normal. Is this? Was that? And, at least as importantly, it asks how we "know" that it's normal. Because, if it isn't, we need to stop shifting the baselines and do something about it before it's too late.


ROSS ANDERSON
FRS; Professor, Security Engineering, Cambridge Computer Laboratory; Researcher in Security Psychology

Science Versus Theatre

Modern societies waste billions on protective measures whose real aim is to reassure rather than to reduce risk. Those of us who work in security engineering refer to this as "security theatre", and there are examples all around us. We're searched going into buildings that no terrorist would attack. Social network operators create the pretence of a small intimate group of "friends" in order to inveigle users into disclosing personal information that can be sold to advertisers; the users don't get privacy but privacy theatre. Environmental policy is a third example: cutting carbon emissions would cost lots of money and votes, so governments go for gesture policies that are highly visible even if their effect is negligible. Specialists know that most of the actions that governments claim will protect the security of the planet are just theatre.

Theatre thrives on uncertainty. Wherever risks are hard to measure, or their consequences hard to predict, appearance can be easier to manage than reality. Reducing uncertainty and exposing gaps between appearance and reality are among the main missions of science.

Our traditional approach was the painstaking accumulation of knowledge that enables people to understand risks, options and consequences. But theatre is a deliberate construct rather than just an accidental side-effect of ignorance, so perhaps we need to become more sophisticated about theatrical mechanisms too. Science communicators need to become adept at disrupting the show, illuminating the dark corners of the stage and making the masks visible for what they are.


ADAM ALTER
Psychologist; Assistant Professor, Stern School of Business, NYU

The "Cognitive Iceberg:" Humans Are Blind To Many Of The Processes That Shape Their Mental Lives

The human brain is an inconceivably complex tool, and while we're focusing on the business of daily life, our brains are processing multitudes of information below the surface of conscious awareness. Meanwhile, this peripheral information subtly shapes our thoughts, feelings and actions, and crafts some of our most critical life outcomes. This information takes many forms, but I'll illustrate the principle with three brief examples:

1. Color

Color is a ubiquitous feature of the environment, though we rarely notice colors unless they're particularly bright or deviate dramatically from our expectations. Nonetheless, colors have the capacity to shape a range of outcomes: men are ever so slightly more attractive to women when they wear red shirts (rather than shirts of another color); the same effect applies to women, who seem more attractive to men when their pictures are bounded by a red border. Red signals both romantic intent and dominance amongst lower-order species, and this same signal applies to men and women. This same relationship between red and dominance explains why sporting teams that wear red are more aggressive and tend to dominate sporting teams that wear other colors; meanwhile, sports referees and umpires assign more points to teams wearing red uniforms, which may explain in part why those teams tend to outperform teams wearing other colors. But red isn't always beneficial: we've come to associate red with errors and caution, which makes people avoidant and in turn limits their creativity (though it also improves their attention to detail). These effects have sound bases in biology and human psychology, but that doesn't make them any less remarkable or surprising to the lay population.

2. Weather and Ambient Temperature

No one's surprised that the sunny warmth of summer makes people happy, but weather conditions and ambient temperature have other more surprising effects on our mental lives. Rainy weather makes us more introspective and thoughtful, which in turn improves our memory — in one study, people remembered the features of a store with greater accuracy on rainy days than on sunny days. On a grander scale, the stock market tends to rise on fine, sunny days, while cooler, rainy days prompt sluggishness and brief downturns. More surprising, still, is the relationship between changes in weather and various accidents, suicide, depression and irritability, all of which are claimed to respond to changes in the electrical state of the atmosphere. The metaphor between warmth and human kindness is also more than a metaphor, as recent studies have shown that people find strangers more likable when they form their first impressions while holding a cup of warm coffee. The warmth-kindness metaphor extends to social exclusion, as people literally feel colder when they've been socially excluded. The simple relationship between fine weather and happiness is joined by a series of more surprising and complicated relationships between weather and warmth on the one hand, and a range of important life outcomes on the other.

3. Symbols and Images

Urban landscapes are populated by thousands of symbols and images that unwittingly influence how we think and behave. Self-identified Christians tend to behave more honestly when they're exposed to an image of the crucifix, even when they have no conscious memory of seeing the crucifix in the first place. Honesty is a virtue, but another experiment showed that Christians held lower opinions of themselves after they were subliminally exposed to an image of then-Pope John Paul II. On a brighter note, people think more creatively when they're exposed to the Apple logo, or when they witness the illumination of an incandescent light bulb; both the Apple logo and the illuminated light bulb are popularly associated with creativity, and deeply ingrained metaphors, once activated, have the capacity to shape actual behavior. Similar associative logic suggests that national flags should prompt unity, and indeed a sample of left-wing and right-wing Israelis were more accommodating of opposing political views when they were subliminally exposed to an image of the Israeli flag. Likewise, a sample of Americans responded more favorably to Muslims when seated in front of a large U.S. flag.

These three cues — colors, weather conditions, and symbols and images — are joined by dozens of others that have a surprising capacity to influence how we think, feel, behave, and decide. Once we understand what those cues are and how they shape our mental lives, we're better equipped to harness them to our advantage.


NICK BOSTROM
Professor; Director, Future of Humanity Institute, Faculty of Philosophy & Oxford Martin School, University of Oxford

Game of Life — And Looking For Generators

The Game of Life is a cellular automaton, invented by the British mathematician John Horton Conway in 1970.

Many will already be acquainted with Conway's invention. For those who aren't, the best way to familiarize oneself with it is to experiment with one of the many free implementations that can be found on the Internet (or better yet, if you have at least rudimentary programming skills, make one yourself).

Basically, there is a grid and each cell can be in either of two states: dead or alive. One starts by seeding the grid with some initial distribution of live cells. Then one lets the system evolve according to three simple rules.

(Birth) A dead cell with exactly three live neighbours becomes a live cell.

(Survival) A live cell with two or three neighbours stays alive.

(Death) Any other cell dies or remains dead.

"Gosper's Glider Gun"

Why is this interesting? Certainly, the Game of Life is not biologically realistic. It doesn't do anything useful. It isn't even really a game in the ordinary sense of the word.

But it's a brilliant demonstration platform for several important concepts — a virtual 'philosophy of science laboratory'. (The philosopher Daniel Dennett has expressed the view that it should be incumbent on every philosophy student to be acquainted with it.) It gives us a microcosm that is simple enough for us to easily understand how things happen, yet has sufficient generative power to produce interesting phenomena.

By playing with the Game of Life for an hour, one can develop an intuitive understanding of the following concepts and ideas:

Emergent complexity — How complex patterns can arise from very simple rules.

Basic dynamics concepts — such as the distinction between laws of nature and initial conditions.

Levels of explanation — One quickly notices patterns arising that can be efficiently described in higher-level terms (such as "gliders", a specific kind of pattern that crawls across the screen) but that are quite cumbersome to describe in the language of the basic physics upon which the patterns supervene (i.e., in terms of individual pixels being alive or dead).

Supervenience — This leads one to think about the relation between different sciences in the real world… Does chemistry, likewise, supervene on physics? Biology on chemistry? The mind on the brain?

Concept formation, and carving nature at its joints — how and why we recognize certain types of pattern and give them names. For instance, in the Game of Life one distinguishes "still lifes", small local patterns that are stable and unchanging; "oscillators", local patterns that perpetually cycle through a fixed sequence of states; "spaceships", patterns that move across the grid (such as gliders); "guns", stationary patterns that send out an incessant stream of spaceships; and "puffer trains", patterns that move themselves across the grid leaving debris behind. As one begins to form these and other concepts, the chaos on the screen gradually becomes more comprehensible. Developing concepts that carve nature at its joints is the first crucial step towards understanding, not only in the Game of Life but in science and in ordinary life as well.

At a more advanced level, one discovers that the Game of Life is Turing complete. That is, it's possible to build a pattern that acts like a universal Turing machine. Thus, any computable function could be implemented in the Game of Life — including perhaps a function that describes a universe like the one we inhabit. It's also possible to build a universal constructor in the Game of Life, a pattern which can build many types of complex objects, including copies of itself. Nonetheless, it seems that the structures that evolve in the Game of Life are different from the ones we find in the real world: Game of Life structures tend to be very fragile, in the sense that changing a single cell will often cause them to dissolve. It is interesting to try to figure out exactly what it is about the rules of the Game of Life and the laws of physics that govern our own universe that accounts for these differences.

Conway's Life is perhaps best viewed not as a single shorthand abstraction, but rather as a generator of such abstractions. We get a whole bunch of useful abstractions — or at least a recipe for how to generate them — all for the price of one.

And this, in fact, points us to one especially useful shorthand abstraction: the strategy of Looking for Generators. We confront many problems. We can try to solve them one by one. But alternatively, we can try to create a generator that produces solutions to multiple problems.

Consider, for example, the challenge of advancing scientific understanding. We might make progress by directly tackling some random scientific problem. But perhaps we can make more progress by Looking for Generators and focusing our efforts on certain subsets of scientific problems, namely those whose solutions would do most to facilitate the discovery of many other solutions. On this approach, we would pay most attention to innovations in methodology that can be widely applied; to the development of scientific instruments that can enable many new experiments; and to improvements in institutional processes, such as peer review, that can make many decisions about whom to hire, fund, and promote reflect true merit more closely.

In the same vein, we would be extremely interested in developing effective biomedical cognitive enhancers and other ways of improving the human thinker — the brain being, after all, the generator par excellence.


ROBERT SAPOLSKY
Neuroscientist, Stanford University; Author, Monkeyluv

The Lure Of A Good Story

Various concepts come to mind for inclusion in that cognitive toolkit. "Emergence," or related to that, "the failure of reductionism" — mistrust the idea that if you want to understand a complex phenomenon, the only tool of science to use is to break it into its component parts, study them individually in isolation, and then glue the itty-bitty little pieces back together. This often doesn't work and, increasingly, it seems like it doesn't work for the most interesting and important phenomena out there. To wit — you have a watch that doesn't run correctly and often, indeed, you can fix it by breaking it down to its component parts and finding the gear that has had a tooth break (actually, I haven't a clue if there is any clock on earth that still works this way). But if you have a cloud that doesn't rain, you don't break it down to its component parts. Ditto for a person whose mind doesn't work right. Or for going about understanding the problems of a society or ecosystem. So that was a scientific concept that was tempting to cite.

Related to that are terms like "synergy" and "interdisciplinary," but heaven save us from having to hear more about those words. There are now whole areas of science where you can't get a faculty position unless you work one of those words into the title of your job talk and have it tattooed on the small of your back.

Another useful scientific concept is "genetic vulnerability." This would be great if it found its way into everyone's cognitive toolkit because its evil cousins of genetic inevitability and genetic determinism are already deeply entrenched there, and with long long legacies of bad consequences. Everyone should be taught about work like that of Avshalom Caspi and colleagues, who looked at genetic polymorphisms related to various neurotransmitter systems that are associated with psychiatric disorders and anti-social behaviors. Ah ha, far too many people will say, drawing on that nearly useless, misshapen tool of genetic determinism, have one of those polymorphisms and you're hosed by inevitability. And instead, what those studies beautifully demonstrate is how these polymorphisms carry essentially zero increased risk of those disorders… unless you grow up in particularly malign environments. Genetic determinism, my tuches.

But the scientific concept that I've chosen is one that is useful simply because it isn't a scientific concept and can be the antithesis of one — "anecdotalism." Every good journalist knows its power — start an article with statistics about foreclosure rates or feature a family victimized by some bank? No-brainer. Display maps showing the magnitudes of refugees flowing out of Darfur or the face of one starving orphan in a camp? Obvious choice. Galvanize the readership.

But anecdotalism is potentially a domain of distortion as well. Absorb the lessons of science and cut saturated fats from your diet, or cite the uncle of the spouse of a friend who eats nothing but pork rinds and is still pumping iron at age 110? Depend on one of the foundations of the 20th century's extension of life span and vaccinate your child, or obsess over a National Enquirer-esque horror story of one vaccination disaster and don't immunize? I shudder at the current potential for another case of anecdotalism — I write four days after the Arizona shooting of Gabby Giffords and 19 other people by Jared Loughner. As of this writing, experts such as the esteemed psychiatrist Fuller Torrey are guessing that Loughner is a paranoid schizophrenic. And if this is true, this anecdotalism will give new legs to the tragic misconception that the mentally ill are more dangerous than the rest of us.

So maybe when I argue for "anecdotalism" going into everyone's cognitive toolkit, I am really arguing for two things to be incorporated — a) appreciation of how distortive it can be, and b) recognition, in a salute to the work of people like Tversky and Kahneman, of its magnetic pull, its cognitive satisfaction. For a social primate complete with a region of the cortex specialized for face recognition, the individual face — whether literal or metaphorical — has a special power. But unappealing, unintuitive patterns of statistics and variation generally teach us much more.


CHRISTINE FINN
Archaeologist, Journalist; Author, Artifacts

Absence and Evidence

I first heard the words "absence of evidence is not evidence of absence" as a first-year archaeology undergraduate. I now know it was part of Carl Sagan's retort against evidence from ignorance, but at the time the non-ascribed quote was part of the intellectual toolkit offered by my professor to help us make sense of the process of excavation.

Philosophically this is a challenging concept, but at an archaeological site all became clear in the painstaking tasks of digging, brushing and trowelling. The concept was useful to remind us, as we scrutinised what was there, to take note of the possibility of what was not there. What we were finding, observing, and lifting, were the material remains, the artifacts which had survived, usually as a result of their material or the good fortune of their deposition. There were barely recordable traces of what was there — the charcoal layer of a prehistoric hearth for example — and others recovered in the washing, or the lab, but this was still tangible evidence. What the concept brought home to us was the invisible traces, the material which had gone from our reference point in time, but which still had a bearing in the context.

It was powerful stuff which stirred my imagination. I looked for more examples outside philosophy. I learned about the great Near-Eastern archaeologist Sir Leonard Woolley who, when excavating the third-millennium-BC Mesopotamian palace at Ur, in modern-day Iraq, conjured up musical instruments from their absence. The evidence was the holes left in the excavation layers, the ghosts of wooden objects which had long since disappeared into time. He used this absence to advantage by making casts of the holes and realising the instruments as reproductions. It struck me at the time that he was creating works of art. The absent lyres were installations which he rendered as interventions, and transformed into artifacts. More recently the British artist Rachel Whiteread has made her name through an understanding of the absent form, from the cast of a house to the undersides and spaces of domestic interiors.

Recognising the evidence of absence is not about forcing a shape on the intangible, but acknowledging a potency in the not-thereness. Taking the absence concept to be a positive idea, I suggest interesting things happen. For years Middle Eastern archaeologists puzzled over the numerous, isolated bath-houses and other structures in the deserts of North Africa. Where was the evidence of habitation? The clue was in the absence: the buildings were used by nomads who left only camel prints in the sand. Their habitations were ephemeral: tents which, if not taken away with them, were made of such material that they, too, would disappear into the sand. Observed again in this light, the aerial photos of desert ruins are hauntingly repopulated.

The absent evidence of ourselves is all around us, beyond the range of digital traces.

When my parents died and I inherited their house, the task of clearing their rooms was both emotional and archaeological. The last mantelpiece in the sitting room had accreted over 35 years of married life, a midden of photos, ephemera, beach-combing trove and containers of odd buttons and old coins. I wondered what a stranger — maybe a forensic scientist, or traditional archaeologist — would make of this array if the narrative was woven simply from the tangible evidence. But as I took the assemblage apart in a charged moment, I felt there was a whole lot of no-thing which was coming away with it. Something invisible, and unquantifiable, which had been holding these objects in that context.

I recognised the feeling, and cast my memory back to my first archaeological excavation. It was of a long-limbed hound, one of those 'fine hunting dogs' the classical writer Strabo described as being traded from ancient Britain into the Roman world. As I knelt in the 2,000-year-old grave, carefully removing each tiny bone as if engaged in a sculptural process, I felt the presence of something absent. I could not quantify it, but it was that unseen 'evidence' which, it seemed, had given the dog its dog-ness.


JON KLEINBERG
Professor, Computer Science, Cornell University

E Pluribus Unum

If you used a personal computer 25 years ago, everything you needed to worry about was taking place in the box in front of you. Today, the applications you use over the course of an hour are scattered across computers all over the world; for the most part, we've lost the ability to tell where our data sits at all. We invent terms to express this lost sense of direction: our messages, photos, and on-line profiles are all somewhere in "The Cloud".

The Cloud is not a single thing; what you think of as your Gmail account or Facebook profile is in fact made possible by the teamwork of a huge number of physically dispersed components — a distributed system, in the language of computer science. But we can think of it as a single thing, and this is the broader point: The ideas of distributed systems apply whenever we see many small things working independently but cooperatively to produce the illusion of a single unified experience. This effect takes place not just on the Internet, but in many other domains as well. Consider for example a large corporation that is able to release new products and make public announcements as though it were a single actor, when we know that at a more detailed level it consists of tens of thousands of employees. Or a massive ant colony engaged in coordinated exploration, or the neurons of your brain creating your experience of the present moment.

The challenge for a distributed system is to achieve this illusion of a single unified behavior in the face of so much underlying complexity. And this broad challenge, appropriately, is in fact composed of many smaller challenges in tension with each other.

One basic piece of the puzzle is the problem of consistency. Each component of a distributed system sees different things and has a limited ability to communicate with everyone else, so different parts of the system can develop views of the world that are mutually inconsistent. There are numerous examples of how this can lead to trouble, both in technological domains and beyond. Your handheld device doesn't sync with your e-mail, so you act without realizing that there's already been a reply to your message. Two people across the country both reserve seat 5F on the same flight at the same time. An executive in an organization "didn't get the memo" and so strays off-message. A platoon attacks too soon and alerts the enemy.

It is natural to try "fixing" these kinds of problems by enforcing a single global view of the world, and requiring all parts of the system to constantly refer to this global view before acting. But this undercuts many of the reasons why one uses a distributed system in the first place. It makes the component that provides the global view a massive bottleneck, and a highly dangerous single point of potential failure. The corporation doesn't function if the CEO has to sign off on every decision.

To get a more concrete sense for some of the underlying design issues, it helps to walk through an example in a little detail, a basic kind of situation in which we try to achieve a desired outcome with information and actions that are divided over multiple participants. The example is the problem of sharing information securely: imagine trying to back up a sensitive database on multiple computers, while protecting the data so that it can only be reconstructed if a majority of the backup computers cooperate. But since the question of secure information sharing ultimately has nothing specifically to do with computers or the Internet, let's formulate it instead using a story about a band of pirates and a buried treasure.

Suppose that an aging Pirate King knows the location of a secret treasure, and before retiring he intends to share the secret among his five shiftless sons. He wants them to be able to recover the treasure if three or more of them work together, but he also wants to prevent a "splinter group" of one or two from being able to get the treasure on their own. To do this, he plans to split the secret of the location into five "shares," giving one to each son, in such a way that he ensures the following condition. If at any point in the future, at least three of the sons pool their shares of the secret, then they will know enough to recover the treasure. But if only one or two pool their shares, they will not have enough information.

How to do this? It's not hard to invent ways of creating five clues so that all of them are necessary for finding the treasure. But this would require unanimity among the five sons before the treasure could be found. How can we do it so that cooperation among any three is enough, and cooperation among any two is insufficient?

Like many deep insights, the answer is easy to understand in retrospect. The Pirate King draws a secret circle on the globe (known only to himself) and tells his sons that he's buried the treasure at the exact southernmost point on this circle. He then tells each son a different point on this circle. Three points are enough to uniquely reconstruct a circle, so any three pirates can pool their information, identify the circle, and find the treasure. But for any two pirates, an infinity of circles pass through their two points, and they cannot know which is the one they need for recovering the secret. It's a powerful trick, and broadly applicable; in fact, versions of this secret-sharing scheme form a basic principle of modern data security, discovered by the cryptographer Adi Shamir, where arbitrary data can be encoded using points on a curve, and reconstructed from knowledge of other points on the same curve.
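
The circle construction is a geometric cousin of Shamir's polynomial scheme: k points determine a degree-(k-1) polynomial just as three points determine a circle. Here is a minimal sketch of 3-of-5 threshold secret sharing over a prime field, in the spirit of the Pirate King's plan; the prime, the function names, and the demo secret are illustrative assumptions, not details from the text.

```python
# A minimal sketch of Shamir-style threshold secret sharing (3 of 5), assuming
# arithmetic in a prime field; the prime, names, and demo secret are illustrative.
import random

PRIME = 2**61 - 1  # a Mersenne prime; all arithmetic happens modulo this field

def make_shares(secret, n=5, k=3):
    """Split `secret` into n shares so that any k of them can reconstruct it."""
    # Random polynomial of degree k-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]  # one (x, f(x)) point per son

def reconstruct(shares):
    """Recover f(0), the secret, by Lagrange interpolation at x = 0."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        # Multiply by the modular inverse of den (Fermat's little theorem).
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

if __name__ == "__main__":
    shares = make_shares(secret=424242)
    print(reconstruct(shares[:3]) == 424242)   # any three shares: True
    print(reconstruct(shares[:2]) == 424242)   # only two shares: almost surely False
```

Any three shares pin down the degree-2 polynomial and hence its constant term, the secret, while any two shares leave it completely undetermined, just as two points leave infinitely many candidate circles.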

The literature on distributed systems is rich with ideas in this spirit. More generally, the principles of distributed systems give us a way to reason about the difficulties inherent in complex systems built from many interacting parts. And so to the extent that we sometimes are fortunate enough to get the impression of a unified Web, a unified global banking system, or a unified sensory experience, we should think about the myriad challenges involved in keeping these experiences whole.


JOHN MCWHORTER
Linguist; Cultural Commentator; William Simon Fellow, Columbia; Author, That Being Said

Path Dependence

In an ideal world all people would spontaneously understand that what political scientists call path dependence explains much more of how the world works than is apparent. Path dependence refers to the fact that often, something that seems normal or inevitable today began with a choice that made sense at a particular time in the past, but survived despite the eclipse of the justification for that choice, because once established, external factors discouraged going into reverse to try other alternatives.

The paradigm example is the seemingly illogical arrangement of letters on typewriter keyboards. Why not just have the letters in alphabetical order, or arrange them so that the most frequently occurring ones are under the strongest fingers? In fact, the first typewriter tended to jam when typed on too quickly, so its inventor deliberately concocted an arrangement that put A under the ungainly little finger. In addition, the first row was provided with all of the letters in the word typewriter so that salesmen, new to typing, could wangle typing the word using just one row.

Quickly, however, mechanical improvements made faster typing possible, and new keyboards placing letters according to frequency were presented. But it was too late: there was no going back. By the 1890s typists across America were used to QWERTY keyboards, having learned to zip away on new versions of them that did not stick so easily, and retraining them would have been expensive and, ultimately, unnecessary. So QWERTY was passed down the generations, and even today we use the queer QWERTY configuration on computer keyboards where jamming is a mechanical impossibility.

The basic concept is simple, but in general estimation tends to be processed as the province of "cute" stories like the QWERTY one, rather than explaining a massive weight of scientific and historical processes. Instead, the natural tendency is to seek explanations for modern phenomena in present-day conditions.

One may assume that cats cover their waste out of fastidiousness, when the same creature will happily consume its own vomit and then jump on your lap. Cats do the burying as an instinct from their wild days when the burial helped avoid attracting predators, and there is no reason for them to evolve out of the trait now (to pet owners' relief). I have often wished there were a spontaneous impulse among more people to assume that path dependence-style explanations are as likely as jerry-rigged present-oriented ones.

For one, that the present is based on a dynamic mixture of extant and ancient conditions is simply more interesting than assuming that the present is (mostly) all there is, with history as merely "the past," interesting only for seeing whether something that happened then could now happen again, which is different from path dependence.

For example, path dependence explains a great deal about language which is otherwise attributed to assorted just-so explanations. Much of the public embrace of the idea that one's language channels how one thinks is based on this kind of thing. Robert McCrum celebrates English as "efficient" in its paucity of suffixes of the kind that complexify most European languages. The idea is that this is rooted in something in its speakers' spirit, which would have propelled them to lead the world via exploration and the Industrial Revolution.

But English lost its suffixes starting in the eighth century A.D., when Vikings invaded Britain and so many of them learned the language incompletely that children started speaking it that way. After that, you can't create gender and conjugation out of thin air — there's no going back until gradual morphing recreates such things over eons of time. That is, English's current streamlined syntax has nothing to do with any present-day condition of the spirit, nor with any even four centuries ago. The culprit is path dependence, as are most things about how a language is structured.

Or, we hear much lately about a crisis in general writing skills, supposedly due to email and texting. But there is a circularity here — why, precisely, could people not write emails and texts with the same "writerly" style that people used to couch letters in? Or, we hear of a vaguely defined effect of television, despite the fact that kids were curled up endlessly in front of the tube starting in the fifties, long before the eighties, when outcries of this kind first took on their current level of alarm in the report A Nation at Risk.

Once again, the presentist explanation does not cohere, whereas one based on an earlier historical development that there is no turning back from does. Public American English began a rapid shift from cosseted to less formal "spoken" style in the sixties, in the wake of cultural changes amidst the counterculture. This sentiment directly affected how language arts textbooks were composed, the extent to which any young person was exposed to an old-fashioned formal "speech," and attitudes towards the English language heritage in general. The result: a linguistic culture stressing the terse, demotic, and spontaneous. After just one generation minted in this context, there was no way to go back. Anyone who decided to communicate in the grandiloquent phraseology of yore would sound absurd and be denied influence or exposure. Path dependence, then, identifies this cultural shift as the cause of what dismays, delights, or just interests us in how English is currently used, and reveals television, email and other technologies as merely epiphenomenal.

Most of life looks path dependent to me. If I could create a national educational curriculum from scratch, I would include the concept as one taught to young people as early as possible.


SCOTT D. SAMPSON
Paleontologist and science communicator; Author, Dinosaur Odyssey: Fossil Threads in the Web of Life

Interbeing

Humanity's cognitive toolkit would greatly benefit from adoption of "interbeing," a concept that comes from Vietnamese Buddhist monk Thich Nhat Hanh. In his words:

"If you are a poet, you will see clearly that there is a cloud floating in [a] sheet of paper. Without a cloud, there will be no rain; without rain, the trees cannot grow; and without trees, we cannot make paper. The cloud is essential for the paper to exist. If the cloud is not here, the sheet of paper cannot be here either . . . "Interbeing" is a word that is not in the dictionary yet, but if we combine the prefix "inter-" with the verb to be," we have a new verb, inter-be. Without a cloud, we cannot have a paper, so we can say that the cloud and the sheet of paper inter-are. . . . "To be" is to inter-be. You cannot just be by yourself alone. You have to inter-be with every other thing. This sheet of paper is, because everything else is."

Depending on your perspective, the above passage may sound like profound wisdom or New Age mumbo-jumbo. I would like to propose that interbeing is a robust scientific fact — at least insomuch as such things exist — and, further, that this concept is exceptionally critical and timely.

Arguably the most cherished and deeply ingrained notion in the Western mindset is the separateness of our skin-encapsulated selves — the belief that we can be likened to isolated, static machines. Having externalized the world beyond our bodies, we are consumed with thoughts of furthering our own ends and protecting ourselves. Yet this deeply rooted notion of isolation is illusory, as evidenced by our constant exchange of matter and energy with the "outside" world. At what point did your last breath of air, sip of water, or bite of food cease to be part of the outside world and become you? Precisely when did your exhalations and wastes cease being you? Our skin is as much permeable membrane as barrier, so much so that, like a whirlpool, it is difficult to discern where "you" end and the remainder of the world begins. Energized by sunlight, life converts inanimate rock into nutrients, which then pass through plants, herbivores, and carnivores before being decomposed and returned to the inanimate Earth, beginning the cycle anew. Our internal metabolisms are intimately interwoven with this Earthly metabolism; one result is the replacement of every atom in our bodies every seven years or so.

You might counter with something like, "Ok, sure, everything changes over time. So what? At any given moment, you can still readily separate self from other."

Not quite. It turns out that "you" are not one life form — that is, one self — but many. Your mouth alone contains more than 700 distinct kinds of bacteria. Your skin and eyelashes are equally laden with microbes and your gut houses a similar bevy of bacterial sidekicks. Although this still leaves several bacteria-free regions in a healthy body — for example, brain, spinal cord, and blood stream — current estimates indicate that your physical self possesses about a trillion human cells and about 10 trillion bacterial cells. In other words, at any given moment, your body is about 90% nonhuman, home to many more life forms than the number of people presently living on Earth; more even than the number of stars in the Milky Way Galaxy! To make things more interesting still, microbiological research demonstrates that we are utterly dependent on this ever-changing bacterial parade for all kinds of "services," from keeping intruders at bay to converting food into useable nutrients.

So, if we continually exchange matter with the outside world, if our bodies are completely renewed every few years, and if each of us is a walking colony of trillions of largely symbiotic life forms, exactly what is this self that we view as separate? You are not an isolated being. Metaphorically, to follow current bias and think of your body as a machine is not only inaccurate but destructive. Each of us is far more akin to a whirlpool, a brief, ever-shifting concentration of energy in a vast river that's been flowing for billions of years. The dividing line between self and other is, in many respects, arbitrary; the "cut" can be made at many places, depending on the metaphor of self one adopts. We must learn to see ourselves not as isolated but as permeable and interwoven — selves within larger selves, including the species self (humanity) and the biospheric self (life). The interbeing perspective encourages us to view other life forms not as objects but subjects, fellow travelers in the current of this ancient river. On a still more profound level, it enables us to envision ourselves and other organisms not as static "things" at all, but as processes deeply and inextricably embedded in the background flow.

One of the greatest obstacles confronting science education is the fact that the bulk of the universe exists either at extremely large scales (e.g., planets, stars, and galaxies) or extremely small scales (e.g., atoms, genes, cells) well beyond the comprehension of our (unaided) senses. We evolved to sense only the middle ground, or "mesoworld," of animals, plants, and landscapes. Yet, just as we have learned to accept the non-intuitive, scientific insight that the Earth is not the center of the universe, so too must we now embrace the fact that we are not outside or above nature, but fully enmeshed within it. Interbeing, an expression of ancient wisdom backed by science, can help us comprehend this radical ecology, fostering a much-needed transformation in mindset.


AMANDA GEFTER
Books & Arts editor, New Scientist; founder and editor, CultureLab

Duality

It is one of the stranger ideas to emerge from recent physics. Take two theories that describe utterly dissimilar worlds — worlds with different numbers of dimensions, different geometries of spacetime, different building blocks of matter. Twenty years ago, we'd say those are indisputably disparate and mutually exclusive worlds. Today, there's another option: two radically different theories might be dual to one another — that is, they might be two very different manifestations of the same underlying reality.

Dualities are as counterintuitive a notion as they come, but physics is riddled with them. When physicists looking to unite quantum theory with gravity found themselves with five very different but equally plausible string theories, it was an embarrassment of riches — everyone was hoping for one "theory of everything", not five. But duality proved to be the key ingredient. Remarkably, all five string theories turned out to be dual to one another, different expressions of a single underlying theory.

Perhaps the most radical incarnation of duality was discovered in 1997 by Juan Maldacena. Maldacena found that a version of string theory in a bizarrely shaped universe with five large dimensions is mathematically dual to an ordinary quantum theory of particles living on that universe's four-dimensional boundary. Previously, one could argue that the world was made up of particles or that the world was made up of strings. Duality transformed "or" into "and" — mutually exclusive hypotheses, both equally true.

In everyday language, duality means something very different. It is used to connote a stark dichotomy: male and female, east and west, light and darkness. Embracing the physicist's meaning of duality, however, can provide us with a powerful new metaphor, a one-stop shorthand for the idea that two very different things might be equally true. As our cultural discourse is becoming increasingly polarized, the notion of duality is both more foreign and more necessary than ever. If accessible in our daily cognitive toolkit, it could serve as a potent antidote to our typically Boolean, two-valued, zero-sum thinking — where statements are either true or false, answers are yes or no, and if I'm right, then you are wrong. With duality, there's a third option. Perhaps my argument is right and yours is wrong; perhaps your argument is right and mine is wrong; or, just maybe, our opposing arguments are dual to one another.

That's not to say that we ought to descend into some kind of relativism, or that there are no singular truths. It is to say, though, that truth is far more subtle than we once believed, and that it shows up in many guises. It is up to us to recognize it in all its varied forms.

