2011

WHAT SCIENTIFIC CONCEPT WOULD IMPROVE EVERYBODY'S COGNITIVE TOOLKIT?

DONALD HOFFMAN
Cognitive Scientist, UC Irvine; Author, Visual Intelligence

Sensory Desktop

Our perceptions are neither true nor false. Instead, our perceptions of space and time and objects, the fragrance of a rose, the tartness of a lemon, are all a part of our "sensory desktop," which functions much like a computer desktop.

Graphical desktops for personal computers have existed for about three decades. Yet they are now such an integral part of daily life that we might easily overlook a useful concept that they embody. A graphical desktop is a guide to adaptive behavior. Computers are notoriously complex devices, more complex than most of us care to learn. The colors, shapes and locations of icons on a desktop shield us from the computer's complexity, and yet they allow us to harness its power by appropriately informing our behaviors, such as mouse movements and button clicks, that open, delete and otherwise manipulate files. In this way, a graphical desktop is a guide to adaptive behavior.

Graphical desktops make it easier to grasp the idea that guiding adaptive behavior is different than reporting truth. A red icon on a desktop does not report the true color of the file it represents. Indeed, a file has no color. Instead, the red color guides adaptive behavior, perhaps by signaling the relative importance or recent updating of the file. The graphical desktop guides useful behavior, and hides what is true but not useful. The complex truth about the computer's logic gates and magnetic fields is, for the purposes of most users, of no use.

Graphical desktops thus make it easier to grasp the nontrivial difference between utility and truth. Utility drives evolution by natural selection. Grasping the distinction between utility and truth is therefore critical to understanding a major force that shapes our bodies, minds and sensory experiences.

Consider, for instance, facial attractiveness. When we glance at a face we get an immediate feeling of its attractiveness, a feeling that usually falls somewhere between hot and not. That feeling can inspire poetry, evoke disgust, or launch a thousand ships. It certainly influences dating and mating. Research in evolutionary psychology suggests that this feeling of attractiveness is a guide to adaptive behavior. The behavior is mating, and the initial feeling of attractiveness towards a person is an adaptive guide because it correlates with the likelihood that mating with that person will lead to successful offspring.

Just as red does not report the true color of a file, so hotness does not report the true attractiveness of a face: files have no intrinsic color; faces have no intrinsic attractiveness. The color of an icon is an artificial convention to represent aspects of the utility of a colorless file. The initial feeling of attractiveness is an artificial convention to represent mate utility.

The phenomenon of synesthesia can help us understand the conventional nature of our sensory experiences. In many cases of synesthesia, a stimulus that is normally experienced in one way, say as a sound, is also automatically experienced in another way, say as a color. Someone with sound-color synesthesia sees colors and simple shapes whenever they hear a sound. The same sound always occurs with the same colors and shapes. Someone with taste-touch synesthesia feels touch sensations in their hands every time they taste something with their mouth. The same taste always occurs with the same feeling of touch in their hands. The particular connections between sound and color that one sound-color synesthete experiences typically differ from the connections experienced by another such synesthete. In this sense, the connections are an arbitrary convention. Now imagine a sound-color synesthete who no longer has sound experiences in response to acoustic stimuli, and instead has only their synesthetic color experiences. Then this synesthete would experience as colors what the rest of us experience as sounds. In principle they could get all the acoustic information the rest of us get, only in a color format rather than a sound format.

This leads to the concept of a sensory desktop. Our sensory experiences, such as vision, sound, taste and touch, can all be thought of as sensory desktops that have evolved to guide adaptive behavior, not to report objective truths. As a result, we should take our sensory experiences seriously. If something tastes putrid, we probably shouldn't eat it. If it sounds like a rattlesnake, we probably should avoid it. Our sensory experiences have been shaped by natural selection to guide such adaptive behaviors.

We must take our sensory experiences seriously, but not literally. This is one place where the concept of a sensory desktop is helpful. We take the icons on a graphical desktop seriously; we won't, for instance, carelessly drag an icon to the trash, for fear of losing a valuable file. But we don't take the colors, shapes or locations of the icons literally. They are not there to resemble the truth. They are there to facilitate useful behaviors.

Sensory desktops differ across species. A face that could launch a thousand ships probably has no attraction to a macaque monkey. The rotting carrion that tastes putrid to me might taste like a delicacy to a vulture. My taste experience guides behaviors appropriate for me: Eating rotten carrion could kill me. The vulture's taste experience guides behaviors appropriate to it: Carrion is its primary food source.

Much of evolution by natural selection can be understood as an arms race between competing sensory desktops. Mimicry and camouflage exploit limitations in the sensory desktops of predators and prey. A mutation that alters a sensory desktop to reduce such exploitation conveys a selective advantage. This cycle of exploiting and revising sensory desktops is a creative engine of evolution.

On a personal level, the concept of a sensory desktop can enhance our cognitive toolkit by refining our attitude towards our own perceptions. It is common to assume that the way I see the world is, at least in part, the way it really is. Because, for instance, I experience a world of space and time and objects, it is common to assume that these experiences are, or at least resemble, objective truths. The concept of a sensory desktop reframes all this. It loosens the grip of sensory experiences on the imagination. Space, time and objects might just be aspects of a sensory desktop that is specific to Homo sapiens. They might not be deep insights into objective truths, just convenient conventions that have evolved to allow us to survive in our niche. Our desktop is just a desktop.


DANIEL GOLEMAN
Psychologist; Author, Ecological Intelligence

Anthropocene Thinking

Do you know the PDF of your shampoo? A 'PDF' refers to a "partially diminished fraction of an ecosystem," and if your shampoo contains palm oil cultivated on clearcut jungle in Borneo, say, that value will be high. How about your shampoo's DALY? This measure comes from public health: "disability-adjusted life years," the amount of one's life that will be lost to a disabling disease because of, say, a lifetime's cumulative exposure to a given industrial chemical. So if your favorite shampoo contains two common ingredients, the carcinogen 1,4-dioxane or BHA, an endocrine disrupter, its DALY will be higher.
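
To make the DALY concrete: under the standard public-health definition, it is the sum of years of life lost to early death and years lived with disability. A minimal sketch, with every number invented purely for illustration:

```python
# DALY = YLL + YLD (standard public-health definition).
# YLL: years of life lost = deaths x years lost per death.
# YLD: years lived with disability = cases x disability weight x duration.
# All figures below are invented, purely for illustration.

def dalys(deaths, years_lost_per_death, cases, disability_weight, duration_years):
    yll = deaths * years_lost_per_death
    yld = cases * disability_weight * duration_years
    return yll + yld

# A hypothetical population exposed to a hypothetical industrial chemical:
print(dalys(deaths=10, years_lost_per_death=15,
            cases=500, disability_weight=0.2, duration_years=5))  # -> 650.0
```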

PDFs and DALYs are among the myriad metrics of Anthropocene thinking, which considers how human systems impact the global systems that sustain life. This way of perceiving interactions between the built and the natural worlds comes from the geological sciences. If adopted more widely, this lens might usefully inform how we find solutions to the singular peril our species faces: the extinction of our ecological niche.

Beginning with cultivation and accelerating with the Industrial Revolution, our planet left the Holocene Age and entered what geologists call the Anthropocene Age, in which human systems erode the natural systems that support life. Through the Anthropocene lens, the daily workings of the energy grid, transportation, industry and commerce inexorably deteriorate global biogeochemical systems like the carbon, phosphorus and water cycles. The most troubling data suggest that since the 1950s the human enterprise has led to an explosive acceleration that will reach criticality within the next few decades, as different systems reach a point-of-no-return tipping point. For instance, about half the total rise in atmospheric CO2 concentration has occurred in just the last 30 years — and of all the global life-support systems, the carbon cycle is closest to no-return. While such "inconvenient truths" about the carbon cycle have been the poster child for our species' slow-motion suicide, that's just part of a much larger picture, with all eight global life-support systems under attack by our daily habits.

Anthropocene thinking tells us the problem is not necessarily inherent in the systems like commerce and energy that degrade nature; hopefully these can be modified to become self-sustaining with innovative advances and entrepreneurial energy. The real root of the Anthropocene dilemma lies in our neural architecture.

We approach the Anthropocene threat with brains shaped by evolution to survive the previous geological epoch, the Holocene, when dangers were signaled by growls and rustles in the bushes, and it served one well to reflexively abhor spiders and snakes. Our neural alarm systems are still attuned to this largely antiquated range of dangers.

Add to that misattunement to threats our built-in perceptual blindspot: we have no direct neural register for the dangers of the Anthropocene age, which are too macro or micro for our sensory apparatus. We are oblivious to, say, our body burden, the lifetime build-up of damaging industrial chemicals in our tissues.

To be sure, we have methods for assessing CO2 buildups or blood levels of BHA. But for the vast majority of people those numbers have little to no emotional impact. Our amygdala shrugs.

Finding ways to counter the forces that feed the Anthropocene effect should count high in prioritizing scientific efforts. The earth sciences of course embrace the issue — but do not deal with the root of the problem, human behavior. The sciences that have most to offer have done the least Anthropocene thinking.

The fields that hold keys to solutions include economics, neuroscience, social psychology and cognitive science — and their various hybrids. With a focus on Anthropocene theory and practice they might well contribute species-saving insights. But first they have to engage this challenge, which for the most part has remained off their agenda.

When, for example, will neuroeconomics tackle the brain's perplexing indifference to the news about planetary meltdown, let alone how that neural blindspot might be patched? Might cognitive neuroscience one day offer some insight that might change our collective decision-making away from a lemmings' march to oblivion? Could any of the computer, behavioral or brain sciences come up with an information prosthetic that might reverse our course?

Paul Crutzen, the Dutch atmospheric chemist who won a Nobel for his work on ozone depletion, coined the term 'Anthropocene' ten years ago. As a meme, 'Anthropocene' has as yet little traction in scientific circles beyond geology and environmental science, let alone the wider culture: A Google check on 'anthropocene' shows 78,700 references (mainly in geoscience), while by contrast 'placebo', a once-esoteric medical term now well-established as a meme, has more than 18 million (and even the freshly coined 'vuvuzela' has 3,650,000).


GIULIO BOCCALETTI
Physicist, Atmospheric and Oceanic scientist, and Associate Principal with McKinsey & Company

Scale Analysis

There is a well-known saying: dividing the universe into things that are linear and those that are non-linear is very much like dividing the universe into things that are bananas and things that are not. Many things are not bananas.

Non-linearity is a hallmark of the real world. It occurs anytime the outputs of a system cannot be expressed as a sum of its inputs, each multiplied by a simple constant — a rare occurrence in the grand scheme of things. Non-linearity does not necessarily imply complexity, just as linearity does not exclude it, but most real systems do exhibit some non-linear feature that results in complex behaviour. Some, like the turbulent stream from a water tap, hide deep non-linearity under domestic simplicity, while others, weather for example, are evidently non-linear to the most distracted of observers. Non-linear complex dynamics are all around us: unpredictable variability, tipping points, sudden changes in behaviour, and hysteresis are frequent symptoms of a non-linear world.
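
For readers who want the distinction pinned down, linearity is the superposition property; everything that violates it is, by definition, non-linear:

```latex
% A system f is linear precisely when it obeys superposition:
f(\alpha x + \beta y) = \alpha f(x) + \beta f(y)
\quad \text{for all inputs } x, y \text{ and constants } \alpha, \beta.
% Even f(x) = x^2 fails: f(x + y) = x^2 + 2xy + y^2 \neq f(x) + f(y).
```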

Non-linear complexity also has the unfortunate characteristic of being difficult to manage, high-speed computing notwithstanding, because it tends to lack the generality of linear solutions. As a result we have a tendency to view the world in terms of linear models — much for the same reason that looking for lost keys under a lamppost might make sense: because that is where the light is. Understanding — of the kind that "rests in the mind" — seems to require simplification, one in which complexity is reduced where possible and only the most material parts of the problem are preserved.

One of the most robust bridges between the linear and the non-linear, the simple and the complex, is scale analysis, the dimensional analysis of physical systems. It is through scale analysis that we can often make sense of complex non-linear phenomena in terms of simpler models. At its core reside two questions. The first asks what quantities matter most to the problem at hand (which tends to be less obvious than one would like). The second asks what the expected magnitude and — importantly — dimensions of those quantities are. This second question is particularly important, as it captures the simple yet fundamental point that physical behaviour should be invariant to the units in which we measure quantities. It may sound like an abstraction but, without jargon, you could really call scale analysis "focusing systematically only on what matters most at a given time and place".

There are some subtle facts about scale analysis that make it more powerful than simply comparing orders of magnitude. A most remarkable example is that scale analysis can be applied, through a systematic use of dimensions, even when the precise equations governing the dynamics of a system are not known. The great physicist G.I. Taylor, a character whose prolific legacy haunts any aspiring scientist, gave a famous demonstration of this deceptively simple approach. In the 1950s, back when the detonating power of the nuclear bomb was a carefully guarded secret, the US Government incautiously released some unclassified photographs of a nuclear explosion. Taylor realized that, while its details would be complex, the fundamentals of the problem would be governed by a few parameters. From dimensional arguments, he posited that there ought to be a scale-invariant number linking the radius of the blast, the time from detonation, the energy released in the explosion and the density of the surrounding air. From the photographs, he was able to estimate the radius and timing of the blast, inferring a remarkably accurate — and embarrassingly public — estimate of the energy of the explosion.
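
Taylor's argument is compact enough to reproduce. A minimal sketch, assuming, as he did, that the blast radius R depends only on the released energy E, the air density rho and the elapsed time t; dimensional analysis then forces R to scale as (E t^2 / rho)^(1/5), that is, E is roughly rho R^5 / t^2 up to a constant of order one. The radius and timing below are the oft-quoted readings from the declassified Trinity photographs:

```python
# The only combination of E [J = kg m^2/s^2], rho [kg/m^3] and t [s]
# with dimensions of length is (E t^2 / rho)^(1/5), so
# R ~ (E t^2 / rho)^(1/5)  =>  E ~ rho * R**5 / t**2.

rho = 1.25    # air density, kg/m^3
R = 140.0     # blast radius read off a published photograph, m
t = 0.025     # time after detonation on that frame, s

E = rho * R**5 / t**2    # joules, taking the dimensionless constant as ~1
print(E / 4.184e12)      # in kilotons of TNT -> roughly 25 kt
```

Even this crude constant-of-one version lands within a factor of two of the actual yield of roughly twenty kilotons, which is the point: the dimensions alone carry most of the information.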

Taylor's capacity for insight was no doubt uncommon: scale analysis seldom generates such elegant results. Nevertheless, it has a surprisingly wide range of applications and an illustrious history of guiding research in applied sciences, from structural engineering to turbulence theory.

But what of its broader application? The analysis of scales and dimensions can help us understand many complex problems, and should be part of everybody's toolkit. In business planning and financial analysis, for example, the use of ratios and benchmarks is a first step towards scale analysis. It is certainly not a coincidence that they became common management tools at the height of Taylorism — a different Taylor, F.W. Taylor, the father of modern management theory — when "scientific management" and its derivatives made their first mark. The analogy is not without problems and would require more detailing than we have room for here — for example, on the use of dimensions to infer relations between quantities. But inventory turnover, profit margin, debt and equity ratios, and labour and capital productivity are dimensional parameters that can tell us a great deal about the basic dynamics of business economics, even without detailed market knowledge or the day-to-day dynamics of individual transactions.
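
As a small illustration of the analogy, with all figures invented: a ratio like inventory turnover carries dimensions of 1/time, so its inverse is a timescale, how long capital sits on the shelf, which can be compared across businesses of wildly different sizes:

```python
# Illustrative only: invented figures for a hypothetical retailer.
cogs = 80e6         # cost of goods sold, $/year
inventory = 10e6    # average inventory held, $

turnover = cogs / inventory       # dimensions: 1/year
days_on_shelf = 365 / turnover    # the associated timescale, in days

print(f"{turnover:.0f} turns/year, ~{days_on_shelf:.0f} days on the shelf")
```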

In fact, scale analysis in its simplest form can be applied to almost every quantitative aspect of daily life, from the fundamental timescales governing our expectations of returns on investments, to the energy intensity of our lives. Ultimately, scale analysis is a particular form of numeracy — one where the relative magnitude, as well as the dimensions, of the things that surround us guide our understanding of their meaning and evolution. It almost has the universality and coherence of Warburg's Mnemosyne Atlas: a unifying system of classification, where distant relations between seemingly disparate objects can continuously generate new ways of looking at problems and, through simile and dimension, can often reveal unexpected avenues of investigation.

Of course, anytime a complicated system is translated into a simpler one, information is lost. Scale analysis is a tool that will only be as insightful as the person using it. By itself, it does not provide answers and is no substitute for deeper analysis. But it offers a powerful lens through which to view reality and to understand "the order of things".


HELEN FISHER
Research Professor, Department of Anthropology, Rutgers University; Author, Why We Love

Temperament Dimensions

"I am large, I contain multitudes" wrote Walt Whitman. I have never met two people who were alike. I am an identical twin, and even we are not alike. Every individual has a distinct personality, a different cluster of thoughts and feelings that color all their actions. But there are patterns to personality: people express different styles of thinking and behaving — what psychologists call "temperament dimensions." I offer this concept of temperament dimensions as a useful new member of our cognitive tool kit.

Personality is composed of two fundamentally different types of traits: those of "character" and those of "temperament." Your character traits stem from your experiences. Your childhood games; your family's interests and values; how people in your community express love and hate; what relatives and friends regard as courteous or perilous; how those around you worship; what they sing; when they laugh; how they make a living and relax: innumerable cultural forces build your unique set of character traits. The balance of your personality is your temperament, all the biologically based tendencies that contribute to your consistent patterns of feeling, thinking and behaving. As the Spanish philosopher José Ortega y Gasset put it, "I am, plus my circumstances." Temperament is the "I am," the foundation of who you are.

Some 40% to 60% of the observed variance in personality is due to traits of temperament. They are heritable, relatively stable across the life course, and linked to specific gene pathways and/or hormone or neurotransmitter systems. Moreover, our temperament traits congregate in constellations, each aggregation associated with one of four broad, interrelated yet distinct brain systems: those associated with dopamine, serotonin, testosterone and estrogen/oxytocin. Each constellation of temperament traits constitutes a distinct temperament dimension.

For example, specific alleles in the dopamine system have been linked with exploratory behavior; thrill-, experience- and adventure-seeking; susceptibility to boredom; and lack of inhibition. Enthusiasm has been coupled with variations in the dopamine system, as have lack of introspection, increased energy and motivation, physical and intellectual exploration, cognitive flexibility, curiosity, idea generation and verbal and non-linguistic creativity.

The suite of traits associated with the serotonin system includes sociability, lower levels of anxiety, higher scores on scales of extroversion, and lower scores on a scale of "No Close Friends," as well as positive mood, religiosity, conformity, orderliness, conscientiousness, concrete thinking, self-control, sustained attention, low novelty seeking, and figural and numeric creativity.  

Heightened attention to detail, intensified focus, and narrow interests are some of the traits linked with prenatal testosterone expression. But testosterone activity is also associated with emotional containment, emotional flooding (particularly rage), social dominance and aggressiveness, less social sensitivity, and heightened spatial and mathematical acuity.  

Last, the constellation of traits associated with the estrogen and related oxytocin system include verbal fluency and other language skills, empathy, nurturing, the drive to make social attachments and other prosocial aptitudes, contextual thinking, imagination, and mental flexibility.  

We are each a different mix of these four broad temperament dimensions. But we do have distinct personalities. People are malleable, of course; but we are not blank slates upon which the environment inscribes personality. A curious child tends to remain curious, although what he or she is curious about changes with maturity. Stubborn people remain obstinate; orderly people remain punctilious; and agreeable men and women tend to remain amenable.

We are capable of acting "out of character," but doing so is tiring. People are biologically inclined to think and act in specific patterns — temperament dimensions. But why would this concept of temperament dimensions be useful in our human cognitive tool kit? Because we are social creatures, and a deeper understanding of who we (and others) are can provide a valuable tool for understanding, pleasing, cajoling, reprimanding, rewarding and loving others — from friends and relatives to world leaders. It's also practical.

Take hiring. Those expressive of the novelty-seeking temperament dimension are unlikely to do their best in a job requiring rigid routines and schedules. Biologically cautious individuals are not likely to be comfortable in high-risk posts. Decisive, tough minded high testosterone types are not well suited to work with those who can't get to the point and decide quickly. And those predominantly of the compassionate, nurturing high estrogen temperament dimension are not likely to excel at occupations that require them to be ruthless.

Managers might form corporate boards containing all four broad types. Colleges might place freshmen with roommates of a similar temperament, rather than a similar background. Perhaps business teams, sports teams, political teams and teacher-student teams would operate more effectively if they were either more "like-minded" or more varied in their cognitive skills. And certainly we could communicate with our children, lovers, colleagues and friends more effectively. We are not puppets on a string of DNA. Those biologically susceptible to alcoholism, for example, often give up drinking. The more we come to understand our biology, the more we will appreciate how culture molds our biology.


JOEL GOLD, M.D.
Psychiatrist; Clinical Assistant Professor of Psychiatry, NYU School of Medicine

ARISE

ARISE, or Adaptive Regression In the Service of the Ego, is a psychoanalytic concept recognized for decades, but little appreciated today. It is one of the ego functions which, depending on who you ask, may number anywhere from a handful to several dozen. They include reality testing, stimulus regulation, defensive function and synthetic integration. For simplicity, we can equate the ego with the self (though ARISS doesn't quite roll off the tongue).

In most fields, including psychiatry, regression is not considered a good thing. Regression implies a return to an earlier and inferior state of being and functioning. But the key here is not the regression, but rather whether the regression is maladaptive or adaptive.

There are numerous vital experiences that cannot be achieved without adaptive regression: The creation and appreciation of art, music, literature and food; the ability to sleep; sexual fulfillment; falling in love; and, yes, the ability to free associate and tolerate psychoanalysis or psychodynamic therapy without getting worse. Perhaps the most important element in adaptive regression is the ability to fantasize, to daydream. The person who has access to their unconscious processes and can mine them, without getting mired in them, can try new approaches, can begin to see things in new ways and, perhaps, can achieve mastery of their pursuits.

In a word: Relax.

It was ARISE that allowed Friedrich August Kekulé to use a daydream about a snake eating its tail as inspiration for his formulation of the structure of the benzene ring. It's what allowed Richard Feynman to simply drop an O-ring into a glass of ice water, show that when cold the ring loses its resilience, and thereby explain the cause of the Space Shuttle Challenger disaster. Sometimes it takes a genius to see that a fifth-grade science experiment is all that is needed to solve a problem.

In another word: Play.

Sometimes in order to progress you need to regress. Sometimes you just have to let go and ARISE.


MATTHEW RITCHIE
Artist

Systemic Equilibrium

The second law of thermodynamics, the so-called "arrow of time", popularly associated with entropy (and by association death), is the most widely misunderstood shorthand abstraction in human society today. We need to fix this.

The second law states that over time, closed systems will become more similar, eventually reaching systemic equilibrium. It is not a question of if a system will reach equilibrium; it is only a question of when a system will reach equilibrium.
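
A toy sketch (mine, not the author's) makes the one-way character visible: let two bodies at different temperatures exchange heat in isolation, and their temperatures converge while the total entropy only ever rises.

```python
# Two identical bodies exchanging heat in an isolated system.
C = 1.0                  # heat capacity of each body, J/K
k = 0.01                 # arbitrary heat-transfer fraction per step
T1, T2 = 400.0, 200.0    # starting temperatures, K
S = 0.0                  # total entropy change so far, J/K

while T1 - T2 > 0.01:
    dQ = k * (T1 - T2)        # heat flows from the hotter to the colder body
    S += dQ / T2 - dQ / T1    # positive whenever T1 > T2: entropy rises
    T1 -= dQ / C
    T2 += dQ / C

print(round(T1), round(T2), round(S, 3))  # ~300 K each; S > 0, as the law demands
```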

Living on a single planet, we are all participants in a single physical system which has only one direction — towards systemic equilibrium. The logical consequences are obvious; our environmental, industrial and political systems (even our intellectual and theological systems) will become more homogenous over time. It's already started. The physical resources available to every person on earth, including air, food and water, have already been significantly degraded by the high burn rate of industrialization, just as the intellectual resources available to every person on earth have already been significantly increased by the high distribution rate of globalization.

Human societies are already far more similar than ever before (does anyone really miss dynastic worship?) and it would be very tempting to imagine that a modern democracy based on equal rights and opportunities is the system in equilibrium. That seems unlikely, given our current energy footprint. More likely, if the total system energy is depleted too fast, modern democracies will be compromised as the system crashes to its lowest equilibrium too quickly for socially equitable evolution.

Our one real opportunity is to use the certain knowledge of ever increasing systemic equilibrium to build a model for an equitable and sustainable future. The mass distribution of knowledge and access to information through the world wide web is our civilization's signal achievement. Societies that adopt innovative, predictive and adaptive models designed around a significant, on-going redistribution of global resources will be most likely to survive in the future.

But since we are biologically and socially programmed to avoid discussing entropy (death), we reflexively avoid the subject of systemic changes to our way of life, both as a society and individuals. We think it's a bummer. Instead of examining the real problems, we consume apocalyptic fantasies as "entertainment" and deride our leaders for their impotence. We really need to fix this.

Unfortunately, even this basic concept faces an uphill battle today. In earlier, expansionist phases of society, various metaphorical engines such as "progress" and "destiny" allowed the metaphorical "arrow" to supplant the previous (admittedly spirit-crushing) "wheel" of time. Intellectual positions that supported scientific experimentation and causality were tolerated, even endorsed, as long as they contributed to the arrow's cultural momentum. But in a more crowded and contested world, the limits of projected national power and consumption control have become more obvious. Resurgent strands of populism, radicalism and magical thinking have found mass appeal in their rejection of many rational concepts. But perhaps most significant is the rejection of undisputed physical laws.

The practical effect of this denial on the relationship between the global economy and the climate change debate (for example) is obvious. Advocates propose continuous "good" (green) growth, while denialists propose continuous "bad" (brown) growth. Both sides are more interested in backing winners and losers in a future economic environment predicated on the continuation of today's systems, than accepting the physical inevitability of increasing systemic equilibrium in any scenario.

Of course, any system can temporarily cheat entropy. Hotter particles (or societies) can "steal" the stored energy of colder (or weaker) ones, for a while. But in the end, the rate at which the total energy is burned and redistributed will still determine the speed at which the planetary system reaches its true systemic equilibrium. Whether we extend the lifetime of our local "heat" through war, or improved window insulation, is the stuff of politics. But even if in reality we can't beat the house, it's worth a try, isn't it?


LINDA STONE
Hi-Tech Industry Consultant; Former Executive at Apple Computer and Microsoft Corporation

Suspending Disbelief

Barbara McClintock was ignored and ridiculed by the scientific community for thirty-two years before winning a Nobel Prize in 1983 for discovering "jumping genes." During the years of hostile treatment by her peers, McClintock didn't publish, preferring to avoid the rejection of the scientific community. Stanley Prusiner faced significant criticism from his colleagues until his prion theory was confirmed. He, too, went on to win a Nobel Prize, in 1997.

Barry Marshall challenged the medical "fact" that stomach ulcers were caused by acid and stress, and presented evidence that the bacterium H. pylori is the cause. Marshall is quoted as saying, "Everyone was against me."

Progress in medicine was delayed while these "projective thinkers" persisted, albeit on a slower and lonelier course.

Projective thinking is a term coined by Edward de Bono to describe generative rather than reactive thinking. McClintock, Prusiner, and Marshall offered projective thinking; suspending their disbelief regarding accepted scientific views at the time.

Articulate, intelligent individuals can skillfully construct a convincing case to argue almost any point of view. This critical, reactive use of intelligence narrows our vision. In contrast, projective thinking is expansive, "open-ended" and speculative, requiring the thinker to create the context, concepts, and the objectives.

Twenty years of studying maize created a context within which McClintock could speculate. With her extensive knowledge and keen powers of observation, she deduced the significance of the changing color patterns of maize seed. This led her to propose the concept of gene regulation, which challenged the theory of the genome as a static set of instructions passed from one generation to the next.

The work McClintock first reported in 1950, the result of projective thinking, extensive research, persistence, and a willingness to suspend disbelief, wasn't understood or accepted until many years later.

Everything we know, our strongly held beliefs, and, in some cases, even what we consider to be "factual," creates the lens through which we see and experience the world, and can contribute to a critical, reactive orientation. This can serve us well: Fire is hot; it can burn if touched. It can also compromise our ability to observe and to think in an expansive, generative way.

When we cling rigidly to our constructs, as McClintock's peers did, we can be blinded to what's right in front of us. Can we support a scientific rigor that embraces generative thinking and suspension of disbelief? Sometimes science fiction does become scientific discovery.


DAVID GELERNTER
Computer Scientist, Yale University; Chief Scientist, Mirror Worlds Technologies; Author, Mirror Worlds

Recursive Structure

Recursive structure is a simple idea (or shorthand abstraction) with surprising applications beyond science.

A structure is recursive if the shape of the whole recurs in the shape of the parts: for example, a circle formed of welded links that are circles themselves. Each circular link might itself be made of smaller circles, and in principle you could have an unbounded nest of circles made of circles made of circles.
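
The welded-links example is easy to make precise. A minimal sketch (illustrative only): a structure whose parts have the shape of the whole, down to a chosen depth.

```python
# A circle is represented as the list of its links; each link is itself
# a circle of circles, until the nesting depth runs out.

def circle_of_circles(depth, n_links=3):
    if depth == 0:
        return "o"    # a plain circle with no further structure
    return [circle_of_circles(depth - 1, n_links) for _ in range(n_links)]

print(circle_of_circles(2))
# [['o', 'o', 'o'], ['o', 'o', 'o'], ['o', 'o', 'o']] -- circles of circles of circles
```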

The idea of recursive structure came into its own with the advent of computer science (that is, software science) in the 1950s. The hardest problem in software is controlling the tendency of software systems to grow incomprehensibly complex. Recursive structure helps convert impenetrable software rainforests into French gardens — still (potentially) vast and complicated, but much easier to traverse and understand than a jungle.

Benoit Mandelbrot famously recognized that some parts of nature show recursive structure of a sort: a typical coastline shows the same shape or pattern whether you look from six inches or sixty feet or six miles away.

But it also happens that recursive structure is fundamental to the history of architecture, especially to the gothic, renaissance and baroque architecture of Europe — covering roughly the 500 years between the 13th and 18th centuries. The strange case of "recursive architecture" shows us the damage one missing idea can create. It suggests also how hard it is to talk across the cultural Berlin Wall that separates science and art. And the recurrence of this phenomenon in art and nature underlines an important aspect of the human sense of beauty.

The re-use of one basic shape on several scales is fundamental to medieval architecture. But, lacking the idea (and the term) "recursive structure," art historians are forced to improvise ad hoc descriptions each time they need one. This hodgepodge of improvised descriptions makes it hard, in turn, to grasp how widespread recursive structure really is. And naturally, historians of post-medieval art invent their own descriptions—thus obfuscating a fascinating connection between two mutually alien aesthetic worlds.

For example: One of the most important aspects of mature gothic design is tracery — the thin, curvy, carved stone partitions that divide one window into many smaller panes. Recursion is basic to the art of tracery.

Tracery was invented at the cathedral of Reims circa 1220, and used soon after at the cathedral of Amiens. (Along with Chartres, these two spectacular and profound buildings define the High Gothic style.) To move from the characteristic tracery design of Reims to that of Amiens, just add recursion. At Reims, the basic design is a pointed arch with a circle inside; the circle is supported on two smaller arches. At Amiens, the basic design is the same — except that now, the window recurs in miniature inside each smaller arch. (Inside each smaller arch is a still-smaller circle supported on still-smaller arches.)

In the great east window at Lincoln Cathedral, the recursive nest goes one step deeper. This window is a pointed arch with a circle inside; the circle is supported on two smaller arches — much like Amiens. Within each smaller arch is a circle supported on two still-smaller arches. Within each still-smaller arch, a circle is supported on even-smaller arches.
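
Written as a generative rule (my notation, not the historians'), the whole progression is one function: a window is a pointed arch holding a circle over two smaller arches, and each step from Reims to Amiens to Lincoln simply lets the smaller arches contain the design again.

```python
# Gothic tracery as recursion: depth 0 is Reims, depth 1 Amiens,
# depth 2 the east window at Lincoln.

def window(depth):
    if depth == 0:
        return "arch(circle, arch, arch)"    # Reims: no recursion
    inner = window(depth - 1)
    return f"arch(circle, {inner}, {inner})"

for name, depth in [("Reims", 0), ("Amiens", 1), ("Lincoln", 2)]:
    print(name, "->", window(depth))
```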

There are other recursive structures throughout medieval art.

Jean Bony and Erwin Panofsky were two eminent 20th century art historians. Naturally they both noticed recursive structure. But neither man understood the idea in itself. And so, instead of writing that the windows of Saint-Denis show recursive structure, Bony said that they are "composed of a series of similar forms progressively subdivided in increasing numbers and decreasing sizes." Describing the same phenomenon in a different building, Panofsky writes of the "principle of progressive divisibility (or, to look at it the other way, multiplicability)." Panofsky's "principle of progressive divisibility" is a fuzzy, roundabout way of saying "recursive structure."

Louis Grodecki noticed the same phenomenon—a chapel containing a display-platform shaped like the chapel in miniature, holding a shrine shaped like the chapel in extra-miniature. And he wrote that "This is a common principle of Gothic art." But he doesn't say what the principle is; he doesn't describe it in general or give it a name. Wilhelm Worringer, too, had noticed recursive structure. He described gothic design as "a world which repeats in miniature, but with the same means, the expression of the whole."

So each historian makes up his own name and description for the same basic idea—which makes it hard to notice that all four descriptions actually describe the same thing. Recursive structure is a basic principle of medieval design; but this simple statement is hard to say or even think if we don't know what "recursive structure" is.

If the literature makes it hard to grasp the importance of recursive structure in medieval art, it's even harder to notice that exactly the same principle recurs in the radically different world of Italian Renaissance design.

George Hersey wrote astutely of Bramante's design (ca 1500) for St Peter's in the Vatican that it consists of "a single macrochapel…, four sets of what I will call maxichapels, sixteen minichapels, and thirty-two microchapels." "The principle [he explains] is that of Chinese boxes — or, for that matter, fractals."

If he had only been able to say that "recursive structure is fundamental to Bramante's thought," the whole discussion would have been simpler and clearer — and an intriguing connection between medieval and renaissance design would have been obvious.

Using instead of ignoring the idea of recursive structure would have had other advantages too.

It helps us understand the connections between art and technology; helps us see the aesthetic principles that guide the best engineers and technologists, and the ideas of clarity and elegance that underlie every kind of successful design. These ideas have practical implications. For one, technologists must study and understand elegance and beauty as design goals; any serious technology education must include art history. And we reflect, also, on the connection between great art and great technology on the one hand and natural science on the other.

But without the right intellectual tool for the job, new instances of recursive structure make the world more complicated instead of simpler and more beautiful.


DON TAPSCOTT
Founder, Moxie Insight; Adjunct Professor, Rotman School of Management, University of Toronto; Author, Grown Up Digital and Macrowikinomics

Designing Your Mind


Given recent research about brain plasticity and the dangers of cognitive load, the most powerful tool in our cognitive arsenal may well be design. Specifically, we can use design principles and discipline to shape our minds. This is different from learning and acquiring knowledge. It's about designing how each of us thinks, remembers and communicates — appropriately and effectively for the digital age.

Today's popular handwringing about the digital age's effects on cognition has some merit. But rather than predicting a dire future, perhaps we should be trying to achieve a new one.

New neuroscience discoveries give hope. We know that brains are malleable and can change depending on how they are used. The well-known study of London taxi drivers showed that a certain region in the brain involved in memory formation was physically larger than in non-taxi-driving individuals of a similar age. This effect did not extend to London bus drivers, supporting the conclusion that the requirement of London's taxi drivers to memorize the multitude of London streets drove structural brain changes in the hippocampus.

Results from studies like these support the notion that, even among adults, the persistent, concentrated use of one neighborhood of the brain's real estate can increase its size, and presumably also its capacity. Not only does intense use change adult brain regional structure and function, but temporary training and perhaps even mere mental rehearsal seem to have an effect as well. A series of studies showed that one can improve tactile (Braille character) discrimination among sighted people who are temporarily blindfolded. Brain scans revealed that participants' visual cortex responsiveness to auditory and tactile sensory input was heightened after only five days of blindfolding for over an hour each time.

The existence of lifelong neuroplasticity is no longer in doubt. The brain runs on a "use it or lose it" motto. So could we "use it to build it right?" Why don't we use the demands of our information-rich, multi-stimuli, fast-paced, multi-tasking, digital existence to expand our cognitive capability? Psychiatrist Dr. Stan Kutcher, an expert on adolescent mental health who has studied the effect of digital technology on brain development, says we probably can: "There is emerging evidence suggesting that exposure to new technologies may push the Net Generation [teenagers and young adults] brain past conventional capacity limitations."

When the straight A student is doing her homework at the same time as five other things online, she is not actually multi-tasking. Instead, she has developed better active working memory and better switching abilities. I can't read my email and listen to iTunes at the same time, but she can. Her brain has been wired to handle the demands of the digital age.

How could we use design thinking to change the way we think?  Good design typically begins with some principles and functional objectives. You might aspire to have a strong capacity to perceive and absorb information effectively, concentrate, remember, infer meaning, be creative, write, speak and communicate well, and to enjoy important collaborations and human relationships. How could you design your use (or abstinence) of media to achieve these goals?

Something as old-school as a speed-reading course could increase your input capacity without undermining comprehension. If it made sense in Evelyn Wood's day, it is doubly important now, and we've learned a lot since her day about how to read effectively.

Feeling distracted? The simple discipline of reading a few full articles per day rather than just the headlines and summaries could strengthen attention.

Want to be a surgeon? Become a gamer or rehearse while on the subway. Rehearsal can produce changes in the motor cortex as big as those induced by physical movement. In one study, a group of participants was asked to play a simple five-finger exercise on the piano, while another group was asked to think about playing the same "song" in their heads, using the same finger movements, one note at a time. Both groups showed a change in their motor cortex, with changes in the group that mentally rehearsed the song as great as in the group that physically played the piano.

Losing retention? Decide how far you want to adopt Albert Einstein's law of memory. When asked why he went to the phone book to get his number, he replied that he only memorized things he couldn't look up. There is a lot to remember these days. Between the dawn of civilization and 2003, 5 exabytes of data were collected (an exabyte equals 1 quintillion bytes). Today 5 exabytes of data are collected every two days! Soon it will be 5 exabytes every few minutes. Humans have a finite memory capacity. Can you develop criteria for which memories to keep inboard and which to store outboard?

Or want to strengthen your working memory and capability to multitask? Try reverse mentoring — learning with your teenager. This is the first time in history when children are authorities about something important, and the successful ones are pioneers of a new paradigm in thinking. Extensive research shows that people can improve cognitive function and brain efficiency through simple lifestyle changes, such as incorporating memory exercises into their daily routine.

Why don't schools and universities teach design thinking for thinking? We teach physical fitness. But rather than brain fitness we emphasize cramming young heads with information and testing their recall. Why not courses that emphasize designing a great brain?

Does this modest proposal raise the specter of "designer minds?" I don't think so. The design industry is something done to us. I'm proposing we each become designers. But I suppose "I love the way she thinks" could take on new meaning.


ANDRIAN KREYE
Editor, The Feuilleton (Arts and Essays), of the German Daily Newspaper, Sueddeutsche Zeitung, Munich

Free Jazz

It's always worth taking a few cues from the mid-20th-century avant-garde. So when it comes to improving your cognitive toolkit, Free Jazz is perfect. It is a highly evolved new take on an art that has (at least in the West) been framed by a strict set of twelve notes played in accurate fractions of bars. It is also the pinnacle of a genre that had begun with the Blues just a half century before Ornette Coleman assembled his infamous double quartet in the A&R Studio in New York City one December day in 1960. In science terms, that would mean an evolutionary leap from elementary-school math to game theory and fuzzy logic in a mere fifty years.

If you really want to appreciate the mental prowess of Free Jazz players and composers, you should start just one step behind. Half a year before Ornette Coleman's Free Jazz session let loose the improvisational genius of eight of the best musicians of their time, John Coltrane recorded what is still considered the most sophisticated Jazz solo ever — his tour de force through the rapid chord progressions of his composition "Giant Steps".

The film student Daniel Cohen has recently animated the notation for Coltrane's solo in a YouTube video. You don't have to be able to read music to grasp Coltrane's intellectual firepower. After the deceptively simple main theme, the notes start to race up and down the five lines of the stave at dizzying speeds and in dizzying patterns. If you also take into consideration that Coltrane used to record unrehearsed music to keep it fresh, you know that he was endowed with a cognitive toolkit way beyond normal.

Now take those 4 minutes and 43 seconds, multiply Coltrane's firepower by eight, stretch it into 37 minutes, and subtract all traditional musical structures like chord progressions or time. The session that gave the genre its name in the first place foreshadowed not just the radical freedom the album's title implied. It was a precursor to a form of communication that has left linear conventions and entered the realm of multiple parallel interactions.

It is admittedly still hard to listen to the album "Free Jazz: A Collective Improvisation by the Ornette Coleman Double Quartet". It is equally taxing to listen to recordings of Cecil Taylor, Pharoah Sanders, Sun Ra, Anthony Braxton or Gunter Hampel. It has always been easier to understand the communication processes of this music in a live setting. One thing is a given — it is never anarchy, never was meant to be.

If you're able to play music and you manage to get yourself invited to a Free Jazz session, there is an incredible moment when all the musicians find what is considered "The Pulse." It is a collective climax of creativity and communication that can leap to the audience and create an electrifying experience. It's hard to describe, but it might be comparable to the moment when a surfer finds the point where the board brings together the motor skills of the body and the forces of the ocean's swell, for a few seconds of synergy on top of a wave. It is, though, a fusion of musical elements that defies common music theory.

Of course there is a lot of Free Jazz that merely confirms prejudice. Or as the vibraphonist and composer Gunter Hampel phrased it: "At one point it was just about being the loudest on stage." But all the musicians mentioned above have found new forms and structures, Ornette Coleman's music theory called Harmolodics being just one of them. In the perceived cacophony of their music there is a multilayered clarity to discover that can serve as a model for a cognitive toolkit for the 21st century. The ability to find cognitive, intellectual and communication skills that work in parallel contexts rather than linear forms will be crucial. Just as Free Jazz abandoned harmonic structures to find new forms in polyrhythmic settings, one might just have to push oneself to work beyond proven cognitive patterns.

