


Philosopher; University Professor, Co-Director, Center for Cognitive Studies, Tufts University; Author, Breaking the Spell: Religion as a Natural Phenomenon

Competition in the brain

I've changed my mind about how to handle the homunculus temptation: the almost irresistible urge to install a "little man in the brain" to be the Boss, the Central Meaner, the Enjoyer of pleasures and the Sufferer of pains. In Brainstorms (1978) I described and defended the classic GOFAI (Good Old Fashioned AI) strategy that came to be known as "homuncular functionalism," replacing the little man with a committee.

The AI programmer begins with an intentionally characterized problem, and thus frankly views the computer anthropomorphically: if he solves the problem he will say he has designed a computer that can [e.g.,] understand questions in English. His first and highest level of design breaks the computer down into subsystems, each of which is given intentionally characterized tasks; he composes a flow chart of evaluators, rememberers, discriminators, overseers and the like. These are homunculi with a vengeance. . . . Each homunculus in turn is analyzed into smaller homunculi, but, more important, into less clever homunculi. When the level is reached where the homunculi are no more than adders and subtractors, by the time they need only the intelligence to pick the larger of two numbers when directed to, they have been reduced to functionaries "who can be replaced by a machine." (p. 80)

I still think that this is basically right, but I have recently come to regret–and reject–some of the connotations of two of the terms I used: committee and machine. The cooperative bureaucracy suggested by the former, with its clear reporting relationships (an image enhanced by the no-nonsense flow charts of classical cognitive science models), was fine for the sorts of computer hardware–and also the levels of software, the virtual machines–that embodied GOFAI, but it suggested a sort of efficiency that was profoundly unbiological. And while I am still happy to insist that an individual neuron, like those adders and subtractors in the silicon computer, "can be replaced by a machine," neurons are bio-machines profoundly unlike computer components in several regards.

Notice that computers have been designed to keep needs and job performance almost entirely independent. Down in the hardware, the electric power is doled out evenhandedly and abundantly; no circuit risks starving. At the software level, a benevolent scheduler doles out machine cycles to whatever process has highest priority, and although there may be a bidding mechanism of one sort or another that determines which processes get priority, this is an orderly queue, not a struggle for life. (As Marx would have it, "from each according to his abilities, to each according to his needs.") It is a dim appreciation of this fact that probably underlies the common folk intuition that a computer could never "care" about anything. Not because it is made out of the wrong materials — why should silicon be any less suitable a substrate for caring than organic molecules? — but because its internal economy has no built-in risks or opportunities, so it doesn't have to care.

Neurons, I have come to believe, are not like this. My mistake was that I had stopped the finite regress of homunculi at least one step too early! The general run of the cells that compose our bodies are probably just willing slaves–rather like the selfless, sterile worker ants in a colony, doing stereotypic jobs and living out their lives in a relatively non-competitive ("Marxist") environment. But brain cells — I now think — must compete vigorously in a marketplace. For what?

What could a neuron "want"? The energy and raw materials it needs to thrive–just like its unicellular eukaryote ancestors and more distant cousins, the bacteria and archaea. Neurons are robots; they are certainly not conscious in any rich sense–remember, they are eukaryotic cells, akin to yeast cells or fungi. If individual neurons are conscious then so is athlete’s foot. But neurons are, like these mindless but intentional cousins, highly competent agents in a life-or-death struggle, not in the environment between your toes, but in the demanding environment of the brain, where the victories go to those cells that can network more effectively, contribute to more influential trends at the virtual machine levels where large-scale human purposes and urges are discernible.

I now think, then, that the opponent-process dynamics of emotions, and the roles they play in controlling our minds, is underpinned by an "economy" of neurochemistry that harnesses the competitive talents of individual neurons. (Note that the idea is that neurons are still good team players within the larger economy, unlike the more radically selfish cancer cells. Recalling Francois Jacob’s dictum that the dream of every cell is to become two cells, neurons vie to stay active and to be influential, but do not dream of multiplying.)

Intelligent control of an animal’s behavior is still a computational process, but the neurons are "selfish neurons," as Sebastian Seung has said, striving to maximize their intake of the different currencies of reward we have found in the brain. And what do neurons "buy" with their dopamine, their serotonin or oxytocin, etc.? Greater influence in the networks in which they participate.

Physician and social scientist, Harvard

Culture can change our genes

I work in a borderland between social science and medicine, and I therefore often find myself trying to reconcile conflicting facts and perspectives about human biology and behavior.  There are fellow travelers at this border, of course, heading in both directions, or just dawdling, but the border is both sparsely populated and chaotic.  The border is also, strangely, well patrolled, and it is often quite hard to get authorities on both sides to coordinate activities.  Once in a while, however, I find that my passport (never quite in order, according to officials) has acquired a new visa.  For me, this past year, I acquired the conviction that human evolution may proceed much faster than I had thought, and that humans themselves may be responsible. 

In short, I have changed my mind about how people come literally to embody the social world around them.  I once thought that we internalized cultural factors by forming memories, acquiring language, or bearing emotional and physical marks (of poverty, of conquest).  I thought that this was the limit of the ways in which our bodies were shaped by our social environment.  In particular, I thought that our genes were historically immutable, and that it was not possible to imagine a conversation between culture and genetics.  I thought that we as a species evolved over time frames far too long to be influenced by human actions. 

I now think this is wrong, and that the alternative — that we are evolving in real time, under the pressure of discernible social and historical forces — is true.  Rather than a monologue of genetics, or a soliloquy of culture, there is a dialectic between genetics and culture.

Evidence has been mounting for a decade. The best example so far is the evolution of lactose tolerance in adults.  The ability of adults to digest lactose (a sugar in milk) confers evolutionary advantages only when a stable supply of milk is available, such as after milk-producing animals (sheep, cattle, goats) have been domesticated.  The advantages are several, ranging from a source of valuable calories to a source of necessary hydration during times of water shortage or spoilage.  Amazingly, just over the last 3,000 to 9,000 years, there have been several adaptive mutations in widely separated populations in Africa and Europe, all conferring the ability to digest lactose (as shown by Sarah Tishkoff and others).  These mutations are principally seen in populations who are herders, and not in nearby populations who have retained a hunter/gatherer lifestyle.  This trait is sufficiently advantageous that those with it have left notably more descendants than those without.
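The arithmetic behind such rapid spread is worth seeing. As a rough illustration (the starting frequency, selection coefficient, and generation time below are assumptions for the sketch, not figures from the essay or from Tishkoff's work), the standard discrete haploid selection model shows how a modest fitness advantage can carry an allele from rarity toward fixation within the few hundred generations that a span of 3,000 to 9,000 years allows:

```python
# Sketch of allele spread under selection (standard discrete haploid
# model). The starting frequency, selection coefficient, and generation
# time below are illustrative assumptions, not measured values.

def allele_frequency_after(generations, p0=0.01, s=0.05):
    """Frequency of an allele with relative fitness advantage s after n generations."""
    p = p0
    for _ in range(generations):
        p = p * (1 + s) / (1 + p * s)  # selection update; denominator is mean fitness
    return p

# ~8,000 years at ~25 years per generation is ~320 generations.
for gens in (0, 100, 200, 320):
    print(gens, round(allele_frequency_after(gens), 3))
```

With a 5% advantage, an allele starting at 1% frequency passes 50% within roughly a hundred generations and is nearly universal by three hundred, consistent with the time scales the essay describes.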

A similar story can be told about mutations that have arisen in the relatively recent historical past that confer advantages in terms of surviving epidemic diseases such as typhoid.  Since these diseases were made more likely when the density of human settlements increased and far-flung trade became possible, here we have another example of how culture may affect our genes.

But this past year, a paper by John Hawks and colleagues in PNAS functioned like the staccato plunk of a customs agent stamping my documents and waving me on.  The paper showed that the human genome may have been changing at an accelerating rate over the past 80,000 years, and that this change may be in response not only to population growth and adaptation to new environments, but also to cultural developments that have made it possible for humans to sustain such population growth or survive in such environments.

Our biology and our culture have always been in conversation of course — just not (I had thought) on the genetic level.  For example, rising socio-economic status with industrial development results in people becoming taller (a biological effect of a cultural development) and taller people require architecture to change (a cultural effect of a biological development).  Anyone marveling at the small size of beds in colonial-era houses knows this firsthand.  Similarly, an epidemic may induce large-scale social changes, modifying kinship systems or political power.  But genetic change over short time periods?  Yes.

Why does this matter?  Because it is hard to know where this would stop.  There may be genetic variants that favor survival in cities, that favor saving for retirement, that favor consumption of alcohol, or that favor a preference for complicated social networks.  There may be genetic variants (based on altruistic genes that are a part of our hominid heritage) that favor living in a democratic society, others that favor living among computers, still others that favor certain kinds of visual perception (maybe we are all more myopic as a result of Medieval lens grinders).  Modern cultural forms may favor some traits over others.  Maybe even the more complex world we live in nowadays really is making us smarter. 

This has been very difficult for me to accept because, unfortunately, this also means that it may be the case that particular ways of living create advantages for some, but not all, members of our species.  Certain groups may acquire (admittedly, over centuries) certain advantages, and there might be positive or negative feedback loops between genetics and culture.  Maybe some of us really are better able to cope with modernity than others.  The idea that what we choose to do with our world modifies what kind of offspring we have is as amazing as it is troubling.

Biologist, London; Author, The Sense of Being Stared At

The skepticism of believers

I used to think of skepticism as a primary intellectual virtue, whose goal was truth. I have changed my mind. I now see it as a weapon.

Creationists opened my eyes. They use the techniques of critical thinking to expose weaknesses in the evidence for natural selection, gaps in the fossil record and problems with evolutionary theory. Is this because they are seeking truth? No. They believe they already know the truth. Skepticism is a weapon to defend their beliefs by attacking their opponents.

Skepticism is also an important weapon in the defence of commercial self-interest. According to David Michaels, who was assistant secretary for environment, safety and health in the US Department of Energy in the 1990s, the strategy used by the tobacco industry to create doubt about inconvenient evidence has now been adopted by corporations making toxic products such as lead, mercury, vinyl chloride, and benzene. When confronted with evidence that their activities are causing harm, the standard response is to hire researchers to muddy the waters, branding findings that go against the industry's interests as "junk science." As Michaels noted, "Their conclusions are almost always the same: the evidence is ambiguous, so regulatory action is unwarranted." Climate change skeptics use similar techniques.

In a penetrating essay called "The Skepticism of Believers", Sir Leslie Stephen, a pioneering agnostic (and the father of Virginia Woolf), argued that skepticism is inevitably partial. "In regard to the great bulk of ordinary beliefs, the so-called skeptics are just as much believers as their opponents." Then as now, those who proclaim themselves skeptics had strong beliefs of their own. As Stephen put it in 1893, "The thinkers generally charged with skepticism are equally charged with an excessive belief in the constancy and certainty of the so-called 'laws of nature'. They assign a natural cause to certain phenomena as confidently as their opponents assign a supernatural cause."

Skepticism has even deeper roots in religion than in science. The Old Testament prophets were withering in their scorn for the rival religions of the Holy Land. Psalm 115 mocks those who make idols of silver and gold: "They have mouths, and speak not: eyes have they, and see not." At the Reformation, the Protestants deployed the full force of biblical scholarship and critical thinking against the veneration of relics, cults of saints and other "superstitions" of the Catholic Church. Atheists take religious skepticism to its ultimate limits; but they are defending another faith, a faith in science.

In practice, the goal of skepticism is not the discovery of truth, but the exposure of other people's errors. It plays a useful role in science, religion, scholarship, and common sense. But we need to remember that it is a weapon serving belief or self-interest; we need to be skeptical of skeptics. The more militant the skeptic, the stronger the belief.

Editor in Chief, Wired Magazine; Author, The Long Tail

Seeing Through a Carbon Lens

Aside from whether Apple matters (whoops!), the biggest thing I've changed my mind about is climate change. There was no one thing that convinced me to flip from wait-and-see to the-time-for-debate-is-over. Instead, there were three things, which combined for me in early 2006. There was, of course, the scientific evidence, which kept getting more persuasive. There was also economics, and the recognition that moving to alternative, sustainable energy was going to be cheaper over the long run as oil got more expensive. And finally there was geopolitics, with ample evidence of how top-down oil riches destabilized a region and then the world. No one reason was enough to win me over to total energy regime change, but together they seemed win-win-win.

Now I see the entire energy and environmental picture through a carbon lens. It's very clarifying. Put CO2 above everything else, and suddenly you can make reasonable economic calculations about risks and benefits, without getting caught up in the knotty politics of full-spectrum environmentalism. I was a climate skeptic and now I'm a carbon zealot. I seem to annoy traditional environmentalists just as much, but I like to think that I've moved from behind to in front.

Physicist, MIT; Recipient, 2004 Nobel Prize in Physics; Author, Fantastic Realities

The Science Formerly Known as Religion

I was an earnest student in Catechism class. The climax of our early training, as thirteen-year-olds, was an intense retreat in preparation for the sacrament of Confirmation. Even now I vividly remember the rapture of belief, the glow everyday events acquired when I felt that they reflected a grand scheme of the universe, in which I had a personal place. Soon afterward, though, came disillusionment. As I learned more about science, some of the concepts and explanations in the ancient sacred texts came to seem clearly wrong; and as I learned more about history and how it is recorded, some of the stories in those texts came to seem very doubtful.

What I found most disillusioning, however, was not that the sacred texts contained errors, but that they suffered by comparison. Compared to what I was learning in science, they offered few truly surprising and powerful insights. Where was there a vision to rival the concepts of infinite space, of vast expanses of time, of distant stars that rivaled and surpassed our Sun? Or of hidden forces and new, invisible forms of "light"? Or of tremendous energies that humans could, by understanding natural processes, learn to liberate and control? I came to think that if God exists, He (or She, or They, or It) did a much more impressive job revealing Himself in the world than in the old books; and that the power of faith and prayer is elusive and unreliable, compared to the everyday miracles of medicine and technology.

For many years, like some of my colleagues and some recent bestselling authors, I thought that active, aggressive debunking might be in order. I've changed my mind. One factor was my study of intellectual history. Many of my greatest heroes in physics, including Galileo, Newton, Faraday, Maxwell, and Planck, were deeply religious people. They truly believed that what they were doing, in their scientific studies, was discovering the mind of God. Many of Bach's and Mozart's most awesome productions are religiously inspired. Saint Augustine's writings display one of the most impressive intellects ever. And so on. Can you imagine hectoring this group? And what would be the point? Did their religious beliefs make them stupid, or stifle their creativity?

Also, debunking hasn't worked very well. David Hume set out the main arguments for religious skepticism in the eighteenth century. Bertrand Russell and many others have augmented them since. Textual criticism reduces fundamentalism to absurdity. Modern molecular biology, rooted in physics and chemistry, demonstrates that life is a natural process; Darwinian evolution illuminates its natural origin. These insights have been highly publicized for many decades, yet religious doctrines that contradict some or all of them have not merely survived, but prospered.

Why? Part of the answer is social. People tend to stay with the religion of their birth, for the same sorts of reasons that they stay loyal to their clan, or their country.

But beyond that, religion addresses some deep concerns that science does not yet, for most people, touch. The human yearning for meaningful understanding, our fear of death — these deep motivations are not going to vanish.

Understanding, of course, is what science is all about. Many people imagine, however, that scientific understanding is dry and mundane, with no scope for wonder and amazement. That is simply ignorant. Looking for wonder and amazement? Try some quantum theory!

Beyond understanding inter-connected facts, people want to discover their significance or meaning. Neuroscientists are beginning to map human motivations and drives at the molecular level. As this work advances, we will attain a deeper understanding of the meaning of meaning. Freud's theories had enormous impact, not because they are correct, but because they "explained" why people feel and act as they do. Correct and powerful theories that address these issues are sure to have much greater impact.

Meanwhile, medical science is taking a deep look at aging. Within the next century, it may be possible for people to prolong youth and good health for many years — perhaps indefinitely. This would, of course, profoundly change our relationship with death. So to me the important challenge is not to debunk religion, but to address its issues in better ways.

Editor-in-Chief, Nature

I've changed my mind about the use of enhancement drugs by healthy people. A year ago, if asked, I'd have been against the idea, whereas now I think there's much to be said for it.

The ultimate test of such a change of mind is how I'd feel if my offspring (both adults) went down that road, and my answer is that with tolerable risks of side effects and zero risk of addiction, then I'd feel OK if there was an appropriate purpose to it. 'Appropriate purposes' exclude gaining an unfair advantage or unwillingly following the demands of others, but include gaining a better return on an investment of study or of developing a skill.

I became interested in the issues surrounding cognitive enhancement as one example of debates about human enhancement — debates that can only get more vigorous in future. It's also an example of a topic in which both natural and social sciences can contribute to better regulation — another theme that interests me. Thinking about the issues and looking at the evidence-based literatures made me realise how shallow was my own instinctive aversion to the use of such drugs by healthy people. It also led to a thoughtful article by Barbara Sahakian and Sharon Morein-Zamir in Nature (20 December 2007) that triggered many blog discussions.

Social scientists report that a small but significant proportion of students on at least some campuses are using prescription drugs in order to help their studies — drugs such as modafinil (prescribed for narcolepsy) and methylphenidate (prescribed for attention-deficit hyperactivity disorder). I've not seen studies that quantify similar use by academic faculty, or by people in other non-military walks of life, though there is no doubt that it is happening. There are anecdotal accounts and small-scale experimental trials showing that such drugs do indeed improve performance to a modest degree under particular circumstances.

New cognitive enhancing drugs are being developed, officially for therapy. And the therapeutic importance — both current and potential — of such drugs is indeed significant. But manufacturers won't turn away the significant revenues from illegal use by the healthy.

That word 'illegal' is the rub. Off-prescription use is illegal in the United States, at least. But that illegality reflects an official drugs culture that is highly questionable. It's a culture in which the Food and Drug Administration generally seems reluctant to embrace the regulation of enhancement for the healthy, though it is empowered to do so. It is also a culture that is rightly concerned about risk but wrongly founded in the idea that drugs used by healthy people are by definition a Bad Thing. That in turn reflects instinctive attitudes to do with 'naturalness' and 'cheating on yourself' that don't stand up to rational consideration. Perhaps more to the point, they don't stand up to behavioral consideration, as Viagra has shown.

Research and societal discussions are necessary before cognitive enhancement drugs should be made legally available for the healthy, but I now believe that that is the right direction in which to head.

With reference to the precursor statements of this year's annual question, there are facts behind that change of mind, some thinking, and some secular faith in humans, too.

Founder and CEO of O'Reilly Media, Inc.

I was skeptical of the term "social software"....

In November 2002, Clay Shirky organized a "social software summit," based on the premise that we were entering a "golden age of social software... greatly extending the ability of groups to self-organize."

I was skeptical of the term "social software" at the time. The explicit social software of the day, applications like Friendster and Meetup, were interesting, but didn't seem likely to be the seed of the next big Silicon Valley revolution.

I preferred to focus instead on the related ideas that I eventually formulated as "Web 2.0," namely that the internet is displacing Microsoft Windows as the dominant software development platform, and that the competitive edge on that platform comes from aggregating the collective intelligence of everyone who uses the platform. The common thread that linked Google's PageRank, eBay's marketplace, Amazon's user reviews, Wikipedia's user-generated encyclopedia, and Craigslist's self-service classified advertising seemed too broad a phenomenon to be successfully captured by the term "social software." (This is also my complaint about the term "user generated content.") By framing the phenomenon too narrowly, you can exclude the exemplars that help to understand its true nature. I was looking for a bigger metaphor, one that would tie together everything from open source software to the rise of web applications.

You wouldn't think to describe Google as social software, yet Google's search results are profoundly shaped by its collective interactions with its users: every time someone makes a link on the web, Google follows that link to find the new site. It weights the value of the link based on a kind of implicit social graph (a link from site A is more authoritative than one from site B, based in part on the size and quality of the network that in turn references either A or B). When someone makes a search, they also benefit from the data Google has mined from the choices millions of other people have made when following links provided as the result of previous searches.
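The link-weighting scheme described above is, at its heart, the PageRank algorithm: a page's authority is the damped sum of the authority flowing in along links, so a link from a well-referenced site counts for more. A minimal power-iteration sketch (the toy link graph and damping factor are illustrative assumptions, not Google's actual data or parameters):

```python
# Minimal PageRank sketch: rank flows along links, so an endorsement
# from an authoritative page is worth more than one from an obscure page.
# The toy graph and damping factor below are illustrative assumptions.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            targets = outgoing or pages          # dangling page: spread evenly
            for target in targets:
                new[target] += damping * rank[page] / len(targets)
        rank = new
    return rank

toy_web = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],   # an extra endorsement that boosts C's authority
}
ranks = pagerank(toy_web)
```

On this toy graph, C, endorsed by three pages including the authoritative A, ends up with the highest rank, and the rank vector sums to 1.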

You wouldn't describe ebay or Craigslist or Wikipedia as social software either, yet each of them is the product of a passionate community, without which none of those sites would exist, and from which they draw their strength, like Antaeus touching mother earth. Photo sharing site Flickr or bookmark sharing site del.icio.us (both now owned by Yahoo!) also exploit the power of an internet community to build a collective work that is more valuable than could be provided by an individual contributor. But again, the social aspect is implicit — harnessed and applied, but never the featured act.

Now, five years after Clay's social software summit, Facebook, an application that explicitly explores the notion of the social network, has captured the imagination of those looking for the next internet frontier. I find myself ruefully remembering my skeptical comments to Clay after the summit, and wondering if he's saying "I told you so."

Mark Zuckerberg, Facebook's young founder and CEO, woke up the industry when he began speaking of "the social graph" — that's computer-science-speak for the mathematical structure that maps the relationships between people participating in Facebook — as the core of his platform. There is real power in thinking of today's leading internet applications explicitly as social software.

Mark's insight that the opportunity is not just about building a "social networking site" but rather building a platform based on the social graph itself provides a lens through which to re-think countless other applications. Products like xobni (inbox spelled backwards) and MarkLogic's MarkMail explore the social graph hidden in our email communications; Google and Yahoo! have both announced projects around this same idea. Google also acquired Jaiku, a pioneer in building a social-graph enabled address book for the phone.

This is not to say that the idea of the social graph as the next big thing invalidates the other insights I was working with. Instead, it clarifies and expands them:

  • Massive collections of data and the software that manipulates those collections, not software alone, are the heart of the next generation of applications.
  • The social graph is only one instance of a class of data structure that will prove increasingly important as we build applications powered by data at internet scale. You can think of the mapping of people, businesses, and events to places as the "location graph", or the relationship of search queries to results and advertisements as the "question-answer graph."
  • The graph exists outside of any particular application; multiple applications may explore and expose parts of it, gradually building a model of relationships that exist in the real world.
  • As these various data graphs become the indispensable foundation of the next generation "internet operating system," we face one of two outcomes: either the data will be shared by interoperable applications, or the company that first gets to a critical mass of useful data will become the supplier to other applications, and ultimately the master of that domain.
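The idea of a graph that exists outside any one application can be made concrete as a data structure. In this toy sketch (the class and all the names are invented for illustration, not drawn from any real product), the "social graph" and the "location graph" are just different edge labels on one shared structure that multiple applications query:

```python
# Toy sketch of a graph that exists independently of any one application:
# edges carry a relation label, so the "social graph" and the "location
# graph" are different slices of the same structure. All names here are
# invented for illustration.

from collections import defaultdict

class RelationGraph:
    def __init__(self):
        # (source node, relation label) -> set of target nodes
        self.edges = defaultdict(set)

    def add(self, src, relation, dst):
        self.edges[(src, relation)].add(dst)

    def neighbors(self, node, relation):
        return self.edges[(node, relation)]

g = RelationGraph()
g.add("alice", "friend", "bob")          # social-graph edge
g.add("bob", "friend", "carol")
g.add("alice", "located_in", "London")   # location-graph edge, same node space

# Two different "applications" querying the same shared graph:
friends = g.neighbors("alice", "friend")        # {'bob'}
places = g.neighbors("alice", "located_in")     # {'London'}
```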

So have I really changed my mind? As you can see, I'm incorporating "social software" into my own ongoing explanations of the future of computer applications.

It's curious to look back at the notes from that first Social Software summit. Many core insights are there, but the details are all wrong. Many of the projects and companies mentioned have disappeared, while the ideas have moved beyond that small group of 30 or so people, and in the process have become clearer and more focused, imperceptibly shifting from what we thought then to what we think now.

Both Clay, who thought then that "social software" was a meaningful metaphor, and I, who found it less useful then than I do today, have changed our minds. A concept is a frame, an organizing principle, a tool that helps us see. It seems to me that we all change our minds every day through the accretion of new facts, new ideas, new circumstances. We constantly retell the story of the past as seen through the lens of the present, and only sometimes are the changes profound enough to require a complete repudiation of what went before.

Ideas themselves are perhaps the ultimate social software, evolving via the conversations we have with each other, the artifacts we create, and the stories we tell to explain them.

Yes, if facts change our mind, that's science. But when ideas change our minds, we see those facts afresh, and that's history, culture, science, and philosophy all in one.

Former Europe editor, Time Magazine; Author, Geary's Guide to the World's Great Aphorists

Neuroeconomics really explains human economic behavior

Often a new field comes along purporting to offer bold new insights into questions that have long vexed us. And often, after the initial excitement dies down, that field turns out to really only offer a bunch of new names for stuff we basically already knew. I used to think neuroeconomics was such a field. But I was wrong.

Neuroeconomics mixes brain science with the dismal science — throwing in some evolutionary psychology and elements of prospect theory as developed by Daniel Kahneman and Amos Tversky — to explain the emotional and psychological quirks of human economic behavior. To take a common example — playing the stock market. Our brains are always prospecting for pattern. Researchers at Duke University showed people randomly generated sequences of circles and squares. Whenever two consecutive circles or squares appeared, the subjects' nucleus accumbens — the part of the brain that's active whenever a stimulus repeats itself — went into overdrive, suggesting the participants expected a third circle or square to continue the sequence.

The stock market is filled with patterns. But the vast majority of those patterns are meaningless, at least in the short term. The hourly variance of a stock price, for example, is far less significant than its annual variance. When you're checking your portfolio every hour, the noise in those statistics drowns out any real information. But our brains evolved to detect patterns of immediate significance, and the nucleus accumbens sends a jolt of pleasure into the investor who thinks he's spotted a winner. Yet studies consistently show that people who follow their investments closely earn lower returns than those who don't pay much attention at all. Why? Because the inattentive investors aren't being prompted by the nucleus accumbens to make impulsive decisions based on momentary patterns they think they've detected.

The beauty of neuroeconomics is that it's easily verified by personal experience. A while back, I had stock options that I had to exercise within a specific period of time. So I started paying attention to the markets on a daily basis, something I normally never do. I was mildly encouraged every time the stock price ratcheted up a notch or two, smugly satisfied that I hadn't yet cashed in my options. But I was devastated when the price dropped back down again, reproaching myself for missing a golden opportunity. (This was Kahneman and Tversky's "loss aversion" — the tendency to strongly prefer avoiding a loss over acquiring a gain — kicking in. Some studies suggest that the fear of a loss has twice the psychological impact of the lure of a gain.) I eventually exercised my options after the price hit a level it hadn't reached for several years. I was pretty pleased with myself — until the firm sold some of its businesses a few weeks later and the stock price shot up by several dollars.
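The loss-aversion asymmetry has a standard formalization: Kahneman and Tversky's prospect-theory value function. A sketch, using the parameter estimates commonly cited from their 1992 work (alpha = beta = 0.88, lambda = 2.25):

```python
# Kahneman and Tversky's prospect-theory value function, a standard
# formalization of loss aversion. The parameters are the commonly cited
# estimates from their 1992 paper: alpha = beta = 0.88, lam = 2.25.

def subjective_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Perceived value of a gain (x > 0) or loss (x < 0) versus a reference point."""
    if x >= 0:
        return x ** alpha            # diminishing sensitivity to gains
    return -lam * (-x) ** beta       # losses scaled up by the loss-aversion factor

gain = subjective_value(100)    # pleasure of a $100 gain
loss = subjective_value(-100)   # pain of a $100 loss
ratio = abs(loss) / gain        # about 2.25: the loss looms roughly twice as large
```

With these parameters, a loss of any given size feels about 2.25 times as intense as an equal gain, matching the "twice the psychological impact" finding the essay mentions.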

Neuroeconomics really does explain the non-rational aspects of human economic behavior; it is not just another way of saying there's a sucker born every minute. And now, thanks to this new field, I can blame my bad investment decisions on my nucleus accumbens rather than my own stupidity.

Psychologist; Author, Social Intelligence

The Inexplicable Monks

One of my most basic assumptions about the relationship between mental effort and brain function has begun to crumble. Here's why.

My earliest research interests as a psychologist were in the ways mental training can shape biological systems.  My doctoral dissertation was a psychophysiological study of meditation as an intervention in stress reactivity; I found (as have many others since) that the practice of meditation seems to speed the rate of physiological recovery from a stressor.

My guiding assumptions included the standard premise that the mind-body relationship operates according to orderly, understandable principles.  One such might be called the "dose-response" rule, that the more time put into a given method of training, the greater the result in the targeted biological system.  This is a basic correlate of neuroplasticity, the mechanism through which repeated experience shapes the brain.

For example, a string of research has now established that more experienced meditators recover more quickly from stress-induced physiological arousal than do novices. Nothing remarkable there; the dose-response rule predicts as much. Likewise, brain imaging studies show that the spatial areas in the brains of London taxi drivers become enhanced during the first six months they spend driving around that city's winding streets, and the area for thumb movement in the motor cortex becomes more robust in violinists as they continue to practice over many months.

This relationship has been confirmed in many varieties of mental training. A seminal 2004 article in the Proceedings of the National Academy of Sciences found that, compared to novices, highly adept meditators generated far more high-amplitude gamma wave activity — which reflects finely focused attention — in areas of the prefrontal cortex while meditating.

The seasoned meditators in this study — all Tibetan lamas — had undergone cumulative levels of mental training akin to the amount of lifetime sports practice put in by Olympic athletes: 10,000 to 50,000 hours. Novices tended to increase gamma activity by around 10 to 15 percent in the key brain area, while most experts had increases on the order of 100 percent from baseline. What caught my eye in this data was not this difference between novices and experts (which might be explained in any number of ways, including a self-selection bias), but rather a discrepancy in the data among the group of Olympic-level meditators.

Although the experts' average boost in gamma was around 100 percent, two lamas were "outliers": their gamma levels leapt 700 to 800 percent. This goes far beyond an orderly dose-response relationship — these jumps in high-amplitude gamma activity are the highest ever reported in the scientific literature apart from pathological conditions like seizures. Yet the lamas were voluntarily inducing this extraordinarily heightened brain activity for just a few minutes at a time — and by meditating on "pure compassion," no less.

I have no explanation for these data, but plenty of questions. At the higher reaches of contemplative expertise, do principles apply (as the Dalai Lama has suggested in dialogues with neuroscientists) that we do not yet grasp? If so, what might these be? In truth, I have no idea. But these puzzling data points have pried open my mind a bit as I've had to question what had been a rock-solid assumption of my own.

Feuilleton (Arts & Ideas) Editor, Sueddeutsche Zeitung, Munich

That empirical data of journalism are no match for the bigger picture of science

I had witnessed the destructive power of faith more than a few times. As a reporter I had seen how evangelists supported a ruthless war in Guatemala, where the ruling general and evangelical minister Rios Montt had set out to eradicate the remnants of Mayan culture in the name of God. I had spent a month with Hamas in the refugee camps of Gaza, where fathers and mothers would praise their dead sons' suicide missions against Israel as spiritual quests. Long before 9/11 I had attended a religious conference in Khartoum, where Sudan's spiritual leader Hassan al-Turabi found common intellectual ground for such diverse people as Cardinal Arinze, Reverend Moon and the future brothers in arms Osama bin Laden and Ayman al-Zawahiri. There the Catholic Church, dubious cults and Islamist terror declared war on secularism and rational thought.

It didn't have to be outright war, though. Many times I saw faith paralyze thinking and intellect. I had listened to evangelical scholar Kent Hovind explain to children how Darwin was wrong because dinosaurs and man roamed the earth together. I had spent time with the Amish, a stubborn, backward people romanticized by Luddite sentiment. I had visited the Church of Scientology's Celebrity Center in Hollywood, where a strange and stifling dogma is glamorized by movie and pop stars.

It was during my work on faith in the US that I came across the New Religious Movements studies of David B. Barrett's "World Christian Encyclopedia". Barrett and his fellow researchers George T. Kurian and Todd M. Johnson had come to the conclusion that of all centuries, it was the 20th, the alleged pinnacle of secular thought, that had brought forth the most new religions in the history of civilization. They had counted 9,900 full-fledged religions around the world. The success of new religions, they explained, came with the disintegration of traditional structures like the family, the tribe and the village. In the rootless world of megacities and in American suburbia alike, religious groups provide the very social fabric that society no longer can.

It was hard facts against hard facts. At first, the visceral experience out in the field overpowered the raw data of the massive body of scientific research. Still, it forced me to rethink my hardened stance toward faith. It was hard to let go of the empirical data of experience and accept the hard facts of science. This is a route that normally leads from faith to rational thought. No, it hasn't brought me to faith, but I had to acknowledge faith's persistence and cultural power. First and foremost, it demonstrated that the empirical data of journalism are no match for the bigger picture of science.


John Brockman, Editor and Publisher
Russell Weinberger, Associate Publisher

contact: [email protected]
Copyright © 2008 by
Edge Foundation, Inc
All Rights Reserved.