The Edge Annual Question — 2008

When thinking changes your mind, that's philosophy.
When God changes your mind, that's faith.
When facts change your mind, that's science.

WHAT HAVE YOU CHANGED YOUR MIND ABOUT? WHY?

Science is based on evidence. What happens when the data change? How have scientific findings or arguments changed your mind?

165 contributors; 112,600 words

"The world's finest minds have responded with some of the most insightful, humbling, fascinating confessions and anecdotes, an intellectual treasure trove. ... Best three or four hours of intense, enlightening reading you can do for the new year. Read it now." — Mark Morford, San Francisco Chronicle

"As in the past, these world-class thinkers have responded to impossibly open-ended questions with erudition, imagination and clarity."J. Peder Zane, The News & Observer

"A jolt of fresh thinking...The answers address a fabulous array of issues. This is the intellectual equivalent of a New Year's dip in the lake — bracing, possibly shriek-inducing, and bound to wake you up."
— Margaret Wente, The Globe and Mail

"Answers ring like scientific odes to uncertainty, humility and doubt; passionate pleas for critical thought in a world threatened by blind convictions." Sandro Contenta, The Toronto Star

"For an exceptionally high quotient of interesting ideas to words, this is hard to beat. ...What a feast of egg-head opinionating!" — John Derbyshire, National Review Online

"Even the world’s best brains have to admit to being wrong sometimes: here, leading scientists respond to a new year challenge." — Lewis Smith, The Times

"Provocative ideas put forward today by leading figures." Roger Highfield, The Telegraph

"The splendidly enlightened Edge website (www.edge.org) has rounded off each year of inter-disciplinary debate by asking its heavy-hitting contributors to answer one question. I strongly recommend a visit."— Boyd Tonkin, The Independent

"A remarkable feast of the intellect... an amazing group of reflections on science, culture, and the evolution of ideas. Reading the Edge question is like being invited to dinner with some of the most interesting people on the planet." — Tim O'Reilly, O'Reilly Radar

"A great event in the Anglo-Saxon culture."El Mundo

"As fascinating and weighty as one would imagine." — Comment (Leading Article), The Independent

"They are the intellectual elite, the brains the rest of us rely on to make sense of the universe and answer the big questions. But in a refreshing show of new year humility, the world's best thinkers have admitted that from time to time even they are forced to change their minds." James Randerson, The Guardian

PRESS COVERAGE: Arts & Letters Daily; bloggingheads.tv; boingboing; Canberra Times; Corriere Della Sera; The Globe and Mail; The Guardian; Il Giornale; Infectious Greed; The Independent; El Mundo; National Review Online; The News & Observer; News@ORF.at; O'Reilly Radar; San Francisco Chronicle; Slashdot; Spiegel Online; Süddeutsche Zeitung; Sunday Tribune; The Telegraph; The Times; Toronto Star; The Wall Street Journal; The Washington Post; Die Zeit


CONTRIBUTORS

[166 contributors; 113,000 words; most recent first:] Daniel Kahneman, Nassim Nicholas Taleb, W. Daniel Hillis, David Goodhart, Mark Henderson, Ray Kurzweil, Lewis Wolpert, David Gelernter, Bart Kosko, Randolph M. Nesse, Linda S. Gottfredson, Kai Krause, Clay Shirky, Denis Dutton, Jamshed Bharucha, Lera Boroditsky, Gregory Benford, Richard Dawkins, Roger Bingham, Jesse Bering, Barry Smith, Steve Connor, Geoffrey Miller, George Johnson, Stephon Alexander, Beatrice Golomb, Chris DiBona, Jordan Pollack, Alison Gopnik, Paul Saffo, Neil Gershenfeld, J. Craig Venter, David Sloan Wilson, Simon Baron-Cohen, Austin Dacey, Daniel Engber, Roger Highfield, Francesco De Pretis, Dimitar Sasselov, Jaron Lanier, Janna Levin, Martin Rees, Esther Dyson, Anton Zeilinger, Gerd Gigerenzer, PZ Myers, Susan Blackmore, Adam Bly, Nicholas Humphrey, Paul Ewald, Seirian Sumner, Brian Eno, Hans Ulrich Obrist, Robert Shapiro, Sam Harris, Yossi Vardi, David Buss, Andrian Kreye, Daniel Goleman, James Geary, Tim O'Reilly, Philip Campbell, Frank Wilczek, Chris Anderson, Rupert Sheldrake, Nicholas A. Christakis, Daniel C. Dennett, Helena Cronin, Aubrey de Grey, Nicholas Carr, Lisa Randall, Brian Goodwin, Carolyn Porco, William H. Calvin, Mary Catherine Bateson, Stanislas Dehaene, Linda Stone, Sean Carroll, Richard Wrangham, Marco Iacoboni, Scott Atran, Leo Chalupa, John Allen Paulos, Eduardo Punset, Rebecca Goldstein, Juan Enriquez, George Dyson, Paul Davies, Steven Pinker, Alan Alda, Patrick Bateson, Jon Haidt, George Church, Terrence Sejnowski, Judith Rich Harris, Oliver Morton, Stewart Brand, Daniel Gilbert, Sherry Turkle, John Horgan, Roger Schank, Carlo Rovelli, Xeni Jardin, Stephen Schneider, Diane Halpern, Alan Kay, Marti Hearst, Kevin Kelly, Marcel Kinsbourne, Peter Schwartz, Scott Sampson, Ernst Pöppel, John McCarthy, Seth Lloyd, Gary Klein, Stephen Kosslyn, Lawrence Krauss, Jeffrey Epstein, Ken Ford, John Baez, A. Garrett Lisi, Lee Smolin, Gary Marcus, Lee Silver, Laurence Smith, Robert Trivers, Rodney Brooks, Paul Steinhardt, Helen Fisher, Steve Nadis, Tor Nørretranders, Robert Sapolsky, Max Tegmark, David Dalrymple, Daniel Everett, David Myers, Keith Devlin, Todd Feinberg, Robert Provine, Marc D. Hauser, Thomas Metzinger, Dan Sperber, Leon Lederman, Timothy Taylor, Haim Harari, David Bodanis, Charles Seife, Mark Pagel, Arnold Trehub, Gino Segre, Nick Bostrom, Rudy Rucker, David Brin, Ed Regis, Freeman Dyson, Marcelo Gleiser, Irene Pepperberg, Colin Tudge, James O'Donnell, Michael Shermer, Donald Hoffman, Howard Gardner, Piet Hut, Douglas Rushkoff, Karl Sabbagh, Joseph LeDoux, Martin Seligman


boingboing
January 10, 2008

EDGE Question 2008: What have you changed your mind about?

POSTED BY XENI JARDIN, JANUARY 10, 2008 9:44 AM | PERMALINK

I've been traveling in Central America for the past few weeks, so I'm late on blogging a number of things -- including this. Each year, EDGE.org's John Brockman asks a new question, and a bunch of tech/sci/internet folks reply. This year's question: What have you changed your mind about?

Science is based on evidence. What happens when the data change? How have scientific findings or arguments changed your mind?

Link.

I was one of the 165 participants, and wrote about what I learned from Boing Boing's community experiments, under the guidance of our community manager Teresa Nielsen Hayden: Link to "Online Communities Rot Without Daily Tending By Human Hands."

Here's a partial link-list of my favorite contributions from others:

Tor Nørretranders, W. Daniel Hillis, Ray Kurzweil, David Gelernter, Kai Krause, Clay Shirky, J. Craig Venter, Simon Baron-Cohen, Jaron Lanier, Martin Rees, Esther Dyson, Brian Eno, Yossi Vardi, Tim O'Reilly, Chris Anderson, Rupert Sheldrake, Daniel C. Dennett, Aubrey de Grey, Nicholas Carr, Linda Stone, George Dyson, Steven Pinker, Alan Alda, Stewart Brand, Sherry Turkle, Rudy Rucker, Freeman Dyson, Douglas Rushkoff.

...



SAN FRANCISCO CHRONICLE
January 9, 2008

A top 10 of the top 10
Mark Morford

Honorable mention (links.sfgate.com/ZBZY): It's not a top 10 list. It's not even a top 100. It has nothing to do with fashion or trends or politics or the year's coolest iPod accessories. It is intellectual hotbed Edge.org's annual question, this time a profound doozy: "What have you changed your mind about? Why?"

As of now, 165 of the world's finest minds have responded with some of the most insightful, humbling, fascinating confessions and anecdotes, an intellectual treasure trove of proof that flip-flopping is a very good thing indeed, especially when informed/inspired by facts and shot through with personal experience and laced with mystery and even a little divine insight. Best three or four hours of intense, enlightening reading you can do for the new year. Read it now.

Then flip it over and answer the same question for yourself.

...



NEWS @ORF.at
January 9, 2008

When scientists change their minds
Lukas Wieselberg, science.ORF.at

"Flip-Flops" werden im Englischen verächtlich Menschen genannt, die plötzlich ihre Meinung ändern. Was bei Politikern oft als ein Zeichen von Opportunismus interpretiert wird, gehört in der Wissenschaft zum Wesen. Dennoch ist es auch unter Forschern und Forscherinnen nicht üblich, sich öffentlich zu einem Sinneswandel zu bekennen. Genau das haben sie aber nun gemacht. Bereits zum elften Mal hat der New Yorker Literaturagent John Brockman namhaften Wissenschaftlern zum Jahreswechsel knifflige Fragen gestellt. Diesmal lauten sie "Wobei haben Sie Ihre Meinung geändert? Und warum?"

The answers from a total of 165 researchers and experts are varied and often amusing: the biologist Richard Dawkins explains why changing one's mind is no evolutionary handicap; the philosopher Helena Cronin shows that among men there are not only more Nobel laureates but also more dimwits; and Anton Zeilinger recounts his error of once having considered quantum physics "useless." ...

...



THE GLOBE AND MAIL
January 9, 2008

RECOMMENDED LINKS

IT doublethink
Shane Schick

Even IT gurus have the right to think twice.

This year the online salon Edge.org has drawn a lot of attention for the annual question it put out to a mixture of scientists and artists: What have you changed your mind about?

Contributors range from actor Alan Alda to folk singer Joan Baez, but some of the real gems came from technology visionaries who decided to take a second look at their original visions.

[Note to Globe and Mail: It's "the mathematical physicist John C. Baez", not his cousin the "folk singer Joan Baez", daughter of the physicist Albert Baez.]

...



TEMPOS DEL MUNDO (Buenos Aires)
January 8, 2008

The most prestigious scientists also change their minds

BUENOS AIRES, Jan. 8 (UPI) — On the occasion of the new year, the most sublime thinkers of the world have acknowledged that, from time to time, they are obliged to revise their views. Addressing topics as diverse as human evolution, the laws of physics and sex differences, a group of scientists and philosophers, including Steven Pinker, Daniel Dennett, Paul Davies and Richard Wrangham, have all confessed, without exception, that they have changed their minds, reports Madrimasd.org. This display of scientific modesty has come about as a result of the question that the website edge.org poses each year to coincide with the new year, and which has drawn responses from more than 120 of the most important thinkers in the world. A recurring theme in the answers is that what distinguishes science from other forms of knowledge and from faith is that new ideas quickly replace old ones when they are backed by evidence. Accordingly, in the intellectual sphere there is nothing shameful about admitting that one has changed one's position.

[Spanish Original ...]



SÜDDEUTSCHE ZEITUNG (Munich)
January 8, 2008

FEUILLETON — Page 1

The Party of Doubters
For the question of the year in the online magazine Edge, scientists reflect on their own fallibility
Ralf Bönt

One of the most stimulating intellectual games appears every January on the website Edge.org, when scientists and artists answer the question of the year in the "World Question Center." In 2007 the religions took a vehement beating, and so the question for 2008 already sounds like another all-out assault on the blessed: "Which of your opinions have you changed?" Religion, after all, is the home of divine truth, which need not justify itself and cannot be doubted. If he belonged to any party, the agnostic Camus once said, it was the party of doubt. No confrontation should be shied away from any longer. The last refuge of the undoubting, by contrast, remains faith. As far as Edge is concerned, however, this expectation is disappointed. ...



IL GIORNALE (Genoa)
January 6, 2008

Turnaround for Scientists
Matteo Sacchi

What is the coolest online forum, one where scientists and great minds from all over the world exchange opinions and ideas, and the one that keeps the scientific debate alive? Almost certainly it's edge.org, an American website whose most ardent supporters include, to quote some of the best known, Richard Dawkins, the famous and controversial evolutionary biologist and author of The Selfish Gene; Brian Eno, the visionary music producer; psychologist Steven Pinker; and physicists like Alan Guth or Gino Segré, who are changing the present vision of the universe. This is where you'll run into debates that count, thanks also to a device that has started a cultural trend: every year edge.org asks an artful question that the big brains who haunt its electronic pages are invited to answer. This year's question is: What have you changed your mind about? Why?

The mea culpas flocked in in great numbers and from prestigious sources (more than a hundred in a few days), revealing that the greatest minds are changing their opinions on a lot of subjects, from the expansion of the universe to evolution, from the meaning of science to the workings of the human brain, by way of the value of the Roman Empire in the face of the barbarians.


...



THE NEWS & OBSERVER (Raleigh-Durham)
January 6, 2008

Zane: The changing of the mind
By J. Peder Zane, Staff Writer

... As in the past, these world-class thinkers have responded to Web site editor John Brockman's impossibly open-ended questions with erudition, imagination and clarity.

In explaining why they have cast aside old assumptions, the respondents' short essays tackle an array of subjects, including the nature of consciousness, the existence of the soul, the course of evolution and whether reason will ultimately triumph over superstition.

Two of the most interesting answers may signal a cease-fire in the gender wars.

In 2005, Harvard President Lawrence H. Summers was assailed for suggesting that innate differences might explain why there are few top women scientists. Now Diane F. Halpern, a psychology professor at Claremont McKenna College and a self-described "feminist," says Summers was onto something.

"There are real, and in some cases sizable, sex differences with respect to cognitive abilities," she writes.

Her views are echoed by Helena Cronin, a philosopher at the London School of Economics.

"Females," she writes, "are much of a muchness, clustering around the mean." With men, "the variance — the difference between the most and the least, the best and the worst — can be vast." Translation: There may be fewer female geniuses in certain fields, but there are also fewer female morons...

...



BLOGGINGHEADS TV
January 5, 2008

Science Saturday: New Beliefs for a New Year

• Edge.org’s annual question
• George’s answer to the Edge question
• John’s answer to the Edge question


John Horgan & George Johnson

John and George's New Year resolutions; John softens his pessimism about neuroscience; The soccer club theory of terrorism; The trouble with relying on experts; How George got hooked on garage-band science; Happiness is a burning bridge.

...



THE GLOBE AND MAIL
January 5, 2008
OPINIONS

Hark! A shriek-inducing wake-up call; Culture can change our genes. Men really do outperform women. We can't predict the future ...

Margaret Wente Comment Column; Second Thoughts

If you want to start your year with a jolt of fresh thinking, I have just the thing. Each year around this time, a Web-based outfit called the Edge Foundation asks a few dozen of the world's brightest scientific brains one big question. This year's question: What have you changed your mind about?

The answers address a fabulous array of issues, including the existence of God, the evolution of mankind, climate change and the nature of the universe. Some of the most provocative responses deal with the bonanza of new evidence from the fast-evolving fields of genetics, neuroscience and evolutionary biology. This is the intellectual equivalent of a New Year's dip in the lake - bracing, possibly shriek-inducing, and bound to wake you up. For the full menu, go to www.edge.org. Meantime, here's a taste. ...

...



THE WALL STREET JOURNAL
January 5, 2008

The Informed Reader
CULTURE
Change of Mind Could Spur A Hardening of the Heart

EDGE -- JAN. 4

When scientists and other prominent intellectuals change their mind about important things, their new outlook often is gloomier.

That, at least, is the theme of responses to a survey conducted by online science-and-culture publication the Edge, which asked some influential thinkers: "What have you changed your mind about? Why?" ...

...Fittingly, Harvard University psychologist Daniel Gilbert says he has changed his mind about the benefits of changing one's mind. In 2002, a study showed him that people are more satisfied with irrevocable decisions than with ones they can reverse. Acting on the data, he proposed to his now-wife. "It turned out that the data were right: I love my wife more than I loved my girlfriend."

...



TORONTO STAR
January 5, 2008

CHANGING YOUR MIND
In praise of the flip
Ralph Waldo Emerson called consistency the hobgoblin of little minds, yet we live in a world where 'flip-floppers' are treated with contempt. An ambitious new survey of top thinkers, however, serves as a reminder of how healthy it is to change one's mind

Sandro Contenta
Staff Reporter

...Challenging this complacency is a project by the Edge Foundation, a group promoting discussion and inquiry into issues of our time. To kick off the New Year, the group put this statement and question to many of the world's leading scientists and thinkers:

"When thinking changes your mind, that's philosophy. When God changes your mind, that's faith. When facts change your mind, that's science. What have you changed your mind about?"

Answers, posted on the website www.edge.org, came from 164 people, many of them physicists, philosophers, psychologists and anthropologists. They ring like scientific odes to uncertainty, humility and doubt; passionate pleas for critical thought in a world threatened by blind convictions. In short, they're calls for more people who can change their minds. ...

...



WASHINGTON POST
January 4, 2008

RAW FISHER
Marc Fisher


RFQ: What Have You Changed Your Mind About? (Plus: Last Chance on the Coin Contest)

...University of Virginia psychologist Jonathan Haidt says he used to consider sports and fraternities to be the height of American celebration of stupidity. "Primitive tribalism, I thought. Initiation rites, alcohol, sports, sexism, and baseball caps turn decent boys into knuckleheads. I'd have gladly voted to ban fraternities, ROTC, and most sports teams from my university." But Haidt has changed his mind: "I had too individualistic a view of human nature. I began to see us not just as chimpanzees with symbolic lives but also as bees without hives. When we made the transition over the last 200 years from tight communities (Gemeinschaft) to free and mobile societies (Gesellschaft), we escaped from bonds that were sometimes oppressive, yes, but into a world so free that it left many of us gasping for connection, purpose, and meaning. I began to think about the many ways that people, particularly young people, have found to combat this isolation. Rave parties and the Burning Man festival are spectacular examples of new ways to satisfy the ancient longing for communitas. But suddenly sports teams, fraternities, and even the military made a lot more sense." ...

...



INFECTIOUS GREED
January 1, 2008

What Have You Changed Your Mind About?
by Paul Kedrosky

This year's Big Question at Edge from John Brockman, et al., is this, What have you changed your mind about? This is, at least, an interesting question, so I'll start by saying that what I've changed my mind about is whether, in general, the Edge's annual question is worth reading. Okay, sometimes it is.

That said, are any specific answers to this year's Big Question worth reading? Somewhat surprisingly, yes. Granted, some of the answers are just wankery, scientists and others saying that they used to think we wouldn't solve Problem X, and now they think we will, someday, etc. Or, worse yet, there is a passel of up-with-the-environment puffery, where the previously unconverted become carbon holy-rollers. ...

Here are a couple worth reading. Feel free to add more.

Economist Dan Kahneman on the aspiration treadmill
Clay Shirky on science and religion
Nassim Taleb on .... nothing (okay, incomplete, but I still like the semiotic pun)...

...



NATIONAL REVIEW ONLINE
January 3, 2008

the corner

Plato Had a Bad Year [John Derbyshire]

For an exceptionally high quotient of interesting ideas to words, this is hard to beat. ... What a feast of egg-head opinionating!

If there's a common tendency running through many of these pieces, it is the fast-rising waters of naturalism, released by a half-century of discoveries in genetics, evolutionary biology, and neuroscience, submerging every other way of looking at the human world.

We are part of nature, a twig on the tree of life. If we are to have any understanding of ourselves, we must start from that. Final answers to ancient questions are beginning to come in. You may not be happy about the answers; but not being happy about them will be like not being happy about Heisenberg's Uncertainty Principle.

...



DIE ZEIT
January 2, 2008

Small question, big answers

Even the best minds in the world sometimes have to accept that they were wrong. Scientists answer the Edge Foundation's question about what they have changed their minds about — and why.

The intellectuals' responses are personal, sometimes highly technical, but also political. They cover a wide range of what occupies people: climate change, the difference between men and women, but also the question of the existence of God.

...



Corriere Della Sera — Italy
January 2, 2008

A Cultural Forum asks leading thinkers and philosophers to share their mistakes

When a scientist admits: I had it wrong

From theories of evolution to differences among races, some scholars' mea culpas are online

LONDON — "When thinking changes your mind, that's philosophy, when God changes your mind, that's faith, when facts change your mind, that's science". This is the introduction to this year's question as posed by a cultural association to which the principal thinkers of the moment belong, from Richard Dawkins, British evolutionary biologist and author of the cult book The Selfish Gene, to psychologist Steven Pinker, by way of music producer Brian Eno.

Hundreds responded to the challenge (perhaps in part because the answers to preceding questions were published as books) and revealed widespread reversals of opinions—sometimes dramatic, sometimes gracious.

...



EL MUNDO — Spain
January 2, 2008

ZOOM: Edge Question


At the beginning of each year there is a great event in Anglo-Saxon culture, or rather, in the social life of that culture... The event is called the Edge Annual Question, and it brings together many of the most interesting thinkers of our world. ...

Anthropologist Richard Wrangham has introduced a subtle shift in the explanation of the evolutionary history of man: he once believed it was driven by eating meat; now he believes that the decisive factor was cooking, i.e., the change from raw to cooked food. The response from the musician Brian Eno explains how he went from revolution to evolution, and how he left Maoism for Darwin. ...



THE TIMES
January 1, 2008

Science has second thoughts about life
Even the world's best brains have to admit to being wrong sometimes: here, leading scientists respond to a new year challenge


Lewis Smith, Science Reporter

The new year is traditionally a time when people tend to look back and try to work out where it all went wrong – and how to get it right in the future.

This time the Edge Foundation asked a number of leading scientists and thinkers why they had changed their minds on some of the pivotal issues in their fields. The foundation, a chat forum for intellectuals, posed the question: "When thinking changes your mind, that's philosophy. When God changes your mind, that's faith. When facts change your mind, that's science. What have you changed your mind about? Why?"

The group's responses covered controversial issues, including climate change, whether God or souls exist and defining when humanity began.


...


SLASHDOT
January 1, 2008

Posted by Zonk on Tuesday January 01, @12:41PM
from the read-dawkins'-it's-awesome dept.

chrisd writes

"The Edge 2008 question (with answers) is in. This year, the question is: 'What did you change your mind about and why?'. Answers are featured from scientists as diverse as Richard Dawkins, Simon Baron-Cohen, George Church, David Brin, J. Craig Venter and the Astronomer Royal, Lord Martin Rees, among others. Very interesting to read. For instance, Stewart Brand writes that he now realizes that 'Good old stuff sucks' and Sam Harris has decided that 'Mother Nature is Not Our Friend.' What did Slashdot readers change their minds about in 2007?"

...



GUARDIAN UNLIMITED
January 1, 2008

Change of heart
What did you change your mind about in 2007? The world's intellectual elite spread some New Year humility.

James Randerson, science correspondent

Since I wrote my piece on this year's show of scientific humility for the New Year's day paper some big names have added their thoughts to the mix.

Here's evolutionary biologist Richard Dawkins on how being a "flip-flopper" is no bad thing in science...

The controversial geneticist Craig Venter has had a change of heart about the capacity of our planet to soak up the punishment humanity is throwing at it...

There are also interesting contributions from Simon Baron-Cohen, the University of Cambridge autism researcher who has changed his mind about equality; psychologist Susan Blackmore, who has gone from embracing the paranormal to debunking it; and artist and composer Brian Eno, who was once seduced by Maoism, but now believes it is a "monstrosity".

What did you change your mind about in 2007?

...



THE INDEPENDENT
January 1, 2008

Deep thinkers reveal that they, too, can change their minds
Steve Connor

Helena Cronin, a philosopher at the London School of Economics, turns her attention to why men appear far more successful than women, by persistently walking off with the top positions and prizes in life — from being heads of state to winning Nobels.

Dr Cronin used to think it was down to sex differences in innate talents, tastes and temperament. But now she believes it has also something to do with the fact that women cluster around a statistical average, whereas men are more likely to be represented at the extreme ends of the normal spectrum — both at the top and the bottom.

Some replies to the Edge question ponder the perennial problem of God. Professor Patrick Bateson of Cambridge University has changed his mind on what to call himself after meeting a virulent creationist. He is no longer an agnostic but an atheist. Meanwhile the actor and writer Alan Alda said that he has changed his mind about God — twice.

What have you changed your mind about? Why?

...



O'REILLY RADAR
January 1, 2008

What Have You Changed Your Mind About?
By Tim O'Reilly

...I eventually offered some ideas and he jumped on one: my skepticism about the term "social software" after Clay Shirky's "Social Software Summit" in November 2002. As it turns out, Clay was right and I was wrong. This was a powerful meme indeed, just five years early.

Here's what I wrote for the 2008 Edge question. As I suspected, it's a meager offering at a remarkable feast of the intellect. Use it, if you must, as an entry point to an amazing group of reflections on science, culture, and the evolution of ideas. Reading the Edge question is like being invited to dinner with some of the most interesting people on the planet.

...



THE GUARDIAN
January 1, 2008

Second thoughts on life, the universe and everything by world's best brains

The changes of mind that gave philosophers and scientists new insights


James Randerson, science correspondent

They are the intellectual elite, the brains the rest of us rely on to make sense of the universe and answer the big questions. But in a refreshing show of new year humility, the world's best thinkers have admitted that from time to time even they are forced to change their minds.

When tackling subjects as diverse as human evolution, the laws of physics and sexual politics, scientists and philosophers, including Steven Pinker, Daniel Dennett, Paul Davies and Richard Wrangham, all confessed yesterday to a change of heart.

The display of scientific modesty was brought about by the annual new year's question posed by the website edge.org, which drew responses from more than 120 of the world's greatest thinkers.

...



THE INDEPENDENT
31 December 2007

Boyd Tonkin: This year, how about some new year's irresolution?

Changes of mind lie at the core of almost every breakthrough in science, art and thought

From tomorrow morning, we can all sample the reasoning that drives shifts in position by a selection of leading scientists and social thinkers. Since 1998, the splendidly enlightened Edge website (www.edge.org) has rounded off each year of inter-disciplinary debate by asking its heavy-hitting contributors to answer one question. This time, the new-year challenge runs: "What have you changed your mind about? Why?". I strongly recommend a visit to anyone who feels browbeaten by fans of that over-rated virtue: mere consistency.

...



ARTS & LETTERS DAILY
January 1 2008

Articles of Note
What have you changed your mind about, and why? John Brockman's Edge put the question to over a hundred scientists and scholars... more»



THE INDEPENDENT
January 1 2008
COMMENT

Leading article: Why, oh why?

It's becoming something of a New Year ritual. For almost a decade, the website www.edge.org has been asking a selection of eminent thinkers and scholars to answer a single question and publishing the results on 1 January.

In the past it has presented such posers as "What do you believe is true, even though you cannot prove it?" and "What is the most important invention of the past 2,000 years?"

This year Edge wanted to know: "What have you changed your mind about and why?" As usual, it's a good question. And the responses of the likes of Steven Pinker and Helena Cronin are as fascinating and weighty as one would imagine.

...



THE TELEGRAPH
December 31, 2007

Scientists reveal what changed their minds
By Roger Highfield, Science Editor

The best men really do outperform the best women, drugs should be used to enhance our mental powers, and marriages suffer from a "four year itch", not a seven year one.

These are among the provocative ideas put forward today by leading figures who have been asked what has changed their minds about some of the biggest issues.

The poll of Nobel laureates, scientists, futurists and creative thinkers is published by John Brockman, the New York-based literary agent and publisher of The Edge website.

...


JUST PUBLISHED!
What Are You Optimistic About?:
Today's Leading Thinkers on Why Things Are Good and Getting Better

Introduction by Daniel C. Dennett

"Persuasively upbeat." O, The Oprah Magazine "Our greatest minds provide nutshell insights on how science will help forge a better world ahead." Seed "Uplifting...an enthralling book." The Mail on Sunday


What Is Your Dangerous Idea?: Today's Leading Thinkers on the Unthinkable
Introduction by Steven Pinker
Afterword by Richard Dawkins

"Danger —brilliant minds at work...A brilliant book: exhilarating, hilarious, and chilling." The Evening Standard (London) "A selection of the most explosive ideas of our age." Sunday Herald "Provocative" The Independent "Challenging notions put forward by some of the world's sharpest minds" Sunday Times "A titillating compilation" The Guardian "Reads like an intriguing dinner party conversation among great minds in science" Discover


What We Believe but Cannot Prove:
Today's Leading Thinkers on Science in the Age of Certainty
Introduction by Ian McEwan

"An unprecedented roster of brilliant minds, the sum of which is nothing short of an oracle — a book ro be dog-eared and debated." Seed "Scientific pipedreams at their very best." The Guardian "Makes for some astounding reading." Boston Globe Fantastically stimulating...It's like the crack cocaine of the thinking world.... Once you start, you can't stop thinking about that question." BBC Radio 4 "Intellectual and creative magnificence" The Skeptical Inquirer

Harvard Coop, December 24, 2007

INDEX

MARTIN SELIGMAN
Psychologist, University of Pennsylvania, Author, Authentic Happiness

We Are Alone


JOSEPH LEDOUX
Neuroscientist, New York University; Author, The Synaptic Self

Like many scientists in the field of memory, I used to think that a memory is something stored in the brain and then accessed when used.


DOUGLAS RUSHKOFF
Media Analyst; Documentary Writer; Author, Get Back in the Box: Innovation from the Inside Out

The Internet


PIET HUT
Professor of Astrophysics, Institute for Advanced Study, Princeton

Explanations


HOWARD GARDNER
Psychologist, Harvard University; Author, Changing Minds

Wrestling with Jean Piaget, my Paragon


DONALD HOFFMAN
Cognitive Scientist, UC Irvine; Author, Visual Intelligence

Veridical Perception


MICHAEL SHERMER
Publisher of Skeptic magazine, monthly columnist for Scientific American; Author, Why Darwin Matters

The Nature of Human Nature


JAMES O'DONNELL
Classicist; Cultural Historian; Provost, Georgetown University; Author, Augustine: A New Biography

I stopped cheering for the Romans


COLIN TUDGE
Science Writer; Author, The Tree: A Natural History of What Trees Are, How They Live, and Why They Matter

The Omniscience and Omnipotence of Science



IRENE PEPPERBERG
Research Associate, Psychology, Harvard University; Author, The Alex Studies

The Fallacy of Hypothesis Testing


MARCELO GLEISER
Physicist, Dartmouth College; Author, The Prophet and the Astronomer

To Unify or Not: That is the Question


FREEMAN DYSON
Physicist, Institute for Advanced Study; Author, A Many Colored Glass

When facts change your mind, that's not always science. It may be history. I changed my mind about an important historical question: did the nuclear bombings of Hiroshima and Nagasaki bring World War Two to an end?


ED REGIS
Science Writer, Author, Nano

Predicting the Future


DAVID BRIN
Physicist; Technical Consultant; Science Fiction Writer; Author, The Transparent Society

Sometimes you are glad to discover you were wrong. My best example of that kind of pleasant surprise is India. I'm delighted to see its recent rise, on (tentative) course toward economic, intellectual and social success.


RUDY RUCKER
Mathematician, Computer Scientist; CyberPunk Pioneer; Novelist; Author,
Lifebox, the Seashell, and the Soul

Can Robots See God?


NICK BOSTROM
Philosopher, University of Oxford; Author,

Everything


GINO SEGRE
Physicist, University of Pennsylvania; Author: Faust In Copenhagen: A Struggle for the Soul of Physics

The Universe's Expansion


ARNOLD TREHUB
Psychologist, University of Massachusetts, Amherst; Author: The Cognitive Brain

I have never questioned the conventional view that a good grounding in the physical sciences is needed for a deep understanding of the biological sciences. It did not occur to me that the opposite view might also be true.


MARK PAGEL
Evolutionary Biologist, Reading University, England

We Differ More Than We Thought



CHARLES SEIFE
Professor of Journalism, New York University; formerly journalist, Science magazine; Author, Zero: The Biography Of A Dangerous Idea

I used to think that a modern, democratic society had to be a scientific society.


DAVID BODANIS
Writer; Consultant; Author, Passionate Minds

The Bible Is Inane


HAIM HARARI
Physicist, former President, Weizmann Institute of Science

Clear and simple is not the same as provable and well defined


TIMOTHY TAYLOR
Archaeologist, University of Bradford; Author, The Buried Soul

Relativism


LEON LEDERMAN
Physicist and Nobel Laureate; Director Emeritus, Fermilab; Coauthor, The God Particle

The Obligations and Responsibilities of The Scientist


DAN SPERBER
Social and cognitive scientist; Directeur de Recherche, CNRS, Paris; Author, Rethinking Symbolism

How I Became An Evolutionary Psychologist


THOMAS METZINGER
Johannes Gutenberg-Universität Mainz; Author, Being No One

There are No Moral Facts


MARC D. HAUSER
Psychologist and Biologist, Harvard University: Author, Moral Minds

The Limits Of Darwinian Reasoning


ROBERT PROVINE
Psychologist and Neuroscientist, University of Maryland; Author, Laughter

In Praise of Fishing Expeditions


TODD E. FEINBERG, M.D.
Professor of Psychiatry and Neurology, Albert Einstein College of Medicine; Author, Altered Egos

Soul Searching



KEITH DEVLIN
Mathematician; Executive Director, Center for the Study of Language and Information, Stanford; Author, The Millennium Problems

What is the nature of mathematics?


DAVID G. MYERS
Social psychologist, Hope College; author, Psychology, 8th edition

Reading and reporting on psychological science has changed my mind many times....


DANIEL EVERETT
Researcher of Pirahã Culture; Chair of Languages, Literatures, & Cultures, Professor of Linguistics and Anthropology, Illinois State University

Homeopathic Bias and Language Origins


DAVID DALRYMPLE
Student, MIT's Center for Bits and Atoms; Researcher, Internet 0, Fab Lab Thinner Clients for South Africa, Conformal Computing

Maybe MBAs Should Design Computers After All


MAX TEGMARK
Physicist, MIT; Researcher, Precision Cosmology

Do we need to understand consciousness to understand physics?  I used to answer "yes", thinking that we could never figure out the elusive "theory of everything" for our external physical reality without first understanding the distorting mental lens through which we perceive it.


ROBERT SAPOLSKY
Neuroscientist, Stanford University, Author, A Primate's Memoir

I'm both a neurobiologist and a primatologist, and I've changed my mind about plenty of things in both of these realms. But the most fundamental change is one that transcends either of those disciplines — this was my realizing that the most interesting and important things in the life sciences are not going to be explained with sheer reductionism.


TOR NØRRETRANDERS
Science Writer; Consultant; Lecturer, Copenhagen; Author, The Generous Man

Permanent Reincarnation


HELEN FISHER
Research Professor, Department of Anthropology, Rutgers University; Author,
Why We Love

Planned Obsolescence?  The Four-Year Itch


STEVE NADIS
Science writer; Contributing Editor, Astronomy Magazine


The Myth Of The "Open Mind"


PAUL STEINHARDT
Physicist; Albert Einstein Professor of Science, Princeton University; Coauthor, Endless Universe: A New History of the Cosmos

What created the structure of the universe?



RODNEY A. BROOKS
Panasonic Professor of Robotics, MIT, and CTO, iRobot Corp; Author, Flesh and Machines

Computation as the Ultimate Metaphor


ROBERT TRIVERS
Evolutionary Biologist, Rutgers University; Coauthor, Genes In Conflict: The Biology of Selfish Genetic Elements

The Science of Self-deception Requires a Deep Understanding of Biology


LAURENCE C. SMITH
Professor of Geography, UCLA

Rapid climate change


LEE M. SILVER
Professor of Molecular Biology and Public Policy,  Woodrow Wilson School, Princeton; Author, Challenging Nature

"If we could just get people to understand the science, they'd agree with us." Not.


GARY MARCUS
Psychologist, New York University; Author, The Birth of the Mind

What's Special About Human Language



A. GARRETT LISI
Independent Theoretical Physicist; Author, "An Exceptionally Simple Theory of Everything"

I Used To Think I Could Change My Mind


JOHN BAEZ
Mathematical Physicist

Should I be thinking about quantum gravity?




STEPHEN M. KOSSLYN
Psychologist, Harvard University; Author, Wet Mind

The World in the Brain


GARY KLEIN
Research Psychologist; Founder, Klein Associates; Author, The Power of Intuition

Exchanging Your Mind


ALAN KRUEGER
Bendheim Professor of Economics and Public Affairs at Princeton University; Author, What Makes a Terrorist: Economics and the Roots of Terrorism

I used to think the labor market was very competitive, but now I think it is better characterized by monopsony, at least in the short run.


SETH LLOYD
Quantum Mechanical Engineer, MIT, Author, Programming the Universe

I have changed my mind about technology.


JOHN MCCARTHY
Computer Scientist; 1st Generation Artificial Intelligence Pioneer, Stanford University

Attitudes Trump Facts


ERNST PÖPPEL
Neuroscientist, Chairman, Board of Directors, Human Science Center and Department of Medical Psychology, Munich University, Germany; Author, Mindworks

Being Caught In The Language Trap — Or Wittgenstein's Straitjacket


SCOTT SAMPSON
Chief Curator, Utah Museum of Natural History; Associate Professor, University of Utah; Host, Dinosaur Planet TV series

The Death of the Dinosaurs


PETER SCHWARTZ
Futurist, Business Strategist; Cofounder, Global Business Network, a Monitor Company; Author, The Long Boom

In the last few years I have changed my mind about nuclear power.


MARCEL KINSBOURNE, M.D.
Neurologist & Cognitive Neuroscientist, The New School; Coauthor, Children's Learning and Attention Problems

The Impressionable Brain


KEVIN KELLY
Editor-At-Large, Wired; Author, New Rules for the New Economy

Much of what I believed about human nature, and the nature of knowledge, has been upended by the Wikipedia.



MARTI HEARST
Computer Scientist, UC Berkeley, School of Information

Computational Analysis of Language Requires Understanding Language


ALAN KAY
Computer Scientist; Personal Computer Visionary, Senior Fellow, HP Labs

A Big Mind Change At Age 10: Vacuums Don't Suck!


DIANE F. HALPERN
Professor, Claremont McKenna College; Past-president, American Psychological Association; Author, Sex Differences in Cognitive Abilities

From A Simple Truth To "It All Depends"


STEPHEN H. SCHNEIDER
Biologist; Climatologist, Stanford University; Author, Laboratory Earth

Climate Change: Warming Up To The Evidence


XENI JARDIN
Tech Culture Journalist; Co-editor, Boing Boing; Commentator, NPR; Host, Boing Boing tv

Online Communities Rot Without Daily Tending By Human Hands


CARLO ROVELLI
Physicist, Université de la Méditerranée (Marseille, France); Author, What Is Time? What Is Space?

There is nothing to add to the standard interpretation of quantum mechanics.


ROGER C. SCHANK
Psychologist & Computer Scientist; Engines for Education Inc.; Author, Making Minds Less Well Educated than Our Own

AI?


JOHN HORGAN
Director, the Center for Science Writings, Stevens Institute of Technology; Author, Rational Mysticism

Changing My Mind About the Mind-Body Problem


SHERRY TURKLE
Psychologist, MIT; Author, Evocative Objects: Things We Think With

What I've Changed My Mind About


DANIEL GILBERT
Harvard College Professor of Psychology at Harvard University; Author, Stumbling on Happiness

The Benefit of Being Able to Change My Mind



STEWART BRAND
Founder, Whole Earth Catalog; Cofounder, The Well; Cofounder, Global Business Network; Author, How Buildings Learn

Good Old Stuff Sucks


OLIVER MORTON
Chief News and Features Editor, Nature; Author, Mapping Mars

Human Spaceflight


JUDITH RICH HARRIS
Independent Investigator and Theoretician; Author,
No Two Alike: Human Nature and Human Individuality

Generalization


GEORGE CHURCH
Professor of Genetics, Harvard Medical School; Director, Center for Computational Genetics

Evolution of Faith In Experiments


TERRENCE SEJNOWSKI
Computational Neuroscientist, Salk Institute, Coauthor, The Computational Brain

I have changed my mind about cortical neurons and now think that they are far more capable than we ever imagined.


JON HAIDT
Psychologist, University of Virginia; Author, The Happiness Hypothesis

Sports and fraternities are not so bad


PATRICK BATESON
Professor of Ethology, Cambridge University; Author, Design for a Life

Changing my Mind


ALAN ALDA
Actor, writer, director, and host of PBS program "Scientific American Frontiers."

So far, I've changed my mind twice about God


STEVEN PINKER
Psychologist, Harvard University; Author, The Stuff of Thought

Have Humans Stopped Evolving?


PAUL DAVIES
Physicist, Arizona State University; Author,
The Cosmic Jackpot

I used to be a committed Platonist



GEORGE B. DYSON
Science Historian; Author, Project Orion

Russian America


JUAN ENRIQUEZ
CEO, Biotechonomy; Founding Director, Harvard Business School's Life Sciences Project; Author, The Untied States of America

The source of long term power


REBECCA GOLDSTEIN
Philosopher, Harvard University; Author, Betraying Spinoza

Falsifiability


EDUARDO PUNSET
Scientist; Spanish Television Presenter; Author, The Happiness Trip

The soul is in the brain


JOHN ALLEN PAULOS
Professor of Mathematics, Temple University, Philadelphia; Author, Irreligion: A Mathematician Explains Why the Arguments for God Just Don't Add Up

The Convergence of Belief Change


LEO CHALUPA
Ophthalmologist and Neurobiologist, University of California, Davis

Brain plasticity


SCOTT ATRAN
Anthropologist, University of Michigan; Author, In Gods We Trust

The Religious Politics of Fictive Kinship


MARCO IACOBONI
Neuroscientist, UCLA Brain Mapping Center; Author, Mirroring People

The eradication of irrational thinking is (not) inevitable (it will require some serious work)


RICHARD WRANGHAM
Professor of Biology and Anthropology, Harvard University; Coauthor (with Dale Peterson), Demonic Males: Apes and the Origins of Human Violence

The Human Recipe


SEAN CARROLL
Theoretical Physicist, Cal Tech

Being a Heretic is Hard Work



LINDA STONE
Former VP, Microsoft & Co-Founder & Director, Microsoft's Virtual Worlds Group/Social Computing Group

Breathtaking New Technologies


STANISLAS DEHAENE
Cognitive Neuropsychology Researcher, Institut National de la Santé, Paris; Author, The Number Sense

The brain's Schrödinger equation

MARY CATHERINE BATESON
Anthropologist, visiting professor Harvard Graduate School of Education; Author, Full Circles, Overlapping Lives

Making and Changing Minds


WILLIAM CALVIN
Professor, The University of Washington School of Medicine; Author, A Brain For All Seasons

Greenland changed my mind


CAROLYN PORCO
Planetary Scientist; Cassini Imaging Science Team Leader; Director CICLOPS, Boulder CO; Adjunct Professor, University of Colorado

I've changed my mind about the manner in which our future on this planet might evolve.


BRIAN GOODWIN
Biologist, Schumacher College, Devon, UK; Author, How The Leopard Changed Its Spots

I have changed my mind about the general validity of the mechanical worldview that underlies the modern scientific understanding of natural processes.


LISA RANDALL
Physicist, Harvard University; Author, Warped Passages

When I first heard about the solar neutrino puzzle, I had a little trouble taking it seriously.


NICHOLAS CARR
Author, The Big Switch

The Radiant and Infectious Web


AUBREY de GREY
Gerontologist; chairman and chief science officer of the Methuselah Foundation; author, Ending Aging

Curiosity is addictive, and this is not an entirely good thing


HELENA CRONIN
Philosopher, London School of Economics; director and founder, Darwin@LSE; author, The Ant and the Peacock

More dumbbells but more Nobels: Why men are at the top



DANIEL C. DENNETT
Philosopher; University Professor, Co-Director, Center for Cognitive Studies, Tufts University; Author, Breaking the Spell: Religion as a Natural Phenomenon

Competition in the brain


NICHOLAS A. CHRISTAKIS
Physician and social scientist, Harvard

Culture can change our genes


RUPERT SHELDRAKE
Biologist, London; Author, The Sense of Being Stared At

The skepticism of believers


CHRIS ANDERSON
Editor in Chief, Wired Magazine; Author, The Long Tail

Seeing Through a Carbon Lens


FRANK WILCZEK
Physicist, MIT; Recipient, 2004 Nobel Prize in Physics; Author, Fantastic Realities

The Science Formerly Known as Religion


PHILIP CAMPBELL
Editor-in-Chief, Nature

I've changed my mind about the use of enhancement drugs by healthy people.


TIM O'REILLY
Founder and CEO of O'Reilly Media, Inc.

I was skeptical of the term "social software"....


JAMES GEARY
Former Europe editor, Time Magazine; Author, Geary's Guide to the World's Great Aphorists

Neuroeconomics really explains human economic behavior


DANIEL GOLEMAN
Psychologist; Author, Social Intelligence

The Inexplicable Monks


ANDRIAN KREYE
Feuilleton (Arts & Ideas) Editor, Sueddeutsche Zeitung, Munich

The empirical data of journalism are no match for the bigger picture of science


DAVID BUSS
Psychologist, University of Texas, Austin; Author, The Murderer Next Door

Female Sexual Psychology


YOSSI VARDI
Chairman, International Technologies

Life experience changed my mind


SAM HARRIS
Neuroscience Researcher; Author, Letter to a Christian Nation

Mother Nature is Not Our Friend


ROBERT SHAPIRO
Chemist, New York University; Author, Planetary Dreams

Smothering Science with Silence


HANS ULRICH OBRIST
Curator, Serpentine Gallery, London

The question of objects


BRIAN ENO
Artist; Composer; Recording Producer: U2, Talking Heads, Paul Simon; Recording Artist

From Revolutionary to Evolutionary


SEIRIAN SUMNER
Research Fellow, Institute of Zoology, London

Reassessing Relatedness


PAUL EWALD
Professor of Biology, Amherst College; Author, Evolution of Infectious Disease

Trusting Experts


NICHOLAS HUMPHREY
Psychologist, London School of Economics; Author, Seeing Red

The hardness of the problem of consciousness is the key to its solution


ADAM BLY
Founder & Editor-in-Chief, Seed

Technology is Not So Bad



SUSAN BLACKMORE
Psychologist and Skeptic; Author, Consciousness: An Introduction

The Paranormal


PZ MYERS
Biologist, University of Minnesota; blogger, Pharyngula

I always change my mind about everything, and I never change my mind about anything.


GERD GIGERENZER
Psychologist; Director of the Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development in Berlin; Author, Gut Feelings

The Advent of Health Literacy


ANTON ZEILINGER
University of Vienna and Scientific Director, Institute of Quantum Optics and Quantum Information, Austrian Academy of Sciences

I used to think what I am doing is "useless"


ESTHER DYSON
Editor, Release 1.0; Trustee, Long Now Foundation; Author,
Release 2.0

What have I changed my mind about? Online privacy.


MARTIN REES
President, The Royal Society; Professor of Cosmology & Astrophysics; Master, Trinity College, University of Cambridge; Author, Our Final Century: The 50/50 Threat to Humanity's Survival

We Should Take the 'Posthuman' Era Seriously


JANNA LEVIN
Physicist, Columbia University; Author, A Madman Dreams of Turing Machines

I used to take for granted an assumption that the universe is infinite.


JARON LANIER
Computer Scientist and Musician; Columnist, Discover Magazine

Here's a happy example of me being wrong.


DIMITAR SASSELOV
Astrophysicist, Harvard

I change my mind all the time — keeping an open mind in science is a good thing.


FRANCESCO DE PRETIS
Journalist, La Stampa; Italy Correspondent, Science Magazine

A book on "What is really Science?"



ROGER HIGHFIELD
Science Editor, The Daily Telegraph; Coauthor, After Dolly

Science as faith


DANIEL ENGBER
Science editor, Slate Magazine

It's hard to perform ethical research on animals


AUSTIN DACEY
philosopher, Center for Inquiry; author, The Secular Conscience

What Matters


SIMON BARON-COHEN
Psychologist, Autism Research Centre, Cambridge University; Author, The Essential Difference

Equality


DAVID SLOAN WILSON
Biologist, Binghamton University; Author, Evolution for Everyone

I Missed the Complexity Revolution


J. CRAIG VENTER
Human Genome Decoder; Director, The J. Craig Venter Institute; Author, A Life Decoded: My Genome: My Life.

The importance of doing something now about the environment.


NEIL GERSHENFELD
Physicist, MIT; Author, FAB

I've long considered myself as working at the boundary between physical science and computer science; I now believe that that boundary is a historical accident and does not really exist.


PAUL SAFFO
Technology Forecaster

The best forecasters will be computers


ALISON GOPNIK
Psychologist, UC-Berkeley; Coauthor, The Scientist In the Crib

Imagination is Real


JORDAN POLLACK
Computer Scientist, Brandeis University

Electronic Mail



CHRIS DIBONA
Open Source Programs Manager, Google

Oversight and Programmer productivity


BEATRICE GOLOMB, MD, PhD
Associate Professor of Medicine & Associate Professor of Family and Preventive Medicine at UCSD

Reasoning from Evidence: A Call for Education

STEPHON ALEXANDER
Assistant Professor of Physics, Penn State

The Light Side of Locality


GEORGE JOHNSON
Science writer; Author, Miss Leavitt's Stars

Experimental Physics


GEOFFREY MILLER
Evolutionary Psychologist, University of New Mexico; Author, The Mating Mind

Asking for directions


STEVE CONNOR
Science Editor, The Independent in London

The 21st Century

BARRY SMITH
Philosopher, School of Advanced Study, University of London; Coeditor,
Knowing Our Own Minds

The Experience of the Normally Functioning Mind is the Exception


JESSE BERING
Director of the Institute of Cognition and Culture, Queen's University, Belfast

I Have No Destiny (and Neither Do You)


ROGER BINGHAM
Cofounder and Director, The Science Network; Neuroscience Researcher, Center for Brain and Cognition, UCSD; Coauthor, The Origin of Minds; Creator PBS Science Programs

Changing My Religion


RICHARD DAWKINS
Evolutionary Biologist, Charles Simonyi Professor For The Understanding Of Science, Oxford University; Author,
The God Delusion

A flip-flop should be no handicap



GREGORY BENFORD
Physicist, UC Irvine; Author, Deep Time

Evolving the laws of physics


LERA BORODITSKY
Cognitive Psychology & Cognitive Neuroscience, Stanford University

Do our languages shape the nuts and bolts of perception, the very way we see the world?


JAMSHED BHARUCHA
Professor of Psychology, Provost, Senior Vice President, Tufts University

Education as Stretching the Mind


DENIS DUTTON
Professor of the philosophy of art, University of Canterbury, New Zealand, editor of Philosophy and Literature and Arts & Letters Daily

The Self-Made Species


CLAY SHIRKY
Social & Technology Network Topology Researcher; Adjunct Professor, NYU Graduate School of Interactive Telecommunications Program (ITP)

Religion and Science


KAI KRAUSE
Software and Design Pioneer

Software is merely a Performance Art


LINDA S. GOTTFREDSON
Sociologist, University of Delaware; co-director of the Project for the Study of Intelligence and Society.

The Calculus of Small but Consistent Effects


RANDOLPH M. NESSE
Psychiatrist, University of Michigan; Coauthor, Why We Get Sick

Truth does not reside with smart university experts


BART KOSKO
Information Scientist, USC; Author, Noise

The Sample Mean


DAVID GELERNTER
Computer Scientist, Yale University; Chief Scientist, Mirror Worlds Technologies; Author, Drawing Life

Users Are Not Reactionary After All

LEWIS WOLPERT
Professor of Biology, University College London; Author, Six Impossible Things Before Breakfast

On Pattern Formation


RAY KURZWEIL
Inventor and Technologist; Author,
The Singularity Is Near: When Humans Transcend Biology

SETI


MARK HENDERSON
Science Editor, The Times, London

Consulting the public about science isn't always a waste of time — but consulting bioethicists often is


DAVID GOODHART
Founder & Editor, Prospect Magazine

The nation state is too big for the local things, too small for the international things and the root of most of the world's ills


W. DANIEL HILLIS
Physicist, Computer Scientist; Chairman, Applied Minds, Inc.; Author, The Pattern on the Stone

Try the Experiment Yourself


NASSIM NICHOLAS TALEB
Epistemologist of Randomness and Applied Statistician; Author, The Black Swan

The Irrelevance of "Probability"


DANIEL KAHNEMAN
Psychologist, Princeton; Recipient, 2002 Nobel Prize in Economic Sciences

The sad tale of the aspiration treadmill



MARTIN SELIGMAN
Psychologist, University of Pennsylvania; Author, Authentic Happiness

We Are Alone

If my math had been better, I would have become an astronomer rather than a psychologist. I was after the very greatest questions and finding life elsewhere in the universe seemed the greatest of them all. Understanding thinking, emotion, and mental health was second best — science for weaker minds like mine.

Carl Sagan and I were close colleagues in the late 1960's when we both taught at Cornell. I devoured his thrilling book with I.I. Shklovskii (Intelligent Life in the Universe, 1966) in one twenty-four hour sitting, and I came away convinced that intelligent life was commonplace across our galaxy.

The book, as most readers know, estimates a handful of parameters necessary to intelligent life, such as the probability that an advanced technical civilization will in short order destroy itself and the number of "sol-like" stars in the galaxy. Their conclusion is that there are between 10,000 and two million advanced technical civilizations hereabouts. Some of my happiest memories are of discussing all this with Carl, our colleagues, and our students into the wee hours of many a chill Ithaca night.

And this made the universe a less chilly place as well. What consolation! That Homo sapiens might really partake of something larger, that there really might be numerous civilizations out there populated by more intelligent beings than we are, wiser because they had outlived the dangers of premature self-destruction. What's more, we might contact them and learn from them.
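
To see the shape of such an estimate, here is a minimal back-of-the-envelope sketch in Python of a Drake-style calculation. Every parameter name and value is an illustrative assumption of mine, not a figure from Shklovskii and Sagan; the numbers are chosen only so that the product happens to fall inside the broad range the book reports.

    # A toy Drake-style estimate, in the spirit of the Shklovskii-Sagan calculation.
    # Every number below is an illustrative assumption, not a figure from the book.
    sol_like_stars      = 1e11   # assumed count of roughly Sun-like stars in the galaxy
    frac_with_planets   = 0.5    # assumed fraction with planetary systems
    frac_habitable      = 0.1    # assumed fraction of those with a habitable world
    frac_life_arises    = 0.1    # assumed fraction on which life actually appears
    frac_intelligence   = 0.01   # assumed fraction where intelligence evolves
    frac_technical      = 0.1    # assumed fraction that builds radio technology
    frac_avoids_suicide = 0.2    # assumed fraction that does not quickly destroy itself

    civilizations = (sol_like_stars * frac_with_planets * frac_habitable *
                     frac_life_arises * frac_intelligence * frac_technical *
                     frac_avoids_suicide)

    print(f"toy estimate of advanced technical civilizations: {civilizations:,.0f}")

Because the answer is a product of guesses, nudging any one factor up or down by a factor of ten moves the result by the same factor, which is why honest estimates span orders of magnitude and why the "exactly one" of this essay remains a live possibility.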

A fledgling program of listening for intelligent radio signals from out there was starting up. Homo sapiens was just taking its first balky steps off the planet; we exuberantly watched the moon landing together at the faculty club. We worked on the question of how we would respond if humans actually heard an intelligent signal. What would our first "words" be? We worked on what would be inscribed on the almost immortal Voyager plaque that would leave our solar system just about now — allowing the sentient beings who cadged it epochs hence to surmise who we were, where we were, when we were, and what we were. (Should the man and woman be holding hands? No, they might think we were one conjoined organism.)

SETI (the Search for Extraterrestrial Intelligence) and its forerunners are almost forty years old. They scan the heavens for intelligent radio signals, with three million participants using their home computers to analyze the input. The result has been zilch. There are plenty of excuses for zilch, however, and lots of reason to hope: only a small fraction of the sky has been scanned, and larger, more efficient arrays are coming on line. Maybe really advanced civilizations don't use communication techniques that produce waves we can pick up.

Maybe intelligent life is so unimaginably different from us that we are looking in all the wrong "places." Maybe really intelligent life forms hide their presence.

So I changed my mind. I now take the null hypothesis very seriously: that Sagan and Shklovskii were wrong, and that the number of advanced technical civilizations in our galaxy is exactly one, that the number of advanced technical civilizations in the universe is exactly one.

What is the implication of the possibility, mounting a bit every day, that we are alone in the universe? It reverses the millennial progression from a geocentric to a heliocentric to a Milky Way-centered universe, back to, of all things, a geocentric universe. We are the solitary point of light in a darkness without end. It means that we are precious, infinitely so. It means that nuclear or environmental cataclysm is an infinitely worse fate than we thought.

It means that we have a job to do, a mission that will last all our ages to come: to seed and then to shepherd intelligent life beyond this pale blue dot.


JOSEPH LEDOUX
Neuroscientist, New York University; Author, The Synaptic Self

Like many scientists in the field of memory, I used to think that a memory is something stored in the brain and then accessed when used. Then, in 2000, a researcher in my lab, Karim Nader, did an experiment that convinced me, and many others, that our usual way of thinking was wrong. In a nutshell, what Karim showed was that each time a memory is used, it has to be restored as a new memory in order to be accessible later. The old memory is either not there or is inaccessible. In short, your memory about something is only as good as your last memory about it. This is why people who witness crimes testify about what they read in the paper rather than what they witnessed. Research on this topic, called reconsolidation, has become the basis of a possible treatment for post-traumatic stress disorder, drug addiction, and any other disorder that is based on learning.

That Karim's study changed my mind is clear from the fact that I told him, when he proposed to do the study, that it was a waste of time. I'm not swayed by arguments based on faith; I can be moved by good logic; but I am always swayed by a good experiment, even if it goes against my scientific beliefs. I might not give up on a scientific belief after one experiment, but when the evidence mounts over multiple studies, I change my mind.


KARL SABBAGH
Writer and Television Producer; Author, The Riemann Hypothesis

I used to believe that there were experts and non-experts and that, on the whole, the judgment of experts is more accurate, more valid, and more correct than my own judgment. But over the years, thinking — and, I should add, experience — has changed my mind. What experts have that I don't is knowledge and experience in some specialized area. What, as a class, they don't have any more than I do are the skills of judgment, rational thinking and wisdom. And I've come to believe that some highly 'qualified' people have less of that than I do.

I now believe that the people I know who are wise are not necessarily knowledgeable; the people I know who are knowledgeable are not necessarily wise. Most of us confuse expertise with judgment. Even in politics, where the only qualities politicians have that the rest of us lack are knowledge of the procedures of parliament or congress, and of how government works, occasionally combined with specific knowledge of economics or foreign affairs, we tend to look to such people for wisdom and decision-making of a high order.

Many people enroll in MBAs to become more successful businessmen. An article in Fortune magazine a couple of years ago compared the academic qualifications of people in business and found that the qualification that correlated most highly with success was a philosophy degree. When I ran a television production company and was approached for a job by budding directors or producers, I never employed anyone with a degree in media studies. But I did employ lots of intelligent people with good judgment who knew nothing about television to start with but could make good decisions. The results justified that approach.

Scientists — with a few eccentric exceptions — are, perhaps, the one group of experts who have never claimed for themselves wisdom outside the narrow confines of their specialties. Paradoxically, they are the one group who are blamed for the mistakes of others. Science and scientists are criticized for judgments about weapons, stem cells, global warming, nuclear power, when the decisions are made by people who are not scientists.

As a result of changing my mind about this, I now view the judgments of others, however distinguished or expert they are, as no more valid than my own. If someone who is a 'specialist' in the field disagrees with me about a book idea, the solution to the Middle East problems, the non-existence of the paranormal or nuclear power, I am now entirely comfortable with the disagreement because I know I'm just as likely to be right as they are.


DOUGLAS RUSHKOFF
Media Analyst; Documentary Writer; Author, Get Back in the Box: Innovation from the Inside Out

The Internet

I thought that it would change people. I thought it would allow us to build a new world through which we could model new behaviors, values, and relationships. In the 90's, I thought the experience of going online for the first time would change a person's consciousness as much as if they had dropped acid in the 60's.

I thought Amazon.com was a ridiculous idea, and that the Internet would shrug off business as easily as it did its original Defense Department minders.

For now, at least, it's turned out to be different.

Virtual worlds like Second Life have been reduced to market opportunities: advertisers from banks to soft drinks purchase space and create fake characters, while kids (and Chinese digital sweatshop laborers) earn "play money" in the game only to sell it to lazier players on eBay for real cash.

The businesspeople running Facebook and MySpace are rivaled only by the members of these online "communities" in their willingness to surrender their identities and ideals for a buck, a click-through, or a better market valuation.

The open source ethos has been reinterpreted through the lens of corporatism as "crowd sourcing" — meaning just another way to get people to do work for no compensation. And even "file-sharing" has been reduced to a frenzy of acquisition that has less to do with music than it does with the ever-expanding hard drives of successive iPods.

Sadly, cyberspace has become just another place to do business. The question is no longer how browsing the Internet changes the way we look at the world; it's which browser we'll be using to buy and sell stuff in the same old world.


PIET HUT
Professor of Astrophysics, Institute for Advanced Study, Princeton



Explanations

I used to pride myself on the fact that I could explain almost anything to anyone, on a simple enough level, using analogies. No matter how abstract an idea in physics may be, there always seems to be some way in which we can get at least some part of the idea across. If colleagues shrugged and said, oh, well, that idea is too complicated or too abstract to be explained in simple terms, I thought they were either lazy or not very skilled in thinking creatively around a problem. I could not imagine a form of knowledge that could not be communicated in some limited but valid approximation or other.

However, I've changed my mind, in what was for me a rather unexpected way. I still think I was right in thinking that any type of insight can be summarized to some degree, in what is clearly a correct first approximation when judged by someone who shares in the insight. For a long time my mistake was that I had not realized how totally wrong this first approximation can come across for someone who does not share the original insight.

Quantum mechanics offers a striking example. When someone hears that there is a limit on how accurately you can simultaneously measure various properties of an object, it is tempting to think that the limitations lie in the measuring procedure, and that the object itself somehow can be held to have exact values for each of those properties, even if they cannot be measured. Surprisingly, that interpretation is wrong: John Bell showed that such a 'hidden variables' picture is actually in clear disagreement with quantum mechanics. An initial attempt at explaining the measurement problem in quantum mechanics can be more misleading than not saying anything at all.

So for each insight there is at least some explanation possible, but the same explanation may then be given for radically different insights. There is nothing that cannot be explained, but there are wrong insights that can lead to explanations that are identical to the explanation for a correct but rather subtle insight.


HOWARD GARDNER
Psychologist, Harvard University; Author, Changing Minds

Wrestling with Jean Piaget, my Paragon

Like many other college students, I turned to the study of psychology for personal reasons. I wanted to understand myself better. And so I read the works of Freud; and I was privileged to have as my undergraduate tutor the psychoanalyst Erik Erikson, himself a sometime pupil of Freud. But once I learned about new trends in psychology, through contact with another mentor, Jerome Bruner, I turned my attention to the operation of the mind in a cognitive sense — and I've remained at that post ever since.

The giant at the time — the middle 1960s — was Jean Piaget. Though I met and interviewed him a few times, Piaget really functioned for me as a paragon. In Dean Keith Simonton's term, a paragon is someone whom one does not know personally but who serves as a virtual teacher and point of reference. I thought that Piaget had identified the most important question in cognitive psychology — how the mind develops; developed brilliant methods of observation and experimentation; and put forth a convincing picture of development — a set of general cognitive operations that unfold in the course of essentially lockstep, universally occurring stages. I wrote my first books about Piaget; saw myself as carrying on the Piagetian tradition in my own studies of artistic and symbolic development (two areas that he had not focused on); and even defended Piaget vigorously in print against those who would critique his approach and claims.

Yet, now forty years later, I have come to realize that the bulk of my scholarly career has been a critique of the principal claims that Piaget put forth. As to the specifics of how I changed my mind:

Piaget believed in general stages of development that cut across contents (Space, time, number); I now believe that each area of content has its own rules and operations and I am dubious about the existence of general stages and structures.

Piaget believed that intelligence was a single general capacity that developed pretty much in the same way across individuals; I now believe that humans possess a number of relatively independent intelligences, and these can function and interact in idiosyncratic ways.

Piaget was not interested in individual differences; he studied the 'epistemic subject.' Most of my work has focused on individual differences, with particular attention to those with special talents or deficits, and unusual profiles of abilities and disabilities.

Piaget assumed that the newborn had a few basic biological capacities — like sucking and looking — and two major processes of acquiring knowledge, that he called assimilation and accommodation. Nowadays, with many others, I assume that human beings possess considerable innate or easily elicited cognitive capacities, and that Piaget way underestimated the power of this inborn cognitive architecture.

Piaget downplayed the importance of historical and cultural factors — cognitive development consisted of the growing child experimenting largely on his own with the physical (and, minimally, the social) world. I see development as permeated from the first by contingent forces pervading the time and place of origin.

Finally, Piaget saw language and other symbol systems (graphic, musical, bodily, etc.) as manifestations, almost epiphenomena, of a single cognitive motor; I see each of these systems as having its own origins and being heavily colored by the particular uses to which a system is put in one's own culture and one's own time.

Why I changed my mind is an issue principally of biography: some of the change has to do with my own choices (I worked for 20 years with brain damaged patients); and some with the Zeitgeist (I was strongly influenced by the ideas of Noam Chomsky and Jerry Fodor, on the one hand, and by empirical discoveries in psychology and biology on the other).

Still, I consider Piaget to be the giant of the field. He raised the right questions; he developed exquisite methods; and his observations of phenomena have turned out to be robust. It's a tribute to Piaget that we continue to ponder these questions, even as many of us are now far more critical than we once were. Any serious scientist or scholar will change his or her mind; put differently, we will come to agree with those with whom we used to disagree, and vice versa. We differ in whether we are open or secretive about such "changes of mind," and in whether we choose to attack, ignore, or continue to celebrate those with whose views we are no longer in agreement.


DONALD HOFFMAN
Cognitive Scientist, UC, Irvine; Author, Visual Intelligence



Veridical Perception

I have changed my mind about the nature of perception. I thought that a goal of perception is to estimate properties of an objective physical world, and that perception is useful precisely to the extent that its estimates are veridical. After all, incorrect perceptions beget incorrect actions, and incorrect actions beget fewer offspring than correct actions. Hence, on evolutionary grounds, veridical perceptions should proliferate.

Although the image at the eye, for instance, contains insufficient information by itself to recover the true state of the world, natural selection has built into the visual system the correct prior assumptions about the world, and about how it projects onto our retinas, so that our visual estimates are, in general, veridical. And we can verify that this is the case, by deducing those prior assumptions from psychological experiments, and comparing them with the world. Vision scientists are now succeeding in this enterprise. But we need not wait for their final report to conclude with confidence that perception is veridical. All we need is the obvious rhetorical question: Of what possible use is non-veridical perception?

I now think that perception is useful because it is not veridical. The argument that evolution favors veridical perceptions is wrong, both theoretically and empirically. It is wrong in theory, because natural selection hinges on reproductive fitness, not on truth, and the two are not the same: Reproductive fitness in a particular niche might, for instance, be enhanced by reducing expenditures of time and energy in perception; true perceptions, in consequence, might be less fit than niche-specific shortcuts. It is wrong empirically: mimicry, camouflage, mating errors and supernormal stimuli are ubiquitous in nature, and all are predicated on non-veridical perceptions. The cockroach, we suspect, sees little of the truth, but is quite fit, though easily fooled, with its niche-specific perceptual hacks. Moreover, computational simulations based on evolutionary game theory, in which virtual animals that perceive the truth compete with others that sacrifice truth for speed and energy-efficiency, find that true perception generally goes extinct.
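
To give the flavor of such simulations, here is a minimal toy sketch in Python. It is not Hoffman's actual model, and every payoff and cost is a made-up parameter: "truth" foragers always see which of two patches is better but pay an assumed cost for the detail, while "interface" foragers rely on a cheap summary that occasionally errs.

    import random

    GENERATIONS = 200
    POP = 1000
    PERCEPTION_COST = 0.15    # assumed extra time/energy cost of veridical perception
    SUMMARY_ERROR = 0.05      # assumed chance the coarse summary picks the worse patch

    def forage(strategy):
        """One foraging bout: choose between two patches of random quality."""
        patches = [random.random(), random.random()]
        if strategy == "truth":
            return max(patches) - PERCEPTION_COST     # always right, but pays for the detail
        best, worst = max(patches), min(patches)
        return worst if random.random() < SUMMARY_ERROR else best

    population = ["truth"] * (POP // 2) + ["interface"] * (POP // 2)
    for _ in range(GENERATIONS):
        # Fitness-proportional reproduction (weights must be positive, hence the clamp).
        fitnesses = [max(forage(agent), 0.001) for agent in population]
        population = random.choices(population, weights=fitnesses, k=POP)

    print("truth agents remaining:", population.count("truth"), "of", POP)

Run it a few times: under these assumed costs the "truth" strategy typically dwindles toward zero, which is the qualitative point of the results Hoffman cites, not a quantitative claim about them.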

It used to be hard to imagine how perceptions could possibly be useful if they were not true. Now, thanks to technology, we have a metaphor that makes it clear — the windows interface of the personal computer. This interface sports colorful geometric icons on a two-dimensional screen. The colors, shapes and positions of the icons on the screen are not true depictions of what they represent inside the computer. And that is why the interface is useful. It hides the complexity of the diodes, resistors, voltages and magnetic fields inside the computer. It allows us to effectively interact with the truth because it hides the truth.

It has not been easy for me to change my mind about the nature of perception. The culprit, I think, is natural selection. I have been shaped by it to take my perceptions seriously. After all, those of our predecessors who did not, for instance, take their tiger or viper or cliff perceptions seriously had less chance of becoming our ancestors. It is apparently a small step, though not a logical one, from taking perception seriously to taking it literally.

Unfortunately our ancestors faced no selective pressures that would prevent them from conflating the serious with the literal: One who takes the cliff both seriously and literally avoids harm just as much as one who takes the cliff seriously but not literally. Hence our collective history of believing in a flat earth, geocentric cosmology, and veridical perception. I should very much like to join Samuel Johnson in rejecting the claim that perception is not veridical, by kicking a stone and exclaiming "I refute it thus." But even as my foot ached from the ill-advised kick, I would still harbor the skeptical thought, "Yes, you should have taken that rock more seriously, but should you take it literally?"


MICHAEL SHERMER
Publisher of Skeptic magazine, monthly columnist for Scientific American; Author, Why Darwin Matters

The Nature of Human Nature

When I was a graduate student in experimental psychology I cut my teeth in a Skinnerian behavioral laboratory. As a behaviorist I believed that human nature was largely a blank slate on which we could impose positive and negative reinforcements (and punishments if necessary) to shape people and society into almost anything we want. As a young college professor I taught psychology from this perspective and even created a new course on the history and psychology of war, in which I argued that people are by nature peaceful and nonviolent, and that wars were thus a byproduct of corrupt governments and misguided societies.

The data from evolutionary psychology have now convinced me that we evolved a dual set of moral sentiments: within groups we tend to be pro-social and cooperative, but between groups we are tribal and xenophobic. Archaeological evidence indicates that Paleolithic humans were anything but noble savages, and that civilization has gradually but ineluctably reduced the amount of within-group aggression and between-group violence. And behavior genetics has erased the tabula rasa and replaced it with a highly constrained biological template upon which the environment can act.

I have thus changed my mind about this theory of human nature in its extreme form. Human nature is more evolutionarily determined, more cognitively irrational, and more morally complex than I thought.


JAMES O'DONNELL
Classicist; Cultural Historian; Provost, Georgetown University; Author, Augustine: A New Biography

I stopped cheering for the Romans

Sometimes the later Roman empire seems very long ago and far away, but at other times, when we explore Edward Gibbon's famous claim to have described the triumph of "barbarism and religion", it can seem as fresh as next week. And we always know that we're supposed to root for the Romans. When I began my career as a historian thirty years ago, I was all in favor of those who were fighting to preserve the old order. "I'd rather be Belisarius than Stilicho," I said to my classes often enough that they heard it as a mantra of my attitude — preferring the empire-restoring Roman general of the sixth century to the barbarian general who served Rome and sought compromise and adjustment with neighbors in the fourth.

But a career as a historian means growth, development, and change. I did what the historian — as much a scientist as any biochemist, as the German use of the word Wissenschaft for what both practice attests — should do: I studied the primary evidence, I listened to and participated in the debates of the scholars. I had moments when a new book blew me away, and others when I read the incisive critique of the book that had blown me away and thought through the issues again. I've been back and forth over a range of about four centuries of late Roman history many times now, looking at events, people, ideas, and evidence in different lights and moods.

What I have found is that the closer historical examination comes to the lived moment of the past, the harder it is to take sides with anybody. And it is a real fact that the ancient past (I'm talking now about the period from 300-600 CE) draws closer and closer to us all the time. There is a surprisingly large body of material that survives and really only a handful of hardy scholars sorting through it. Much remains to be done: The sophist Libanius of Antioch in the late fourth century, partisan for the renegade 'pagan' emperor Julian, left behind a ton of personal letters and essays that few have read and of which only a handful have been translated, so only a few scholars have really worked through his career and thought — but I'd love to read, and even more dearly love to write, a good book about him someday. In addition to the books, there is a growing body of archaeological evidence as diggers fan out across the Mediterranean, Near East, and Europe, and we are beginning to see new kinds of quantitative evidence as well — climate change measured from tree-ring dating, even genetic analysis that suggests that my O'Donnell ancestors came from one of the most seriously inbred populations (Ireland) on the planet — and right now the argument is going on about the genetic evidence for the size of the Anglo-Saxon migrations to Britain. We know more than we ever did, and we are learning more all the time, and with each decade, we get closer and closer to even the remote past.

When you do that, you find that the past is more a tissue of  choices and chances than we had imagined, that fifty or a hundred years of bad times can happen — and can end and be replaced by the united work of people with heads and hearts that makes society peaceful and prosperous again; or the opportunity can be kicked away. 

And we should remember that when we root for the Romans, there are contradictory impulses at work.  Rome brought the ancient world a secure environment (Pompey cleaning up the pirates in the Mediterranean was a real service), a standard currency, and a huge free trade zone.  Its taxes were heavy, but the wealth it taxed so immense that it could support a huge bureaucracy for a long time without damaging local prosperity.  Fine:  but it was an empire by conquest, ruled as a military dictatorship, fundamentally dependent on a slave economy, and with no clue whatever about the realities of economic development and management.  A prosperous emperor was one who managed by conquest or taxation to bring a flood of wealth into the capital city and squander it as ostentatiously as possible.  Rome "fell", if that's the right word for it, partly because it ran out of ideas for new peoples to plunder, and fell into a funk of outrage at the thought that some of the neighboring peoples preferred to move inside the empire's borders, settle down, buy fixer-upper houses, send their kids to the local schools, and generally enjoy the benefits of civilization.  (The real barbarians stayed outside.)  Much of the worst damage to Rome was done by Roman emperors and armies thrashing about, thinking they were preserving what they were in fact destroying.

So now I have a new mantra for my students:  "two hundred years is a long time."  When we talk about Shakespeare's time or the Crusades or the Roman Empire or the ancient Israelites, it's all too easy to talk about centuries as objects, a habit we bring even closer to our own time, but real human beings live in the short window of a generation, and with ancient lifespans shorter than our own, that window was brief.  We need to understand and respect just how much possibility was there and how much accomplishment was achieved if we are to understand as well the opportunities that were squandered.  Learning to do that, learning to sift the finest grains of evidence with care, learning to learn from and debate with others — that's how history gets done. 

The excitement begins when you discover that the past is constantly changing.


COLIN TUDGE
Science Writer; Author, The Tree: A Natural History of What Trees Are, How They Live, and Why They Matter

The Omniscience and Omnipotence of Science

I have changed my mind about the omniscience and omnipotence of science. I now realize that science is strictly limited, and that it is extremely dangerous not to appreciate this.

Science proceeds in general by being reductionist. This term is used in different ways in different contexts but here I take it to mean that scientists begin by observing a world that seems infinitely complex and inchoate, and in order to make sense of it they first "reduce" it to a series of bite-sized problems, each of which can then be made the subject of testable hypotheses which, as far as possible, take mathematical form.

Fair enough. The approach is obviously powerful, and it is hard to see how solid progress of a factual kind could be made in any other way. It produces answers of the kind known as "robust". "Robust" does not of course mean "unequivocally true" and still less does it meet the lawyers' criteria — "the whole truth, and nothing but the truth". But robustness is pretty good; certainly good enough to be going on with.

The limitation is obvious, however. Scientists produce robust answers only because they take great care to tailor the questions. As Sir Peter Medawar said, "Science is the art of the soluble" (within the time and with the tools available). 

Clearly it is a huge mistake to assume that what is soluble is all there is — but some scientists make this mistake routinely.

Or to put the matter another way: they tend conveniently to forget that they arrived at their "robust" conclusions by ignoring as a matter of strategy all the complexities of a kind that seemed inconvenient. But all too often, scientists then are apt to extrapolate from the conclusions they have drawn from their strategically simplified view of the world, to the whole, real world.

Two examples of a quite different kind will suffice: 

1: In the 19th century the study of animal psychology was a mess. On the one hand we had some studies of nerve function by a few physiologists, and on the other we had reams of wondrous but intractable natural history which George Romanes in particular tried to put into some kind of order. But there was nothing much in between. The behaviourists of the 20th century did much to sort out the mess by focusing on the one manifestation of animal psychology that is directly observable and measurable — their behaviour.

Fair enough. But when I was at university in the early 1960s, behaviourism ruled everything. Concepts such as "mind" and "consciousness" were banished. B. F. Skinner even tried to explain the human acquisition of language in terms of his "operant conditioning".

Since then the behaviourist agenda has largely been put in its place. Its methods are still useful (still helping to provide "robust" results) but discussions now are far broader. "Consciousness", "feeling", even "mind" are back on the agenda.

Of course you can argue that in this instance science proved itself to be self-correcting — although this historically is not quite true. Noam Chomsky, not generally recognized as a scientist, did much to dent behaviourist confidence through his own analysis of language.

But for decades the confident assertions of the behaviourists ruled and, I reckon, they were in many ways immensely damaging. In particular they reinforced the Cartesian notion that animals are mere machines, and can be treated as such. Animals such as chimpanzees were routinely regarded simply as useful physiological "models" of human beings who could be more readily abused than humans can. Jane Goodall in particular provided the corrective to this — but she had difficulty getting published at first precisely because she refused to toe the hard-nosed Cartesian (behaviourist-inspired) line. The causes of animal welfare and conservation are still bedeviled by the attitude that animals are simply "machines" and by the crude belief that modern science has "proved" that this is so.

2: In the matter of GMOs we are seeing the crude simplifications still in their uncorrected form. By genetic engineering it is possible (sometimes) to increase crop yield. Other things being equal, high yields are better than low yields. Ergo (the argument goes) GMOs must be good and anyone who says differently must be a fool (unable to understand the science) or wicked (some kind of elitist, trying to hold the peasants back).

But anyone who knows anything about farming in the real world (as opposed to the cosseted experimental fields of the English home counties and of California) knows that yield is by no means the be-all and end-all. Inter alia, high yields require high inputs of resources and capital — the very things that are often lacking. Yield typically matters far less than long-term security — acceptable yields in bad years rather than bumper yields in the best conditions. Security requires individual toughness and variety — neither of which necessarily correlates with super-crop status. In a time of climate change, resilience is obviously of paramount importance — but this is not, alas, obvious to the people who make policy. Bumper crops in good years cause glut — unless the market is regulated; and glut in the current economic climate (though not necessarily in the real world of the US and the EU) depresses prices and puts farmers out of work.

Eventually the penny may drop — that the benison of the trial plot over a few years cannot necessarily be transferred to real farms in the world as a whole. But by that time the traditional crops that could have carried humanity through will be gone, and the people who know how to farm them will be living and dying in urban slums (which, says the UN, are now home to a billion people).

Behind all this nonsense and horror lies the simplistic belief, of a lot of scientists (though by no means all, to be fair) and politicians and captains of industry, that science understands all (ie is omniscient, or soon will be) and that its high technologies can dig us out of any hole we may dig ourselves into (ie is omnipotent).

Absolutely not.


IRENE PEPPERBERG
Research Associate, Psychology, Harvard University; Author, The Alex Studies

The Fallacy of Hypothesis Testing

I've begun to rethink the way we teach students to engage in scientific research. I was trained, as a chemist, to use the classic scientific method: Devise a testable hypothesis, and then design an experiment to see if the hypothesis is correct or not. And I was told that this method is equally valid for the social sciences. I've changed my mind that this is the best way to do science. I have three reasons for this change of mind.

First, and probably most importantly, I've learned that one often needs simply to sit and observe and learn about one's subject before even attempting to devise a testable hypothesis. What are the physical capacities of the subject? What is the social and ecological structure in which it lives? Does some anecdotal evidence suggest the form that the hypothesis should take? Few granting agencies are willing to provide support for this step, but it is critical to the scientific process, particularly for truly innovative research. Often, a proposal to gain observational experience is dismissed as being a "fishing expedition"…but how can one devise a workable hypothesis to test without first acquiring basic knowledge of the system, and how better to obtain such basic knowledge than to observe the system without any preconceived notions?

Second, I've learned that truly interesting questions really often can't be reduced to a simple testable hypothesis, at least not without being somewhat absurd. "Can a parrot label objects?" may be a testable hypothesis, but actually isn't very interesting…what is interesting, for example, is how that labeling compares to the behavior of a young child, exactly what type of training might enable such learning and what type of training is useless, how far can such labeling transfer across exemplars, and….Well, you get the picture…the exciting part is a series of interrelated questions that arise and expand almost indefinitely.

Third, I've learned that the scientific community's emphasis on hypothesis-based research leads too many scientists to devise experiments to prove, rather than test, their hypotheses. Many journal submissions lack any discussion of alternative competing hypotheses: Researchers don't seem to realize that collecting data that are consistent with their original hypothesis doesn't mean that it is unconditionally true. Alternatively, they buy into the fallacy that absence of evidence for something is always evidence of its absence.

I'm all for rigor in scientific research — but let's emphasize the gathering of knowledge rather than the proving of a point.


MARCELO GLEISER
Physicist, Dartmouth College; Author, The Prophet and the Astronomer

To Unify or Not: That is the Question

I grew up infused with the idea of unification. It came first from religion, from my Jewish background. God was all over, was all-powerful, and had a knack for interfering with human affairs, at least in the Old Testament. He then appeared to have decided to be a bit shyer, sending a Son instead, and only revealing Himself through visions and prophecies. Needless to say, when, as a teenager, I started to get interested in science, this vision of an all-pervading God, with its stories of floods, commandments and plagues, started to look very suspect. I turned to physics, idolizing Einstein and his science; here was a Jew who saw further, who found a way of translating this old monotheistic tradition into the universal language of science.

As I started my research career, I had absolutely no doubt that I wanted to become a theoretical physicist working on particle physics and cosmology. Why the choice? Simple: it was the joining of the two worlds, of the very large and the very small, that offered the best hope for finding a unified theory of all Nature, one that brought together matter and forces into a single magnificent formulation, the final Platonist triumph. This was what Einstein tried to do for the last three decades of his life, although in his day it was more a search for unifying only half of the forces of Nature, gravity and electromagnetism.

I wrote dozens of papers related to the subject of unification; even my Ph.D. dissertation was on the topic. I was fascinated by the modern approaches to the idea: supersymmetry, superstrings, a space with extra, hidden dimensions. A part of me still is. But then, a few years ago, something snapped. It was probably brought about by a combination of factors, among them a deeper understanding of the historical and cultural processes that shape scientific ideas. I started to doubt unification, finding it to be the scientific equivalent of a monotheistic formulation of reality, a search for God revealed in equations. Of course, had we the slightest experimental evidence in favor of unification, of supersymmetry and superstrings, I'd be the first popping the champagne open. But it's been over twenty years, and all attempts so far have failed. Nothing in particle accelerators, nothing in cryogenic dark matter detectors, no magnetic monopoles, no proton decay, all tell-tale signs of unification predicted over the years. Even our wonderful Standard Model of particle physics, where we formulate the unification of electromagnetism and the weak nuclear interactions, is not really a true unification: the theory retains information from both interactions in the form of their strengths or, in more technical jargon, of their coupling constants. A true unification should have a single coupling constant, a single interaction.

All of my recent anti-unification convictions can crumble during the next few years, after our big new machine, the Large Hadron Collider, is turned on. Many colleagues hope that supersymmetry will finally show its face. Others even bet on possible signs of extra dimensions revealed. However, I have a feeling things won't turn out so nicely. The model of unification, which is so aesthetically appealing, may be simply this, an aesthetically appealing description of Nature, which, unfortunately, doesn't correspond to physical reality. Nature doesn't share our myths. The stakes are high indeed. But being a mild agnostic, I don't believe until there is evidence. And then, there is no need to believe any longer, which is precisely the beauty of science.


FREEMAN DYSON
Physicist, Institute for Advanced Study; Author, A Many Colored Glass

When facts change your mind, that's not always science. It may be history. I changed my mind about an important historical question: did the nuclear bombings of Hiroshima and Nagasaki bring World War Two to an end? Until this year I used to say, perhaps. Now, because of new facts, I say no. This question is important, because the myth of the nuclear bombs bringing the war to an end is widely believed. To demolish this myth may be a useful first step toward ridding the world of nuclear weapons.

Until the last few years, the best summary of evidence concerning this question was a book, "Japan's Decision to Surrender", by Robert Butow, published in 1954. Butow interviewed the surviving Japanese leaders who had been directly involved in the decision. He asked them whether Japan would have surrendered if the nuclear bombs had not been dropped. His conclusion, "The Japanese leaders themselves do not know the answer to that question, and if they cannot answer it, neither can I". Until recently, I believed what the Japanese leaders said to Butow, and I concluded that the answer to the question was unknowable.

Facts causing me to change my mind were brought to my attention by Ward Wilson. Wilson summarized the facts in an article, "The Winning Weapon? Rethinking Nuclear Weapons in the Light of Hiroshima", in the Spring 2007 issue of the magazine, "International Security". He gives references to primary source documents and to analyses published by other historians, in particular by Robert Pape and Tsuyoshi Hasegawa. The facts are as follows:

1. Members of the Supreme Council, which customarily met with the Emperor to take important decisions, learned of the nuclear bombing of Hiroshima on the morning of August 6, 1945. Although Foreign Minister Togo asked for a meeting, no meeting was held for three days.

2. A surviving diary records a conversation of Navy Minister Yonai, who was a member of the Supreme Council, with his deputy on August 8. The Hiroshima bombing is mentioned only incidentally. More attention is given to the fact that the rice ration in Tokyo is to be reduced by ten percent.

3. On the morning of August 9, Soviet troops invaded Manchuria. Six hours after hearing this news, the Supreme Council was in session. News of the Nagasaki bombing, which happened the same morning, only reached the Council after the session started.

4. The August 9 session of the Supreme Council resulted in the decision to surrender.

5. The Emperor, in his rescript to the military forces ordering their surrender, does not mention the nuclear bombs but emphasizes the historical analogy between the situation in 1945 and the situation at the end of the Sino-Japanese war in 1895. In 1895 Japan had defeated China, but accepted a humiliating peace when European powers led by Russia moved into Manchuria and the Russians occupied Port Arthur. By making peace, the emperor Meiji had kept the Russians out of Japan. Emperor Hirohito had this analogy in his mind when he ordered the surrender.

6. The Japanese leaders had two good reasons for lying when they spoke to Robert Butow. The first reason was explained afterwards by Lord Privy Seal Kido, another member of the Supreme Council: "If military leaders could convince themselves that they were defeated by the power of science but not by lack of spiritual power or strategic errors, they could save face to some extent". The second reason was that they were telling the Americans what the Americans wanted to hear, and the Americans did not want to hear that the Soviet invasion of Manchuria brought the war to an end.

In addition to the myth of two nuclear bombs bringing the war to an end, there are other myths that need to be demolished. There is the myth that, if Hitler had acquired nuclear weapons before we did, he could have used them to conquer the world. There is the myth that the invention of the hydrogen bomb changed the nature of nuclear warfare. There is the myth that international agreements to abolish weapons without perfect verification are worthless. All these myths are false. After they are demolished, dramatic moves toward a world without nuclear weapons may become possible.


ED REGIS
Science Writer; Author, Nano

Predicting the Future

I used to think you could predict the future.  In "Profiles of the Future," Arthur C. Clarke made it seem so easy.  And so did all those other experts who confidently predicted the paperless office, the artificial intelligentsia who for decades predicted "human equivalence in ten years," the nanotechnology prophets who kept foreseeing major advances toward molecular manufacturing within fifteen years, and so on. 

Mostly, the predictions of science and technology types were wonderful: space colonies, flying cars in everyone's garage, the conquest (or even reversal) of aging.  (There were of course the doomsayers, too, such as the population-bomb theorists who said the world would run out of food by the turn of the century.) 

But at last, after watching all those forecasts not come true, and in fact become falsified in a crashing, breathtaking manner, I began to question the entire business of making predictions.  I mean, if even Nobel prizewinning scientists such as Ernest Rutherford, who gave us essentially the modern concept of the nuclear atom, could say, as he did in 1933, that "We cannot control atomic energy to an extent which would be of any value commercially, and I believe we are not likely ever to be able to do so," and be so spectacularly wrong about it, what hope was there for the rest of us? 

And then I finally decided that I knew the source of this incredible mismatch between confident forecast and actual result.  The universe is a complex system in which countless causal chains are acting and interacting independently and simultaneously (the ultimate nature of some of them unknown to science even today).  There are in fact so many causal sequences and forces at work, all of them running in parallel, and each of them often affecting the course of the others, that it is hopeless to try to specify in advance what's going to happen as they jointly work themselves out.  In the face of that complexity, it becomes difficult if not impossible to know with any assurance the future state of the system except in those comparatively few cases in which the system is governed by ironclad laws of nature such as those that allow us to predict the  phases of the moon, the tides, or the position of Jupiter in tomorrow night's sky.  Otherwise, forget it. 

Further, it's an illusion to think that supercomputer modeling is up to the task of truly reliable crystal-ball gazing.  It isn't.  Witness the epidemiologists who predicted that last year's influenza season would be severe (in fact it was mild); the professional hurricane-forecasters whose models told them that the last two hurricane seasons would be monsters (whereas instead they were wimps).  Certain systems in nature, it seems, are computationally irreducible phenomena, meaning that there is no way of knowing the outcome short of waiting for it to happen. 

Formerly, when I heard or read a prediction, I believed it.  Nowadays I just roll my eyes, shake my head, and turn the page. 


DAVID BRIN
Physicist; Technical Consultant; Science Fiction Writer; Author, The Transparent Society

Sometimes you are glad to discover you were wrong. My best example of that kind of pleasant surprise is India. I'm delighted to see its recent rise, on (tentative) course toward economic, intellectual and social success. If these trends continue, it will matter a lot to Earth civilization, as a whole. The factors that fostered this trend appear to have been atypical — at least according to common preconceptions like "west and east" or "right vs left." I learned a lesson, about questioning my assumptions. 

Alas, there have been darker surprises. The biggest example has been America's slide into what could be diagnosed as bona fide Future Shock. 

Alvin Toffler appears to have sussed it. Back in 1999, while we were fretting over a silly "Y2K Bug" in ancient COBOL code, something else happened, at a deeper level. Our weird governance issues are only surface symptoms of what may have been a culture-wide crisis of confidence, upon the arrival of that "2" in the millennium column. Yes, people seemed to take the shift complacently, going about their business. But underneath all the blithe shrugs, millions have turned their backs upon the future, even as a topic of discussion or interest.

Other than the tenacious grip of Culture War, what evidence can I offer? Well, in my own fields, let me point to a decline in the futurist-punditry industry. (A recent turnaround offers hope.) And a plummet in the popularity of science fiction literature (as opposed to feudal-retro fantasy.) John B. has already shown us how little draw science books offer, in the public imagination — an observation that not only matches my own, but also reflects the anti-modernist fervor displayed by all dogmatic movements. 

One casualty: the assertive, pragmatic approach to negotiation and human-wrought progress that used to be mother's milk to this civilization. 

Yes, there were initial signs of all this, even in the 1990s. But the extent of future-anomie and distaste for science took me completely by surprise. It makes me wonder why Toffler gets mentioned so seldom. 

Let me close with a final surprise, that's more of a disappointment. 

I certainly expected that, by now, online tools for conversation, work, collaboration and discourse would have become far more useful, sophisticated and effective than they currently are. I know I'm pretty well alone here, but all the glossy avatars and video and social network sites conceal a trivialization of interaction, dragging it down to the level of single-sentence grunts, flirtation and ROTFL [rolling on the floor laughing], at a time when we need discussion and argument to be more effective than ever. 

Indeed, most adults won't have anything to do with all the wondrous gloss that fills the synchronous online world, preferring by far the older, asynchronous modes, like web sites, email, downloads etc. 

This isn't grouchy old-fart testiness toward the new. In fact, there are dozens of discourse-elevating tools just waiting out there to be born. Everybody is still banging rocks together, while bragging about the colors. Meanwhile, half of the tricks that human beings normally use, in real world conversation, have never even been tried online. 


RUDY RUCKER
Mathematician, Computer Scientist; CyberPunk Pioneer; Novelist; Author,
The Lifebox, the Seashell, and the Soul

Can Robots See God?

Studying mathematical logic in the 1970s, I believed it was possible to put together a convincing argument that no computer program can fully emulate a human mind. Although nobody had quite gotten the argument right, I hoped to straighten it out.

My belief in this will-o'-the-wisp was motivated by a gut feeling that people have numinous inner qualities that will not be found in machines. For one thing, our self-awareness lets us reflect on ourselves and get into endless mental regresses: "I know that I know that I know..." For another, we have moments of mystical illumination when we seem to be in contact, if not with God, then with some higher cosmic mind. I felt that surely no machine could be self-aware or experience the divine light.

At that point, I'd never actually touched a computer — they were still inaccessible, stygian tools of the establishment.  Three decades rolled by, and I'd morphed into a Silicon Valley computer scientist, in constant contact with nimble chips.  Setting aside my old prejudices, I changed my mind — and came to believe that we can in fact create human-like computer programs.

Although writing out such a program is in some sense beyond the abilities of any one person, we can set up simulated worlds in which such computer programs evolve.  I feel confident that some relatively simple set-up will, in time, produce a human-like program capable of emulating all known intelligent human behaviors: writing books, painting pictures, designing machines, creating scientific theories, discussing philosophy, and even falling in love.  More than that, we will be able to generate an unlimited number of such programs, each with its own particular style and personality. 

What of the old-style attacks from the quarters of mathematical logic?  Roughly speaking, these arguments always hinged upon a spurious belief that we can somehow discern between, on the one hand, human-like systems which are fully reliable and, on the other hand, human-like systems fated to begin spouting gibberish.  But the correct deduction from mathematical logic is that there is absolutely no way to separate the sheep from the goats.  Note that this is already our situation vis-a-vis real humans: you have no way to tell if and when a friend or a loved one will forever stop making sense. 

With the rise of new practical strategies for creating human-like programs and the collapse of the old a priori logical arguments against this endeavor, I have to reconsider my former reasons for believing humans to be different from machines.   Might robots become self-aware?  And — not to put too fine a point on it — might they see God?  I believe both answers are yes. 

Consciousness probably isn't that big a deal. A simple pair of facing mirrors exhibits a kind of endlessly regressing self-awareness, and this type of pattern can readily be turned into computer code.
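
Since Rucker says the facing-mirrors regress can readily be turned into computer code, here is one minimal, purely illustrative way to do it in Python: a sketch of the regress itself, of course, not of consciousness.

    def reflect(thought, depth):
        """Wrap a thought in successive layers of 'I know that...'."""
        if depth == 0:
            return thought
        return "I know that " + reflect(thought, depth - 1)

    print(reflect("I exist", 3))
    # prints: I know that I know that I know that I exist

    # The regress can also be left open-ended: two structures that each
    # contain the other, like facing mirrors holding one another's image.
    mirror_a, mirror_b = {}, {}
    mirror_a["image"], mirror_b["image"] = mirror_b, mirror_a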

And what about basking in the divine light?  Certainly if we take a reductionistic view that mystical illumination is just a bath of intoxicating brain chemicals, then there seems to be no reason that machines couldn't occasionally be nudged into exceptional states as well.  But I prefer to suppose that mystical experiences involve an objective union with a higher level of mind, possibly mediated by offbeat physics such as quantum entanglement, dark matter, or higher dimensions. 

Might a robot enjoy these true mystical experiences?  Based on my studies of the essential complexity of simple systems, I feel that any physical object at all must be equally capable of enlightenment.  As the Zen apothegm has it, "The universal rain moistens all creatures." 

So, yes, I now think that robots can see God.


NICK BOSTROM
Philosopher, University of Oxford; Author,

Everything

For me, belief is not an all-or-nothing thing — believe or disbelieve, accept or reject. Instead, I have degrees of belief, a subjective probability distribution over different possible ways the world could be. This means that I am constantly changing my mind about all sorts of things, as I reflect or gain more evidence. While I don't always think explicitly in terms of probabilities, I often do so when I give careful consideration to some matter. And when I reflect on my own cognitive processes, I must acknowledge the graduated nature of my beliefs.

The commonest way in which I change my mind is by concentrating my credence function on a narrower set of possibilities than before.  This occurs every time I learn a new piece of information.  Since I started my life knowing virtually nothing, I have changed my mind about virtually everything.  For example, not knowing a friend's birthday, I assign a 1/365 chance (approximately) of it being the 11th of August.  After she tells me that the 11th of August is her birthday, I assign that date a probability of close to 100%.  (Never exactly 100%, for there is always a non-zero probability of miscommunication, deception, or other error.) 

It can also happen that I change my mind by smearing out my credence function over a wider set of possibilities.  I might forget the exact date of my friend's birthday but remember that it is sometime in the summer.  The forgetting changes my credence function, from being almost entirely concentrated on 11th of August to being spread out more or less evenly over all the summer months.  After this change of mind, I might assign a 1% probability to my friend's birthday being on the 11th of August.
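
The bookkeeping described here can be written out in a few lines. The sketch below, in Python, redoes the birthday example: a uniform prior over 365 days, a Bayes update on the friend's report (the 0.99 reliability is my own illustrative assumption, not a number from the essay), and then the smearing-out that forgetting produces.

    DAYS = 365
    AUG_11 = 222                   # 0-based index of August 11 (day 223 of a non-leap year)
    SUMMER = set(range(151, 243))  # 0-based indices for June 1 through August 31 (92 days)

    # Start almost ignorant: a uniform credence over all possible birthdays.
    credence = [1.0 / DAYS] * DAYS

    def update_on_report(credence, reported_day, reliability=0.99):
        """Bayes update on the friend's report: with probability `reliability` the
        report is exactly right; otherwise the truth is taken to be uniform over
        the remaining days."""
        posterior = []
        for day, prior in enumerate(credence):
            likelihood = reliability if day == reported_day else (1 - reliability) / (DAYS - 1)
            posterior.append(likelihood * prior)
        total = sum(posterior)
        return [p / total for p in posterior]

    credence = update_on_report(credence, AUG_11)
    print(f"after the report, P(Aug 11) = {credence[AUG_11]:.3f}")   # close to, but never exactly, 1

    # Forgetting the date but remembering "sometime in the summer" smears the
    # credence back out over the summer days.
    credence = [1.0 / len(SUMMER) if d in SUMMER else 0.0 for d in range(DAYS)]
    print(f"after forgetting, P(Aug 11) = {credence[AUG_11]:.4f}")   # roughly 1 percent

The first print shows the concentration step, the second the spreading-out step; nothing depends on the particular reliability figure chosen.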

My credence function can become more smeared out not only by forgetting but also by learning — learning that what I previously took to be strong evidence for some hypothesis is in fact weak or misleading evidence.  (This type of belief change can often be mathematically modeled as a narrowing rather than a broadening of credence function, but the technicalities of this are not relevant here.)

For example, over the years I have become moderately more uncertain about the benefits of medicine, nutritional supplements, and much conventional health wisdom.  This belief change has come about as a result of several factors.  One of the factors is that I have read some papers that cast doubt on the reliability of the standard methodological protocols used in medical studies and their reporting.  Another factor is my own experience of following up on MEDLINE some of the exciting medical findings reported in the media — almost always, the search of the source literature reveals a much more complicated picture with many studies showing a positive effect, many showing a negative effect, and many showing no effect.  A third factor is the arguments of a health economist friend of mine, who holds a dim view of the marginal benefits of medical care. 

Typically, my beliefs about big issues change in small steps.  Ideally, these steps should approximate a random walk, like the stock market.  It should be impossible for me to predict how my beliefs on some topic will change in the future.  If I believed that a year hence I will assign a higher probability to some hypothesis than I do today — why, in that case I could raise the probability right away.  Given knowledge of what I will believe in the future, I would defer to the beliefs of my future self, provided that I think my future self will be better informed than I am now and at least as rational. 

I have no crystal ball to show me what my future self will believe.  But I do have access to many other selves, who are better informed than I am on many topics.  I can defer to experts.  Provided they are unbiased and are giving me their honest opinion, I should perhaps always defer to people who have more information than I do — or to some weighted average of expert opinion if there is no consensus.  Of course, the proviso is a very big one: often I have reason to disbelieve that other people are unbiased or that they are giving me their honest opinion.  However, it is also possible that I am biased and self-deceiving.  An important unresolved question is how much epistemic weight a wannabe Bayesian thinker should give to the opinions of others.  I'm looking forward to changing my mind on that issue, hopefully by my credence function becoming concentrated on the correct answer. 


GINO SEGRE
Physicist, University of Pennsylvania; Author: Faust In Copenhagen: A Struggle for the Soul of Physics

The Universe's Expansion

The first topic you treat in freshman physics is showing how a ball shot straight up out of the mouth of a cannon will reach a maximum height and then fall back to Earth, unless its initial velocity, now known as the escape velocity, is great enough that it breaks out of the Earth's gravitational field. Even in that case, its final velocity is always less than its initial one. Calculating escape velocity may not be very relevant for cannon balls, but it certainly is for rocket ships.
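
As a quick numerical aside (my own back-of-the-envelope check, not part of the essay): setting kinetic energy equal to gravitational potential energy gives the escape velocity v = sqrt(2GM/R), which for the Earth comes out near 11.2 km/s. A few lines of Python confirm the figure.

    import math

    # Escape velocity from the surface of a body: v = sqrt(2 * G * M / R).
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M_EARTH = 5.972e24   # mass of the Earth, kg
    R_EARTH = 6.371e6    # mean radius of the Earth, m

    v_escape = math.sqrt(2 * G * M_EARTH / R_EARTH)
    print(f"Escape velocity from Earth: {v_escape / 1000:.1f} km/s")  # ~11.2 km/s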

The situation with the explosion we call the Big Bang is obviously more complicated, but really not that different, or so I thought. The standard picture said that there was an initial explosion, space began to expand and galaxies moved away from one another. The density of matter in the Universe determined whether the Big Bang would eventually be followed by a Big Crunch or whether the celestial objects would continue to move away from one another, albeit at an ever-decreasing rate. In other words, one could calculate the Universe's escape velocity. Admittedly the discovery of Dark Matter, an unknown quantity seemingly five times as abundant as known matter, seriously altered the framework, but not in a fundamental way, since Dark Matter was after all still matter, even if its identity is unknown.

This picture changed in 1998 with the announcement by two teams, working independently, that the Universe's expansion was accelerating, not decelerating. It was as if freshman physics' cannonball miraculously moved faster and faster as it left the Earth. There was no possibility of a Big Crunch, in which the Universe would collapse back on itself. The groups' analyses, based on observing distant type Ia supernovae, exploding stars of known luminosity, were solid. Science magazine dubbed it 1998's Breakthrough of the Year.

The cause of this apparent gravitational repulsion is not known. Called Dark Energy to distinguish it from Dark Matter, it appears to be the dominant force in the Universe's expansion, roughly three times as abundant as its Dark Matter counterpart. The prime candidate for its identity is the so-called Cosmological Constant, a term first introduced into the cosmic gravitation equations by Einstein to neutralize expansion, but done away with by him when Hubble reported that the Universe was in fact expanding.

Finding a theory that will successfully calculate the magnitude of this cosmological constant, assuming this is the cause of the accelerating expansion, is perhaps the outstanding problem in the conjoined areas of cosmology and elementary particle physics. Despite many attempts, success does not seem to be in sight. If the cosmological constant is not the answer, an alternate explanation of the Dark Energy would be equally exciting. 

Furthermore the apparent present equality, to within a factor of three, of matter density and the cosmological constant has raised a series of important questions. Since matter density decreases rapidly as the Universe expands (matter per volume decreases as volume increases) and the cosmological constant does not, we seem to be living in that privileged moment of the Universe's history when the two factors are roughly equal. Is this simply an accident? Will the distant future really be one in which, with Dark Energy increasingly important, celestial objects have moved so far apart so quickly as to fade from sight? 

The discovery of Dark Energy has radically changed our view of the Universe. Future, keenly awaited findings, such as the identities of Dark Matter and Dark Energy, will do so again.


ARNOLD TREHUB
Psychologist, University of Massachusetts, Amherst; Author: The Cognitive Brain

I have never questioned the conventional view that a good grounding in the physical sciences is needed for a deep understanding of the biological sciences. It did not occur to me that the opposite view might also be true. If someone had asked me whether biological knowledge might significantly influence my understanding of our basic physical sciences, I would have denied it.

Now I am convinced that the future understanding of our most important physical principles will be profoundly shaped by what we learn in the living realm of biology. What have changed my mind are the relatively recent developments in the theoretical constructs and empirical findings in the sciences of the brain — the biological foundation of all thought. Progress here can cast new light on the fundamental subjective factors that constrain our scientific formulations in what we take to be an objective enterprise. 


MARK PAGEL
Evolutionary Biologist, Reading University, England

We Differ More Than We Thought

The last thirty to forty years of social science has brought an overbearing censorship to the way we are allowed to think and talk about the diversity of people on Earth. People of Siberian descent, New Guinean Highlanders, those from the Indian sub-continent, Caucasians, Australian aborigines, Polynesians, Africans — we are, officially, all the same: there are no races. 

Flawed as the old ideas about race are, modern genomic studies reveal a surprising, compelling and different picture of human genetic diversity. We are on average about 99.5% similar to each other genetically. This is a new figure, down from the previous estimate of 99.9%. To put what may seem like minuscule differences in perspective, we are somewhere around 98.5% similar, maybe more, to chimpanzees, our nearest evolutionary relatives.

The new figure for us, then, is significant. It derives from, among other things, many small genetic differences that have emerged from studies that compare human populations. Some confer the ability among adults to digest milk, others to withstand equatorial sun, others yet confer differences in body shape or size, resistance to particular diseases, tolerance to hot or cold, how many offspring a female might eventually produce, and even the production of endorphins — those internal opiate-like compounds. We also differ by surprising amounts in the numbers of copies of some genes we have.

Modern humans spread out of Africa only within the last 60-70,000 years, little more than the blink of an eye when stacked against the 6 million or so years that separate us from our Great Ape ancestors. The genetic differences amongst us reveal a species with a propensity to form small and relatively isolated groups on which natural selection has often acted strongly to promote genetic adaptations to particular environments. 

We differ genetically more than we thought, but we should have expected this: how else but through isolation can we explain a single species that speaks at least 7,000 mutually unintelligible languages around the World? 

What this all means is that, like it or not, there may be many genetic differences among human populations — including differences that may even correspond to old categories of 'race' — that are real differences in the sense of making one group better than another at responding to some particular environmental problem. This in no way says one group is in general 'superior' to another, or that one group should be preferred over another.  But it warns us that we must be prepared to discuss genetic differences among human populations. 


CHARLES SEIFE
Professor of Journalism, New York University; formerly journalist, Science magazine; Author, Zero: The Biography Of A Dangerous Idea

I used to think that a modern, democratic society had to be a scientific society. After all, the scientific revolution and the American Revolution were forged in the same flames of the Enlightenment. Naturally, I thought, a society that embraces the freedom of thought and expression of a democracy would also embrace science.

However, when I first started reporting on science, I quickly realized that science didn't spring up naturally in the fertile soil of the young American democracy. Americans were extraordinary innovators — wonderful tinkerers and engineers — but you can count the great 19th century American physicists on one hand and have two fingers left over. The United States owes its scientific tradition to aristocratic Europe's universities (and to its refugees), not to any native drive.

In fact, science clashes with the democratic ideal. Though it is meritocratic, it is practiced in the elite and effete world of academe, leaving the vast majority of citizens unable to contribute to it in any meaningful way. Science is about freedom of thought, yet at the same time it imposes a tyranny of ideas.

In a democracy, ideas are protected. It's the sacred right of a citizen to hold — and to disseminate — beliefs that the majority disagrees with, ideas that are abhorrent, ideas that are wrong. However, scientists are not free to be completely open minded; a scientist stops being a scientist if he clings to discredited notions. The basic scientific urge to falsify, to disprove, to discredit ideas clashes with the democratic drive to tolerate and protect them.

This is why even those politicians who accept evolution will never attack those politicians who don't; at least publicly, they cast evolutionary theory as a mere personal belief. Attempting to squelch creationism smacks of elitism and intolerance — it would be political suicide. Yet this is exactly what biologists are compelled to do; they exorcise falsehoods and drive them from the realm of public discourse.

We've been lucky that the transplant of science has flourished so beautifully on American soil. But I no longer take it for granted that this will continue; our democratic tendencies might get the best of us in the end.


DAVID BODANIS
Writer; Consultant; Author, Passionate Minds

The Bible Is Inane

When I was very little the question was easy. I simply assumed the whole Bible was true, albeit in a mysterious, grown-up sort of way. But once I learned something of science, at school and then at university, that unquestioning belief slid away.

Mathematics was especially important here, and I remember how entranced I was when I first saw the power of axiomatic systems. Those were logical structures that were as beautiful as complex crystals —  but far, far clearer. If there was one inaccuracy at any point in the system, you could trace it, like a scarcely visible stretching crack through the whole crystal; you could see exactly how it had to undermine the validity of far distant parts as well. Since there are obvious factual inaccuracies in the Bible, as well as repugnant moral commands, then —  just as with any tight axiomatic system —  huge other parts of it had to be wrong, as well. In my mind that discredited it all.

What I've come to see more recently is that the Bible isn't monolithic in that way. It's built up in many, often quite distinct layers. For example, the book of Joshua describes a merciless killing of Jericho's inhabitants, after that city's walls were destroyed. But archaeology shows that when this was supposed to be happening, there was no large city with walls there to be destroyed. On the contrary, careful dating of artifacts, as well as translations from documents of the great empires in surrounding regions, shows that the bloodthirsty Joshua story was quite likely written by one particular group, centuries later, trying to give some validity to a particular royal line in 7th century BC Jerusalem, which wanted to show its rights to the entire country around it. Yet when that Joshua layer is stripped away, other layers in the Bible remain. They can stand, or be judged, on their own.

A few of those remaining layers have survived only because they became taken up by narrow power structures, concerned with aggrandizing themselves, in the style of Philip Pullman's excellent books. But others have survived across the millennia for different reasons. Some speak to the human condition with poetry of aching beauty. And others —  well, there's a further reason I began to doubt the inanity of everything I couldn't understand.

A child of three, however intelligent, and however tightly it squinches its face in concentration, still won't be able to grasp notions that are easy for us, such as 'century', or 'henceforth', let alone greater subtleties which 20th century science has clarified, such as 'simultaneity' or 'causality'. True and important things exist which young children can't comprehend. It seems odd to be sure that we, adult humans, existing at this one particular moment in evolution, have no such limits.

I realized that the world  isn't divided into science on the one hand, and nonsense or arbitrary biases on the other. And I wonder now what might be worth looking for, hidden there, fleetingly in-between.


HAIM HARARI
Physicist, former President, Weizmann Institute of Science

Clear and simple is not the same as provable and well defined

I used to think that if something is clear and simple, it must also be provable or at least well defined, and if something is well defined, it might be relatively simple. It isn't so.

If you hear about sightings of a weird glow approaching us in the night sky, it might be explained as a meteorite or as little green men arriving in a spaceship from another galaxy. In most specific cases, both hypotheses can be neither proved nor disproved, rigorously. Nothing is well defined here. Yet, it is clear that the meteorite hypothesis is scientifically much more likely.

When you hear about a new perpetual motion machine or about yet another claim of cold fusion, you raise an eyebrow, you are willing to bet against it and, in your guts, you know it is wrong, but it is not always easy to disprove it rigorously.

The reliability of forecasts regarding weather, stock markets and astrology is descending in that order. All of them are based on guesses, with or without historical data. Most of them are rarely revisited by the media after the fact, thus avoiding being exposed as unreliable. In most cases, predicting that the immediate future will be the same as the immediate past has a higher probability of being correct than the predictions of the gurus. Yet we, as scientists, have considerable faith in weather predictions; much less faith in predicting peaks and dips of the stock market; and no faith at all in astrology. We can explain why, and we are certainly right, but we cannot prove why. Proving it by historical success data is as convincing (for the future) as the predictions themselves.

Richard Feynman in his famous Lectures on Physics provided the ultimate physics definition of Energy: It is that quantity which is conserved. Any Lawyer, Mathematician or Accountant would have laughed at this statement. Energy is perhaps the most useful, clear and common concept in all of science, and Feynman is telling us, correctly and shamelessly, that it has no proper rigorous and logical definition.

How much is five thousand plus two? Not so simple. Sometimes it is five thousand and two (as in your bank statement) and sometimes it is actually five thousand (as in the case of the Cairo tour guide who said "this pyramid is 5002 years old; when I started working here two years ago, I was told it was 5000 years old").
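
The tour guide's arithmetic can be put in a couple of lines of Python (my illustration, not Harari's): adding two years to a figure that is only good to the nearest millennium leaves the figure, at its stated precision, unchanged.

    # 5000 + 2 is exactly 5002, but an age quoted "to the nearest millennium"
    # simply absorbs the extra two years.
    age_estimate = 5000              # known only to the nearest 1000 years
    years_on_the_job = 2

    exact = age_estimate + years_on_the_job
    at_stated_precision = round(exact, -3)   # round to the nearest thousand

    print(exact)                # 5002  (the bank-statement answer)
    print(at_stated_precision)  # 5000  (the pyramid answer)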

The public thinks, incorrectly, that science is a very accurate discipline where everything is well defined. Not so. But the beauty of it is that all of the above statements are scientific, obvious and useful, without being precisely defined. That is as much part of the scientific method as verifying a theory by an experiment (which is always accurate only to a point).

To speak and to understand the language of science is, among other things, to understand this "clear vagueness". It exists, of course, in other areas of life. Every normal language possesses numerous such examples, and so do all fields of social science.

Judaism is a religion and I am an atheist. Nevertheless, it is clear that I am Jewish. It would take a volume to explain why, and the explanation will remain rather obscure and ill defined. But the fact is simple, clear, well understood and undeniable.

Somehow, it is acceptable to face such situations in nonscientific matters, but most people think, incorrectly, that the quantitative natural sciences must be different. They are different, in many ways, but not in this way.

Common sense has as much place as logic in scientific research. Intuition often leads to more insight than algorithmic thinking. Familiarity with previous failed attempts to solve a problem may be detrimental, rather than helpful. This may explain why almost all important physics breakthroughs are made by people under forty. This also explains why, in science, asking the right question is at least as important as being able to solve a well posed problem.

You might say that the above kind of thinking is prejudiced and inaccurate, and that it might hinder new discoveries and new scientific ideas. Not so. Good scientists know very well how to treat and use all of these "fuzzy" statements. They also know how to reconsider them, when there is a good reason to do so, based on new solid facts or on a new original line of thinking. This is one of the beautiful features of science.


TIMOTHY TAYLOR
Archaeologist, University of Bradford; Author, The Buried Soul

Relativism

Where once I would have striven to see Incan child sacrifice 'in their terms', I am increasingly committed to seeing it in ours. Where once I would have directed attention to understanding a past cosmology of equal validity to my own, I now feel the urgency to go beyond a culturally-attuned explanation and reveal cold sadism, deployed as a means of social control by a burgeoning imperial power.

In Cambridge at the end of the 70s, I began to be inculcated with the idea that understanding the internal logic and value system of a past culture was the best way to do archaeology and anthropology. The challenge was to achieve this through sensitivity to context, classification and symbolism. A pot was no longer just a pot, but a polyvalent signifier, with a range of case-sensitive meanings. A rubbish pit was no longer an unproblematic heap of trash, but a semiotic entity embodying concepts of contagion and purity, sacred and profane. A ritual killing was not to be judged bad, but as having validity within a different worldview.

Using such 'contextual' thinking, a lump of slag found in a 5000 BC female grave in Serbia was no longer seen as a chance contaminant — by-product garbage from making copper jewelry. Rather it was a kind of poetic statement bearing on the relationship between biological and cultural reproduction. Just as births in the Vinča culture were attended by midwives who also delivered the warm but useless slab of afterbirth, so Vinča culture ore was heated in a clay furnace that gave birth to metal. From the furnace — known from many ethnographies to have projecting clay breasts and a graphically vulvic stoking opening — the smelters delivered technology's baby. With it came a warm but useless lump of slag. Thus the slag in a Vinča woman's grave, far from being accidental trash, hinted at a complex symbolism of gender, death and rebirth.

So far, so good: relativism worked as a way towards understanding that our industrial waste was not theirs, and their idea of how a woman should be appropriately buried not ours. But what happens when relativism says that our concepts of right and wrong, good and evil, kindness and cruelty, are inherently inapplicable? Relativism self-consciously divests itself of a series of anthropocentric and anachronistic skins — modern, white, western, male-focused, individualist, scientific (or 'scientistic') — to say that the recognition of such value-concepts is radically unstable, the 'objective' outsider opinion a worthless myth.

My colleague Andy Wilson and our team have recently examined the hair of sacrificed children found on some of the high peaks of the Andes. Contrary to historic chronicles that claim that being ritually killed to join the mountain gods was an honour that the Incan rulers accorded only to their own privileged offspring, diachronic isotopic analyses along the scalp hairs of victims indicate that it was peasant children, who, twelve months before death, were given the outward trappings of high status and a much improved diet to make them acceptable offerings. Thus we see past the self-serving accounts of those of the indigenous elite who survived on into Spanish rule. We now understand that the central command in Cuzco engineered the high-visibility sacrifice of children drawn from newly subject populations. And we can guess that this was a means to social control during the massive, 'shock & awe' style imperial expansion southwards into what became Argentina.

But the relativists demur from this understanding, and have painted us as culturally insensitive, ignorant scientists (the last label a clear pejorative). For them, our isotope work is informative only as it reveals 'the inner fantasy life of, mostly, Euro-American archaeologists, who can't possibly access the inner cognitive/cultural life of those Others.' The capital 'O' is significant. Here we have what the journalist Julie Burchill mordantly unpacked as 'the ever-estimable Other' — the albatross that post-Enlightenment and, more importantly, post-colonial scholarship must wear round its neck as a sign of penance.

We need relativism as an aid to understanding past cultural logic, but it does not free us from a duty to discriminate morally and to understand that there are regularities in the negatives of human behaviour as well as in its positives. In this case, it seeks to ignore what Victor Nell has described as 'the historical and cross-cultural stability of the uses of cruelty for punishment, amusement, and social control.' By denying the basis for a consistent underlying algebra of positive and negative, yet consistently claiming the necessary rightness of the internal cultural conduct of 'the Other', relativism steps away from logic into incoherence.


LEON LEDERMAN
Physicist and Nobel Laureate; Director Emeritus, Fermilab; Coauthor, The God Particle

The Obligations and Responsibilities of The Scientist

My academic experience, mainly at Columbia University from 1946-1978, instilled the following firm beliefs:

The role of the Professor, reflecting the mission of the University, is research and the dissemination of the knowledge gained. However, the Professor has many citizenship obligations: to his community, State and Nation, to his University, to his field of research, e.g. physics, and to his students. In the latter case, one must add, to the content knowledge transferred, the moral and ethical concerns that science brings to society. So scientists have an obligation to communicate their knowledge, to popularize it, and, whenever relevant, to bring it to bear on the issues of the time. Additionally, scientists play a large role in advisory boards and systems, from the President's advisory system all the way to local school boards and PTAs. I have always believed that the above menu more or less covered all the obligations and responsibilities of the scientist. His most sacred obligation is to continue to do science. Now I know that I was dead wrong.

Taking even a cursory stock of current events, I am driven to the ultimately wise advice of my Columbia mentor, I.I. Rabi, who, in our many corridor bull sessions, urged his students to run for public office and get elected. He insisted that to be an advisor (he was an advisor to Oppenheimer at Los Alamos, later to Eisenhower and to the AEC) was ultimately an exercise in futility and that the power belonged to those who are elected. Then, we thought the old man was bonkers. But today...

Just look at our national and international dilemmas: global climate change (U.S. booed in Bali); nuclear weapons (seventeen years after the end of the Cold War, the U.S. has over 7,000 nuclear weapons, many poised for instant flight. Who decided?); stem cell research (still hobbled by White House obstacles). Basic research and science education are rated several nations below "Lower Slobbovia"; our national deficit will burden the nation for generations; and a wave of religious fundamentalism, an endless war in Iraq and the growing security restrictions on our privacy and freedom (excused by an even more endless and mindless war on terrorism) seem to be paralyzing the Congress. We need to elect people who can think critically.

A Congress which is overwhelmingly dominated by lawyers and MBAs makes no sense in this 21st century, in which almost all issues have a science and technology aspect. We need a national movement to seek out scientists and engineers who have demonstrated the required management and communication skills. And we need a strong consensus among mentors that wisdom and knowledge in the Congress must have a huge priority.


DAN SPERBER
Social and cognitive scientist; Directeur de Recherche, CNRS, Paris; Author, Rethinking Symbolism

How I Became An Evolutionary Psychologist

As a student, I was influenced by Claude Lévi-Strauss and even more by Noam Chomsky. Both of them dared talk about "human nature" when the received view was that there was no such thing. In my own work, I argued for a naturalistic approach in the social sciences. I took for granted that human cognitive dispositions were shaped by biological evolution and more specifically by Darwinian selection. While I did occasionally toy with evolutionary speculations, I failed to see at the time how they could play more than a quite marginal role in the study of human psychology and culture.

Luckily, in 1987, I was asked by Jacques Mehler, the founder and editor of Cognition, to review a very long article intriguingly entitled "The logic of social exchange: Has natural selection shaped how humans reason?" In most experimental psychology articles the theoretical sections are short and relatively shallow. Here, on the other hand, the young author, Leda Cosmides, was arguing in an altogether novel way for an ambitious theoretical claim. The forms of cooperation unique to and characteristic of humans could only have evolved, she maintained, if there had also been, at a psychological level, the evolution of a mental mechanism tailored to understand and manage social exchanges and in particular to detect cheaters. Moreover, this mechanism could be investigated by means of standard reasoning experiments.

This is not the place to go into the details of the theoretical argument — which I found and still find remarkably insightful — or of the experimental evidence — which I have criticized in detail, with experiments of my own, as inadequate. Whatever its shortcomings, this was an extraordinarily stimulating paper, and I strongly recommended acceptance of a revised version. The article was published in 1989 and the controversies it stirred have not yet abated.

Reading the work of Leda Cosmides and of John Tooby, her collaborator (and husband), meeting them shortly after, and initiating a conversation with them that has never ceased made me change my mind. I had known that we could reflect on the mental capacities of our ancestors on the basis of what we know of our minds; I now understood that we can also draw fundamental insights about our present minds through reflecting on the environmental problems and opportunities that have exerted selective pressure on our Paleolithic ancestors.

Ever since, I have tried to contribute to the development of evolutionary psychology, to the surprise and dismay of some of my more standard-social-science friends and also of some evolutionary psychologists who see me more as a heretic than a genuine convert. True, I have no taste or talent for orthodoxy. Moreover, I find much of the work done so far under the label "evolutionary psychology" rather disappointing. Evolutionary psychology will succeed to the extent that it causes cognitive psychologists to rethink central aspects of human cognition in an evolutionary perspective, to the extent, that is, that psychology in general becomes evolutionary.

The human species is exceptional in its massive investment in cognition, and in forms of cognitive activity — language, higher-order thinking, abstraction — that are as unique to humans as echolocation is to bats. Yet more than half of all work done in evolutionary psychology today is about mate choice, a mental activity found in a great many species. There is nothing intrinsically wrong in studying mate choice, of course, and some of the work done in this area is outstanding.

However the promise of evolutionary psychology is first and foremost to help explain aspects of human psychology that are genuinely exceptional among earthly species and that in turn help explain the exceptional character of human culture and ecology. This is what has to be achieved to a much greater extent than has been the case so far if we want more skeptical cognitive and social scientists to change their minds too.


THOMAS METZINGER
Johannes Gutenberg-Universität Mainz; Author, Being No One

There are No Moral Facts

I have become convinced that it would be of fundamental importance to know what a good state of consciousness is. Are there forms of subjective experience which — in a strictly normative sense — are better than others? Or worse? What states of consciousness should be illegal? What states of consciousness do we want to foster and cultivate and integrate into our societies? What states of consciousness can we force upon animals — for instance, in consciousness research itself? What states of consciousness do we want to show our children? And what state of consciousness do we eventually die in ourselves?

2007 has seen the rise of an important new discipline: "neuroethics". This is not simply a new branch of applied ethics for neuroscience — it raises deeper issues about selfhood, society and the image of man. Neuroscience is now quickly being transformed into neurotechnology. I predict that parts of neurotechnology will turn into consciousness technology. In 2002, out-of-body experiences were, for the first time, induced with an electrode in the brain of an epileptic patient. In 2007 we saw the first two studies, published in Science, demonstrating how the conscious self can be transposed, non-invasively and in healthy subjects, to a location outside the experienced physical body. Cognitive enhancers are on the rise. The conscious experience of will has been experimentally constructed and manipulated in a number of ways. Acute episodes of depression can be caused by direct interventions in the brain, and they have also been successfully blocked in previously treatment-resistant patients. And so on.

Whenever we understand the specific neural dynamics underlying a specific form of conscious content, we can in principle delete, amplify or modulate this content in our minds. So shouldn't we have a new ethics of consciousness — one that does not ask what a good action is, but that goes directly to the heart of the matter, asks what we want to do with all this new knowledge and what the moral value of states of subjective experience is?

Here is where I have changed my mind. There are no moral facts. Moral sentences have no truth-values. The world itself is silent, it just doesn't speak to us in normative affairs — nothing in the physical universe tells us what makes an action a good action or a specific brain-state a desirable one. Sure, we all would like to know what a good neurophenomenological configuration really is, and how we should optimize our conscious minds in the future. But it looks like, in a more rigorous and serious sense, there is just no ethical knowledge to be had. We are alone. And if that is true, all we have to go by are the contingent moral intuitions evolution has hard-wired into our emotional self-model. If we choose to simply go by what feels good, then our future is easy to predict: It will be primitive hedonism and organized religion.


MARC D. HAUSER
Psychologist and Biologist, Harvard University: Author, Moral Minds

The Limits Of Darwinian Reasoning

Darwin is the man, and like so many biologists, I have benefited from his prescient insights, handed to us 150 years ago. The logic of adaptation has been a guiding engine of my research and my view of life. In fact, it has been difficult to view the world through any other filter. I can still recall with great vividness the day I arrived in Cambridge, in June 1992, a few months before starting my job as an assistant professor at Harvard. I was standing on a street corner, waiting for a bus to arrive, and noticed a group of pigeons on the sidewalk. There were several males displaying, head bobbing and cooing, attempting to seduce the females. The females, however, were not paying attention. They were all turned, in Prussian soldier formation, out toward the street, looking at the middle of the intersection where traffic was whizzing by. There, in the intersection, was one male pigeon, displaying his heart out. Was this guy insane? Hadn't he read the handbook of natural selection? Dude, it's about survival. Get out of the street!!!

Further reflection provided the solution to this apparently mutant male pigeon. The logic of adaptation requires us to ask about the costs and benefits of behavior, trying to understand what the fitness payoffs might be. Even for behaviors that appear absurdly deleterious, there is often a benefit lurking. In the case of our apparently suicidal male pigeon, there was a benefit, and it was lurking in the females' voyeurism, their rubbernecking. The females were oriented toward this male, as opposed to the conservative guys on the sidewalk, because he was playing with danger, showing off, proving that even in the face of heavy traffic, he could fly like a butterfly and sting like a bee, jabbing and jiving like the great Muhammad Ali.

The theory comes from the evolutionary biologist Amotz Zahavi, who proposed that even costly behaviors that challenge survival can evolve if they have payoffs to genetic fitness; these payoffs arrive in the currency of more matings and, ultimately, more babies. Our male pigeon was showing off his handicap. He was advertising to the females that even in the face of potential costs from Hummers and Beamers and Buses, he was still walking the walk and talking the talk. The females were hooked, mesmerized by this extraordinarily macho male. Handicaps evolve because they are honest indicators of fitness. And Zahavi's theory represents the intellectual descendant of Darwin's original proposal.

I must admit, however, that in recent years, I have made less use of Darwin's adaptive logic. It is not because I think that the adaptive program has failed, or that it can't continue to account for a wide variety of human and animal behavior. But with respect to questions of human and animal mind, and especially some of the unique products of the human mind — language, morality, music, mathematics — I have, well, changed my mind about the power of Darwinian reasoning.

Let me be clear about the claim here. I am not rejecting Darwin's emphasis on comparative approaches, that is, the use of phylogenetic or historical data. I still practice this approach, contrasting the abilities of humans and animals in the service of understanding what is uniquely human and what is shared. And I still think our cognitive prowess evolved, and that the human brain and mind can be studied in some of the same ways that we study other bits of anatomy and behavior. But where I have lost the faith, so to speak, is in the power of the adaptive program to explain or predict particular design features of human thought.

Although it is certainly reasonable to say that language, morality and music have design features that are adaptive, that would enhance reproduction and survival, evidence for such claims is sorely missing. Further, for those who wish to argue that the evidence comes from the complexity of the behavior itself, and the absurdly low odds of constructing such complexity by chance, these arguments just don't cut it with respect to explaining or predicting the intricacies of language, morality, music or many other domains of knowledge.

In fact, I would say that although Darwin's theory has been around, and readily available for the taking for 150 years, it has not advanced the fields of linguistics, ethics, or mathematics. This is not to say that it can't advance these fields. But unlike the areas of economic decision making, mate choice, and social relationships, where the adaptive program has fundamentally transformed our understanding, the same can not be said for linguistics, ethics, and mathematics. What has transformed these disciplines is our growing understanding of mechanism, that is, how the mind represents the world, how physiological processes generate these representations, and how the child grows these systems of knowledge.

Bidding Darwin adieu is not easy. My old friend has served me well. And perhaps one day he will again. Until then, farewell.


ROBERT PROVINE
Psychologist and Neuroscientist, University of Maryland; Author, Laughter

In Praise of Fishing Expeditions

Mentors, paper referees and grant reviewers have warned me on occasion about scientific "fishing expeditions," the conduct of empirical research that does not test a specific hypothesis or is not guided by theory. Such "blind empiricism" was said to be unscientific, to waste time and produce useless data. Although I have never been completely convinced of the hazards of fishing, I now reject them outright, with a few reservations.

I'm not advocating the collection of random facts, but the use of broad-based descriptive studies to learn what to study and how to study it. Those who fish learn where the fish are, their species, number and habits. Without the guidance of preliminary descriptive studies, hypothesis testing can be inefficient and misguided. Hypothesis testing is a powerful means of rejecting error — of trimming the dead limbs from the scientific tree — but it does not generate hypotheses or signify which are worthy of test. I'll provide two examples from my experience.

In graduate school, I became intrigued with neuroembryology and wanted to introduce it to developmental psychology, a discipline that essentially starts at birth. My dissertation was a fishing expedition that described embryonic behavior and its neurophysiological mechanism. I was exploring uncharted waters and sought advice by observing the ultimate expert, the embryo. In this and related work, I discovered that prenatal movement is the product of seizure-like discharges in the spinal cord (not the brain), that the spinal discharges occurred spontaneously (not as a response to sensory stimuli), and that the function of movement was to sculpt joints (not to shape postnatal behavior such as walking) and to regulate the number of motor neurons. Remarkable!

But decades later, this and similar work is largely unknown to developmental psychologists who have no category for it. The traditional psychological specialties of perception, learning, memory, motivation and the like, are not relevant during most of the prenatal period. The finding that embryos are profoundly unpsychological beings guided by unique developmental priorities and processes is not appreciated by theory-driven developmental psychologists. When the fishing expedition indicates that there is no appropriate spot in the scientific filing cabinet, it may be time to add another drawer.

Years later and unrepentant, I embarked on a new fishing expedition, this time in pursuit of the human universal of laughter — what it is, when we do it, and what it means. In the spirit of my embryonic research, I wanted the expert to define my agenda—a laughing person. Explorations about research funding with administrators at a federal agency were unpromising. One linguist patiently explained that my project "had no obvious implications for any of the major theoretical issues in linguistics."  Another, a speech scientist, noted that "laughter isn't speech, and therefore had no relevance to my agency's mission." 

Ultimately, this atheoretical and largely descriptive work provided many surprises and counterintuitive findings. For example, laughter, like crying, is not consciously controlled, contrary to literature suggesting that we speak ha-ha as we would choose a word in speech. Most laughter is not a response to humor. Laughter and speech are controlled by different brain mechanisms, with speech dominating laughter. Contagious laughter is the product of neurologically programmed social behavior. Contrasts between chimpanzee and human laughter reveal why chimpanzees can't talk (inadequate breath control), and the evolutionary event necessary for the selection for human speech (bipedality).

Whether embryonic behavior or laughter, fishing expeditions guided me down the appropriate empirical path, provided unanticipated insights, and prevented flights of theoretical fancy. Contrary to lifelong advice, when planning a new research project, I always start by going fishing.


TODD E. FEINBERG, M.D.
Professor of Psychiatry and Neurology, Albert Einstein College of Medicine; Author, Altered Egos

Soul Searching

For most of my life I viewed any notion of the "soul" as a fanciful religious invention. I agreed with the view of the late Nobel Laureate Francis Crick, who in his book The Astonishing Hypothesis claimed "A modern neurobiologist sees no need for the religious concept of a soul to explain the behavior of humans and other animals." But is the idea of a soul really so crazy and beyond the limits of scientific reason?

From the standpoint of neuroscience, it is easy to make the claim that Descartes is simply wrong about the separateness of brain and mind. The plain fact is that there is no scientific evidence that a self, an individual mind, or a soul could exist without a physical brain. However, there are persisting reasons why the self and the mind do not appear to be identical with, or entirely reducible to, the brain.

For example, the Massachusetts physician Dr. Duncan MacDougall claimed, on the basis of his experiments on dying humans, that approximately 21 grams of matter — the presumed weight of the human soul — was lost upon death (The New York Times, "Soul Has Weight, Physician Thinks," March 11, 1907). Yet in spite of such claims, the mind, unlike the brain, cannot be objectively observed, but only subjectively experienced. The subject that represents the "I" in the statement "I think therefore I am" cannot be directly observed, weighed, or measured. And the experiences of that self, its pains and pleasures, sights and sounds, possess an objective reality only to the one who experiences them. In other words, as the philosopher John Searle puts it, the mind is "irreducibly first-person."

On the other hand, although there are many perplexing properties about the brain, mind, and the self that remain to be scientifically explained — subjectivity among them — this does not mean that there must be an immaterial entity at work that explains these mysterious features. Nonetheless, I have come to believe that an individual consciousness represents an entity that is so personal and ontologically unique that it qualifies as something that we might as well call "a soul."

I am not suggesting that anything like a soul survives the death of the brain. The link between the life of the brain and the life of the mind is irreducible, the one completely dependent upon the other. Indeed, the danger of capturing the beauty and mystery of a personal consciousness and identity with the somewhat metaphorical designation "soul" is the tendency for the grandiose metaphor to obscure the actual accomplishments of the brain. The soul is not a "thing" independent of the living brain; it is part and parcel of it, its most remarkable feature, but nonetheless inextricably bound to its life and death.


KEITH DEVLIN
Mathematician; Executive Director, Center for the Study of Language and Information, Stanford; Author, The

What is the nature of mathematics? Becoming a mathematician in the 1960s, I swallowed hook, line, and sinker the Platonistic philosophy dominant at the time, that the objects of mathematics (the numbers, the geometric figures, the topological spaces, and so forth) had a form of existence in some abstract ("Platonic") realm. Their existence was independent of our existence as living, cognitive creatures, and searching for new mathematical knowledge was a process of explorative discovery not unlike geographic exploration or sending out probes to distant planets.

I now see mathematics as something entirely different, as the creation of the (collective) human mind. As such, mathematics says as much about us as it does about the external universe we inhabit. Mathematical facts are not eternal truths about the external universe, which held true before we entered the picture and will endure long after we are gone. Rather, they are based on, and reflect, our interactions with that external environment.

This is not to say that mathematics is something we have freedom to invent. It's not like literature or music, where there are constraints on the form but writers and musicians exercise great creative freedom within those constraints. From the perspective of the individual human mathematician, mathematics is indeed a process of discovery. But what is being discovered is a product of the human (species)-environment interaction.

This view raises the fascinating possibility that other cognitive creatures in another part of the universe might have different mathematics. Of course, as a human, I cannot begin to imagine what that might mean. It would classify as "mathematics" only insofar as it amounted to that species analyzing the abstract structures that arose from their interactions with their environment.

This shift in philosophy has influenced the way I teach, in that I now stress social aspects of mathematics. But when I'm giving a specific lecture on, say, calculus or topology, my approach is entirely platonistic. We do our mathematics using a physical brain that evolved over hundreds of thousands of years by a process of natural selection to handle the physical and more recently the social environments in which our ancestors found themselves. As a result, the only way for the brain to actually do mathematics is to approach it "platonistically," treating mathematical abstractions as physical objects that exist.

A platonistic standpoint is essential to doing mathematics, just as Cartesian dualism is virtually impossible to dispense with in doing science or just plain communicating with one another ("one another"?). But ultimately, our mathematics is just that: our mathematics, not the universe's.


DAVID G. MYERS
Social psychologist, Hope College; author, Psychology, 8th edition

Reading and reporting on psychological science has changed my mind many times, leading me now to believe that 

• newborns are not the blank slates I once presumed,
• electroconvulsive therapy often alleviates intractable depression,
• economic growth has not improved our morale,
• the automatic unconscious mind dwarfs the controlled conscious mind,
• traumatic experiences rarely get repressed,
• personality is unrelated to birth order,
• most folks have high self-esteem (which sometimes causes problems),
• opposites do not attract,
• sexual orientation is a natural, enduring disposition (most clearly so for men), not a choice.

In this era of science-religion conflict, such revelations underscore our need for what science and religion jointly mandate: humility. Humility, I remind my student audience, is fundamental to the empirical spirit advocated long ago by Moses: "If a prophet speaks in the name of the Lord and what he says does not come true, then it is not the Lord's message." Ergo, if our or anyone's ideas survive being put to the test, so much the better for them. If they crash against a wall of evidence, it is time to rethink.


DANIEL EVERETT
Researcher of Pirahã Culture; Chair of Languages, Literatures, & Cultures, Professor of Linguistics and Anthropology, Illinois State University

Homeopathic Bias and Language Origins

I have wondered why some authors claim that people rarely if ever change their mind. I have changed my mind many times. This could be because I have weak character, because I have no self-defining philosophy, or because I like change. Whatever the reason, I enjoy changing my mind. I have occasionally irritated colleagues with my seeming motto of 'If it ain't broke, break it.'

At the same time, I adhered to a value common in the day-to-day business of scientific research, namely, that changing one's mind is alright for little matters but is suspect when it comes to big questions. Take a theory that is compatible with either conclusion 'x' or conclusion 'y'. First you believed 'x'. Then you received new information and you believed 'y'. This is a little change. And it is a natural form of learning - a change in behavior resulting from exposure to new information.

But change your mind, say, about the general theory that you work with, at least in some fields, and you are looked upon as a kind of maverick, a person without proper research priorities, a pot-stirrer. Why is that, I wonder?

I think that the stigma against major mind changes in science results from what I call 'homeopathic bias' - scientific knowledge is built up bit by little bit as we move cumulatively towards the truth.

This bias can lead researchers to avoid concluding that their work undermines the dominant theory in any significant way. Non-homeopathic doses of criticism can be considered not merely inappropriate, but even arrogant - implying somehow that the researcher is superior to his or her colleagues, whose unifying conceptual scheme is now judged to be weaker than they have noticed or have been willing to concede.

So any scientist publishing an article or a book about a non-homeopathic mind-change could be committing a career-endangering act. But I love to read these kinds of books. They bother people. They bother me.

I changed my mind about this homeopathic bias. I think it is myopic for the most part. And I changed my mind on this because I changed my mind regarding the largest question of my field - where language comes from. This change taught me about the empirical issues that had led to my shift and about the forces that can hold science and scientists in check if we aren't aware of them.

I believed at one time that culture and language were largely independent. Yet there is a growing body of research that suggests the opposite - deep reflexes from culture are to be found in grammar.

But if culture can exercise major effects on grammar, then the theory I had committed most of my research career to - the theory that grammar is part of the human genome and that the variations in the grammars of the world's languages are largely insignificant - was dead wrong. There did not have to be a specific genetic capacity for grammar - the biological basis of grammar could also be the basis of gourmet cooking, of mathematical reasoning, and of medical advances - human reasoning.

Grammar had once seemed to me too complicated to derive from any general human cognitive properties. It appeared to cry out for a specialized component of the brain, or what some linguists call the language organ. But such an organ becomes implausible if we can show that it is not needed, because there are other forces that can explain language as both an ontogenetic and a phylogenetic fact.

Many researchers have discussed the kinds of things that hunters and gatherers needed to talk about and how these influenced language evolution. Our ancestors had to talk about things and events, about relative quantities, and about the contents of the minds of their conspecifics, among other things. If you can't talk about things and what happens to them (events) or what they are like (states), you can't talk about anything. So all languages need verbs and nouns. But I have been convinced by the research of others, as well as my own, that if a language has these, then the basic skeleton of the grammar largely follows. The meanings of verbs require a certain number of nouns and those nouns plus the verb make simple sentences, ordered in logically restricted ways. Other permutations of this foundational grammar follow from culture, contextual prominence, and modification of nouns and verbs. There are other components to grammar, but not all that many. Put like this, as I began to see things, there really doesn't seem to be much need for grammar proper to be part of the human genome as it were. Perhaps there is even much less need for grammar as an independent entity than we might have once thought.


DAVID DALRYMPLE
Student, MIT's Center for Bits and Atoms; Researcher, Internet 0, Fab Lab Thinner Clients for South Africa, Conformal Computing

Maybe MBAs Should Design Computers After All

Not that long ago, I was under the impression that the basic problem of computer architecture had been solved. After all, computers got faster every year, and gradually whole new application domains emerged. There was constantly more memory available, and software hungrily consumed it. Each new computer had a bigger power supply, and more airflow to extract the increasing heat from the processor.

Now, clock speeds aren't rising quite as quickly, and the progress that is made doesn't seem to help our computers start up or run any faster. The traditions of the computing industry, some going as far back as the stored-program machines John von Neumann helped design in the 1940s and 1950s, are starting to grow obsolete. The more slowly computers seem to get faster, and the more deeply I understand the way things actually work, the more apparent these problems become to me. They really come to light when you think about a computer as a business.

Imagine if your company or organization had one fellow [the CPU] who sat in an isolated office, and refused to talk with anyone except his two most trusted deputies [the Northbridge and Southbridge], through which all the actual work the company does must be funneled. Because this one man — let's call him Bob — is so overloaded doing all the work of the entire company, he has several assistants [memory controllers] who remember everything for him. They do this through a complex system [virtual memory] of file cabinets of various sizes [physical memories], over the organization of which they have strictly limited autonomy.

Because it is faster to find things in the smaller cabinets [RAM], where there is less to sift through, Bob asks them to put the most commonly used information there. But since he is constantly switching between different tasks, the assistants must swap in and out the files in the smaller cabinets with those in the larger ones whenever Bob works on something different ["thrashing"]. The largest file cabinet is humongous, and rotates slowly in front of a narrow slit [magnetic storage]. The assistant in charge of it must simply wait for the right folder to appear in front of him before passing it along [disk latency].

Any communication with customers must be handled through a team of receptionists [I/O controllers] who don't take the initiative to relay requests to one of Bob's deputies. When Bob needs customer input to continue on a difficult problem, he drops what he is doing to chase after his deputy to chase after a receptionist to chase down the customer, thus preventing work for other customers from being done in the meantime.

This model is clearly horrendous for numerous reasons. If any staff member goes out to lunch, the whole operation is likely to grind to a halt. Tasks that ought to be quite simple turn out to take a lot of time, since Bob must re-acquaint himself with the issues in question. If a spy gains Bob's trust, all is lost. The only way to make the model any better without giving up and starting over is to hire people who just do their work faster and spend more hours in the office. And yet, this is the way almost every computer in the world operates today.

It is much more sane to hire a large pool of individuals, and, depending on slow-changing customer needs, organize them into business units and assign them to customer accounts. Each person keeps track of his own small workload, and everyone can work on a separate task simultaneously. If the company suddenly acquires new customers, it can recruit more staff instead of forcing Bob to work overtime. If a certain customer demands more attention than was foreseen, more people can be devoted to the effort. And perhaps most importantly, collaboration with other businesses becomes far more meaningful than the highly coded, formal game of telephone that Bob must play with Frank, who works in a similar position at another corporation [a server]. Essentially, this is a business model problem as much as a computer science one.
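
A toy sketch can make the contrast concrete. The snippet below is purely illustrative and not anything from the essay: the 0.1-second "customer request", the twenty customers and the pool of ten workers are invented stand-ins. One loop plays the overloaded Bob, handling every request in sequence; a thread pool plays the staff of individuals, each with its own small workload.

# A toy contrast between the "Bob" model (one worker, everything funneled
# through him) and a pool of workers each handling their own tasks.
# The workload and timings are hypothetical, for illustration only.
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(customer_id):
    """Pretend to serve one customer, e.g. by waiting on I/O."""
    time.sleep(0.1)                      # stand-in for disk or network latency
    return "served customer %d" % customer_id

customers = range(20)

start = time.time()
serial = [handle_request(c) for c in customers]            # one overloaded Bob
print("one Bob:     %.2f seconds" % (time.time() - start))

start = time.time()
with ThreadPoolExecutor(max_workers=10) as pool:           # a staff of workers
    parallel = list(pool.map(handle_request, customers))
print("worker pool: %.2f seconds" % (time.time() - start))

On an ordinary machine the second timing should come out roughly ten times shorter, because the waiting is no longer funneled through a single worker.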

These complaints only scratch the surface of the design flaws of today's computers. On an extremely low level, with voltages, charge, and transistors, energy is handled recklessly, causing tremendous heat, which would melt the parts in a matter of seconds were it not for the noisy cooling systems we find in most computers. And on a high level, software engineers have constructed a city of competing abstractions based on the fundamentally flawed "CPU" idea.

So I have changed my mind. I used to believe that computers were on the right track, but now I think the right thing to do is to move forward from our 1950s models to a ground-up, fundamentally distributed computing architecture. I started to use computers at 17 months of age and started programming them at 5, so I took the model for granted. But the present stagnation of perceived computer performance, and the counter-intuitiveness of programming languages, led me to question what I was born into and wonder if there's a better way. Now I'm eager to help make it happen. When discontent changes your mind, that's innovation.


MAX TEGMARK
Physicist, MIT; Researcher, Precision Cosmology

Do we need to understand consciousness to understand physics?  I used to answer "yes", thinking that we could never figure out the elusive "theory of everything" for our external physical reality without first understanding the distorting mental lens through which we perceive it.

After all, physical reality has turned out to be very different from how it seems, and I feel that most of our notions about it have turned out to be illusions. The world looks like it has three primary colors, but that number three tells us nothing about the world out there, merely something about our senses: that our retina has three kinds of cone cells. The world looks like it has impenetrably solid and stationary objects, but all except a quadrillionth of the volume of a rock is empty space between particles in restless schizophrenic vibration. The world feels like a three-dimensional stage where events unfold over time, but Einstein's work suggests that change is an illusion, time being merely the fourth dimension of an unchanging space-time that just is, never created and never destroyed, containing our cosmic history like a DVD contains a movie. The quantum world feels random, but Everett's work suggests that randomness too is an illusion, being simply the way our minds feel when cloned into diverging parallel universes.

The ultimate triumph of physics would be to start with a mathematical description of the world from the "bird's eye view" of a mathematician studying the equations (which are ideally simple enough to fit on her T-shirt) and to derive from them the "frog's eye view" of the world, the way her mind subjectively perceives it. However, there is also a third and intermediate "consensus view" of the world. From your subjectively perceived frog perspective, the world turns upside down when you stand on your head and disappears when you close your eyes, yet you subconsciously interpret your sensory inputs as though there is an external reality that is independent of your orientation, your location and your state of mind. It is striking that although this third view involves censorship (like rejecting dreams), interpolation (as between eye-blinks) and extrapolation (like attributing existence to unseen cities) of your frog's eye view, independent observers nonetheless appear to share this consensus view. Although the frog's eye view looks black-and-white to a cat, iridescent to a bird seeing four primary colors, and still more different to a bee seeing polarized light, a bat using sonar, a blind person with keener touch and hearing, or the latest robotic vacuum cleaner, all agree on whether the door is open.

This reconstructed consensus view of the world that humans, cats, aliens and future robots would all agree on is not free from some of the above-mentioned shared illusions. However, it is by definition free from illusions that are unique to biological minds, and therefore decouples from the issue of how our human consciousness works. This is why I've changed my mind: although understanding the detailed nature of human consciousness is a fascinating challenge in its own right, it is not necessary for a fundamental theory of physics, which need "only" derive the consensus view from its equations.

In other words, what Douglas Adams called "the ultimate question of life, the universe and everything" splits cleanly into two parts that can be tackled separately: the challenge for physics is deriving the consensus view from the bird's eye view, and the challenge for cognitive science is to derive the frog's eye view from the consensus view. These are two great challenges for the third millennium. They are each daunting in their own right, and I'm relieved that we need not solve them simultaneously.


ROBERT SAPOLSKY
Neuroscientist, Stanford University; Author, A Primate's Memoir

Well, my biggest change of mind came only a few years ago. It was the outcome of a painful journey of self-discovery, where my wife and children stood behind me and made it possible, where I struggled with all my soul, and all my heart and all my might. But that had to do with my realizing that Broadway musicals are not cultural travesties, so it's a little tangential here. Instead I'll focus on science.
 
I'm both a neurobiologist and a primatologist, and I've changed my mind about plenty of things in both of these realms. But the most fundamental change is one that transcends either of those disciplines — this was my realizing that the most interesting and important things in the life sciences are not going to be explained with sheer reductionism.

A specific change of mind concerned my work as a neurobiologist.

This came about 15 years ago, and it challenged neurobiological dogma that I had learned in pre-school, namely that the adult brain does not make new neurons. This fact had always been a point of weird pride in the field — hey, the brain is SO fancy and amazing that its elements are irreplaceable, not like some dumb-ass simplistic liver that's so totally fungible that it can regrow itself. And what this fact also reinforced, in passing, was the dogma that the brain is set in stone very early on in life, that there are all sorts of things that can't be changed once a certain time-window has passed.

Starting in the 1960's, a handful of crackpot scientists had been crying in the wilderness about how the adult brain does make new neurons. At best, their unorthodoxy was ignored; at worst, they were punished for it. But by the 1990's, it had become clear that they were right. And "adult neurogenesis" has turned into the hottest subject in the field — the brain makes new neurons, makes them under interesting circumstances, fails to under other interesting ones.

The new neurons function, are integrated into circuits, might even be required for certain types of learning. And the phenomenon is a cornerstone of a new type of neurobiological chauvinism — part of the very complexity and magnificence of the brain is how it can rebuild itself in response to the world around it.

So, I'll admit, this business about new neurons was a tough one for me to assimilate. I wasn't invested enough in the whole business to be in the crowd indignantly saying, No, this can't be true. Instead, I just tried to ignore it. "New neurons", christ, I can't deal with this, turn the page. And after an embarrassingly long time, enough evidence had piled up that I had to change my mind and decide that I needed to deal with it after all. And it's now one of the things that my lab studies.

The other change concerned my life as a primatologist, where I have been studying male baboons in East Africa. This also came in the early 90's. I study what social behavior has to do with health, and my shtick always was that if you want to know which baboons are going to be festering with stress-related disease, look at the low-ranking ones.  Rank is physiological destiny, and if you have a choice in the matter, you want to win some critical fights and become a dominant male, because you'll be healthier. And my change of mind involved two pieces.

The first was realizing, from my own data and that of others, that being dominant has far less to do with winning fights than with social intelligence and impulse control. The other was realizing that while health has something to do with social rank, it has far more to do with personality and social affiliation — if you want to be a healthy baboon, don't be a socially isolated one. This particular shift has something to do with the accretion of new facts, new statistical techniques for analyzing data, blah blah. Probably most importantly, it has to do with the fact that I was once a hermetic 22-year-old studying baboons and now, 30 years later, I've changed my mind about a lot of things in my own life.


TOR NØRRETRANDERS
Science Writer; Consultant; Lecturer, Copenhagen; Author, The Generous Man

Permanent Reincarnation

I have changed my mind about my body. I used to think of it as a kind of hardware on which my mental and behavioral software was running. Now, I primarily think of my body as software. 

My body is not like a typical material object, a stable thing.  It is more like a flame, a river or an eddy. Matter is flowing through it all the time. The constituents are being replaced over and over again.

A chair or a table is stable because the atoms stay where they are. The stability of a river stems from the constant flow of water through it.

98 percent of the atoms in the body are replaced every year. 98 percent! Water molecules stay in your body for two weeks (and for an even shorter time in a hot climate); the atoms in your bones stay there for a few months. Some atoms stay for years. But hardly a single atom stays with you in your body from cradle to grave.

What is constant in you is not material. An average person takes in 1.5 tons of matter every year as food, drinks and oxygen. All this matter has to learn to be you. Every year. New atoms will have to learn to remember your childhood.
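
The 1.5-ton figure is easy to sanity-check with round numbers; the daily intakes below are rough assumptions of mine, not figures from the essay:

\[
(1\ \mathrm{kg\ of\ food} + 2.5\ \mathrm{kg\ of\ water} + 0.8\ \mathrm{kg\ of\ oxygen})\ \text{per day} \times 365\ \text{days} \approx 1.6\ \mathrm{tons\ per\ year},
\]

which is in the same ballpark as the essay's 1.5 tons.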

These numbers have been known for half a century or more, mostly from studies of radioactive isotopes. Physicist Richard Feynman said in 1955: "Last week's potatoes! They now can remember what was going on in your mind a year ago."

But why is this simple insight not on the all-time Top 10 list of important discoveries? Perhaps because it tastes a little like spiritualism and idealism? Only the ghosts are for real? Wandering souls? 

But digital media now makes it possible to think of all this in a simple way. The music I danced to as a teenager has been moved from vinyl LPs to magnetic audio tapes to CDs to iPods and whatnot. The physical representation can change and is not important — as long as it is there. The music can jump from medium to medium, but it is lost if it does not have a representation. This physics of information was sorted out by Rolf Landauer in the 1960s. Likewise, our memories can move from potato-atoms to burger-atoms to banana-atoms. But the moment they are on their own, they are lost.

We reincarnate ourselves all the time. We constantly give our personality new flesh. I keep my mental life alive by making it jump from atom to atom. A constant flow. Never the same atoms, always the same river. No flow, no river. No flow, no me.

This is what I call permanent reincarnation: Software replacing its hardware all the time. Atoms replacing atoms all the time. Life. This is very different from religious reincarnation with souls jumping from body to body (and souls sitting out there waiting for a body to make a home in).

There has to be material continuity for permanent reincarnation to be possible. The software is what is preserved, but it cannot live on its own. It has to jump from molecule to molecule, always in carnation.

I have changed my mind about the stability of my body: It keeps changing all the time. Otherwise I could not stay the same.


HELEN FISHER
Research Professor, Department of Anthropology, Rutgers University; Author, Why We Love

Planned Obsolescence?  The Four-Year Itch

When asked why all of her marriages failed, anthropologist Margaret Mead apparently replied, "I beg your pardon, I have had three marriages and none of them was a failure."  There are many people like Mead.  Some 90% of Americans marry by middle age.  And when I looked at United Nations data on 97 other societies, I found that more than 90% of men and women eventually wed in the vast majority of these cultures, too.  Moreover, most human beings around the world marry one person at a time: monogamy.  Yet, almost everywhere people have devised social or legal means to untie the knot.  And where they can divorce — and remarry — many do.

So I had long suspected this human habit of "serial monogamy" had evolved for some biological purpose.  Planned obsolescence of the pairbond?  Perhaps the mythological "seven-year itch" evolved millions of years ago to enable a bonded pair to rear two children through infancy together.  If each departed after about seven years to seek "fresh features," as poet Lord Byron put it, both would have ostensibly reproduced themselves and both could breed again — creating more genetic variety in their young.

So I began to cull divorce data on 58 societies collected since 1947 by the Statistical Office of the United Nations.  My mission: to prove that the "seven year itch" was a worldwide biological phenomenon associated in some way with rearing young.  

Not to be.  My intellectual transformation came while I was poring over these divorce statistics in a rambling cottage, a shack really, on the Massachusetts coast one August morning.  I regularly got up around 5:30, went to a tiny desk that overlooked the deep woods, and pored over the pages I had Xeroxed from the United Nations Demographic Yearbooks.  But in country after country, and decade after decade, divorces tended to peak (the divorce mode) during and around the fourth year of marriage.  There were variations, of course.  Americans tended to divorce between the second and third year of marriage, for example.  Interestingly, this corresponds with the normal duration of intense, early stage, romantic love — often about 18 months to 3 years.  Indeed, in a 2007 Harris poll, 47% of American respondents said they would depart an unhappy marriage when the romance wore off, unless they had conceived a child.

Nevertheless, there was no denying it:  Among these hundreds of millions of people from vastly different cultures, three patterns kept emerging.  Divorces regularly peaked during and around the fourth year after wedding.  Divorces peaked among couples in their late twenties.  And the more children a couple had, the less likely they were to divorce: some 39% of worldwide divorces occurred among couples with no dependent children; 26% occurred among those with one child; 19% occurred among couples with two children; and 7% of divorces occurred among couples with three young.

I was so disappointed.  I mulled about this endlessly.  My friend used to wave his hand over my face, saying, "Earth to Helen; earth to Helen."  Why do so many men and women divorce during and around the 4-year mark; at the height of their reproductive years; and often with a single child?  It seemed like such an unstable reproductive strategy.  Then suddenly I got that "ah-ha" moment:  Women in hunting and gathering societies breastfeed around the clock, eat a low-fat diet and get a lot of exercise — habits that tend to inhibit ovulation.  As a result, they regularly space their children about four years apart.  Thus, the modern duration of many marriages—about four years—conforms to the traditional period of human birth spacing, four years. 

Perhaps human parental bonds originally evolved to last only long enough to raise a single child through infancy, about four years, unless a second infant was conceived.  By age five, a youngster could be reared by mother and a host of relatives. Equally important, both parents could choose a new partner and bear more varied young.

My new theory fit nicely with data on other species.  Only about three percent of mammals form a pairbond to rear their young.  Take foxes.  The vixen's milk is low in fat and protein; she must feed her kits constantly; and she will starve unless the dog fox brings her food.  So foxes pair in February and rear their young together.  But when the kits leave the den in midsummer, the pairbond breaks up.  Among foxes, the partnership lasts only through the breeding season.  This pattern is common in birds.  Among the more than 8,000 avian species, some 90% form a pairbond to rear their young.  But most do not pair for life. A male and female robin, for example, form a bond in the early spring and rear one or more broods together.  But when the last of the fledglings fly away, the pairbond breaks up.

Like many other pair-bonding creatures, humans have probably inherited a tendency to love and love again—to create more genetic variety in our young.  We aren't puppets on a string of DNA, of course.  Today some 57% of American marriages last for life.  But deep in the human spirit is a restlessness in long relationships, born of a time long gone, as poet John Dryden put it, "when wild in wood the noble savage ran."


STEVE NADIS
Science writer; Contributing Editor, Astronomy Magazine


The Myth Of The "Open Mind"

When I was 21, I began working for the Union of Concerned Scientists (UCS) in Cambridge, Massachusetts. I was still an undergraduate at the time, planning on doing a brief research stint in energy policy before finishing college and heading to graduate school in physics. That "brief research stint" lasted about seven years, off and on, and I never did make it to graduate school. But the experience was instructive nevertheless.

When I started at UCS in the 1970s, nuclear power safety was a hot topic, and I squared off in many debates against nuclear proponents from utility companies, nuclear engineering departments, and so forth regarding reactor safety, radioactive wastes, and the viability of renewable energy alternatives. The next issue I took on for UCS was the nuclear arms race, which was equally polarized. (The neocons of that day weren't "neo" back then; they were just cons.) As with nuclear safety, there was essentially no common ground between the two sides. Each faction was invariably trying to do the other in, through oral rhetoric and tendentious prose, always looking for new material to buttress their case or undermine that of their opponents.

Even though the organization I worked for was called the Union of Concerned Scientists, and even though many of the staff members there referred to me as a "scientist" (despite my lack of academic credentials), I knew that what I was doing was not science. (Nor were the many physics PhDs in arms control and energy policy doing science either.) In the back of my head, I always assumed that "real science" was different — that scientists are guided by facts rather than by ideological positions, personal rivalries, and whatnot.

In the decades since, I've learned that while this may be true in many instances, oftentimes it's not. When it comes to the biggest, most contentious issues in physics and cosmology — such as the validity of inflationary theory, string theory, or the multiverse/landscape scenario — the image of the objective truth seeker, standing above the fray, calmly sifting through the evidence without preconceptions or prejudice, may be less accurate than the adversarial model of our justice system. Both sides, to the extent there are sides on these matters, are constantly assembling their briefs, trying to convince themselves as well as the jury at large, while at the same time looking for flaws in the arguments of the opposing counsel.

This fractionalization may stem from scientific intuition, political or philosophical differences,  personal grudges, or pure academic competition. It's not surprising that this happens, nor is it necessarily a bad thing. In fact, it's my impression that this approach works pretty well in the law and in science too. It means that, on the big things at least, science will be vetted; it has to withstand scrutiny, pass muster.

But it's not a cold, passionless exercise either. At its heart, science is a human endeavor, carried out by people. When the questions are truly ambitious, it takes a great personal commitment to make any headway — a big investment in energy and in emotion as well. I know from having met with many of the lead researchers that the debates can get heated, sometimes uncomfortably so. More importantly, when you're engaged in an epic struggle like this — trying, for instance, to put together a theory of broad sweep — it may be difficult, if not impossible, to keep an "open mind" because you may be well beyond that stage, having long since cast your lot with a particular line of reasoning. And after making an investment over the course of many years, it's natural to want to protect it. That doesn't mean you can't change your mind — and I know of several cases where this has occurred — but, no matter what you do, it's never easy to shift from forward to reverse.

Although I haven't worked as a scientist in any of these areas, I have written about many of the "big questions" and know how hard it is to get all the facts lined up so that they fit together into something resembling an organic whole. Doing that, even as a mere scribe, involves periods of single-minded exertion, and during that process the issues can almost take on a life of their own, at least while you're actively thinking about them. Before long, of course, you've moved on to the next story and the excitement of the former recedes. As the urgency fades, you start wondering why you felt so strongly about the landscape or eternal inflation or whatever it was that had taken over your desk some months ago.

It's different, of course, for researchers who may stake out an entire career — or at least big chunks thereof — in a certain field.  You're obliged to keep abreast of all that's going on of note, which means your interest is continually renewed. As new data comes in, you try to see how it fits in with the pieces of the puzzle you're already grappling with. Or if something significant emerges from the opposing camp, you may instinctively seek out the weak spots, trying to see how those guys messed up this time.

It's possible, of course, that a day may come when, try as you might, you can't find the weak spots in the other guy's story. After many attempts and an equal number of setbacks, you may ultimately have to accede to the view of an intellectual, if not personal, rival. Not that you want to but rather because you can't see any way around it. On the one hand, you might chalk it up as a defeat, something that will hopefully build character down the road. But in the grand scheme of things, it's more of a victory — a sign that sometimes our adversarial system of science actually works.


PAUL STEINHARDT
Physicist; Albert Einstein Professor of Science, Princeton University; Coauthor, Endless Universe: A New History of the Cosmos

What created the structure of the universe?

Most cosmologists would say the answer is "inflation," and, until recently, I would have been among them. But "facts have changed my mind" — and I now feel compelled to seek a new explanation that may or may not incorporate inflation.

The idea always seemed incredibly simple. Inflation is a period of rapid accelerated expansion that can transform the chaos emerging from the big bang into the smooth, flat homogeneity observed by astronomers. If one likens the conditions following the bang to a wrinkled and twisted sheet of perfectly elastic rubber, then inflation corresponds to stretching the sheet at faster-than-light speeds until no vestige of its initial state remains. The "inflationary energy" driving the accelerated expansion then decays into the matter and radiation seen today and the stretching slows to a modest pace that allows the matter to condense into atoms, molecules, dust, planets, stars and galaxies.

I would describe this version as the "classical view" of inflation in two senses. First, this is the historic picture of inflation first introduced and now appearing in most popular descriptions. Second, this picture is founded on the laws of classical physics, assuming quantum physics plays a minor role. Unfortunately, this classical view is dead wrong. Quantum physics turns out to play an absolutely dominant role in shaping the inflationary universe. In fact, inflation amplifies the randomness inherent in quantum physics to produce a universe that is random and unpredictable.

This realization has come slowly. Ironically, the role of quantum physics was believed to be a boon to the inflationary paradigm when it was first considered twenty-five years ago by several theorists, including myself. The classical picture of inflation could not be strictly true, we recognized, or else the universe would be so smooth after inflation that galaxies and other large-scale structures would never form. However, inflation ends through the quantum decay of inflationary energy into matter and radiation. The quantum decay is analogous to the decay of radioactive uranium, in which there is some mean rate of decay but inherent unpredictability as to when any particular uranium nucleus will decay. Long after most uranium nuclei have decayed, there remain some nuclei that have yet to decay.

Similarly, inflationary energy decays at slightly different times in different places, leading to spatial variations in the temperature and matter density after inflation ends. The "average" statistical pattern appears to agree beautifully with the pattern of microwave background radiation emanating from the earliest stages of the universe and to produce just the pattern of non-uniformities needed to explain the evolution and distribution of galaxies. The agreement between theoretical calculation and observations is a celebrated triumph of the inflationary picture.

But is this really a triumph? Only if the classical view were correct. In the quantum view, it makes no sense to talk about an "average" pattern. The problem is that, as in the case of uranium nuclei, there always remain some regions of space in which the inflationary energy has not yet decayed into matter and radiation at all. Although one might have guessed the undecayed regions are rare, they expand so much faster than those that have decayed that they soon overtake the volume of the universe. The patches where inflationary energy has decayed and galaxies and stars have evolved become the oddity — rare pockets surrounded by space that continues to inflate away.
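
The "overtaking" argument can be put in one line. As a rough sketch (mine, not Steinhardt's calculation), suppose every still-inflating region stretches at a Hubble rate H while its inflationary energy decays with probability Γ per unit time; both symbols are introduced here only for illustration. The still-inflating volume then obeys

\[
\frac{dV_{\mathrm{inflating}}}{dt} = (3H - \Gamma)\,V_{\mathrm{inflating}}
\quad\Longrightarrow\quad
V_{\mathrm{inflating}}(t) \propto e^{(3H-\Gamma)t},
\]

which grows without bound whenever Γ < 3H. Since inflation only does its stretching if the decay is slow compared with the expansion, that inequality is typically satisfied, and the undecayed regions come to dominate the volume even though any particular point eventually decays.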

The process repeats itself over and over, with the number of pockets and the volume of surrounding space increasing from moment to moment. Due to random quantum fluctuations, pockets with all kinds of properties are produced — some flat, but some curved; some with variations in temperature and density like what we observe, but some not; some with forces and physical laws like those we experience, but some with different laws. The alarming result is that there are an infinite number of pockets of each type and, despite over a decade of attempts to avoid the situation, no mathematical way of deciding which is more probable has been shown to exist.

Curiously, this unpredictable "quantum view" of inflation has not yet found its way into the consciousness of many astronomers working in the field, let alone the greater scientific community or the public at large.

One often reads that recent measurements of the cosmic microwave background or the large-scale structure of the universe have verified a prediction of inflation. This invariably refers to a prediction based on the naïve classical view. But if the measurements ever come out differently, this could not rule out inflation. According to the quantum view, there are invariably pockets with matching properties.

And what of the theorists who have been developing the inflationary theory for the last twenty-five years? Some, like me, have been in denial, harboring the hope that a way can be found to tame the quantum effects and restore the classical view. Others have embraced the idea that cosmology may be inherently unpredictable, although this group is also vociferous in pointing out how observations agree with the (classical) predictions of inflation.

Speaking for myself, it may have taken me longer to accept inflation's quantum nature than it should have, but, now that facts have changed my mind, I cannot go back again. Inflation does not explain the structure of the universe. Perhaps some enhancement can explain why the classical view works so well, but then it will be that enhancement rather than inflation itself that explains the structure of the universe. Or maybe the answer lies beyond the big bang. Some of us are considering the possibility that the evolution of the universe is cyclic and that the structure was set by events that occurred before the big bang. One of the draws of this picture is that quantum physics does not play the same dominant role, and there is no escaping the cyclic picture's predictions of the uniformity, flatness and structure of the universe.


RODNEY A. BROOKS
Panasonic Professor of Robotics, MIT, and CTO, iRobot Corp; Author, Flesh and Machines

Computation as the Ultimate Metaphor

Our science, including mine, treats living systems as mechanisms at multiple levels of abstraction.  As we talk about how one bio-molecule docks with another, our explanations are purely mechanistic and our science never invokes "and then the soul intercedes and gets them to link up". The underlying assumption of molecular biologists is that their level of mechanistic explanation is ultimately adequate for high level mechanistic descriptions such as physiology and neuroscience to build on as a foundation.

Those of us who are computer scientists by training, and I'm afraid many collaterally damaged scientists of other stripes, tend to use computation as the mechanistic level of explanation for how living systems behave and "think".  I originally gleefully embraced the computational metaphor.

If we look back over recent centuries we will see the brain described as a hydrodynamic machine, as clockwork, and as a steam engine.  When I was a child in the 1950's I read that the human brain was a telephone switching network.  Later it became a digital computer, and then a massively parallel digital computer.  A few years ago someone put up their hand after a talk I had given at the University of Utah and asked a question I had been waiting for for a couple of years: "Isn't the human brain just like the world wide web?"  The brain always seems to be one of the most advanced technologies that we humans currently have.

The metaphors we have used in the past for the brain have not stood the test of time.  I doubt that our current metaphor of the brain as a network of computers doing computations is going to stand for all eternity either.

Note that I do not doubt that there are mechanistic explanations for how we think, and I certainly proceed with my work of trying to build intelligent robots using computation as a primary tool for expressing mechanisms within those robots.

But I have relatively recently come to question computation as the ultimate metaphor to be used in both the understanding of living systems and as the only important design tool for engineering intelligent artifacts.

Some of my colleagues have managed to recast Pluto's orbital behavior as the body itself carrying out computations on forces that apply to it.  I think we are perhaps better off using Newtonian mechanics (with a little Einstein thrown in) to understand and predict the orbits of planets and other bodies.  It is so much simpler.
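
To make the "so much simpler" point concrete, here is a minimal sketch (mine, not Brooks's) of predicting an orbit directly from Newton's law of gravitation; the units, time step and starting conditions are arbitrary choices that happen to give a closed, circular orbit.

# Predicting an orbit from Newton's gravity, with no talk of the planet
# "computing" anything. Units and initial conditions are arbitrary.
import math

GM = 1.0                     # gravitational parameter of the central body
x, y = 1.0, 0.0              # initial position
vx, vy = 0.0, 1.0            # initial velocity (circular for these units)
dt = 0.001                   # time step

for _ in range(10000):
    r = math.hypot(x, y)
    ax, ay = -GM * x / r**3, -GM * y / r**3   # acceleration = -GM * r_vec / |r|^3
    vx, vy = vx + ax * dt, vy + ay * dt       # update velocity first,
    x, y = x + vx * dt, y + vy * dt           # then position (semi-implicit Euler)

print("radius after about 1.6 orbits: %.3f" % math.hypot(x, y))   # stays near 1.0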

Likewise we can think about spike trains as codes and worry about neural coding.  We can think about human memory as data storage and retrieval.  And we can think about walking over rough terrain as computing the optimal place to put down each of our feet.  But I suspect that somewhere down the line we are going to come up with better, less computational metaphors.  The entities we use for metaphors may be more complex but the useful ones will lead to simpler explanations.

Just as the notion of computation is only a short step beyond discrete mathematics, but opens up vast new territories of questions and technologies, these new metaphors might well be just a few steps beyond where we are now in understanding organizational dynamics, but they may have rich and far reaching implications in our abilities to understand the natural world and to engineer new creations.


ROBERT TRIVERS
Evolutionary Biologist, Rutgers University; Coauthor, Genes In Conflict: The Biology of Selfish Genetic Elements

The Science of Self-deception Requires a Deep Understanding of Biology

When I first saw the possibility (some 30 years ago) of grounding a science of human self-deception in evolutionary logic (based on its value in furthering deception of others), I imagined joining evolutionary theory with animal behavior and with those parts of psychology worth preserving. The latter I regarded as a formidable hurdle since so much of psychology (depth and social) appeared to be pure crap, or more generously put, without any foundation in reality or logic.

Now after a couple of years of intensive study of the subject, I am surprised at the number of areas of biology that are important, if not key, to the subject yet are relatively undeveloped by biologists. I am also surprised that many of the important new findings in this regard have been made by psychologists and not biologists.

It was always obvious that when neurophysiology actually became a science (which it did when it learned to measure on-going mental activity) it would be relevant to deceit and self-deception, and this is becoming more apparent every day. Also, endocrinology could scarcely be irrelevant, and Richard Wrangham has recently argued for an intimate connection between testosterone and self-deception in men, but the connections must be much deeper still. The proper way to conceptualize the endocrine system (as David Haig has pointed out to me) is as a series of signals with varying half-lives which give relevant information to organs downstream, and many such signals may be relevant to deceit and self-deception and to selves-deception, as defined below.

One thing I never imagined was that the immune system would be a vital component of any science of self-deception, yet two lines of work within psychology make this clear. Richard Davidson and co-workers have shown that relatively positive, up, approach-seeking people are more likely to be left-brain activated (as measured by EEG) and show stronger immune responses to a novel challenge (flu vaccine) than are avoidance, negative emotion (depression, anxiety) right-brained people.  At the same time, James Pennebaker and colleagues have shown that the very act of repressing information from consciousness lowers immune function while sharing information with others (or even a diary) has the opposite effect. Why should the immune system be so important and why should it react in this way?

A key variable in my mind is that the immune system is an extremely expensive one—we produce a grapefruit-sized set of tissue every two weeks—and we can thus borrow against it, apparently in part for brain function. But this immediately raises the larger question of how much we can borrow against any given system—yes fat for energy, bone and teeth when necessary (as for a child in utero), muscle when not used and so on—but with what effects? Why immune function and repression?

While genetics is, in principle, important to all of biology, I thought it would be irrelevant to the study of self-deception until way into the distant future. Yet the 1980s produced the striking discovery that the maternal half of our genome could act against the paternal, and vice-versa, discoveries beautifully exploited in the 90's and 00's by David Haig to produce a range of expected (and demonstrated) internal conflicts which must inevitably interact with self-deception directed toward others. Put differently, internal genetic conflict leads to a quite novel possibility: selves-deception, equally powerful maternal and paternal halves selected to deceive each other (with unknown effects on deception of others).

And consider one of the great mysteries of mental biology. The human brain consumes about 20% of resting metabolic rate come rain or shine, whether depressed or happy, asleep or awake. Why? And why is the brain so quick to die when deprived of this energy? What is the cellular basis for all of this? How exactly does borrowing from other systems, such as immune, interact with this basic metabolic cost? Biologists have been very slow to see the larger picture and to see that fundamental discoveries within psychobiology require a deeper understanding of many fundamental biological processes, especially the logic of energy borrowed from various sources.

Finally, let me express a surprise about psychology. It has led the way in most of the areas mentioned, e.g. immune effects, neurophysiology, brain metabolism. Also, while classical depth psychology (Freud and sundries) can safely be thrown overboard almost in its entirety, social psychology has produced some very clever and hopeful methods, as well as a body of secure results on biased human mentation, from perception, to organization of data, to analysis, to further propagation. Daniel Gilbert gives a well-appreciated lecture in which he likens the human mind to a bad scientist, everything from biased exposure to data and biased analysis of information to outright forgery. Hidden here is a deeper point. Science progresses precisely because it has a series of anti-deceit-and-self-deception devices built into it, from full description of experiments permitting exact replication, to explicit statement of theory permitting precise counter-arguments, to the preference for exploring alternative working hypotheses, to a statistical apparatus able to weed out the effects of chance, and so on.


LAURENCE C. SMITH
Professor of Geography, UCLA

Rapid climate change

The year 2007 marked three memorable events in climate science:  Release of the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR4); a decade of drought in the American West and the arrival of severe drought in the American Southeast; and the disappearance of nearly half of the polar sea-ice floating over the Arctic Ocean. The IPCC report (a three-volume, three-thousand page synthesis of current scientific knowledge written for policymakers) and the American droughts merely hardened my conviction that anthropogenic climate warming is real and just getting going — a view shared, in the case of the IPCC, a few weeks ago by the Nobel Foundation. The sea-ice collapse, however, changed my mind that it will be decades before we see the real impacts of the warming. I now believe they will happen much sooner.

Let's put the 2007 sea-ice year into context. In the 1970's, when NASA first began mapping sea ice from microwave satellites, its annual minimum extent (in September, at summer's end) hovered close to 8 million square kilometers, about the area of the conterminous United States minus Ohio. In September 2007 it dropped abruptly to 4.3 million square kilometers, the area of the conterminous United States minus Ohio and all the other twenty-four states east of the Mississippi, as well as North Dakota, Minnesota, Missouri, Arkansas, Louisiana, and Iowa. Canada's Northwest Passage was freed of ice for the first time in human memory. From Bering Strait, where the U.S. and Russia brush lips, open blue water stretched almost to the North Pole.

What makes the 2007 sea-ice collapse so unnerving is that it happened too soon.  The ensemble averages of our most sophisticated climate model predictions, put forth in the IPCC AR4 report and various other model intercomparison studies, don't predict a downwards lurch of that magnitude for another fifty years. Even the aggressive models — the National Center for Atmospheric Research (NCAR) CCSM3 and the Centre National de Recherches Meteorologiques (CNRM) CM3 simulations, for example — must whittle ice until 2035 or later before the 2007 conditions can be replicated.  Put simply, the models are too slow to match reality. Geophysicists, accustomed to non-linearities and hard to impress after a decade of 'unprecedented' events, are stunned by the totter:  Apparently, the climate system can move even faster than we thought.  This has decidedly recalibrated scientists' attitudes — including my own — to the possibility that even the direst IPCC scenario predictions for the end of this century — 10 to 24 inch higher global sea levels, for example — may be prudish.

What does all this say to us about the future? The first is that rapid climate change — a nonlinearity that occurs when a climate forcing reaches a threshold beyond which little additional forcing is needed to trigger a large impact — is a distinct threat not well captured in our current generation of computer models. This situation will doubtless improve — as the underlying physics of the 2007 ice event and others such as the American Southeast drought are dissected, understood, and codified — but in the meantime, policymakers must work from the IPCC blueprint, which seems almost staid after the events of this summer and fall.  The second is that it now seems probable that the northern hemisphere will lose its ice lid far sooner than we ever thought possible.  Over the past three years experts have shifted from 2050, to 2035, to 2013 as plausible dates for an ice-free Arctic Ocean — estimates at first guided by models, then revised by reality.

The broader significance of vanishing sea ice extends far beyond suffering polar bears, new shipping routes, or even development of vast Arctic energy reserves. It is absolutely unequivocal that the disappearance of summer sea ice — regardless of exactly which year it arrives — will profoundly alter the northern hemisphere climate, particularly through amplified winter warming of at least twice the global average rate. Its further impacts on the world's precipitation and pressure systems are under study but are likely significant. Effects both positive and negative, from reduced heating oil consumption to outbreaks of fire and disease, will propagate far southward into the United States, Canada, Russia and Scandinavia. Scientists have expected such things eventually — but in 2007 we learned they may already be upon us.


LEE M. SILVER
Professor of Molecular Biology and Public Policy,  Woodrow Wilson School, Princeton; Author, Challenging Nature


"If we could just get people to understand the science, they'd agree with us." Not.

In an interview with the New York Times, shortly before he died, Francis Crick told a reporter, "the view of ourselves as [ensouled] 'persons' is just as erroneous as the view that the Sun goes around the Earth. This sort of language will disappear in a few hundred years. In the fullness of time, educated people will believe there is no soul independent of the body, and hence no life after death."

Like the vast majority of academic scientists and philosophers alive today, I accept Crick's philosophical assertion — that when your body dies, you cease to exist — without any reservations. I also used to agree with Crick's psychosocial prognosis — that modern education would inevitably give rise to a populace that rejected the idea of a supernatural soul. But on this point, I have changed my mind.

Underlying Crick's psychosocial claim is a common assumption: the minds of all intelligent people must operate according to the same universal principles of human nature. Of course, anyone who makes this assumption will naturally believe that their own mind-type is the universal one. In the case of Crick and most other molecular biologists, the assumed universal mind-type is highly receptive to the persuasive power of pure logic and rational analysis.

Once upon a time, my own worldview was similarly informed. I was convinced that scientific facts and rational argument alone could win the day with people who were sufficiently intelligent and educated. To my mind, the rejection of rational thought by such people was a sign of disingenuousness in the service of political or ideological goals.

My mind began to change one evening in November 2003. I had given a lecture at a small liberal arts college along with a member of The President's Council on Bioethics, whose views on human embryo research are diametrically opposed to my own. Surrounded by students at the wine and cheese reception that followed our lectures, the two of us began an informal debate about the true meaning and significance of changes in gene expression and DNA methylation during embryonic development. Six hours later, long after the last student had crept off to sleep, it was 4:00 am, and we were both still convinced that with just one more round of debate, we'd get the other to capitulate. It didn't happen.

Since this experience, I have purposely engaged other well-educated defenders of the irrational, as well as numerous students at my university, in spontaneous one-on-one debates about a host of contentious biological subjects including evolution, organic farming, homeopathy, cloned animals, "chemicals" in our food, and genetic engineering. Much to my chagrin, even after politics, ideology, economics, and other cultural issues have been put aside, there is often a refusal to accept scientific implications of rational argumentation.

While its mode of expression may change over cultures and time, irrationality and mysticism seem to be an integral part of normal human nature, even among highly educated people. No matter what scientific and technological advances are made in the future, I now doubt that supernatural beliefs will ever be eradicated from the human species.


GARY MARCUS
Psychologist, New York University; Author, The Birth of the Mind

What's Special About Human Language

When I was in graduate school, in the early 1990s, I learned two important things: that the human capacity for language was innate, and that the machinery that allowed human beings to learn language was "special", in the sense of being separate from the rest of the human mind.

Both ideas sounded great at the time. But (as far as I can tell now) only one of them turns out to be true.

I still think that I was right to believe in "innateness", the idea that the human mind arrives, fresh from the factory, with a considerable amount of elaborate machinery. When a human infant emerges from the womb, it has almost all the neurons it will ever have. All of the basic neural structures are already in place, and most or all of the basic neural pathways are established. There is, to be sure, lots of learning yet to come — an infant's brain is more rough draft than final product — but anybody who still imagines the infant human mind to be little more than an empty sponge isn't in touch with the realities of modern genetics and neuroscience. Almost half our genome is dedicated to the development of brain function, and those ten or fifteen thousand brain-related genes choreograph an enormous amount of biological sophistication.  Chomsky (whose classes I sat in on while in graduate school) was absolutely right to be insisting, for all these years, that language has its origins in the built-in structure of the mind.

But now I believe that I was wrong to accept the idea that language was separate from the rest of the human mind. It's always been clear that we can talk about what we think about, but when I was in graduate school it was popular to talk about language as being acquired by a separate "module" or "instinct" from the rest of cognition, by what Chomsky called a  "Language Acquisition Device" (or LAD). Its mission in life was to acquire language, and nothing else. 

In keeping with the idea of language as the product of a specialized in-born mechanism, we noted how quickly human toddlers acquired language, and how determined they were to do so; all normal human children acquire language, not just a select few raised in privileged environments, and they manage to do so rapidly, learning most of what they need to know in the first few years of life.  (The average adult, in contrast, often gives up around the time they have to face their fourth list of irregular verbs.)  Combine that with the fact that some children with normal intelligence couldn't learn language and that others with normal language lacked normal cognitive function, and I was convinced. Humans acquired language because they had a built-in module that was uniquely dedicated to that function.

Or so I thought then. By the late 1990s, I started looking beyond the walls of my own field (developmental psycholinguistics) and out towards a whole host of other fields, including genetics, neuroscience, and evolutionary biology.

The idea that most impressed me — and did the most to shake me of the belief that language was separate from the rest of the mind — goes back to Darwin. Not "survival of the fittest" (a phrase actually coined by Herbert Spencer) but his notion, now amply confirmed at the molecular level, that all biology is the product of what he called "descent with modification". Every species, and every biological system, evolves through a combination of inheritance (descent) and change (modification). Nothing, no matter how original it may appear, emerges from scratch.

Language, I ultimately realized, must be no different: it emerged quickly, in the space of a few hundred thousand years, and with comparatively little genetic change. It suddenly dawned on me that the striking fact that our genomes overlap almost 99% with those of chimpanzees must be telling us something: language couldn't possibly have started from scratch. There isn't enough room in the genome, or in our evolutionary history, for it to be plausible that language is completely separate from what came before.

Instead, I have now come to believe, language must be, largely, a recombination of spare parts, a kind of jury-rigged kluge built largely out of cognitive machinery that evolved for other purposes, long before there was such a thing as language. If there's something special about language, it is not the parts from which it is composed, but the way in which they are put together.

Neuroimaging studies seem to bear this out. Whereas we once imagined language to be produced and comprehended almost entirely by two purpose-built regions — Broca's area and Wernicke's area — we now see that many other parts of the brain are involved (e.g. the cerebellum and basal ganglia) and that the classic language areas (i.e. Broca's and Wernicke's) participate in other aspects of mental life (e.g., music and motor control) and have counterparts in other apes.

At the narrowest level, this means that psycholinguists and cognitive neuroscientists need to rethink their theories about what language is. But if there is a broader lesson, it is this: although we humans in many ways differ radically from any other species, our greatest gifts are built upon a genomic bedrock that we share with the many other apes that walk the earth.


LEE SMOLIN
Physicist, Perimeter Institute; Author, The Trouble With Physics

Although I have changed my mind about several ideas and theories, my longest struggle has been with the concept of time.  The most obvious and universal aspect of reality, as we experience it, is that it is structured as a succession of moments, each of which comes into being, supplanting what was just present and is now past.  But, as soon as we describe nature in terms of mathematical equations, the present moment and the flow of time seem to disappear, and time becomes just a number, a reading on an instrument, like any other.

Consequently, many philosophers and physicists argue that time is an illusion, that reality consists of the whole four-dimensional history of the universe, as represented in Einstein's theory of general relativity.  Some, like Julian Barbour, go further and argue that, when quantum theory is unified with gravity, time disappears completely.  The world is just a vast collection of moments which are represented by the "wave-function of the universe."  Time is not real; it is just an "emergent quantity" that is helpful to organize our observations of the universe when it is big and complex.

Other physicists argue that aspects of time are real, such as the relationships of causality, that record which events were the necessary causes of others. Penrose, Sorkin and Markopoulou have proposed models of quantum spacetime in which everything real reduces to these relationships of causality.

In my own thinking, I first embraced the view that quantum reality is timeless.  In our work on loop quantum gravity we were able to take this idea more seriously than people before us could, because we could construct and study exact wave-functions of the universe. Carlo Rovelli, Bianca Dittrich and others worked out in detail how time would "emerge" from the study of the question of what quantities of the theory are observable.

But, somehow, the more this view was worked out in detail the less I was convinced. This was partly due to technical challenges in realizing the emergence of time, and partly because some naïve part of me could never understand conceptually how the basic experience of the passage of time could emerge from a world without time.

So in the late 90s I embraced the view that time, as causality, is real. This fit best with the next stage of development of loop quantum gravity, which was based on quantum spacetime histories.  However, even as we continued to make progress on the technical side of these studies, I found myself worrying that the present moment and the flow of time were still nowhere represented.  And I had another motivation, which was to make sense of the idea that laws of nature could evolve in time.

Back in the early 90s I had formulated a view of laws evolving on a landscape of theories along with the universe they govern.  This had been initially ignored, but in the last few years there has been much study of dynamics on landscapes of theories. Most of these are framed in the timeless language of the "wavefunction  of the universe," in contrast to my original presentation, in which theories evolved in real time. As these studies progressed, it became clear that only those in which time played a role could generate  testable predictions — and this made me want  to think more deeply about time.

It is becoming clear to me that the mystery of the nature of time is connected with other fundamental questions such as the nature of truth in mathematics and whether there must be timeless laws of nature. Rather than being an illusion, time may be the only aspect of our present understanding of nature that is not temporary and emergent.


A. GARRETT LISI
Independent Theoretical Physicist; Author, "An Exceptionally Simple Theory of Everything"

I Used To Think I Could Change My Mind

As a scientist, I am motivated to build an objective model of reality. Since we always have incomplete information, it is eminently rational to construct a Bayesian network of likelihoods — assigning a probability for each possibility, supported by a chain of priors. When new facts arise, or if new conditional relationships are discovered, these probabilities are adjusted accordingly — our minds should change. When judgment or action is required, it is based on knowledge of these probabilities. This method of logical inference and prediction is the sine qua non of rational thought, and the method all scientists aspire to employ. However, the ambivalence associated with an even probability distribution makes it terribly difficult for an ideal scientist to decide where to go for dinner.
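
A minimal sketch, in Python, of the kind of Bayesian updating described above; the hypotheses, priors, and likelihoods are invented purely for illustration.

    # Bayes' rule: posterior is proportional to prior * likelihood,
    # renormalized over all hypotheses under consideration.
    def update(priors, likelihoods):
        """priors: hypothesis -> prior probability;
        likelihoods: hypothesis -> P(new evidence | hypothesis)."""
        unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
        total = sum(unnormalized.values())
        return {h: p / total for h, p in unnormalized.items()}

    # Toy example echoing the dinner problem: two restaurants, one piece
    # of evidence (a friend's recommendation), with made-up numbers.
    priors = {"thai": 0.5, "pizza": 0.5}
    likelihoods = {"thai": 0.8, "pizza": 0.3}
    print(update(priors, likelihoods))   # the probabilities shift toward "thai"

Each new observation simply feeds the previous posteriors back in as priors, which is the "chain of priors" the paragraph refers to.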

Even though I strive to achieve an impartial assessment of probabilities for the purpose of making predictions, I cannot consider my assessments to be unbiased. In fact, I no longer think humans are naturally inclined to work this way. When I casually consider the beliefs I hold, I am not readily able to assign them numerical probabilities. If pressed, I can manufacture these numbers, but this seems more akin to rationalization than rational thought. Also, when I learn something new, I do not immediately erase the information I knew before, even if it is contradictory. Instead, the new model of reality is stacked atop the old. And it is in this sense that a mind doesn't change; vestigial knowledge may fade over a long period of time, but it isn't simply replaced. This model of learning matches a parable from Douglas Adams, relayed by Richard Dawkins:

A man didn't understand how televisions work, and was convinced that there must be lots of little men inside the box, manipulating images at high speed. An engineer explained to him about high frequency modulations of the electromagnetic spectrum, about transmitters and receivers, about amplifiers and cathode ray tubes, about scan lines moving across and down a phosphorescent screen. The man listened to the engineer with careful attention, nodding his head at every step of the argument. At the end he pronounced himself satisfied. He really did now understand how televisions work. "But I expect there are just a few little men in there, aren't there?"

As humans, we are inefficient inference engines — we are attached to our "little men," some dormant and some active. To a degree, these imperfect probability assessments and pet beliefs provide scientists with the emotional conviction necessary to motivate the hard work of science. Without the hope that an improbable line of research may succeed where others have failed, difficult challenges would go unmet. People should be encouraged to take long shots in science, since, with so many possibilities, the probability of something improbable happening is very high. At the same time, this emotional optimism must be tempered by a rational estimation of the chance of success — we must not be so optimistic as to delude ourselves. In science, we must test every step, trying to prove our ideas wrong, because nature is merciless. To have a chance of understanding nature, we must challenge our predispositions. And even if we can't fundamentally change our minds, we can acknowledge that others working in science may make progress along their own lines of research. By accommodating a diverse variety of approaches to any existing problem, the scientific community will progress expeditiously in unlocking nature's secrets.


JOHN BAEZ
Mathematical Physicist

Should I be thinking about quantum gravity?

One of the big problems in physics — perhaps the biggest! — is figuring out how our two current best theories fit together. On the one hand we have the Standard Model, which tries to explain all the forces except gravity, and takes quantum mechanics into account.  On the other hand we have General Relativity, which tries to explain gravity, and does not take quantum mechanics into account. Both theories seem to be more or less on the right track — but until we somehow fit them together, or completely discard one or both, our picture of the world will be deeply schizophrenic.

It seems plausible that as a step in the right direction we should figure out a theory of gravity that takes quantum mechanics into account, but reduces to General Relativity when we ignore quantum effects (which should be small in many situations). This is what people mean by "quantum gravity" — the quest for such a theory.

The most popular approach to quantum gravity is string theory.  Despite decades of hard work by many very smart people, it's far from clear that this theory is successful. It's made no predictions that have been confirmed by experiment.  In fact, it's made few predictions that we have any hope of testing anytime soon!  Finding certain sorts of particles at the big new particle accelerator near Geneva would count as partial confirmation, but string theory says very little about the details of what we should expect. In fact, thanks to the vast "landscape" of string theory models that researchers are uncovering, it keeps getting harder to squeeze specific predictions out of this theory.

When I was a postdoc, back in the 1980s, I decided I wanted to work on quantum gravity. The appeal of this big puzzle seemed irresistible.  String theory was very popular back then, but I was skeptical of it.  I became excited when I learned of an alternative approach pioneered by Ashtekar, Rovelli and Smolin, called loop quantum gravity.

Loop quantum gravity was less ambitious than string theory. Instead of a "theory of everything", it only sought to be a theory of something: namely, a theory of quantum gravity.

So, I jumped aboard this train, and for about a decade I was very happy with the progress we were making. A beautiful picture emerged, in which spacetime resembles a random "foam" at very short distance scales, following the laws of quantum mechanics.

We can write down lots of theories of this general sort. However, we have never yet found one for which we can show that General Relativity emerges as a good approximation at large distance scales — the quantum soap suds approximating a smooth surface when viewed from afar, as it were.

I helped my colleagues Dan Christensen and Greg Egan do a lot of computer simulations to study this problem. Most of our results went completely against what everyone had expected.  But worse, the more work we did, the more I realized I didn't know what questions we should be asking!  It's hard to know what to compute to check that a quantum foam is doing its best to mimic General Relativity.

Around this time, string theorists took note of loop quantum gravity people and other critics — in part thanks to Peter Woit's blog, his book Not Even Wrong, and Lee Smolin's book The Trouble with Physics.  String theorists weren't used to criticism like this.  A kind of "string-loop war" began.  There was a lot of pressure for physicists to take sides for one theory or the other. Tempers ran high.

Jaron Lanier put it this way: "One gets the impression that some physicists have gone for so long without any experimental data that might resolve the quantum-gravity debates that they are going a little crazy."  But even more depressing was that as this debate raged on, cosmologists were making wonderful discoveries left and right, getting precise data about dark energy, dark matter and inflation.  None of this data could resolve the string-loop war! Why?  Because neither of the contending theories could make predictions about the numbers the cosmologists were measuring! Both theories were too flexible.

I realized I didn't have enough confidence in either theory to engage in these heated debates.  I also realized that there were other questions to work on: questions where I could actually tell when I was on the right track, questions where researchers cooperate more and fight less.  So, I eventually decided to quit working on quantum gravity.

It was very painful to do this, since quantum gravity had been my holy grail for decades.  After you've convinced yourself that some problem is the one you want to spend your life working on, it's hard to change your mind.  But when I finally did, it was tremendously liberating.

I wouldn't urge anyone else to quit working on quantum gravity. Someday, someone is going to make real progress.  When this happens, I may even rejoin the subject.  But for now, I'm thinking about other things.  And, I'm making more real progress understanding the universe than I ever did before.


KEN FORD
Retired Physicist & Writer; Coauthor (with John Archibald Wheeler), Geons, Black Holes, and Quantum Foam: A Life in Physics

I used to believe that the ethos of science, the very nature of science, guaranteed the ethical behavior of its practitioners. As a student and a young researcher, I could not conceive of cheating, claiming credit for the work of others, or fabricating data. Among my mentors and my colleagues, I saw no evidence that anyone else believed otherwise. And I didn't know enough of the history of my own subject to be aware of ethical lapses by earlier scientists. There was, I sensed, a wonderful purity to science. Looking back, I have to count naiveté as among my virtues as a scientist.

Now I have changed my mind, and I have changed it because of evidence, which is what we scientists are supposed to do. Various examples of cheating, some of them quite serious, have come to light in the last few decades, and misbehaviors in earlier times have been reported as well. Scientists are, as the saying goes, "only human," which, in my opinion, is neither an excuse nor an adequate explanation. Unfortunately, scientists are now subjected to greater competitive pressures, financial and otherwise, than was typical when I was starting out. Some — a few — succumb.

We do need to teach ethics as essential to the conduct of science, and we need to teach the simple lesson that in science crime doesn't pay. But above all, we need to demonstrate by example that the highest ethical standards should, and often do, come naturally.


JEFFREY EPSTEIN
Science Philanthropist

The question presupposes a well-defined "you", and an implied ability, under "your" control, to change your "mind". The "you", I now believe, is distributed amongst others (family, friends, those in hierarchical structures); suicide bombers, for instance, believe their sacrifice is for the other parts of their "you". The question carries with it an intention that I believe is out of one's control. My mind changed as a result of its interaction with its environment. Why? Because it is a part of it.


LAWRENCE KRAUSS
Physicist, Case Western Reserve University; Author, Atom

What is the Universe Made of and How Will it End?

Like 99% of particle physicists, and most cosmologists (perhaps 98% of theorists and 90% of observers, to be more specific), I was relatively certain that there was precisely enough matter in the universe to make it geometrically flat.  What does geometrically flat mean?  Well, according to general relativity it means there is a precise balance between the positive kinetic energy associated with the expansion of space, and the negative potential energy associated with the gravitational attraction of matter in the universe, so that the total energy is precisely zero. This is not only mathematically attractive; the only theory we have that explains why the universe looks the way it does also tends to predict a flat universe today.
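
This balance can be written down in the standard back-of-the-envelope Newtonian form (a sketch, not the full general-relativistic treatment); the numerical value below assumes an illustrative Hubble constant of about 70 km/s/Mpc.

    \begin{align*}
    E_{\mathrm{total}} &= \tfrac{1}{2}\, m\, (HR)^{2} - \frac{GMm}{R},
      \qquad M = \tfrac{4}{3}\pi R^{3}\rho \\
    E_{\mathrm{total}} = 0 \;\Rightarrow\; H^{2} &= \frac{8\pi G}{3}\,\rho
      \;\Rightarrow\; \rho = \rho_{c} \equiv \frac{3H^{2}}{8\pi G}
      \approx 9 \times 10^{-27}\ \mathrm{kg\,m^{-3}}
    \end{align*}

A geometrically flat universe is one whose average energy density sits at exactly this critical value; the question is then what supplies it.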

Now, the only problem with this prediction is that visible matter in the universe only accounts for a few percent of the total amount of matter required to make the universe flat.  Happily, however, during the period from 1970 or so to the early 1990's it had become abundantly clear that our galaxy, and indeed all galaxies, are dominated by 'dark matter'... material that does not shine or, as far as we can tell, interact electromagnetically.  This material, which we think is made up of a new type of elementary particle, accounts for at least 10 times as much matter as can be accounted for in stars, hot gas, etc. With the inference that dark matter existed in such profusion, it was natural to suspect that there was enough of it to account for a flat universe.

The only problem was that the more our observations of the universe improved, the less evidence there appeared to be that there was enough dark matter to result in a flat universe. Moreover, all the other indicators of cosmology, from the age of the universe to the data on large scale structure, began to suggest that a flat universe dominated by dark matter was inconsistent with observation.  In 1995, this led my colleague Mike Turner and me to suggest that the only way a flat universe could be consistent with observation was if most of the energy, indeed almost 75% of the total energy, was contributed not by matter, but by empty space!

As heretical as our suggestion was, to be fair, I think we were being more provocative than anything, because the one thing that everyone knew was that the energy of empty space had to be precisely zero.  The alternative was something very much like the 'Cosmological Constant' first proposed by Einstein, who incorrectly thought the universe was static and needed some exotic new adjustment to his equations of general relativity so that the attractive force of gravity was balanced by a repulsive force associated with empty space. That alternative was just too ugly to imagine.

And then, in 1998, two teams measuring the recession velocities of distant galaxies, using observations of exploding stars within them to probe their distances from us, discovered something amazing at the same time.  The expansion of the universe seemed to be speeding up with time, not slowing down, as any sensible universe should be doing!  Moreover, if one assumed this acceleration was caused by a new repulsive force that would arise throughout empty space if the energy of empty space was not precisely zero, then the amount of extra energy needed to produce the observed acceleration was precisely the amount needed to account for a flat universe!

Now here is the really weird thing.  Within a year after the observation of an accelerating universe, even though the data was not yet definitive, I and pretty well everyone else in the community who had previously thought there was enough dark matter to result in a flat universe, and who had previously thought the energy of empty space must be precisely zero, had completely changed our minds... All of the signals were just too overwhelming to continue to hold on to our previous rosy picture... even if the alternative was so crazy that none of our fundamental theories could yet account for it.

So we are now pretty sure that the dominant energy-stuff in our universe isn't normal matter, and isn't dark matter, but rather is associated with empty space!  And what is worse (or better, depending upon your viewpoint) is that our whole picture of the possible future of the universe has changed.  An accelerating universe will carry away almost everything we now see, so that in the far future our galaxy will exist alone in a dark and seemingly endless void...

And that is what I find so satisfying about science.  Not just that I could change my own mind because the evidence of reality forced me to... but that the whole community could throw out a cherished notion, and so quickly!  That is what makes science different than religion, and that is what makes it worth continuing to ask questions about the universe ... because it never fails to surprise us.


STEPHEN M. KOSSLYN
Psychologist, Harvard University; Author, Wet Mind

The World in the Brain

I used to believe that we could understand psychology at different levels of analysis, and events at any one of the levels could be studied independently of events at the other levels. For example, one could study events at the level of the brain (and seek answers in terms of biological mechanisms), the level of the person (and seek answers in terms of the contents of thoughts, beliefs, knowledge, and so forth), or the level of the group (and seek answers in terms of social interactions). This approach seemed reasonable; the strategy of "divide and conquer" is a cornerstone in all of science, isn't it? In fact, virtually all introductory psychology textbooks are written as if events at the different levels are largely independent, with separate chapters (that only rarely include cross-references to each other) on the brain, perception, memory, personality, social psychology, and so on.

I've changed my mind. I don't think it's possible to understand events at any one level of analysis without taking into account what occurs at other levels. In particular, I'm now convinced that at least some aspects of the structure and function of the brain can only be understood by situating the brain in a specific cultural context. I'm not simply saying that the brain has evolved to function in a specific type of environment (an idea that forms a mainstay of evolutionary psychology and some areas of computer vision, where statistics of the natural environment are used to guide processing). Rather, I'm saying that to understand how any specific brain functions, we need to understand how that person was raised, and currently functions, in the surrounding culture.

Here's my line of reasoning. Let's begin with a fundamental fact: The genes, of which we have perhaps only some 30,000, cannot program us to function equally effectively in every possible environment. Hence, evolution has licensed the environment to set up and configure each individual's brain, so that it can work well in that context. For example, consider stereovision. We all know about stereo in audition; the sound from each of two loudspeakers has slightly different phases, so the listener's brain glues them together to provide the sense of an auditory panorama. Something similar is at work in vision. In stereovision, the slight disparity in the images that reach the two eyes is a cue for how far away objects are. If you're focused on an object directly in front of you, your eyes will converge slightly. Aside from the exact point of focus, the rest of the image will strike slightly different places on the two retinas (at the back of the eye, which converts light into neural impulses), and the brain uses the slight disparities to figure out how far away something is.
           
There are two important points here. First, this stereo process — of computing depth on the basis of the disparities in where images strike the two retinas — depends on the distance between the eyes. And second, and this is absolutely critical, there's no way to know at the moment of conception how far apart a person's eyes are going to be, because that depends on bone growth — and bone growth depends partly on the mother's diet and partly on the infant's diet.
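
The computation at issue is simple enough to spell out; the sketch below is a toy illustration (the baseline, focal length, and disparity values are invented), not a model of the actual neural circuitry.

    # Depth from binocular disparity: nearer points produce larger offsets
    # between the two images, and the conversion depends on the distance
    # between the eyes (the baseline).
    def depth_from_disparity(baseline_m, focal_length_px, disparity_px):
        """baseline_m: separation of the two eyes/cameras, in meters;
        focal_length_px: focal length expressed in pixels;
        disparity_px: horizontal offset of the same point in the two images."""
        return baseline_m * focal_length_px / disparity_px

    # The same disparity implies a different depth once the baseline changes,
    # which is why the wiring cannot be fixed before the skull finishes growing.
    for baseline in (0.055, 0.065):   # eyes 5.5 cm vs. 6.5 cm apart
        print(baseline, depth_from_disparity(baseline, 800.0, 20.0))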
           
So, given that bone growth depends partly on the environment, how could the genes set up stereovision circuits in the brain? What the genes did is really clever: Young children (peaking at about age 18 months) have more connections among neurons than do adults; in fact, until about eight years old, children have about twice as many neural connections as they do as adults. But only some of these connections provide useful information. For example, when the infant reaches, only the connections from some neurons will correctly guide reaching. The brain uses a process called pruning to get rid of the useless connections. The connections that turn out to work, with the distance between the eyes the infant happens to have, would not be the ones that would work if the mother did not have enough calcium, or the infant hadn't had enough of various dietary supplements.
           
This is a really elegant solution to the problem that the genes can't know in advance how far apart the eyes will be. To cope with this problem, the genes overpopulate the brain, giving us options for different environments (where the distance between eyes and length of the arms are part of the brain's "environment," in this sense), and then the environment selects which connections are appropriate. In other words, the genes take advantage of the environment to configure the brain.

This overpopulate-and-select mechanism is not limited to stereovision. In general, the environment sets up the brain (above and beyond any role it may have had in the evolution of the species), configuring it to work well in the world a person inhabits. And by environment I'm including everything outside the brain — including the social environment. For example, it's well known that children can learn multiple languages without an accent and with good grammar, if they are exposed to the language before puberty. But after puberty, it's very difficult to learn a second language so well. Similarly, when I first went to Japan, I was told not even to bother trying to bow, that there were something like a dozen different bows and I was always going to "bow with an accent" — and in my case the accent was so thick that it was impenetrable. 
           
The notion is that a variety of factors in our environment, including in our social environment, configure our brains. It's true for language, and I bet it's true for politeness as well as a raft of other kinds of phenomena. The genes result in a profusion of connections among neurons, which provide a playing field for the world to select and configure so that we fit the environment we inhabit. The world comes into our head, configuring us. The brain and its surrounding environment are not as separate as they might appear.

This perspective leads me to wonder whether we can assume that the brains of people living in different cultures process information in precisely the same ways. Yes, people the world over have much in common (we are members of the same species, after all), but even small changes in the wiring may lead us to use the common machinery in different ways. If so, then people from different cultures may have unique perspectives on common problems, and be poised to make unique contributions toward solving such problems.

Changing my mind about the relationship between events at different levels of analysis has led me to change fundamental beliefs. In particular, I now believe that understanding how the surrounding culture affects the brain may be of more than merely "academic interest."


GARY KLEIN
Research Psychologist; Founder, Klein Associates; Author, The Power of Intuition

Exchanging Your Mind

It's generally a bad idea to change your mind and an even worse idea to do it publicly. Politicians who get caught changing their minds are labeled "flip-floppers." When managers change their minds about what they want they risk losing credibility and they create frustration in subordinates who find that much of their work has now been wasted. Researchers who change their minds may be regarded as sloppy, shooting from the hip rather than delaying publication until they nail down all the loose ends in their data.

Clearly the Edge Annual Question for 2008 carries with it some dangers in disclosure:  "What have you changed your mind about? Why?" Nevertheless, I'll take the bait and describe a case where I changed my mind about the nature of the phenomenon I was studying.

My colleagues Roberta Calderwood, Anne Clinton-Cirocco, and I were investigating how people make decisions under time pressure. Obviously, under time pressure people can't canvass all the relevant possibilities and compare them along a common set of dimensions. So what are they doing instead?

I thought I knew what happened. Peer Soelberg had investigated the job-choice strategy of students. In most cases they quickly identified a favorite job option and evaluated it by comparing it to another option, a choice comparison, trying to show that their favorite option was as good as or better than this comparison case on every relevant dimension. This strategy seemed like a very useful way to handle time pressure. Instead of systematically assessing a large number of options, you only have to compare two options until you're satisfied that your favorite dominates the other.

To demonstrate that people used this strategy to handle time pressure I studied fireground commanders. Unhappily, the firefighters had not read the script. We conducted interviews with them about tough cases, probing them about the options they considered. And in the great majority of cases (about 81%), they insisted that they only considered one option.

The evidence obviously didn't support my hypothesis. Still, I wasn't convinced that my hypothesis was wrong. Perhaps we hadn't phrased the questions appropriately. Perhaps the firefighters' memories were inaccurate. At this point I hadn't changed my mind. I had just conducted a study that didn't work out.

People are very good at deflecting inconvenient evidence. There are very few facts that can't be explained away. Facts rarely force us to change our minds.

Eventually my frustration about not getting the results I wanted was replaced by a different emotion: curiosity. If the firefighters weren't comparing options just what were they doing?

They described how they usually knew what to do once they sized up the situation. This claim generated two mysteries:  How could the first option they considered have such a high likelihood of succeeding?  And how could they evaluate an option except by comparing it to another?

Going back over the data we resolved each of these mysteries. They were using their years of experience to rapidly size up situations. The patterns they had acquired suggested typical ways of reacting. But they still needed to evaluate the options they identified. They did so by imagining what might happen if they carried out the action in the context of their situation. If it worked, they proceeded. If it almost worked then they looked for ways to repair any weaknesses or else looked at other typical reactions until they found one that satisfied them.

Together, this forms a recognition-primed decision strategy that is based on pattern recognition but tests the results using deliberate mental simulation. This strategy is very different from the original hypothesis about comparing the favorite versus a choice comparison.
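
One way to see how this differs from comparative evaluation is to spell the strategy out as a loop. The sketch below is a loose paraphrase, not Klein's own formalism, and the fireground actions and outcomes are invented.

    # Recognition-primed decision: experience proposes a typical action, a
    # mental simulation tests it, and the decision maker repairs or moves on
    # rather than comparing many options side by side.
    def rpd(typical_actions, simulate):
        """typical_actions: actions ordered by how well the recognized pattern fits;
        simulate(action): returns 'works', 'almost works', or 'fails'."""
        for action in typical_actions:
            outcome = simulate(action)
            if outcome == "works":
                return action                                    # commit to the first option that passes
            if outcome == "almost works":
                return action + ", with a backup line in place"  # repair the weakness, then commit
        return "fall back and reassess the situation"

    # Invented example: a commander sizes up a kitchen fire.
    outcomes = {"interior attack": "almost works", "defensive attack": "works"}
    print(rpd(["interior attack", "defensive attack"], lambda a: outcomes.get(a, "fails")))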

I had an advantage in that I had never received any formal training in decision research. One of my specialty areas was the nature of expertise. Therefore, the conceptual shift I made was about  peripheral constructs, rather than core constructs about how decisions are made. The notions of Peer Soelberg that I was testing weren't central to my understanding of skilled performance.

Changing one's mind isn't merely revising the numerical value of a fact in a mental data base or changing the beliefs we hold. Changing my mind also means changing the way I will then use my mind to search for and interpret facts. When I changed my understanding of how the fireground commanders were making decisions I altered the way I viewed experts and decision makers. I altered the ways I collected and analyzed data in later studies. As a result, I began looking at events with a different mind, one that I had exchanged for the mind I previously had been using.


ALAN KRUEGER
Bendheim Professor of Economics and Public Affairs at Princeton University; Author, What Makes a Terrorist: Economics and the Roots of Terrorism

I used to think the labor market was very competitive, but now I think it is better characterized by monopsony, at least in the short run.


SETH LLOYD
Quantum Mechanical Engineer, MIT, Author, Programming the Universe

I have changed my mind about technology.

I used to take a dim view of technology. One should live one's life in a simple, low-tech fashion, I thought. No cell phone, keep off the computer, don't drive. No nukes, no remote control, no DVD, no TV. Walk, read, think — that was the proper path to follow.

What a fool I was! A dozen years ago or so, by some bizarre accident, I became a professor of Mechanical Engineering at MIT. I had never had any training, experience, or education in engineering. My sole claim to engineering expertise was some work on complex systems and a few designs for quantum computers. Quantum-mechanical engineering was in its early days then, however, and MIT needed a quantum mechanic. I was ready to answer the call.

It was not my fellow professors who converted me to technology, uber-techno-nerds though they were. Indeed, my colleagues in Mech. E. were by and large somewhat suspicious of me, justifiably so. I was wary of them in turn, as one often is of co-workers who are hugely more knowledgeable than one is oneself. (Outside of the Mechanical Engineering department, by contrast, I found large numbers of kindred souls: MIT was full of people whose quanta needed fixing, and as a certified quantum mechanic, I was glad to oblige.) No, it was not the brilliant technologists who filled the faculty lunchroom who changed my mind. Rather, it was the students who had come to have me teach them about engineering who taught me to value technology.

Your average MIT undergraduate is pretty technologically adept. In the old days, freshmen used to arrive at MIT having disassembled and reassembled tractors and cars; slightly later on, they arrived having built ham radios and guitar amplifiers; more recently, freshmen and fresh women were showing up with a scary facility with computers. Nowadays, few of them have used a screwdriver (except maybe to install some more memory in their laptop), but they are eager to learn how robots work, and raring to build one themselves.

When I stepped into my first undergraduate classroom, a controls laboratory, I knew just about as little about how to build a robot as the nineteen and twenty year olds who were expectantly sitting, waiting for me to teach them how. I was terrified. Within half an hour, the basis for my terror was confirmed. Not only did I know as little as the students, in many cases I knew significantly less: about a quarter of the students knew demonstrably more about robotics than I, and were happy to display their knowledge. I emerged from the first lab session a sweaty mess, having managed to demonstrate my ignorance and incompetence in a startling variety of ways.

I emerged  from the second lab session a little cooler. There is no better way to learn, and learn fast, than to teach. Humility actually turns out to have its virtues, too. It turns out to be rather fun to admit one's ignorance, if that admission takes the form of an appeal to the knowledge of all assembled. In fact, it turned out that, either through my training in math and physics, or through a previous incarnation, I possessed more intuitive knowledge of control theory than I had any right to, given my lack of formal education on the subject. Finally, no student is more empowered than the one who has just correctly told her professor that he is wrong, and showed him why her solution is the right one.

In the end, the experience of teaching the technology that I did not know was one of the most intellectually powerful of my life. In my mental ferment of trying to learn the material faster and deeper than my students, I began to grasp concepts and ways of looking at the world, of whose existence I had no previous notion. One of the primary features of the lab was a set of analog computers, boxy things festooned with dials and plugs, and full of amplifiers, capacitors, and resistors, that were used to simulate, or construct an analog of, the motors and loads that we were trying to control. In my feverish attempt to understand analog computers, I constructed a model for a quantum-mechanical analog computer that would operate at the level of individual atoms. This model resulted in one of my best scientific papers. In the end, scarily enough, my student evaluations gave me the highest possible marks for knowledge of the material taught.

And technology? Hey, it's not so bad. When it comes to walking in the rain, Goretex and fleece beat oilskin and wool hollow. If we're not going to swamp our world in greenhouse gases, we damn well better design dramatically more efficient cars and power plants. And if I could contribute to technology by designing and helping to build quantum computers and quantum communication systems, so much the better. Properly conceived and constructed technology does not hinder the simple life, but helps it.

OK. So I was wrong about technology. What's my next misconception? Religion? God forbid.


JOHN MCCARTHY
Computer Scientist; 1st Generation Artificial Intelligence Pioneer, Stanford University

Attitudes Trump Facts

I have a collection of web pages on the sustainability of material progress that treats many problems that have been proposed as possible stoppers. I get email about the pages, both unfavorable and favorable, mostly the latter.

I had believed that the email would concern specific problems or would raise new ones, e.g. "What about erosion of agricultural land?"

There's some of that, but overwhelmingly the email, both pro and con, concerns my attitude,  not my (alleged) facts. "How can you be so blithely cornucopian when everybody knows ..." or "I'm glad someone has the courage to take on all those doomsters."

It seems, to my surprise, that people's attitude toward the future stems at least as much from personality as from opinions about facts. People look for facts to support their attitudes — which have earlier antecedents.


ERNST PÖPPEL
Neuroscientist, Chairman, Board of Directors, Human Science Center and Department of Medical Psychology, Munich University, Germany; Author, Mindworks

Being Caught In The Language Trap — Or Wittgenstein's Straitjacket

When I look at something, when I talk to somebody, when I write a few sentences about "what I have changed my mind about and why", the neuronal network in my brain changes all the time and there are even structural changes in the brain. Why is it that these changes don't come to mind all the time but remain subthreshold?  Certainly, if everything that goes on in the brain came to mind, and if there were no efficient mechanism of informational garbage disposal, we would end up in mental chaos (which sometimes happens in unfortunate cases of neuronal dysfunction). It is only sometimes that certain events produce so much neuronal energy and catch so much attention that a conscious representation is made possible.

As most neuronal information processing remains in mental darkness, i.e. happens on an implicit level, it is in my view impossible to make a clear statement about why somebody changed his or her mind about something. If somebody gives an explicit reason for having changed his or her mind about something, I am very suspicious. As "it thinks" all the time in my brain, and as these processes are beyond voluntary control, I am much less transparent to myself than I might want to be, and this is true for everybody. Thus, I cannot give a good reason why I changed my mind about a strong hypothesis, or even a belief or perhaps a prejudice, in my scientific work, one which I held until several years ago.

A sentence of Ludwig Wittgenstein from his Tractatus Logico-Philosophicus (5.6) was like a dogma for me: "Die Grenzen meiner Sprache bedeuten die Grenzen meiner Welt. — The limits of my language signify the limits of my world " (my translation). Now I react to this sentence with an emphatic "No!".

As a neuroscientist I have to stay away from the language trap. In our research we are easily misguided by words. Without too much thinking we are referring to "consciousness", to "free will", to "thoughts", to "attention", to the "self", etc, and we give an ontological status to these terms. Some people even start to look at the potential site of consciousness or of free will in the brain, or some people ask the "what is ..." question that never can find an answer. The prototypical "what is ..." question was formulated 1600 years ago by Augustinus who said in the 11th book of his Confessions: "Quid est ergo tempus? Si nemo ex me quaerat scio, si quaerenti explicare velim nescio. — What is time? If nobody asks me, I know it, but if I have to explain it to somebody, I don't know it" (my translation).

Interestingly, Augustinus made a nice categorical mistake by referring to "knowing" first on an implicit, and second on an explicit, level. This categorical mistake is still with us when we ask questions like "What is consciousness, free will, ...?"; one knows implicitly, but one does not know explicitly. As neuroscientists we have to focus on processes in the brain which rarely or perhaps never map directly onto such terms as we use them. Complexity reduction in brains is necessary and it happens all the time, but the goal of this reductive process is not such terms, which might be useful for our communication, but efficient action. This is what I think today, but why I came to this conclusion I don't know; it was probably several reasons that finally resulted in a shift of mind, i.e. overcoming Wittgenstein's straitjacket.


SCOTT SAMPSON
Chief Curator, Utah Museum of Natural History; Associate Professor, University of Utah; Host, Dinosaur Planet TV series

The Death of the Dinosaurs

An asteroid did it . . . .

Ok, so this may not seem like news to you. The father-son team of Luis and Walter Alvarez first put forth the asteroid hypothesis in 1980 to account for the extinction of dinosaurs and many other lifeforms at the end of the Mesozoic (about 65.5 million years ago). According to this now familiar scenario, an asteroid about 10 km in diameter slammed into the planet at about 100,000 km/hour. Upon impact, the bolide disintegrated, vaporizing a chunk of the earth's crust and propelling a gargantuan cloud of gas and dust high into the atmosphere. This airborne matter circulated around the globe, blocking out the sun and halting photosynthesis for a period of weeks or months. If turning the lights out wasn't bad enough, massive wild fires and copious amounts of acid rain apparently ensued. 

Put simply, it was hell on Earth. Species succumbed in great numbers and food webs collapsed the world over, ultimately wiping out about half of the planet's biodiversity. Key geologic evidence includes remnants of the murder weapon itself; iridium, an element that occurs in small amounts in the Earth's crust but is abundant in asteroids, was found by the Alvarez team to be anomalously abundant in a thin layer within Cretaceous-Tertiary (K-T) boundary sediments at various sites around the world. In 1990 came the announcement of the discovery of the actual impact crater in the Gulf of Mexico. It seemed as if arguably the most enduring mystery in prehistory had finally been solved. Unsurprisingly, this hypothesis was also a media darling, providing a tidy yet incredibly violent explanation for one of paleontology's most perplexing problems, with the added bonus of a possible repeat performance, this time with humans on the roster of victims.

To some paleontologists, however, the whole idea seemed just a bit too tidy.

Ever since the Alvarezes proposed the asteroid, or "impact winter," hypothesis, many (at times the bulk of) dinosaur paleontologists have argued for an alternative scenario to account for the K-T extinction. I have long counted myself amongst the ranks of doubters. It is not so much that I and my colleagues have questioned the occurrence of an asteroid impact; supporting evidence for this catastrophic event has been firmly established for some time. At issue has been the timing of the event. Whereas the impact hypothesis invokes a rapid extinction—on the order of weeks to years—others argue for a more gradual dying that spanned from one million to several million years. Evidence cited in support of the latter view includes an end-Cretaceous drop in global sea levels and a multi-million year bout of volcanism that makes Mount St. Helens look like brushfire. 

Thus, at present the debate has effectively been reduced to two alternatives. First is the Alvarez scenario, which proposes that the K-T extinction was a sudden event triggered by a single extraterrestrial bullet. Second is the gradualist view, which proposes that the asteroid impact was accompanied by two other global-scale perturbations (volcanism and decreasing sea-level), and that it was only this combination of factors acting in concert that decimated the end-Mesozoic biosphere.

Paleontologists of the gradualist ilk have argued that dinosaurs (and certain other groups) were already on their way out well before the K-T "big bang" occurred. Unfortunately, the fossil record of dinosaurs is relatively poor for the last stage of the Mesozoic and only one place on Earth — a small swath of badlands in the Western Interior of North America — has been investigated in detail. Several authors have argued that the latest Cretaceous Hell Creek fauna, as it's called (best known from eastern Montana), was depauperate relative to earlier dinosaur faunas. In particular, comparisons have often been made with the ca. 75 million year old Late Cretaceous Dinosaur Park Formation of southern Alberta, which has yielded a bewildering array of herbivorous and carnivorous dinosaurs.

For a long time, I regarded myself as a card-carrying member of the gradualist camp. However, at least two lines of evidence have persuaded me to change my mind and join the ranks of the sudden-extinction-precipitated-by-an-asteroid group.

First is a growing database indicating that the terminal Cretaceous world was not stressed to the breaking point, awaiting arrival of the coup de grâce from outer space. With regard to dinosaurs in particular, recent work has demonstrated that the Hell Creek fauna was much more diverse than previously realized. Second, new and improved stratigraphic age controls for dinosaurs and other Late Cretaceous vertebrates in the Western Interior indicate that ecosystems like those preserved in the Dinosaur Park Formation were not nearly as diverse as previously supposed.

Instead, many dinosaur species appear to have existed for relatively short durations (< 1 million years), with some geologic units preserving a succession of relatively short-lived faunas. So, even within the well sampled Western Interior of North America (let alone the rest of the world, for which we currently have little hard data), I see no grounds for arguing that dinosaurs were undergoing a slow, attritional demise. Other groups, like plants, also seem to have been doing fine in the interval leading up to that fateful day 65.5 million years ago. Finally, extraordinary events demand extraordinary explanations, and it does not seem parsimonious to make an argument for a lethal cascade of agents when compelling evidence exists for a single agent capable of doing the job on its own.

So yes, as far as I'm concerned (at least for now), the asteroid did it.


PETER SCHWARTZ
Futurist, Business Strategist; Cofounder, Global Business Network, a Monitor Company; Author, The Long Boom

In the last few years I have changed my mind about nuclear power. I used to believe that expanding nuclear power was too risky. Now I believe that the risks of climate change are much greater than the risks of nuclear power. As a result we need to move urgently toward a new generation of nuclear reactors.  

What led to the change of view? First I came to believe that the likelihood of major climate related catastrophes was increasing rapidly and that they were likely to occur much sooner than the simple linear models of the IPCC indicated. My analysis developed as a result of work we did for the defense and intelligence community on the national security implications of climate change. Many regions of the Earth are likely to experience an increasing frequency of extreme weather events. These catastrophic events include megastorms, super tornados, torrential rains and floods, extended droughts, and ecosystem disruptions, all added to steadily rising sea levels. It also became clear that human induced climate change is ever more at the causal center of the story.

Research by climatologists like William Ruddiman indicates that the climate is more sensitive to changes in human societies than previously appreciated, with influences ranging from agricultural practices like forest clearing and irrigated rice growing, to major plagues, to the use of fossil fuels. Human societies have often gone to war as a result of the ecological exhaustion of their local environments. So it becomes an issue of war and peace. Will Vietnam simply roll over and die when the Chinese dam what remains of the trickle of the Mekong as an extended drought develops at its source in the Tibetan highlands?

Even allowing for much greater efficiency and a huge expansion of renewable energy, the real fuel of the future is coal, especially in the US, China and India. If all three go ahead with their current plans for building coal fired electric generating plants, then that alone will, over the next two decades, double all the CO2 that humankind has put into the atmosphere since the industrial revolution began more than two hundred years ago. And the only meaningful alternative to coal is nuclear power. It is true that we can hope that our ability to capture the CO2 from coal burning and sequester it in various ways will grow, but it will take a decade or more before that technology reaches commercial maturity.

At the same time I also came to believe that the risks of nuclear power are less than we feared. That shift began with a trip to visit the proposed nuclear waste depository at Yucca Mountain in Nevada. A number of Edge folk went, including Stewart Brand, Kevin Kelly, Danny Hillis, and Pierre Omidyar. When it became clear that very long term storage of waste (e.g. 10,000 to 250,000 years) is a silly idea and not meaningfully realistic, we began to question many of the assumptions about the future of nuclear power. The right answer to nuclear waste is temporary storage for perhaps decades and then recycling the fuel, as much of the world already does, not sticking it underground for millennia. We will likely need the fuel we can extract from the waste.

There are emerging technologies for both nuclear power and waste reprocessing that will reduce the safety risk, the amount of waste, and most especially the risk of nuclear weapons proliferation, as the new fuel cycle produces no plutonium, the offending substance of concern. And the economics are increasingly favorable, as the French have demonstrated for decades. The average French citizen produces 70% less CO2 than the average American as a result. We have also learned that the long term consequences of the worst nuclear accident in history, Chernobyl, were much less than feared.

So the conclusion is that the risks of climate change are far greater than the risks of nuclear power. Furthermore, human skill and knowledge in managing a nuclear system are only likely to grow with time, while the risks of climate change will grow as billions more people get rich and change the face of the planet with their demands for more stuff. Nuclear power is the only source of electricity now in sight that is likely to enable the next three or four billion people who want what we all have to get what they want without radically changing the climate of the Earth.


MARCEL KINSBOURNE, M.D.
Neurologist & Cognitive Neuroscientist, The New School; Coauthor, Children's Learning and Attention Problems

The Impressionable Brain

When the phenomenon of "mirror neurons" that fire both when a specific action is perceived and when it is intended was first reported, I was impressed by the research but skeptical about its significance. Specifically, I doubted, and continue to doubt, that these circuits are specific adaptations for purposes of various higher mental functions. I saw mirror neurons as simple units in circuits that represent specific actions, oblivious as to whether they had been viewed when performed by someone else, or represented as the goal of one's own intended action (so-called reafference copy). Why have two separate representations of the same thing when one will do? Activity elsewhere in the brain represents who the agent is, self or another. I still think that this is the most economical interpretation. But from a broader perspective I have come to realize that mirror neurons are not only less than meets the eye but also more. Instead of being a specific specialization, they play their role as part of a fundamental design characteristic of the brain; that is, when percepts are activated, relevant intentions, memories and feelings automatically fall into place.

External events are "represented" by the patterns of neuronal activity that they engender in sensory cortex. These representations also incorporate the actions that the percepts potentially afford. This "enactive coding" or "common coding" of input implies a propensity in the observer's brain to imitate the actions of others (consciously or unconsciously). This propensity need not result in overt imitation. Prefrontal cortex is thought to hold these impulses to imitate in check. Nonetheless, the fact that these action circuits have been activated lowers their threshold by subtle increments as the experience in question is repeated over and over again, and the relative loading of synaptic weights in brain circuitry becomes correspondingly adjusted. Mirror neurons exemplify this type of functioning, which extends far beyond individual circuits to all cell assemblies that can form representations.

That an individual is likely to act in the same ways that others act is seen in the documented benefit for sports training of watching experts perform. "Emotional contagion" occurs when someone witnesses the emotional expressions of another person and therefore experiences that mood state oneself. People's viewpoints can subtly and unconsciously converge when their patterns of neural activation match, in the total absence of argument or attempts at persuasion. When people entrain with each other in gatherings, crowds, assemblies and mobs, diverse individual views reduce into a unified group viewpoint. An extreme example of gradual convergence might be the "Stockholm Syndrome"; captives gradually adopt the worldview of their captors. In general, interacting with others makes one converge to their point of view (and vice versa). Much ink has been spilled on the topic of the lamentable limitations of human rationality. Here is one reason why.

People's views are surreptitiously shaped by their experiences, and rationality comes limping after, downgraded to rationalization. Once opinions are established, they engender corresponding anticipations. People actively seek those experiences that corroborate their own self-serving expectations. This may be why as we grow older, we become ever more like ourselves. Insights become consolidated and biases reinforced when one only pays attention to confirming evidence. Diverse mutually contradictory "firm convictions" are the result. Science does take account of the negative instance as well as the positive instance. It therefore has the potential to help us understand ourselves, and each other.

If I am correct in my changed views as to what mirror neurons stand for and how representation routinely merges perception, action, memory and affect into dynamic reciprocal interaction, these views would have a bearing on currently disputed issues. Whether an effect is due to the brain or the environment would be moot if environmental causes indeed become brain causes, as the impressionable brain resonates with changing circumstances. What we experience contributes mightily to what we are and what we become. An act of kindness has consequences for the beneficiary far beyond the immediate benefit. Acts of violence inculcate violence and contaminate the minds of those who stand by and watch. Not only our private experiences, but also the experiences that are imposed on us by the media, transform our predispositions, whether we want them to or not. The implications for child rearing are obvious, but the same implications apply beyond childhood to the end of personal time.

What people experience indeed changes their brain, for better and for worse. In turn, the changed brain changes what is experienced. Regardless of its apparent stability over time, the brain is in constant flux, and constantly remodels. Heraclitus was right: "You shall not go down twice to the same river". The river will not be the same, but for that matter, neither will you. We are never the same person twice. The past is etched into the neural network, biasing what the brain is and does in the present. William Faulkner recognized this: "The past is never dead. In fact, it's not even past".


KEVIN KELLY
Editor-At-Large, Wired; Author, New Rules for the New Economy

Much of what I believed about human nature, and the nature of knowledge, has been upended by the Wikipedia. I knew that the human propensity for mischief among the young and bored — of which there were many online — would make an encyclopedia editable by anyone an impossibility. I also knew that even among the responsible contributors, the temptation to exaggerate and misremember what we think we know was inescapable, adding to the impossibility of a reliable text. I knew from my own 20-year experience online that you could not rely on what you read in a random posting, and believed that an aggregation of random contributions would be a total mess. Even unedited web pages created by experts failed to impress me, so an entire encyclopedia written by unedited amateurs, not to mention ignoramuses, seemed destined to be junk.

Everything I knew about the structure of information convinced me that knowledge would not spontaneously emerge from data, without a lot of energy and intelligence deliberately directed to transforming it. All the attempts at headless collective writing I had been involved with in the past only generated forgettable trash. Why would anything online be any different?

So when the first incarnation of the Wikipedia launched in 2000 (then called Nupedia) I gave it a look, and was not surprised that it never took off. There was a laborious process of top-down editing and re-writing that discouraged a would-be random contributor. When the back-office wiki created to facilitate the administration of the Nupedia text became the main event and anyone could edit as well as post an article, I expected even less from the effort, now re-named Wikipedia.

How wrong I was. The success of the Wikipedia keeps surpassing my expectations. Despite the flaws of human nature, it keeps getting better. Both the weaknesses and virtues of individuals are transformed into common wealth, with a minimum of rules and elites. It turns out that with the right tools it is easier to restore damaged text (the revert function on Wikipedia) than to create damaged text (vandalism) in the first place, and so the good enough article prospers and continues. With the right tools, it turns out the collaborative community can outpace the same number of ambitious individuals competing.

It has always been clear that collectives amplify power — that is what cities and civilizations are — but what's been the big surprise for me is how minimal the tools and oversight need to be. The bureaucracy of Wikipedia is so small as to be nearly invisible. It's the wiki's embedded, code-based governance, as opposed to manager-based governance, that is the real news. Yet the greatest surprise brought by the Wikipedia is that we still don't know how far this power can go. We haven't seen the limits of wiki-ized intelligence. Can it make textbooks, music and movies? What about law and political governance?

Before we say, "Impossible!" I say, let's see. I know all the reasons why law can never be written by know-nothing amateurs. But having already changed my mind once on this, I am slow to jump to conclusions again. The Wikipedia is impossible, but here it is. It is one of those things impossible in theory, but possible in practice. Once you confront the fact that it works, you have to shift your expectations about what else that seems impossible in theory might work in practice.

I am not the only one who has had his mind changed about this. The reality of a working Wikipedia has made a type of communitarian socialism not only thinkable, but desirable. Along with other tools such as open-source software and open-source everything, this communitarian bias runs deep in the online world.

In other words it runs deep in this young next generation. It may take several decades for this shifting world perspective to show its full colors.  When you grow up knowing rather than admitting that such a thing as the Wikipedia works; when it is obvious to you that open source software is better; when you are certain that sharing your photos and other data yields more than safeguarding them — then these assumptions will become a platform for a yet more radical embrace of the commonwealth. I hate to say it but there is a new type of communism or socialism loose in the world, although neither of these outdated and tinged terms can accurately capture what is new about it.

The Wikipedia has changed the mind of this fairly steady individualist, and led me toward this new social sphere. I am now much more interested in both the new power of the collective, and the new obligations of individuals toward the collective. In addition to expanding civil rights, I want to expand civil duties. I am convinced that the full impact of the Wikipedia is still subterranean, and that its mind-changing power is working subconsciously on the global millennial generation, providing them with an existence proof of a beneficial hive mind, and an appreciation for believing in the impossible.

That's what it's done for me.


MARTI HEARST
Computer Scientist, UC Berkeley, School of Information

Computational Analysis of Language Requires Understanding Language

To me, having my worldview entirely altered is among the most fun parts of science. One mind-altering event occurred during graduate school. I was studying the field of Artificial Intelligence with a focus on Natural Language Processing. At that time there were intense arguments amongst computer scientists, psychologists, and philosophers about how to represent concepts and knowledge in computers, and whether those representations reflected in any realistic way how people represented knowledge. Most researchers thought that language and concepts should be represented in a diffuse manner, distributed across myriad brain cells in a complex network. But some researchers talked about the existence of a "grandmother cell," meaning that one neuron in the brain (or perhaps a concentrated group of neurons) was entirely responsible for representing the concept of, say, your grandmother. I thought this latter view was hogwash.

But one day in the early 90's I heard a story on National Public Radio about children who had Wernicke's aphasia, meaning that a particular region in their brains was damaged. This damage left the children with the ability to form complicated sentences with correct grammatical structure and natural-sounding rhythms, but with content that was entirely meaningless. This story was a revelation to me — it seemed like irrefutable proof that different aspects of language were located in distinct regions of the brain, and that therefore perhaps the grandmother cell could exist. (Steven Pinker subsequently wrote his masterpiece, "The Language Instinct," on this topic.)

Shortly after this, the field of Natural Language Processing was radically changed by an entirely new approach. As I mentioned above, in the early 90's most researchers were introspecting about language use and were trying to hand-code knowledge into computers. So people would enter data like "when you go to a restaurant, someone shows you to a table. You and your dining partners sit on chairs at your selected table. A waiter or waitress walks up to you and hands you a menu. You read the menu and eventually the waiter comes back and asks for your order. The waiter takes this information back to the kitchen." And so on, in painstaking detail.
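To make the contrast concrete, hand-coding meant writing that sort of knowledge down by hand as structured data, roughly in the spirit of the illustrative Python sketch below; the field names and steps are invented for this example, not any particular system's representation.

    # A minimal, hypothetical illustration of hand-coded knowledge: a "restaurant
    # script" written out step by step. Real knowledge-representation systems of
    # the era were far more elaborate; this only shows the flavor of the approach.
    RESTAURANT_SCRIPT = {
        "roles": ["customer", "host", "waiter", "cook"],
        "props": ["table", "chairs", "menu", "food", "bill"],
        "steps": [
            ("host", "shows", "customer", "to a table"),
            ("customer", "sits on", "chairs", "at the table"),
            ("waiter", "hands", "customer", "a menu"),
            ("customer", "reads", "menu", ""),
            ("waiter", "takes", "order", "back to the kitchen"),
        ],
    }

    # Answering even a simple question means searching this hand-built structure.
    def who_does(action, script=RESTAURANT_SCRIPT):
        return [actor for actor, verb, *_ in script["steps"] if verb == action]

    print(who_does("hands"))  # -> ['waiter']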

But as large volumes of text started to become available online, people started developing algorithms to solve seemingly difficult natural language processing problems using very simple techniques. For example, how hard is it to write a program that can tell which language a stretch of text is written in? Sibun and Reynar found that all you need to do is record how often pairs of characters tend to co-occur in each language, and you only need to extract about a sentence from a piece of text to classify it with 99% accuracy into one of 18 languages! Another wild example is that of author identification. Back in the early 60's, Mosteller and Wallace showed that they could identify which of the disputed Federalist Papers were written by Hamilton vs. those written by Madison, simply by looking at counts of the function words (small structural words like "by", "from", and "to") that each author used.
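For concreteness, here is a minimal Python sketch of that kind of character-pair statistic. The toy training snippets and function names are my own illustrative assumptions, not Sibun and Reynar's actual system, which was trained on real corpora across 18 languages.

    # A minimal sketch of bigram-based language identification, in the spirit of
    # the result described above. The tiny training texts are illustrative only.
    from collections import Counter
    import math

    def bigram_counts(text):
        """Count adjacent character pairs in a lowercased text."""
        text = text.lower()
        return Counter(zip(text, text[1:]))

    def train(samples):
        """samples: dict of language -> training text. Returns per-language bigram log-probabilities."""
        models = {}
        for lang, text in samples.items():
            counts = bigram_counts(text)
            total = sum(counts.values())
            models[lang] = {bg: math.log(c / total) for bg, c in counts.items()}
        return models

    def classify(sentence, models, floor=math.log(1e-6)):
        """Score a sentence under each language's bigram model; the highest total log-probability wins."""
        bigrams = bigram_counts(sentence)
        def score(model):
            return sum(n * model.get(bg, floor) for bg, n in bigrams.items())
        return max(models, key=lambda lang: score(models[lang]))

    # Even a single sentence is usually enough to separate the languages.
    models = train({
        "english": "the cat sat on the mat and the dog chased the cat",
        "spanish": "el gato se sento en la alfombra y el perro persiguio al gato",
    })
    print(classify("the dog is on the mat", models))        # expected: english
    print(classify("el perro esta en la alfombra", models))  # expected: spanish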

The field as a whole is chipping away at the hard problems of natural language processing by using statistics derived from that mother-of-all-text-corpora, the Web. For example, how do you write a program to figure out the difference between a "student protest" and a "war protest"? The former is a demonstration against something, done by students, but the latter is not a demonstration done by a war.

In the old days, we would try to code all the information we could about the words in the noun compounds and try to anticipate how they interact. But today we use statistics drawn from counts of simple patterns on the web. Recently my PhD student Preslav Nakov has shown that we can often determine what the intended relationship between two nouns is by simply counting the verbs that fall between the two nouns, if we first reverse their order. So if we search the web for patterns like:

"protests that are * by students"

we find that the important verbs are "draw, involve, galvanize, affect, carried out by" and so on, whereas for "war protests" we find verbs such as "spread by, catalyzed by, precede", and so on.
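Roughly, the counting trick looks like the Python sketch below. The web-search step is reduced to a hypothetical search_snippets() stand-in with canned results, so this is only an illustration of the pattern-and-count idea, not Nakov's actual system or queries.

    # A sketch of the verb-counting idea described above. The canned snippets and
    # the search_snippets() stand-in are invented for illustration.
    import re
    from collections import Counter

    def search_snippets(query):
        """Stand-in for a web search call returning text snippets that match the query."""
        canned = {
            '"protests that are * by students"': [
                "protests that are carried out by students",
                "protests that are organized by students",
                "protests that are organized by students at the university",
            ],
            '"protests that are * by war"': [
                "protests that are provoked by war",
                "protests that are triggered by war",
            ],
        }
        return canned.get(query, [])

    def verbs_between(head, modifier):
        """Count the words filling the wildcard in 'HEAD that are * by MODIFIER'."""
        query = f'"{head} that are * by {modifier}"'
        pattern = re.compile(rf"{head} that are (\w+(?: \w+)?) by {modifier}")
        counts = Counter()
        for snippet in search_snippets(query):
            match = pattern.search(snippet)
            if match:
                counts[match.group(1)] += 1
        return counts

    print(verbs_between("protests", "students"))  # e.g. organized: 2, carried out: 1
    print(verbs_between("protests", "war"))       # e.g. provoked: 1, triggered: 1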

The lesson we see over and over again is that simple statistics computed over very large text collections can do better at difficult language processing tasks than more complex, elaborate algorithms.


ALAN KAY
Computer Scientist; Personal Computer Visionary, Senior Fellow, HP Labs

A Big Mind Change At Age 10: Vacuums Don't Suck!

At age 10, in 1950, I discovered that one of the department stores had a pneumatic tube system for moving receipts and money from counters to the cashier's office. I loved this and tried to figure out how it worked. The clerks in the store knew all about it. "Vacuum", they said, "Vacuum sucks the canisters, just like your mom's vacuum cleaner". But how does it work, I asked? "Vacuum", they said, "Vacuum does it all". This was what adults called "an explanation"!

So I took apart my mom's Hoover vacuum cleaner to find out how it worked. There was an electric motor in there, which I had expected, but the only other thing in there was a fan! How could a fan produce a vacuum, and how could it suck?

We had a room fan and I looked at it more closely. I knew that it worked like the propeller of an airplane, but I'd never thought about how those worked. I picked up a board and moved it. This moved air just fine. So the blades of the propeller and the fan were just boards that the motor kept on moving to push air.

But what about the vacuum? I found that a sheet of paper would stick to the back of the fan. But why? I "knew" that air was supposed to be made up of particles too small to be seen. So it was clear why you got a gust of breeze by moving a board — you were knocking little particles one way and not another. But where did the sucking of the paper on the fan and in the vacuum cleaner come from?

Suddenly it occurred to me that the air particles must be already moving very quickly and bumping into each other. When the board or fan blades moved air particles away from the fan there were fewer near the fan, and the already-moving particles would have less to bump into and would thus move towards the fan. They didn't know about the fan, but they appeared to.

The "suck" of the vacuum cleaner was not a suck at all. What was happening is that things went into the vacuum cleaner because they were being "blown in" by the air particles' normal movement, which were not being opposed by the usual pressure of air particles inside the fan!

When my physiologist father came home that evening I exclaimed "Dad, the air particles must be moving at least a hundred miles an hour!". I told him what I'd found out and he looked in his physics book. In there was a formula to compute the speed of various air molecules at various temperatures. It turned out that at room temperature ordinary air molecules were moving much faster than I had guessed: more like 1500 miles an hour! This completely blew my mind!
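One standard kinetic-theory version of such a formula is sketched below; the exact figure depends on which molecular-speed average and which molecule the book tabulated.

    % Root-mean-square speed of a gas molecule from kinetic theory:
    \[
      v_{\mathrm{rms}} \;=\; \sqrt{\frac{3kT}{m}} \;=\; \sqrt{\frac{3RT}{M}}
    \]
    % For nitrogen (M = 0.028 kg/mol) at room temperature (T = 293 K):
    \[
      v_{\mathrm{rms}} \;\approx\; \sqrt{\frac{3 \times 8.314 \times 293}{0.028}}
      \;\approx\; 5.1 \times 10^{2}\ \mathrm{m/s}
      \;\approx\; 1100\ \mathrm{mph}
    \]
    % i.e. on the order of a thousand miles per hour, far faster than any breeze a fan imparts.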

Then I got worried because even small things were clearly not moving that fast going into the vacuum cleaner (nor in the pneumatic tubes). By putting my hand out the window of the car I could feel that the air was probably going into the vacuum cleaner closer to 50 or 60 miles an hour. Another conversation with my Dad led to two ideas: (a) the fan was probably not very efficient at moving particles away, and (b) the particles themselves were going in every direction and bumping into each other (this is why it takes a while for perfume from an open bottle to be smelled across a room).

This experience was a big deal for me because I had thought one way using a metaphor and a story about "sucking", and then I suddenly thought just the opposite because of an experiment and non-story thinking. The world was not as it seemed! Or as most adults thought and claimed! I never trusted "just a story" again.


DIANE F. HALPERN
Professor, Claremont McKenna College; Past-president, American Psychological Association; Author, Sex Differences in Cognitive Abilities

From A Simple Truth To "It All Depends"

Why are men underrepresented in teaching, child care, and related fields and women underrepresented in engineering, physics, and related fields? I used to know the answer, but that was before I spent several decades reviewing almost everything written about this question. As with most enduring questions, the responses have grown more contentious, and even less is "settled" now that we have mountains of research designed to answer it. At some point, my own answer changed from what I believed to be the simple truth to a convoluted statement complete with qualifiers, hedge terms, and caveats. I guess this shift in my own thinking represents progress, but it doesn't feel or look that way.

I am a feminist, a product of the 60s, who believed that group differences in intelligence or most any other trait are mostly traceable to the lifetime of experiences that mold us into the people we are and will be. Of course, I never doubted the basic premises of evolution, but the lessons that I learned from evolution favor the idea that the brain and behavior are adaptable. Hunter-gatherers never solved calculus problems or traveled to the moon, so I find little in our ancient past to explain these modern-day achievements.

There is also the disturbing fact that evolutionary theories can easily explain almost any outcome, so I never found them to be a useful framework for understanding behavior. Even when I knew the simple truth about sex differences in cognitive abilities, I never doubted that heritability plays a role in cognitive development, but like many others, I believed that once the potential to develop an ability exceeded some threshold value, heritability was of little importance. Now I am less sure about any single answer, and nothing is simple any more.

The literature on sex differences in cognitive abilities is filled with inconsistent findings, contradictory theories, and emotional claims that are unsupported by the research. Yet, despite all of the noise in the data, clear and consistent messages can be heard. There are real, and in some cases sizable, sex differences with respect to some cognitive abilities.

Socialization practices are undoubtedly important, but there is also good evidence that biological sex differences play a role in establishing and maintaining cognitive sex differences, a conclusion that I wasn't prepared to make when I began reviewing the relevant literature. I could not ignore or explain away repeated findings about (small) variations over the menstrual cycle, the effects of exogenously administered sex hormones on cognition, a variety of anomalies that allow us to separate prenatal hormone effects on later development, failed attempts to alter the sex roles of a biological male after an accident that destroyed his penis, differences in preferred modes of thought, international data on the achievement of females and males, to name just a few types of evidence that demand the conclusion that there is some biological basis for sex-typed cognitive development.

My thinking about this controversial topic has changed. I have come to understand that nature needs nurture, and that the dichotomization of these two influences on development is the wrong way to conceptualize their mutual influences on each other. Our brain structures and functions reflect and direct our life experiences, which create feedback loops that alter the hormones we secrete and how we select environments. Learning is a biological and environmental phenomenon.

And so, what had been a simple truth morphed into a complicated answer for the deceptively simple question about why there are sex differences in cognitive abilities. There is nothing in my new understanding that justifies discrimination or predicts the continuation of the status quo. There is plenty of room for motivation, self-regulation, and persistence to make the question about the underrepresentation of women and men in different academic areas moot in coming years.

Like all complex questions, the question about why men and women achieve in different academic areas depends on a laundry list of influences that do not fall neatly into categories labeled biology or environment. It is time to give up this tired way of thinking about nature and nurture as two independent variables and their interaction and recognize how they exert mutual influences on each other. No single number can capture the extent to which one type of variable is important because they do not operate independently. Nature and nurture do not just interact; they fundamentally change each other. The answer that I give today is far more complicated than the simple truth that I used to believe, but we have no reason to expect that complex phenomena like cognitive development have simple answers.


STEPHEN H. SCHNEIDER
Biologist; Climatologist, Stanford University; Author, Laboratory Earth

Climate Change: Warming Up To The Evidence

In public appearances about global warming, even these days, I often hear: "I don't believe in global warming" and I then typically get asked why I do "when all the evidence is not in". "Global warming is not about beliefs", I typically retort, "but an accumulation of evidence over decades so that we can now say the vast preponderance of evidence — and its consistency with basic climate theory — supports global warming as well established, not that all aspects are fully known, an impossibility in any complex systems science".

But it hasn't always been that way, especially for me at the outset of my career in 1971, when I co-authored a controversial paper calculating that cooling effects from a shroud of atmospheric dust and smoke — aerosols — from human emissions at a global scale appeared to dominate the opposing warming effect of the growing atmospheric concentrations of the greenhouse gas carbon dioxide. Measurements at the time showed both warming and cooling emissions were on the rise, so a calculation of the net balance was essential — though controlling the aerosols made sense with or without climate side effects since they posed — and still pose — serious health risks to vulnerable populations. In fact, for the latter reason, laws to clean up the air in most rich countries were just being negotiated around that time.

When I traveled the globe in the early 1970s to explain our calculations, what I slowly learned from those out there making measurements was that two facts had only recently come to light, and together they made me consider flipping the sign from cooling to warming as the most likely direction of climatic change from humans using the atmosphere as a free sewer to dump some of our volatile industrial and agricultural wastes. These facts were that human-injected aerosols, which we had assumed were global in scale in our cooling calculation, were in fact concentrated primarily in industrial regions and biomass-burning areas of the globe — about 20% of the Earth's surface — whereas we already knew that CO2 emissions are global in extent and that about half of the emitted CO2 lasts for a century or more in the air.

But there were new facts that were even more convincing: not only is CO2 an important human-emitted greenhouse gas, but so too are methane, nitrous oxide and chlorofluorocarbons (many of the latter gases now banned because they also deplete stratospheric ozone), and together with CO2, these other greenhouse gases constitute an enhanced global set of warming factors. On the other hand, aerosols were primarily regional in extent and thus could not overcome the warming effects of the combined global-scale greenhouse gases.

I was very proud to have published in the mid-1970s what was wrong with my early calculations, well before the so-called "contrarians" — climate change deniers still all too prevalent even today — understood the issues, let alone incorporated these new facts into updated models to make more credible projections. Of course, today the dominance of warming over cooling agents is well established in the climatology community, but our remaining inability to be very precise about how much warming the planet can expect to have to deal with is in large part still an uncertainty over the partially counteracting cooling effects of aerosols — enough to offset a significant, even if largely unknown, amount of the warming. So although we are very confident in the existence of human-caused warming in the past several decades from greenhouse gases, we are still working hard to pin down much more precisely how much aerosols offset this warming. Facts on that offset still lag the critical need to estimate better our impacts on climate before they become potentially irreversible.

The sad part of this story is not about science, but the misinterpretation of it in the political world. I still have to endure polemical blogs from contrarian columnists and others about how, as one put it in a grand polemic, "Schneider is an environmentalist for all temperatures" — citing my early calculations. This famous columnist somehow forgot to bring up the faulty assumptions I later corrected myself, or to mention that the 1971 calculation was based on not-yet-gathered facts. Simply getting the sign wrong was cited, ipso facto in this blog, as somehow damning of my current credibility.

Ironically, inside the scientific world, this switch of the sign of projected effects is viewed as precisely what responsible scientists must do when the facts change. Not only did I change my mind, but I published almost immediately what had changed and how that played out over time. Scientists have no crystal ball, but we do have modeling methods that are the closest approximation available. They can't give us truth, but they can tell us the logical consequences of explicit assumptions. Those who update their conclusions explicitly as facts evolve are much more likely to be a credible source than those who stick to old stories for political consistency. Two cheers for the scientific method!


XENI JARDIN
Tech Culture Journalist; Co-editor, Boing Boing; Commentator, NPR; Host, Boing Boing tv

Online Communities Rot Without Daily Tending By Human Hands

I changed my mind about online community this year.

I co-edit a blog that attracts a large number of daily visitors, many of whom have something to say back to us about whatever we write or produce in video. When our audience was small in the early days, interacting was simple: we tacked a little href tag to an open comments thread at the end of each post: Link, Discuss. No moderation, no complication, come as you are, anonymity's fine. Every once in a while, a thread accumulated more noise than signal, but the balance mostly worked.

But then, the audience grew. Fast. And with that, grew the number of antisocial actors, "drive-by trolls," people for whom dialogue wasn't the point. It doesn't take many of them to ruin the experience for much larger numbers of participants acting in good faith.

Some of the more grotesque attacks were pointed at me, and the new experience of being on the receiving end of that much personally-directed nastiness was upsetting. I dreaded hitting the "publish" button on posts, because I knew what would now follow.

The noise on the blog grew, the interaction ceased to be fun for anyone, and with much regret, we removed the comments feature entirely.

I grew to believe that the easier it is to post a drive-by comment, and the easier it is to remain faceless, reputation-less, and real-world-less while doing so, the greater the volume of antisocial behavior that follows. I decided that no online community could remain civil after it grew too large, and gave up on that aspect of internet life.

My co-editors and I debated, we brainstormed, we observed other big sites that included some kind of community forum or comments feature. Some relied on voting systems to "score" whether a comment is of value — this felt clinical, cold, like grading what a friend says to you in conversation. Dialogue shouldn't be a beauty contest. Other sites used other automated systems to rank the relevance of a speech thread. None of this felt natural to us, or an effective way to prevent the toxic sludge buildup. So we stalled for years, and our blog remained more monologue than dialogue. That felt unnatural, too.

Finally, this year, we resurrected comments on the blog, with the one thing that did feel natural. Human hands.

We hired a community manager, and equipped our comments system with a secret weapon: the "disemvoweller." If someone's misbehaving, she can remove all the vowels from their screed with one click. The dialogue stays, but the misanthrope looks ridiculous, and the emotional sting is neutralized.
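The mechanism itself is simple enough to sketch in a few lines of Python; this is a toy illustration of the idea, not the actual tool wired into our comment system.

    # A toy sketch of the "disemvoweller": strip the vowels from a comment so the
    # words remain traceable but the rant loses its force.
    import re

    def disemvowel(comment: str) -> str:
        """Remove vowels (and nothing else) from an offending comment."""
        return re.sub(r"[aeiouAEIOU]", "", comment)

    print(disemvowel("You people are all completely wrong!"))
    # -> "Y ppl r ll cmpltly wrng!"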

Now, once again, the balance mostly works. I still believe that there is no fully automated system capable of managing the complexities of online human interaction — no software fix I know of. But I'd underestimated the power of dedicated human attention.

Plucking one early weed from a bed of germinating seeds changes everything. Small actions by focused participants change the tone of the whole. It is possible to maintain big healthy gardens online. The solution isn't cheap, or easy, or hands-free. Few things of value are.


CARLO ROVELLI
Physicist, Université de la Méditerranée (Marseille, France); Author, What is Time? What is Space?

There is nothing to add to the standard interpretation of quantum mechanics.

I learned quantum mechanics as a young man, first from the book by Dirac, and then from a multitude of other excellent textbooks. The theory appeared bizarre and marvelous, but it made perfect sense to me. The world, as Shakespeare put it, is "strange and admirable", but it is coherent. I could not understand why people remained unhappy with such a clear and rational theory. In particular, I could not understand why some people lost their time on a non-problem called the "interpretation of quantum mechanics".

I remained of this opinion for many years. Then I moved to Pittsburgh, to work in the group of Ted Newman, a great relativist and one of the most brilliant minds in the generation before mine. While there, the experiments made by the team of Alain Aspect at Orsay, in France, which confirmed spectacularly some of the strangest predictions of quantum mechanics, prompted a long period of discussion in our group. Basically, Ted claimed that quantum theory made no sense. I claimed that it made perfect sense, since it is able to predict unambiguously the probability distribution of any conceivable observation.

A long time has passed, and I have changed my mind. Ted's arguments have finally convinced me: I was wrong, and he was right. I have slowly come to realize that in its most common textbook version, quantum mechanics makes sense as a theory of a small portion of the universe, a "system", only under the assumption that something else in the universe fails to obey quantum mechanics. Hence it becomes self-contradictory, in its usual version, if we take it as a general description of all physical systems of the universe. Or, at least, there is still something key to understand about it.

This change of opinion motivated me to start a novel line of investigation, which I have called "relational quantum mechanics". It has also affected substantially my work in quantum gravity, leading me to consider a different sort of observable quantity as a natural probe of quantum spacetime.

I am now sure that quantum theory has still much to tell us about the deep structure of the world. Unless I change my mind again, of course.


ROGER C. SCHANK
Psychologist & Computer Scientist; Engines for Education Inc.; Author, Making Minds Less Well Educated than Our Own

AI?

When reporters interviewed me in the 70's and 80's about the possibilities for Artificial Intelligence I would always say that we would have machines that are as smart as we are within my lifetime. It seemed a safe answer since no one could ever tell me I was wrong. But I no longer believe that will happen. One reason is that I am a lot older and we are barely closer to creating smart machines.

I have not soured on AI. I still believe that we can create very intelligent machines. But I no longer believe that those machines will be like us. Perhaps it was the movies that led us to believe that we would have intelligent robots as companions. (I was certainly influenced early on by 2001.) Most AI researchers believed that creating machines that were our intellectual equals or better was a real possibility. Early AI workers sought out intelligent behaviors to focus on, like chess or problem solving, and tried to build machines that could equal human beings in those same endeavors. While this was an understandable approach, it was, in retrospect, wrong-headed. Chess playing is not really a typical intelligent human activity. Only some of us are good at it, and it seems to entail a level of cognitive processing that, while impressive, seems quite at odds with what makes humans smart. Chess players are methodical planners. Human beings are not.

Humans are constantly learning. We spend years learning some seemingly simple stuff. Every new experience changes what we know and how we see the world. Getting reminded of our previous experiences helps us process new experiences better than we did the time before. Doing that depends upon an unconscious indexing method that all people learn to do without quite realizing they are learning it. We spend twenty years (or more) learning how to speak properly and learning how to make good decisions and establish good relationships. But we tend not to know what we know. We can speak properly without knowing how we do it. We don't know how we comprehend. We just do.

All this poses a problem for AI. How can we imitate what humans are doing when humans don't know what they are doing when they do it? This conundrum led to a major failure in AI: expert systems, which relied upon rules that were supposed to characterize expert knowledge. But the major characteristic of experts is that they get faster as they know more, while more rules only made the systems slower. The lesson that rules are not at the center of intelligent systems meant that the flaw was relying upon specific, consciously stated knowledge instead of trying to figure out what people mean when they say they just know it when they see it, or that they have a gut feeling.
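To make "rules that characterize expert knowledge" concrete, a toy rule-matcher of the kind being described might look like the Python sketch below; the rules themselves are invented for illustration, not drawn from any real expert system.

    # A toy illustration of the rule-based style: expert knowledge written down as
    # explicit if-then rules and matched one by one.
    RULES = [
        ({"fever", "cough"}, "suspect flu"),
        ({"fever", "stiff neck"}, "suspect meningitis"),
        ({"engine won't start", "lights dim"}, "suspect dead battery"),
    ]

    def diagnose(observations):
        """Fire every rule whose conditions are all present in the observations."""
        facts = set(observations)
        return [conclusion for conditions, conclusion in RULES if conditions <= facts]

    print(diagnose({"fever", "cough"}))  # -> ['suspect flu']
    # The trouble pointed out above: real experts get faster as they know more,
    # while a system like this only gets slower as the rule list grows.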

People give reasons for their behaviors but they are typically figuring that stuff out after the fact. We reason non-consciously and explain rationally later. Humans dream. There obviously is some important utility in dreaming. Even if we don't understand precisely what the consequences of dreaming are, it is safe to assume that it is an important part of our unconscious reasoning process that drives our decision making. So, an intelligent machine would have to dream because it needed to, and would have to have intuitions that proved to be good insights, and it would have to have a set of driving goals that made it see the world in a way that a different entity with different goals would not. In other words it would need a personality, and not one that was artificially installed but one that came with the territory of what it was about as an intelligent entity.

What AI can and should build are intelligent special-purpose entities. (We can call them Specialized Intelligences, or SI's.) Smart computers will indeed be created. But they will arrive in the form of SI's: ones that make lousy companions but know every shipping accident that ever happened and why (the shipping industry's SI), or that serve as an expert on sales (a business-world SI). The sales SI, because sales is all it ever thought about, would be able to recite every interesting sales story that had ever happened and the lessons to be learned from it. For a salesman about to call on a customer, for example, this SI would be quite fascinating. We can expect a foreign policy SI that helps future presidents learn about the past in a timely fashion and helps them make decisions because it knows every decision the government has ever made and has cleverly indexed them so as to be able to apply what it knows to current situations.

So AI, in the traditional sense, will not happen in my lifetime nor in my grandson's lifetime. Perhaps a new kind of machine intelligence will one day evolve and be smarter than us, but we are a really long way from that.


JOHN HORGAN
Director, the Center for Science Writings, Stevens Institute of Technology; Author, Rational Mysticism

Changing My Mind About the Mind-Body Problem

A decade ago, I thought the mind-body problem would never be solved, but I've recently, tentatively, changed my mind.

Philosophers and scientists have long puzzled over how matter — more specifically, gray matter — makes mind, and some have concluded that we'll never find the answer. In 1991 the philosopher Owen Flanagan called these pessimists "mysterians," a term he borrowed from the 1960s rock group "Question Mark and the Mysterians."

One of the earliest mysterians was the German genius Leibniz, who wrote: "Suppose that there be a machine, the structure of which produces thinking, feeling, and perceiving; imagine this machine enlarged but preserving the same proportions, so that you could enter it as if it were a mill… What would you observe there? Nothing but parts which push and move each other, and never anything that could explain perception."

A decade ago I was a hard-core mysterian, because I couldn't imagine what form a solution to the mind-body problem might take. Now I can. If there is a solution, it will come in the form of a neural code, an algorithm, set of rules or syntax that transforms the electrochemical pulses emitted by brain cells into perceptions, memories, decisions, thoughts.

Until recently, a complete decoding of the brain seemed impossibly remote, because technologies for probing living brains were so crude. But over the past decade the temporal and spatial resolution of magnetic resonance imaging, electroencephalography and other external scanning methods has leaped forward. Even more importantly, researchers keep improving the design of microelectrode arrays that can be embedded in the brain to receive messages from — and transmit them to — thousands of individual neurons simultaneously.

Scientists are gleaning information about neural coding not only from non-human animals but also from patients who have had electrodes implanted in their brains to treat epilepsy, paralysis, psychiatric illnesses and other brain disorders. Given these advances, I'm cautiously optimistic that scientists will crack the neural code within the next few decades.

The neural code may resemble relativity and quantum mechanics, in the following sense. These fundamental theories have not resolved all our questions about physical reality. Far from it. Phenomena such as gravity and light still remain profoundly puzzling. Physicists have nonetheless embraced relativity and quantum mechanics because they allow us to predict and manipulate physical reality with extraordinary precision. Relativity and quantum mechanics work.

In the same way, the neural code is unlikely to resolve the mind-body problem to everyone's satisfaction. When it comes to consciousness, many of us seek not an explanation but a revelation, which dispels mystery like sun burning off a morning fog. And yet we will embrace a neural-code theory of mind if it works — that is, if it helps us predict, heal and enhance ourselves. If we can control our minds, who cares if we still cannot comprehend them?


SHERRY TURKLE
Psychologist, MIT; Author, Evocative Objects: Things We Think With

What I've Changed My Mind About

Throughout my academic career – when I was studying the relationship between psychoanalysis and society and when I moved to the social and psychological studies of technology – I've seen myself as a cultural critic. I don't mention this to stress how lofty a job I put myself in, but rather to note that I saw the job as theoretical in its essence. Technologists designed things; I was able to offer insights about the nature of people's connections to them, the mix of feelings in their thoughts, how passions mixed with cognition. Trained in psychoanalysis, I didn't see my stance as therapeutic, but it did borrow from the reticence of that discipline. I was not there to meddle. I was there to listen and interpret. Over the past year, I've changed my mind: our current relationship with technology calls forth a more meddlesome me.

In the past, because I didn't criticize but tried to analyze, some of my colleagues found me complicit with the agenda of technology-builders. I didn't like that much, but understood that this was perhaps the price to pay for maintaining my distance, as Little Red Riding Hood's wolf would say, "the better to hear them with." This year I realized that I had changed my stance. In studying reactions to advanced robots, robots that look you in the eye, remember your name, and track your motions, I found more people who were considering such robots as friends, confidants, and, as they imagined technical improvements, even as lovers. I became less distanced. I began to think about technological promiscuity. Are we so lonely that we will really love whatever is put in front of us?

I kept listening for what stood behind the new promiscuity – my habit of listening didn't change – and I began to get evidence of a certain fatigue with the difficulties of dealing with people. A female graduate student came up to me after a lecture and told me that she would gladly trade in her boyfriend for a sophisticated humanoid robot as long as the robot could produce what she called "caring behavior." She told me that she "needed the feeling of civility in the house" and did not want to be alone. She said: "If the robot could provide a civil environment, I would be happy to help produce the illusion that there is somebody really with me." What she was looking for, she told me, was a "no-risk relationship" that would stave off loneliness; a responsive robot, even if it was just exhibiting scripted behavior, seemed better to her than a demanding boyfriend. I thought she was joking. She was not.

In a way, I should not have been surprised. For a decade I had studied the appeal of sociable robots. They push our Darwinian buttons. They are programmed to exhibit the kind of behavior we have come to associate with sentience and empathy, which leads us to think of them as creatures with intentions, emotions, and autonomy. Once people see robots as creatures, they feel a desire to nurture them. With this feeling comes the fantasy of reciprocation. As you begin to care for these creatures, you want them to care about you.

And yet, in the past, I had found that people approached computational intelligence with a certain "romantic reaction." Their basic position was that simulated thinking might be thinking, but simulated feeling was never feeling and simulated love was never love. Now, I was hearing something new. People were more likely to tell me that human beings might be "simulating" their feelings, or as one woman put it: "How do I know that my lover is not just simulating everything he says he feels?" Everyone I spoke with was busier than ever with their e-mail and virtual friendships. Everyone was busier than ever with their social networking and always-on/always-on-you PDAs. Someone once said that loneliness is failed solitude. Could no one stand to be alone anymore before they turned to a device? Were cyberconnections paving the way to think