Edge 205 — March 13, 2007
(5,950 words)

By W. Daniel Hillis

By John Brockman

In May, 2004, Edge published Danny Hillis's essay in which he proposed "Aristotle": The Knowledge Web.

"With the knowledge web," he wrote, "humanity's accumulated store of information will become more accessible, more manageable, and more useful. Anyone who wants to learn will be able to find the best and the most meaningful explanations of what they want to know. Anyone with something to teach will have a way to reach those who what to learn. Teachers will move beyond their present role as dispensers of information and become guides, mentors, facilitators, and authors. The knowledge web will make us all smarter. The knowledge web is an idea whose time has come."

Last week, Hillis announced a new company called Metaweb, and the free database, Freebase.com. ...

March 9, 2007

Start-Up Aims for Database to Automate Web Searching

By John Markoff

Danny Hillis, left, is a founder of Metaweb Technologies and Robert Cook is the executive vice president for product development.

SAN FRANCISCO, March 8 — A new company founded by a longtime technologist is setting out to create a vast public database intended to be read by computers rather than people, paving the way for a more automated Internet in which machines will routinely share information.

The company, Metaweb Technologies, is led by Danny Hillis, whose background includes a stint at Walt Disney Imagineering and who has long championed the idea of intelligent machines.

He says his latest effort, to be announced Friday, will help develop a realm frequently described as the "semantic Web" — a set of services that will give rise to software agents that automate many functions now performed manually in front of a Web browser.

The idea of a centralized database storing all of the world's digital information is a fundamental shift away from today's World Wide Web, which is akin to a library of linked digital documents stored separately on millions of computers where search engines serve as the equivalent of a card catalog.

In contrast, Mr. Hillis envisions a centralized repository that is more like a digital almanac. The new system can be extended freely by those wishing to share their information widely. ...

... In its ambitions, Freebase has some similarities to Google — which has asserted that its mission is to organize the world's information and make it universally accessible and useful. But its approach sets it apart.

"As wonderful as Google is, there is still much to do," said Esther Dyson, a computer and Internet industry analyst and investor at EDventure, based in New York.

Most search engines are about algorithms and statistics without structure, while databases have been solely about structure until now, she said.

"In the middle there is something that represents things as they are," she said. "Something that captures the relationships between things."

That addition has long been a vision of researchers in artificial intelligence. The Freebase system will offer a set of controls that will allow both programmers and Web designers to extract information easily from the system.

"It's like a system for building the synapses for the global brain," said Tim O'Reilly, chief executive of O'Reilly Media, a technology publishing firm based in Sebastopol, Calif. ...

Below is Hillis's addendum to his original essay. ...



[W. DANIEL HILLIS:] In the spring of 2000, while I was writing the Aristotle essay, Jimmy Wales and Larry Sanger began taking a much more practical approach to a similar problem. Their project, called Nupedia, was an attempt to create a carefully edited encyclopedia of the world's knowledge that would be available to anyone, for free. The Nupedia project made some progress, but the going was slow. About a year later, they put a wiki on the web, allowing anyone to contribute feeder material to Nupedia. That feeder project was called "Wikipedia". What happened next is a piece of history that should make us turn-of-the-millennium humans all feel proud.

While all this was going on, I continued to plug away, trying to build a prototype of the more structured, computer-mediated knowledge base that is described in the essay. Comments from my friends, including those posted on Edge, helped me realize that trying to build a tutor and a knowledge base at the same time would be biting off way too much. So, I decided to concentrate on the "Knowledge Web" part of the problem. Even that seemed to be an uphill battle, because the collapse of the dot-com boom dimmed the funding prospects for all things connected. It was not a good time for ambitious ideas.

Or maybe it was a good time. It was during this period, undistracted by the frenzy of a boom, that Google and Wikipedia were able to build spectacularly ambitious tools that made us all smarter. Eventually, their success reminded everyone that the information revolution was just beginning.  There was renewed enthusiasm for ambitious dreams. This time, I was better prepared for it, having met, during the interim, a great product designer (Robert Cook) and a great engineer (John Giannandrea), both of whom shared the dream of building a connected database of human knowledge that could be presented by computers, to humans.

In retrospect the key idea in the "Aristotle" essay was this: if humans could contribute their knowledge to a database that could be read by computers, then the computers could present that knowledge to humans in the time, place and format that would be most useful to them.  The missing link to make the idea work was a universal database containing all human knowledge, represented in a form that could be accessed, filtered and interpreted by computers.

One might reasonably ask: Why isn't that database the Wikipedia or even the World Wide Web? The answer is that these repositories of knowledge are designed to be read directly by humans, not interpreted by computers. They confound the presentation of information with the information itself. The crucial difference of the knowledge web is that the information is represented in the database, while the presentation is generated dynamically. Like Neal Stephenson's storybook, the information is filtered, selected and presented according to the specific needs of the viewer.

John, Robert and I started a project, then a company, to build that computer-readable database. How successful we will be is yet to be determined, but we are really trying to build it: a universal database for representing any knowledge that anyone is willing to share. We call the company Metaweb, and the free database, Freebase.com. Of course it has none of the artificial intelligence described in the essay, but it is a database in which each topic is connected to other topics by links that describe their relationship. It is built so that computers can navigate and present it to humans. Still very primitive, a far cry from Neal Stephenson's magical storybook, it is a step, I hope, in the right direction.
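Hillis's description — topics connected to other topics by links that describe their relationship, navigable by computers, with the human presentation generated dynamically — can be illustrated with a toy sketch. This is not Metaweb's or Freebase's actual schema or API; the topics, relation names, and `present` function below are purely hypothetical:

```python
# Toy sketch of a "knowledge web": topics linked by named relationships,
# stored in machine-readable form, with the human-readable presentation
# generated on the fly. (Illustrative only; not Freebase's real schema.)

links = [
    ("Aristotle", "tutored", "Alexander the Great"),
    ("Aristotle", "studied under", "Plato"),
    ("Aristotle", "wrote", "Nicomachean Ethics"),
]

def present(topic):
    """Dynamically generate a human-readable summary from the links."""
    return " ".join(f"{s} {rel} {o}." for s, rel, o in links if s == topic)

print(present("Aristotle"))
```

Because the facts live in the database rather than in any fixed page, the same links could be filtered or rendered differently for each viewer — the separation of information from presentation that the essay emphasizes.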

"Danger – brilliant minds at work...A brilliant book: exhilarating, hilarious, and chilling." The Evening Standard (London)

Hardcover - UK
£12.99, 352 pp
Free Press, UK

Paperback - US
$13.95, 336 pp
Harper Perennial
(March 1, 2007)

WHAT IS YOUR DANGEROUS IDEA? Today's Leading Thinkers on the Unthinkable With an Introduction by STEVEN PINKER and an Afterword by RICHARD DAWKINS Edited By JOHN BROCKMAN

"A selection of the most explosive ideas of our age." Sunday Herald "Provocative" The Independent "Challenging notions put forward by some of the world's sharpest minds" Sunday Times "A titillating compilation" The Guardian

"...This collection, mostly written by working scientists, does not represent the antithesis of science. These are not simply the unbuttoned musings of professionals on their day off. The contributions, ranging across many disparate fields, express the spirit of a scientific consciousness at its best — informed guesswork "Ian McEwan, from the Introduction, in The Telegraph

Paperback - US
$13.95, 272 pp
Harper Perennial

Paperback - UK
£7.99 288 pp
Pocket Books

WHAT WE BELIEVE BUT CANNOT PROVE Today's Leading Thinkers on Science in the Age of Certainty With an Introduction by IAN MCEWAN Edited By JOHN BROCKMAN

"An unprecedented roster of brilliant minds, the sum of which is nothing short of an oracle — a book ro be dog-eared and debated." Seed "Scientific pipedreams at their very best." The Guardian "Makes for some astounding reading." Boston Globe Fantastically stimulating...It's like the crack cocaine of the thinking world.... Once you start, you can't stop thinking about that question." BBC Radio 4

March 13, 2007

What's So Funny? Well, Maybe Nothing
By John Tierney

When Robert R. Provine tried applying his training in neuroscience to laughter 20 years ago, he naïvely began by dragging people into his laboratory at the University of Maryland, Baltimore County, to watch episodes of "Saturday Night Live" and a George Carlin routine. They didn't laugh much. It was what a stand-up comic would call a bad room.

So he went out into natural habitats — city sidewalks, suburban malls — and carefully observed thousands of "laugh episodes." He found that 80 percent to 90 percent of them came after straight lines like "I know" or "I'll see you guys later." The witticisms that induced laughter rarely rose above the level of "You smell like you had a good workout."

"Most prelaugh dialogue," Professor Provine concluded in "Laughter," his 2000 book, "is like that of an interminable television situation comedy scripted by an extremely ungifted writer."

He found that most speakers, particularly women, did more laughing than their listeners, using the laughs as punctuation for their sentences. It's a largely involuntary process. People can consciously suppress laughs, but few can make themselves laugh convincingly.

"Laughter is an honest social signal because it's hard to fake," Professor Provine says. "We're dealing with something powerful, ancient and crude. It's a kind of behavioral fossil showing the roots that all human beings, maybe all mammals, have in common."


The Brain on the Stand
By Jeffrey Rosen

How neuroscience is transforming the legal system.

...Two of the most ardent supporters of the claim that neuroscience requires the redefinition of guilt and punishment are Joshua D. Greene, an assistant professor of psychology at Harvard, and Jonathan D. Cohen, a professor of psychology who directs the neuroscience program at Princeton. Greene got Cohen interested in the legal implications of neuroscience, and together they conducted a series of experiments exploring how people's brains react to moral dilemmas involving life and death. In particular, they wanted to test people's responses in the fMRI scanner to variations of the famous trolley problem, which philosophers have been arguing about for decades. ...

...Michael Gazzaniga, a professor of psychology at the University of California, Santa Barbara, and author of "The Ethical Brain," notes that within 10 years, neuroscientists may be able to show that there are neurological differences when people testify about their own previous acts and when they testify to something they saw. "If you kill someone, you have a procedural memory of that, whereas if I'm standing and watch you kill somebody, that's an episodic memory that uses a different part of the brain," he told me. ...

...In a series of famous experiments in the 1970s and '80s, Benjamin Libet measured people's brain activity while telling them to move their fingers whenever they felt like it. Libet detected brain activity suggesting a readiness to move the finger half a second before the actual movement and about 400 milliseconds before people became aware of their conscious intention to move their finger. Libet argued that this leaves 100 milliseconds for the conscious self to veto the brain's unconscious decision, or to give way to it — suggesting, in the words of the neuroscientist Vilayanur S. Ramachandran, that we have not free will but "free won't." ...

...The legal implications of the new experiments involving bias and neuroscience are hotly disputed. Mahzarin R. Banaji, a psychology professor at Harvard who helped to pioneer the I.A.T., has argued that there may be a big gap between the concept of intentional bias embedded in law and the reality of unconscious racism revealed by science. When the gap is "substantial," she and the U.C.L.A. law professor Jerry Kang have argued, "the law should be changed to comport with science" — relaxing, for example, the current focus on intentional discrimination and trying to root out unconscious bias in the workplace with "structural interventions," which critics say may be tantamount to racial quotas. ...

...Others agree with Greene and Cohen that the legal system should be radically refocused on deterrence rather than on retribution. Since the celebrated M'Naughten case in 1843, involving a paranoid British assassin, English and American courts have recognized an insanity defense only for those who are unable to appreciate the difference between right and wrong. (This is consistent with the idea that only rational people can be held criminally responsible for their actions.) According to some neuroscientists, that rule makes no sense in light of recent brain-imaging studies. "You can have a horrendously damaged brain where someone knows the difference between right and wrong but nonetheless can't control their behavior," says Robert Sapolsky, a neurobiologist at Stanford. "At that point, you're dealing with a broken machine, and concepts like punishment and evil and sin become utterly irrelevant. Does that mean the person should be dumped back on the street? Absolutely not. You have a car with the brakes not working, and it shouldn't be allowed to be near anyone it can hurt." ...


March 2007

defined the 20th century

100 Prospect contributors answered our invitation to respond to the question on the left in no more than 250 words. An edited selection of their responses is printed here — the rest are on our website. (Thanks to John Brockman for allowing us to borrow his Edge website idea). The pessimism of the responses is striking: almost nobody expects the world to get better in the coming decades, and many predict it will get much worse.

Ed. Note: Among the 100 responses to the question posed by Prospect Editor David Goodhart, are a number of Edge contributors:

Brian Eno, musician

Interventionists vs laissez-faireists
One of the big divisions of the future will be between those who believe in intervention as a moral duty and those who don't. This issue cuts across the left/right divide, as we saw in the lead-up to the invasion of Iraq. It asks us to consider whether we believe our way of doing things to be so superior that we must persuade others to follow it, or whether, on the other hand, we are prepared to watch as other countries pursue their own, often apparently flawed, paths. It will be a discussion between pluralists, who are prepared to tolerate the discomfort of diversity, and those who feel they know what the best system is and feel it is their moral duty to encourage it.

Globalists vs nationalists
How prepared are we to allow national governments the freedom to make decisions which may not be in the interests of the rest of the world? With issues such as climate change becoming increasingly urgent, many people will begin arguing for a global system of government with the power to overrule specific national interests.

Communities of geography vs communities of choice
At the same time, some people will feel less and less allegiance to "the nation," which will become an increasingly nebulous act of faith, and more allegiance to "communities of choice" which exist outside national identities and geographical restraints. We see the beginnings of this in transnational pressure groups such as Greenpeace, MoveOn and Amnesty International, but also in the choices that people now make about where they live, bank their money, get their healthcare and go on holiday.

Real life vs virtual life
Some people will spend more and more of their time in virtual communities such as Second Life. They will claim that their communities represent the logical extension of citizen democracy. They will be ridiculed and opposed by "First Lifers," who will insist that reality with all its complications always trumps virtual reality, but the second-lifers in turn will insist that they live in a world of their own design and therefore are by definition more creative and free. This division will deepen and intensify, and will develop from just a cultural preference into a choice about how and where people spend their lives.

Life extension for all vs for some
There will be an increasingly agonised division between those who feel that new life-extension technologies should be available only to those who can afford them and those who feel they should be available to everyone. Life itself will be the resource over which wars will be fought: the "have nots" will feel that there is a fundamental injustice in the possibility for some people to enjoy conspicuously longer and healthier lives because they happen to be richer.

Anthony Giddens, sociologist

"The future isn't what it used to be," George Burns once said. And he was right. This century we are peering over a precipice, and it's an awful long way down. We have unleashed forces into the world that it is not certain that we can control. We may have already done so much damage to the planet that by the end of the century people will live in a world ravaged by storms, with large areas flooded and others arid. But you have to add in nuclear proliferation, and new diseases that we might have inadvertently created. Space might become militarised. The emergence of mega-computers, allied to robotics, might at some point also create beings able to escape the clutches of their creators.

Against that, you could say that we haven't much clue what the future will bring, except it's bound to be things that we haven't even suspected. Twenty years ago, Bill Gates thought there was no future in the internet. The current century might turn out much more benign than scary.

As for politics, left and right aren't about to disappear—the metaphor is too strongly entrenched for that. My best guess about where politics will focus would be upon life itself. Life politics concerns the environment, lifestyle change, health, ageing, identity and technology. It may be a politics of survival, it may be a politics of hope, or perhaps a bit of both.

Nicholas Humphrey, scientist

How can anyone doubt that the faultline is going to be religion? On one side there will be those who continue to appeal for their political and moral values to what they understand to be God's will. On the other there will be the atheists, agnostics and scientific materialists, who see human lives as being under human control, subject only to the relatively negotiable constraints of our evolved psychology. What makes the outcome uncertain is that our evolved psychology almost certainly leans us towards religion, as an essential defence against the terror of death and meaninglessness.

Marek Kohn, science writer

The right, of course, is still with us; robust structures remain to uphold individualism and the pursuit of wealth. There is also plenty of room in the current orthodoxy for liberalism and conservatism of all kinds of stripes. What's left out? Equality and solidarity — which take us back to the égalité and fraternité of the French revolution, where the terms "left" and "right" came in. These seem to be fundamental values, intuitively recognised as the basis of fair and healthy social relations, so we may expect that they will reassert themselves. But as dominant ideologies fail to give them their fair dues, they will reappear in marginal and often disagreeable guises. Social solidarity may be advanced within narrow group solidarities; equality may be appropriated by demagogues.

Recent manifestations in central Europe and South America have been overlooked because they are accompanied by tendencies that rightly affront liberals. It is hard to imagine what could restore social solidarity and equality to the heart of political discourse, so we must expect that collectivist tendencies in our kind of polity will be largely confined to the bureaucratic management of resources placed under ever-growing pressure by economic growth and its environmental consequences.

Mark Pagel, scientist

Modern humans evolved to live in small co-operative groups with extensive divisions of labour among unrelated people linked only by their common culture. Co-operation is fragile, being the contented face of trust, reciprocity and the perception of a shared fate—when they go, the mask can quickly fall. The psychology of the co-operative group, of how we can maintain it and equally how we can control its dangerous tendencies—parochialism, xenophobia, exclusion and warfare—will often be at the front door of 21st-century politics.

The reasons are clear. The politics of the 20th century were expansive and hopeful, enlivened by growing prosperity. In the 21st century, increasing multiculturalism and widespread movements of people will repeatedly challenge the trust and sense of equity that binds together co-operative groups, unleashing instincts for selfish preservation. For politicians and thinkers, a pressing task at all levels of politics is to seek ways to manage these issues that somehow draw all of the actors into the elaborate and fragile reciprocity loops of the co-operative society. It sounds impossible, it won't be easy and there are no simple recipes. But if we fail, we risk sliding into xenophobic hysteria, clashes of culture, and the frenzied and dangerous grabbing of natural resources.

Lisa Randall, scientist

Debates today have descended into those between the lazy and the slightly less lazy. We are faced with urgent issues, yet the speed with which lawmakers approach them is glacial—actually slower than that: glaciers are melting faster than we are attacking the issues.

Steven Rose, biologist

Last century's alternatives were socialism or barbarism. This century's prospects are starker: social justice or the end of human civilisation—if not our species. To achieve that justice it is imperative that we retain the utopian dream of "from each according to their abilities, to each according to their needs," but needs and abilities are constantly being refashioned by runaway sciences and technologies harnessed ever more closely to global industry and imperial power and embedded within a degraded and degrading environment. This century's "left," just as that of the last century, is constituted by those groups, old or newly constituted, struggling against these hegemonic powers.


Out There
By Richard Panek

Only 4 percent of the universe is made of the kind of matter that makes up you and me and all the planets and stars and galaxies. The rest — 96 percent — is ... who knows?

Three days after learning that he won the 2006 Nobel Prize in Physics, George Smoot was talking about the universe. Sitting across from him in his office at the University of California, Berkeley, was Saul Perlmutter, a fellow cosmologist and a probable future Nobelist in Physics himself. Bearded, booming, eyes pinwheeling from adrenaline and lack of sleep, Smoot leaned back in his chair. Perlmutter, onetime acolyte, longtime colleague, now heir apparent, leaned forward in his.

"Time and time again," Smoot shouted, "the universe has turned out to be really simple."

Perlmutter nodded eagerly. "It's like, why are we able to understand the universe at our level?"

"Right. Exactly. It's a universe for beginners! 'The Universe for Dummies'!"

But as Smoot and Perlmutter know, it is also inarguably a universe for Nobelists, and one that in the past decade has become exponentially more complicated. Since the invention of the telescope four centuries ago, astronomers have been able to figure out the workings of the universe simply by observing the heavens and applying some math, and vice versa. Take the discovery of moons, planets, stars and galaxies, apply Newton's laws and you have a universe that runs like clockwork. Take Einstein's modifications of Newton, apply the discovery of an expanding universe and you get the big bang. "It's a ridiculously simple, intentionally cartoonish picture," Perlmutter said. "We're just incredibly lucky that that first try has matched so well."

But is our luck about to run out? Smoot's and Perlmutter's work is part of a revolution that has forced their colleagues to confront a universe wholly unlike any they have ever known, one that is made of only 4 percent of the kind of matter we have always assumed it to be — the material that makes up you and me and this magazine and all the planets and stars in our galaxy and in all 125 billion galaxies beyond. The rest — 96 percent of the universe — is ... who knows?

"Dark," cosmologists call it, in what could go down in history as the ultimate semantic surrender. This is not "dark" as in distant or invisible. This is "dark" as in unknown for now, and possibly forever.


March 11, 2007

Reflections on Life as a Shaker-Upper

By Erik Piepenburg

SINCE the 1960s the playwright, director and designer Richard Foreman has been the emperor of New York experimental theater, with some constants in those experiments: nonlinear tableaus about the unconscious mind, deliriously decorated sets, actors dressed like commandos from Dr. Seuss's special forces. Voices (often Mr. Foreman's own) and, lately, projected films add to the sensory overload.

In his current production, "Wake Up, Mr. Sleepy! Your Unconscious Mind Is Dead!," running through April 1 at the Ontological Theater at St. Mark's Church, he imagines a topsy-turvy world where doll heads and catatonic humans question the concept of sentience. Ben Brantley of The Times called it a "dazzling exercise in reality-shifting."

Mr. Foreman, who turns 70 in June, says he's actually trying to provide clarity in a dense world. Wearing glasses and his trademark droopy clothes, he spoke with Erik Piepenburg about his career, including his critics.

Here are excerpts from the conversation:

FROM THE START I was already 32 and very shy about beginning. I worked under the influence of Jonas Mekas, who started the so-called underground film movement in New York. After graduating from the Yale Drama School I was not happy with any of the theater that I saw, but I happened upon these underground films young people were making, and I was ravished by them. I thought I saw pure poetry at work.

THE FIRST PRODUCTION I spent a couple of months all by myself building a set and had my filmmaker friends, nonactors, play the parts. I really enjoyed the primitiveness of doing all these things. Now the plays are technically pretty sophisticated and complex, so it's a world of difference, but the same basic impulse is at work.

THE AUDIENCE THEN I began in SoHo in 1968 when SoHo was really an artists' community. I made my plays for other artists who have since become famous. They would come see my plays, and I could go see their work. It really was a conversation between artists. ...


March 11, 2007

Tie Space Continuum

By Herbert Muschamp

Since it is cautiously conceded that composers, poets and visual artists often function in the capacity of philosophers, we should feel reasonably confident in regarding lovers of music, poetry and painting as potential students of philosophy. So why not extend this exalted status to shoppers? Time to stock up on shirts? Think of yourself as a seeker of wearable truth.

Eighteenth-century philosophers would have supported this move. The education of taste in all forms was a pivotal project for Enlightenment thinkers. They applied it to dress, cuisine and interior decoration as well as to opera, architecture and painting. I am reminded of their enlightened outlook when I go shopping for neckties at Battistoni, the celebrated tailor in Rome.

Battistoni makes the handsomest ties in the universe. You don't need a degree in philosophy to wear one. All that's required is the ability to tie a decent knot. I hope to master that masculine art one day. Alas, I have found it difficult to concentrate on practicing it because for me a Battistoni tie represents an altogether different kind of knot: what philosophers call the problem of the One and the Many.

Is there an underlying unity beneath the multiplicity of life's forms? Philosophers have long despaired of finding a solution to this problem. Their pessimism has not, however, deterred scientists from continuing the search for a grand unified theory. String theory, which envisions a universe made of vibrating cords of energy, is the latest contender for a solution.

Shoppers for men's wear should not be discouraged, either. Even if we cannot solve the problem, we can at least live the problem, and live it well, provided we know where to shop. "The Elegant Universe" is the title Brian Greene gave his popular book about string theory.

Let us ponder a possible sequel on the subject of thread theory. We will call it Universal Elegance. Battistoni will be the laboratory for our cosmic research.


March 2007


Why have we not encountered intelligent extraterrestrial life? We used to assume that the aliens had blown themselves up. But perhaps they just got addicted to computer games

By Geoffrey Miller

Sometime in the 1940s, Enrico Fermi was discussing the possibility of extraterrestrial intelligence with other physicists. They argued that our galaxy holds 100bn stars, that intelligent life evolved quickly on earth and that therefore extraterrestrial intelligence must be common. Fermi listened patiently, then asked, "So where is everybody?" That is, if extraterrestrial intelligence is so common, why haven't we met any bright aliens yet? This conundrum became known as Fermi's paradox.

The paradox has become ever more baffling. Over 150 extra-solar planets have been identified in the last few years, suggesting that life-hospitable planets orbit most stars. Paleontology shows that organic life evolved very quickly after the earth's surface cooled. Given simple life, evolution shows progressive trends towards larger bodies, brains and social complexity. Evolutionary psychology reveals several credible paths from simpler social minds to human-level creative intelligence. Yet 40 years of intensive searching for extraterrestrial intelligence has yielded nothing. No radio signals, no credible spacecraft sightings.
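The optimists' reasoning that Fermi was answering is essentially a chain of multiplied fractions, later formalised as the Drake equation. The sketch below restates that logic; every parameter value is an illustrative guess introduced here, not a figure from the article:

```python
# Drake-style back-of-the-envelope estimate of the number of intelligent
# civilisations in the galaxy. All fractions are illustrative guesses.

stars_in_galaxy = 100e9             # ~100bn stars, as Fermi's colleagues argued
frac_with_habitable_planets = 0.5   # guess: stars with life-hospitable planets
frac_where_life_arises = 0.1        # guess: habitable planets where life starts
frac_evolving_intelligence = 0.01   # guess: living worlds evolving intelligence

expected = (stars_in_galaxy * frac_with_habitable_planets
            * frac_where_life_arises * frac_evolving_intelligence)
print(f"{expected:,.0f} expected civilisations")
```

Even under far more pessimistic guesses the product stays large, which is what makes the observed silence — Fermi's "So where is everybody?" — a paradox rather than an expected outcome.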

It looks as if there are two possibilities. Perhaps our science has. ...


March 7, 2007

Early Christianity's Martyrdom Debate
By David Van Biema

Princeton University's Elaine Pagels is about the nearest thing there is to a superstar in the realm of Christian history scholarship. It is largely through her work that many understand the early non-Orthodox Christianity that she at one point dubbed (and later un-dubbed, finding the term imprecise) the Gnostic Gospels. She breaks new ground with the debut of Reading Judas: The Gospel of Judas and the Shaping of Christianity, her collaboration with Harvard Divinity scholar Karen King about the second-century "Gospel of Judas" that was made public last year.

TIME: You and Karen write that the "Judas" author was angry, particularly at the Christian church's developing cult of martyrdom. You write that he conveyed "the urgency of someone who wants to unmask what he feels is the hideous folly of leaders who encourage people to get themselves killed in this way." Whom might he have meant?

Pagels: So far as I know, all the so-called "fathers of the church" glorified martyrdom. Ignatius, who wrote in Syria in around 108 AD, speaks passionately about "being ground up by the teeth of wild beasts to become God's wheat" — that is, by martyrdom, he becomes the bread of the Eucharist.

What could have provoked such adamance?

Christians were undergoing terrible persecution at the time. Leaders like Ignatius felt that a willingness to "die for God" was essential for the movement to survive; otherwise, its members could be intimidated, and it might have died out.

Was it a successful strategy?

Yes. We have evidence to that effect. The philosopher Justin wrote that Socrates said that the purpose of philosophy was to prepare us to die bravely, and when Justin saw illiterate Christians facing torture and execution in the public stadium, he became a convert — and later a martyr himself.

And the Judas author objected to this?

He did not suggest that a believer should deny being a Christian, even if the penalty were death. But he challenges leaders who encourage people to "die for God" with what he thought were false promises — huge rewards in heaven, and guaranteed resurrection.

Does this tell us something new?

Before these discoveries, we knew little about Christians who opposed martyrdom — or opposed encouraging it — because the people who challenged the dominant view were ridiculed as cowards and heretics, and their writings didn't survive. The Gospel of Judas shows what intense and painful arguments martyrdom caused among Christians.


March 7, 2007

Damascus, Ramallah, and Jerusalem, March 4, 2007

Give Palestine's Unity Government a Chance
Scott Atran, Robert Axelrod and Richard Davis

Most Israeli leaders we talked to agree that Abbas is sincere in wanting to steer the unity government and all Palestinian factions to recognize Israel.

Whatever the final makeup of the unity government now being formed between Palestinian president Abbas's Fateh and rival Hamas, it is certain to fall short of demands by the United States that it renounce violence, recognize Israel and accept past peace agreements. Although the U.S. and Israel threaten to undermine the unity government through isolation and sanctions, Abbas insists that this power-sharing arrangement is the best chance to end the political and economic crisis resulting from the international embargo of the current Hamas-led government, and to move towards peace with Israel.

There are two ways to handle the short-term future. One way is to give Abbas some time to reconcile Palestinian factions into engaging Israel and to use his mandate to pursue the peace process. The risk of this option is that it may strengthen those who will never accept Israel. Another way is to undermine the unity government, with the risks of widening support for Palestinian radicals, increasing Iranian influence, and destabilizing the region through the unpredictable consequences of civil strife. On balance, the better option is to give the unity government a chance.

In this time of great uncertainty in the Middle East, we went to the area as part of a scientific study on the values underpinning political conflict. Our questions evoked deep and informative reflections from a variety of senior Israeli and Palestinian leaders on the prospects of a Palestinian unity government. Upon considering these reflections, we conclude that temporarily supporting the unity government carries less risk than undermining it.


[ED NOTE: See New York Times Magazine cover story "Darwin's God" (3.4.07) on Scott Atran's theories on the evolution of religion.]

March 6, 2007

Jonathan Lethem + Janna Levin
The novelist and the cosmologist meet up to talk about reality.

When theoretical cosmologist Janna Levin began writing A Madman Dreams of Turing Machines it was a work of non-fiction. But she realized, as her subjects Gödel and Turing had, that the tools of non-fiction—or those of scientific inquiry—were insufficient for discerning truth. As a novelist, Jonathan Lethem traffics regularly in different degrees of truth and is similarly fascinated with what constitutes reality. Recently the two met for lunch at the National Arts Club in New York to talk about this elusive concept—its guises, its enchantments, and how we know it when we see it. ...

...Lethem: One notion that I was very interested in—and maybe I was reading your books and finding these moments because it's a preoccupation of mine at the moment—is your awareness and sensitivity to questions of originality and collaboration and individual achievement, which are, of course, obsessions of the scientist maybe even more than the artist. But certainly they're mutual obsessions. And you had Gödel talking about how someone else would have thought of it if he hadn't, and it all broaches the question of whether the ego of the individual, artist or scientist, is essential, and whether the work is truly a collective enterprise.

Levin: Well, I think in science it's a very interesting question. If you really believe in objective reality, then you don't matter at all. If it hadn't been Einstein, eventually it would have been somebody else. And if it hadn't been Gödel, eventually it would have been somebody else. And so scientists are playing this difficult game with themselves, with their own egos. They want to be the most brilliant, get there first, be accomplished, and yet at the end of the day, they have to say to themselves, "But it really didn't matter that it was me, and my marks cannot be left on this in any way." It's even in the way scientists write—all scientists begin their papers in exactly the same way. I mean, it drives me insane, but there's a certain, I'm not going to say charm, to it, but I understand it. ...


John Brockman, Editor and Publisher
Russell Weinberger, Associate Publisher

contact: [email protected]
Copyright © 2007 by
Edge Foundation, Inc.
All Rights Reserved.