Edge 185 — June 15, 2006
(5,900 words)



"Edge.org |The Spectator | Il Foglio| Nepszabadsag | DU | The Economist | L'Express | Die Weltwoche | Folio | Le point | The New York Review of Books " — Magazine Roundup: Arts, Essays, Ideas from Germany by perlentaucher.de



DARK MATERIAL
By Martin Rees

Can civilisation be safeguarded, without humanity having to sacrifice its diversity and individualism? This is a stark question, but I think it's a serious one.


news


[6.13.06]

Magazine Roundup

Edge.org | The Spectator | Il Foglio | Nepszabadsag | DU | The Economist | L'Express | Die Weltwoche | Folio | Le point | The New York Review of Books

Edge.org, 30.05.2006 (USA)

The best essays about the disconcerting media revolution known as the Internet continue to come from the USA. A fortnight ago in the New York Times Magazine, Kevin Kelly (more here) set out his euphoric vision of the Internet-based collective and the universal book. Almost immediately, although without direct reference to Kelly, Jaron Lanier (more here) penned an acerbic counter-argument, criticising the collective spirit kindled by projects such as Wikipedia, which assumes that a collective intelligence will aggregate by itself on the net without responsible authors. Lanier talks of a "new online collectivism" and the "resurgence of the idea that the collective is all-wise". "This idea has had dreadful consequences when thrust upon us from the extreme Right or the extreme Left in various historical periods. The fact that it's now being re-introduced today by prominent technologists and futurists, people who in many cases I know and like, doesn't make it any less dangerous." Lanier does not believe in erasing authorship: "The beauty of the Internet is that it connects people. The value is in the other people. If we start to believe that the Internet itself is an entity that has something to say, we're devaluing those people and making ourselves into idiots."

Lanier's essay provoked many people to enter into the debate at edge.org, Kevin Kelly among them.

[...continue]



June 13, 2006

Magazine Roundup

Edge.org | L'Express | The Economist | Die Weltwoche | The New York Review of Books | The Spectator | Il Foglio | Nepszabadsag | Folio | DU | Le point | Elsevier | The New York Times Book Review

Edge.org, 30.05.2006

The best essays about the disconcerting media revolution known as the Internet still come from the USA. A few weeks ago in the New York Times Magazine, Kevin Kelly (more here) sketched the euphoric vision of a collective and infinite book created by the Internet. Almost simultaneously, and without responding directly to Kelly, Jaron Lanier (more here) set a sharp counter-accent, criticising a collective spirit, fanned by projects such as Wikipedia, which believes that a world mind will aggregate on the net all by itself and without responsible authors. Lanier speaks of a "new online collectivism", "a resurgence of the idea of an all-knowing collective": "This idea has had dreadful consequences when it was thrust upon us from the extreme Right or the extreme Left in various historical periods. The fact that it is now being brought up again by prominent researchers and futurists, among them people I know and like, doesn't make it any less dangerous." Lanier does not believe in abolishing authorship: "The beauty of the net is that it connects people. The value is in those other people. If we start to believe that the Internet itself, as a whole, has something to say, we devalue those people and make ourselves into idiots."

Lanier's essay has prompted intense debate on edge.org; among those responding is Kevin Kelly.

[...continue]



June 13, 2006

Magazine Roundup

The Wikipedia principle is digital Maoism, claims Jaron Lanier in Edge. In L'Express, Eric Hobsbawm and Jacques Attali celebrate Karl Marx as a thinker of globalisation. Segolene Royal probably sees things somewhat differently, we learn from Die Weltwoche. The Economist doesn't trust any robot. The New York Review of Books watches the opium industry in Afghanistan grow and flourish. The Spectator reports from Darfur. DU devotes itself to the people of the "Critical Forests". In Le Point, Bernard-Henri Levy celebrates Angela Merkel as living proof of the continuing relevance of Simone de Beauvoir's work.

[...continue]



[6.15.06]

A Wiki Situation
By Scott McLemee


You don’t find any of Wells’s meritocracy at work in Wikipedia. There is no benchmark for quality. It is an intellectual equivalent of the Wild West, without the cows or the gold...And yet, strangely enough, you find imagery very similar to that of Wells’s "world brain" emerging in some of the more enthusiastic claims for Wikipedia. As the computer scientist Jaron Lanier noted in a recent essay, there is now an emergent sensibility he calls "a new online collectivism" – one for which "something like a distinct kin to human consciousness is either about to appear any minute, or has already appeared." (Lanier offers a sharp criticism of this outlook. See also the thoughtful responses to his essay assembled by John Brockman.)

[...continue]



Munich [6.13.06]
FEUILLETON — page 13
Lack of evidence (Aus Mangel an Beweisen):
Science debates faith and intelligent design

by Andrian Kreye

New York City literary agent and head of the Third Culture movement John Brockman knows how to start a debate. He also knows which debates to avoid, which is why he and his like-minded authors had always stayed away from politics. Brockman and leading scientific thinkers like Pinker, Diamond and Dennett had set out to challenge the humanities by conducting intellectual debates with the arguments of science. All the same, they had avoided the debate about intelligent design and the forays of Christian fundamentalists to get the American public to doubt Darwin's theory of evolution. In past centuries there had rarely been grounds for debate between faith and science.
. . .

Shortly after the symposium he staged at Harvard this spring, Brockman had to deal with the tar pits of the intelligent design debates after all, and published the anthology of essays 'Intelligent Thought'. The book features some of the best science writers, who write against the folly of creationism with a passion, as if their lives were at stake. Brockman remembers when he decided to get involved in this debate: "Last fall the president, the majority leader of the Senate and Senator McCain all publicly declared their support for teaching Intelligent Design alongside evolution in public schools."

[...continue]



WHAT'S ONLINE
By Dan Mitchell
June 10, 2006

The Trouble With Wikis

There is nothing wrong, per se, with Wikipedia, writes Jaron Lanier, the computer scientist, artist and author, in a provocative essay on the Web site Edge: The Third Culture (edge.org). Rather, he says, the problem is how Wikipedia is used and the way it has been elevated to such importance so quickly.

Is it a good idea to rely on an encyclopedia that can be changed on a whim by any number of anonymous users? Is relying on the "hive mind" envisioned by the former Wired magazine editor Kevin Kelly the way to go about using the Web?

Usually not, Mr. Lanier writes. Doing so amounts to taking techno-utopianism to its extreme — favoring the tool over the worker, and the collective over the individual.


Articles of Note
June 10, 2006

Collectives have their uses, but writing encyclopedias? With no firm editorial hand? Call it the Wikipedia problem...


Responses to Jaron Lanier's Crit of Online Collectivism
By David Pescovitz
June 10, 2006

Two weeks ago, Edge.org published Jaron Lanier's essay "Digital Maoism: The Hazards of the New Online Collectivism," critiquing the importance people are now placing on Wikipedia and other examples of the "hive mind," as people called it in the cyberdelic early 1990s. It's an engaging essay to be sure, but much more thought-provoking to me are the responses from the likes of Clay Shirky, Dan Gillmor, Howard Rheingold, our own Cory Doctorow, Douglas Rushkoff, and, of course, Jimmy Wales.

[...continue]


The real bias in Wikipedia
By Robert McHenry
June 7, 2006

No complex project can be expected to yield satisfactory results without a clear vision of what the goal is – and here I mean what a worthy internet encyclopedia actually looks like – and a plan to reach that goal, which will include a careful inventory of the needed skills and knowledge and some meaningful measures of progress. To date, the "hive mind" of Wikipedia's "digital Maoism" (as Jaron Lanier's vigorous critique on edge.org calls it) displays none of these.

[...continue]



Jaron Lanier on the stupidity of the hive mind
By Jack Schofield

May 31, 2006

Jaron Lanier, who more or less invented virtual reality in the 1980s (making me a lifelong Lanier fan), has published a fascinating Edge essay on Digital Maoism: The Hazards of the New Online Collectivism.

...

Comment: Edge is based on the idea of accumulating the knowledge of a very small number of the world's smartest people — more or less the opposite of Google or Wikipedia.

[...continue]




On "QUANTUM MONKEYS" by Seth Lloyd

RUDY RUCKER

[6.13.06]

Lloyd draws on the analogy of monkeys who are pounding away not on typewriters, but on keyboards that input code to a computer. The laws of nature are the computer. And the monkeys are inputting possible programs. Now, as it happens, lots of short programs generate nice-looking complex patterns. These are what Wolfram calls the Class 4 computations; the ones that I call gnarly computations. Water, fire, clouds, trees, these are all examples of natural computations that, given any of a wide range of inputs, will generate much the same kinds of patterns.

In Lloyd's words, "Many beautiful and intricate mathematical patterns — regular geometric shapes, fractal patterns, the laws of quantum mechanics, elementary particles, the laws of chemistry — can be produced by short computer programs. Believe it or not a [programming] monkey has a good shot at producing everything we see."

He then says, "For the computational explanation of complexity to work, two ingredients are necessary: (a) a computer, and (b) monkeys. The laws of quantum mechanics themselves provide our computer."

Actually, as I have doubts about quantum mechanics, I'd say that maybe we can just say the "laws of logic" rather than the "laws of quantum mechanics."

The really debatable issue is what the monkeys are.

Stephen Wolfram would argue that the universe is ultimately deterministic; think of his beloved cone-shell-type cellular automaton Rule 30, which starts with a single bit and spews out endlessly many rows of random-looking scuzz. Perhaps the random-looking seeds that feed into the universe's computation aren't in fact really random; they're pseudorandom sequences generated by a lower-level randomizing computation. In this view, there is only one possible universe.

The underlying "monkeys" pseudorandomizer is, in other words, a deterministic rule like CA Rule 30, and it feeds inputs into the universal computer that then generates the complex lovely patterns of the world.

Now, Lloyd, being a quantum mechanic, prefers to say that the "monkeys" are quantum fluctuations. One of the problems in this view is that we aren't philosophically satisfied with the notion of completely random physical events. We like to see a reason. The way quantum mechanics gets out of this is to say that since there's no reason for a particular turn of events, it must be that all possible turns of events happen, which is unsatisfying.

In any case, Lloyd seems to say that planets and trees and people are algorithmically probable. Things like us are fairly likely to occur in any gnarly class 4 computation, and all the universes, being universal computations, are potentially gnarly; in fact a large number of random seeds will produce gnarly patterns.

But, being a quantum mechanic, Lloyd doesn't give enough consideration to the ability of deterministic computations to generate what Wolfram calls "intrinsic randomness". Indeed, Lloyd writes, "Without the laws of quantum mechanics, the universe would still be featureless and bare."

That's not true. If you look, for instance, at any computer simulation of a physical system, you see gnarly, but these simulations don't in fact use quantum mechanics as a randomizer. They simply use deterministic pseudorandomizers to get their "monkey" variations to feed into the simulated physics. We really don't need true randomness. Pseudorandomness, that is, unpredictable computation, is enough. There's no absolute necessity to rush headlong into quantum mechanics.
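As a further hedged illustration of that last point (again a sketch added here, with a hand-rolled generator and a toy lattice walk that are not from the essay): a simulation can draw its "monkey" variations from a deterministic pseudorandom generator, and the resulting path looks just as irregular as one driven by "true" randomness, while being exactly reproducible from its seed.

```python
# Illustrative sketch (not from the essay): a toy "physics" -- a lattice random
# walk -- driven by a hand-rolled deterministic pseudorandom generator rather
# than by any quantum source. Same seed, same irregular-looking path, every run.

def lcg(seed):
    # A simple linear congruential generator: the deterministic "monkey".
    state = seed
    while True:
        state = (1103515245 * state + 12345) % 2**31
        yield state

def random_walk(steps, seed=1):
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    gen = lcg(seed)
    x = y = 0
    path = [(x, y)]
    for _ in range(steps):
        dx, dy = moves[next(gen) % 4]
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

print(random_walk(10))   # deterministic, yet random-looking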


On "DIGITAL MAOISM": The Hazards of the New Online Collectivism" By Jaron Lanier

Responses to Lanier's essay from Joseph "Yossi" Vardi and Peter Galison


PETER GALISON
[6.10.06]

It would be interesting to have some empirical evidence — say a head-to-head comparison of topics (Wikipedia vs. Britannica, say) in physics, chemistry, biology, math, philosophy, political history, and a few biographies. I wonder how well the two would come out.


JOSEPH ("YOSSI") VARDI
[6.10.06]

One classical source on the fallacy of collective wisdom is, of course, Extraordinary Popular Delusions and the Madness of Crowds by Charles Mackay (1850), a book that Bernard Baruch described as one of those that influenced him most.

I wonder what ranking Galileo's blog would have gotten in Google in the year 1641?

Collective wisdom is valid in one major area which is "what most of the people want!" That's it. Period. (Now I guess they want the World Cup)

In finance, for instance, Warren Buffett suggests the following strategy: see what the common wisdom is, and do exactly the opposite.



Can civilisation be safeguarded, without humanity having to sacrifice its diversity and individualism? This is a stark question, but I think it's a serious one.

DARK MATERIAL [6.13.06]
By Martin Rees

Introduction

Nuclear scientist Joseph Rotblat campaigned against the atom bomb he had helped unleash. In the Rotblat Memorial Lecture, delivered recently at the Hay Literary Festival, Lord (Martin) Rees wonders whether it's time for today's cyber scientists to heed Rotblat's legacy

JB

LORD (MARTIN) REES, widely acknowledged as one of the world's leading astronomers and cosmologists, is President of the Royal Society; Master of Trinity College, Cambridge; Royal Society Professor at Cambridge University; and the UK Astronomer Royal. He is the author of several books, including Our Final Hour: A Scientist's Warning: How Terror, Error, and Environmental Disaster Threaten Humankind's Future in this Century—on Earth and Beyond (published in the UK as Our Final Century: The 50/50 Threat to Humanity's Survival).

Martin Rees's Edge Bio Page


DARK MATERIALS

(MARTIN REES:) Scientists have had a bad literary press: Dr Frankenstein, Dr Moreau, and especially Dr Strangelove. This lecture commemorates a man who was the utter antithesis of Strangelove.

Jo Rotblat was a nuclear scientist. He helped to make the first atomic bomb. But for decades thereafter, he campaigned to control the powers he'd helped unleash. Until the last few months of his long life, he pursued this aim with the dynamism of a man half his age, inspiring others to join the cause. Today, I want to talk about the threats and challenges of science in the 21st century, and what younger scientists can learn from Jo's example.

A year ago, Robert McNamara, age 88, spoke here in this tent — his confessional movie 'The Fog of War' had just appeared. Jo Rotblat, age 96, was due to be on the platform with him. This might have seemed an incongruous pairing. Back in the 1960s, McNamara was American Secretary of Defense — in charge of the nuclear arsenal. And Rotblat was an antinuclear campaigner. But in old age they converged — McNamara himself came to espouse the aim of eliminating nuclear weapons completely.

Sadly, Jo Rotblat wasn't well enough to come here last summer. He died later that year — after a long life scarred by the turmoils of the last century. Jo was born in Poland in 1908. His family suffered great hardship in World War I. He was exceptionally intelligent and determined, and managed to become a nuclear physicist. After the invasion of Poland, he came as a refugee to England to work with James Chadwick at Liverpool University — his wife became a victim of the Nazis.

He then went to Los Alamos as part of the British contingent involved in the Manhattan project to make the first atom bomb.

In his mind there was only one justification for the bomb project: to ensure that Hitler didn't get one first and hold us to ransom. As soon as this ceased to be a credible risk, Jo left Los Alamos — the only scientist to do so. Indeed, he recalls having been disillusioned by hearing General Groves, head of the project, saying as early as March 1944 that the main purpose of the bomb was "to subdue the Russians".

He returned to England; became a professor of medical physics, an expert on the effects of radiation; and a compelling and outspoken campaigner. In 1955, Jo met Bertrand Russell, and encouraged him to prepare a manifesto stressing the extreme gravity of the nuclear peril. Jo got Einstein to sign too — it was Einstein's last public act; he died a week later. This 'Russell-Einstein manifesto' was then signed by ten other eminent scientists — all Nobel Prize winners. (Jo was diffident about signing, but Russell urged that he should, as he might one day earn one himself.) The authors claimed to be "speaking on this occasion not as members of this or that nation, continent or creed, but as human beings, members of the species Man, whose continued existence is in doubt". This manifesto led to the initiation of the Pugwash Conferences — so called after the village in Nova Scotia where the inaugural conference was held; in the decades since, there have been 300 meetings, and Jo attended almost all of them.

When the achievements of these Conferences were recognised by the 1995 Nobel Peace Prize, half the award went to the Pugwash organisation, and half to Rotblat personally—as their 'prime mover' and untiring inspiration. Particularly during the 1960s, the Pugwash Conferences offered crucial 'back door' contact between scientists from the US and the Soviet Union when there were few formal channels — these contacts eased the path for the partial test ban treaty of 1963, and the later ABM treaty.

In the two World Wars and their aftermath, 187 million perished by war, massacre, persecution or policy-induced famine. But during the Cold War we were at still greater hazard: a nuclear war between the superpowers could have killed a billion people, and devastated the fabric of civilisation. The superpowers could have stumbled towards armageddon through muddle and miscalculation.

We're now very risk-averse. We fret about statistically tiny risks — carcinogens in food, one-in-a-million chances of being killed in train crashes, and so forth. It's hard to contemplate just how great the risks of nuclear catastrophe once were. The Cuban missile stand-off in 1962 was the most dangerous moment in history, and McNamara was then the US Secretary of Defense. He later wrote that "we came within a hairbreadth of nuclear war without realising it. It's no credit to us that we escaped — Khrushchev and Kennedy were lucky as well as wise." The prevailing nuclear doctrine was deterrence via the threat of 'mutual assured destruction' (with the eponymous acronym MAD). Each side put the 'worst case' construction on whatever the other did, overestimated the threat, and over-reacted. The net result was an arms race that made both sides less secure.

It wasn't until he'd long retired that McNamara spoke frankly about the events in which he'd been so deeply implicated. He noted that "virtually every technical innovation in the arms race came from the US. But it was always quickly matched by the other side". The decisions that ratcheted up the arms race were political, but scientists who develop new weapons must themselves share the blame.

Another who spoke out after retirement was Solly Zuckerman, the UK government's longtime chief scientific advisor. He said "ideas for new weapon systems derived in the first place, not from the military, but from scientists and technologists merely doing what they saw to be their job.... the momentum of the arms race is fueled by technicians in governmental laboratories and in the armaments industries".

Anyone in weapons labs whose skills rose above routine competence, or who displayed any originality, added their iota to this menacing trend. In Zuckerman's view the weapons scientists were "the alchemists of our times, working in secret ... , casting spells which embrace us all".

The great physicist Hans Bethe also came round to this view. He was the chief theorist at Los Alamos, and worked on the H-bomb, but by 1995 his aversion to military research had hardened, and he urged scientists to "desist from work creating, developing, improving and manufacturing nuclear weapons and other weapons of potential mass destruction". Some of Bethe's concerned colleagues started a journal called the Bulletin of the Atomic Scientists. The 'logo' on its cover is a clock, the closeness of whose hands to midnight indicates the editors' judgment on how precarious the world situation is. Every few years the minute hand is shifted, either forwards or backwards.

When the Cold War ended, the nuclear threat plainly eased; the Bulletin's clock was put back to 17 minutes to midnight. There was thereafter far less chance of ten thousand bombs devastating our civilisation. But this catastrophic threat could be merely in temporary abeyance. In the last century the Soviet Union rose and fell, and there were two world wars. In the next hundred years, geopolitical realignments could be just as drastic, leading to a nuclear standoff between new superpowers, which might be handled less well than the Cuba crisis was. I think you'd have to be optimistic to rate the probability as much below 50 percent. And there's now more chance than ever of a few nuclear weapons going off in a localised conflict. We are confronted by the proliferation of nuclear weapons (in North Korea and Iran, for instance). Al Qaeda-style terrorists might some day acquire a nuclear weapon. If they did, they would willingly detonate it in a city centre, killing tens of thousands along with themselves; and millions around the world would acclaim them as heroes.

I've focused so far on the nuclear threat. It's still with us — it always will be. But it's based on basic science that dates from the 1930s, when Jo Rotblat was a young researcher.

But let's now look forward. What are the promises and threats from 21st century science? My main message is that science offers immense hope, and exciting prospects. But it may have a downside. It may not threaten a sudden world-wide catastrophe — the doomsday clock is not such a good metaphor — but the threats are, in aggregate, as worrying and challenging. But there's a real upside too: indeed there are grounds for being a techno-optimist.

The technologies that fuel economic growth today — IT, miniaturisation and biotech — are environmentally and socially benign. They're sparing of energy and of raw materials. They boost quality of life in the developing as well as the developed world, and have much further to go. That's good news. Not only is science advancing faster than ever, it's causing a new dimension of change. Whatever else may have changed over preceding centuries, humans haven't — not for thousands of years. But in this century, targeted drugs to enhance memory or change mood, genetic modification, and perhaps silicon implants into the brain, may alter human beings themselves — their minds and attitudes, even their physique. That's something qualitatively new in our history. It means that our species could be transformed, not over millions of years of Darwinian selection, but within a few centuries. And it raises all kinds of ethical conundrums. The work of Ray Kurzweil and others like him reminds us that we should keep our minds open, or at least ajar, to things that today seem beyond the fringe of science fiction.

But we can plausibly predict some disquieting trends. Some are environmental: rising populations, especially in the megacities of the developing world, increasing energy consumption, etc. Indeed, collective human actions are transforming, even ravaging, the entire biosphere — perhaps irreversibly — through global warming and loss of biodiversity. We've entered a new geological era, the Anthropocene. We don't fully understand the consequences of our many-faceted assault on the interwoven fabric of atmosphere, water, land and life. We are collectively endangering our planet.

But there's a growing danger from individuals too. Technology empowers each of us ever more and interconnects us more closely. So even a single person will have the capability to cause massive disruption through error or terror.

An organised network would not be required: just a fanatic, or a weirdo with the mindset of those who now design computer viruses — the mindset of an arsonist. There are such people, and some will be scientifically proficient. We're kidding ourselves if we think that technical education leads necessarily to balanced rationality. It can be combined with fanaticism — not just traditional fundamentalism — Christian in the US, Muslim in the East — but new age irrationalities. The Raelians and the Heaven's Gate cult are disquieting portents: their adherents claim to be 'scientific' but have a precarious foothold in reality. The techniques and expertise for bio or cyber attacks will be accessible to millions — they don't require large special-purpose facilities, as nuclear weapons do. It would be hard to eliminate the risk, even with very intrusive surveillance.

The impact of even a local incident — "bio" or "cyber"— would be hyped and globalised by the media, causing wide disruption — psychic and economic. Everyone would be mindful that the same thing could happen again, anywhere, anytime.

There will always be disaffected loners in every country, and the 'leverage' each can exert is ever-growing. The global village will have its global village idiots.

[I recall a talk here by Francis Fukuyama, about his book Our Posthuman Future. He argued that habitual use of mood-altering medications would narrow the range of humanity. He cited the use of Prozac to counter depression, and of Ritalin to damp down hyperactivity in high-spirited but otherwise healthy children. He feared that drugs would become universally used to tone down extremes of behaviour and mood, and that our species would degenerate into pallid, acquiescent zombies.

But my worry is the opposite of Fukuyama's. 'Human nature' encompasses a rich variety of personality types, but these include those who are drawn towards the disaffected fringe. The destabilizing and destructive influence of just a few such people will be ever more devastating as their technical powers and expertise grow, and as the world we share becomes more interconnected.

Can civilisation be safeguarded, without humanity having to sacrifice its diversity and individualism? This is a stark question, but I think it's a serious one.]

Some commentators on biotech, robotics and nanotech worry that when the genie is out of the bottle, the outcome may be impossible to control. They urge caution in 'pushing the envelope' in some areas of science — that we should guard against such nightmares by putting the brakes on the science they're based on.

But that's naive. We can't reap the benefits of science without accepting some risks — the best we can do is minimise the risks. The typical scientific discovery has many applications — some benign, others less so. Even nuclear physics has its upside — its medical uses have saved more people than nuclear weapons actually killed.

The uses of academic research generally can't be foreseen: Rutherford famously said, in the mid-thirties, that nuclear energy was 'moonshine'; the inventors of lasers didn't foresee that an early application of their work would be to eye surgery; the discoverer of x-rays was not searching for ways to see through flesh.

21st century science will present new threats more diverse and more intractable than nuclear weapons did. They'll pose ethical dilemmas. There surely will be more and more 'doors that we could open but which are best left closed' — for ethical or prudential reasons.

A blanket prohibition on all risky experiments and innovations would paralyse science and deny us all its benefits. In the early days of steam, hundreds of people died horribly when poorly designed boilers exploded. Most surgical procedures, even if now routine, were risky and often fatal when they were being pioneered.

But we do need to be more cautious today. The worst conceivable consequences of a boiler explosion are limited and localised. In contrast, some 21st century innovations or experiments, if they went wrong, could have global effects — we confront what some people call 'existential risks'.

Scientists sometimes abide by self-imposed moratoria on specific lines of research. A precedent for this was the so-called "Asilomar declaration" of 1975, whereby prominent molecular biologists refrained from some experiments involving the then-new technique of gene-splicing. There are now even more reasons for exercising restraint — ethics, risk of epidemics, and the 'yuk' factor. Just this week there have been moves, again in California, to control the still more powerful techniques of 'synthetic biology'.

But a voluntary moratorium will be harder to achieve today: the academic community is far larger, and competition (enhanced by commercial pressures) is more intense. To be effective, the consensus must be worldwide. If one country alone imposed regulations, the most dynamic researchers and enterprising companies would migrate to another that was more sympathetic or permissive. This is happening already in stem cell research.

How can we prioritise and regulate, to maximise the chance that applications are benign, and restrain their 'dark side'? How can the best science be fed into the political process?

We can't do everything in science. There's an ever-widening gap between what can be done and what can be afforded.

At the moment, scientific effort is deployed sub-optimally. This seems so whether we judge in purely intellectual terms, or take account of likely benefit to human welfare. Some subjects have had the 'inside track' and gained disproportionate resources. Others, such as environmental research, renewable energy sources, biodiversity studies and so forth, deserve more effort. Within medical research the focus is disproportionately on cancer and cardiovascular studies, the ailments that loom largest in prosperous countries, rather than on the infections endemic in the tropics.

Choices on how science is applied shouldn't be made just by scientists. That's why everyone needs a 'feel' for science and a realistic attitude to risk — otherwise public debate won't get beyond sloganising. Jo Rotblat favoured a 'Hippocratic Oath' whereby scientists would pledge themselves to use their talents to human benefit. Whether or not such an oath would have substance, scientists surely have a special responsibility. It's their ideas that form the basis of new technology.

We feel there is something lacking in parents who don't care what happens to their children in adulthood, even though it's generally beyond their control. Likewise, scientists shouldn't be indifferent to the fruits of their ideas — their intellectual creations. They should plainly forgo experiments that are themselves risky or unethical. More than that, they should try to foster benign spin-offs, but resist, so far as they can, dangerous or threatening applications. They should raise public consciousness of hazards to environment or to health.

The decisions that we make, individually and collectively, will determine whether the outcomes of 21st century science are benign or devastating. Some will throw up their hands and say that anything that is scientifically and technically possible will be done — somewhere, sometime — despite ethical and prudential objections, and whatever the laws say; that science is advancing so fast, and is so much influenced by commercial and political pressures, that nothing we can do makes any difference. Whether this idea is true or false, it's an exceedingly dangerous one, because it engenders despairing pessimism and demotivates efforts to secure a safer and fairer world. The future will best be safeguarded — and science has the best chance of being applied optimally — through the efforts of people who are less fatalistic.

And here I am optimistic. The burgeoning technologies of IT, miniaturisation and biotech are environmentally and socially benign. The challenge of global warming should stimulate a whole raft of manifestly benign innovations — for conserving energy, and for generating it by novel 'clean' means (biofuels, innovative renewables, carbon sequestration, and nuclear fusion). Other global challenges include controlling infectious diseases, and preserving biodiversity.

These challenging scientific goals should appeal to the idealistic young. They deserve a priority and commitment from governments, akin to that accorded to the Manhattan project or the Apollo moon landing.

I've spoken as a scientist. But my special subject is cosmology — the study of our environment in the widest conceivable sense. I can assure you, from having observed my colleagues, that a preoccupation with near-infinite spaces doesn't make cosmologists specially 'philosophical' in coping with everyday life. They're not detached from the problems confronting us on the ground, today and tomorrow. For me, a 'cosmic perspective' actually strengthens my concerns about what happens here and now; I'll conclude by explaining why.

The stupendous timespans of the evolutionary past are now part of common culture. We and the biosphere are the outcome of more than four billion years of evolution, but most people still somehow think we humans are necessarily the culmination of the evolutionary tree. That's not so. Our Sun is less than half way through its life. We're maybe only at the half-way stage. Any creatures witnessing the Sun's demise 6 billion years hence won't be human — they'll be as different from us as we are from bacteria.

But even in this 'hyper-extended' timeline — extending billions of years into the future, as well as into the past — this century may be a defining moment. The 21st century is the first in our planet's history in which one species has Earth's future in its hands, and could jeopardise life's immense potential.

I'll leave you with a cosmic vignette. We're all familiar with pictures of the Earth seen from space — its fragile biosphere contrasting with the sterile moonscape where the astronauts left their footprints. Suppose some aliens had been watching our planet for its entire history: what would they have seen? Over nearly all that immense time, 4.5 billion years, Earth's appearance would have altered very gradually. The continents drifted; the ice cover waxed and waned; successive species emerged, evolved and became extinct.

But in just a tiny sliver of the Earth's history — the last one millionth part, a few thousand years — the patterns of vegetation altered much faster than before. This signaled the start of agriculture. The pace of change accelerated as human populations rose.

But then there were other changes, even more abrupt. Within fifty years — little more than one hundredth of a millionth of the Earth's age — the carbon dioxide in the atmosphere began to rise anomalously fast. The planet became an intense emitter of radio waves (the total output from all TV, cellphone and radar transmissions).

And something else unprecedented happened: small projectiles lifted from the planet's surface and escaped the biosphere completely. Some were propelled into orbits around the Earth; some journeyed to the Moon and planets.

If they understood astrophysics, the aliens could confidently predict that the biosphere would face doom in a few billion years when the Sun flares up and dies. But could they have predicted this unprecedented spike less than half way through the Earth's life — these human-induced alterations occupying, overall, less than a millionth of the elapsed lifetime and seemingly occurring with runaway speed?

If they continued to keep watch, what might these hypothetical aliens witness in the next hundred years? Will a final spasm be followed by silence? Or will the planet itself stabilise? And will some of the objects launched from the Earth spawn new oases of life elsewhere?

The answer depends on us. The challenges of the 21st century are more complex and intractable than those of the nuclear age. Wise choices will require idealistic and effective campaigners — not just physicists, but biologists, computer experts, and environmentalists as well: latter-day counterparts of Jo Rotblat, inspired by his vision and building on his legacy.

[An abbreviated version of this lecture was published by The Guardian, Saturday June 10, 2006]

